Step 1. Add the JitPack repository to your build file
Add it in your root settings.gradle at the end of repositories:
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        mavenCentral()
        maven { url 'https://jitpack.io' }
    }
}
Add it in your settings.gradle.kts at the end of repositories:
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        mavenCentral()
        maven { url = uri("https://jitpack.io") }
    }
}
Add it in your pom.xml:
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>
Add it in your build.sbt at the end of resolvers:
resolvers += "jitpack" at "https://jitpack.io"
Add it in your project.clj at the end of repositories:
:repositories [["jitpack" "https://jitpack.io"]]
Step 2. Add the dependency
dependencies {
    implementation 'com.github.Microsoft:Cognitive-Speech-STT-Android:Tag'
}
dependencies {
    implementation("com.github.Microsoft:Cognitive-Speech-STT-Android:Tag")
}
<dependency>
    <groupId>com.github.Microsoft</groupId>
    <artifactId>Cognitive-Speech-STT-Android</artifactId>
    <version>Tag</version>
</dependency>
libraryDependencies += "com.github.Microsoft" % "Cognitive-Speech-STT-Android" % "Tag"
:dependencies [[com.github.Microsoft/Cognitive-Speech-STT-Android "Tag"]]
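As a worked example, a complete module-level Gradle (Groovy) dependencies block might look like the sketch below. The version `1.0.0` is a hypothetical placeholder, not a real release of this repository; JitPack resolves whatever Git release tag, commit hash, or branch snapshot (e.g. `master-SNAPSHOT`) you put in the version position:

```gradle
dependencies {
    // Hypothetical version tag: replace with an actual release tag,
    // commit hash, or branch snapshot from the GitHub repository
    implementation 'com.github.Microsoft:Cognitive-Speech-STT-Android:1.0.0'
}
```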
We have released a new Speech SDK that supports the new Speech Service, with support for Windows, Android, Linux, JavaScript, and iOS.
Please check out Microsoft Cognitive Services Speech SDK for documentation, links to the download pages, and the samples.
NOTE: The content of this repository supports the Bing Speech Service, not the new Speech Service. Bing Speech Service has been deprecated; please use the new Speech Service instead.
This repo contains the Android client library and samples for Speech-to-Text in Microsoft Speech API, an offering within Microsoft Cognitive Services on Azure, formerly known as Project Oxford.
The Speech-to-Text client library is a client library for the Microsoft Speech Speech-to-Text API.
The easiest way to consume the client library is to add the com.microsoft.projectoxford:speechrecognition
package from the Maven Central Repository. To find the latest version of the client library, go to http://search.maven.org and search for "g:com.microsoft.projectoxford".
To add the client library dependency in your build.gradle file, add the following line to the dependencies block.
dependencies {
    //
    // Use the following line to include the client library from the Maven Central Repository.
    // Change the version number based on the search.maven.org result.
    //
    compile 'com.microsoft.projectoxford:speechrecognition:1.2.2'

    // Your other dependencies...
}
To add the client library dependency from Android Studio:
1. Choose File > Project Structure and open the Dependencies tab.
2. Choose Library dependency from the drop-down list.
3. Type com.microsoft.projectoxford and hit the search icon in the Choose Library Dependency dialog.
4. Click OK to add the new dependency.
You also need to download the native library libandroid_platform.so from this page and put it into your project's directory app/src/main/jniLibs/armeabi/ or app/src/main/jniLibs/x86/.
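The native-library placement above can be sketched as shell commands run from the project root; the download location of libandroid_platform.so is up to you, so the copy step is shown only as a commented example:

```shell
# Create the ABI-specific jniLibs directories this README describes
mkdir -p app/src/main/jniLibs/armeabi app/src/main/jniLibs/x86

# Copy the downloaded native library into the matching ABI directory, e.g.:
# cp path/to/downloaded/armeabi/libandroid_platform.so app/src/main/jniLibs/armeabi/
# cp path/to/downloaded/x86/libandroid_platform.so app/src/main/jniLibs/x86/
```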
This sample demonstrates the following features using a wav file or external microphone input:
First, you must obtain a Speech API subscription key by following the instructions on Subscriptions.
Start Android Studio, choose Import project (Eclipse ADT, Gradle, etc.) from the Quick Start options, and select the Cognitive-Speech-STT-Android folder.
When a Gradle Sync dialog pops up, choose OK to continue downloading the latest tools.
In Android Studio's Project panel, in the Android view, open the file SpeechRecoExample/res/values/strings.xml and find the line containing "Please_add_the_subscription_key_here". Replace that placeholder value with your subscription key from the first step.
If you want to use Recognition with intent, you also need to sign up for the Language Understanding Intelligent Service (LUIS) and set the key values in luisAppID and luisSubscriptionID in SpeechRecoExample/res/values/strings.xml.
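For reference, the relevant entries in strings.xml might look like the sketch below. The names luisAppID and luisSubscriptionID and the subscription-key placeholder come from this README; the string name subscription_key and the LUIS placeholder values are assumptions, so check the actual file in the sample:

```xml
<resources>
    <!-- Speech API subscription key; replace the placeholder value -->
    <string name="subscription_key">Please_add_the_subscription_key_here</string>

    <!-- Only needed for Recognition with intent (LUIS) -->
    <string name="luisAppID">Please_add_the_LUIS_app_ID_here</string>
    <string name="luisSubscriptionID">Please_add_the_LUIS_subscription_ID_here</string>
</resources>
```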
In Android Studio, select menu Build > Make Project to build the sample.
Then select menu Run > Run app to launch the sample app.
In the application, press the Select Mode button to select the type of speech recognition you would like to use.
To start recognition, press the Start button.
We welcome contributions. Feel free to file issues and submit pull requests on the repo and we'll try to address them as soon as possible. Learn more about how you can help on our Contribution Rules & Guidelines.
You can reach out to us anytime with questions and suggestions using our communities below:
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
All Microsoft Cognitive Services SDKs and samples are licensed under the MIT License. For more information, see LICENSE.
Sample images are licensed separately; please refer to LICENSE-IMAGE.
Developers using Cognitive Services, including this client library & sample, are expected to follow the "Developer Code of Conduct for Microsoft Cognitive Services", found at http://go.microsoft.com/fwlink/?LinkId=698895.