Build a smart Android app with LUIS and Xamarin Android


Hi everyone,

I was working on an Android project which required NLP (Natural Language Processing) to interpret a set of voice commands in a smart way (a little like Cortana or Alexa…). This may seem difficult at first glance, but with all the tools available to us today, it is a lot simpler than you may think.

After doing some research on the internet, I succeeded in implementing this functionality in the Android app I was building, and here is a sample app with just that functionality.

The source code for this app can be found here 

How we will proceed

  • Create a LUIS app
  • Implement speech-to-text in our Android app
  • Add LUIS to our app

Create the LUIS App

We will need to create an app on Microsoft LUIS to make our NLP model. This is easy to do; follow this link to create your Natural Language Processing model.

Note: Note down your app ID and your subscription key from LUIS; we will need them later.

Creating the Android app

We will be using Xamarin.Android to create this app. Start your IDE (I used Visual Studio).

  • Start a new Android project.
  • Add the Microsoft.Cognitive.LUIS NuGet package to your solution.
  • Here is the code for the MainActivity’s layout.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:minWidth="25px"
    android:minHeight="25px">
    <Button
        android:text="Start Recording"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/button1" />
    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/textView1" />
</LinearLayout>
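Before the button can start recognition, it has to be wired up in the activity. Here is a minimal sketch of what the MainActivity skeleton might look like (the layout file name `Main`, the `VOICE` request code value and the field names are my assumptions, not mandated by anything; adapt them to your project):

```csharp
public class MainActivity : Activity
{
    // arbitrary request code used to identify the speech recognition result
    private const int VOICE = 10;

    private Button recButton;
    private TextView resultText;
    private bool isRecording;
    private string _userCommand;

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        SetContentView(Resource.Layout.Main);

        // bind the views declared in the layout above
        recButton = FindViewById<Button>(Resource.Id.button1);
        resultText = FindViewById<TextView>(Resource.Id.textView1);
        recButton.Click += RecButton_Click;
    }

    // helper used later in this post to display LUIS's interpretation
    public void WriteInterpretation(string text)
    {
        resultText.Text = text;
    }
}
```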

Implement speech-to-text in our Android app

  • In the Activity’s code, we will use a RecognizerIntent to start speech recognition. It is launched when the button in the layout is clicked.
  • Here is the code that runs when the button is clicked.
private void RecButton_Click(object sender, System.EventArgs e)
{
    isRecording = !isRecording;
    if (isRecording)
    {
        // change the text on the button
        recButton.Text = "End Recording";

        // create the intent and start the activity
        var voiceIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);

        // put a message on the modal dialog
        voiceIntent.PutExtra(RecognizerIntent.ExtraPrompt, "Speak now");

        // end speech input after 1.5 seconds of silence
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
        voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);

        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);
        StartActivityForResult(voiceIntent, VOICE);
    }
    else
    {
        // reset the button when recording ends
        recButton.Text = "Start Recording";
    }
}
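One detail to keep in mind: since LUIS is queried over the network later in this post, the app needs network access. A sketch of the permission entry to add to your AndroidManifest.xml (the speech recognition dialog itself is handled by the system's recognizer activity, so no extra recording permission should be required in this app):

```xml
<!-- AndroidManifest.xml: LUIS is queried online, so the app needs network access -->
<uses-permission android:name="android.permission.INTERNET" />
```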

When the result of the speech recognition is sent back to the MainActivity, you should get the command the user spoke, as shown below.

if (requestCode == VOICE)
{
    if (resultCode == Result.Ok)
    {
        var matches = data.GetStringArrayListExtra(RecognizerIntent.ExtraResults);
        if (matches.Count != 0)
        {
            _userCommand = matches[0];
        }
        else
        {
            // Speech not recognized
        }
    }
}

Add LUIS to your app

  • Add a new class named IntentHandler to your project.
  • This class is responsible for handling our intents and taking the appropriate action when the corresponding intent is matched from the user’s command.
  • These intents correspond to the ones in your LUIS app online.
  • Inside the new class you just created, add the following code.
class IntentHandler
{
    [IntentHandler(0.5, Name = "OnTheLight")]
    public void TurnOnLight(LuisResult result, object context)
    {
        var activity = context as MainActivity;
        activity?.WriteInterpretation("The light was turned on.");
    }

    [IntentHandler(0.5, Name = "None")]
    public void None(LuisResult result, object context)
    {
        var activity = context as MainActivity;
        activity?.WriteInterpretation("I couldn't understand you.");
    }

    [IntentHandler(0.5, Name = "OffTheLight")]
    public void TurnOffLight(LuisResult result, object context)
    {
        var activity = context as MainActivity;
        activity?.WriteInterpretation("The light was turned off.");
    }
}

Now, we will call LUIS after the user has spoken their command, and the IntentHandler will act accordingly.

protected override async void OnActivityResult(int requestCode, [GeneratedEnum] Result resultCode, Intent data)
{
    if (requestCode == VOICE)
    {
        if (resultCode == Result.Ok)
        {
            var matches = data.GetStringArrayListExtra(RecognizerIntent.ExtraResults);
            if (matches.Count != 0)
            {
                _userCommand = matches[0];

                var intentHandler = new IntentHandler();

                using (var router = Microsoft.Cognitive.LUIS.IntentRouter.Setup(APP_ID, SUBCRIPTION_KEY, intentHandler))
                {
                    var handled = await router.Route(_userCommand, this);
                    if (!handled)
                    {
                        WriteInterpretation("Could not process text.");
                    }
                }
            }
            else
            {
                // Speech not recognized
            }
        }
    }

    base.OnActivityResult(requestCode, resultCode, data);
}

Note: The APP_ID and SUBCRIPTION_KEY constants refer to the app ID and subscription key I asked you to note at the beginning of this post.
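They can be declared as simple string constants on the activity; the values below are placeholders, so substitute the real ones from your LUIS app's page on the portal:

```csharp
// values copied from your app's page on the LUIS portal (placeholders shown here)
private const string APP_ID = "<your-luis-app-id>";
private const string SUBCRIPTION_KEY = "<your-subscription-key>";
```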

Now, compile and run the app. You will need an internet connection for it to function properly, since LUIS queries your model online.

Speak your commands to the app and, after LUIS processes them, the correct intent handler will almost always be hit (this depends on how well you train your model and a few other factors…).

The source code for this app can be found here 

If you liked this post or found it useful, please 👍 like it and share it on Twitter, Facebook or other social media… To get updates on new posts, follow me on Twitter and like my page on Facebook.

Do you want to know how to build a cross-platform app for your chat bot? Check out this post.

Follow me on social media and stay updated