{"_id":"5845a4a99f6fbb1b00430809","__v":0,"user":"55bb3add54f9640d006e6cdc","version":{"_id":"5845a4a89f6fbb1b004307b7","project":"54d3007669578e0d002730c9","__v":2,"createdAt":"2016-12-05T17:32:24.708Z","releaseDate":"2016-12-05T17:32:24.708Z","categories":["5845a4a89f6fbb1b004307b8","5845a4a89f6fbb1b004307b9","5845a4a89f6fbb1b004307ba","5845a4a89f6fbb1b004307bb","5845a4a89f6fbb1b004307bc","5845a4a89f6fbb1b004307bd","5845a4a89f6fbb1b004307be","5845a4a89f6fbb1b004307bf","5845a4a89f6fbb1b004307c0","592deb23644f060f008e5aa6"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"25.0.0","version":"25"},"category":{"_id":"5845a4a89f6fbb1b004307bd","project":"54d3007669578e0d002730c9","__v":0,"version":"5845a4a89f6fbb1b004307b7","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-07-08T18:48:02.369Z","from_sync":false,"order":5,"slug":"integrations","title":"Integrations"},"parentDoc":null,"project":"54d3007669578e0d002730c9","updates":[],"next":{"pages":[],"description":""},"createdAt":"2015-10-08T06:11:53.861Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":4,"body":"* [Overvew](#overview)\n* [1. Preparation](#1-preparation)\n* [2. Integration with Voice Commands](#2-integration-with-voice-commands)\n* [3. Integration with Voice Command Service](#3-integration-with-voice-command-service)\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Overview\"\n}\n[/block]\nThe **Cortana integration** feature allows you to export your agents in a [Cortana compatible VCD format](https://msdn.microsoft.com/en-us/library/windows/apps/xaml/dn706593.aspx) and use it for building apps. \n\nThe example code for an integration can be found [here](https://github.com/api-ai/apiai-dotnet-client/tree/master/samples/cortana).\n\nBelow are instructions.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"1. Preparation\"\n}\n[/block]\nCreate a Universal Windows Project in Visual Studio.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/WtCTKYcOTdGo1rAvBNwo_create_project.png\",\n        \"create_project.png\",\n        \"737\",\n        \"528\",\n        \"#399ef7\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nAdd a reference to the API.AI .NET library using [Nuget](https://www.nuget.org/packages/ApiAiSDK/).\n\nCreate a new application page and add a microphone/listen button. Then, add TextBlock control to display results.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/Us3w99VcTrO0ao32MUFc_int2.png\",\n        \"int2.png\",\n        \"548\",\n        \"477\",\n        \"#34a4e3\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nIn the API.AI developer console, go to 'Integrations' from the left side menu and enable Microsoft Cortana integration. 
Add a new field of the `AIService` type to the App class.
[block:code]
{
  "codes": [
    {
      "code": "public AIService AIService { get; private set; }",
      "language": "csharp"
    }
  ]
}
[/block]
Initialize the `AIService` field in the App constructor.
[block:code]
{
  "codes": [
    {
      "code": "var config = new AIConfiguration(\"CLIENT_ACCESS_TOKEN\",\n                                 SupportedLanguage.English);\nAIService = AIService.CreateService(config);",
      "language": "csharp"
    }
  ]
}
[/block]
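You can now wire the listen button from the preparation step to the service. The sketch below is illustrative: the handler and control names (`ListenButton_Click`, `resultTextBlock`) are placeholders, and it assumes the SDK exposes a `StartRecognitionAsync` method for voice input — see the linked SDK sample for the exact recognition API.
[block:code]
{
  "codes": [
    {
      "code": "// Illustrative sketch: send a voice query to API.AI when the user taps Listen.\n// StartRecognitionAsync is an assumption here; check the SDK sample for the exact API.\nprivate async void ListenButton_Click(object sender, RoutedEventArgs e)\n{\n    try\n    {\n        var app = (App)Application.Current;\n        var aiResponse = await app.AIService.StartRecognitionAsync();\n        resultTextBlock.Text = aiResponse?.Result?.Fulfillment?.Speech ?? string.Empty;\n    }\n    catch (Exception ex)\n    {\n        Debug.WriteLine(ex);\n    }\n}",
      "language": "csharp"
    }
  ]
}
[/block]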
There are two integration options in Cortana:

* Register voice command patterns; Cortana will then open your app and pass some parameters to it. [Official Cortana documentation](https://msdn.microsoft.com/en-us/library/windows/apps/xaml/dn630430.aspx)
* Register a voice command service to provide custom logic for request processing. A voice command service can provide answers without opening your app. [Official Cortana documentation](https://msdn.microsoft.com/en-us/library/windows/apps/xaml/dn974228.aspx)
[block:api-header]
{
  "type": "basic",
  "title": "2. Integration with Voice Commands"
}
[/block]
Add code to register `VoiceCommands.xml` at the end of the `OnLaunched` method.
[block:code]
{
  "codes": [
    {
      "code": "try\n{\n    var storageFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri(\"ms-appx:///VoiceCommands.xml\"));\n    await AIService.InstallVoiceCommands(storageFile);\n}\ncatch (Exception ex)\n{\n    Debug.WriteLine(ex);\n}",
      "language": "csharp"
    }
  ]
}
[/block]
Implement the `OnActivated` method to receive parameters from Cortana. The API.AI SDK has a pre-built method, `AIService.ProcessOnActivatedAsync`, for processing application launch parameters passed by Cortana. This method calls API.AI to get an action based on the speech or text input.
[block:code]
{
  "codes": [
    {
      "code": "protected async override void OnActivated(IActivatedEventArgs e)\n{\n    AIResponse aiResponse = null;\n    try\n    {\n        aiResponse = await AIService.ProcessOnActivatedAsync(e);\n    }\n    catch (Exception)\n    {\n        // ignored\n    }\n\n    NavigateToMain(aiResponse);\n}\n\nprivate void NavigateToMain(AIResponse aiResponse)\n{\n    Frame rootFrame = Window.Current.Content as Frame;\n\n    // Do not repeat app initialization when the Window already has content,\n    // just ensure that the window is active\n    if (rootFrame == null)\n    {\n        // Create a Frame to act as the navigation context and navigate to the first page\n        rootFrame = new Frame();\n\n        rootFrame.NavigationFailed += OnNavigationFailed;\n\n        // Place the frame in the current Window\n        Window.Current.Content = rootFrame;\n    }\n\n    rootFrame.Navigate(typeof(MainPage), aiResponse);\n\n    // Ensure the current window is active\n    Window.Current.Activate();\n}",
      "language": "csharp"
    }
  ]
}
[/block]
In the main application page, add processing for the `AIResponse` result. For example, output it to a `TextBlock`.
[block:code]
{
  "codes": [
    {
      "code": "protected override void OnNavigatedTo(NavigationEventArgs e)\n{\n    base.OnNavigatedTo(e);\n\n    var response = e.Parameter as AIResponse;\n    if (response != null)\n    {\n        resultTextBlock.Text = JsonConvert.SerializeObject(response, Formatting.Indented);\n    }\n}",
      "language": "csharp"
    }
  ]
}
[/block]
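In a real app you will usually act on the parsed result rather than display raw JSON. A short sketch inside `OnNavigatedTo`, assuming your agent defines an action named `weather.get` (an illustrative name):
[block:code]
{
  "codes": [
    {
      "code": "// Sketch: branch on the API.AI action and show the agent's reply.\n// \"weather.get\" is a hypothetical action name from your agent.\nvar action = response.Result?.Action;\nvar speech = response.Result?.Fulfillment?.Speech;\n\nswitch (action)\n{\n    case \"weather.get\":\n        resultTextBlock.Text = speech ?? \"No answer\";\n        break;\n    default:\n        resultTextBlock.Text = JsonConvert.SerializeObject(response, Formatting.Indented);\n        break;\n}",
      "language": "csharp"
    }
  ]
}
[/block]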
[block:api-header]
{
  "type": "basic",
  "title": "3. Integration with Voice Command Service"
}
[/block]
Add a Windows Runtime Component project to the solution.

In the `Extensions` node of your application's `Package.appxmanifest`, add the following XML tags.
[block:code]
{
  "codes": [
    {
      "code": "<Extensions>\n    <!-- NB!: Replace with the full class name of your voice command service -->\n    <uap:Extension Category=\"windows.appService\" EntryPoint=\"ApiAiDemo.VoiceCommands.ApiAiVoiceCommandService\">\n      <uap:AppService Name=\"ApiAiVoiceCommandService\" />\n    </uap:Extension>\n    <uap:Extension Category=\"windows.personalAssistantLaunch\"/>\n</Extensions>",
      "language": "xml"
    }
  ]
}
[/block]
Add a class `ApiAiVoiceCommandService` to your Windows Runtime Component and add the following code to it.
[block:code]
{
  "codes": [
    {
      "code": "public sealed class ApiAiVoiceCommandService : IBackgroundTask\n{\n    public async void Run(IBackgroundTaskInstance taskInstance)\n    {\n    }\n}",
      "language": "csharp"
    }
  ]
}
[/block]
You will also need a `BackgroundTaskDeferral` to keep the service alive until the command is processed, a `VoiceCommandServiceConnection` to interact with Cortana, and an `ApiAi` instance to interact with the API.AI service.
[block:code]
{
  "codes": [
    {
      "code": "private BackgroundTaskDeferral serviceDeferral;\nprivate VoiceCommandServiceConnection voiceServiceConnection;\nprivate ApiAi apiAi;",
      "language": "csharp"
    }
  ]
}
[/block]
In the `Run` method, we will process the Cortana request using the following steps.

Store the `BackgroundTaskDeferral` instance so the task is not torn down until the work is completed.
[block:code]
{
  "codes": [
    {
      "code": "serviceDeferral = taskInstance.GetDeferral();\ntaskInstance.Canceled += OnTaskCanceled;",
      "language": "csharp"
    }
  ]
}
[/block]
Initialize the `ApiAi` instance.
[block:code]
{
  "codes": [
    {
      "code": "var config = new AIConfiguration(\"YOUR_CLIENT_ACCESS_TOKEN\",\n                                 SupportedLanguage.English);\n\napiAi = new ApiAi(config);",
      "language": "csharp"
    }
  ]
}
[/block]
Get the `AppServiceTriggerDetails` in order to obtain the `VoiceCommandServiceConnection`.
[block:code]
{
  "codes": [
    {
      "code": "var triggerDetails = taskInstance.TriggerDetails as AppServiceTriggerDetails;\n\nif (triggerDetails != null)\n{\n    voiceServiceConnection = VoiceCommandServiceConnection.FromAppServiceTriggerDetails(triggerDetails);\n    voiceServiceConnection.VoiceCommandCompleted += VoiceCommandCompleted;\n    var voiceCommand = await voiceServiceConnection.GetVoiceCommandAsync();\n...",
      "language": "csharp"
    }
  ]
}
[/block]
Use the `VoiceCommandServiceConnection` to receive the request text and the command name.
[block:code]
{
  "codes": [
    {
      "code": "var recognizedText = voiceCommand.SpeechRecognitionResult?.Text;\nvar voiceCommandName = voiceCommand.CommandName;",
      "language": "csharp"
    }
  ]
}
[/block]
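Cortana expects timely feedback from a background voice command service, so before making the network round-trip to API.AI it can help to report progress. A sketch (the message text is illustrative):
[block:code]
{
  "codes": [
    {
      "code": "// Optional: keep the Cortana session alive while the API.AI request runs.\n// Message text is illustrative.\nvar progressMessage = new VoiceCommandUserMessage\n{\n    DisplayMessage = \"Asking your agent...\",\n    SpokenMessage = \"Asking your agent\"\n};\nawait voiceServiceConnection.ReportProgressAsync(\n    VoiceCommandResponse.CreateResponse(progressMessage));",
      "language": "csharp"
    }
  ]
}
[/block]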
Check the command name and handle the different cases. For example, you can:
* Make a request to API.AI and launch your app with the `AIResponse` from API.AI.
* Make a request to API.AI and send the response to Cortana with the `SendResponseToCortanaAsync` method.

*(See the different processing for the "type" and "unknown" voice commands below. Each case body is wrapped in braces so the two `aiResponse` locals do not collide in the shared `switch` scope.)*
[block:code]
{
  "codes": [
    {
      "code": "switch (voiceCommandName)\n{\n    case \"type\":\n        {\n            var aiResponse = await apiAi.TextRequestAsync(recognizedText);\n            await apiAi.LaunchAppInForegroundAsync(voiceServiceConnection, aiResponse);\n        }\n        break;\n    case \"unknown\":\n        {\n            var aiResponse = await apiAi.TextRequestAsync(recognizedText);\n            if (aiResponse != null)\n            {\n                await apiAi.SendResponseToCortanaAsync(voiceServiceConnection, aiResponse);\n            }\n        }\n        break;\n}",
      "language": "csharp"
    }
  ]
}
[/block]
You will also need to wrap the entire processing code in a `try...catch` block and complete the task in the `finally` block.
[block:code]
{
  "codes": [
    {
      "code": "try\n{\n    ...\n}\ncatch (Exception e)\n{\n    var message = e.ToString();\n    Debug.WriteLine(message);\n}\nfinally\n{\n    serviceDeferral?.Complete();\n}",
      "language": "csharp"
    }
  ]
}
[/block]

[block:callout]
{
  "type": "info",
  "body": "You must call `serviceDeferral?.Complete();` at the end of processing to prevent the service from holding resources. Also call `serviceDeferral?.Complete();` in the `Canceled` handler."
}
[/block]
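For completeness, the `Canceled` and `VoiceCommandCompleted` handlers referenced in the callout are one-liners that release the deferral (they also appear in the full listing below).
[block:code]
{
  "codes": [
    {
      "code": "private void OnTaskCanceled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)\n{\n    serviceDeferral?.Complete();\n}\n\nprivate void VoiceCommandCompleted(VoiceCommandServiceConnection sender, VoiceCommandCompletedEventArgs args)\n{\n    serviceDeferral?.Complete();\n}",
      "language": "csharp"
    }
  ]
}
[/block]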
The full code of the service will look like this.
[block:code]
{
  "codes": [
    {
      "code": "public sealed class ApiAiVoiceCommandService : IBackgroundTask\n{\n    private BackgroundTaskDeferral serviceDeferral;\n    private VoiceCommandServiceConnection voiceServiceConnection;\n    private ApiAi apiAi;\n\n    public async void Run(IBackgroundTaskInstance taskInstance)\n    {\n        serviceDeferral = taskInstance.GetDeferral();\n        taskInstance.Canceled += OnTaskCanceled;\n\n        var triggerDetails = taskInstance.TriggerDetails as AppServiceTriggerDetails;\n\n        if (triggerDetails != null)\n        {\n            var config = new AIConfiguration(\"YOUR_CLIENT_ACCESS_TOKEN\",\n                                             SupportedLanguage.English);\n\n            apiAi = new ApiAi(config);\n            apiAi.DataService.PersistSessionId();\n\n            try\n            {\n                voiceServiceConnection = VoiceCommandServiceConnection.FromAppServiceTriggerDetails(triggerDetails);\n                voiceServiceConnection.VoiceCommandCompleted += VoiceCommandCompleted;\n                var voiceCommand = await voiceServiceConnection.GetVoiceCommandAsync();\n                var recognizedText = voiceCommand.SpeechRecognitionResult?.Text;\n                var voiceCommandName = voiceCommand.CommandName;\n\n                switch (voiceCommandName)\n                {\n                    case \"type\":\n                        {\n                            var aiResponse = await apiAi.TextRequestAsync(recognizedText);\n                            await apiAi.LaunchAppInForegroundAsync(voiceServiceConnection, aiResponse);\n                        }\n                        break;\n                    case \"unknown\":\n                        {\n                            if (!string.IsNullOrEmpty(recognizedText))\n                            {\n                                var aiResponse = await apiAi.TextRequestAsync(recognizedText);\n                                if (aiResponse != null)\n                                {\n                                    await apiAi.SendResponseToCortanaAsync(voiceServiceConnection, aiResponse);\n                                }\n                            }\n                        }\n                        break;\n                    case \"greetings\":\n                        {\n                            var aiResponse = await apiAi.TextRequestAsync(recognizedText);\n\n                            var repeatMessage = new VoiceCommandUserMessage\n                            {\n                                DisplayMessage = \"Repeat please\",\n                                SpokenMessage = \"Repeat please\"\n                            };\n\n                            var processingMessage = new VoiceCommandUserMessage\n                            {\n                                DisplayMessage = aiResponse?.Result?.Fulfillment?.Speech ?? \"Pizza\",\n                                SpokenMessage = \"\"\n                            };\n\n                            var resp = VoiceCommandResponse.CreateResponseForPrompt(processingMessage, repeatMessage);\n                            await voiceServiceConnection.ReportSuccessAsync(resp);\n                        }\n                        break;\n                    default:\n                        if (!string.IsNullOrEmpty(recognizedText))\n                        {\n                            var aiResponse = await apiAi.TextRequestAsync(recognizedText);\n                            if (aiResponse != null)\n                            {\n                                await apiAi.SendResponseToCortanaAsync(voiceServiceConnection, aiResponse);\n                            }\n                        }\n                        else\n                        {\n                            await SendResponse(\"Cannot recognize\");\n                        }\n                        break;\n                }\n            }\n            catch (Exception e)\n            {\n                var message = e.ToString();\n                Debug.WriteLine(message);\n            }\n            finally\n            {\n                serviceDeferral?.Complete();\n            }\n        }\n    }\n\n    private void OnTaskCanceled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)\n    {\n        serviceDeferral?.Complete();\n    }\n\n    private void VoiceCommandCompleted(VoiceCommandServiceConnection sender, VoiceCommandCompletedEventArgs args)\n    {\n        serviceDeferral?.Complete();\n    }\n\n    private async Task SendResponse(string textResponse)\n    {\n        var userMessage = new VoiceCommandUserMessage\n        {\n            DisplayMessage = textResponse,\n            SpokenMessage = textResponse\n        };\n\n        var response = VoiceCommandResponse.CreateResponse(userMessage);\n        await voiceServiceConnection.ReportSuccessAsync(response);\n    }\n}",
      "language": "csharp"
    }
  ]
}
[/block]