In my last articles "How to connect Sonos to Google Assistant - Part 1" and "Control Sonos with Google Assistant - DynDNS - Part 2" I explained how to create a service that consumes webhooks from Google to control your Sonos with your voice, and how to set up a DynDNS domain. This part will focus on configuring an application in Google's Dialogflow. This platform gives us the ability to design flexible dialogs with the Google Assistant by defining the phrases the user says and the responses the Assistant gives. The cool thing here is that the Google Assistant, in comparison to Alexa, is capable of handling variations of the phrases you specify. If you, for instance, define the sentence "play a song from $artist", the Assistant will recognize "play a track from $artist" as well. In my eyes this is super fancy and a great piece of technology.
Create the Dialogflow agent
In the first step you have to go to Dialogflow and create a new agent. As far as I know, the name doesn't really matter, so you can choose whatever you want. I called my agent "Sonos".
Create the first intent
Once the agent is created you will land on the intent page, where you will create the first intent to play a song by its name. To do so you have to add one or more phrases and select the words which are actually parameters. Which word you use as the parameter doesn't really matter, but a descriptive one makes it clearer what the parameter is about when you work with the intents later. A phrase to play a specific song could be "play songname" or "play title songname". To make songname a parameter you have to double-click on the word, and a dropdown will open with different parameter types. You can also define custom types, which represent a list of possible values, but we won't need that right now. A song name is really just a string, and since every word we say is a string, we can choose the type @sys.any. Now Google knows where to find the parameter songname in the phrase, but we also want the name passed as a parameter to our service. So we add a parameter with the name song of the type @sys.any whose value is $song from the phrase.
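To make the parameter part a bit more tangible, here is roughly the shape of the request body our NodeJS service will later get from Dialogflow. I'm assuming the V2 webhook format here, and the intent name is just a made-up example:

```javascript
// Rough shape of the request Dialogflow sends to the service
// (assuming the V2 webhook format; the intent name is made up).
const exampleRequestBody = {
  queryResult: {
    queryText: 'play Bohemian Rhapsody',
    intent: { displayName: 'play.song' },      // the name of your intent in Dialogflow
    parameters: { song: 'Bohemian Rhapsody' }  // the $song value we marked above
  }
};

// This is where the song name ends up for our service:
console.log(exampleRequestBody.queryResult.parameters.song); // -> "Bohemian Rhapsody"
```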
Now we can tell Google to play a song and Google would know how to extract the title. What we want next is for the Assistant to answer us and for Google to send the request to our NodeJS service. You can add any response you want by adding one or more phrases, and you can use the parameters from above in them. Important here: toggle the switch that makes this intent the last one in the dialog. Otherwise the Assistant would wait for further commands after you say "Hey Google, tell the Sonos to play songname", which we don't want. You also have to tell Google to send the request to our service. To do so, enable the Fulfillment checkboxes at the bottom.
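For reference, here is a minimal sketch of what the fulfillment handler in the NodeJS service from Part 1 could look like, again assuming the V2 webhook format. The intent name and the Sonos call are placeholders, so adapt them to your own setup:

```javascript
// Minimal sketch of a fulfillment handler with Express.
// The intent name and the Sonos call are placeholders, not fixed names.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/sonos-google', (req, res) => {
  const { intent, parameters } = req.body.queryResult;

  if (intent.displayName === 'play.song') {   // hypothetical intent name
    // TODO: call your Sonos logic from Part 1 here, e.g. with parameters.song
    return res.json({ fulfillmentText: `I'll play ${parameters.song}` });
  }

  res.json({ fulfillmentText: "Sorry, I don't know that command." });
});

app.listen(3000);
```

If the webhook returns a fulfillmentText, it overrides the static response you defined in Dialogflow; return an empty object if you want to keep the response from the intent.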
Tell Google about our service
Yay, you created your first intent! Sadly, that's not enough yet. Right now Google would understand what you say but wouldn't know where to send it. You can fix this by opening the Fulfillment page in the menu on the left side. Simply insert your domain pointing to your service (Part 2) and enable the webhook. The endpoint of your service is called /sonos-google. PS: Google only allows HTTPS connections.
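In case your service from Part 1 isn't reachable via HTTPS yet, one possible option is to let Node serve HTTPS itself; a reverse proxy with a certificate works just as well. This is only a sketch, and the certificate paths are placeholders, e.g. for a Let's Encrypt certificate:

```javascript
// One possible way to serve the Express app over HTTPS directly from Node.
// The certificate paths are placeholders for your own certificate files.
const fs = require('fs');
const https = require('https');
const express = require('express');

const app = express();   // or reuse the app with the /sonos-google route from above

const options = {
  key: fs.readFileSync('/path/to/privkey.pem'),
  cert: fs.readFileSync('/path/to/fullchain.pem')
};

https.createServer(options, app).listen(443);
```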
Enable the integration
The next step towards playing a song is to enable and configure the integration of Dialogflow with the Google Assistant. Click on Integrations in the left menu, enable Google Assistant and click on the integration settings. Here you can define implicit intents. By default, a dialog with your agent starts with "Hey Google, talk to my Sonos" and the "Default Welcome Intent" answers. You have now entered the dialog with your agent and can tell it "play songname". This is super weird for our use case. Luckily, Google allows us to define multiple implicit intents, which means we can call an intent directly without entering a dialog first, like "Hey Google, tell Sonos to play songname". Here you should select all of the intents you created yourself for better usability.
Define the invocation name
As the last step you have to define the invocation name, which is the name you use to call your agent. This is a bit tricky because it seems like the name must be unique. I was lucky and could call my agent "dem Sonos" in German, which is like "the Sonos" and results in a sentence like "tell the Sonos to play". You have to find your invocation name yourself, but be creative and think about what would feel natural for you to say. Start by opening the Google Actions console and importing our Sonos agent project from Dialogflow. I selected "Smart Home" as the category and then the only option available. At the end you should land on a page where you can select Invocation in the left menu. Here you can specify the agent name.
Testing the agent
Yippee, you created your first agent! You definitely have to make sure it's working. Go to the Google Assistant Simulator and try entering "Hey Google, tell the Sonos to play test", and Google should reply with the response you defined in your intent. In my case it's "I'll play test".
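If you also want to check the service itself, independent of the simulator, you can post a small hand-built request to it directly. This sketch assumes Node 18+ (for the global fetch), the example handler from above and that the service runs locally on port 3000:

```javascript
// Send a minimal, hand-built Dialogflow-style request to the local service.
// Assumes Node 18+ (global fetch) and the example handler from above.
const payload = {
  queryResult: {
    intent: { displayName: 'play.song' },   // hypothetical intent name
    parameters: { song: 'test' }
  }
};

fetch('http://localhost:3000/sonos-google', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload)
})
  .then(res => res.json())
  .then(answer => console.log(answer.fulfillmentText)); // expected: "I'll play test"
```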
Create other intents like stop, resume, etc.
If everything is working so far: isn't it super exciting what we can do with Google? We don't even have to wait for the slow guys at Sonos.
To have a fully functional integration you should add more intents for volume up, volume down, next track, previous track, play songs from an artist, play a playlist, play a specific album, stop and resume. I trust you can manage it. If not, just ask me for help or write a comment. A small sketch of how such intents could be dispatched in the service follows below.
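As a starting point, one way to keep all those intents manageable in the NodeJS service is a simple map from intent names to handler functions. The intent names here are placeholders (use whatever names you chose in Dialogflow), and the console.log calls stand in for the actual Sonos logic from Part 1:

```javascript
// Sketch of dispatching all intents in one place. Intent names are placeholders;
// replace the console.log calls with your Sonos logic from Part 1.
const handlers = {
  'play.song':       p => console.log('play song', p.song),
  'play.artist':     p => console.log('play songs from', p.artist),
  'play.playlist':   p => console.log('play playlist', p.playlist),
  'play.album':      p => console.log('play album', p.album),
  'volume.up':       () => console.log('volume up'),
  'volume.down':     () => console.log('volume down'),
  'track.next':      () => console.log('next track'),
  'track.previous':  () => console.log('previous track'),
  'playback.stop':   () => console.log('stop'),
  'playback.resume': () => console.log('resume')
};

function handleIntent(queryResult) {
  const handler = handlers[queryResult.intent.displayName];
  if (!handler) return "Sorry, I don't know that command.";
  handler(queryResult.parameters);
  return 'Okay!';
}

// Example: handleIntent({ intent: { displayName: 'track.next' }, parameters: {} });
```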
I hope you enjoyed this series and everything worked for you.
Please give me feedback so I can improve my articles and correct anything that's wrong in them.