
How to automate testing for Google Assistant Apps?

13 May, 2019

The best way to avoid regressions and get fast feedback during development is automated testing, especially when your only alternative is testing manually by speaking to your voice app. The logic of a voice app is implemented in a webhook called a fulfillment. Since fulfillments are regular JSON APIs, they can be tested with existing tooling.

The JSON requests and responses of fulfillments are verbose, which can make your tests difficult to write and maintain. To avoid this you can create helper functions whose goal is to keep the test code easy to understand.
This blog post demonstrates automated testing for a Google Assistant app built with Dialogflow. We will use TypeScript to implement the logic and Jest as the test runner. The ideas in this blog post also apply to voice apps that are not built with Dialogflow or TypeScript. When you build a Google Assistant app without Dialogflow you can use the Actions SDK, but you would then need to implement the natural language understanding yourself.
Using Dialogflow, a simple interaction where we don’t expect the user to give any parameters looks like this:

import { dialogflow } from "actions-on-google";
const app = dialogflow();
app.intent('How are you intent', (conv) => {
  conv.add(`I'm fine`);
});

When a user speaks a phrase that is associated with the How are you intent in Dialogflow, our fulfillment will respond with I’m fine. We want to specify this behavior clearly in our unit test. To do this we create some helper functions: getResponseForIntentName and expectResponse. That allows us to specify a unit test that is easy to read:

describe('How are you intent', () => {
  it('should respond', async () => {
    const response = await getResponseForIntentName('How are you intent');
    expectResponse("I'm fine", response);
  });
});

To build these helpers we need to know what a Dialogflow fulfillment request looks like. We can get an example request by looking at the Diagnostic info in Dialogflow. Another option is to have our fulfillment log its request: console.log(conv.body);.
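For example (a small sketch reusing the handler from above), a temporary log statement prints the full webhook request body so it can be copied into a test:

app.intent('How are you intent', (conv) => {
  // Log the incoming webhook request while developing.
  console.log(JSON.stringify(conv.body, null, 2));
  conv.add(`I'm fine`);
});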

The Dialogflow Diagnostic Info shows the request that is sent to your fulfillment.


A minimal request that can be processed by the actions-on-google library will look like this:

{
  queryResult: {
    intent: {
      displayName: 'How are you intent',
    },
  },
}

Only the displayName of the intent is required for the fulfillment to execute its logic. We will create requests like this in the getResponseForIntentName helper.
To start the Express server and make a request against it we will use supertest.
Because we are using TypeScript we get errors when we make typos and auto-completion while typing. The type definitions of the request and response payloads are defined in the actions-on-google library; see GoogleCloudDialogflowV2WebhookRequest in the example below.

function getResponseForIntentName(name: string) {
  // A minimal webhook request: only the intent's displayName is needed.
  const request: GoogleCloudDialogflowV2WebhookRequest = {
    queryResult: {
      intent: {
        displayName: name,
      },
    },
  };
  // supertest starts the Express app returned by createApp and posts the
  // request to the fulfillment endpoint.
  return supertest(createApp())
    .post('/')
    .send(request);
}
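The createApp function is not shown above; as a minimal sketch (the exact setup is an assumption, not part of this post), it could create an Express app, parse JSON bodies, and mount the Dialogflow fulfillment on the root path:

import express from "express";
import { dialogflow } from "actions-on-google";

// Sketch of createApp: an Express app with the Dialogflow fulfillment
// mounted on '/', so supertest can post webhook requests to it.
export function createApp() {
  const app = dialogflow();
  app.intent('How are you intent', (conv) => {
    conv.add(`I'm fine`);
  });

  const server = express();
  server.use(express.json());
  server.post('/', app);
  return server;
}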

Now we want to assert that the response looks OK. Because we are only interested in the returned text, we will use Jest's toMatchObject to make an assertion on just part of the response. To make this work in TypeScript we can use the Partial<…> type.

function expectResponse(responseText: string, actual: Response) {
  // The expected payload only contains the fields we want to assert on.
  const conversationResponse: { richResponse: Partial<GoogleActionsV2RichResponse> } = {
    richResponse: {
      items: [{ simpleResponse: { textToSpeech: responseText } }],
    },
  };
  const webhookResponse: GoogleCloudDialogflowV2WebhookResponse = {
    payload: {
      google: conversationResponse,
    },
  };
  expect(actual.status).toBe(200);
  // toMatchObject checks only the fields specified in webhookResponse.
  expect(actual.body).toMatchObject(webhookResponse);
}

The testing helpers can be extended to support more advanced use cases. Dialogflow can parse parameters from the voice input and make them part of the request; to make this testable, you can extend the getResponseForIntentName helper. A response can also store state across interactions, which is called context; to test context you can extend the expectResponse helper. Check out examples with more advanced use cases here.
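As a sketch of what those extensions could look like (the helper signatures and the context-matching approach below are illustrative assumptions, not the exact code from the linked examples), the request helper can accept intent parameters and a second assertion helper can check the output contexts of the raw webhook response:

// Illustrative extension: pass intent parameters along with the intent name.
function getResponseForIntentName(name: string, parameters: { [key: string]: string } = {}) {
  const request: GoogleCloudDialogflowV2WebhookRequest = {
    queryResult: {
      intent: { displayName: name },
      parameters,
    },
  };
  return supertest(createApp())
    .post('/')
    .send(request);
}

// Illustrative extension: assert that a context with the given name is set.
function expectContext(contextName: string, actual: Response) {
  expect(actual.body.outputContexts).toEqual(
    expect.arrayContaining([
      expect.objectContaining({ name: expect.stringContaining(contextName) }),
    ])
  );
}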
What type of testing this is depends on who you ask. You could call it unit testing because you get fast feedback and you can test all the branches in the logic. On the other hand, we are not isolating a single unit with mocks and we start the whole server, so you could also call it integration testing or component testing. In practice it does not matter much, as long as it delivers value.

Conclusion

Although voice apps are a new type of development, you don’t have to start talking to your computer to test your logic. Automated testing makes the development process scalable, whether you are building a large app or working with many people. Immediate feedback saves time for you and your team, so you can build more cool new features!
A runnable example of the tests in this blog post can be found here.
