Testing conversation flows
Once a conversation flow has been created, it is important to ensure it continues to work. Clai tests are designed to help with this.
A Clai test is a sequence of user utterances and bot responses captured from a conversation. During a test run, the ClaiRegressionTesting channel sends the text of each user utterance to Rasa and compares the resulting conversation to the content of the test to determine whether the test passed or failed.
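The exact transport is internal to the ClaiRegressionTesting channel, but conceptually a run amounts to replaying each user utterance against Rasa and comparing the replies. Here is a minimal sketch of that idea using Rasa's standard REST channel; the endpoint is real Rasa, while the test structure, field names, and pass/fail policy are hypothetical illustrations, not Clai's actual implementation:

```python
# Sketch of a regression-test run against a Rasa server with the standard
# REST channel enabled. The `test` structure and helper names are hypothetical.
import requests

RASA_REST_URL = "http://localhost:5005/webhooks/rest/webhook"  # assumed default port

def run_test(test: dict, sender_id: str = "clai-regression-test") -> bool:
    """Replay each user utterance and compare bot replies to the expected ones."""
    for step in test["steps"]:
        resp = requests.post(
            RASA_REST_URL,
            json={"sender": sender_id, "message": step["user"]},
        )
        resp.raise_for_status()
        actual = [message.get("text") for message in resp.json()]
        if actual != step["expected_bot_responses"]:
            return False  # one possible policy: fail on the first mismatching exchange
    return True

example_test = {
    "steps": [
        {"user": "hi", "expected_bot_responses": ["Hello! How can I help?"]},
    ]
}
print("passed" if run_test(example_test) else "failed")
```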
Tests are normally read-only, but a failing test can be updated by overwriting its content with the results of the most recent test run.
Tests inherit the language of the conversation they were created from and only appear in the dialogue menu when that language is selected.
Creating Tests
You can create a new test in two different ways.
Creating via the webchat panel
Click on the clipboard icon in the top bar of the webchat panel to create a new test from the current conversation.
Creating via the conversations screen
In Incoming -> Conversations, you can create a new test by clicking on the clipboard icon in the action bar above the conversation.
Conversations from versions before 1.0 are not guaranteed to be supported.
Running Tests
You can trigger a run of all the tests in your project from the training button’s dropdown menu. The same menu also has an option to run only the tests that use the currently selected language.
You can also run tests individually by selecting them in the dialogue menu and clicking on the play button in the top right corner of the test.
When your test run is complete, a notification will appear in the top right corner with the number of passing and failing tests. All tests that failed in the most recent test run can be found in the Failing tests smart group. This smart group ignores the by-language filter normally applied to tests, making it easier to update failing tests.
Updating failing tests
When a test fails, its current content is diffed against the results of the test run. Matching events have no colored background; expected events (from the original conversation) are shown in red, and actual events (from the test run results) are shown in green.
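Conceptually, this comparison is a sequence diff between expected and actual events. A rough sketch using Python's difflib, where the event names and the flat-list representation are illustrative rather than Clai's internal format:

```python
import difflib

expected = ["utter_greet", "utter_ask_name", "utter_goodbye"]   # from the stored test
actual   = ["utter_greet", "utter_ask_email", "utter_goodbye"]  # from the test run

matcher = difflib.SequenceMatcher(a=expected, b=actual)
for op, a0, a1, b0, b1 in matcher.get_opcodes():
    if op == "equal":
        for event in expected[a0:a1]:
            print(f"  match    {event}")   # no colored background
    else:
        for event in expected[a0:a1]:
            print(f"- expected {event}")   # rendered in red
        for event in actual[b0:b1]:
            print(f"+ actual   {event}")   # rendered in green
```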
You can overwrite the test with the new results by clicking the Set actual as expected button in the top bar of the test. This deletes all the expected events and saves the actual events as part of the test.
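In data terms, Set actual as expected simply discards the stored expectations and promotes the latest run's events in their place. A hypothetical sketch (field names are assumptions, not Clai's schema):

```python
def set_actual_as_expected(test: dict, last_run: dict) -> None:
    """Overwrite the stored test with the results of the most recent run."""
    # Drop the old expected events and keep the actual ones as the new baseline.
    test["expected_events"] = list(last_run["actual_events"])
```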
If the conversation’s flow has changed significantly, you will need to delete the test and create a new one using the steps described earlier in this section.