How to reduce the training time of your chatbot

To reduce chatbot training time, we rely on linguistics: training the chatbot with data that is tagged for linguistic phenomena. We solve this problem by reducing the words in different user queries to their lemmatized form, so we can later train the bot with those terms linked to their respective inflected forms.
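As a minimal sketch of the idea, the snippet below maps inflected word forms to their lemmas with a hand-written lookup table. The table and the `lemmatize` helper are illustrative assumptions only; a production system would use a full morphological analyzer rather than a hard-coded dictionary.

```python
# Toy lemma table: inflected form -> lemma. Illustrative only; a real
# system derives these mappings from a morphological analyzer.
LEMMAS = {
    "lights": "light",
    "turned": "turn",
    "turning": "turn",
    "switches": "switch",
}

def lemmatize(tokens):
    """Map each token to its lemma, falling back to the token itself."""
    return [LEMMAS.get(t.lower(), t.lower()) for t in tokens]

print(lemmatize("Turning on the lights".split()))
# → ['turn', 'on', 'the', 'light']
```

Training data expressed over lemmas then automatically covers every inflected variant linked to them.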

We all know that chatbots require a training phase (or "some training time") before they can start interacting with users. However, a long training period can be risky for a business, since its customers may turn to competitors whose technology is already ready to serve them.

The training period of a conversational chatbot involves feeding the bot with different variations of all the possible user intents. For example, the request "turn on the lights in the living room" can be phrased in different ways:

  • turn on the lights in the living room
  • turn on the living room lights
  • I’d like to turn on the lights in the living room
  • can you turn on the living room lights?
  • please, turn on the living room lights

Imagine how much training time we could save if we were able to teach the bot that all these requests are variations of the same intent and share the same meaning. Our research into the chatbot industry showed that every platform requires significant manual tagging and multiple training iterations to accomplish this.
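To make the idea concrete, here is a deliberately simple sketch that collapses the five phrasings above to one set of content words. The stopword list and regex tokenizer are assumptions for illustration; a real normalizer would rely on parsing and morphology, not a filler-word filter.

```python
import re

# Toy normalizer: strip politeness/filler words and compare the remaining
# content words as a set. Illustrative only; real systems use linguistic
# analysis rather than a hand-picked stopword list.
FILLER = {"the", "in", "i'd", "like", "to", "can", "you", "please"}

def content_words(query):
    tokens = re.findall(r"[a-z']+", query.lower())
    return frozenset(t for t in tokens if t not in FILLER)

variants = [
    "turn on the lights in the living room",
    "turn on the living room lights",
    "I'd like to turn on the lights in the living room",
    "can you turn on the living room lights?",
    "please, turn on the living room lights",
]

# All five phrasings collapse to the same set of content words.
print({content_words(v) for v in variants})
# → {frozenset({'turn', 'on', 'lights', 'living', 'room'})}
```

Once the variants collapse to one representation, a single training example covers all of them.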

To shorten the training process and allow companies to launch their chatbots as fast as possible, at Bitext we put our technology to work automating the process described in the example above, significantly reducing training time and increasing the chatbot's accuracy.


How do we do it? 

We solve the chatbot training problem by reducing different user requests to a normalized form that captures their common meaning. Then we feed the bot the normalized forms, linked to their respective surface forms. As a result, the complexity the chatbot needs to handle drops drastically.
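One way to picture the resulting training data is an index from each normalized form to the surface forms it covers. The structure and the `lights_on` label below are hypothetical, purely to show how one labeled entry can stand in for many raw queries.

```python
# Hypothetical training index: one normalized form keyed to the surface
# forms it covers, so a single labeled example trains all variants.
training_index = {
    "turn on lights in living room": [
        "turn on the lights in the living room",
        "can you turn on the living room lights?",
        "please, turn on the living room lights",
    ],
}

# One intent label per normalized key instead of one per surface form.
labels = {norm: "lights_on" for norm in training_index}
print(len(labels), "label(s) covering",
      sum(len(v) for v in training_index.values()), "surface forms")
```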

Taking the example "turn on the lights in the living room" again, we normalize it into two types of forms to solve different problems at different stages of the bot's lifecycle: while training the chatbot, and when it is live and users are interacting with it.

Type 1: Intent rewriting: training stage

  {
    "intent": "turn on",
    "object": "lights",
    "place": "living room",
    "polarity": "affirmative"
  }

Type 2: Sentence rewriting: for live use

  "turn on lights in living room"
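The two rewriting types can be sketched as a single function that emits both forms from one analyzed request. Note that the parse itself is hard-coded here as an assumption; only the field names ("intent", "object", "place", "polarity") come from the example above.

```python
import json

# Illustrative only: derive both normalized forms from one parsed request.
def normalize(parsed):
    # Type 1: structured frame for the training stage.
    frame = {
        "intent": parsed["intent"],
        "object": parsed["object"],
        "place": parsed["place"],
        "polarity": parsed["polarity"],
    }
    # Type 2: flat sentence for live use, rebuilt from the frame.
    sentence = f'{frame["intent"]} {frame["object"]} in {frame["place"]}'
    return frame, sentence

parsed = {"intent": "turn on", "object": "lights",
          "place": "living room", "polarity": "affirmative"}
frame, sentence = normalize(parsed)
print(json.dumps(frame))
print(sentence)  # → turn on lights in living room
```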

This preprocessing of user requests reduces training time from months to weeks, or from weeks to days. It also makes continuous improvement easy and controllable.

Next week we will publish an article explaining the approach introduced today in more depth, so if you want to be the first to read it, subscribe to our blog.



Subscribe Here!