On the Stanford parser (and Bitext parser)

[fa icon="calendar'] Aug 17, 2021 8:10:26 PM / by Bitext posted in Sentiment Analysis, Bitext

[fa icon="comment"] 0 Comments

In some of our recent talks, colleagues have asked us about the Stanford parser and how it compares to Bitext technology (namely at our last workshop on Semantic Analysis of Big Data in San Francisco, and in our presentation at the Semantic Garage, also in San Francisco).

Read More [fa icon="long-arrow-right"]

What is the difference between stemming and lemmatization?

[fa icon="calendar'] Jul 7, 2021 8:54:10 PM / by Bitext posted in Machine Learning, NLP, Bitext, Natural Language, Text Analytics, Artificial Intelligence, Deep Learning, Chatbots, Stemming, AI, Multilanguage, Lemmatization, NLP for Core, NLP for Chatbots, Conversational AI

[fa icon="comment"] 4 Comments

Stemming and lemmatization are methods used by search engines and chatbots to analyze the meaning behind a word. Stemming reduces a word to its stem, while lemmatization takes into account the context in which the word is being used. We'll go into more detailed explanations and examples below.
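
To make the contrast concrete, here is a minimal Python sketch using NLTK (an illustrative toolkit choice, not necessarily what powers Bitext's own services): the stemmer simply strips suffixes, while the lemmatizer maps each word to its dictionary form.

    # Minimal illustration with NLTK (assumes: pip install nltk).
    import nltk
    from nltk.stem import PorterStemmer, WordNetLemmatizer

    nltk.download("wordnet", quiet=True)   # lexical data the lemmatizer relies on
    nltk.download("omw-1.4", quiet=True)

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()

    for word in ["studies", "studying"]:
        # stemming chops the suffix (giving "studi"), lemmatization returns "study"
        print(word, "| stem:", stemmer.stem(word),
              "| lemma:", lemmatizer.lemmatize(word, pos="v"))

    # irregular forms show the difference even more clearly
    print("better | stem:", stemmer.stem("better"),
          "| lemma:", lemmatizer.lemmatize("better", pos="a"))  # dictionary form "good"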

Read More [fa icon="long-arrow-right"]

Evaluate the Quality of your Chatbots and Conversational Agents

[fa icon="calendar'] Jun 10, 2021 4:07:00 PM / by Bitext posted in API, Machine Learning, NLP, Big Data, Bitext, Natural Language, Artificial Intelligence, Deep Learning, Chatbots, Phrase Extraction, NLG, TechCrunch, NLU, AI, Multilanguage, NLP for Core, NLP for Chatbots

[fa icon="comment"] 0 Comments

It is always important to evaluate the quality of your chatbots and conversational agents in order to know their real health, accuracy and efficiency. Chatbot accuracy can only be increased by constantly evaluating the bot and retraining it with new data that answers your customers' queries.

Chatbots require large amounts of training data to perform correctly. If you want your chatbot to recognize a specific intent, you need to provide a large number of sentences that express that intent, usually generated by hand. This manual generation is error-prone and can cause erroneous results.

How can we solve it?

With artificially generated data. Since Dialogflow is one of the most popular chatbot-building platforms, we chose to perform our tests using it.
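
As a rough illustration of what such a test run can look like, here is a minimal sketch that scores a Dialogflow agent against a labeled test set. It assumes the google-cloud-dialogflow v2 Python client and valid credentials; the project ID, session name, intent labels and test queries are placeholders rather than our actual benchmark data.

    # Rough sketch: scoring a Dialogflow agent on a labeled test set.
    # Assumes the google-cloud-dialogflow (v2) client and valid credentials;
    # PROJECT_ID, intent names and queries below are placeholders.
    from google.cloud import dialogflow

    PROJECT_ID = "your-gcp-project"
    test_set = [                     # (user query, expected intent)
        ("I forgot my password", "reset_password"),
        ("how do I cancel my order", "cancel_order"),
    ]

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(PROJECT_ID, "evaluation-session")

    correct = 0
    for text, expected in test_set:
        query_input = dialogflow.QueryInput(
            text=dialogflow.TextInput(text=text, language_code="en")
        )
        response = session_client.detect_intent(
            request={"session": session, "query_input": query_input}
        )
        correct += response.query_result.intent.display_name == expected

    print(f"intent accuracy: {correct / len(test_set):.2%}")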

Read More [fa icon="long-arrow-right"]

What do you evaluate in your chatbots? Some ideas

[fa icon="calendar'] May 31, 2021 10:00:00 AM / by Bitext posted in Machine Learning, NLP, Big Data, Bitext, Deep Linguistic Analysis, Natural Language, Text Analytics, Artificial Intelligence, Deep Learning, Chatbots, NLU, POS tagging, AI, Multilanguage, NLP for Core, NLP for Chatbots, "Multilingual synthetic data"

[fa icon="comment"] 0 Comments

In this blog post we will discuss three ways to evaluate your chatbot (a short scoring sketch follows further below), using:

  1. real world evaluation data
  2. synthetic data
  3. "in scope" or "out of scope" queries

You have a chatbot up and running, offering help to your customers. But how do you know whether the help you are providing is correct or not? Chatbot evaluation can be complex, especially because it is affected by many factors.
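
Whichever data source you use, the bookkeeping is similar. The sketch below is an illustration only, with a hypothetical predict_intent() hook standing in for whatever platform your bot actually runs on; it computes the two figures we keep coming back to: intent accuracy on in-scope queries and how often out-of-scope queries are correctly routed to a fallback intent.

    # Illustrative sketch only: predict_intent() is a hypothetical hook for
    # whatever chatbot platform you call; each test query is a dict with
    # "text" and "expected" (the expected intent, or "out_of_scope").
    def evaluate(test_queries, predict_intent, fallback_label="fallback"):
        in_scope = [q for q in test_queries if q["expected"] != "out_of_scope"]
        out_scope = [q for q in test_queries if q["expected"] == "out_of_scope"]

        # share of in-scope queries mapped to the right intent
        accuracy = sum(
            predict_intent(q["text"]) == q["expected"] for q in in_scope
        ) / max(len(in_scope), 1)

        # share of out-of-scope queries correctly sent to the fallback intent
        rejection = sum(
            predict_intent(q["text"]) == fallback_label for q in out_scope
        ) / max(len(out_scope), 1)

        return {"in_scope_accuracy": accuracy, "out_of_scope_rejection": rejection}
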
Read More [fa icon="long-arrow-right"]

How chatbots enhance customer experience in contact centers

[fa icon="calendar'] May 25, 2021 5:00:00 PM / by Bitext posted in API, Machine Learning, NLP, Big Data, Bitext, Natural Language, Artificial Intelligence, Deep Learning, Chatbots, Phrase Extraction, NLG, TechCrunch, NLU, AI, Multilanguage, NLP for Core, NLP for Chatbots

[fa icon="comment"] 0 Comments

Chatbots can improve customer experience in contact centers by:

  • Reducing customer wait time
  • Achieving higher customer satisfaction
  • Cutting down contact center expenses and increasing productivity
  • Getting to know your customer better
  • Using human agents only when necessary

Most customer service and contact center executives are homing in on bots because they can handle large volumes of queries. Thus, their service center staff can focus on more complex tasks. As the technology behind bots has improved in terms of natural language processing (NLP), machine learning (ML), and intent-matching capabilities, companies are increasingly willing to trust them to handle direct customer interaction.

Read More [fa icon="long-arrow-right"]

How to reduce the training time of your chatbot

[fa icon="calendar'] May 17, 2021 12:00:00 AM / by Bitext posted in Chatbots

[fa icon="comment"] 0 Comments

To reduce chatbot training time, we rely on linguistics: training the chatbot with data that is tagged for linguistic phenomena. We address this by reducing the words in different user queries to their lemmatized form, so we can later train the bot with those terms linked to their respective inflected forms.
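
As a rough illustration of that lemma-based normalization, here is a minimal sketch using spaCy (an illustrative library choice, not necessarily the stack behind Bitext's own tools): differently inflected phrasings of the same request collapse onto a shared lemmatized form, so far fewer raw variants are needed to cover an intent.

    # Minimal sketch with spaCy; assumes the small English model is installed
    # (python -m spacy download en_core_web_sm). Example queries are made up.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    queries = [
        "I forgot my password",
        "I have forgotten my password",
        "forgetting my passwords again",
    ]
    for query in queries:
        lemmas = " ".join(token.lemma_ for token in nlp(query))
        print(f"{query!r} -> {lemmas!r}")   # inflected forms map onto the same lemmas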

We all know that chatbots require a training phase (or "some training time") before they can start interacting with users. However, a long training period can be risky for a business, since its customers may turn to competitors whose technology is already in place to serve them.

The training period of a conversational chatbot involves feeding the bot with different variations of all the possible user intents. For example, the request "turn on the lights in your living room" can be phrased in many different ways:

Read More [fa icon="long-arrow-right"]
