Customers are turning to channels such as Facebook Messenger to complain when their online order arrives late, so companies have started trusting chatbots to handle these issues. Can you see the potential of applying sentiment analysis to chatbot conversations?
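To make that potential concrete, here is a minimal sketch of lexicon-based sentiment scoring applied to a chatbot message. The word lists, function names, and threshold are all hypothetical illustrations, not part of any real chatbot platform; a production system would use a trained model rather than hand-picked words.

```python
# Hypothetical mini-lexicons for illustration only.
POSITIVE = {"thanks", "great", "love", "perfect", "helpful"}
NEGATIVE = {"late", "angry", "terrible", "refund", "broken"}

def sentiment_score(message: str) -> int:
    """Signed score: > 0 leans positive, < 0 leans negative, 0 is neutral."""
    words = message.lower().split()
    positives = sum(w in POSITIVE for w in words)
    negatives = sum(w in NEGATIVE for w in words)
    return positives - negatives

def needs_escalation(message: str, threshold: int = 0) -> bool:
    """Flag conversations that may need a human agent."""
    return sentiment_score(message) < threshold

print(needs_escalation("my order is late and I am angry"))  # True
```

Even a toy scorer like this shows the idea: a chatbot can detect when a conversation is going badly and hand the customer over to a human before frustration escalates.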
Over the past few weeks, much has been said about last Google I/O's presentation of Duplex, an AI-powered assistant that can make phone calls and talk to humans to make arrangements for you. Some people are so impressed by the achievement that they are already pointing out the ethical consequences of being unable to tell a human apart from a machine, while others are playing it down, noting that we have only seen a demo.
For a monolingual person, it comes as an uncomfortable reality that most people in the world (60%) speak more than one language in their day-to-day lives. It is very common for a person to speak one language at home and another outside, be it at school or at work. But many more scenarios are possible.
What happens after you build a chatbot for a client? I mean, when the architecture is set up, the NLU module has been built, the conversation flows are designed, and everything is ready. All that is left is going into production, where thousands of user interactions will train your bot and make it super smart, right?