Google I/O: Multiple Actions or Double Intent?

In the past few weeks, a lot has been said about the presentation of Duplex at the last Google I/O: an AI-powered assistant that can make phone calls and talk to humans to make arrangements for you. Some people are so impressed by the achievement that they are already pointing out the ethical consequences of not being able to tell a human apart from a machine, while others are playing it down, noting that we have only seen a demo.

We mostly agree with the second group. It made us wonder why people focus so much on what is largely a matter of appearances when Natural Language Understanding (NLU), the core of any conversation, is far from solved, as John H. Richardson points out today in WIRED.

We are much more impressed, however, by another achievement shown at the same presentation: Multiple Actions. It is the same thing many people call "double intent", and something we first talked about a long time ago.

Problems of this kind, such as coordination, negation, or conditionality, can only be solved with linguistic knowledge if you can't afford to wait for the long term. Other approaches are shots in the dark and will take too long, and the NLP community seems to be starting to notice this. Pichai even mentioned the linguistic term for the technique shown in the Multiple Actions demo: "coordination reduction".

Our approach allows us to solve not only coordination, but also negation and virtually any subordination that would mislead an AI with no parser behind it. It comes down to rewriting every user query on the fly into simpler queries, so the chatbot or virtual assistant only has to react to those simpler queries (a toy sketch follows the list below):

  • Coordination

  • Negation

  • Subordination
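
To make the idea concrete, here is a minimal, hypothetical sketch in Python of what this kind of query rewriting could look like. The function names and the rules are our own illustration for this post, not the actual parser or API behind our services; a real system relies on full linguistic analysis rather than simple patterns.

    # Toy illustration of query rewriting (not the production system).
    # A conjoined query is split into independent sub-queries ("coordination
    # reduction"), and a negated alternative is reduced to the affirmed option,
    # so a downstream chatbot only ever sees simple, single-intent queries.

    import re

    def rewrite_coordination(query: str) -> list[str]:
        """Split 'book a flight and reserve a hotel' into two stand-alone queries."""
        parts = re.split(r"\band(?:\s+also)?\b|\bas well as\b", query)
        return [p.strip(" ,.") for p in parts if p.strip(" ,.")]

    def rewrite_negation(query: str) -> str:
        """Reduce 'a table for two, not for four' to 'a table for two'."""
        return re.split(r",?\s*\bnot\b", query)[0].strip(" ,.")

    def simplify(query: str) -> list[str]:
        """Rewrite a raw user query into one or more simpler queries."""
        return [rewrite_negation(part) for part in rewrite_coordination(query)]

    if __name__ == "__main__":
        print(simplify("book a flight to Boston and reserve a hotel, not a hostel"))
        # ['book a flight to Boston', 'reserve a hotel']

Each of the resulting simple queries can then be matched to an intent on its own, which is exactly what makes the downstream chatbot's job tractable.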

Want to jump into the linguistic side of NLP and see how it performs with your own queries? Just try our Query Rewriting and Query Rewriting + Negation services, now available on our API platform.
