When we run a search, we want to find relevant results not only for the exact expression we typed in the search bar, but also for other possible forms of the words we used. For example, if we have typed “skirts”, it is very likely we also want to see results containing the form “skirt”.
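This mapping from inflected forms to a shared stem can be sketched with a toy suffix stripper; it is a drastic simplification of what real search engines do with Porter/Snowball stemmers or a full lemma lexicon, and `naive_stem` and the sample documents are our own illustration:

```python
def naive_stem(word: str) -> str:
    """Toy suffix-stripping stemmer: collapses simple plural forms onto
    a shared stem. Real engines use Porter/Snowball or a lemma lexicon."""
    word = word.lower()
    if word.endswith("ss"):                  # "dress" is not a plural
        return word
    for suffix, repl in (("ies", "y"), ("es", "e"), ("s", "")):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)] + repl
    return word

# Index both the query and the documents under stems,
# so "skirts" also retrieves documents containing "skirt".
docs = ["red skirt", "blue skirts", "green dress"]
query = "skirts"
hits = [d for d in docs
        if naive_stem(query) in {naive_stem(t) for t in d.split()}]
# hits -> ["red skirt", "blue skirts"]
```

Applying the same normalization on both sides of the match is the key point: the query and the index must agree on the canonical form.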
Almost three years after Apple launched its well-known voice assistant, Siri, for Arabic, there is still room for improvement. Siri currently understands more than 20 languages and dialects, but when it comes to Arabic, its abilities fall short of fully understanding what users need. Frequent utterance errors, combined with poor comprehension, are quite frustrating for Arabic speakers. What is going wrong here?
One of the flaws of the usual approach to training data generation is that, when you ask somebody to create training data for you by hand, they will make an effort to write the sentences correctly, following the spelling and punctuation norms of your language. Even if some errors slip in, they will be minimal, because the writers are trying to do things right; that is, to provide orthographically correct sentences. Real user input, however, is full of typos, dropped letters and informal spellings, so a model trained only on clean sentences never sees the input it will actually receive.
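One common mitigation is to inject realistic noise into the clean sentences afterwards, so the training data looks more like what users actually type. A minimal sketch; `add_noise` and its drop/duplicate perturbations are our own illustration, not a production augmentation pipeline:

```python
import random

def add_noise(sentence: str, p: float = 0.1, seed: int = 0) -> str:
    """Randomly drop or duplicate characters with total probability p,
    simulating the typos that hand-written training data lacks."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    out = []
    for ch in sentence:
        r = rng.random()
        if r < p / 2:
            continue                 # dropped letter
        out.append(ch)
        if r < p:
            out.append(ch)           # "fat-finger" duplicate
    return "".join(out)
```

In practice you would keep both the clean and the noised copy of each sentence, so the model learns to map noisy surface forms to the same intent as the clean ones.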
All Machine Learning (ML) engines that work with text can benefit from a solid linguistic foundation. If they operate in a multilingual environment, the need for a good lexicon (with forms, lemmas and attributes) is pressing. Even basic features such as word embeddings improve hugely when enriched with linguistic knowledge; if this is not usually done, it is because of the lack of linguists working at ML companies.
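One simple way such a lexicon can enrich embeddings is to pool the vectors of all surface forms that share a lemma, so rare inflections inherit signal from the rest of their paradigm. A sketch over assumed toy data; `LEXICON`, the 2-d vectors and `lemma_pooled_vector` are all hypothetical names for illustration:

```python
# Hypothetical mini-lexicon: surface form -> lemma.
LEXICON = {"skirt": "skirt", "skirts": "skirt", "run": "run", "runs": "run"}

def lemma_pooled_vector(word, vectors, lexicon=LEXICON):
    """Average the embeddings of every known form sharing `word`'s lemma."""
    lemma = lexicon.get(word, word)
    forms = [vectors[w] for w, l in lexicon.items()
             if l == lemma and w in vectors]
    if not forms:                    # out-of-lexicon word: plain lookup
        return vectors.get(word)
    dim = len(forms[0])
    return [sum(v[i] for v in forms) / len(forms) for i in range(dim)]

# Toy 2-d embeddings; the form "skirts" borrows signal from "skirt".
vectors = {"skirt": [1.0, 0.0], "skirts": [0.0, 1.0]}
print(lemma_pooled_vector("skirts", vectors))   # [0.5, 0.5]
```

The same idea scales to real embedding tables: the lexicon supplies the form-to-lemma mapping that raw co-occurrence statistics cannot recover on their own.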
The company CB Insights recently published a report titled “Lessons From The Failed Chatbot Revolution”. This ominous title reveals a hard truth: chatbots have not been the revolution we expected.
A few days ago, Amazon Web Services held AWS re:Invent, one of the world's biggest IT events, covering everything Amazon has to offer. Among the many novelties that were announced, several were very interesting for us.