In previous posts, we outlined the crucial role of Machine Learning for Analytics (in How to Make Machine Learning more Effective using Linguistic Analysis?) and the implications of using Machine Learning for analyzing and structuring text (in How Phrase Structure helps Machine Learning?). In an upcoming post, we will explain how Linguistics can complement Machine Learning and how both can be integrated into the same technology stack.
This post dives into one of the topics of a previous post, "How to Make Machine Learning more Effective using Linguistic Analysis". There we covered the strong points of Machine Learning for insight extraction, but we also stated that text analysis is not the area where Machine Learning shines the most. Here we go into some detail on that last statement.
Text analysis is becoming a pervasive task in many business areas. Machine Learning, which is based on statistical and mathematical models, is the most common approach to it.
Everything looks promising in the world of bots: big players are pushing platforms to build them (Google, Amazon, Facebook, Microsoft, IBM, Apple), large retail companies are adopting them (Starbucks, Domino’s, British Airways), the press is excited about movies becoming reality, and we users are eager to use them. However, one black hole remains in this scenario: the bot development process.
In some of our recent talks, colleagues have asked us about the Stanford parser and how it compares to Bitext technology (namely at our last workshop on Semantic Analysis of Big Data in San Francisco, and in our presentation at the Semantic Garage, also in San Francisco).
Stemming and lemmatization are methods used by search engines and chatbots to reduce a word to a base form in order to get at its meaning. Stemming truncates a word to its stem, while lemmatization uses the context in which the word appears to find its dictionary form, or lemma. We'll go into more detailed explanations and examples later.
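To make the contrast concrete, here is a minimal toy sketch in Python: a crude suffix-stripping stemmer and a hand-made lemma lookup standing in for a real lexicon. The suffix list and the tiny `LEMMAS` table are illustrative assumptions, not how any production system (such as the Porter stemmer or a full morphological dictionary) actually works.

```python
def stem(word):
    """Crude stemmer: strip a few common English suffixes.
    The result may not be a real word (e.g. 'studies' -> 'stud')."""
    for suffix in ("ing", "ies", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word


# A lemmatizer maps a word to its dictionary form, typically using
# context such as part of speech. This tiny hypothetical table stands
# in for the lexicon a real lemmatizer would consult.
LEMMAS = {
    ("studies", "NOUN"): "study",
    ("studies", "VERB"): "study",
    ("better", "ADJ"): "good",
    ("was", "VERB"): "be",
}


def lemmatize(word, pos):
    """Look up the lemma for (word, part-of-speech); fall back to the word."""
    return LEMMAS.get((word, pos), word)


print(stem("studies"))            # 'stud'  -- a stem, not a real word
print(lemmatize("studies", "NOUN"))  # 'study' -- an actual dictionary form
print(stem("better"))             # 'better' -- suffix stripping misses it
print(lemmatize("better", "ADJ"))    # 'good'  -- context-aware lookup
```

Note how stemming is cheap but lossy ("studies" becomes the non-word "stud" and "better" is untouched), while lemmatization returns real dictionary forms but needs context (the part of speech) and a lexicon to do so.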