2016 was a very successful year for Artificial Intelligence and Machine Learning, but both fields still have incredible potential to grow, and that is what we expect to see this year.
We are sure that in three years' time, when we look back at today's AI and ML applications, they will seem like child's play.
Setting aside all the hype Artificial Intelligence is generating, in this post we want to focus on the areas whose potential makes them the most promising topics of research.
In Machine Learning and Artificial Intelligence we have seen big improvements in areas like voice and image recognition. When it comes to language, however, there is still a long way to go.
If we want to communicate with machines, the most important step will be for them to understand the structure behind human language. The role of a robust parser in this process is key: as we said previously, it offers many more possibilities than any purely rule-based or statistical approach.
To keep making progress in the NLP field, we believe focusing on linguistics is the right path to follow. According to our internal research and tests, taking syntactic and morphological information into account improves results when training artificial intelligence systems.
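As a rough illustration of what "morphological information" can mean in practice, the sketch below extracts a few cheap morphological features from each token. A production system would use a full tagger or parser; the feature names and rules here are purely hypothetical stand-ins.

```python
# Illustrative sketch: augmenting plain tokens with simple
# morphological features (suffix, capitalization, digit flags).
# Feature names and rules are hypothetical, not a real tagger.

def morph_features(token: str) -> dict:
    """Return a few cheap morphological features for one token."""
    return {
        "lower": token.lower(),         # normalized surface form
        "suffix3": token[-3:].lower(),  # crude inflection signal
        "is_title": token.istitle(),    # capitalization pattern
        "has_digit": any(c.isdigit() for c in token),
    }

sentence = "Parsers reveal structure"
enriched = [morph_features(t) for t in sentence.split()]
print(enriched[0]["suffix3"])  # prints "ers"
```

Feature dictionaries like these can be fed to a classifier alongside the raw tokens, which is one simple way linguistic signals enter a training pipeline.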
Adoption of machine learning and deep learning:
If you are a tech enthusiast or you work in this field, systems powered by machine learning and deep learning are nothing new to you. However, due to a lack of information and the technical complexity of these projects, many companies still have not adopted this technological approach in their daily operations, even when it could save them time and money.
This year, as the hype surrounding these technologies settles and additional human and technical resources become available, more companies will focus on adopting machine learning and deep learning. This will drive cross-domain adoption and new use cases.
Data enrichment is not a new concept either, but we expect to see more companies using it to achieve higher success rates.
In the ML and DL field, engineers and developers will keep polishing existing algorithms and adopting new ones that allow them to achieve better results. However, this is a well-worn approach with ever diminishing marginal returns. Is there any other way to achieve significant increases in accuracy?
In any deep learning project we can distinguish two components: the algorithm and the data. If we cannot see a clear path of advancement in the first, we should consider improving the second.
Datasets are becoming larger and more widely available, but building one that fits your training needs perfectly is still expensive and resource-consuming. That is why polishing the datasets you have already developed is the easiest path to take. By adding NLP features to your existing textual datasets, you can see significant improvements in the overall quality of the results.
Our internal research shows that adding enriched data containing morphological and syntactic information translates into accuracy increases of 10 to 15%.
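One way to picture this kind of enrichment is to extend an existing bag-of-words representation with extra linguistic columns. In the minimal sketch below, a tiny suffix-based tagger stands in for a real POS tagger; the tag names, rules, and function names are all hypothetical.

```python
# Illustrative sketch of data enrichment: extend an existing
# bag-of-words feature set with extra linguistic columns.
# The toy suffix-based tagger is a stand-in for a real POS tagger.

from collections import Counter

def toy_pos(token: str) -> str:
    """Guess a coarse part-of-speech tag from the token's suffix."""
    if token.endswith("ing"):
        return "VERB"
    if token.endswith("ly"):
        return "ADV"
    return "NOUN"

def enrich(text: str) -> dict:
    tokens = text.lower().split()
    features = Counter(tokens)                            # original bag of words
    features.update(f"pos={toy_pos(t)}" for t in tokens)  # enrichment columns
    return dict(features)

print(enrich("models keep improving quickly"))
```

The original dataset stays intact; the enrichment only appends new feature columns, which is why it can be applied to datasets you have already collected.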