AI, Climate and Synthetic Data

The COP25 Climate Summit is currently being held in Madrid. Many subjects are on the table regarding a possible climate crisis and how to face it. Do Machine Learning (ML) and Natural Language Processing (NLP) have something to say about it? Surprisingly, yes, they do!

It seems obvious, but computers need energy to work. There are more computers every day, and their energy needs keep growing too. In the past, the computing power needed to train state-of-the-art AI systems roughly doubled every two years (as we learnt from this article). Since 2012, however, the trend has skyrocketed: that requirement now doubles every 3.4 months (not every 2 years anymore!). This graph is self-explanatory.
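To see what that pace implies, here is a quick back-of-the-envelope calculation in Python. The 3.4-month and 2-year doubling figures come from the trend described above; the rest is simple arithmetic:

```python
# Back-of-the-envelope comparison of the two compute-growth regimes:
# doubling every 24 months (old trend) vs. every 3.4 months (since 2012).

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Return how much compute demand multiplies over `months`,
    given that it doubles every `doubling_period_months`."""
    return 2 ** (months / doubling_period_months)

years = 2
months = years * 12

old_trend = growth_factor(months, doubling_period_months=24)   # pre-2012 pace
new_trend = growth_factor(months, doubling_period_months=3.4)  # post-2012 pace

print(f"Over {years} years, the old trend multiplies compute by ~{old_trend:.0f}x")
print(f"Over {years} years, the new trend multiplies compute by ~{new_trend:.0f}x")
# Old trend: ~2x. New trend: 2^(24/3.4), roughly 133x over the same period.
```

In other words, in the time it used to take for compute demand to merely double, it now grows by more than two orders of magnitude.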

What does this mean? Even though computers are more efficient than ever, if the computing power needed doubles every 3.4 months, the energy required will keep climbing as well. AI and ML are putting serious pressure on the world's power demand. Needless to say, this is not good for the climate (nor, of course, for the budgets of the companies that want to use such tools).

Can something be done? Yes: by relying less on algorithms and more on data. The goal of these new ML approaches is to work well even in the absence of good training data. The good news is that Bitext's Multilingual Synthetic Data technology already solves this data scarcity. How does this solution work? By having machines create correct, realistic, high-quality training data themselves, so that your ML algorithms won't need so much computing power to be effective. On top of it all, they will be even cheaper for you!
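As a rough illustration of the general idea (a minimal sketch, not Bitext's actual technology, whose internals are not described here), the snippet below generates synthetic training utterances for an intent classifier by expanding hand-written templates. The intents, templates, and slot values are all hypothetical examples:

```python
import itertools
import random

# Illustrative template-based synthetic data generator for an intent
# classifier. The templates, slots, and intent labels are made-up
# examples, not Bitext's actual grammar or vocabulary.

TEMPLATES = {
    "check_balance": [
        "what is the balance of my {account} account",
        "show me how much money I have in {account}",
    ],
    "transfer_money": [
        "send {amount} to my {account} account",
        "transfer {amount} into {account}",
    ],
}

SLOT_VALUES = {
    "account": ["savings", "checking", "joint"],
    "amount": ["$50", "$200", "1000 euros"],
}

def expand(template: str) -> list[str]:
    """Fill every combination of slot values into one template."""
    slots = [name for name in SLOT_VALUES if "{" + name + "}" in template]
    combos = itertools.product(*(SLOT_VALUES[s] for s in slots))
    return [template.format(**dict(zip(slots, c))) for c in combos]

def generate_dataset() -> list[tuple[str, str]]:
    """Return labeled (utterance, intent) pairs expanded from all templates."""
    data = []
    for intent, templates in TEMPLATES.items():
        for template in templates:
            data.extend((u, intent) for u in expand(template))
    return data

random.seed(0)
dataset = generate_dataset()
random.shuffle(dataset)
for utterance, intent in dataset[:5]:
    print(f"{intent:>15}: {utterance}")
```

Because every utterance is generated from a known template, the labels are correct by construction, and the dataset can be scaled up simply by adding templates or slot values rather than by collecting and annotating real user data.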

For more information, visit our website and follow Bitext on Twitter or LinkedIn.