An Artificial Intelligence (AI) system is designed to analyse and interpret data, and then to solve the problem or address the issue based on those interpretations. With machine learning, the computer learns once how to act or react to a given situation and knows how to act the same way in the future.
As mentioned by Thierry Breton, Chairman and CEO of Atos SE, France, human intelligence alone is no longer enough to understand and explore the deluge of data we are witnessing today. Every eighteen months, the volume of data we produce doubles.
Data is the necessary fuel of artificial intelligence.
In recent years, the explosion in data volume, due in part to mobile and IoT, has provided the raw material for AI to develop. Remember: what would AI and ML do without data? What would the machines learn? The data is where the patterns are.
The data drives the result set. Of course, with digital disruption, we are usually able to acquire the datasets needed for AI algorithms. When they are not available, we try techniques such as transfer learning.
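To make the transfer-learning idea concrete, here is a minimal toy sketch (hypothetical data, not a production recipe): a parameter learned on a data-rich "source" task is frozen and reused, and only the remaining parameter is fitted on a data-poor "target" task.

```python
# Toy transfer learning: reuse the slope learned on a large source
# dataset, then fit only the bias on a tiny target dataset.

def fit_slope(xs, ys):
    # Least-squares slope through the origin: w = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fit_bias(xs, ys, w):
    # With the slope w frozen, the best bias is the mean residual.
    return sum(y - w * x for x, y in zip(xs, ys)) / len(xs)

# Source task: plenty of data following y = 2x
src_x = [float(i) for i in range(1, 101)]
src_y = [2.0 * x for x in src_x]
w = fit_slope(src_x, src_y)           # learned slope: 2.0

# Target task: only two samples, following y = 2x + 1
tgt_x, tgt_y = [1.0, 3.0], [3.0, 7.0]
b = fit_bias(tgt_x, tgt_y, w)         # fitted bias: 1.0

print(w * 5.0 + b)                    # 11.0 — correct on the target task
```

Two target samples would not be enough to fit a reliable model from scratch, but they are plenty once the slope has been transferred — which is exactly the appeal of the technique when target data is scarce.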
Sometimes we are continuously flooded with mammoth volumes of data. Does that mean the AI model will work perfectly? Yes and no. In my experience, "adding more data" will not magically improve the performance of an AI model. The focus should be on "adding more information".
The distinction between "adding data" and "adding information" is crucial. By blindly adding more and more data, we risk adding data that contains misinformation, which can degrade the performance of our models and introduce bias. With abundant access to data, as well as the computing power to process it, this becomes increasingly important to consider.
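The data-versus-information distinction can be shown with a toy example (hypothetical numbers): a simple nearest-centroid classifier is "trained" on a clean sample, on the same sample with duplicates added (more data, no new information), and on the sample with mislabeled points added (more data, misinformation).

```python
# Toy nearest-centroid classifier over one feature.

def centroid(points):
    return sum(points) / len(points)

def predict(x, centroid_a, centroid_b):
    return "A" if abs(x - centroid_a) < abs(x - centroid_b) else "B"

clean_a = [1.0, 2.0, 3.0]           # class A samples
clean_b = [8.0, 9.0, 10.0]          # class B samples

dup_a = clean_a * 10                # duplicated: 10x the data, same centroid
noisy_a = clean_a + [9.0, 10.0]     # two class-B points mislabeled as A

print(centroid(clean_a))   # 2.0
print(centroid(dup_a))     # 2.0  — more data, identical model
print(centroid(noisy_a))   # 5.0  — more data, worse model

# A point clearly nearer to class B:
print(predict(6.0, centroid(clean_a), centroid(clean_b)))  # "B" (correct)
print(predict(6.0, centroid(noisy_a), centroid(clean_b)))  # "A" (wrong)
```

Duplicating the clean data tenfold changes nothing, while two mislabeled points drag the centroid and flip a prediction: volume alone is not the goal.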
This growth is the fuel of the AI revolution – but three conditions must be met for companies to fully embrace it.
- They need to identify the situations and use cases where AI makes the most sense and brings the greatest value.
- They need to have access to the computing power that can process and explore these massive amounts of data.
- They need to be sure that they can manage their data securely.
With experience executing AI projects, I believe the topmost bottlenecks holding back AI are:
- Company culture that does not recognize the need for AI
- Lack of data and data quality issues (choose quality over quantity). Data scientists often find themselves in situations where a large amount of the data they collect is of terrible quality.
- Infrastructure challenges – considering the computational horsepower required to drive the AI and ML models and their supporting IT, it can and does require a significant technology underpinning to make it work. You can’t just run them on a small execution server. The larger the data sets get, the more complex the use cases get, the more compute is required — especially if you want to do AI in real time.
- Difficulties in finding the business use cases that could benefit the organization or its clients.
- Data privacy – data is core to AI and ML, and wherever that data lives, data sovereignty also has to be dealt with.
- Bias – if the data on which AI is trained carries bias, the AI behaves in a similarly biased way.
- Skills – the supply of data scientists, data crunchers, and AI/ML developers still has a long way to go.
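On the data quality point above, a few cheap checks go a long way before any modelling starts. Below is a hedged sketch (hypothetical records and field names) that flags two of the most common quality issues: missing values and duplicate rows.

```python
# Basic data-quality checks on a list of row dicts.

records = [
    {"id": 1, "age": 34,   "income": 52000},
    {"id": 2, "age": None, "income": 61000},   # missing value
    {"id": 3, "age": 29,   "income": None},    # missing value
    {"id": 1, "age": 34,   "income": 52000},   # exact duplicate of row 1
]

def missing_counts(rows):
    """Count None values per field."""
    counts = {}
    for row in rows:
        for field, value in row.items():
            if value is None:
                counts[field] = counts.get(field, 0) + 1
    return counts

def duplicate_rows(rows):
    """Return rows that appear more than once."""
    seen, dups = set(), []
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            dups.append(row)
        seen.add(key)
    return dups

print(missing_counts(records))       # {'age': 1, 'income': 1}
print(len(duplicate_rows(records)))  # 1
```

In practice a real pipeline would add range checks, type checks, and referential-integrity checks, but even these two catch a surprising share of the "terrible quality" data that stalls projects.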