Evolution of Artificial Intelligence
It is important to understand that technology is only as good as its applications. Machine learning and AI have been around for decades, used in advanced defence projects and academic research. The last few years have changed the landscape, primarily because hardware has caught up with the high-volume computation that AI algorithms demand. From a historical perspective, artificial intelligence (AI) was born in the 1950s, when the English polymath Alan Turing proposed a test of a machine’s ability to mimic human cognitive functions, including perception, reasoning, learning, and problem solving. AI grew with the rise of machine learning (ML), in which systems absorb and “learn” from data, then use that knowledge to make better predictions and decisions over time. Around 2010, the advent of deep neural networks ushered in the deep learning (DL) era.
All ML and DL solutions involve two steps: training and inference. Take the software in autonomous cars. To help systems detect obstacles in the road, developers present images to the neural net—for instance, those of dogs or pedestrians—and test its recognition. Network parameters are then refined until the neural net displays high accuracy in visual detection. After the network has viewed millions of images and is fully trained, it can recognize dogs and pedestrians during the inference phase.
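The two phases can be sketched in miniature. The toy one-feature logistic classifier below is a hypothetical stand-in for a real neural net (the function names, data, and learning rate are illustrative assumptions, not anything from the original text): training repeatedly refines the parameters against labeled examples, and inference then applies the frozen parameters to new input.

```python
import math

def train(samples, labels, epochs=200, lr=0.5):
    """Training phase: refine parameters (w, b) until predictions fit the labels."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
            grad = p - y                          # error signal
            w -= lr * grad * x                    # parameter refinement
            b -= lr * grad
    return w, b

def infer(w, b, x):
    """Inference phase: apply the already-trained parameters to a new input."""
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

# Toy data: "obstacle" (1) vs. "no obstacle" (0), encoded as one feature.
xs = [0.1, 0.3, 0.7, 0.9]
ys = [0, 0, 1, 1]
w, b = train(xs, ys)
print(infer(w, b, 0.8))  # inference on an input the model never saw in training
```

The point of the split is that training is the expensive, data-hungry step, while inference reuses the learned parameters cheaply—the asymmetry that drives the workload shifts discussed next.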
Training now accounts for about 95 percent of AI-related workloads in the public cloud, because most AI applications are still relatively immature and require huge amounts of data to refine them. As AI models mature, inference will gain share in the cloud: DL inference could account for 30 to 40 percent of public-cloud workloads over the next three to five years, with training dropping to 60 to 70 percent. Inference will also gain share with the rise of edge computing (computation that takes place on the device itself), as innovation enables low-power, high-performance inference chips.