You’d be forgiven for assuming that the history of machine learning, artificial intelligence and smart computers is all very recent. When we think of these technologies, we tend to imagine something very contemporary, something that has only been developed in the last decade. But you might be surprised to know that the history of machine learning goes as far back as the 1940s.
And whilst it is impossible to pinpoint when machine learning was invented or who invented it - it is the combined work of many individuals, each contributing separate inventions, algorithms, or frameworks - we can look back at the key moments in its development.
What is machine learning?
Machine learning is an application of AI that includes algorithms that parse data, learn from that data, and then apply what they’ve learned to make informed decisions.
An easy example of machine learning in action is an on-demand music streaming service like Spotify.
For Spotify to make a decision about which new songs or artists to recommend to you, machine learning algorithms associate your preferences with other listeners who have a similar musical taste. This technique, which is often simply touted as AI, is used in many services that offer automated recommendations.
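To make that idea concrete, here is a minimal sketch of how such a recommendation could work, matching you with the listener whose taste is closest to yours. The listeners, songs and play counts are invented for illustration; this is not Spotify’s actual system, which combines far more signals.

```python
# A minimal, illustrative sketch of preference-based recommendation.
# The listeners, songs, and play counts below are made up for this example.
import numpy as np

songs = ["song_a", "song_b", "song_c", "song_d"]
play_counts = {
    "you":   np.array([5, 0, 3, 0]),
    "alice": np.array([4, 1, 3, 2]),
    "bob":   np.array([0, 5, 0, 4]),
}

def cosine_similarity(u, v):
    # How similar two listeners' play-count profiles are (1.0 = identical taste).
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Find the listener whose taste is closest to yours...
others = {name: vec for name, vec in play_counts.items() if name != "you"}
most_similar = max(others, key=lambda name: cosine_similarity(play_counts["you"], others[name]))

# ...and recommend the songs they play that you have not listened to yet.
recommendations = [song for song, yours, theirs
                   in zip(songs, play_counts["you"], others[most_similar])
                   if yours == 0 and theirs > 0]
print(most_similar, recommendations)  # alice ['song_b', 'song_d']
```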
Machine learning fuels all sorts of tasks across multiple industries, from data security firms that hunt down malware to finance professionals who want alerts for favorable trades. The algorithms are programmed to keep learning in a way that resembles a virtual personal assistant, something they do quite well.
The early days
The history of machine learning starts in 1943, when Walter Pitts and Warren McCulloch presented the first mathematical model of neural networks in their scientific paper "A logical calculus of the ideas immanent in nervous activity".
Then, in 1949, Donald Hebb published the book The Organization of Behavior. It set out theories on how behavior relates to neural networks and brain activity, and it would go on to become one of the monumental pillars of machine learning development.
In 1950, Alan Turing created the Turing Test to determine whether a computer has real intelligence. To pass the test, a computer must be able to fool a human into believing it is also human. He presented the principle in his paper "Computing Machinery and Intelligence", written while working at the University of Manchester, which opens with the words: "I propose to consider the question, 'Can machines think?'"
Playing games and plotting routes
The first ever computer learning program was written in 1952 by Arthur Samuel. The program played the game of checkers, and the IBM computer it ran on improved the more it played, studying which moves made up winning strategies and incorporating those moves into its program.
Then in 1957 Frank Rosenblatt designed the first neural network for computers - the perceptron - which simulated the thought processes of the human brain.
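As a rough sketch of the idea behind the perceptron, here is a tiny modern implementation of its learning rule, trained on a toy AND-gate data set. The data, learning rate and number of passes are assumptions for illustration, not Rosenblatt’s original setup.

```python
# A toy perceptron in the spirit of Rosenblatt's design:
# the weights are nudged whenever the prediction is wrong.
# The AND-gate data set is an illustrative assumption.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND labels

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for _ in range(10):                      # a few passes over the data
    for xi, target in zip(X, y):
        prediction = int(weights @ xi + bias > 0)
        error = target - prediction      # -1, 0, or 1
        weights += learning_rate * error * xi
        bias += learning_rate * error

print([int(weights @ xi + bias > 0) for xi in X])  # [0, 0, 0, 1]
```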
The next significant step forward in ML wasn’t until 1967 when the “nearest neighbor” algorithm was written, allowing computers to begin using very basic pattern recognition. This could be used to map a route for traveling salesmen, starting at a random city but ensuring they visit all cities during a short tour.
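A minimal sketch of that route-mapping idea, the greedy nearest neighbor heuristic, could look like the following. The city names and coordinates are invented for illustration.

```python
# A small sketch of the nearest neighbor route heuristic described above.
# The cities and coordinates are made up for this example.
import math

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4)}

def nearest_neighbor_tour(start):
    tour, remaining = [start], set(cities) - {start}
    while remaining:
        # Always hop to the closest city not yet visited.
        current = cities[tour[-1]]
        nearest = min(remaining, key=lambda c: math.dist(current, cities[c]))
        tour.append(nearest)
        remaining.remove(nearest)
    return tour

print(nearest_neighbor_tour("A"))  # ['A', 'C', 'D', 'B']
```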
Twelve years later, in 1979, students at Stanford University invented the ‘Stanford Cart’, which could navigate obstacles in a room on its own. And in 1981, Gerald Dejong introduced the concept of Explanation Based Learning (EBL), in which a computer analyzes training data and creates a general rule it can follow by discarding unimportant data.
Big steps forward
In the 1990s work on machine learning shifted from a knowledge-driven approach to a data-driven approach. Scientists began creating programs for computers to analyze large amounts of data and draw conclusions — or “learn” — from the results.
And in 1997, IBM’s Deep Blue shocked the world by beating world chess champion Garry Kasparov.
The term “deep learning” was popularized in 2006 by Geoffrey Hinton to describe new algorithms that let computers “see” and distinguish objects and text in images and videos.
Four years later, in 2010, Microsoft revealed that its Kinect technology could track 20 human features at a rate of 30 times per second, allowing people to interact with the computer via movements and gestures. The following year, IBM’s Watson beat its human competitors at Jeopardy.
Google Brain was developed in 2011, and its deep neural network could learn to discover and categorize objects on its own. The following year, the tech giant’s X Lab developed a machine learning algorithm able to autonomously browse YouTube videos and identify those that contain cats.
In 2014, Facebook developed DeepFace, a software algorithm able to recognize or verify individuals in photos at the same level as humans can.
2015 - Present day
Amazon launched its own machine learning platform in 2015. Microsoft also created the Distributed Machine Learning Toolkit, which enabled the efficient distribution of machine learning problems across multiple computers.
Then more than 3,000 AI and robotics researchers, endorsed by Stephen Hawking, Elon Musk and Steve Wozniak (among many others), signed an open letter warning of the danger of autonomous weapons that select and engage targets without human intervention.
In 2016, Google’s artificial intelligence algorithm beat a professional player at the Chinese board game Go, which is considered the world’s most complex board game and many times harder than chess. The AlphaGo algorithm, developed by Google DeepMind, won five games out of five in the competition.
Waymo started testing autonomous cars in the US in 2017, with backup drivers only in the back seat. Later the same year it introduced completely autonomous taxis in the city of Phoenix.
In 2020, while the rest of the world was in the grip of the pandemic, OpenAI announced GPT-3, a ground-breaking natural language processing model with a remarkable ability to generate human-like text when given a prompt. At its release, GPT-3 was the largest and most advanced language model in the world, using 175 billion parameters and Microsoft Azure’s AI supercomputer for training.
The future of machine learning
Improvements in unsupervised learning algorithms
In the future, we’ll see more effort dedicated to improving unsupervised machine learning algorithms, which make predictions from unlabeled data sets. This capability is going to become increasingly important, as it allows algorithms to discover interesting hidden patterns or groupings within data sets and helps businesses understand their market or customers better.
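As a small illustration of what unsupervised learning means in practice, the sketch below groups unlabeled, made-up "customer" data into clusters with k-means (assuming scikit-learn is available). No labels are provided; the structure is discovered from the data alone.

```python
# A small sketch of unsupervised learning: grouping unlabeled points into clusters.
# The two-dimensional "customer" data is invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row could represent, say, a customer's visits per month and average spend.
data = np.array([
    [1, 20], [2, 22], [1, 19],      # one natural group
    [9, 80], [10, 85], [8, 78],     # another natural group
])

# No labels are given; k-means discovers the groupings on its own.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(labels)  # e.g. [0 0 0 1 1 1] (cluster ids may be swapped)
```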
The rise of quantum computing
One of the major trends in machine learning lies in quantum computing, which could transform the future of the field. Quantum computers promise faster processing of data, enhancing an algorithm’s ability to analyze and draw meaningful insights from data sets.
Focus on cognitive services
Software applications will become more interactive and intelligent thanks to cognitive services driven by machine learning. Features such as visual recognition, speech detection, and speech understanding will be easier to implement. We’re going to see more intelligent applications using cognitive services appear on the market.