Machine learning (ML): All there is to know
The human experience has long been shaped by how we live and work with machines. Now more than ever, our increasingly digital world is rapidly redefining the way we do our jobs, interact with each other and even perceive the world. The overlap between what humans can do and what computers are capable of is growing at an extraordinary pace.
Even learning new skills – once perceived as something reserved for humans and other intelligent sentient creatures – is now entering the realm of computer science. This is thanks to the recent push in artificial intelligence (AI), the development of computer software that emulates human thought and performs complex tasks. Machine learning (ML), a subfield of AI, has been identified as a key component in the world of tomorrow, but what does this mean and how does it affect us?
What is ML?
Establishing a clear machine learning definition can be challenging. Machine learning (ML) is a type of artificial intelligence that allows machines to learn from data without being explicitly programmed. It does this by optimizing model parameters (i.e. internal variables) through calculations, such that the model’s behaviour reflects the data or experience. The learning algorithm then continuously updates the parameter values as learning progresses, enabling the ML model to make predictions or decisions based on data.
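To make this idea concrete, here is a minimal sketch of parameter optimization: a tiny linear model learning a slope and intercept by gradient descent. It uses plain NumPy and invented example data; real ML libraries automate these update steps.

```python
import numpy as np

# Hypothetical training data: inputs x and noisy targets following y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)

# Model parameters: the "internal variables" the learner optimizes
w, b = 0.0, 0.0
lr = 0.01  # learning rate

for step in range(1000):
    y_pred = w * x + b                 # the model's current predictions
    error = y_pred - y
    # Gradients of mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Update the parameters so the model better reflects the data
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f} (true values: 2.0, 1.0)")
```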
The applications of machine learning are wide-ranging, spanning industries such as healthcare, finance, marketing, transportation, and more. Machine learning models are already being used for image recognition, natural language processing, fraud detection, recommendation systems, autonomous vehicles and personalized medicine.
Overall, machine learning plays a crucial role in enabling computers to learn from experience and data to improve performance on specific tasks without being explicitly programmed. It has the potential to revolutionize various industries by automating complex processes and making intelligent predictions or decisions by “digesting” vast amounts of information.
How does machine learning compare to deep learning and neural networks?
Deep learning is a subset of machine learning focused on training artificial neural networks with multiple layers. Inspired by the structure and function of the human brain, these networks consist of interconnected nodes (neurons) that transmit signals, much as our own neurons do.
These complex algorithms excel at image and speech recognition, natural language processing and many other fields, by automatically extracting features from raw data through multiple layers of abstraction. Deep learning can handle datasets on a massive scale, with high-dimensional inputs. To do so, it needs a significant amount of computational power and extensive training.
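As a rough illustration, the sketch below passes one input through a toy two-layer network using plain NumPy. The weights here are random rather than learned, so it shows only the structure: layers of interconnected nodes, each transforming the signal it receives before handing it on.

```python
import numpy as np

def relu(z):
    """Simple activation: a neuron 'fires' only for positive input."""
    return np.maximum(0, z)

# A toy network: 3 inputs -> 4 hidden neurons -> 2 outputs.
# In real deep learning these weights are learned; here they are random.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1 weights and biases
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # layer 2 weights and biases

x = np.array([0.5, -1.2, 3.0])   # one input example

hidden = relu(W1 @ x + b1)       # first layer of abstraction
output = W2 @ hidden + b2        # final layer produces the prediction
print(output)
```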
How machine learning works
- Collecting data: The first step in machine learning is gathering relevant data, which may come from sources such as databases, sensors or the Internet.
- Preprocessing data: Once the data is collected, it needs to be preprocessed to ensure its quality and suitability for analysis.
- Training the model: The next step is to train a machine learning model – an algorithm or mathematical representation that learns to make predictions or decisions from input data.
- Feature selection and engineering: The most relevant features – those with a significant impact on the model’s performance – are then selected or engineered from the input data.
- Evaluating and optimizing the model: Once a model is trained, it needs to be evaluated to assess its performance and determine whether it meets the desired criteria.
- Deployment and monitoring: After successful training and evaluation, the model can be deployed in real-world applications and monitored over time (these steps are sketched in code after this list).
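Taken together, these steps form one workflow. The following minimal sketch uses scikit-learn (one common library among many) and its bundled iris dataset to stand in for real collected data, walking through preprocessing, training, evaluation and prediction:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Collect data: a built-in dataset stands in for a database or sensor feed.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# 2. Preprocess and 3. train: scaling and the classifier chained in one pipeline.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# 4. Evaluate: check performance on data the model has never seen.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 5. Deploy: the trained model can now answer queries on new measurements.
print("prediction:", model.predict([[5.1, 3.5, 1.4, 0.2]]))
```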
Machine learning models
Machine learning builds on existing computer science, relying heavily on statistics, probability theory and optimization techniques. There are three main types of machine learning models:
Supervised learning
Used to predict outcomes or classify data, supervised machine learning is based on labelled training datasets. As data is fed to the ML model, the model adjusts its weights until it fits the training data appropriately, with techniques such as cross-validation used to check that the fit generalizes. This approach supports applications like face recognition, object detection or quality control.
Unsupervised learning
As opposed to supervised learning, unsupervised learning is based on unlabelled datasets. The objective of unsupervised learning is to teach ML models to detect hidden patterns or structures without human supervision. Businesses can therefore use unsupervised learning to support customer segmentation, cross-selling strategies or data analysis.
Reinforcement learning
While similar to supervised learning, reinforcement learning relies on trial and error. Without labelled training datasets, reinforcement learning trains ML models to find the best course of action by rewarding a series of successful outcomes.
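To contrast the first two approaches in code, the minimal sketch below trains a supervised classifier on labelled data, then an unsupervised clustering model on the same data with the labels withheld. It assumes scikit-learn and its bundled iris dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the labels y guide the learning.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised prediction:", clf.predict(X[:1]))

# Unsupervised: only X is given; the model must find structure on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:10])
```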
Differences between a machine learning model and a machine learning algorithm
In essence, a machine learning model is an end product. It is the representation of what happens when a machine learning algorithm is applied to a dataset. Its purpose is to generalize beyond the training data rather than simply memorize the examples it was trained on. In other words, the model is a tool that can be used to do things like predict outcomes and identify patterns.
In contrast, the machine learning algorithm is the technique used to train a machine learning model. There exist a number of algorithms – linear regression, support vector machines, deep neural networks – and each has its own formulations and complexities. However, the end goal of all of them is to reduce the margin of error between model predictions and the target output of training datasets.
In an image classification system, for instance, the machine learning model is the mathematical function that identifies whether an image contains a cat or a dog, having learned patterns from the training data. The machine learning algorithm is the method used to train this model, optimizing its parameters to improve classification accuracy. Once trained, the model can be used to classify new unseen images as containing either a cat or a dog.
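In code, the distinction is easy to see: the algorithm is the training procedure you pick and configure, while the model is the fitted object that procedure produces. A minimal sketch, using scikit-learn’s bundled handwritten-digits dataset in place of cat and dog images:

```python
from sklearn.datasets import load_digits
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# The *algorithm*: support vector classification, with its own settings.
algorithm = SVC(kernel="rbf", gamma=0.001)

# Applying the algorithm to a dataset produces the *model*.
model = algorithm.fit(X[:-10], y[:-10])

# The model generalizes to images it never saw during training.
print("predicted digits:", model.predict(X[-10:]))
print("actual digits:   ", y[-10:])
```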
What are the advantages of machine learning?
Machine learning offers a wide range of benefits across various industries and applications. With the ability to process vast amounts of data in real time, it can identify inefficiencies in processes, optimize workflows and improve overall productivity.
Here are some more specific advantages of machine learning:
- Automation of repetitive tasks, saving time and resources: This allows humans to focus on more complex and creative aspects of their work.
- Personalization and recommendations: By analysing user preferences and behaviour, machine learning powers personalized experiences. Platforms like Netflix, Amazon and Spotify use it to suggest content based on individual user patterns.
- Data analysis and pattern recognition: Machine learning excels at analysing large datasets to identify patterns and trends that may not be apparent through traditional methods. This can lead to valuable insights and informed decision making.
- Improved decision making: By providing accurate and data-driven insights, machine learning aids more informed decision making across various domains, from marketing strategies to supply chain optimization.
- Predictive analytics: Machine learning algorithms can make predictions based on historical data, anticipating future trends, customer behaviour and market dynamics. This is particularly useful in financial forecasting, demand prediction and risk management.
- Enhanced customer experiences: Machine learning enables the chatbots and virtual assistants that interact with users in a natural language format, providing quicker and more personalized responses to enhance customer support and engagement.
- Fraud detection and cybersecurity: Machine learning algorithms can detect unusual patterns and behaviours in data, aiding fraud detection in financial transactions and enhancing cybersecurity by identifying potential threats.
- Medical diagnosis and healthcare: Machine learning helps predict patient outcomes and personalize treatment plans. It can analyse medical images, such as X-rays and MRIs, to assist in the detection of diseases.
- Optimized resource allocation: Machine learning predicts demand, manages inventory and streamlines supply chain processes. This is crucial for industries dealing with perishable goods or fluctuating market demands.
- Efficient recruitment and HR processes: Machine learning algorithms can speed up the recruitment process by analysing resumés, identifying suitable candidates and predicting employee performance.
Machine learning: promises and challenges
Machine learning in artificial intelligence opens a realm of possibilities for businesses and society. As well as the numerous benefits listed above, it is part of an AI landscape which promises world-changing innovation in the field of climate change resilience and mitigation, powering the acceleration of solutions to some of the planet’s most serious problems.
However, this comes with risks. It’s essential to address ethical considerations, data privacy and potential biases to ensure responsible and fair use of these technologies. Additionally, the effectiveness of machine learning applications depends on the quality of the data and the appropriateness of the chosen algorithms for specific tasks.
This is where International Standards play a critical role in providing clear guidelines and regulations to prevent misuse and protect users. ISO, in collaboration with the International Electrotechnical Commission (IEC), has published a number of standards related to machine learning through its dedicated group of experts on artificial intelligence (ISO/IEC JTC 1/SC 42). Its most recent standard on the subject is ISO/IEC 23053, which provides a framework for AI systems using machine learning.
History of machine learning
To fully answer the question “what is machine learning?”, we must retrace our steps. ML can trace its origins back to the 1950s. From its very first iterations to the rapidly evolving technology we know today, ML has been shaped – and continues to be shaped – by decades of breakthroughs and setbacks.
Humble beginnings (1950s-1960s)
The very first step in artificial intelligence and machine learning was taken by Arthur Samuel in the early 1950s. His work demonstrated that computers were capable of learning when he taught a program to play checkers. Rather than being explicitly designed to carry out specific commands, the program could learn from its past moves and mistakes to improve its performance. Samuel would later coin the term “machine learning” and define it as “the field of study that gives computers the ability to learn without being explicitly programmed”.
A few years later, in 1958, Frank Rosenblatt introduced the Perceptron, a simplified model of an artificial neuron. This algorithm could learn to recognize patterns in data and was an early form of artificial neural network. Evgenii Lionudov and Aleksey Lyapunov would complement these innovations in the 1960s through their work on backpropagation algorithms and the theory of machine learning. By the 1980s, there existed an algorithm – backpropagation – capable of efficiently training multi-layered neural networks.
The lost years (1960s-1970s)
Marvin Minsky and Seymour Papert’s Perceptrons, published in 1969, shone a bright light on the limitations of neural networks. Combined with limited computing power, a lack of available data and other factors, this influential book inadvertently contributed to the first “AI winter”, marked by minimal funding and low research interest.
The renaissance (1980s-1990s)
John Hopfield would put an end to this “AI winter” with the introduction of his recurrent neural network – the Hopfield network – in 1982. This encouraged David Rumelhart, Geoffrey Hinton, Ronald Williams and others to revive the study of backpropagation and multi-layered neural networks. The year 1989 saw the first real breakthrough in the field of computer vision through Yann LeCun’s work on convolutional neural networks (CNNs).
The introduction of support vector machines (SVMs) by Vladimir Vapnik in 1995 and the development of long short-term memory (LSTM) networks by Sepp Hochreiter and Jürgen Schmidhuber in 1997 brought even more momentum to this burgeoning field.
The breakthroughs (2010s)
Machine learning scored a decisive victory over traditional approaches in 2012, when AlexNet, a convolutional neural network, outperformed established computer vision methods in that year’s ImageNet competition.
From there, a series of landmark breakthroughs followed. In 2014, Ian Goodfellow’s generative adversarial networks (GANs) would empower researchers to generate realistic synthetic data. In 2016, DeepMind’s AlphaGo system defeated Lee Sedol, one of the world’s top players of the board game Go. And in 2017, transformer models revolutionized natural language processing capabilities.
Recent developments (2010s-present)
Since then, the field has continued to develop deep learning architectures and to expand the applications of machine learning into industries like healthcare, finance and even entertainment. Machine learning has also started to find its way into Internet of Things (IoT) devices and into other fields such as quantum computing, neuroscience and physics.
Amidst all this fast-paced progress, there is today a growing emphasis on considerations surrounding the responsible use of machine learning systems. What’s more, the advancements in unsupervised and self-supervised learning techniques have placed ever more weight on the management of data and how ML models are applied in real-life scenarios.
- ISO/IEC 23053:2022 – Framework for AI systems using machine learning
- ISO/IEC 42001:2023 – AI management systems
- ISO/IEC 23894:2023 – AI – Guidance on risk management
Will machine learning be the future of AI?
The ultimate goal of AI is to design machines that are capable of reasoning, learning and adapting to various domains. This will require advanced capabilities in a variety of AI subfields, and machine learning is a vital part of this.
The future of machine learning, as part of the wider field of AI, is exciting for many and concerning for some. The development of International Standards is crucial if we are to minimize its risks and maximize its many benefits in every part of our lives.