
Approaches
Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system:
- Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
- Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
- Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that's analogous to rewards, which it tries to maximize.
Although each algorithm has advantages and limitations, no single algorithm works for all problems.
Supervised learning
A support-vector machine is a supervised learning model that divides the data into regions separated by a linear boundary. Here, the linear boundary divides the black circles from the white.
Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data is known as training data, and consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimization of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.
Types of supervised-learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email.
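A minimal illustrative sketch of the two types (scikit-learn and the synthetic data are assumptions, not something the text prescribes): a classification model outputs labels from a limited set, while a regression model outputs continuous values.

```python
# Classification vs. regression: a hedged sketch with scikit-learn.
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: outputs restricted to a limited set of values (class labels).
X_cls, y_cls = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X_cls, y_cls)   # learn from (input, label) pairs
print(clf.predict(X_cls[:3]))                  # predicted class labels

# Regression: outputs may take any numerical value within a range.
X_reg, y_reg = make_regression(n_samples=200, n_features=4, random_state=0)
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict(X_reg[:3]))                  # predicted continuous values
```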
Similarity learning
Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
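As a hedged sketch of the idea, the snippet below uses a fixed cosine similarity where similarity learning proper would learn the function from labeled pairs; the embedding vectors are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity function: near 1.0 for similar directions, 0.0 for orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical face-embedding vectors for two images.
emb_a = np.array([0.9, 0.1, 0.3, 0.7])
emb_b = np.array([0.8, 0.2, 0.4, 0.6])
print(cosine_similarity(emb_a, emb_b))  # close to 1.0 -> likely the same identity
```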
Unsupervised learning
Unsupervised learning algorithms find structures in data that has not
been labeled, classified or categorized. Instead of responding to
feedback, unsupervised learning algorithms identify commonalities in the
data and react based on the presence or absence of such commonalities
in each new piece of data. Central applications of unsupervised machine
learning include clustering, dimensionality reduction, and density estimation. Unsupervised learning algorithms have also streamlined the identification of large indel-based haplotypes of a gene of interest from a pan-genome.
Clustering via Large Indel Permuted Slopes (CLIPS) turns the alignment image into a regression problem. The varied slope (b) estimates between each pair of DNA segments enable the identification of segments sharing the same set of indels.
Cluster analysis is the assignment of a set of observations into subsets (called clusters)
so that observations within the same cluster are similar according to
one or more predesignated criteria, while observations drawn from
different clusters are dissimilar. Different clustering techniques make
different assumptions on the structure of the data, often defined by
some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity.
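For illustration, a minimal k-means sketch (the library and parameters are assumptions): the algorithm receives no labels and partitions the observations by proximity to learned centroids.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data drawn from three blobs; no labels are given to the algorithm.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])        # cluster assignment for the first ten points
print(kmeans.cluster_centers_)    # one centroid per cluster
```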
Semi-supervised learning
Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy.
In weakly supervised learning,
the training labels are noisy, limited, or imprecise; however, these
labels are often cheaper to obtain, resulting in larger effective
training sets.
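A small sketch of the semi-supervised setting, using scikit-learn's self-training meta-estimator as one possible approach (the dataset and estimator choice are illustrative): most labels are hidden, yet the few available ones still guide learning.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()
y_partial[30:] = -1   # scikit-learn convention: -1 marks an unlabeled example

# Self-training iteratively pseudo-labels the unlabeled points.
model = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
print((model.predict(X) == y).mean())  # accuracy against the held-back true labels
```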
Reinforcement learning
Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions
in an environment so as to maximize some notion of cumulative reward.
Due to its generality, the field is studied in many other disciplines,
such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In reinforcement learning, the environment is typically represented as a Markov decision process (MDP). Many reinforcement learning algorithms use dynamic programming techniques.
Reinforcement learning algorithms do not assume knowledge of an exact
mathematical model of the MDP and are used when exact models are
infeasible. Reinforcement learning algorithms are used in autonomous
vehicles or in learning to play a game against a human opponent.
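A minimal sketch of tabular Q-learning, one classic reinforcement learning algorithm; the corridor MDP, rewards, and hyperparameters below are invented for illustration.

```python
import numpy as np

# Toy corridor MDP: states 0..4, actions 0 = left, 1 = right,
# reward 1 only on reaching the terminal state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

for _ in range(20000):                      # random exploratory experience
    s = int(rng.integers(0, 4))             # any non-terminal state
    a = int(rng.integers(0, n_actions))
    s_next = max(0, s - 1) if a == 0 else s + 1
    r = 1.0 if s_next == 4 else 0.0
    target = r + (0.0 if s_next == 4 else gamma * Q[s_next].max())
    Q[s, a] += alpha * (target - Q[s, a])   # move Q(s,a) toward the Bellman target

print(Q[:4].argmax(axis=1))                 # learned policy: always "right" -> [1 1 1 1]
```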
Dimensionality reduction
Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the feature
set, also called the "number of features". Most of the dimensionality
reduction techniques can be considered as either feature elimination or extraction. One of the popular methods of dimensionality reduction is principal component analysis
(PCA). PCA involves projecting higher-dimensional data (e.g., 3D) onto a lower-dimensional space (e.g., 2D) spanned by the directions of greatest variance. This reduces the dimension of the data (2D instead of 3D) while preserving as much of the original variation as possible.
The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the area of manifold learning and manifold regularization.
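An illustrative PCA sketch (scikit-learn and the synthetic data are assumptions): 3-D points lying near a 2-D plane are projected onto two principal components with little loss of variance, in line with the manifold hypothesis.

```python
import numpy as np
from sklearn.decomposition import PCA

# 3-D points that actually lie near a 2-D plane (two underlying factors).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 3)) + 0.05 * rng.normal(size=(200, 3))

pca = PCA(n_components=2).fit(X)
X_2d = pca.transform(X)                      # 3-D -> 2-D projection
print(X_2d.shape)                            # (200, 2)
print(pca.explained_variance_ratio_.sum())   # close to 1.0: little variance lost
```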
Other types
Other approaches have been developed which do not fit neatly into this three-fold categorization, and sometimes more than one is used by the same machine learning system; examples include topic modeling and meta-learning.
Self-learning
Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array (CAA). It is learning with no external rewards and no external teacher advice. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion. The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine:
- in situation s perform action a
- receive a consequence situation s'
- compute emotion of being in the consequence situation v(s')
- update crossbar memory w'(a,s) = w(a,s) + v(s')
It is a system with only one input, situation s, and only one output,
action (or behavior) a. There is neither a separate reinforcement input
nor an advice input from the environment. The backpropagated value
(secondary reinforcement) is the emotion toward the consequence
situation. The CAA exists in two environments, one is the behavioral
environment where it behaves, and the other is the genetic environment,
wherefrom it initially and only once receives initial emotions about
situations to be encountered in the behavioral environment. After
receiving the genome (species) vector from the genetic environment, the
CAA learns a goal-seeking behavior, in an environment that contains both
desirable and undesirable situations.
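A loose toy sketch of the routine above; the environment, matrix sizes, and genome values are invented for illustration and do not reproduce the original 1982 formulation in detail.

```python
import numpy as np

n_actions, n_situations = 3, 4
W = np.zeros((n_actions, n_situations))     # crossbar memory w(a, s)
genome = np.array([0.0, 0.0, -1.0, 1.0])    # initial emotions per situation (assumed)

def transition(s, a):                       # hypothetical behavioral environment
    return (s + a + 1) % n_situations

s = 0
for _ in range(50):
    a = int(W[:, s].argmax())               # in situation s perform action a
    s_next = transition(s, a)               # receive consequence situation s'
    v = genome[s_next]                      # emotion v(s') of the consequence
    W[a, s] += v                            # update: w'(a,s) = w(a,s) + v(s')
    s = s_next

print(W)   # entries for actions that lead toward the "desirable" situation grow
```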
Feature learning
Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include principal component analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task.
Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labeled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabeled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorization and various forms of clustering.
Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.
Feature learning is motivated by the fact that machine learning
tasks such as classification often require input that is mathematically
and computationally convenient to process. However, real-world data such
as images, video, and sensory data has not yielded to attempts to
algorithmically define specific features. An alternative is to discover
such features or representations through examination, without relying on
explicit algorithms.
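A brief sketch of feature learning as a pre-processing step, using PCA (one of the classic examples named above) in front of a classifier; the dataset and pipeline are illustrative assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Learn 16 features from raw pixels instead of hand-engineering them.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(PCA(n_components=16), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))   # accuracy on held-out digits
```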
Sparse dictionary learning
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions, and the representation is assumed to be sparse. The method is strongly NP-hard and difficult to solve approximately. A popular heuristic method for sparse dictionary learning is the K-SVD
algorithm. Sparse dictionary learning has been applied in several
contexts. In classification, the problem is to determine the class to
which a previously unseen example belongs. For a dictionary
where each class has already been built, a new example is
associated with the class that is best sparsely represented by the
corresponding dictionary. Sparse dictionary learning has also been
applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.
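An illustrative sketch with scikit-learn's DictionaryLearning (note that its solver is a coordinate-descent method, not K-SVD; the data and parameters are assumptions): each sample is encoded as a sparse combination of learned dictionary atoms.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))               # toy training examples

dico = DictionaryLearning(n_components=12, transform_algorithm="omp",
                          transform_n_nonzero_coefs=3, random_state=0)
codes = dico.fit(X).transform(X)            # at most 3 nonzero coefficients per sample
print((codes != 0).sum(axis=1)[:5])         # sparsity of the first five codes
```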
Anomaly detection
In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions.
In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of activity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.
Three broad categories of anomaly detection techniques exist.
Unsupervised anomaly detection techniques detect anomalies in an
unlabeled test data set under the assumption that the majority of the
instances in the data set are normal, by looking for instances that seem
to fit the least to the remainder of the data set. Supervised anomaly
detection techniques require a data set that has been labeled as
"normal" and "abnormal" and involves training a classifier (the key
difference to many other statistical classification problems is the
inherently unbalanced nature of outlier detection). Semi-supervised
anomaly detection techniques construct a model representing normal
behavior from a given normal training data set and then test the
likelihood of a test instance to be generated by the model.
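A minimal sketch of the unsupervised category, using an isolation forest as one possible detector (the data and contamination level are illustrative assumptions): points that fit the remainder of the data set least are flagged.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Mostly "normal" points plus a few scattered anomalies.
rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 2))
outliers = rng.uniform(-6, 6, size=(5, 2))
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = detector.predict(X)          # +1 = inlier, -1 = flagged anomaly
print(np.where(labels == -1)[0])      # indices judged least like the rest
```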
Robot learning
Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning (e.g. MAML).
Association rules
Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".
Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński
and Arun Swami introduced association rules for discovering
regularities between products in large-scale transaction data recorded
by point-of-sale (POS) systems in supermarkets. For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a
customer buys onions and potatoes together, they are likely to also buy
hamburger meat. Such information can be used as the basis for decisions
about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
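A small sketch of the two classic "interestingness" measures, support and confidence, computed for the onions-and-potatoes rule on an invented set of transactions.

```python
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent, consequent = {"onions", "potatoes"}, {"burger"}
confidence = support(antecedent | consequent) / support(antecedent)
print(support(antecedent | consequent), confidence)  # 0.5 and ~0.67
```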
Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.
Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs.
Inductive logic programming is particularly useful in bioinformatics and natural language processing. Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting.
Shapiro built their first implementation (Model Inference System) in
1981: a Prolog program that inductively inferred logic programs from
positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.
Models
Performing machine learning can involve creating a model,
which is trained on some training data and then can process additional
data to make predictions. Various types of models have been used and
researched for machine learning systems.
Artificial neural networks
An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain. Here, each circular node represents an artificial neuron and an arrow represents a connection from the output of one artificial neuron to the input of another.
Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules.
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
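A minimal forward-pass sketch of the description above (the weights are arbitrary, untrained values chosen for illustration): each neuron emits a non-linear function of the weighted sum of its inputs, and signals travel from the input layer toward the output layer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # non-linear activation

x = np.array([0.5, -1.0, 2.0])             # input layer (3 signals)
W1 = np.array([[0.2, -0.4, 0.1],
               [0.7,  0.3, -0.6]])          # edge weights into 2 hidden neurons
W2 = np.array([[1.0, -1.0]])                # edge weights into 1 output neuron

hidden = sigmoid(W1 @ x)                    # signals travel input -> hidden
output = sigmoid(W2 @ hidden)               # hidden -> output layer
print(output)
```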
The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.
Deep learning
Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.
Decision trees
A decision tree showing survival probability of passengers on the Titanic
Decision tree learning uses a decision tree as a predictive model
to go from observations about an item (represented in the branches) to
conclusions about the item's target value (represented in the leaves).
It is one of the predictive modeling approaches used in statistics, data
mining, and machine learning. Tree models where the target variable can
take a discrete set of values are called classification trees; in these
tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers)
are called regression trees. In decision analysis, a decision tree can
be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making.
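An illustrative scikit-learn sketch (the dataset and depth are assumptions): the printed tree shows branches as feature tests and leaves as class conclusions.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree))     # branches = feature tests, leaves = class labels
print(tree.predict(X[:3]))   # conclusions for the first three observations
```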
Support-vector machines
Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning
methods used for classification and regression. Given a set of training
examples, each marked as belonging to one of two categories, an SVM
training algorithm builds a model that predicts whether a new example
falls into one category or the other. An SVM model is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling
exist to use SVM in a probabilistic classification setting. In addition
to performing linear classification, SVMs can efficiently perform a
non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
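A brief sketch of the kernel trick's effect (the dataset and kernels are illustrative assumptions): on concentric circles, a linear SVM has no separating line to find, while an RBF-kernel SVM separates the classes.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two classes that are not linearly separable in the input space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)   # kernel trick: implicit high-dimensional map
print(linear.score(X, y))           # poor: around chance level
print(rbf.score(X, y))              # near 1.0 with a non-linear boundary
```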
Regression analysis
Illustration of linear regression on a data set
Regression analysis encompasses a large variety of statistical
methods to estimate the relationship between input variables and their
associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space.
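A minimal sketch contrasting ordinary least squares with ridge regression (the synthetic data and regularization strength are assumptions).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 1, size=50)   # noisy line y = 3x + 2

ols = LinearRegression().fit(X, y)        # ordinary least squares
ridge = Ridge(alpha=1.0).fit(X, y)        # adds an L2 penalty on coefficients
print(ols.coef_, ols.intercept_)          # close to 3 and 2
print(ridge.coef_, ridge.intercept_)      # slightly shrunk coefficient
```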
Bayesian networks
A simple Bayesian network. Rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet.
A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph
(DAG). For example, a Bayesian network could represent the
probabilistic relationships between diseases and symptoms. Given
symptoms, the network can be used to compute the probabilities of the
presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
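A sketch of exact inference by enumeration in the sprinkler network described above; the conditional probability values are illustrative assumptions.

```python
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(Sprinkler | Rain=True)
               False: {True: 0.4, False: 0.6}}    # P(Sprinkler | Rain=False)
P_wet = {(True, True): 0.99, (True, False): 0.9,  # P(Wet=True | Sprinkler, Rain)
         (False, True): 0.8, (False, False): 0.0}

def joint(r, s):
    """P(Rain=r, Sprinkler=s, Wet=True) via the chain rule over the DAG."""
    return P_rain[r] * P_sprinkler[r][s] * P_wet[(s, r)]

num = sum(joint(True, s) for s in (True, False))
den = num + sum(joint(False, s) for s in (True, False))
print(num / den)   # P(Rain=True | Wet=True), about 0.36
```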
Gaussian processes
An example of Gaussian Process Regression (prediction) compared with other regression models
A Gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre-defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations.
Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point.
Gaussian processes are popular surrogate models in Bayesian optimization used to do hyperparameter optimization.
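An illustrative Gaussian process regression sketch (scikit-learn and the RBF kernel choice are assumptions): the prediction at a new input comes with an uncertainty estimate derived from the covariances with the observed points.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A handful of observed input-output examples.
X = np.array([[1.0], [3.0], [5.0], [6.0]])
y = np.sin(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
mean, std = gp.predict(np.array([[4.0]]), return_std=True)
print(mean, std)   # predicted output and its uncertainty at an unobserved point
```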
Genetic algorithms
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes
in the hope of finding good solutions to a given problem. In machine
learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
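A toy genetic algorithm sketch (the fitness function, population size, and rates are invented for illustration): selection, crossover, and mutation evolve bit-string genotypes toward higher fitness.

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, 10))            # 20 random bit-string genotypes

for _ in range(30):                                # generations
    fitness = pop.sum(axis=1)                      # fitness: number of 1-bits
    parents = pop[np.argsort(fitness)][-10:]       # selection: keep the fittest half
    cut = rng.integers(1, 10, size=10)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])        # single-point crossover
    mutate = rng.random(children.shape) < 0.05
    children = np.where(mutate, 1 - children, children)      # bit-flip mutation
    pop = np.vstack([parents, children])

print(pop.sum(axis=1).max())   # best fitness, approaches the maximum of 10
```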
Belief functions
The theory of belief functions, also referred to as evidence theory
or Dempster–Shafer theory, is a general framework for reasoning with
uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories.
These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as a pmf-based Bayesian approach would combine probabilities. However, compared to Bayesian approaches, these belief functions come with many caveats regarding how they incorporate ignorance and uncertainty quantification. Belief function approaches implemented within the machine learning domain typically leverage a fusion of various ensemble methods to better handle the learner's decision boundary, low samples, and ambiguous-class issues that standard machine learning approaches tend to have difficulty resolving. However, the computational complexity of these algorithms depends on the number of propositions (classes), and can lead to much higher computation time than other machine learning approaches.
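A small sketch of Dempster's rule of combination (the frame of discernment and mass values are illustrative assumptions); note how mass assigned to the full set expresses ignorance, which a single probability distribution cannot.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions defined over frozensets."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2                # mass lost to contradiction
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two pieces of evidence about the classes {"cat", "dog"}.
m1 = {frozenset({"cat"}): 0.6, frozenset({"cat", "dog"}): 0.4}
m2 = {frozenset({"dog"}): 0.3, frozenset({"cat", "dog"}): 0.7}
print(dempster_combine(m1, m2))
```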
Training models
Typically,
machine learning models require a large quantity of reliable data in
order for the models to make accurate predictions. When training a
machine learning model, machine learning engineers need to target and
collect a large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting
is something to watch out for when training a machine learning model.
Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or on the stated objectives. Algorithmic bias is a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and is notably being integrated within machine learning engineering teams.
Federated learning
Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.
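A minimal FedAvg-style sketch (the model, client data, and schedule are invented for illustration): each client computes a local update on its own data, and the server averages only the weights, never seeing the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):                                  # three devices with private data
    X = rng.normal(size=(20, 3))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, steps=10):
    for _ in range(steps):                          # local gradient descent
        grad = 2 * X.T @ (X @ w - y) / len(y)       # least-squares gradient
        w = w - lr * grad
    return w

global_w = np.zeros(3)
for _ in range(20):                                 # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)            # server averages weights only

print(global_w)   # approaches true_w without centralizing any raw data
```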