II. Background and context
Concepts such as artificial intelligence (AI), machine learning, algorithm, and AI system carry a wide array of meanings across academic, policy, and public discourse. Unhelpfully, these concepts are often used interchangeably. For the sake of clarity, some definitions and distinctions are offered here.
Artificial intelligence refers to the demonstration of intelligence by a machine, wherein intelligence is understood in terms of its expression in humans and animals. As an academic field, artificial intelligence studies “intelligent agents” or “computational intelligence”, understood as systems that perceive their environment and take actions that maximize their chances of achieving their goals. Machine learning can be understood as a specialised type of AI in which the agent, or computer program, improves its performance at some task through experience. Machine learning systems use “prior knowledge together with training data to guide learning.”
In simple terms, machine learning can be thought of as a type of software that learns from a training dataset, wherein labels are created and applied by human labellers according to prior knowledge. A classic example is an image recognition program which is taught to distinguish between classes of objects. In this case the training dataset would consist of a series of pre-labelled images from which the system can derive classification rules to apply to new images or datasets.
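The supervised learning process described above can be sketched in miniature. The following is an illustrative toy example, not an implementation of any system discussed in this report: each "image" is reduced to two hypothetical numeric features, the labels are supplied by hand, and the classification rule derived from the training data is a simple nearest-centroid comparison.

```python
# Toy sketch of supervised learning: a nearest-centroid classifier
# trained on a small, hand-labelled dataset. Feature values and class
# names are invented for illustration.

def train(examples):
    """Derive a classification rule (one centroid per class) from labelled data."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        for i, value in enumerate(features):
            sums[label][i] += value
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(model, features):
    """Apply the learnt rule to a new, unlabelled input."""
    def squared_distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: squared_distance(model[label]))

# Pre-labelled training dataset: (features, label) pairs created by a human labeller.
training_data = [
    ([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"),
    ([0.1, 0.9], "dog"), ([0.2, 0.8], "dog"),
]
model = train(training_data)
print(classify(model, [0.85, 0.15]))  # → cat
```

The derived centroids play the role of the classification rules mentioned above: once learnt, they can be applied to new inputs that were not in the training set.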
Algorithms can be understood as core components of machine learning and artificial intelligence systems: they guide the process of learning and of turning input data into outputs. In mathematical terms, an algorithm can be understood as a mathematical construct with “a finite, abstract, effective, compound control structure, imperatively given, accomplishing a given purpose under given provisions.” For clarity, a simpler definition can be offered: an algorithm is a well-defined sequence of steps that produces an output from a given set of inputs.
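The simpler definition can be illustrated with a classical, fully pre-defined algorithm, where every step is specified in advance by the programmer rather than learnt:

```python
# Euclid's algorithm: a finite, well-defined sequence of steps that
# turns two inputs into one output (their greatest common divisor).
def gcd(a, b):
    while b:                # repeat until the remainder is zero
        a, b = b, a % b     # replace (a, b) with (b, remainder)
    return a

print(gcd(48, 18))  # → 6
```

This stands in contrast to the machine learning algorithms discussed next, where part of the sequence of steps is derived from data.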
A machine learning algorithm can be understood as a type of algorithm in which part of the sequence of steps has been learnt rather than pre-defined. For example, a machine learning algorithm used for classification tasks develops classes that can generalise beyond the training data. The algorithm creates a model to classify new inputs. A machine learning model is the internal representation that the algorithm fits to input data to improve its performance.
Image recognition technologies, for example, can determine what types of objects appear in a picture. The algorithm ‘learns’ by defining rules to determine how new inputs will be classified. The model can be taught to the algorithm via hand-labelled inputs (supervised learning); in other cases, the algorithm itself defines best-fit models to make sense of a set of inputs (unsupervised learning). In both cases, the algorithm defines decision-making rules to handle new inputs. Critically, a human user will typically not be able to understand the rationale of decision-making rules produced by the algorithm.
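The unsupervised case can be sketched as follows. In this illustrative toy example (invented data, a simplified two-cluster variant of the k-means idea), no labels are supplied at all: the algorithm itself finds a best-fit grouping of the inputs.

```python
# Toy sketch of unsupervised learning: split one-dimensional values into
# two clusters with no labels supplied. Data values are invented.

def two_means(values, iterations=10):
    """Iteratively refine two cluster centres, then return the two groups."""
    c0, c1 = min(values), max(values)  # naive initial centres
    for _ in range(iterations):
        # Assign each value to its nearest centre.
        group0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        group1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        # Move each centre to the mean of its group.
        c0 = sum(group0) / len(group0)
        c1 = sum(group1) / len(group1)
    return sorted(group0), sorted(group1)

readings = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
print(two_means(readings))  # → ([0.9, 1.0, 1.2], [7.9, 8.0, 8.3])
```

Note that the grouping emerges from the data alone; the human user is not told, and may struggle to reconstruct, why any particular boundary between groups was chosen — the interpretability problem noted above.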
Popular and policy definitions of these terms often do not follow these technical definitions, which can cause confusion. The World Health Organization (WHO), for example, defines artificial intelligence as “the performance by computer programs of tasks that are commonly associated with intelligent beings.” Definitions of this type are, on the one hand, problematically broad, insofar as they turn on the definition of “intelligence” and the scope of behaviours of “intelligent beings,” and thus cannot on their own be used to classify a particular system as AI or not-AI. With that said, the openness of the definition can also be helpful in policy terms by enabling additional systems to be captured beyond the state-of-the-art at the point of drafting.
Regardless of their limitations, policy definitions of AI are arguably more important than technical definitions if our concern is with harmonisation across regulatory and policy frameworks. The ‘Artificial Intelligence Act’ (AIA), a horizontal risk-based regulatory framework proposed by the European Commission, offers a particularly broad definition of AI that promises to be influential in international policy going forward:
“‘Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Appendix I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
Appendix I of the AIA offers a non-exhaustive list of techniques and approaches that can be considered AI, encompassing machine learning, logic- and knowledge-based approaches, and a variety of statistical methods:
- “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
- (c) Statistical approaches, Bayesian estimation, search and optimization methods.”
As this list shows, the AIA’s definition of ‘AI system’ does not align strictly with the technical definitions offered above. For example, under this definition machine learning is treated as a component of AI rather than as a specialised type of AI. To avoid ambiguity, we offer the following working definition of ‘artificial intelligence system’ for the purposes of this report:
‘Artificial intelligence systems’ refers to standalone or hardware-embedded software that acts as an intelligent agent or displays computational intelligence. An AI system can consist of one or more algorithms or models, but typically refers to complex systems in which multiple algorithms or models work together to perform a complex task.
Public discourse is currently dominated by concerns with a particular class of AI systems that make decisions and recommendations about important matters in life. These systems augment or replace analysis and decision-making by humans, and are often used because of the scope or scale of data and rules involved; the number of features considered in classification tasks can run into the millions. These tasks often replicate work previously undertaken by human workers, but at a much larger scale and using qualitatively distinct decision-making logic. Such systems make generally reliable (but not necessarily correct) decisions based upon complex rules that challenge or confound human capacities for action and comprehension. In other words, this report addresses AI systems whose actions are difficult for humans to predict or whose decision-making logic is difficult to explain after the fact.