Author: Alessia Guarnaccia

The term AGI (Artificial General Intelligence) refers to “a hypothetical computer program capable of performing intellectual tasks as well as, or better than, a human being” (H. Hodson). The ambition is to attribute general cognitive abilities to such a system: the supposed capacity to apply ‘intelligence’ to any problem.

AGI, as the theoretical and practical study of such engineered systems and the methods used to create them, is part of the broader fields of artificial intelligence (AI) and cognitive science; it is also “closely related to other fields such as metalearning (also known as “meta-learning” or “learning to learn”) and computational neuroscience”. Research goals associated with AGI relate to concepts such as “synthetic intelligence” (SI) (Joscha Bach, 2009), “computational intelligence” (CI), “natural intelligence”, “cognitive architecture” and “biologically inspired cognitive architecture” (BICA); it is related to, but does not fully overlap with, the overtly anthropomorphic concept of “human-level AI” (Cassimatis, 2006), a term used to refer to reasonably human-like AI.

The term “general” contrasts the concept with so-called “narrow AI”, which instead refers to “a type of artificial intelligence designed to perform a specific task effectively” and which, operating under fixed rules and constraints, “is not capable of transferring its knowledge or remembering its experience in an unfamiliar environment”, so that if the context or behavioural specifications change, “some degree of human reprogramming is generally required to enable the system to maintain its level of intelligence” (Kurzweil, 1999). By contrast, “generally intelligent natural systems”, such as humans, have “a broad capacity for self-adaptation to changes in their goals or circumstances”, performing “transfer learning” that generalises knowledge and applies it to goals or contexts other than the original ones: a fundamental quality that the AGI-focused research community aims to achieve.

The concept of “general intelligence” (GI) itself still lacks a clear and universally accepted definition; the same is true of “intelligence” in nature (over 70 different definitions from researchers in different disciplines were collected and organised in a single paper by Legg and Hutter in 2007). Over time, a number of key approaches to “conceptualising the nature of general intelligence” have emerged. According to the so-called “pragmatic approach”, it is the “comparison with human capabilities” and the ability to “do the practical, useful and important things that are typical of humans” that can shed light on whether the system under study has the connotations of a general intelligence system. This perspective is presented in the article “Human Level Artificial Intelligence? Be Serious!”, published in AI Magazine by Nils J. Nilsson, one of the founding fathers of the field of AI (Nilsson, 2005), where the author argues for the “need to develop generic, trainable systems” capable of “learning and being taught to perform any of the tasks” that humans can perform, and states that one should start with a system with minimal, but extensive, built-in capabilities, including the ability to improve through learning. In contrast, the “psychological approach” to characterising general intelligence, rather than directly examining practical abilities, “seeks to isolate the deeper underlying capacities that make them possible”. An early phase of intelligence research focused heavily on measurement: Charles Spearman proposed the so-called “g factor”, short for “general factor”, in 1904, arguing that it was “biologically determined and represented an individual’s general level of intellectual ability”; William Stern introduced the notion of the intelligence quotient (IQ) in 1912, whose formula was later refined by Lewis M. Terman.
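The ratio formulation behind Stern’s quotient, which Terman later scaled by 100 to give the familiar score, can be stated compactly (a standard rendering, not a quotation from the sources above):

```latex
\mathrm{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100
```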
At a later stage, the concept of “intelligence as a single, undifferentiated ability” was challenged, and “theories, definitions and measurement approaches emerged that shared the idea that intelligence is multifaceted and variable” across individuals and social groups. A well-known example of these approaches is Howard Gardner’s “theory of multiple intelligences” (1983), which proposes eight distinct forms or types of intelligence: (1) linguistic, (2) logical-mathematical, (3) musical, (4) bodily-kinesthetic, (5) visual-spatial, (6) interpersonal, (7) intrapersonal and (8) naturalistic. In contrast to the human-based approaches, some researchers have attempted to understand the phenomenon in universal terms (“the mathematical approach”), defining it as a measure of an agent’s average ability to achieve goals (obtain rewards) in a wide and weighted variety of environments (S. Legg, M. Hutter), and arriving at the basic intuition that “absolute general intelligence would only be achievable with infinite computational capacity” and that, for any computable system, “there will be contexts and goals for which the subject will not be very intelligent”. Finally, the so-called “adaptationist approach” sees general intelligence as closely tied to the context in which it exists, defining it precisely as “adaptation to the environment using limited resources” (Pei Wang, 2006), whereby “a system has higher general intelligence if it is able to adapt effectively to a more general class of environments within realistic resource constraints”.
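The mathematical approach sketched above has a compact formal expression: Legg and Hutter define the universal intelligence Υ of an agent π as its expected performance summed over all computable environments μ, each weighted by simplicity via the Kolmogorov complexity K(μ), so that simpler environments count more:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Since K is uncomputable, Υ is a theoretical yardstick rather than a practical test, which is consistent with the intuition quoted above that absolute general intelligence would require infinite computational capacity.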

Over time, the different theoretical perspectives on general intelligence have led to different technical approaches to the goal of transferring this competence to artificial systems: W. Duch distinguished three paradigms, symbolic (Symbolic AGI), emergentist (Emergentist AGI) and hybrid (Hybrid AGI), to which B. Goertzel added the category of “universal” (Universal AI). From an engineering point of view, it was the emergence of formal neurons (the “artificial neuron”), artificial neural networks (ANNs) and the dizzying development of deep learning that made it possible to test concepts such as “Hebbian learning” (according to which neural pathways are strengthened each time they are used; Donald Hebb, 1949), “reinforcement learning”, “genetic algorithms” and other techniques to artificially implement “cognitive architectures with increasingly complex and self-organising dynamic properties”.
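The Hebbian principle mentioned above can be illustrated in a few lines of code. The sketch below is a deliberately minimal toy (the learning rate, input pattern and firing assumption are illustrative, not taken from any specific architecture): weights along pathways that are repeatedly co-active grow, while unused pathways stay at zero.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Hebb's rule: dw = eta * y * x -- a connection strengthens each
    time pre-synaptic input x and post-synaptic output y are active together."""
    return w + eta * y * x

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])   # a recurring input pattern
for _ in range(10):
    y = 1.0                      # assume the neuron fires whenever the pattern appears
    w = hebbian_update(w, x, y)

print(w)  # pathways used together are strengthened: [1. 0. 1.]
```

Note that plain Hebbian growth is unbounded; practical variants (e.g. Oja’s rule) add a normalisation term to keep the weights stable.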

Some academic sources reserve the term AGI “for those computer programs that are capable of being sentient” and “possess all the characteristics of a human mind, including consciousness”. The term “strong artificial intelligence” (“strong AI”) is used in this sense, as opposed to “weak AI”, which focuses on specific tasks, “the latter being useful for testing hypotheses about minds, but not being such themselves” (J. Searle). This perspective stems from the historical goal of “identifying a set of rules that explains and hopefully emulates the workings of biological intelligences”, which is fully in line with the “thinking machines” debate.

One of the key moments in this debate was Alan M. Turing’s conception of the Turing machine (TM), a mathematical model of computation describing an abstract machine capable of “manipulating, reading and/or writing symbols on a potentially infinite tape according to a table of rules” and capable of “implementing any computer algorithm”; this evolved into the so-called universal Turing machine (UTM), a Turing machine that “can simulate any TM on any input… reading both the description of the machine to be simulated and its input from its own tape”. These formalisations were followed by the conception of the Turing test, originally called the “imitation game” (proposed by the author in the article “Computing machinery and intelligence”, which appeared in Mind in 1950): a criterion based on textual conversations in natural language between a human and a machine to determine whether the latter is “capable of exhibiting intelligent behaviour equivalent to, or indistinguishable from, that of a human being”. The underlying principle is that “a TM (Turing machine), properly programmed, is capable of performing any human-computable function” and that “if a problem is human-computable, then there will be a Turing machine that can solve (compute) it” (Church–Turing thesis).
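The abstract machine described above — a head reading and writing symbols on an unbounded tape under a table of rules — can be sketched directly in code. The rule table here (a machine that inverts every bit and halts at the first blank) is purely an illustrative example, not any canonical machine:

```python
def run_turing_machine(tape, rules, state="start", halt="halt"):
    """Execute a rule table of the form
    (state, read_symbol) -> (write_symbol, move, next_state),
    where move is +1 (right) or -1 (left), over a tape that grows
    on demand -- the 'potentially infinite' tape of the model."""
    tape = list(tape)
    pos = 0
    while state != halt:
        if pos == len(tape):
            tape.append("_")     # extend the tape with a blank as needed
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return "".join(tape)

# Illustrative rule table: flip every bit, halt on the first blank.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run_turing_machine("1011", rules))  # -> 0100_
```

A universal Turing machine is then simply a machine whose own rule table interprets a description of another machine placed on the tape alongside its input.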

With particular reference to the possibility of a machine behaving intelligently in a way that makes it indistinguishable from a human being, one aspect considered crucial is the intentionality of mental states (“the constitutive capacity of thought to always have a content, to be essentially directed towards an object, without which thought itself would not exist”) and the possibility of extending intentionality to artificial systems. In 1874, the experimental psychologist Franz Brentano, in his “Theory of Intentionality”, pointed out precisely these two characteristics of intentionality: the possession of an informational content and the peculiarity of being directed towards an “intentional object”, thus defining intentionality as the main characteristic of psychic (or mental) phenomena, by which they can be distinguished from physical phenomena. The philosopher John R. Searle takes this view, believing that intrinsic intentionality (a property of consciousness) is peculiar only to biological systems; the thought experiment he devised, known as the “Chinese room”, is famous as a “counterexample to the theory of strong artificial intelligence” (an argument presented in the article “Minds, Brains, and Programs”, published in 1980 in the journal Behavioral and Brain Sciences), in which he attempts to show that “syntax (the ‘grammar’, the computer’s ability to carry out a procedure) does not imply semantics (the ‘meaning’, the fact that the computer knows what it is doing)”.

At present, the so-called “connectionist objection” (“connectionism”) claims that the understanding of meanings can “emerge from a computational complex that goes beyond the serial logic of digital computers”, making simple systems increasingly similar to complex and dynamic ones, up to a point where they so closely resemble biological systems that the possibility of a future artificial mind exhibiting intrinsic consciousness and intentionality cannot be excluded.

Another aspect considered important in this debate is that a general AI capable of emulating the human mind could also improve itself recursively (“recursive self-improvement”), thus “starting from the human level, autonomously improving itself, producing solutions and technologies much faster than human scientists”, with developments that are difficult to predict. It is a scenario that raises a number of questions and requires a strong acceleration of awareness of the “existential risks” associated with this perspective (Ethical AI).

Hebb, D. O., The Organization of Behavior, Wiley & Sons, New York, 1949
Dennett, D. C., The Intentional Stance, MIT Press, Cambridge, Massachusetts, 1987
Edelman, G., Neural Darwinism: The Theory of Neuronal Group Selection, Basic Books, New York, 1987
Domeniconi, J., Discussions on neural networks and learning, 2001
Kurzweil, R., The Age of Spiritual Machines, Penguin, New York, 1999

The image shown was processed with DALL-E