Description
Artificial intelligence (AI) emerged from the convergence of several ideas and theories developed during the first half of the twentieth century. Among these, information theory holds a prominent place, reaching its mature formulation in Claude Shannon's work of the 1940s. Shannon is considered one of the founding fathers of AI, having been a key contributor to the 1956 Dartmouth workshop that officially launched the field. On closer inspection, however, some aspects of this origin and of information theory's influence appear inconsistent. Notably, information theory deals primarily with the transmission of signals rather than with the transmission of meaning, a key element of intelligence. Yet Nyquist's earlier work, which anticipated and paved the way for a solid theory of information, was concerned with the transmission of intelligence, in the author's own words. In my talk, I will trace the historical and theoretical connections between information theory and AI, exploring their mutual influence through the shared notion of “intelligence”. I will show that while this shared usage does not signify a perfect conceptual overlap, it has nevertheless sparked numerous advances in AI. These developments have far exceeded the initial expectations of those who first gave AI a technical and formal foundation, and they have done so notwithstanding the inflated hopes that early AI slogans raised in public opinion.