Pierre Depaz is a teacher, artist and programmer. A graduate of the IEP de Lille and of the NYU Tisch School of the Arts, he is currently Lecturer of Interactive Media at NYU Berlin and an adjunct lecturer at Sciences Po, while pursuing his doctoral work on the role of aesthetics in the comprehension of source code at Paris-3 Sorbonne-Nouvelle, under the supervision of Alexandre Gefen and Nick Montfort.
His research focuses on how computational systems create inter- and intra-personal frames of representation, and includes publications such as Computer Simulations as Political Manifestos (Goethe Institute, 2016), L'agit-prop à l'ère 2.0 : les campagnes du collectif Kazeboon dans l'Égypte en Révolution (CIRCAV vol. 27, 2018) and Coroutines (Officialfan.club, 2018).
His artistic practice spans video games, simulations, interactive installations, networked performances and experimental web projects, works which have been exhibited in New York City, Paris, Cairo, Abu Dhabi, Brussels and Berlin. He has written software for, among others, the Whitney Museum of American Art, the New Museum, the Washington Post, the Open Society Foundations and the Bezirksamt Neukölln, as well as this website.
The crystallization of the scientific efforts that were to give rise to artificial intelligence (AI) took place in the decades following the Second World War, and involved a select group of people in close interaction with one another. While rooted in mathematics, biology and a burgeoning computer science, the "fathers" of AI, as they came to be known, relied on a range of ideas drawn both from a history of scientific thought and from a history of accounts of autonomous, machine-like beings. So, while not explicitly referencing fictional works, those involved in the early stages of AI research (e.g. Turing, McCarthy, McCulloch, and Wiener) implicitly based their work on assumptions resulting from an entanglement of beliefs around the brain, language and inanimate matter. These beliefs, straddling the line between fiction and scientific hypothesis, offer a new perspective on the linguistic implications of AI research and production.
By examining the scientific and fictional concepts that these individuals engaged with in the 1940-1970 period in the United States, this contribution aims to make explicit the thin line between the scientific work of AI pioneers and fictional accounts of all-powerful languages and symbol manipulation. In particular, I will focus on the connections between the myth of the Golem in Jewish folklore, Freud's narrative accounts of human psychology, and Leibniz's characteristica universalis, a fantasized universal formal language. The interplay of these different fictions provides a backdrop for the contemporary approach to AI methodologies, in which a certain power of language is considered foundational. The result, I will argue, is a paradigm through which form can be entirely separated from content, in which meaning no longer has anything to do with its vehicle.
Developing this point further, this contribution will examine the nature of, and the discourses surrounding, Lisp, a programming language designed by McCarthy for the specific purpose of AI development, and still in use today. A semantic analysis of Lisp itself, as well as a discussion of the social contexts in which Lisp is invoked or referred to (sometimes as "God's language"), will reveal a technical object whose perceived power is based on flexibility and abstraction. Through an analysis of the various metaphors of religion, wizardry and perfection in popular programming culture, this contribution will highlight how Lisp has, throughout the years, acted as a technical vehicle for the assumptions of the AI pioneers.
Through close readings of source texts, I intend to shed new light on the fiction-infused assumptions behind early AI research, and on how those assumptions have, through the technical development of an AI-oriented language like Lisp, informed the current production of AI-generated works; ultimately, this perspective will allow us to better understand the nature of contemporary productions such as GPT-3.