
Humanizing Artificial Intelligence (eBook)

Psychoanalysis and the Problem of Control

Luca M. Possati (Editor)

eBook Download: EPUB
2023
115 pages
De Gruyter (Publisher)
978-3-11-100759-5 (ISBN)


€ 99.95 incl. VAT
(CHF 97.65)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately

What does humankind expect from AI? What kind of relationship between man and intelligent machine are we aiming for? Does an AI need to be able to recognize human unconscious dynamics to act for the 'best' of humans, a 'best' that not even humans can clearly define? Humanizing AI analyses AI and its numerous applications from a psychoanalytical point of view to answer these questions.

This important, interdisciplinary contribution to the social sciences, as applied to AI, shows that reflecting on AI means reflecting on the human psyche and personality, and therefore conceiving of AI as a process of deconstruction and reconstruction of human identity. AI gives rise to processes of identification and de-identification that are not simply extensions of human identities, as post-humanist or trans-humanist approaches believe, but completely new forms of identification.

Humanizing AI will benefit a broad audience: undergraduates, postgraduates and teachers in sociology, social theory, science and technology studies, cultural studies, philosophy, social psychology, and international relations. It will also appeal to programmers, software designers, students, and professionals in the sciences.



Luca M. Possati is a postdoctoral researcher at TU Delft (the Netherlands). He was a researcher and lecturer at the University of Porto, the Institut Catholique de Paris, and the Fonds Ricoeur at the École des hautes études en sciences sociales (EHESS) in Paris. He is an associate editor for Humanities & Social Sciences Communications and has published numerous papers and books on phenomenology and the history of contemporary philosophy.

Introduction


Luca M. Possati

This book collects a series of contributions toward an interpretation of artificial intelligence from a psychoanalytic point of view. It claims neither to define a method nor to draw universal conclusions about the nature of technology or the human mind. At its core is the analysis of the human-technology relationship and its connection with the phenomenon of technological innovation. It is therefore a highly interdisciplinary work, addressed to researchers, students, and the general public interested in these issues.

Why do we need a psychoanalytically based approach to AI? How can a discipline, or rather a set of doctrines that differ widely from one another and lack methodological unity, tell us something significant about the nature of human beings and their relationship with technology? This book does not intend to defend the point of view of psychoanalysis, or of a specific psychoanalytic school. Instead, it intends to open a series of explorations of the status of psychoanalysis and, more generally, of the concept of the unconscious and its transformations in relation to new digital technologies and AI.

The need for this research lies in a new interpretation of the classic problem of the control of technology. It is evident that the earlier assumption that technological evolution would automatically lead to significant social and human progress can no longer be sustained today. The ambivalence of technology has become a standing topic in public, philosophical, and scientific debates. The scientific discussion about how to acquire and establish orientational knowledge for decision-makers facing the ambivalence of technology is divided into two branches: the ethics of technology and technology assessment (Grunwald 1999, 2018). These two branches are based on different assumptions concerning how to orient technology policy: the philosophical ethics branch, of course, emphasises the normative implications of decisions related to technology and the importance of moral conflicts, while the technology assessment branch relies mainly on sociological or economic research.

The problem of evaluating and controlling technological development is at the heart of the so-called “Collingridge dilemma,” which can be formulated as follows:

“attempting to control a technology is difficult, and not rarely impossible, because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow.” (Collingridge 1980, 19)

For David Collingridge, technological development always outpaces our ability to understand its social effects. This asymmetry creates a strange effect: when changing a technology is simple, especially at the beginning of its development, change is not perceived as necessary; but once change is perceived as necessary, it is no longer simple, for it has become expensive and dangerous. “It is just impossible to foresee complex interactions between a technology and society over the time span required with sufficient certainty to justify controlling the technology now, when control may be very costly and disruptive” (Collingridge 1980, 12).

It’s important to note that the dilemma is not about technological development itself but about the perception that humans have of it and the awareness of its limits and effects. In fact, scholars underline that the technological development we produce exceeds our level of awareness and knowledge, and this affects our ability to forecast the social implications of technology: “A technology can be known to have unwanted social effects only when these effects are actually felt” (Collingridge 1980, 14). Why is it that, as technologies develop and become diffused, they become ever more resistant to controls that seek to alleviate their unwanted social consequences? To solve the dilemma, Collingridge (1980) develops a reversible, flexible decision-making theory that can be used when the decision-maker is still ignorant of the effects of a technology. According to Collingridge, the essence of the control of technology is not in forecasting its social consequences “but in retaining the ability to change a technology, even when it is fully developed and diffused, so that any unwanted social consequences it may prove to have can be eliminated or ameliorated” (20-21). The important thing is to understand how to make the decisions that influence technological development in such a way as not to remain prisoners of them.

Now, there are different interpretations of the problem of control. Some are markedly alarmist and refer to the concept of the singularity (Kurzweil 2005); Bostrom (2014) and Tegmark (2017) are cases in point. The problem of control is then interpreted as the problem of how to prevent a superintelligence from harming human beings. However, this interpretation risks fueling apocalyptic visions and excessive concerns.

There is also another interpretation of the control problem, one that abandons the alarmist tone and focuses instead on the human-machine relationship and its potential. As Russell (2019) claims, “If we build machines to optimize objectives, the objectives we put into the machines have to match what we want, but we do not know how to define human objectives completely and correctly” (170, emphasis added). Human beings put their goals into the machine, but this is exactly the problem. Humans want the machine to do what we want, “but we do not know how to define human objectives completely and correctly,” and we often act in ways that contradict our own preferences. Humanity is not a single, rational entity but “is composed of nasty, envy-driven, irrational, inconsistent, unstable, computationally limited, complex, evolving, heterogeneous entities. Loads and loads of them” (211).

The main challenge is to understand the nature of our goals and preferences. In Russell’s (2019) view, “Preference change presents a challenge for theories of rationality at both the individual and societal level. . . . Machines cannot help but modify human preferences, because machines modify human experience” (241). How can we communicate our needs, values, and preferences to AI systems? This is a crucial problem in our world, where the influence of AI-based technologies is growing enormously. Unconscious dynamics influence AI and digital technology in general, and understanding them is essential to ensuring that we have better control of AI systems. For this reason, studying the way in which technology influences and orients our emotional and cognitive unconscious is a crucial undertaking to ensure a balanced relationship between human beings and technology.

In the first chapter, Paul Jorion develops an analysis of the concepts of artificial consciousness (AC) and artificial general intelligence (AGI). He claims that the connection between them is misguided, as it is based on a folk-psychology representation of consciousness. There exists, however, a path leading from AI to AGI which skips entirely the need to develop AC as a stepping-stone in that direction; that path, inspired by Freud’s metapsychology, places at the core of the human mind a network of memory traces animated by affect dynamics.

In Chapter 2, starting with Freud’s concept of the psychic machine, Hub Zwart discusses Lacan’s effort to elaborate on this view with the help of 20th-century research areas (computer science, linguistics, cybernetics, molecular biology, etc.), resulting in the famous thesis that the unconscious is structured like a language. Subsequently, two closely related questions resulting from the mutual encounter between psychoanalysis and AI are addressed, namely: how can psychoanalysis contribute to coming to terms with AI, and to what extent does AI allow us to update psychoanalytic theories of the unconscious?

In Chapter 3, Kerrin A. Jacobs claims that AI companionship promises a new way of coping with experiences of loneliness in highly digitalised societies. In a first step, some basic criteria that characterise the relationship with a companion AI (social x-bots) as distinct from human relatedness are sketched. While AI companionship is often praised for its potential to alleviate loneliness, its crucial flaw is its lack of an intersubjective dimension, which is essential for the human condition. The central hypothesis, elaborated in a second step, is that AI companionship cannot solve the problem of loneliness.

In Chapter 4, Andre Nusselder analyses decision-making by football referees supported by Video Assistant Referee (VAR) technologies, where the goal of implementing AI is to allow accurate and fair decisions without interrupting the flow of the game too much. The chapter analyses the connection between these technologies and the affective self-regulation of those involved. It does so from the perspective of Norbert Elias’s theory of civilisation, in which Elias analyses, drawing on Freud’s metapsychology, how increasingly civilised behaviour leads to increased self-control in individuals. The chapter argues that the aim of making football a fairer game with the use of AI has a similar impact on those involved and is thus a next step in the movement towards the posthuman condition, one which takes place subtly as humans adapt to it...

Publication date (per publisher): 4 Oct 2023
Additional information: 4 col. ill.
Language: English
Subject areas: Mathematics / Computer Science > Computer Science > Networks; Social Sciences > Sociology
Keywords: Artificial Intelligence • Control • media • New Structuralism • Psychoanalysis
ISBN-10 3-11-100759-6 / 3111007596
ISBN-13 978-3-11-100759-5 / 9783111007595
EPUB (watermarked)
Size: 1.9 MB

DRM: Digital watermark
This eBook contains a digital watermark and is therefore personalised for you. If the eBook is passed on to third parties without authorisation, it can be traced back to its source.

File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The text reflows dynamically to fit the display and font size, which also makes EPUB a good choice for mobile reading devices.

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need the free Adobe Digital Editions software.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a free app.
Device list and additional notes

Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
