
Faking It (eBook)

Artificial Intelligence in a Human World

Toby Walsh (Author)

eBook Download: EPUB
2023 | 1st edition
248 pages
Flint (publisher)
978-1-80399-460-4 (ISBN)

Reading and media samples

Faking It - Toby Walsh
System requirements
€18.49 incl. VAT
(CHF 17.95)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros including VAT.
  • Download available immediately
'Refreshingly clear-eyed ... Faking It is an insightful and intelligent book that's a must for those looking for facts about AI hype.' - Books+Publishing

'AI will be as big a game-changer as the smart phone and the personal computer - or bigger! This book will help you navigate the revolution.' - Dr Karl Kruszelnicki

Artificial intelligence is, as the name suggests, artificial and fundamentally different to human intelligence. Yet often the goal of AI is to fake human intelligence. This deceit has been there from the very beginning. We've been trying to fake it since Alan Turing answered the question 'Can machines think?' by proposing that machines pretend to be humans. Now we are starting to build AI that truly deceives us. Powerful AIs such as ChatGPT can convince us they are intelligent and blur the distinction between what is real and what is simulated. In reality, they lack true understanding, sentience and common sense. But this doesn't mean they can't change the world. Can AI systems ever be creative? Can they be moral? What can we do to ensure they are not harmful? In this fun and fascinating book, Professor Toby Walsh explores all the ways AI fakes it, and what this means for humanity - now and in the future.

TOBY WALSH is one of the world's leading researchers in Artificial Intelligence. He is a Professor of Artificial Intelligence at the University of New South Wales and leads a research group at Data61, Australia's Centre of Excellence for ICT Research. He has been elected a fellow of the Association for the Advancement of AI for his contributions to AI research, and has won the prestigious Humboldt research award. He regularly appears on the BBC and writes for The Guardian, New Scientist, and The New York Times.

2.


AI HYPE


A big problem with artificial intelligence today is all the hype it is generating. It’s almost impossible to open a newspaper without reading multiple stories about AI. And the pace of change appears to be accelerating. That’s not surprising, when you consider the billions of dollars being invested in the field.

Unfortunately, many of the claims being made about AI are overinflated. In numerous cases they’re simply wrong. Journalists talk about the imminent arrival of machines that will put us all out of work, or even machines that will take over the planet. It is easy to feel alarmed.

Here, for example, are a few of the many hundreds of recent AI headlines I saw as I was writing this chapter:

‘Mass Layoffs Overseas and a Rise in Artificial Intelligence and Bots in the Workplace Has Some Aussie Workers Feeling Nervous’ (Herald Sun).

‘The AI Arms Race Is On. But We Should Slow Down AI Progress Instead’ (Time).

‘Artificial Intelligence Is Slowly Taking Over the World and Humans Are Unaware of It’ (Transcontinental Times).

‘“ChatGPT Said I Did Not Exist”: How Artists and Writers Are Fighting Back Against AI’ (The Guardian).

‘In San Francisco, Some People Wonder When A.I. Will Kill Us All’ (CNBC).

I decided that I needed to read no further when the BBC, of all places, gave me the headline ‘AI: How “Freaked Out” Should We Be?’.

Fortunately, much of this is pure hype. You don’t need to worry too much.

Mass layoffs overseas? Actually, tech companies took on more staff during the Covid-19 pandemic than they have recently laid off. Amazon, for example, doubled in size, hiring over half a million extra staff over the course of the pandemic.1 And only about half of that number, a quarter of a million people, have been laid off across the whole of Big Tech during the current belt-tightening.* Even a company like Meta, which is doing especially poorly, employs more people now than it did before the pandemic, despite all its layoffs.

As for job losses elsewhere, very few jobs have actually been taken by AI yet, despite all the fear. In a public lecture in 2016, Geoffrey Hinton, one of the leading figures behind deep learning, issued a stark warning to radiologists:

I think if you work as a radiologist, you are like the coyote that’s already over the edge of the cliff, but hasn’t yet looked down so doesn’t realise there’s no ground underneath him. People should stop training radiologists now. It’s just completely obvious that within five years deep learning is going to do better than radiologists.2

But radiologists today are still in high demand. In 2022, radiologists ranked among the top ten highest-earning medical professions in the United States, ahead of surgeons, obstetricians and gynaecologists.3 There are AI tools today that can help speed up a radiologist’s workflow, and double-check their findings. But AI tools are not replacing them. Fortunately, the medical profession rejected Hinton’s advice and has continued to train new radiologists. And it would be a bad idea to stop doing so now.

Let’s consider other jobs. Just two of the 270 jobs in the 1950 US census have been completely eliminated by automation. Can you guess which ones? Unsurprisingly, elevator operators and locomotive firemen are now out of a job. But that is it, when we’re thinking about jobs that have been completely eliminated. And I suspect that very few of the remaining 268 jobs reported in the 1950 US census will be eliminated in the next decade. Perhaps telephonists might be replaced by computer software that speaks to you.* This would take us down to 267 different types of job. On the other hand, lots of new jobs exist today that weren’t in the 1950 US census. Web developer, photocopier repairperson and solar panel installer, to name just three. None of these jobs existed back in the 1950s.

Robots today can take over part of many jobs, but not the whole thing. Ultimately, I suspect, it won’t be robots putting humans out of work, but humans who use AI taking over the jobs of humans who don’t. AI can do some of the dull and repetitive aspects of a job, which improves the productivity of human workers. However, there remains space for humans to do all the other parts, especially when it comes to thinking critically, applying judgement and showing creativity or empathy.

Dartmouth and all that


Hype around artificial intelligence is something that can be traced back to the very start of the field. Really, it is another of AI’s original sins.

The field began, as I mentioned in the last chapter, at a famous conference held at Dartmouth College in 1956. The organisers of this conference secured funding from the Rockefeller Foundation by making the bold claim that they would make significant progress on solving AI by the end of that summer:

We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.4

In reality, it took many decades to make significant advances in any part of AI.

Take a classic (and now largely solved) AI problem such as speech recognition. The goal of speech recognition is to get a computer to understand human speech. In the 1950s, when research into speech recognition began, computer programs were quickly developed that could recognise individual spoken digits. A decade later, in the 1960s, speech-recognition systems could recognise just 100 different words. Progress was proving very slow indeed. By the 1970s, the state of the art was around 1000 words. And in the 1980s, speech-recognition systems could at last understand 10,000 words. But it still took hours to decode just a minute of speech.

It wasn’t until the 1990s that we had continuous, real-time speech recognition with a human-sized vocabulary. And it wasn’t until the 2010s – over half a century after research into speech recognition had started – that speech-recognition systems were, like the humans they were trying to emulate, able to understand any speaker. Finally you didn’t need to train the software to understand each new speaker individually.

In this domain, therefore, the hype of the original claims about artificial intelligence proved to be wildly over the top. The challenge of creating accurate and reliable speech-recognition software wasn’t solved in a summer but over the course of 50 years.

The same has been true for many other problems in artificial intelligence. As we saw in Chapter 1, the challenge of creating an AI that could play human-level chess was solved in 1997 by IBM’s Deep Blue, nearly 50 years after Alan Turing’s first attempts to get a computer to play chess. Or take the even greater challenge of playing the ancient Chinese game of Go. This was solved in 2016 by DeepMind’s AlphaGo, almost 50 years after Albert Lindsey Zobrist’s first computer Go program, in 1968.

I wouldn’t want you to conclude, by the way, that all AI problems will take roughly 50 years of effort to solve. Some AI problems have already taken longer. Consider, for example, human-level machine translation. This could be said to have been solved around 2018, by the powerful deep-learning methods now used by Google Translate. Machine translation was therefore solved more than 70 years after researchers first began studying how computers might translate language. And there are other problems in AI, like common-sense reasoning, that remain unsolved today after more than 75 years of study.

Repeated promises


Unfortunately, other pioneers in AI didn’t learn from the optimism and overconfidence of the Dartmouth participants. Many others believed they could solve AI problems much more quickly than was possible. A decade after Dartmouth, in 1966, Seymour Papert, a renowned professor at the MIT Computer Science and AI Lab, asked a group of his undergraduate students to solve ‘object recognition’ over their summer break. It’s a story that has become legendary in AI research.

Object recognition is the classic problem of identifying objects in an image. It’s of course vital for a robot to perceive the world around it, so that it can, for example, navigate its way around a factory. Papert wrote a proposal outlining the challenge:

The summer vision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system. The particular task was chosen partly because it can be segmented into sub-problems which allow individuals to work independently and yet participate in the construction of a system complex enough to be a real landmark in the development of ‘pattern recognition’.5

Papert put an exceptional young undergraduate, Gerald Jay Sussman, in...

Published (per publisher) 2 November 2023
Place of publication: London
Language: English
Subject areas: Humanities / Philosophy / General & Reference
Humanities / Philosophy / Ethics
Computer Science / Theory & Study / Artificial Intelligence & Robotics
Natural Sciences
Social Sciences / Sociology
Keywords: abstracting intelligence • AI • AI learning • AI Morality • ai research • association for the advancement of AI • centre of excellence for ICT research • data61 • ethics • machine intelligence • Robot Intelligence • The Artificial in Artificial Intelligence • the morality of AI • university of south wales
ISBN-10 1-80399-460-6 / 1803994606
ISBN-13 978-1-80399-460-4 / 9781803994604
EPUB (watermark)
Size: 2.3 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalised for you. If it is improperly passed on to third parties, the copy can be traced back to its source.

File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The text reflows dynamically to fit the display and font size, which also makes EPUB well suited to mobile reading devices.

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need the free Adobe Digital Editions software.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a free app.
Device list and additional notes

Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
