The Data Science Handbook (eBook)
368 pages
Wiley (publisher)
978-1-394-23450-9 (ISBN)
Practical, accessible guide to becoming a data scientist, updated to include the latest advances in data science and related fields.
Becoming a data scientist is hard. The job focuses on mathematical tools, but also demands fluency with software engineering, understanding of a business situation, and deep understanding of the data itself. This book provides a crash course in data science, combining all the necessary skills into a unified discipline.
The focus of The Data Science Handbook is on practical applications and the ability to solve real problems, rather than theoretical formalisms that are rarely needed in practice. Among its key points are:
- An emphasis on software engineering and coding skills, which play a significant role in most real data science problems.
- Extensive sample code, detailed discussions of important libraries, and a solid grounding in core concepts from computer science (computer architecture, runtime complexity, and programming paradigms).
- A broad overview of important mathematical tools, including classical techniques in statistics, stochastic modeling, regression, numerical optimization, and more.
- Extensive tips about the practical realities of working as a data scientist, including understanding related job functions, project life cycles, and the varying roles of data science in an organization.
- Exactly the right amount of theory. A solid conceptual foundation is required for fitting the right model to a business problem, understanding a tool's limitations, and reasoning about discoveries.
Data science is a quickly evolving field, and this 2nd edition has been updated to reflect the latest developments, including the revolution in AI that has come from Large Language Models and the growth of ML Engineering as its own discipline. Much of data science has become a skillset that anybody can have, making this book not only for aspiring data scientists, but also for professionals in other fields who want to use analytics as a force multiplier in their organization.
Field Cady is a data scientist, researcher, and author based in Seattle, WA, USA. He has worked for a range of companies including Google, the Allen Institute for Artificial Intelligence, and several startups. He received a BS in physics and math from Stanford and did graduate work in computer science at Carnegie Mellon. He is the author of The Data Science Handbook (Wiley 2017).
1 Introduction
The goal of this book is to turn you into a data scientist, and there are two parts to this mission. First, there is a set of specific concepts, tools, and techniques that you can go out and solve problems with today. They include buzzwords such as machine learning (ML), Spark, and natural language processing (NLP). They also include concepts that are distinctly less sexy but often more useful, like regular expressions, unit tests, and SQL queries. It would be impossible to give an exhaustive list in any single book, but I cast a wide net.
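To make "distinctly less sexy but often more useful" concrete, here is a minimal sketch of the kind of everyday task a regular expression handles; the log format and field names are invented for illustration.

```python
import re

# A hypothetical server log line; real formats vary widely.
line = "2024-05-17 09:31:02 user=alice status=200 latency_ms=38"

# A few lines of regex often replace a much heavier tool for jobs like this.
pattern = re.compile(r"user=(\w+) status=(\d+) latency_ms=(\d+)")
match = pattern.search(line)
if match:
    user, status, latency = match.groups()
    print(user, int(status), int(latency))  # alice 200 38
```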
That brings me to the second part of my goal. Tools are constantly changing, and your long‐term future as a data scientist depends less on what you know today and more on what you are able to learn going forward. To that end, I want to help you understand the concepts behind the algorithms and the technological fundamentals that underlie the tools we use. For example, this is why I spend a fair amount of time on computer memory and optimization: they are often the underlying reason that one approach is better than another. If you understand the key concepts, you can make the right trade‐offs, and you will be able to see how new ideas are related to older ones.
As the field evolves, data science is becoming not just a discipline in its own right, but also a skillset that anybody can have. The software tools are getting better and easier to use, best practices are becoming widely known, and people are learning many of the key skills in school before they’ve even started their career. There will continue to be data science specialists, but there is also a growing number of the so‐called “citizen data scientists” whose real job is something else. They are engineers, biologists, UX designers, programmers, and economists: professionals from all fields who have learned the techniques of data science and are fruitfully applying them to their main discipline.
This book is aimed at anybody who is entering the field. Depending on your background, some parts of it may be stuff you already know. Especially for citizen data scientists, other parts may be unnecessary for your work. But taken as a whole, this book will give you a practical skillset for today, and a solid foundation for your future in data science.
1.1 What Data Science Is and Isn’t
Despite the fact that “data science” is widely practiced and studied today, the term itself is somewhat elusive. So before we go any further, I’d like to give you the definition that I use. I’ve found that this one gets right to the heart of what sets it apart from other disciplines. Here goes:
Data science means doing analytically oriented work that, for one reason or another, requires a substantial amount of software engineering skills.
Often the final deliverable is the kind of thing a statistician or business analyst might provide, but achieving that goal demands software skills that your typical analyst simply doesn’t have – writing a custom parser for an obscure data format, keeping complex preprocessing logic in order, and so on. Other times the data scientist will need to write production software based on their insights, or perhaps make their model available in real time. Often the dataset itself is so large that just creating a pie chart requires that the work be done in parallel across a cluster of computers. And sometimes, it’s just a really gnarly SQL query that most people struggle to wrap their heads around.
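To illustrate the first of those scenarios, here is a minimal sketch of a custom parser; the record format (semicolon-separated key:value fields, one record per line) is invented for the example, not taken from any real system.

```python
def parse_records(lines):
    """Parse a hypothetical obscure format: 'key:value;key:value' per line."""
    for lineno, line in enumerate(lines, start=1):
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank lines and comments
            continue
        record = {}
        for field in line.split(";"):
            key, sep, value = field.partition(":")
            if not sep:
                raise ValueError(f"Malformed field {field!r} on line {lineno}")
            record[key.strip()] = value.strip()
        yield record

raw = ["# export from a legacy system", "id:17;name:widget;price:2.50"]
print(list(parse_records(raw)))  # [{'id': '17', 'name': 'widget', 'price': '2.50'}]
```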
Nate Silver, a statistician famous for accurate forecasting of US elections, once said: “I think data scientist is a sexed‐up term for statistician.” He has a point, but what he said is only partly true. The discipline of statistics deals mostly with rigorous mathematical methods for solving well‐defined problems; data scientists spend most of their time getting data and the problem into a form where statistical methods can even be applied. This involves making sure that the analytics problem is a good match to business objectives, choosing what to measure and how to quantify things (typically more the domain of a BI analyst), extracting meaningful features from the raw data, and coping with any pathologies of the data or weird edge cases (which often requires a level of coding more typical of a software engineer). Once that heavy lifting is done, you can apply statistical tools to get the final results – although, in practice, you often don’t even need them. Professional statisticians need to do a certain amount of preprocessing themselves, but there is a massive difference in degree.
Historically, statistics focused on rigorous methods to analyze clean datasets, such as those that come out of controlled experiments in medicine and agriculture. Often the data was gathered explicitly to support the statisticians’ analysis! In the 2000s, though, a new class of datasets became popular to analyze. “Big Data” used new cluster computing tools to study large, messy, heterogeneous datasets of the sort that would make statisticians shudder: HTML pages, image files, e‐mails, raw output logs of web servers, and so on. These datasets don’t fit the mold of relational databases or statistical tools, and they were not designed to facilitate any particular statistical analysis; so for decades, they were just piling up without being analyzed. Data science came into being as a way to finally milk them for insights. Most of the first data scientists were computer programmers or ML experts who were working on Big Data problems, not statisticians in the traditional sense.
The lines have now blurred: statisticians do more coding than they used to, Big Data tools are less central to the work of a data scientist, and ML is used by a broad swath of people. And this is healthy: the differences between these fields are, after all, really just a matter of degree and/or historical accident. But, in practical terms, “data scientists” are still the jacks‐of‐all‐trades in the middle. They can do statistics, but if you’re looking to tease every last insight out of clinical trial data, you should consult a statistician. They can train and deploy ML models, but if you’re trying to eke performance out of a large neural network, an ML engineer would be better. They can turn business questions into math problems, but they may not have the deep business knowledge of an analyst.
1.2 This Book’s Slogan: Simple Models Are Easier to Work With
There is a common theme in this book that I would like to call out as the book’s explicit motto: simple models are easier to work with. Let me explain.
People tend to idolize and gravitate toward complicated analytical models like deep neural nets, Bayesian networks, ARIMA models, and the like. There are good reasons to use these tools; the best‐performing models in the world are usually complicated, there may be fancy ways to bake in expert knowledge, and so on. There are also bad reasons to use these tools, like ego and pressure to use the latest buzzwords.
But seasoned data scientists understand that there is more to a model than how accurate it is. Simple models are, above all, easier to reason about. If you’re trying to understand what patterns in the data your model is picking up on, simple models are the way to go. Oftentimes this is the whole point of a model anyway: we are just trying to get insights into the system we are studying, and a model’s performance is just used to gauge how fully it has captured the relevant patterns in the data.
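As a sketch of what that interpretability looks like in practice (assuming scikit-learn is available; the housing-style toy data is invented):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: price driven mostly by size, dragged down by age.
rng = np.random.default_rng(0)
size = rng.uniform(500, 2500, 200)
age = rng.uniform(0, 50, 200)
price = 100 * size - 500 * age + rng.normal(0, 5000, 200)

model = LinearRegression().fit(np.column_stack([size, age]), price)

# Each coefficient has a direct reading (dollars per square foot, dollars
# per year of age); a deep net offers no comparably simple readout.
print(dict(zip(["size", "age"], model.coef_.round(1))))
```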
A related advantage of simple models is supremely mundane: stuff breaks, and they make it easier to find what’s broken. Bad training data, perverse inputs to the model, and data that is incorrectly formatted – all of these are liable to cause conspicuous failures, and it’s easy to figure out what went wrong by dissecting the model. For this reason, I like “stunt double models,” which have the same input/output format as a complicated one and are used to debug the model’s integration with other systems.
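The idea lends itself to a sketch. The class and method names below are illustrative, not from any particular library: a stand-in that honors the real model’s input/output contract while being trivially predictable, so any surprise downstream must come from the plumbing.

```python
class StuntDoubleModel:
    """Stand-in with the same input/output contract as the real model.

    It ignores its inputs and returns a constant, so any surprising
    behavior downstream must come from the integration, not the model.
    """

    def __init__(self, constant=0.0):
        self.constant = constant

    def predict(self, rows):
        # Same shape contract as the real model: one prediction per row.
        return [self.constant for _ in rows]

# Wire the stunt double in exactly where the real model would go.
model = StuntDoubleModel(constant=1.0)
print(model.predict([{"feature_a": 3}, {"feature_a": 7}]))  # [1.0, 1.0]
```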
Simple models are less prone to overfitting. If your dataset is small, a fancy model will often actually perform worse: it essentially memorizes the training data, rather than extracting general patterns from it. The simpler a model, the less you have to worry about the size of your dataset (though admittedly this can create a square‐peg‐in‐a‐round‐hole situation where the model can’t fit the data well and performance degrades).
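A minimal sketch of that failure mode, assuming scikit-learn: on a dozen noisy points drawn from a straight line, a degree-9 polynomial fits the training data almost perfectly but typically scores worse on held-out data than the humble line.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 10, 12).reshape(-1, 1)
y_train = 2 * x_train.ravel() + rng.normal(0, 2, 12)
x_test = rng.uniform(0, 10, 100).reshape(-1, 1)
y_test = 2 * x_test.ravel() + rng.normal(0, 2, 100)

for degree in (1, 9):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    print(degree, round(model.score(x_test, y_test), 3))  # held-out R^2
# The degree-9 model has essentially memorized the 12 training points
# rather than the underlying trend, so its test score suffers.
```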
Simple models are easier to hack and jury‐rig. Frequently they have a small number of tunable parameters, with clear meanings that you can adjust to suit the business needs at hand.
A simple model’s performance can also act as a benchmark: a level that the fancier model must meaningfully exceed in order to justify its extra complexity. And if a simple model performs particularly badly, this may suggest that there isn’t enough signal in the data to make the problem worthwhile.
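A sketch of that benchmarking workflow, assuming scikit-learn and one of its bundled datasets; the point is the comparison pattern, not the particular models.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

for name, model in [
    ("majority class", DummyClassifier(strategy="most_frequent")),
    ("logistic regression",
     make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("gradient boosting", GradientBoostingClassifier()),
]:
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
# The fancy model must clear the simple baseline by enough to justify its
# extra complexity; if even the baseline barely beats majority-class
# guessing, there may not be much signal in the features at all.
```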
On the other hand, when there is enough training data and it is representative of what you expect to see, fancier models do perform better. You usually don’t want to leave money on the table by deploying grossly inferior models simply because they are easier to debug. And there are many situations, like cutting‐edge AI, where the relevant patterns are very complicated, and it takes a complicated model to accurately capture them. Even in...
Publication date (per publisher) | 31.10.2024
---|---
Language | English
Subject areas | Mathematics / Computer Science ► Mathematics ► Statistics; Mathematics / Computer Science ► Mathematics ► Probability / Combinatorics
Keywords | Analytics • Business Intelligence • Data Management • data munging • Data Science • Data Visualization • Deep learning • feature extraction • machine learning • NLP • numerical computing • programming • Python
ISBN-10 | 1-394-23450-3 / 1394234503
ISBN-13 | 978-1-394-23450-9 / 9781394234509
File size | 4.7 MB
Copy protection | Adobe DRM
File format | EPUB (Electronic Publication)