Hands-On Large Language Models
Language Understanding and Generation
2024 | 1st edition
O'Reilly Media (publisher)
978-1-0981-5096-9 (ISBN)
AI has acquired startling new language capabilities in just the past few years. Driven by the rapid advances in deep learning, language AI systems are able to write and understand text better than ever before. This trend enables the rise of new features, products, and entire industries. With this book, Python developers will learn the practical tools and concepts they need to use these capabilities today.
You'll learn how to use the power of pretrained large language models for use cases like copywriting and summarization; create semantic search systems that go beyond keyword matching; build systems that classify and cluster text to enable scalable understanding of large numbers of text documents; and use existing libraries and pretrained models for text classification, search, and clustering.
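To make the semantic-search idea concrete, here is a minimal sketch of dense retrieval: documents and a query are embedded into vectors, and cosine similarity ranks by meaning rather than keyword overlap. The sentence-transformers library and the all-MiniLM-L6-v2 model are illustrative assumptions, not necessarily the code used in the book.

# Dense-retrieval sketch (assumes: pip install sentence-transformers).
# The model name "all-MiniLM-L6-v2" is an illustrative choice.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "The cat sat on the mat.",
    "Quarterly revenue grew by 8 percent.",
    "How to fine-tune a language model.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "training LLMs for a specific task"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks documents by meaning, not shared keywords:
# the query matches the fine-tuning document despite no word overlap.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = scores.argmax().item()
print(documents[best], float(scores[best]))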
This book also shows you how to:
Build advanced LLM pipelines to cluster text documents and explore the topics they belong to
Build semantic search engines that go beyond keyword search with methods like dense retrieval and rerankers
Learn various use cases where these models can provide value
Understand the architecture of underlying Transformer models like BERT and GPT
Get a deeper understanding of how LLMs are trained
Optimize LLMs for specific applications with methods such as generative model fine-tuning, contrastive fine-tuning, and in-context learning (see the sketch after this list)
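The last item, in-context learning, steers a generative model with examples placed in the prompt rather than by updating weights. A minimal sketch follows, assuming the Hugging Face transformers library; "gpt2" is just a small, freely available stand-in model, not the book's choice.

# Few-shot in-context learning sketch (assumes: pip install transformers).
# The model learns the pattern from the prompt alone; no weights change.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Review: The food was wonderful. Sentiment: positive\n"
    "Review: Service was painfully slow. Sentiment: negative\n"
    "Review: I would come back any day. Sentiment:"
)

# Greedy decoding; a few new tokens suffice for the label.
out = generator(prompt, max_new_tokens=3, do_sample=False)
print(out[0]["generated_text"])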
Jay Alammar is Director and Engineering Fellow at Cohere (a pioneering provider of large language models as an API). In this role, he advises and educates enterprises and the developer community on using language models for practical use cases. Through his popular AI/ML blog, Jay has helped millions of researchers and engineers visually understand machine learning tools and concepts, from the basic (ending up in the documentation of packages like NumPy and pandas) to the cutting-edge (Transformers, BERT, GPT-3, Stable Diffusion). Jay is also a co-creator of popular machine learning and natural language processing courses on Deeplearning.ai and Udacity.
Publication date | 24.09.2024
---|---
Series | Animals
Additional info | Illustrations
Place of publication | Sebastopol
Language | English
Dimensions | 178 x 233 mm
Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics
ISBN-10 | 1-0981-5096-1 / 1098150961
ISBN-13 | 978-1-0981-5096-9 / 9781098150969
Condition | New