
Building Modern Data Applications Using Databricks Lakehouse

Develop, optimize, and monitor data pipelines on Databricks

Will Girten (Author)

Book | Softcover
246 pages
2024
Packt Publishing Limited (publisher)
978-1-80107-323-3 (ISBN)
CHF 59.30 incl. VAT
Get up to speed with the Databricks Data Intelligence Platform to build and scale modern data applications, leveraging the latest advancements in data engineering

Key Features

Learn how to work with real-time data using Delta Live Tables
Unlock insights into the performance of data pipelines using Delta Live Tables
Apply your knowledge to Unity Catalog for robust data security and governance
Purchase of the print or Kindle book includes a free PDF eBook

Book Description

With so many tools to choose from in today's data engineering development stack, and with the operational complexity that comes with them, data engineers are often overwhelmed, spending more time maintaining complex data pipelines and less time gleaning value from their data. Guided by a lead specialist solutions architect at Databricks with 10+ years of experience in data and AI, this book shows you how the Delta Live Tables framework simplifies data pipeline development by letting you focus on defining input data sources, transformation logic, and output table destinations.
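To give a flavor of that declarative style, here is a minimal sketch of a Delta Live Tables pipeline in Python. The table names, landing path, and columns are hypothetical placeholders, not examples taken from the book:

import dlt
from pyspark.sql import functions as F

# Bronze: ingest raw JSON files incrementally with Auto Loader ("cloudFiles").
# The `spark` session is provided automatically in a DLT pipeline notebook.
@dlt.table(comment="Raw events ingested with Auto Loader")
def bronze_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/landing/raw_events")  # hypothetical landing path
    )

# Silver: declare only the transformation; DLT infers the dependency graph.
@dlt.table(comment="Typed, cleaned events")
def silver_events():
    return (
        dlt.read_stream("bronze_events")
        .select("event_id", "event_type", F.col("ts").cast("timestamp").alias("event_ts"))
    )

Note that the code declares only sources, transformations, and output tables; orchestration, retries, and table management are handled by the DLT runtime.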
This book gives you an overview of the Delta Lake format, the Databricks Data Intelligence Platform, and the Delta Live Tables framework. It teaches you how to apply data transformations by implementing the Databricks medallion architecture and how to continuously monitor the data quality of your pipelines. You'll learn how to handle incoming data with the Databricks Auto Loader feature and automate real-time data processing using Databricks Workflows. You'll also learn how to recover from runtime errors automatically.
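As a sketch of what data-quality monitoring looks like in practice, DLT lets you attach expectations to a table declaration; the rule names and predicates below are hypothetical:

import dlt

# expect: record violations in pipeline metrics but keep the rows.
# expect_or_drop: drop rows that fail the predicate.
# expect_or_fail: abort the update if any row fails.
@dlt.table(comment="Events that passed validation")
@dlt.expect("ts_present", "event_ts IS NOT NULL")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def clean_events():
    return dlt.read_stream("silver_events")

A common quarantine pattern is a second table defined with the inverted predicate, so that failing rows are retained for inspection rather than silently discarded.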
By the end of this book, you’ll be able to build a real-time data pipeline from scratch using Delta Live Tables, leverage CI/CD tools to deploy data pipeline changes automatically across deployment environments, and monitor, control, and optimize cloud costs.

What you will learn

Deploy near-real-time data pipelines in Databricks using Delta Live Tables
Orchestrate data pipelines using Databricks workflows
Implement data validation policies and monitor/quarantine bad data
Apply slowly changing dimension (SCD) Type 1 and Type 2 changes to lakehouse tables (see the sketch after this list)
Secure data access across different groups and users using Unity Catalog
Automate continuous data pipeline deployment by integrating Git with build tools such as Terraform and Databricks Asset Bundles
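For instance, SCD handling in DLT is driven by a declarative API rather than hand-written MERGE logic. A minimal sketch, assuming a hypothetical CDC feed customer_updates defined elsewhere in the pipeline:

import dlt

# Target table that apply_changes will maintain.
dlt.create_streaming_table("dim_customer")

dlt.apply_changes(
    target="dim_customer",        # table created above
    source="customer_updates",    # hypothetical CDC source view
    keys=["customer_id"],         # business key identifying a row
    sequence_by="updated_at",     # ordering column for out-of-order events
    stored_as_scd_type=2,         # keep full history; use 1 to overwrite in place
)

Switching between Type 1 and Type 2 is a one-parameter change, which is the point of the declarative approach.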

Who this book is for

This book is for data engineers looking to streamline data ingestion, transformation, and orchestration tasks. Data analysts responsible for managing and processing lakehouse data for analysis, reporting, and visualization will also find this book beneficial. Additionally, DataOps/DevOps engineers will find it helpful for automating the testing and deployment of data pipelines, optimizing table tasks, and tracking data lineage within the lakehouse. Beginner-level knowledge of Apache Spark and Python is needed to make the most of this book.

Will Girten is a lead specialist solutions architect who joined Databricks in early 2019. With over a decade of experience in data and AI, Will has worked in various business verticals, from healthcare to government and financial services. Will's primary focus has been helping enterprises implement data warehousing strategies for the lakehouse and performance-tuning BI dashboards, reports, and queries. Will is a certified Databricks Data Engineering Professional and Databricks Machine Learning Professional. He holds a Bachelor of Science in computer engineering from the University of Delaware.

Table of Contents

An Introduction to Delta Live Tables
Applying Data Transformations Using Delta Live Tables
Managing Data Quality Using Delta Live Tables
Scaling DLT Pipelines
Mastering Data Governance in the Lakehouse with Unity Catalog
Managing Data Locations in Unity Catalog
Viewing Data Lineage Using Unity Catalog
Deploying, Maintaining, and Administering DLT Pipelines Using Terraform
Leveraging Databricks Asset Bundles to Streamline Data Pipeline Deployment
Monitoring Data Pipelines in Production

Publication date: 2024
Place of publication: Birmingham
Language: English
Dimensions: 191 x 235 mm
Subject areas: Computer Science > Databases > Data Warehouse / Data Mining; Mathematics / Computer Science > Computer Science > Theory / Studies
ISBN-10: 1-80107-323-6 / 1801073236
ISBN-13: 978-1-80107-323-3 / 9781801073233
Condition: New
Discover more from this subject area
Datenanalyse für Künstliche Intelligenz

by Jürgen Cleve; Uwe Lämmel

Book | Softcover (2024)
De Gruyter Oldenbourg (publisher)
CHF 104.90

Daten importieren, bereinigen, umformen und visualisieren

by Hadley Wickham; Mine Çetinkaya-Rundel …

Book | Softcover (2024)
O'Reilly (publisher)
CHF 76.85