Practical Enterprise Data Lake Insights
Apress (publisher)
978-1-4842-3521-8 (ISBN)
- First book to provide an end-to-end solution approach
- Includes data capture strategies for time series and relational data
- Covers data processing using Hive and Spark
Use this practical guide to successfully handle the challenges encountered when designing an enterprise data lake and learn industry best practices to resolve issues.
When designing an enterprise data lake, you often hit a roadblock when you must leave the comfort of the relational world and learn the nuances of handling non-relational data. Starting from sourcing data into the Hadoop ecosystem, you will go through stages that raise tough questions about data processing, data querying, and security. Concepts such as change data capture and data streaming are covered. The book takes an end-to-end solution approach in a data lake environment that includes data security, high availability, data processing, data streaming, and more.
Each chapter includes application of a concept, code snippets, and use case demonstrations to provide you with a practical approach. You will learn the concept, scope, application, and starting point.
- Get to know data lake architecture and design principles
- Implement data capture and streaming strategies
- Implement data processing strategies in Hadoop
- Understand the data lake security framework and availability model
This book is for big data architects and solution architects.
Saurabh K. Gupta is a technology leader, published author, and database enthusiast with more than 11 years of industry experience in data architecture, engineering, development, and administration. As Manager, Data & Analytics at GE Transportation, he focuses on data lake analytics programs that build digital solutions for business stakeholders. In the past, he has worked extensively with Oracle database design and development, PaaS and IaaS cloud service models, consolidation, and in-memory technologies. He has authored two books on advanced PL/SQL for Oracle versions 11g and 12c. He is a frequent speaker at numerous conferences organized by the user community and technical institutions.
Venkata Giri currently works with GE Digital and has been involved in building resilient distributed services at massive scale. He has worked on the big data tech stack, relational databases, high availability, and performance tuning. With over 20 years of experience in data technologies, he has in-depth knowledge of big data ecosystems, complex data ingestion pipelines, data engineering, data processing, and operations. Prior to GE, he worked with the data teams at LinkedIn and Yahoo.
Chapter 1: Data Lake Concepts Overview
Chapter Goal: This chapter highlights the key concepts of a Data Lake and its tech stack. It briefs readers on the background of data management, the need for a Data Lake, and the latest trends.
Page count: 20
Sub-topics:
1. Familiarization with Enterprise Data Lake ecosystem
2. Understand key components of Data Lake
3. Data understanding - Structured vs Unstructured
Chapter 2: Data Replication Strategies
Chapter Goal: The chapter will focus on how to replicate data into Hadoop from source systems. Depending on the nature of the source systems, strategies may change. The chapter will start with conventional approaches to ETL data into Hadoop and then dive into the latest trends in change data capture.
Page count: 25
Sub-topics:
1. Conventional ETL strategies
2. Change data capture for relational data
3. Change data capture for time-series data
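As a taste of the change data capture idea this chapter introduces, here is a minimal Python sketch that pulls only rows modified since a stored watermark. It is an illustration under assumed names, not the book's implementation: the source database, the orders table, and the updated_at column are hypothetical placeholders.

# Minimal watermark-based change data capture sketch (hypothetical schema).
import sqlite3

def fetch_changes(conn, last_watermark):
    """Return rows modified after the previous watermark, plus a new watermark."""
    cur = conn.execute(
        "SELECT id, status, updated_at FROM orders WHERE updated_at > ?",
        (last_watermark,),
    )
    rows = cur.fetchall()
    new_watermark = max((r[2] for r in rows), default=last_watermark)
    return rows, new_watermark

if __name__ == "__main__":
    conn = sqlite3.connect("source.db")   # stand-in for the real source system
    watermark = "1970-01-01T00:00:00"     # load this from durable storage in practice
    changes, watermark = fetch_changes(conn, watermark)
    for row in changes:
        print("changed row:", row)        # hand off to the ingestion pipeline here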
Chapter 3: Bring Data into Hadoop
Chapter Goal: The chapter will focus on how to get data into a Hadoop cluster. It will cover several approaches and utilities that can be used to bring data into Hadoop for processing.
Page count: 30
Sub-topics:
1. RDBMS to Hadoop
2. MPP database systems to Hadoop
3. Unstructured data into Hadoop
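For a flavor of the RDBMS-to-Hadoop path this chapter covers, the sketch below reads a table over JDBC with Spark and lands it on HDFS as Parquet. The JDBC URL, credentials, table name, and HDFS path are hypothetical, and the chapter itself also covers other utilities for this step.

# Sketch: copy an RDBMS table into HDFS with Spark (hypothetical connection details).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdbms_to_hadoop").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://dbhost:5432/sales")  # placeholder source database
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .option("fetchsize", "10000")  # stream rows instead of loading them all at once
    .load()
)

# Land the data as Parquet on the cluster for downstream Hive/Spark processing.
df.write.mode("overwrite").parquet("hdfs:///data/raw/orders")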
Chapter 4: Data Streaming Strategies
Chapter Goal: The chapter will take a deep dive into the data streaming principles of Kafka. It will explain how Kafka works and how it resolves the challenge of getting data into a Data Lake.
Page count: 50
Sub-topics:
1. How to stream the data - Kafka
2. How to persist the changes
3. How to batch the data
4. How to massage the data
5. Tools and technologies - HVR, Oracle GoldenGate for Big Data
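To illustrate the kind of streaming flow the chapter walks through, this minimal sketch publishes a change event to a Kafka topic with the kafka-python client. The broker address, topic name, and event payload are hypothetical examples.

# Sketch: publish change events to Kafka (hypothetical broker and topic).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["broker1:9092"],               # placeholder broker list
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"table": "orders", "op": "UPDATE", "id": 42, "status": "SHIPPED"}
producer.send("datalake.cdc.orders", value=event)     # e.g., one topic per source table
producer.flush()                                      # block until the broker acknowledges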
Chapter 5: Data Processing in Hadoop
Chapter Goal: This chapter will provide an insight into various data querying platforms. It all started with MapReduce, but Hive has quickly acquired de facto status in the industry. The chapter will take a deep dive into Hive and its SQL-like semantics and showcase its most recent capabilities. A dedicated section on Spark will give a detailed walk-through of the Spark approach to processing data in Hadoop.
Page count: 30
Sub-topics:
1. MapReduce
2. Query engines - intro / Big Data SQL / BigSQL
3. Hive - focus
4. Spark - focus
5. Presto
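As a small preview of the Hive and Spark querying the chapter focuses on, the sketch below runs a SQL aggregate over a Hive table from a Spark session. The database and table names are hypothetical.

# Sketch: query a Hive table from Spark SQL (hypothetical database and table names).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive_query_demo")
    .enableHiveSupport()        # lets Spark read tables registered in the Hive metastore
    .getOrCreate()
)

daily_totals = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM sales_db.orders
    GROUP BY order_date
    ORDER BY order_date
""")
daily_totals.show(10)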
Chapter 6: Data Security and Compliance
Chapter Goal: This chapter will cover the security aspects of a data lake in Hadoop. The fact that organizations have deliberately compromised on security in the past carries weight. The chapter discusses how to build a safety net around a data lake and mitigate the risks of unauthorized access or injection attacks on a Data Lake.
Page count: 20
Sub-topics:
1. Encryption in-transit and at rest
2. Data masking
3. Kerberos security and LDAP authentication
4. Ranger
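To hint at what Kerberos-secured access can look like in practice, here is a minimal sketch that lists an HDFS directory over WebHDFS using the Kerberos extension of the hdfs Python package. It assumes a valid kinit ticket and an HTTPS-enabled NameNode; the NameNode address and path are hypothetical, and it is not the book's prescribed setup.

# Sketch: list a directory over WebHDFS with Kerberos authentication
# (hypothetical NameNode address and path; assumes a valid `kinit` ticket
# and the `hdfs` package with its Kerberos extension installed).
from hdfs.ext.kerberos import KerberosClient

client = KerberosClient("https://namenode.example.com:9871")  # HTTPS covers encryption in transit
for entry in client.list("/data/raw"):
    print(entry)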
Chapter 7: Ensure Availability of a Data Lake
Chapter Goal: This chapter throws light on yet another key aspect of the data landscape: availability. It will discuss topics such as disaster recovery strategies, how to set up replication between two data centers, and how to tackle the consistency and integrity of data.
Page count: 20
Sub-topics:
1. Disaster Recovery Strategies
2. Set up data center replication
3. Active-passive mode
4. Active-active mode
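To suggest how cross-data-center replication is often scripted, the sketch below wraps Hadoop's DistCp tool in a Python call. The NameNode addresses and paths are hypothetical, it assumes the hadoop CLI is on PATH, and the chapter itself discusses the broader active-passive and active-active designs.

# Sketch: replicate a directory from the primary to the DR cluster with DistCp
# (hypothetical NameNode addresses and paths; requires the hadoop CLI on PATH).
import subprocess

SRC = "hdfs://nn-primary:8020/data/warehouse/orders"
DST = "hdfs://nn-dr:8020/data/warehouse/orders"

result = subprocess.run(
    ["hadoop", "distcp", "-update", "-delete", SRC, DST],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    raise RuntimeError(f"DistCp failed:\n{result.stderr}")
print("Replication pass completed")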
Publication date | 20.07.2018
Additional information | 6 illustrations, color; 82 illustrations, black and white
Place of publication | Berkeley
Language | English
Dimensions | 155 x 235 mm
Weight | 534 g
Binding | Paperback
Subject areas | Computer Science ► Databases ► Data Warehouse / Data Mining
Mathematics / Computer Science ► Computer Science ► Networks
Mathematics / Computer Science ► Mathematics ► Financial / Business Mathematics
Economics ► Business Administration / Management ► Business Informatics
Keywords | BigData • datalake • Data Lake • Data Management • Enterprise • Replication • Streaming
ISBN-10 | 1-4842-3521-5 / 1484235215 |
ISBN-13 | 978-1-4842-3521-8 / 9781484235218 |
Condition | New