Data Engineering with Databricks Cookbook : Build effective data and AI solutions using Apache Spark, Databricks, and Delta Lake

Saved in:
Bibliographic Details
Main Author: Chadha, Pulkit. (Author)
Format: E-Book
Language: English
Published: Birmingham : Packt Publishing.
Other Locations: See in the Sudoc
Summary: Work through 70 recipes to implement reliable data pipelines with Apache Spark, optimally store and process structured and unstructured data in Delta Lake, and use Databricks to orchestrate and govern your data.
Key Features:
- Learn data ingestion, data transformation, and data management techniques using Apache Spark and Delta Lake
- Gain practical guidance on using Delta Lake tables and orchestrating data pipelines
- Implement reliable DataOps and DevOps practices, and enforce data governance policies on Databricks
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description: Data Engineering with Databricks Cookbook will guide you through recipes for using Apache Spark, Delta Lake, and Databricks effectively for data engineering, beginning with an introduction to data ingestion and loading with Apache Spark. As you progress, you'll be introduced to various data manipulation and data transformation solutions that can be applied to data. You'll find out how to manage and optimize Delta tables, as well as how to ingest and process streaming data. The book will also show you how to address performance problems in Apache Spark applications and Delta Lake. Later chapters will show you how to use Databricks to implement DataOps and DevOps practices and teach you how to orchestrate and schedule data pipelines using Databricks Workflows. Finally, you'll understand how to set up and configure Unity Catalog for data governance. By the end of this book, you'll be well versed in building reliable and scalable data pipelines using modern data engineering technologies.
What you will learn:
- Perform data loading, ingestion, and processing with Apache Spark (a minimal illustrative sketch follows this record)
- Discover data transformation techniques and custom user-defined functions (UDFs) in Apache Spark
- Manage and optimize Delta tables with Apache Spark and Delta Lake APIs
- Use Spark Structured Streaming for real-time data processing
- Optimize Apache Spark application and Delta table query performance
- Implement DataOps and DevOps practices on Databricks
- Orchestrate data pipelines with Delta Live Tables and Databricks Workflows
- Implement data governance policies with Unity Catalog
Who this book is for: This book is for data engineers, data scientists, and data practitioners who want to learn how to build efficient and scalable data pipelines using Apache Spark, Delta Lake, and Databricks. To get the most out of this book, you should have basic knowledge of data architecture, SQL, and Python programming.
Online Access: Access to the e-book
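As a rough illustration of the kind of Spark and Delta Lake ingestion recipe summarized above, here is a minimal sketch in Python (PySpark). It is not taken from the book: the file path, output location, and session configuration are assumptions, and on a Databricks cluster the Delta-specific configuration lines are not needed.

    # Minimal sketch: read a CSV file with Apache Spark and persist it as a Delta table.
    # Assumes the delta-spark package (or a Databricks runtime) provides Delta Lake.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("delta-ingest-sketch")
        # These two settings enable Delta Lake on plain open-source Spark;
        # they are preconfigured on Databricks.
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Load a hypothetical CSV file, treating the first row as a header and
    # inferring column types.
    df = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("/data/raw/orders.csv")  # illustrative path
    )

    # Write the DataFrame out in Delta format; "overwrite" replaces any
    # previous contents at the target path.
    df.write.format("delta").mode("overwrite").save("/data/delta/orders")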
LEADER 04009nmm a2200433 i 4500
001 ebook-280311435
005 20240917153233.0
007 cu|uuu---uuuuu
008 240917s2024||||uk ||||g|||| ||||||eng d
020 |a 9781837632060 
035 |a (OCoLC)1456999280 
035 |a FRCYB88957554 
035 |a FRCYB26088957554 
035 |a FRCYB24788957554 
035 |a FRCYB24888957554 
035 |a FRCYB29388957554 
035 |a FRCYB084688957554 
035 |a FRCYB087588957554 
035 |a FRCYB56788957554 
035 |a FRCYB097088957554 
035 |a FRCYB087088957554 
040 |a ABES  |b fre  |e AFNOR 
041 0 |a eng  |2 639-2 
100 1 |a Chadha, Pulkit.  |4 aut.  |e Author 
245 1 0 |a Data Engineering with Databricks Cookbook :  |b Build effective data and AI solutions using Apache Spark, Databricks, and Delta Lake   |c Pulkit Chadha. 
264 1 |a Birmingham :  |b Packt Publishing. 
264 2 |a Paris :  |b Cyberlibris,  |c 2024. 
336 |b txt  |2 rdacontent 
337 |b c  |2 rdamedia 
337 |b b  |2 isbdmedia 
338 |b ceb  |2 RDAfrCarrier 
500 |a Cover (https://static2.cyberlibris.com/books_upload/136pix/9781837632060.jpg). 
506 |a Online access is restricted to institutions or libraries that have subscribed  |e Cyberlibris 
520 |a Work through 70 recipes to implement reliable data pipelines with Apache Spark, optimally store and process structured and unstructured data in Delta Lake, and use Databricks to orchestrate and govern your data. Key Features: Learn data ingestion, data transformation, and data management techniques using Apache Spark and Delta Lake; Gain practical guidance on using Delta Lake tables and orchestrating data pipelines; Implement reliable DataOps and DevOps practices, and enforce data governance policies on Databricks; Purchase of the print or Kindle book includes a free PDF eBook. Book Description: Data Engineering with Databricks Cookbook will guide you through recipes for using Apache Spark, Delta Lake, and Databricks effectively for data engineering, beginning with an introduction to data ingestion and loading with Apache Spark. As you progress, you'll be introduced to various data manipulation and data transformation solutions that can be applied to data. You'll find out how to manage and optimize Delta tables, as well as how to ingest and process streaming data. The book will also show you how to address performance problems in Apache Spark applications and Delta Lake. Later chapters will show you how to use Databricks to implement DataOps and DevOps practices and teach you how to orchestrate and schedule data pipelines using Databricks Workflows. Finally, you'll understand how to set up and configure Unity Catalog for data governance. By the end of this book, you'll be well versed in building reliable and scalable data pipelines using modern data engineering technologies. What you will learn: Perform data loading, ingestion, and processing with Apache Spark; Discover data transformation techniques and custom user-defined functions (UDFs) in Apache Spark; Manage and optimize Delta tables with Apache Spark and Delta Lake APIs; Use Spark Structured Streaming for real-time data processing; Optimize Apache Spark application and Delta table query performance; Implement DataOps and DevOps practices on Databricks; Orchestrate data pipelines with Delta Live Tables and Databricks Workflows; Implement data governance policies with Unity Catalog. Who this book is for: This book is for data engineers, data scientists, and data practitioners who want to learn how to build efficient and scalable data pipelines using Apache Spark, Delta Lake, and Databricks. To get the most out of this book, you should have basic knowledge of data architecture, SQL, and Python programming. 
856 |q HTML  |u https://srvext.uco.fr/login?url=https://univ.scholarvox.com/book/88957554  |w Données éditeur  |z Accès à l'E-book 
886 2 |2 unimarc  |a 181  |a i#  |b xxxe## 
993 |a E-Book  
994 |a BNUM 
995 |a 280311435