Data Pipelines with Apache Airflow

Pipelines can be challenging to manage, especially when your data has to flow through a collection of application components, servers, and cloud services. Airflow lets you schedule, restart, and backfill pipelines, and its easy-to-use UI and Python-based workflow definitions have users praising its flexibility. Data Pipelines with Apache Airflow takes you through best practices for creating pipelines for a range of use cases, including data lakes, cloud deployments, and data science.

Data Pipelines with Apache Airflow teaches you the ins and outs of the Directed Acyclic Graphs (DAGs) that power Airflow, and how to write your own DAGs to meet the needs of your projects. With complete coverage of both foundational and lesser-known features, when you’re done you’ll be set to start using Airflow for seamless data pipeline development and management.
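
To make the DAG concept concrete, here is a minimal sketch of an Airflow DAG, assuming the Airflow 2.x API; the dag_id, task ids, and bash commands are illustrative placeholders, not examples taken from the book.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical pipeline: two tasks that run once per day, in order.
with DAG(
    dag_id="example_pipeline",          # illustrative name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",         # one run per day
    catchup=False,                      # do not backfill missed runs on first deploy
) as dag:
    fetch = BashOperator(task_id="fetch_data", bash_command="echo 'fetching data'")
    process = BashOperator(task_id="process_data", bash_command="echo 'processing data'")

    # The >> operator declares the dependency: process_data runs only after fetch_data succeeds.
    fetch >> process

The dependency arrows are what make the workflow a directed acyclic graph: Airflow schedules each task only once its upstream tasks have completed.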

Key Features

Framework foundation and best practices

Airflow's execution and dependency system

Testing Airflow DAGs

Running Airflow in production

About the reader

For data-savvy developers, DevOps and data engineers, and system administrators with intermediate Python skills.

About the technology

Data pipelines are used to extract, transform, and load data to and from multiple sources, routing it wherever it’s needed, whether that’s visualisation tools, business intelligence dashboards, or machine learning models. Airflow streamlines the whole process, giving you one tool for programmatically developing and monitoring batch data pipelines and for integrating all the pieces you use in your data stack.
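
As a rough illustration of that extract-transform-load pattern in Airflow terms, the sketch below wires three Python callables into a single batch pipeline. It assumes the Airflow 2.x PythonOperator, and the function bodies are placeholders rather than code from the book.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    return [1, 2, 3]

def transform():
    # Placeholder: clean and reshape the extracted records.
    pass

def load():
    # Placeholder: write the results to a warehouse or dashboard store.
    pass

with DAG(
    dag_id="etl_example",               # illustrative name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task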

Bas Harenslak and Julian de Ruiter are data engineers with extensive experience using Airflow to develop pipelines for major companies including Heineken, Unilever, and Booking.com. Bas is an Apache Airflow committer, and both are active contributors to the project.

Author | Bas Harenslak
Language | English
Type | Paperback
Category | Computers & IT
