Enterprise Database Systems
Data Pipeline to Tableau
Data Pipeline: Process Implementation Using Tableau & AWS
Data Pipeline: Using Frameworks for Advanced Data Management

Data Pipeline: Process Implementation Using Tableau & AWS

Course Number:
it_dsdptbdj_01_enus
Lesson Objectives

Data Pipeline: Process Implementation Using Tableau & AWS

  • Course Overview
  • describe a data pipeline and its features, and list the steps involved in building one
  • recognize the processes involved in building data pipelines
  • identify the different stages of a data pipeline
  • list various technologies that can be used to implement a data pipeline
  • list various data sources that are involved in the data pipeline transformation phases
  • define scheduled data pipelines and list all the associated components, tasks, and attempts
  • install Tableau Server and the command line utilities
  • build data pipelines using the Tableau command line utilities
  • demonstrate the steps involved in building data pipelines on AWS
  • install Tableau command line utilities, build a pipeline with Tableau command line utilities, and build data pipelines on AWS

Overview/Description

In this 11-video course, explore the concept of data pipelines, the processes and stages involved in building them, and technologies such as Tableau and Amazon Web Services (AWS) that can be used to implement them. Learners begin with an initial look at the data pipeline and its features, and then the steps involved in building one. You will go on to learn about the processes involved in building data pipelines, the different stages of a pipeline, and the essential technologies that can be used to implement one. Next, learners explore the various types of data sources involved in the data pipeline transformation phases. Then you learn to define scheduled data pipelines and list their associated components, tasks, and attempts. You will learn how to install Tableau Server and its command line utilities, and then build data pipelines using those utilities. Finally, take a look at the steps involved in building data pipelines on AWS. The closing exercise involves building data pipelines with Tableau.
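To preview the Tableau command line step, here is a minimal Python sketch that shells out to tabcmd to sign in to a Tableau Server instance and refresh a published extract as one stage of a pipeline. The server URL, credentials, and data source name are hypothetical placeholders, not values from the course.

```python
import subprocess

# Hypothetical connection details -- replace with your own Tableau Server values.
SERVER = "https://tableau.example.com"
USER = "pipeline_user"
PASSWORD = "s3cret"           # in practice, read this from a secrets store
DATASOURCE = "sales_extract"  # name of a published data source

def tabcmd(*args: str) -> None:
    """Run a tabcmd command and raise if it exits with an error."""
    subprocess.run(["tabcmd", *args], check=True)

# Sign in to the server, trigger an extract refresh, then sign out.
tabcmd("login", "-s", SERVER, "-u", USER, "-p", PASSWORD)
tabcmd("refreshextracts", "--datasource", DATASOURCE)
tabcmd("logout")
```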

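On the AWS side, one way to build a pipeline is through the AWS Data Pipeline service, driven programmatically with boto3. The sketch below, which assumes boto3 is installed and AWS credentials are configured, registers a pipeline shell, attaches a minimal on-demand definition, and activates it; the pipeline name, unique ID, and region are illustrative only, and a real definition would add activities and data nodes.

```python
import boto3

client = boto3.client("datapipeline", region_name="us-east-1")

# Register an empty pipeline shell; uniqueId makes the call idempotent.
pipeline = client.create_pipeline(
    name="demo-etl-pipeline",          # illustrative name
    uniqueId="demo-etl-pipeline-0001", # illustrative idempotency token
)
pipeline_id = pipeline["pipelineId"]

# A minimal definition: a single default object that runs on demand,
# using the default roles AWS documents for Data Pipeline.
client.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "scheduleType", "stringValue": "ONDEMAND"},
                {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
                {"key": "role", "stringValue": "DataPipelineDefaultRole"},
                {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
            ],
        },
    ],
)

# Activate the pipeline so its activities can run.
client.activate_pipeline(pipelineId=pipeline_id)
```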


Target Audience

Prerequisites: none

Data Pipeline: Using Frameworks for Advanced Data Management

Course Number:
it_dsdptbdj_02_enus
Lesson Objectives

Data Pipeline: Using Frameworks for Advanced Data Management

  • Course Overview
  • recognize the features of Celery and Luigi that can be used to set up data pipelines
  • implement Python Luigi in order to set up data pipelines
  • list Dask task scheduling and big data collection features
  • implement Dask arrays in order to manage NumPy APIs
  • list frameworks that can be used to implement data exploration and visualization in data pipelines
  • integrate Spark and Tableau to manage data pipelines
  • use Python to build visualizations for streaming data
  • recognize the data pipeline building capabilities provided by Kafka, Spark, and PySpark
  • set up Luigi to implement data pipelines, integrate Spark and Tableau for data pipeline management, and build visualizations for data pipelines using Python

Overview/Description

Discover how to implement data pipelines using Python Luigi, integrate Spark and Tableau to manage data pipelines, work with Dask arrays, and build data pipeline visualizations with Python in this 10-video course. Begin by learning about the features of Celery and Luigi that can be used to set up data pipelines, and then how to implement Python Luigi to set them up. Next, turn to the Dask library, after surveying the essential features Dask provides for task scheduling and big data collections. Learn how to implement Dask arrays to manage NumPy application programming interfaces (APIs). Explore frameworks that can be used to implement data exploration and visualization in data pipelines, and integrate Spark and Tableau to manage them. Move on to streaming data visualization, using Python to build visualizations for streaming data. Then learn about the data pipeline building capabilities provided by Kafka, Spark, and PySpark. The concluding exercise involves setting up Luigi to implement data pipelines, integrating Spark and Tableau, and building pipeline visualizations with Python.
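To make the Luigi portion concrete, here is a minimal sketch of a two-task Luigi pipeline in which one task extracts raw data and a dependent task transforms it. The file names and the doubling transformation are hypothetical stand-ins for real pipeline stages.

```python
import luigi

class ExtractData(luigi.Task):
    """Write raw input data to a local file (a stand-in for a real source)."""

    def output(self):
        return luigi.LocalTarget("raw_data.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("1\n2\n3\n")

class TransformData(luigi.Task):
    """Depend on ExtractData and double every value it produced."""

    def requires(self):
        return ExtractData()

    def output(self):
        return luigi.LocalTarget("transformed_data.txt")

    def run(self):
        with self.input().open() as src, self.output().open("w") as dst:
            for line in src:
                dst.write(f"{int(line) * 2}\n")

if __name__ == "__main__":
    # Runs the whole dependency graph locally; a luigid central scheduler
    # would be used instead in a shared deployment.
    luigi.build([TransformData()], local_scheduler=True)
```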

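The Dask array material can likewise be previewed in a few lines: a Dask array partitions a NumPy-style array into chunks and evaluates NumPy-like operations lazily across them. The array shape and chunk size below are arbitrary examples.

```python
import dask.array as da

# A 10,000 x 10,000 array split into 1,000 x 1,000 chunks;
# each chunk is an ordinary NumPy array under the hood.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

# Familiar NumPy-style API: this only builds a lazy task graph...
result = (x + x.T).mean(axis=0)

# ...and compute() executes it, potentially in parallel,
# returning a plain NumPy array.
print(result.compute()[:5])
```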

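Finally, the Kafka, Spark, and PySpark combination typically takes a shape like the following sketch, which reads a Kafka topic as a PySpark structured streaming DataFrame and writes the decoded records to the console. The broker address and topic name are placeholders, and the Kafka connector package must be supplied to Spark at launch.

```python
from pyspark.sql import SparkSession

# Requires the Spark-Kafka connector, supplied at launch, e.g.:
# spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 app.py
spark = SparkSession.builder.appName("kafka-pipeline-demo").getOrCreate()

# Subscribe to a Kafka topic as an unbounded streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "pipeline-events")               # placeholder topic
    .load()
)

# Kafka delivers keys and values as bytes; cast them to strings.
decoded = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

# Stream results to the console; in a real pipeline this would feed a sink
# (for example Parquet files) that a tool such as Tableau could visualize.
query = decoded.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```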

Target Audience

Prerequisites: none
