• Welcome to CloudMonks
  • +91 96660 64406
  • info@thecloudmonks.com

Azure Data Factory Training

About The Course

Azure data engineers are responsible for data-related tasks that include provisioning data storage services, ingesting batch and streaming data, implementing security requirements, transforming data, implementing data retention policies, identifying performance bottlenecks, and accessing external data sources. In the world of big data, raw, unorganized data is often stored in relational, non-relational, and other storage systems. However, on its own, raw data doesn't have the proper context or meaning to provide meaningful insights to analysts, data scientists, or business decision makers.

Big data requires a service that can orchestrate and operationalize processes to refine these enormous stores of raw data into actionable business insights. Azure Data Factory is a managed cloud service that's built for these complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.

For example, imagine a gaming company that collects petabytes of game logs that are produced by games in the cloud. The company wants to analyze these logs to gain insights into customer preferences, demographics, and usage behavior. It also wants to identify up-sell and cross-sell opportunities, develop compelling new features, drive business growth, and provide a better experience to its customers.

To analyze these logs, the company needs to use reference data such as customer information, game information, and marketing campaign information that is in an on-premises data store. The company wants to utilize this data from the on-premises data store, combining it with additional log data that it has in a cloud data store.

To extract insights, it hopes to process the joined data by using a Spark cluster in the cloud (Azure HDInsight) and publish the transformed data into a cloud data warehouse such as Azure Synapse Analytics, so that it can easily build a report on top of it. The company wants to automate this workflow, and monitor and manage it on a daily schedule. It also wants to execute the workflow when files land in a blob store container.

Azure Data Factory is the platform that solves such data scenarios. It is a cloud-based ETL and data integration service that allows you to create data-driven workflows for orchestrating data movement and transforming data at scale. Using Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores. You can build complex ETL processes that transform data visually with data flows or by using compute services such as Azure HDInsight Hadoop, Azure Databricks, and Azure SQL Database.
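
As a concrete illustration of the pipeline concept, here is a minimal sketch that creates a pipeline with a single Copy activity using the azure-mgmt-datafactory Python SDK. The subscription, resource group, factory, and dataset names are placeholders, and it assumes the referenced linked services and datasets already exist.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        PipelineResource, CopyActivity, DatasetReference, BlobSource, BlobSink)

    # Placeholder subscription ID; authentication uses the default Azure credential chain.
    adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # One Copy activity: read from a source dataset, write to a sink dataset.
    copy = CopyActivity(
        name="CopyBlobToBlob",
        inputs=[DatasetReference(reference_name="SourceBlobDataset", type="DatasetReference")],
        outputs=[DatasetReference(reference_name="SinkBlobDataset", type="DatasetReference")],
        source=BlobSource(),
        sink=BlobSink())

    # Publish the pipeline; it can then be run manually or from a trigger.
    adf_client.pipelines.create_or_update(
        "<resource-group>", "<factory-name>", "CopyPipeline",
        PipelineResource(activities=[copy]))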

Module 1: Cloud Computing Concepts

  • What is the "Cloud"?
  • Why Cloud Services
  • Types of cloud services
    • Infrastructure as a Service (IaaS)
    • Platform as a Service (PaaS)
    • Software as a Service (SaaS)

Module 2: Big Data Introduction

  • What is Big Data?
  • Characteristics of Big Data
  • Types of Big Data
    • Structured Data
    • Unstructured Data
    • Semi-Structured Data

Module 3: Azure Cloud Storage Technologies

  • Azure Blob Storage
  • Azure Data Lake Storage Gen1
  • Azure Data Lake Storage Gen2
  • Azure SQL Database
  • Synapse Dedicated SQL Pool

Module 4: Azure Blob Storage

  • Storage Account
  • Containers
  • Types of Blobs
  • Performance Tiers
  • Access Tiers
  • Data Replication Policies
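
To ground the Module 4 concepts, here is a minimal sketch using the azure-storage-blob Python SDK; the connection string, container, and file names are illustrative placeholders.

    from azure.storage.blob import BlobServiceClient

    # Placeholder connection string; in practice it comes from the portal or Key Vault.
    service = BlobServiceClient.from_connection_string("<connection-string>")
    container = service.get_container_client("raw-data")
    container.create_container()  # one-time setup

    # Upload a local file as a block blob (the default blob type).
    with open("sales.csv", "rb") as data:
        container.upload_blob(name="landing/sales.csv", data=data, overwrite=True)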

Module 5: Azure Data Lake Storage Gen2

  • Enabling the Hierarchical Namespace
  • Access Control List (ACL)
  • Features of ADLS Gen2
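
A brief sketch of Module 5's ideas using the azure-storage-filedatalake Python SDK: with the hierarchical namespace enabled, directories are first-class objects and carry POSIX-style ACLs. The connection string, file system name, and object ID below are hypothetical.

    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient.from_connection_string("<connection-string>")
    fs = service.get_file_system_client("lake")

    # With the hierarchical namespace, this creates a real directory,
    # not just a name prefix on flat blobs.
    directory = fs.create_directory("bronze/sales")

    # Grant a hypothetical AAD object ID read/execute through an ACL entry.
    directory.set_access_control(acl="user:<object-id>:r-x")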

Module 6: Azure SQL Database

  • Compute & Storage Configurations
  • vCore Based Purchasing Model
  • DTU Based Purchasing Model
  • Firewall Rules
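
As a small illustration of Module 6, the snippet below connects to an Azure SQL Database with pyodbc; the server, database, and login are placeholders, and the connection only succeeds once a server-level firewall rule allows the client's IP address.

    import pyodbc

    # The server-level firewall must allow this client's IP (or
    # "Allow Azure services") before the connection is accepted.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=<server-name>.database.windows.net,1433;"
        "DATABASE=<db-name>;UID=<user>;PWD=<password>;Encrypt=yes;")
    print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])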

Module 7: Azure Data Factory Introduction

  • What is Azure Data Factory (ADF)?
  • Azure Data Factory Key Components
    • Pipeline
    • Activity
    • Linked Service
    • Data Set
    • Integration Runtime
    • Triggers
    • Data Flows
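
To show how these components fit together, here is an illustrative sketch of the JSON that ADF stores for a linked service and a dataset, written as Python dicts with placeholder names: a linked service holds the connection, a dataset points at concrete data through it, activities consume datasets, and pipelines group activities.

    # A linked service: the connection information for a data store.
    linked_service = {
        "name": "BlobStorageLS",
        "properties": {
            "type": "AzureBlobStorage",
            "typeProperties": {"connectionString": "<connection-string>"},
        },
    }

    # A dataset: a named view of data, resolved through the linked service.
    dataset = {
        "name": "SourceBlobDataset",
        "properties": {
            "type": "DelimitedText",
            "linkedServiceName": {"referenceName": "BlobStorageLS",
                                  "type": "LinkedServiceReference"},
            "typeProperties": {
                "location": {"type": "AzureBlobStorageLocation",
                             "container": "raw-data", "fileName": "sales.csv"},
                "firstRowAsHeader": True,
            },
        },
    }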

Module 8: Working with Copy Activity

  • Creating linked services for data stores and compute
  • Creating datasets that point to files and tables
  • Designing pipelines with Copy activities
  • The Copy activity and its features
  • Copy Activity: Copy Behaviour
  • Copy Activity: Data Integration Units
  • Copy Activity: User Properties
  • Copy Activity: Number of Parallel Copies
  • Monitoring Pipeline
  • Debug Pipeline
  • Trigger pipeline manually
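
The tuning knobs listed in Module 8 live in the Copy activity's JSON definition. Below is an illustrative fragment, written as a Python dict with placeholder dataset names, showing where copy behaviour, data integration units, and parallel copies are set.

    # Illustrative Copy activity definition (placeholder dataset names).
    copy_activity = {
        "name": "CopyBlobToBlob",
        "type": "Copy",
        "inputs": [{"referenceName": "SourceBlobDataset", "type": "DatasetReference"}],
        "outputs": [{"referenceName": "SinkBlobDataset", "type": "DatasetReference"}],
        "typeProperties": {
            "source": {"type": "BlobSource"},
            # copyBehavior controls folder handling for file-based sinks:
            # PreserveHierarchy, FlattenHierarchy, or MergeFiles.
            "sink": {"type": "BlobSink", "copyBehavior": "PreserveHierarchy"},
            "dataIntegrationUnits": 4,  # compute allotted to this copy run
            "parallelCopies": 2,        # degree of parallelism within the run
        },
    }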

Module 9: ADF Activities

  • Lookup Activity
  • Get Metadata Activity
  • Delete Activity
  • Data Flow Activity
  • Execute Pipeline Activity
  • Append Variable Activity
  • Fail Activity
  • Stored Procedure Activity
  • Set Variable Activity
  • Validation Activity
  • Web Activity
  • Wait Activity
  • Script Activity
  • Filter Activity
  • ForEach Activity
  • If Condition Activity
  • Switch Activity
  • Until Activity
  • Notebook Activity
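
Many of the activities above are control-flow constructs wired together with ADF expressions. As one hedged example, the fragment below (a Python dict with placeholder activity names) shows a ForEach activity iterating over the files returned by a Get Metadata activity.

    # Illustrative ForEach definition driven by a Get Metadata activity
    # named "GetFiles" (a placeholder).
    foreach_activity = {
        "name": "ForEachFile",
        "type": "ForEach",
        "dependsOn": [{"activity": "GetFiles",
                       "dependencyConditions": ["Succeeded"]}],
        "typeProperties": {
            # ADF expression: the childItems array from Get Metadata.
            "items": {"value": "@activity('GetFiles').output.childItems",
                      "type": "Expression"},
            # Inner activities reference the current file as @item().name.
            "activities": [],
        },
    }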

Module 10: Practical Scenarios and Use Cases

  • ADF_PracticeSession1_Blob_To_Blob
  • ADF_PracticeSession2_CopyActivity_Prefix_Wildcard_FilePath_Blob_To_Blob
  • ADF_PracticeSession3_Blob_To_Azure_SQLDB
  • ADF_PracticeSession4_Blob_To_Azure_SQLDB
  • ADF_PracticeSession5_Dataset_Parameters_Blob_To_Azure_SQLDB
  • ADF_PracticeSession6_Blob_To_ADLS_Gen2
  • ADF_PracticeSession7_ADLS_Gen1_To_ADLS_Gen2
  • ADF_PracticeSession8_Pipeline_Dataset_LinkedService_Parameters
  • ADF_PracticeSession9_FilteringFileFormats_Getmetadata_Filter_ForEach_Copy_Activity
  • ADF_PracticeSession10_FilteringFileFormats_Getmetadata_Filter_ForEach_Copy_Activity
  • ADF_PracticeSession11_BulkCopy_Tables_Files
  • ADF_PracticeSession12_Container_Parameterization_Blob_To_Blob_Storage
  • ADF_PracticeSession13_ExecuteCopyActivity_BasedOnFileCount
  • ADF_PracticeSession14_StoredProcedures_Parameters
  • ADF_PracticeSession15_CopyActivity_CustomSQL_Queries_StoredProcedures
  • ADF_PracticeSession16_Pipeline_Audit_Log
  • ADF_PracticeSession17_Copybehaviour
  • ADF_PracticeSession18_CSV_To_JSON_Format
  • ADF_PracticeSession19_Copy_JSON_File_To_AzureSQL
  • ADF_PracticeSession20_Add_AdditionalColumns_WhileCopyingData
  • ADF_PracticeSession21_CopyDataTool
  • ADF_PracticeSession22_Custom_Email_Notification
  • ADF_PracticeSession23_AzureKeyVault_Integration
  • ADF_PracticeSession24_Incremental_Load
  • ADF_PracticeSession25_Integration_Runtime
  • ADF_PracticeSession26_On-Premise_SQLServer_ADLS_Gen2
  • ADF_PracticeSession27_On-Premise_FileSystem_ADLS_Gen2
  • ADF_PracticeSession28_REST_API_Integration
  • ADF_PracticeSession29_Eventbased_Trigger
  • ADF_PracticeSession30_Scheduled_Trigger
  • ADF_PracticeSession31_TumblingWindow_Trigger
  • ADF_PracticeSession32_Blob_SQLDB_Executepipeline_Activity
  • ADF_PracticeSession33_SQLDB_BLOB_Overwrite_Append_Mode
  • ADF_PracticeSession34_Dataflows_Introduction
  • ADF_PracticeSession35_Dataflows_Select_Filter_DerivedColumn_Transformation
  • ADF_PracticeSession36_Dataflows_Select__DerivedColumn_Aggregator_Sort_Transformation
  • ADF_PracticeSession37_Dataflows_ConditionalSplit_Transformation
  • ADF_PracticeSession38_Dataflows_Join_Transformation
  • ADF_PracticeSession39_Dataflows_Union_Transformation
  • ADF_PracticeSession40_Dataflows_Lookup_Transformation
  • ADF_PracticeSession41_Dataflows_Exists_Transformation
  • ADF_PracticeSession42_Dataflows_Rank_Transformation
  • ADF_PracticeSession43_Dataflows_Pivot_Transformation
  • ADF_PracticeSession44_Slowly Changing Dimension Type1 (SCD1) with HashKey Function
  • ADF_PracticeSession45_Slowly Changing Dimension Type2
  • Assignment-PracticeSessions

  • ADF_PracticeSession46_Dataflows_UnPivot_Transformation
  • ADF_PracticeSession47_Dataflows_SurrogateKey_Transformation
  • ADF_PracticeSession48_Dataflows_Windows_Transformation
  • ADF_PracticeSession49_Dataflows_AlterRow_Transformation
  • ADF_PracticeSession50_Switch Activity-Move and delete data
  • ADF_PracticeSession51_Until Activity-Parameters & Variables
  • ADF_PracticeSession53_Remove Duplicate rows using data flows
  • ADF_PracticeSession54_AWS_S3_Integration
  • ADF_PracticeSession55_GCP_Integration
  • ADF_PracticeSession56_Snowflake_Integration

Azure Databricks:

Module 11: Introduction to Azure Databricks

  • Introduction to Databricks

Module 12: Databricks Integration with Azure Blob Storage

  • Reading data from Blob Storage and creating a Blob mount point
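
A minimal sketch of the mount-point idea, as it might appear in a Databricks notebook cell; the storage account, container, and secret scope names are placeholders.

    # Mount a Blob Storage container into DBFS using an account key
    # fetched from a Databricks secret scope (placeholder names).
    dbutils.fs.mount(
        source="wasbs://<container>@<account>.blob.core.windows.net",
        mount_point="/mnt/raw",
        extra_configs={
            "fs.azure.account.key.<account>.blob.core.windows.net":
                dbutils.secrets.get(scope="adf-training", key="storage-key")})

    # Read through the mount point like a local path.
    df = spark.read.csv("/mnt/raw/sales.csv", header=True, inferSchema=True)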

Module 13: Databricks Integration with Azure Data Lake Storage Gen2

  • Reading files from Azure Data Lake Storage Gen2
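
For Module 13, a hedged notebook sketch that reads directly over the abfss:// scheme, here authenticating with an account key set on the Spark session (a service principal works equally well); the account, secret, and path names are placeholders.

    # Session-level account key auth (placeholder account and secret names).
    spark.conf.set(
        "fs.azure.account.key.<account>.dfs.core.windows.net",
        dbutils.secrets.get(scope="adf-training", key="adls-key"))

    # Path format: abfss://<filesystem>@<account>.dfs.core.windows.net/<path>
    df = spark.read.parquet("abfss://lake@<account>.dfs.core.windows.net/bronze/sales/")
    df.show(5)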

Azure Synapse Analytics:

Module 14: Introduction to Azure Synapse

  • Technical requirements
  • Introducing the components of Azure Synapse
  • Creating a Synapse workspace
  • Understanding Azure Data Lake
  • Exploring Synapse Studio

Module 15: Using Synapse Pipelines to Orchestrate Your Data

  • Technical requirements
  • Introducing Synapse pipelines
    • Integration runtime
    • Activities
    • Pipelines
    • Triggers
  • Creating linked services
  • Defining source and target
  • Using various activities in Synapse pipelines
  • Scheduling Synapse pipelines
  • Creating pipelines using samples
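
Synapse pipelines share the trigger model covered in this module with ADF. As an illustrative example, the fragment below (a Python dict with placeholder names) sketches a schedule trigger that runs a pipeline daily at 06:00 UTC.

    # Illustrative schedule trigger definition (placeholder names).
    schedule_trigger = {
        "name": "DailyTrigger",
        "properties": {
            "type": "ScheduleTrigger",
            "typeProperties": {
                "recurrence": {
                    "frequency": "Day",  # also Minute, Hour, Week, Month
                    "interval": 1,
                    "startTime": "2024-01-01T06:00:00Z",
                    "timeZone": "UTC",
                },
            },
            "pipelines": [{
                "pipelineReference": {"referenceName": "CopyPipeline",
                                      "type": "PipelineReference"},
            }],
        },
    }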

Train your teams on the theory behind, and enable technical mastery of, the cloud computing skills essential to the enterprise, such as security, compliance, and migration on AWS, Azure, and Google Cloud Platform.

Talk With Us