Today's business managers depend heavily on reliable data integration systems that run complex ETL and ELT workflows (extract-transform-load and extract-load-transform). Gaurav Malhotra joins Scott Hanselman to discuss how you can iteratively build, debug, deploy, and monitor your data integration workflows (including analytics workloads in Azure Databricks) using Azure Data Factory pipelines.

For more information:
- Ingest, prepare, and transform using Azure Databricks and Data Factory (blog)
- Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory (docs)
- Create a free account (Azure)
- Follow @SHanselman
- Follow @AzureFriday
- Follow @gauravmalhot12
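As a rough sketch of what the episode covers, a Data Factory pipeline can invoke a Databricks notebook through the Databricks Notebook Activity. The JSON below is a minimal, illustrative pipeline definition; the pipeline name, linked service name, notebook path, and parameter are assumptions for the example, not values from the episode.

```json
{
  "name": "TransformWithDatabricksPipeline",
  "properties": {
    "activities": [
      {
        "name": "RunTransformNotebook",
        "type": "DatabricksNotebook",
        "linkedServiceName": {
          "referenceName": "AzureDatabricksLinkedService",
          "type": "LinkedServiceReference"
        },
        "typeProperties": {
          "notebookPath": "/Shared/transform-sales-data",
          "baseParameters": {
            "inputPath": "@pipeline().parameters.inputPath"
          }
        }
      }
    ],
    "parameters": {
      "inputPath": { "type": "string" }
    }
  }
}
```

Once deployed, the pipeline can be triggered and monitored from the Data Factory authoring and monitoring UI, which is the iterative build-debug-deploy loop discussed in the episode.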