Course Summary

Your responsibilities for this role include transforming data into reusable analytics assets by using Microsoft Fabric components, such as:
Lakehouses
Data warehouses
Notebooks
Dataflows
Data pipelines
Semantic models
Reports

The DP-600 certification is aimed at data engineering and analytics specialists working within the Microsoft Fabric ecosystem, with an emphasis on planning analytics solutions, transforming data into reusable analytics assets, and performing data engineering and analytics tasks.

Plan, implement, and manage a solution for data analytics (10–15%)
Plan a data analytics environment
Identify requirements for a solution, including components, features, performance, and capacity stock-keeping units (SKUs)

Recommend settings in the Fabric admin portal

Choose a data gateway type

Create a custom Power BI report theme

Implement and manage a data analytics environment
Implement workspace and item-level access controls for Fabric items

Implement data sharing for workspaces, warehouses, and lakehouses

Manage sensitivity labels in semantic models and lakehouses

Configure Fabric-enabled workspace settings

Manage Fabric capacity

Manage the analytics development lifecycle
Implement version control for a workspace

Create and manage a Power BI Desktop project (.pbip)

Plan and implement deployment solutions

Perform impact analysis of downstream dependencies from lakehouses, data warehouses, dataflows, and semantic models

Deploy and manage semantic models by using the XMLA endpoint

Create and update reusable assets, including Power BI template (.pbit) files, Power BI data source (.pbids) files, and shared semantic models

Prepare and serve data (40–45%)
Create objects in a lakehouse or warehouse
Ingest data by using a data pipeline, dataflow, or notebook (see the ingestion and partitioning sketch after this list)

Create and manage shortcuts

Implement file partitioning for analytics workloads in a lakehouse

Create views, functions, and stored procedures

Enrich data by adding new columns or tables
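To make the ingestion and partitioning items above concrete, here is a minimal PySpark sketch for a Fabric notebook; the file path, table name, and partition column are hypothetical, and spark is the session object Fabric notebooks provide automatically. (Views, functions, and stored procedures are typically created separately with T-SQL through the SQL analytics endpoint.)

    # Minimal sketch: ingest a CSV from the lakehouse Files area into a
    # partitioned Delta table. Path, table, and column names are hypothetical.
    df = spark.read.option("header", True).csv("Files/raw/sales.csv")

    (df.write
       .mode("overwrite")
       .partitionBy("OrderYear")   # file partitioning for analytics workloads
       .format("delta")
       .saveAsTable("sales_raw"))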

Copy data
Choose an appropriate method for copying data from a Fabric data source to a lakehouse or warehouse

Copy data by using a data pipeline, dataflow, or notebook (see the copy sketch after this list)

Add stored procedures, notebooks, and dataflows to a data pipeline

Schedule data pipelines

Schedule dataflows and notebooks
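As one hedged illustration of the notebook option above, the fragment below copies rows from an existing Delta table (for example, one surfaced through a shortcut) into a destination table; both table names are hypothetical.

    # Minimal sketch: copy data between lakehouse Delta tables in a notebook.
    source_df = spark.read.table("staging_sales")   # hypothetical source, e.g. behind a shortcut
    (source_df.write
        .mode("append")                             # append preserves existing rows
        .format("delta")
        .saveAsTable("sales"))                      # hypothetical destination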

Transform data
Implement a data cleansing process

Implement a star schema for a lakehouse or warehouse, including Type 1 and Type 2 slowly changing dimensions

Implement bridge tables for a lakehouse or a warehouse

Denormalize data

Aggregate or de-aggregate data

Merge or join data

Identify and resolve duplicate data, missing data, or null values (see the cleansing sketch after this list)

Convert data types by using SQL or PySpark

Filter data
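Several of the items above (duplicates, missing values, type conversion, and filtering) can be sketched in a few lines of PySpark; the column and table names are hypothetical, and real cleansing rules depend on the data.

    from pyspark.sql import functions as F

    # Minimal cleansing sketch; column and table names are hypothetical.
    clean_df = (
        spark.read.table("sales_raw")
             .dropDuplicates(["OrderID"])                                   # resolve duplicate rows
             .fillna({"Quantity": 0})                                       # replace missing values
             .withColumn("Amount", F.col("Amount").cast("decimal(18,2)"))   # convert data types
             .filter(F.col("OrderDate").isNotNull())                        # filter out unusable rows
    )
    clean_df.write.mode("overwrite").format("delta").saveAsTable("sales_clean")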

Optimize performance
Identify and resolve data loading performance bottlenecks in dataflows, notebooks, and SQL queries

Implement performance improvements in dataflows, notebooks, and SQL queries

Identify and resolve issues with Delta table file sizes
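For the Delta file-size item just above, compacting many small Parquet files into fewer, larger ones is the usual remedy. A minimal sketch using the Delta Lake API available in Fabric Spark, with a hypothetical table name:

    from delta.tables import DeltaTable

    # Compact small files into larger ones to speed up reads.
    DeltaTable.forName(spark, "sales_clean").optimize().executeCompaction()

    # Equivalent Spark SQL command:
    spark.sql("OPTIMIZE sales_clean")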

Implement and manage semantic models (20–25%)
Design and build semantic models
Choose a storage mode, including Direct Lake

Identify use cases for DAX Studio and Tabular Editor 2

Implement a star schema for a semantic model

Implement relationships, such as bridge tables and many-to-many relationships

Write calculations that use DAX variables and functions, such as iterators, table filtering, windowing, and information functions

Implement calculation groups, dynamic format strings, and field parameters

Design and build a large format dataset

Design and build composite models that include aggregations

Implement dynamic row-level security and object-level security

Validate row-level security and object-level security

Optimize enterprise-scale semantic models
Implement performance improvements in queries and report visuals

Improve DAX performance by using DAX Studio

Optimize a semantic model by using Tabular Editor 2

Implement incremental refresh

Explore and analyze data (20–25%)
Perform exploratory analytics
Implement descriptive and diagnostic analytics

Integrate prescriptive and predictive analytics into a visual or report

Profile data
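For the profiling item above, a quick PySpark pass can surface summary statistics and null counts before deeper analysis begins; the table name is hypothetical.

    from pyspark.sql import functions as F

    df = spark.read.table("sales_clean")   # hypothetical table name

    # Summary statistics for numeric columns.
    df.describe().show()

    # Null count per column.
    df.select([F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]).show()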

Query data by using SQL
Query a lakehouse in Fabric by using SQL queries or the visual query editor

Query a warehouse in Fabric by using SQL queries or the visual query editor

Connect to and query datasets by using the XMLA endpoint
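One way to query a semantic model programmatically from a Fabric notebook is Semantic Link (sempy), which executes DAX against the model; the dataset, table, and measure names below are hypothetical, and this is only a sketch. Client tools such as SQL Server Management Studio, DAX Studio, and Tabular Editor also connect over the XMLA endpoint.

    import sempy.fabric as fabric   # Semantic Link; preinstalled on recent Fabric Spark runtimes

    # Minimal sketch: run a DAX query against a semantic model.
    # Dataset, table, and measure names are hypothetical.
    result = fabric.evaluate_dax(
        dataset="Sales Model",
        dax_string='EVALUATE SUMMARIZECOLUMNS(Dim_Date[Year], "Total Sales", [Total Sales])',
    )
    print(result.head())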

As a candidate for this exam, you should have subject matter expertise in designing, creating, and deploying enterprise-scale data analytics solutions.

The exam measures skills across four domains:
Plan, implement, and manage a solution for data analytics (10–15%)
Prepare and serve data (40–45%)
Implement and manage semantic models (20–25%)
Explore and analyze data (20–25%)

After you book, a confirmation message is sent to all participants, along with calendar invitations to help you schedule your commitments around the course. All course materials, and access to any required labs or platforms, are provided no later than one week before the course begins, giving you ample time to prepare and engage fully with the learning experience.

The training package includes all the materials and resources needed for a full learning experience, with detailed course content covering the topics above. Participants also receive a certificate of completion. Note that while the course fee covers all training materials, the certification examination fee is not included and must be purchased separately.

Questions About This Course?