Fabric now lets you transform data in your Data Warehouses by using dbt—all within the Fabric web experience. No external adapters, no CLI, no Airflow. Just SQL, a warehouse, and a streamlined UI.
This walkthrough demonstrates how to use dbt directly within Fabric to transform data in a warehouse—without external tools. This approach minimizes setup complexity, lets you use existing compute resources, and makes enterprise-scale data modeling more accessible to the SQL community.
In this walkthrough, data engineers and analysts learn how to:
- Import a Sample Data Warehouse in Fabric
- Transform your data with dbt items
- Run and validate your models—all natively in Fabric.
Prerequisites
Before you create a dbt job in Fabric, make sure your environment is set up correctly:
- Enable dbt jobs
- Create a workspace if you don't have one.
- Set up a Fabric Data Warehouse if you don't have one.
- Set permissions and access
Supported commands
Fabric supports the following core dbt commands directly from the dbt job interface.
| Command | Description |
|---|---|
| `dbt build` | Builds all models, seeds, and tests in the project. |
| `dbt run` | Runs all SQL models in dependency order. |
| `dbt seed` | Loads CSV files from the `seeds/` directory. |
| `dbt test` | Runs schema and data tests defined in `schema.yml`. |
| `dbt compile` | Generates compiled SQL without running transformations. |
| `dbt snapshot` | Captures and tracks slowly changing dimensions over time. |
You can also selectively run or exclude specific models by using selectors.
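For example, dbt's node selection syntax lets you target a subset of the project; the model and tag names below are hypothetical:

```shell
# Run one model plus everything downstream of it
dbt run --select stg_orders+

# Build everything except the models in the staging folder
dbt build --exclude staging

# Run only models tagged "nightly"
dbt run --select tag:nightly
```

The `+` graph operator includes a model's descendants; placing it before the name (`+stg_orders`) includes its ancestors instead.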
Create a dbt job
Start building transformations by creating a dbt job item in your Fabric workspace.
1. Go to your Fabric workspace.
2. Select + New item, then search for and select dbt job from the item creation menu.
3. Enter a name and select a location.
4. Choose the target Fabric Data Warehouse connection.
5. Configure job parameters and save the new dbt job item.
Open the dbt job to view its file structure, configure settings, and run dbt commands directly from the Fabric UI.
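Within that file structure, a model is simply a SQL file that uses Jinja to reference other objects. A minimal sketch, assuming hypothetical file and table names:

```sql
-- models/stg_orders.sql (hypothetical model)
-- ref() resolves the dependency so dbt builds models in the right order
select
    order_id,
    customer_id,
    order_date
from {{ ref('raw_orders') }}
where order_id is not null
```

Running `dbt run` materializes this model in the target warehouse according to its configured materialization (view or table).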
Schedule dbt jobs
Automate dbt job runs by using the built-in schedule feature to refresh models, run tests, or keep data pipelines up to date.
1. Open your dbt job in Fabric.
2. Select the Schedule tab in the top panel.
3. Select Add schedule to configure a new scheduled run:
   - Repeat: Choose how often to run the job (for example, by the minute, hourly, daily, weekly).
   - Interval: Set the frequency (for example, every 15 minutes).
   - Start date and time: When the schedule should begin.
   - End date and time: (Optional) When the schedule should stop.
   - Time zone: Select your preferred time zone for scheduling.
4. Select Save to activate the schedule.
Monitor dbt jobs
Fabric provides several tools to help you monitor and validate your dbt jobs:
Visual aids
- Lineage View: Generates a dependency graph of your models, showing how data flows between sources and transformations.
- Compiled SQL View: Displays the rendered SQL code that dbt runs, so you can debug or optimize queries.
- Run Results Panel: Shows model-level success, failure, and execution time for each dbt command.
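As a sketch of what the Compiled SQL View shows, a `{{ ref() }}` call in a model is rendered into a fully qualified object name; the warehouse and schema names below are assumptions:

```sql
-- Model source:
select * from {{ ref('stg_orders') }}

-- Compiled SQL rendered by dbt:
select * from my_warehouse.dbo.stg_orders
```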
Monitoring and troubleshooting
- Run Summary: Shows the total models run, runtime, and success status.
- Error Logs: Provide stack traces and query payloads for troubleshooting.
- Download Logs: Export detailed logs or payloads for offline analysis.
Best practices
- Keep your models modular and test-driven for easier debugging and faster runs.
- To optimize performance, avoid long dependency chains and use well-partitioned transformations.
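Keeping models test-driven can be as simple as declaring generic column tests in `schema.yml`, which then run with `dbt test` or `dbt build`; the model and column names here are assumptions:

```yaml
# models/schema.yml (hypothetical)
version: 2
models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
```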