Hi Kevin Burke,
To kick off a Self-Hosted Integration Runtime job (your “hybrid worker” activity) that runs a .bat file on-premises, you only need the ability to execute a pipeline in ADF. The quickest way to get that is:
• Assign the built-in Data Factory Contributor role (or the general Contributor role) on the resource group (or directly on the Data Factory) where your ADF lives.
– This role already includes all the Microsoft.DataFactory actions you need (create/run pipelines, list triggers, read linked services/integration runtimes, etc.).
– If you give it at the RG level, you don’t need to assign anything else at the factory level.
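Concretely, the RG-level assignment from the first bullet can be sketched with the Azure CLI. The subscription, resource group, and principal ID below are placeholders (my assumptions, not values from this thread), and the command is echoed rather than executed so the sketch runs without an az login — drop the leading `echo` to perform the assignment for real:

```shell
# Placeholders throughout -- substitute your own IDs.
SUBSCRIPTION_ID="<subscription-id>"
RESOURCE_GROUP="<resource-group>"
PRINCIPAL_ID="<object-id-of-spn-or-managed-identity>"

# RG-level scope: one assignment covers every factory in the resource group.
SCOPE="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}"

# Printed rather than run so the sketch works without an Azure login;
# remove the leading 'echo' to actually create the assignment.
echo az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Data Factory Contributor" \
  --scope "$SCOPE"
```

To scope it to a single factory instead, append `/providers/Microsoft.DataFactory/factories/<factory-name>` to the scope.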
If you’d rather lock it down so the identity can only run pipelines (and nothing else), you can create a custom role scoped to the factory that grants just the “createRun” permission:
- Grab the built-in role JSON as a starting point:
  Get-AzRoleDefinition -Name "Data Factory Contributor"
- Clear out everything except:
  - Microsoft.DataFactory/factories/pipelines/createRun/action
  - (Optionally) Microsoft.DataFactory/factories/read and …/list so it can see the pipelines
- Assign that new custom role at your factory’s scope
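Putting those steps together, the trimmed-down role definition might look like the JSON below. The role name, the assignable scope, and the extra read actions are my assumptions — adjust them before use. The snippet writes the file you would hand to New-AzRoleDefinition -InputFile and sanity-checks that it parses:

```shell
# Hypothetical custom role -- the name, read actions, and scope below are
# placeholders/assumptions, not values from this thread.
cat > adf-pipeline-runner.json <<'EOF'
{
  "Name": "ADF Pipeline Runner (custom)",
  "IsCustom": true,
  "Description": "Can start pipeline runs and read the factory/pipelines so they are visible.",
  "Actions": [
    "Microsoft.DataFactory/factories/read",
    "Microsoft.DataFactory/factories/pipelines/read",
    "Microsoft.DataFactory/factories/pipelines/createRun/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory>"
  ]
}
EOF

# Sanity-check the JSON before handing it to:
#   New-AzRoleDefinition -InputFile ./adf-pipeline-runner.json
python3 -m json.tool adf-pipeline-runner.json > /dev/null && echo "role JSON is valid"
```

Once the role exists, assign it to the identity at the factory scope exactly as you would a built-in role.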
Full steps for creating a custom role, along with general reading on the built-in roles for ADF, are in Microsoft’s Azure RBAC and Data Factory documentation.
Hope that helps! If you need to tighten permissions further or if you’re seeing access-denied errors, let me know:
• Which identity is actually running the pipeline (SPN, managed identity)?
• Where have you assigned your current roles (subscription, RG, factory)?
• Are you seeing any specific error messages when you trigger the job?
Note: This content was drafted with the help of an AI system. Please verify the information before relying on it for decision-making.