Part 6: Introducing Deployment Stacks to Azure Data Factory

Introduction

This is part 6 of our series on Azure Data Factory CI/CD. This section will cover how to incorporate Azure Deployment Stacks into your Azure DevOps Pipelines.

Part 1

  1. Architecture and Scenario
  2. Creating resources in Azure
  3. Create Azure Storage Containers
  4. Create Azure Key Vaults
  5. Create Azure Data Factory: With Key Vault Access

Part 2

  1. Configure Azure Data Factory Source Control
  2. Construct Azure Data Factory Data Pipeline
  3. Publishing Concept for Azure Data Factory
  4. Configure Deployed Azure Resources

Part 3

  1. The YAML Pipeline Structure
  2. The Publish Process
  3. ARM Template Parameterization
  4. ADF ARM Template Deployment

Part 4

  1. How to use Azure DevOps Pipeline Templates

Part 5

  1. How to Deploy Linked Templates for Azure Data Factory

What are Deployment Stacks?

As per the MS Learn documentation, a Deployment Stack is:


An Azure deployment stack is a resource that enables you to manage a group of Azure resources as a single, cohesive unit. When you submit a Bicep file or an ARM JSON template to a deployment stack, it defines the resources that the stack manages. If a resource previously included in the template is removed, it will either be detached or deleted based on the specified actionOnUnmanage behavior of the deployment stack. Access to the deployment stack can be restricted using Azure role-based access control (Azure RBAC), similar to other Azure resources.

The TL;DR summary is that a deployment stack is an Azure resource that tracks what has been deployed as part of an ARM deployment. This allows us to destroy resources that are no longer part of the deployment.
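To make the detach-versus-delete distinction concrete, here is a minimal sketch of that choice as it appears on the CLI (the stack, resource group, and template names are placeholders):

    # Resources dropped from the template are detached: they remain in Azure but are no longer tracked
    az stack group create --name "adf-stack" --resource-group "rg-adf-dev" `
      --template-file ARMTemplateForFactory.json --action-on-unmanage detachAll `
      --deny-settings-mode none --yes

    # Resources dropped from the template are deleted: the behavior we want for ADF cleanup
    az stack group create --name "adf-stack" --resource-group "rg-adf-dev" `
      --template-file ARMTemplateForFactory.json --action-on-unmanage deleteAll `
      --deny-settings-mode none --yes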

How Does This Impact Data Factory?

When we are editing and building pipelines in our Data Factory, there is often a need to remove old Datasets, Linked Services, and/or Pipelines. If you are familiar with the ADF deployment process, then you are aware that we are deploying ARM templates in incremental mode.

This means that anything we remove from the Data Factory, such as a Linked Service, will still exist in the Data Factory's upper environments, which we typically shouldn't have portal access to. This is a concern because it violates Least Privilege Access principles: the Data Factory could retain access to resources it no longer needs, specifically if leveraging things like connection strings. Additionally, there is a risk that pipelines that are no longer maintained could be triggered by accident in upper environments.

Thus, if we have a way to remove resources that are no longer defined in our collaboration branch, we should use it!
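As a quick sanity check on what lingers, something like the following can list the pipelines and linked services an upper-environment factory still holds (a sketch; it assumes the Azure CLI datafactory extension is installed, and the factory and resource group names are placeholders):

    # One-time setup: the Data Factory commands live in a CLI extension
    az extension add --name datafactory

    # List what the upper-environment factory still contains
    az datafactory pipeline list --factory-name "adf-prod" --resource-group "rg-adf-prod" --query "[].name" --output tsv
    az datafactory linked-service list --factory-name "adf-prod" --resource-group "rg-adf-prod" --query "[].name" --output tsv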

How to Implement?

So, Deployment Stacks may sound great, but how hard are they to incorporate into our pipelines? Well, if you've been following along to this point and leveraging YAML Templates, not hard at all! If your pipelines aren't leveraging YAML Templates, that's alright, as the process isn't all that complicated.

First, we have to understand that implementing Deployment Stacks really just means using a different deployment command. In our previous posts we leveraged AzureResourceManagerTemplateDeployment@3. For Deployment Stacks there is no ADO task available, so we will leverage the Azure CLI. This can also be done with PowerShell.
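For reference, the task being replaced looked roughly like this in the earlier posts (a sketch; the service connection, resource group, and variable names are placeholders):

    - task: AzureResourceManagerTemplateDeployment@3
      displayName: deploy ADF ARM template (incremental)
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: AzureDevServiceConnection
        subscriptionId: '$(subscriptionId)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: ResourceGroupName
        location: '$(location)'
        templateLocation: 'Linked artifact'
        csmFile: ARMTemplateForFactory.json
        csmParametersFile: ARMTemplateParametersForFactory.json
        deploymentMode: 'Incremental'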

By changing the deployment command, we will be telling the Azure Resource Manager to deploy our resources like we were doing before AND create a deployment stack to track them.

This will be achieved by replacing our AzureResourceManagerTemplateDeployment task with an Azure CLI task that executes the `az stack group create` command. Here is what the full, expanded task would look like for a single environment with just the minimum necessary inputs.

    - task: AzureCLI@2
      displayName: create deployment stack
      inputs:
        azureSubscription: AzureDevServiceConnection
        scriptType: 'pscore'
        scriptLocation: 'inlineScript'
        inlineScript: >
          az stack group create
          --name "DeploymentStackResourceName"
          --action-on-unmanage deleteAll
          --deny-settings-mode denyDelete
          --resource-group "ResourceGroupName"
          --template-file ARMTemplateForFactory.json
          --parameters "ParameterFile"
          --yes

If you want to follow along with a template, please check out the task template I have created in my YAML Template Repository. Furthermore, building on our article leveraging deployment templates across environments, I have updated our ADF deployment job template to now deploy via stacks.
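If you go the template route, the calling pipeline only needs to pass the environment-specific values. A hypothetical call could look like this (the template path and parameter names are illustrative, not necessarily the ones used in my repository):

    - template: steps/deploy-adf-stack.yml
      parameters:
        serviceConnection: AzureDevServiceConnection
        resourceGroupName: ResourceGroupName
        deploymentStackName: DeploymentStackResourceName
        templateFile: ARMTemplateForFactory.json
        parametersFile: ARMTemplateParametersForFactory.json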

One note I will call out here: have no fear, the `create` command effectively performs an upsert, so it will create the stack if it doesn't exist and update it if it already does. Thus we can keep the command as `create`.

End Result

To spare the details, I cloned the existing pipeline 'pl_copy_data' and named the clone 'delete-me'. This was on purpose, as I wanted to see if it would delete the underlying Linked Services or just the pipeline. To outline the steps up to this point:

  1. Created a new pipeline from 'pl_copy_data' called 'delete-me'
  2. Deployed ADF with the new pipeline
  3. Removed the 'delete-me' pipeline from my git-backed ADF instance
  4. Redeployed my ADF instance

After these steps I now see the following under the Deployment Stacks section of the Resource Group blade:

[Screenshot: Deployment Stacks view in the Resource Group blade]

We can now see that the /factories/pipelines resource called 'delete-me' is showing in a deleted state. Just to confirm, I launched the ADF instance and looked at which pipelines are available to it:

[Screenshot: pipeline list in the ADF instance]

This confirms that the delete-me pipeline has been successfully removed and, in addition, that pl_copy_data and its Linked Services are still intact.
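If you prefer the CLI over the portal for this check, the stack can also be inspected directly; the output shows the resources the stack currently manages (stack and resource group names are placeholders):

    # List the deployment stacks in the resource group
    az stack group list --resource-group "ResourceGroupName" --output table

    # Show a specific stack, including the resources it currently manages
    az stack group show --name "DeploymentStackResourceName" --resource-group "ResourceGroupName" --output json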

Conclusion

By introducing Deployment Stacks into our ADF CI/CD pipelines, we now have a way to automatically remove resources that are no longer being leveraged by the Data Factory via our existing CI/CD processes. This is a big step toward cleaning up and securing one's Azure Data Factory environment.

Please be sure to check out any of the blogs in our Unlock the Power of Azure Data Factory: A Guide to Boosting Your Data Ingestion Process series and our series on YAML Pipelines, as well as TheYAMLPipelineOne on GitHub for additional YAML Pipeline references.
