
Azure Quick Review (azqr): A Practical Overview

If you’re managing resources in Azure, you’ve likely faced challenges around optimizing and securing your cloud environment. Azure Quick Review (azqr), an open-source tool from Microsoft, is a straightforward solution that can help you quickly assess your Azure environment and highlight potential issues. Here’s why azqr is so useful for your day-to-day cloud operations.

What is Azure Quick Review (azqr)?

Azure Quick Review is a command-line tool designed to simplify the assessment of your Azure subscriptions and resources. It’s available on GitHub (https://github.com/Azure/azqr) and provides an automated way to perform a high-level analysis of your Azure infrastructure. The main goal is to offer you insights into security, compliance, performance, and cost-related aspects of your Azure resources, all in a digestible format.

Why Use Azure Quick Review?

Managing Azure resources manually can become cumbersome, especially when your cloud footprint is growing. Azure Quick Review offers several practical benefits:

  1. Automated Assessments

azqr automates the assessment process for your Azure environment. Instead of manually checking each resource’s configuration, you can use azqr to perform comprehensive evaluations in minutes.

It covers key Azure resources like virtual machines, SQL databases, storage accounts, and more.

  2. Consistent, Standardized Reviews

One of the main issues with manual audits is inconsistency. Azure Quick Review brings a standardized approach to your resource analysis. It ensures that each assessment follows the same set of guidelines and best practices, which is particularly helpful when working in teams.

  3. Focus on Security and Compliance

Azure Quick Review evaluates the security posture of your resources by flagging configuration issues. It checks for vulnerabilities, like public endpoints where they shouldn’t be, or missing network security groups (NSGs).

You can use azqr to ensure your deployments comply with organizational policies or regulatory requirements. Its output can be a handy guide to tightening security gaps in your Azure setup.

  4. Cost Optimization Insights

As your Azure usage grows, so does the likelihood of mismanagement and unnecessary costs. azqr highlights resources that could be over-provisioned or underutilized, offering you potential cost-saving opportunities.

The report can help identify expensive configurations and unused resources that can be scaled back or shut down.

  5. Quick, Readable Reports

Azure Quick Review outputs the analysis in a clear and accessible format. The results include color-coded indications of areas needing attention and a summary that prioritizes key actions.

Reports generated by azqr are ideal for sharing with stakeholders or for keeping as a quick reference.

  6. Easy to Set Up and Use

Installation is simple. You can download the azqr binary from the project’s GitHub releases page (packages are also available for common package managers), and from there it’s easy to integrate into your existing workflows. If you already use command-line tools for Azure, azqr feels like a natural extension of that.

How to Get Started with azqr

Getting started is straightforward:

  • Installation: First, download the latest azqr release from the GitHub repository (or install it through a package manager where one is available for your platform).
  • Running the Tool: You can use azqr commands to scan specific subscriptions or resource groups, as shown in the example after this list. The CLI provides flexibility to focus on exactly what you need.
  • Reviewing Results: Once complete, you get a summary report highlighting potential security gaps, compliance issues, and opportunities to optimize your resources.
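
For example, scanning a whole subscription or a single resource group looks like this (the flag names below are taken from the project’s README; run azqr scan --help to confirm them for the version you have installed):

# Scan everything in one subscription
azqr scan -s <subscription-id>

# Limit the scan to a single resource group
azqr scan -s <subscription-id> -g <resource-group-name>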

For more details, check out the GitHub page: https://github.com/Azure/azqr

When Should You Use azqr?

Azure Quick Review is particularly useful in several scenarios:

  • Periodic Audits: Use azqr to conduct regular reviews of your Azure environment to ensure compliance and security standards are up to date.
  • Pre-Deployment Checks: Before making new services live, azqr can be used to review configurations and spot potential issues.
  • Cost Management Exercises: Regularly run azqr to help with cost audits, identifying waste or unnecessary spending.

Report

The generated report is an Excel workbook with Recommendations, ImpactedResources, ResourceTypes, Inventory, Advisor, Defender, Costs and Pivot Table tabs.

Summary

Azure Quick Review (azqr) is an invaluable tool for Azure users who want to stay on top of resource management without spending hours on manual reviews. It’s straightforward to use, delivers consistent insights, and helps you optimize both costs and security across your cloud environment. By incorporating azqr into your routine, you can gain a clearer understanding of your Azure resources and keep your deployments running efficiently and securely.



Deploying a Log Analytics Workspace Using Azure Verified Modules

Deploying a Log Analytics Workspace using Azure Verified Modules (AVM) with Bicep is a streamlined process that leverages the standardized practices of the Azure infrastructure-as-code framework. Here’s a concise guide on how to set up your workspace using the AVM project on GitHub:

The following is example Bicep code; I will let you create the resource group yourself and figure out the tags.

// Log Analytics Parameters
param logAnalyticsSkuName string = 'PerGB2018'
var logAnalyticsName = 'law-demo'

// Name of the (pre-created) resource group plus location and tags - adjust to suit your environment
param rgLogAnalyticsDemo string
param location string = resourceGroup().location
param tags object = {}

@description('Log Analytics Daily Quota in GB. Default: 1GB')
param logAnalyticsDailyQuotaGb int = 1

@description('Number of days data will be retained for.')
param logAnalyticsDataRetention int = 365

//MARK: azureLogAnalytics
@description('Deploy Azure Log Analytics')
module logAnalytics 'br/public:avm/res/operational-insights/workspace:0.7.0' = {
  scope: resourceGroup(rgLogAnalyticsDemo)
  name: 'logAnalyticsDemo'
  params: {
    name: logAnalyticsName
    skuName: logAnalyticsSkuName
    location: location
    dailyQuotaGb: logAnalyticsDailyQuotaGb
    dataRetention: logAnalyticsDataRetention
    tags: tags
  }
}

The use of Azure Verified Modules ensures that your infrastructure as code practices align with Microsoft’s standards, simplifying maintenance and scalability concerns. This approach not only streamlines the deployment process but also enhances the security and reliability of your Azure resources.
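
To deploy the module above, a resource-group-scoped deployment like the following can be used (assuming the file is saved as main.bicep and the resource group you pass in already exists):

az deployment group create \
  --resource-group <your-resource-group> \
  --template-file main.bicep \
  --parameters rgLogAnalyticsDemo=<law-resource-group>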

For more detailed instructions and to view the exact Bicep configurations, visit the GitHub repository pages for Azure Verified Modules and Bicep Registry Modules. These resources will provide you with up-to-date code examples and further details on customizing your deployment.



Implementing Azure SQL Server Firewall Rules with Bicep and Azure Verified Modules

When managing Azure resources, ensuring your SQL server is secure from unauthorized access is a priority. One way to secure your Azure SQL server is by implementing firewall rules. In this post, I’ll guide you through using Bicep and the Azure Verified Modules from GitHub to set up firewall rules for an Azure SQL server.

Example Bicep:

@description('Deploy Azure SQL Server')
module createsqlServer '../sql/server/main.bicep' = {
  scope: resourceGroup(rgSQL)
  name: 'sqlServer-${environmentName}'
  params: {
    name: 'sql-demoserver'
    administratorLogin: administratorLogin
    administratorLoginPassword: administratorLoginPassword
    managedIdentities: {
      systemAssigned: false
      userAssignedResourceIds: [
        createManagedIdentity.outputs.resourceId
      ]
    }
    primaryUserAssignedIdentityId: createManagedIdentity.outputs.resourceId
    location: location
    tags: tags
    databases: [
      {
        name: 'demodb1'
        skuName: 'ElasticPool'
        skuTier: 'GeneralPurpose'
        capacity: 0
        maxLogSizeBytes: 34359738368
        compatibilityLevel: 120
        elasticPoolId: createSqlServerElasticPool.outputs.resourceId
      }
    ]
    firewallRules: [
      {
        name: '<database firewall rule 1>'
        startIpAddress: '<enter ip address here>'
        endIpAddress: '<enter ip address here>'
      }
      {
        name: '<database firewall rule 2>'
        startIpAddress: '<enter ip address here>'
        endIpAddress: '<enter ip address here>'
      }
    ]
  }
}

This Bicep file defines two simple firewall rules that allow traffic from specific IP addresses. Be sure to adjust the startIpAddress and endIpAddress values to fit your security requirements. This example doesn’t show the code that creates the elastic pool or the managed identity.
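
If you would rather not hardcode the addresses in the template, the rules can also be supplied as a parameter and passed straight through to the module’s firewallRules property. Here is a minimal sketch with placeholder names and documentation-range addresses:

@description('SQL Server firewall rules - replace the placeholder values with your own ranges')
param sqlFirewallRules array = [
  {
    name: 'allow-office-network'
    startIpAddress: '203.0.113.10'
    endIpAddress: '203.0.113.20'
  }
]

// ...and inside the module params block:
// firewallRules: sqlFirewallRules

This keeps environment-specific values in a parameter file instead of the template itself.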

This example serves as a foundational guide to get you started with automated deployment of firewall rules using Infrastructure as Code (IaC) practices with Bicep.



Azure Firewall Rule Collection Groups: Managing Windows Updates and Time Server Sync with Microsoft’s Verified Modules

Introduction

Azure Firewall is a powerful cloud-native service that provides network security across your Azure environment. Managing traffic in and out of your virtual networks requires a precise and structured approach, and Azure Firewall helps achieve this with rule collection groups and rules. In this guide, we will explore how to use the Azure Verified Modules GitHub project to enable Windows Updates and Time Server synchronization by configuring Azure Firewall rule collection groups. For more details, see the official Microsoft documentation on Azure Firewall rule collections.

What Are Azure Firewall Rule Collection Groups?

Azure Firewall’s rules are organized into rule collection groups and rules. This structure helps you maintain and manage the settings effectively.

  • Rule Collection Groups: These are containers for organizing rule collections by priority. You can think of them as a way to categorize rules that have similar purposes.
  • Rule Collections: Inside a group, rule collections are the actual sets of rules that Azure Firewall enforces. Rule collections are defined based on the type of traffic they manage—network rules, application rules, or NAT rules.
  • Rules: Finally, within each rule collection, you define the specific rules that manage traffic. These rules control things like which IP addresses can communicate through which ports and protocols.

When working with Azure Firewall, setting up rules for basic functionalities like Windows Updates or Time Server synchronization may seem straightforward, but it can require detailed settings that can benefit from automation and consistency. This is where Microsoft’s Azure Verified Modules come in.

Azure Verified Modules GitHub Project

Microsoft provides the Azure Verified Modules GitHub project to simplify and standardize Azure infrastructure deployments. These modules are built to help you achieve common tasks in Azure using well-defined Infrastructure as Code (IaC) practices. They leverage tools like Bicep and ARM templates, providing verified configurations that are reliable, repeatable, and secure. You can also refer to the Azure Firewall documentation for a broader understanding of firewall management.

We will use this project to configure Azure Firewall to allow Windows Updates and NTP (Network Time Protocol) server synchronization.

Setting Up Azure Firewall Rules for Windows Updates and Time Sync

To enable Windows Updates and time server synchronization, you need to create specific rule collection groups and rules. These rules will allow Azure virtual machines to reach Microsoft’s update servers and time servers.

Note: you can either clone the repo or reference the public Bicep module registry that Azure provides. Below is how to clone the repo.

Step 1: Cloning the Verified Modules Repository

Start by cloning the Azure Verified Modules repository:

# Clone the repository
git clone https://github.com/Azure/Azure-Verified-Modules

Step 2: Understanding the Module Structure

In the repository, you will find modules for managing Azure Firewall rule collections. The folder structure helps you locate different types of rule collections (such as Network Rules, Application Rules, and NAT Rules).

For enabling Windows Updates, we will focus on Application Rule Collections. Application rules are ideal here because they define the fully qualified domain names (FQDNs) that outbound HTTP/S traffic is allowed to reach. Time synchronization is slightly different: NTP runs over UDP port 123, so it is handled with a network rule that targets the time server’s FQDN (which requires the firewall policy’s DNS proxy to be enabled), as you will also see in the real-world example later in this post.

Step 3: Using the Application Rules Module

Navigate to the module related to Firewall Application Rules. The module you need allows Azure Firewall to be configured with outbound rules for reaching well-known domains, like Windows Update servers and NTP servers. For more details, you can refer to the official documentation on Azure Firewall application rules.

The basic configuration requires defining a rule collection group that contains an application rule collection for the update domains and a network rule collection for the time server.

Example Bicep File

Below is an example Bicep configuration that adds a rule collection group to an existing Azure Firewall policy, allowing Windows Updates and NTP. Refer to the Azure Bicep documentation for more information on using Bicep to manage Azure resources.

param firewallPolicyName string
param windowsUpdateDomains array = [
  'windowsupdate.microsoft.com'
  '*.windowsupdate.microsoft.com'
  '*.update.microsoft.com'
  '*.delivery.mp.microsoft.com'
]
param ntpDomains array = [
  'time.windows.com'
]

// Reference the existing firewall policy attached to your Azure Firewall
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2024-01-01' existing = {
  name: firewallPolicyName
}

// Rule collection group containing the Windows Update and NTP rules
resource updateAndTimeSyncRules 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2024-01-01' = {
  name: 'WindowsUpdateAndTimeSync'
  parent: firewallPolicy
  properties: {
    priority: 200
    ruleCollections: [
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        name: 'Allow-Windows-Updates'
        priority: 200
        action: {
          type: 'Allow'
        }
        rules: [
          {
            ruleType: 'ApplicationRule'
            name: 'Allow-Windows-Updates'
            sourceAddresses: [
              '*'
            ]
            targetFqdns: windowsUpdateDomains
            protocols: [
              {
                protocolType: 'Http'
                port: 80
              }
              {
                protocolType: 'Https'
                port: 443
              }
            ]
          }
        ]
      }
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        name: 'Allow-NTP-Sync'
        priority: 300
        action: {
          type: 'Allow'
        }
        rules: [
          {
            // NTP uses UDP 123, so a network rule (with DNS proxy enabled on the policy) is used here
            ruleType: 'NetworkRule'
            name: 'Allow-NTP-Sync'
            ipProtocols: [
              'UDP'
            ]
            sourceAddresses: [
              '*'
            ]
            destinationFqdns: ntpDomains
            destinationPorts: [
              '123'
            ]
          }
        ]
      }
    ]
  }
}

Explanation of the Bicep Configuration

  • Parameters: The template defines parameters for the firewall policy name and the domains for Windows Updates and NTP.
  • Firewall Policy Resource: The firewall policy is referenced as an existing resource, meaning the template expects the Azure Firewall and its policy to already be deployed.
  • Rule Collection Group: This group contains an application rule collection that allows the Windows Update FQDNs over HTTP/S, and a network rule collection that allows NTP traffic to time.windows.com on UDP port 123. Because the NTP rule targets a destination FQDN, DNS proxy must be enabled on the firewall policy.

Step 4: Deploying the Bicep File

To deploy this Bicep file, use the following Azure CLI command:

az deployment group create \
  --resource-group <YourResourceGroupName> \
  --template-file <PathToYourBicepFile>.bicep

Replace <YourResourceGroupName> and <PathToYourBicepFile> with your specific values. This command will deploy the rule collection group to your existing Azure Firewall policy.
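
If you want to preview the changes first, the what-if operation shows what would be created or modified without deploying anything:

az deployment group what-if \
  --resource-group <YourResourceGroupName> \
  --template-file <PathToYourBicepFile>.bicep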

Verifying the Configuration

Once the deployment is complete, it is important to verify that the Azure Firewall is properly configured to allow the necessary traffic. You can also refer to the Azure Firewall verification steps in the official Microsoft Docs for additional troubleshooting and validation methods.

  • Windows Updates: You can verify if Windows Updates are working by manually triggering an update check on a virtual machine. If configured correctly, the VM should reach Microsoft’s update servers without issues.
  • NTP Synchronization: For NTP, you can verify the time sync status by running the following command on a Windows VM: w32tm /query /status. This command should display the synchronization details if the connection to the NTP server is successful.

Best Practices for Managing Firewall Rules

When managing Azure Firewall rules, it is crucial to follow a few best practices to keep your infrastructure secure and manageable. Microsoft provides a set of best practices that you can find in their official documentation:

  • Organize Rules by Function: Group rules by their function—for example, rules for Windows Updates, security services, etc. This makes it easier to manage and understand.
  • Use Appropriate Priorities: Rule priorities determine the order in which they are evaluated. Ensure that more specific rules have higher priorities (lower numerical values).
  • Minimize Wildcards: Avoid using wildcards (*) in domain names as much as possible. While convenient, they can open unnecessary access and compromise security.
  • Test Thoroughly: Always test new rules in a controlled environment before applying them in production. This helps to avoid disruptions.

Real-World Example

The example below comes from a real deployment: a Premium firewall policy with threat intelligence and intrusion detection set to Deny, followed by rule collection groups covering KMS activation, traffic from the Application Gateway subnet, Windows and Ubuntu updates, Azure Monitor, and time synchronization.

param firewallPolicyName string
param appGatewaySubnetAddress string 
param vmSubnetAddress string
param location string

resource updateAzureFirewallPolicy 'Microsoft.Network/firewallPolicies@2024-01-01' = {
  name: firewallPolicyName
  location: location
  properties: {
    sku: {
      tier: 'Premium'
    }
    threatIntelMode: 'Deny'
    threatIntelWhitelist: {
      fqdns: []
      ipAddresses: []
    }
    dnsSettings: {
      servers: []
      enableProxy: true
    }
    sql: {
      allowSqlRedirect: false
    }
    intrusionDetection: {
      mode: 'Deny'
      configuration: {
        signatureOverrides: []
        bypassTrafficSettings: []
        privateRanges: [
          '172.16.0.0/12'
         ]
      }
    }
  }
}

resource createAzureFireWallServerActivationRuleCollectionGroup 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2024-01-01' = {
  parent: updateAzureFirewallPolicy
  name: 'ServerActivation'
  properties: {
    priority: 2900
    ruleCollections: [
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        action: {
          type: 'Allow'
        }
        rules: [
          {
            ruleType: 'NetworkRule'
            name: 'KMS'
            ipProtocols: [
              'TCP'
            ]
            sourceAddresses: [
              appGatewaySubnetAddress
            ]
            sourceIpGroups: []
            destinationAddresses: []
            destinationIpGroups: []
            destinationFqdns: [
              'kms.core.windows.net'
            ]
            destinationPorts: [
              '1688'
            ]
          }
          {
            ruleType: 'NetworkRule'
            name: 'AppGatewaytoCustomer'
            ipProtocols: [
              'TCP'
            ]
            sourceAddresses: [
              appGatewaySubnetAddress
            ]
            sourceIpGroups: []
            destinationAddresses: [
              vmSubnetAddress
            ]
            destinationIpGroups: []
            destinationFqdns: []
            destinationPorts: [
              '8080'
              '443'
            ]
          }
          {
            ruleType: 'NetworkRule'
            name: 'AllowWindowsTimeServer'
            ipProtocols: [
              'UDP'
            ]
            sourceAddresses: [
              '*'
            ]
            sourceIpGroups: []
            destinationAddresses: []
            destinationIpGroups: []
            destinationFqdns: [
              'time.windows.com'
            ]
            destinationPorts: [
              '123'
            ]
          }
          {
            ruleType: 'NetworkRule'
            name: 'AllowAzureBackupAllowMonitor'
            ipProtocols: [
              'TCP'
            ]
            sourceAddresses: [
              '*'
            ]
            sourceIpGroups: []
            destinationAddresses: [
              'AzureBackup'
              'AzureMonitor'
            ]
            destinationIpGroups: []
            destinationFqdns: []
            destinationPorts: [
              '*'
            ]
          }
          {
            ruleType: 'NetworkRule'
            name: 'msedge.api.cdp.microsoft.com'
            ipProtocols: [
              'TCP'
            ]
            sourceAddresses: [
              '*'
            ]
            sourceIpGroups: []
            destinationAddresses: []
            destinationIpGroups: []
            destinationFqdns: [
              'msedge.api.cdp.microsoft.com'
            ]
            destinationPorts: [
              '*'
            ]
          }
          {
            ruleType: 'NetworkRule'
            name: 'AllowLinuxTimeServer'
            ipProtocols: [
              'UDP'
            ]
            sourceAddresses: [
              '*'
            ]
            sourceIpGroups: []
            destinationAddresses: []
            destinationIpGroups: []
            destinationFqdns: [
              'ntp.ubuntu.com'
            ]
            destinationPorts: [
              '*'
            ]
          }
        ]
        name: 'ServerActivation'
        priority: 2900
      }
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        action: {
          type: 'Allow'
        }
        rules: [
          {
            ruleType: 'ApplicationRule'
            name: 'Ubuntu Updates'
            protocols: [
              {
                protocolType: 'Http'
                port: 80
              }
              {
                protocolType: 'Https'
                port: 443
              }
            ]
            fqdnTags: []
            webCategories: []
            targetFqdns: [
              '*.ubuntu.com'
            ]
            targetUrls: []
            terminateTLS: false
            sourceAddresses: [
              appGatewaySubnetAddress
            ]
            destinationAddresses: []
            sourceIpGroups: []
            httpHeadersToInsert: []
          }
          {
            ruleType: 'ApplicationRule'
            name: 'Windows Updates'
            protocols: [
              {
                protocolType: 'Http'
                port: 80
              }
              {
                protocolType: 'Https'
                port: 443
              }
            ]
            fqdnTags: [
              'WindowsUpdate'
              'WindowsDiagnostics'
            ]
            webCategories: []
            targetFqdns: []
            targetUrls: []
            terminateTLS: false
            sourceAddresses: [
              appGatewaySubnetAddress
            ]
            destinationAddresses: []
            sourceIpGroups: []
            httpHeadersToInsert: []
          }
          {
            ruleType: 'ApplicationRule'
            name: 'Azure Monitor'
            protocols: [
              {
                protocolType: 'Http'
                port: 8080
              }
              {
                protocolType: 'Https'
                port: 443
              }
            ]
            fqdnTags: []
            webCategories: []
            targetFqdns: [
              '*.azure.com'
              '*.windows.net'
            ]
            targetUrls: []
            terminateTLS: false
            sourceAddresses: [
              appGatewaySubnetAddress
            ]
            destinationAddresses: []
            sourceIpGroups: []
            httpHeadersToInsert: []
          }
        ]
        name: 'LinuxUpdates'
        priority: 3000
      }
    ]
  }
}

resource createAzureFireWallTimeSyncRuleCollectionGroup 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2024-01-01' = {
  parent: updateAzureFirewallPolicy
  name: 'TimeSync'
  properties: {
    priority: 2000
    ruleCollections: [
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        action: {
          type: 'Allow'
        }
        rules: [
          {
            ruleType: 'NetworkRule'
            name: 'TimeSync'
            ipProtocols: [
              'Any'
            ]
            sourceAddresses: [
              appGatewaySubnetAddress
            ]
            sourceIpGroups: []
            destinationAddresses: [
              '*'
            ]
            destinationIpGroups: []
            destinationFqdns: []
            destinationPorts: [
              '123'
            ]
          }
        ]
        name: 'TimeSync'
        priority: 2000
      }
    ]
  }
  dependsOn:[
    createAzureFireWallServerActivationRuleCollectionGroup
  ]
}

Conclusion

Configuring Azure Firewall with rule collection groups and rules for enabling Windows Updates and time server synchronization is an important part of managing Windows workloads in Azure. By leveraging Microsoft’s Azure Verified Modules from GitHub, you can automate the deployment of these configurations, ensuring a consistent and reliable approach.

Using the example Bicep script above, you can quickly implement rules that allow your VMs to stay updated and correctly synchronized with Microsoft’s time servers, all while maintaining a secure and organized firewall configuration. Azure Firewall, combined with the automation provided by these verified modules, makes network management simpler and more effective.



Deploy Azure Web Apps for Windows and Linux using Bicep!

When setting up your infrastructure in Azure, using the Azure Verified Modules can streamline the creation of any Azure resource, such as Azure Web Apps for Windows and Linux with App Service Plans. This post guides you through the code for doing just that; I leave it to you to create the parameters and fill them in 🙂
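
If you want a starting point for those parameters, the declarations referenced by the modules below could look roughly like this (every value is a placeholder for illustration, and logAnalytics, managedIdentity and resourceGroupArray are assumed to be defined elsewhere in the full template):

param deployWebApps bool = true
param customerName string = 'contoso'      // placeholder
param environmentName string = 'dev'       // placeholder
param locationShortCode string = 'weu'     // placeholder
param location string = 'westeurope'
param tags object = {}
param appInsightsName string = 'appi-${customerName}-${environmentName}-${locationShortCode}'
param skuNameAppServicePlanWindows string = 'P1v3'
param skuKindAppServicePlanWindows string = 'Windows'  // check the serverfarm module's allowed values for kind
param skuCapacityAppServicePlanWindows int = 1
param skuKindAppServicePlanLinux string = 'Linux'      // check the serverfarm module's allowed values for kind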

//MARK: App Insights Instance
@description('Create Application Insights Instance')
module createAppInsights 'br/public:avm/res/insights/component:0.4.1' = if (deployWebApps) {
  scope: resourceGroup(resourceGroupArray[2].name)
  name: 'createAppInsights'
  params: {
    name: appInsightsName
    workspaceResourceId: logAnalytics.outputs.resourceId
    diagnosticSettings: [
      {
        workspaceResourceId: logAnalytics.outputs.resourceId
      }
    ]
  }
  dependsOn: [
    logAnalytics
  ]
}

//MARK: App Service Plan Windows
//Deploy an azure web app with App service plan for Windows
module appServicePlanWindows 'br/public:avm/res/web/serverfarm:0.2.4' = if (deployWebApps) {
  scope: resourceGroup(resourceGroupArray[2].name)
  name: 'appServicePlanWindows'
  params: {
    name: 'aspwin-${customerName}-${environmentName}-${locationShortCode}'
    location: location
    tags: tags
    skuName: skuNameAppServicePlanWindows
    kind: skuKindAppServicePlanWindows
    skuCapacity: skuCapacityAppServicePlanWindows
    diagnosticSettings: [
      {
        workspaceResourceId: logAnalytics.outputs.resourceId
      }
    ]
  }
  dependsOn: [
    logAnalytics
  ]
}

//MARK: App Service Windows
module appServiceWindows 'br/public:avm/res/web/site:0.9.0' = if (deployWebApps) {
  scope: resourceGroup(resourceGroupArray[2].name)
  name: 'appService'
  params: {
    name: 'appwin-${customerName}-${environmentName}-${locationShortCode}'
    kind: 'app'
    location: location
    tags: tags
    serverFarmResourceId: appServicePlanWindows.outputs.resourceId
    managedIdentities: {
      systemAssigned: false
      userAssignedResourceIds: [
        managedIdentity.id
      ]
    }
    diagnosticSettings: [
      {
        workspaceResourceId: logAnalytics.outputs.resourceId
      }
    ]
    siteConfig: {
      alwaysOn: true
      http20Enabled: false
      appSettings: [
        {
          name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
          value: createAppInsights.outputs.instrumentationKey
        }
      ]
    }
    appInsightResourceId: createAppInsights.outputs.resourceId
  }
  dependsOn: [
    logAnalytics
    appServicePlanWindows
    createAppInsights
  ]
}

//MARK: App Service Plan Linux
//Deploy an azure web app with App service plan for Linux
module appServicePlanLinux 'br/public:avm/res/web/serverfarm:0.2.4' = if (deployWebApps) {
  scope: resourceGroup(resourceGroupArray[2].name)
  name: 'appServicePlanLinux'
  params: {
    name: 'asplinux-${customerName}-${environmentName}-${locationShortCode}'
    location: location
    tags: tags
    skuName: 'P1v3'
    kind: skuKindAppServicePlanLinux
    zoneRedundant: false
    diagnosticSettings: [
      {
        workspaceResourceId: logAnalytics.outputs.resourceId
      }
    ]
  }
  dependsOn: [
    logAnalytics
  ]
}

//MARK: App Service Linux
module appServiceLinux 'br/public:avm/res/web/site:0.9.0' = if (deployWebApps) {
  scope: resourceGroup(resourceGroupArray[2].name)
  name: 'appServiceLinux'
  params: {
    name: 'applin-${customerName}-${environmentName}-${locationShortCode}'
    kind: 'app'
    location: location
    tags: tags
    serverFarmResourceId: appServicePlanLinux.outputs.resourceId
    managedIdentities: {
      systemAssigned: false
      userAssignedResourceIds: [
        managedIdentity.id
      ]
    }
    diagnosticSettings: [
      {
        workspaceResourceId: logAnalytics.outputs.resourceId
      }
    ]
    siteConfig: {
      alwaysOn: true
      http20Enabled: false
      appSettings: [
        {
          name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
          value: createAppInsights.outputs.instrumentationKey
        }
      ]
    }
    appInsightResourceId: createAppInsights.outputs.resourceId
  }
  dependsOn: [
    logAnalytics
    appServicePlanLinux
    createAppInsights
  ]
}

This code will now create a Windows App Service Plan with a Web App, a Linux App Service Plan with a Web App, and an Application Insights instance, hooking both web apps up to the same App Insights instance for monitoring.
Yes, this code could be tidied up even further, but the purpose is to show you how easy it can be to deploy resources using Bicep along with the Azure Verified Modules GitHub repository.



Deploy Azure SQL Server, Database and Elastic pool in one go using Bicep!

When setting up your infrastructure in Azure, using the Azure Verified Modules can streamline the creation of any Azure resource, such as an Azure SQL Server, an Elastic Pool, and two demo databases. This post guides you through the code for doing just that; I leave it to you to create the parameters and fill them in 🙂

Bicep Code

//MARK: SQL Server
@description('SQL Server')
module sqlServer 'br/public:avm/res/sql/server:0.8.0' = if (deploySQLServer) {
  scope: resourceGroup(resourceGroupArray[4].name)
  name: 'sqlServer-${environmentName}'
  params: {
    name: sqlServerName
    administratorLogin: sqlAdministratorLogin
    administratorLoginPassword: keyVault.getSecret(config.kvSQLPassword)
    managedIdentities: {
      systemAssigned: false
      userAssignedResourceIds: [
        managedIdentity.id
      ]
    }
    primaryUserAssignedIdentityId: managedIdentity.id
    location: location
    tags: tags
    databases: [
      {
        name: 'demodb1'
        maxSizeBytes: 2147483648
        skuName: 'ElasticPool'
        skuTier: 'GeneralPurpose'
        zoneRedundant: false
        capacity: 0
        elasticPoolId: '/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupArray[4].name}/providers/Microsoft.Sql/servers/${sqlServerName}/elasticpools/${elasticPoolName}'
      }
      {
        name: 'demodb2'
        maxSizeBytes: 2147483648
        skuName: 'ElasticPool'
        skuTier: 'GeneralPurpose'
        zoneRedundant: false
        capacity: 0
        elasticPoolId: '/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupArray[4].name}/providers/Microsoft.Sql/servers/${sqlServerName}/elasticpools/${elasticPoolName}'
      }
    ]
    elasticPools: [
      {
        maxSizeBytes: 34359738368
        name: elasticPoolName
        perDatabaseSettings: {
            minCapacity: 0
            maxCapacity: 2
        }
        skuCapacity: 2
        skuName: 'GP_Gen5'
        skuTier: 'GeneralPurpose'
        zoneRedundant: false
        maintenanceConfigurationId: '/subscriptions/${subscriptionId}/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/SQL_Default'
      }
    ]
  }
}

This code will now create an Azure SQL Server, an Elastic Pool, and two demo databases within the Elastic Pool. Yes, this code could be tidied up even further, but the purpose is to show you how easy it can be to deploy resources using Bicep along with the Azure Verified Modules GitHub repository.
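
As a small refinement, the elasticPoolId strings above can be built with the resourceId() function instead of manual string concatenation, which avoids typos in the path segments. A sketch using the same names:

// Build the elastic pool resource ID instead of concatenating the path by hand
var elasticPoolResourceId = resourceId(subscriptionId, resourceGroupArray[4].name, 'Microsoft.Sql/servers/elasticPools', sqlServerName, elasticPoolName)

// ...then in each database definition:
// elasticPoolId: elasticPoolResourceId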



Azure Application Gateway using Azure Verified Modules

Creating an Azure Application Gateway Using Bicep Azure Verified Modules

As cloud solutions grow more complex, using infrastructure as code (IaC) has become crucial for managing, deploying, and maintaining resources smoothly. Bicep, Microsoft’s language for Azure, makes it much easier to create templates that simplify resource management. Azure Verified Modules take it a step further by providing reusable, pre-validated components that speed up your development process.

In this post, I’ll walk through creating an Azure Application Gateway using Azure Bicep, leveraging verified modules to keep things simple and efficient.

Why Use Azure Verified Modules?

Azure Verified Modules are prebuilt and tested components that help you create standardized, production-ready Azure infrastructure faster. By using verified modules:

  • You reduce the chances of introducing configuration errors.
  • You can leverage Azure best practices directly within your deployments.
  • Modules are reusable, which promotes consistency and speeds up development.

Getting Started with Bicep

Before diving into the Application Gateway setup, ensure you have the necessary tools:

  • Azure CLI: Install or update Azure CLI to the latest version.
  • Bicep CLI: You can install Bicep via Azure CLI with the command: az bicep install.
  • Visual Studio Code (VS Code): A good code editor will make writing Bicep files easier, especially with the Bicep VS Code extension.

Once your environment is ready, you can begin writing the Bicep code to deploy an Application Gateway.

Step-by-Step: Creating an Application Gateway

Step 1: Import the Azure Verified Module

To create an Azure Application Gateway, you first need to use the Azure Verified module. Microsoft provides verified modules that can be referenced directly in your Bicep file, making your infrastructure code easier to manage.

For example, to add an Application Gateway, you might use the following code:

module createApplicationGateway 'br/public:network/application-gateway:1.0.0' = {
  scope: resourceGroup(hubSubscriptionId, hubResourceGroupNameAppGateway)
  name: 'createAppGateway'
  params: {
    name: applicationGatewayName
    location: location
    sku: 'WAF_v2'
    tags: tags
    firewallPolicyResourceId: createApplicationGatewayWafPolicy.outputs.resourceId
    gatewayIPConfigurations: [
      {
        name: 'appGatewayIpConfig'
        properties: {
          subnet: {
            id: '/subscriptions/${hubSubscriptionId}/resourceGroups/${hubNetworkRG}/providers/Microsoft.Network/virtualNetworks/${hubVnetName}/subnets/AppGwSubnet'
          }
        }
      }
    ]
    frontendIPConfigurations: [
      {
        name: 'appGwPublicFrontendIp'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          publicIPAddress: {
            id: createPublicIPAddressForAppGateway.outputs.resourceId
          }
        }
      }
    ]
    frontendPorts: [
      {
        name: 'port_80'
        properties: {
          port: 80
        }
      }
    ]
    backendAddressPools: [
      {
        name: 'vm1BackendPool'
        properties: {
          backendAddresses: [
            {
              ipAddress: '10.0.1.4' // avoid hardcoding addresses like this; use a variable or parameter instead
            }
          ]
        }
      }
    ]
    backendHttpSettingsCollection: [
      {
        name: 'defaultHTTPSetting'
        properties: {
          port: 8080
          protocol: 'Http'
          cookieBasedAffinity: 'Disabled'
          pickHostNameFromBackendAddress: false
          requestTimeout: 20
          probe: {
            id: '/subscriptions/${hubSubscriptionId}/resourceGroups/${hubResourceGroupNameAppGateway}/providers/Microsoft.Network/applicationGateways/${applicationGatewayName}/probes/HttpProbe'
          }
        }
      }
    ]
    httpListeners: [
      {
        name: 'http'
        properties: {
          frontendIPConfiguration: {
            id: '/subscriptions/${hubSubscriptionId}/resourceGroups/${hubResourceGroupNameAppGateway}/providers/Microsoft.Network/applicationGateways/${applicationGatewayName}/frontendIPConfigurations/appGwPublicFrontendIp'
          }
          frontendPort: {
            id: '/subscriptions/${hubSubscriptionId}/resourceGroups/${hubResourceGroupNameAppGateway}/providers/Microsoft.Network/applicationGateways/${applicationGatewayName}/frontendPorts/port_80'
          }
          protocol: 'Http'
          requireServerNameIndication: false
          hostNames: []
          customErrorConfigurations: []
        }
      }
    ]
    requestRoutingRules: [
      {
        name: 'routingRule1'
        properties: {
          ruleType: 'Basic'
          priority: 100
          httpListener: {
            id: '/subscriptions/${hubSubscriptionId}/resourceGroups/${hubResourceGroupNameAppGateway}/providers/Microsoft.Network/applicationGateways/${applicationGatewayName}/httpListeners/http'
          }
          backendAddressPool: {
            id: '/subscriptions/${hubSubscriptionId}/resourceGroups/${hubResourceGroupNameAppGateway}/providers/Microsoft.Network/applicationGateways/${applicationGatewayName}/backendAddressPools/vm1BackendPool'
          }
          backendHttpSettings: {
            id: '/subscriptions/${hubSubscriptionId}/resourceGroups/${hubResourceGroupNameAppGateway}/providers/Microsoft.Network/applicationGateways/${applicationGatewayName}/backendHttpSettingsCollection/defaultHTTPSetting'
          }
        }
      }
    ]
    probes: [
      {
        name: 'HttpProbe'
        properties: {
          protocol: 'Http'
          host: '10.0.1.4' // avoid hardcoding addresses like this; use a variable or parameter instead
          path: '/api/doc/'
          interval: 30
          timeout: 30
          unhealthyThreshold: 3
          pickHostNameFromBackendHttpSettings: false
          minServers: 0
          match: {
            statusCodes: [
              '200-399'
            ]
          }
        }
      }
    ]
    enableHttp2: false
    urlPathMaps: []
    autoscaleMaxCapacity: 10
    autoscaleMinCapacity: 1
  }
  dependsOn: [
    createPublicIPAddressForAppGateway
  ]
}

This code snippet shows how you can use the verified module to create an Application Gateway, specifying various configurations such as frontend ports, IP configurations, and request routing rules.

Step 2: Define Dependencies

In the example above, you’ll need to ensure other dependent resources (like the Virtual Network, Subnet, and Public IP) are available before deploying the Application Gateway. You can define these resources in the same Bicep file to create a complete setup, or reference existing resources.

For example, adding a Virtual Network might look like this:

resource myVNet 'Microsoft.Network/virtualNetworks@2021-02-01' = {
  name: 'myVNet'
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'appGatewaySubnet'
        properties: {
          addressPrefix: '10.0.1.0/24'
        }
      }
    ]
  }
}
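
The Application Gateway module above also references a public IP via createPublicIPAddressForAppGateway.outputs.resourceId. If you are not wrapping it in a module, a plain resource is enough; here is a minimal sketch (in that case you would reference appGatewayPublicIp.id instead of a module output):

resource appGatewayPublicIp 'Microsoft.Network/publicIPAddresses@2021-02-01' = {
  name: 'pip-appgw'
  location: resourceGroup().location
  sku: {
    name: 'Standard' // Application Gateway v2 / WAF_v2 requires a Standard SKU public IP
  }
  properties: {
    publicIPAllocationMethod: 'Static'
  }
}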

Step 3: Deploy the Bicep Template

Once your Bicep template is complete, you can deploy it using the Azure CLI. Run the following command to deploy:

az deployment group create --resource-group <your-resource-group> --template-file main.bicep

This command will take care of provisioning the resources defined in your Bicep file.


Summary

In summary, using Azure Bicep with verified modules simplifies the process of creating and managing Azure infrastructure. Application Gateway helps you efficiently manage incoming traffic, while Bicep makes deployment clean and straightforward. Explore other verified modules in Azure’s public registry to quickly build reliable infrastructure following best practices.

Have you tried using Bicep for your infrastructure projects? Share your experiences or questions in the comments below!




Understanding and Implementing Privileged Identity Management (PIM) Using Bicep

Introduction

In today’s digital landscape, managing privileged identities has become paramount for enterprises aiming to safeguard their IT environments against unauthorized access and potential breaches. Privileged Identity Management (PIM) solutions are vital for controlling, managing, and auditing privileged access across all parts of an IT environment. In this blog post, I will delve into how to implement PIM using Bicep and the necessity of a Microsoft Entra ID P2 license.

What is Privileged Identity Management (PIM)?

Privileged Identity Management (PIM) refers to the control and monitoring of access and rights for users with elevated permissions who have the authority to make critical changes within Azure. PIM solutions help to prevent security breaches by ensuring that only authorized users can access sensitive systems and data.

Why Bicep?

Bicep is a domain-specific language developed by Microsoft, used primarily for deploying Azure resources declaratively. It simplifies the management of infrastructure as code (IaC), offering a cleaner and more concise syntax compared to traditional ARM templates. Using Bicep to implement PIM can streamline the deployment of Azure resources that are specifically configured for enhanced security protocols.

The Role of P2 Entra Licensing

Microsoft Entra, formerly known as Azure Active Directory (Azure AD), offers comprehensive identity and access management solutions, with P2 licensing providing advanced protection features critical for PIM. A P2 license is essential for accessing premium PIM capabilities in Azure, such as just-in-time (JIT) privileged access, risk-based conditional access policies, and detailed auditing and reporting features.


//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//// This script is used to elevate a group or a user to a built-in role in Azure using Privileged Identity Management (PIM)
//// The script uses the roleEligibilityScheduleRequests API to elevate the user or group to the specified role
//// The script supports the following built-in roles: GlobalAdmin, Owner, Contributor, Reader
//// The script requires the subscription ID, the principal ID of the user or group to elevate, and the built-in role to assign
//// The script also requires the start date and time for the eligibility schedule and the duration for which the eligibility is valid
//// The script creates a roleEligibilityScheduleRequests resource for each built-in role to assign
//// The script uses the subscription scope to assign the role to the user or group
//// The script uses the role definition IDs for each built-in role to assign
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

targetScope = 'subscription'

param subscriptionId string

// Set the subscription scope using the subscription ID
var subscriptionScope = '/subscriptions/${subscriptionId}'

@description('The start date and time for the eligibility schedule in ISO 8601 format')
param startDateTime string = utcNow()

@description('The duration for which the eligibility is valid in ISO 8601 format (e.g., P90D for 90 days)')
param duration string = 'P90D'

@allowed([
  'GlobalAdmin'
  'Owner'
  'Contributor'
  'Reader'
 ])
@description('Built-in role to assign')
param builtInRoleType string

var role = {
  GlobalAdmin: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/62e90394-69f5-4237-9190-012177145e10'
  Owner: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635'
  Contributor: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c'
  Reader: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7'
}

param principalIdToElevate string  // The principal ID of the user or group to elevate

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//// Deployment starts here
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////


@description('Elevate User to Reader')
resource pimAssignment 'Microsoft.Authorization/roleEligibilityScheduleRequests@2022-04-01-preview' = if (builtInRoleType == 'Reader') {
  name: guid(subscriptionScope, principalIdToElevate, role.Reader)
  scope: subscription()
  properties: {
    principalId: principalIdToElevate
    requestType: 'AdminAssign'
    roleDefinitionId: role[builtInRoleType]
    scheduleInfo: {
      expiration: {
        duration: duration
        type: 'AfterDuration'
      }
      startDateTime: startDateTime
    }
  }
}

@description('Elevate User to Contributor')
resource pimAssignment2 'Microsoft.Authorization/roleEligibilityScheduleRequests@2022-04-01-preview' = if (builtInRoleType == 'Contributor') {
  name: guid(subscriptionScope, principalIdToElevate, role.Contributor)
  scope: subscription()
  properties: {
    principalId: principalIdToElevate
    requestType: 'AdminAssign'
    roleDefinitionId: role[builtInRoleType]
    scheduleInfo: {
      expiration: {
        duration: duration
        type: 'AfterDuration'
      }
      startDateTime: startDateTime
    }
  }
}

@description('Elevate User to Owner')
resource pimAssignment3 'Microsoft.Authorization/roleEligibilityScheduleRequests@2022-04-01-preview' = if (builtInRoleType == 'Owner') {
  name: guid(subscriptionScope, principalIdToElevate, role.Owner)
  scope: subscription()
  properties: {
    principalId: principalIdToElevate
    requestType: 'AdminAssign'
    roleDefinitionId: role[builtInRoleType]
    scheduleInfo: {
      expiration: {
        duration: duration
        type: 'AfterDuration'
      }
      startDateTime: startDateTime
    }
  }
}

@description('Elevate User to Global Admin')
resource pimAssignment4 'Microsoft.Authorization/roleEligibilityScheduleRequests@2022-04-01-preview' = if (builtInRoleType == 'GlobalAdmin') {
  name: guid(subscriptionScope, principalIdToElevate, role.GlobalAdmin)
  scope: subscription()
  properties: {
    principalId: principalIdToElevate
    requestType: 'AdminAssign'
    roleDefinitionId: role[builtInRoleType]
    scheduleInfo: {
      expiration: {
        duration: duration
        type: 'AfterDuration'
      }
      startDateTime: startDateTime
    }
  }
}

To call this script, you can use the code below:

 az deployment sub create `
        --name $deploymentID `
        --location $location `
        --template-file ./PIM.bicep `
        --parameters subscriptionId=$subscriptionID builtInRoleType=$builtInRoleType principalIdToElevate=$principalIdToElevate `
        --confirm-with-what-if `
        --output none
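
If you prefer to orchestrate it from Bicep instead of calling the file directly, the script can also be consumed as a module from another subscription-scoped template (a sketch; the parameter values are placeholders):

targetScope = 'subscription'

param principalIdToElevate string

module pimElevation './PIM.bicep' = {
  name: 'pim-elevation'
  params: {
    subscriptionId: subscription().subscriptionId
    builtInRoleType: 'Reader'
    principalIdToElevate: principalIdToElevate
  }
}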

I hope you find this script useful; let me know if you have any feedback on this post.




Implementing TDE in Azure SQL with Customer-Managed Keys from Azure Key Vault

In today’s data-driven landscape, safeguarding sensitive information is paramount, especially when leveraging cloud technologies. Transparent Data Encryption (TDE) offers a robust solution by encrypting SQL Server data at rest, thus enhancing security by preventing unauthorized access. This blog post delves into implementing TDE on Azure SQL Database using customer-managed keys stored within Azure Key Vault.

TDE is a technology that performs real-time encryption and decryption of the data and log files in SQL Server, Azure SQL Database, and Azure SQL Managed Instance. It ensures that data files are encrypted on the disk, enhancing security by adding a layer of protection that does not require changes to existing applications. TDE works by encrypting the storage of an entire database using a symmetric key called the Database Encryption Key (DEK). This key is then protected by a certificate stored in the server, or, in more secure environments, by an asymmetric key protected by an external key management system like Azure Key Vault.

The primary advantage of TDE is its transparency to applications accessing the database. Applications can run queries and process data as usual, with no need to modify existing queries or application code. This seamless approach not only simplifies the implementation of encryption but also minimizes the impact on performance. By securing data at rest without altering how data is accessed, TDE provides a straightforward, effective method for meeting regulatory and compliance requirements, making it an essential component of a comprehensive data protection strategy in the cloud.

In this blog post, we’ll explore how to enhance the security features of TDE by integrating it with customer-managed keys from Azure Key Vault, offering more granular control over encryption keys and compliance with stringent data protection policies.

In the code below, I am referencing the Azure Verified Modules project, which is written by Microsoft and worth checking out.

Step 1 – Create an Azure Key Vault key which we can use for TDE

// KeyVault settings
var keyVaultName = 'kv-${customerName}-${environmentName}-${locationShortCode}'

//MARK: createKeyVaultKeyForAzureSQLTDE
@description('create an RSA key within KeyVault')
module createKeyVaultKeyForAzureSQLTDE 'modules/key-vault/vault/key/main.bicep' = {
  scope: resourceGroup(spokeRg.name)
  name: 'keyVaultKey'
  params: {
    keyVaultName: keyVaultName
    name: keyName
    keySize: 2048
    kty: 'RSA'
    keyOps: [
      'encrypt'
      'decrypt'
      'sign'
      'verify'
      'wrapKey'
      'unwrapKey'
    ]
    tags: tags
    attributesEnabled: true
  }
  dependsOn: [
    spokeRg
    createKeyVault
  ]
}

The code above will create a new key, which we can use for TDE, within an existing Key Vault.

Once we have the key created, we need to figure out how to add it to Azure SQL, which proved to be harder than it should have been (it might be a bug in the modules, I’m not sure yet). I couldn’t figure out how to add the URI in the format it asks for; everything I tried didn’t work, so I gave up and used the code below.

To get this working, I used a custom module, shown below:

// sqlserver-keyvault-encryption.bicep
param sqlServerName string
param keyVaultName string
param keyName string
param keyVersion string
param keyUri string
param autoRotationEnabled bool

resource sqlServer 'Microsoft.Sql/servers@2022-05-01-preview' existing = {
  name: sqlServerName
}

// Create sql server key from key vault
resource sqlServerKey 'Microsoft.Sql/servers/keys@2022-05-01-preview' = {
  name: '${keyVaultName}_${keyName}_${keyVersion}'
  parent: sqlServer
  properties: {
    serverKeyType: 'AzureKeyVault'
    uri: keyUri
  }
}

// Create the encryption protector
resource protector 'Microsoft.Sql/servers/encryptionProtector@2022-05-01-preview' = {
  name: 'current'
  parent: sqlServer
  properties: {
    serverKeyType: 'AzureKeyVault'
    serverKeyName: sqlServerKey.name
    autoRotationEnabled: autoRotationEnabled
  }
}

I then call this custom module and pass in the necessary parameters like so:

//MARK: encryption
module encryption 'modules/custom/sqlserver-keyvault-encryption.bicep' = {
  scope: resourceGroup(spokeResourceNetworkGroupName)
  name: 'sqlserver-keyvault-encryption'
  params: {
    sqlServerName: sqlServerName
    keyVaultName: keyVaultName
    keyName: keyName
    keyVersion: last(split(keyVaultKey.properties.keyUriWithVersion, '/'))
    keyUri: keyVaultKey.properties.keyUriWithVersion
    autoRotationEnabled: autoRotationEnabled
  }
  dependsOn: [
    spokeRg
    createsqlServer
  ]
}
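
For completeness, the keyVaultKey symbol used for keyVersion and keyUri above can be an existing-resource reference to the key created in step 1 (a sketch, assuming the key lives in the Key Vault and resource group used earlier):

resource existingKeyVault 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
  name: keyVaultName
  scope: resourceGroup(spokeRg.name)
}

resource keyVaultKey 'Microsoft.KeyVault/vaults/keys@2022-07-01' existing = {
  name: keyName
  parent: existingKeyVault
}

// keyVaultKey.properties.keyUriWithVersion then provides the versioned key URI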

This will then add TDE encryption to the SQL server using a customer-managed key.

Once deployed, the Azure portal shows TDE configured with the customer-managed key at the server level; configuring it at the database level was still in preview at the time of writing.

Summary

I hope this helps someone at some point, as I spent some time trying to get the URL for the versioned key out of Key Vault and it just wasn’t happening.