Azure SQL Server VNet Integrated using Bicep

I have a terrible memory so this blog post is mainly to remind me how to VNet Integrate Azure SQL.

The code below creates an Azure SQL Server and VNet integrates it – the virtualNetworkRules block is the key part, and the following is how to go about it.

I use this existing Bicep repo for all of the Bicep that I write – https://github.com/Azure/ResourceModules/

@description('Deploy an Azure SQL Server')
module createAzureSQL 'modules/azuresql_modules/deploy.bicep' = if (deployAzureSQL) {
  scope: resourceGroup(dataTierRg)
  name: azureSQLServerName
  params: {
    name: azureSQLServerName
    location: sqllocation 
    administratorLogin: azureSQLServerAdminLogin
    administratorLoginPassword: azureSQLAdminPassword
    tags: tags
    virtualNetworkRules: [
      {
        name: 'vnet-rule-${azureSQLServerName}'
        serverName: azureSQLServerName
        ignoreMissingVnetServiceEndpoint: false 
        virtualNetworkSubnetId: '/subscriptions/${subscriptionID}/resourceGroups/${appTierRg}/providers/Microsoft.Network/virtualNetworks/${appVNetName}/subnets/dataSubNet'
      }
    ]
  }
  dependsOn: [
    newRG
    createAppVNet
  ]
}

To get this to work, you also need to add a service endpoint to your subnet, like the following:-

@description('An array of the subnets for the Application VNet.') 
var appSubnets = {
  shared: [
    {
      name: 'appSubnet'
      addressPrefix: '172.16.0.0/24'
      delegations: [
        {
          name: 'delegation'
          properties: {
            serviceName: 'Microsoft.Web/serverfarms'
          }
        }
      ]
    }
    {
      name: 'dataSubNet'
      addressPrefix: '172.16.1.0/24'
      serviceEndpoints: [
        {
          service: 'Microsoft.Sql'
        }
      ]
    }
  ]
}
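
As a side note, instead of building the subnet resource ID string by hand, you could reference the VNet as an existing resource and build the ID from that. A rough sketch (assuming the VNet lives in the appTierRg resource group, as in the snippet above):

// reference the already-deployed VNet rather than hand-crafting its resource ID
resource appVNet 'Microsoft.Network/virtualNetworks@2022-07-01' existing = {
  name: appVNetName
  scope: resourceGroup(appTierRg)
}

// ...and then inside the virtualNetworkRules entry:
// virtualNetworkSubnetId: '${appVNet.id}/subnets/dataSubNet'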

Let me know if you found this example useful.



My 2022 Yearly Review

2022 was a strange year for me. It felt like I hadn't done much, but then I started writing this blog post and it turns out it wasn't too bad after all. I did, however, put in quite a few MVP nominations for people deserving of the award, and got a good number of people started on their first ever LinkedIn Learning course.

My first-ever LinkedIn Learning course went live in March of this past year, and then I finished the second part I was asked to do.

January

February

  • Glasgow Azure User Group.

March

  • Visit Intercept offices, The Netherlands.

April

  • Glasgow Azure User Group.

May

  • DevOps on Azure workshop – remotely
  • Deep Dive Governance and Azure Policy Workshop – remotely.
  • DevOps on Azure workshop – remotely.

June

  • Glasgow Azure User Group.
  • Intercept Summer Party.

July

  • Deep Dive Governance and Azure Policy Workshop – remotely.

August

  • Glasgow Azure User Group.

September

  • On holiday playing golf in the Scottish Pairs 2022 event.

October

  • Azure Cloud Essentials Workshop at the Microsoft office in Copenhagen, Denmark.
  • Earned the Microsoft Certified: Azure Network Engineer Associate Exam AZ-700.
  • Disneyland Paris holiday.
  • Ignite Conference in Seattle, USA.
  • Glasgow Azure User Group.

November

  • Azure DevOps and Azure Cloud Essentials Workshop at the Microsoft Reactor in Stockholm, Sweden.
  • Reached 15k Twitter followers.

December

  • Data Workshop in Vienna at the Microsoft Austria office.
  • Intercept Xmas Party in the Netherlands.
  • Festive Tech Calendar ran for the whole month.

Summary

I visited a lot of new countries and gave talks at some very cool venues; next year I hope to do more of that.

I still want to visit Croatia, Switzerland and hopefully Iceland too.

Happy New Year if you're reading this far down.



Microsoft Ignite – In person Seattle review

This is my review of Ignite, in person, in Seattle.

This was my first ever time in Seattle. I really enjoyed the city and look forward to returning (I love being in the United States). I was delighted to be able to attend this year's event.

This year's in-person Ignite was, well, different. To be fair, I think I had been greatly spoiled by the previous Ignite I attended back in 2019 in Orlando.

Microsoft tried out some new ideas, and below is my feedback and findings from Days 1 and 2 (it was supposed to be a 3-day event, but it wasn't, at least not in person in Seattle).

No Ignite bag was a little disappointing (it had been a thing every year until now). A swag-free conference – I do get it, but a lot of people weren't pleased, and I think we are going to have to get used to swag-free conferences. If you were a sponsor, the list of requirements you had to meet to be able to give anything away was pretty long.

The vendors were down on a separate floor, which to be honest was odd in my opinion, and I'm not sure it went down too well. You had to go into rooms to speak to some of them, and from the people I asked, that isn't something they liked or felt comfortable doing.

The food was fine, although I'm not sure the information on what was available where was up to scratch, but there you have it.

The venue itself, at least the Hub, was cool. Having no MVP / RD wall was disappointing (and no dragon popcorn), but it was a nice-looking area. However, for me, the area didn't work when it came to sessions: many speakers had no microphones on day 1 and you couldn't hear anything unless you were right up front (some sessions were packed). Hopefully they learn that with speaker areas holding that many people, you need to make sure everyone can hear what's being said. If it's in person, it's not recorded, so you simply miss out, which annoyed some of the people I spoke to.

The keynote was weird with Satya not being there in person. Again, it's odd to have an in-person Ignite that doesn't have a live keynote from Satya.

In summary, Day 1 was rescued by the people I met, the MVP after-party (not all MVPs got the invite), and playing table tennis and chatting with people from all over.

Day 2 was noticeably quieter. It felt like a third of the people had decided to go and enjoy the sun or something else, and I was thinking to myself, where is everyone? It very much had the vibe that the big speakers had left town, and the Hub area was surprisingly quiet. The Ask the Experts areas were quiet too, especially on day 2; it had that last-day-of-a-conference feeling far too early, which was not great.

Some sessions were ok, but the content was mostly high-level marketing material pitched at beginner to intermediate levels. Content around the Hub was hard to hear, and that for me didn't work.

So let's cover the good, the bad and the ugly of this year's Ignite from my point of view.

GOOD
We were back in person at Ignite. Meeting friends and new people from around the world is always the best part of a conference for me – going to dinner at night and hearing what people are working on and building, and the questions they ask, is the best thing going. Seattle is a very nice place to visit.

BAD
No Satya, no swag, a 2-day conference instead of the 3 advertised (many flew home on Saturday because of this), no MVP wall, no sticker swap, no store to buy merchandise or Surface laptops, and no book store either. The energy just wasn't there for me; some people tried to get the crowd to hoot and holler, but it just wasn't happening.

UGLY
After reflection I'm skipping this part – ask me in person.

Ways I would try to improve Ignite

  • Less marketing content, more deep dives.
  • Have the person delivering the keynotes actually be there in person, especially Satya.
  • Tell people it's a swag-free conference up front, to avoid disappointment.
  • Bring back the MVP / RD Wall, book store, merchandise stands etc.
  • Inform people better of any after event parties.
  • Have working wifi at the start of the conference.
  • Session speakers: speak slowly, this isn't a race; if you need more time, have a longer session. Not everyone is a native English speaker, and racing through content at 100 mph is an awful experience for a lot of people.
  • Tell people when the room they're sitting in is not where the speaker is – two rooms were both called Tahoma 3, for example, and one had the speaker while the other was earphones only. It wasn't clear.
  • Full sessions and being told to go and watch them online: after travelling a long way to be there in person, this wasn't a great experience (have bigger rooms).
  • Two days isn't long enough. Lots of content scheduled at the same time was not fun; yes, it's recorded, but I came to see people talk, and the content needs to be spread out more.
  • Have the chance to get your photo taken professionally so that you can get a nice photo of you at the event.
  • Have more fun things going on during the drinks / nibbles at the end of each day.

Summary
In summary, from what I gathered Microsoft tried a new way of doing things, and it fell short.

Would I go next year? – it depends.



Bicep Scenarios

I have been working with Bicep a lot on a recent project, and I tweeted a reply to a tweet as you can see below. In order to reply properly, I decided to write this blog post as hopefully a way to give constructive feedback.

Let's take the example of VNet peering – if you google/bing for "vnet peering bicep" you will most likely end up here – https://docs.microsoft.com/en-us/azure/templates/microsoft.network/virtualnetworks/virtualnetworkpeerings?pivots=deployment-language-bicep

This is somewhat helpful in that it shows you the format; however, it won't help a lot of people because it doesn't have any examples of how it works.

I always use https://github.com/Azure/ResourceModules as I can look at an example – they have deployment example links for every resource, like so:-

Now with this example I can see what the values look like and get a feel for what is needed. The first screenshot does explain the values, which is ok at best; an example would be good, and a common scenario would be even better.

When it comes to documentation, examples can be what makes or breaks it.
For me, scenarios are the answer to what I would like to see. If I want to create a Key Vault, there's every chance I need to create a secret and populate it, and maybe I need an access policy or a managed identity to go along with it. These are real-world scenarios that the documentation doesn't cover. Yes, we can't cover everything, but if we go back to the first screenshot above, it isn't helping me a great deal, truth be told.
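
To give a flavour of what such a scenario could look like, here is a rough sketch in Bicep of a Key Vault with an access policy and a secret created in the same deployment – the names and the object ID are placeholders of mine, not from any official doc:

// a rough sketch: Key Vault + access policy + secret in one deployment
param location string = resourceGroup().location
param keyVaultName string = 'kv-example-001'
param appObjectId string // object ID of the user / managed identity that needs access

resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  name: keyVaultName
  location: location
  properties: {
    tenantId: subscription().tenantId
    sku: {
      family: 'A'
      name: 'standard'
    }
    accessPolicies: [
      {
        tenantId: subscription().tenantId
        objectId: appObjectId
        permissions: {
          secrets: [
            'get'
            'list'
          ]
        }
      }
    ]
  }
}

// populate a secret as part of the same deployment
resource exampleSecret 'Microsoft.KeyVault/vaults/secrets@2022-07-01' = {
  parent: keyVault
  name: 'example-connection-string'
  properties: {
    value: 'replace-me' // in real code pass this in as a @secure() parameter
  }
}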

Now, there is a common scenarios page from the Bicep team – https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/scenarios-secrets – which is a good start, but I feel we really need a lot more of these, or at least links to GitHub repositories for further help.

When trying to figure out how to get a load balancer to play nicely with 2 virtual machines, I had to manually deploy it and then reverse engineer the Bicep – the sample in the documentation was a good start, but it didn't cover what I needed.

I've had DMs saying that there aren't enough people with enough time to write all of the scenarios, which is fine; finding a way to accept public contributions of example scenarios might be one way to go about it.

Either way, I feel there is room for improvement in a lot of the Microsoft docs. It's an unpopular opinion, but there you have it.

If you have questions, reach out to me in the comments below or on Twitter.

Don’t forget to subscribe to my YouTube Channel.



Reverse Engineering Arm Templates to use with Bicep

When working with Bicep there are times when the documentation isn’t there, and the examples aren’t there yet either.

Recently I needed to deploy Azure Database for PostgreSQL Flexible Server using Bicep. The docs for doing this are ok (examples would make the Bicep docs so much better), but anyway, if you have been using Bicep for a while you can manage to get it working.

I tend to go to the following website https://github.com/Azure/ResourceModules/ when I am working with Bicep and use these modules as much as I can – note that they are constantly being worked on and changed, so bear that in mind.

I figured out the Bicep for my PostgreSQL server and then needed to add in a couple of extensions. I'm new to PostgreSQL and had never touched it, so I googled around and it turns out it's super simple to manually add in any extension – this article shows you how – https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions

So I decided to download the ARM template for my PostgreSQL server before the extensions were added, and then compare it to the ARM template after I had added the extensions into PostgreSQL by hand. Then I could diff the files and see what had been added.

Comparing both files I see the following:-

I then use a website that can generate Bicep code from existing ARM templates: https://bicepdemo.z22.web.core.windows.net/. Using this, I click on the Decompile button at the top right and point it at the ARM template I downloaded from the Azure portal after adding the extension manually. This generates the Bicep code for me, and I can see the section I need to add for the extension.
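
If you prefer the command line, the same decompile can be done with the Azure CLI or the standalone Bicep CLI – template.json here is just a placeholder name for the ARM template exported from the portal:

# decompile an exported ARM template back into Bicep using the Azure CLI
az bicep decompile --file template.json

# or with the standalone Bicep CLI
bicep decompile template.json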

resource flexibleServers_psql_dev_weu_name_azure_extensions 'Microsoft.DBforPostgreSQL/flexibleServers/configurations@2022-01-20-preview' = {
  parent: flexibleServers_psql_dev_weu_name_resource
  name: 'azure.extensions'
  properties: {
    value: 'LTREE,UUID-OSSP'
    source: 'user-override'
  }
}

So now I have the missing extension code I need for my Bicep code and we can remove the manually added extensions – redeploy the code and we are all good.

Summary

If you don't know the Bicep code for what you need and you can't find any samples, try manually deploying your service, download the ARM template, and use https://bicepdemo.z22.web.core.windows.net/ to decompile the ARM template back into Bicep.

If you have questions, reach out to me in the comments below or on Twitter.

Don’t forget to subscribe to my YouTube Channel.



Azure ML Investigations

A customer where I work suddenly had an issue where the machine learning model they had deployed to AKS stopped working, after having been deployed over a year ago. They asked if someone could take a look into it. I always say jump at the chance to learn something new and perhaps blog about it (I always forget to blog, so here we are).

Now, I have never really done anything with Azure ML before; I have clicked around the portal for a few minutes and that's it. So here is how I went about it, just in case you have to do the same thing at some point.

An email from the customer showed me the error they were receiving, which says “Internal Server Error. Run: Server internal error is from Module Execute Python Script”.

Ok, so straight away I'm thinking it's something to do with Python. Let's try to replicate the error first of all (always my first port of call when debugging issues), so I open up Postman, which I can use to test the REST call, send a POST to the URI, and yeah, I see the same issue. The customer had mentioned AKS, so I look into that; all appears to be fine there.

Time to crack open Azure Machine Learning studio and go on the hunt for anything that might look interesting. I know the name of the model from the customer, so I start poking around there, click on Test to try it, and I get an error with a stack trace still mentioning an internal server error, due to what looks like Python version 3.6.

The code hadn't been changed or redeployed for over a year but suddenly stopped working, and I'm thinking that's odd; it must be some sort of dependency being pulled in when it runs. After a fair bit of googling I came across this link – Python SDK release notes – Azure Machine Learning | Microsoft Docs – which, after careful reading, says there is a breaking change in version 1.41.0 of the Azure Machine Learning SDK for Python.

Now, unfortunately, I spent a fair bit of time looking into this, trying to figure out how, or even where, to update this in the model, with absolutely no luck whatsoever.

I then ended up on a call with a super helpful Microsoft engineer who knows Azure ML inside out, and after discussing the issue we figured out between us that a dependency on a package called Vowpal Wabbit was causing the issue – the latest version of this package requires a Python version newer than 3.6. So we figured, let's try pinning the version of the package in the designer to Vowpal Wabbit 8.10.1 – we rebuilt the model, and bish bash bosh, it all works again like a charm.

The very best part of this was that I reached out on Twitter and 3 different very kind people asked if they could help. I wanted to say a huge big thank you to Kevin Oliver (@TechnicalPanda), Pedro Fiadeiro (@plfiadeiro) and also Sammy Deprez (@sammydeprez) – without your help I would have been very stuck, so many thanks to you all!

In summary, I learned a lot about Azure ML and how models are used, tested and then deployed. I went down many a rabbit hole (pun intended) and eventually came up trumps. I knew next to nothing about Azure ML before this came up; I said I would fix it, and I never gave up. This is what I love doing: fixing stuff I know nothing about by asking questions and finding out the answers.

In the unlikely event that this helps anyone – awesome, if not thanks for reading.

If you have questions, reach out to me in the comments below or on Twitter.

Don’t forget to subscribe to my YouTube Channel.



Grafana Alert Rules, Contact Points and Notification Policies with Azure

Recently Microsoft announced that they will have a Grafana service available to use on Azure – awesome stuff.

I like Grafana for dashboards. It has a bit to go, however, especially when it comes to alerts and doing things at scale.

You can choose to run Grafana locally (I'm running it on Windows), in a container, or on Azure. No matter where you run it, there are a few things I wanted to cover off to help people who are considering using it.

I am currently using version 8.4.5 and wanted to create some Azure alerts to see what Grafana has in the way of alerting. It has some nice stuff, to be fair; how it goes about it needs some work, but I reckon it will definitely get there in upcoming versions.

Currently, creating dashboards is very simple. When it comes to Azure you need to:-

  • Create a data source (azure monitor)
  • Add a panel to a new dashboard
  • Select the datasource and then choose either Metrics, Logs or Azure Resource Graph.
  • Fill out the details

Simple stuff. Now, what if you want to create an alert? Well, the dashboard you create is stored as JSON which contains all of the panels, the settings etc., but alerts are stored separately – to be honest I think alerting is still being worked on.

Anyway, alerts are stored elsewhere. The good news is there is an API for Grafana; the bad news is it's not the best, or the documentation is wrong – if you try it out and it all just works, please do give me a shout.

Alert rules are the settings that define when an alert should fire – let's say if your virtual machine CPU goes above 75% for between 1 and 5 minutes, then raise an alert. Alerting in Grafana is also made up of what are called contact points and notification policies, and that idea I do like.

To create an Alert Rule you could do the following:-

POST http://localhost:3000/api/ruler/grafana/api/v1/rules/{your folder name here}

In the JSON below, replace {your datasource uid here} with the uid of your own datasource and also replace {your subscription id here} with your own subscription ID.

{
    "name": "FUNCTION APPS - HTTP Server Errors (Total)",
    "interval": "1m",
    "rules": [
        {
            "expr": "",
            "for": "5m",
            "labels": {
                "Customer": "test customer",
                "alertto": "gregor"
            },
            "annotations": {
                "summary": "FUNCTION APPS - HTTP Server Errors > 100"
            },
            "grafana_alert": {
                "id": 115,
                "orgId": 27,
                "title": "FUNCTION APPS - HTTP Server Errors (Total)",
                "condition": "B",
                "data": [
                    {
                        "refId": "A",
                        "queryType": "Azure Monitor",
                        "relativeTimeRange": {
                            "from": 600,
                            "to": 0
                        },
                        "datasourceUid": "{your datasource uid here}",
                        "model": {
                            "azureMonitor": {
                                "aggregation": "Total",
                                "alias": "{{ resourcename }} - {{ metric }}",
                                "dimensionFilters": [],
                                "metricDefinition": "Microsoft.Web/sites",
                                "metricName": "Http5xx",
                                "metricNamespace": "Microsoft.Web/sites",
                                "resourceGroup": "rg-grafanaresources",
                                "resourceName": "grafana1",
                                "timeGrain": "auto"
                            },
                            "hide": false,
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "queryType": "Azure Monitor",
                            "refId": "A",
                            "subscription": "{your subscription id here}"
                        }
                    },
                    {
                        "refId": "B",
                        "queryType": "",
                        "relativeTimeRange": {
                            "from": 0,
                            "to": 0
                        },
                        "datasourceUid": "-100",
                        "model": {
                            "conditions": [
                                {
                                    "evaluator": {
                                        "params": [
                                            100
                                        ],
                                        "type": "gt"
                                    },
                                    "operator": {
                                        "type": "and"
                                    },
                                    "query": {
                                        "params": [
                                            "A"
                                        ]
                                    },
                                    "reducer": {
                                        "params": [],
                                        "type": "last"
                                    },
                                    "type": "query"
                                }
                            ],
                            "datasource": {
                                "type": "__expr__",
                                "uid": "-100"
                            },
                            "hide": false,
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "refId": "B",
                            "type": "classic_conditions"
                        }
                    }
                ],
                "intervalSeconds": 60,
                "rule_group": "FUNCTION APPS - HTTP Server Errors (Total)",
                "no_data_state": "NoData",
                "exec_err_state": "Alerting"
            }
        },
        {
            "expr": "",
            "for": "5m",
            "labels": {
                "Customer": "test customer",
                "alertto": "gregor"
            },
            "annotations": {
                "summary": "Azure SQL - DATA IO % > 75%"
            },
            "grafana_alert": {
                "id": 121,
                "orgId": 27,
                "title": "Azure SQL - Log IO %",
                "condition": "B",
                "data": [
                    {
                        "refId": "A",
                        "queryType": "Azure Monitor",
                        "relativeTimeRange": {
                            "from": 600,
                            "to": 0
                        },
                        "datasourceUid": "{your datasource uid here}",
                        "model": {
                            "azureMonitor": {
                                "aggregation": "Average",
                                "alias": "{{ resourcename }} - {{ metric }}",
                                "dimensionFilters": [],
                                "metricDefinition": "Microsoft.Sql/servers/databases",
                                "metricName": "log_write_percent",
                                "metricNamespace": "Microsoft.Sql/servers/databases",
                                "resourceGroup": "rg-grafanaresources",
                                "resourceName": "grafanadb/grafanadb",
                                "timeGrain": "auto"
                            },
                            "hide": false,
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "queryType": "Azure Monitor",
                            "refId": "A",
                            "subscription": "{your subscription id here}"
                        }
                    },
                    {
                        "refId": "B",
                        "queryType": "",
                        "relativeTimeRange": {
                            "from": 0,
                            "to": 0
                        },
                        "datasourceUid": "-100",
                        "model": {
                            "conditions": [
                                {
                                    "evaluator": {
                                        "params": [
                                            75
                                        ],
                                        "type": "gt"
                                    },
                                    "operator": {
                                        "type": "and"
                                    },
                                    "query": {
                                        "params": [
                                            "A"
                                        ]
                                    },
                                    "reducer": {
                                        "params": [],
                                        "type": "last"
                                    },
                                    "type": "query"
                                }
                            ],
                            "datasource": {
                                "type": "__expr__",
                                "uid": "-100"
                            },
                            "hide": false,
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "refId": "B",
                            "type": "classic_conditions"
                        }
                    }
                ],
                "intervalSeconds": 60,
                "rule_group": "Azure SQL - Log IO %",
                "no_data_state": "NoData",
                "exec_err_state": "Alerting"
            }
        }        
    ]
}

Lots of companies have products that produce nice dashboards, but in my opinion a dashboard is useless on its own – you shouldn't have to look at a dashboard for the most part, especially if you're doing something at scale. So I want a dashboard with alerts that email me, or create a TopDesk or ServiceNow ticket, when something is awry.

Contact points in Grafana are basically how someone or something should be contacted; these are normally email addresses or endpoints, like an Azure Function endpoint which you can use to create tickets, for example.

Notification policies are policies that act on the settings you provide; an example would be if a label is matched, then use one of the contact points to do something. So if an alert is raised and the label is production, you can route it to the contact point you created to call an Azure Function which will create a ServiceNow ticket.

The Grafana API can be found here – https://editor.swagger.io/?url=https://raw.githubusercontent.com/grafana/grafana/main/pkg/services/ngalert/api/tooling/post.json

It's an interesting mix of v1 / v2 endpoints, and some work while some don't. I have had no luck getting the endpoints for contact points and notification policies to work, but you can use the following calls to get and save the config, should you want to create more of these at scale in other dashboards.

GET http://localhost:3000/api/alertmanager/grafana/config/api/v1/alerts

{
    "template_files": {},
    "alertmanager_config": {
        "route": {
            "receiver": "grafana-default-email",
            "routes": [
                {
                    "object_matchers": [
                        [
                            "customer",
                            "=",
                            "test customer"
                        ]
                    ]
                }
            ]
        },
        "templates": null,
        "receivers": [
            {
                "name": "grafana-default-email",
                "grafana_managed_receiver_configs": [
                    {
                        "uid": "ED40XnQnz",
                        "name": "email receiver",
                        "type": "email",
                        "disableResolveMessage": false,
                        "settings": {
                            "addresses": "<example@email.com>"
                        },
                        "secureFields": {}
                    }
                ]
            },
            {
                "name": "Gregor Suttie",
                "grafana_managed_receiver_configs": [
                    {
                        "uid": "ED4AunQ7kz",
                        "name": "Gregor Suttie",
                        "type": "email",
                        "disableResolveMessage": false,
                        "settings": {
                            "addresses": "azuregreg@azure.com",
                            "singleEmail": false
                        },
                        "secureFields": {}
                    }
                ]
            }
        ]
    }
}

And you can POST the same JSON (without the uid filled out) to create contact points and notification policies:

POST http://localhost:3000/api/alertmanager/grafana/config/api/v1/alerts

{
    "template_files": {},
    "alertmanager_config": {
        "route": {
            "receiver": "grafana-default-email",
            "routes": [
                {
                    "object_matchers": [
                        [
                            "customer",
                            "=",
                            "test customer"
                        ]
                    ]
                }
            ]
        },
        "templates": null,
        "receivers": [
            {
                "name": "grafana-default-email",
                "grafana_managed_receiver_configs": [
                    {
                        "uid": "",
                        "name": "email receiver",
                        "type": "email",
                        "disableResolveMessage": false,
                        "settings": {
                            "addresses": "<example@email.com>"
                        },
                        "secureFields": {}
                    }
                ]
            },
            {
                "name": "Gregor Suttie",
                "grafana_managed_receiver_configs": [
                    {
                        "uid": "",
                        "name": "Gregor Suttie",
                        "type": "email",
                        "disableResolveMessage": false,
                        "settings": {
                            "addresses": "azuregreg@azure.com",
                            "singleEmail": false
                        },
                        "secureFields": {}
                    }
                ]
            }
        ]
    }
}

API – the API for Grafana is, as I mentioned before, a bit hit and miss. I use it from Postman, and here is how I set up Postman to get it working.

I create an API key from within Grafana (under Configuration and then API Keys) and set that as a Bearer Token under the Authentication section in Postman like so:-

And the Headers are pretty standard like so:-
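
For anyone not using Postman, the equivalent request with curl looks something like this – the endpoint is the alertmanager config call from earlier, and GRAFANA_API_KEY is just a placeholder for the key you generated:

# read the current alertmanager config (contact points and notification policies)
curl -s \
  -H "Authorization: Bearer $GRAFANA_API_KEY" \
  -H "Content-Type: application/json" \
  http://localhost:3000/api/alertmanager/grafana/config/api/v1/alerts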

If you have questions or get stuck, reach out to me in the comments below or on Twitter.

Don’t forget to subscribe to my YouTube Channel.



Config Mapping in Bicep – cool stuff!

Recently I was looking at ways to create config for different environments whilst using Bicep, and a friend sent me his code which solves the exact issue I was trying to find a good solution for.

I want to highlight this, as I feel this is probably not that well known and people will find this useful.

Shout out to https://twitter.com/ElYusubov for showing me how to do this – check out his Bicep content as it has been a great help.

Problem statement:
How do I create config for dev, test, acceptance and prod in a Bicep file and make it reusable, so that I don't need parameter JSON files and other kinds of workarounds?

Consider that you want to create an App Service plan and you need different settings per environment.
We could create the App Service Plan for each environment and create some config in the Bicep file, and all would be good.

This will work, but there is a better way, so let me show you Ely's example for this exact scenario.

The following image shows you how to create such config.

The code for this can be found here.

Now we need to create the App Service plan and make it reusable for different environments (we don't want to create things separately for each environment if we can help it).

The following is the code for creating the actual App Service plan itself

Again this code is available here -> Learn-Bicep/main.bicep at main · ElYusubov/Learn-Bicep (github.com)
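
If the screenshots above don't come through, the general shape of the pattern is something like the following – a rough sketch rather than Ely's exact code (see his repo linked above for the real thing), assuming an environmentType parameter:

@allowed([
  'dev'
  'prod'
])
param environmentType string = 'dev'
param location string = resourceGroup().location

// one map holding the per-environment settings (add test / acceptance the same way)
var environmentConfigurationMap = {
  dev: {
    appServicePlan: {
      skuName: 'B1'
      capacity: 1
    }
  }
  prod: {
    appServicePlan: {
      skuName: 'P1v3'
      capacity: 3
    }
  }
}

// one App Service plan definition, reused for every environment
resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: 'plan-myapp-${environmentType}'
  location: location
  sku: {
    name: environmentConfigurationMap[environmentType].appServicePlan.skuName
    capacity: environmentConfigurationMap[environmentType].appServicePlan.capacity
  }
}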

Don’t forget to subscribe to my YouTube Channel.



Bicep Access policies and Managed Identity

In this post I cover off access policies in KeyVault and also User Assigned Managed Identity using Bicep.

If you are doing anything with Bicep then this is still the best resource I have found – https://github.com/Azure/ResourceModules/ – it shows you how things are done, and although you need to figure the rest out from there, it still has a LOT on how to go about stuff in Bicep.

Access Policies in Bicep
When you create a Key Vault you have to give people / accounts access to be able to use the Key Vault in Azure.

Within the portal you need to go to the following areas:-

So in order to give applications and users access we add access policies. Now if you look up the Microsoft docs page for this, you’ll more than likely end up here:-
https://docs.microsoft.com/en-us/azure/templates/microsoft.keyvault/vaults/accesspolicies?tabs=bicep

This kind of page isn't going to help you very much – in fact, it's not going to help you at all. These types of pages in the Microsoft docs are, I would say, close to pointless to be honest.

I found a page you can compare it to, which covers what I think we need to see more of in the Microsoft docs.

Compare the docs page to this wonderful blog post:-
https://ochzhen.com/blog/key-vault-access-policies-using-azure-bicep

It covers everything you're ever going to need to know in a simple blog post, which also has this layout:-

Why is this so good in my opinion? It tells me everything I need to know about access policies, explains it all, and has really useful samples. The docs need explanations and real-world examples; the examples they give are normally far too basic, imo.

Managed Identity in Bicep

First thing to say is that Managed Identity in Azure has its own area in the Azure Portal – wut? Yeah, it's been there a while now 🙂

Ok, so do you want to use a System Assigned Managed Identity or a User Assigned Managed Identity? Please learn about both by watching this video – https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview

I always opt for a User Assigned Managed Identity, and I want to use it to access my Azure resources – for example, so that my nice new Docker container, which is now running in Container Instances, can use the User Assigned Managed Identity to go to Key Vault and get secrets.

So within my new User Assigned Managed Identity I can add Azure Role Assignments like so:-

Here I am giving Owner rights on the Resource Group and KeyVault Admin access to my Managed Identity as examples.

What does this look like in Bicep?

Creating a User Assigned Managed Identity is very easy:-
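
Something along these lines should do it (a sketch – the name is a placeholder):

param location string = resourceGroup().location

// the user assigned managed identity itself
resource managedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
  name: 'id-myapp-dev'
  location: location
}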

And then to add Role Assignments and use existing definitions you can do the following:-
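
A rough sketch of what one of those role assignments looks like, using the managedIdentity resource from the snippet above – the GUID is the well-known built-in Owner role definition ID, so double-check role IDs against the docs, and see the gist linked below for the full code:

// built-in Owner role definition
var ownerRoleDefinitionId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '8e3af657-a8ff-443c-a75c-2fe8c4bcb635')

// assign Owner on the resource group to the managed identity
resource ownerRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, managedIdentity.id, ownerRoleDefinitionId)
  properties: {
    roleDefinitionId: ownerRoleDefinitionId
    principalId: managedIdentity.properties.principalId
    principalType: 'ServicePrincipal'
  }
}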

Full code can be found here:- https://gist.github.com/gsuttie/3ab106252faf6ef7726441f70d611c7d

So there is an issue/bug with doing this; let me explain. In the example above I create a User Assigned Managed Identity and then add 2 Azure role assignments to it. If I delete the managed identity I end up with the following:-

The role assignments remain – meaning if I run my Bicep code again it won't work and gives me an error, something like this:-

“RoleAssignmentUpdateNotPermitted”, “message”: “Tenant ID, application ID, principal ID, and scope are not allowed to be updated.”

I hope this blog post makes sense and is helpful to someone.

Don’t forget to subscribe to my YouTube Channel.




.NET 4.x app running in a container using Docker

So this blog post comes from doing some work recently to containerize a .NET 4.8 project that needs to run in a Windows container.

I have written about Windows containers already, so this is going to be some additional tips to help anyone who may run into the same issues I did.

Let's jump right in – the Dockerfile looks like this:-

So we are using the mcr.microsoft.com/dotnet/framework/sdk:4.8 Docker image for building the code; this image comes with a number of useful tools.

Then we copy some files over and restore NuGet packages to a packages folder.
We then use MSBuild to build the project to an output folder.

On line 12, I then use another image, mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019, for setting up IIS, as well as copying some files from the build stage into the image which will run the code within IIS.
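
In case the screenshot doesn't come through, here is a rough sketch of what that Dockerfile looks like – the project names, paths and msbuild output location are placeholders, so adjust them to your own solution:

FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
WORKDIR /src

# copy the source over and restore NuGet packages into a packages folder
COPY . .
RUN nuget restore MyApp.sln -PackagesDirectory packages

# build the web project to an output folder
RUN msbuild MyApp/MyApp.csproj /p:Configuration=Release /p:OutputPath=C:/out

# runtime image - IIS on Windows Server Core
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
COPY startiis.ps1 /startiis.ps1
COPY Initialisecontainer.ps1 /Initialisecontainer.ps1
RUN powershell.exe -File C:/startiis.ps1

# copy the published site from the build stage into the folder startiis.ps1 created
COPY --from=build /out/_PublishedWebsites/MyApp /inetpub/mysite

ENTRYPOINT powershell.exe C:/Initialisecontainer.ps1; C:/ServiceMonitor.exe w3svc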

The contents of startiis.ps1 are as follows:-

So this creates a new folder for the site, enables IIS remote management (which means we can connect to the running container using IIS Manager), creates a new app pool and website, and finally removes the default IIS website.
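
And again, in case the screenshot doesn't load, a rough sketch of that script could look like this – the site, app pool and folder names are placeholders, and note that in the sketch I remove the default website before creating the new one so the port 80 binding is free:

# startiis.ps1 - a sketch, not the exact script from the screenshot
Import-Module WebAdministration

# create a folder for the site content
New-Item -Path 'C:\inetpub\mysite' -ItemType Directory -Force

# enable IIS remote management so you can connect to the running container with IIS Manager
Install-WindowsFeature Web-Mgmt-Service
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server' -Name EnableRemoteManagement -Value 1
Set-Service -Name WMSVC -StartupType Automatic

# remove the default IIS website, then create a new app pool and website pointing at the new folder
Remove-Website -Name 'Default Web Site'
New-WebAppPool -Name 'MyAppPool'
New-Website -Name 'MySite' -Port 80 -PhysicalPath 'C:\inetpub\mysite' -ApplicationPool 'MyAppPool'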

Note: I had some issues trying to use web.config transforms and ended up ditching that approach, instead creating a config for each environment and switching to the one I need using an environment variable. I also tried Config Builders, but I really don't like them; they don't update the actual file, they change the values on the fly, which is no fun when trying to debug anything.

So in the Dockerfile I have an entrypoint like so:-
ENTRYPOINT powershell.exe c:/Initialisecontainer.ps1; C:/ServiceMonitor.exe w3svc

This will start the container, and inside Initialisecontainer.ps1 you can overwrite web.config with web.test.config, for example.
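
A minimal sketch of what that could look like inside Initialisecontainer.ps1 (paths are placeholders, and the environment variable name matches the docker run example below):

# pick the config that matches the ASPNET_Environment variable and make it the active web.config
$environment = $env:ASPNET_Environment
$sitePath = 'C:\inetpub\mysite'
Copy-Item -Path "$sitePath\web.$environment.config" -Destination "$sitePath\web.config" -Force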

Tip: to ensure the container keeps IIS running, I add C:/ServiceMonitor.exe w3svc to the entrypoint and this works. I tried numerous ways to get the Docker container to stay running and went down the rabbit hole here, but this approach does work.

So now, once I have built the image, I can run it with this command:-

docker run -d -e ASPNET_Environment=Test <name of my container>

There are many ways to do this, and I opted for simplicity over anything else.

The main things are setting up IIS, which is pretty easy, and then removing the default site – if you deploy to the default site it can cause IIS to stop and therefore kill your container.

Please reach out if there is a better way, or if you have questions.

Don’t forget to subscribe to my YouTube Channel.