
Bicep Scenarios

I have been working with Bicep a lot on a recent project, and I recently replied to a tweet, as you can see below. In order to reply properly, I decided to write this blog post in the hope of giving some constructive feedback.

Let’s take the example of vnet peering – if you google/bing for “vnet peering bicep” you will most likely end up here – https://docs.microsoft.com/en-us/azure/templates/microsoft.network/virtualnetworks/virtualnetworkpeerings?pivots=deployment-language-bicep

This is somewhat helpful in that it shows you the format; however, it won’t help a lot of people, as it doesn’t have any examples of how it works.
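To show what I mean, here is a minimal sketch of what a peering can look like in Bicep – the vnet names and API version are placeholders I’ve picked for illustration:

resource hubVnet 'Microsoft.Network/virtualNetworks@2022-07-01' existing = {
  name: 'vnet-hub' // the vnet the peering hangs off
}

resource peering 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2022-07-01' = {
  parent: hubVnet
  name: 'peer-hub-to-spoke'
  properties: {
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: false
    allowGatewayTransit: false
    useRemoteGateways: false
    remoteVirtualNetwork: {
      id: resourceId('Microsoft.Network/virtualNetworks', 'vnet-spoke')
    }
  }
}

Bear in mind peerings are one-directional – you need a matching peering resource on the spoke side too.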

I always use https://github.com/Azure/ResourceModules as I can look at an example – they have deployment example links for every resource, like so:-

Now with this example I can see what the values look like and get a feel for what is needed. The first screenshot does explain the values, which is ok at best; an example would be good, and a common scenario would be even better.

When it comes to documentation, examples can be what makes or breaks it.
For me, scenarios are the answer to what I would like to see. If I want to create a keyvault, there’s every chance I need to create a secret and populate it; maybe I need an access policy or a managed identity to go along with it. These are real-world scenarios that the documentation doesn’t cover – yes, we can’t cover everything, but if we go back to the first screenshot above, it isn’t helping me a great deal, truth be told.
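To make that concrete, here is a minimal sketch (my own, not from the docs) of a keyvault plus a secret in Bicep – the names are placeholders, and in real life the secret value would come in as a @secure() parameter:

resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  name: 'kv-myapp-dev' // placeholder name
  location: resourceGroup().location
  properties: {
    sku: {
      family: 'A'
      name: 'standard'
    }
    tenantId: tenant().tenantId
    accessPolicies: [] // access policies or RBAC role assignments would go here
  }
}

resource secret 'Microsoft.KeyVault/vaults/secrets@2022-07-01' = {
  parent: keyVault
  name: 'mySecret'
  properties: {
    value: 'placeholder-value' // pass this in as a @secure() param in real code
  }
}

That is the sort of end-to-end snippet I would love the docs pages to lead with.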

Now, there is a common scenarios page from the Bicep team – https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/scenarios-secrets – which is a good start; I feel we really need a lot more of these, or at least links to GitHub repositories for further help.

When trying to figure out how to get a load balancer to play nicely with 2 virtual machines, I had to manually deploy it and then reverse engineer the Bicep – the sample in the documentation was a good start, but it didn’t cover what I needed.

I’ve had DMs saying that there aren’t enough people with enough time to write all of the scenarios, which is fine; finding a way to accept public contributions of example scenarios might be one way to go about it.

Either way, I feel there is room for improvement in a lot of the Microsoft docs – it’s an unpopular opinion, but there you have it.

If you have questions, reach out to me in the comments below or on Twitter.

Don’t forget to subscribe to my YouTube Channel.



Reverse Engineering ARM Templates to use with Bicep

When working with Bicep, there are times when the documentation isn’t there, and the examples aren’t there yet either.

Recently I needed to deploy Azure Database for PostgreSQL Flexible Server using Bicep. The docs for doing this are ok – examples would make the Bicep docs so much better – but anyway, if you have been using Bicep for a while you can manage to get it working.

I tend to go to the following website, https://github.com/Azure/ResourceModules/, when I am working with Bicep, and use these modules as much as I can – note that they are constantly being worked on and changed, so bear that in mind.

I figured out the Bicep for my PostgreSQL server, and now I need to add in a couple of extensions. I’m new to PostgreSQL and have never touched it, so I googled for an article, and it turns out it’s super simple to manually add in any extension – this article shows you how – https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions

So I decided to download the ARM template for my PostgreSQL server before the extensions were added, add the extensions into PostgreSQL by hand, and then download the ARM template again. Then I compare the two files and see what has been added.

Comparing both files, I see the following:-

I then use a website that generates Bicep code from existing ARM templates: https://bicepdemo.z22.web.core.windows.net/. Using this, I click on the Decompile button (top right) and point it at the ARM template I downloaded from the Azure portal after I had added in the extension manually. This then generates the Bicep code for me, and I can see the section I needed to add in the extension.

// Child 'configurations' resource that sets the azure.extensions server parameter
resource flexibleServers_psql_dev_weu_name_azure_extensions 'Microsoft.DBforPostgreSQL/flexibleServers/configurations@2022-01-20-preview' = {
  parent: flexibleServers_psql_dev_weu_name_resource
  name: 'azure.extensions'
  properties: {
    value: 'LTREE,UUID-OSSP' // comma-separated list of extensions to allow-list
    source: 'user-override'
  }
}
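Note the parent property – that symbolic name needs to point at the server resource declared elsewhere in the file. If the server is already deployed, a sketch like this (the server name is a placeholder) lets the snippet stand on its own:

resource flexibleServers_psql_dev_weu_name_resource 'Microsoft.DBforPostgreSQL/flexibleServers@2022-01-20-preview' existing = {
  name: 'psql-dev-weu' // placeholder – the name of the already-deployed server
}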

So now I have the missing extension code I need for my Bicep, and I can remove the manually added extensions, redeploy the code, and we are all good.

Summary

If you don’t know the Bicep code for what you need and you can’t find any samples, try manually deploying your service, downloading the ARM template, and using https://bicepdemo.z22.web.core.windows.net/ to decompile the ARM template back into Bicep.

If you have questions, reach out to me in the comments below or on Twitter.

Don’t forget to subscribe to my YouTube Channel.



Azure ML Investigations

A customer where I work suddenly had an issue where a machine learning model they had deployed to AKS over a year ago stopped working. They asked if someone could take a look into it – I always jump at the chance to learn something new and perhaps blog about it (I always forget to blog, so here we are).

Now, I have never really done anything with Azure ML before – I have clicked around the portal for a few minutes and that’s it – so here is how I went about it, just in case you have to do the same thing at some point.

An email from the customer shows me the error they are receiving, which says “Internal Server Error. Run: Server internal error is from Module Execute Python Script”.

Ok, so straight away I’m thinking it’s something to do with Python. Let’s try to replicate the error first of all (always my first port of call when debugging issues) – so I open up Postman, which I can use to test the REST call, send a POST to the URI, and yeah, I see the same issue. The customer has mentioned AKS, so I look into that; all appears to be fine there.

Time to crack open the Azure Machine Learning studio and go on the hunt for anything that might look interesting. Now, I know the name of the model from the customer, so I start poking around there; I run a test against it and get an error with a stack trace, still mentioning an internal server error, due to what looks like Python version 3.6 being used.

The code hadn’t been changed or redeployed for over a year but suddenly stopped working, and I’m thinking that’s odd – it must be some sort of dependency being pulled in when it runs. After a fair bit of googling I came across this link – Python SDK release notes – Azure Machine Learning | Microsoft Docs – which, after careful reading, says that there is a breaking change in version 1.41.0 of the Azure Machine Learning SDK for Python.

Now, unfortunately, I spent a fair bit of time on this, trying to figure out how, or even where, to update this in the model, with absolutely no luck whatsoever.

I then ended up on a call with a super helpful Microsoft engineer who knows Azure ML inside out, and after discussing the issue we figured out between us that a dependency on a package called Vowpal Wabbit was causing the issue – the latest version of this package requires a Python version newer than 3.6. So we figured we’d try pinning the version of the package in the designer to Vowpal Wabbit 8.10.1 – rebuilt the model and, bish bash bosh, it all works again like a charm.

The very best part of this was that I reached out on Twitter and 3 different very kind people asked if they could help. I want to say a huge thank you to Kevin Oliver (@TechnicalPanda), Pedro Fiadeiro (@plfiadeiro) and Sammy Deprez (@sammydeprez) – without your help I would have been very stuck, so many thanks to you all!

In summary, I learned a lot about Azure ML – how models are used, tested, and then deployed. I went down many a rabbit hole (pun intended) and eventually came up trumps. I knew next to nothing about Azure ML before this came up; I said I would fix it, and I never gave up. This is what I love doing: fixing stuff I know nothing about by asking questions and finding out the answers.

In the unlikely event that this helps anyone – awesome; if not, thanks for reading.

If you have questions, reach out to me in the comments below or on Twitter.

Don’t forget to subscribe to my YouTube Channel.



Grafana Alert Rules, Contact Points and Notification Policies with Azure

Recently Microsoft announced that they will have a Grafana service available to use on Azure – awesome stuff.

I like Grafana for dashboards; it’s got a bit to go, however, especially when it comes to alerts and doing things at scale.

You can choose to run Grafana locally (I’m running it on Windows), in a container, or on Azure. No matter where you use it, there are a few things I wanted to cover off to help people who are considering using it.

I am currently using version 8.4.5, and I wanted to create some Azure alerts to see what Grafana had in the way of alerting. It has some nice stuff, to be fair; how it goes about it needs some work, but I reckon it will definitely get there in upcoming versions.

Currently, creating dashboards is very simple; when it comes to Azure you need to:-

  • Create a data source (Azure Monitor)
  • Add a panel to a new dashboard
  • Select the datasource and then choose either Metrics, Logs or Azure Resource Graph.
  • Fill out the details

Simple stuff. Now, what if you want to create an alert? Well, the dashboard you create is stored as JSON, which contains all of the panels, the settings, etc. – alerts are stored separately; to be honest, I think alerting is still being worked on.

Anyway, alerts are stored elsewhere. The good news is there is an API for Grafana; the bad news is it’s not the best – either that or the documentation is wrong. If you try it out and it all just works, please do give me a shout.

Alert rules are the settings that define when an alert should fire – let’s say, if your virtual machine’s CPU goes above 75% for between 1 and 5 minutes, then raise an alert. An alert is made up of what are called contact points and notification policies in Grafana – now this idea I do like.

To create an Alert Rule you could do the following:-

POST http://localhost:3000/api/ruler/grafana/api/v1/rules/{your folder name here}

In the JSON below, replace {your datasource uid here} with the uid of your own datasource, and replace {your subscription id here} with your own subscription id.

{
    "name": "FUNCTION APPS - HTTP Server Errors (Total)",
    "interval": "1m",
    "rules": [
        {
            "expr": "",
            "for": "5m",
            "labels": {
                "Customer": "test customer",
                "alertto": "gregor"
            },
            "annotations": {
                "summary": "FUNCTION APPS - HTTP Server Errors > 100"
            },
            "grafana_alert": {
                "id": 115,
                "orgId": 27,
                "title": "FUNCTION APPS - HTTP Server Errors (Total)",
                "condition": "B",
                "data": [
                    {
                        "refId": "A",
                        "queryType": "Azure Monitor",
                        "relativeTimeRange": {
                            "from": 600,
                            "to": 0
                        },
                        "datasourceUid": "{your datasource uid here}",
                        "model": {
                            "azureMonitor": {
                                "aggregation": "Total",
                                "alias": "{{ resourcename }} - {{ metric }}",
                                "dimensionFilters": [],
                                "metricDefinition": "Microsoft.Web/sites",
                                "metricName": "Http5xx",
                                "metricNamespace": "Microsoft.Web/sites",
                                "resourceGroup": "rg-grafanaresources",
                                "resourceName": "grafana1",
                                "timeGrain": "auto"
                            },
                            "hide": false,
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "queryType": "Azure Monitor",
                            "refId": "A",
                            "subscription": "{your subscription id here}"
                        }
                    },
                    {
                        "refId": "B",
                        "queryType": "",
                        "relativeTimeRange": {
                            "from": 0,
                            "to": 0
                        },
                        "datasourceUid": "-100",
                        "model": {
                            "conditions": [
                                {
                                    "evaluator": {
                                        "params": [
                                            100
                                        ],
                                        "type": "gt"
                                    },
                                    "operator": {
                                        "type": "and"
                                    },
                                    "query": {
                                        "params": [
                                            "A"
                                        ]
                                    },
                                    "reducer": {
                                        "params": [],
                                        "type": "last"
                                    },
                                    "type": "query"
                                }
                            ],
                            "datasource": {
                                "type": "__expr__",
                                "uid": "-100"
                            },
                            "hide": false,
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "refId": "B",
                            "type": "classic_conditions"
                        }
                    }
                ],
                "intervalSeconds": 60,
                "rule_group": "FUNCTION APPS - HTTP Server Errors (Total)",
                "no_data_state": "NoData",
                "exec_err_state": "Alerting"
            }
        },
        {
            "expr": "",
            "for": "5m",
            "labels": {
                "Customer": "test customer",
                "alertto": "gregor"
            },
            "annotations": {
                "summary": "Azure SQL - DATA IO % > 75%"
            },
            "grafana_alert": {
                "id": 121,
                "orgId": 27,
                "title": "Azure SQL - Log IO %",
                "condition": "B",
                "data": [
                    {
                        "refId": "A",
                        "queryType": "Azure Monitor",
                        "relativeTimeRange": {
                            "from": 600,
                            "to": 0
                        },
                        "datasourceUid": "{your datasource uid here}",
                        "model": {
                            "azureMonitor": {
                                "aggregation": "Average",
                                "alias": "{{ resourcename }} - {{ metric }}",
                                "dimensionFilters": [],
                                "metricDefinition": "Microsoft.Sql/servers/databases",
                                "metricName": "log_write_percent",
                                "metricNamespace": "Microsoft.Sql/servers/databases",
                                "resourceGroup": "rg-grafanaresources",
                                "resourceName": "grafanadb/grafanadb",
                                "timeGrain": "auto"
                            },
                            "hide": false,
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "queryType": "Azure Monitor",
                            "refId": "A",
                            "subscription": "{your subscription id here}"
                        }
                    },
                    {
                        "refId": "B",
                        "queryType": "",
                        "relativeTimeRange": {
                            "from": 0,
                            "to": 0
                        },
                        "datasourceUid": "-100",
                        "model": {
                            "conditions": [
                                {
                                    "evaluator": {
                                        "params": [
                                            75
                                        ],
                                        "type": "gt"
                                    },
                                    "operator": {
                                        "type": "and"
                                    },
                                    "query": {
                                        "params": [
                                            "A"
                                        ]
                                    },
                                    "reducer": {
                                        "params": [],
                                        "type": "last"
                                    },
                                    "type": "query"
                                }
                            ],
                            "datasource": {
                                "type": "__expr__",
                                "uid": "-100"
                            },
                            "hide": false,
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "refId": "B",
                            "type": "classic_conditions"
                        }
                    }
                ],
                "intervalSeconds": 60,
                "rule_group": "Azure SQL - Log IO %",
                "no_data_state": "NoData",
                "exec_err_state": "Alerting"
            }
        }        
    ]
}

Lots of companies have products that produce nice dashboards, but in my opinion a dashboard is useless on its own – you shouldn’t have to look at a dashboard for the most part, especially if you’re doing something at scale. So, I want to have a dashboard with alerts that email me, or create a TopDesk or ServiceNow ticket, when there is something awry.

Contact points in Grafana are basically how someone or something should be contacted; these are normally email addresses or endpoints, like an Azure Function endpoint which you can use to create tickets, for example.

Notification policies are policies that act on the settings you provide – an example would be: if a label is matched, then use one of the contact points to do something. So if an alert is raised and the label on your dashboard is production, you can send the alert to the contact point you created to call an Azure Function, which will create a ServiceNow ticket.

The Grafana API can be found here – https://editor.swagger.io/?url=https://raw.githubusercontent.com/grafana/grafana/main/pkg/services/ngalert/api/tooling/post.json

It’s an interesting mix of v1/v2 endpoints, and some work while some don’t. I have had no luck getting the endpoints for contact points and notification policies to work – but you can use the following calls to get and save the config, should you want to create more of these at scale in other dashboards.

GET http://localhost:3000/api/alertmanager/grafana/config/api/v1/alerts

{
    "template_files": {},
    "alertmanager_config": {
        "route": {
            "receiver": "grafana-default-email",
            "routes": [
                {
                    "object_matchers": [
                        [
                            "customer",
                            "=",
                            "test customer"
                        ]
                    ]
                }
            ]
        },
        "templates": null,
        "receivers": [
            {
                "name": "grafana-default-email",
                "grafana_managed_receiver_configs": [
                    {
                        "uid": "ED40XnQnz",
                        "name": "email receiver",
                        "type": "email",
                        "disableResolveMessage": false,
                        "settings": {
                            "addresses": "<example@email.com>"
                        },
                        "secureFields": {}
                    }
                ]
            },
            {
                "name": "Gregor Suttie",
                "grafana_managed_receiver_configs": [
                    {
                        "uid": "ED4AunQ7kz",
                        "name": "Gregor Suttie",
                        "type": "email",
                        "disableResolveMessage": false,
                        "settings": {
                            "addresses": "azuregreg@azure.com",
                            "singleEmail": false
                        },
                        "secureFields": {}
                    }
                ]
            }
        ]
    }
}

And you can POST the same JSON (without the uid filled in) to create contact points and notification policies:

POST http://localhost:3000/api/alertmanager/grafana/config/api/v1/alerts

{
    "template_files": {},
    "alertmanager_config": {
        "route": {
            "receiver": "grafana-default-email",
            "routes": [
                {
                    "object_matchers": [
                        [
                            "customer",
                            "=",
                            "test customer"
                        ]
                    ]
                }
            ]
        },
        "templates": null,
        "receivers": [
            {
                "name": "grafana-default-email",
                "grafana_managed_receiver_configs": [
                    {
                        "uid": "",
                        "name": "email receiver",
                        "type": "email",
                        "disableResolveMessage": false,
                        "settings": {
                            "addresses": "<example@email.com>"
                        },
                        "secureFields": {}
                    }
                ]
            },
            {
                "name": "Gregor Suttie",
                "grafana_managed_receiver_configs": [
                    {
                        "uid": "",
                        "name": "Gregor Suttie",
                        "type": "email",
                        "disableResolveMessage": false,
                        "settings": {
                            "addresses": "azuregreg@azure.com",
                            "singleEmail": false
                        },
                        "secureFields": {}
                    }
                ]
            }
        ]
    }
}

API – the API for Grafana is, as I mentioned before, a bit hit and miss. I use it from Postman, and here is how I set up Postman to get it working.

I create an API key from within Grafana (under Configuration and then API Keys) and set that as a Bearer Token under the Authorization section in Postman, like so:-

And the Headers are pretty standard like so:-

If you have questions or get stuck, reach out to me in the comments below or on Twitter.

Don’t forget to subscribe to my YouTube Channel.



Config Mapping in Bicep – cool stuff!

Recently I was looking at ways to create config for different environments whilst using Bicep, and a friend sent me his code, which solves the exact issue I was trying to find a good solution for.

I want to highlight this, as I feel it is probably not that well known and people will find it useful.

Shout out to https://twitter.com/ElYusubov for showing me how to do this – check out his Bicep content, as it’s been a great help.

Problem statement:
How do I create config for dev, test, acceptance, and prod in a Bicep file and make it reusable, so that I don’t need parameter JSON files and other types of solutions?

Consider that you want to create an App Service plan and you need different settings per environment.
We could create the App Service Plan for each environment and create some config in the Bicep file, and all would be good.

This would work, but there is a better way, so let me show you Ely’s example for this exact scenario.

The following image shows you how to create such config.

The code for this can be found here.
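In case the image above doesn’t come through, here is a rough sketch of the pattern (the environment names and SKU values are examples I’ve picked, not necessarily Ely’s exact values):

@allowed([
  'dev'
  'prod'
])
param environmentType string = 'dev'

// One object keyed by environment name – add test/acceptance entries in the same shape
var environmentConfigurationMap = {
  dev: {
    appServicePlan: {
      sku: {
        name: 'F1'
        capacity: 1
      }
    }
  }
  prod: {
    appServicePlan: {
      sku: {
        name: 'P2v3'
        capacity: 3
      }
    }
  }
}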

Now we need to create the App Service plan and make it reusable for different environments (we don’t want to create content separately for each environment if we can help it).

The following is the code for creating the actual App Service plan itself:
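Building on the map sketched above, the plan simply indexes into it by environment – again, this is my rough version rather than Ely’s exact code:

param location string = resourceGroup().location

resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: 'plan-myapp-${environmentType}' // placeholder naming convention
  location: location
  // the same line works for every environment – only the map changes
  sku: environmentConfigurationMap[environmentType].appServicePlan.sku
}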

Again this code is available here -> Learn-Bicep/main.bicep at main · ElYusubov/Learn-Bicep (github.com)

Don’t forget to subscribe to my YouTube Channel.



Bicep Access policies and Managed Identity

In this post I cover off access policies in KeyVault and also User Assigned Managed Identity using Bicep.

If you are doing anything with Bicep then this is still the best resource I have found – https://github.com/Azure/ResourceModules/ – it shows you how to do things, and although you need to figure the rest out from there, it’s still got a LOT on how to go about stuff in Bicep.

Access Policies in Bicep
When you create a KeyVault you have to give people and accounts access to be able to use the KeyVault in Azure.

Within the portal you need to go to the following areas:-

So in order to give applications and users access we add access policies. Now if you look up the Microsoft docs page for this, you’ll more than likely end up here:-
https://docs.microsoft.com/en-us/azure/templates/microsoft.keyvault/vaults/accesspolicies?tabs=bicep

This kind of page isn’t going to help you very much – in fact, it’s not going to help you at all. These types of pages in Microsoft docs are, I would say, close to pointless, to be honest.

For comparison, I found a page that covers what I think we need to see more of in the way of Microsoft docs.

Compare the docs page to this wonderful blog post:-
https://ochzhen.com/blog/key-vault-access-policies-using-azure-bicep

It covers everything you’re ever going to need to know in a simple blog post, which also has this layout:-

Why is this so good, in my opinion? It tells me everything I need to know about access policies, it explains it all, and it has really useful samples – the docs need explanations and real-world examples; the examples they give are normally far too basic, imo.
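In that spirit, here is a minimal sketch of adding an access policy to an existing vault – the vault name and object id are placeholders you would pass in:

param keyVaultName string
param principalObjectId string // object id of the user/app/identity being granted access

resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
  name: keyVaultName
}

// the special name 'add' appends these policies to whatever the vault already has
resource addAccessPolicy 'Microsoft.KeyVault/vaults/accessPolicies@2022-07-01' = {
  parent: keyVault
  name: 'add'
  properties: {
    accessPolicies: [
      {
        tenantId: tenant().tenantId
        objectId: principalObjectId
        permissions: {
          secrets: [
            'get'
            'list'
          ]
        }
      }
    ]
  }
}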

Managed Identity in Bicep

The first thing to say is that Managed Identity in Azure has its own area in the Azure portal – wut? Yeah, it’s been there a while now 🙂

Ok, so do you want to use a system-assigned managed identity or a user-assigned managed identity? Please learn about both by watching this video – https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview

I always opt for a user-assigned managed identity, and I want to use it to access my Azure resources – for example, so that my nice new Docker container, which is now in Container Instances, can make use of the user-assigned managed identity to go to KeyVault and get secrets.

So within my new User Assigned Managed Identity I can add Azure Role Assignments like so:-

Here I am giving Owner rights on the Resource Group and KeyVault Admin access to my Managed Identity as examples.

What does this look like in Bicep?

To create a user-assigned managed identity, you can do it very easily:-
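Something like this minimal sketch (the identity name is a placeholder):

param location string = resourceGroup().location

resource managedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
  name: 'id-myapp-dev' // placeholder name
  location: location
}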

And then to add Role Assignments and use existing definitions you can do the following:-
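Roughly like this sketch – the GUID here is, I believe, the built-in Key Vault Administrator role definition id, and the guid() seed is just my own convention:

// built-in roles are referenced by their fixed role definition ids
var keyVaultAdminRoleId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '00482a5a-887f-4fb3-b363-3b7fe8e74483')

resource kvAdminAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  // seeding guid() with the identity's resource id means a recreated identity gets a new assignment name
  name: guid(resourceGroup().id, managedIdentity.id, keyVaultAdminRoleId)
  properties: {
    principalId: managedIdentity.properties.principalId
    roleDefinitionId: keyVaultAdminRoleId
    principalType: 'ServicePrincipal'
  }
}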

Full code can be found here:- https://gist.github.com/gsuttie/3ab106252faf6ef7726441f70d611c7d

So, there is an issue/bug with doing this – let me explain. In the example above I create a user-assigned managed identity and then add 2 Azure role assignments to the managed identity. If I delete the managed identity, I end up with the following:-

The role assignment remains – meaning if I run my Bicep code again it won’t work, and it gives me an error – something like this:-

“RoleAssignmentUpdateNotPermitted”, “message”: “Tenant ID, application ID, principal ID, and scope are not allowed to be updated.”

I hope this blog post makes sense and is helpful to someone.

Don’t forget to subscribe to my YouTube Channel.




.NET 4.x app running in a container using Docker

So this blog post comes from doing some work recently to containerize a .NET 4.8 project that needs to run in a Windows container.

I have written about Windows containers already, so this is going to be some other tips to help anyone who may run into the same issues I did.

Let’s jump right in – the Dockerfile looks like this:-

So we are using the mcr.microsoft.com/dotnet/framework/sdk:4.8 Docker image for building the code; this image has a number of useful tools included.

Then we copy some files over and restore NuGet packages to a packages folder.
We then use MSBuild to build the project to an output folder.

On line 12, I then use another image, mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019, for setting up IIS, as well as copying some files from the build layer into the image which will run the code within IIS.

The contents of startiis.ps1 are as follows:-

So this creates a new folder for the site, enables IIS remote management (which means we can connect to the running container using IIS), creates a new app pool and website, and then finally removes the default IIS website.

Note: I had some issues trying to get web.config transforms working and ended up ditching them – instead I created a config for each environment and switch to the one I need using an environment variable. I also tried Config Builders, but I really don’t like them, as they don’t update the actual file – they change the values on the fly, which is not fun when trying to debug anything.

So in the Dockerfile I have an entrypoint like so:-
ENTRYPOINT powershell.exe c:/Initialisecontainer.ps1; C:/ServiceMonitor.exe w3svc

This runs when the container starts, and inside Initialisecontainer.ps1 you can overwrite the web.config with web.test.config, for example.

Tip: to ensure the container keeps IIS running, I add C:/ServiceMonitor.exe w3svc to the entrypoint, and this works. I tried numerous ways to get the Docker container to stay running and went down a rabbit hole here, but this does work.

So now, when I have built the container image, I can run it with this command:-

docker run -d -e ASPNET_Environment=Test <name of my container>

There are many ways to do this, and I opted for simplicity over anything else.

The main things are the setting up of IIS, which is pretty easy, and then removing the default site – if you deploy over the default site, IIS stops and therefore kills your container.

Please reach out if there is a better way, or if you have questions.

Don’t forget to subscribe to my YouTube Channel.




SEE WHAT’S CHANGED IN AZURE USING RESOURCE GRAPH: Part 2

This second part goes a little further than part 1. I could have gone about this many ways, but I wanted to use Azure Resource Graph with a Logic App – why not, eh?

So I created a dashboard in Azure from the queries I wrote, and it looks something like this:-

So, from the bottom part, which says “Resources that have No Tags”, I wanted to email a report to myself every x days.
So I created a Logic App which looks like this:-

Step 1 – a recurrence trigger of x days
Step 2 – I call the management API, which is -> https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

With a JSON Payload like so:-

{
  "query": "Resources | join kind=leftouter (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey]) | where tagKey == '' and tagValue == '' | order by resourceGroup | project SubName, subscriptionId, name, ['Resource Type']=type, location, resourceGroup, tags",
  "subscriptions": [
    "subscription id goes here"
  ]
}

Step 3 – I then parse the JSON which looks like this:-


Step 4 – I initialize a variable with the sample JSON returned from the API call in Step 2
Step 5 – I create an HTML Table which formats my results like so:-

Step 6 – I then send the email using whatever service you like – I chose SendGrid, and that looks like this:-

So every x days I get a report telling me whatever the Azure Resource Graph query returns, formatted in an email.

Don’t forget to subscribe to my YouTube Channel.



See what’s changed in Azure using Resource Graph

At work I needed to see if it was easy to figure out which resources have changed in Azure in, say, the last day, week, or month. Now, there are multiple ways to do this, but the one I like most is using Azure Resource Graph (which I really like); with that and some Kusto Query Language (KQL) you can find out just about anything you want to about your resources. I’ve used Azure Resource Graph a lot and do workshops on it regularly – not sure why people don’t use it more often, tbh.


So if I want to see what has changed in a resource group in the last few days, it’s as simple as reading this Microsoft web page: https://docs.microsoft.com/en-us/azure/governance/resource-graph/how-to/get-resource-changes.

From here we can start to think about the kinds of queries of interest.

These include:-

  • Show me all of the resources that have been created in the past 7 days.
  • Show me the tags for all resources and group them by Tag.
  • Show me all resources which haven’t been tagged.

These are the kinds of things that interest me at the moment.

If we have a new subscription, we can add a landing zone and add tagging by default to the subscription, the resource group, the resources, etc., using Azure Policy – which is all fine.

One other thing I like doing is adding a created-date tag to resources on the day they are created, in case I want to know this information at a later stage.
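As a sketch, one way to stamp that tag when deploying with Bicep (the storage account here is just an example resource, and utcNow() can only be used as a parameter default):

param createdDate string = utcNow('yyyy-MM-dd')

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stmyappdev001' // placeholder name
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  tags: {
    CreatedOnDate: createdDate // stamped at deployment time
  }
}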

Let’s get back to existing subscriptions, where we can’t add a landing zone for various reasons; here, I want to know which resources are tagged with a certain tag.

I found a really useful blog from some guy called John Klimster, who I have met in person – a thoroughly nice chap with a great blog!

If you’re interested, the post I found particularly helpful for working with KQL and tags was this one from John.

I’m off to figure out what I need to do next – which will be hooking my queries up to a Logic App and sending a report on a schedule of some sort, I reckon.

Don’t forget to subscribe to my YouTube Channel.



Festive Tech Calendar 2021 – it’s a wrap

I would just like to say a big huge THANK YOU! to everyone who was involved in this year’s Festive Tech Calendar – organisers, helpers, and the people who contributed their submissions.

December is a busy time for most people, trying to tie up loose ends with projects etc., and I appreciate it’s not a nice time to have to deliver a workshop or record a talk for our event.

This year we had 137 entries at the start of December, but due to Covid a number of people had to withdraw.
To each and every person who withdrew: I just want to go on record to say you never need to say sorry to anyone – health, and family health, comes before everything else. Thank you for submitting, and I hope everyone submits again next year.

We had a record number of people taking part this year, and it’s been the biggest event we have run yet, so next year we will continue with the Festive Tech Calendar – if you missed out this time, please submit again next year.

Everyone who submits gets to take part – and I do mean everyone.

I have learned a few things running these events over the past 4 years, and I plan to record some videos on how to go about running an event like this using Sessionize.com – more on that coming in February.

This year we decided to raise money for charity, and we set a target of £5000, which was hugely ambitious.

I am delighted to say that, at the time of writing, we have achieved £5000 – and you can still donate: https://www.justgiving.com/fundraising/festivetechcalendar

Before I go here are some figures thus far:-

  • 90 Videos
  • 7.7k Views
  • 514 hours worth of content watched

Next year we will make some changes so that we can concentrate more on raising money for charity, as this year time ran away from us in that regard.

Thanks for reading, and I hope you all sign up again next year!