
Azure DP-300 Exam Study Guide

Plan and Implement Data Platform Resources (15-20%)
Deploy resources by using manual methods



Azure App Service

Troubleshooting App Services in Azure

In this blog post, I wanted to cover how to go about troubleshooting an App Service in Azure, in this case a web app with an Azure SQL backend, where users have reported slow performance of the website.

The first thing I tend to look at is the backend store, in this case Azure SQL, and we have some really great tooling we can use to troubleshoot performance issues with Azure SQL.

The first port of call is to open up the Azure Portal, go to the Resource Group with the issues, click on the SQL database, and head to the Intelligent Performance section on the left-hand menu, as highlighted below:

Performance Overview
This currently has a recommendations area that suggests adding 5 different indexes, all of which are marked as HIGH impact.

Indexes can sometimes have adverse effects, so it’s recommended to look at the suggestions, copy the script from the recommendations, and consider whether the index will indeed help the performance of your queries.

Query Performance Insight
The second area I look at is Query Performance Insight, where we can see the average CPU, Data IO, and Log IO on the database over the last 24 hours. We also get an insight into which queries are running and taking the longest time to complete.

I changed the graph above to show the last 7 days, and I can see CPU has been maxed out at 100% for a long period within that window, as seen below:

Long Running Queries
This area identifies queries which are taking a long time to complete and is always worth checking regularly.
The following is a screenshot of long-running queries within the database for the past week. To find this information, select the database in the portal, select Query Performance Insight, then select Long running queries; I then chose Custom and changed the time period to Past week.

We can see above that the yellow query is the one with the longest duration this past week; you can click on the yellow area and it will show you the details of that long-running query.

Automatic Tuning

Azure SQL Database built-in intelligence automatically tunes your databases to optimize performance. What can automatic tuning do for you?

  • Automated performance tuning of databases
  • Automated verification of performance gains
  • Automated rollback and self-correction
  • Tuning history
  • Tuning action Transact-SQL (T-SQL) scripts for manual deployments
  • Proactive workload performance monitoring
  • Scale out capability on hundreds of thousands of databases
  • Positive impact to DevOps resources and the total cost of ownership

I would recommend turning this on and leaving it set like the following:

This means that Azure will tune the indexes using built-in intelligence and create indexes when it thinks you need them based on usage patterns. A word of caution here: these recommendations aren’t always correct, so please bear this in mind.

Log Analytics
I always recommend adding the Azure SQL Analytics workspace solution to the subscription, as this gives us further insight into Azure SQL. Once you turn it on, you need to wait some time before it has gathered a decent amount of data.

The screenshot below shows the type of information we can get from it. This screenshot was taken not long after the solution was turned on, so if you wait some time it will have much more useful detail:

From here we can get more information about deadlocks, timeouts, etc.


Now let’s take a look at the website, which is in an App Service in Azure, and see what tools we can use to help us troubleshoot issues with its performance.

I always recommend adding Application Insights to Azure resources when possible, and here if we click on the App Insights for the web app we can instantly get some basic info. If you click on the Application Dashboard, as seen below, we get a high-level view of what’s going on in our App Service.

The Application dashboard for a typical web app might look something like this:
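
As an aside, App Insights can be wired up from the portal, but for an ASP.NET Core app you can also enable it in code. A minimal sketch, assuming the Microsoft.ApplicationInsights.AspNetCore NuGet package with the instrumentation key or connection string already in the app’s configuration:

using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers Application Insights telemetry collection; the
        // instrumentation key / connection string is read from configuration.
        services.AddApplicationInsightsTelemetry();

        services.AddControllersWithViews();
    }
}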

OK, so let’s now do some further investigation into our App Service issues. This time I chose the App Service itself and then selected Diagnose and solve problems from the left-hand menu. This feature is underused in my opinion and is very useful indeed; I’m not sure many people have looked at it, but it can be pretty helpful with recommendations and with pointing out some things you may want to think about remediating.

Once in the Diagnose and solve problems area, I usually click on Availability and Performance within the Troubleshooting categories section, and if you do, you’ll see something like this:

In the image above we can see that we have some App Performance issues to go and investigate. Clicking into the App Performance section, we get in-depth details about the performance, along with Observations such as Slow Request Execution, with details of the web page, average latency, total execution time, and so on. The detail here is very helpful in tracking down potential issues in the code or in the configuration of your web application. There are a number of options to check within each of the 6 troubleshooting categories; an example is shown below for the Availability and Performance section:

Summary
In summary, there are a number of really awesome tools to aid us with troubleshooting App Service performance issues; go check them out the next time your web app is running poorly.



Azure Functions

Azure Durable Functions – Support Caller

I wrote an Azure Durable Function which makes a phone call to out-of-hours support engineers when an alert is raised within their production Azure environment, and I wanted to talk about how I did it and what I used.

When an alert is raised within the customer’s Azure environment, I send an HTTP POST to my Azure Durable Function endpoint from the reporting tool we use, which is PRTG (you can do the same from Azure just as easily). We use PRTG to monitor Azure resources for things like high CPU and the amount of free disk space remaining.

Durable Functions were chosen so that I can make use of what’s called an orchestrator function. You can read more about Durable Functions here: https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=csharp

If you read the above article you’ll get a good grasp of what an orchestrator in Durable Functions can do. To convey why I used them, I had the following workflow requirements:

  1. Receive details of the alert.
  2. Retrieve the support people’s phone numbers.
  3. When an alert is raised, call the first number up to 3 times within 5 minutes; if answered by a human, read out the alert message and some extra content, and ask them to acknowledge the issue by pressing 1 on the keypad.
  4. If the support engineer doesn’t answer after 3 attempts, move on to the next number.
  5. If the support engineer answers and presses 1, stop the orchestration.

1 – Receive details of the alert
This is really easy to do; here I have a template set up in PRTG which forwards the details of the alert to my durable function.
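
On the receiving end, the endpoint PRTG posts to is an HTTP-triggered durable client function that starts the orchestration. A minimal sketch, assuming Durable Functions 2.x; the function names and payload handling are illustrative rather than my exact code:

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class SupportCaller
{
    // HTTP-triggered client function that PRTG posts the alert to.
    [FunctionName("HttpStart")]
    public static async Task<HttpResponseMessage> HttpStart(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        // The alert details posted by PRTG become the orchestration input.
        string alertDetails = await req.Content.ReadAsStringAsync();

        string instanceId = await starter.StartNewAsync("MainOrchestrator", null, alertDetails);
        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}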

2 – Retrieve the support people’s phone numbers
I am storing the support people’s phone numbers in a CSV file which is uploaded to a simple Azure Storage account; this allows the customer to edit the support numbers easily.
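
As a sketch, the activity function that reads the numbers could look like the following. The container and blob names and the StorageConnectionString setting are illustrative; it assumes the Azure.Storage.Blobs package (plus the usings from the sketch above, System, System.Collections.Generic, and System.Linq) and one phone number per line in the CSV:

[FunctionName("GetNumbersFromStorage")]
public static async Task<List<string>> GetNumbersFromStorage(
    [ActivityTrigger] IDurableActivityContext context)
{
    // Illustrative container/blob names; the connection string lives in app settings.
    var blob = new BlobClient(
        Environment.GetEnvironmentVariable("StorageConnectionString"),
        "config", "support-numbers.csv");

    // Download the CSV and return one phone number per line.
    BlobDownloadResult download = await blob.DownloadContentAsync();
    return download.Content.ToString()
        .Split('\n', StringSplitOptions.RemoveEmptyEntries)
        .Select(line => line.Trim())
        .ToList();
}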

3 – Making the call
Here I make use of the Twilio REST API: I create a CallResource object by calling its Create method. Twilio has a markup language called TwiML which you can use to build a message of your own, and Twilio will read this message out to the person who picks up the phone call. All of the details about who to call, what the call says, and the action they need to take are stored in config, so it can be very easily changed for different customers.

The code to make a call is actually really simple.

// Assumes TwilioClient.Init(accountSid, authToken) has already been called.
// <Gather> posts the pressed digit to the callback URL; <Say> reads the message out.
var call = CallResource.Create(
    twiml: new Twilio.Types.Twiml(
        $"<Response><Gather action='{callbackhandlerURL}' numDigits='1'>" +
        $"<Say>{messageToReadToUser}</Say></Gather></Response>"),
    to: to,
    from: from);

4 / 5 – Answering the call
This was the tricky part. Figuring out whether they had picked up the call was my initial challenge, and I tried numerous things from the Twilio docs which were misleading and didn’t seem to work as I expected; the samples for this part of the documentation are sadly lacking.
Now, when the call comes in, the support engineer is asked to press 1 to acknowledge they have received the call so the orchestration can end. Part of this involves having a callback URL, so that Twilio can send the details of the call back to a URL of your choice; things like call length, and whether they pressed 1 during the call.
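
A rough sketch of what the callback handler could look like. The Digits form field is what Twilio posts back from a <Gather>, and passing the orchestration instance ID on the callback URL (shown here as a query string parameter) is one way to route the acknowledgement back to the waiting orchestration; the CallAcknowledged event name is illustrative. Assumes the Microsoft.AspNetCore.Http and Microsoft.AspNetCore.Mvc namespaces:

[FunctionName("CallbackHandler")]
public static async Task<IActionResult> CallbackHandler(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
    [DurableClient] IDurableOrchestrationClient client)
{
    // Twilio posts the outcome of the <Gather> as form fields;
    // "Digits" holds whatever the caller pressed on the keypad.
    string digits = req.Form["Digits"];
    string instanceId = req.Query["instanceId"]; // illustrative: passed on the callback URL

    if (digits == "1")
    {
        // Tell the waiting orchestration that the engineer acknowledged the alert.
        await client.RaiseEventAsync(instanceId, "CallAcknowledged", true);
    }

    // Return empty TwiML so Twilio ends the call cleanly.
    return new ContentResult { Content = "<Response/>", ContentType = "text/xml" };
}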


Orchestration
The orchestration part was pretty tricky for me to get right. Huge thanks to @marcduiker, who was an enormous help to me on this; figuring out how to do some of the steps proved tricky but very interesting!

Marc is putting together an Azure Functions University series where you can go and learn all about Azure Functions – please go check that out.


The orchestration logic was something like the following:

MainOrchestrator – this function’s job is to be the orchestrator. Within this function we call sub-orchestrators and also activity functions; think of an activity function as a separate function that does one thing. I had a GetNumbersFromStorage activity function and a SendNotification activity function. The idea behind Durable Functions is to be able to call multiple Azure Functions using patterns, one of which is the orchestrator pattern; a sketch follows below.
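
A minimal sketch of how the main orchestrator could look (not the production code; CallRequest is an illustrative type I’m using here to pass the number and alert text around):

[FunctionName("MainOrchestrator")]
public static async Task MainOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    string alertDetails = context.GetInput<string>();

    // Activity: pull the on-call numbers from the CSV in blob storage.
    var numbers = await context.CallActivityAsync<List<string>>("GetNumbersFromStorage", null);

    foreach (var number in numbers)
    {
        // Sub-orchestration: ring this number up to 3 times, spread over 5 minutes.
        bool acknowledged = await context.CallSubOrchestratorAsync<bool>(
            "RetryOrchestrator", new CallRequest { Number = number, Alert = alertDetails });

        if (acknowledged)
            break; // the engineer pressed 1, so stop the orchestration
    }
}

public class CallRequest
{
    public string Number { get; set; }
    public string Alert { get; set; }
}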

RetryOrchestrator – this function’s job is to work out what to do when the call wasn’t answered the first time: do we need to make another call, how many times have we called this number, and have we spread the calls out over 5 minutes so we don’t make multiple calls at the same time? A sketch follows below.
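
A hedged sketch of the retry logic, using a durable timer plus WaitForExternalEvent, which is the standard human-interaction pattern from the Durable Functions docs; the 100-second spacing and the CallAcknowledged event name are illustrative:

[FunctionName("RetryOrchestrator")]
public static async Task<bool> RetryOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var request = context.GetInput<CallRequest>();

    for (int attempt = 0; attempt < 3; attempt++)
    {
        // Activity that makes the Twilio call shown earlier.
        await context.CallActivityAsync("SendNotification", request);

        // Wait a while for the "pressed 1" callback before calling again,
        // so the 3 attempts are spread across roughly 5 minutes.
        using var cts = new CancellationTokenSource();
        Task<bool> acknowledged = context.WaitForExternalEvent<bool>("CallAcknowledged");
        Task timeout = context.CreateTimer(
            context.CurrentUtcDateTime.AddSeconds(100), cts.Token);

        if (await Task.WhenAny(acknowledged, timeout) == acknowledged)
        {
            cts.Cancel(); // tidy up the pending timer
            return true;  // answered and acknowledged
        }
    }

    return false; // no acknowledgement; the caller moves on to the next number
}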


Twilio
To make this all work, I created a Twilio account and purchased a number, which is used to make the calls. It costs 2 pence per call, or 7 pence per call if you want to detect whether someone answered using answering machine detection, so there are options available.

Summary
Durable Functions have a lot of great use cases; definitely check them out and build something yourself to get a handle on how they work. The Azure Durable Functions docs are really good.




AKS Zero To Hero – Series for everyone

Richard Hooper and I have started a new series called AKS Zero to Hero. The aim is for Richard to teach me AKS, taking me from zero knowledge to, hopefully, hero status when it comes to AKS.

We see a lot of customers either already using AKS or wanting help getting started with it, so it’s about time I got up to speed. Whether you are new to AKS or a seasoned professional, we will be covering as much AKS content as we possibly can, and the aim is to have content out each week.

We will be taking an ASP.NET Core project, which is open-sourced on GitHub at https://github.com/CloudFamily/AKS_Zero_to_Hero, deploying it to AKS, and covering as many areas of AKS as we possibly can. The series will run for a while, so please hit subscribe and click on the bell notification to be alerted when a new video drops.

The YouTube playlist for all of our videos thus far can be found below.

Please give us feedback, ask questions, etc., and we will try to answer them in an ask-me-anything session which we are planning within the next month.

Don’t forget to subscribe to my own YouTube Channel.



Using the latest version of the Azure CLI

In this blog post, I wanted to quickly cover how you can keep the Azure CLI up to date on your local system and within Azure. I use the Azure CLI as my go-to choice for writing deployment scripts in Azure. The reason you want to keep it up to date is to get new additions as well as bug fixes for previous versions.

The Azure command-line interface (Azure CLI) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.

It’s super simple to keep it up to date; you can do this by opening a PowerShell or Bash window and typing:

az upgrade

But maybe you want to keep it up to date without having to keep checking; you can do that with the following command:

az config set auto-upgrade.enable=yes

Better yet, you can keep the Azure CLI up to date without ever being prompted by also setting:

az config set auto-upgrade.prompt=no

And that’s it; you no longer need to worry about whether you are using the latest version of the Azure CLI.

You can read more on this at the following URL: https://docs.microsoft.com/en-us/cli/azure/update-azure-cli?WT.mc_id=AZ-MVP-5003451

Don’t forget to subscribe to my YouTube Channel.



Immutable storage for Azure Storage Blobs

If you have storage blobs containing things like backups or files, Azure now has immutable storage for Azure Storage blobs, generally available in all public regions.

Immutable means unable to change or be changed. This means that if a customer has, let’s say, a backup, they can store it in an unchangeable state, which for some companies is very nice to have.

To take advantage of immutable storage, let’s go through what we need to do to test it out.

  • First of all, create a storage account.
  • Click on Containers and create a new container, give it a name and choose Private (no anonymous access).
  • Once created click on the name of your new container and then upload some files.
  • Once you have uploaded some files, click on Access Policy on the left-hand side. Notice we have 2 sections, Stored access policies and Immutable blob storage; under Immutable blob storage, select Add policy.
  • We now have 2 options to choose from:
    • Time-based retention
    • Legal hold

Time-based retention allows us to set a retention interval, in days, of anywhere between 1 day and 400 years; this also makes the files immutable.

Note: You cannot change this value to 0 at any time. Upon the expiration of the retention interval, the data continues to be in a non-modifiable state but can be deleted. Retention policy changes may take some time to take effect, and only 5 edits are permitted to the policy.

Legal hold retention means you add a tag to the blob container – each legal hold policy needs to be associated with 1 or more tags. Tags are used as a name identifier, such as a case ID, to categorize and view records.

You cannot delete or modify any files within the container whilst there is either a time-based retention policy or a legal hold policy in place; however, if you delete the legal hold policy you can then delete or modify files within the container.
With time-based retention, you can additionally allow protected appends and change the retention interval.
Time-based retention policies need to be locked in order to be active; to add a lock, click on the 3 dots and choose Lock policy.

Note: Once you apply the lock you cannot delete it, and just before you click Save on applying the lock you will see the following reminder:

Summary
I can see some people needing to keep backups immutable for a number of legal reasons, and this new feature will be very handy for them.

Don’t forget to subscribe to my YouTube Channel.



GitHub Actions 101

In this blog post series, I am going to cover my journey of learning about GitHub Actions.

To get started, let’s begin by describing what GitHub Actions are.


So what exactly are GitHub Actions?

“GitHub Actions makes it easy to automate all your software workflows, now with world-class CI/CD. Build, test, and deploy your code right from GitHub. Make code reviews, branch management, and issue triaging work the way you want.

GitHub Actions help you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks, called actions, and combine them to create a custom workflow.

You can write your own actions to use in your workflow, or share the actions you build with the GitHub community.

Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.”


Next, let’s list some of the best resources I have come across for getting started.

Don’t forget to check out my YouTube Channel.



A New Adventure

I’m very excited to share with you the news that I have accepted a position as an Azure Architect at a company in the Netherlands called Intercept.

Intercept has very recently been awarded Microsoft Partner of the Year 2020 in the Netherlands, beating strong competition from 18 other companies.

Intercept is a Microsoft Azure Management Elite Partner and a Gold Partner in 7 areas at present, which is pretty impressive.

I start my new role on September 1st. I will be working in and around Azure daily, which is exactly what I want to be doing, so to say I am excited is an understatement.

During Covid-19 I was furloughed, due to a customer not being able to support remote workers, and during this time a great number of people from Twitter and LinkedIn reached out to me asking if I would be interested in working with them. I thank each and every one of you; being furloughed was not much fun, but being asked if I would like to work with you and your companies was neat, to say the least.

I interviewed at a number of companies and received numerous fantastic offers, but ultimately my new role ticked more boxes than the rest and I couldn’t say no.

The job role, as well as the people I had spoken to at Intercept, were the deciding factors for me.

Again, thank you to everyone who reached out to me; you have no idea how much I appreciated it. Beers are on me if we manage to meet in person going forward.

So I look forward to rolling my sleeves up again and changing career direction ever so slightly. I am a renewed Azure MVP, and this is where I want to be working and learning day to day. I cannot wait to get started and to help people even more in the future.

In November 2017 I set myself a goal of becoming an Azure Architect and gaining as much knowledge as I could about Azure; the exams have helped, and I look forward to using Azure daily.

I remind myself that I am less than 3 years into my journey. I have a blog, a YouTube channel, and the 11 Azure certification badges below, all whilst being a development manager of 10+ people and not using Azure daily.

All it takes is hard work, goals, determination and you can do anything.

Don’t forget to subscribe to my YouTube Channel, and you can find me on Twitter @gregor_suttie.




DP-900 Azure Data Fundamentals

Happy to share that I sat the beta for this exam and passed. Here is a link to my study guide: https://gregorsuttie.com/2020/06/09/dp-900-microsoft-azure-data-fundamentals-exam-study-guide/

Another exam done, and the data side of Azure is something I would love to explore further if I ever get the chance.

Don’t forget to subscribe to my YouTube Channel.




Think of the next person

I have been lucky enough to work with a manager who was very good at being disciplined, and I wanted to share some of my learnings, talk about some things I have seen of late which really aren’t helpful, and show how easy they are to remediate.

So what is discipline in the IT industry? Well, it’s not something you read much about or will find in books; it’s something you pick up as you progress in your career. It’s much easier to pick examples of what not to do and then suggest a way to make things better.

Making things better should always be at the back of your mind in the IT industry: how can we make things better? I have a developer background, so most of what I will talk about covers some basic stuff, yet I still see it on almost every project that I come across.

If you think about the above paragraph on making things better, here is a good rule of thumb: imagine the next person who comes along has even less knowledge about whatever it is you’re doing; how can you help make their life a little bit easier?

Here is a list of some examples:

  • Don’t leave server folders lying around like New Folder, New Folder(1); instead, have a proper naming convention and stick to it (think of the next person coming along).
  • Don’t leave crap lying around with xxx appended to the start, or have DELETEME items lying around anywhere; instead, source-control everything and delete the rest (think of the next person coming along).
  • Don’t leave old deployments lying around; instead, archive them off, or have a process to delete all but the last x number of deployments (think of the next person coming along).
  • Don’t have one person holding vital knowledge about a system in their head; instead, document everything, yes everything, there I said it, and share it with as many people as deemed reasonable. We all have things we should have documented yet we don’t (think of the next person coming along).
  • Don’t let people leave your company without doing a proper handover; companies have on-boarding processes, so where is your off-boarding process? (think of the next person coming along).
  • Don’t move to the cloud and suddenly have no diagrams; instead, diagram your architecture, keep it up to date, and have a process in place to check the diagram is still valid (think of the next person coming along).

The above is about much more than technical debt; everyone has technical debt, but this is about thinking of the next person.

We can improve, and we should improve our processes today, not tomorrow or next week. Small improvements over time make a big difference.

I am going to keep adding to this blog post over time as more things come to me, but for now, think of the next person each time you do something on a project or anything related to work. Go that little bit extra, and before you know it you’ll enjoy working on a project where the processes in place are right.

Don’t forget to subscribe to my YouTube Channel.