Category: AzureDevOps

Microsoft Learn

Microsoft Learn is, in my eyes, highly underrated. I want to show you why there is more to it than you have probably realised.

Learning Paths
Learning paths are a great way to explore a topic. There are currently around 1,000 learning paths, so what are you waiting for? There is something for everyone in there, and that includes you. #alwaysbelearning

Filter
You can filter your learning by –

  • Product
  • Roles
  • Levels
  • Type (Learning Paths or Modules)

Bookmarks
Bookmark your learning choices and come back to them. You owe it to yourself to have learning goals and to finish the learning path or module. Don’t start it and leave it; become good at finishing, not just good at starting.

Collections
Collections are where you can group learning paths and modules that relate to a specific learning goal. This is perfect if you are studying for an exam or want to explore a broader topic, such as serverless.

Achievements
If you complete a module within a learning path, you earn points and badges along the way. These are listed under Achievements, which you can find under your profile.

I myself have realised I haven’t been using Microsoft Learn for a while, and there is a lot of great new content which I am off to check out now.

Let me know which level you’re on – I’m currently on level 8.



Azure Advent Calendar wrap-up

The #azureadventcalendar was a shared idea between myself and @pixel_robots

Some quick stats as I write this: –

15,800 YouTube views
15,000 website views from over 120 countries
1,300 hours of videos watched
1,200 subscribers

We set out to ask the Azure community for 25 videos / blog posts with a Christmas theme, the idea being that it would give people the chance to show off their skills, learn new ones and contribute back to the community over December.

In the middle of September we asked on Twitter who would like to contribute, giving people time to decide whether they could manage it in December (a 20-30 minute video isn’t easy, especially towards that time of year).

Before we knew it we had more than 25 slots filled, and it was clear this might be a bit more popular than first thought. We increased it to 50 and, before you knew it, to 75. To avoid too many duplicate subjects, we decided to cap it at 75.

Wow! 75 video/blog post contributions would be pretty amazing.

We considered several ideas but wanted to keep it simple: –

  • Anyone could contribute
  • We could have taken advertisements but chose not to: this was a community project for the community, by the community, and that was important to us both.

I would create the website, keep it up to date daily and chase people for content; Richard looked after our YouTube channel and scheduled the videos to go out at midnight.

Richard also designed the logo, which I loved the second I saw it, so we decided to use it as the brand. He also created video thumbnails for each video for people to use on Twitter, in videos and in blog posts.

Now, the real reason this was successful was the contributors; we were both blown away by the quality of content from each contributor, and the Christmas theme just made it pretty cool.

Richard and I both had our Twitter and LinkedIn feeds full of tweets and articles featuring the logo, very regularly throughout the month, which was super cool to see.

Setup
The website was basic: a very simple .NET web app that I updated daily with links to blog posts, built and deployed to Azure with Azure DevOps. I also made use of staging slots: deploy the changes, check the links worked, then swap the staging slot into production. Super easy to do and well worth it.

Richard had the YouTube channel set up with the logo and scheduled the videos for release, which was pretty sweet. He also created a thumbnail for each video for the contributor to use as they saw fit.

Highlights
The highlights for me were many, but the one that stands out personally was seeing people take part who had never done anything like this before: some had never written a blog post, and many had never created a video.

The hard part of the project was chasing people for content, especially in mid-December when everyone was busy!

To end this post I want to mention the next project, which you should keep your eye on: the #AzureSpringCleanup by Joe Carlyle and Thomas Thornton. Personally, I’m looking forward to seeing more of the Azure community coming together and creating awesome new content.

Please leave any feedback you have on the #azureadventcalendar below.




Azure Web App Staging Slots

With this year’s Azure Advent Calendar I made some site improvements and upgraded the site from .NET Core 2.2 to 3.0. The code built and ran locally just fine; I pushed it to production and boom! The site was down, which was not good for a number of reasons.

The takeaway is that I knew better. I pushed changes which, in hindsight, could easily have broken the site, and because it ran locally I thought it was all good. The site has no tests, as it is content only.

While upgrading the site and attempting to add Azure App Configuration, I ran into some NuGet package issues which I thought I had resolved.

Get to the point of the blog post already Gregor!

Azure has a feature called deployment slots for web apps, which gives us the following: –

  • Two copies of the site running at the same time (one production, one staging)
  • Deploy new features to staging and then test (however you test)
  • If all is good, swap the slots so that the new version becomes production and the old production version moves into the staging slot. If anything is borked, swap back and you’re back to a good state.

That’s the short version of what deployment slots are used for; I encourage you to take a look at them. I now have this set up for the Azure Advent Calendar and won’t be so careless next time.
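If you prefer scripting this over the portal, here is a minimal sketch using the Az PowerShell module. The resource group and app names are hypothetical, and slots require a Standard tier App Service plan or higher:

    # Create a staging slot alongside production (needs a Standard+ App Service plan)
    New-AzWebAppSlot -ResourceGroupName "aac-rg" -Name "azureadventcalendar" -Slot "staging"

    # After deploying to staging and checking the site, promote it to production
    Switch-AzWebAppSlot -ResourceGroupName "aac-rg" -Name "azureadventcalendar" `
        -SourceSlotName "staging" -DestinationSlotName "production"

If the swapped version turns out to be broken, running the same swap again puts the old version back into production.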




Microsoft Security Code Analysis for Azure DevOps – Part 3 BinSkim

Microsoft has recently released a new set of security tooling for Azure DevOps which is called Microsoft Security Code Analysis.

The Microsoft Security Code Analysis Extension is a collection of tasks for the Azure DevOps Services platform. These tasks automatically download and run secure development tools in the build pipeline.

In this post I’ll cover BinSkim and how to use it.


BinSkim
BinSkim is a Portable Executable (PE) light-weight scanner that validates compiler/linker settings and other security-relevant binary characteristics. The build task provides a command line wrapper around the BinSkim.exe application. BinSkim is an open source tool.

Setup:

  1. Open your team project from your Azure DevOps Account.
  2. Navigate to the Build tab under Build and Release
  3. Select the Build Definition into which you wish to add the BinSkim build task.
    New – Click New and follow the steps detailed to create a new Build Definition.
    Edit – Select the Build Definition. On the subsequent page, click Edit to begin editing the Build Definition.
  4. Click + to navigate to the Add Tasks pane.
  5. Find the BinSkim build task either from the list or using the search box and then click Add.
  6. The BinSkim build task should now be a part of the Build Definition. Add it after the publishing steps for your build artifacts.

Customizing the BinSkim Build Task:

  1. Click the BinSkim task to see the different options available within.
  2. Set the build configuration to Debug to produce *.pdb debug files. They are used by BinSkim to map issues found in the output binary back to source code.
  3. Choose Type = Basic and Function = Analyze to avoid researching and creating your own command line.
  4. Target – One or more specifiers to a file, directory, or filter pattern that resolves to one or more binaries to analyze.
    • Multiple targets should be separated by a semicolon (;).
    • Can be a single file or contain wildcards.
    • Directories should always end with \*
    • Examples:
      • *.dll;*.exe
      • $(BUILD_STAGINGDIRECTORY)\*
      • $(BUILD_STAGINGDIRECTORY)\*.dll;$(BUILD_STAGINGDIRECTORY)\*.exe;
    • Make sure the first argument to BinSkim.exe is the verb analyze using full paths, or paths relative to the source directory.
    • For Command Line input, multiple targets should be separated by a space.
    • You can omit the /o or /output file parameter; it will be added for you or replaced.
    • Standard Command Line Configuration
      • analyze $(Build.StagingDirectory)\* --recurse --verbose
      • analyze *.dll *.exe --recurse --verbose
      • Note that the trailing \* is very important when specifying a directory or directories for the target.
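To get a feel for what the build task wraps, you can also run BinSkim by hand. This is a rough sketch, assuming you have pulled the BinSkim NuGet package and binskim.exe is on your PATH; the target path and output file name are illustrative only:

    # Analyse every DLL under the staging directory and write a SARIF log
    binskim.exe analyze "$env:BUILD_STAGINGDIRECTORY\*.dll" `
        --recurse --verbose --output "binskim-results.sarif"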

    For more details on BinSkim, whether command-line arguments, rules by ID or exit codes, visit the BinSkim User Guide.



Microsoft Security Code Analysis for Azure DevOps – Part 2 Credential Scanner

Microsoft has recently released a new set of security tooling for Azure DevOps which is called Microsoft Security Code Analysis.

The Microsoft Security Code Analysis Extension is a collection of tasks for the Azure DevOps Services platform. These tasks automatically download and run secure development tools in the build pipeline.

In this post I’ll show you how to get the new extension and how to go about using it.

Credential Scanner (aka CredScan) is a tool developed and maintained by Microsoft to identify credential leaks, such as those in source code and configuration files. Some of the commonly found types of credentials are default passwords, SQL connection strings and certificates with private keys.
The CredScan build task is included in the Microsoft Security Code Analysis extension. Below are the steps needed to configure and run the build task as part of your build definition.

Let’s start by adding CredScan to a build for an existing project. I’ll use the AzureAdventCalendar project, which I already have set up in Azure DevOps at https://dev.azure.com.

Setup:

  1. Open your team project from your Azure DevOps Account.
  2. Navigate to the Build tab under Build and Release
  3. Select the Build Definition into which you wish to add the CredScan build task.
    New – Click New and follow the steps detailed to create a new Build Definition.
    Edit – Select the Build Definition. On the subsequent page, click Edit to begin editing the Build Definition.
  4. Click + to navigate to the Add Tasks pane.
  5. Find the CredScan build task either from the list or using the search box and then click Add.
  6. The Run CredScan build task should now be a part of the Build Definition.
Customizing the CredScan Build Task:

Available options include: –

  • Output Format – TSV/ CSV/ SARIF/ PREfast
  • Tool Version (Recommended: Latest)
  • Scan Folder – The folder in your repository to scan
  • Searchers File Type – Options to locate the searchers file used for scanning.
  • Suppressions File – A JSON file can be used for suppressing issues in the output log (more details in the Resources section).
  • (New) Verbose Output – self-explanatory
  • Batch Size – The number of concurrent threads used to run Credential Scanners in parallel. Defaults to 20 (Value must be in the range of 1 to 2147483647).
  • (New) Match Timeout – The amount of time to spend attempting a searcher match before abandoning the check.
  • (New) File Scan Read Buffer Size – Buffer size while reading content in bytes. (Defaults to 524288)
  • (New) Maximum File Scan Read Bytes – Maximum number of bytes to read from a given file during content analysis. (Defaults to 104857600)
  • Run this task (under Control Options) – Specifies when the task should run. Choose “Custom conditions” to specify more complex conditions.

*Version – Build task version within Azure DevOps. Not frequently used.


Resources

Local suppressions scenarios and examples

Two of the most common suppression scenarios are detailed below: –

1. Suppress all occurrences of a given secret within the specified path

The hash key of the secret from the CredScan output file is required, as shown in the sample below: –

{
  "tool": "Credential Scanner",
  "suppressions": [
    {
      "hash": "CLgYxl2FcQE8XZgha9/UbKLTkJkUh3Vakkxh2CAdhtY=",
      "_justification": "Secret used by MSDN sample, it is fake."
    }
  ]
}

Warning: The hash key is generated by a portion of the matching value or file content. Any source code revision could change the hash key and disable the suppression rule.

2. To suppress all secrets in a specified file (or to suppress the secrets file itself)
The file expression could be a file name or any postfix portion of the full file path/name. Wildcards are not supported.

Example

File to be suppressed: [InputPath]\src\JS\lib\angular.js

Valid suppression rules: –

  • [InputPath]\src\JS\lib\angular.js — suppresses the file in the specified path
  • \src\JS\lib\angular.js
  • \JS\lib\angular.js
  • \lib\angular.js
  • angular.js — suppresses any file with the same name
{
  "tool": "Credential Scanner",
  "suppressions": [
    {
      "file": "\\files\\AdditonalSearcher.xml",
      "_justification": "Additional CredScan searcher specific to my team"
    },
    {
      "file": "\\files\\unittest.pfx",
      "_justification": "Legitimate UT certificate file with private key"
    }
  ]
}

Warning: All future secrets added to the file will also get suppressed automatically.


Secrets management guidelines
While detecting hard-coded secrets in a timely manner and mitigating the risks is helpful, it is even better to prevent secrets from being checked in altogether. To that end, Microsoft has released the CredScan Code Analyzer as part of the Microsoft DevLabs extension for Visual Studio. While in early preview, it provides developers an inline experience for detecting potential secrets in their code, giving them the opportunity to fix those issues in real time. For more information, please refer to the blog post Managing Secrets Securely in the Cloud.
Microsoft has also published additional resources to help you manage secrets and access sensitive information from within your applications in a secure manner.


Extending search capabilities
CredScan relies on a set of content searchers commonly defined in the buildsearchers.xml file. The file contains an array of XML-serialized objects representing ContentSearcher objects. The program is distributed with a set of searchers that have been well tested, but it also allows you to implement your own custom searchers.

A content searcher is defined as follows:

  • Name – The descriptive searcher name to be used in the CredScan output file. It is recommended to use the camel case naming convention for searcher names.
  • RuleId – The stable opaque id of the searcher.
    • CredScan default searchers are assigned with RuleIds like CSCAN0010, CSCAN0020, CSCAN0030, etc. The last digit is reserved for potential searcher regex group merging or division.
    • RuleId for customized searchers should have its own namespace in the format of: CSCAN-{Namespace}0010, CSCAN-{Namespace}0020, CSCAN-{Namespace}0030, etc.
    • The fully qualified searcher name is the combination of the RuleId and the searcher name, e.g. CSCAN0010.KeyStoreFiles, CSCAN0020.Base64EncodedCertificate, etc.
  • ResourceMatchPattern – Regex of file extensions to check against searcher
  • ContentSearchPatterns – Array of strings containing Regex statements to match. If no search patterns are defined, all files matching the resource match pattern will be returned.
  • ContentSearchFilters – Array of strings containing Regex statements to filter searcher specific false positives.
  • MatchDetails – A descriptive message and/or mitigation instructions to be added for each match of the searcher.
  • Recommendation – Provides the suggestions field content for a match using PREfast report format.
  • Severity – An integer to reflect the severity of the issue (Highest = 1).
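Pulling those fields together, a custom searcher might look something like the sketch below. This is purely illustrative: the searcher name, rule id and regexes are made up, and the exact buildsearchers.xml schema may differ, so treat it as a shape rather than copy-paste config.

    <!-- Hypothetical custom searcher; element names follow the fields above -->
    <ContentSearcher>
      <Name>MyTeamApiToken</Name>
      <RuleId>CSCAN-MYTEAM0010</RuleId>
      <ResourceMatchPattern>\.(config|json|xml)$</ResourceMatchPattern>
      <ContentSearchPatterns>
        <string>myteam_token_[0-9a-f]{32}</string>
      </ContentSearchPatterns>
      <MatchDetails>Found a possible MyTeam API token.</MatchDetails>
      <Recommendation>Rotate the token and move it to Azure Key Vault.</Recommendation>
      <Severity>1</Severity>
    </ContentSearcher>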

Join me in part 3, where I cover BinSkim.



Microsoft Security Code Analysis for Azure DevOps – Part 1

Microsoft has recently released a new set of security tooling for Azure DevOps which is called Microsoft Security Code Analysis.

The Microsoft Security Code Analysis Extension is a collection of tasks for the Azure DevOps Services platform. These tasks automatically download and run secure development tools in the build pipeline.

In this post I’ll show you what the tools cover; in part 2, I’ll show you them in action in Azure DevOps.


Credential Scanner
Passwords and other secrets stored in source code are currently a big problem. Credential Scanner is a static analysis tool that detects credentials, secrets, certificates, and other sensitive content in your source code and your build output.

More Information


BinSkim
BinSkim is a Portable Executable (PE) light-weight scanner that validates compiler/linker settings and other security-relevant binary characteristics. The build task provides a command line wrapper around the BinSkim.exe application. BinSkim is an open source tool.

More Information (BinSkim on GitHub)


TSLint
TSLint is an extensible static analysis tool that checks TypeScript code for readability, maintainability, and functionality errors. It is widely supported across modern editors and build systems and can be customized with your own lint rules, configurations, and formatters. TSLint is an open source tool.

More Information on Github


Roslyn Analyzers
Microsoft’s compiler-integrated static analysis tool for analyzing managed code (C# and VB).

More Information (Roslyn Analyzers on docs.microsoft.com)


Microsoft Security Risk Detection
Security Risk Detection is Microsoft’s unique cloud-based fuzz testing service for identifying exploitable security bugs in software.

More Information (MSRD on docs.microsoft.com)


Anti-Malware Scanner
The Anti-Malware Scanner build task is now included in the Microsoft Security Code Analysis Extension. It must be run on a build agent which has Windows Defender already installed.

More Information


Analysis and Post-Processing of Results

The Microsoft Security Code Analysis extension has three build tasks to help you process and analyze the results found by the security tools tasks.

  • The Publish Security Analysis Logs build task preserves log files from the build for investigation and follow-up.
  • The Security Report build task collects all issues reported by all tools and adds them to a single summary report file.
  • The Post-Analysis build task allows customers to inject build breaks and fail the build should an analysis tool report security issues found in the code that was scanned.

Publish Security Analysis Logs
The Publish Security Analysis Logs build task preserves the log files of the security tools run during the build. They can be published to Azure DevOps Server artifacts (as a zip file) or copied to an accessible file share from your private build agent.

More Information


Security Report
The Security Report build task parses the log files created by the security tools run during the build and creates a summary report file with all issues found by the analysis tools.
The task can be configured to report findings for specific tools or for all tools, and you can also choose what level of issues (errors or errors and warnings) should be reported.

More Information


Post-Analysis (Build Break)
The Post-Analysis build task enables the customer to inject a build break and fail the build in case one or more analysis tools report findings or issues in the code.
Individual build tasks will succeed, by design, as long as the tool completes successfully, whether there are findings or not. This is so that the build can run to completion, allowing all tools to run.
To fail the build based on security issues found by one of the tools run in the build, add and configure this build task.
The task can be configured to break the build for issues found by specific tools or for all tools, and also based on the severity of issues found (errors or errors and warnings).

More Information



Azure DevOps Generator – New Content

Microsoft recently open sourced the Azure DevOps Demo Generator, and it has just had some new content added which I wanted to highlight. You can use this tool to learn all sorts of Azure DevOps tips and tricks: building code, seeing how it hangs together, deploying, and even checking your code for vulnerabilities, with ARM templates, GitHub resources and more.

I can’t stress enough how useful this resource has been for me for spinning up test Azure DevOps projects for blog posts, testing security add-ons, etc. (more blogs to follow very soon). Please play with this and learn; the demo generator has a lot more in it than the last time I checked, and I was pleasantly surprised. It’s an awesome tool.

The following is a quick tour of what is there at present: –

General Tab
The General tab is for creating projects in Azure DevOps from existing project templates. This will give you full source code, build and release pipelines, wikis, example kanban boards with issues, and more.
Note: There are different types of project if you scroll down the list.


DevOps Labs Tab

On this tab we have more sample projects, but this time they cover concepts such as Terraform, Ansible, Docker and Azure Key Vault. If you want to learn more about these, here is a great way to give them a spin.


Microsoft Learn Tab
Using Microsoft Learn we can learn how to do things like: –

  • Create a build pipeline with Azure Pipelines
  • Scan code for vulnerabilities in Azure Pipelines
  • Manage database changes in Azure Pipelines
  • Run non-functional tests in Azure Pipelines

Microsoft Cloud Adoption Framework Tab

The Cloud Adoption Plan template creates a backlog for managing cloud adoption efforts based on the guidance in the Microsoft Cloud Adoption Framework.


Private Tab

The Azure DevOps Demo Generator enables users to create their own templates from existing projects and provision new projects using the extracted template. The ability to have custom templates can be helpful in many situations, such as constructing custom training content, providing only certain artifacts, etc.


You can even create a template from an existing project you have within Azure DevOps by selecting ‘Create New Template’ – this is super nice; I’ll leave you to explore it further.

Enjoy!



Scottish Summit 2020

On February 29th 2020, we are hosting a brand new, FREE event here in Scotland, UK, called the Scottish Summit, which will have several tracks running.

We are bringing over 60 sessions to you, covering multiple tracks as per below:-

  • Dynamics for Customer Engagement
  • Azure
  • Big Data
  • Power Platform
  • Microsoft ERP
  • Personal Development
  • SharePoint
  • Office 365

To find out more about the event you can view the website and see the list of speakers.

I am giving a talk titled “Super charge your Azure learning”, where I will cover how I have learned Azure and go over all the very best resources I have come across in the last 18 months of learning Azure. This talk will be for all levels: people getting started, people who know some Azure and want to learn a bit more, right up to Azure experts who might want to branch out their learning into new areas.

Topics will include:-

  • My Journey
  • Getting started learning Azure
  • Azure Services
  • Azure DevOps
  • Exams
  • Top tips and best learning resources
  • And much more

I can’t wait to welcome people from around the world to the Scottish Summit, and hopefully you’ll catch the world premiere of my talk.

If you wish to attend then grab your FREE ticket – hope to see you there!



Azure Architect Expert Study Notes

The following is a quick and dirty list I made for the Architect exams so that I could read it quickly before the exam itself. It is mostly for the AZ-302, but good to know regardless of which exam you’re doing.

  • Blob storage is NOT for storing virtual machine VHD files; blob storage is for block blobs and append blobs, not page blobs.
  • You can use Traffic Manager to sit above 2 virtual machines and register endpoints; if one of the regions goes down, the other stays up. A sketch of creating such a profile follows the routing methods list below.

The following traffic routing methods are available in Traffic Manager:

  • Priority: Select Priority when you want to use a primary service endpoint for all traffic, and provide backups in case the primary or the backup endpoints are unavailable.
  • Weighted: Select Weighted when you want to distribute traffic across a set of endpoints, either evenly or according to weights, which you define.
  • Performance: Select Performance when you have endpoints in different geographic locations and you want end users to use the “closest” endpoint in terms of the lowest network latency.
  • Geographic: Select Geographic so that users are directed to specific endpoints (Azure, External, or Nested) based on which geographic location their DNS query originates from. This empowers Traffic Manager customers to enable scenarios where knowing a user’s geographic region and routing them based on that is important. Examples include complying with data sovereignty mandates, localization of content & user experience and measuring traffic from different regions.
  • Multivalue: Select MultiValue for Traffic Manager profiles that can only have IPv4/IPv6 addresses as endpoints. When a query is received for this profile, all healthy endpoints are returned.
  • Subnet: Select Subnet traffic-routing method to map sets of end-user IP address ranges to a specific endpoint within a Traffic Manager profile. When a request is received, the endpoint returned will be the one mapped for that request’s source IP address.
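As a quick illustration, here is a hedged Az PowerShell sketch of a Priority-routed profile. All names are made up, and $webApp is an assumption (an existing web app object, e.g. from Get-AzWebApp):

    # Create a profile using the Priority routing method; -TrafficRoutingMethod
    # also accepts Weighted, Performance, Geographic, MultiValue and Subnet
    New-AzTrafficManagerProfile -Name "demo-tm" -ResourceGroupName "demo-rg" `
        -TrafficRoutingMethod Priority -RelativeDnsName "demo-tm-unique" -Ttl 60 `
        -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

    # Register an endpoint; lower Priority values are preferred first
    New-AzTrafficManagerEndpoint -Name "primary" -ProfileName "demo-tm" `
        -ResourceGroupName "demo-rg" -Type AzureEndpoints `
        -TargetResourceId $webApp.Id -EndpointStatus Enabled -Priority 1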

App Service plan pricing Tiers

There are a few categories of pricing tiers:

  • Shared compute: Free and Shared, the two base tiers, runs an app on the same Azure VM as other App Service apps, including apps of other customers. These tiers allocate CPU quotas to each app that runs on the shared resources, and the resources cannot scale out.
  • Dedicated compute: The Basic, Standard, Premium, and PremiumV2 tiers run apps on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher the tier, the more VM instances are available to you for scale-out.
  • Isolated: This tier runs dedicated Azure VMs on dedicated Azure Virtual Networks, which provides network isolation on top of compute isolation to your apps. It provides the maximum scale-out capabilities.
  • Consumption: This tier is only available to function apps. It scales the functions dynamically depending on workload. For more information, see Azure Functions hosting plans comparison
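For reference, creating a plan in a given tier is a one-liner with Az PowerShell. This sketch uses hypothetical names and creates a dedicated-compute Standard plan:

    # Standard tier = dedicated compute, supports scale-out
    New-AzAppServicePlan -ResourceGroupName "demo-rg" -Name "demo-plan" `
        -Location "West Europe" -Tier "Standard" -NumberofWorkers 2 -WorkerSize "Small"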

Logic Apps

To enable high throughput on a Logic App, go to workflow settings, choose High Throughput and click On; this allows up to 300,000 executions every 5 minutes.

App Service Plans

The Basic App Service plan doesn’t support auto-scaling.


Create a Linux virtual machine with Accelerated Networking

To create a Windows VM with accelerated networking, see Create a Windows VM with Accelerated Networking. Accelerated networking enables single root I/O virtualization (SR-IOV) on a VM, greatly improving its networking performance. This high-performance path bypasses the host in the data path, reducing latency, jitter and CPU utilization, for use with the most demanding network workloads on supported VM types.
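A minimal Az PowerShell sketch of the key step: creating the NIC with -EnableAcceleratedNetworking, ready to attach to a supported VM size. The names are made up and $subnet is assumed to be an existing subnet object:

    # Create a NIC with accelerated networking enabled
    # ($subnet is assumed to come from Get-AzVirtualNetworkSubnetConfig)
    $nic = New-AzNetworkInterface -Name "demo-nic" -ResourceGroupName "demo-rg" `
        -Location "West Europe" -SubnetId $subnet.Id -EnableAcceleratedNetworking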


Azure Migrate

Migrate databases to Azure with familiar tools

Azure Database Migration Service integrates some of the functionality of our existing tools and services. It provides customers with a comprehensive, highly available solution. The service uses the Data Migration Assistant to generate assessment reports that provide recommendations to guide you through the changes required prior to performing a migration. It’s up to you to perform any remediation required. When you’re ready to begin the migration process, Azure Database Migration Service performs all of the required steps. You can fire and forget your migration projects with peace of mind, knowing that the process takes advantage of best practices as determined by Microsoft.

Note: Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier.


Types of storage accounts

Azure Storage offers several types of storage accounts. Each type supports different features and has its own pricing model. Consider these differences before you create a storage account to determine the type of account that is best for your applications. The types of storage accounts are:

  • General-purpose v2 accounts: Basic storage account type for blobs, files, queues, and tables. Recommended for most scenarios using Azure Storage.
  • General-purpose v1 accounts: Legacy account type for blobs, files, queues, and tables. Use general-purpose v2 accounts instead when possible.
  • Block blob storage accounts: Blob-only storage accounts with premium performance characteristics. Recommended for scenarios with high transaction rates, using smaller objects, or requiring consistently low storage latency.
  • FileStorage (preview) storage accounts: Files-only storage accounts with premium performance characteristics. Recommended for enterprise or high performance scale applications.
  • Blob storage accounts: Blob-only storage accounts. Use general-purpose v2 accounts instead when possible.
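For example, creating the recommended general-purpose v2 account with Az PowerShell looks roughly like this (names are hypothetical):

    # Kind StorageV2 = general-purpose v2; supports blobs, files, queues and tables
    New-AzStorageAccount -ResourceGroupName "demo-rg" -Name "demostorage123" `
        -Location "West Europe" -SkuName "Standard_LRS" -Kind "StorageV2"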

Azure Functions ARR affinity

If you create Azure Functions as part of a Basic App Service plan, you can enable ARR affinity, which basically enables support for sticky sessions.

Azure App Service Access Restrictions

Access restrictions enable you to define a priority-ordered allow/deny list that controls network access to your app. The list can include IP addresses or Azure Virtual Network subnets. When there are one or more entries, an implicit “deny all” exists at the end of the list.
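A hedged sketch of adding an allow rule with Az PowerShell (the cmdlet lives in newer Az.Websites releases; the names and IP range are made up):

    # Allow one CIDR range; everything else then hits the implicit deny-all
    Add-AzWebAppAccessRestrictionRule -ResourceGroupName "demo-rg" -WebAppName "demo-app" `
        -Name "office" -Priority 100 -Action Allow -IpAddress "203.0.113.0/24"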

Auto Swap Staging Slots (Auto Swap isn’t supported in web apps on Linux.)

VNet Peering – connecting VMs within the same Azure region

Global VNet Peering – connecting VMs across Azure regions

Choose between Azure messaging services – Event Grid, Event Hubs, and Service Bus

Comparison of services

Service     | Purpose                          | Type                           | When to use
Event Grid  | Reactive programming             | Event distribution (discrete)  | React to status changes
Event Hubs  | Big data pipeline                | Event streaming (series)       | Telemetry and distributed data streaming
Service Bus | High-value enterprise messaging  | Message                        | Order processing and financial transactions

Event Grid

Event Grid is an eventing backplane that enables event-driven, reactive programming. It uses a publish-subscribe model. Publishers emit events, but have no expectation about which events are handled. Subscribers decide which events they want to handle.

Event Grid is deeply integrated with Azure services and can be integrated with third-party services. It simplifies event consumption and lowers costs by eliminating the need for constant polling. Event Grid efficiently and reliably routes events from Azure and non-Azure resources. It distributes the events to registered subscriber endpoints. The event message has the information you need to react to changes in services and applications. Event Grid isn’t a data pipeline, and doesn’t deliver the actual object that was updated.

Event Grid supports dead-lettering for events that aren’t delivered to an endpoint.

It has the following characteristics:

  • dynamically scalable
  • low cost
  • serverless
  • at least once delivery

Event Hubs

Azure Event Hubs is a big data pipeline. It facilitates the capture, retention, and replay of telemetry and event stream data. The data can come from many concurrent sources. Event Hubs allows telemetry and event data to be made available to a variety of stream-processing infrastructures and analytics services. It is available either as data streams or bundled event batches. This service provides a single solution that enables rapid data retrieval for real-time processing as well as repeated replay of stored raw data. It can capture the streaming data into a file for processing and analysis.

It has the following characteristics:

  • low latency
  • capable of receiving and processing millions of events per second
  • at least once delivery

Service Bus

Service Bus is intended for traditional enterprise applications. These enterprise applications require transactions, ordering, duplicate detection, and instantaneous consistency. Service Bus enables cloud-native applications to provide reliable state transition management for business processes. When handling high-value messages that cannot be lost or duplicated, use Azure Service Bus. Service Bus also facilitates highly secure communication across hybrid cloud solutions and can connect existing on-premises systems to cloud solutions.

Service Bus is a brokered messaging system. It stores messages in a “broker” (for example, a queue) until the consuming party is ready to receive the messages.

It has the following characteristics:

  • reliable asynchronous message delivery (enterprise messaging as a service) that requires polling
  • advanced messaging features like FIFO, batching/sessions, transactions, dead-lettering, temporal control, routing and filtering, and duplicate detection
  • at least once delivery
  • optional in-order delivery

Notification Hubs

Has an SLA of 99.9% on the Basic and Standard tiers

RPO – Recovery Point Objective – The amount of data loss if a recovery needs to be done

RTO – Recovery Time Objective – The amount of time it takes to complete a recovery or restore

Azure Backup

Recovery Points

  • Application consistent – the backup takes into consideration in-memory data and any pending I/O operations, allowing the application to start in a consistent state after recovery.
  • File system consistent – provides a consistent backup of disk files; the application needs its own mechanism to manage consistency.
  • Crash consistent – happens when the VM shuts down at the time of the backup. The data that exists on disk at the time of the backup is captured, but disk consistency is not guaranteed.

Azure Backup is good for retention periods of days, weeks, months and even years.

Virtual Machine SLAs

One VM = 99.9% availability

Two or more VMs across Availability Zones = 99.99% availability

Two or more VMs in an Availability Set = 99.95% availability

Availability Zones

Within one region you may have two or more availability zones, and each availability zone can contain multiple datacentres.

Deploy two copies of your VM: one to a datacentre in zone 1, the other to a different availability zone.

Availability Sets

  • Fault domains (3 by default), i.e. separate server racks with separate power etc. Your VMs are spread across, say, all 3 fault domains, so if one fault domain goes down you’re still good on the other two.
  • Update domains (5 by default): when your VMs need updating, this concept means that some copies can be updated while others stay up.

If you add 6 VMs to an availability set, the 6th VM goes into update domain 0, as the numbering starts at 0.
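Fault and update domain counts are set when the availability set is created; a minimal Az PowerShell sketch with assumed names:

    # Sku Aligned = managed disks; 3 fault domains / 5 update domains as above
    New-AzAvailabilitySet -ResourceGroupName "demo-rg" -Name "demo-avset" `
        -Location "West Europe" -Sku Aligned `
        -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 5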

Azure Load Balancer (works at layer 4)

  • Is used to distribute traffic to virtual machines
  • Increases fault tolerance and availability for your application
  • Works at the network layer
  • Uses a public IP address in front of the load balancer
  • The backend pool is literally your virtual machines
  • The load balancer uses a health probe, which needs the protocol, port, interval and threshold set (see the sketch after the notes below)

Important Notes:-

  • The load balancer cannot be used to route traffic between resources in different regions, only the same region.
  • If you want to achieve a higher availability of 99.99% then you should use a Standard Load Balancer instead of a Basic Load Balancer, and have at least 2 healthy virtual machines in the backend pool of the load balancer.
  • The VMs should be assigned a Standard SKU static public IP address
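To make the health probe bullet concrete, here is a hedged Az PowerShell sketch of a probe definition; the values are illustrative, and the resulting object would then be passed to New-AzLoadBalancer:

    # TCP probe: checked every 15 seconds, 2 consecutive failures mark a VM unhealthy
    $probe = New-AzLoadBalancerProbeConfig -Name "http-probe" -Protocol Tcp `
        -Port 80 -IntervalInSeconds 15 -ProbeCount 2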

Application Gateway (works at layer 7)

  • Web Traffic Load Balancer
  • Works at the application layer
  • URL Routing – example would be /video goes to backendpool1, /images goes to backendpool2
  • SSL termination

WAF (web application firewall)

  • Centralized protection for your web applications from common exploits and vulnerabilities
  • If you want to deploy an application gateway, you need an empty subnet available in your virtual network.
  • SLA 99.95% with 2 or more medium or large instances

Azure Traffic Manager

  • DNS based traffic load balancer
  • Can Distribute traffic across regions
  • You can use different traffic routing methods
    • Priority – choose which region you prefer
    • Geographic – direct end users to specific endpoints based on geographic location
    • Multivalue – all healthy endpoints are returned to the user
  • If you’re using Azure Site Recovery, you have to create an Azure Site Recovery vault to store the data
  • Premium Storage tier only allows storage of blobs, nothing else
  • Default NSG rules – deny all inbound from the internet, allow all outbound to the internet. To stop subnets having outbound access, add a new NSG rule with a service tag of Internet, destination port ranges *, and Action Deny, with a low priority value of say 100 so that it overrules the default NSG outbound security rules
  • If you want access to the Windows Graphics Device Interface (GDI), use Azure Batch
  • When creating an Azure gateway, the IP address has to be a static public IP address (Standard SKU)
  • Using PowerShell to get an Azure Key Vault secret:
    (Get-AzKeyVaultSecret -VaultName 'myvaultname' -Name 'mysecretname').SecretValueText
  • Azure AD Conditional Access requires the Premium tier of Azure AD
  • When you set up ASR in another region and point it to some VMs, it installs the Azure Site Recovery extension called Mobility Service in the source VMs
  • Azure Site Recovery is for replicating virtual or physical machines from various sources. It does NOT support Azure App Services, but it does support Hyper-V and VMware virtual machines, and Windows or Linux physical machines.
  • ASR requires ports 443 and 9443 in order to do its replication from the source servers
  • To replicate Hyper-V virtual machines between two on-premises data centers, you need SCVMM to be on both systems already
  • ASR can replicate sites between regions as long as they are in the same geography. It would not support US East machines being replicated to Japan East because it crosses a geographic boundary.
  • VMs across multiple Availability Zones provide the highest Microsoft SLA, at 99.99%. Using availability sets provides a 99.95% SLA. Standalone VMs behind a load balancer do not provide an SLA. Azure Site Recovery provides business continuity, not high availability.
  • How does SQL Database implement high availability at the Premium Tier?

The Premium tier of SQL Database runs the database in a 4-node Always On Availability Group Cluster. This has one primary database node with 3 secondary processes keeping copies of the data.

  • Use SQL Database Always Encrypted with deterministic encryption. This allows the database to perform operations on the table, such as joins and equality tests, while keeping the data encrypted in the table and hidden from regular application reads. Always Encrypted with randomized encryption does not allow table operations
  • With Storage Queues, calling UpdateMessage can be used to extend the lease and prevent the message from being given to another process. RenewLock is for Service Bus Queue and not Storage Queues. Rearchitecting the application may not be a simple solution, although it may be wise.
  • You can scale a web app using metrics provided by Application Insights, which needs to be implemented before you can enable such scaling
  • Transparent Data Encryption allows the data stored on disk to be encrypted, and it supports geo-replication and geo-restore. Always Encrypted will not suffice here, as it encrypts specific columns on the client side rather than the whole database at rest
  • Azure Confidential Compute (ACC) is only supported on the DC-series VMs; it allows code and data in the processor to be secured while running. It is not supported on any other VM series
  • SendGrid is an email solution which provides email functionality via distribution groups as well as metric gathering
  • Azure AD Privileged Identity Management is a tool that will allow you to see who has elevated permissions within your environment. You can examine the history of that access, and whether they use those permissions. And you can ask users to justify the need for those elevated permissions in a security review.
  • Azure Site Recovery (ASR) does not support the recovery of most PaaS solutions, such as Azure Storage and Azure App Services. ASR is for infrastructure workloads such as Windows and Linux VMs, SAP, VMware, SharePoint, IIS, and SQL Server
  • Function Keys and Azure API Management can both protect a Function app’s public endpoint. Function keys are unique codes that can be required to be used when calling an endpoint. This only protects the endpoint when the function key is a true secret. Azure API Management can be put in front of the function and require other forms of authentication such as Azure AD or OAuth. Functions do not support Shared Access Signatures (SAS).
  • Shared Access Signatures (SAS) and Azure API Management can both protect a Service Bus’ public endpoint. Shared Access Signatures (SAS) are unique codes that can be required to be used when calling an endpoint. This is why they are called “shared”. This only protects the endpoint when the SAS is a true secret. Azure API Management can be put in front of the function and require other forms of authentication such as Azure AD or OAuth. Service Bus does not support Function Keys or Multi-Factor Authentication.
  • Always Encrypted allows you to choose which columns to encrypt, and SQL Database will do the work for you. When using a command line, the data will come out encrypted, but a trusted application can see the data and use it in JOINs, SELECTs, and WHERE clauses. Application-side encryption will not allow JOINs, etc. A Trusted Execution Environment (TEE) is not used for the SQL Database service. All data at rest is encrypted using TDE by default.


Azure Certification Exams Passed

Proud to say that I have now passed the following exams: –

  • Azure Solutions Architect Expert
  • Azure DevOps Engineer Expert
  • Azure Developer Associate