mountainss Cloud and Datacenter Management Blog

Microsoft Hybrid Cloud blogsite about Management



#Microsoft Azure Hub-Spoke model by Enterprise Design 2 of 4 Lift and Shift #Azure #Hyperv #VMware

Microsoft Azure Hybrid Cloud Architecture HUB-Spoke Model

Microsoft Azure Hub-Spoke model

This blogpost about the Microsoft Azure Hub-Spoke model by Enterprise Design, part 2 of 4, “Lift and Shift”, is part of a datacenter transition to the Microsoft Azure intelligent cloud. It covers Azure architecture, security, assessment, Azure Policy, and the implementation of the design. Here you find the first blogpost of the series:

It’s important for your business to have your Azure architectural design and security in place before you start your “Lift and Shift” actions: think about identity management and provisioning, RBAC for your administrators and super users with two-factor authentication, and network security with Network Security Groups and firewalls.

Azure Multi-Factor-Authentication (MFA)

Microsoft Azure Hub-Spoke model : “Lift and Shift”

 

Microsoft Azure HUB subscription for “Lift and Shift”

To “Lift and Shift” to the Azure HUB subscription, we have the following in place by design (a minimal Azure CLI sketch of a few of these items follows the list):

  1. Azure Scaffold and Hierarchy (Governance)
  2. Virtual Networks (VNET) with the Subnets and IP-Number plan
  3. ExpressRoute VPN Connection with a backup failover Site-2-Site VPN connection to Azure.
  4. Resource Groups, like Active Directory, ADFS Farm, Authentication, SQL Backend.
  5. Resource Policies
  6. Resource Locks
  7. Network Security Groups (NSG)
  8. DNS
  9. Azure Firewall
  10. Azure internal Load Balancers.
  11. Azure Storage Accounts
  12. Azure Virtual Machine sizes
  13. Azure Virtual Machine Image
  14. Managed Disks and Encryption.
  15. Redundancy for Virtual Machines
  16. Azure Key Vault for Encryption.
  17. Azure Recovery Services Vault (Backup)
  18. Azure Policy
  19. Managed Identities, Azure MFA, RBAC, ADFS
  20. Azure Monitor
  21. Azure Naming Convention
  22. Azure Tagging
  23. Azure Cost Management
  24. ARM (JSON) Deployment template (for New requests)
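
As a minimal Azure CLI sketch of a few of these items (the names and region below are hypothetical examples, not our production values):

# Resource group with tags for cost management
az group create --name rg-hub-network --location westeurope --tags environment=prod costcenter=it
# Resource lock so the hub resource group cannot be deleted by accident
az lock create --name lock-hub-no-delete --resource-group rg-hub-network --lock-type CanNotDelete
# Network Security Group for the hub subnets
az network nsg create --resource-group rg-hub-network --name nsg-hub-frontend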

To help you further with your Azure Virtual Datacenter, have a look here.

 

Azure Hierarchy

Azure Scaffold

When creating a building, scaffolding is used to create the basis of a structure. The scaffold guides the general outline and provides anchor points for more permanent systems to be mounted. An enterprise scaffold is the same: a set of flexible controls and Azure capabilities that provide structure to the environment, and anchors for services built on the public cloud. It provides the builders (IT and business groups) a foundation to create and attach new services keeping speed of delivery in mind. Read more here. I put “Lift and Shift” between quotes because it’s important to follow the process workflow to be successful in your datacenter transition to the Microsoft Azure cloud.

 

Here you find all the Microsoft Azure Migration information

 

 

App Migration to Azure: Your options explained by Jeremy Winter

The Azure Migrate service assesses on-premises workloads for migration to Azure. The service assesses the migration suitability of on-premises machines, performs performance-based sizing, and provides cost estimations for running on-premises machines in Azure. If you’re contemplating lift-and-shift migrations, or are in the early assessment stages of migration, this service is for you. After the assessment, you can use services such as Azure Site Recovery and Azure Database Migration Service to migrate the machines to Azure.

In your datacenter you have all kinds of different workloads and solutions, such as:

  • Hyper-V Clusters
  • VMware Clusters
  • SQL Clusters
  • Print Clusters
  • File Clusters
  • Web Farm
  • Two- or three-tier solutions
  • Physical Servers
  • Different Storage solutions

When you do your datacenter assessment it’s important to get your workloads visible, because a “Lift and Shift” of a virtual machine with Azure Site Recovery (ASR) is a different scenario than a SQL database migration to Azure. That’s why Microsoft has different tooling, such as the following:

To get the dependencies in your datacenter on the map, Microsoft has Azure Service Map.

Service Map automatically discovers application components on Windows and Linux systems and maps the communication between services. With Service Map, you can view your servers in the way that you think of them: as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, inbound and outbound connection latency, and ports across any TCP-connected architecture, with no configuration required other than the installation of an agent.

This is very handy to get insight into the communication between the workloads in your datacenter.

More information on using Azure Service Maps here
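
If some of those servers are already Azure VMs, one hedged way to onboard them to Service Map is to add the Log Analytics (MMA) and Dependency agent VM extensions; the VM, resource group, and workspace values below are placeholders, and on-premises servers need the agents installed directly instead:

# Log Analytics agent first, then the Dependency agent used by Service Map
az vm extension set --resource-group rg-hub-mgmt --vm-name vm-app01 --publisher Microsoft.EnterpriseCloud.Monitoring --name MicrosoftMonitoringAgent --settings '{"workspaceId":"<workspace-id>"}' --protected-settings '{"workspaceKey":"<workspace-key>"}'
az vm extension set --resource-group rg-hub-mgmt --vm-name vm-app01 --publisher Microsoft.Azure.Monitoring.DependencyAgent --name DependencyAgentWindows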

Installation example of Hyper-V Virtual Machines with ASR

In the following step-by-step guide we will install the Azure Site Recovery agent on a Hyper-V host and migrate a virtual machine to Microsoft Azure in a “Lift and Shift” way.

First create a Recovery Services Vault => Click Add.
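
If you prefer scripting this step, a Recovery Services vault can also be created with the Azure CLI; a minimal sketch with hypothetical names:

az backup vault create --name rsv-asr-hub --resource-group rg-hub-recovery --location westeurope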

Then you go to your newly created Recovery Services vault and click on Getting started for Site Recovery => Prepare infrastructure, and follow the steps.

When you have selected Hyper-V VM to Azure, the next step is the ASR Deployment Planner tool kit. Here you find more information on Azure Site Recovery Deployment Planner user guide for Hyper-V-to-Azure production deployments.

Then in step 3 you will create your Hyper-V site in Microsoft Azure with the right Hyper-V servers.

Give your Hyper-V Site the right name, especially when you have a lot of Hyper-V Clusters with Different workloads.

Here is where the registration begins with the Azure Site Recovery (ASR) Agent installation on your Hyper-V Host.
Follow the five steps and make sure your Hyper-V node can access Azure over port 443 (HTTPS), via a proxy or firewall rules.

Run the AzureSiteRecoveryProvider.exe installer as Administrator on the Hyper-V host.

Click on Next

Choose your Installation location and Click on Install.

The Azure Site Recovery agent is installed and needs to be registered with your Azure Recovery Services vault.
For this you need the vault key file, downloaded from the Azure portal in step 4. Click on Register.

Browse to your downloaded key file from the Azure Portal Recovery Vault and click on Next.

When you have a proxy you can select that, otherwise select Next.

Now your Azure ASR Agent on Hyper-V is registered with your Azure Site Recovery Vault.

In the Azure Portal you will see your Hyper-V Node, in my Demo LAB it’s WAC01.MVPLAB.LOCAL.

In the next step you can choose an existing Storage account, or a new one with different specifications.

After the storage account, also check your network in Azure.

In this step we create the replication policy.

Set your own settings.

The Replication policy is added to the configuration.

When you click on OK the Infrastructure is done.

We are now going to enable the replication :

Select your Source and location.

Here you select your target storage account, resource group, and network.

The connections are made between Hyper-V, ASR Vault and Storage.

Select the Virtual Machine(s) from the Hyper-V host to replicate for migration with ASR

Configure the properties.

Click on OK

From here the Replication will begin from Hyper-V Host to Azure  🙂

Azure Site Recovery replication job status.

Replicated item(s)

To create your recovery plan and do the failover for the migration to Azure, you have to wait until the first replication is 100% complete.

Azure Site Recovery Plan for failover (Migration)

Make recovery Plan.

Click OK

The Target in the recovery plan can only be selected when the first replication is done.

Overview of the Azure Site Recovery Migration failover.

From the Hyper-V Host you can pause or see the replication health status.

Hyper-V Health Status

Azure Migrate Virtual Machines using Azure Site Recovery video with Microsoft Jeff Woolsey

Microsoft Azure Data Migration Assistant

To migrate your SQL back end to Microsoft Azure, use the Data Migration Assistant; these step-by-step instructions help you perform your first assessment for migrating to on-premises SQL Server, SQL Server running on an Azure VM, or Azure SQL Database.

Conclusion :

A “Lift and Shift” migration of your complete datacenter consists of different scenarios for moving your workloads to Microsoft Azure. With that said, Microsoft has tooling available for each scenario to get the job done. It’s all about a good architectural design, security in place, and the people and process to get your intelligent Azure cloud up and running for your business.

Next Blogpost Microsoft Azure Hub-Spoke model by Enterprise Design 3 of 4 :
SQL assessment and Data Migration to Azure




BlueHat v18 Hardening #Hyperv through offensive security research #Security #Bluehatv18 #Bluehat

BlueHat v18 || Hardening Hyper-V through offensive security research

From Microsoft Security Response Center (MSRC) :

“Humans are susceptible to social engineering. Machines are susceptible to tampering. Machine learning is vulnerable to adversarial attacks. Singular machine learning models can be “gamed” leading to unexpected outcomes.”

In this talk, they compare the difficulty of tampering with cloud-based models and client-based models. Then they discuss how they develop stacked ensemble models to make machine learning defenses less susceptible to tampering and significantly improve overall protection for customers. They talk about the diversity of base ML models and technical details on how they are optimized to handle different threat scenarios. Lastly, they describe suspected tampering activity they have witnessed using protection telemetry from over half a billion computers, and whether their mitigations worked.

BlueHat v18 Content Now Available



#Microsoft Azure Hub-Spoke model by Enterprise Design 1 of 4 #Azure #Cloud

 

Azure Hub-Spoke Architecture

Microsoft Azure Hub-Spoke Architecture

This Enterprise reference architecture shows how to implement a hub-spoke topology in Azure. The hub is a virtual network (VNet) in Azure that acts as a central point of connectivity to your on-premises network. The spokes are VNets that peer with the hub, and can be used to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN gateway connection.

We only use Azure private peering.

For this Hybrid Cloud Strategy we made four Microsoft Azure Subscriptions via the EA Portal :

  1. Azure HUB Subscription for the connectivity via Azure ExpressRoute to On-premises Datacenter.
  2. Azure Spoke 1 for Production workload and Cloud Services
  3. Azure Spoke 2 for Test and Acceptance Cloud Services
  4. Azure Spoke 3 for Future plans

Azure has naming rules and restrictions for resources, and Microsoft provides a baseline set of recommendations for naming conventions. You can use these recommendations as a starting point for your own conventions, specific to your needs.

The choice of a name for any resource in Microsoft Azure is important because:

  • It is difficult to change a name later.
  • Names must meet the requirements of their specific resource type.

Consistent naming conventions make resources easier to locate. They can also indicate the role of a resource in a solution. The key to success with naming conventions is establishing and following them across your applications and organizations.
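
For illustration only (these names are hypothetical, not the convention we settled on), a pattern such as type-workload-environment-region might look like this when you create resources from the Azure CLI:

az group create --name rg-adfs-prod-weu --location westeurope
az network vnet create --resource-group rg-hub-prod-weu --name vnet-hub-prod-weu --address-prefixes 10.0.0.0/16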

Azure connectivity and RBAC Identity

This tenant is federated to Office 365 via ADFS and Azure AD Connect. Identity management is provisioned via Microsoft Identity Manager 2016 (MIM 2016). With this already in place, we can configure Microsoft Azure RBAC in the subscriptions.

Access management for cloud resources is a critical function for any organization that is using the cloud. Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of resources in Azure.
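
As a small, hedged sketch of what that looks like in practice (the user, role, and resource group names below are hypothetical), an RBAC assignment can be scoped to a single resource group with the Azure CLI:

# Give a super user Contributor rights on one resource group only, not on the whole subscription
az role assignment create --assignee superuser@contoso.com --role "Contributor" --resource-group rg-spoke1-prod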

Business Development

For Business Development we have a separate Active Directory in one forest, also federated via ADFS to Microsoft Office 365. For this environment we built one Azure subscription with a temporary Site-to-Site VPN connection to the on-premises datacenter for the “Lift and Shift” migration via Azure Site Recovery (ASR).

S2S VPN IKE v2 tunnel with Cisco and Azure.

Azure Virtual Networks

The next step is to build the connections between the Azure HUB subscription and the Azure spoke subscription(s), where every Microsoft Azure subscription has its own Virtual Network (VNET). This is called VNET peering.

Virtual network peering enables you to seamlessly connect two Azure virtual networks. Once peered, the virtual networks appear as one, for connectivity purposes. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same virtual network, through private IP addresses only. Azure supports:

  • VNet peering – connecting VNets within the same Azure region
  • Global VNet peering – connecting VNets across Azure regions

Here you see my step-by-step VNET peering creation from HUB to Spoke 1 :

Go to the VNET of the Azure HUB subscription, and then to Peerings => Add.

Here you make the connection with Spoke 1 Azure subscription.

For the Azure HUB, the peering to Spoke 1 is done.

Now we go to the VNET of Azure Subscription Spoke 1 to make the connection.

Go to VNET => Peerings => Click on Add in the Azure Spoke 1 Subscription

Connect here to the Azure HUB

The VNET Peering between Azure HUB subscription and Spoke 1 is Connected.

In the same way, you create the other VNET peerings from the Azure HUB subscription to the other spoke subscriptions so that the network connectivity between the VNETs works, because the Azure internet edge for the other subscriptions sits in the HUB.
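
If you want to script the peering instead of clicking through the portal, a rough Azure CLI sketch with hypothetical names looks like this; because the spoke VNET lives in another subscription it is referenced by its full resource ID, and the mirror peering still has to be created from the spoke side:

# Hub-to-spoke peering, run in the context of the HUB subscription
az network vnet peering create --name peer-hub-to-spoke1 --resource-group rg-hub-network --vnet-name vnet-hub --remote-vnet /subscriptions/<spoke1-subscription-id>/resourceGroups/rg-spoke1-network/providers/Microsoft.Network/virtualNetworks/vnet-spoke1 --allow-vnet-access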

In the Azure reference architecture we also apply security by design in the cloud, with a firewall and Azure Network Security Groups (NSG), and every Azure component gets its own tag for security groups and billing/usage.

Azure Storage

In every Microsoft Azure subscription (HUB and spoke) we created a storage account. You can choose from different kinds of storage in Microsoft Azure.

  • Durable and highly available. Redundancy ensures that your data is safe in the event of transient hardware failures. You can also opt to replicate data across datacenters or geographical regions for additional protection from local catastrophe or natural disaster. Data replicated in this way remains highly available in the event of an unexpected outage.
  • Secure. All data written to Azure Storage is encrypted by the service. Azure Storage provides you with fine-grained control over who has access to your data.
  • Scalable. Azure Storage is designed to be massively scalable to meet the data storage and performance needs of today’s applications.
  • Managed. Microsoft Azure handles maintenance and any critical problems for you.
  • Accessible. Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft provides SDKs for Azure Storage in a variety of languages — .NET, Java, Node.js, Python, PHP, Ruby, Go, and others — as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your data.

Azure Storage includes these data services:

  • Azure Blobs: A massively scalable object store for text and binary data.
  • Azure Files: Managed file shares for cloud or on-premises deployments.
  • Azure Queues: A messaging store for reliable messaging between application components.
  • Azure Tables: A NoSQL store for schemaless storage of structured data.

Creating your Azure Storage accounts by Design.

One of our architecture security-by-design policies is to encrypt all storage in Azure via Microsoft Azure Key Vault.
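
A hedged sketch of that policy with the Azure CLI (all names are hypothetical, and steps such as granting the storage account access to the vault and enabling soft delete are omitted): storage accounts are encrypted with Microsoft-managed keys by default, and switching to a customer-managed key in Key Vault looks roughly like this:

az keyvault create --name kv-hub-encryption --resource-group rg-hub-security --location westeurope
az storage account create --name sthubdata001 --resource-group rg-hub-network --location westeurope --sku Standard_LRS --kind StorageV2
# Point the storage account at a key that already exists in the vault (key name is a placeholder)
az storage account update --name sthubdata001 --resource-group rg-hub-network --encryption-key-source Microsoft.Keyvault --encryption-key-vault https://kv-hub-encryption.vault.azure.net --encryption-key-name storage-cmk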

Deploying Azure IaaS Virtual Machine with ARM Templates

Enterprise organizations with more than ten employees managing IT datacenters work by process and order to do the job for the business. When they all use the Azure portal and deploy virtual machines manually, you get a mess and things can go wrong. In Microsoft Azure you have Azure Resource Manager for deploying JSON ARM templates. With these Azure Resource Manager templates you can automate your workload deployments in Microsoft Azure. For example: we built a JSON template to deploy a Windows Server into the right Azure subscription and the right Azure resource group, with the following extensions to it:

  • Antimalware agent installed
  • Domain joined in the right OU (Active Directory)
  • Azure Log Analytics agent installed (connected to Azure Monitor and SCOM)
  • Encryption by default.

Combined with our Azure naming conventions and Azure Policy, we always deploy consistently, without mistakes or typos in the Azure portal. When you write ARM templates for your different workloads, you can store them in an Azure DevOps Repo (repository), and you can connect your private repo to GitHub.
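
As a rough sketch (the resource group, template, and parameter file names are hypothetical), such a template can then be deployed the same way every time, from the CLI or from a pipeline:

az deployment group create --resource-group rg-spoke1-prod --template-file windows-vm.json --parameters @windows-vm.parameters.json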

Authoring ARM templates works really well with Microsoft Visual Studio Code, which is open source and free of charge. You can add your favorite VS Code extensions to work with, such as Azure Resource Manager Tools.

 Our Azure ARM Template to deploy Virtual Machines into Azure HUB-Spoke model with VSC

Azure monitoring and Recovery Service Vault

To manage your Azure hybrid cloud environment you have to monitor everything to stay in control of your virtual datacenter. And of course you have to plan your business continuity with Azure Recovery Services (Backup) by design. We created an Azure Recovery Services vault in every Azure subscription for backups, because you don’t want backup traffic going over your VNET peerings. In the Azure HUB subscription we created a second Azure Site Recovery (ASR) vault for the “Lift & Shift” migration of on-premises virtual machines to the landing zone in the Azure HUB.

With Microsoft Azure Monitor we use Log Analytics and Service Map, and because Microsoft System Center Operations Manager (SCOM) can connect to the same OMS agent on the virtual machine, we can keep using SCOM as well 🙂

When you have 45 locations, 45,000 students with BYOD, and 10,000 managed workstations, you monitor 24x7 to keep everything running for your business. Monitoring ExpressRoute with a backup connection is a must for your hybrid virtual datacenter. Here you find more information about monitoring your ExpressRoute circuit.

Monitoring our Express Route

With all of this installed in Microsoft Azure by design, our policy is Security First!

Microsoft Azure Security Center

Azure Security Center provides unified security management and advanced threat protection across hybrid cloud workloads. With Security Center, you can apply security policies across your workloads, limit your exposure to threats, and detect and respond to attacks.

We are already rolling out Azure Advanced Threat Protection (ATP) for security in our on-premises datacenter.

Azure Security Center

We still have a lot to configure in Microsoft Azure to get the Basic Architecture Design in place. When that is done, I will make three more blogposts about this datacenter transformation :

  • “Lift and Shift” migration with ASR for Virtual Machines on Hyper-V and VMware.
  • SQL assessment and Data Migration to Azure
  • Optimization of all workloads in Microsoft Azure.

Hope this blogpost will help you too with your Datacenter transition to Microsoft Azure Cloud.



#Microsoft Azure Security Center Investigation Dashboard (Preview) #Azure #Security #ASC #Cloud


Yesterday I was playing with Mimikatz (a hacker tool) for security pen tests, and it was not working because Azure Security Center quarantined the file 🙂

On my Surface I have an Azure monitoring agent running.

Microsoft Azure Security Center Investigation Dashboard

The Investigation feature in Security Center allows you to triage, understand the scope of, and track down the root cause of a potential security incident.
The intent is to facilitate the investigation process by linking all the entities (security alerts, users, computers, and incidents) that are involved with the incident you are investigating. Security Center does this by correlating relevant data with any involved entities and exposing this correlation using a live graph that helps you navigate through the objects and visualize relevant information.

Microsoft Azure Security Center also found a rare SVCHOST service on my Surface, and the ASC investigation dashboard gives you a great overview of the security risk.

You can run a playbook based on this “Rare SVCHOST Service” alert.

Try it yourself, more information about Azure Security Center Investigation Dashboard (Preview) can be found here

Microsoft Azure Security Center

 

 



Creating VM Cluster on Azure #Cloud with Terraform #IaC #Azure #Terraform #Linux #Winserv

Type az and you should see this Azure CLI

Type Terraform and you should see the terraform commands

 

Install and configure Terraform to provision VMs and other infrastructure into Azure

Before you begin with Terraform and deploying your solution to Microsoft Azure you have to install Azure CLI and Terraform for your OS.
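
On Windows, one possible way to get both tools is via the Chocolatey package manager (an assumption on my side; downloading the installers manually works just as well):

choco install azure-cli -y
choco install terraform -y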

In the following step-by-step guide we will deploy a VM Cluster with Terraform into Microsoft Azure Cloud Services.

First we open PowerShell in Administrator mode:

You should have your Terraform script ready.

It’s great to edit your Terraform script in Visual Studio Code

Create a Terraform configuration file
In this section, you create a file that contains resource definitions for your infrastructure.
Create a new file named main.tf.
Copy the following sample resource definitions into the newly created main.tf file:


resource "azurerm_resource_group" "test" {
  name     = "acctestrg"
  location = "West US 2"
}

resource "azurerm_virtual_network" "test" {
  name                = "acctvn"
  address_space       = ["10.0.0.0/16"]
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
  name                 = "acctsub"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "test" {
  name                         = "publicIPForLB"
  location                     = "${azurerm_resource_group.test.location}"
  resource_group_name          = "${azurerm_resource_group.test.name}"
  public_ip_address_allocation = "static"
}

resource "azurerm_lb" "test" {
  name                = "loadBalancer"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"

  frontend_ip_configuration {
    name                 = "publicIPAddress"
    public_ip_address_id = "${azurerm_public_ip.test.id}"
  }
}

resource "azurerm_lb_backend_address_pool" "test" {
  resource_group_name = "${azurerm_resource_group.test.name}"
  loadbalancer_id     = "${azurerm_lb.test.id}"
  name                = "BackEndAddressPool"
}

resource "azurerm_network_interface" "test" {
  count               = 2
  name                = "acctni${count.index}"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"

  ip_configuration {
    name                                    = "testConfiguration"
    subnet_id                               = "${azurerm_subnet.test.id}"
    private_ip_address_allocation           = "dynamic"
    load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.test.id}"]
  }
}

resource "azurerm_managed_disk" "test" {
  count                = 2
  name                 = "datadisk_existing_${count.index}"
  location             = "${azurerm_resource_group.test.location}"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = "1023"
}

resource "azurerm_availability_set" "avset" {
  name                         = "avset"
  location                     = "${azurerm_resource_group.test.location}"
  resource_group_name          = "${azurerm_resource_group.test.name}"
  platform_fault_domain_count  = 2
  platform_update_domain_count = 2
  managed                      = true
}

resource "azurerm_virtual_machine" "test" {
  count                 = 2
  name                  = "acctvm${count.index}"
  location              = "${azurerm_resource_group.test.location}"
  availability_set_id   = "${azurerm_availability_set.avset.id}"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  network_interface_ids = ["${element(azurerm_network_interface.test.*.id, count.index)}"]
  vm_size               = "Standard_DS1_v2"

  # Uncomment this line to delete the OS disk automatically when deleting the VM
  # delete_os_disk_on_termination = true

  # Uncomment this line to delete the data disks automatically when deleting the VM
  # delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myosdisk${count.index}"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  # Optional data disks
  storage_data_disk {
    name              = "datadisk_new_${count.index}"
    managed_disk_type = "Standard_LRS"
    create_option     = "Empty"
    lun               = 0
    disk_size_gb      = "1023"
  }

  storage_data_disk {
    name            = "${element(azurerm_managed_disk.test.*.name, count.index)}"
    managed_disk_id = "${element(azurerm_managed_disk.test.*.id, count.index)}"
    create_option   = "Attach"
    lun             = 1
    disk_size_gb    = "${element(azurerm_managed_disk.test.*.disk_size_gb, count.index)}"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    environment = "staging"
  }
}


Type : terraform init

You should see this screen.

Type : az login

We now log in to our Microsoft Azure subscription.

https://microsoft.com/devicelogin

Insert the code from your PowerShell screen.

Now we have Terraform initialized and we are connected to our Azure subscription 😉

Type : terraform plan

It will refresh the state and get ready for deployment.

Type : terraform apply

and then type : yes <enter>

Terraform is now creating the Azure resources.

The Azure resource group acctestrg is created.

Terraform deployment VM Cluster on Azure is Ready 😉

Azure VM Cluster is running.

When you want to remove the complete Azure VM Cluster with terraform, it’s really easy :

Type : terraform destroy

and then type : yes <enter>

Azure resources are being deleted via terraform script

Terraform destroyed the Azure VM Cluster


All Azure Resources of the VM Cluster are removed.

I hope this step-by-step guide to deploying Infrastructure as Code with Terraform will help you with your own cloud solutions in Microsoft Azure.

PS: don’t forget to install the Visual Studio Code Azure Terraform extension and play!

#MVPbuzz




Enhancing Microsoft #Security using Artificial Intelligence E-book #AI #Azure #MachineLearning

At the Center of intelligent security management is the concept of working smarter, not harder. However, this is a significant undertaking when you consider the ever-evolving landscape of threats and security challenges, combined with the myriad of devices, apps, and user scenarios. In this e-book, learn how you can intelligently detect, protect, and respond to threats by leveraging the strong integration between Microsoft security solutions and our partners.
Read the full e-book to learn how Microsoft is using artificial intelligence (AI) in security features like:

  • Windows Hello
  • Azure Active Directory
  • Azure Advanced Threat Protection
  • Windows Defender SmartScreen
  • Windows Defender Network Protection
  • Exchange Online Protection and more…

You can download Enhancing Microsoft Security using Artificial Intelligence E-book here



#Microsoft Azure DevOps Projects and Infrastructure as Code #Azure #IaC #DevOps


Microsoft Azure DevOps Project for CI/CD

The Azure DevOps Project presents a simplified experience where you bring your existing code and Git repository, or choose from one of the sample applications to create a continuous integration (CI) and continuous delivery (CD) pipeline to Azure. The DevOps project automatically creates Azure resources such as a new Azure virtual machine, creates and configures a release pipeline in VSTS that includes a build definition for CI, sets up a release definition for CD, and then creates an Azure Application Insights resource for monitoring.

Infrastructure as Code (IaC) gives you benefits like :

  • Consistency in naming conventions of Azure components
  • Working together in the same way with your company policies
  • Reusability of Templates
  • Automatic documentation and CMDB of deployments in your repository
  • Rapid deployments
  • Flexibility and Scalability in code for Azure Deployments

As a large enterprise company you don’t want lots of employees clicking and typing in the Azure portal to get the job done, because it is hard to do that in a consistent way. Changes and deployments will drift over time, because people make mistakes. Developers build and test their application before they publish it, so why shouldn’t DevOps and IT pros do the same thing for infrastructure?

In the following step-by-step guide you will learn how to make a Microsoft Azure DevOps Project and make a CI/CD Pipeline deploying a virtual machine with your ASP.net Application.

Prerequisites :
An Azure subscription. You can get one free through Visual Studio Dev Essentials.
Access to a GitHub or external Git repository that contains .NET, Java, PHP, Node, Python, or static web code.

Here you find the GitHub for Developer Guide

When you have your prerequisites in place you can start with the following steps :

Search for DevOps at All Services in the Azure Portal

Select .NET and Click on Next

You can see where you are in the flow of creating your CI/CD pipeline. When you need an Azure SQL Database for your ASP.NET application, you can select Add a Database (optional). This will give you Azure SQL as a service (PaaS).

Database-as-a-Service
(I didn’t choose SQL)


In this step select Virtual Machine and click Next

From here you can create a VSTS account or use your existing Visual Studio Team Services account. After selecting VSTS you can manage your Azure settings, and by clicking on Change you can select the Azure options.

 

Select the Virtual Machine you need for your Application.

Here you see the Deployment Running

Important for Infrastructure as Code (IaC): the deployment template can be saved into the library and/or downloaded for reuse, or you can add your own policies to the template.

When you save it into the Azure library you get the release notes and who the publisher is.

In the Microsoft Azure DevOps Project main dashboard you will see the status of your CI/CD pipeline and whether a release is in progress or not. On the right side of the dashboard you see the Azure resources, like the application endpoint, the virtual machine, and Application Insights for monitoring. When the CI/CD pipeline deployment has succeeded, you can browse to your ASP.NET application.

Your Application.

Your Virtual Machine Running and in the Monitoring.


The Microsoft Azure DevOps Project CI/CD Pipeline is Completed.

Application Insights is an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your live web application. It will automatically detect performance anomalies. It includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app. It’s designed to help you continuously improve performance and usability. It works for apps on a wide variety of platforms including .NET, Node.js and J2EE, hosted on-premises or in the cloud. It integrates with your DevOps process, and has connection points to a variety of development tools. It can monitor and analyze telemetry from mobile apps by integrating with Visual Studio App Center and HockeyApp.

You can drill down into the error to see what is happening.

Azure Application Insights topology

Application Insights is aimed at the development team, to help you understand how your app is performing and how it’s being used. It monitors:
  • Request rates, response times, and failure rates – Find out which pages are most popular, at what times of day, and where your users are. See which pages perform best. If your response times and failure rates go high when there are more requests, then perhaps you have a resourcing problem.
  • Dependency rates, response times, and failure rates – Find out whether external services are slowing you down.
  • Exceptions – Analyse the aggregated statistics, or pick specific instances and drill into the stack trace and related requests. Both server and browser exceptions are reported.
  • Page views and load performance – reported by your users’ browsers.
  • AJAX calls from web pages – rates, response times, and failure rates.
  • User and session counts.
  • Performance counters from your Windows or Linux server machines, such as CPU, memory, and network usage.
  • Host diagnostics from Docker or Azure.
  • Diagnostic trace logs from your app – so that you can correlate trace events with requests.
  • Custom events and metrics that you write yourself in the client or server code, to track business events such as items sold or games won.

You can also drill down into Microsoft Azure Log Analytics and run your analytics queries to get the right information you want for troubleshooting. More information on Azure Log Analytics and queries is on MSFT docs.

From App Insight we see it was an Exception error

Because the Azure DevOps Project is connected with VSTS, you can follow the build and release there too, and you get documentation of your CI/CD pipeline.
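
If you prefer the command line, here is a hedged sketch of checking the same builds without opening the portal, assuming the azure-devops extension for the Azure CLI and a hypothetical organization and project:

az extension add --name azure-devops
az devops configure --defaults organization=https://dev.azure.com/contoso project=MyDevOpsProject
# List the five most recent pipeline runs
az pipelines runs list --top 5 --output table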

From here you can work with your developers and DevOps engineers and manage the user and group security in the CI/CD pipeline for the next build. Working together to build innovative apps via VSTS, from one dashboard:

VSTS Dashboard

The next day you see it was a one-time error and the pipeline is running fine 😉

For more information about all the possibilities with Microsoft Azure DevOps Project go to MSFT Docs

DevOps and Microsoft :

DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.

To Learn DevOps please visit this Microsoft DevOps Site

Conclusion : 

Investing in your CI/CD pipeline and building your own environment is important before you deploy into Azure production for your business. Keep your ARM templates and code in repositories like Git or VSTS. When you have all of this in place, you are more in control of consistent deployments and changes in the Azure cloud. I hope this blogpost is useful for you and your company. Start today with Infrastructure as Code (IaC) and get the benefits 😉