Cloud and Datacenter Management Blog

Microsoft Hybrid Cloud blogsite about Management



Deploying Azure Stack HCI Cluster with Windows Admin Center #WAC #AzureStackHCI #WindowsAdminCenter #Hyperv #AKS

Azure Stack HCI is a Hyper-Converged Infrastructure (HCI) cluster solution that hosts virtualized Windows and Linux workloads and their storage in a hybrid on-premises environment. Azure hybrid services enhance the cluster with capabilities such as cloud-based monitoring, Site Recovery, and VM backups, as well as a central view of all of your Azure Stack HCI deployments in the Azure portal. You can manage the cluster with your existing tools including Windows Admin Center, System Center, and PowerShell.

Azure Stack HCI, version 20H2 is a new operating system now in Public Preview and available for download. It’s intended for on-premises clusters running virtualized workloads, with hybrid-cloud connections built-in. As such, Azure Stack HCI is delivered as an Azure service and billed on an Azure subscription. Azure Stack HCI also now includes the ability to host the Azure Kubernetes Service; for details, see Azure Kubernetes Service on Azure Stack HCI.

Get Started with Azure Stack HCI and Windows Admin Center

Windows Admin Center is a locally deployed, browser-based app for managing Azure Stack HCI. The simplest way to install Windows Admin Center is on a local management PC (desktop mode), although you can also install it on a server (service mode).

If you install Windows Admin Center on a server, tasks that require CredSSP, such as cluster creation and installing updates and extensions, require using an account that’s a member of the Gateway Administrators group on the Windows Admin Center server. For more information, see the first two sections of Configure User Access Control and Permissions.

Before you begin, be aware that Azure Stack HCI is still in Preview and not ready for production use. I'm installing it in my MVPLAB for testing purposes only, to learn all the new features.

What’s New in Azure Stack HCI

Clusters running Azure Stack HCI, version 20H2 have the following new features as compared to Windows Server 2019-based solutions:

  • New capabilities in Windows Admin Center: With the ability to create and update hyper-converged clusters via an intuitive UI, Azure Stack HCI is easier than ever to use.
  • Stretched clusters for automatic failover: Multi-site clustering with Storage Replica replication and automatic VM failover provides native disaster recovery and business continuity to clusters that use Storage Spaces Direct.
  • Affinity and anti-affinity rules: These can be used similarly to how Azure uses Availability Zones to keep VMs and storage together or apart in clusters with multiple fault domains, such as stretched clusters.
  • Azure portal integration: The Azure portal experience for Azure Stack HCI is designed to view all of your Azure Stack HCI clusters across the globe, with new features in development.
  • GPU acceleration for high-performance workloads: AI/ML applications can benefit from boosting performance with GPUs.
  • BitLocker encryption: You can now use BitLocker to encrypt the contents of data volumes on Azure Stack HCI, helping government and other customers stay compliant with standards such as FIPS 140-2 and HIPAA.
  • Improved Storage Spaces Direct volume repair speed: Repair volumes quickly and seamlessly.

In the following step-by-step guide we install an Azure Stack HCI cluster with Windows Admin Center.

 

Click on Add and then Create New Server Cluster.

Choose Azure Stack HCI.

Here you can choose whether both Azure Stack HCI nodes are in the same site, or whether you have Azure Stack HCI nodes in two sites for disaster recovery and business continuity.
In my MVPLAB I have all Azure Stack HCI nodes in one site. More information about Microsoft Azure Stack HCI stretched clusters can be found here.

Prerequisites before you begin with the Windows Admin Center wizard for creating an Azure Stack HCI cluster.

This is what I like about Windows Admin Center: it supports you in every step and choice when building an Azure Stack HCI cluster with Storage Spaces Direct.

 

Specify your administrator Account and password and add the Azure Stack HCI Node Servers

Add the Nodes to the Domain.

Install Required Features on the Azure Stack HCI Node Servers
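
The wizard installs the required roles and features on every node for you. If you prefer to pre-stage them yourself, a rough PowerShell sketch of the same step (the node names are placeholders from my lab):

# Hypothetical node names; replace with your own Azure Stack HCI servers
$nodes = "HCINODE01", "HCINODE02"
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer, Data-Center-Bridging, RSAT-Clustering-PowerShell, Hyper-V-PowerShell -IncludeManagementTools -Restart
}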

Install Updates on the Azure Stack HCI Node Servers

Here you would get options from your hardware vendor. I don't get any because my lab nodes are virtual.

Restart the Azure Stack HCI node servers and click Next: Networking.

Network adapters are up and running.

When you have enough NICs in your Azure Stack HCI node servers, you can choose a teamed management NIC here.
I chose a single management NIC.
Plan your Azure Stack HCI node network.

Configure your Production and Storage network

Here you can configure different switches for your workloads.
Windows Admin Center also works with Software Defined Networking (SDN).
I skipped this in my MVPLAB.

Before creating the Azure Stack HCI Cluster, we have to Validate the Cluster first.

When the Cluster Validation is done, you can download the Cluster Validation report.

Here we give the Cluster a Name and a static IP.
Click Create Cluster.
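
Behind the scenes, the wizard runs roughly the equivalent of the following PowerShell (a sketch only; the node names, cluster name and IP address are placeholders from my lab):

# Validate the nodes, then create the cluster without shared storage
Test-Cluster -Node HCINODE01, HCINODE02 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
New-Cluster -Name HCICLUSTER01 -Node HCINODE01, HCINODE02 -StaticAddress 10.0.0.50 -NoStorage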

Microsoft Azure Stack HCI Cluster is created 😉
Click Next for Storage.

Click Next

I got some small disks; click Next.

Storage is validated and suitable for Storage Spaces Direct.

Storage Spaces Direct is enabled on your Azure Stack HCI Cluster.
Click Next for SDN
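
For reference, enabling Storage Spaces Direct and creating a first volume from PowerShell looks roughly like this (a sketch; cluster name, volume name and size are placeholders):

# Run against the cluster created above
Enable-ClusterStorageSpacesDirect -CimSession HCICLUSTER01
New-Volume -CimSession HCICLUSTER01 -FriendlyName "Volume01" -Size 1TB -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*"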

Here you can configure the Network Controller for the Azure Stack HCI Cluster

Done! Your Azure Stack HCI cluster is created 🙂

Here we have the Dashboard in Windows Admin Center of my Azure Stack HCI Cluster

Management of your Azure Stack HCI Cluster

Managing your Azure Stack HCI cluster with Windows Admin Center is important. Because I have connected WAC to my Azure subscription, I can use Azure Monitor.
From here the cluster is also connected to the Log Analytics workspace of Azure Monitor.

Azure Stack HCI Cluster Nodes connected with Azure Monitor.

With Windows Admin Center you can manage the Azure Stack HCI updates with Cluster Aware Updating (CAU) without any downtime for your workloads.
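
If you prefer PowerShell, a CAU run can also be kicked off against the cluster; a minimal sketch (the cluster name is a placeholder from my lab):

# Update each node in turn, draining roles so workloads keep running
Invoke-CauRun -ClusterName HCICLUSTER01 -MaxFailedNodes 0 -MaxRetriesPerNode 3 -RequireAllNodesOnline -Force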


Start Cluster Aware Updating

Click on Install

One Azure Stack HCI Node is waiting and the other is Installing.

Now the other Azure Stack HCI Node is Installing the Update.

Updates Succeeded on both Azure Stack HCI Nodes.

Microsoft Azure Stack HCI Cluster is Running

Create your Virtual Machine on Azure Stack HCI Cluster.
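
You can also create a clustered VM from PowerShell instead of the WAC wizard; a minimal sketch (all names and paths are placeholders, and the VM is created without a virtual disk for brevity):

# Create the VM on one node, then make it highly available in the cluster
New-VM -Name "VM01" -MemoryStartupBytes 4GB -Generation 2 -Path "C:\ClusterStorage\Volume01" -ComputerName HCINODE01
Add-ClusterVirtualMachineRole -VMName "VM01" -Cluster HCICLUSTER01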

Conclusion

Windows Admin Center supports you all the way with an easy step-by-step deployment wizard for building your Microsoft Azure Stack HCI cluster. Of course you can also write your own PowerShell deployment scripts when you have to build more Azure Stack HCI clusters for different platforms, such as virtual machines, AKS Kubernetes clusters for container applications, or a SQL environment.
Here you can find more information about the PowerShell commands.

After deploying Azure Stack HCI clusters with your own PowerShell script, you can add the cluster to Windows Admin Center for IT management.
The installation time of the cluster is really fast. I hope this gives you more insight into the Preview of Microsoft Azure Stack HCI and Windows Admin Center, better together!
Next Step is AKS Kubernetes on Azure Stack HCI 😉

Kubernetes Containers on your Azure Stack HCI



Installing and Maintaining #Azure Kubernetes Cluster with Multi Pool Nodes (Preview) for #Linux #Winserv Containers

Install AKS-Preview extension via Azure Cloudshell

NOTE ! This is a Preview blogpost, do not use in production! (only for test environments)
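
As the heading says, the aks-preview CLI extension is what exposes these preview features. It can be installed from Cloud Shell with the standard extension command:

az extension add --name aks-preview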

To create an AKS cluster that can use multiple node pools and run Windows Server containers, first enable the WindowsPreview feature flag on your subscription. The WindowsPreview feature also uses multi-node pool clusters and virtual machine scale sets to manage the deployment and configuration of the Kubernetes nodes. Register the WindowsPreview feature flag using the az feature register command as shown in the following example:

I have registered the following preview features from the Azure Cloud Shell:

  • az feature register --name WindowsPreview --namespace Microsoft.ContainerService
  • az feature register --name MultiAgentpoolPreview --namespace Microsoft.ContainerService
  • az feature register --name VMSSPreview --namespace Microsoft.ContainerService

This takes a few minutes, and you can check the registration status with the following command:

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"

When ready, refresh the registration of the Microsoft.ContainerService resource provider using the az provider register command:
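
For reference, that command is:

az provider register --namespace Microsoft.ContainerService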

 

Creating Azure Kubernetes Cluster

First you create a Resource Group in the right Azure Region for your AKS Cluster to run:

az group create --name myResourceGroup --location eastus

I created Resource Group KubeCon in location West-Europe.

Creating KubeCluster

With the following CLI command in Azure Cloudshell, I created the Kubernetes Cluster with a single node:

$PASSWORD_WIN="P@ssw0rd1234"

az aks create --resource-group KubeCon --name KubeCluster --node-count 1 --enable-addons monitoring --kubernetes-version 1.14.0 --generate-ssh-keys --windows-admin-password $PASSWORD_WIN --windows-admin-username azureuser --enable-vmss --network-plugin azure

The Azure Kubernetes Cluster “KubeCluster” is created in the resource group “KubeCon” in a few minutes.
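
Optionally, you can verify the cluster from Cloud Shell by pulling its credentials and listing the nodes (not shown in the screenshots):

az aks get-credentials --resource-group KubeCon --name KubeCluster
kubectl get nodes -o wide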

Adding a Windows Pool

Adding a Windows Server Node Pool

By default, an AKS cluster is created with a node pool that can run Linux containers. Use the az aks nodepool add command to add an additional node pool that can run Windows Server containers.

az aks nodepool add --resource-group KubeCon --cluster-name KubeCluster --os-type Windows --name pool02 --node-count 1 --kubernetes-version 1.14.0

I added the Windows Server Pool via the Azure Portal.

When this has finished, we have an Azure Kubernetes cluster with multiple node pools for Linux and Windows Server containers:

Pools for Linux and Windows Server Containers

The following resources are created in Microsoft Azure as well:

VNET, NSG and Virtual Machine Scale Set (VMSS)

Azure Monitor for containers is a feature designed to monitor the performance of container workloads deployed to either Azure Container Instances or managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). Monitoring your containers is critical, especially when you’re running a production cluster, at scale, with multiple applications.
Azure Monitor for containers gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux and stored in your Log Analytics workspace.
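
I enabled monitoring at cluster creation time with --enable-addons monitoring; if you skipped that, the addon can be switched on afterwards. A sketch:

az aks enable-addons --resource-group KubeCon --name KubeCluster --addons monitoring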

Container Insights Monitoring of the Linux Node

Container Insights Monitoring of the Windows Server Node

Here you can read all about Azure Monitoring with Container Insights

Scaling Multi Pool Node AKS Cluster

To scale your multi-node-pool AKS cluster, you need to do this via the Azure Cloud Shell CLI.

Here you see the two pools ( Linux and Windows Server)

Scaling up the Windows Server Pool

You can do this with the following command :

az aks nodepool scale --resource-group KubeCon --cluster-name KubeCluster --name pool02 --node-count 2 --no-wait

Scaling

Scaling successful after a few minutes.

Upgrading Windows Server Pool Instance

When I scaled the cluster, there was an update released by Microsoft.

Windows Server Pool Instances

Just Click on Upgrade

Upgrade is Done 😉
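
The same node pool upgrade can also be done from the CLI; a sketch (the target Kubernetes version below is a placeholder, use the version that is offered to you):

az aks nodepool upgrade --resource-group KubeCon --cluster-name KubeCluster --name pool02 --kubernetes-version 1.14.8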



#Microsoft SQL Always-On Cluster vs #SQL Managed Instance in #Azure

SQL Always-On Cluster in Azure

Before we start building this SQL Always-On cluster, we already have the following Azure components active in the subscription to work with:

  • Virtual Network VNET-001 is already installed
  • Subnet-SQL and Subnet-Domaincontrollers
  • Network Security Groups (NSG) with the right rules active
  • Two domain controllers
  • Azure Key Vault (for disk encryption)

We deployed three Virtual Machines in an Availability Set :

  • Primary SQL Node VM01
  • Secondary SQL Node VM02
  • Witness Server

The deployment was done with an ARM template:

VM Deployment

Copy and paste the JSON into your template editor, like Visual Studio Code for example, or into the Azure portal template builder.

Visual Studio Code

Azure Portal Template

Read more about how to deploy ARM templates via the Microsoft Azure portal here.
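
You can also deploy the same template from Azure PowerShell; a minimal sketch (file and resource group names are placeholders, and on older Azure PowerShell the cmdlet is New-AzureRmResourceGroupDeployment instead of New-AzResourceGroupDeployment):

# Deploy the ARM template and its parameter file into an existing resource group
New-AzResourceGroupDeployment -ResourceGroupName "RG-SQL" -TemplateFile ".\sql-alwayson.json" -TemplateParameterFile ".\sql-alwayson.parameters.json"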

You can also create a private or public repository on GitHub and store your ARM templates there in a library.


Create a SQL Server 2014 Always On Availability Group in an existing Azure VNET and an existing Active Directory instance via GitHub :

https://github.com/Azure/azure-quickstart-templates/tree/master/sql-server-2014-alwayson-existing-vnet-and-ad

Configure Always On Availability Group in Azure VM manually :

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-availability-group-tutorial

Important Tip :

Don’t forget to configure the right connectivity between the Azure Load Balancer and the SQL Always-On listener:

## Get the Cluster Resource Information:
Clear-Host
Get-ClusterResource `
| Where-Object {$_.ResourceType.Name -like "IP Address"} `
| Get-ClusterParameter `
| Where-Object {($_.Name -like "Network") -or ($_.Name -like "Address") -or ($_.Name -like "ProbePort") -or ($_.Name -like "SubnetMask")}

#############################################################

## Set Cluster Parameters:
$ClusterNetworkName = "Cluster Network 1" # the cluster network name (use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)
$IPResourceName = "IPlistener" # the IP Address resource name
$ListenerILBIP = "10.x.x.x" # the IP address of the Internal Load Balancer (ILB). This is the static IP address for the load balancer you configured in the Azure portal.
[int]$ListenerProbePort = 80

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

############################################################

Before you move your SQL workloads from your on-premises datacenter to Microsoft Azure, have a look at whether the PaaS offering Azure SQL Managed Instance, with all its benefits, is something for your organization.

What is Microsoft Azure SQL Managed Instance?

Managed instance is a new deployment option of Azure SQL Database, providing near 100% compatibility with the latest SQL Server on-premises (Enterprise Edition) Database Engine, providing a native virtual network (VNet) implementation that addresses common security concerns, and a business model favorable for on-premises SQL Server customers. The managed instance deployment model allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, the managed instance deployment option preserves all PaaS capabilities (automatic patching and version updates, automated backups, high availability), which drastically reduce management overhead and TCO.
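
For orientation, creating a managed instance from the Azure CLI looks roughly like this (a sketch only; all names are placeholders, and the subnet must be an existing, properly delegated subnet resource ID):

az sql mi create --name sqlmi01 --resource-group RG-SQL --location westeurope --admin-user sqladmin --admin-password <strong-password> --subnet /subscriptions/<sub-id>/resourceGroups/RG-SQL/providers/Microsoft.Network/virtualNetworks/VNET-001/subnets/Subnet-SQL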

Read here more on Microsoft Docs about Azure SQL Services

Conclusion :

When you have a lot of SQL workloads and want to go to Microsoft Azure Cloud Services, analyze your existing workloads well and have a look first at Microsoft Azure SQL Managed Instances. With this Azure PaaS Service, you don’t have to manage the Complete Infrastructure like in a SQL Always-On Cluster (IaaS).

Have a good look at the requirements and Microsoft Data Migration Services can help you out.

SQL Server instance migration to Azure SQL Database managed instance

 



#Microsoft SQL Server 2019 Preview Overview #SQL #SQL2019 #Linux #Containers #MSIgnite

Microsoft SQL Server 2019 Preview

What’s New in Microsoft SQL Server 2019 Preview

• Big Data Clusters
o Deploy a Big Data cluster with SQL and Spark Linux containers on Kubernetes
o Access your big data from HDFS
o Run Advanced analytics and machine learning with Spark
o Use Spark streaming to stream data to SQL data pools
o Use Azure Data Studio to run Query books that provide a notebook experience

• Database engine
o UTF-8 support
o Resumable online index create allows index create to resume after interruption
o Clustered columnstore online index build and rebuild
o Always Encrypted with secure enclaves
o Intelligent query processing
o Java language programmability extension
o SQL Graph features
o Database scoped configuration setting for online and resumable DDL operations
o Always On Availability Groups – secondary replica connection redirection
o Data discovery and classification – natively built into SQL Server
o Expanded support for persistent memory devices
o Support for columnstore statistics in DBCC CLONEDATABASE
o New options added to sp_estimate_data_compression_savings
o SQL Server Machine Learning Services failover clusters
o Lightweight query profiling infrastructure enabled by default
o New Polybase connectors
o New sys.dm_db_page_info system function returns page information

• SQL Server on Linux
o Replication support
o Support for the Microsoft Distributed Transaction Coordinator (MSDTC)
o Always On Availability Group on Docker containers with Kubernetes
o OpenLDAP support for third-party AD providers
o Machine Learning on Linux
o New container registry
o New RHEL-based container images
o Memory pressure notification

• Master Data Services
o Silverlight controls replaced

• Security
o Certificate management in SQL Server Configuration Manager

• Tools
o SQL Server Management Studio (SSMS) 18.0 (preview)
o Azure Data Studio

Introducing Microsoft SQL Server 2019 Big Data Clusters

SQL Server 2019 big data clusters make it easier for big data sets to be joined to the dimensional data typically stored in the enterprise relational database, enabling people and apps that use SQL Server to query big data more easily. The value of the big data greatly increases when it is not just in the hands of the data scientists and big data engineers but is also included in reports, dashboards, and applications. At the same time, the data scientists can continue to use big data ecosystem tools while also utilizing easy, real-time access to the high-value data in SQL Server because it is all part of one integrated, complete system.

Read the complete Awesome blogpost from Travis Wright about SQL Server 2019 Big Data Cluster here

Starting in SQL Server 2017 with support for Linux and containers, Microsoft has been on a journey of platform and operating system choice. With SQL Server 2019 preview, we are making it easier to adopt SQL Server in containers by enabling new HA scenarios and adding supported Red Hat Enterprise Linux container images. Today we are happy to announce the availability of SQL Server 2019 preview Linux-based container images on Microsoft Container Registry, Red Hat-Certified Container Images, and the SQL Server operator for Kubernetes, which makes it easy to deploy an Availability Group.

SQL Server 2019 preview containers now available

Microsoft Azure Data Studio

Azure Data Studio is a new cross-platform desktop environment for data professionals using the family of on-premises and cloud data platforms on Windows, MacOS, and Linux. Previously released under the preview name SQL Operations Studio, Azure Data Studio offers a modern editor experience with lightning fast IntelliSense, code snippets, source control integration, and an integrated terminal. It is engineered with the data platform user in mind, with built-in charting of query resultsets and customizable dashboards.

Read the Complete Blogpost About Microsoft Azure Data Studio for SQL Server here

SQL Server 2019: Celebrating 25 years of SQL Server Database Engine and the path forward

Awesome work Microsoft SQL Team and Congrats on your 25th Anniversary !



Cluster Operating System Rolling Upgrade in Windows Server Technical Preview #Winserv #Hyperv

Cluster OS Rolling Upgrade

Cluster Operating System (OS) Rolling Upgrade is a new feature in Windows Server Technical Preview that enables an administrator to upgrade the operating system of the cluster nodes from Windows Server 2012 R2 to Windows Server Technical Preview without stopping the Hyper-V or the Scale-Out File Server workloads. Using this feature, the downtime penalties against Service Level Agreements (SLA) can be avoided.

Cluster OS Rolling Upgrade provides the following benefits:

  • Hyper-V virtual machine and Scale-out File Server workloads can be upgraded from Windows Server 2012 R2 to Windows Server Technical Preview without downtime. Other cluster workloads will be unavailable during the time it takes to failover to Windows Server Technical Preview.
  • It does not require any additional hardware.
  • The cluster does not need to be stopped or restarted.
  • A new cluster is not required. In addition, existing cluster objects stored in Active Directory are used.
  • The upgrade process is reversible until the customer crosses the “point-of-no-return”, when all cluster nodes are running Windows Server Technical Preview, and when the Update-ClusterFunctionalLevel PowerShell cmdlet is run.
  • The cluster can support patching and maintenance operations while running in the mixed-OS mode.
  • It supports automation via PowerShell and WMI.
  • The ClusterFunctionalLevel property indicates the state of the cluster on Windows Server Technical Preview cluster nodes.
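
For reference, one pass of the drain-upgrade-resume cycle and the final commit look roughly like this in PowerShell (the node name is a placeholder; these are the cmdlets referred to above):

# Drain one node, upgrade its operating system out of band, then bring it back (repeat per node)
Suspend-ClusterNode -Name "NODE1" -Drain -Wait
# ...reinstall or upgrade the OS on NODE1 and re-add it to the cluster...
Resume-ClusterNode -Name "NODE1" -Failback Immediate

# When every node runs the new OS, commit the upgrade (this is the point-of-no-return)
Update-ClusterFunctionalLevel
(Get-Cluster).ClusterFunctionalLevel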

This guide describes the various stages of the Cluster OS Rolling Upgrade process, installation steps, feature limitations and frequently asked questions (FAQs), and is applicable to the following Cluster OS Rolling Upgrade scenarios in Windows Server Technical Preview:

  • Hyper-V clusters
  • Scale-Out File Server clusters

The following scenarios are not supported in Windows Server Technical Preview:

  • Cluster OS Rolling Upgrade of a cluster using storage with the Data Deduplication feature
  • Cluster OS Rolling Upgrade of virtual machines with Data Protection Manager (DPM) backups
  • Cluster OS Rolling Upgrade of guest clusters using virtual hard disk (.vhdx file) as shared storage
Important: This preview release should not be used in production environments.

Read more about Cluster Operating System Rolling Upgrade in Windows Server Technical Preview here

Cluster OS Rolling Upgrade Process :

Cluster OS Rolling Upgrade Process



#Microsoft System Center Management Pack for Windows Server Cluster #SCOM #sysctr #Winserv #Cluster

MP Cluster
The Windows Server Failover Cluster Management Pack provides both proactive and reactive monitoring of your Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 failover cluster deployments. It monitors Cluster services components (such as nodes, networks, resources, and resource groups) to report issues that can cause downtime or poor performance.
The monitoring provided by this management pack includes availability and configuration monitoring. In addition to health monitoring capabilities, this management pack includes dashboard views, extensive knowledge with embedded inline tasks, and views that enable near real-time diagnosis and resolution of detected issues.
With this management pack, Information Technology (IT) administrators can automate one-to-many management of users and computers, simplifying administrative tasks and reducing IT costs. Administrators can efficiently implement security settings, enforce IT policies, and distribute software consistently across a given site, domain, or range of organizational units.

You can download the Microsoft System Center Management Pack for Windows Server Cluster with documentation here



Building High Performance Storage for #Hyperv Cluster on Scale-Out File Servers using #Violin Windows Flash Arrays

Hyperv Cluster

This white paper demonstrates the capabilities and performance for Violin Windows Flash Array (WFA), a next generation All-Flash Array storage platform. With the joint efforts of Microsoft and Violin Memory, WFA provides built-in high performance, availability and scalability by the tight integration of Violin’s All Flash Array and Microsoft Windows Server 2012 R2 Scale-Out File Server Cluster.

You can download this WhitePaper here



#SQL 2014 Guest Cluster with Shared VHDX on #Hyperv 2012 R2 Cluster for #WAPack

Our configuration and requirements before we begin the Guest SQL 2014 Cluster with Shared VHDX on a Hyper-V Cluster :

  1. Microsoft Hyper-V 2012 R2 Cluster is running with Cluster Shared Volumes.
  2. We created two virtual machines with Microsoft Windows Server 2012 R2 called SQL01 and SQL02.
  3. Networking: two NICs, one on the production switch and the second on the heartbeat switch.

This step shows how to create and then share a virtual hard disk that is in the .vhdx file format. Repeat this step for each shared .vhdx file that you want to add. For example, you may want to add one or more shared disks that will act as data disks, and a separate shared disk that you can designate as the disk witness for the guest failover cluster.

  1. In Failover Cluster Manager, expand the cluster name, and then click Roles.
  2. In the Roles pane, right-click the virtual machine on which you want to add a shared virtual hard disk, and then click Settings.
  3. In the virtual machine settings, under Hardware, click SCSI Controller.
  4. In the details pane, click Hard Drive, and then click Add.
  5. In the Hard Drive details pane, under Virtual hard disk, click New. The New Virtual Hard Disk Wizard opens.
  6. On the Before You Begin page, click Next.
  7. On the Choose Disk Format page, accept the default format of VHDX, and then click Next.
    Note: To share the virtual hard disk, the format must be .vhdx.
  8. On the Choose Disk Type page, select Fixed size or Dynamically expanding, and then click Next.
    Note: A differencing disk is not supported for a shared virtual hard disk.
  9. On the Specify Name and Location page, do the following:
    1. In the Name box, enter the name of the shared virtual hard disk.
    2. In the Location box, enter the path of the shared storage location. For Scenario 1, where the shared storage is a CSV disk, enter the path C:\ClusterStorage\VolumeX, where C:\ represents the system drive, and X represents the desired CSV volume number.

      For Scenario 2, where the shared storage is an SMB file share, specify the path:

      \\ServerName\ShareName, where ServerName represents the client access point for the Scale-Out File Server, and ShareName represents the name of the SMB file share.

    3. Click Next.
  10. On the Configure Disk page, accept the default option of Create a new blank virtual hard disk, specify the desired size, and then click Next.
  11. On the Completing the New Virtual Hard Disk Wizard page, review the configuration, and then click Finish.
    Important: If the virtual machine is running, do not click Apply in the virtual machine settings before you continue to the next procedure. If you do click Apply on a running virtual machine, you will need to stop the virtual machine or remove and then add the virtual hard disk without clicking Apply.
Advanced Features
  1. In the virtual machine settings, under SCSI Controller, expand the hard drive that you created in the previous procedure.
  2. Click Advanced Features.
  3. In the details pane, select the Enable virtual hard disk sharing check box.
    Note: If the check box appears dimmed and is unavailable, you can do either of the following:

    • Remove and then add the virtual hard disk to the running virtual machine. When you do, ensure that you do not click Apply when the New Virtual Hard Disk Wizard completes. Instead, immediately configure sharing in Advanced Features.
    • Stop the virtual machine, and then select the Enable virtual hard disk sharing check box.
  4. Click Apply, and then click OK.
  5. Add the virtual hard disk to each virtual machine that will use the shared .vhdx file. When you do, repeat this procedure to enable virtual hard disk sharing for each virtual machine that will use the disk.
Tip: To share a virtual hard disk by using Windows PowerShell, use the Add-VMHardDiskDrive cmdlet with the -ShareVirtualDisk parameter. You must run this command as an administrator on the Hyper-V host for each virtual machine that will use the shared .vhdx file. For example, the following command adds a shared virtual hard disk (Data1.vhdx) on volume 1 of CSV to a virtual machine that is named VM1:

Add-VMHardDiskDrive -VMName VM1 -Path C:\ClusterStorage\Volume1\Data1.vhdx -ShareVirtualDisk

The following command adds a shared virtual hard disk (Witness.vhdx) that is stored on an SMB file share (\\Server1\Share1) to a virtual machine that is named VM2:

Add-VMHardDiskDrive -VMName VM2 -Path \\Server1\Share1\Witness.vhdx -ShareVirtualDisk

On the second guest cluster node, add the same .vhdx files you created on the first guest cluster node and share them in the same way.

Failover Cluster Manager 2

So now we have a Shared Quorum Disk and a Shared Data Disk

The next steps (a PowerShell sketch follows this list):

  • Bring the disks Online on the first Guest node with Disk Management
  • Install Failover Cluster on both nodes.
  • Make a two node Cluster
  • Install a Microsoft SQL 2014 Guest Cluster.
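
A rough sketch of the clustering steps above in PowerShell, run from inside the guest VMs (the cluster IP is a placeholder from my lab; the SQL installation itself is done with the SQL Server setup wizard):

# Install the Failover Clustering feature in both guest VMs
Invoke-Command -ComputerName SQL01, SQL02 -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}
# Validate and create the two-node guest cluster
Test-Cluster -Node SQL01, SQL02
New-Cluster -Name Cluster02 -Node SQL01, SQL02 -StaticAddress 10.0.0.60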

Disk Management: Shared VHDX disks are Online

Install Microsoft Failover Cluster.

Failover Cluster Manager: Cluster02 is Online with both Shared VHDX disks for the Guest SQL 2014 Cluster

Install Microsoft SQL 2014 Server for Clustering.

Failover Cluster Manager 3b

Microsoft SQL 2014 Server is installed on the Guest Cluster on top of a Hyper-V 2012 R2 Cluster.

Failover Cluster Manager 4

Guest SQL 2014 Cluster is running on Hyper-V 2012 R2.

SQL2014 Studio

So now we are ready for installing Microsoft SPF and Windows Azure Pack for Windows Server 2012 R2 🙂



Add new Windows Server 2012 R2 Hyper-V Node to the Cluster with SC2012R2 VMM #SCVMM #Hyperv

Private Cloud Rack TestLAB

We have the following configuration :

  • Microsoft Operating System is Windows Server 2012 R2 in a Single forest.
  • Microsoft Windows Server 2012 R2 Hyper-V Cluster
  • Microsoft System Center 2012 R2 Virtual Machine Manager
  • Microsoft SQL 2012 SP1

Here you see the step-by-step guide to add a Hyper-V node to the cluster with System Center 2012 R2 Virtual Machine Manager:

Add  Hyper-V node

Go to all Hosts and add the new Hyper-V node into SCVMM with this wizard.
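
A hedged PowerShell equivalent of this wizard step, using the VMM cmdlets (the host name, host group and Run As account are placeholders from my lab):

# Add the new Hyper-V host to VMM under the chosen host group
$runAs = Get-SCRunAsAccount -Name "HyperVAdmin"
Add-SCVMHost -ComputerName "HVNODE03.mvplab.local" -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") -Credential $runAs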

When you see the new Hyper-V host in System Center 2012 R2 Virtual Machine Manager, you have to set up the host with all the settings from Virtual Machine Manager by opening the properties of the new host.

SCVMM Network 2

Here you set the right NICs to the Logical networks.

Here we set the following NICs :

  • Production network
  • Management network
  • Live Migration network
  • Heartbeat network

After that we set the highly available virtual switches for the new Hyper-V node:

 

SCVMM Network: Just set the right NIC and uplink port profile on the virtual switch.

Then you have to add the virtual NIC to the Virtual Switch :

SCVMM Network: Here you set the VM network and static IP address.

For production we set two NICs into a Team for capacity :

SCVMM Team: Make the team with the right NICs and uplink port profile.

When all of the properties are set, click on OK and the settings will be provisioned to the new Hyper-V node.

Hyperv settings to node

Before we add the new Hyper-V node to the cluster, you can validate the Hyper-V cluster to check that it is still healthy:

SCVMM Network: Validate the Hyper-V Cluster to see if there are any issues in the cluster.

SCVMM Network: The Hyper-V Cluster is being validated for any issues.

When the running Hyper-V cluster validates successfully, we can add the new Hyper-V node to the cluster with System Center 2012 R2 Virtual Machine Manager:

SCVMM Network: Right-click on the cluster name and click Add Cluster Node.

SCVMM Network: Here you see the host to add.

SCVMM Network: Click Add and the Hyper-V node will be added to the Hyper-V cluster.

SCVMM Network: Job successful.

Node in Cluster: The new node is now running in the Hyper-V cluster, provisioned by SCVMM.