Cloud and Datacenter Management Blog

Microsoft Hybrid Cloud blogsite about Management



Installing and Maintaining #Azure Kubernetes Cluster with Multi Pool Nodes (Preview) for #Linux #Winserv Containers

Install AKS-Preview extension via Azure Cloudshell

NOTE ! This is a Preview blogpost, do not use in production! (only for test environments)
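The multi node pool and Windows node pool commands used below come from the aks-preview CLI extension. If your Cloud Shell does not have it yet, adding (or updating) it should look roughly like this (a sketch; version numbers will differ):

az extension add --name aks-preview
# or, if it is already installed:
az extension update --name aks-preview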

To create an AKS cluster that can use multiple node pools and run Windows Server containers, first enable the WindowsPreview feature flag on your subscription. The WindowsPreview feature also uses multi-node pool clusters and virtual machine scale sets to manage the deployment and configuration of the Kubernetes nodes. Register the WindowsPreview feature flag using the az feature register command as shown in the following example:

I have registered the following preview features from the Azure Cloud Shell:

  • az feature register --name WindowsPreview --namespace Microsoft.ContainerService
  • az feature register --name MultiAgentpoolPreview --namespace Microsoft.ContainerService
  • az feature register --name VMSSPreview --namespace Microsoft.ContainerService

This will take a few minutes and you can check the registration with the following command :

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"

When ready, refresh the registration of the Microsoft.ContainerService resource provider using the az provider register command:
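For this scenario that is:

az provider register --namespace Microsoft.ContainerService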

 

Creating Azure Kubernetes Cluster

First you create a Resource Group in the right Azure Region for your AKS Cluster to run:

az group create --name myResourceGroup --location eastus

I created the resource group KubeCon in the West Europe region.

Creating KubeCluster

With the following CLI command in Azure Cloudshell, I created the Kubernetes Cluster with a single node:

$PASSWORD_WIN="P@ssw0rd1234"

az aks create --resource-group KubeCon --name KubeCluster --node-count 1 --enable-addons monitoring --kubernetes-version 1.14.0 --generate-ssh-keys --windows-admin-password $PASSWORD_WIN --windows-admin-username azureuser --enable-vmss --network-plugin azure

The Azure Kubernetes Cluster "KubeCluster" is created in the resource group "KubeCon" in a few minutes.

Adding a Windows Pool

Adding a Windows Server Node Pool

By default, an AKS cluster is created with a node pool that can run Linux containers. Use az aks nodepool add command to add an additional node pool that can run Windows Server containers.

az aks nodepool add --resource-group KubeCon --cluster-name KubeCluster --os-type Windows --name pool02 --node-count 1 --kubernetes-version 1.14.0

I added the Windows Server Pool via the Azure Portal.

When this has finished, we have an Azure Kubernetes cluster with multiple node pools for Linux and Windows Server containers:

Pools for Linux and Windows Server Containers

The following will be created in Microsoft Azure too :

VNET, NSG and Virtual Machine Scale Set (VMSS)
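With the cluster and both pools in place, you can also check them from the Cloud Shell by pulling the cluster credentials and listing the nodes (a quick sketch; node names will look different in your cluster):

az aks get-credentials --resource-group KubeCon --name KubeCluster
kubectl get nodes -o wide
# Both the Linux node from the default pool and the Windows Server node from pool02 should show up here.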

Azure Monitor for containers is a feature designed to monitor the performance of container workloads deployed to either Azure Container Instances or managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). Monitoring your containers is critical, especially when you’re running a production cluster, at scale, with multiple applications.
Azure Monitor for containers gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux and stored in your Log Analytics workspace.
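Monitoring was already switched on at cluster creation above with --enable-addons monitoring. For an existing cluster without the add-on, enabling it afterwards looks roughly like this (a sketch; if you do not pass a workspace, a default Log Analytics workspace is created for you):

az aks enable-addons --resource-group KubeCon --name KubeCluster --addons monitoring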

Container Insights Monitoring of the Linux Node

Container Insights Monitoring of the Windows Server Node

Here you can read all about Azure Monitoring with Container Insights

Scaling Multi Pool Node AKS Cluster

To scale your multi node pool AKS cluster, you need to do this via the Azure Cloud Shell CLI.

Here you see the two pools ( Linux and Windows Server)
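The same overview is available from the CLI:

az aks nodepool list --resource-group KubeCon --cluster-name KubeCluster -o table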

Scaling up the Windows Server Pool

You can do this with the following command :

az aks nodepool scale --resource-group KubeCon --cluster-name KubeCluster --name pool02 --node-count 2 --no-wait

Scaling

Scaling successful after a few minutes

Upgrading Windows Server Pool Instance

While I was scaling the cluster, Microsoft released an update.

Windows Server Pool Instances

Just Click on Upgrade

Upgrade is Done 😉
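The same node pool upgrade can also be started from the Cloud Shell CLI; a sketch, where the target Kubernetes version is a placeholder (pick one that az aks get-upgrades reports as available):

az aks nodepool upgrade --resource-group KubeCon --cluster-name KubeCluster --name pool02 --kubernetes-version <new-version>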



#Microsoft SQL Always-On Cluster vs #SQL Managed Instance in #Azure

SQL Always-On Cluster in Azure

Before we start building this SQL Always-On cluster, we already have some Azure components active in the subscription to work with:

  • Virtual Network VNET-001 is already installed
  • Subnet-SQL and Subnet-Domaincontrollers
  • Network Security Groups (NSG) with the right rules active
  • Two domain controllers
  • Azure Keyvault ( for disk Encryption)

We deployed three Virtual Machines in an Availability Set :

  • Primary SQL Node VM01
  • Secondary SQL Node VM02
  • Witness Server

The deployment was done with an ARM template:

VM Deployment

Copy and paste the JSON into your template editor, like Visual Studio Code for example, or into the Azure Portal template builder.

Visual Studio Code

Azure Portal Template

Read more how to deploy ARM Templates via Microsoft Azure Portal here
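Besides the portal, the same ARM template can be deployed from the Azure CLI; a minimal sketch, assuming you saved the template and parameters locally as azuredeploy.json and azuredeploy.parameters.json:

az deployment group create --resource-group <resource-group> --template-file azuredeploy.json --parameters @azuredeploy.parameters.json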

You also can create a Private or Public Repository on GitHub and store your ARM Templates there in a Library.

GitHub Learning Lab
Learn new skills by completing fun, realistic projects in your very own GitHub repository. Get advice and helpful feedback from our friendly Learning Lab bot.

Create a SQL Server 2014 Always On Availability Group in an existing Azure VNET and an existing Active Directory instance via GitHub :

https://github.com/Azure/azure-quickstart-templates/tree/master/sql-server-2014-alwayson-existing-vnet-and-ad

Configure Always On Availability Group in Azure VM manually :

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-availability-group-tutorial

Important Tip :

Don’t forget to get the right connectivity between Azure Load Balancer and the SQL Always-On Listener :

## Get the Cluster Resource Information:
Clear-Host
Get-ClusterResource `
| Where-Object {$_.ResourceType.Name -like "IP Address"} `
| Get-ClusterParameter `
| Where-Object {($_.Name -like "Network") -or ($_.Name -like "Address") -or ($_.Name -like "ProbePort") -or ($_.Name -like "SubnetMask")}

#############################################################

## Set Cluster Parameters:
$ClusterNetworkName = "Cluster Network 1" # the cluster network name (use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)
$IPResourceName = "IPlistener" # the IP address resource name
$ListenerILBIP = "10.x.x.x" # the IP address of the Internal Load Balancer (ILB). This is the static IP address for the load balancer you configured in the Azure portal.
[int]$ListenerProbePort = 80

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

############################################################
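On the Azure side, the Internal Load Balancer needs a matching health probe and a load balancing rule with Floating IP (Direct Server Return) enabled. Roughly, with placeholder names and the probe port matching $ListenerProbePort above (a sketch, not my exact configuration):

az network lb probe create --resource-group <resource-group> --lb-name <ilb-name> --name SQLAlwaysOnProbe --protocol tcp --port <probe-port>
az network lb rule create --resource-group <resource-group> --lb-name <ilb-name> --name SQLAlwaysOnRule --protocol tcp --frontend-port 1433 --backend-port 1433 --frontend-ip-name <frontend-ip-config> --backend-pool-name <backend-pool> --probe-name SQLAlwaysOnProbe --floating-ip true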

Before you move your SQL workloads from your on-premises datacenter to Microsoft Azure, have a look at whether the PaaS offering Azure SQL Managed Instance, with all its benefits, is something for your organization.

What is Microsoft Azure SQL Managed Instance?

Managed instance is a new deployment option of Azure SQL Database, providing near 100% compatibility with the latest SQL Server on-premises (Enterprise Edition) Database Engine, a native virtual network (VNet) implementation that addresses common security concerns, and a business model favorable for on-premises SQL Server customers. The managed instance deployment model allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, the managed instance deployment option preserves all PaaS capabilities (automatic patching and version updates, automated backups, high availability) that drastically reduce management overhead and TCO.

Read here more on Microsoft Docs about Azure SQL Services

Conclusion :

When you have a lot of SQL workloads and want to move to Microsoft Azure cloud services, analyze your existing workloads well and look first at Azure SQL Managed Instance. With this Azure PaaS service, you don't have to manage the complete infrastructure like you do with a SQL Always-On cluster (IaaS).

Have a good look at the requirements; the Azure Database Migration Service can help you out with the migration.

SQL Server instance migration to Azure SQL Database managed instance

 



#Microsoft SQL Server 2019 Preview Overview #SQL #SQL2019 #Linux #Containers #MSIgnite

Microsoft SQL Server 2019 Preview

What’s New in Microsoft SQL Server 2019 Preview

• Big Data Clusters
o Deploy a Big Data cluster with SQL and Spark Linux containers on Kubernetes
o Access your big data from HDFS
o Run Advanced analytics and machine learning with Spark
o Use Spark streaming to stream data to SQL data pools
o Use Azure Data Studio to run Query books that provide a notebook experience

• Database engine
o UTF-8 support
o Resumable online index create allows index create to resume after interruption
o Clustered columnstore online index build and rebuild
o Always Encrypted with secure enclaves
o Intelligent query processing
o Java language programmability extension
o SQL Graph features
o Database scoped configuration setting for online and resumable DDL operations
o Always On Availability Groups – secondary replica connection redirection
o Data discovery and classification – natively built into SQL Server
o Expanded support for persistent memory devices
o Support for columnstore statistics in DBCC CLONEDATABASE
o New options added to sp_estimate_data_compression_savings
o SQL Server Machine Learning Services failover clusters
o Lightweight query profiling infrastructure enabled by default
o New Polybase connectors
o New sys.dm_db_page_info system function returns page information

• SQL Server on Linux
o Replication support
o Support for the Microsoft Distributed Transaction Coordinator (MSDTC)
o Always On Availability Group on Docker containers with Kubernetes
o OpenLDAP support for third-party AD providers
o Machine Learning on Linux
o New container registry
o New RHEL-based container images
o Memory pressure notification

• Master Data Services
o Silverlight controls replaced

• Security
o Certificate management in SQL Server Configuration Manager

• Tools
o SQL Server Management Studio (SSMS) 18.0 (preview)
o Azure Data Studio

Introducing Microsoft SQL Server 2019 Big Data Clusters

SQL Server 2019 big data clusters make it easier for big data sets to be joined to the dimensional data typically stored in the enterprise relational database, enabling people and apps that use SQL Server to query big data more easily. The value of the big data greatly increases when it is not just in the hands of the data scientists and big data engineers but is also included in reports, dashboards, and applications. At the same time, the data scientists can continue to use big data ecosystem tools while also utilizing easy, real-time access to the high-value data in SQL Server because it is all part of one integrated, complete system.

Read the complete Awesome blogpost from Travis Wright about SQL Server 2019 Big Data Cluster here

Starting in SQL Server 2017 with support for Linux and containers, Microsoft has been on a journey of platform and operating system choice. With SQL Server 2019 preview, we are making it easier to adopt SQL Server in containers by enabling new HA scenarios and adding supported Red Hat Enterprise Linux container images. Today we are happy to announce the availability of SQL Server 2019 preview Linux-based container images on Microsoft Container Registry, Red Hat-Certified Container Images, and the SQL Server operator for Kubernetes, which makes it easy to deploy an Availability Group.

SQL Server 2019 preview containers now available
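To try the Linux-based image yourself on a machine with Docker, pulling and running it looks roughly like this (a sketch; the image tag changes per preview release, so check the Microsoft Container Registry for the current tag, and pick your own strong SA password):

docker pull mcr.microsoft.com/mssql/server:2019-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrongPassword>" -p 1433:1433 --name sql2019 -d mcr.microsoft.com/mssql/server:2019-latest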

Microsoft Azure Data Studio

Azure Data Studio is a new cross-platform desktop environment for data professionals using the family of on-premises and cloud data platforms on Windows, MacOS, and Linux. Previously released under the preview name SQL Operations Studio, Azure Data Studio offers a modern editor experience with lightning fast IntelliSense, code snippets, source control integration, and an integrated terminal. It is engineered with the data platform user in mind, with built-in charting of query resultsets and customizable dashboards.

Read the Complete Blogpost About Microsoft Azure Data Studio for SQL Server here

SQL Server 2019: Celebrating 25 years of SQL Server Database Engine and the path forward

Awesome work Microsoft SQL Team and Congrats on your 25th Anniversary !



Cluster Operating System Rolling Upgrade in Windows Server Technical Preview #Winserv #Hyperv

Cluster OS Rolling Upgrade

Cluster Operating System (OS) Rolling Upgrade is a new feature in Windows Server Technical Preview that enables an administrator to upgrade the operating system of the cluster nodes from Windows Server 2012 R2 to Windows Server Technical Preview without stopping the Hyper-V or the Scale-Out File Server workloads. Using this feature, the downtime penalties against Service Level Agreements (SLA) can be avoided.

Cluster OS Rolling Upgrade provides the following benefits:

  • Hyper-V virtual machine and Scale-out File Server workloads can be upgraded from Windows Server 2012 R2 to Windows Server Technical Preview without downtime. Other cluster workloads will be unavailable during the time it takes to failover to Windows Server Technical Preview.
  • It does not require any additional hardware.
  • The cluster does not need to be stopped or restarted.
  • A new cluster is not required. In addition, existing cluster objects stored in Active Directory are used.
  • The upgrade process is reversible until the customer crosses the “point-of-no-return”, when all cluster nodes are running Windows Server Technical Preview, and when the Update-ClusterFunctionalLevel PowerShell cmdlet is run.
  • The cluster can support patching and maintenance operations while running in the mixed-OS mode.
  • It supports automation via PowerShell and WMI.
  • The ClusterFunctionalLevel property indicates the state of the cluster on Windows Server Technical Preview cluster nodes (see the PowerShell sketch after this list).
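
Checking and then committing the cluster functional level from PowerShell is a short exercise (a sketch; run it on a cluster node and only run the update once every node is on the new OS):

# Show the current cluster functional level (it stays at the old level while in mixed-OS mode)
(Get-Cluster).ClusterFunctionalLevel

# The point of no return: raise the functional level once all nodes run the new OS
Update-ClusterFunctionalLevel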

This guide describes the various stages of the Cluster OS Rolling Upgrade process, installation steps, feature limitations and frequently asked questions (FAQs), and is applicable to the following Cluster OS Rolling Upgrade scenarios in Windows Server Technical Preview:

  • Hyper-V clusters
  • Scale-Out File Server clusters

The following scenarios are not supported in Windows Server Technical Preview:

  • Cluster OS Rolling Upgrade of a cluster using storage with the Data Deduplication feature
  • Cluster OS Rolling Upgrade of virtual machines with Data Protection Manager (DPM) backups
  • Cluster OS Rolling Upgrade of guest clusters using virtual hard disk (.vhdx file) as shared storage
Important: This preview release should not be used in production environments.

Read more about Cluster Operating System Rolling Upgrade in Windows Server Technical Preview here

Cluster OS Rolling Upgrade Process :

Cluster OS Rolling Upgrade Process



#Microsoft System Center Management Pack for Windows Server Cluster #SCOM #sysctr #Winserv #Cluster

MP Cluster

The Windows Server Failover Cluster Management Pack provides both proactive and reactive monitoring of your Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 failover cluster deployments. It monitors Cluster service components, such as nodes, networks, resources, and resource groups, to report issues that can cause downtime or poor performance.
The monitoring provided by this management pack includes availability and configuration monitoring. In addition to health monitoring capabilities, this management pack includes dashboard views, extensive knowledge with embedded inline tasks, and views that enable near real-time diagnosis and resolution of detected issues.
With this management pack, Information Technology (IT) administrators can automate one-to-many management of users and computers, simplifying administrative tasks and reducing IT costs. Administrators can efficiently implement security settings, enforce IT policies, and distribute software consistently across a given site, domain, or range of organizational units.

You can download the Microsoft System Center Management Pack for Windows Server Cluster with documentation here



Building High Performance Storage for #Hyperv Cluster on Scale-Out File Servers using #Violin Windows Flash Arrays

Hyperv Cluster

This white paper demonstrates the capabilities and performance for Violin Windows Flash Array (WFA), a next generation All-Flash Array storage platform. With the joint efforts of Microsoft and Violin Memory, WFA provides built-in high performance, availability and scalability by the tight integration of Violin’s All Flash Array and Microsoft Windows Server 2012 R2 Scale-Out File Server Cluster.

You can download this WhitePaper here



#SQL 2014 Guest Cluster with Shared VHDX on #Hyperv 2012 R2 Cluster for #WAPack

Our configuration and requirements before we begin the Guest SQL 2014 Cluster with Shared VHDX on a Hyper-V Cluster :

  1. Microsoft Hyper-V 2012 R2 Cluster is running with Cluster Shared Volumes.
  2. We made Two virtual Machines with Microsoft Windows Server 2012 R2 called SQL01 and SQL02
  3. Networking: two NICs, one on the production switch and one on the heartbeat switch.

This step shows how to create and then share a virtual hard disk that is in the .vhdx file format. Repeat this step for each shared .vhdx file that you want to add. For example, you may want to add one or more shared disks that will act as data disks, and a separate shared disk that you can designate as the disk witness for the guest failover cluster.

  1. In Failover Cluster Manager, expand the cluster name, and then click Roles.
  2. In the Roles pane, right-click the virtual machine on which you want to add a shared virtual hard disk, and then click Settings.
  3. In the virtual machine settings, under Hardware, click SCSI Controller.
  4. In the details pane, click Hard Drive, and then click Add.
  5. In the Hard Drive details pane, under Virtual hard disk, click New. The New Virtual Hard Disk Wizard opens.
  6. On the Before You Begin page, click Next.
  7. On the Choose Disk Format page, accept the default format of VHDX, and then click Next.
    Note: To share the virtual hard disk, the format must be .vhdx.
  8. On the Choose Disk Type page, select Fixed size or Dynamically expanding, and then click Next.
    Note: A differencing disk is not supported for a shared virtual hard disk.
  9. On the Specify Name and Location page, do the following:
    1. In the Name box, enter the name of the shared virtual hard disk.
    2. In the Location box, enter the path of the shared storage location. For Scenario 1, where the shared storage is a CSV disk, enter the path C:\ClusterStorage\VolumeX, where C:\ represents the system drive, and X represents the desired CSV volume number.

      For Scenario 2, where the shared storage is an SMB file share, specify the path:

      \\ServerName\ShareName, where ServerName represents the client access point for the Scale-Out File Server, and ShareName represents the name of the SMB file share.

    3. Click Next.
  10. On the Configure Disk page, accept the default option of Create a new blank virtual hard disk, specify the desired size, and then click Next.
  11. On the Completing the New Virtual Hard Disk Wizard page, review the configuration, and then click Finish.
    Important: If the virtual machine is running, do not click Apply in the virtual machine settings before you continue to the next procedure. If you do click Apply on a running virtual machine, you will need to stop the virtual machine or remove and then add the virtual hard disk without clicking Apply.
Advanced Features
  1. In the virtual machine settings, under SCSI Controller, expand the hard drive that you created in the previous procedure.
  2. Click Advanced Features.
  3. In the details pane, select the Enable virtual hard disk sharing check box.
    Note: If the check box appears dimmed and is unavailable, you can do either of the following:

    • Remove and then add the virtual hard disk to the running virtual machine. When you do, ensure that you do not click Apply when the New Virtual Hard Disk Wizard completes. Instead, immediately configure sharing in Advanced Features.
    • Stop the virtual machine, and then select the Enable virtual hard disk sharing check box.
  4. Click Apply, and then click OK.
  5. Add the virtual hard disk to each virtual machine that will use the shared .vhdx file. When you do, repeat this procedure to enable virtual hard disk sharing for each virtual machine that will use the disk.
Tip: To share a virtual hard disk by using Windows PowerShell, use the Add-VMHardDiskDrive cmdlet with the -ShareVirtualDisk parameter. You must run this command as an administrator on the Hyper-V host for each virtual machine that will use the shared .vhdx file. For example, the following command adds a shared virtual hard disk (Data1.vhdx) on volume 1 of CSV to a virtual machine that is named VM1:

Add-VMHardDiskDrive -VMName VM1 -Path C:\ClusterStorage\Volume1\Data1.vhdx -ShareVirtualDisk

The following command adds a shared virtual hard disk (Witness.vhdx) that is stored on an SMB file share (\\Server1\Share1) to a virtual machine that is named VM2:

Add-VMHardDiskDrive -VMName VM2 -Path \\Server1\Share1\Witness.vhdx -ShareVirtualDisk

For the second guest cluster node, you add the same VHDX files that you created on the first guest cluster node and enable sharing on them in the same way.

Failover Cluster Manager 2

So now we have a Shared Quorum Disk and a Shared Data Disk

The next steps :

  • Bring the disks online on the first guest node with Disk Management
  • Install the Failover Clustering feature on both nodes (see the PowerShell sketch after this list).
  • Make a two-node cluster.
  • Install a Microsoft SQL 2014 guest cluster.
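
A minimal PowerShell sketch for the Failover Clustering steps, using the node names SQL01 and SQL02 from above, the cluster name Cluster02 from the screenshots, and a placeholder IP address:

# Run on both guest nodes
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Validate and create the two-node guest cluster
Test-Cluster -Node SQL01, SQL02
New-Cluster -Name Cluster02 -Node SQL01, SQL02 -StaticAddress <cluster-ip-address>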

Disk Management: Shared VHDX disks are Online

Install Microsoft Failover Cluster.

Failover Cluster Manager 3: Cluster02 is Online with both Shared VHDX disks for the Guest SQL 2014 Cluster

Install Microsoft SQL 2014 Server for Clustering.

Failover Cluster Manager 3b

Microsoft SQL 2014 Server is installed on the Guest Cluster on top of a Hyper-V 2012 R2 Cluster

Failover Cluster Manager 4

Guest SQL 2014 Cluster is running on Hyper-V 2012 R2.

SQL2014 Studio

So now we are ready for installing Microsoft SPF and Windows Azure Pack for Windows Server 2012 R2 🙂



Add new Windows Server 2012 R2 Hyper-V Node to the Cluster with SC2012R2 VMM #SCVMM #Hyperv

Private Cloud Rack TestLAB

We have the following configuration :

  • Microsoft Operating System is Windows Server 2012 R2 in a Single forest.
  • Microsoft Windows Server 2012 R2 Hyper-V Cluster
  • Microsoft System Center 2012 R2 Virtual Machine Manager
  • Microsoft SQL 2012 SP1

Here you see the Step-by-Step guide to add a Hyper-v node to the Cluster with System Center 2012 R2 Virtual Machine Manager :

Add  Hyper-V node

Go to all Hosts and add the new Hyper-V node into SCVMM with this wizard.
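The same host add can be scripted with the VMM PowerShell module; a rough sketch, where the host name, host group and Run As account names are placeholders for your environment:

# Run from a machine with the VMM console / PowerShell module installed
$HostGroup = Get-SCVMHostGroup -Name "All Hosts"
$RunAsAccount = Get-SCRunAsAccount -Name "<vmm-admin-runas-account>"
Add-SCVMHost "<new-hyperv-node-fqdn>" -VMHostGroup $HostGroup -Credential $RunAsAccount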

When you see the new Hyper-V host in System Center 2012 R2 Virtual Machine Manager, you have to set up the host with all the settings from Virtual Machine Manager by opening the properties of the new host.

SCVMM Network 2

Here you set the right NICs to the Logical networks.

Here we set the following NICs :

  • Production network
  • Management network
  • Live Migration network
  • Heartbeat network

After that we set the high available virtual Switches for the new Hyper-V Node :

 

SCVMM Network 3a: Just set the right NIC and UP-Link port profile to the Virtual Switch.

Then you have to add the virtual NIC to the Virtual Switch :

SCVMM Network 4: Here you set the VM Network and static IP address.

For production we set two NICs into a Team for capacity :

SCVMM Team: Make the Team with the right NICs and UP-Link port profile.

When all of the properties are set, click on OK and the settings will be provisioned to the new Hyper-V node.

Hyperv settings to node

Before we add the new Hyper-V node to the Cluster, you can validate the Hyper-V Cluster to check that it is still good and healthy:

SCVMM Network 5: Validate the Hyper-V Cluster to see if there are any issues in the Cluster.

SCVMM Network 6: The Hyper-V Cluster is being validated for any issues.

When the running Hyper-V Cluster has been validated and is healthy, we can add the new Hyper-V node to the Cluster with System Center 2012 R2 Virtual Machine Manager:

SCVMM Network 7: Right click on the Cluster name and click on Add Cluster Node.

SCVMM Network 8: Here you see the host to add.

SCVMM Network 9: Click on Add and the Hyper-V node will be added to the Hyper-V Cluster.

SCVMM Network 11: Job Successful.

Node in Cluster: The new node is now running in the Hyper-V Cluster and provisioned by SCVMM.



From Microsoft Private #Cloud to Hybrid Cloud Services Part 1 of 5 #Hyperv Clustering

This is the first of five blog posts in the series “From Microsoft Private Cloud to Hybrid Cloud Services”.
Here you can find our on-premises hardware configuration and the start of this blog series.

PRIVATE CLOUD Infrastructure :

Before we start installing Microsoft Windows Server 2012 R2 on our hardware, it is important to check your vendor support for Windows Server 2012 R2.
For production, vendor support is very important! (Check this for your own hardware configuration first.)
Microsoft built a lot of drivers into the Windows Server 2012 R2 operating system. We use Dell PowerEdge servers and found the following about support:

Dell PowerEdge R710 Support

CHECK For Windows Server 2012 R2 Hyper-V Node

LIST Built in Drivers Windows Server 2012 R2

Here you can find more information on Dell Power Edge Servers

The hardware drivers for our Dell PowerEdge servers (R410, R610, R710, M620) are all in the box with Windows Server 2012 R2 🙂

We already had Microsoft Windows Server 2012 Datacenter edition installed in our forest called ICTCUMULUS, with the DELL management DVD for W2012.
Dell Support & Drivers for the Power Edge R710

We successfully upgraded all the Servers from Windows Server 2012 => Windows Server 2012 R2 and we raised domain and forest functional level to R2 :

Raise Forest Functional Level

Here you see the configuration step-by step guide for Microsoft Windows Server 2012 R2 Hyper-V Cluster :

Dell PowerEdge R710 Back Config network: DELL Power Edge R710 Hyper-V Node Back

We started with building the production Teams on each Hyper-V node with Microsoft Windows Server 2012 R2 NIC-Teaming :

Enable NIC Teaming

Enable NIC-Teaming on the local Server with Server Manager

NIC Teaming Productie Teaming Mode

NIC Teaming Productie LB mode

NIC Teaming Productie Standby Adapter

Our two 1 Gbit NICs are both active for extra capacity

Productie Team Status: On each Hyper-V node.

NIC Teaming Productie: NIC Productie Team Settings.

NIC Teaming Productie operationeel: Hyper-V Productie Team is Operational.

Network overzicht Hyper-V Node: We configured all four Hyper-V nodes the same:

  • NIC 1+2 Production Team
  • NIC 3 CSV for Storage (iSCSI Traffic)
  • NIC 4 Heartbeat network for Cluster

More information about Microsoft Windows Server 2012 R2 NIC Teaming can be found in this WhitePaper
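
The Productie team can also be created with PowerShell instead of Server Manager; a sketch, where the adapter names are placeholders and the teaming mode / load balancing algorithm are just one common combination (pick what matches your switches):

New-NetLbfoTeam -Name "Productie" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic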

Now that we have the network configured, we can attach the iSCSI LUNs to the servers from our Dell PowerVault MD3200i storage box:

Dell PowerVault 3200i Storage LUN Config

With the Dell Storage Manager we made a host group “Hyper-V_Cluster” with the four Hyper-V nodes.
There we provisioned 5 LUNs (4 x VDISK of 1 TB and 1 x VDISK for the quorum) for the Microsoft Windows Server 2012 R2 Hyper-V nodes.

ISCSI Initiator Icone

With iSCSI Initiator of Windows Server 2012 R2 we attach the storage which is provisioned with the Dell Storage Manager.

ISCSI Initiator Quick Connect

ISCSI Discovery Target Portal

ISCSI Volumes and Devices: Here you get your LUNs by clicking on Auto Configure.
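
The iSCSI Initiator side can be scripted as well; a sketch, where the portal address of the PowerVault is a placeholder:

# Make sure the iSCSI Initiator service is running and starts automatically
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

# Register the target portal and connect persistently to the discovered targets
New-IscsiTargetPortal -TargetPortalAddress <md3200i-portal-ip>
Get-IscsiTarget | ForEach-Object { Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true }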

Now that we have the storage presented to the Hyper-V host, we can configure the LUNs with Disk Management:

Disk management Config 1: Here we see the Disks in Disk Management.

Disk management Config 2

Disk management Config 3

Disk management Config 4

Disk management Config 5

Disk management Config 6

Disk management Config 7

Disk management Config 8

Disk management Config 9

Disk management Config 10

Do this with all the LUNs and give them the right volume label, for example CSV1, CSV2, CSV3, CSV4, and so on.
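
Per LUN this comes down to three PowerShell cmdlets; a sketch for the first data LUN, where the disk number is a placeholder taken from Get-Disk:

# Initialize the raw LUN, create a single partition and format it with the CSV1 label
Initialize-Disk -Number <disk-number> -PartitionStyle GPT
New-Partition -DiskNumber <disk-number> -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1" -Confirm:$false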

Disk management Config 11

Disk management Config 12: From Server Manager you can see all the Hyper-V Nodes.

Now that we have networking and storage ready to go, we can install the Hyper-V and Failover Clustering features of Windows Server 2012 R2.

Enabling features with the DISM command

Install the prerequisite .NET Framework 3.5 via the command prompt. Make sure you mount the Windows Server 2012 R2 DVD as D:.
Run CMD.exe as Administrator and type the following command: dism /online /enable-feature /all /featurename:NetFx3 /source:d:\sources\sxs

  1. Install the following features of Windows Server 2012 R2: Hyper-V and Failover Clustering (see the PowerShell sketch after this list).
  2. Update software via Windows Update.
  3. Open Failover Cluster Manager.
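
Step 1 can also be done from PowerShell on each node; a minimal sketch (this reboots the server straight away):

Install-WindowsFeature Hyper-V, Failover-Clustering -IncludeManagementTools -Restart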

 

FailOverCluster 1

 

FailOverCluster 2

FailOverCluster 3

FailOverCluster 4

FailOverCluster 5

FailOverCluster 6

FailOverCluster 7

FailOverCluster 8

FailOverCluster 9

In our report we had a message about Gateway :

FailOverCluster 10

FailOverCluster 11

FailOverCluster 12

Here we give the Cluster a name and an IP address.

FailOverCluster 13

FailOverCluster 14

FailOverCluster 15: After the Cluster is ready, we add the storage to Cluster Shared Volumes (CSV)

FailOverCluster 16 CSV

FailOverCluster 17
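
From PowerShell this boils down to adding the available disks to the cluster and promoting the data disks to CSV (a sketch; the cluster disk name is a placeholder, take it from Get-ClusterResource):

Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "<cluster-disk-name>"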

For Hyper-V we set the Storage path for the VM Configs and VHDx Disks in Hyper-V Settings for each host.

Virtual Hard Disks Cluster path
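
This can be scripted per host as well; a sketch with a placeholder CSV volume path:

Set-VMHost -VirtualMachinePath "C:\ClusterStorage\Volume1" -VirtualHardDiskPath "C:\ClusterStorage\Volume1\Virtual Hard Disks"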

With these steps we checked that our hardware has in-box drivers in Microsoft Windows Server 2012 R2, we provisioned iSCSI storage, and we built a Windows Server 2012 R2 Hyper-V Cluster.
My next blog is all about management with System Center 2012 R2 Virtual Machine Manager and App Controller.