Cloud and Datacenter Management Blog

Microsoft Hybrid Cloud blogsite about Management

#HyperV Network Virtualization technical details and Gateway Architecture #SCVMM #Cloud #SDN



In Hyper-V Network Virtualization (HNV), a customer is defined as the “owner” of a group of virtual machines that are deployed in a datacenter. A customer can be a corporation or enterprise in a multitenant public datacenter, or a division or business unit within a private datacenter. Each customer can have one or more VM networks in the datacenter, and each VM network consists of one or more virtual subnets.


Figure 1. Generic Routing Encapsulation

This network virtualization mechanism uses Network Virtualization using Generic Routing Encapsulation (NVGRE) as part of the tunnel header. In NVGRE, the virtual machine’s packet is encapsulated inside another packet. The header of this new packet has the appropriate source and destination PA IP addresses, in addition to the Virtual Subnet ID, which is stored in the Key field of the GRE header.
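Based on the header layout described above (and in the NVGRE Internet Draft), the encapsulation header can be sketched in a few lines of Python. The function name and the example VSID are illustrative, not part of any Microsoft API; the flag and protocol-type values come from the draft:

```python
import struct

def build_nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header used by NVGRE.

    Flags word 0x2000 sets the Key-present bit (version 0);
    protocol type 0x6558 is Transparent Ethernet Bridging, i.e.
    the payload is the tenant VM's inner Ethernet frame.
    The GRE Key field carries the 24-bit Virtual Subnet ID in its
    upper bits plus an 8-bit flow ID in the lower bits.
    """
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID must fit in 24 bits")
    flags_and_version = 0x2000   # K bit set, GRE version 0
    protocol_type = 0x6558       # Transparent Ethernet Bridging
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags_and_version, protocol_type, key)

hdr = build_nvgre_header(vsid=5001)
assert len(hdr) == 8
# The receiving host recovers the VSID from the upper 24 bits of the Key:
key = struct.unpack("!HHI", hdr)[2]
assert key >> 8 == 5001
```

The outer packet then gets regular PA-space source and destination IP addresses, which is why, as noted below, existing switches and routers forward it without modification.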

The Virtual Subnet ID allows hosts to identify the customer virtual machine for any given packet, even though the PAs and the CAs on the packets may overlap. This allows all virtual machines on the same host to share a single PA, as shown in Figure 1.

Sharing the PA has a big impact on network scalability. The number of IP and MAC addresses that need to be learned by the network infrastructure can be substantially reduced. For instance, if every end host has an average of 30 virtual machines, the number of IP and MAC addresses that need to be learned by the networking infrastructure is reduced by a factor of 30. The embedded Virtual Subnet IDs in the packets also enable easy correlation of packets to the actual customers.
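The factor-of-30 claim is simple arithmetic, and a back-of-the-envelope sketch makes it concrete (the host count is an arbitrary example):

```python
# Without PA sharing, the fabric learns one address per VM;
# with HNV, each host exposes a single shared PA instead.
hosts = 1000
vms_per_host = 30

learned_without_sharing = hosts * vms_per_host  # one entry per VM
learned_with_sharing = hosts                    # one shared PA per host
reduction = learned_without_sharing // learned_with_sharing

print(reduction)  # prints 30 - the factor mentioned above
```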

With Windows Server 2012 and later, HNV fully supports NVGRE out of the box; it does NOT require upgrading or purchasing new network hardware such as NICs (Network Adapters), switches, or routers. This is because the NVGRE packet on the wire is a regular IP packet in the PA space, which is compatible with today’s network infrastructure.

With Windows Server 2012, Microsoft made working with standards a high priority. Along with key industry partners (Arista, Broadcom, Dell, Emulex, Hewlett Packard, and Intel), Microsoft published a draft RFC that describes the use of Generic Routing Encapsulation (GRE), an existing IETF standard, as an encapsulation protocol for network virtualization. For more information, see the following Internet Draft: Network Virtualization using Generic Routing Encapsulation. As NVGRE-aware hardware becomes commercially available, the benefits of NVGRE will become even greater.

Here you can read more on Microsoft TechNet about Hyper-V Network Virtualization technologies.

Hyper-V Network Virtualization Gateway Architectural Guide:


Figure 2. System Center 2012 R2 Virtual Machine Manager

In the VMM model, the Hyper-V Network Virtualization gateway is managed via a PowerShell plug-in module. Partners building Hyper-V Network Virtualization gateways need to create a PowerShell plug-in module that physically runs on the VMM server and communicates policy to the gateway. Figure 2 shows a block diagram of VMM managing a Hyper-V Network Virtualization deployment. Note that the partner plug-in runs inside the VMM server and communicates with the gateway appliances; the protocol used for this communication is not specified here, so the partner may determine the appropriate protocol. Note that VMM uses the Microsoft implementation of the WS-Management protocol, called Windows Remote Management (WinRM), together with Windows Management Instrumentation (WMI), to manage the Windows Server 2012 hosts and update network virtualization policies.
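The network virtualization policy that VMM pushes to hosts and gateways is essentially a set of per-VSID mappings from customer addresses (CA) to provider addresses (PA). A hypothetical, much-simplified model of that lookup table can be sketched as follows (class and method names are illustrative, not the actual VMM or WNV API):

```python
from dataclasses import dataclass, field

@dataclass
class VirtualizationPolicy:
    """Toy model of HNV lookup policy: VSID -> {CA -> PA}."""
    lookup: dict = field(default_factory=dict)

    def add_mapping(self, vsid: int, ca: str, pa: str) -> None:
        # Record that, within this virtual subnet, the VM with
        # customer address `ca` lives on the host with provider
        # address `pa`.
        self.lookup.setdefault(vsid, {})[ca] = pa

    def resolve(self, vsid: int, ca: str) -> str:
        """Which PA should an NVGRE packet for this CA be tunneled to?"""
        return self.lookup[vsid][ca]

policy = VirtualizationPolicy()
# Two tenants reuse the same CA space; the VSID keeps them apart.
policy.add_mapping(vsid=5001, ca="10.0.0.5", pa="192.168.1.10")
policy.add_mapping(vsid=6001, ca="10.0.0.5", pa="192.168.1.11")
assert policy.resolve(5001, "10.0.0.5") != policy.resolve(6001, "10.0.0.5")
```

The key point the sketch illustrates is that the VSID, not the CA alone, disambiguates overlapping tenant address spaces, which is why the plug-in must keep gateway policy in sync with the hosts.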

Cross Premises Gateway

Figure 3. Cross Premises Gateway

The hybrid cloud scenario enables an enterprise to seamlessly expand its on-premises datacenter into the cloud. This requires a site-to-site (S2S) VPN tunnel, which can be accomplished with Windows Server 2012 as the host platform and a per-tenant Windows Server 2012 guest virtual machine running an S2S VPN tunnel connecting the cloud datacenter with the various on-premises datacenters. Windows Server 2012 S2S VPN supports IKEv2, and configuration of remote policy can be accomplished via PowerShell/WMI. In addition, Windows Server 2012 guest virtual machines support new network interface offload capabilities that enhance the performance and scalability of the gateway appliance. These offload capabilities are discussed in the Hardware Considerations section of the architectural guide.

Figure 3 shows a scenario where Red Corp and Blue Corp are customers of Hoster Cloud. Hoster Cloud has deployed Windows Server 2012 based per-tenant virtual machine gateways, allowing Red Corp and Blue Corp to seamlessly extend their on-premises datacenters into the cloud. There is no requirement that Red Corp or Blue Corp run Windows Server 2012 S2S VPN; the customer’s on-premises S2S VPN endpoint only needs to support IKEv2 to interoperate with the corresponding Windows Server 2012 S2S virtual machines running on HostGW.

Figure 4 shows the internal architecture of HostGW. Each routing domain requires its own virtual machine. The technical reason for this is that a vmNIC can only be associated with a single Virtual Subnet ID (VSID), and a VSID can only be part of a single routing domain; the VSID switch port ACL does not support trunking of VSIDs. Therefore, the simplest way to provide isolation is with a per-tenant (per routing domain) gateway virtual machine.

Each of these virtual machines is dual-homed, meaning it has two virtual network interfaces. One virtual network interface has the appropriate VSID associated with it. The other has a VSID of 0, which means its traffic is not modified by the WNV filter. The Windows Server 2012 virtual machine runs RRAS and uses IKEv2 to create a secure tunnel between Hoster Cloud and the customer’s on-premises gateway.


Figure 4. Hybrid Cloud with Windows Server 2012 Based Per-Tenant VM Gateways

Figure 2 shows the architecture where VMM is managing a Hyper-V Network Virtualization deployment via a partner plug-in that runs in the VMM server. When using Windows Server 2012 as a Hyper-V Network Virtualization gateway appliance, a local management process running in Windows is required as the endpoint for this communication from the plug-in running in the VMM server. This is how the plug-in is able to communicate network virtualization policy to the WNV filter running on HostGW.

You can read more on Microsoft TechNet about the Hyper-V Network Virtualization Gateway architecture here.

Author: James van den Berg

I'm a Microsoft Architect and ICT Specialist, and a Microsoft MVP for Cloud and Datacenter Management.
