mountainss Cloud and Datacenter Management Blog

Microsoft System Center blog about virtualization on-premises and in the cloud

Deploy highly scalable tenant network infrastructure for hosting providers #WAPack #SCVMM #Hyperv #CloudOS



Hosting Provider Cloud

The following diagram shows the recommended design for this solution, which connects each tenant’s network to the hosting provider’s multi-tenant gateway using a single site-to-site VPN tunnel. This enables the hosting provider to support approximately 100 tenants on a single gateway cluster, which reduces both management complexity and cost. Each tenant must configure their own gateway to connect to the hosting provider gateway. The gateway then routes each tenant’s network data and uses the “Network Virtualization using Generic Routing Encapsulation” (NVGRE) protocol for network virtualization.

Solution design element Why is it included in this solution?
Windows Server 2012 R2 Provides the operating system base for this solution. We recommend using the Server Core installation option to reduce the attack surface and the frequency of software updates.
Windows Server 2012 R2 Gateway Is integrated with Virtual Machine Manager to support simultaneous, multi-tenant site-to-site VPN connections and network virtualization using NVGRE. For an overview of this technology, see Windows Server Gateway.
Microsoft SQL Server 2012 Provides database services for Virtual Machine Manager and Windows Azure Pack.
System Center 2012 R2 Virtual Machine Manager Manages virtual networks (using NVGRE for network isolation), fabric management, and IP addressing. For an overview of this product, see Configuring Networking in VMM Overview.
Windows Server Failover Clustering All the physical hosts are configured as failover clusters for high availability, as are many of the virtual machine guests that host management and infrastructure workloads.

The site-to-site VPN gateway can be deployed in a 1+1 configuration for high availability. For more information about Failover Clustering, see Failover Clustering overview.

Scale-out File Server Provides file shares for server application data with reliability, availability, manageability, and high performance. This solution uses two scale-out file servers: one for the domain that hosts the management servers and one for the domain that hosts the gateway servers. These two domains have no trust relationship, and a scale-out file server cannot be accessed from an untrusted domain, so the gateway domain needs its own scale-out file server, which is implemented as a virtual machine guest cluster. (A short configuration sketch follows this table.)

For an overview of this feature, see Scale-Out File Server for application data overview.

For a more in-depth discussion of possible storage solutions, see Provide cost-effective storage for Hyper-V workloads by using Windows Server.

Site-to-site VPN Provides a way to connect a tenant site to the hosting provider site. This connection method is cost-effective, and the VPN software is included with the Remote Access role in Windows Server 2012 R2 (Remote Access brings together the Routing and Remote Access service (RRAS) and DirectAccess). VPN software and hardware are also available from multiple suppliers.
Windows Azure Pack Provides a self-service portal for tenants to manage their own virtual networks. Windows Azure Pack provides a common self-service experience, a common set of management APIs, and an identical website and virtual machine hosting experience. Tenants can take advantage of the common interfaces, such as Service Provider Foundation, which frees them to move their workloads where it makes the most sense for their business or for their changing requirements. Though Windows Azure Pack is used for the self-service portal in this solution, you can use a different self-service portal if you choose.

For an overview of this product, see Windows Azure Pack for Windows Server.

System Center 2012 R2 Orchestrator Provides Service Provider Foundation (SPF), which exposes an extensible OData web service that interacts with VMM. This enables service providers to design and implement multi-tenant self-service portals that integrate IaaS capabilities that are available on System Center 2012 R2.
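The scale-out file server role itself is configured with the Failover Clustering cmdlets. The following is a minimal sketch of publishing an application-data share on the storage cluster; the cluster, role, share, path, and computer account names are placeholder assumptions, not values from this design.

```powershell
# Minimal sketch: add the Scale-Out File Server role to an existing storage cluster and
# publish a continuously available share for Hyper-V workloads.
# The cluster, role, path, share, and computer account names are placeholders.
Import-Module FailoverClusters

Add-ClusterScaleOutFileServerRole -Name "SOFS01" -Cluster "StorageCluster01"

# Create the application-data share on a Cluster Shared Volume and grant the
# Hyper-V host computer accounts full control.
New-SmbShare -Name "VMStore01" -Path "C:\ClusterStorage\Volume1\Shares\VMStore01" `
    -ContinuouslyAvailable $true `
    -FullAccess "CONTOSO\HyperVHost01$", "CONTOSO\HyperVHost02$"
```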

Windows Server 2012 R2 and System Center 2012 R2 Virtual Machine Manager (VMM) together give hosting providers a multi-tenant gateway solution that supports multiple site-to-site VPN tenant connections, Internet access for tenant virtual machines by using a gateway NAT feature, and forwarding gateway capabilities for private cloud implementations. Hyper-V Network Virtualization provides tenant virtual network isolation with NVGRE, which allows tenants to bring their own address space and gives hosting providers better scalability than is possible using VLANs for isolation.

The components of the design are placed on separate servers because each has unique scaling, manageability, and security requirements.

For more information about the advantages of Hyper-V Network Virtualization (HNV) and Windows Server Gateway, see the Hyper-V Network Virtualization and Windows Server Gateway documentation.

VMM provides a user interface to manage the gateways, virtual networks, virtual machines, and other fabric items.
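The same operations can also be scripted with the VMM PowerShell module. As a hedged illustration, the following sketch creates an NVGRE-isolated VM network for a single tenant; the VMM server name, logical network name, tenant name, and subnet are assumptions for the example.

```powershell
# Minimal sketch: create an NVGRE-isolated VM network for one tenant with the VMM cmdlets.
# It assumes a logical network named "Network Virtualization" already exists with network
# virtualization enabled; all names and the subnet are placeholders.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "VMM01.contoso.com" | Out-Null

$logicalNetwork = Get-SCLogicalNetwork -Name "Network Virtualization"

# Tenant VM network isolated with Hyper-V Network Virtualization (NVGRE)
$vmNetwork = New-SCVMNetwork -Name "Tenant01_Network" -LogicalNetwork $logicalNetwork `
    -IsolationType "WindowsNetworkVirtualization"

# The tenant brings their own address space; define it as a VM subnet in the VM network
$subnet = New-SCSubnetVLan -Subnet "192.168.0.0/24"
New-SCVMSubnet -Name "Tenant01_Subnet" -VMNetwork $vmNetwork -SubnetVLan $subnet
```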

When planning this solution, you need to consider the following:

  • High availability design for the servers running Hyper-V, guest virtual machines, SQL Server, gateways, VMM, and other services. You’ll want to ensure that your design is fault tolerant and capable of supporting your stated availability terms.
  • Tenant virtual machine Internet access requirements. Consider whether your tenants want their virtual machines to have Internet access. If so, you will need to configure the NAT feature when you deploy the gateway.
  • Infrastructure physical hardware capacity and throughput. You’ll need to ensure that your physical network has the capacity to scale out as your IaaS offering expands.
  • Site-to-site connection throughput. You’ll need to investigate the throughput you can provide your tenants and whether site-to-site VPN connections will be sufficient.
  • Network isolation technologies. This solution uses NVGRE for tenant network isolation. You’ll want to investigate whether you have, or can obtain, hardware that can offload or optimize this protocol (for example, network interface cards and switches).
  • Authentication mechanisms. This solution uses two Active Directory domains for authentication: one for the infrastructure servers and one for the gateway cluster and the scale-out file server for the gateway. If you don’t have an Active Directory domain available for the infrastructure, you’ll need to prepare a domain controller before you start deployment.
  • IP addressing. You’ll need to plan for the IP address spaces used by this solution.

 

Important
If you use jumbo frames in your network environment, you may need to plan for some configuration adjustments before you deploy. For more information, see Windows Server 2012 R2 Network Virtualization (NVGRE) MTU reduction.
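As a rough sketch of that kind of adjustment, the following assumes the NVGRE traffic flows over dedicated physical adapters and raises their jumbo packet setting; the adapter names and the 9014-byte value are placeholders, and supported values depend on your NIC vendor.

```powershell
# Minimal sketch: raise the jumbo packet setting on the physical adapters that carry
# NVGRE traffic so the encapsulation overhead does not eat into the tenant MTU.
# Adapter names and the 9014-byte value are placeholders; supported values vary by NIC vendor.
Get-NetAdapterAdvancedProperty -Name "NV-NIC1", "NV-NIC2" -RegistryKeyword "*JumboPacket"

Set-NetAdapterAdvancedProperty -Name "NV-NIC1", "NV-NIC2" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014
```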

Determine your tenant requirements

To help with capacity planning, you need to determine your tenant requirements. These requirements will then impact the resources that you need to have available for your tenant workloads. For example, you might need more Hyper-V hosts with more RAM and storage, or you might need faster LAN and WAN infrastructure to support the network traffic that your tenant workloads generate.

Use the following questions to help you plan for your tenant requirements.

Design consideration Design effect
How many tenants do you expect to host, and how fast do you expect that number to grow? Determines how many Hyper-V hosts you’ll need to support your tenant workloads.

Using Hyper-V Resource Metering can help you track historical data on virtual machine use and gain insight into the resource use of specific servers (a short example follows this table). For more information, see Introduction to Resource Metering on the Microsoft Virtualization Blog.

What kind of workloads do you expect your tenants to move to your network? Can determine the amount of RAM, storage, and network throughput (LAN and WAN) that you make available to your tenants.
What is your failover agreement with your tenants? Affects your cluster configuration and other failover technologies that you deploy.

For more information about physical compute planning considerations, see section “3.1.6 Physical compute resource: hypervisor” in the Design options guide in Cloud Infrastructure Solution for Enterprise IT.
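The Hyper-V Resource Metering feature mentioned in the table above can be enabled per virtual machine with two cmdlets. A minimal sketch, assuming a placeholder tenant VM name:

```powershell
# Minimal sketch: enable Hyper-V Resource Metering for a tenant virtual machine and read
# back the accumulated usage data. The VM name is a placeholder.
Enable-VMResourceMetering -VMName "Tenant01-VM01"

# Collect the metered data later, for example as part of a chargeback report
Measure-VM -VMName "Tenant01-VM01" |
    Select-Object VMName, AvgCPU, AvgRAM, TotalDisk
```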

Determine your failover cluster strategy

Plan your failover cluster strategy based on your tenant requirements and your own risk tolerance. For example, the minimum we recommend is to deploy the management, compute, and gateway hosts as two-node clusters. You can choose to add more nodes to your clusters, and you can guest cluster the virtual machines running SQL, Virtual Machine Manager, Windows Azure Pack, and so on.

For this solution, you configure the scale-out file servers, compute Hyper-V hosts, management Hyper-V hosts, and gateway Hyper-V hosts as failover clusters. You also configure the SQL, Virtual Machine Manager, and gateway guest virtual machines as failover clusters. This configuration provides protection from potential physical computer and virtual machine failure.
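Below is a minimal sketch of validating and creating one of these two-node host clusters with the Failover Clustering cmdlets; the node names, cluster name, and static address are placeholders.

```powershell
# Minimal sketch: validate and create one of the two-node Hyper-V host clusters.
# Node names, the cluster name, and the static cluster address are placeholders.
Import-Module FailoverClusters

# Run validation first and review the report before creating the cluster
Test-Cluster -Node "GWHost01", "GWHost02"

# Create the two-node gateway host cluster on the management/infrastructure network
New-Cluster -Name "GatewayHostCluster" -Node "GWHost01", "GWHost02" `
    -StaticAddress 172.16.1.50
```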

Design consideration Design effect
What is your risk tolerance for unavailability of applications and services? Add nodes to your failover clusters to increase the availability of applications and services.

Determine your SQL high availability strategy

You’ll need to choose a SQL Server high availability option for this solution. SQL Server 2012 offers several options:

  • AlwaysOn Failover Cluster Instances. This option provides local high availability through redundancy at the server-instance level (a failover cluster instance).
  • AlwaysOn Availability Groups. This option enables you to maximize availability for one or more user databases.

For more information, see Overview of SQL Server High-Availability Solutions.

For the SQL high availability option for this solution, we recommend AlwaysOn Failover Cluster Instances. With this design, all the cluster nodes are located in the same network, and shared storage is available, which makes it possible to deploy a more reliable and stable failover cluster instance. If shared storage is not available and your nodes span different networks, AlwaysOn Availability Groups might be a better solution for you.

Determine your gateway requirements

You need to plan how many gateway guest clusters are required. The number you need to deploy depends on the number of tenants that you need to support. The hardware requirements for your gateway Hyper-V hosts also depend on the number of tenants that you need to support and on the tenant workload requirements.

For Windows Server Gateway configuration recommendations, see Windows Server Gateway Hardware and Configuration Requirements.

For capacity planning purposes, we recommend one gateway guest cluster per 100 tenants.
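As a simple worked example of that rule, assuming an illustrative expectation of 250 tenants:

```powershell
# Worked example of the capacity rule above, using an illustrative 250-tenant estimate.
$expectedTenants   = 250
$tenantsPerCluster = 100

$gatewayGuestClusters = [math]::Ceiling($expectedTenants / $tenantsPerCluster)   # = 3

"Plan for $gatewayGuestClusters gateway guest cluster(s) for $expectedTenants tenants"
```

How much gateway Hyper-V host capacity those guest clusters need still depends on the tenant workloads; see the hardware and configuration requirements link above.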

The design for this solution is for tenants to connect to the gateway through a site-to-site VPN. Therefore, we recommend deploying Windows Server Gateway configured for site-to-site VPN. You can configure a two-node Hyper-V host failover cluster with a two-node guest failover cluster by using the predefined service templates available on the Microsoft Download Center (for more information, see How to Use a Server Running Windows Server 2012 R2 as a Gateway with VMM).
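For the tenant side of such a connection, when the tenant’s own gateway is a Windows Server 2012 R2 RRAS server, the configuration could look roughly like the following sketch; every name, address, route, and the pre-shared key are placeholder assumptions.

```powershell
# Minimal sketch: a tenant-side Windows Server 2012 R2 RRAS gateway dialing the hosting
# provider's multi-tenant gateway over a site-to-site IKEv2 VPN.
# All names, addresses, routes, and the pre-shared key are placeholder values.
Install-WindowsFeature RemoteAccess -IncludeManagementTools
Install-RemoteAccess -VpnType VpnS2S

Add-VpnS2SInterface -Name "ToHostingProvider" `
    -Protocol IKEv2 `
    -Destination "131.107.0.10" `
    -AuthenticationMethod PSKOnly `
    -SharedSecret "ReplaceWithAStrongSharedSecret" `
    -IPv4Subnet @("192.168.0.0/24:100")   # route to the tenant's hosted VM network, metric 100
```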

Design consideration Design effect
How will your tenants connect to your network?
  • If tenants connect through a site-to-site VPN, you can use Windows Server Gateway as your VPN termination point and gateway to the virtual networks. This is the configuration that is covered by this planning and design guide.
  • If you use a non-Microsoft VPN device to terminate the VPN, you can use Windows Server Gateway as a forwarding gateway to the tenant virtual networks.
  • If a tenant connects to your service provider network through a packet-switched network, you can use Windows Server Gateway as a forwarding gateway to connect them to their virtual networks.
Important
You must deploy a separate forwarding gateway for each tenant that requires a forwarding gateway to connect to their virtual network.

Plan your network infrastructure

For this solution, you use Virtual Machine Manager to define logical networks, VM networks, port profiles, logical switches, and gateways to organize and simplify network assignments. Before you create these objects, you need to have your logical and physical network infrastructure plan in place.

In this step, we provide planning examples to help you create your network infrastructure plan.

The diagram shows the networking design that we recommend for each of the physical nodes in the management, compute, and gateway clusters.

(Diagram: network configuration for a cluster node)

You need to plan several subnets and VLANs for the different types of traffic that are generated, such as management/infrastructure, network virtualization, external (outbound, Internet-facing), clustering, storage, and live migration. You can use VLANs to isolate the network traffic at the switch.

For example, this design recommends the networks listed in the following table. Your exact line speeds, addresses, VLANs, and so on may differ based on your particular environment.

Subnet/VLAN plan

Line speed (Gb/s) Purpose Address VLAN Comments
1 Management/Infrastructure 172.16.1.0/23 2040 Network for management and infrastructure. Addresses can be static or dynamic and are configured in Windows.
10 Network Virtualization 10.0.0.0/24 2044 Network for the VM network traffic. Addresses must be static and are configured in Virtual Machine Manager.
10 External 131.107.0.0/24 2042 External, Internet-facing network. Addresses must be static and are configured in Virtual Machine Manager.
1 Clustering 10.0.1.0/24 2043 Used for cluster communication. Addresses can be static or dynamic and are configured in Windows.
10 Storage 10.20.31.0/24 2041 Used for storage traffic. Addresses can be static or dynamic and are configured in Windows.
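In VMM, the Network Virtualization row of this table translates into a logical network, a network site, and a static IP address pool. A minimal sketch, assuming a host group named Rack01 and an illustrative pool range:

```powershell
# Minimal sketch: define the "Network Virtualization" logical network from the table above,
# its network site (10.0.0.0/24, VLAN 2044), and a static provider address (PA) pool in VMM.
# The host group name "Rack01" and the pool range are illustrative assumptions.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "VMM01.contoso.com" | Out-Null

$hostGroup = Get-SCVMHostGroup -Name "Rack01"

# Logical network with Hyper-V Network Virtualization (NVGRE) enabled
$logicalNetwork = New-SCLogicalNetwork -Name "Network Virtualization" `
    -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true -UseGRE $true

# Network site that associates the subnet and VLAN with the Rack01 host group
$subnetVlan  = New-SCSubnetVLan -Subnet "10.0.0.0/24" -VLanID 2044
$networkSite = New-SCLogicalNetworkDefinition -Name "Network Virtualization_0" `
    -LogicalNetwork $logicalNetwork -VMHostGroup $hostGroup -SubnetVLan $subnetVlan

# Static IP address pool; these addresses are assigned by VMM, as noted in the table
New-SCStaticIPAddressPool -Name "NV_PA_Pool" -LogicalNetworkDefinition $networkSite `
    -Subnet "10.0.0.0/24" -IPAddressRangeStart "10.0.0.4" -IPAddressRangeEnd "10.0.0.254"
```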

VMM VM network plan

This design uses the VM networks listed in the following table. Your VM networks may differ based on your particular needs.

Name IP pool address range Notes
External None
Live migration 10.0.3.1 – 10.0.3.254
Management None
Storage 10.20.31.1 – 10.20.31.254

After you install Virtual Machine Manager, you can create a logical switch and uplink port profiles. You then configure the hosts on your network to use a logical switch, together with virtual network adapters attached to the switch. For more information about logical switches and uplink port profiles, see Configuring Ports and Switches for VM Networks in VMM.

This design uses the following uplink port profiles, as defined in VMM (a sketch of creating one of these profiles follows the table):

VMM uplink port profile plan

Name General property Network configuration
Rack01_Gateway
  • Load Balancing Algorithm: Host Default
  • Teaming mode: LACP
Network sites:

  • Rack01_External, Logical Network: External
  • Rack01_LiveMigration, Logical Network: Host Networks
  • Rack01_Storage, Logical Network: Host Networks
  • Rack01_Infrastructure, Logical Network: Infrastructure
  • Network Virtualization_0, Logical Network: Network Virtualization
Rack01_Compute
  • Load Balancing Algorithm: Host Default
  • Teaming mode: LACP
Network sites:

  • Rack01_External, Logical Network: External
  • Rack01_LiveMigration, Logical Network: Host Networks
  • Rack01_Storage, Logical Network: Host Networks
  • Rack01_Infrastructure, Logical Network: Infrastructure
  • Network Virtualization_0, Logical Network: Network Virtualization
Rack01_Infrastructure
  • Load Balancing Algorithm: Host Default
  • Teaming mode: LACP
Network sites:

  • Rack01_LiveMigration, Logical Network: Host Networks
  • Rack01_Storage, Logical Network: Host Networks
  • Rack01_Infrastructure, Logical Network: Infrastructure

This design deploys the following logical switch using these uplink port profiles, as defined in VMM:

VMM logical switch plan

Name: VMSwitch
Extension: Microsoft Windows Filtering Platform
Uplink port profiles:
  • Rack01_Compute
  • Rack01_Gateway
  • Rack01_Infrastructure
Virtual ports:
  • High bandwidth
  • Infrastructure
  • Live migration workload
  • Low bandwidth
  • Medium bandwidth

The design isolates the heaviest traffic loads on the fastest network links. For example, the storage network traffic is isolated from the network virtualization traffic on separate fast links. If you must use slower network links for some of the heavy traffic loads, you could use NIC teaming.
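A minimal sketch of such a team, assuming two spare 1 Gb/s adapters are dedicated to storage traffic; the adapter and team names are placeholders, and LACP requires matching switch-side configuration.

```powershell
# Minimal sketch: team two spare 1 Gb/s adapters with LACP for one of the heavier traffic
# loads when a dedicated 10 Gb/s link is not available. Adapter and team names are
# placeholders, and LACP requires matching configuration on the physical switch.
New-NetLbfoTeam -Name "StorageTeam" -TeamMembers "NIC3", "NIC4" `
    -TeamingMode LACP -LoadBalancingAlgorithm Dynamic
```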

Important
If you use jumbo frames in your network environment, you may need to make some configuration adjustments when you deploy. For more information, see Windows Server 2012 R2 Network Virtualization (NVGRE) MTU reduction.

Plan your Windows Azure Pack deployment

If you use Windows Azure Pack for your tenant self-service portal, there are numerous options you can configure to offer your tenants. This solution includes some of the VM Cloud features, but there are many more options available to you—not only with VM Clouds, but also with Web Site Clouds, Service Bus Clouds, SQL Servers, MySQL Servers, and more. For more information about Windows Azure Pack features, see Windows Azure Pack for Windows Server.

After reviewing the Windows Azure Pack documentation, determine which services you want to deploy. Because this solution uses Windows Azure Pack only as an optional component, it utilizes just some of the VM Clouds features, using an Express deployment with all the Windows Azure Pack components installed on a single virtual machine. If you use Windows Azure Pack as your production portal, however, you should use a distributed deployment and plan for the additional resources required.

To determine your host requirements for a production distributed deployment, see Windows Azure Pack architecture.

Use a distributed deployment if you decide to deploy Windows Azure Pack in production. If you want to evaluate Windows Azure Pack features before deploying in production, use the Express deployment. For this solution, you use the Express deployment to demonstrate the VM Clouds service. You deploy Windows Azure Pack on a single virtual machine located on the compute cluster so that the web portals can be accessed from the external (Internet) network. Then you deploy Service Provider Foundation on a virtual machine located on the management cluster.

(Diagram: physical host clusters in this solution)

 

The following table shows the physical hosts that we recommend for this solution. The number of nodes was chosen to represent the minimum needed to provide high availability; you can add more physical hosts to further distribute the workloads to meet your specific requirements. Each host has four physical network adapters to support the networking isolation requirements of the design. We recommend that you use a 10 Gb/s or faster network infrastructure; 1 Gb/s might be adequate for infrastructure and cluster traffic.

Physical host recommendation

Physical hosts Role in solution Virtual machine roles
2 hosts configured as a failover cluster Management/infrastructure cluster:

Provides Hyper-V hosts for management/infrastructure workloads (VMM, SQL, Service Provider Foundation, guest clustered scale-out file server for gateway domain, domain controller).

  • Guest clustered SQL
  • Guest clustered VMM
  • Guest clustered scale-out file server for gateway domain
  • Service Provider Foundation endpoint
2 hosts configured as a failover cluster Compute cluster:

Provides Hyper-V hosts for tenant workloads and Windows Azure Pack for Windows Server.

  • Tenant
  • Windows Azure Pack portal accessible from public networks
2 hosts configured as a failover cluster Storage cluster:

Provides scale-out file server for management and infrastructure cluster storage.

None (this cluster just hosts file shares)
2 hosts configured as a failover cluster Windows Server gateway cluster:

Provides Hyper-V hosts for the gateway virtual machines.

For gateway physical host and gateway virtual machine configuration recommendations, see Windows Server Gateway Hardware and Configuration Requirements.

Guest clustered gateway

Here you can read the steps to implement the solution and the whole Microsoft article.

See also :

Content type References
Product evaluation/getting started Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM
Planning and design Hybrid Cloud Multi-Tenant Networking Planning and Design Guide

Microsoft System Center: Building a Virtualized Network Solution


 

 


Author: James van den Berg

I'm a Microsoft Architect and ICT Specialist, and a Microsoft MVP for System Center Cloud and Datacenter Management.
