Add cloud to your datacenter with an easy-to-deploy integrated system
The Cloud Platform System (CPS) portfolio of integrated systems provides an Azure-consistent cloud-in-a-box for your virtualized Windows and Linux workloads, accelerating your journey to the cloud with a factory-integrated solution. CPS combines Microsoft’s proven software stack of Windows Server 2012 R2, System Center 2012 R2, and Windows Azure Pack with server, storage, and networking hardware from industry-leading vendors. CPS improves your time-to-value and enables a consistent cloud experience, allowing you to scale from as few as four nodes with CPS Standard to as many as 128 with CPS Premium, depending on your needs.
Cloud Consistency with Azure Resource Manager: written in anticipation of Microsoft Azure Stack, this AWESOME whitepaper by Kristian Nese and Flemming Riis focuses on Azure Resource Manager and will prepare you to create templates, deploy IaaS and PaaS, and explore many of its management capabilities.
How to deploy the Microsoft Remote Desktop Session Host (RDSH) workload on the Microsoft Cloud Platform System (CPS).
How to deploy the Microsoft SharePoint Server, Microsoft Exchange Server, Microsoft SQL Server, Skype for Business, and Active Directory Domain Services (AD DS) workloads on the Microsoft Cloud Platform System (CPS).
Here you can see the difference between Microsoft Azure Pack and Microsoft Azure Stack:
Microsoft Windows Azure Pack
Microsoft Azure Stack
With Microsoft Azure Stack, the architecture is the same as Microsoft Azure, and this is important for your applications:
It’s all about applications, on-premises and in the cloud, with the same experience.
Another important slide covers Microsoft Azure (Stack) Resource Manager, which you use to manage your cloud services:
Here you see some important slides from the Microsoft Ignite 2015 session “Bringing Azure to Your Datacenter”:
Security Controller added
The Same Infrastructure as Azure
Microsoft CloudOS with Azure Stack
Modern Management with System Center Hybrid
Watch this session with Mark Russinovich, Jeffrey Snover and Jeremy Winter to learn how you can continue to be a strategic partner to the business by leveraging the power of Microsoft Azure in your datacenter to deliver the rapid innovation your business requires.
Adds support for WebJobs in Windows Azure Pack Websites.
Symptom: Tenant users cannot set up and run background tasks to perform batch jobs that are related to their websites. The Windows Azure WebJobs functionality is not supported in the Windows Azure Pack (WAP) Websites Resource Provider.
Resolution: Tenant users now have access to this functionality through the WAP Websites Resource Provider. It is enabled through the Administrator site when you configure a plan and make it available to tenants in the Tenant site. With this functionality, WebJobs can be created and run manually or continuously in the background.
Issue 2
Adds support for Deployment Slots in Windows Azure Pack Websites.
Symptom: Tenant users cannot deploy to slots other than their production slot in a WAP Websites environment. The slots or staging environments for websites that are available in the public Azure Web App service are not available in the WAP Websites Resource Provider.
Resolution: Tenants can now use deployment slots that are associated with their websites. Web app content and configuration elements can be swapped between two deployment slots, including the production slot.
Issue 3
Adds support for Virtual Machine Checkpoint.
Symptom: Tenant users cannot create a checkpoint of the current state of a virtual machine (VM) through the VMM/SPF Resource Provider.
Resolution: Tenants can now create a checkpoint of a virtual machine and restore it when necessary. This differs from regular VM checkpoints in that only a single checkpoint can be created; each time a checkpoint is created, the previous one is overwritten.
Issue 5
Adds support for maintaining data consistency between the SQL Resource Provider’s configured properties for resources (such as databases, resource pools, and workload groups) and the actual provisioned resources on the SQL Server hosting server (or servers). This allows detection of discrepancies in the SQL Resource Provider configuration data after administrators make changes directly on the SQL Server computer.
Symptom: SQL Server RP configuration data may become unsynchronized with the actual resources that are deployed in the target instances of SQL Server. This issue occurs if changes are made directly on the servers and not through the WAP SQL Server RP.
Resolution: There’s a new set of RESTful API calls that enable resynchronization of the SQL RP configuration data with the actual state of the target SQL Server.
Issue 6
Compatibility with the next version of Windows Server:
New UI artifacts are for review only; they do not have complete functionality behind them.
Dependency on the URL Rewrite IIS Module has been removed.
Issue 7
Fixes several SQL Server Resource Provider issues:
An incorrect error message is displayed when database creation fails because total server capacity is exceeded by the consumption defined in the plan.
The Get-MgmtSvcSqlResourcePoolByTemplate command does not provide the expected output when it is provided with an invalid RG template ID.
A tenant can delete any unmanaged database by guessing its name.
Tenants cannot increase the size of their databases even after the administrator increases the quota for the plan.
This release introduces a new interactive installation, upgrade, and offline installation process that addresses user feedback about the original process. A new Windows Azure Pack Web Sites Microsoft Management Console (MMC) snap-in is included in this update. This update offers functionality that was previously available only through Windows PowerShell commands. In addition, the MMC snap-in can be used to perform clean installations on a stand-alone or preconfigured file server and to configure a secondary controller.
New features
Update Rollup 6 for Windows Azure Pack Web Sites version 2 adds the following new features to the platform:
Web Jobs
Windows Azure Pack Web Sites enables tenants to run custom jobs (executables or scripts) on their web apps either on demand or continuously. For more information, see How to Use the WebJobs feature of Windows Azure Pack Websites.
Site Slots
Windows Azure Pack Web Sites enables tenants to deploy to a separate deployment slot. For example, a tenant could deploy to a staging slot to validate changes before swapping the staging slot with the production slot. For more information, see How to Use the Site Slots feature in Windows Azure Pack Websites.
If you implemented IP-based SSL by using VIP mapping or local address mappings with IPv6 addresses, please do not upgrade to Update Rollup 6 unless you migrate to IPv4 before you apply this update. This scenario is not supported in this update. However, it will be supported in a future update.
Roles have to be repaired for the following configuration changes to take effect. (This may cause the role to restart.)
Configuration change | Roles to repair
Add/delete/update VIP mappings | Front Ends
Add/delete/update IP range for IP restrictions | Workers
Upload new publishing certificate | Publishers
Upload new default certificate | Front Ends
Ports that are used by Update Rollup 6 for Windows Azure Pack Web Sites version 2
Because of changes to IP-based SSL, Update Rollup 6 for Windows Azure Pack Websites version 2 uses additional TCP ports for communication between roles. These ports are listed in the following table. Before you install or upgrade, make sure that communication over these ports is enabled.
Role | Service | Port
Database Server | SQL Server | 1433, or a nondefault port if one is configured
File Server | SMB | 445
File Server | Web Farm Framework Agent | 8173
File Server | WinRM (HTTP) | 5985
File Server | WinRM (HTTPS) | 5986
Front End | HTTP | 80
Front End | HTTPS | 443
Front End | HTTP/HTTPS | Any ports that are used in nonstandard-port VIP mappings
Front End | Certificate Sync Service | 1233
Front End | Kudu Credential Cache Listener | 1233
Front End | WinRM (HTTP) | 5985
Front End | WinRM (HTTPS) | 5986
Management Server | Web Farm Framework Agent | 8173
Management Server | WinRM (HTTP) | 5985
Management Server | WinRM (HTTPS) | 5986
Publisher | FTP | 21
Publisher | Web Deploy | 443/8172
Publisher | Web Farm Framework Agent | 8173
Publisher | WinRM (HTTP) | 5985
Publisher | WinRM (HTTPS) | 5986
Web Worker | DWAS/W3WP | 80
Web Worker | Web Farm Framework Agent | 8173
Web Worker | WinRM (HTTP) | 5985
Web Worker | WinRM (HTTPS) | 5986
Web Worker | Worker Management Endpoint | 456
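Before you install or upgrade, you can sanity-check reachability of the role ports from the table above. The following Python sketch is illustrative only: the host names are placeholders for your own role servers, and a TCP connect test confirms only that something is listening, not that the correct service is behind the port.

```python
import socket

# Placeholder host names for illustration; substitute your own role servers.
REQUIRED_PORTS = {
    "fileserver01": [445, 8173, 5985, 5986],
    "frontend01":   [80, 443, 1233, 5985, 5986],
    "publisher01":  [21, 8172, 8173, 5985, 5986],
    "webworker01":  [80, 456, 8173, 5985, 5986],
}

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unreachable(ports=REQUIRED_PORTS):
    """Return (host, port) pairs from the plan that could not be reached."""
    return [(host, port) for host, port_list in ports.items()
            for port in port_list if not check_port(host, port)]
```

Run `unreachable()` from each role server before the upgrade; an empty list means every tested port accepted a connection.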
Issues that are fixed in this update rollup
This update fixes the following issues:
Issue 1
By default, SSL version 3 is enabled in Windows Azure Pack Web Sites version 2.
Issue 2
You cannot add workers from the administrator portal.
Issue 3
If services are currently running when the upgrade is started, a webhosting.msi upgrade may fail.
Issue 4
The controller may operate on remote servers that do not belong to it. This issue can be caused by stale DNS entries.
Issue 5
Installation fails when Hosting or Metering databases are already present, and no useful error messages are displayed.
Issue 6
You cannot delegate IPSecurity settings for IPRestrictions to users who run the following Windows PowerShell cmdlet:
The cmdlet appears to finish successfully. However, when an IP restriction is present in a site’s Web.config file, an HTTP 500.19 response is returned when a request is made to the site.
Issue 7
After they upgrade to Update Rollup 4 for Windows Azure Pack Web Sites version 2, users cannot access usage statistics from the metering database.
Issue 8
If TrustedHosts is configured by Group Policy, Windows Azure Pack Web Sites cannot configure the TrustedHosts setting. This setting is needed to communicate with the file server through Windows Remote Management.
The following diagram shows the recommended design for this solution, which connects each tenant’s network to the hosting provider’s multi-tenant gateway by using a single site-to-site VPN tunnel. This enables the hosting provider to support approximately 100 tenants on a single gateway cluster, which reduces both management complexity and cost. Each tenant must configure their own gateway to connect to the hosting provider gateway. The gateway then routes each tenant’s network data and uses the Network Virtualization using Generic Routing Encapsulation (NVGRE) protocol for network virtualization.
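For context on how NVGRE achieves this isolation: each tenant Ethernet frame is encapsulated in GRE, and the 24-bit Virtual Subnet ID (VSID) that identifies the tenant subnet travels in the upper bits of the GRE Key field, with an 8-bit FlowID below it (per RFC 7637). That 24-bit space is what lets NVGRE scale far beyond the 4,094 usable VLAN IDs. A minimal sketch of the key-field packing, for illustration only (not a full NVGRE encoder):

```python
def pack_nvgre_key(vsid, flow_id=0):
    """Pack a 24-bit Virtual Subnet ID and an 8-bit FlowID into the
    32-bit GRE Key field, as NVGRE does (RFC 7637)."""
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID must fit in 24 bits")
    if not 0 <= flow_id < 2**8:
        raise ValueError("FlowID must fit in 8 bits")
    return (vsid << 8) | flow_id

def unpack_nvgre_key(key):
    """Split a 32-bit GRE Key field back into (VSID, FlowID)."""
    return key >> 8, key & 0xFF
```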
Solution design element
Why is it included in this solution?
Windows Server 2012 R2
Provides the operating system base for this solution. We recommend using the Server Core installation option to reduce security attack exposure and to decrease software update frequency.
Windows Server 2012 R2 Gateway
Is integrated with Virtual Machine Manager to support simultaneous, multi-tenant site-to-site VPN connections and network virtualization using NVGRE. For an overview of this technology, see Windows Server Gateway.
Microsoft SQL Server 2012
Provides database services for Virtual Machine Manager and Windows Azure Pack.
System Center 2012 R2 Virtual Machine Manager
Manages virtual networks (using NVGRE for network isolation), fabric management, and IP addressing. For an overview of this product, see Configuring Networking in VMM Overview.
Windows Server Failover Clustering
All the physical hosts are configured as failover clusters for high availability, as are many of the virtual machine guests that host management and infrastructure workloads.
The site-to-site VPN gateway can be deployed in a 1+1 configuration for high availability. For more information about failover clustering, see Failover Clustering overview.
Scale-out File Server
Provides file shares for server application data with reliability, availability, manageability, and high performance. This solution uses two scale-out file servers: one for the domain that hosts the management servers and one for the domain that hosts the gateway servers. These two domains have no trust relationship. The scale-out file server for the gateway domain is implemented as a virtual machine guest cluster. The scale-out file server for the gateway domain is needed because you will not be able to access a scale-out file server from an untrusted domain.
Site-to-site VPN
Provides a way to connect a tenant site to the hosting provider site. This connection method is cost-effective, and VPN software is included with Remote Access in Windows Server 2012 R2. (Remote Access brings together the Routing and Remote Access service (RRAS) and DirectAccess.) Also, VPN software and/or hardware is available from multiple suppliers.
Windows Azure Pack
Provides a self-service portal for tenants to manage their own virtual networks. Windows Azure Pack provides a common self-service experience, a common set of management APIs, and an identical website and virtual machine hosting experience. Tenants can take advantage of the common interfaces, such as Service Provider Foundation, which frees them to move their workloads where it makes the most sense for their business or for their changing requirements. Though Windows Azure Pack is used for the self-service portal in this solution, you can use a different self-service portal if you choose.
Provides Service Provider Foundation (SPF), which exposes an extensible OData web service that interacts with VMM. This enables service providers to design and implement multi-tenant self-service portals that integrate IaaS capabilities that are available on System Center 2012 R2.
Windows Server 2012 R2 together with System Center 2012 R2 Virtual Machine Manager (VMM) gives hosting providers a multi-tenant gateway solution that supports multiple site-to-site VPN tenant connections, Internet access for tenant virtual machines by using a gateway NAT feature, and forwarding gateway capabilities for private cloud implementations. Hyper-V Network Virtualization provides tenant virtual network isolation with NVGRE, which allows tenants to bring their own address space and gives hosting providers better scalability than is possible with VLANs for isolation.
The components of the design are placed on separate servers because each has unique scaling, manageability, and security requirements.
For more information about the advantages of HNV and Windows Server Gateway, see:
VMM offers a user interface to manage the gateways, virtual networks, virtual machines and other fabric items.
When planning this solution, you need to consider the following:
High availability design for the servers running Hyper-V, guest virtual machines, SQL Server, gateways, VMM, and other services: You’ll want to ensure that your design is fault tolerant and capable of supporting your stated availability terms.
Tenant virtual machine Internet access requirements: Consider whether your tenants want their virtual machines to have Internet access. If so, you will need to configure the NAT feature when you deploy the gateway.
Infrastructure physical hardware capacity and throughput: You’ll need to ensure that your physical network has the capacity to scale out as your IaaS offering expands.
Site-to-site connection throughput: You’ll need to investigate the throughput you can provide your tenants and whether site-to-site VPN connections will be sufficient.
Network isolation technologies: This solution uses NVGRE for tenant network isolation. You’ll want to investigate whether you have, or can obtain, hardware that can optimize this protocol (for example, network interface cards and switches).
Authentication mechanisms: This solution uses two Active Directory domains for authentication: one for the infrastructure servers, and one for the gateway cluster and the scale-out file server for the gateway. If you don’t have an Active Directory domain available for the infrastructure, you’ll need to prepare a domain controller before you start deployment.
IP addressing: You’ll need to plan for the IP address spaces used by this solution.
To help with capacity planning, you need to determine your tenant requirements. These requirements will then impact the resources that you need to have available for your tenant workloads. For example, you might need more Hyper-V hosts with more RAM and storage, or you might need faster LAN and WAN infrastructure to support the network traffic that your tenant workloads generate.
Use the following questions to help you plan for your tenant requirements.
Design consideration
Design effect
How many tenants do you expect to host, and how fast do you expect that number to grow?
Determines how many Hyper-V hosts you’ll need to support your tenant workloads.
Using Hyper-V Resource Metering may help you track historical data on the use of virtual machines and gain insight into the resource use of the specific servers. For more information, see Introduction to Resource Metering on the Microsoft Virtualization Blog.
What kind of workloads do you expect your tenants to move to your network?
Can determine the amount of RAM, storage, and network throughput (LAN and WAN) that you make available to your tenants.
What is your failover agreement with your tenants?
Affects your cluster configuration and other failover technologies that you deploy.
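As a rough aid for the sizing questions above, you can turn expected tenant load into a first-pass host count. Every number in this sketch is an illustrative assumption, not guidance from this solution; refine it with the Resource Metering data mentioned earlier:

```python
import math

def hosts_needed(tenants, vms_per_tenant, ram_per_vm_gb,
                 ram_per_host_gb, reserve_fraction=0.2):
    """Estimate how many Hyper-V hosts are needed for tenant VMs,
    reserving a fraction of each host's RAM for the parent partition
    and failover headroom (assumed figure; adjust to your environment)."""
    usable_per_host_gb = ram_per_host_gb * (1 - reserve_fraction)
    total_ram_gb = tenants * vms_per_tenant * ram_per_vm_gb
    return math.ceil(total_ram_gb / usable_per_host_gb)

# Example: 100 tenants, 4 VMs each at 8 GB, on 256 GB hosts.
# hosts_needed(100, 4, 8, 256) -> 16
```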
For more information about physical compute planning considerations, see section “3.1.6 Physical compute resource: hypervisor” in the Design options guide in Cloud Infrastructure Solution for Enterprise IT.
Determine your failover cluster strategy
Plan your failover cluster strategy based on your tenant requirements and your own risk tolerance. For example, the minimum we recommend is to deploy the management, compute, and gateway hosts as two-node clusters. You can choose to add more nodes to your clusters, and you can guest-cluster the virtual machines running SQL Server, Virtual Machine Manager, Windows Azure Pack, and so on.
For this solution, you configure the scale-out file servers, compute Hyper-V hosts, management Hyper-V hosts, and gateway Hyper-V hosts as failover clusters. You also configure the SQL, Virtual Machine Manager, and gateway guest virtual machines as failover clusters. This configuration provides protection from potential physical computer and virtual machine failure.
Design consideration
Design effect
What is your risk tolerance for unavailability of applications and services?
Add nodes to your failover clusters to increase the availability of applications and services.
Determine your SQL high availability strategy
You’ll need to choose a SQL option for high availability for this solution. SQL Server 2012 has several options:
AlwaysOn Failover Cluster Instances: This option provides local high availability through redundancy at the server-instance level (a failover cluster instance).
AlwaysOn Availability Groups: This option enables you to maximize availability for one or more user databases.
For the SQL high availability option for this solution, we recommend AlwaysOn Failover Cluster Instances. With this design, all the cluster nodes are located in the same network, and shared storage is available, which makes it possible to deploy a more reliable and stable failover cluster instance. If shared storage is not available and your nodes span different networks, AlwaysOn Availability Groups might be a better solution for you.
Determine your gateway requirements
You need to plan how many gateway guest clusters are required. The number you need to deploy depends on the number of tenants that you need to support. The hardware requirements for your gateway Hyper-V hosts also depend on the number of tenants that you need to support and on the tenant workload requirements.
For capacity planning purposes, we recommend one gateway guest cluster per 100 tenants.
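The one-cluster-per-100-tenants guideline reduces to a simple ceiling calculation when you plan the gateway tier (the 100-tenant figure comes from the recommendation above; the rest is arithmetic):

```python
import math

TENANTS_PER_GATEWAY_CLUSTER = 100  # recommended planning figure from this guide

def gateway_clusters_needed(tenant_count):
    """Number of gateway guest clusters to plan for a given tenant count."""
    return math.ceil(tenant_count / TENANTS_PER_GATEWAY_CLUSTER)
```

For example, 250 tenants would call for three gateway guest clusters.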
The design for this solution is for tenants to connect to the gateway through a site-to-site VPN. Therefore, we recommend deploying a Windows Server gateway using a VPN. You can configure a two-node Hyper-V host failover cluster with a two-node guest failover cluster using predefined service templates available on the Microsoft Download Center (for more information, see How to Use a Server Running Windows Server 2012 R2 as a Gateway with VMM).
Design consideration
Design effect
How will your tenants connect to your network?
If tenants connect through a site-to-site VPN, you can use Windows Server Gateway as your VPN termination point and gateway to the virtual networks. This is the configuration that is covered by this planning and design guide.
If you use a non-Microsoft VPN device to terminate the VPN, you can use Windows Server Gateway as a forwarding gateway to the tenant virtual networks.
If a tenant connects to your service provider network through a packet-switched network, you can use Windows Server Gateway as a forwarding gateway to connect them to their virtual networks.
Important
You must deploy a separate forwarding gateway for each tenant that requires a forwarding gateway to connect to their virtual network.
Plan your network infrastructure
For this solution, you use Virtual Machine Manager to define logical networks, VM networks, port profiles, logical switches, and gateways to organize and simplify network assignments. Before you create these objects, you need to have your logical and physical network infrastructure plan in place.
In this step, we provide planning examples to help you create your network infrastructure plan.
The diagram shows the networking design that we recommend for each of the physical nodes in the management, compute, and gateway clusters.
You need to plan several subnets and VLANs for the different types of traffic that are generated, such as management/infrastructure, network virtualization, external (outbound), clustering, storage, and live migration. You can use VLANs to isolate the network traffic at the switch.
For example, this design recommends the networks listed in the following table. Your exact line speeds, addresses, VLANs, and so on may differ based on your particular environment.
Subnet/VLAN plan

Line speed (Gb/s) | Purpose | Address | VLAN | Comments
1 | Management/Infrastructure | 172.16.1.0/23 | 2040 | Network for management and infrastructure. Addresses can be static or dynamic and are configured in Windows.
10 | Network Virtualization | 10.0.0.0/24 | 2044 | Network for the VM network traffic. Addresses must be static and are configured in Virtual Machine Manager.
10 | External | 131.107.0.0/24 | 2042 | External, Internet-facing network. Addresses must be static and are configured in Virtual Machine Manager.
1 | Clustering | 10.0.1.0/24 | 2043 | Used for cluster communication. Addresses can be static or dynamic and are configured in Windows.
10 | Storage | 10.20.31.0/24 | 2041 | Used for storage traffic. Addresses can be static or dynamic and are configured in Windows.
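When you substitute your own address space into a plan like this, it is worth verifying that no two subnets overlap. A small sketch using Python's standard `ipaddress` module, seeded with the subnets from the table above (`strict=False` normalizes an address such as 172.16.1.0/23 to its containing /23 network):

```python
import ipaddress
from itertools import combinations

# Subnets from the example plan; substitute your own.
PLAN = {
    "Management/Infrastructure": "172.16.1.0/23",
    "Network Virtualization": "10.0.0.0/24",
    "External": "131.107.0.0/24",
    "Clustering": "10.0.1.0/24",
    "Storage": "10.20.31.0/24",
}

def overlapping_subnets(plan=PLAN):
    """Return pairs of purposes whose subnets overlap."""
    nets = {name: ipaddress.ip_network(cidr, strict=False)
            for name, cidr in plan.items()}
    return [(a, b) for (a, net_a), (b, net_b) in combinations(nets.items(), 2)
            if net_a.overlaps(net_b)]
```

An empty result means the plan's subnets are mutually disjoint.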
VMM VM network plan
This design uses the VM networks listed in the following table. Your VM networks may differ based on your particular needs.
Name | IP pool address range
External | None
Live migration | 10.0.3.1 – 10.0.3.254
Management | None
Storage | 10.20.31.1 – 10.20.31.254
After you install Virtual Machine Manager, you can create a logical switch and uplink port profiles. You then configure the hosts on your network to use a logical switch, together with virtual network adapters attached to the switch. For more information about logical switches and uplink port profiles, see Configuring Ports and Switches for VM Networks in VMM.
This design uses the following uplink port profiles, as defined in VMM: Rack01_Compute, Rack01_Gateway, and Rack01_Infrastructure.
This design deploys the following logical switch using these uplink port profiles, as defined in VMM:

VMM logical switch plan

Name | Extension | Uplink | Virtual port
VMSwitch | Microsoft Windows Filtering Platform | Rack01_Compute, Rack01_Gateway, Rack01_Infrastructure | High bandwidth, Infrastructure, Live migration workload, Low bandwidth, Medium bandwidth
The design isolates the heaviest traffic loads on the fastest network links. For example, the storage network traffic is isolated from the network virtualization traffic on separate fast links. If you must use slower network links for some of the heavy traffic loads, you could use NIC teaming.
If you use Windows Azure Pack for your tenant self-service portal, there are numerous options you can configure to offer your tenants. This solution includes some of the VM Cloud features, but there are many more options available to you—not only with VM Clouds, but also with Web Site Clouds, Service Bus Clouds, SQL Servers, MySQL Servers, and more. For more information about Windows Azure Pack features, see Windows Azure Pack for Windows Server.
After reviewing the Windows Azure Pack documentation, determine which services you want to deploy. Because this solution uses Windows Azure Pack only as an optional component, it utilizes only some of the VM Clouds features, using an Express deployment with all the Windows Azure Pack components installed on a single virtual machine. If you use Windows Azure Pack as your production portal, however, you should use a distributed deployment and plan for the additional resources required.
Use a distributed deployment if you decide to deploy Windows Azure Pack in production. If you want to evaluate Windows Azure Pack features before deploying in production, use the Express deployment. For this solution, you use the Express deployment to demonstrate the VM Clouds service. You deploy Windows Azure Pack on a single virtual machine located on the compute cluster so that the web portals can be accessed from the external (Internet) network. Then, you deploy Service Provider Foundation on a virtual machine located on the management cluster.
The following table shows the physical hosts that we recommend for this solution. The number of nodes used was chosen to represent the minimum needed to provide high availability. You can add physical hosts to further distribute the workloads and meet your specific requirements. Each host has four physical network adapters to support the networking isolation requirements of the design. We recommend that you use a 10 Gb/s or faster network infrastructure; 1 Gb/s might be adequate for infrastructure and cluster traffic.
Physical host recommendation
Physical hosts
Role in solution
Virtual machine roles
2 hosts configured as a failover cluster
Management/infrastructure cluster:
Provides Hyper-V hosts for management/infrastructure workloads (VMM, SQL, Service Provider Foundation, guest clustered scale-out file server for gateway domain, domain controller).
Guest clustered SQL
Guest clustered VMM
Guest clustered scale-out file server for gateway domain
Service Provider Foundation endpoint
2 hosts configured as a failover cluster
Compute cluster:
Provides Hyper-V hosts for tenant workloads and Windows Azure Pack for Windows Server.
Tenant virtual machines
Windows Azure Pack portal accessible from public networks
2 hosts configured as a failover cluster
Storage cluster:
Provides scale-out file server for management and infrastructure cluster storage.
None (this cluster just hosts file shares)
2 hosts configured as a failover cluster
Windows Server gateway cluster:
Provides Hyper-V hosts for the gateway virtual machines.
CPS is an integrated, ready-to-run private cloud solution for your datacenter, powered by Dell hardware and Microsoft cloud software. CPS maximizes the economic benefits of a software-defined datacenter when operating cloud services. A layer of software abstraction across the physical layers – storage, network, and compute – enables separation of the fabric from the tenant services that run on top of it. Windows Azure Pack provides a consistent self-service approach that is common between Microsoft Azure and CPS. System Center provides the administrative controls, and Windows Server provides the platform for virtualizing a wide range of computing services. The entire solution is pre-configured before arriving at your loading dock, offering a turnkey private cloud solution.
This paper provides an overview of storage-focused performance of a single rack CPS stamp in the following three scenarios, scaled across a deployment of tenant virtual machines (VMs):
Scenario 1. Boot storm: cold start of VMs.
Scenario 2. VM microbenchmarks: synthetic storage loads that are generated within VMs.
Scenario 3. VM database OLTP: simulated database online transaction processing (OLTP) using Microsoft SQL Server, run within the VMs.