Cloud and Datacenter Management Blog

A Microsoft Hybrid Cloud blog about management



Pre-TechDays Event 27 May 2015 “Theme Night – Hybrid Identity & Business Continuity” #HybridCloud #Azure #sysctr #TechDaysNL


What a great month we IT pros are having! This month is packed with major events such as //Build 2015 and Microsoft Ignite, where new developments in Cloud, Windows 10, Windows Server, Continuum, and HoloLens are being announced.

On Wednesday 27 May, the System Center User Group NL and Hyper-V.NU give you the chance to follow these new developments closer to home. Ahead of TechDays 2015 we are organizing “Theme Night – Hybrid Identity & Business Continuity”.

Sign up now via the SCUG.NL site and read more about Microsoft’s international top speakers!

Hope to see you soon! 😉



#MSIgnite #Ignite #Ignite2015 Getting Started with Microsoft #NanoServer


Microsoft Nano Server is a remotely administered server operating system optimized for hosting in private clouds and datacenters. It is similar to Windows Server in Server Core mode, but markedly smaller. Also, there is no local logon capability, nor does it support Terminal Services. It takes up far less disk space, sets up significantly faster, and requires far fewer restarts than Windows Server.

Nano Server is ideal for a number of scenarios:

  • As a “compute” host for Hyper-V virtual machines, either in clusters or not
  • As a storage host for Scale-Out File Server, either in clusters or not
  • As a container or virtual machine guest operating system for applications that are developed entirely in the cloud

This guide describes how to configure a Nano Server image with the packages you’ll need, add additional device drivers, and deploy it with an Unattend.xml file. It also explains the options for managing Nano Server remotely, managing the Hyper-V role running on Nano Server, and setup and management of a failover cluster of computers that are running Nano Server.
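
For example, building a Nano Server VHD with the compute, storage, and clustering packages looks roughly like this. This is a hedged sketch using the NanoServerImageGenerator PowerShell module that ships on the Windows Server media; the media path, target path, and computer name are placeholders, and parameter names can differ between preview builds:

# Import the image builder module from the Windows Server media (path is a placeholder)
Import-Module "D:\NanoServer\NanoServerImageGenerator"

# Build a Nano Server VHD for use as a Hyper-V guest, with the Hyper-V,
# storage, and failover clustering packages included
New-NanoServerImage -MediaPath "D:\" -BasePath ".\Base" -TargetPath ".\Nano01.vhd" `
  -ComputerName "Nano01" -Compute -Storage -Clustering `
  -DeploymentType Guest -Edition Standard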

Get started with Microsoft Nano Server here



Microsoft System Center 2012 R2 Virtual Machine Manager RU6 Available #SCVMM #Azure #HybridCloud #Hyperv


Today I’m very happy that Microsoft released System Center 2012 R2 Virtual Machine Manager Update Rollup 6 via Windows Update:


Once you have installed Update Rollup 6, you can add your Microsoft Azure subscription to manage VMs in the cloud.

Click on Azure subscriptions


Click on Add subscription


Creating a Certificate

The first thing the Windows Azure administrator (private key holder) needs to do is use their local machine to create a certificate. To do this, they will need Visual Studio or the Windows 8.1 SDK installed. The technique I usually use to create a private/public key pair is the makecert.exe utility.

Here are the steps to create a self-signed certificate in .pfx format.
1. Open a Visual Studio command prompt (Run as administrator) or just CMD.exe (Run as administrator).
2. Execute this command:

makecert -r -pe -n "CN=azureconfig" -sky exchange "azureconfig.cer" -sv "azureconfig.pvk"


3. You will be prompted three times for a password to secure the private key. Enter a password of your choice.


4. This will generate azureconfig.cer (the public key certificate) and azureconfig.pvk (the private key file).
5. Then enter the following command to create the .pfx file (this format is used to import the private key to Windows Azure). After the -pi switch, enter the password you chose:

pvk2pfx -pvk "azureconfig.pvk" -spc "azureconfig.cer" -pfx "azureconfig.pfx" -pi password-entered-in-previous-step

6. Upload the certificate to Azure via Settings => Management Certificates

7. Install the PFX locally
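
If you prefer PowerShell over makecert, a roughly equivalent .cer/.pfx pair can be produced with the built-in PKI cmdlets. This is a hedged sketch for Windows 8.1 / Server 2012 R2 or later; the subject name and password mirror the steps above:

# Create a self-signed certificate in the current user's personal store
$cert = New-SelfSignedCertificate -DnsName "azureconfig" -CertStoreLocation "Cert:\CurrentUser\My"

# Export the public key (.cer) for upload to Azure (Settings => Management Certificates)
Export-Certificate -Cert $cert -FilePath ".\azureconfig.cer"

# Export the private key (.pfx), protected with a password of your choice
$pwd = ConvertTo-SecureString -String "password-entered-in-previous-step" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath ".\azureconfig.pfx" -Password $pwd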


You are now ready to set up the Microsoft Azure subscription in SCVMM.


Here you can see what you can do with Microsoft Azure virtual machines in the cloud with System Center 2012 R2 VMM RU6:

SCVMM Azure Features



Set up protection between on-premises #VMware virtual machines or physical servers and #Azure #HybridCloud


Azure Site Recovery contributes to your business continuity and disaster recovery (BCDR) strategy by orchestrating replication, failover and recovery of virtual machines and physical servers. Read about possible deployment scenarios in the Azure Site Recovery overview.

This walkthrough describes how to deploy Site Recovery to:

  • Protect on-premises VMware virtual machines to Azure
  • Protect on-premises physical Windows and Linux servers to Azure

Business advantages include:

  • Protection of physical Windows or Linux servers.
  • Simple replication, failover, and recovery using the Azure Site Recovery portal.
  • Data replication over the Internet, a site-to-site VPN connection, or over Azure ExpressRoute.
  • Failback (restore) from Azure to an on-premises VMware infrastructure.
  • Simplified discovery of VMware virtual machines.
  • Multi-VM consistency so that virtual machines and physical servers running specific workloads can be recovered together to a consistent data point.
  • Recovery plans for simplified failover and recovery of workloads tiered over multiple machines.

Deployment components

  • On-premises machines—Your on-premises site has machines that you want to protect. These are either virtual machines running on a VMware hypervisor, or physical servers running Windows or Linux.
  • On-premises process server—Protected machines send replication data to the on-premises process server, which performs a number of actions on that data. It optimizes the data before sending it on to the master target server in Azure, and it has a disk-based cache for the replication data it receives. It also handles push installation of the Mobility Service, which must be installed on each virtual machine or physical server you want to protect, and performs automatic discovery of VMware vCenter servers. The process server is a virtual or physical server running Windows Server 2012 R2. We recommend placing it on the same network and LAN segment as the machines that you want to protect, but it can run on a different network as long as protected machines have L3 network visibility to it. During deployment you’ll set up the process server and register it to the configuration server.
  • Azure Site Recovery vault—The vault coordinates and orchestrates data replication, failover, and recovery between your on-premises site and Azure.
  • Azure configuration server—The configuration server coordinates communication between protected machines, the process server, and master target servers in Azure. It sets up replication and coordinates recovery in Azure when failover occurs. The configuration server runs on an Azure Standard A3 virtual machine in your Azure subscription. During deployment you’ll set up the server and register it to the Azure Site Recovery vault.
  • Master target server—The master target server in Azure holds replicated data from your protected machines using attached VHDs created on blob storage in your Azure storage account. You deploy it as an Azure virtual machine, either as a Windows server based on a Windows Server 2012 R2 gallery image (to protect Windows machines) or as a Linux server based on an OpenLogic CentOS 6.6 gallery image (to protect Linux machines). Two sizing options are available: standard A3 and standard D14. The server is connected to the same Azure network as the configuration server. During deployment you’ll create the server and register it to the configuration server.
  • Mobility service—You install the Mobility service on each VMware virtual machine or Windows/Linux physical server that you want to protect. The service sends replication data to the process server, which in turn sends it to the master target server in Azure. The process server can automatically install the Mobility service on protected machines, or you can deploy the service manually using your internal software deployment process.
  • Data communication and replication channel—There are a couple of options. Note that neither option requires you to open any inbound network ports on protected machines. All network communication is initiated from the on-premises site.
    • Over the Internet—Communicates and replicates data between protected on-premises servers and Azure over a secure public internet connection. This is the default option.
    • VPN/ExpressRoute—Communicates and replicates data between on-premises servers and Azure over a VPN connection. You’ll need to set up a site-to-site VPN or an ExpressRoute connection between the on-premises site and your Azure network.
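
Because all communication is initiated on-premises and no inbound ports are required, a simple pre-deployment check is to confirm that protected Windows machines can reach Azure outbound over HTTPS. A minimal sketch; the endpoint shown is the general Azure service management endpoint, used here only as an illustration, so substitute the Site Recovery endpoints for your region:

# Verify outbound HTTPS (port 443) connectivity from an on-premises Windows machine
Test-NetConnection -ComputerName "management.core.windows.net" -Port 443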

Here you can find the Microsoft step-by-step blog post to set up protection between on-premises VMware virtual machines or physical servers and Azure



Watch #Microsoft Ignite Keynote Technology Event Live ! #Azure #sysctr #Hyperv #HybridCloud


Watch the Ignite keynote live from Chicago! Tune in early at 8:30 AM CDT on May 4, 2015 to catch the pre-show. Microsoft CEO Satya Nadella will take the stage at 9:00 AM CDT to outline Microsoft’s company strategy and how we are working hard to empower every person and every organization on the planet to achieve more. Check out our other keynote speakers too. Challenge what you know, reveal new opportunities, spark innovation, and see where technology is headed at the largest and most comprehensive Microsoft technology event!

Here you can watch the keynote live



Microsoft Datacenter extension reference architecture diagram – Interactive #HybridCloud #sysctr #Azure #Hyperv

Datacenter Extension Ref Architecture

The diagram below illustrates how an organization can extend its on-premises datacenter to Microsoft Azure.  It’s an interactive diagram.  Download the file and open it in your browser.  If Internet Explorer asks you if you want to allow blocked content, you’ll need to allow it to enable the interactivity.  This message appears because the page contains script that enables the interactivity.  Hovering your mouse over most objects in the diagram will provide additional details about the implementation of the object.  Clicking on many of the objects will open a relevant design or implementation article about the object.

If you already have some experience with Azure, this will help you understand how to use it as a true extension of your on-premises datacenter. If not, it’s recommended that you gain a basic understanding of Azure before using this diagram. The example data and links within the diagram should save you countless hours of searching for all the information you’ll need to extend your on-premises datacenter to Azure. A video walkthrough of the diagram is also available:

Here you can download the interactive DataCenter Extension Reference Architecture Diagram



Deploy highly scalable tenant network infrastructure for hosting providers #WAPack #SCVMM #Hyperv #CloudOS

Hosting Provider Cloud

The following diagram shows the recommended design for this solution, which connects each tenant’s network to the hosting provider’s multi-tenant gateway using a single site-to-site VPN tunnel. This enables the hosting provider to support approximately 100 tenants on a single gateway cluster, which decreases both the management complexity and cost. Each tenant must configure their own gateway to connect to the hosting provider gateway. The gateway then routes each tenant’s network data and uses the “Network Virtualization using Generic Routing Encapsulation” (NVGRE) protocol for network virtualization.

Solution design elements and why they are included in this solution:

• Windows Server 2012 R2: Provides the operating system base for this solution. We recommend using the Server Core installation option to reduce security attack exposure and to decrease software update frequency.

• Windows Server 2012 R2 Gateway: Integrated with Virtual Machine Manager to support simultaneous, multi-tenant site-to-site VPN connections and network virtualization using NVGRE. For an overview of this technology, see Windows Server Gateway.

• Microsoft SQL Server 2012: Provides database services for Virtual Machine Manager and Windows Azure Pack.

• System Center 2012 R2 Virtual Machine Manager: Manages virtual networks (using NVGRE for network isolation), fabric management, and IP addressing. For an overview of this product, see Configuring Networking in VMM Overview.

• Windows Server Failover Clustering: All the physical hosts are configured as failover clusters for high availability, as are many of the virtual machine guests that host management and infrastructure workloads. The site-to-site VPN gateway can be deployed in a 1+1 configuration for high availability. For more information about Failover Clustering, see Failover Clustering overview.

• Scale-Out File Server: Provides file shares for server application data with reliability, availability, manageability, and high performance. This solution uses two scale-out file servers: one for the domain that hosts the management servers and one for the domain that hosts the gateway servers. These two domains have no trust relationship. The scale-out file server for the gateway domain is implemented as a virtual machine guest cluster; it is needed because you will not be able to access a scale-out file server from an untrusted domain. For an overview of this feature, see Scale-Out File Server for application data overview. For a more in-depth discussion of possible storage solutions, see Provide cost-effective storage for Hyper-V workloads by using Windows Server.

• Site-to-site VPN: Provides a way to connect a tenant site to the hosting provider site. This connection method is cost-effective, and VPN software is included with Remote Access in Windows Server 2012 R2. (Remote Access brings together Routing and Remote Access Service (RRAS) and DirectAccess.) VPN software and/or hardware is also available from multiple suppliers.

• Windows Azure Pack: Provides a self-service portal for tenants to manage their own virtual networks. Windows Azure Pack provides a common self-service experience, a common set of management APIs, and an identical website and virtual machine hosting experience. Tenants can take advantage of the common interfaces, such as Service Provider Foundation, which frees them to move their workloads where it makes the most sense for their business or for their changing requirements. Though Windows Azure Pack is used for the self-service portal in this solution, you can use a different self-service portal if you choose. For an overview of this product, see Windows Azure Pack for Windows Server.

• System Center 2012 R2 Orchestrator: Provides Service Provider Foundation (SPF), which exposes an extensible OData web service that interacts with VMM. This enables service providers to design and implement multi-tenant self-service portals that integrate IaaS capabilities available in System Center 2012 R2.

Windows Server 2012 R2 together with System Center 2012 R2 Virtual Machine Manager (VMM) gives hosting providers a multi-tenant gateway solution that supports multiple site-to-site VPN tenant connections, Internet access for tenant virtual machines by using a gateway NAT feature, and forwarding gateway capabilities for private cloud implementations. Hyper-V Network Virtualization provides tenant virtual network isolation with NVGRE, which allows tenants to bring their own address space and gives hosting providers better scalability than is possible using VLANs for isolation.

The components of the design are placed on separate servers because each has unique scaling, manageability, and security requirements.

For more information about the advantages of HNV and Windows Server Gateway, see the Windows Server Gateway documentation.

VMM offers a user interface to manage the gateways, virtual networks, virtual machines and other fabric items.

When planning this solution, you need to consider the following:

  • High availability design for the servers running Hyper-V, guest virtual machines, SQL Server, gateways, VMM, and other services: You’ll want to ensure that your design is fault tolerant and capable of supporting your stated availability terms.
  • Tenant virtual machine Internet access requirements: Consider whether or not your tenants want their virtual machines to have Internet access. If so, you will need to configure the NAT feature when you deploy the gateway.
  • Infrastructure physical hardware capacity and throughput: You’ll need to ensure that your physical network has the capacity to scale out as your IaaS offering expands.
  • Site-to-site connection throughput: You’ll need to investigate the throughput you can provide your tenants and whether site-to-site VPN connections will be sufficient.
  • Network isolation technologies: This solution uses NVGRE for tenant network isolation. You’ll want to investigate whether you have, or can obtain, hardware that can optimize this protocol (for example, network interface cards, switches, and so on).
  • Authentication mechanisms: This solution uses two Active Directory domains for authentication: one for the infrastructure servers and one for the gateway cluster and the scale-out file server for the gateway. If you don’t have an Active Directory domain available for the infrastructure, you’ll need to prepare a domain controller before you start deployment.
  • IP addressing: You’ll need to plan for the IP address spaces used by this solution.

 

Important: If you use jumbo frames in your network environment, you may need to plan for some configuration adjustments before you deploy. For more information, see Windows Server 2012 R2 Network Virtualization (NVGRE) MTU reduction.

Determine your tenant requirements

To help with capacity planning, you need to determine your tenant requirements. These requirements will then impact the resources that you need to have available for your tenant workloads. For example, you might need more Hyper-V hosts with more RAM and storage, or you might need faster LAN and WAN infrastructure to support the network traffic that your tenant workloads generate.

Use the following questions to help you plan for your tenant requirements.

Design considerations and their effects:

• How many tenants do you expect to host, and how fast do you expect that number to grow? Determines how many Hyper-V hosts you’ll need to support your tenant workloads. Using Hyper-V Resource Metering may help you track historical data on the use of virtual machines and gain insight into the resource use of the specific servers. For more information, see Introduction to Resource Metering on the Microsoft Virtualization Blog.

• What kind of workloads do you expect your tenants to move to your network? Can determine the amount of RAM, storage, and network throughput (LAN and WAN) that you make available to your tenants.

• What is your failover agreement with your tenants? Affects your cluster configuration and other failover technologies that you deploy.

For more information about physical compute planning considerations, see section “3.1.6 Physical compute resource: hypervisor” in the Design options guide in Cloud Infrastructure Solution for Enterprise IT.
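
Hyper-V Resource Metering, mentioned above, is enabled per virtual machine from the host. A minimal sketch using the Hyper-V PowerShell module; the VM name is a placeholder:

# Start collecting resource usage data for a tenant virtual machine
Enable-VMResourceMetering -VMName "Tenant01-Web"

# Later, report average CPU, memory, disk, and network usage since metering began
Measure-VM -VMName "Tenant01-Web" | Format-List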

Determine your failover cluster strategy

Plan your failover cluster strategy based on your tenant requirements and your own risk tolerance. For example, the minimum we recommend is to deploy the management, compute, and gateway hosts as two-node clusters. You can choose to add more nodes to your clusters, and you can guest cluster the virtual machines running SQL, Virtual Machine Manager, Windows Azure Pack, and so on.

For this solution, you configure the scale-out file servers, compute Hyper-V hosts, management Hyper-V hosts, and gateway Hyper-V hosts as failover clusters. You also configure the SQL, Virtual Machine Manager, and gateway guest virtual machines as failover clusters. This configuration provides protection from potential physical computer and virtual machine failure.
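
For example, a two-node Hyper-V host cluster can be built with the Failover Clustering cmdlets. A hedged sketch; the host names and cluster IP address below are placeholders:

# Install the Failover Clustering feature on both hosts
Invoke-Command -ComputerName "HV01","HV02" -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}

# Validate the configuration, then create the two-node cluster
Test-Cluster -Node "HV01","HV02"
New-Cluster -Name "ComputeCl01" -Node "HV01","HV02" -StaticAddress "172.16.1.50"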

Design consideration and its effect:

• What is your risk tolerance for unavailability of applications and services? Add nodes to your failover clusters to increase the availability of applications and services.

Determine your SQL high availability strategy

You’ll need to choose a SQL Server high availability option for this solution. SQL Server 2012 has several options:

  • AlwaysOn Failover Cluster Instances: This option provides local high availability through redundancy at the server-instance level (a failover cluster instance).
  • AlwaysOn Availability Groups: This option enables you to maximize availability for one or more user databases.

For more information, see Overview of SQL Server High-Availability Solutions.

For the SQL high availability option for this solution, we recommend AlwaysOn Failover Cluster Instances. With this design, all the cluster nodes are located in the same network, and shared storage is available, which makes it possible to deploy a more reliable and stable failover cluster instance. If shared storage is not available and your nodes span different networks, AlwaysOn Availability Groups might be a better solution for you.

Determine your gateway requirements

You need to plan how many gateway guest clusters are required. The number you need to deploy depends on the number of tenants that you need to support. The hardware requirements for your gateway Hyper-V hosts also depend on the number of tenants that you need to support and on the tenant workload requirements.

For Windows Server Gateway configuration recommendations, see Windows Server Gateway Hardware and Configuration Requirements.

For capacity planning purposes, we recommend one gateway guest cluster per 100 tenants.

The design for this solution is for tenants to connect to the gateway through a site-to-site VPN. Therefore, we recommend deploying a Windows Server gateway using a VPN. You can configure a two-node Hyper-V host failover cluster with a two-node guest failover cluster using predefined service templates available on the Microsoft Download Center (for more information, see How to Use a Server Running Windows Server 2012 R2 as a Gateway with VMM).

Design consideration: How will your tenants connect to your network?

Design effect:
  • If tenants connect through a site-to-site VPN, you can use Windows Server Gateway as your VPN termination point and gateway to the virtual networks. This is the configuration that is covered by this planning and design guide.
  • If you use a non-Microsoft VPN device to terminate the VPN, you can use Windows Server Gateway as a forwarding gateway to the tenant virtual networks.
  • If a tenant connects to your service provider network through a packet-switched network, you can use Windows Server Gateway as a forwarding gateway to connect them to their virtual networks.

Important: You must deploy a separate forwarding gateway for each tenant that requires a forwarding gateway to connect to their virtual network.

Plan your network infrastructure

For this solution, you use Virtual Machine Manager to define logical networks, VM networks, port profiles, logical switches, and gateways to organize and simplify network assignments. Before you create these objects, you need to have your logical and physical network infrastructure plan in place.

In this step, we provide planning examples to help you create your network infrastructure plan.

The diagram shows the networking design that we recommend for each of the physical nodes in the management, compute, and gateway clusters.

Network Clusternode

You need to plan for several subnets and VLANs for the different traffic that is generated, such as management/infrastructure, network virtualization, external (outbound), clustering, storage, and live migration. You can use VLANs to isolate the network traffic at the switch.

For example, this design recommends the networks listed in the following table. Your exact line speeds, addresses, VLANs, and so on may differ based on your particular environment.

Subnet/VLAN plan

Line speed (Gb/s) | Purpose | Address | VLAN | Comments
1 | Management/Infrastructure | 172.16.1.0/23 | 2040 | Network for management and infrastructure. Addresses can be static or dynamic and are configured in Windows.
10 | Network Virtualization | 10.0.0.0/24 | 2044 | Network for the VM network traffic. Addresses must be static and are configured in Virtual Machine Manager.
10 | External | 131.107.0.0/24 | 2042 | External, Internet-facing network. Addresses must be static and are configured in Virtual Machine Manager.
1 | Clustering | 10.0.1.0/24 | 2043 | Used for cluster communication. Addresses can be static or dynamic and are configured in Windows.
10 | Storage | 10.20.31.0/24 | 2041 | Used for storage traffic. Addresses can be static or dynamic and are configured in Windows.
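
In VMM, each row of this plan becomes a logical network with a matching network site (a subnet/VLAN pair). A hedged sketch with the VMM PowerShell module, shown for the network virtualization network; the host group name is a placeholder and parameter names should be verified against your VMM build:

# Create the logical network with network virtualization enabled
$ln = New-SCLogicalNetwork -Name "Network Virtualization" -EnableNetworkVirtualization $true

# Define the network site as the subnet/VLAN pair from the plan above
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.0.0/24" -VLanID 2044
New-SCLogicalNetworkDefinition -Name "Network Virtualization_0" -LogicalNetwork $ln `
  -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") -SubnetVLan $subnetVlan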

VMM VM network plan

This design uses the VM networks listed in the following table. Your VM networks may differ based on your particular needs.

Name | IP pool address range | Notes
External | None |
Live migration | 10.0.3.1 – 10.0.3.254 |
Management | None |
Storage | 10.20.31.1 – 10.20.31.254 |

After you install Virtual Machine Manager, you can create a logical switch and uplink port profiles. You then configure the hosts on your network to use a logical switch, together with virtual network adapters attached to the switch. For more information about logical switches and uplink port profiles, see Configuring Ports and Switches for VM Networks in VMM.

This design uses the following uplink port profiles, as defined in VMM:

VMM uplink port profile plan

Name: Rack01_Gateway
General properties:
  • Load Balancing Algorithm: Host Default
  • Teaming mode: LACP
Network sites:
  • Rack01_External, Logical Network: External
  • Rack01_LiveMigration, Logical Network: Host Networks
  • Rack01_Storage, Logical Network: Host Networks
  • Rack01_Infrastructure, Logical Network: Infrastructure
  • Network Virtualization_0, Logical Network: Network Virtualization

Name: Rack01_Compute
General properties:
  • Load Balancing Algorithm: Host Default
  • Teaming mode: LACP
Network sites:
  • Rack01_External, Logical Network: External
  • Rack01_LiveMigration, Logical Network: Host Networks
  • Rack01_Storage, Logical Network: Host Networks
  • Rack01_Infrastructure, Logical Network: Infrastructure
  • Network Virtualization_0, Logical Network: Network Virtualization

Name: Rack01_Infrastructure
General properties:
  • Load Balancing Algorithm: Host Default
  • Teaming mode: LACP
Network sites:
  • Rack01_LiveMigration, Logical Network: Host Networks
  • Rack01_Storage, Logical Network: Host Networks
  • Rack01_Infrastructure, Logical Network: Infrastructure
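
The same profiles can be created from PowerShell. A hedged sketch for Rack01_Gateway, assuming the network sites above already exist in VMM; cmdlet and parameter names should be verified against your VMM build:

# Collect the network sites (logical network definitions) for this uplink
$sites = Get-SCLogicalNetworkDefinition | Where-Object {
    $_.Name -in @("Rack01_External","Rack01_LiveMigration","Rack01_Storage",
                  "Rack01_Infrastructure","Network Virtualization_0")
}

# Create the uplink port profile with LACP teaming and host-default load balancing
New-SCNativeUplinkPortProfile -Name "Rack01_Gateway" -LogicalNetworkDefinition $sites `
  -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "Lacp" `
  -EnableNetworkVirtualization $true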

This design deploys the following logical switch using these uplink port profiles, as defined in VMM:

VMM logical switch plan

Name: VMSwitch
Extension: Microsoft Windows Filtering Platform
Uplink port profiles:
  • Rack01_Compute
  • Rack01_Gateway
  • Rack01_Infrastructure
Virtual port classifications:
  • High bandwidth
  • Infrastructure
  • Live migration workload
  • Low bandwidth
  • Medium bandwidth

The design isolates the heaviest traffic loads on the fastest network links. For example, the storage network traffic is isolated from the network virtualization traffic on separate fast links. If you must use slower network links for some of the heavy traffic loads, you could use NIC teaming.

Important: If you use jumbo frames in your network environment, you may need to make some configuration adjustments when you deploy. For more information, see Windows Server 2012 R2 Network Virtualization (NVGRE) MTU reduction.

Plan your Windows Azure Pack deployment

If you use Windows Azure Pack for your tenant self-service portal, there are numerous options you can configure to offer your tenants. This solution includes some of the VM Cloud features, but there are many more options available to you—not only with VM Clouds, but also with Web Site Clouds, Service Bus Clouds, SQL Servers, MySQL Servers, and more. For more information about Windows Azure Pack features, see Windows Azure Pack for Windows Server.

After reviewing the Windows Azure Pack documentation, determine which services you want to deploy. Since this solution only uses Windows Azure Pack as an optional component, it only utilizes some of the VM Clouds features, using an Express deployment with all the Windows Azure Pack components installed on a single virtual machine. If you use Windows Azure Pack as your production portal, however, you should use a distributed deployment and plan for the additional resources required.

To determine your host requirements for a production distributed deployment, see Windows Azure Pack architecture.

Use a distributed deployment if you decide to deploy Windows Azure Pack in production. If you want to evaluate Windows Azure Pack features before deploying in production, use the Express deployment. For this solution, you use the Express deployment to demonstrate the VM Clouds service. You deploy Windows Azure Pack on a single virtual machine located on the compute cluster so that the web portals can be accessed from the external (Internet) network. Then, you deploy Service Provider Foundation on a virtual machine located on the management cluster.


The following table shows the physical hosts that we recommend for this solution. The number of nodes was chosen to represent the minimum needed to provide high availability. You can add physical hosts to further distribute the workloads to meet your specific requirements. Each host has 4 physical network adapters to support the networking isolation requirements of the design. We recommend that you use a 10 Gb/s or faster network infrastructure; 1 Gb/s might be adequate for infrastructure and cluster traffic.

Physical host recommendation

• 2 hosts configured as a failover cluster: Management/infrastructure cluster. Provides Hyper-V hosts for management/infrastructure workloads (VMM, SQL Server, Service Provider Foundation, guest clustered scale-out file server for the gateway domain, domain controller). Virtual machine roles:
  • Guest clustered SQL Server
  • Guest clustered VMM
  • Guest clustered scale-out file server for the gateway domain
  • Service Provider Foundation endpoint

• 2 hosts configured as a failover cluster: Compute cluster. Provides Hyper-V hosts for tenant workloads and Windows Azure Pack for Windows Server. Virtual machine roles:
  • Tenant virtual machines
  • Windows Azure Pack portal accessible from public networks

• 2 hosts configured as a failover cluster: Storage cluster. Provides a scale-out file server for management and infrastructure cluster storage. Virtual machine roles: none (this cluster just hosts file shares).

• 2 hosts configured as a failover cluster: Windows Server gateway cluster. Provides Hyper-V hosts for the gateway virtual machines. Virtual machine roles: guest clustered gateway. For gateway physical host and gateway virtual machine configuration recommendations, see Windows Server Gateway Hardware and Configuration Requirements.

Here you can read the steps to implement the solution in the full Microsoft article

See also:

• Product evaluation/getting started: Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM
• Planning and design: Hybrid Cloud Multi-Tenant Networking Planning and Design Guide; Microsoft System Center: Building a Virtualized Network Solution



#Linux Integration Services v4.0 Preview for Microsoft #Azure and More #Hyperv


When installed in a supported Linux virtual machine running on Hyper-V, the Linux Integration Services provide:

• Driver support: Linux Integration Services supports the network controller and the IDE and SCSI storage controllers that were developed specifically for Hyper-V.
• Fastpath Boot Support for Hyper-V: Boot devices now take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
• Time Keeping: The clock inside the virtual machine will remain accurate by synchronizing to the clock on the virtualization server via the Timesync service, and with the help of the pluggable time source device.
• Integrated Shutdown: Virtual machines running Linux can be shut down from either Hyper-V Manager or System Center Virtual Machine Manager by using the “Shut down” command.
• Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use multiple virtual processors per virtual machine. The actual number of virtual processors that can be allocated to a virtual machine is only limited by the underlying hypervisor.
• Heartbeat: This feature allows the virtualization server to detect whether the virtual machine is running and responsive.
• KVP (Key Value Pair) Exchange: Information about the running Linux virtual machine can be obtained by using the Key Value Pair exchange functionality on the Windows Server 2008 virtualization server.
• Integrated Mouse Support: Linux Integration Services provides full mouse support for Linux guest virtual machines.
• Live Migration: Linux virtual machines can undergo live migration for load balancing purposes.
• Jumbo Frames: Linux virtual machines can be configured to use Ethernet frames with more than 1500 bytes of payload.
• VLAN tagging and trunking: Administrators can attach single or multiple VLAN IDs to synthetic network adapters.
• Static IP Injection: Allows migration of Linux virtual machines with static IP addresses.
• Linux VHDX resize: Allows dynamic resizing of VHDX storage attached to a Linux virtual machine.
• Synthetic Fibre Channel Support: Linux virtual machines can natively access high-performance SAN networks.
• Live Linux virtual machine backup support: Facilitates zero-downtime backup of running Linux virtual machines.
• Dynamic memory ballooning support: Improves Linux virtual machine density for a given Hyper-V host.
• Dynamic memory hot-add.
• Synthetic video device support: Provides improved graphics performance for Linux virtual machines.
• PAE kernel support: Provides drivers that are compatible with PAE-enabled Linux virtual machines.
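
From the Hyper-V host, you can check which of these integration services a Linux guest is actually reporting, for example to confirm the heartbeat. A small sketch using the Hyper-V PowerShell module; the VM name is a placeholder:

# List the integration services and their status for a Linux guest
Get-VMIntegrationService -VMName "centos-vm01"

# Check the heartbeat alone to verify the guest is responsive
(Get-VM -Name "centos-vm01").Heartbeat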

Here you can download Linux Integration Services v4.0 Preview for Microsoft Azure




Free Ebook: Microsoft System Center Deploying #HyperV with Software-Defined Storage & Networking #SCVMM #SDN


This book, or proof-of-concept (POC) guide, will cover a variety of aspects that make up the foundation of the software-defined datacenter: virtualization, storage, and networking. By the end, you should have a fully operational, small-scale configuration that will enable you to proceed with evaluation of your own key workloads, experiment with additional features and capabilities, and continue to build your knowledge.

The book won’t, however, cover all aspects of this software-defined datacenter foundation. The book won’t, for instance, explain how to configure and implement Hyper-V Replica, enable and configure Storage Quality of Service (QoS), or discuss Automatic Virtual Machine Activation. Yet these are all examples of capabilities that this POC configuration would enable you to evaluate with ease.

• Chapter 1: Design and planning. This chapter focuses on the overall design of the POC configuration. It discusses each layer of the solution, key features and functionality within each layer, and the reasons why we have chosen to deploy this particular design for the POC.
• Chapter 2: Deploying the management cluster. This chapter focuses on configuring the core management backbone of the POC configuration. You’ll deploy directory, update, and deployment services, along with resilient database and VM management infrastructure. This lays the groundwork for streamlined deployment of the compute, storage, and network infrastructure in later chapters.
• Chapter 3: Configuring network infrastructure. With the management backbone configured, you will spend time in System Center Virtual Machine Manager, building the physical network topology that was defined in Chapter 2. This involves configuring logical networks, uplink port profiles, port classifications, and network adapter port profiles, and culminates in the creation of a logical switch.
• Chapter 4: Configuring storage infrastructure. This chapter focuses on deploying the software-defined storage layer of the POC. You’ll use System Center Virtual Machine Manager to transform a pair of bare-metal servers, with accompanying just-a-bunch-of-disks (JBOD) enclosures, into a resilient, high-performance Scale-Out File Server (SOFS) backed by tiered storage spaces.
• Chapter 5: Configuring compute infrastructure. With the storage layer constructed and deployed, this chapter focuses on deploying the compute layer that will ultimately host the workloads deployed in Chapter 6. You’ll use the same bare-metal deployment capabilities covered in Chapter 4 to deploy several Hyper-V hosts and then optimize these hosts to get them ready for accepting virtualized workloads.
• Chapter 6: Configuring network virtualization. In Chapter 3, you will have designed and deployed the underlying logical network infrastructure and, in doing so, laid the groundwork for deploying network virtualization. In this chapter, you’ll use System Center Virtual Machine Manager to design, construct, and deploy VM networks to suit a number of different enterprise scenarios.

By the end of Chapter 6, you will have a fully functioning foundation for a software-defined datacenter consisting of software-defined compute with Hyper-V, software-defined storage, and software-defined networking.

Here you can download the Free ebook: Microsoft System Center Deploying Hyper-V with Software-Defined Storage & Networking

Thank you Microsoft TechNet, Cloud Platform Team, and Mitch Tulloch for this Free Awesome Ebook 😉



Video #Microsoft Windows Azure Pack Infrastructure as a Service Integrate the Fabric #WAPack #SCVMM

Microsoft video on Windows Azure Pack Infrastructure as a Service: Integrate the Fabric


Try and build your own Microsoft Private Cloud with this Free Windows Azure Pack for Windows Server 2012 R2 Guide