Cloud and Datacenter Management Blog

Microsoft Hybrid Cloud blogsite about Management



Software Defined Networking #SDN with Windows Server 2016 and #SCVMM 2016 TP3

SDN with SCVMM

This topic helps you evaluate the Software Defined Networking (SDN) features in Windows Server 2016 Technical Preview and Virtual Machine Manager 2016 Technical Preview 3. In particular, this topic focuses on scenarios that incorporate VMM with the Microsoft Network Controller, a new feature in Windows Server 2016 Technical Preview. For more information about the Microsoft Network Controller, see Network Controller.

You can also deploy an SDN infrastructure using scripts. For more information, see Deploy Software Defined Networks using scripts.

Here you can read more about how to deploy SDN with Windows Server 2016 and SCVMM 2016 TP3.

System Center 2016 TP3



What’s New in #HyperV Network Virtualization in Windows Server Technical Preview #SDN #SCVMM

This topic describes the Hyper-V Network Virtualization (HNV) functionality that is new or changed in Windows Server 2016 Technical Preview.

Updates in HNV


HNV offers enhanced support in the following areas:

  • Programmable Hyper-V switch (New): HNV policy is programmable through the Microsoft Network Controller.
  • VXLAN encapsulation support (New): HNV now supports VXLAN encapsulation.
  • Software Load Balancer (SLB) interoperability (New): HNV is fully integrated with the Microsoft Software Load Balancer.
  • Compliant IEEE Ethernet header (Improved): Compliant with IEEE Ethernet standards.

HNV is a fundamental building block of Microsoft’s updated Software Defined Networking (SDN) solution, and is fully integrated into the SDN stack.

Microsoft’s new Network Controller pushes HNV policies down to a Host Agent running on each host using Open vSwitch Database Management Protocol (OVSDB) as the SouthBound Interface (SBI). The Host Agent stores this policy using a customization of the VTEP schema and programs complex flow rules into a performant flow engine in the Hyper-V switch.

The flow engine inside the Hyper-V switch is the same one used in Microsoft Azure, where it has been proven at hyper-scale in the Microsoft Azure public cloud. Additionally, the entire SDN stack, up through the Network Controller and Network Resource Provider (details coming soon), is consistent with Microsoft Azure, thus bringing the power of the Microsoft Azure public cloud to our enterprise and hosting service provider customers.

Note: For more information about OVSDB, see RFC 7047.

The Hyper-V switch supports both stateless and stateful flow rules based on simple “match action” within Microsoft’s flow engine.
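To make the "match action" idea concrete, here is a minimal Python sketch of an ordered rule table in which each rule matches on a few header fields and, on a hit, yields an action. It is only a toy model of the concept; the addresses, fields, and actions are invented, and it does not represent the Hyper-V switch flow engine itself.

```python
import ipaddress

# Toy match-action rule table. Each entry is (match criteria, action).
# Names, addresses, and actions are made up for illustration only.
FLOW_TABLE = [
    ({"dst_ip": "10.0.1.5", "dst_port": 443}, "encap_to_pa:192.168.10.21"),
    ({"dst_ip": "10.0.1.0/24"},               "allow"),
    ({},                                      "drop"),   # default rule, matches everything
]

def matches(criteria: dict, packet: dict) -> bool:
    """Return True when every criterion is satisfied by the packet's headers."""
    for field, value in criteria.items():
        if field == "dst_ip" and "/" in str(value):
            if ipaddress.ip_address(packet["dst_ip"]) not in ipaddress.ip_network(value):
                return False
        elif packet.get(field) != value:
            return False
    return True

def process(packet: dict) -> str:
    """Walk the ordered rule list and return the action of the first match."""
    for criteria, action in FLOW_TABLE:
        if matches(criteria, packet):
            return action
    return "drop"

print(process({"dst_ip": "10.0.1.5", "dst_port": 443}))   # encap_to_pa:192.168.10.21
print(process({"dst_ip": "10.0.1.77", "dst_port": 80}))   # allow
print(process({"dst_ip": "10.0.2.9", "dst_port": 80}))    # drop
```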

Network Control

The Virtual eXtensible Local Area Network (VXLAN – RFC 7348) protocol has been widely adopted in the marketplace, with support from vendors like Cisco, Brocade, Dell, HP and others. Microsoft’s HNV also now supports this encapsulation scheme using MAC distribution mode through the Microsoft Network Controller to program mappings for tenant overlay network IP addresses (Customer Address – CA) to the physical underlay network IP addresses (Provider Address – PA). Both NVGRE and VXLAN Task Offloads are supported for improved performance through third-party drivers.
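The VXLAN header itself is small and fixed: 8 bytes carrying a flags field and a 24-bit VXLAN Network Identifier (VNI), as defined in RFC 7348. The following Python sketch builds that header and pairs it with an illustrative CA-to-PA lookup of the kind the Network Controller distributes; the network names and addresses are made up for the example.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348:
    flags (I bit set), 24 reserved bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must be a 24-bit value")
    return struct.pack("!II", 0x08 << 24, vni << 8)

# Illustrative CA -> PA mapping (tenant Customer Addresses to physical
# Provider Addresses). All names and addresses are invented.
CA_TO_PA = {
    ("VNet-Blue", "10.0.1.5"): "192.168.10.21",
    ("VNet-Blue", "10.0.1.6"): "192.168.10.22",
}

def encapsulate(vnet: str, ca_dst: str, inner_frame: bytes, vni: int) -> tuple[str, bytes]:
    """Return the PA to send to and the VXLAN payload (header + original L2 frame)."""
    pa = CA_TO_PA[(vnet, ca_dst)]
    return pa, vxlan_header(vni) + inner_frame

pa, payload = encapsulate("VNet-Blue", "10.0.1.5", b"\x00" * 60, vni=5001)
print(pa, payload[:8].hex())   # 192.168.10.21 0800000000138900
```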

Windows Server 2016 Technical Preview includes a software load balancer (SLB) with full support for virtual network traffic and seamless interaction with HNV. The SLB is implemented through the performant flow engine in the data plane v-Switch and controlled by the Network Controller for Virtual IP (VIP) / Dynamic IP (DIP) mappings.
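As a rough sketch of what a VIP/DIP mapping means in practice, the snippet below hashes a flow's identifiers to pick one Dynamic IP (DIP) behind a Virtual IP (VIP), so packets of a given connection keep landing on the same backend. The addresses and the hashing scheme are illustrative only, not the SLB's actual algorithm.

```python
import hashlib

# Illustrative VIP -> DIP pool mapping; addresses are made up. The real SLB
# data path is programmed by the Network Controller into the switch flow engine.
VIP_TO_DIPS = {
    "131.107.1.10": ["10.0.1.5", "10.0.1.6", "10.0.1.7"],
}

def pick_dip(vip: str, src_ip: str, src_port: int, dst_port: int, proto: str = "TCP") -> str:
    """Hash the flow identifiers so one connection consistently maps to one DIP."""
    dips = VIP_TO_DIPS[vip]
    key = f"{src_ip}:{src_port}->{vip}:{dst_port}/{proto}".encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(dips)
    return dips[index]

print(pick_dip("131.107.1.10", "203.0.113.7", 53122, 443))
```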

HNV implements correct L2 Ethernet headers to ensure interoperability with third-party virtual and physical appliances that depend on industry-standard protocols, and Microsoft ensures that all transmitted packets carry compliant values in all fields. In addition, support for jumbo frames (MTU > 1780) is required in the physical L2 network to account for the packet overhead introduced by encapsulation protocols (NVGRE, VXLAN), while ensuring that guest virtual machines attached to an HNV virtual network maintain a 1514 MTU.
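As a rough illustration of where that encapsulation overhead comes from (approximate figures, assuming IPv4 outer headers and no VLAN tags), the calculation below shows the on-wire frame size for a 1514-byte inner frame; the MTU > 1780 guidance above sits comfortably above these minimums.

```python
# Back-of-the-envelope check of why the physical network needs jumbo frames:
# the guest's 1514-byte Ethernet frame is preserved, and encapsulation adds
# outer headers on top. Overhead figures assume IPv4 and no VLAN tags.

GUEST_FRAME = 1514                   # inner Ethernet frame preserved for the VM

VXLAN_OVERHEAD = 14 + 20 + 8 + 8     # outer Ethernet + outer IPv4 + UDP + VXLAN = 50
NVGRE_OVERHEAD = 14 + 20 + 8         # outer Ethernet + outer IPv4 + GRE/NVGRE = 42

print("VXLAN on the wire:", GUEST_FRAME + VXLAN_OVERHEAD)   # 1564
print("NVGRE on the wire:", GUEST_FRAME + NVGRE_OVERHEAD)   # 1556
```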



Download Now the FREE E-Book Building a Virtualized Network Solution Second Edition #sysctr #SDN

SDN Second Edition

Part of a series of specialized guides on System Center, this book is specifically designed for architects and cloud fabric administrators who want to understand what decisions to make during the design process and the implications of those decisions, what constitutes best practice, and, ultimately, what to do to build out a virtualized network solution that meets today’s business requirements while also providing a platform for future growth and expansion. This second edition includes coverage of the Hyper-V Network Virtualization gateway, designing a solution that extends an on-premises virtualized network solution to an external (hosted) environment, details of how to troubleshoot and diagnose some of the key connectivity challenges, and a look at the Cloud Platform System (CPS) and some of the key considerations that went into designing and building the network architecture and solution for that environment.

You can download the free ebook Building a Virtualized Network Solution, Second Edition here.



New in Windows Server Technical Preview, Network Controller #Winserv #SDN #Hyperv #NetworkController

Network Controller

New in Windows Server Technical Preview, Network Controller provides a centralized, programmable point of automation to manage, configure, monitor, and troubleshoot virtual and physical network infrastructure in your datacenter. Using Network Controller, you can automate the configuration of network infrastructure instead of performing manual configuration of network devices and services.

Network Controller Features

The following Network Controller features allow you to configure and manage virtual and physical network devices and services.

  • Fabric Network Management
  • Firewall Management
  • Network Monitoring
  • Network Topology and Discovery Management
  • Service Chaining Management
  • Software Load Balancer Management
  • Virtual Network Management
  • Windows Server Gateway Management

Fabric Network Management

This Network Controller feature allows you to easily manage the fabric, or physical network, for your datacenter stamp or cluster. Using this feature, you can configure IP subnets, virtual Local Area Networks (VLANs), Layer 2 and Layer 3 switches, and network adapters installed in host computers.

Fabric network management includes planning, designing, implementation, and auditing of the fabric network resources and network infrastructure services.

Firewall Management

This Network Controller feature allows you to configure and manage allow/deny firewall Access Control rules for your workload VMs for both East/West and North/South network traffic in your datacenter. The firewall rules are plumbed in the vSwitch port of workload VMs, and so they are distributed across your workload in the datacenter. Using the Northbound API, you can define the firewall rules for both incoming and outgoing traffic from the workload VM. You can also configure each firewall rule to log the traffic that was allowed or denied by the rule.
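As a purely hypothetical sketch of what defining such a rule through a REST Northbound API could look like, the snippet below PUTs an allow rule for inbound HTTPS traffic to a workload VM. The endpoint URL, resource names, and payload fields are invented for illustration and are not taken from the actual Network Controller REST schema, which is documented separately.

```python
import json
import requests

# Hypothetical Northbound REST endpoint and payload shape, for illustration only.
NC_REST = "https://nc.contoso.local/networking/v1"    # made-up endpoint

acl_rule = {
    "resourceId": "Allow-HTTPS-Inbound",
    "properties": {
        "action": "Allow",                       # or "Deny"
        "type": "Inbound",                       # traffic direction into the workload VM
        "protocol": "TCP",
        "sourceAddressPrefix": "*",
        "destinationAddressPrefix": "10.0.1.5/32",
        "destinationPortRange": "443",
        "priority": "100",
        "logging": "Enabled",                    # log traffic allowed/denied by this rule
    },
}

response = requests.put(
    f"{NC_REST}/accessControlLists/Web-Tier-ACL/aclRules/{acl_rule['resourceId']}",
    data=json.dumps(acl_rule),
    headers={"Content-Type": "application/json"},
    verify=False,                                # lab sketch only; use valid certificates
)
response.raise_for_status()
```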

Network Monitoring

This Network Controller feature allows you to monitor the physical and virtual network in your datacenter stamp or cluster. The Network Monitoring service uses the network object model, provided by the topology service, to determine the network devices and links to be monitored. Physical network monitoring is performed using both active network and element data.

Active network data, such as network loss and latency, is detected by sending network traffic and measuring round-trip time. The Network Monitoring service automatically determines the network points between which traffic must be sent, the quantum of traffic to be sent in order to cover all network paths, and also the loss/latency baseline and deviations over a period of time. A key aspect of this solution is fault localization. The Network Monitoring service attempts to localize devices that are causing network loss and latency. The solution leverages advanced algorithms to identify both network paths and devices in the paths that are causing performance degradation.
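A toy version of such an active probe, shown below, measures TCP connect round-trip time to an example endpoint and flags loss or a deviation from a fixed baseline. The real monitoring service chooses probe points, traffic volume, and baselines automatically; the host and threshold here are arbitrary examples.

```python
import socket
import time

def probe_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float | None:
    """Return the TCP connect time in milliseconds, or None on loss/timeout."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

BASELINE_MS = 5.0                      # arbitrary example baseline
rtt = probe_rtt("192.168.10.21")       # example host in the fabric
if rtt is None:
    print("probe lost")
elif rtt > 2 * BASELINE_MS:
    print(f"latency deviation: {rtt:.1f} ms (baseline {BASELINE_MS} ms)")
else:
    print(f"ok: {rtt:.1f} ms")
```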

Element data is collected using Simple Network Management Protocol (SNMP) polling and traps. The monitoring service collects a limited set of critical data available through public management information bases (MIBs). For example, the service monitors link state, system restarts, and Border Gateway Protocol (BGP) peer status.

The monitoring system reports the health of both devices and device groups. Health is reported based on both active and element data. Devices are, for example, physical switches and routers. Device groups are combinations of physical devices that have some relevance within the datacenter; for instance, device groups can be racks, subnets, or simply host groups. In addition to providing health information, the monitoring service also reports vital statistics such as network loss, latency, device CPU/memory usage, link utilization, and packet drops.

The Network Monitoring service also performs impact analysis. Impact analysis is the process of identifying the overlay networks affected by faults in the underlying physical network. The service uses topology information to determine each virtual network’s footprint and to report the health of impacted virtual networks. For example, if a host loses network connectivity, the system marks as impacted all virtual networks on that host that are connected to the faulty network. Similarly, if a rack loses uplink connectivity to the core network, the system determines the affected logical network and marks as impacted all virtual networks in that rack that are connected to it.
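The sketch below illustrates this kind of impact analysis with a hand-built topology: given which virtual networks have a footprint on which hosts, and which hosts sit in which rack, it marks the virtual networks affected by a host or rack fault. The names and topology are invented for the example.

```python
# Toy impact analysis over an invented topology.
VNET_FOOTPRINT = {
    "VNet-Blue":  {"host01", "host02"},
    "VNet-Green": {"host02", "host03"},
    "VNet-Red":   {"host04"},
}
RACKS = {
    "rack1": {"host01", "host02", "host03"},
    "rack2": {"host04"},
}

def impacted_by_host(host: str) -> set[str]:
    """Virtual networks with a footprint on a host that lost connectivity."""
    return {vnet for vnet, hosts in VNET_FOOTPRINT.items() if host in hosts}

def impacted_by_rack(rack: str) -> set[str]:
    """Virtual networks affected when a whole rack loses its uplink."""
    return {vnet for vnet, hosts in VNET_FOOTPRINT.items() if hosts & RACKS[rack]}

print(impacted_by_host("host02"))   # VNet-Blue and VNet-Green
print(impacted_by_rack("rack2"))    # VNet-Red
```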

Finally, the system integrates with the System Center Operations Manager (SCOM) server to report both health and statistics data. Health is reported in an aggregated manner, making it easy to traverse and understand key issues.

Network Topology and Discovery Management

This Network Controller feature allows you to automatically discover network elements in the cloud datacenter network. Network Topology and Discovery also determines how network devices are interconnected to build a topology and dependency map.

Service Chaining Management

This Network Controller feature allows you to create rules that redirect network traffic to one or more VMs that are configured as virtual appliances. There are many types of virtual appliances, such as firewall appliances, security appliances that perform deep packet inspection, and antivirus appliances. You can obtain these VM-based virtual appliances from a wide variety of independent software vendors (ISVs).

Software Load Balancer Management

This Network Controller feature allows you to enable multiple servers to host the same workload, providing high availability and scalability.

Virtual Network Management

This Network Controller feature allows you to deploy and configure Hyper-V Network Virtualization, including the Hyper-V Virtual Switch and virtual network adapters on individual VMs, and to store and distribute virtual network policies.

Network Controller supports both Network Virtualization Generic Routing Encapsulation (NVGRE) and Virtual Extensible Local Area Network (VXLAN).

Windows Server Gateway Management

This Network Controller feature allows you to deploy, configure, and manage Hyper-V hosts and virtual machines (VMs) that are members of a Windows Server Gateway cluster, providing gateway services to your tenants. Network Controller allows you to automatically deploy VMs running Windows Server Gateway, which is also called the Routing and Remote Access Service (RRAS) Multitenant Gateway, with the following gateway features:

  • Add and remove gateway VMs from the cluster and specify the level of backup required.
  • Site-to-site virtual private network (VPN) gateway connectivity between remote tenant networks and your datacenter using IPsec.
  • Site-to-site VPN gateway connectivity between remote tenant networks and your datacenter using Generic Routing Encapsulation (GRE).
  • Point-to-site VPN gateway connectivity so that your tenants’ administrators can access their resources on your datacenter from anywhere.
  • Layer 3 forwarding capability.
  • Border Gateway Protocol (BGP) routing, which allows you to manage the routing of network traffic between your tenants’ VM networks and their remote sites.

Network Controller is capable of dual-tunnel configuration of site-to-site VPN gateways and the automatic placement of tunnel end-points on separate gateways. In addition, Network Controller can load balance site-to-site and point-to-site VPN connections between gateway VMs, and it logs configuration and state changes by using logging services.

For more information on BGP, see Border Gateway Protocol (BGP) Overview.

For more information on the RRAS Multitenant Gateway, see Windows Server 2012 R2 RRAS Multitenant Gateway Deployment Guide.

For more information on Windows Server Gateway, see Windows Server Gateway.



Free Ebook: Microsoft System Center Deploying #HyperV with Software-Defined Storage & Networking #SCVMM #SDN

Deploy HyperV with SDN Ebook

This book, or proof-of-concept (POC) guide, will cover a variety of aspects that make up the foundation of the software-defined datacenter: virtualization, storage, and networking. By the end, you should have a fully operational, small-scale configuration that will enable you to proceed with evaluation of your own key workloads, experiment with additional features and capabilities, and continue to build your knowledge.

The book won’t, however, cover all aspects of this software-defined datacenter foundation. The book won’t, for instance, explain how to configure and implement Hyper-V Replica, enable and configure Storage Quality of Service (QoS), or discuss Automatic Virtual Machine Activation. Yet these are all examples of capabilities that this POC configuration would enable you to evaluate with ease.

Chapter 1: Design and planning. This chapter focuses on the overall design of the POC configuration. It discusses each layer of the solution, key features and functionality within each layer, and the reasons why we have chosen to deploy this particular design for the POC.
Chapter 2: Deploying the management cluster. This chapter focuses on configuring the core management backbone of the POC configuration. You’ll deploy directory, update, and deployment services, along with resilient database and VM management infrastructure. This lays the groundwork for streamlined deployment of the compute, storage, and network infrastructure in later chapters.
Chapter 3: Configuring network infrastructure. With the management backbone configured, you will spend time in System Center Virtual Machine Manager, building the physical network topology that was defined in Chapter 2. This involves configuring logical networks, uplink port profiles, port classifications, and network adapter port profiles, and culminates in the creation of a logical switch.
Chapter 4: Configuring storage infrastructure. This chapter focuses on deploying the software-defined storage layer of the POC. You’ll use System Center Virtual Machine Manager to transform a pair of bare-metal servers, with accompanying just-a-bunch-of-disks (JBOD) enclosures, into a resilient, high-performance Scale-Out File Server (SOFS) backed by tiered storage spaces.
Chapter 5: Configuring compute infrastructure. With the storage layer constructed and deployed, this chapter focuses on deploying the compute layer that will ultimately host the workloads deployed in Chapter 6. You’ll use the same bare-metal deployment capabilities covered in Chapter 4 to deploy several Hyper-V hosts and then optimize these hosts to get them ready to accept virtualized workloads.
Chapter 6: Configuring network virtualization. In Chapter 3, you will have designed and deployed the underlying logical network infrastructure and, in doing so, laid the groundwork for deploying network virtualization. In this chapter, you’ll use System Center Virtual Machine Manager to design, construct, and deploy VM networks to suit a number of different enterprise scenarios.

By the end of Chapter 6, you will have a fully functioning foundation for a software-defined datacenter consisting of software-defined compute with Hyper-V, software-defined storage, and software-defined networking.

Here you can download the free ebook: Microsoft System Center Deploying Hyper-V with Software-Defined Storage & Networking.

Thank you Microsoft TechNet, Cloud Platform Team, and Mitch Tulloch for this Free Awesome Ebook 😉



Software-defined networking with Windows Server 2012 R2 and System Center 2012 R2 #SDN #SCVMM #Azure

SDN Microsoft Azure


Read Software-defined networking with Windows Server 2012 R2 and System Center 2012 R2 to learn more about software-defined networking.



#HybridCloud with Microsoft Azure Virtual Networks Overview #sysctr #Azure #SCVMM #SDN

Connecting an Azure virtual network to another Azure virtual network is very similar to connecting a virtual network to an on-premises site location. Both connectivity types use a virtual network gateway to provide a secure tunnel using IPsec/IKE. The VNets you connect can be in different subscriptions and different regions. You can even combine VNet to VNet communication with multi-site configurations. This lets you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity, as shown in the diagram below:

VnetToVnet

What can I do with VNet to VNet connectivity?

Cross region geo-redundancy and geo-presence

  • You can set up your own geo-replication or synchronization with secure connectivity without going over internet-facing endpoints.
  • With Azure Load Balancer and Microsoft or third-party clustering technology, you can set up highly available workloads with geo-redundancy across multiple Azure regions. One important example is to set up SQL Server Always On Availability Groups spanning multiple Azure regions.

Regional multi-tier applications with strong isolation boundary

  • Within the same region, you can set up multi-tier applications with multiple virtual networks connected together, with strong isolation and secure inter-tier communication.

Cross subscription, inter-organization communication in Azure

  • If you have multiple Azure subscriptions, you can now connect workloads from different subscriptions together securely between virtual networks.
  • For enterprises or service providers, it is now possible to enable cross-organization communication with secure VPN technology within Azure.

Here you can find all the information about VNet to VNet connectivity in Microsoft Azure.
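One planning rule worth checking before you connect virtual networks: their address spaces must not overlap, and address spaces are hard to change once the VNets are in use. A quick check with Python's ipaddress module is sketched below; the VNet names and prefixes are examples only.

```python
import ipaddress
from itertools import combinations

# Example address plans; "VNet-OnPrem" deliberately overlaps "VNet-EastUS".
VNETS = {
    "VNet-EastUS": ["10.1.0.0/16"],
    "VNet-WestUS": ["10.2.0.0/16"],
    "VNet-OnPrem": ["10.1.128.0/17"],
}

def overlapping_pairs(vnets: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Return every pair of VNets whose address spaces overlap."""
    clashes = []
    for (name_a, spaces_a), (name_b, spaces_b) in combinations(vnets.items(), 2):
        nets_a = [ipaddress.ip_network(s) for s in spaces_a]
        nets_b = [ipaddress.ip_network(s) for s in spaces_b]
        if any(a.overlaps(b) for a in nets_a for b in nets_b):
            clashes.append((name_a, name_b))
    return clashes

print(overlapping_pairs(VNETS))   # [('VNet-EastUS', 'VNet-OnPrem')]
```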

If you’ve already created a design plan for your virtual network, the following how-to guidance will help you configure specific settings. Keep in mind that properly designing your virtual network to support your environment is critical. Many settings cannot be changed once your virtual network is in use. If you haven’t yet made design decisions regarding your virtual network, please see Virtual Network Overview.

  • Configure a Cloud-Only Virtual Network in the Management Portal
  • Configure a Site-to-Site VPN in the Management Portal
  • Configure a Site-to-Site VPN using Windows Server 2012 Routing and Remote Access Service (RRAS)
  • Configure a Point-to-Site VPN in the Management Portal
  • Configure a Multi-Site VPN
  • Configure VNet to VNet Connectivity
  • Configure a Virtual Network Gateway in the Management Portal
  • Change a Virtual Network Gateway Routing Type
  • Create an Affinity Group in the Management Portal
  • Configure a Virtual Network Using a Network Configuration File
  • Export Virtual Network Settings to a Network Configuration File
  • Import a Network Configuration File
  • Deleting a Virtual Network
  • View and Edit Virtual Network Properties
  • Add or Remove DNS Servers for a Virtual Network
  • Configure a Static Internal IP Address (DIP) for a VM
  • Move a VM or Role Instance to a Different Subnet

Here you can find the Microsoft Azure Virtual Networks Configuration Tasks.

HybridCloud Networking