Integrating your storage with Microsoft cloud services gives you access to a broad range of services and cloud platform options. You can use prepackaged solutions bundled with existing services, use existing services as a starting point and add configuration or code for a custom fit, or combine storage building blocks with your own code to build a storage solution or apps from scratch.
Any developer or IT professional can be productive with Azure. The integrated tools, pre-built templates, and managed services make it easier to build and manage enterprise, mobile, web, and Internet of Things (IoT) apps faster, using skills you already have and technologies you already know. Microsoft is also the only vendor positioned as a Leader across Gartner's Magic Quadrants for Cloud Infrastructure as a Service, Application Platform as a Service, and Cloud Storage Services for the second consecutive year.
In today's world it is all about mobility and applications: at work, at school, at home, and even when you play sports like biking or running. I think that's why Microsoft built Windows 10 as one platform, so you get the best experience with applications on every device.
Building your own websites with Microsoft Azure is easy and fun to work with.
To run all those web apps in the cloud, you need storage and the capacity to keep your data safe in the cloud under strong security policies.
Microsoft Azure offers many different kinds of cloud storage for your data.
Microsoft Azure Data and Storage
Learn about Azure Storage, and how to create applications using Azure blobs, tables, queues, and files:
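As a quick taste of working with blobs, the sketch below uploads a local file to a blob container with the Azure PowerShell storage cmdlets. The storage account name, key, container name, and file path are placeholders, and it assumes the Azure PowerShell module is installed on your workstation.

```powershell
# Placeholder account name and key; substitute your own values.
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" `
                               -StorageAccountKey "<storage-account-key>"

# Create a container and upload a local file as a block blob.
New-AzureStorageContainer -Name "documents" -Context $ctx
Set-AzureStorageBlobContent -File "C:\Data\report.docx" `
                            -Container "documents" `
                            -Blob "report.docx" `
                            -Context $ctx

# List the blobs in the container to confirm the upload.
Get-AzureStorageBlob -Container "documents" -Context $ctx
```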
Of course, when you have a lot of data, you want to analyze it for the business and build good reports or dashboards to make the right decisions. Microsoft Azure cloud services include data and analytics offerings:
Microsoft Azure Data and Analytics
Learn to create Hadoop clusters, process big data, develop solutions using streaming or historical data, and analyze the results:
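As a rough sketch of where that journey starts, the snippet below provisions a small HDInsight (Hadoop) cluster with Azure PowerShell. The cluster name, location, storage account, and node count are placeholders, and the cmdlet shown comes from the classic Azure PowerShell HDInsight module.

```powershell
# Admin credentials for the new cluster.
$creds = Get-Credential

# Placeholder names; the cluster uses an existing storage account as its default container.
New-AzureHDInsightCluster -Name "mypoc-hadoop" `
                          -Location "West Europe" `
                          -DefaultStorageAccountName "mystorageaccount.blob.core.windows.net" `
                          -DefaultStorageAccountKey "<storage-account-key>" `
                          -DefaultStorageContainerName "hdinsight" `
                          -ClusterSizeInNodes 4 `
                          -Credential $creds
```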
When you are still working with an on-premises datacenter only, Microsoft makes it easy to move your datacenter toward hybrid cloud scenarios.
Think of a twin datacenter for your core business applications, or storing your long-term protection data in an Azure Backup vault.
Microsoft Azure Hybrid Integration
Learn how to integrate the enterprise and the cloud with BizTalk Services:
To make those Microsoft solutions available to everyone, you need developers and developer environments.
Microsoft Azure Developer Services
Learn how to detect issues, diagnose crashes, and track usage of your mobile and web apps hosted anywhere: on Azure or on your own IIS or J2EE servers:
This is a super simple "getting started" experience for deploying single- and multi-container Dockerized applications utilizing Azure Resource Manager templates and the new Docker Extension.
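A minimal sketch of kicking off such a template deployment from Azure PowerShell is shown below. The resource group name, location, template URL, and parameter file are placeholders (the template itself would carry the VM definition and the Docker extension resource), and the cmdlet names are from the AzureRM module.

```powershell
# Create a resource group to hold the deployment (name and location are placeholders).
New-AzureRmResourceGroup -Name "docker-poc" -Location "West Europe"

# Deploy an ARM template that provisions the VM(s) and applies the Docker extension.
New-AzureRmResourceGroupDeployment -ResourceGroupName "docker-poc" `
                                   -TemplateUri "<url-to-azuredeploy.json>" `
                                   -TemplateParameterFile "C:\Templates\azuredeploy.parameters.json"
```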
The Storage Spaces Direct stack includes the following, starting from the bottom:
Hardware: The storage system consists of a minimum of four storage nodes with local storage. Each storage node can have internal disks, or disks in an external SAS-connected JBOD enclosure. The disk devices can be SATA, NVMe, or SAS disks.
Software Storage Bus: The Software Storage Bus spans all the storage nodes and brings together the local storage in each node, so all disks are visible to the Storage Spaces layer above.
Storage Pool: The storage pool spans all local storage across all the nodes.
Storage Spaces: Storage Spaces (aka virtual disks) provide resiliency to disk or node failures as data copies are stored on different storage nodes.
Resilient File System (ReFS): ReFS provides the file system in which the Hyper-V VM files are stored. ReFS is a premier file system for virtualized deployments and includes optimizations for Storage Spaces such as error detection and automatic correction. In addition, ReFS provides accelerations for VHD(X) operations such as fixed VHD(X) creation, dynamic VHD(X) growth, and VHD(X) merge.
Clustered Shared Volumes: CSVFS layers above ReFS to bring all the mounted volumes into a single namespace.
Scale-Out File Server: This is the top layer of the storage stack that provides remote access to the storage system using the SMB3 access protocol.
Windows Server Technical Preview introduces Storage Spaces Direct, which enables building highly available (HA) storage systems with local storage. This is a significant step forward in Microsoft Windows Server software-defined storage (SDS) as it simplifies the deployment and management of SDS systems and also unlocks use of new classes of disk devices, such as SATA and NVMe disk devices, that were previously not possible with clustered Storage Spaces with shared disks.
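As a rough sketch of how little is needed to stand up this stack, the snippet below creates a cluster from four nodes with local disks, enables Storage Spaces Direct, and carves out a resilient CSV-on-ReFS volume for Hyper-V VM files. Node names, the cluster name, and the volume size are placeholders, and cmdlet behavior may differ slightly in the Technical Preview builds.

```powershell
# Placeholder node names; assumes four servers with local SATA/NVMe/SAS disks
# and the Failover Clustering feature installed on each node.
$nodes = "S2D-N1","S2D-N2","S2D-N3","S2D-N4"

Test-Cluster -Node $nodes
New-Cluster -Name "S2D-CLUSTER" -Node $nodes -NoStorage

# Claim the local disks in every node into the Software Storage Bus and storage pool.
Enable-ClusterStorageSpacesDirect

# Create a resilient volume (Storage Spaces + ReFS + CSV) for Hyper-V VM files.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore01" `
           -FileSystem CSVFS_ReFS -Size 2TB
```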
With the introduction of Premium Storage, Microsoft Azure now offers two types of durable storage: Premium Storage and Standard Storage. Premium Storage stores data on the latest technology Solid State Drives (SSDs) whereas Standard Storage stores data on Hard Disk Drives (HDDs). Premium Storage is designed for Azure Virtual Machine workloads which require consistent high IO performance and low latency in order to host IO intensive workloads like OLTP, Big Data, and Data Warehousing on platforms like SQL Server, MongoDB, Cassandra, and others. With Premium Storage, more customers will be able to lift-and-shift demanding enterprise applications to the cloud.
Premium Storage is currently available for Page Blobs and Data Disks used by Azure Virtual Machines. You can provision a Premium Storage Data Disk with the right performance characteristics to meet your requirements. You can attach multiple disks to a VM and enable up to 32 TB of storage per VM with more than 64,000 IOPS per VM at low-millisecond latency for read operations.
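As an illustration of provisioning such a disk, the sketch below creates a Premium (SSD-backed) storage account and attaches a new Premium data disk to an existing DS-series virtual machine, using the classic (service management) Azure PowerShell cmdlets. The account, cloud service, and VM names and the disk size are placeholders.

```powershell
# Create a Premium storage account in the same region as the VM (placeholder names).
New-AzureStorageAccount -StorageAccountName "mypremiumstore" `
                        -Location "West Europe" `
                        -Type "Premium_LRS"

# Attach a new 512 GB Premium data disk to an existing DS-series VM.
Get-AzureVM -ServiceName "mycloudservice" -Name "sqlvm01" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 512 -DiskLabel "datadisk1" -LUN 0 `
        -MediaLocation "https://mypremiumstore.blob.core.windows.net/vhds/sqlvm01-data1.vhd" |
    Update-AzureVM
```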
This book, or proof-of-concept (POC) guide, will cover a variety of aspects that make up the foundation of the software-defined datacenter: virtualization, storage, and networking. By the end, you should have a fully operational, small-scale configuration that will enable you to proceed with evaluation of your own key workloads, experiment with additional features and capabilities, and continue to build your knowledge.
The book won’t, however, cover all aspects of this software-defined datacenter foundation. The book won’t, for instance, explain how to configure and implement Hyper-V Replica, enable and configure Storage Quality of Service (QoS), or discuss Automatic Virtual Machine Activation. Yet these are all examples of capabilities that this POC configuration would enable you to evaluate with ease.
Chapter 1: Design and planning. This chapter focuses on the overall design of the POC configuration. It discusses each layer of the solution, key features and functionality within each layer, and the reasons why we have chosen to deploy this particular design for the POC.
Chapter 2: Deploying the management cluster. This chapter focuses on configuring the core management backbone of the POC configuration. You'll deploy directory, update, and deployment services, along with resilient database and VM management infrastructure. This lays the groundwork for streamlined deployment of the compute, storage, and network infrastructure in later chapters.
Chapter 3: Configuring network infrastructure. With the management backbone configured, you will spend time in System Center Virtual Machine Manager, building the physical network topology that was defined in Chapter 2. This involves configuring logical networks, uplink port profiles, port classifications, and network adapter port profiles, and culminates in the creation of a logical switch.
Chapter 4: Configuring storage infrastructure. This chapter focuses on deploying the software-defined storage layer of the POC. You'll use System Center Virtual Machine Manager to transform a pair of bare-metal servers, with accompanying just a bunch of disks (JBOD) enclosures, into a resilient, high-performance Scale-Out File Server (SOFS) backed by tiered storage spaces.
Chapter 5: Configuring compute infrastructure. With the storage layer constructed and deployed, this chapter focuses on deploying the compute layer that will ultimately host workloads that will be deployed in Chapter 6. You'll use the same bare-metal deployment capabilities covered in Chapter 4 to deploy several Hyper-V hosts and then optimize these hosts to get them ready for accepting virtualized workloads.
Chapter 6: Configuring network virtualization. In Chapter 3, you will have designed and deployed the underlying logical network infrastructure and, in doing so, laid the groundwork for deploying network virtualization. In this chapter, you'll use System Center Virtual Machine Manager to design, construct, and deploy VM networks to suit a number of different enterprise scenarios.
By the end of Chapter 6, you will have a fully functioning foundation for a software-defined datacenter consisting of software-defined compute with Hyper-V, software-defined storage, and software-defined networking.
Data deduplication involves finding and removing duplication within data without compromising its fidelity or integrity. The goal is to store more data in less space by segmenting files into small variable-sized chunks (32–128 KB), identifying duplicate chunks, and maintaining a single copy of each chunk. Redundant copies of a chunk are replaced by a reference to the single copy. The chunks are compressed and then organized into special container files in the System Volume Information folder. The result is an on-disk transformation of each file as shown in Figure 1. After deduplication, files are no longer stored as independent streams of data; they are replaced with stubs that point to data blocks stored within a common chunk store. Because these files share blocks, those blocks are only stored once, which reduces the disk space needed to store all files. During file access, the correct blocks are transparently assembled to serve the data without the application or the user having any knowledge of the on-disk transformation to the file. This enables administrators to apply deduplication to files without having to worry about any change in behavior to the applications or impact to users who are accessing those files.
Figure 1: On-disk transformation of files during data deduplication
After a volume is enabled for deduplication and the data is optimized, the volume contains the following:
Unoptimized files. For example, unoptimized files could include files that do not meet the selected file-age policy setting, system state files, alternate data streams, encrypted files, files with extended attributes, files smaller than 32 KB, other reparse point files, or files in use by other applications (the “in use” limit is removed in Windows Server 2012 R2).
Optimized files. Files that are stored as reparse points that contain pointers to a map of the respective chunks in the chunk store that are needed to restore the file when it is requested.
Chunk store. Location for the optimized file data.
Additional free space. The optimized files and chunk store occupy much less space than they did prior to optimization.
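A quick way to see this post-optimization state on a given volume is with the deduplication cmdlets, as in the sketch below (the drive letter is a placeholder):

```powershell
# Inspect the results of optimization on volume E:.
Get-DedupStatus -Volume "E:" | Format-List    # optimized and in-policy file counts, space saved
Get-DedupVolume -Volume "E:" | Format-List    # savings rate and capacity figures for the volume
Get-DedupMetadata -Volume "E:"                # chunk store details (containers, chunks, compression)
```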
To cope with data storage growth in the enterprise, administrators are consolidating servers and making capacity scaling and data optimization key goals. Data deduplication provides practical ways to achieve these goals, including:
Capacity optimization. Data deduplication stores more data in less physical space. It achieves greater storage efficiency than was possible by using features such as Single Instance Storage (SIS) or NTFS compression. Data deduplication uses subfile variable-size chunking and compression, which deliver optimization ratios of 2:1 for general file servers and up to 20:1 for virtualization data.
Scale and performance. Data deduplication is highly scalable, resource efficient, and nonintrusive. It can process up to 50 MB per second in Windows Server 2012 R2, and about 20 MB of data per second in Windows Server 2012. It can run on multiple volumes simultaneously without affecting other workloads on the server. Low impact on the server workloads is maintained by throttling the CPU and memory resources that are consumed. If the server gets very busy, deduplication can stop completely. In addition, administrators have the flexibility to run data deduplication jobs at any time, set schedules for when data deduplication should run, and establish file selection policies.
Reliability and data integrity. When data deduplication is applied, the integrity of the data is maintained. Data Deduplication uses checksum, consistency, and identity validation to ensure data integrity. For all metadata and the most frequently referenced data, data deduplication maintains redundancy to ensure that the data is recoverable in the event of data corruption.
Bandwidth efficiency with BranchCache. Through integration with BranchCache, the same optimization techniques are applied to data transferred over the WAN to a branch office. The result is faster file download times and reduced bandwidth consumption.
Optimization management with familiar tools. Data deduplication has optimization functionality built into Server Manager and Windows PowerShell. Default settings can provide savings immediately, or administrators can fine-tune the settings to see more gains. One can easily use Windows PowerShell cmdlets to start an optimization job or schedule one to run in the future. Installing the Data Deduplication feature and enabling deduplication on selected volumes can also be accomplished by using an Unattend.xml file that calls a Windows PowerShell script and can be used with Sysprep to deploy deduplication when a system first boots.
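As a minimal sketch of that workflow, the snippet below installs the feature, enables deduplication on a volume, starts an optimization job immediately, and schedules a nightly job. The drive letter, schedule name, and time window are placeholders.

```powershell
# Install the feature and enable deduplication on volume E:.
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType Default

# Start an optimization job now, or schedule one for off-peak hours.
Start-DedupJob -Volume "E:" -Type Optimization
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization `
                  -Days Monday,Tuesday,Wednesday,Thursday,Friday `
                  -Start "23:00" -DurationHours 6
```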
Deduplicating Microsoft System Center 2012 R2 DPM storage:
Business benefits
Using deduplication with DPM can result in large savings. The amount of space saved by deduplication when optimizing DPM backup data varies depending on the type of data being backed up. For example, a backup of an encrypted database server may result in minimal savings, since any duplicate data is hidden by the encryption process. However, backup of a large Virtual Desktop Infrastructure (VDI) deployment can result in very large savings, in the 70-90+% range, since there is typically a large amount of data duplication between the virtual desktop environments. In the configuration described in this topic, Microsoft ran a variety of test workloads and saw savings ranging between 50% and 90%.
Recommended deployment
To deploy DPM as a virtual machine backing up data to a deduplicated volume, Microsoft recommends the following deployment topology:
DPM running in a virtual machine in a Hyper-V host cluster.
DPM storage using VHD/VHDX files stored on an SMB 3.0 share on a file server.
For this example deployment, Microsoft configured the file server as a Scale-Out File Server (SOFS) deployed using storage volumes configured from Storage Spaces pools built with directly connected SAS drives. Note that this deployment ensures performance at scale. A minimal sketch of enabling deduplication on the file server volumes follows the notes below.
Note the following:
This scenario is supported for DPM 2012 R2.
The scenario is supported for all workloads for which data can be backed up by DPM 2012 R2.
All the Windows File Server nodes on which DPM virtual hard disks reside and on which deduplication will be enabled must be running Windows Server 2012 R2 with Update Rollup November 2014.
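With those prerequisites in place, enabling deduplication on the SOFS volumes that host the DPM VHDX files looks roughly like the sketch below. The drive letter and schedule window are placeholders, and the full DPM deduplication guidance includes additional tuning steps not shown here.

```powershell
# Run on each file server node that hosts the DPM virtual hard disks.
Install-WindowsFeature -Name FS-Data-Deduplication

# The DPM backup VHDX files stay open continuously, so the volume is treated like a
# virtualization (open-file) workload rather than a general file share.
Enable-DedupVolume -Volume "D:" -UsageType HyperV
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0   # optimize files regardless of age

# Run optimization in the window when DPM backup jobs are not writing to the volume.
New-DedupSchedule -Name "DpmDedupWindow" -Type Optimization -Start "06:00" -DurationHours 6
```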