mountainss Cloud and Datacenter Management Blog

Microsoft Hybrid Cloud blogsite about Management



Getting started with #Microsoft Azure Cognitive Services in #Containers #Azure #AI #AKS #Docker

Microsoft Visual Studio Code Tools for AI

With container support, customers can use Azure’s intelligent Cognitive Services capabilities, wherever the data resides. This means customers can perform facial recognition, OCR, or text analytics operations without sending their content to the cloud. Their intelligent apps are portable and scale with greater consistency whether they run on the edge or in Azure.

Bringing AI to the Edge, via Eric Boyd, Corporate Vice President, Azure AI

Get started with these Azure Cognitive Services Containers

Building solutions with machine learning often requires a data scientist. Azure Cognitive Services enables organizations to take advantage of AI through their developers, without requiring a data scientist. Microsoft does this by taking the machine learning models, the pipelines, and the infrastructure needed to build a model and packaging them up into a Cognitive Service for vision, speech, search, text processing, language understanding, and more. This makes it possible for anyone who can write a program to use machine learning to improve an application. However, many enterprises still face challenges building large-scale AI systems. Today Microsoft announced container support for Cognitive Services, making it significantly easier for developers to build ML-driven solutions.

Microsoft currently offers the following containers:

  • Text Analytics Containers
  • Face Container
  • Recognize Text Container
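
Here is a quick sketch of what running one of these looks like. The Text Analytics images were published to the Microsoft Container Registry; the repository name, billing endpoint, and key below follow the documented pattern but are placeholders, so treat them as assumptions and substitute your own values:

  docker pull mcr.microsoft.com/azure-cognitive-services/sentiment
  docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
    mcr.microsoft.com/azure-cognitive-services/sentiment \
    Eula=accept \
    Billing=https://<your-region>.api.cognitive.microsoft.com/text/analytics/v2.0 \
    ApiKey=<your-text-analytics-key>

Once the container reports that it is listening, you can browse to http://localhost:5000/swagger on the host to try the API locally.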

More information from Lance Olson, Director of Program Management, Applied AI, here

Start with installing and running the containers

Request access to the private container registry

You must first complete and submit the Cognitive Services Vision Containers Request form to request access to the Face container. The form requests information about you, your company, and the user scenario for which you’ll use the container. Once submitted, the Azure Cognitive Services team reviews the form to ensure that you meet the criteria for access to the private container registry.

Important!

You must use an email address associated with either a Microsoft Account (MSA) or Azure Active Directory (Azure AD) account in the form. If your request is approved, you then receive an email with instructions describing how to obtain your credentials and access the private container registry.
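
As a sketch, logging in and pulling the Face image then looks something like the following. The registry host and repository path here follow the gated-preview documentation pattern, but treat them as assumptions; the instructions in your approval email are authoritative:

  docker login containerpreview.azurecr.io -u <username> -p <password>
  docker pull containerpreview.azurecr.io/microsoft/cognitive-services-face:latest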

Read more about installing the Containers here

The Face container uses a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
Configuration settings
Configuration settings in the Face container are hierarchical, and all containers use a shared hierarchy, based on the following top-level structure:

  • ApiKey
  • ApplicationInsights
  • Authentication
  • Billing
  • CloudAI
  • Eula
  • Fluentd
  • Logging
  • Mounts
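
These hierarchical settings are passed on the docker run command line as colon-separated name=value pairs. A minimal sketch for the Face container, assuming the preview registry image and a West Europe billing endpoint (both placeholders):

  docker run --rm -it -p 5000:5000 --memory 6g --cpus 2 \
    containerpreview.azurecr.io/microsoft/cognitive-services-face \
    Eula=accept \
    Billing=https://westeurope.api.cognitive.microsoft.com/face/v1.0 \
    ApiKey=<your-face-api-key> \
    Logging:Console:LogLevel:Default=Information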

Read more about configuring the containers here

Follow Containers in the Cloud Community Group

 




via @MSAzureCAT Enterprise #Cloud Control Plane Planning #AzureDevOps #Pipelines

End-to-end Pipelines for Automating Microsoft Azure Deployments

 

Overview:

Imagine a fully automated, end-to-end pipeline for your cloud deployments—one that encompasses and automates everything:

• Source code repos.
• The build and release iterations.
• Agile processes supported by continuous integration and continuous deployment (CI/CD).
• Security and governance.
• Business unit chargebacks.
• Support and maintenance.

Azure services and infrastructure-as-code (IaC) make control plane automation very achievable. Many enterprise IT groups dream of creating or unifying their disparate automation processes and supporting a common, enterprise-wide datacenter control plane in the cloud that is integrated with their existing or new DevOps workflows. Their development environments may use Jenkins, Azure DevOps Services (formerly Visual Studio Team Services), Visual Studio Team Foundation Server (TFS), Atlassian, or other services. The challenge is to automate beyond the CI/CD pipeline to the management and policy layers. From a planning and architecture standpoint, it can seem like an overwhelming program of interdependent systems and processes.

This guide outlines a planning process that you can use for automated support of your cloud deployments and DevOps workflows beyond the CI/CD pipeline. The Azure platform provides services you can use, or you can choose to work with third-party or open source options. The process is based on real-world examples that we have deployed with enterprise customers on Azure.

This whitepaper was authored by Tim Ehlen, edited by Nanette Ray, and reviewed by AzureCAT.

Download the Awesome eBook here on the AzureCAT Team Blog

Follow AzureCAT and SQLCAT on Twitter



Using #Azure Pipelines for your Open Source Project #AzureDevOps

Azure Pipelines for your Open Source Projects

Damian speaks to Edward Thomson about how to get started with Azure Pipelines – right from GitHub. The deep integration and GitHub Marketplace app for Azure Pipelines make it incredibly easy to build your projects no matter what language you’re using. You can even use the builds as part of your PR checks!

https://github.com/marketplace/azure-pipelines

Edward shows us the incredible (free!) offers for open and closed source projects, and walks through creating and running a new Azure Pipelines build from scratch in only a few minutes.

Subscribe to Azure DevOps on YouTube



#Microsoft Azure Hub-Spoke model by Enterprise Design 1 of 4 #Azure #Cloud

 

Azure Hub-Spoke Architecture

Microsoft Azure Hub-Spoke Architecture

This Enterprise reference architecture shows how to implement a hub-spoke topology in Azure. The hub is a virtual network (VNet) in Azure that acts as a central point of connectivity to your on-premises network. The spokes are VNets that peer with the hub, and can be used to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN gateway connection.

We only use Azure private peering.

For this hybrid cloud strategy we created four Microsoft Azure subscriptions via the EA portal:

  1. Azure HUB subscription for connectivity via Azure ExpressRoute to the on-premises datacenter.
  2. Azure Spoke 1 for production workloads and cloud services.
  3. Azure Spoke 2 for test and acceptance cloud services.
  4. Azure Spoke 3 for future plans.

Azure has naming rules and restrictions for its resources, and Microsoft publishes a baseline set of recommendations for naming conventions. You can use these recommendations as a starting point for your own conventions specific to your needs.

The choice of a name for any resource in Microsoft Azure is important because:

  • It is difficult to change a name later.
  • Names must meet the requirements of their specific resource type.

Consistent naming conventions make resources easier to locate. They can also indicate the role of a resource in a solution. The key to success with naming conventions is establishing and following them across your applications and organizations.

Azure connectivity and RBAC Identity

This tenant is federated to Office 365 via AD FS and Azure AD Connect. Identity management is provisioned via Microsoft Identity Manager 2016 (MIM 2016). With this already in place, we can configure Microsoft Azure RBAC in the subscriptions.

Access management for cloud resources is a critical function for any organization that is using the cloud. Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of resources in Azure.
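
For example, here is a minimal Azure CLI sketch that assigns the built-in Contributor role at resource group scope; the user, subscription ID, and resource group name are hypothetical:

  # Assign the built-in Contributor role on one resource group in Spoke 1
  az role assignment create \
    --assignee serveradmin@contoso.com \
    --role "Contributor" \
    --scope "/subscriptions/<spoke1-subscription-id>/resourceGroups/rg-spoke1-workload"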

Business Development

For Business Development we have a separate Active Directory in its own forest, also federated to Microsoft Office 365 via AD FS. For this environment we built one Azure subscription with a temporary site-to-site VPN connection to the on-premises datacenter for the “Lift and Shift” migration via Azure Site Recovery (ASR).

S2S VPN IKE v2 tunnel with Cisco and Azure.

Azure Virtual Networks

The next step is to build the connections between the Azure HUB subscription and the Azure Spoke subscription(s); every Microsoft Azure subscription has its own virtual network (VNET). Connecting them is called VNET peering.

Virtual network peering enables you to seamlessly connect two Azure virtual networks. Once peered, the virtual networks appear as one, for connectivity purposes. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same virtual network, through private IP addresses only. Azure supports:

  • VNet peering – connecting VNets within the same Azure region
  • Global VNet peering – connecting VNets across Azure regions

Here you see my step-by-step VNET peering creation from HUB to Spoke 1:

Go to the VNET of the Azure HUB subscription, and then to Peerings => Add.

Here you make the connection with the Spoke 1 Azure subscription.

For the Azure HUB, the peering to Spoke 1 is done.

Now we go to the VNET of the Azure Spoke 1 subscription to make the connection.

Go to VNET => Peerings => click Add in the Azure Spoke 1 subscription.

Connect here to the Azure HUB.

The VNET peering between the Azure HUB subscription and Spoke 1 is connected.

In the same order you make the other VNET peerings from the Azure HUB subscription to the other Spoke subscriptions, so that network connectivity between the VNETs works; the Azure Internet edge lives in the HUB and serves the other subscriptions.
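
The same peerings can be scripted with the Azure CLI instead of the portal. Here is a sketch with hypothetical resource group and VNET names; for cross-subscription peering you reference the remote VNET by its full resource ID, and note that the parameter name has shifted across CLI versions (recent versions use --remote-vnet):

  # HUB side: peer the hub VNET to Spoke 1 (run in the HUB subscription)
  az network vnet peering create \
    --name HUB-to-Spoke1 \
    --resource-group rg-hub-network \
    --vnet-name vnet-hub \
    --remote-vnet "/subscriptions/<spoke1-sub-id>/resourceGroups/rg-spoke1-network/providers/Microsoft.Network/virtualNetworks/vnet-spoke1" \
    --allow-vnet-access

  # Spoke 1 side: peer back to the hub (run in the Spoke 1 subscription)
  az network vnet peering create \
    --name Spoke1-to-HUB \
    --resource-group rg-spoke1-network \
    --vnet-name vnet-spoke1 \
    --remote-vnet "/subscriptions/<hub-sub-id>/resourceGroups/rg-hub-network/providers/Microsoft.Network/virtualNetworks/vnet-hub" \
    --allow-vnet-access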

In the Azure reference architecture we also do Security by Design in the cloud, with a firewall and Azure Network Security Groups (NSG), and every Azure component gets its own tag for security groups and billing/usage.

Azure Storage

In every Microsoft Azure subscription (HUB and Spoke) we created a storage account. You can choose between different kinds of storage in Microsoft Azure:

  • Durable and highly available. Redundancy ensures that your data is safe in the event of transient hardware failures. You can also opt to replicate data across datacenters or geographical regions for additional protection from local catastrophe or natural disaster. Data replicated in this way remains highly available in the event of an unexpected outage.
  • Secure. All data written to Azure Storage is encrypted by the service. Azure Storage provides you with fine-grained control over who has access to your data.
  • Scalable. Azure Storage is designed to be massively scalable to meet the data storage and performance needs of today’s applications.
  • Managed. Microsoft Azure handles maintenance and any critical problems for you.
  • Accessible. Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft provides SDKs for Azure Storage in a variety of languages — .NET, Java, Node.js, Python, PHP, Ruby, Go, and others — as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your data.

Azure Storage includes these data services:

  • Azure Blobs: A massively scalable object store for text and binary data.
  • Azure Files: Managed file shares for cloud or on-premises deployments.
  • Azure Queues: A messaging store for reliable messaging between application components.
  • Azure Tables: A NoSQL store for schemaless storage of structured data.

Creating your Azure Storage accounts by Design.

One of our Security by Design architecture policies is to encrypt all storage in Azure via Microsoft Azure Key Vault.
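
A hedged Azure CLI sketch of that policy: create the account (service-side encryption is on by default) and then point it at a customer-managed key held in Key Vault. The account, vault, and key names below are hypothetical:

  # Storage account; data at rest is encrypted by the service by default
  az storage account create \
    --name sthubweu001 \
    --resource-group rg-hub-storage \
    --location westeurope \
    --sku Standard_GRS \
    --kind StorageV2

  # Switch encryption to a customer-managed key in Azure Key Vault
  az storage account update \
    --name sthubweu001 \
    --resource-group rg-hub-storage \
    --encryption-key-source Microsoft.Keyvault \
    --encryption-key-vault https://kv-hub-weu.vault.azure.net \
    --encryption-key-name storage-cmk \
    --encryption-key-version <key-version>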

Deploying Azure IaaS Virtual Machine with ARM Templates

Enterprise organizations with more than ten employees managing IT datacenters work by process and order to do the job for the business. When they all use the Azure portal and deploy virtual machines manually, you get a mess and things can go wrong. In Microsoft Azure you have Azure Resource Manager for deploying JSON ARM templates. With these Azure Resource Manager templates you can automate your workload deployments in Microsoft Azure. For example, we built a JSON template to deploy a Windows Server into the right Azure subscription and the right Azure resource group, with the following extensions:

  • Antimalware agent installed.
  • Domain joined in the right OU (Active Directory).
  • Azure Log Analytics agent installed (connected to Azure Monitor and SCOM).
  • Encryption by default.

Using our Azure naming conventions and Azure Policy, we always deploy consistently, without the mistakes and typos that come with manual work in the Azure portal. When you write ARM templates for different workloads, you can store them in an Azure DevOps Repo (repository), and you can connect your private repo to GitHub.
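
Deploying such a template from the Azure CLI looks something like the sketch below; the resource group and file names are hypothetical, and at the time of writing the command was az group deployment create:

  # Deploy the VM template plus its parameter file into the target resource group
  az group deployment create \
    --resource-group rg-spoke1-workload \
    --template-file azuredeploy.json \
    --parameters @azuredeploy.parameters.json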

Making ARM templates works really well with Microsoft Visual Studio Code, which is open source and free of charge. You can add your favorite VS Code extensions to work with, like Azure Resource Manager.

Our Azure ARM template to deploy virtual machines into the Azure HUB-Spoke model with VS Code

Azure monitoring and Recovery Services Vault

To manage your Azure hybrid cloud environment, you have to monitor everything to stay in control of your virtual datacenter. And of course you have to plan your business continuity with Azure Recovery Services (Backup) by design. We created an Azure Recovery Services vault in every Azure subscription for backups, because you don’t want backup traffic over your VNET peerings. In the Azure HUB subscription we created a second Azure Site Recovery (ASR) vault for the “Lift & Shift” migration of on-premises virtual machines to the landing zone in the Azure HUB.
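
Creating one such vault per subscription can be scripted too. A short Azure CLI sketch with illustrative names:

  # One Recovery Services vault per subscription keeps backup traffic off the peerings
  az backup vault create \
    --name rsv-spoke1-backup \
    --resource-group rg-spoke1-mgmt \
    --location westeurope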

With Microsoft Azure Monitor we use Log Analytics and Service Map, and because they use the same OMS agent on the virtual machine, we can still use Microsoft System Center Operations Manager (SCOM) connected to that agent 🙂

When you have 45 locations, 45,000 students with BYOD, and 10,000 managed workstations, you monitor 24x7 to keep everything running for the business. Monitoring ExpressRoute with a backup connection is a must for your hybrid virtual datacenter. Here you have more information about monitoring your ExpressRoute circuit

Monitoring our ExpressRoute

With all of this installed in Microsoft Azure by design, we follow our policy of Security First!

Microsoft Azure Security Center

Azure Security Center provides unified security management and advanced threat protection across hybrid cloud workloads. With Security Center, you can apply security policies across your workloads, limit your exposure to threats, and detect and respond to attacks.

We are already installing Azure Advanced Threat Protection (ATP) for our on-premises datacenter security.

Azure Security Center

We still have a lot to configure in Microsoft Azure to get the basic architecture design in place. When that is done, I will write three more blogposts about this datacenter transformation:

  • “Lift and Shift” migration with ASR for virtual machines on Hyper-V and VMware.
  • SQL assessment and data migration to Azure.
  • Optimization of all workloads in Microsoft Azure.

I hope this blogpost helps you with your datacenter transition to the Microsoft Azure cloud.



Microsoft #Azure Service Fabric Mesh for your #Microservices and #Container Apps in the #Cloud

Microsoft Service Fabric Mesh

Azure Service Fabric Mesh is a fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking. Applications hosted on Service Fabric Mesh run and scale without you worrying about the infrastructure powering them. Service Fabric Mesh consists of clusters of thousands of machines. All cluster operations are hidden from the developer. Simply upload your code and specify the resources you need, availability requirements, and resource limits. Service Fabric Mesh automatically allocates the infrastructure and handles infrastructure failures, making sure your applications are highly available. You only need to care about the health and responsiveness of your application, not the infrastructure.

With Service Fabric Mesh you can:

  • “Lift and shift” existing applications into containers to modernize and run your current applications at scale.
  • Build and deploy new microservices applications at scale in Azure. Integrate with other Azure services or existing applications running in containers. Each microservice is part of a secure, network isolated application with resource governance policies defined for CPU cores, memory, disk space, and more.
  • Integrate with and extend existing applications without making changes to those applications. Use your own virtual network to connect existing applications to the new application.
  • Modernize your existing Cloud Services applications by migrating to Service Fabric Mesh.
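
At the time of writing, Mesh shipped as a preview extension to the Azure CLI. Here is a sketch of deploying an application template; the resource group, location, and template file name are hypothetical:

  # The Mesh commands come from a preview CLI extension
  az extension add --name mesh

  # Deploy a Mesh application described in a template file
  az group create --name rg-mesh-demo --location eastus
  az mesh deployment create \
    --resource-group rg-mesh-demo \
    --template-file mesh_app.json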

Build high-availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones. Azure services that support Availability Zones fall into two categories:

  • Zonal services – you pin the resource to a specific zone (for example, virtual machines, managed disks, IP addresses)
  • Zone-redundant services – platform replicates automatically across zones (for example, zone-redundant storage, SQL Database).

To achieve comprehensive business continuity on Azure, build your application architecture using the combination of Availability Zones with Azure region pairs. You can synchronously replicate your applications and data using Availability Zones within an Azure region for high-availability and asynchronously replicate across Azure regions for disaster recovery protection.

Store state in an Azure Service Fabric Mesh application by mounting an Azure Files based volume inside the container

Twitter AMA on Service Fabric Mesh:

The Service Fabric team will be hosting an Ask Me Anything (AMA) (more like “ask us anything”!) session for Service Fabric Mesh on Twitter on Tuesday, October 30th, from 9 AM to 10:30 AM PST. Tweet to @servicefabric or @AzureSupport using #SFMeshAMA with your questions on Mesh and Service Fabric. More information here

More information about Azure Service Fabric Mesh:

Microsoft Azure Service Fabric Mesh LAB on Github

Get started with Microsoft Azure Service Fabric for your Microservices and Container Apps

Service Fabric Microsoft Ignite 2018 sessions

JOIN Containers in the Cloud Community Group on LinkedIn here



Make your first Pipeline with Azure DevOps Project in the #Cloud #Azure #AzureDevOps


Start your Azure DevOps project in Azure here.

Microsoft Azure DevOps Services (Tools) to make your own CI/CD Pipeline in the Cloud

Azure Pipelines is a cloud service that you can use to automatically build and test your code project and make it available to other users. It works with just about any language or project type.
Pipelines combines both Continuous Integration (CI) and Continuous Deployment (CD) to constantly and consistently test and build your code and ship it to any target.

Microsoft made it really easy to create your first Azure DevOps pipeline in the cloud.
Here you find a step-by-step guide to making your first Azure pipeline:

When you have already made your cloud application, you can choose the Bring Your Own Code option 😉

But in this step-by-step guide, I chose an HTML5 Azure Web App template, which is available in Azure.

Static Azure Website => Next.

When you create your Azure DevOps project, you can see the creation flow steps.

For the service of the web app, there are two options in this deployment template:

  1. Web App for Containers
  2. Web App as a Service.

Azure Web Apps enables you to build and host web applications in the programming language of your choice without managing infrastructure. It offers auto-scaling and high availability, supports both Windows and Linux, and enables automated deployments from GitHub, Azure DevOps, or any Git repo.

Web App for Containers provides built-in Docker images on Linux with support for specific versions, such as PHP 7.0 and Node.js 4.5. Web App for Containers uses the Docker container technology to host both built-in images and custom images as a platform as a service. In this tutorial, you learn how to build a custom Docker image and deploy it to Web App for Containers. This pattern is useful when the built-in images don’t include your language of choice, or when your application requires a specific configuration that isn’t provided within the built-in images.
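
Outside the wizard, the same Web App for Containers setup can be sketched with the Azure CLI; the plan, app, and image names below are hypothetical:

  # Linux App Service plan, then a Web App for Containers with a custom image
  az appservice plan create --name plan-demo --resource-group rg-demo \
    --is-linux --sku S1
  az webapp create --name webapp-html5-demo --resource-group rg-demo \
    --plan plan-demo \
    --deployment-container-image-name myregistry.azurecr.io/html5-site:latest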

The last step needs information about:

  • Organization: for the site name.
  • Project name.
  • Subscription ID.
  • Web App name.
  • Azure location.

And then click on Done.

 

Deployment overview.

Your Azure DevOps pipeline is running, as easy as that 🙂

But most importantly, your Azure Web App is running.

Running in your Container in Azure Cloud Services.

Azure DevOps Container Web App Pipeline is running.

From here you can build your project and share it with your developer team.
You can find more information on the Azure DevOps Docs

Here are some snapshots of the latest Azure DevOps release features at the time I wrote this blogpost:

When you want to keep up to date on Microsoft Azure DevOps, here are some links:

Follow Microsoft Azure DevOps on Twitter

Start here free with Azure DevOps

Microsoft Azure DevOps Blog

JOIN the Azure DevOps Community Group on LinkedIn



#Microsoft SQL Server 2019 Preview Overview #SQL #SQL2019 #Linux #Containers #MSIgnite

Microsoft SQL Server 2019 Preview

What’s New in Microsoft SQL Server 2019 Preview

• Big Data Clusters
  o Deploy a Big Data cluster with SQL and Spark Linux containers on Kubernetes
  o Access your big data from HDFS
  o Run advanced analytics and machine learning with Spark
  o Use Spark streaming to stream data to SQL data pools
  o Use Azure Data Studio to run Query books that provide a notebook experience

• Database engine
  o UTF-8 support
  o Resumable online index create allows index create to resume after interruption
  o Clustered columnstore online index build and rebuild
  o Always Encrypted with secure enclaves
  o Intelligent query processing
  o Java language programmability extension
  o SQL Graph features
  o Database scoped configuration setting for online and resumable DDL operations
  o Always On Availability Groups – secondary replica connection redirection
  o Data discovery and classification – natively built into SQL Server
  o Expanded support for persistent memory devices
  o Support for columnstore statistics in DBCC CLONEDATABASE
  o New options added to sp_estimate_data_compression_savings
  o SQL Server Machine Learning Services failover clusters
  o Lightweight query profiling infrastructure enabled by default
  o New PolyBase connectors
  o New sys.dm_db_page_info system function returns page information

• SQL Server on Linux
  o Replication support
  o Support for the Microsoft Distributed Transaction Coordinator (MSDTC)
  o Always On Availability Groups on Docker containers with Kubernetes
  o OpenLDAP support for third-party AD providers
  o Machine Learning on Linux
  o New container registry
  o New RHEL-based container images
  o Memory pressure notification

• Master Data Services
  o Silverlight controls replaced

• Security
  o Certificate management in SQL Server Configuration Manager

• Tools
  o SQL Server Management Studio (SSMS) 18.0 (preview)
  o Azure Data Studio

Introducing Microsoft SQL Server 2019 Big Data Clusters

SQL Server 2019 big data clusters make it easier for big data sets to be joined to the dimensional data typically stored in the enterprise relational database, enabling people and apps that use SQL Server to query big data more easily. The value of the big data greatly increases when it is not just in the hands of the data scientists and big data engineers but is also included in reports, dashboards, and applications. At the same time, the data scientists can continue to use big data ecosystem tools while also utilizing easy, real-time access to the high-value data in SQL Server because it is all part of one integrated, complete system.

Read the complete Awesome blogpost from Travis Wright about SQL Server 2019 Big Data Cluster here

Starting in SQL Server 2017 with support for Linux and containers, Microsoft has been on a journey of platform and operating system choice. With SQL Server 2019 preview, we are making it easier to adopt SQL Server in containers by enabling new HA scenarios and adding supported Red Hat Enterprise Linux container images. Today we are happy to announce the availability of SQL Server 2019 preview Linux-based container images on Microsoft Container Registry, Red Hat-Certified Container Images, and the SQL Server operator for Kubernetes, which makes it easy to deploy an Availability Group.

SQL Server 2019 preview containers now available
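
Trying the preview image locally is a one-liner with Docker. Here is a sketch assuming the CTP 2.0 Ubuntu tag that was current when this was written (check the registry for the latest tag) and a placeholder SA password:

  docker run -d --name sql2019 \
    -e "ACCEPT_EULA=Y" \
    -e "SA_PASSWORD=<YourStrong!Passw0rd>" \
    -p 1433:1433 \
    mcr.microsoft.com/mssql/server:2019-CTP2.0-ubuntu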

Microsoft Azure Data Studio

Azure Data Studio is a new cross-platform desktop environment for data professionals using the family of on-premises and cloud data platforms on Windows, macOS, and Linux. Previously released under the preview name SQL Operations Studio, Azure Data Studio offers a modern editor experience with lightning-fast IntelliSense, code snippets, source control integration, and an integrated terminal. It is engineered with the data platform user in mind, with built-in charting of query result sets and customizable dashboards.

Read the complete blogpost about Microsoft Azure Data Studio for SQL Server here

SQL Server 2019: Celebrating 25 years of SQL Server Database Engine and the path forward

Awesome work, Microsoft SQL team, and congrats on your 25th anniversary!