Earlier I wrote a blog post about Microsoft Azure Service Fabric Standalone Cluster for dev testing. That was a 5-node Azure Service Fabric Cluster installed locally, but now I would like to have a bigger ASF Cluster on my Windows Server 2019 for testing with Visual Studio.
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers.
When you want to create your own Azure Service Fabric Cluster for production, you have to prepare yourself and make a plan before you build.
When you have your Azure Service Fabric Standalone Cluster running, you want to deploy your microservices, apps, or containers to it and test your solution. In the following steps I deploy a Web App with Visual Studio to Azure Service Fabric Standalone Cluster version 7.1.409.
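Before publishing from Visual Studio, I like to do a quick sanity check that the cluster is healthy. The sketch below queries the Service Fabric REST management endpoint with Python; it assumes an unsecured dev/standalone cluster on the default HTTP gateway port 19080, so adjust the endpoint (and add certificate handling) if your cluster is secured.

```python
import requests

# Default HTTP gateway of an unsecured Service Fabric standalone/dev cluster
# (assumption: default port 19080 and no certificate security).
CLUSTER_ENDPOINT = "http://localhost:19080"

def get_cluster_health():
    """Ask the Service Fabric REST API for the aggregated cluster health."""
    url = f"{CLUSTER_ENDPOINT}/$/GetClusterHealth"
    response = requests.get(url, params={"api-version": "6.0"}, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    health = get_cluster_health()
    # A healthy cluster should report "Ok" before you publish from Visual Studio.
    print("AggregatedHealthState:", health.get("AggregatedHealthState"))
```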
The world of data is moving and changing fast, with new IT technologies coming up like leaves on a tree.
Data is everywhere: on servers, workstations, BYOD devices, and in the Cloud. But how do you keep your data safe and protected for your business today and in the future? There are a lot of reasons why you should back up your data:
One of your employees accidentally deleted important files, for example.
Your data got compromised by a virus.
Your server crashed.
You have to retain your data for a period of time by law.
And there are many more reasons why you should do backups…
A lot of enterprise organizations are moving workloads for the business to the Cloud, but how is your Backup and Disaster Recovery managed today? A lot of data transitions are made, but what if your Backup and Disaster Recovery solution is outdated or reaching end of life? You can have a lot of questions, like:
What data should I backup?
Should I just upgrade the Backup Solution?
How can I make my Data Management Backup-DR solution cheaper and ready for the future?
How can I make my new Backup-DR solution vendor independent (avoiding vendor lock-in)?
And there will be more questions when you are in this scenario and have to renew your Backup-DR solution.
Here we have the following great Backup Solution from 2014:
Offsite Microsoft DPM Backup Solution since 2014
Here we have three System Center Data Protection Manager backup pods with a tape library, and one DPM pod connected to a Microsoft Azure Backup Vault in the Cloud. You apply the security updates and update rollups for Windows Server 2012 R2 and System Center Data Protection Manager 2012 to keep the solution safe and running.
Long-term protection to tape
DPM 2012 Server with direct-attached storage for short-term protection
The four DPM backup pods have the same storage configuration for short-term protection with a retention time of 15 days. After that, long-term protection is needed with backup to tape and backup to the Microsoft Azure Backup Vault.
Since 2014, the backup data has depended on these solution configurations.
Tape management costs a lot of time and money.
The fourth DPM backup pod got an Azure Backup Vault in the Cloud to save tape management time.
DPM Backup to Microsoft Azure Cloud Backup Vault.
So this is the start of the journey to a new Data Management Backup-DR solution transformation. Over the next couple of weeks I will search for the different scenarios and solutions on the Internet and talk with the community, looking for best practices. I will do polls on social media and a series of blog posts on the Data Management Backup-DR solution to keep business continuity.
Will it be a Cloud Backup – DR Solution?
Will it be a Hybrid Cloud Backup – DR Solution?
Everything in One Management Console?
Or more than one Backup-DR solution for the right job?
We will see what the journey will bring us based on Best Practices 😉
Microsoft Keynote HoloLens 2 at Mobile World Congress (MWC) 2019
HoloLens 2
Microsoft HoloLens 2: Partner Spotlight with Philips
Microsoft HoloLens 2: Partner Spotlight with Bentley
Conclusion:
I see awesome possibilities for maintenance in Smart Cities and Smart Buildings with the Intelligent Cloud and Intelligent Edge together with Microsoft HoloLens 2 and Microsoft Azure, for example intelligent dashboards in your HoloLens 2 combined with your Azure app. Great for manufacturers, healthcare, architects, and maintenance companies, but also for teachers and students doing innovative education 🙂
Learn Azure in a Month of Lunches breaks down the most important Azure concepts into bite-sized lessons with exercises and labs—along with project files available in GitHub—to reinforce your skills. Learn how to:
Use core Azure infrastructure and platform services—including how to choose which service for which task.
Plan appropriately for availability, scale, and security while considering cost and performance.
Integrate key technologies, including containers and Kubernetes, artificial intelligence and machine learning, and the Internet of Things.
Get best practices on how to monitor your Kubernetes clusters from field experts in this episode of the Kubernetes Best Practices Series. In this intermediate level deep dive, you will learn about monitoring and logging in Kubernetes from Dennis Zielke, Technology Solutions Professional in the Global Black Belts Cloud Native Applications team at Microsoft.
Multi-cluster view from Azure Monitor
Azure Monitor provides a multi-cluster view showing the health status of all monitored AKS clusters deployed across resource groups in your subscriptions. It also shows discovered AKS clusters that are not monitored by the solution. At a glance you can understand cluster health, and from there you can drill down to the node and controller performance page, or navigate to see performance charts for the cluster. For AKS clusters discovered and identified as unmonitored, you can enable monitoring for that cluster at any time.
Container Live Logs provides a real-time view into your Azure Kubernetes Service (AKS) container logs (stdout/stderr) without having to run kubectl commands. When you select this option, a new pane appears below the containers performance data table on the Containers view, showing live logging generated by the container engine to further assist in troubleshooting issues in real time.
Live logs supports three different methods to control access to the logs:
AKS without Kubernetes RBAC authorization enabled
AKS enabled with Kubernetes RBAC authorization
AKS enabled with Azure Active Directory (AD) SAML-based single sign-on
You can even search in the Container Live Logs for troubleshooting and history.
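Besides the live view in the portal, the same container logs land in the Log Analytics workspace behind Azure Monitor for containers, so you can query them for troubleshooting and history too. Here is a minimal sketch with the Python azure-monitor-query package; the workspace ID is a placeholder, and the query assumes the standard ContainerLog table that Azure Monitor for containers writes.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder: the Log Analytics workspace ID your AKS monitoring solution writes to.
WORKSPACE_ID = "<your-log-analytics-workspace-id>"

# KQL query against the ContainerLog table (stdout/stderr collected from the containers).
QUERY = """
ContainerLog
| where LogEntry contains "error"
| project TimeGenerated, ContainerID, LogEntry
| order by TimeGenerated desc
| take 50
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

# Print the matching log lines from the last hour.
for table in response.tables:
    for row in table.rows:
        print(row)
```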
Azure Service Fabric Mesh is a fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking. Applications hosted on Service Fabric Mesh run and scale without you worrying about the infrastructure powering them. Service Fabric Mesh consists of clusters of thousands of machines. All cluster operations are hidden from the developer. Simply upload your code and specify the resources you need, availability requirements, and resource limits. Service Fabric Mesh automatically allocates the infrastructure and handles infrastructure failures, making sure your applications are highly available. You only need to care about the health and responsiveness of your application, not the infrastructure.
With Service Fabric Mesh you can:
“Lift and shift” existing applications into containers to modernize and run your current applications at scale.
Build and deploy new microservices applications at scale in Azure. Integrate with other Azure services or existing applications running in containers. Each microservice is part of a secure, network isolated application with resource governance policies defined for CPU cores, memory, disk space, and more.
Integrate with and extend existing applications without making changes to those applications. Use your own virtual network to connect existing applications to the new application.
Modernize your existing Cloud Services applications by migrating to Service Fabric Mesh.
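To give an idea of the "upload your code and specify the resources you need" model, here is a small Python sketch that builds a minimal Mesh application description and writes it out as JSON for an ARM deployment. The resource type and property names follow the Mesh preview quickstarts as far as I remember them, so treat them as assumptions and check the Mesh documentation (see the links below) before you use this.

```python
import json

# Sketch of a minimal Service Fabric Mesh application resource for an ARM deployment.
# NOTE: the resource type, API version, and property names are assumptions based on the
# Mesh preview quickstarts; verify them against the Mesh documentation.
mesh_application = {
    "type": "Microsoft.ServiceFabricMesh/applications",
    "apiVersion": "2018-09-01-preview",
    "name": "helloWorldApp",  # hypothetical application name
    "location": "eastus",
    "properties": {
        "services": [
            {
                "name": "helloWorldService",
                "properties": {
                    "osType": "linux",
                    "codePackages": [
                        {
                            "name": "helloWorldCode",
                            # Placeholder container image in your own registry.
                            "image": "myregistry.azurecr.io/helloworld:latest",
                            # Resource requests: the limits Mesh allocates for you.
                            "resources": {"requests": {"cpu": 1, "memoryInGB": 1}},
                        }
                    ],
                    "replicaCount": 1,
                    "networkRefs": [{"name": "helloWorldNetwork"}],  # hypothetical network
                },
            }
        ]
    },
}

# Write the resource out so it can be dropped into an ARM template's "resources" array.
with open("mesh-application.json", "w") as f:
    json.dump(mesh_application, f, indent=2)
```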
Build high-availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones. Azure services that support Availability Zones fall into two categories:
Zonal services – you pin the resource to a specific zone (for example, virtual machines, managed disks, IP addresses)
Zone-redundant services – the platform replicates automatically across zones (for example, zone-redundant storage, SQL Database).
To achieve comprehensive business continuity on Azure, build your application architecture using the combination of Availability Zones with Azure region pairs. You can synchronously replicate your applications and data using Availability Zones within an Azure region for high-availability and asynchronously replicate across Azure regions for disaster recovery protection.
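To make the zonal versus zone-redundant distinction a bit more concrete, the sketch below creates two Standard public IP addresses with the azure-mgmt-network Python SDK: one pinned to a single zone, and one spread across all three zones. The subscription, resource group, and region are placeholders, and it assumes a region that actually offers Availability Zones.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholders: substitute your own subscription, resource group, and a region with zones.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-zones-demo"
LOCATION = "westeurope"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Zonal resource: the public IP is pinned to a specific zone (zone 1).
zonal_ip = client.public_ip_addresses.begin_create_or_update(
    RESOURCE_GROUP,
    "pip-zonal",
    {
        "location": LOCATION,
        "sku": {"name": "Standard"},
        "public_ip_allocation_method": "Static",
        "zones": ["1"],
    },
).result()

# Zone-redundant resource: the platform spreads the IP across zones 1, 2, and 3.
zone_redundant_ip = client.public_ip_addresses.begin_create_or_update(
    RESOURCE_GROUP,
    "pip-zone-redundant",
    {
        "location": LOCATION,
        "sku": {"name": "Standard"},
        "public_ip_allocation_method": "Static",
        "zones": ["1", "2", "3"],
    },
).result()

print(zonal_ip.zones, zone_redundant_ip.zones)
```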
The Service Fabric team will be hosting an Ask Me Anything (AMA) (more like “ask us anything”!) session for Service Fabric Mesh on Twitter on Tuesday, October 30th from 9am to 10:30am PST. Tweet to @servicefabric or @AzureSupport using #SFMeshAMA with your questions on Mesh and Service Fabric. More information here
More information about Azure Service Fabric Mesh :
If you are a developer or architect who wants to get started with Microsoft Azure, this book is for you! Written by developers for developers, this guide will show you how to get started with Azure and which services you can use to run your applications, store your data, incorporate intelligence, build IoT apps, and deploy your solutions in a more efficient and secure way.
This 300-page guide presents a structured approach for designing cloud applications that are scalable, resilient, and highly available. The guidance in this e-book is intended to help with your architectural decisions regardless of your cloud platform, though we will be using Azure so we can share the best practices that we have learned from many years of customer engagements.
In the following chapters, we will guide you through a selection of important considerations and resources to help determine the best approach for your cloud application:
Choosing the right architecture style for your application based on the kind of solution you are building.
Choosing the most appropriate compute and data store technologies.
Incorporating the ten high-level design principles to ensure your application is scalable, resilient, and manageable.
Utilizing the five pillars of software quality to build a successful cloud application.
Applying design patterns specific to the problem you are trying to solve.