mountainss Cloud and Datacenter Management Blog

Microsoft System Center blog site about on-premises virtualization and cloud



Deploying Containers on #Kubernetes Cluster in #Docker for Windows CE and on #Azure AKS

Kubernetes Cluster via Docker for Windows CE Edge

Docker CE for Windows is Docker designed to run on Windows 10. It is a native Windows application that provides an easy-to-use development environment for building, shipping, and running dockerized apps. Docker CE for Windows uses Windows-native Hyper-V virtualization and networking and is the fastest and most reliable way to develop Docker apps on Windows. Docker CE for Windows supports running both Linux and Windows Docker containers.
Download Docker for Windows Community Edition Edge here

Starting with version 18.02 CE Edge, Docker for Windows includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.

I’m using Docker for Windows CE version 18.05.0

Now your single-node Kubernetes cluster is running.
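To verify that the cluster is up before continuing, you can run a couple of standard kubectl commands; docker-for-desktop is the context name this Docker for Windows release created in my setup, so adjust it if yours differs:

kubectl config use-context docker-for-desktop
kubectl get nodes
kubectl cluster-info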

To get the Kubernetes Dashboard, install it with kubectl:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Run kubectl proxy and keep it running.

Then browse to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login and choose Skip on the kubeconfig sign-in page for now.

You are now in the Kubernetes Dashboard.

Now it's time to create your first containers (Pods) on Kubernetes.
Click on +CREATE in the upper right corner.

As example code I used a YAML manifest to deploy Nginx with three replicas; a minimal sketch follows below.
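This is a minimal sketch of such a manifest, not the exact file from the post; the deployment name, label, and image tag are my own illustrative choices. Save it as nginx.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # hypothetical name for this example
spec:
  replicas: 3              # the three replicas mentioned above
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14  # any recent Nginx tag works
        ports:
        - containerPort: 80

You can paste this into the +CREATE dialog, or apply it from the command line with kubectl apply -f nginx.yaml.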

Deploying the Nginx Containers (Pods)

Nginx is running on Kubernetes.
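You can confirm this from the command line as well, with standard kubectl commands:

kubectl get deployments
kubectl get pods -o wide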

With Microsoft Visual Studio Code and the Kubernetes extension you can play with Nginx Containers (pods) locally on your laptop.

When you need more capacity and want to scale up with more containers (Pods) for your solution, you can use the Microsoft Azure cloud with Azure Kubernetes Service (AKS); see the sketch below.
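As a rough sketch of those first steps with the Azure CLI (the resource group and cluster names below are placeholders I made up), you create the AKS cluster, pull its credentials into kubectl, and then scale the same deployment:

az group create --name myAKSrg --location westeurope
az aks create --resource-group myAKSrg --name myAKScluster --node-count 3 --generate-ssh-keys
az aks get-credentials --resource-group myAKSrg --name myAKScluster
kubectl scale deployment nginx-deployment --replicas=10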

Monitor Azure Kubernetes Service (AKS) with container health (Preview) and with Analytics

 




#Microsoft Build 2018 Sessions and Content Overview #Azure #AzureStack #MSBuild2018

Microsoft Build 2018 – Technology Keynote: Microsoft Azure

With Scott Guthrie @scottgu


Inside Azure Datacenter Architecture

with Mark Russinovich @markrussinovich


Architecting and Building Hybrid Cloud Apps for Azure and Azure Stack.
With Filippo Seracini @pipposera and Ricardo Mendes @rifmendes from the AzureStack Team

Container DevOps in Azure
With Jessica Deen @jldeen and Steven Murawski @stevenmurawski


Best Practices with Azure & Kubernetes

Follow @rimmanehme

Microsoft Azure Cosmos DB @ Build 2018: The Catalyst for Next-Generation Apps


From Zero to Azure with Python & VSC


Secure the intelligent edge with Azure Sphere


Satya Nadella – Vision Keynote

Here you can find all the Microsoft Build 2018 Sessions and content.



Deploy #Azure WebApp with Visual Studio Code and Play with #Kudu and App Service Editor and #VSC

Once you have installed Microsoft Visual Studio Code, which is free and open source with Git integration, debugging, and lots of extensions available, you can activate the Microsoft Azure App Service extension in VSC.

Azure App Service Extension

You can easily install more Azure extensions here.

On the left you will see your Azure subscription; click the + to create a new Azure Web App.

Enter the name of the Resource Group

Select your OS: Windows or Linux

Add the Name of the New App Service Plan

Choose an App Service plan. See more information here

Select Azure Region

After this, your Microsoft Azure Web App is installed in the cloud in a couple of seconds 🙂

 

When you open the Azure Portal you will see your App Service plan running.

From here you can configure your Azure Web App for Continuous Delivery and use different tools like VSC, Kudu, or the App Service Editor.

Azure Web Apps enables you to build and host web applications in the programming language of your choice without managing infrastructure. It offers auto-scaling and high availability, supports both Windows and Linux, and enables automated deployments from GitHub, Visual Studio Team Services, or any Git repo.
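For reference, the same setup can be scripted with the Azure CLI instead of the VSC wizard. A minimal sketch, with placeholder names for the resource group, plan, and app (the app name must be globally unique):

az group create --name myWebAppRG --location westeurope
az appservice plan create --name myPlan --resource-group myWebAppRG --sku S1
az webapp create --name my-unique-webapp --resource-group myWebAppRG --plan myPlan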

Learn how to use Azure Web Apps with Microsoft quickstarts, tutorials, and samples.

Configure Continuous Deployment from the Azure Portal.

Or
Continuous Deployment to Azure App Service

Developer tools from the Azure Portal with App Service Editor.

 

Azure App Services Editor

From here you can open Kudu to manage your Azure Web App and debug via the console:

Kudu Debug console in CMD

Or the Kudu debug console in PowerShell 😉

Kudu Process Explorer

Here you can find more information about Kudu for your Azure Web App on GitHub.
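Next to the console, Kudu also exposes a REST API on the same SCM endpoint. A hedged sketch (mywebapp and the deployment user are placeholders; the /api/vfs and /api/command endpoints are the ones described in the Kudu wiki):

curl -u <deployment-user> https://mywebapp.scm.azurewebsites.net/api/vfs/site/wwwroot/
curl -u <deployment-user> -X POST -H "Content-Type: application/json" -d '{"command":"dir","dir":"site\\wwwroot"}' https://mywebapp.scm.azurewebsites.net/api/command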

And coming back to Microsoft Visual Studio Code: you can manage and build your Azure Web App from here too:

Azure Web App Services in VSC

I hope this first step-by-step guide helps you get started with Microsoft Azure Web Apps and Visual Studio Code to build your pipeline.
More Information at Visual Studio Code

Azure Web Apps Overview



#Microsoft Azure Storage Tools AzCopy and #Azure Storage Explorer #Cloud #AzureStack

Microsoft Azure Storage tools
Type azcopy /? for help on AzCopy.
C:\Program Files (x86)\Microsoft SDKs\Azure>azcopy /?
------------------------------------------------------------------------------
AzCopy 7.1.0 Copyright (c) 2017 Microsoft Corp. All Rights Reserved.
------------------------------------------------------------------------------

AzCopy </Source:> </Dest:> [/SourceKey:] [/DestKey:] [/SourceSAS:] [/DestSAS:]
[/V:] [/Z:] [/@:] [/Y] [/NC:] [/SourceType:] [/DestType:] [/S]
[/Pattern:] [/CheckMD5] [/L] [/MT] [/XN] [/XO] [/A] [/IA] [/XA]
[/SyncCopy] [/SetContentType] [/BlobType:] [/Delimiter:] [/Snapshot]
[/PKRS:] [/SplitSize:] [/EntityOperation:] [/Manifest:]
[/PayloadFormat:]
##
## Common Options ##
##
/Source:<source> Specifies the source data from which to copy.
The source can be a directory including:
a file system directory, a blob container,
a blob virtual directory, a storage file share,
a storage file directory, or an Azure table.
The source can also be a single file including:
a file system file, a blob or a storage file.
The source is interpreted according to following rules:
1) When either file pattern option /Pattern or
recursive mode option /S is specified,
the source will be interpreted to a directory.
2) When both file pattern option /Pattern and
recursive mode option /S are not specified,
the source can be a single file or a directory.
In this case, AzCopy will choose an existing
location as the source, if the source is both
an existing file and an existing directory,
the source will be interpreted to a single file.

/Dest:<destination> Specifies the destination to copy to.
The destination can be a directory including:
a file system directory, a blob container,
a blob virtual directory, a storage file share,
a storage file directory, or an Azure table.
The destination can also be a single file including:
a file system file, a blob or a storage file.
The destination is interpreted according to following rules:
1) When source is a single file, destination
is interpreted as a single file.
2) When source is a directory, destination
is interpreted as a directory.

/SourceKey:<storage-key> Specifies the storage account key for the
source resource.

/DestKey:<storage-key> Specifies the storage account key for the
destination resource.

/SourceSAS:<SAS-Token> Specifies a Shared Access Signature with READ
and LIST permissions for the source (if
applicable). Surround the SAS with double
quotes, as it may contains special command-line
characters.
The SAS must be a Container/Share/Table SAS, or
an Account SAS with ResourceType that includes
Container.
If the source resource is a blob container,
and neither a key nor a SAS is provided, then
the blob container will be read via anonymous
access.
If the source is a file share or table, a key or
a SAS must be provided.

/DestSAS:<SAS-Token> Specifies a Shared Access Signature (SAS) with
READ and WRITE permissions for the
destination (if applicable). When /Y is
specified, and /XO /XN are not specified, the SAS
can have only WRITE permission for the operation
to succeed.
Surround the SAS with double quotes, as it may
contains special command-line characters.
The SAS must be a Container/Share/Table SAS, or
an Account SAS with ResourceType that includes
Container.
If the destination resource is a blob container,
file share or table, you can either specify this
option followed by the SAS token, or you can
specify the SAS as part of the destination blob
container, file share or table’s URI, without
this option.
This option is not supported when asynchronously
copying between two different types of storage
service or between two different accounts.

/V:[verbose-log-file] Outputs verbose status messages into a log
file.
By default, the verbose log file is named
AzCopyVerbose.log in
%LocalAppData%\Microsoft\Azure\AzCopy. If you
specify an existing file location for this
option, the verbose log will be appended to
that file.

/Z:[journal-file-folder] Specifies a journal file folder for resuming an
operation.
AzCopy always supports resuming if an
operation has been interrupted.
If this option is not specified, or it is
specified without a folder path, then AzCopy
will create the journal file in the default
location, which is
%LocalAppData%\Microsoft\Azure\AzCopy.
Each time you issue a command to AzCopy, it
checks whether a journal file exists in the
default folder, or whether it exists in a
folder that you specified via this option. If
the journal file does not exist in either
place, AzCopy treats the operation as new and
generates a new journal file.
If the journal file does exist, AzCopy will
check whether the command line that you input
matches the command line in the journal file.
If the two command lines match, AzCopy resumes
the incomplete operation. If they do not match,
you will be prompted to either overwrite the
journal file to start a new operation, or to
cancel the current operation.
The journal file is deleted upon successful
completion of the operation.
Note that resuming an operation from a journal
file created by a previous version of AzCopy
is not supported.

/@:<parameter-file> Specifies a file that contains parameters.
AzCopy processes the parameters in the file
just as if they had been specified on the
command line.
In a response file, you can either specify
multiple parameters on a single line, or
specify each parameter on its own line. Note
that an individual parameter cannot span
multiple lines.
Response files can include comments lines that
begin with the # symbol.
You can specify multiple response files.
However, note that AzCopy does not support
nested response files.

/Y Suppresses all AzCopy confirmation prompts.

/NC:<number-of-concurrent> Specifies the number of concurrent operations.
AzCopy by default starts a certain number of
concurrent operations to increase the data
transfer throughput.
Note that large number of concurrent operations
in a low-bandwidth environment may overwhelm
the network connection and prevent the
operations from fully completing. Throttle
concurrent operations based on actual available
network bandwidth.
The upper limit for concurrent operations is
512.

##
## Options – Applicable for Blob and Table Service Operations ##
##

/SourceType:<blob | table> Specifies that the source resource is a blob
or table available in the local development
environment, running in the storage emulator.

/DestType:<blob | table> Specifies that the destination resource is a
blob or table available in the local
development environment, running in the
storage emulator.

##
## Options – Applicable for Blob and File Service Operations ##
##

/S Specifies recursive mode for copy operations.
The /S parameter is only valid when the
source is a directory.
In recursive mode, AzCopy will copy all blobs
or files that match the specified file
pattern, including those in subfolders.

/Pattern:<file-pattern> Specifies a file pattern that indicates which
files to copy.
The behavior of the /Pattern parameter is
determined by the location of the source data,
and the presence of the recursive mode option.
The /Pattern parameter is only valid when the
source is a directory.
Recursive mode is specified via option /S.

If the specified source is a directory in
the file system, then standard wildcards are
in effect, and the file pattern provided is
matched against files within the directory.
If option /S is specified, then AzCopy also
matches the specified pattern against all
files in any subfolders beneath the directory.

If the specified source is a blob container or
virtual directory, then wildcards are not
applied. If option /S is specified, then AzCopy
interprets the specified file pattern as a blob
prefix. If option /S is not specified, then
AzCopy matches the file pattern against exact
blob names.
If the specified source is an Azure file share,
then you must either specify the exact file
name, (e.g. abc.txt) to copy a single file, or
specify option /S to copy all files in the
share recursively. Attempting to specify both a
file pattern and option /S together will result
in an error.

AzCopy uses case-sensitive matching when the
/Source is a blob, blob container or blob virtual
directory, and uses case-insensitive matching
in all the other cases.

The default file pattern used when no file
pattern is specified is *.* for a file system
location or an empty prefix for an Azure
Storage location.
Specifying multiple file patterns is not
supported.

/CheckMD5 Calculates an MD5 hash for downloaded data and
verifies that the MD5 hash stored in the blob
or file’s Content-MD5 property matches the
calculated hash. The MD5 check is turned off by
default, so you must specify this option to
perform the MD5 check when downloading data.
Note that Azure Storage doesn’t guarantee that
the MD5 hash stored for the blob or file is
up-to-date. It is client’s responsibility to
update the MD5 whenever the blob or file is
modified.
AzCopy always sets the Content-MD5 property for
an Azure blob or file after uploading it to the
service.

/L Specifies a listing operation only; no data is
copied.
AzCopy will interpret the using of this option as
a simulation for running the command line without
this option /L and count how many objects will
be copied, you can specify option /V at the same
time to check which objects will be copied in
the verbose log.
The behavior of this option is also determined by
the location of the source data and the presence
of the recursive mode option /S and file pattern
option /Pattern.
When using this option, AzCopy requires LIST and READ
permission of the source location if source is a directory,
or READ permission of the source location if source
is a single file.

/MT Sets the downloaded file’s last-modified time
to be the same as the source blob or file’s.

/XN Excludes a newer source resource. The resource
will not be copied if the source is the same
or newer than destination.

/XO Excludes an older source resource. The resource
will not be copied if the source resource is the
same or older than destination.

/A Uploads only files that have the Archive
attribute set.

/IA:[RASHCNETOI] Uploads only files that have any of the
specified attributes set.
Available attributes include:
R Read-only files
A Files ready for archiving
S System files
H Hidden files
C Compressed file
N Normal files
E Encrypted files
T Temporary files
O Offline files
I Not content indexed Files

/XA:[RASHCNETOI] Excludes files from upload that have any of the
specified attributes set.
Available attributes include:
R Read-only files
A Files ready for archiving
S System files
H Hidden files
C Compressed file
N Normal files
E Encrypted files
T Temporary files
O Offline files
I Not content indexed Files

/SyncCopy Indicates whether to synchronously copy blobs
or files among two Azure Storage end points.
AzCopy by default uses server-side
asynchronous copy. Specify this option to
download the blobs or files from the service
to local memory and then upload them to the
service.
/SyncCopy can be used in below scenarios:
1) Copying from Blob storage to Blob storage.
2) Copying from File storage to File storage.
3) Copying from Blob storage to File storage.
4) Copying from File storage to Blob storage.

/SetContentType:[content-
type] Specifies the content type of the destination
blobs or files.
AzCopy by default uses
“application/octet-stream” as the content type
for the destination blobs or files. If option
/SetContentType is specified without a value
for “content-type”, then AzCopy will set each
blob or file’s content type according to its
file extension. To set same content type for
all the blobs, you must explicitly specify a
value for “content-type”.

##
## Options – Only applicable for Blob Service Operations ##
##

/BlobType:<page | block
| append> Specifies whether the destination blob is a
block blob, a page blob or an append blob.
If the destination is a blob and this option
is not specified, then by default AzCopy will
create a block blob.

/Delimiter:<delimiter> Indicates the delimiter character used to
delimit virtual directories in a blob name.
By default, AzCopy uses / as the delimiter
character. However, AzCopy supports using any
common character (such as @, #, or %) as a
delimiter. If you need to include one of these
special characters on the command line, enclose
it with double quotes.
This option is only applicable for downloading
from an Azure blob container or virtual directory.

/Snapshot Indicates whether to transfer snapshots. This
option is only valid when the source is a
blob container or blob virtual directory.
The transferred blob snapshots are renamed in
this format: [blob-name] (snapshot-time)
[extension].
By default, snapshots are not copied.

##
## Options – only applicable for Table Service Operations ##
##

/PKRS:<"key1#key2#key3#…"> Splits the partition key range to enable
exporting table data in parallel, which
increases the speed of the export operation.
If this option is not specified, then AzCopy
uses a single thread to export table entities.
For example, if the user specifies
/PKRS:"aa#bb", then AzCopy starts three
concurrent operations.
Each operation exports one of three partition
key ranges, as shown below:
[<first partition key>, aa)
[aa, bb)
[bb, <last partition key>]

/SplitSize:<file-size> Specifies the exported file split size in MB.
If this option is not specified, AzCopy will
export table data to single file.
If the table data is exported to a blob, and
the exported file size reaches the 200 GB limit
for blob size, then AzCopy will split the
exported file, even if this option is not
specified.

/EntityOperation:<InsertOrSkip
| InsertOrMerge
| InsertOrReplace> Specifies the table data import behavior.
InsertOrSkip – Skips an existing entity or
inserts a new entity if it does not exist in
the table.
InsertOrMerge – Merges an existing entity or
inserts a new entity if it does not exist in
the table.
InsertOrReplace – Replaces an existing entity
or inserts a new entity if it does not exist
in the table.

/Manifest:<manifest-file> Specifies the manifest file name for the table
export and import operation.
This option is optional during the export
operation, AzCopy will generate a manifest file
with predefined name if this option is not
specified.
This option is required during the import
operation for locating the data files.

/PayloadFormat:<JSON | CSV> Specifies the format of the exported data file.
If this option is not specified, by default
AzCopy exports data file in JSON format.

##
## Samples ##
##

#1 – Download a blob from Blob storage to the file system, for example,
download 'https://myaccount.blob.core.windows.net/mycontainer/abc.txt'
to 'D:\test\'
a) Use directory transfer if you have READ and LIST permission of the source data:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/
/Dest:D:\test\ /SourceKey:key /Pattern:"abc.txt"
b) Use single file transfer if you have READ permission of the source data:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/abc.txt
/Dest:D:\test\abc.txt /SourceSAS:"<SourceSASWithReadPermission>"

#2 – Copy a blob within a storage account
a) Use directory transfer if you have READ and LIST permission of the source data:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1/
/Dest:https://myaccount.blob.core.windows.net/mycontainer2/
/SourceKey:key /DestKey:key /Pattern:"abc.txt"
b) Use single file transfer if you have READ permission of the source data:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1/abc.txt
/Dest:https://myaccount.blob.core.windows.net/mycontainer2/abc.txt
/SourceSAS:"<SourceSASWithReadPermission>" /DestKey:key

#3 – Upload files and subfolders in a directory to a container, recursively
AzCopy /Source:D:\test\
/Dest:https://myaccount.blob.core.windows.net/mycontainer/
/DestKey:key /S

#4 – Upload files matching the specified file pattern to a container,
recursively.
AzCopy /Source:D:\test\
/Dest:https://myaccount.blob.core.windows.net/mycontainer/ /DestKey:key
/Pattern:*ab* /S

#5 – Download blobs with the specified prefix to the file system, recursively
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/
/Dest:D:\test\ /SourceKey:key /Pattern:"a" /S

#6 – Download files and subfolders in an Azure file share to the file system,
recursively
AzCopy /Source:https://myaccount.file.core.windows.net/mycontainer/
/Dest:D:\test\ /SourceKey:key /S

#7 – Upload files and subfolders from the file system to an Azure file share,
recursively
AzCopy /Source:D:\test\
/Dest:https://myaccount.file.core.windows.net/mycontainer/
/DestKey:key /S

#8 – Export an Azure table to a local folder
AzCopy /Source:https://myaccount.table.core.windows.net/myTable/
/Dest:D:\test\ /SourceKey:key

#9 – Export an Azure table to a blob container
AzCopy /Source:https://myaccount.table.core.windows.net/myTable/
/Dest:https://myaccount.blob.core.windows.net/mycontainer/
/SourceKey:key1 /Destkey:key2

#10 – Import data in a local folder to a new table
AzCopy /Source:D:\test\
/Dest:https://myaccount.table.core.windows.net/mytable1/ /DestKey:key
/Manifest:"myaccount_mytable_20140103T112020.manifest"
/EntityOperation:InsertOrReplace

#11 – Import data in a blob container to an existing table
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/
/Dest:https://myaccount.table.core.windows.net/mytable/ /SourceKey:key1
/DestKey:key2 /Manifest:"myaccount_mytable_20140103T112020.manifest"
/EntityOperation:InsertOrMerge

#12 – Synchronously copy blobs between two Azure Storage endpoints
AzCopy /Source:https://myaccount1.blob.core.windows.net/mycontainer/
/Dest:https://myaccount2.blob.core.windows.net/mycontainer/
/SourceKey:key1 /DestKey:key2 /Pattern:ab /SyncCopy

------------------------------------------------------------------------------
Learn more about AzCopy and Download at
http://aka.ms/azcopy.
------------------------------------------------------------------------------

 

Download Microsoft Azure Storage Explorer here



#Microsoft Introduction to Windows Admin Center #Winserv #Windows10 #Azure #hyperv #AzureStack

Microsoft is introducing the new Windows Admin Center (formerly Project Honolulu).

Last weekend I built my MVPLAB.LOCAL domain with Project Honolulu version 1803 and a Windows Server Insider build.

I made my domain controller first, and later a cluster with Windows Server Insider build 17639.

With Microsoft Windows Admin Center it's great to manage Windows Server Core servers: you get a rich GUI while keeping the small footprint of a Core server OS 🙂

Windows Admin Center is a new, locally-deployed, browser-based management tool set that lets you manage your Windows Servers with no Azure or cloud dependency. Windows Admin Center gives you full control over all aspects of your server infrastructure and is particularly useful for managing servers on private networks that are not connected to the Internet.
Windows Admin Center is the modern evolution of “in-box” management tools, like Server Manager and MMC. It complements System Center and Operations Management Suite – it’s not a replacement.
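Installation is a single MSI on a management machine or gateway server. As a sketch of an unattended install, based on the options in Microsoft's install documentation (the MSI file name varies per release):

msiexec /i WindowsAdminCenter1804.msi /qn /L*v install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate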

Architecture

In my MVPLAB.LOCAL environment I was busy setting up a gateway to Azure for hybrid cloud management:

But with Microsoft's announcement of Windows Admin Center version 1804 yesterday, I will upgrade my MVPLAB 😉

Download Windows Admin Center here



Microsoft #Azure DevTest LAB is Great for #Education and #DevOps

Azure DevTest Labs can be used to implement many key scenarios in addition to dev/test. One of those scenarios is to set up a lab for training. Azure DevTest Labs allows you to create a lab where you can provide custom templates that each trainee can use to create identical and isolated environments for training. You can apply policies to ensure that training environments are available to each trainee only when they need them and contain enough resources – such as virtual machines – required for the training. Finally, you can easily share the lab with trainees, which they can access in one click.

Creating your own DevTest Lab in your Microsoft Azure subscription is easy:

Select Developer tools and then DevTest Labs

Give your DevTest Lab a name and a resource group

I already have it installed with some virtual machines.

When you go to Configuration and policies, you can configure your DevTest Lab for your users.

From here you can manage and configure your DevTest Lab.

Costs per Resource and who is the Owner

You can give your DevTest Lab users full control over virtual machine sizes, but then you have to watch your costs.
To stay in control, you can decide which VM sizes the DevTest Lab users may select, from a small Standard A2 VM to a powerful GS5 virtual machine.

Then you can set how many virtual machines each DevTest Lab user can create, and how many virtual machines can be added to the lab as a whole:

Virtual Machines Per User

Virtual Machines per LAB

Here you can create a DevTest Lab announcement for the users.

To keep your costs under control efficiently, you can auto-shutdown the complete lab and start it again on a schedule.
That way you don't pay for compute when it's not in use, which keeps your total costs low.

Auto Start

Auto Shutdown

Important for your Azure DevTest Lab are the images from the Azure Marketplace, but you can also upload your own custom images:

Azure Market Place

Of course, you can also add repositories to your Azure DevTest Lab:

When you have installed all of this, you can configure identity and access management for your Azure DevTest Lab users.

I gave Student01 the DevTest Lab User role.

When you log in as the Azure DevTest Lab user, you see your resources and the lab.

In the Activity log you can see what is happening in your lab.

For teachers in education, Microsoft Azure DevTest Labs is a great solution for working with IT students and letting them develop their own projects for school.

Here you see how easily you can roll out a Kali Linux virtual machine in your Azure DevTest Lab.

Select the Kali-Linux Image

Select your Virtual Machine Settings

Here you can select artifacts for your VM

You can download your JSON ARM template here; see the deployment sketch below.
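If you download that template, you can redeploy the same VM from the Azure CLI. A minimal sketch, assuming you saved the files as azuredeploy.json and azuredeploy.parameters.json and have an existing resource group:

az group deployment create --resource-group myDevTestLabRG --template-file azuredeploy.json --parameters @azuredeploy.parameters.json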

Your Kali Linux VM is being created in your Azure DevTest Lab.

I like Microsoft Azure DevTest Labs a lot, and I hope you do too 🙂

More information about Microsoft Azure DevTest Labs is here on Docs.



#Microsoft Azure CloudShell Bash in Visual Studio Code #Azure #VSC #DevOps

 

If you don't have Microsoft Visual Studio Code yet: it's an awesome free, open-source tool for DevOps and IT pros.

https://code.visualstudio.com/download

When you have installed VSC, you can add extensions to your Visual Studio Code; one of them is called Azure Account.
When you add this extension, you can connect to Microsoft Azure Cloud Shell in Visual Studio Code.
But before we can use this extension to connect to Azure Cloud Shell, we need Node.js version 6 or higher installed on your OS.

Go to NodeJS and Download

Click Next.

Accept the Terms and click Next.

Click Next

Click Next

It will also install a shortcut to the online documentation for this version of Node.js (v9.6.1).

Click on Install

Click Finish
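You can verify the installation from a new command prompt; the Azure Account extension needs version 6 or higher:

node --version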

With Ctrl+Shift+P you will see all the commands (1).
Choose Azure: Open Bash in Cloud Shell (2).

When you do this, it will first perform a Microsoft Azure device login to connect to your Azure subscription, like this:

Type the code which is in VSC here

Azure will see that you are connecting with Visual Studio Code.
Click Continue.

Log in with the Azure account for your subscription.

The connection between VSC and Azure is made.

Now when you choose Azure: Open Bash in Cloud Shell again, it will show the Azure Cloud Shell in your VSC.

You are online with Azure Cloud Shell.
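From that Bash prompt you are already authenticated, so standard Azure CLI commands work right away, for example:

az account show
az group list --output table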

Just Awesome 😉
Cheers James