mountainss Cloud and Datacenter Management Blog

Microsoft System Center blog site about virtualization on-premises and in the cloud



#Microsoft Azure DevOps Projects and Infrastructure as Code #Azure #IaC #DevOps


Microsoft Azure DevOps Project for CI/CD

The Azure DevOps Project presents a simplified experience where you bring your existing code and Git repository, or choose from one of the sample applications to create a continuous integration (CI) and continuous delivery (CD) pipeline to Azure. The DevOps project automatically creates Azure resources such as a new Azure virtual machine, creates and configures a release pipeline in VSTS that includes a build definition for CI, sets up a release definition for CD, and then creates an Azure Application Insights resource for monitoring.

Infrastructure as Code (IaC) gives you benefits such as:

  • Consistent naming conventions for Azure components
  • Alignment with your company policies
  • Reusable templates
  • Automatic documentation and a CMDB of deployments in your repository
  • Rapid deployments
  • Flexibility and scalability in your code for Azure deployments

As a large enterprise you don’t want lots of employees clicking and typing in the Azure Portal to get the job done in a consistent way. Deployments and changes will drift over time, because people make mistakes. Developers automate their build process before they publish an application, so why shouldn’t DevOps engineers and IT pros do the same for infrastructure?

In the following step-by-step guide you will learn how to create a Microsoft Azure DevOps Project and build a CI/CD pipeline that deploys a virtual machine running your ASP.NET application.

Prerequisites:
  • An Azure subscription. You can get one free through Visual Studio Dev Essentials.
  • Access to a GitHub or external Git repository that contains .NET, Java, PHP, Node, Python, or static web code.

Here you find the GitHub for Developer Guide

When you have the prerequisites in place you can start with the following steps:

Search for DevOps under All services in the Azure Portal

Select .NET and click Next

You can see where you are in the flow of creating your CI/CD pipeline. If you need an Azure SQL Database for your ASP.NET application, select the optional Add a Database step. This gives you Azure SQL as a managed service (PaaS).

Database-as-a-Service
(I didn’t choose SQL here)


In this step select Virtual Machine and click Next

From here you can create a new Visual Studio Team Services (VSTS) account or use an existing one. After selecting VSTS you can manage your Azure settings; clicking Change lets you adjust the Azure options.

 

Select the Virtual Machine you need for your Application.

Here you see the Deployment Running

Important for Infrastructure as Code (IaC): the deployment template can be saved to the library and/or downloaded for reuse, or you can bake your own policies into the template.
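A common way to reuse a downloaded template is to keep the template itself generic and vary a parameters file per environment. This is a hedged sketch; the parameter names and values are hypothetical, not taken from the actual DevOps Project template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "virtualMachineName": { "value": "webvm-prod-01" },
    "virtualMachineSize": { "value": "Standard_DS1_v2" },
    "adminUsername": { "value": "azureadmin" }
  }
}
```

The same template can then serve dev, test, and production simply by swapping parameter files.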

When you save it to the Azure library you can add release notes and specify the publisher

In the Microsoft Azure DevOps Project main dashboard you will see the status of your CI/CD pipeline and whether a release is in progress. On the right side of the dashboard you see the Azure resources: the application endpoint, the virtual machine, and Application Insights for monitoring. When the CI/CD pipeline deployment has succeeded you can browse to your ASP.NET application.

Your Application.

Your Virtual Machine running, and being monitored.


The Microsoft Azure DevOps Project CI/CD Pipeline is Completed.

Application Insights is an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your live web application. It will automatically detect performance anomalies. It includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app. It’s designed to help you continuously improve performance and usability. It works for apps on a wide variety of platforms including .NET, Node.js and J2EE, hosted on-premises or in the cloud. It integrates with your DevOps process, and has connection points to a variety of development tools. It can monitor and analyze telemetry from mobile apps by integrating with Visual Studio App Center and HockeyApp.

You can drill down into the error to see what is happening.

Azure Application Insights topology

Application Insights is aimed at the development team, to help you understand how your app is performing and how it’s being used. It monitors:
  • Request rates, response times, and failure rates – Find out which pages are most popular, at what times of day, and where your users are. See which pages perform best. If your response times and failure rates go high when there are more requests, then perhaps you have a resourcing problem.
  • Dependency rates, response times, and failure rates – Find out whether external services are slowing you down.
  • Exceptions – Analyse the aggregated statistics, or pick specific instances and drill into the stack trace and related requests. Both server and browser exceptions are reported.
  • Page views and load performance – reported by your users’ browsers.
  • AJAX calls from web pages – rates, response times, and failure rates.
  • User and session counts.
  • Performance counters from your Windows or Linux server machines, such as CPU, memory, and network usage.
  • Host diagnostics from Docker or Azure.
  • Diagnostic trace logs from your app – so that you can correlate trace events with requests.
  • Custom events and metrics that you write yourself in the client or server code, to track business events such as items sold or games won.

You can also drill down into Microsoft Azure Log Analytics and run your analytics queries to get the right information you want for troubleshooting. More information on Azure Log Analytics and queries is on MSFT docs.
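As an example, a query like the one below over the standard Application Insights schema summarizes recent exceptions; the 24-hour window is just an illustration:

```kusto
// Count exceptions from the last 24 hours by type and problem ID
exceptions
| where timestamp > ago(24h)
| summarize exceptionCount = count() by type, problemId
| order by exceptionCount desc
```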

From Application Insights we can see it was an exception error

Because the Azure DevOps Project is connected with VSTS you can follow the build and release here too, and you get documentation of the CI/CD pipeline.

From here you can work with your developers and DevOps team and manage the users and groups security in the CI/CD pipeline for the next build. Working together to build innovative apps via VSTS from one dashboard:

VSTS Dashboard

The next day you can see it was a one-time error and the pipeline is running fine 😉

For more information about all the possibilities with Microsoft Azure DevOps Project go to MSFT Docs

DevOps and Microsoft :

DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.

To learn DevOps, please visit this Microsoft DevOps site

Conclusion:

Investing in your CI/CD pipeline and building out your own environment is important before you deploy to Azure production for your business. Keep your ARM templates and code in repositories like Git or VSTS. When you have all of this in place you are in control of consistent deployments and changes in the Azure cloud. I hope this blog post is useful for you and your company. Start today with Infrastructure as Code (IaC) and get the benefits 😉




I Love #Microsoft Azure CloudShell in Visual Studio Code #VSC #Azure #Cloud

Azure Cloud Shell in VSC

Azure Cloud Shell is an interactive, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work. Linux users can opt for a Bash experience, while Windows users can opt for PowerShell.

Here you find the Installation of Azure Cloud Shell in Visual Studio Code

As Easy as this 😉

More Technical information about Azure Cloud Shell on Microsoft Docs



Deploying Containers on #Kubernetes Cluster in #Docker for Windows CE and on #Azure AKS

Kubernetes Cluster via Docker for Windows CE Edge

Docker CE for Windows is Docker designed to run on Windows 10. It is a native Windows application that provides an easy-to-use development environment for building, shipping, and running dockerized apps. Docker CE for Windows uses Windows-native Hyper-V virtualization and networking and is the fastest and most reliable way to develop Docker apps on Windows. Docker CE for Windows supports running both Linux and Windows Docker containers.
Download Docker for Windows Community Edition Edge here

Docker for Windows 18.02 CE Edge and later include a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.

I’m using Docker for Windows CE version 18.05.0

Now your Single node Kubernetes Cluster is running.

To get the Kubernetes Dashboard you have to install it with kubectl:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Run kubectl proxy

Keep this running.

Browse to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login and you can skip kubeconfig for now.

You are now in the Kubernetes Dashboard.

Now it’s time to make your first containers (Pods) on Kubernetes.
Click on +CREATE in the upper right corner.

As example code I used a YAML manifest to deploy Nginx with 3 replicas
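The manifest was along the lines of this sketch; the deployment name, labels, and image tag shown here are illustrative choices:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3          # three Nginx Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
        ports:
        - containerPort: 80
```

You can paste this into the +CREATE dialog, or apply it from the command line with `kubectl apply -f nginx-deployment.yaml` and later scale it with `kubectl scale deployment nginx-deployment --replicas=5`.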

Deploying the Nginx Containers (Pods)

Nginx is running on Kubernetes.

With Microsoft Visual Studio Code and the Kubernetes extension you can play with Nginx Containers (pods) locally on your laptop.

When you need more capacity and want to scale out with more Containers (Pods) for your solution, you can use the Microsoft Azure cloud with Azure Kubernetes Service (AKS).

Monitor Azure Kubernetes Service (AKS) with container health (Preview) and with Analytics

 



#Microsoft Build 2018 Sessions and Content Overview #Azure #AzureStack #MSBuild2018

Microsoft Build 2018 – Technology Keynote: Microsoft Azure

With Scott Guthrie @scottgu


Inside Azure Datacenter Architecture

with Mark Russinovich @markrussinovich


Architecting and Building Hybrid Cloud Apps for Azure and Azure Stack.
With Filippo Seracini @pipposera and Ricardo Mendes @rifmendes from the AzureStack Team

Container DevOps in Azure
With Jessica Deen @jldeen and Steven Murawski @stevenmurawski


Best Practices with Azure & Kubernetes

Follow @rimmanehme

Microsoft Azure CosmosDB @ Build 2018 The Catalyst for next Generation Apps


From Zero to Azure with Python & VSC


Secure the intelligent edge with Azure Sphere


Satya Nadella – Vision Keynote

Here you can find all the Microsoft Build 2018 Sessions and content.



Deploy #Azure WebApp with Visual Studio Code and Play with #Kudu and App Service Editor and #VSC

When you have installed Microsoft Visual Studio Code, which is free and open source with Git integration, debugging, and lots of extensions available,
you activate the Microsoft Azure App Service extension in VSC.

Azure App Service Extension

You can easily install more Azure extensions here.

On the left you will see your Azure subscription, and by clicking the + you create a new Azure Web App.

Enter the name of the Resource Group

Select your OS Windows or Linux

Add the Name of the New App Service Plan

Choose an App Service plan (see more information here)

Select Azure Region

After this it will install your Microsoft Azure Web App in the Cloud in a couple of seconds 🙂

 

When you open the Azure Portal you will see your App Service plan running.

From here you can configure your Azure Web App for Continuous Delivery, and use different tools like VSC, Kudu, or the Azure App Service Editor.

Azure Web Apps enables you to build and host web applications in the programming language of your choice without managing infrastructure. It offers auto-scaling and high availability, supports both Windows and Linux, and enables automated deployments from GitHub, Visual Studio Team Services, or any Git repo.

Learn how to use Azure Web Apps with Microsoft quickstarts, tutorials, and samples.

Configure Continuous Deployment from the Azure Portal.

Or
Continuous Deployment to Azure App Service

Developer tools from the Azure Portal with App Service Editor.

 

Azure App Services Editor

From here you can open Kudu to manage your Azure Web App and Debug via Console :

Kudu Debug console in CMD

Or the Kudu Debug Console in PowerShell 😉

Kudu Process Explorer

Here you find more information about Kudu for your Azure Web App on GitHub

And coming back to Microsoft Visual Studio Code, you can manage and build your Azure Web App from here too:

Azure Web App Services in VSC

I hope this first step-by-step guide is useful for getting started with Microsoft Azure Web Apps and Visual Studio Code to build your pipeline.
More Information at Visual Studio Code

Azure Web Apps Overview



#Microsoft Azure Storage Tools AzCopy and #Azure Storage Explorer #Cloud #AzureStack

Microsoft Azure Storage tools
Type azcopy /? for help on AzCopy.
C:\Program Files (x86)\Microsoft SDKs\Azure>azcopy /?
——————————————————————————
AzCopy 7.1.0 Copyright (c) 2017 Microsoft Corp. All Rights Reserved.
——————————————————————————

AzCopy </Source:> </Dest:> [/SourceKey:] [/DestKey:] [/SourceSAS:] [/DestSAS:]
[/V:] [/Z:] [/@:] [/Y] [/NC:] [/SourceType:] [/DestType:] [/S]
[/Pattern:] [/CheckMD5] [/L] [/MT] [/XN] [/XO] [/A] [/IA] [/XA]
[/SyncCopy] [/SetContentType] [/BlobType:] [/Delimiter:] [/Snapshot]
[/PKRS:] [/SplitSize:] [/EntityOperation:] [/Manifest:]
[/PayloadFormat:]
##
## Common Options ##
##
/Source:<source> Specifies the source data from which to copy.
The source can be a directory including:
a file system directory, a blob container,
a blob virtual directory, a storage file share,
a storage file directory, or an Azure table.
The source can also be a single file including:
a file system file, a blob or a storage file.
The source is interpreted according to following rules:
1) When either file pattern option /Pattern or
recursive mode option /S is specified,
the source will be interpreted to a directory.
2) When both file pattern option /Pattern and
recursive mode option /S are not specified,
the source can be a single file or a directory.
In this case, AzCopy will choose an existing
location as the source, if the source is both
an existing file and an existing directory,
the source will be interpreted to a single file.

/Dest:<destination> Specifies the destination to copy to.
The destination can be a directory including:
a file system directory, a blob container,
a blob virtual directory, a storage file share,
a storage file directory, or an Azure table.
The destination can also be a single file including:
a file system file, a blob or a storage file.
The destination is interpreted according to following rules:
1) When source is a single file, destination
is interpreted as a single file.
2) When source is a directory, destination
is interpreted as a directory.

/SourceKey:<storage-key> Specifies the storage account key for the
source resource.

/DestKey:<storage-key> Specifies the storage account key for the
destination resource.

/SourceSAS:<SAS-Token> Specifies a Shared Access Signature with READ
and LIST permissions for the source (if
applicable). Surround the SAS with double
quotes, as it may contains special command-line
characters.
The SAS must be a Container/Share/Table SAS, or
an Account SAS with ResourceType that includes
Container.
If the source resource is a blob container,
and neither a key nor a SAS is provided, then
the blob container will be read via anonymous
access.
If the source is a file share or table, a key or
a SAS must be provided.

/DestSAS:<SAS-Token> Specifies a Shared Access Signature (SAS) with
READ and WRITE permissions for the
destination (if applicable). When /Y is
specified, and /XO /XN are not specified, the SAS
can have only WRITE permission for the operation
to succeed.
Surround the SAS with double quotes, as it may
contains special command-line characters.
The SAS must be a Container/Share/Table SAS, or
an Account SAS with ResourceType that includes
Container.
If the destination resource is a blob container,
file share or table, you can either specify this
option followed by the SAS token, or you can
specify the SAS as part of the destination blob
container, file share or table’s URI, without
this option.
This option is not supported when asynchronously
copying between two different types of storage
service or between two different accounts.

/V:[verbose-log-file] Outputs verbose status messages into a log
file.
By default, the verbose log file is named
AzCopyVerbose.log in
%LocalAppData%\Microsoft\Azure\AzCopy. If you
specify an existing file location for this
option, the verbose log will be appended to
that file.

/Z:[journal-file-folder] Specifies a journal file folder for resuming an
operation.
AzCopy always supports resuming if an
operation has been interrupted.
If this option is not specified, or it is
specified without a folder path, then AzCopy
will create the journal file in the default
location, which is
%LocalAppData%\Microsoft\Azure\AzCopy.
Each time you issue a command to AzCopy, it
checks whether a journal file exists in the
default folder, or whether it exists in a
folder that you specified via this option. If
the journal file does not exist in either
place, AzCopy treats the operation as new and
generates a new journal file.
If the journal file does exist, AzCopy will
check whether the command line that you input
matches the command line in the journal file.
If the two command lines match, AzCopy resumes
the incomplete operation. If they do not match,
you will be prompted to either overwrite the
journal file to start a new operation, or to
cancel the current operation.
The journal file is deleted upon successful
completion of the operation.
Note that resuming an operation from a journal
file created by a previous version of AzCopy
is not supported.

/@:<parameter-file> Specifies a file that contains parameters.
AzCopy processes the parameters in the file
just as if they had been specified on the
command line.
In a response file, you can either specify
multiple parameters on a single line, or
specify each parameter on its own line. Note
that an individual parameter cannot span
multiple lines.
Response files can include comments lines that
begin with the # symbol.
You can specify multiple response files.
However, note that AzCopy does not support
nested response files.

/Y Suppresses all AzCopy confirmation prompts.

/NC:<number-of-concurrent> Specifies the number of concurrent operations.
AzCopy by default starts a certain number of
concurrent operations to increase the data
transfer throughput.
Note that large number of concurrent operations
in a low-bandwidth environment may overwhelm
the network connection and prevent the
operations from fully completing. Throttle
concurrent operations based on actual available
network bandwidth.
The upper limit for concurrent operations is
512.

##
## Options – Applicable for Blob and Table Service Operations ##
##

/SourceType:<blob | table> Specifies that the source resource is a blob
or table available in the local development
environment, running in the storage emulator.

/DestType:<blob | table> Specifies that the destination resource is a
blob or table available in the local
development environment, running in the
storage emulator.

##
## Options – Applicable for Blob and File Service Operations ##
##

/S Specifies recursive mode for copy operations.
The /S parameter is only valid when the
source is a directory.
In recursive mode, AzCopy will copy all blobs
or files that match the specified file
pattern, including those in subfolders.

/Pattern:<file-pattern> Specifies a file pattern that indicates which
files to copy.
The behavior of the /Pattern parameter is
determined by the location of the source data,
and the presence of the recursive mode option.
The /Pattern parameter is only valid when the
source is a directory.
Recursive mode is specified via option /S.

If the specified source is a directory in
the file system, then standard wildcards are
in effect, and the file pattern provided is
matched against files within the directory.
If option /S is specified, then AzCopy also
matches the specified pattern against all
files in any subfolders beneath the directory.

If the specified source is a blob container or
virtual directory, then wildcards are not
applied. If option /S is specified, then AzCopy
interprets the specified file pattern as a blob
prefix. If option /S is not specified, then
AzCopy matches the file pattern against exact
blob names.
If the specified source is an Azure file share,
then you must either specify the exact file
name, (e.g. abc.txt) to copy a single file, or
specify option /S to copy all files in the
share recursively. Attempting to specify both a
file pattern and option /S together will result
in an error.

AzCopy uses case-sensitive matching when the
/Source is a blob, blob container or blob virtual
directory, and uses case-insensitive matching
in all the other cases.

The default file pattern used when no file
pattern is specified is *.* for a file system
location or an empty prefix for an Azure
Storage location.
Specifying multiple file patterns is not
supported.

/CheckMD5 Calculates an MD5 hash for downloaded data and
verifies that the MD5 hash stored in the blob
or file’s Content-MD5 property matches the
calculated hash. The MD5 check is turned off by
default, so you must specify this option to
perform the MD5 check when downloading data.
Note that Azure Storage doesn’t guarantee that
the MD5 hash stored for the blob or file is
up-to-date. It is client’s responsibility to
update the MD5 whenever the blob or file is
modified.
AzCopy always sets the Content-MD5 property for
an Azure blob or file after uploading it to the
service.

/L Specifies a listing operation only; no data is
copied.
AzCopy will interpret the using of this option as
a simulation for running the command line without
this option /L and count how many objects will
be copied, you can specify option /V at the same
time to check which objects will be copied in
the verbose log.
The behavior of this option is also determined by
the location of the source data and the presence
of the recursive mode option /S and file pattern
option /Pattern.
When using this option, AzCopy requires LIST and READ
permission of the source location if source is a directory,
or READ permission of the source location if source
is a single file.

/MT Sets the downloaded file’s last-modified time
to be the same as the source blob or file’s.

/XN Excludes a newer source resource. The resource
will not be copied if the source is the same
or newer than destination.

/XO Excludes an older source resource. The resource
will not be copied if the source resource is the
same or older than destination.

/A Uploads only files that have the Archive
attribute set.

/IA:[RASHCNETOI] Uploads only files that have any of the
specified attributes set.
Available attributes include:
R Read-only files
A Files ready for archiving
S System files
H Hidden files
C Compressed file
N Normal files
E Encrypted files
T Temporary files
O Offline files
I Not content indexed Files

/XA:[RASHCNETOI] Excludes files from upload that have any of the
specified attributes set.
Available attributes include:
R Read-only files
A Files ready for archiving
S System files
H Hidden files
C Compressed file
N Normal files
E Encrypted files
T Temporary files
O Offline files
I Not content indexed Files

/SyncCopy Indicates whether to synchronously copy blobs
or files among two Azure Storage end points.
AzCopy by default uses server-side
asynchronous copy. Specify this option to
download the blobs or files from the service
to local memory and then upload them to the
service.
/SyncCopy can be used in below scenarios:
1) Copying from Blob storage to Blob storage.
2) Copying from File storage to File storage.
3) Copying from Blob storage to File storage.
4) Copying from File storage to Blob storage.

/SetContentType:[content-
type] Specifies the content type of the destination
blobs or files.
AzCopy by default uses
"application/octet-stream" as the content type
for the destination blobs or files. If option
/SetContentType is specified without a value
for "content-type", then AzCopy will set each
blob or file's content type according to its
file extension. To set same content type for
all the blobs, you must explicitly specify a
value for "content-type".

##
## Options – Only applicable for Blob Service Operations ##
##

/BlobType:<page | block
| append> Specifies whether the destination blob is a
block blob, a page blob or an append blob.
If the destination is a blob and this option
is not specified, then by default AzCopy will
create a block blob.

/Delimiter:<delimiter> Indicates the delimiter character used to
delimit virtual directories in a blob name.
By default, AzCopy uses / as the delimiter
character. However, AzCopy supports using any
common character (such as @, #, or %) as a
delimiter. If you need to include one of these
special characters on the command line, enclose
it with double quotes.
This option is only applicable for downloading
from an Azure blob container or virtual directory.

/Snapshot Indicates whether to transfer snapshots. This
option is only valid when the source is a
blob container or blob virtual directory.
The transferred blob snapshots are renamed in
this format: [blob-name] (snapshot-time)
[extension].
By default, snapshots are not copied.

##
## Options – only applicable for Table Service Operations ##
##

/PKRS:<"key1#key2#key3#…"> Splits the partition key range to enable
exporting table data in parallel, which
increases the speed of the export operation.
If this option is not specified, then AzCopy
uses a single thread to export table entities.
For example, if the user specifies
/PKRS:"aa#bb", then AzCopy starts three
concurrent operations.
Each operation exports one of three partition
key ranges, as shown below:
[<first partition key>, aa)
[aa, bb)
[bb, <last partition key>]

/SplitSize:<file-size> Specifies the exported file split size in MB.
If this option is not specified, AzCopy will
export table data to single file.
If the table data is exported to a blob, and
the exported file size reaches the 200 GB limit
for blob size, then AzCopy will split the
exported file, even if this option is not
specified.

/EntityOperation:<InsertOrSkip
| InsertOrMerge
| InsertOrReplace> Specifies the table data import behavior.
InsertOrSkip – Skips an existing entity or
inserts a new entity if it does not exist in
the table.
InsertOrMerge – Merges an existing entity or
inserts a new entity if it does not exist in
the table.
InsertOrReplace – Replaces an existing entity
or inserts a new entity if it does not exist
in the table.

/Manifest:<manifest-file> Specifies the manifest file name for the table
export and import operation.
This option is optional during the export
operation, AzCopy will generate a manifest file
with predefined name if this option is not
specified.
This option is required during the import
operation for locating the data files.

/PayloadFormat:<JSON | CSV> Specifies the format of the exported data file.
If this option is not specified, by default
AzCopy exports data file in JSON format.

##
## Samples ##
##

#1 – Download a blob from Blob storage to the file system, for example,
download 'https://myaccount.blob.core.windows.net/mycontainer/abc.txt'
to 'D:\test\'
a) Use directory transfer if you have READ and LIST permission of the source data:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/
/Dest:D:\test\ /SourceKey:key /Pattern:"abc.txt"
b) Use single file transfer if you have READ permission of the source data:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/abc.txt
/Dest:D:\test\abc.txt /SourceSAS:"<SourceSASWithReadPermission>"

#2 – Copy a blob within a storage account
a) Use directory transfer if you have READ and LIST permission of the source data:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1/
/Dest:https://myaccount.blob.core.windows.net/mycontainer2/
/SourceKey:key /DestKey:key /Pattern:"abc.txt"
b) Use single file transfer if you have READ permission of the source data:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1/abc.txt
/Dest:https://myaccount.blob.core.windows.net/mycontainer2/abc.txt
/SourceSAS:"<SourceSASWithReadPermission>" /DestKey:key

#3 – Upload files and subfolders in a directory to a container, recursively
AzCopy /Source:D:\test\
/Dest:https://myaccount.blob.core.windows.net/mycontainer/
/DestKey:key /S

#4 – Upload files matching the specified file pattern to a container,
recursively.
AzCopy /Source:D:\test\
/Dest:https://myaccount.blob.core.windows.net/mycontainer/ /DestKey:key
/Pattern:*ab* /S

#5 – Download blobs with the specified prefix to the file system, recursively
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/
/Dest:D:\test\ /SourceKey:key /Pattern:"a" /S

#6 – Download files and subfolders in an Azure file share to the file system,
recursively
AzCopy /Source:https://myaccount.file.core.windows.net/mycontainer/
/Dest:D:\test\ /SourceKey:key /S

#7 – Upload files and subfolders from the file system to an Azure file share,
recursively
AzCopy /Source:D:\test\
/Dest:https://myaccount.file.core.windows.net/mycontainer/
/DestKey:key /S

#8 – Export an Azure table to a local folder
AzCopy /Source:https://myaccount.table.core.windows.net/myTable/
/Dest:D:\test\ /SourceKey:key

#9 – Export an Azure table to a blob container
AzCopy /Source:https://myaccount.table.core.windows.net/myTable/
/Dest:https://myaccount.blob.core.windows.net/mycontainer/
/SourceKey:key1 /DestKey:key2

#10 – Import data in a local folder to a new table
AzCopy /Source:D:\test\
/Dest:https://myaccount.table.core.windows.net/mytable1/ /DestKey:key
/Manifest:"myaccount_mytable_20140103T112020.manifest"
/EntityOperation:InsertOrReplace

#11 – Import data in a blob container to an existing table
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/
/Dest:https://myaccount.table.core.windows.net/mytable/ /SourceKey:key1
/DestKey:key2 /Manifest:"myaccount_mytable_20140103T112020.manifest"
/EntityOperation:InsertOrMerge

#12 – Synchronously copy blobs between two Azure Storage endpoints
AzCopy /Source:https://myaccount1.blob.core.windows.net/mycontainer/
/Dest:https://myaccount2.blob.core.windows.net/mycontainer/
/SourceKey:key1 /DestKey:key2 /Pattern:ab /SyncCopy

——————————————————————————
Learn more about AzCopy and Download at
http://aka.ms/azcopy.
——————————————————————————

 

Download Microsoft Azure Storage Explorer here



#GlobalAzure BootCamp Day for the Community – Microsoft #Azure Overview Info

I wish everyone around the world an Awesome Global Azure BootCamp!

Here are some Microsoft Azure links for Information :

Create your Azure Free Account Today here

Microsoft Azure Get started documentation

Microsoft Azure Technical Docs Online

Microsoft Azure SDK – Tools

Microsoft Azure Architecture Information

Microsoft Virtual Academy

Microsoft Azure Training

Microsoft Azure Self-Paced Courses on Edx

Microsoft Azure Blog site

Microsoft Azure Marketplace

Microsoft Azure on GitHub

Microsoft Azure Friday on Channel 9

Follow on Twitter:

@Azure

@AzureBackup

@AzureSupport

@AzureCosmosDB

@Scottgu

@Markrussinovich

@CoreySandersWA

#MVPBuzz

@JamesvandenBerg