
#Azure : Azure Data Factory


Azure provides many options for data ingestion, and Azure Data Factory is one of them. It is designed for scenarios where you need to transfer data regularly; in technical terms, it is a cloud data integration service built around data pipelines. Azure Data Factory rests on two key pillars: data movement and data transformation. This cloud-based data integration service lets you create data-driven workflows and orchestrate/automate the data movement and transformation processes.

Courtesy: Microsoft

Let me explain this through a scenario:

A supermarket chain is going through a transformation and is looking for ways to increase revenue and customer satisfaction. Many stores are doing well on both counts, while a few are struggling. The business has decided to close some existing stores and to open new stores in new locations as well. The stores capture customer satisfaction through survey machines installed on each PoS system: when a customer makes a payment, the cashier requests feedback. This feedback system runs on a cloud-native app and stores the data directly in the cloud in synchronous mode. The organization wants to analyze its user base by demographics. All the billing-related data is stored in an ERP system that resides in an on-premises datacenter.

The strategy team has proposed an approach to generate and visualize useful data for new markets. To fulfill this need, you have to consolidate all the data in one place, and because the analysis is ongoing it is not a one-time job: data needs to be compared by day, week, month, year, time and season. With Azure Data Factory you can move the data continuously using a data pipeline; once the data has been moved, you can transform it based on your needs and then use it with other systems or with analytics tools like Power BI to visualize it. Here is the process, which differs between version 1 and version 2.

Azure Data Factory v1:

Azure Data Factory v2:

Now let’s understand the process in detail:

Connect & Collect: Whenever you need to work with data, the first step is to collect it. In layman's terms, you copy the data from multiple sources in different ways, such as a copy utility, FTP/SFTP, scripts, etc. This data can come in multiple forms (structured, unstructured and semi-structured) and from multiple sources (on-premises systems, SaaS solutions, databases, file shares, etc.). With multiple data sources, the frequency and availability of the data will also differ. Azure Data Factory can connect to these sources and collect the data into a centralized data store such as Azure Blob Storage or Azure Data Lake Store.

Transform & Enrich: Once the raw data has been collected, you can transform the data using compute services such as HDInsight Hadoop, Spark, Data Lake Analytics and Machine Learning.

Publish: Once you have transformed the data, you can use this valuable data anywhere in the cloud or send it back to on-premises systems. It can be consumed by analytics tools such as Power BI to visualize and generate reports, or loaded into Azure SQL Data Warehouse, Azure SQL Database, Azure Cosmos DB or any other store for further use.

Monitor: Azure Data Factory v2 provides monitoring capabilities for established data integration pipelines. You can leverage the built-in support for pipeline monitoring via PowerShell, Log Analytics, Azure Monitor, the API and health panels in the Azure portal.

At present, Azure Data Factory is available in selected regions only. ADF v1 is available in the East US, East US 2, West US, West Central US and North Europe regions, while ADF v2 is available in the East US, East US 2, West US, West Central US, North Europe and West Europe regions. However, a data factory can use compute resources and data stores from other regions, so you can still leverage the service even when your data and compute live elsewhere.

Azure Data Factory pricing can be calculated based on the four parameters:

  • Number of activities run.
  • Volume of data moved.
  • SQL Server Integration Services (SSIS) compute hours.
  • Whether a pipeline is active or not.

You can calculate your pricing here.

At present, Azure Data Factory version 2 is in preview.

#Azure: Azure Data Box (Preview)


Microsoft Azure Data Box is the best service for migrating large amounts of data to Microsoft Azure storage. For small to medium amounts of cold data you can use AzCopy or the Azure Import/Export service. Data Box is more secure than Azure Import/Export and provides end-to-end data migration capabilities with the help of partners.

Let me explain Azure Data Box through a visualization.

This service can be easily requested from the Azure portal. Here is how it works:

  1. Request the Azure Data Box service through the Azure portal
  2. Microsoft ships the device to your address
  3. You receive it, connect it and fill it with data
  4. You return it to Microsoft
  5. Microsoft uploads the data based on your requirements
  6. Microsoft erases the Data Box, wiping it as per the NIST SP 800-88r1 standard

It is a fast process and doesn't waste your time. By default you get only 10 days to copy your data to the Azure Data Box; these 10 days exclude the day you receive the device and the day the carrier scans your return package. If for any reason you cannot copy your data within 10 days, you pay an extra fee on a daily basis. Here are the pricing details (courtesy: Microsoft) for one full round trip of this service:

Service                          Unit        Preview price (US)   Preview price (Europe)
Device standard shipping         1 package   $95                  $113
Import service fee (preview)     1 unit      $125                 $125
Extra day fee (preview)          1 day       $7.50                $7.50

If the device is lost or damaged for any reason, you will have to pay USD $40,000 as a recovery cost. At present this service is in preview, the above pricing applies only to the preview, and it may change at any point in time. Here is the list of Azure regions where this service is available at present (31 May 2018).

Location        Azure regions
United States   Central US, East US, East US 2, North Central US, South Central US, West Central US, West US, West US 2
Europe          North Europe, West Europe

Now, look at the list of Azure partners for this service.

All these details are applicable to preview and may change at the time of GA.

#Azure: Step by step Azure Import/Export service


In my preceding blogpost I covered the Azure Import/Export service concept and requirements. In this post, let me explain how to do it step by step.

First look at the Azure import service.

  1. Look at the data that you need to migrate, and note down the capacity, the number of drives required, the data type and the destination blob location in Microsoft Azure.
  2. Procure and prepare the drives using the WAImportExport tool and BitLocker: the WAImportExport tool copies the data and BitLocker encrypts it (a hedged example of the drive-preparation command appears after this list).
  3. Create an import job through the Azure portal and upload the journal file created by the WAImportExport tool. A journal file is created for each drive and contains the drive ID and BitLocker key.
    1. Log in to the Azure portal and search for the Import/Export service.

    2. In the Import/Export jobs panel, select "Create import/export job" to initiate a new job request.

    3. Fill in the basic configuration details as needed.

    4. In the job details panel, upload the journal files and select the destination storage account.

    5. The drop-off location is selected by default based on your storage account location; click OK.

    6. Fill in the return shipping information and verify the summary to create the job successfully.
  4. Ship the drives to the shipping address shown on the summary page.

  5. Update the delivery tracking number in your import job details and submit the import job.
  6. Once the drives are received, they will be processed in the Azure datacenter.
  7. The drives will be returned to the return address you provided once the import is complete.
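Here is a rough sketch of what the drive-preparation step can look like with the WAImportExport tool (V1 command line). The drive letter, paths, session ID and storage account key are placeholders, and the exact flags depend on the tool version, so treat this as illustrative and verify against the current WAImportExport documentation:

    REM Prepare drive D:, format and BitLocker-encrypt it, copy C:\Data,
    REM and write a journal file to upload while creating the import job
    WAImportExport.exe PrepImport /j:FirstDrive.jrn /id:session#1 ^
        /sk:<storage-account-key> /t:D /format /encrypt ^
        /srcdir:C:\Data /dstdir:importdata/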

Here is the graphical representation of the above process.

Courtesy: Microsoft

Now, look at the Azure export service.

  1. Look at the data that you need to export from Azure storage account, and note down the capacity, number of drives required, data type and destination location.
  2. Procure the number of drives that you need to export data from storage account.
  3. Create an export job through Azure portal.
    1. Log in to the Azure portal and search for the Import/Export service.

    2. In the Import/Export jobs panel, select "Create import/export job" to initiate a new job request.

    3. Fill in the basic configuration details as needed.

    4. In the job details panel, select the source storage account.

    5. The drop-off location is selected by default based on your storage account location; select the required export option and click OK.

    6. Fill in the return shipping information.

    7. Verify the summary and click OK to create the job successfully.

  4. Ship the drives to the shipping address shown on the summary page.

  5. Update the delivery tracking number in your export job details and submit the export job.
  6. Once the drives are received, they will be processed in the Azure datacenter.
  7. The drives will be encrypted with BitLocker and the keys will be provided to you via the Azure portal.
  8. The drives will be returned to the return address you provided once the export is complete.

Here is the graphical representation of the above process.

Courtesy: Microsoft

I hope this blogpost helped you with the Azure Import/Export job. Please share your feedback in the comments section.

#Azure: Azure Import/Export service


The Azure Import/Export service allows data transfer between Azure datacenters and customer locations. It is a secure service for sending or receiving medium-to-large amounts of data when bandwidth becomes a bottleneck or too costly. Among the Microsoft Azure data transfer options, AzCopy is the preferred tool for online data migration, while Azure Import/Export handles large physical data transfers in a secure and reliable manner. The data is copied to one or more drives to import it to, or export it from, Azure Blob and File storage.

This Import/Export service uses 2.5-inch SSDs, 2.5/3.5-inch SATA II and III HDDs, or a mix of these. External HDDs with a built-in USB adapter and drives in an external casing are not supported. Here is a quick snapshot of possible import and export data transfers.

  • Import
    • Storage accounts: Classic, Blob storage accounts, General Purpose v1 storage accounts
    • Supported: Azure Blob storage (block and page blobs), Azure File storage
    • Not supported: –
  • Export
    • Storage accounts: Classic, Blob storage accounts, General Purpose v1 storage accounts
    • Supported: Azure Blob storage (block, page and append blobs)
    • Not supported: Azure File storage

Points to remember while sending drives for an import job:

  • A maximum of 10 drives for each job.
  • Use only a single data volume partition.
  • The data volume must be formatted with NTFS.
  • Supported external USB adapters for copying data to internal HDDs:
    • Anker 68UPSATAA-02BU
    • Anker 68UPSHHDS-BU
    • Startech SATADOCK22UE
    • Orico 6628SUS3-C-BK (6628 Series)
    • Thermaltake BlacX Hot-Swap SATA External Hard Drive Docking Station (USB 2.0 & eSATA)

Let me explain the use cases and the process for performing an import/export job.

You can use this service in following scenarios:

  • Move data to the cloud as part of the data migration strategy.
  • Data backup to the cloud.
  • Data recovery from the cloud.
  • Data distribution to the customer sites.

Here is the high-level process, along with the components and the locations available for an Import/Export job.

Components:

  • Import/Export service in Azure portal to create a new job
  • Hard disk drives to copy the data
  • WAImportExport tool to prepare drives and encrypt data

Locations (Azure regions) available on the date of writing this blog post:

East US, West US, East US 2, West US 2, Central US, North Central US, South Central US, West Central US, North Europe, West Europe, East Asia, Southeast Asia, Australia East, Australia Southeast, Japan West, Japan East, Central India, South India, West India, Canada Central, Canada East, Brazil South, Korea Central, US Gov Virginia, US Gov Iowa, US DoD East, US DoD Central, China East, China North, UK South, Germany Central, Germany Northeast

Courtesy: Microsoft

If your Azure storage account location is not in the above list, you can still create a job and ship the drives to the alternate location specified in the tool while creating the import job.

The next blogpost covers the step-by-step process for an Azure Import/Export service job.

#Azure: Step by step Azcopy


When an organization of any size looks at the cloud, data migration becomes a focal point of every discussion. The available data transfer options can help you achieve your goal. Among the command-line methodologies, AzCopy is the best tool to migrate a reasonable amount of data. You may prefer this tool if you have hundreds of GB of data to migrate and sufficient bandwidth. You can use this tool to copy or move data between a file system and a storage account, or between storage accounts. The tool can be deployed on both Windows and Linux systems: it is built on the .NET Framework for Windows and .NET Core for Linux, and it uses a Windows-style command line on Windows and a POSIX-style command line on Linux.

Let me explain how to do it step by step on a Windows system.

First, download the latest version of Azcopy tool for Windows.

Once downloaded run the .msi file. Click on Next to continue installation.

Accept the license agreement and click on Next.

Define the destination folder and click on Next to continue.

Click on Install to begin the installation.

Click on Finish once installation completed successfully to exit the installation wizard.

Open “Microsoft Azure Storage command line” tool from the programs.

Now, look at the source and destination locations and types. If I am copying data from an internal file system to cloud blob storage, then the local file system is my source and a blob container in the cloud storage account is my destination.

Note down the location of source data.

Copy the URL of your blob container.

Copy the access key. You can find "Access keys" under Settings in the storage account.

Run the AzCopy command with the following syntax: AzCopy /source:<source path> /dest:<destination path> /destkey:<access key of destination storage account> /s
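For example, assuming a local folder C:\BlogData and a blob container named backup in a storage account called mystorageaccount (both placeholder names, not values from this post), the command looks like this:

    REM /s copies the folder recursively
    AzCopy /source:C:\BlogData /dest:https://mystorageaccount.blob.core.windows.net/backup /destkey:<storage-account-access-key> /s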

You can monitor the copy activity.

If any error occurs during copy operations, you can monitor that as well.

Note: In the example below, to simulate an error scenario, I tried to copy all my blog posts, including the blog post that I was working on at the time. Therefore, you can see the corresponding error description.

Another error was for a .tmp file. This .tmp file error can be ignored.

Now, let me explain how to retry. Run the same command and, at the "Incomplete operation with same command line…" prompt, enter Y to retry the operation for the failed data. As you can observe, the failed operation for the in-use file has now completed successfully; the .tmp file error can again be ignored.

Once you have copied all the data, go to the blob container and verify the same.

If you have a high-bandwidth internet connection or ExpressRoute, you can move large amounts of data with AzCopy as well, but it is most relevant when the data to be moved is measured in GBs (tens to a few hundred) rather than in TBs.

#Azure : Data transfer


The cloud has become a prominent option for all kinds of organizations. When any medium-to-large organization moves to the cloud, data transfer becomes one of the biggest challenges. To address this concern, Microsoft provides customers with different types of data transfer options. Before you get into the details of these options, answer the following questions:

  • How much data do we need to migrate?
  • What will be the frequency of data transfer?
  • What are the data source and destination locations, and the applicable data regulations?
  • What bottlenecks may arise at the time of migration?
  • How do the possible data migration types compare in cost, time and effort?

From the Microsoft point of view, data transfer is divided into four major categories:

  • Physical data transfer
  • Data transfer using command line tools and APIs
  • Data transfer using graphical user interface
  • Data pipeline

Let me briefly explain each of these data transfer methodologies:

Physical data transfer: widely used when you have large data sets to migrate. It can be leveraged either for a one-time data migration activity or for less frequent data migration activities. For physical data transfer you can choose a methodology based on the data size.

  • Azure Import/Export: The Azure Import/Export service can be used to transfer large amounts of data using internal SATA HDDs or SSDs. Using this service, you can securely transfer data from on-premises to cloud blob or file storage and vice versa. When procuring drives for this service, don't confuse SATA with SAS drives; order SATA III drives, as they are faster than older SATA versions and support speeds of 6 Gbps.
  • Azure Data Box: Azure Data Box is an option for transferring very large amounts of data. It is very similar to Azure Import/Export but avoids the hurdles of procuring, writing to and shipping multiple data disks. With this service, Microsoft provides a secure and reliable appliance to transfer data between on-premises and cloud blob and file storage. It is much easier than the Azure Import/Export service because Microsoft takes responsibility for the end-to-end logistics.

Data transfer using command-line tools and APIs: used when you have enough bandwidth available to migrate a limited amount of data between on-premises and cloud blob and file storage. There are multiple tools available to perform this activity (a sample upload command follows the list below).

  • AzCopy: a command-line tool to transfer data to and from Azure Blob, File and Table storage in a fast, secure and reliable manner. You can install this tool on a Windows or Linux machine to transfer data. It supports parallelism and the ability to resume a copy operation when it is interrupted.
  • Azure CLI: a command-line tool to manage Azure services and to upload data to Azure storage. The Azure CLI doesn't need any installation or configuration when used through the Azure portal (Cloud Shell).
  • PowerShell: an alternative option for Windows administrators to transfer data.
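As an illustration of the command-line route, here is a hedged Azure CLI sketch that uploads a local folder to a blob container; the account, container and folder names are placeholders:

    # Upload every file under ./data to the container named "backup"
    az storage blob upload-batch \
        --account-name mystorageaccount \
        --account-key <storage-account-access-key> \
        --destination backup \
        --source ./data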

Data transfer using a graphical user interface: the simplest way of transferring data between the cloud and on-premises. You have two options available for transferring data using graphical tools.

  • Azure Portal: the simplest way of exploring and uploading files to Azure Blob storage and Data Lake Store, but it has the limitation of exploring and uploading only one file at a time.
  • Azure Storage Explorer: a great option for GUI lovers; it provides the capability to manage, upload and download blobs, files, queues, tables and Azure Cosmos DB objects through an interactive interface. It also allows you to manage data between blob containers and between storage accounts.

Data pipeline: used when you need to transfer data regularly.

  • Azure Data Factory: an option to transfer and transform data using data-driven workflows (a.k.a. data pipelines) on a regular basis by leveraging orchestration or automation processes. It is a managed service to transfer data between Azure services, on-premises systems, or a combination of the two. Workflows can be created and scheduled based on your requirements, and the data can be processed and transformed by leveraging compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.

Apart from the above core data transfer options, the following tools can be leveraged to transfer data within specific Azure services.

#Azure : Traffic Manager


Azure Traffic Manager is your global DNS load balancer: it helps you manage traffic across multiple datacenters and regions. Traffic Manager uses DNS to direct client requests to the most appropriate endpoint; with its help, clients then connect directly to that endpoint. Traffic Manager can also be leveraged for external, non-Azure endpoints.

Let me show you how to create and configure Traffic Manager step by step.

Log in to the Azure portal and select "+ Create a resource". Select "Networking" and then select "Traffic Manager".

Here you define the name of the Traffic Manager profile, the routing method, the subscription, the resource group and the location.

Name: use a unique prefix for your Traffic Manager profile. For example, if I use ex-tm as the prefix for the profile associated with my global Exchange deployment, then the complete name of this Traffic Manager profile will be ex-tm.trafficmanager.net. Keep in mind that this name can't be changed once created.

Define the routing method that you wish to use for your Traffic Manager profile. You can change the routing method later from the Configuration panel as well.

Once you have defined all the necessary details, click "Create" to set up the Traffic Manager profile.

Once the Traffic Manager profile has been created, you can see the basic configuration in the Overview panel.
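If you prefer the command line, here is a hedged Azure CLI sketch that creates a comparable profile; the profile name, resource group and monitor settings are placeholders rather than values taken from this post:

    # Traffic Manager profile with Priority routing and HTTP health probing
    az network traffic-manager profile create \
        --name ex-tm \
        --resource-group my-rg \
        --unique-dns-name ex-tm \
        --routing-method Priority \
        --ttl 300 \
        --protocol HTTP \
        --port 80 \
        --path "/"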

Go to the Configuration panel under settings to configure your traffic manager profile.

Routing method: select a routing method based on your needs. An Azure Traffic Manager profile provides four different routing methods.

Priority: use this routing method when you want to route all the traffic to a primary service endpoint and, based on the configuration, fail traffic over to backup service endpoints if the primary endpoint fails. In simple words, it is an active-passive routing methodology.

Weighted: the weighted routing method can be leveraged when you want to distribute traffic across a set of endpoints (or several sets of endpoints) in proportion to the weight assigned to each endpoint.

Performance: the performance routing method is beneficial when you want to distribute traffic according to performance. Here the performance criterion is network latency, so traffic is redirected to the endpoint with the lowest network latency; this is often, but not always, the closest location.

Geographic: geographic routing, as the name suggests, is based on the location from which the DNS query originates. It helps in redirecting requests based on geographic region, for example to improve user experience or to comply with data regulations.

DNS time to live (TTL): because Traffic Manager works through DNS queries, you need to define how long query responses are cached. By default, it is set to 300 seconds.

Endpoint monitor settings:

  • Protocol: select the protocol for endpoint probing to check the health of the service endpoints. You get three protocols: HTTP, HTTPS, and TCP. In the case of HTTPS, the probe only checks that a certificate is present; it doesn't check whether the certificate is valid.
  • Port: select the port number based on the protocol.
  • Path: define the path setting for the HTTP and HTTPS protocols (it does not apply to TCP). Use a relative path and the name of the web page; a forward slash (/) is a valid entry for the relative path.

Fast endpoint failover settings:

  • Probing interval: enter the interval at which probing checks the health of the service endpoint. You can choose between 10 seconds for fast probing and 30 seconds for normal probing.
  • Tolerated number of failures: you can define the number of tolerated probing failures, between 0 and 9.
  • Probe timeout: the probe timeout should be a minimum of 5 seconds and must be less than the probing interval.
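These monitor and fast-failover settings can also be tuned from the command line; here is a hedged sketch (parameter availability may vary by Azure CLI version, and the values are placeholders):

    # Fast probing every 10 seconds, 5-second timeout, 3 tolerated failures
    az network traffic-manager profile update \
        --name ex-tm \
        --resource-group my-rg \
        --interval 10 \
        --timeout 5 \
        --max-failures 3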

Generate a key from the Real user measurements panel under Settings. Measurements that an application sends to Traffic Manager are identified by this RUM key. To learn more about it and about how to embed it within the application, click here.

From the Traffic view panel under Settings, you can enable traffic view to see the location, volume and latency information for the connections between your users and the Traffic Manager endpoints.

From Endpoints panel under settings, add all your service endpoints.

Azure traffic manager supports three types of endpoints.

  • Azure endpoints: use this type of endpoint to load-balance traffic for Azure cloud services.
  • External endpoints: use this type of endpoint if you want to load-balance external services that live outside the Azure environment.
  • Nested endpoints: nested endpoints are a slightly more advanced configuration in which a child Traffic Manager profile checks the health probes and propagates the results to the parent Traffic Manager profile, which then decides on the service endpoints.

Based on the endpoint type selected, fill in the rest of the required details and then add the endpoints.
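As an example, adding an external endpoint to the profile with the Azure CLI could look like the following sketch (the endpoint name and target FQDN are placeholders):

    # External endpoint used as the primary target for Priority routing
    az network traffic-manager endpoint create \
        --name primary-ep \
        --profile-name ex-tm \
        --resource-group my-rg \
        --type externalEndpoints \
        --target app1.contoso.com \
        --priority 1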

In the properties section, you can look at the traffic manager profile properties.

Under the locks section, you can create and configure the lock type to prevent changes and protecting the azure traffic manager profile.

Under the “Automation script” download the script or add to library for reuse.

In the metrics panel, you can monitor the metrics of traffic manager profile by two different parameters:

  • Endpoint Status by Endpoint
  • Queries by Endpoint Returned

I hope this article helped you to understand, create, configure and manage Azure Traffic Manager.

#Azure : Application Gateway


Like a layer 7 load balancer in your traditional datacenter, Azure Application Gateway takes care of your HTTP/HTTPS-based requests. It works at the application layer (layer 7 of the OSI model) and also acts as a reverse proxy: client connections are terminated at the gateway and then forwarded to the application.

Let me show you how to create and configure an application gateway step by step.

Log in to the Azure portal and select "+ Create a resource". Select "Networking" and then select "Application Gateway".

In the basic configuration settings:

Enter the name of the application gateway. Always use names for Azure components that are fit for purpose and easy to distinguish.

Select the application gateway tier:

  • Standard: like any other layer 7 load balancer, it provides HTTP(S) load balancing, cookie-based session affinity, secure sockets layer (SSL) offload, end-to-end SSL, URL-based content routing, multi-site routing, WebSocket support, health monitoring, SSL policies and ciphers, request redirection, multi-tenant back-end support, etc.
  • WAF: WAF is an advanced version of the standard application gateway with web application firewall capabilities. It supports all standard AG features along with protection of web applications from common web vulnerabilities and exploits. The Application Gateway WAF comes pre-configured with the OWASP (Open Web Application Security Project) ModSecurity core rule set (3.0 or 2.2.9), which provides baseline security against many of these vulnerabilities. It provides protection against SQL injection, cross-site scripting, command injection, HTTP request smuggling, HTTP response splitting, remote file inclusion attacks, HTTP protocol violations and anomalies, bots, crawlers, scanners, application misconfiguration and HTTP denial of service, etc.

Select the SKU size based on your requirement. The AG SKU comes in Small, Medium and Large sizes.

Select the instance count based on your need. The default instance count is 2.

Select subscription, resource group and location.

Once you have completed all the basic configuration settings, click OK.

In the Settings panel, complete the following application-gateway-specific configuration settings:

Select the virtual network and subnet that will be associated with this application gateway.

Complete frontend IP configuration based on your application gateway requirements.

Set idle timeout settings and set the DNS name label.

Select the listener configuration protocol and associated port number.

Once completed with the configuration details, click on OK.

If you selected HTTPS in the listener configuration, upload the PFX certificate file and provide its name and password. Once completed with the configuration, click OK.

Review the summary of the configuration and click OK to create the application gateway.
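For reference, here is a hedged Azure CLI sketch that creates a comparable Standard-tier application gateway in one shot; all resource names, the subnet and the backend server IPs are placeholders:

    # Standard_Medium gateway with 2 instances, an HTTP listener on port 80
    # and two backend servers
    az network application-gateway create \
        --name my-appgw \
        --resource-group my-rg \
        --location eastus \
        --sku Standard_Medium \
        --capacity 2 \
        --vnet-name my-vnet \
        --subnet appgw-subnet \
        --public-ip-address my-appgw-pip \
        --frontend-port 80 \
        --http-settings-port 80 \
        --http-settings-protocol Http \
        --servers 10.0.1.4 10.0.1.5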

In the overview section, review the basic configuration.

In the Configuration panel under Settings, you can change the application gateway tier, SKU size and instance count based on your requirements.

In the Web application firewall panel under Settings, you can upgrade to the WAF tier if you are using the standard tier, and you can also enable/disable the firewall, configure the firewall mode (detection/prevention), configure the OWASP rule set and apply advanced configuration for your application gateway.
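A hedged Azure CLI sketch of the same WAF settings (the gateway must already be on a WAF SKU; names are placeholders):

    # Enable the firewall in detection mode with OWASP CRS 3.0
    az network application-gateway waf-config set \
        --gateway-name my-appgw \
        --resource-group my-rg \
        --enabled true \
        --firewall-mode Detection \
        --rule-set-type OWASP \
        --rule-set-version 3.0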

In the Backend pools panel under Settings, add and configure backend pools for the application gateway. Click Add, or select an existing backend pool, to define the backend pool settings.

Click "+ Add target" to define targets using either "IP address or FQDN" or "Virtual machine". Click Rules to configure redirection if required.

In HTTP settings under Settings, add or edit backend HTTP(S) settings such as cookie-based affinity, request timeout, protocol and port.

In frontend IP configurations panel under settings, review IP configurations, type, status and associated listeners.

In listeners panel under settings, add basic and multi-site listeners.

In rules panel under settings, define basic and path-based rules based on your requirements.

In health probes panel under settings, you can add and edit health probes.

When you add a health probe, you define the following parameters (a CLI sketch follows the list below):

Name of the health probe.

Protocol, either HTTP or HTTPS.

The host name, path, interval in seconds, timeout in seconds and unhealthy threshold count.
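A hedged Azure CLI equivalent for adding such a probe (host, path and thresholds are placeholders):

    # Custom health probe checked every 30 seconds
    az network application-gateway probe create \
        --gateway-name my-appgw \
        --resource-group my-rg \
        --name my-health-probe \
        --protocol Http \
        --host app1.contoso.com \
        --path / \
        --interval 30 \
        --timeout 30 \
        --threshold 3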

In the properties section, you can look at the application gateway properties.

Under the locks section, you can create and configure the lock type to prevent changes and protecting the application gateway profile.

Under the “Automation script” download the script or add to library for reuse.

In the metrics panel, you can monitor the metrics of the application gateway by following parameters:

  • Current Connections
  • Failed Requests
  • Healthy Host Count
  • Response Status
  • Throughput
  • Total Requests
  • Unhealthy Host Count

In alert rules panel under settings, you can define conditional rules.

In diagnostics logs panel under settings, you can view the logs of the resources.

In backend health under settings, you can review the status of the backend pool.

I hope this article helped you to understand, create, configure and manage Azure application gateways.

#Azure : VNet-to-VNet Connectivity


VNet-to-VNet connectivity is another option for connecting two virtual networks. Before peering was available in Azure, a VNet-to-VNet connection was the only way to connect two virtual networks, whether in the same region or in two different regions. Connecting a virtual network to another virtual network (VNet-to-VNet) is like connecting a VNet to an on-premises site location: both connectivity types use an Azure VPN gateway to provide a secure tunnel using IPsec/IKE.

The VNets you connect can be:

  • In the same or different regions
  • In the same or different subscriptions
  • In the same or different deployment models

Let me explain how to set it up step by step. Log in to the Azure portal and go to the virtual network.

Because you are going to set up VNet-to-VNet connectivity between two virtual networks, a gateway subnet and a virtual network gateway are required in both virtual networks.

Select Subnets under Settings in the virtual network, then select "+ Gateway subnet" to create a gateway subnet for this virtual network.

Select an address range to be used by this gateway subnet. By default, the next available address range is selected.

In my case, I am using last subnet of my address space for gateway subnet.

Once added, you can review all used subnets in subnet panel.

Once you have set up the gateway subnet in your virtual network, move on and create a virtual network gateway to attach to the gateway subnet.

Define the name of the virtual network gateway. Set the gateway type to "VPN", as we are establishing a VNet-to-VNet connection. Select the VPN type, either route-based or policy-based, according to your requirements, and select the VPN gateway SKU based on your needs.

Gateway SKUs by tunnel, connection, and throughput:

SKU      S2S/VNet-to-VNet tunnels   P2S connections   Aggregate throughput benchmark
VpnGw1   Max. 30                    Max. 128          650 Mbps
VpnGw2   Max. 30                    Max. 128          1 Gbps
VpnGw3   Max. 30                    Max. 128          1.25 Gbps
Basic    Max. 10                    Max. 128          100 Mbps

Courtesy: Microsoft

Select the resource group, location and subscription etc.

Select your virtual network for which you are setting up this virtual network gateway.

Create new public IP address for your virtual network gateway.

Create public IP address using either Basic SKU or Standard SKU.

Once you are done with all the details, click on “Create” to deploy virtual network gateway.

Follow the same steps for the other virtual network as well.
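For reference, here is a hedged Azure CLI sketch of the same steps for one of the virtual networks; the names, the gateway subnet range and the SKU are placeholders:

    # Gateway subnet at the end of the VNet's address space
    az network vnet subnet create \
        --resource-group my-rg \
        --vnet-name vnet-01 \
        --name GatewaySubnet \
        --address-prefixes 172.26.15.224/27

    # Public IP address for the virtual network gateway
    az network public-ip create \
        --resource-group my-rg \
        --name vnet-01-gw-pip \
        --allocation-method Dynamic

    # Route-based VPN gateway (deployment can take 30-45 minutes)
    az network vnet-gateway create \
        --resource-group my-rg \
        --name vnet-01-gw \
        --vnet vnet-01 \
        --public-ip-address vnet-01-gw-pip \
        --gateway-type Vpn \
        --vpn-type RouteBased \
        --sku VpnGw1 \
        --no-wait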

Once you have completed the above steps in both virtual networks, it is time to establish a connection between the two virtual network gateways, which belong to their respective virtual networks, so that the virtual networks can talk to each other. To do this, go to "+ Create a resource" and search for "connection".

Select the “Connection”.

Click on “Create” to establish a connection between virtual networks.

In the basic settings, set the connection type to VNet-to-VNet. Select the appropriate subscription, resource group and location.

Once you have configured all the basic settings, select "OK".

Select the two virtual network gateways that need to be connected, and select the checkbox "Establish bidirectional connectivity" if you want to establish a two-way connection. Define the first and second connection names and a shared key to establish a secure connection.

Select first virtual network gateway.

Select second virtual network gateway.

Define both the first and second connection names, enter the shared key, and select OK.

Review the details on the summary page and select OK to create the connection between the virtual network gateways.
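A hedged Azure CLI sketch of the equivalent connections (gateway names and the shared key are placeholders); creating one connection in each direction with the same shared key gives the bidirectional connectivity described above:

    # Connection from vnet-01's gateway to vnet-02's gateway
    az network vpn-connection create \
        --resource-group my-rg \
        --name vnet01-to-vnet02 \
        --vnet-gateway1 vnet-01-gw \
        --vnet-gateway2 vnet-02-gw \
        --shared-key "<shared-key>"

    # Reverse connection for bidirectional connectivity
    az network vpn-connection create \
        --resource-group my-rg \
        --name vnet02-to-vnet01 \
        --vnet-gateway1 vnet-02-gw \
        --vnet-gateway2 vnet-01-gw \
        --shared-key "<shared-key>"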

Once completed successfully, resources can talk to each other across virtual networks.

#Azure : Virtual Networks


An Azure virtual network enables Azure resources to communicate with each other within the Azure network and with external resources over the internet. An Azure virtual network is like a traditional local area network in your datacenter; it can be connected to other virtual networks in Azure and to your on-premises datacenter as well. Azure virtual networks support private IP addressing and subnetting, just as you use in your on-premises network. A virtual network can contain multiple subnets; the number of subnets and the size of each depend on the virtual network's address space, and a subnet plays the same role as a VLAN in your traditional network. By default, subnets within a virtual network can talk to each other without establishing any connection. Once a virtual network has been created, multiple address spaces can be added based on your needs. While doing this entire exercise, please make sure that no IP address or address range overlaps with another, either across your Azure virtual networks or with your on-premises network.

Let me show you how to set up virtual networks step by step. To start, log in to the Azure portal.

In the Azure portal, select "+ Create a resource" > "Networking" > "Virtual network".

Look at the details required to create a virtual network.

Name: the name of the virtual network; it should be unique within your Azure environment.

Address space: Define address space based on your requirement.

Subscription: Select your subscription.

Resource group: either create a new one or use an existing resource group.

Location: select the location in which to create this virtual network resource. It will be selected automatically if you are using an existing resource group.

Subnet: Define the name of the subnet.

Address range: Define the address range for this subnet.

Service endpoints: define the service endpoints; they are disabled by default.

Look at the screenshots below for the filled-in details. Once you have filled in all the required details, click "Create" to deploy the virtual network.
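If you prefer the command line, here is a hedged Azure CLI sketch that creates a comparable virtual network with its first subnet; the names, location and address ranges are placeholders:

    # Virtual network with a /16 address space and a /20 first subnet
    az network vnet create \
        --resource-group my-rg \
        --name vnet-01 \
        --location eastus \
        --address-prefixes 172.26.0.0/16 \
        --subnet-name subnet-01 \
        --subnet-prefixes 172.26.0.0/20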


Once deployed successfully, you can find this virtual network in your resources.


Select “Subnets” to look at/verify your existing subnet. Click on “+ Subnet” to create a new subnet in your existing virtual network.


Enter the name of the subnet and then enter its address range. As we used 172.26.0.0/20 (172.26.0.0 – 172.26.15.255) for the first subnet, the next range starts from 172.26.16.0; you can specify the new range based on your needs.


Once you have filled in the required details, select "OK" to deploy the new subnet in your existing virtual network.
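The Azure CLI equivalent would be roughly the following sketch (again, names and ranges are placeholders):

    # Second subnet in the next free /20 range of the address space
    az network vnet subnet create \
        --resource-group my-rg \
        --vnet-name vnet-01 \
        --name subnet-02 \
        --address-prefixes 172.26.16.0/20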


Once deployed successfully, you can see both your subnets here.


Go to the Address space, if you would like to add a new address space in your virtual network.


Add the address space based on your requirement. (For example, many organizations use different sets of IP address ranges for different types of networks; a simple example is separate corporate and perimeter networks.) Once you have entered the range, click "Save".


Once the address space has been added successfully, define a subnet in that address space.
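A hedged Azure CLI sketch of adding a second address space and carving a subnet out of it (prefixes and names are placeholders; note that --address-prefixes sets the full list, so the existing prefix is repeated):

    # Add a second address space alongside the existing one
    az network vnet update \
        --resource-group my-rg \
        --name vnet-01 \
        --address-prefixes 172.26.0.0/16 10.10.0.0/24

    # Subnet inside the new address space
    az network vnet subnet create \
        --resource-group my-rg \
        --vnet-name vnet-01 \
        --name perimeter-subnet \
        --address-prefixes 10.10.0.0/25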


In the Connected devices panel, you can see the devices that are using IP addresses from this virtual network.


In the Subnets panel, you can define multiple subnets within your defined address ranges.


In the DNS panel, you can define custom DNS server addresses based on your network design. By default, it uses the Azure-provided DNS server.


In the Peering panel, you can define peering between two virtual networks that belong to the same region.


In the Service endpoints panel, you can specify service endpoints based on your requirements. In general, you don't have to define anything here.


In properties panel, you can see the properties of your virtual network, such as resource id, location, resource group etc.


In the Locks panel, you can define locks for your resources by setting the lock type to either "delete" or "read-only".


In the Automation script panel, you can view the template of this deployment, and you also get options to download it, add it to the library, or deploy it.


In the diagram panel, you get the graphical representation of all the subnets and associated resources.


I hope this step-by-step blog post helped you to create your virtual network and subnets in Microsoft Azure. To learn more about networking features such as gateway subnets, peering, etc., read the next blog post.