#Azure : Map your traditional datacenter network to the cloud network


Like compute and storage, networking is an essential part of the cloud. Networking is a very broad topic, and many network security components fall under it as well. In the cloud, networking has been broken down into multiple small components, and the architecture has become a distributed one. This distributed approach is common across the cloud landscape.

I always suggest starting an architecture from networking, because your network defines your boundaries. People may argue that with cloud we are open and we can't define boundaries. I agree with that if you are thinking from an end user's perspective, but in my view, if you can't define the boundaries then you can't put appropriate security in place. If you are working with multiple SaaS/PaaS/IaaS vendors, that's fine, but define them in your enterprise architecture so that what, how, where, and when you are sending your data is captured. That debate can't be settled in a blog post, so let me explain the key networking components in Azure and their mapping to on-premises networking.

Once you try to map your on-premises networking components to the cloud, you will observe that most of them are available, but with slightly different names or functionality.

LAN ↔ Virtual Networks

VLAN ↔ Address Space and Subnet

LAG between switches ↔ VNet Peering / VNet-to-VNet connectivity

WAN ↔ Global VNet Peering / VNet-to-VNet connectivity

MPLS ↔ ExpressRoute

L4 load balancer ↔ Load Balancer

L7 load balancer ↔ Application Gateway

Geo-DNS/Global load balancer ↔ Traffic Manager

Firewall ↔ Network Security Group / Network Virtual Appliance

To learn about each major component and its functionality, read the following blog posts:

Virtual Networks

Network Peering

VNet-to-VNet Connectivity

Load Balancer

Application Gateway

Traffic Manager

Network Security Group

Once you know the boundaries, the big picture, and the bottlenecks, and you understand the Azure capabilities as well, you will be able to design the best possible solution.


#Azure : Network Security Group


Network Security Groups provide advanced security protection for the VMs that you create. They control inbound and outbound traffic passing through a network interface card (NIC) (Resource Manager deployment model), a VM (classic deployment model), or a subnet (both deployment models).

NSGs contain rules that specify whether traffic is allowed or denied. Each rule is based on a source IP address, a source port, a destination IP address, a destination port, and a protocol. Traffic that matches this combination is allowed or denied accordingly. Each rule consists of the following properties (a small sketch of how rules are evaluated follows this list):

  • Name. This is a unique identifier for the rule.
  • Direction. Direction specifies whether the traffic is inbound or outbound.
  • Priority. Rules are processed in priority order; the lower the priority number, the higher the precedence. The first rule that matches the traffic is applied.
  • Access. Access specifies whether the traffic is allowed or denied.
  • Source IP address prefix. This identifies where the traffic originates from. The prefix can be a single IP address, a range of IP addresses in CIDR notation, or the asterisk (*) wildcard character, which matches all possible IP addresses.
  • Source port range. This specifies source ports by using either a single port number from 1 to 65535, a range of ports (for example, 200-400), or the asterisk (*) wildcard character, which denotes all possible ports.
  • Destination IP address prefix. This identifies the traffic destination as a single IP address, a range of IP addresses in CIDR notation, or the asterisk (*) wildcard character, which matches all possible IP addresses.
  • Destination port range. This specifies destination ports by using either a single port number from 1 to 65535, a range of ports (for example, 200-400), or the asterisk (*) wildcard character, which denotes all possible ports.
  • Protocol. Protocol specifies the protocol that matches the rule. It can be TCP, UDP, or the asterisk (*) wildcard character.
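
To make the rule-evaluation model concrete, here is a minimal Python sketch. It is not the Azure implementation, just an illustration of the first-match-by-priority behavior described above: rules are checked in order of priority number, the first rule whose fields all match the traffic decides the outcome, and traffic that matches no custom rule falls through to the platform's default rules (the rule values below are made up for the example).

```python
import ipaddress

def field_matches(rule_value, packet_value, is_prefix=False):
    """Match a single rule field; '*' is the wildcard."""
    if rule_value == "*":
        return True
    if is_prefix:
        return ipaddress.ip_address(packet_value) in ipaddress.ip_network(rule_value)
    if "-" in str(rule_value):  # port range such as "200-400"
        low, high = (int(p) for p in str(rule_value).split("-"))
        return low <= int(packet_value) <= high
    return str(rule_value) == str(packet_value)

def evaluate(rules, packet):
    """Return 'Allow' or 'Deny' using the first match in priority order (lowest number first)."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if (field_matches(rule["source_prefix"], packet["source_ip"], is_prefix=True)
                and field_matches(rule["source_port"], packet["source_port"])
                and field_matches(rule["dest_prefix"], packet["dest_ip"], is_prefix=True)
                and field_matches(rule["dest_port"], packet["dest_port"])
                and field_matches(rule["protocol"], packet["protocol"])):
            return rule["access"]
    return "Deny"  # stand-in for the default rules that end every NSG

rules = [
    {"priority": 100, "source_prefix": "*", "source_port": "*",
     "dest_prefix": "10.0.0.0/24", "dest_port": "443", "protocol": "TCP", "access": "Allow"},
    {"priority": 200, "source_prefix": "*", "source_port": "*",
     "dest_prefix": "*", "dest_port": "*", "protocol": "*", "access": "Deny"},
]
packet = {"source_ip": "203.0.113.10", "source_port": 50123,
          "dest_ip": "10.0.0.4", "dest_port": 443, "protocol": "TCP"}
print(evaluate(rules, packet))  # Allow
```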

Network security groups are resources that are created in a resource group, but can be shared with other resource groups that exist in your subscription.

Some important things to keep in mind while implementing network security groups include:

  • By default, you can create 100 NSGs per region per subscription. You can raise this limit to 400 by contacting Azure support.
  • You can apply only one NSG to a VM, subnet, or NIC.
  • By default, you can have up to 200 rules in a single NSG. You can raise this limit to 500 by contacting Azure support.
  • You can apply an NSG to multiple resources.

Let me show you how to create and configure network security groups step by step.

Log in to the Azure Portal and select “+ Create a resource”. Select “Networking” and then select “Network security group”.

Specify the name of the NSG, select the subscription, and either pick the right resource group under “Use existing” or create a new one. If you are creating a new resource group, define the location; if you are using an existing resource group, the location is selected by default based on that resource group. Once done, click “Create” to create the network security group.

Once the network security group is created, you can look at the default configuration and change it based on your needs.

These are the default inbound and outbound security rules.

Click on “Add” to create a new inbound security rule under “Inbound security rules”.

In an inbound security rule, you define the following (a scripted example follows these field descriptions):

Source:

  • Any: If you select “Any”, the rule matches inbound traffic from any source, regardless of the source network or endpoint.
  • IP Addresses: You can define specific IP address ranges.
  • Service Tag: You can specify a service tag. For example, if you select Storage, the rule matches the IP address ranges used by the Azure Storage service.

Source port ranges: Specify multiple ports separated by commas, such as 80, 443, or a range of ports such as 50000-50100.

Destination:

  • Any: If you select “Any”, the rule matches inbound traffic to any destination, regardless of the destination network.
  • IP Address: You can define specific IP address ranges.
  • VirtualNetwork: The rule matches the address space of the virtual network as the destination.

Destination port ranges: Specify multiple ports separated by commas, such as 80, 443, or a range of ports such as 50000-50100.

Protocol: Select the protocol: Any, TCP, or UDP.

Action: Allow or Deny.

Priority: Priority can be set between 100 and 4096. The lower the priority number, the higher the precedence.

Name: Set the name of the NSG rule.

Description: Write a description that best describes the respective rule.
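
If you prefer to script this instead of clicking through the portal, the sketch below shows roughly how the same kind of inbound rule could be created with the azure-mgmt-network Python SDK. Treat it as an illustration under assumptions: the subscription ID, resource group, NSG name, and rule values are placeholders, and the exact method name (begin_create_or_update) may vary between SDK versions.

```python
# pip install azure-identity azure-mgmt-network  (assumed packages)
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"          # placeholder
resource_group = "rg-network-demo"             # hypothetical resource group
nsg_name = "nsg-web"                           # hypothetical NSG name

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) an NSG with one inbound rule mirroring the fields above.
poller = client.network_security_groups.begin_create_or_update(
    resource_group,
    nsg_name,
    {
        "location": "westeurope",
        "security_rules": [
            {
                "name": "allow-https-inbound",
                "direction": "Inbound",
                "priority": 100,
                "access": "Allow",
                "protocol": "Tcp",
                "source_address_prefix": "*",          # Any source
                "source_port_range": "*",
                "destination_address_prefix": "10.0.0.0/24",
                "destination_port_range": "443",
            }
        ],
    },
)
nsg = poller.result()
print(nsg.name, "created with", len(nsg.security_rules), "custom rule(s)")
```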

Click on “Add” to create a new outbound security rule under “Outbound security rules”.

In an outbound security rule, you define the following:

Source:

  • Any: If you select “Any”, the rule matches outbound traffic from any source, regardless of the source network.
  • IP Address: You can define specific IP Address ranges.
  • VirtualNetwork: Any virtual network as a source network.

Source port ranges: Specify multiple ports separated by commas, such as 80, 443, or a range of ports such as 50000-50100.

Destination:

  • Any: If you select “Any”, the rule matches outbound traffic to any destination, regardless of the destination network or endpoint.
  • IP Addresses: You can define specific IP address ranges.
  • Service Tag: You can specify a service tag. For example, if you select Storage, the rule matches the IP address ranges used by the Azure Storage service.

Destination port ranges: Specify multiple ports separated by commas, such as 80, 443, or a range of ports such as 50000-50100.

Protocol: Select the protocol: Any, TCP, or UDP.

Action: Allow or Deny.

Priority: Priority can be set between 100 and 4096. The lower the priority number, the higher the precedence.

Name: Set the name of the NSG rule.

Description: Write a description that best describes the respective rule.

You can associate a specific network interface with the network security group under “Network interfaces”. Select “+ Associate” to attach a network interface to the NSG.

Select a specific network interface that needs to be linked with NSG.

You can associate a specific virtual network subnet with the network security group under “Subnets”. Select “+ Associate” to attach a virtual network subnet to the NSG.

Select a specific virtual network subnet that needs to be linked with NSG.

In the properties section, you can look at the resource properties.

Under the locks section, you can create and configure a lock to prevent changes and protect the Azure resource.

Under “Automation script”, you can download the deployment script or add it to a library for reuse.

Under diagnostics logs, you can enable diagnostics for the NSG to collect “NetworkSecurityGroupEvent” and “NetworkSecurityGroupRuleCounter” data.

I hope this article helped you understand, create, configure, and manage network security groups.

#Azure : Traffic Manager


Azure Traffic Manager is essentially a global DNS-based load balancer that helps you manage traffic across multiple datacenters and regions. Traffic Manager uses DNS to direct client requests to the most appropriate endpoint; the clients then connect directly to that endpoint, not through Traffic Manager. Traffic Manager can also be leveraged for external, non-Azure endpoints.
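
Because Traffic Manager works purely at the DNS level, you can see its effect with a plain DNS lookup. The sketch below resolves a hypothetical profile name (ex-tm.trafficmanager.net, which I use as an example later in this post) and shows that the client receives an endpoint address and then connects to it directly; the profile name here is illustrative only.

```python
import socket

profile_fqdn = "ex-tm.trafficmanager.net"   # hypothetical Traffic Manager profile name

# Traffic Manager answers the DNS query with the chosen endpoint;
# the client then talks to that endpoint directly over TCP/HTTP.
try:
    endpoint_ip = socket.gethostbyname(profile_fqdn)
    print(f"{profile_fqdn} currently resolves to {endpoint_ip}")
except socket.gaierror:
    print("Name did not resolve (the profile in this sketch is only an example).")
```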

Let me show you how to create and configure Traffic Manager step by step.

Log in to the Azure Portal and select “+ Create a resource”. Select “Networking” and then select “Traffic Manager”.

Here you define the name of the traffic manager, routing method, subscription, resource group and location.

Name: Use a unique prefix for your Traffic Manager profile. For example, if I use ex-tm as the prefix for a profile associated with my global Exchange deployment, the complete name of the profile will be ex-tm.trafficmanager.net. Note that this name can't be changed once created.

Define the routing method that you wish to use for your Traffic Manager profile. You can also change the routing method later from the Configuration panel.

Once you have defined all the necessary details, click “Create” to set up the Traffic Manager profile.

Once the Traffic Manager profile is created, you can see the basic configuration in the overview panel.

Go to the Configuration panel under settings to configure your traffic manager profile.

Routing method: Select the routing method based on your needs. An Azure Traffic Manager profile provides four different routing methods.

Priority: Use this routing method when you want to route all traffic to a primary service endpoint; based on the configuration, traffic is routed to backup service endpoints if the primary service endpoint fails. In simple words, it is an active-passive routing method.

Weighted: Use weighted routing when you want to distribute traffic across a set of endpoints in proportion to the weight assigned to each endpoint (see the sketch after these routing-method descriptions).

Performance: Performance routing is useful when you want to direct traffic based on performance. The performance criterion is network latency, so traffic is directed to the endpoint with the lowest network latency. That is often, but not always, the geographically closest location.

Geographic: Geographic routing, as the name suggests, is based on the location from which the DNS query originates. It helps direct requests based on geographic region, for example to improve user experience or to comply with data regulations.
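
As a rough illustration of the weighted method (this is not how Traffic Manager is implemented, just a sketch of the distribution behavior), each DNS response picks an endpoint with probability proportional to its weight, so over many queries the traffic split approaches the configured ratio. The endpoint names and weights below are made up.

```python
import random
from collections import Counter

# Hypothetical endpoints with their configured weights.
endpoints = {"webapp-westeurope": 3, "webapp-eastus": 1}

def pick_endpoint(endpoints):
    """Return one endpoint, chosen with probability proportional to its weight."""
    names = list(endpoints)
    weights = [endpoints[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Simulate 10,000 DNS queries and show the resulting split (~75% / ~25%).
answers = Counter(pick_endpoint(endpoints) for _ in range(10_000))
for name, count in answers.items():
    print(f"{name}: {count / 100:.1f}%")
```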

DNS time to live (TTL): Because routing is based on DNS queries, you need to define how long DNS resolvers may cache the response. By default, it is set to 300 seconds.

Endpoint monitor settings:

  • Protocol: Select the protocol used to probe the health of the service endpoints. You get three protocols: HTTP, HTTPS, and TCP. With HTTPS, the probe only checks that a certificate is present; it doesn't check whether the certificate is valid.
  • Port: Select the port number based on protocol.
  • Path: Define the path setting for the HTTP and HTTPS protocols (it is not applicable to TCP). Use a relative path and the name of the web page or probe file; a forward slash (/) is a valid entry for the relative path.

Fast endpoint failover settings:

  • Probing interval: Enter the interval time for probing to check the health of the service endpoint. You can choose between 10 seconds for fast probing and 30 seconds for normal probing.
  • Tolerated number of failures: You can define the number of tolerated probing failures, between 0 and 9.
  • Probe timeout: The probe timeout should be a minimum of 5 seconds and must be less than the probing interval.
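
To see how these three settings interact, here is a minimal, illustrative probe loop in Python (not Traffic Manager's actual implementation): it probes an assumed endpoint URL at the configured interval, treats responses slower than the probe timeout as failures, and marks the endpoint degraded once the tolerated number of consecutive failures is exceeded. The endpoint URL is a placeholder.

```python
import time
import urllib.request
from urllib.error import URLError

endpoint_url = "https://webapp-westeurope.example.com/health"  # hypothetical endpoint path
probing_interval = 10        # seconds (fast probing)
probe_timeout = 5            # seconds, must be less than the probing interval
tolerated_failures = 3       # between 0 and 9

consecutive_failures = 0
while True:
    try:
        with urllib.request.urlopen(endpoint_url, timeout=probe_timeout) as response:
            healthy = 200 <= response.status < 400
    except (URLError, OSError):
        healthy = False

    consecutive_failures = 0 if healthy else consecutive_failures + 1
    if consecutive_failures > tolerated_failures:
        print("Endpoint marked degraded; traffic would fail over to the next endpoint.")
        break
    time.sleep(probing_interval)
```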

Generate a key from the Real user measurements panel under settings. Any measurement traffic that an application sends to Traffic Manager is identified by this RUM key. To learn more about it and how to embed it within an application, refer to the Real User Measurements documentation.

From the Traffic view panel under settings, you can enable Traffic View to see location, volume, and latency information for the connections between your users and the Traffic Manager endpoints.

From Endpoints panel under settings, add all your service endpoints.

Azure traffic manager supports three types of endpoints.

  • Azure endpoints: Use this type of endpoint to load-balance traffic to services hosted in Azure.
  • External endpoints: Use this type of endpoint to load-balance external services that are hosted outside the Azure environment.
  • Nested endpoints: Nested endpoints are a more advanced configuration in which a child Traffic Manager profile checks the health probes and propagates the results to a parent Traffic Manager profile, which then decides the service endpoint.

Based on the endpoint type you select, fill in the rest of the required details and then add the endpoint.

In the properties section, you can look at the traffic manager profile properties.

Under the locks section, you can create and configure a lock to prevent changes and protect the Azure Traffic Manager profile.

Under “Automation script”, you can download the deployment script or add it to a library for reuse.

In the metrics panel, you can monitor the Traffic Manager profile using two different metrics:

  • Endpoint Status by Endpoint
  • Queries by Endpoint Returned

I hope this article helped you understand, create, configure, and manage Azure Traffic Manager.

#Azure : Application Gateway


Like a layer 7 load balancer in your traditional datacenter, Azure Application Gateway takes care of your HTTP/HTTPS-based requests. It works at the application layer (layer 7 of the OSI model) and acts as a reverse proxy: client connections are terminated at the gateway and then forwarded to the application.

Let me show you how to create and configure an application gateway step by step.

Log in to the Azure Portal and select “+ Create a resource”. Select “Networking” and then select “Application Gateway”.

In the basic configuration settings:

Enter the name of the application gateway. Always use names for Azure components that are fit for purpose and easy to distinguish.

Select the application gateway tier:

  • Standard: Like any other layer 7 load balancer, it provides HTTP(S) load balancing, cookie-based session affinity, Secure Sockets Layer (SSL) offload, end-to-end SSL, URL-based content routing, multi-site routing, WebSocket support, health monitoring, SSL policy and ciphers, request redirection, multi-tenant back-end support, and more.
  • WAF: The WAF tier is an advanced version of the Standard application gateway with web application firewall capabilities. It supports all Standard features and additionally protects web applications from common web vulnerabilities and exploits. The application gateway WAF comes pre-configured with the OWASP (Open Web Application Security Project) ModSecurity core rule set (3.0 or 2.2.9), which provides baseline protection against many of these vulnerabilities. It protects against SQL injection, cross-site scripting, command injection, HTTP request smuggling, HTTP response splitting, remote file inclusion attacks, HTTP protocol violations and anomalies, bots, crawlers, scanners, application misconfiguration, HTTP denial of service, and more.

Select the SKU size based on your requirements. The application gateway SKU comes in Small, Medium, and Large sizes.

Select the instance count based on your needs. The default instance count is 2.

Select subscription, resource group and location.

Once you have completed all the basic configuration settings, click OK.

In the settings panel, complete the following application-gateway-specific configuration settings:

Select the virtual network and subnet that will be associated with this application gateway.

Complete frontend IP configuration based on your application gateway requirements.

Set idle timeout settings and set the DNS name label.

Select the listener configuration protocol and associated port number.

Once completed with the configuration details, click on OK.

If you select HTTPS in the listener configuration, upload the PFX certificate file and provide the certificate name and password. Once completed with the configuration, click OK.

Review the summary of the configuration and click on OK to create application gateway.

In the overview section, review the basic configuration.

In the configuration panel under settings, you can change the application gateway tier, SKU size, and instance count based on your requirements.

In the Web application firewall panel under settings, you can upgrade to the WAF tier if you are using the Standard tier. You can also enable or disable the firewall, configure the firewall mode (detection or prevention), configure the OWASP rule set, and apply advanced configuration for your application gateway.

In the Backend pools panel under settings, add and configure backend pools for the application gateway. Click Add, or select an existing backend pool, to define the backend pool settings.

Click on “+ Add target” to define targets either using “IP address or FQDN” or “Virtual machine”. Click on rules to configure redirection if required.

In HTTP settings under settings, add or edit backend HTTP(S) settings such as cookie-based affinity, request timeout, protocol, and port.

In frontend IP configurations panel under settings, review IP configurations, type, status and associated listeners.

In listeners panel under settings, add basic and multi-site listeners.

In the rules panel under settings, define basic and path-based rules according to your requirements.
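
To illustrate what a path-based rule does conceptually (this is a sketch of the idea, not Application Gateway's implementation), the snippet below maps URL path patterns to backend pools and falls back to a default pool when nothing matches, which is how requests such as /images/* and /video/* can be sent to different sets of servers. The pool names and patterns are made up.

```python
from fnmatch import fnmatch

# Hypothetical path-based rule: path pattern -> backend pool name.
path_rules = {
    "/images/*": "imagesBackendPool",
    "/video/*": "videoBackendPool",
}
default_pool = "defaultBackendPool"

def route(path):
    """Return the backend pool for a request path; the first matching pattern wins."""
    for pattern, pool in path_rules.items():
        if fnmatch(path, pattern):
            return pool
    return default_pool

for path in ("/images/logo.png", "/video/intro.mp4", "/index.html"):
    print(f"{path} -> {route(path)}")
```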

In health probes panel under settings, you can add and edit health probes.

Once you add a health probe, you define the following parameters:

Name of the health probe.

Protocol either HTTP or HTTPS.

Enter the host name, path, interval in seconds, timeout in seconds, and the unhealthy threshold (the number of consecutive failed probes).

In the properties section, you can look at the application gateway properties.

Under the locks section, you can create and configure a lock to prevent changes and protect the application gateway.

Under “Automation script”, you can download the deployment script or add it to a library for reuse.

In the metrics panel, you can monitor the application gateway using the following metrics:

  • Current Connections
  • Failed Requests
  • Healthy Host Count
  • Response Status
  • Throughput
  • Total Requests
  • Unhealthy Host Count

In alert rules panel under settings, you can define conditional rules.

In diagnostics logs panel under settings, you can view the logs of the resources.

In backend health under settings, you can review the status of the backend pool.

I hope this article helped you understand, create, configure, and manage Azure application gateways.

#Azure : Load Balancer


Like your traditional datacenter load balancer, Azure Load Balancer provides the ability to scale your applications and create high availability for your services. Azure Load Balancer is a layer 4 device and understands TCP and UDP packets (a small sketch of layer 4 flow distribution follows the list below). There are two types of Azure load balancers available:

  • Internal Load Balancer
  • Public Load Balancer
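
As a rough illustration of how a layer 4 load balancer spreads flows, Azure Load Balancer distributes flows using a hash of the connection's 5-tuple; the sketch below is a simplified model of that idea, not the actual algorithm. Each flow's source IP, source port, destination IP, destination port, and protocol are hashed, and the hash selects a backend VM, so packets of the same flow always land on the same backend. The backend names are made up.

```python
import hashlib

backend_vms = ["vm-backend-0", "vm-backend-1", "vm-backend-2"]  # hypothetical backend pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
    """Hash the flow's 5-tuple and map it onto a backend (simplified model)."""
    five_tuple = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{protocol}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    index = int.from_bytes(digest[:4], "big") % len(backend_vms)
    return backend_vms[index]

# All packets of one TCP flow hash to the same backend; a new source port may pick a different one.
print(pick_backend("10.0.1.5", 50321, "10.0.0.100", 443, "TCP"))
print(pick_backend("10.0.1.5", 50322, "10.0.0.100", 443, "TCP"))
```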

Let me show you how to create and configure a load balancer step by step.

Log in to the Azure Portal and select “+ Create a resource”. Select “Networking” and then select “Load Balancer”.

Enter the name of the load balancer and set the load balancer type to either Internal or Public based on your business requirements.

If you select the “Basic” SKU for a public load balancer, define the public IP address, subscription, resource group, and location.

The Standard SKU supports many more advanced features than the Basic SKU. If you want to know more about the differences between the Basic and Standard SKUs, read “Why use Standard Load Balancer?”

If you create an internal load balancer with the Standard SKU, you get an option to select the virtual network and subnet.

An Azure load balancer with the Standard SKU provides an option to select an availability zone.

If you select an internal Azure load balancer, you get two options for IP address assignment: Static and Dynamic.

If you select static IP address assignment, you assign the private IP address manually.

In my configuration, I am creating Internal Azure Load Balancer with Basic SKU.

I am using a private configuration for my internal load balancer. Once you have completed the configuration, click “Create” to deploy the load balancer.

Once completed, you can review the basic configuration from overview panel.

In Frontend IP configuration under settings, you can review and add the LoadBalancerFrontEnd IP addresses.

In Backend pools under settings, you can add the virtual machines that need to be managed by the load balancer. Click Add to configure a backend pool.

Set the name, associate the backend pool with the desired type of virtual machine configuration, and click OK.

In Health probes panel under settings, you get an option to add probes to check the health of your service endpoints. Click on “Add” to configure the health probes.

Configure the Name, Protocol, Port, Interval and Unhealthy threshold options based on your requirements.

In the Load balancing rules panel under settings, click on “Add” to create the load balancing rules.

Define the load balancing rules based on your requirement.

Once completed with the configuration, click on OK to create a rule with defined parameters.

You can also define inbound NAT rules; click “Add” to create NAT rules.

Define inbound NAT rules based on your requirement and click on “OK” to create.

In the properties section, you can look at the load balancer properties.

Under the locks section, you can create and configure a lock to prevent changes and protect the Azure load balancer.

I hope this article helped you understand, create, configure, and manage Azure load balancers.

#Azure : VNet-to-VNet Connectivity


VNet-to-VNet connectivity is another option for connecting two virtual networks. Before peering was available in Azure, a VNet-to-VNet connection was the only way to connect two virtual networks, whether in the same region or in two different regions. Connecting a virtual network to another virtual network (VNet-to-VNet) is similar to connecting a VNet to an on-premises site; both connectivity types use an Azure VPN gateway to provide a secure tunnel using IPsec/IKE.

The VNets you connect can be:

  • In the same or different regions
  • In the same or different subscriptions
  • In the same or different deployment models

Let me explain how to set it up step by step. Log in to the Azure Portal and go to the virtual network.

Because you are going to set up VNet-to-VNet connectivity between two virtual networks, a gateway subnet and a virtual network gateway are required in both virtual networks.

Select Subnets under settings in the virtual network, then select “+ Gateway subnet” to create a gateway subnet for this virtual network.

Select an address range that will be used by this network gateway. By default, the next available address range is selected.

In my case, I am using the last subnet of my address space for the gateway subnet.
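
If you want to double-check which range that is, the short Python sketch below carves an assumed 10.0.0.0/16 address space (your VNet's address space will differ) into /27 subnets and prints the last one, which is the kind of range you would hand to the gateway subnet.

```python
import ipaddress

address_space = ipaddress.ip_network("10.0.0.0/16")   # assumed VNet address space
gateway_prefix = 27                                    # /27 is a common gateway subnet size

# Enumerate all /27 subnets in the address space and take the last one.
last_subnet = list(address_space.subnets(new_prefix=gateway_prefix))[-1]
print(f"GatewaySubnet candidate: {last_subnet}")       # 10.0.255.224/27
```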

Once added, you can review all used subnets in subnet panel.

Once you have set up the gateway subnet in your virtual network, move on and create a virtual network gateway to attach to the gateway subnet.

Define the name of the virtual network gateway. Set the gateway type to “VPN”, as we are establishing a VNet-to-VNet connection. Select the VPN type, either route-based or policy-based, according to your requirements. Select the gateway SKU based on your needs.

Gateway SKUs by tunnel, connection, and throughput:

| SKU | S2S/VNet-to-VNet tunnels | P2S connections | Aggregate throughput benchmark |
|---|---|---|---|
| VpnGw1 | Max. 30 | Max. 128 | 650 Mbps |
| VpnGw2 | Max. 30 | Max. 128 | 1 Gbps |
| VpnGw3 | Max. 30 | Max. 128 | 1.25 Gbps |
| Basic | Max. 10 | Max. 128 | 100 Mbps |

Courtesy: Microsoft

Select the subscription, resource group, and location.

Select your virtual network for which you are setting up this virtual network gateway.

Create a new public IP address for your virtual network gateway.

Create public IP address using either Basic SKU or Standard SKU.

Once you are done with all the details, click on “Create” to deploy virtual network gateway.

Follow the same steps for another virtual network as well.

Once you have completed the above steps in both virtual networks, it is time to establish a connection between the two virtual network gateways so that the virtual networks can talk to each other. To do this, go to “+ Create a resource” and search for “connection”.

Select the “Connection”.

Click on “Create” to establish a connection between virtual networks.

In the basic settings, set the connection type to VNet-to-VNet. Select the appropriate subscription, resource group, and location.

Once you have configured all the basic settings, select “OK”.

Select both virtual network gateways that need to be connected, and select the “Establish bidirectional connectivity” checkbox if you want a two-way connection. Define the first and second connection names and a shared key to establish a secure connection.

Select first virtual network gateway.

Select second virtual network gateway.

Define both the first and second connection names, enter the shared key, and select OK.
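
For automation, one direction of this connection could be created roughly as sketched below with the azure-mgmt-network Python SDK. Treat it as an illustration under assumptions: the resource group, gateway names, and shared key are placeholders, both gateways are assumed to be deployed already, and the method and field names may differ slightly between SDK versions. You would create the mirror connection from the second gateway in the same way.

```python
# pip install azure-identity azure-mgmt-network  (assumed packages)
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"      # placeholder
resource_group = "rg-network-demo"         # hypothetical resource group
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Both virtual network gateways are assumed to exist already.
gw1 = client.virtual_network_gateways.get(resource_group, "vnet1-gateway")
gw2 = client.virtual_network_gateways.get(resource_group, "vnet2-gateway")

# First direction of the VNet-to-VNet connection (gw1 -> gw2).
poller = client.virtual_network_gateway_connections.begin_create_or_update(
    resource_group,
    "vnet1-to-vnet2",
    {
        "location": gw1.location,
        "connection_type": "Vnet2Vnet",
        "virtual_network_gateway1": {"id": gw1.id},
        "virtual_network_gateway2": {"id": gw2.id},
        "shared_key": "<shared-key>",      # must match on both connections
    },
)
print(poller.result().provisioning_state)
```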

Review the details in summary page and select OK to create a connection between virtual network gateways.

Once completed successfully, resources can talk to each other across virtual networks.

#Azure : Network Peering


In Microsoft Azure virtual networks, peering connects multiple virtual networks. It simplifies connectivity and configuration between virtual networks. Once connectivity is established through peering, traffic flows seamlessly between the two virtual networks. Traffic between peered virtual networks travels over the Microsoft backbone infrastructure, much as if it were flowing within the same virtual network. However, peering doesn't cover all scenarios; at the time of writing it is only available for virtual networks in the same region. Apart from this major constraint, several other restrictions apply, such as that address ranges can't be added to or deleted from the address space of a virtual network once it is peered with another virtual network. Peering virtual networks between regions (global VNet peering) is currently in preview for a few regions and may become generally available soon.

Address spaces within the same virtual network don't require peering. For example, if I have two address spaces, one for a corporate network and another for a perimeter network, and both are part of the same virtual network, there is no need to establish any kind of connectivity because both networks can talk to each other by default.

Now, let me show you how to set up peering between virtual networks.

Log in to the Azure Portal, go to your virtual network, and then go to “Peerings” under settings. Select “+ Add” to establish peering between virtual networks.

In Add peering panel, fill the required details.

Name: Enter a name for the peering that you can easily recognize.

Peer details: Select virtual network deployment model.

Subscription: Select the subscription.

Virtual network: Select the destination virtual network.

Configuration: By default, “Allow virtual network access” is enabled. If you don't have specific requirements, go with the default configuration.

Once you have entered all the necessary details, click “OK” to set up the peering.

Once created successfully, you will be able to see it in peering panel.

Follow the same steps in the other virtual network as well. Once completed from both sides, traffic will be able to flow between the peered networks.
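
If you would rather script both sides of the peering, a rough sketch with the azure-mgmt-network Python SDK is shown below. The resource group, VNet names, and the begin_create_or_update method name are assumptions that may need adjusting for your environment and SDK version; note that a peering has to be created from each VNet toward the other, just as in the portal steps above.

```python
# pip install azure-identity azure-mgmt-network  (assumed packages)
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "rg-network-demo"      # hypothetical resource group
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

vnet1 = client.virtual_networks.get(resource_group, "vnet-corp")
vnet2 = client.virtual_networks.get(resource_group, "vnet-perimeter")

def peer(local_vnet_name, remote_vnet_id, peering_name):
    """Create one direction of the peering from the local VNet to the remote VNet."""
    return client.virtual_network_peerings.begin_create_or_update(
        resource_group,
        local_vnet_name,
        peering_name,
        {
            "remote_virtual_network": {"id": remote_vnet_id},
            "allow_virtual_network_access": True,   # the default shown in the portal
        },
    ).result()

# Peering has to be created from both sides.
peer("vnet-corp", vnet2.id, "corp-to-perimeter")
peer("vnet-perimeter", vnet1.id, "perimeter-to-corp")
```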