
#Azure : Storage services


In the Azure Storage Accounts blogpost, I have covered the details of storage accounts, access tiers and performance tiers. A storage account is a kind of container for the storage services in Microsoft Azure. Microsoft Azure provides the following storage services:

Blob storage

File storage

Queue storage

Table storage

Disk Storage

Let me explain each storage service in detail.

Blob storage: The name “blob” looks a bit confusing to people who are new to the world of storage. In simple words, blob storage can store almost any kind of file that you keep on your PC, tablet, mobile or cloud drives, for example MS Office documents, HTML files, databases, database log files, backup files and big data. Once stored, the data can be accessed from anywhere in the world through URLs, the REST API, Azure SDK storage clients etc. There are three types of blobs: block blobs, page blobs and append blobs. A minimal upload example follows the list below.

  • Block blobs: Ideal for storing any kind of ordinary file such as text or media files. A block blob supports files up to about 4.7 TB in size.
  • Page blobs: A kind of blob meant for random access and more efficient for frequent read/write operations. A page blob supports files up to 8 TB in size and is used for OS and data VHDs.
  • Append blobs: Made up of blocks like a block blob, but it provides the additional capability of appending data. It is generally used in logging scenarios, where logging information from multiple sources is stored and appended.
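
To make this a bit more concrete, here is a minimal Azure CLI sketch that creates a container in an existing storage account and uploads a file as a block blob. The account, container and file names are placeholders, so adjust them to your environment.

  # Create a container in an existing storage account (names are placeholders)
  az storage container create --account-name mystorageacct --name mycontainer

  # Upload a local file as a block blob
  az storage blob upload --account-name mystorageacct --container-name mycontainer \
      --name report.docx --file ./report.docx

  # List the blobs to verify the upload
  az storage blob list --account-name mystorageacct --container-name mycontainer --output table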

File storage: Azure File storage is a highly available network file share based on the SMB 3.0 (Server Message Block) protocol, also known as CIFS (Common Internet File System). Azure file shares can be accessed by Azure virtual machines and cloud services by mounting the share, while on-premises deployments can access them through the REST APIs. One amazing capability that distinguishes it from a normal file share is that it can be accessed from anywhere through a URL that points to any file and includes a SAS token. An Azure file share can be used in the same way as a traditional file share. Let’s take a few examples to make it clearer.

  • A file share to store data such as files, software, utilities, reports etc.
  • Applications that depend on a file share.
  • Configuration files that need to be accessed by multiple sources at the same time.
  • To store crash dumps, metrics, diagnostic logs etc.

These are a few examples, but in your day-to-day life you will find many more. As of now, AD-based authentication and ACLs are not supported, but you may see them in the future as well. A quick way to try a file share out is sketched below.
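
Here is a hedged Azure CLI sketch that creates a file share and uploads a file into it; the account, share and directory names are illustrative only.

  # Create a file share in an existing storage account
  az storage share create --account-name mystorageacct --name myshare

  # Create a directory in the share and upload a file into it
  az storage directory create --account-name mystorageacct --share-name myshare --name reports
  az storage file upload --account-name mystorageacct --share-name myshare \
      --source ./report.docx --path reports/report.docx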

Queue storage: This is not a new word for any experienced IT developer/professional. Azure Queue storage is a service to store and retrieve messages asynchronously. A queue can contain millions of messages, up to the capacity limit of the storage account, with a size limit of 64 KB per message. It can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. The maximum time that a message can remain in the queue is 7 days. A few examples of the queue storage service are listed below, followed by a short CLI sketch:

  • Passing a message from an Azure web role to an Azure worker role.
  • Converting the file type of a large number of files, such as .png to .jpeg, by using an Azure function. As you upload the files, the Azure function converts their format.
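
As a minimal sketch of the queue workflow, assuming an existing storage account (all names and the message content are placeholders):

  # Create a queue
  az storage queue create --account-name mystorageacct --name convert-jobs

  # Put a message on the queue
  az storage message put --account-name mystorageacct --queue-name convert-jobs \
      --content "convert image001.png to jpeg"

  # Retrieve the next message from the queue (a worker would process and then delete it)
  az storage message get --account-name mystorageacct --queue-name convert-jobs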

Table storage: Azure Table storage is a service that stores structured data. This service is a NoSQL datastore that accepts authenticated calls from inside and outside the Azure cloud. It is ideal for storing structured, non-relational data such as spreadsheet-like information, address books, user data for web applications etc. You can store millions of structured, non-relational entities, up to the capacity limit of the storage account. A few examples of the table storage service are listed below (courtesy: Microsoft docs), followed by a short CLI sketch:

  • Storing terabytes of structured data capable of serving web scale applications.
  • Storing datasets that don’t require complex joins, foreign keys, or stored procedures and can be denormalized for fast access.
  • Quickly querying data using a clustered index.
  • Accessing data using the OData protocol and LINQ queries with WCF Data Service .NET Libraries.
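
For a quick feel of Table storage, here is a hedged Azure CLI sketch that creates a table, inserts an entity and reads it back; the table name, keys and properties are illustrative.

  # Create a table
  az storage table create --account-name mystorageacct --name AddressBook

  # Insert an entity (PartitionKey and RowKey are mandatory)
  az storage entity insert --account-name mystorageacct --table-name AddressBook \
      --entity PartitionKey=contacts RowKey=001 Name=John City=Seattle

  # Read the entity back
  az storage entity show --account-name mystorageacct --table-name AddressBook \
      --partition-key contacts --row-key 001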

Disk storage: The Azure disk storage service is the simplest one to understand, as we use disks as part of virtual machines either on-premises or in the cloud. Disk storage can be used for operating systems, applications or any other kind of data. All virtual machines in Azure have at least two disks, an operating system disk and a temporary disk, and VMs can have one or more data disks. All disks are stored in VHD format and can have a capacity of up to 1023 GB. The Azure disk storage service provides these disks in two ways, as managed disks and unmanaged disks. These disks are further divided between two performance tiers, standard and premium.

Managed disks: Managed disks are created and managed by Microsoft, so you don’t have to worry about the availability of the underlying storage. Managed disks are available in both performance tiers; based on your requirement you can select the right disk size and performance tier. The standard performance tier is represented by S and the premium performance tier by P. The available sizes for both performance tiers range from 32 GB to 4095 GB.

Unmanaged disks: Unmanaged disks are created and managed by you. To create these disks, you first create a storage account, define its availability by selecting a replication option, and then create the unmanaged disks under it. Unmanaged disks also support the standard and premium tiers. Here, you are responsible for the availability of the disks based on the replication method you select.

Standard tier: Standard tier disks are basically HDDs and provide a limited number of IOPS. This type of disk provides a maximum of 500 IOPS.

Premium tier: Premium tier disks are SSDs and provide high IOPS and low latency. This type of disk provides a maximum of 7,500 IOPS. Premium disks are available only with a limited set of Azure VM series. A short managed-disk example follows below.
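
As a hedged sketch, this Azure CLI snippet creates a premium managed data disk and attaches it to an existing VM; the resource group, disk and VM names are placeholders, and the exact parameter names may vary slightly between CLI versions.

  # Create a 128 GB premium (SSD) managed disk
  az disk create --resource-group myRG --name data-disk-01 --size-gb 128 --sku Premium_LRS

  # Attach the managed disk to an existing virtual machine
  az vm disk attach --resource-group myRG --vm-name myVM --name data-disk-01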


#Azure : Storage Accounts


Storage is an essential part of anything you do in your day-to-day life, and the same applies to technology as well. Microsoft Azure Storage is a managed service provided by Microsoft cloud services. When you use any product or service, availability, resiliency, performance, scalability, durability, pricing, security and delivery play an important role, and in the case of Azure Storage all of this is taken care of by Microsoft.

Azure Storage provides two types of storage accounts: General Purpose and Blob.

Azure Storage provides services of the following types:

Blob storage

File storage

Queue storage

Table storage

Disk Storage

Storage accounts and services are tightly integrated with each other. To use any one of the above services, you first create a storage account and then define the storage services based on the storage account type.

Now, let’s understand the storage accounts in detail:

General purpose: A general purpose storage account caters to all your Azure storage services, such as Tables, Queues, Files, Blobs and Azure virtual machine disks, under a single account. This type of storage account has two performance tiers:

  • Standard storage performance tier: This performance tier fulfills all your data storage needs such as Tables, Queues, Files, Blobs and Azure virtual machine disks. This tier supports block blobs, page blobs and append blobs.
  • Premium storage performance tier: This performance tier is backed by SSDs and provides high-performance IOPS. It is best for virtual machine disks and data-intensive applications such as databases. This tier supports only page blobs.

Currently, these general-purpose accounts are available in 2 versions.

General purpose v1: This is the previous version of the storage account and doesn’t provide the latest and greatest storage capabilities that are available with the newer kind of storage account. It also doesn’t provide access tiers (Hot and Cool).

General purpose v2: This is the newer version of the general purpose storage account and provides all the features that are part of v1 storage. It also provides all the latest features available for blobs, files, queues and tables with better performance and pricing, and it supports access tiering (Hot and Cool) for different needs and performance.

You can upgrade your GPv1 account to a GPv2 account using PowerShell or the Azure CLI, for example as shown below.
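
A hedged Azure CLI sketch of the upgrade; the resource group and account names are placeholders, and note that the upgrade cannot be reversed.

  # Upgrade an existing GPv1 storage account to GPv2 (irreversible)
  az storage account update --resource-group myRG --name mystorageacct \
      --set kind=StorageV2 --access-tier=Hot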

Blob: A blob storage account is mainly used to store unstructured data as blobs (objects). It also provides access tiers (hot and cool) to support different needs and performance. It supports only block blobs and append blobs, and it provides only the standard performance tier.

Access tiers: Access tiers are supported by the general purpose v2 storage account and the blob storage account to serve different needs. A short creation example follows after the two tiers below.

  • Hot access tier indicates that the objects in the storage account will be more frequently accessed. This allows you to store data at a lower access cost. Premium storage always falls under this access tier.
  • Cool access tier indicates that the objects in the storage account will be less frequently accessed. This allows you to store data at a lower data storage cost.
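
To tie the account kinds and access tiers together, here is a hedged Azure CLI sketch that creates a GPv2 account with the Cool tier and a Blob storage account with the Hot tier; the names, region and replication option are placeholders.

  # General purpose v2 account with Cool access tier and locally-redundant storage
  az storage account create --resource-group myRG --name mygpv2acct \
      --location westeurope --kind StorageV2 --sku Standard_LRS --access-tier Cool

  # Blob storage account with Hot access tier
  az storage account create --resource-group myRG --name myblobacct \
      --location westeurope --kind BlobStorage --sku Standard_LRS --access-tier Hot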

#Azure : Map your traditional datacenter compute with cloud VMs


The cloud has completely changed the IT architecture landscape. From the early days of IT until the last decade, architecture was an abstraction that played a key role at the time of a transformation or a new development. Once an architecture was developed, it continued for many years with very minimal changes. Since the inception of the cloud, architecture has become part of the day-to-day work life of an IT professional because of its agility. If not daily then most probably weekly, you can observe changes in the public cloud world, and these need to be taken seriously.

In this post, I’ll try to simplify the cloud architecture for compute and compare it with traditional compute architecture. Apart from the simplification, I’ll give you a logical design thinking approach that will make your life easier no matter what role you play in IT.

Let’s start with the traditional datacenter.

If you are an experienced IT professional, you must have seen or heard about these names at least once in your career.

Traditional types of servers: Tower, Rack, and Blade servers.

These true traditional servers come with multiple configuration options such as dual-processor or quad-processor etc.

New types of platforms: Converged and Hyper-Converged.

These new platforms are basically rack-based servers that provide built-in advanced storage and networking capabilities by leveraging software-defined datacenter technologies.

Virtualization: In the last decade, every organization has leveraged the capabilities of virtualization, which enables compute to run multiple virtual machines so that you can fully utilize your high-end servers and save cost in multiple ways.

Now, let me explain the complete compute story in a public cloud such as Microsoft Azure.

When you look at the compute available through the cloud, you can easily see that it is the same kind of virtual machine we used to have in our virtualized environments. The only difference in the cloud is that you don’t worry about the underlying hypervisor and hardware used behind the scenes to provide the virtual machines.

In a traditional datacenter, we use multiple racks to install different types of hardware. Each rack connects to power supply units through PDUs, and these power supply units connect to the main power supply. In many scenarios each rack has top-of-rack switches to provide network connectivity to the devices installed in the rack, and in some cases one or two racks in the same row host these TOR switches. To overcome the challenge of an entire datacenter failure, we use multiple datacenters in the form of high availability and site resiliency. When an administrator performs any maintenance activity in the traditional datacenter, he/she makes sure that quorum is maintained during the activity to avoid any kind of unexpected failure.

In the cloud, hardware-level high availability is provided by fault domains (unplanned failures) and maintenance-level availability is provided by update domains, and both features fit under one umbrella known as availability sets. To provide high availability, Microsoft Azure uses multiple datacenters (at least two or three) in each region, and to support site resiliency Azure provides multiple region options in the same geography or across multiple geographies.

I hope you will now be able to sketch a clear picture in your mind of the traditional datacenter vs the cloud.

Now, let me help you with the logical design thinking approach. When you plan to deploy a VM or a set of VMs, follow these steps in sequential order.

  1. Think about application and its big picture, keep end-users in your mind and their respective locations.
  2. Select the best suitable cloud region.
  3. Consider different tiering of solution.
  4. Consider security, high availability, site resiliency and load balancing requirements.
  5. Illustrate your network requirements.
  6. Illustrate your storage requirements.
  7. Illustrate your compute requirements.

Once you have documented all of the above, create a design diagram and work out the approach to deploy your solution. For more details specific to Microsoft Azure compute, read the following blogposts.

#Azure : Virtual Machines

#Azure : Virtual Machine Configuration

#Azure : Virtual Machines High Availability

#Azure : Step-by-step Availability Sets

#Azure : Virtual Machines Scale Sets

#Azure : Large Virtual Machines Scale Sets

#Azure : Large Virtual Machines Scale Sets


In general, virtual machine scale sets provide auto scalability based on need. With normal scale sets, you can have a deployment of 0-100 VMs, but if you have a requirement to deploy more than 100 VMs then you can opt for large virtual machine scale sets. The basic difference between the two is the placement group. A placement group in a virtual machine scale set is a kind of availability set that maintains its own fault and update domains. Placement groups are controlled by the parameter “singlePlacementGroup”, which can be either “true” or “false”. If the value is set to “true”, the scale set has a single placement group and the number of virtual machines can be between 0-100; if it is set to “false”, the scale set can have multiple placement groups and the number of virtual machines can be between 0-1000.

There is one very important consideration for large VM scale sets: storage. If you choose to go with large VM scale sets then you use managed disks and don’t define your own storage accounts. In VM scale sets, if you don’t go with managed disks then you require multiple storage accounts, i.e. one for every 20 VMs, but with large scale sets you leverage managed disks, which removes the overhead of managing multiple storage accounts. A CLI sketch of a large scale set deployment follows below.
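
As a hedged sketch, this Azure CLI command deploys a large scale set by turning off the single placement group; the names, image, size and instance count are placeholders.

  # Create a large VM scale set (multiple placement groups, managed disks by default)
  az vmss create --resource-group myRG --name myLargeScaleSet \
      --image UbuntuLTS --vm-sku Standard_DS1_v2 --instance-count 200 \
      --single-placement-group false \
      --admin-username azureuser --generate-ssh-keys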

Let’s see how to do it in the portal.

Login to the Azure portal, search for “scale” in the Azure Marketplace and select “Virtual machine scale set”.

In the Virtual machine scale set panel, select “Create” to create a new Virtual machine scale set.

In the “Create virtual machine scale set” blade, fill in the basic information.

Virtual machine scale set name = Enter the scale set name for your virtual machine scale set deployment.

Operating system disk image = Select the operating system disk image from the drop-down.

Subscription = Select your subscription.

Resource group = Create a new resource group or select the existing one.

Location = Select the Azure region from drop-down.

User name = Enter the username that will be used for virtual machines.

Password = Enter the password for the user name.

Confirm password = Re-enter the password to confirm.

Scroll down and fill in the required details under “Instances and Load Balancer”.

Instance count = Enter the VM count, between 0 and 100. If you enter any number above 100 and up to 1000, all the configuration settings will be disabled except instance size. As explained above, large scale sets with more than 100 VMs use managed disks by default and are deployed across multiple placement groups.

Instance size = Select the VM size based on your requirement.

Enable scaling beyond 100 instances = “No” by default; if you select “Yes” then the rest of the settings will be disabled as described under instance count. Select “Yes” for large virtual machine scale sets.

Autoscale = Disabled by default, but if you enable this feature then you need to define the conditions for auto scaling.

If Autoscale is enabled, fill in the required details.

Autoscale

Minimum number of VMs = Enter the minimum number of VMs required in this scale set.

Maximum number of VMs = Enter the maximum number of VMs required in this scale set.

Scale out

CPU threshold (%) = Enter the CPU threshold above which VMs will be added.

Number of VMs to increase by = Enter the number of VMs that will be added when your running VMs reach the defined CPU threshold.

Scale in

CPU threshold (%) = Enter the CPU threshold below which VMs will be removed.

Number of VMs to decrease by = Enter the number of VMs that will be removed when CPU utilization falls below the defined threshold.

Once you have filled in all the details, click “Create” to start the deployment process.
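
If you prefer scripting over the portal, the same autoscale behaviour can be approximated with the Azure CLI; this is a hedged sketch where the resource group, scale set name and thresholds are placeholders.

  # Create an autoscale profile for the scale set (min 2, max 10, default 2 instances)
  az monitor autoscale create --resource-group myRG --name myAutoscale \
      --resource myLargeScaleSet --resource-type Microsoft.Compute/virtualMachineScaleSets \
      --min-count 2 --max-count 10 --count 2

  # Scale out by 2 VMs when average CPU goes above 70% for 5 minutes
  az monitor autoscale rule create --resource-group myRG --autoscale-name myAutoscale \
      --condition "Percentage CPU > 70 avg 5m" --scale out 2

  # Scale in by 1 VM when average CPU drops below 30% for 5 minutes
  az monitor autoscale rule create --resource-group myRG --autoscale-name myAutoscale \
      --condition "Percentage CPU < 30 avg 5m" --scale in 1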

Apart from these configuration settings, you need to consider the following while planning for large VM scale sets.

The limit of 1000 VMs is only applicable if you use Azure Marketplace images; otherwise you can scale up to 300 VMs if you use your own customized image.

When designing the network for large VM scale sets using a single subnet, make sure that your subnet has enough IP addresses for all the VMs. As a best practice, reserve 20% more IP addresses than you need to support the large scale set.

The Azure Load Balancer Standard SKU works with large scale sets because of the multiple placement groups, while scale sets with up to 100 VMs can leverage the Basic Load Balancer.

Layer 7 load balancers and application gateways don’t need any specific configuration for large scale sets.

By default, VMs are spread across fault and update domains within a specific placement group, but that doesn’t guarantee that two particular VMs (for example, two VMs that are required to be available at all times) will not be deployed on the same hardware. Use Azure Resource Explorer, go to the instance view of the scale set, and verify the fault domain and placement group IDs of the specific VMs.

Make sure you have enough vCPU quota to support the large number of VMs in a large VM scale set; otherwise, request an increase of your vCPU quota.

You can convert your virtual machine scale set from a single placement group (1-100 VMs) to multiple placement groups (1-1000 VMs) using Azure Resource Explorer, but not vice versa. You may not get the option to upgrade to large VM scale sets if you are using an old version of the Microsoft.Compute API. A scripted alternative is sketched below.
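
As an alternative to Resource Explorer, the generic update argument of the Azure CLI should be able to flip the same property; treat this as an assumption-level sketch and verify it against your API version before relying on it.

  # Allow the scale set to span multiple placement groups (cannot be reverted)
  az vmss update --resource-group myRG --name myScaleSet --set singlePlacementGroup=false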

#Azure : Virtual Machines Scale Sets


Microsoft Azure virtual machine scale sets are the next step in high availability and scalability of virtual machines. Virtual machine high availability can be achieved with availability sets in Microsoft Azure; virtual machine scale sets are a group of identical compute resources deployed across multiple availability sets. It is a true auto-scaling model that can target large-scale services with big compute, large data and containerized workloads. As these scale sets leverage multiple availability sets in the background, scale operations are implicitly balanced across fault and update domains. These scale sets use five fault domains and five update domains in each availability set. Each virtual machine scale set can host 0-1000 VMs based on platform images, and 0-300 VMs based on custom images.

To define the autoscale configuration for consistent application performance, many permutations and combinations can be used. Very common rules are based on compute, memory and disk I/O utilization. Apart from these common performance metrics, auto scaling of VMs can also depend on application response or a fixed schedule.

Note: Virtual machine scale sets can also be deployed with availability zones.

Now, let me explain how this auto scaling works behind the scenes. When a new VM is added to the scale set, it gets an instance ID that is unique within the scale set. When you remove a virtual machine from the scale set, the existing IDs don’t change. For example: a virtual machine scale set has 10 VMs; 2 VMs are removed from the scale set based on the configuration and need, and after some time 5 VMs are added based on the load. The new VMs will get instance IDs 10, 11, 12, 13 and 14 in an incrementing manner, and these VMs will be balanced across fault and update domains to maintain maximum availability. You can verify this behaviour with the CLI as sketched below.
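
To see the instance IDs and how they keep incrementing, here is a hedged Azure CLI sketch against an existing scale set; the resource group and scale set names are placeholders.

  # List the instances of a scale set with their instance IDs
  az vmss list-instances --resource-group myRG --name myScaleSet \
      --query "[].{instanceId:instanceId, name:name, provisioningState:provisioningState}" --output table

  # Manually scale to a new capacity, then list again to see new IDs appended
  az vmss scale --resource-group myRG --name myScaleSet --new-capacity 13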

Let’s see how to do it.

Login to the Azure portal, search for “scale” in the Azure Marketplace and select “Virtual machine scale set”.

In the Virtual machine scale set panel, select “Create” to create a new Virtual machine scale set.

In the “Create virtual machine scale set” blade, fill in the basic information.

Virtual machine scale set name = Enter the scale set name for your virtual machine scale set deployment.

Operating system disk image = Select the operating system disk image from the drop-down.

Subscription = Select your subscription.

Resource group = Create a new resource group or select the existing one.

Location = Select the Azure region from drop-down.

User name = Enter the username that will be used for virtual machines.

Password = Enter the password for the user name.

Confirm password = Re-enter the password to confirm.

Scroll down and fill in the required details under “Instances and Load Balancer”.

Instance count = Enter the VM count, between 0 and 100. If you enter any number above 100 and up to 1000, all the configuration settings will be disabled except instance size, because large scale sets with more than 100 VMs use managed disks by default and are deployed across multiple placement groups.

Instance size = Select the VM size based on your requirement.

Enable scaling beyond 100 instances = “No” by default; if you select “Yes” then the rest of the settings will be disabled as described under instance count.

Use managed disks = By default “Yes”.

Public IP address name = Define the name of the public IP address that will be used by the load balancer placed in front of the scale set.

Public IP allocation method = Dynamic by default, but Static can be selected.

Domain name label = Domain name label for the load balancer in front of the scale set.

Autoscale = Disabled by default, but if you enable this feature then you need to define the conditions for auto scaling.

If Autoscale is enabled, fill in the required details.

Autoscale

Minimum number of VMs = Enter the minimum number of VMs required in this scale set.

Maximum number of VMs = Enter the maximum number of VMs required in this scale set.

Scale out

CPU threshold (%) = Enter the CPU threshold above which VMs will be added.

Number of VMs to increase by = Enter the number of VMs that will be added when your running VMs reach the defined CPU threshold.

Scale in

CPU threshold (%) = Enter the CPU threshold below which VMs will be removed.

Number of VMs to decrease by = Enter the number of VMs that will be removed when CPU utilization falls below the defined threshold.

Once you have filled in all the details, click “Create” to start the deployment process.

As you may have observed, in the entire process the virtual network and storage account were not asked for anywhere, because virtual machine scale sets take care of them behind the scenes based on the configuration. Therefore, you don’t really have to worry about them. For reference, a CLI equivalent of this deployment is sketched below.
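
Here is a hedged Azure CLI sketch that deploys a basic scale set similar to the portal walkthrough above; the names, image, size and credentials are placeholders.

  # Create a scale set with 2 instances behind a load balancer (managed disks by default)
  az vmss create --resource-group myRG --name myScaleSet \
      --image UbuntuLTS --vm-sku Standard_DS1_v2 --instance-count 2 \
      --upgrade-policy-mode automatic \
      --admin-username azureuser --generate-ssh-keys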

#Azure : Step-by-step Availability Sets


In my previous post, I explained the high availability configuration for Azure virtual machines: availability sets, fault domains and update domains. In this blogpost, I’ll cover the step-by-step configuration using the Azure Portal. Availability set configuration can be done in two ways: either create and configure it at an early stage based on your application architecture, or set it up while creating the first VM of each tier and add the rest of that tier’s VMs to the respective availability set.

I am going to explain how to create and configure an availability set in advance.

Login to the Azure Portal and select “+ Create a resource”.

In the Azure Marketplace, search for Availability Set.

From the search results, select “Availability Set”.

In the Availability Set panel, select create.

In the create availability set panel, define the parameters.

Name: Enter the name of the availability set.

Subscription: Select the subscription.

Resource group: Either create a new one or select an existing one based on your requirement.

Location: Select the location.

Fault domains: Select the number of fault domains; by default it is two, but in specific Azure regions you can select up to three fault domains.

Update domains: Select the number of update domains; by default it is five, but you can go up to 20 update domains in each availability set.

Use managed disks: Select “Yes” (the default) if you would like to use managed disks for all the VMs that will be created in this availability set; otherwise select “No”.

Once you have filled in all the details based on your requirement, select “Create” to start the deployment of the availability set.

Wait a few seconds and your availability set will be created. Now you can go ahead and create your VMs using this availability set, either through the portal or with the CLI as sketched below.
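
A hedged Azure CLI sketch that creates an availability set and then places a VM inside it; the resource group, names, image and credentials are placeholders.

  # Create an availability set with 2 fault domains and 5 update domains
  az vm availability-set create --resource-group myRG --name myAvailSet \
      --platform-fault-domain-count 2 --platform-update-domain-count 5

  # Create a VM inside the availability set
  az vm create --resource-group myRG --name web-vm-01 --availability-set myAvailSet \
      --image UbuntuLTS --admin-username azureuser --generate-ssh-keys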

While creating a virtual machine in the portal you get both options: select an existing availability set or create a new one.

If you select “+ Create new” then you have to fill in the same details as described earlier and select “OK”.

Once you have created an availability set, you will not be able to modify it, and the same concept applies to the VM as well. If you have created a VM as part of an availability set, you can’t move it out of or change the availability set unless you delete and recreate the VM.

#Azure : Virtual Machines High Availability


High availability is crucial for any production environment, whether it is in an on-premises datacenter or in the cloud. If you go into the detail of high availability, you will observe that, from a compute perspective, HA can be achieved at the following levels:

Hardware level

Hypervisor/VM level

Operating System level

 

In this blogpost, I’ll cover the high availability options available in Microsoft Azure. First, let me make it clear that OS-level HA is no different on-premises or in the cloud. Now, let me provide an overview of hardware and hypervisor/VM level HA in an on-premises datacenter or private cloud.

Hardware level HA: Dual or quad processors, dual power supplies, multiple memory channels, multiple network slots, multiple PCI card slots etc.

Hypervisor/VM level: All type 1 hypervisors provide high-availability configuration options, much like operating systems. Once you configure HA for the hypervisor, VMs can be created on top of it, and the hypervisor HA configuration maintains the availability of guest VMs if any host goes down.

Apart from the hardware and hypervisor levels, all datacenter components can be configured in a highly available mode, such as multiple racks and power supply units, but when it comes to the public cloud you can’t define these configurations yourself. However, the cloud service provider does all of this configuration for you in advance and, to simplify things, simply exposes an availability feature.

Microsoft Azure provides the “Availability Set” to deliver high availability at the VM level. This availability set feature takes care of both planned and unplanned failures. To handle these planned and unplanned failures, an availability set allows you to configure update domains and fault domains.

In simple words, an availability set is a logical grouping of two or more virtual machines. When you set up an availability set, keep the following principles in mind:

Set up an availability set for one type of VM. For example, in a 3-tier application architecture, create a different availability set for each tier.

For high availability of VMs, create multiple VMs in an availability set.

Attach a load balancer to the availability set. It helps you distribute the load among the VMs.

Now, let me explain update domains and fault domains.

Update domain: An update domain allows VMs to maintain availability during planned maintenance. Each update domain contains a set of virtual machines and the associated physical hardware that can be updated and rebooted at the same time. This allows Azure to perform incremental or rolling upgrades across a deployment. When you create an availability set, you will observe that five update domains are set by default, but you can configure up to twenty update domains.

Fault domain: A fault domain allows VMs to maintain availability during unplanned events such as hardware failures, network outages and power failures. A fault domain describes datacenter-level components, such as the network switches and power supply serving a single rack, that can become a single point of failure for one or more racks. To avoid these circumstances, VMs in an availability set are spread across at least two fault domains. Many Azure regions support only two fault domains, while other Azure regions can have a maximum of three fault domains.

If you would like to set up an availability set, follow the step-by-step availability sets blogpost.