This blog is part 2 of a 6-part series. Part 1 can be found here.

Although the benefits of moving your compute workloads to the cloud are many, most migration discussions will invariably turn to cost.  In many cases, we see customers simply perform a like-for-like mapping of on-premises server capacities into the Azure Pricing Calculator, choosing the closest (and often cheapest) size match listed in Azure and using that for their budgetary discussion. This simplified exercise will almost always either over- or underestimate the actual capacity and cost requirements.

Proper sizing as part of a detailed planning exercise is key to ensuring that your migration project is not shelved because oversizing inflated the projected cost, or, potentially even worse, that you end up with serious performance issues post-migration as a result of under-sizing.  To build an accurate cost and performance profile for running your on-premises server workloads in Azure, it is important to understand three things.

Understand the Different Azure VM Types

Azure virtual machine types can be confusing, to say the least, with names like Av2, Lsv2 and Easv3.  There is now such a range of different sizes and types that choosing the appropriate match for your on-premises servers can be very tricky.

In general, it is key to understand that Azure VMs are separated into six primary types.

  1. General Purpose
  2. Compute Optimized
  3. Memory Optimized
  4. Storage Optimized
  5. GPU
  6. High Performance Compute

Without going into extended detail on the features of each type, the vast majority of requirements will fall into the General Purpose bucket, with approximately 80% of all workloads we have migrated ending up on a D-series VM.  Larger business application servers hosting financial systems or small ERP environments will often require Compute Optimized F-series VMs, with their back-end database servers best suited to a Memory Optimized E- or M-series machine.
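The rough guidance above can be sketched as a simple selection rule based on the memory-to-vCPU ratio of an existing server. The thresholds below are assumptions for illustration only, not official Microsoft sizing rules:

```python
# Illustrative sketch: mapping a simple workload profile to an Azure VM
# series, following the rough guidance above. The ratio thresholds are
# hypothetical and chosen only to demonstrate the idea.

def suggest_series(vcpus: int, memory_gb: float) -> str:
    """Return a likely VM series based on the memory-to-vCPU ratio."""
    ratio = memory_gb / vcpus
    if ratio >= 8:          # memory-heavy, e.g. database back ends
        return "E-series (Memory Optimized)"
    if ratio <= 2:          # CPU-heavy, e.g. application/ERP servers
        return "F-series (Compute Optimized)"
    return "D-series (General Purpose)"   # the ~80% case

print(suggest_series(4, 16))   # typical app server -> D-series
print(suggest_series(8, 64))   # database server    -> E-series
print(suggest_series(8, 16))   # compute-bound      -> F-series
```

In practice a real assessment would weigh disk and network profiles as well, which is exactly what tools like Azure Migrate automate.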

Almost as important as understanding which Azure VM type to use is understanding how the hardware performance differs from your existing on-premises servers.  More often than not, the timing of a migration aligns with a decision about refreshing an aging server environment.  That aging environment will be running technology much older (and slower) than the servers in Azure.  As a result, this modern infrastructure in Azure can often deliver similar or better performance than your current environment using fewer resources.

This is also true across different generations of VMs within Azure.  For instance, a v3 machine will typically have faster underlying infrastructure than a v2 – but not always. Are you confused yet?  To help simplify the comparisons, Microsoft has developed the concept of an Azure compute unit (ACU) as a guideline for comparing compute performance across different Azure VM generations and types. For on-premises hardware comparison, Microsoft has also published SPECint benchmark scores, which can be compared against published results for your on-premises servers.
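As a sketch of how ACU scores are used, the snippet below compares per-core performance between two sizes. The ACU numbers here are illustrative placeholders; consult Microsoft's published ACU table for current figures (the Standard_A1_v2 size is the 100-ACU baseline):

```python
# Sketch: using ACU scores to compare relative per-core CPU performance
# across VM sizes. The values below are hypothetical examples, not
# Microsoft's published figures.

acu = {
    "Standard_A1_v2": 100,    # the 100-ACU baseline
    "Standard_D2_v3": 160,    # illustrative value
    "Standard_F2s_v2": 195,   # illustrative value
}

def relative_speed(size_a: str, size_b: str) -> float:
    """How much faster per core size_a is versus size_b, per ACU."""
    return acu[size_a] / acu[size_b]

print(f"{relative_speed('Standard_F2s_v2', 'Standard_D2_v3'):.2f}x")
```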

Understand Azure Disk and Storage Types

Azure storage is another area that often causes confusion.  Azure offers many different purpose-fit storage options such as Blob, Queue, Table and File storage.  While extremely valuable for different types of workloads, these options are not to be confused with the disk storage attached to Azure VMs, which is analogous to the disks attached to traditional on-premises VM workloads.

Azure offers two types of disk storage: managed and unmanaged. Managed disk storage is effectively Azure’s modern equivalent of what might otherwise be known as disk virtualization or software-defined storage. In the vast majority of cases, this is the recommended storage type for your Azure VMs because it delivers guaranteed performance while minimizing operational overhead. Managed disks also offer better security, snapshot and backup support, and increased reliability compared to unmanaged disks. The one downside of managed disks is that they are offered in fixed size tiers (e.g. 256 GB, 512 GB), so you may end up paying for more storage overall than you actually require.

With unmanaged disks, each disk you provision is stored as a VHD in an Azure storage account that you are responsible for creating and managing. The biggest benefit is that you only pay for the storage space you use; however, things get much more complex as you scale out, because a single storage account can support only about 40 standard virtual hard disks running at full throttle, requiring you to add and manage additional storage accounts.
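The fixed-tier billing of managed disks can be sketched as a simple round-up: a provisioned disk is billed at the next tier that fits. The tier list below follows the doubling pattern mentioned above but is illustrative only; exact tiers vary by disk type:

```python
# Sketch: managed disks are billed in fixed size tiers, so a requested
# capacity is rounded up to the next tier. The tier sizes below are an
# illustrative doubling sequence, not the exact tiers for every type.

TIERS_GB = [32, 64, 128, 256, 512, 1024, 2048, 4096]

def billed_size(requested_gb: int) -> int:
    """Smallest managed-disk tier that fits the requested capacity."""
    for tier in TIERS_GB:
        if requested_gb <= tier:
            return tier
    raise ValueError("requested size exceeds largest tier in this sketch")

print(billed_size(300))  # a 300 GB request is billed as a 512 GB disk
```

This rounding is exactly why a naive like-for-like capacity mapping can overstate or understate your storage bill.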

Outside of the type of disk storage you select, there are also two primary disk performance tiers: Standard and Premium. In short, Premium disks leverage newer and faster SSD technology while Standard disks use traditional “spinning” disk drives. Premium disks are currently in the range of 3x more expensive than Standard disks, which has often led organizations to choose Standard disks during their migration simply on cost grounds.  Standard disk performance, however, is very limited and not suited to most business applications, resulting in performance issues if used. As a general rule of thumb, if your on-premises workloads require a SAN environment with at least 10K RPM disk drives to deliver the required application performance, then you are going to want Premium disks. As a middle ground, we often recommend that customers leverage Standard SSD managed disks, which provide adequate performance for most entry-level production workloads at a reduced cost. Standard SSD disks can also easily be upgraded to Premium SSD disks at a later date if required.

Finally, it is important to touch on the three main disk roles that exist for Azure VMs: the data disk, the operating system (OS) disk, and the temporary disk.  These roles all map to disks that are attached to your virtual machine and appear as traditional disks in Windows Server’s disk management utility.

By default, your Azure VM will have one attached OS disk and a temporary disk.  The OS disk holds the pre-installed operating system (Windows or Linux), potentially along with pre-installed applications depending on the template used, and has a maximum capacity of 2,048 GB.  The temporary disk exists only to provide short-term storage for applications and processes, such as page or swap files, and is not intended for long-term retention.  The temporary disk is labeled as the D: drive by default, and data on it can and will be lost during maintenance events.  Data disks are managed disks attached to your VM to store application or other data that must be kept long term.  These disks are registered in the OS as SCSI drives and have a maximum capacity of 32,767 GB.

Understanding Azure Networking

Whenever you create an Azure VM you must also either create an Azure Virtual Network (VNet) or use an existing one. VNets and their related Network Security Groups are very important to understand, as they govern connectivity for your virtual machine.  From a pure sizing perspective, however, there are a couple of specific items to take into consideration.

One gotcha that has caught many organizations is not understanding the number of network interfaces required for each VM.  Many of the lower-end general-purpose VMs in Azure support a maximum of two network interfaces, which may not be sufficient for your needs. Additionally, there are maximum bandwidth caps in place depending on the size and series of the VM, meaning that understanding the network traffic profile for your VM is as important as understanding its compute performance profile.
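Screening candidate sizes against both limits can be sketched as a simple filter. The NIC and bandwidth figures below are hypothetical; the real per-size limits are published in Microsoft's VM size documentation:

```python
# Sketch: filtering candidate VM sizes by NIC count and bandwidth caps.
# The limits in this table are illustrative placeholders, not the
# documented values for these sizes.

SIZES = {
    # name: (max NICs, max expected bandwidth in Mbps) -- hypothetical
    "Standard_D2_v3": (2, 1000),
    "Standard_D4_v3": (2, 2000),
    "Standard_D8_v3": (4, 4000),
}

def fits_network(nics_needed: int, mbps_needed: int) -> list:
    """Sizes whose NIC and bandwidth limits satisfy the requirement."""
    return [name for name, (nics, mbps) in SIZES.items()
            if nics >= nics_needed and mbps >= mbps_needed]

print(fits_network(3, 1500))  # only the largest size qualifies here
```

A workload that passes a pure CPU/RAM check can still fail this filter, which is why the traffic profile belongs in the sizing exercise.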

Outside of individual VM network connectivity and performance, it is also important to consider how you will connect your existing users and environment to your new Azure tenant.  In some cases, where only web-facing workloads are being hosted, no standing interconnectivity between your corporate network and your Azure server networks may be required; however, this is not often the case.

The easiest and most cost-efficient way to connect your on-premises network to your Azure environment is to use an Azure VPN Gateway and set up a site-to-site VPN.  You also have the option of using a dedicated Azure ExpressRoute connection; however, this added expense is typically only required for the largest and most complex organizations. Beyond ensuring that your current network edge device is compatible with an Azure VPN Gateway, you will also need to consider the type and size of gateway you require, as well as whether you need it to be zone redundant.

Clear as Mud?

If all of these different considerations seem overwhelming, don’t worry, you are not alone.  The good news is that Microsoft provides tooling that will do much of the heavy lifting.  We strongly recommend that any sizing exercise leverage current-state information gathered over at least 30 days using tools such as Azure Migrate, which will collect relevant performance metrics from your current environment based on actual usage rather than just what is currently provisioned.  These metrics are then used to auto-map the vast majority of your current workloads to appropriate Azure VM and storage profiles, considering compute, storage, and network requirements.

Join us next week as we dive into Step 3 of 6 Key Steps to A Successful Microsoft Azure Migration: Leverage Azure SaaS Services Where Possible