Measure Drive and Network Throughput
Before you benchmark, keep caching in mind: a cache only makes a copy seem faster if (1) the line speed is much higher than the disk drive speed, and (2) the transfer fits entirely in the allocated cache. The cache is used so that the maximum line speed can be sustained even when it is faster than the drive behind it, which is why good benchmarking tools bypass file caching to measure sustained speeds.
Updated: January 10, 2020
How do you test internal LAN network speed? To test and troubleshoot networks, we need tools that let us generate network traffic and analyze the network's throughput performance. This is true for both wired and wireless networks. To properly troubleshoot a wireless (or wired) network, we need a way to benchmark its performance, so that as modifications are made we can determine whether they really make a difference in the network's performance.
This is where these LAN benchmarking tools fit in: they test the network speed between two computers. The concept is simple. You install a new wireless network and it works perfectly; run these tools and record the benchmark results for later reference. Sometime in the future, when you or others suspect that performance has declined, run these tools again and compare the latest results with those you gathered when the network was first installed and working perfectly.
↓ 01 – LAN Speed Test Lite | Windows | macOS
LAN Speed Test was designed from the ground up to be a simple but powerful tool for measuring file transfer, hard drive, USB Drive, and Local Area Network (LAN) speeds (wired & wireless). First, you pick a folder to test to. This folder can be on a local drive or USB drive, etc. to test the drive speed, or a shared folder on your network to test your network speed.
Next, LAN Speed Test builds a file in memory, then transfers it both ways (without effects of Windows/Mac file caching) while keeping track of the time, and then does the calculations for you. For more advanced users, you can test to LAN Speed Test Server instead of a shared folder to take the slower hard drives out of the process as you are testing from one computer’s RAM to another computer’s RAM. Simple concept and easy to use. You’ll find that LAN Speed Test will quickly become one of your favorite network tools!
- Test the speed of your Local Network by testing to/from network shared folders
- Test the speed of your local drives (USB Drives, hard drives, etc.)
- Compatible with LST Server (v1.5 or later) for real network performance results without hard drive limitations – Even test your WAN (Internet) speed with LST Server
- Ability to choose Network Interface Card for computers with multiple NICs
↓ 02 – HELIOS LanTest | Windows | macOS
HELIOS LanTest is a very popular network performance and reliability testing solution for Mac and Windows clients. It is easy to use and delivers results within seconds, and it scales from single local disk testing, to testing the performance of a network volume, to multi-user concurrent client testing against a single server volume.
HELIOS LanTest can be downloaded via HELIOS WebShare at http://webshare.helios.de; the user name and password are both "tools".
↓ 03 – TamoSoft Throughput Test | Windows | macOS
TamoSoft Throughput Test is a utility for testing the performance of a wireless or wired network. This utility continuously sends TCP and UDP data streams across your network and computes important metrics, such as upstream and downstream throughput values, packet loss, and round-trip time, and displays the results in both numeric and chart formats. TamoSoft Throughput Test supports both IPv4 and IPv6 connections and allows the user to evaluate network performance depending on the Quality of Service (QoS) settings.
Microsoft Windows: Windows 7, Windows 8, Windows 8.1, Windows 10, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2. Mac OS X: Yosemite (10.10), El Capitan (10.11), Sierra (10.12), High Sierra (10.13), Mojave (10.14). Android (client only): Android 4.1 and newer. iOS (client only): iOS 8.0 and newer.
↓ 04 – LANBench | Windows
LANBench is a simple LAN / TCP network benchmark utility. It is designed for testing network performance between two computers and is based on Winsock 2.2. LANBench tests TCP performance only and is designed for minimal CPU usage, so that the pure performance of your network can be fully tested.
- Multiple simultaneous connections – up to 20 (for load simulation)
- Low CPU overhead for pure network performance test
- Multithreaded utilizing Windows I/O completion port
- Configurable packet size, test duration and transfer mode
↓ 05 – NetIO-GUI | Windows
NetIO-GUI is a Windows frontend for the multiplatform command-line utility 'netio'. It measures ICMP response times and network transfer rates for different packet sizes and protocols. All results are stored in a SQLite database file and can easily be compared. NetIO-GUI is particularly suited to rating the quality of peer-to-peer connections such as VPNs.
↓ 06 – NetStress | Windows
NetStress allows you to generate network traffic and analyze the network's throughput performance. Comparing the results with a previous benchmark will indicate whether there really is a problem and dictate which steps to take next (if any). Regardless of your technical expertise or how expensive your diagnostic tools are, if you are modifying a wireless network, or making decisions to modify it, without testing throughput, you risk wasting time and resources going down the wrong path.
- Single instance of the application (i.e. each instance can be used as server or server plus client)
- Supports both TCP and UDP data transfers
- Supports multiple data streams
- Variable TCP / UDP segment size
- Rate of packet transmission (Packets Per Second)
- Variable Maximum Transmission Unit (MTU)
- Uplink and downlink modes
- Auto node discovery
- Choice of display units (KBps, Kbps, MBps, Mbps)
- Support for multiple network adapters
↓ 07 – iperf | macOS | Linux
iperf is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, protocols, and buffers. For each test it reports the bandwidth, loss, and other parameters. The current version, sometimes referred to as iperf3, is a redesign of an original version developed at NLANR/DAST.
iperf3 is a new implementation from scratch, with the goal of a smaller, simpler code base and a library version of the functionality that can be used in other programs. iperf3 also has a number of features found in other tools such as nuttcp and netperf that were missing from the original iperf. These include, for example, a zero-copy mode and optional JSON output. Note that iperf3 is not backwards compatible with the original iperf.
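As a minimal sketch, a typical iperf3 run between two machines looks like this (the host name is a placeholder; the flags are standard iperf3 options):

```bash
# On the machine acting as the server:
iperf3 -s

# On the client: run a 30-second TCP test against the server
iperf3 -c server.example.com -t 30

# Repeat over UDP at a 100 Mbit/s target rate, with JSON output
iperf3 -c server.example.com -u -b 100M -J
```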
↓ 08 – PassMark Advanced Network Test | Free Trial | Windows
The network benchmark test will work with any type of TCP/IP connection, including Ethernet, dial-up modems, ADSL, cable modems, local area networks (LAN), wide area networks (WAN), and wireless networking (WiFi). The software has been optimized to use a minimal amount of CPU time, allowing even high-speed gigabit Ethernet connections to be benchmarked. Users have the ability to change the following test parameters.
- The IP address of the machine acting as the server and the port number used for the test to help with firewall issues.
- The size of the data block used for each send request. It is also possible to select variable sized blocks to measure performance deltas as block size increases or decreases.
- The duration of the test.
- The protocol, either TCP or UDP. TCP is used when data integrity is important (errors are corrected using data re-transmission). UDP is used with applications that are tolerant of data loss, such as video streaming.
Azure Premium Storage: design for high performance
This article provides guidelines for building high performance applications using Azure Premium Storage. You can use the instructions provided in this document combined with performance best practices applicable to technologies used by your application. To illustrate the guidelines, we have used SQL Server running on Premium Storage as an example throughout this document.
While we address performance scenarios for the Storage layer in this article, you will need to optimize the application layer. For example, if you are hosting a SharePoint Farm on Azure Premium Storage, you can use the SQL Server examples from this article to optimize the database server. Additionally, optimize the SharePoint Farm's Web server and Application server to get the most performance.
This article will help answer the following common questions about optimizing application performance on Azure Premium Storage:
- How to measure your application performance?
- Why are you not seeing expected high performance?
- Which factors influence your application performance on Premium Storage?
- How do these factors influence performance of your application on Premium Storage?
- How can you optimize for IOPS, Bandwidth and Latency?
We have provided these guidelines specifically for Premium Storage because workloads running on Premium Storage are highly performance sensitive. We have provided examples where appropriate. You can also apply some of these guidelines to applications running on IaaS VMs with Standard Storage disks.
Note
Sometimes, what appears to be a disk performance issue is actually a network bottleneck. In these situations, you should optimize your network performance.
If you are looking to benchmark your disk, see our article on Benchmarking a disk.
If your VM supports accelerated networking, you should make sure it is enabled. If it is not enabled, you can enable it on already deployed VMs on both Windows and Linux.
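As a sketch, you can enable it on an existing NIC with the Azure CLI (the resource group and NIC names are placeholders):

```bash
# Enable accelerated networking on an already-deployed VM's network interface
az network nic update \
    --resource-group myResourceGroup \
    --name myNic \
    --accelerated-networking true
```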
Before you begin, if you are new to Premium Storage, first read the Select an Azure disk type for IaaS VMs and Scalability targets for premium page blob storage accounts.
Application performance indicators
We assess whether an application is performing well using performance indicators such as: how fast an application processes a user request, how much data an application processes per request, how many requests an application processes in a specific period of time, and how long a user has to wait for a response after submitting a request. The technical terms for these performance indicators are IOPS, throughput (or bandwidth), and latency.
In this section, we will discuss the common performance indicators in the context of Premium Storage. In the following section, Gathering Application Requirements, you will learn how to measure these performance indicators for your application. Later in Optimizing Application Performance, you will learn about the factors affecting these performance indicators and recommendations to optimize them.
IOPS
IOPS, or Input/output Operations Per Second, is the number of requests that your application is sending to the storage disks in one second. An input/output operation could be read or write, sequential, or random. Online Transaction Processing (OLTP) applications like an online retail website need to process many concurrent user requests immediately. The user requests are insert and update intensive database transactions, which the application must process quickly. Therefore, OLTP applications require very high IOPS. Such applications handle millions of small and random IO requests. If you have such an application, you must design the application infrastructure to optimize for IOPS. In the later section, Optimizing Application Performance, we discuss in detail all the factors that you must consider to get high IOPS.
When you attach a premium storage disk to your high scale VM, Azure provisions for you a guaranteed number of IOPS as per the disk specification. For example, a P50 disk provisions 7500 IOPS. Each high scale VM size also has a specific IOPS limit that it can sustain. For example, a Standard GS5 VM has 80,000 IOPS limit.
Throughput
Throughput, or bandwidth is the amount of data that your application is sending to the storage disks in a specified interval. If your application is performing input/output operations with large IO unit sizes, it requires high throughput. Data warehouse applications tend to issue scan intensive operations that access large portions of data at a time and commonly perform bulk operations. In other words, such applications require higher throughput. If you have such an application, you must design its infrastructure to optimize for throughput. In the next section, we discuss in detail the factors you must tune to achieve this.
When you attach a premium storage disk to a high scale VM, Azure provisions throughput as per that disk's specification. For example, a P50 disk provisions 250 MB per second of disk throughput. Each high scale VM size also has a specific throughput limit that it can sustain. For example, a Standard GS5 VM has a maximum throughput of 2,000 MB per second.
There is a relation between throughput and IOPS as shown in the formula below.
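$$\text{Throughput} = \text{IOPS} \times \text{IO size}$$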
Therefore, it is important to determine the optimal throughput and IOPS values that your application requires; as you try to optimize one, the other is also affected. In a later section, Optimizing Application Performance, we discuss optimizing IOPS and throughput in more detail.
Latency
Latency is the time it takes an application to receive a single request, send it to the storage disks, and send the response back to the client. This is a critical measure of an application's performance in addition to IOPS and throughput. The latency of a premium storage disk is the time it takes to retrieve the information for a request and communicate it back to your application. Premium Storage provides consistently low latencies, and Premium Disks are designed to provide single-digit-millisecond latencies for most IO operations. If you enable ReadOnly host caching on premium storage disks, you can get much lower read latency. We discuss disk caching in more detail in the later section on Optimizing Application Performance.
When you are optimizing your application to get higher IOPS and Throughput, it will affect the latency of your application. After tuning the application performance, always evaluate the latency of the application to avoid unexpected high latency behavior.
The following control plane operations on Managed Disks may involve movement of the disk from one storage location to another. This is orchestrated via a background copy of data that can take several hours to complete, typically less than 24 hours, depending on the amount of data in the disks. During that time your application can experience higher-than-usual read latency, as some reads can get redirected to the original location and take longer to complete. There is no impact on write latency during this period.
- Update the storage type.
- Detach and attach a disk from one VM to another.
- Create a managed disk from a VHD.
- Create a managed disk from a snapshot.
- Convert unmanaged disks to managed disks.
Application performance checklist for disks
The first step in designing high-performance applications running on Azure Premium Storage is understanding the performance requirements of your application. After you have gathered performance requirements, you can optimize your application to achieve the most optimal performance.
In the previous section, we explained the common performance indicators, IOPS, Throughput, and Latency. You must identify which of these performance indicators are critical to your application to deliver the desired user experience. For example, high IOPS matters most to OLTP applications processing millions of transactions in a second. Whereas, high Throughput is critical for Data Warehouse applications processing large amounts of data in a second. Extremely low Latency is crucial for real-time applications like live video streaming websites.
Next, measure the maximum performance requirements of your application throughout its lifetime. Use the sample checklist below as a start. Record the maximum performance requirements during normal, peak, and off-hours workload periods. By identifying requirements for all workload levels, you will be able to determine the overall performance requirement of your application. For example, the normal workload of an e-commerce website will be the transactions it serves during most days in a year. The peak workload of the website will be the transactions it serves during the holiday season or special sale events. The peak workload is typically experienced for a limited period, but can require your application to scale to two or more times its normal operation. Find out the 50th, 90th, and 99th percentile requirements. This helps filter out outliers in the performance requirements, so you can focus your efforts on optimizing for the right values.
Application performance requirements checklist
Performance requirements | 50 Percentile | 90 Percentile | 99 Percentile |
---|---|---|---|
Max. Transactions per second | |||
% Read operations | |||
% Write operations | |||
% Random operations | |||
% Sequential operations | |||
IO request size | |||
Average Throughput | |||
Max. Throughput | |||
Min. Latency | |||
Average Latency | |||
Max. CPU | |||
Average CPU | |||
Max. Memory | |||
Average Memory | |||
Queue Depth |
Note
You should consider scaling these numbers based on expected future growth of your application. It is a good idea to plan for growth ahead of time, because it could be harder to change the infrastructure for improving performance later.
If you have an existing application and want to move to Premium Storage, first build the checklist above for the existing application. Then, build a prototype of your application on Premium Storage and design the application based on guidelines described in Optimizing Application Performance in a later section of this document. The next article describes the tools you can use to gather the performance measurements.
Counters to measure application performance requirements
The best way to measure the performance requirements of your application is to use the performance-monitoring tools provided by the server's operating system. You can use PerfMon on Windows and iostat on Linux. These tools capture counters corresponding to each measure explained in the section above. You must capture the values of these counters while your application is running its normal, peak, and off-hours workloads.
The PerfMon counters are available for the processor, memory, and each logical and physical disk of your server. When you use premium storage disks with a VM, the physical disk counters are for each premium storage disk, and the logical disk counters are for each volume created on the premium storage disks. You must capture the values for the disks that host your application workload. If there is a one-to-one mapping between logical and physical disks, you can refer to the physical disk counters; otherwise, refer to the logical disk counters. On Linux, the iostat command generates a CPU and disk utilization report. The disk utilization report provides statistics per physical device or partition. If you have a database server with its data and logs on separate disks, collect this data for both disks. The table below describes counters for disks, processors, and memory; a sample iostat invocation follows the table:
Counter | Description | PerfMon | Iostat |
---|---|---|---|
IOPS or Transactions per second | Number of I/O requests issued to the storage disk per second. | Disk Reads/sec Disk Writes/sec | tps r/s w/s |
Disk Reads and Writes | % of read and write operations performed on the disk. | % Disk Read Time % Disk Write Time | r/s w/s |
Throughput | Amount of data read from or written to the disk per second. | Disk Read Bytes/sec Disk Write Bytes/sec | kB_read/s kB_wrtn/s |
Latency | Total time to complete a disk IO request. | Average Disk sec/Read Average disk sec/Write | await svctm |
IO size | The size of I/O requests issued to the storage disks. | Average Disk Bytes/Read Average Disk Bytes/Write | avgrq-sz |
Queue Depth | Number of outstanding I/O requests waiting to be read from or written to the storage disk. | Current Disk Queue Length | avgqu-sz |
Max. Memory | Amount of memory required to run the application smoothly | % Committed Bytes in Use | Use vmstat |
Max. CPU | Amount of CPU required to run the application smoothly | % Processor time | %util |
Learn more about iostat and PerfMon.
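As a sketch, you could capture the Linux counters above during each workload window with iostat (the sample interval, count, and output file are illustrative):

```bash
# Extended per-device statistics (r/s, w/s, avgrq-sz, avgqu-sz, await, %util)
# in megabytes, sampled every 5 seconds for one hour, saved for later comparison
iostat -dxm 5 720 > iostat_peak_workload.txt
```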
Optimize application performance
The main factors that influence the performance of an application running on Premium Storage are the nature of IO requests, VM size, disk size, number of disks, disk caching, multithreading, and queue depth. You can control some of these factors with knobs provided by the system. Most applications do not give you an option to alter the IO size and queue depth directly. For example, if you are using SQL Server, you cannot choose the IO size and queue depth; SQL Server chooses the optimal IO size and queue depth values itself to get the most performance. It is important to understand the effects of both types of factors on your application performance, so that you can provision appropriate resources to meet performance needs.
Throughout this section, refer to the application requirements checklist that you created, to identify how much you need to optimize your application performance. Based on that, you will be able to determine which factors from this section you will need to tune. To witness the effects of each factor on your application performance, run benchmarking tools on your application setup. Refer to the Benchmarking article, linked at the end, for steps to run common benchmarking tools on Windows and Linux VMs.
Optimize IOPS, throughput, and latency at a glance
The table below summarizes performance factors and the steps necessary to optimize IOPS, throughput, and latency. The sections following this summary describe each factor in much more depth.
For more information on VM sizes and on the IOPS, throughput, and latency available for each type of VM, see Linux VM sizes or Windows VM sizes.
IOPS | Throughput | Latency | |
---|---|---|---|
Example Scenario | Enterprise OLTP application requiring very high transactions per second rate. | Enterprise Data warehousing application processing large amounts of data. | Near real-time applications requiring instant responses to user requests, like online gaming. |
Performance factors | |||
IO size | Smaller IO size yields higher IOPS. | Larger IO size yields higher Throughput. | |
VM size | Use a VM size that offers IOPS greater than your application requirement. | Use a VM size with throughput limit greater than your application requirement. | Use a VM size that offers scale limits greater than your application requirement. |
Disk size | Use a disk size that offers IOPS greater than your application requirement. | Use a disk size with Throughput limit greater than your application requirement. | Use a disk size that offers scale limits greater than your application requirement. |
VM and Disk Scale Limits | IOPS limit of the VM size chosen should be greater than total IOPS driven by storage disks attached to it. | Throughput limit of the VM size chosen should be greater than total Throughput driven by premium storage disks attached to it. | Scale limits of the VM size chosen must be greater than total scale limits of attached premium storage disks. |
Disk Caching | Enable ReadOnly Cache on premium storage disks with read-heavy operations to get higher Read IOPS. | | Enable ReadOnly Cache on premium storage disks with read-heavy operations to get very low Read latencies. |
Disk Striping | Use multiple disks and stripe them together to get a combined higher IOPS and Throughput limit. The combined limit per VM should be higher than the combined limits of attached premium disks. | ||
Stripe Size | Smaller stripe size for random small IO pattern seen in OLTP applications. For example, use stripe size of 64 KB for SQL Server OLTP application. | Larger stripe size for sequential large IO pattern seen in Data Warehouse applications. For example, use 256 KB stripe size for SQL Server Data warehouse application. | |
Multithreading | Use multithreading to push higher number of requests to Premium Storage that will lead to higher IOPS and Throughput. For example, on SQL Server set a high MAXDOP value to allocate more CPUs to SQL Server. | ||
Queue Depth | Larger Queue Depth yields higher IOPS. | Larger Queue Depth yields higher Throughput. | Smaller Queue Depth yields lower latencies. |
Nature of IO requests
An IO request is a unit of input/output operation that your application will be performing. Identifying the nature of IO requests, random or sequential, read or write, small or large, will help you determine the performance requirements of your application. It is important to understand the nature of IO requests, to make the right decisions when designing your application infrastructure. IOs must be distributed evenly to achieve the best performance possible.
IO size is one of the more important factors. The IO size is the size of the input/output operation request generated by your application. The IO size has a significant impact on performance especially on the IOPS and Bandwidth that the application is able to achieve. The following formula shows the relationship between IOPS, IO size, and Bandwidth/Throughput.
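$$\text{IOPS} \times \text{IO size} = \text{Throughput (Bandwidth)}$$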
Some applications allow you to alter their IO size, while others do not. For example, SQL Server determines the optimal IO size itself and does not provide users with any knobs to change it. On the other hand, Oracle provides a parameter called DB_BLOCK_SIZE that you can use to configure the I/O request size of the database.
If you are using an application that does not allow you to change the IO size, use the guidelines in this article to optimize the performance KPI that is most relevant to your application. For example:
- An OLTP application generates millions of small and random IO requests. To handle these types of IO requests, you must design your application infrastructure to get higher IOPS.
- A data warehousing application generates large and sequential IO requests. To handle these types of IO requests, you must design your application infrastructure to get higher Bandwidth or Throughput.
If you are using an application that allows you to change the IO size, use this rule of thumb for the IO size in addition to the other performance guidelines:
- Smaller IO size to get higher IOPS. For example, 8 KB for an OLTP application.
- Larger IO size to get higher Bandwidth/Throughput. For example, 1024 KB for a data warehouse application.
Here is an example on how you can calculate the IOPS and Throughput/Bandwidth for your application. Consider an application using a P30 disk. The maximum IOPS and Throughput/Bandwidth a P30 disk can achieve is 5000 IOPS and 200 MB per second respectively. Now, if your application requires the maximum IOPS from the P30 disk and you use a smaller IO size like 8 KB, the resulting Bandwidth you will be able to get is 40 MB per second. However, if your application requires the maximum Throughput/Bandwidth from P30 disk, and you use a larger IO size like 1024 KB, the resulting IOPS will be less, 200 IOPS. Therefore, tune the IO size such that it meets both your application's IOPS and Throughput/Bandwidth requirement. The following table summarizes the different IO sizes and their corresponding IOPS and Throughput for a P30 disk.
Application Requirement | I/O size | IOPS | Throughput/Bandwidth |
---|---|---|---|
Max IOPS | 8 KB | 5,000 | 40 MB per second |
Max Throughput | 1024 KB | 200 | 200 MB per second |
Max Throughput + high IOPS | 64 KB | 3,200 | 200 MB per second |
Max IOPS + high Throughput | 32 KB | 5,000 | 160 MB per second |
To get IOPS and Bandwidth higher than the maximum value of a single premium storage disk, use multiple premium disks striped together. For example, stripe two P30 disks to get a combined IOPS of 10,000 IOPS or a combined Throughput of 400 MB per second. As explained in the next section, you must use a VM size that supports the combined disk IOPS and Throughput.
Note
As you increase either IOPS or throughput, the other also increases. Make sure you do not hit the throughput or IOPS limits of the disk or VM when increasing either one.
To witness the effects of IO size on application performance, you can run benchmarking tools on your VM and disks. Create multiple test runs and use different IO size for each run to see the impact. Refer to the Benchmarking article, linked at the end, for more details.
High scale VM sizes
When you start designing an application, one of the first things to do is, choose a VM to host your application. Premium Storage comes with High Scale VM sizes that can run applications requiring higher compute power and a high local disk I/O performance. These VMs provide faster processors, a higher memory-to-core ratio, and a Solid-State Drive (SSD) for the local disk. Examples of High Scale VMs supporting Premium Storage are the DS and GS series VMs.
High Scale VMs are available in different sizes, with varying numbers of CPU cores, amounts of memory, and OS and temporary disk sizes. Each VM size also has a maximum number of data disks that you can attach to the VM. Therefore, the chosen VM size will affect how much processing, memory, and storage capacity is available for your application. It also affects the compute and storage cost. For example, below are the specifications of the largest VM size in the DS series and the GS series:
VM size | CPU cores | Memory | VM disk sizes | Max. data disks | Cache size | IOPS / Bandwidth | Cache IO limits |
---|---|---|---|---|---|---|---|
Standard_DS14 | 16 | 112 GB | OS = 1023 GB, Local SSD = 224 GB | 32 | 576 GB | 50,000 IOPS / 512 MB per second | 4,000 IOPS and 33 MB per second |
Standard_GS5 | 32 | 448 GB | OS = 1023 GB, Local SSD = 896 GB | 64 | 4,224 GB | 80,000 IOPS / 2,000 MB per second | 5,000 IOPS and 50 MB per second |
To view a complete list of all available Azure VM sizes, refer to Windows VM sizes or Linux VM sizes. Choose a VM size that can meet and scale to your desired application performance requirements. In addition, take into account the following important considerations when choosing a VM size.
Scale Limits
The maximum IOPS limits per VM and per disk are different and independent of each other. Make sure that the application is driving IOPS within the limits of the VM as well as the premium disks attached to it. Otherwise, application performance will experience throttling.
As an example, suppose an application requirement is a maximum of 4,000 IOPS. To achieve this, you provision a P30 disk on a DS1 VM. The P30 disk can deliver up to 5,000 IOPS. However, the DS1 VM is limited to 3,200 IOPS. Consequently, the application performance will be constrained by the VM limit at 3,200 IOPS and there will be degraded performance. To prevent this situation, choose a VM and disk size that will both meet application requirements.
Cost of Operation
In many cases, it is possible that your overall cost of operation using Premium Storage is lower than using Standard Storage.
For example, consider an application requiring 16,000 IOPS. To achieve this performance, you will need a Standard_D14 Azure IaaS VM, which can give a maximum IOPS of 16,000 using 32 standard storage 1 TB disks. Each 1-TB standard storage disk can achieve a maximum of 500 IOPS. The estimated cost of this VM per month will be $1,570. The monthly cost of 32 standard storage disks will be $1,638. The estimated total monthly cost will be $3,208.
However, if you hosted the same application on Premium Storage, you will need a smaller VM size and fewer premium storage disks, thus reducing the overall cost. A Standard_DS13 VM can meet the 16,000 IOPS requirement using four P30 disks. The DS13 VM has a maximum IOPS of 25,600 and each P30 disk has a maximum IOPS of 5,000. Overall, this configuration can achieve 5,000 x 4 = 20,000 IOPS. The estimated cost of this VM per month will be $1,003. The monthly cost of four P30 premium storage disks will be $544.34. The estimated total monthly cost will be $1,544.
The table below summarizes the cost breakdown of this scenario for Standard and Premium Storage.
Standard | Premium | |
---|---|---|
Cost of VM per month | $1,570.58 (Standard_D14) | $1,003.66 (Standard_DS13) |
Cost of Disks per month | $1,638.40 (32 x 1-TB disks) | $544.34 (4 x P30 disks) |
Overall Cost per month | $3,208.98 | $1,544.34 |
Linux Distros
With Azure Premium Storage, you get the same level of Performance for VMs running Windows and Linux. We support many flavors of Linux distros, and you can see the complete list here. It is important to note that different distros are better suited for different types of workloads. You will see different levels of performance depending on the distro your workload is running on. Test the Linux distros with your application and choose the one that works best.
When running Linux with Premium Storage, check the latest updates about required drivers to ensure high performance.
Premium storage disk sizes
Azure Premium Storage offers a variety of sizes so you can choose the one that best suits your needs. Each disk size has a different scale limit for IOPS, bandwidth, and storage. Choose the right Premium Storage disk size depending on the application requirements and the high scale VM size. The table below shows the disk sizes and their capabilities. P4, P6, P15, P60, P70, and P80 sizes are currently only supported for Managed Disks.
Premium SSD sizes | P1 | P2 | P3 | P4 | P6 | P10 | P15 | P20 | P30 | P40 | P50 | P60 | P70 | P80 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Disk size in GiB | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767 |
Provisioned IOPS per disk | 120 | 120 | 120 | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000 |
Provisioned Throughput per disk | 25 MB/sec | 25 MB/sec | 25 MB/sec | 25 MB/sec | 50 MB/sec | 100 MB/sec | 125 MB/sec | 150 MB/sec | 200 MB/sec | 250 MB/sec | 250 MB/sec | 500 MB/sec | 750 MB/sec | 900 MB/sec |
Max burst IOPS per disk | 3,500 | 3,500 | 3,500 | 3,500 | 3,500 | 3,500 | 3,500 | 3,500 | ||||||
Max burst throughput per disk | 170 MB/sec | 170 MB/sec | 170 MB/sec | 170 MB/sec | 170 MB/sec | 170 MB/sec | 170 MB/sec | 170 MB/sec | ||||||
Max burst duration | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | ||||||
Eligible for reservation | No | No | No | No | No | No | No | No | Yes, up to one year | Yes, up to one year | Yes, up to one year | Yes, up to one year | Yes, up to one year | Yes, up to one year |
How many disks you need depends on the disk size you choose. You could use a single P50 disk or multiple P10 disks to meet your application requirement. Take into account the considerations listed below when making the choice.
Scale Limits (IOPS and Throughput)
The IOPS and throughput limits of each Premium disk size are different from, and independent of, the VM scale limits. Make sure that the total IOPS and throughput from the disks are within the scale limits of the chosen VM size.
For example, suppose an application requires a maximum of 250 MB/sec throughput, and you are using a DS4 VM with a single P30 disk. The DS4 VM can give up to 256 MB/sec throughput. However, a single P30 disk has a throughput limit of 200 MB/sec. Consequently, the application will be constrained at 200 MB/sec due to the disk limit. To overcome this limit, provision more than one data disk to the VM or resize your disks to P40 or P50.
Note
Reads served by the cache are not included in the disk IOPS and throughput, and hence are not subject to disk limits. The cache has its own separate IOPS and throughput limit per VM.
For example, initially your reads and writes are 60 MB/sec and 40 MB/sec, respectively. Over time, the cache warms up and serves more and more of the reads from the cache. Then, you can get higher write throughput from the disk.
Number of Disks
Determine the number of disks you will need by assessing application requirements. Each VM size also has a limit on the number of disks that you can attach to the VM. Typically, this is twice the number of cores. Ensure that the VM size you choose can support the number of disks needed.
Remember, the Premium Storage disks have higher performance capabilities compared to Standard Storage disks. Therefore, if you are migrating your application from Azure IaaS VM using Standard Storage to Premium Storage, you will likely need fewer premium disks to achieve the same or higher performance for your application.
Disk caching
High Scale VMs that leverage Azure Premium Storage have a multi-tier caching technology called BlobCache. BlobCache uses a combination of the host RAM and local SSD for caching. This cache is available for the Premium Storage persistent disks and the VM local disks. By default, this cache setting is set to Read/Write for OS disks and ReadOnly for data disks hosted on Premium Storage. With disk caching enabled on the Premium Storage disks, the high scale VMs can achieve extremely high levels of performance that exceed the underlying disk performance.
Warning
Disk Caching is not supported for disks 4 TiB and larger. If multiple disks are attached to your VM, each disk that is smaller than 4 TiB will support caching.
Changing the cache setting of an Azure disk detaches and re-attaches the target disk. If it is the operating system disk, the VM is restarted. Stop all applications/services that might be affected by this disruption before changing the disk cache setting. Not following those recommendations could lead to data corruption.
To learn more about how BlobCache works, refer to the Inside Azure Premium Storage blog post.
It is important to enable cache on the right set of disks. Whether you should enable disk caching on a premium disk depends on the workload pattern that disk will be handling. The table below shows the default cache settings for OS and data disks.
Disk type | Default cache setting |
---|---|
OS disk | ReadWrite |
Data disk | ReadOnly |
The following are the recommended disk cache settings for data disks:
Disk caching setting | Recommendation on when to use this setting |
---|---|
None | Configure host-cache as None for write-only and write-heavy disks. |
ReadOnly | Configure host-cache as ReadOnly for read-only and read-write disks. |
ReadWrite | Configure host-cache as ReadWrite only if your application properly handles writing cached data to persistent disks when needed. |
ReadOnly
By configuring ReadOnly caching on Premium Storage data disks, you can achieve low read latency and very high read IOPS and throughput for your application. This is due to two reasons:
- Reads performed from the cache, which is in the VM memory and local SSD, are much faster than reads from the data disk, which is on Azure blob storage.
- Premium Storage does not count the reads served from the cache towards the disk IOPS and throughput. Therefore, your application is able to achieve higher total IOPS and throughput.
ReadWrite
By default, the OS disks have ReadWrite caching enabled. We have recently added support for ReadWrite caching on data disks as well. If you are using ReadWrite caching, you must have a proper way to write the data from the cache to persistent disks. For example, SQL Server handles writing cached data to the persistent storage disks on its own. Using the ReadWrite cache with an application that does not handle persisting the required data can lead to data loss if the VM crashes.
None
Currently, None is supported only on data disks; it is not supported on OS disks. If you set None on an OS disk, Azure overrides the setting internally and sets it to ReadOnly.
As an example, you can apply these guidelines to SQL Server running on Premium Storage as follows:

- Configure 'ReadOnly' cache on premium storage disks hosting data files.
  a. The fast reads from cache lower the SQL Server query time, since data pages are retrieved much faster from the cache than directly from the data disks.
  b. Serving reads from cache means there is additional throughput available from the premium data disks. SQL Server can use this additional throughput for retrieving more data pages and for other operations like backup/restore, batch loads, and index rebuilds.
- Configure 'None' cache on premium storage disks hosting the log files.
  a. Log files have primarily write-heavy operations; therefore, they do not benefit from the ReadOnly cache.
Optimize performance on Linux VMs
For all premium SSDs or ultra disks with cache set to ReadOnly or None, you must disable 'barriers' when you mount the file system. You don't need barriers in this scenario because the writes to premium storage disks are durable for these cache settings. When the write request successfully finishes, data has been written to the persistent store. To disable 'barriers,' use one of the following methods. Choose the one for your file system:
- For reiserFS, to disable barriers, use the `barrier=none` mount option. (To enable barriers, use `barrier=flush`.)
- For ext3/ext4, to disable barriers, use the `barrier=0` mount option. (To enable barriers, use `barrier=1`.)
- For XFS, to disable barriers, use the `nobarrier` mount option. (To enable barriers, use `barrier`.)
- For premium storage disks with cache set to ReadWrite, enable barriers for write durability.
- For volume labels to persist after you restart the VM, you must update /etc/fstab with the universally unique identifier (UUID) references to the disks. For more information, see Add a managed disk to a Linux VM.
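As a sketch, an /etc/fstab entry for an ext4 premium data disk with host cache set to ReadOnly or None might look like this (the UUID and mount point are placeholders):

```bash
# ext4 data disk with barriers disabled; writes are already durable at the storage layer
UUID=<your-disk-uuid>  /datadrive  ext4  defaults,barrier=0  0  2
```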
The following Linux distributions have been validated for premium SSDs. For better performance and stability with premium SSDs, we recommend that you upgrade your VMs to one of these versions or newer.
Some of the versions require the latest Linux Integration Services (LIS), v4.0, for Azure. To download and install a distribution, follow the link listed in the following table. We add images to the list as we complete validation. Our validations show that performance varies for each image. Performance depends on workload characteristics and your image settings. Different images are tuned for different kinds of workloads.
Distribution | Version | Supported kernel | Details |
---|---|---|---|
Ubuntu | 12.04 or newer | 3.2.0-75.110+ | |
Ubuntu | 14.04 or newer | 3.13.0-44.73+ | |
Debian | 7.x, 8.x or newer | 3.16.7-ckt4-1+ | |
SUSE | SLES 12 or newer | 3.12.36-38.1+ | |
SUSE | SLES 11 SP4 or newer | 3.0.101-0.63.1+ | |
CoreOS | 584.0.0+ or newer | 3.18.4+ | |
CentOS | 6.5, 6.6, 6.7, 7.0, or newer | | LIS4 required. See the note in the next section. |
CentOS | 7.1+ or newer | 3.10.0-229.1.2.el7+ | LIS4 recommended. See the note in the next section. |
Red Hat Enterprise Linux (RHEL) | 6.8+, 7.2+, or newer | ||
Oracle | 6.0+, 7.2+, or newer | UEK4 or RHCK | |
Oracle | 7.0-7.1 or newer | UEK4 or RHCK w/LIS4 | |
Oracle | 6.4-6.7 or newer | UEK4 or RHCK w/LIS4 |
LIS drivers for OpenLogic CentOS
If you're running OpenLogic CentOS VMs, run the following command to install the latest drivers:
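The command block itself is a sketch, assuming the standard microsoft-hyper-v package referenced in the note below:

```bash
# Remove the older KVP daemon if present (may return an error if not installed; that's OK)
sudo rpm -e hypervkvpd

# Install the Microsoft Hyper-V / LIS drivers
sudo yum install microsoft-hyper-v
```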
In some cases, the commands above will upgrade the kernel as well. If a kernel update is required, you may need to run the commands again after rebooting to fully install the microsoft-hyper-v package.
Disk striping
When several premium storage persistent disks are attached to a high scale VM, the disks can be striped together to aggregate their IOPS, bandwidth, and storage capacity.
On Windows, you can use Storage Spaces to stripe disks together. You must configure one column for each disk in a pool; otherwise, the overall performance of the striped volume can be lower than expected due to uneven distribution of traffic across the disks.
Important: Using the Server Manager UI, you can set up to 8 columns for a striped volume. When attaching more than eight disks, use PowerShell to create the volume and set the number of columns equal to the number of disks. For example, if there are 16 disks in a single stripe set, specify 16 in the NumberOfColumns parameter of the New-VirtualDisk PowerShell cmdlet.
On Linux, use the mdadm utility to stripe disks together; a minimal sketch follows. For detailed steps on striping disks on Linux, refer to Configure Software RAID on Linux.
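A minimal sketch of creating a four-disk RAID-0 stripe with mdadm (the device names and the 64 KB chunk size are illustrative; choose the chunk size per the stripe-size guidance below):

```bash
# Create a RAID-0 (striped) array across four premium data disks; --chunk is in KiB
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Put a file system on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /datadrive
sudo mount /dev/md0 /datadrive
```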
Stripe Size
An important configuration in disk striping is the stripe size. The stripe size, or block size, is the smallest chunk of data that an application can address on a striped volume. The stripe size you configure depends on the type of application and its request pattern. If you choose the wrong stripe size, it can lead to IO misalignment, which degrades the performance of your application.
For example, if an IO request generated by your application is bigger than the disk stripe size, the storage system writes it across stripe unit boundaries on more than one disk. When it is time to access that data, the system has to seek across more than one stripe unit to complete the request. The cumulative effect of such behavior can lead to substantial performance degradation. On the other hand, if the IO request size is smaller than the stripe size, and it is random in nature, the IO requests may pile up on the same disk, causing a bottleneck and ultimately degrading IO performance.
Depending on the type of workload your application is running, choose an appropriate stripe size. For random small IO requests, use a smaller stripe size. Whereas for large sequential IO requests use a larger stripe size. Find out the stripe size recommendations for the application you will be running on Premium Storage. For SQL Server, configure stripe size of 64 KB for OLTP workloads and 256 KB for data warehousing workloads. See Performance best practices for SQL Server on Azure VMs to learn more.
Note
You can stripe together a maximum of 32 premium storage disks on a DS series VM and 64 premium storage disks on a GS series VM.
Multi-threading
Azure has designed the Premium Storage platform to be massively parallel. Therefore, a multi-threaded application achieves much higher performance than a single-threaded one. A multi-threaded application splits its tasks across multiple threads and increases the efficiency of its execution by utilizing the VM and disk resources to the maximum.
For example, if your application is running on a single core VM using two threads, the CPU can switch between the two threads to achieve efficiency. While one thread is waiting on a disk IO to complete, the CPU can switch to the other thread. In this way, two threads can accomplish more than a single thread would. If the VM has more than one core, it further decreases running time since each core can execute tasks in parallel.
You may not be able to change the way an off-the-shelf application implements single-threading or multi-threading. For example, SQL Server is capable of handling multiple CPUs and cores; however, SQL Server decides under what conditions it will leverage one or more threads to process a query. It can run queries and build indexes using multi-threading. For a query that involves joining large tables and sorting data before returning to the user, SQL Server will likely use multiple threads. However, a user cannot control whether SQL Server executes a query using a single thread or multiple threads.
There are configuration settings that you can alter to influence this multi-threading or parallel processing of an application. For example, in the case of SQL Server, it is the maximum degree of parallelism configuration. This setting, called MAXDOP, allows you to configure the maximum number of processors SQL Server can use for parallel processing. You can configure MAXDOP for individual queries or index operations. This is beneficial when you want to balance the resources of your system for a performance-critical application.
For example, say your application using SQL Server is executing a large query and an index operation at the same time, and you want the index operation to be more performant than the large query. In such a case, you can set the MAXDOP value of the index operation higher than the MAXDOP value for the query. This way, SQL Server has more processors that it can leverage for the index operation than for the large query. Remember, you do not control the number of threads SQL Server will use for each operation; you can only control the maximum number of processors dedicated to multi-threading.
Learn more about degrees of parallelism in SQL Server. Identify the settings that influence multi-threading in your application, and tune their configurations to optimize performance.
Queue depth
The queue depth, also called queue length or queue size, is the number of pending IO requests in the system. The queue depth value determines how many IO operations your application can line up for the storage disks to process. It affects all three application performance indicators discussed in this article: IOPS, throughput, and latency.
Queue depth and multi-threading are closely related. The queue depth value indicates how much multi-threading the application can achieve. If the queue depth is large, the application can execute more operations concurrently, in other words, more multi-threading. If the queue depth is small, even a multi-threaded application will not have enough requests lined up for concurrent execution.
Typically, off-the-shelf applications do not allow you to change the queue depth, because an incorrect value will do more harm than good. Such applications set the queue depth to the right value to get optimal performance. However, it is important to understand this concept so that you can troubleshoot performance issues with your application. You can also observe the effects of queue depth by running benchmarking tools on your system.
Some applications provide settings to influence the queue depth. For example, the MAXDOP (maximum degree of parallelism) setting in SQL Server, explained in the previous section, is a way to influence queue depth and multi-threading, although it does not directly change SQL Server's queue depth value.
High queue depth
A high queue depth lines up more operations on the disk. The disk knows the next request in its queue ahead of time, so it can schedule operations in advance and process them in an optimal sequence. Since the application is sending more requests to the disk, the disk can process more parallel IOs, and the application will be able to achieve higher IOPS. Since the application is processing more requests, its total throughput also increases.
Typically, an application can achieve maximum throughput with 8-16+ outstanding IOs per attached disk. If the queue depth is one, the application is not pushing enough IOs to the system, and it will process a smaller amount of data in a given period; in other words, less throughput.
For example, in SQL Server, setting the MAXDOP value for a query to 4 informs SQL Server that it can use up to four cores to execute the query. SQL Server then determines the optimal queue depth value and the number of cores for the query execution.
Optimal queue depth
A very high queue depth value also has its drawbacks. If the queue depth value is too high, the application will try to drive very high IOPS. Unless the application has persistent disks with sufficient provisioned IOPS, this can negatively affect application latencies. The following formula shows the relationship between IOPS, latency, and queue depth.
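$$\text{Queue Depth} = \text{IOPS} \times \text{Latency (seconds)}$$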
You should not configure the queue depth to an arbitrarily high value, but to an optimal value that delivers enough IOPS for the application without affecting latencies. For example, if the application latency needs to be 1 millisecond, the queue depth required to achieve 5,000 IOPS is QD = 5000 x 0.001 = 5.
Queue Depth for Striped Volume
For a striped volume, maintain a queue depth high enough that every disk individually reaches its peak queue depth. For example, consider an application that pushes a queue depth of 2 on a stripe of four disks. The two IO requests will go to two disks, and the remaining two disks will sit idle. Therefore, configure the queue depth such that all the disks can be kept busy. The formula below shows how to determine the queue depth of striped volumes.
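$$\text{Queue Depth of striped volume} = \text{Queue Depth per disk} \times \text{Number of disks in the stripe}$$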
Throttling
Azure Premium Storage provisions a specified number of IOPS and a specified throughput depending on the VM sizes and disk sizes you choose. Anytime your application tries to drive IOPS or throughput above these VM or disk limits, Premium Storage throttles it. This manifests as degraded performance in your application, which can mean higher latency, lower throughput, or lower IOPS. If Premium Storage did not throttle, your application could fail completely by exceeding what its resources are capable of achieving. So, to avoid performance issues due to throttling, always provision sufficient resources for your application. Take into consideration what we discussed in the VM sizes and disk sizes sections above. Benchmarking is the best way to figure out what resources you will need to host your application.
Next steps
If you are looking to benchmark your disk, see our article on Benchmarking a disk.
Learn more about the available disk types: Select a disk type
For SQL Server users, read articles on Performance Best Practices for SQL Server: