Note: The intended audience of these posts is 100% myself, or other technical people who are looking for snippets of stored information/code for a specific task.

Comparing EBS and SSD volumes on i3.large (AWS)

Sun, 22 Apr 2018 06:27:33 +0000

As part of a project exploring the migration of an AWS Aurora MySQL database to an EC2 instance, I ran several tests of EC2 SSD/EBS disk IOPS and bandwidth, which I'll share below.

Why migrate from Aurora to something else? Aurora is great, but AWS charges $0.20 per million IO requests, and for some intensive workloads the IO operations required to serve queries over a month really add up; in my experience it's not uncommon for heavily loaded database instances to run into the tens of billions of IO requests per month. 10 billion IO requests costs $2,000, and that's not including the cost of the instance itself. Of course this all depends on your application's pattern and frequency of data access, but a few billion IO operations really isn't that much IO, especially if one is running any sort of periodic ETL jobs against the database or one of its replicas.
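That cost figure is simple arithmetic on the published $0.20-per-million rate; a quick sanity check:

```shell
# Aurora charges $0.20 per million IO requests, so a month with
# 10 billion IO requests costs:
awk 'BEGIN { printf "$%.2f\n", 10e9 / 1e6 * 0.20 }'
# → $2000.00
```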

The tests below compare several EBS volumes against a local instance SSD on an i3.large instance.


Disk Type                          | Disk Size | Read IOPS        | Write IOPS      | Read Throughput (MB/s) | Write Throughput (MB/s) | Cost ($/GB/month)
-----------------------------------|-----------|------------------|-----------------|------------------------|-------------------------|------------------
EBS – Cold HDD (sc1)               | 500GB     | 31               | 10              | 45                     | 44                      | 0.025
EBS – Throughput HDD (st1)         | 500GB     | 101              | 33              | 53                     | 53                      | 0.045
EBS – Standard SSD (gp2)           | 500GB     | 2074 (bursting)  | 692 (bursting)  | 22-53 (variable perf)  | 53                      | 0.10
NVMe Instance Store SSD (i3.large) | 442GB     | 43281            | 14443           | 238                    | 200                     | 0.2477


For the duration of the GP2 SSD tests the volume would have been in burst mode, which means it theoretically had access to a 3,000 IOPS limit, up from its 1,500 IOPS baseline. I was also very surprised by the low read throughput of the GP2 SSD: the volume would consistently deliver 22MB/s for a period and then sometime later it would shoot to 45MB/s. I can't really explain why the GP2 volume has such variable performance.
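Per AWS's gp2 documentation, the baseline is 3 IOPS per GB, burst credits accumulate in a 5.4-million-credit bucket, and credits drain at (burst rate − baseline) while bursting, so the maximum burst duration for a 500GB volume can be estimated as:

```shell
# gp2 burst-bucket math (figures from AWS gp2 documentation):
awk 'BEGIN {
  size_gb  = 500
  baseline = 3 * size_gb          # 1500 IOPS baseline for a 500GB volume
  burst    = 3000                 # gp2 burst ceiling
  credits  = 5400000              # initial credit bucket
  printf "%d seconds\n", credits / (burst - baseline)
}'
# → 3600 seconds
```

So a freshly provisioned 500GB gp2 volume can sustain 3,000 IOPS for about an hour before falling back to baseline.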

The i3.large has a stated limit of 53.1MB/s of EBS bandwidth, which is reflected in several of the tests. When testing smaller reads and writes, the effective EBS bandwidth may appear to be greater than 53.1MB/s, which I assume is due to some caching mechanism.

For testing methods, see Test Disk IOPS and Read/Write Bandwidth.
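The linked post has the full methodology; as a rough sketch, tests like these are typically run with fio. The job file below is a hypothetical example (the mount point, file size, and queue depth are assumptions, not the exact parameters used in these tests):

```ini
; hypothetical-disk-test.fio — illustrative only, not the exact jobs used above
; Assumes the volume under test is mounted at /mnt/testvol
[global]
directory=/mnt/testvol
size=4G
; direct IO bypasses the page cache so results reflect the device
direct=1
ioengine=libaio
iodepth=32
runtime=60
time_based

; small random reads measure IOPS
[rand-read-iops]
rw=randread
bs=4k

; large sequential reads measure throughput; stonewall waits for the prior job
[seq-read-bw]
rw=read
bs=1M
stonewall
```

Run with `fio hypothetical-disk-test.fio`; the equivalent write tests would swap in `randwrite` and `write`.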