Exploring Aurora Serverless V2 for MySQL


Aurora Serverless V2 became generally available on 21-04-22 for MySQL 8 and PostgreSQL, with promising features that overcome the V1 disadvantages. Below are the major ones:


- Online auto instance upsize (vertical scaling)
- Read scaling (supports up to 15 read replicas)
- Supports mixed-configuration clusters, i.e., the master can be a normal Aurora (provisioned) instance while the readers are Serverless V2, and vice versa
- Multi-AZ capability (HA)
- Aurora global databases (DR)
- Scaling based on memory pressure
- Scales vertically while SQL is running
- Public IP allowed
- Works with a custom port
- Compatible with Aurora version 3.02.0, i.e., MySQL >= 8.0.23 (only supported version)
- Supports binlog
- Support for RDS Proxy
- High cost savings

Now let’s proceed to get our hands dirty by launching Serverless V2 for MySQL.

Launching Serverless V2

It’s time to choose the Engine & Version for launching our serverless v2

Engine type : Amazon Aurora

Edition : Amazon Aurora MySQL-Compatible Edition (only MySQL is used here)

Filters : Turn ON “Show versions that support Serverless V2” (saves time)

Version : Aurora MySQL 3.02.0 ( compatible with MySQL 8.0.23 )

Instance configuration & Availability

DB instance class : Serverless ‘Serverless v2 – new’

Capacity Range : Set based on your requirements and costing ( 1 to 64 ACUs )

Aurora capacity unit (ACU) : 2 GB RAM + corresponding CPU + network

Availability & Durability : Create an Aurora replica

While choosing the capacity range, the minimum ACU defines the lowest capacity to which the cluster scales down (here, 1 ACU), and the maximum ACU defines the highest capacity to which it can scale up.
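As a rough illustration of the capacity range, the sketch below translates an ACU range into approximate memory bounds, assuming the 1 ACU ≈ 2 GB RAM mapping mentioned above. The helper function is hypothetical, not an AWS API; the 0.5–128 bounds reflect the ACU limits AWS documents for Serverless V2.

```python
# Rough sketch: translate an Aurora Serverless v2 ACU range into
# approximate memory bounds, assuming 1 ACU ~= 2 GB RAM.
GB_PER_ACU = 2

def acu_range_to_memory(min_acu: float, max_acu: float) -> tuple:
    """Return (min_gb, max_gb) the cluster can scale between."""
    if not 0.5 <= min_acu <= max_acu <= 128:
        raise ValueError("ACUs must satisfy 0.5 <= min <= max <= 128")
    return (min_acu * GB_PER_ACU, max_acu * GB_PER_ACU)

# The range used in this post: 1 to 64 ACUs
print(acu_range_to_memory(1, 64))  # -> (2, 128)
```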

Connectivity and Misc setting:

Choose the below settings based on your application needs

- VPC
- Subnet
- Public access (avoid, in favor of basic security)
- VPC security group
- Additional configuration (cluster group, parameter group, custom DB port, Performance Insights, backup config, auto minor version upgrade, deletion protection)

To keep it short, I accepted all the defaults and proceeded to “Create database”.

Once you click “Create database” you can see the cluster getting created. Initially, both the nodes in the cluster will be marked as “Reader instance”; don’t panic, it’s quite normal.

Once the first instance becomes available, it is promoted to “Writer” and the cluster is ready to accept connections. After that, the reader gets created in an adjacent AZ; refer to the image below.

Connectivity & End-point:

A Serverless V2 cluster also provides three endpoints: a highly available cluster endpoint, a read-only endpoint, and individual instance endpoints.

- Cluster endpoint – connects your application to the current primary DB instance of the Serverless V2 cluster; your application can perform both read and write operations through it.
- Reader endpoint – a Serverless V2 cluster has a single built-in reader endpoint, used only for read-only connections; it also balances connections across up to 15 read-replica instances.
- Instance endpoints – each DB instance in a Serverless V2 cluster has its own unique instance endpoint.

You should always map the cluster and read-only endpoints to your applications for high availability.
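To make the endpoint mapping concrete, here is a minimal, hypothetical routing sketch: writes go to the cluster endpoint, read-only statements can go to the reader endpoint. The hostnames are placeholders, not real endpoints, and the statement classification is deliberately simplistic.

```python
# Minimal sketch of endpoint routing for an Aurora Serverless v2 cluster.
# Hostnames below are placeholders; substitute your cluster's endpoints.
CLUSTER_ENDPOINT = "my-cluster.cluster-xxxx.ap-south-1.rds.amazonaws.com"
READER_ENDPOINT = "my-cluster.cluster-ro-xxxx.ap-south-1.rds.amazonaws.com"

def pick_endpoint(sql: str) -> str:
    """Send read-only statements to the reader endpoint; everything
    else (writes, DDL) goes to the read-write cluster endpoint."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    return READER_ENDPOINT if first_word in ("SELECT", "SHOW") else CLUSTER_ENDPOINT

print(pick_endpoint("SELECT * FROM orders"))        # reader endpoint
print(pick_endpoint("INSERT INTO orders VALUES 1"))  # cluster endpoint
```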


Though CloudWatch covers the needed metrics, I used PMM to get a deep and granular insight into DB behavior (I used this link for a quick installation). In short, for serverless I wanted to view the below:

- DB uptime, to see if the DB reboots during scale-up or scale-down
- Connection failures
- Memory resize (InnoDB buffer pool)
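The uptime check can be automated with a small sketch: if MySQL’s `Uptime` counter (from `SHOW GLOBAL STATUS LIKE 'Uptime'`) ever decreases between samples, the server restarted. The helper below is illustrative; wiring it to a live connection is left out.

```python
# Sketch: detect a restart during scaling by watching MySQL's Uptime
# counter; if uptime ever decreases between samples, the server rebooted.
def rebooted(uptime_samples):
    """uptime_samples: Uptime values (seconds) sampled at increasing
    wall-clock times, e.g. via SHOW GLOBAL STATUS LIKE 'Uptime'."""
    return any(later < earlier
               for earlier, later in zip(uptime_samples, uptime_samples[1:]))

# Monotonically growing uptime -> no reboot during the scaling window
print(rebooted([60, 120, 180, 240]))  # False
# A drop back toward zero would indicate a restart
print(rebooted([60, 120, 15, 75]))    # True
```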

Here I took a t2.large machine to install and configure PMM.

Now let’s take Serverless V2 for a spin:

The beauty of Aurora Serverless V2 is that it supports both vertical scaling, i.e., automatic instance upsize, as well as horizontal scaling with read replicas.

The remaining portion of this blog will cover the vertical scaling feature of Serverless V2.

Vertical scaling:

With most clusters out there, the most difficult part is upsizing the writer instance on the fly without interrupting existing connections. Even when using proxies/DNS for failover, there would be connection failures.

I was most curious about testing the vertical scaling feature, since AWS claims it is online and does not disrupt existing connections, i.e., it scales even while queries are running. Wow!! Fingers crossed.

Come on, let’s begin the test. I decided to remove the “reader instance” first; below is the view of our cluster now.

My initial buffer pool allocation was 672 MB; at our minimum of 1 ACU we have 2 GB of memory, out of which up to three-fourths can be allocated as the InnoDB buffer pool.

Test Case:

The test case is quite simple: I am imposing an insert-only (write) workload using the simple load-emulation tool sysbench.

Below is the command used

# sysbench /usr/share/sysbench/oltp_insert.lua --threads=8 --report-interval=1 --rate=20 --mysql-host=mydbops-serverlessv2.cluster-cw4ye4iwvr7l.ap-south-1.rds.amazonaws.com --mysql-user=mydbops --mysql-password=XxxxXxXXX --mysql-port=3306 --tables=8 --table-size=10000000 prepare

I started loading 8 tables in parallel with 8 threads and a dataset of 10M records per table (--table-size=10000000).

Observations and Timeline:


Below are my observations during the scale-up process

Inserts started at 03:57:40, with COM_INSERT reaching 12.80/sec, while Serverless was running with a 672 MB buffer pool. Exactly 10 seconds later, at 03:57:50, the first scaling event kicked in and the buffer pool was raised to 2 GB. Let’s have a closer look.

A minute later, at 03:58:40, the second scaling event kicked in and the buffer pool size leaped to ~9 GB.

I was keenly watching MySQL’s uptime during each scale-up, and also watching for thread failures; to my surprise, both were intact. The memory (buffer pool) was scaling linearly at regular intervals of 60 seconds and reached a max of 60 GB at 04:11:40.

The data loading completed at 04:10:50 (graphical stats).

Scale Down:

Post completion of the inserts, there was a brief period of about 5 minutes before anything changed, since in production scale-down has to happen in a slow and steady fashion. The DB was completely idle now and connections were closed; at 04:16:40 the buffer pool memory dropped from 60 GB to 48 GB.

The scale-down process kicked in at regular intervals of 3 minutes from the previous scale-down operation, and finally at 04:34:40 the serverless cluster was back at its baseline capacity.

Adaptive Scale-up & Down

I would say this entire scale-up and scale-down process is very adaptive, intelligent, and well organized:

- No lag in DB performance
- A linear increase and decrease of resources is maintained
- No DB reboots, and connection failures were kept at bay

Below is the complete snapshot of the buffer pool memory scale-up and scale-down process along with the INSERT throughput stats; the whole process took around ~40 minutes.
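As a quick sanity check on the ~40-minute figure, the elapsed windows can be computed from the timestamps observed above (a trivial sketch using Python’s standard library):

```python
from datetime import datetime

def elapsed_minutes(start: str, end: str) -> float:
    """Minutes between two HH:MM:SS timestamps on the same day."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.seconds / 60

# Scale-up window observed in this test (insert start to 60 GB peak)
print(elapsed_minutes("03:57:40", "04:11:40"))  # 14.0
# Scale-down window (first drop at 04:16:40 to baseline at 04:34:40)
print(elapsed_minutes("04:16:40", "04:34:40"))  # 18.0
```

Together with the ~5-minute idle gap before the first scale-down, these windows add up to roughly the ~40 minutes reported.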

Along with the buffer pool, serverless also auto-tunes the below MySQL-specific variables:





AWS recommends keeping these values at their defaults in the custom parameter group of Serverless V2.

Below is the image summary of the entire scale-up and scale-down process.

AWS has nailed vertical scaling with Aurora Serverless; from my point of view it is production-ready, though it’s in the early GA phase.


- The upsize happens gradually, on demand, every 1 minute
- The downsize happens gradually, on idle load, every 3 minutes
- Supported from MySQL 8.0.23 onward
- Leave the above-mentioned auto-tuned MySQL variables untouched

Use Cases:

Below are some of the use cases where Aurora serverless V2 fits in perfectly

- Applications such as gaming, retail, and online gambling apps, where usage is high for a known period (say, daytime or during a match) and idle or less utilized otherwise
- Testing and development environments
- Multi-tenant applications where the load is unpredictable
- Batch job processing

This is just a starting point; there are still a lot of conversations pending on Aurora Serverless V2, such as horizontal scaling (read scaling), migration, parameters, DR, Multi-AZ failover, and pricing. Stay tuned!

Would you love to test Serverless V2 in your production environment? Mydbops database engineers are happy to assist.