
PostgreSQL Performance: Local vs. Network-Attached Storage

May 30, 2025 · 3 min read
Burak Yucesoy
Principal Software Engineer

Cloud storage was built around the limits of old hardware. Spinning hard drives (HDDs) were slow and fragile, so early cloud providers moved storage off servers and onto network-attached disks to boost durability and scalability. But hardware has come a long way. Modern NVMe SSDs have eliminated many of the constraints that originally led to network-attached designs, and they are far more affordable: you can get 2.5 million IOPS from a $600 NVMe SSD [1]. By contrast, pushing 2.5 million IOPS through Aurora would cost you $1.3M per month. With NVMe SSDs being faster, cheaper, and more reliable, it's time to rethink PostgreSQL storage.
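The Aurora figure is easy to sanity-check. As a rough sketch, assuming Aurora standard's published I/O price of $0.20 per million requests (an assumption; actual pricing varies by region and configuration), sustaining 2.5 million IOPS for a month works out to:

```python
iops = 2.5e6                       # sustained I/O operations per second
seconds_per_month = 30 * 24 * 3600
io_per_month = iops * seconds_per_month        # total I/O requests in a month

price_per_million_io = 0.20        # USD; assumed Aurora standard I/O price
monthly_cost = io_per_month / 1e6 * price_per_million_io

print(f"${monthly_cost:,.0f}")     # ~$1.3M per month
```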

In this post, we’ll explore how cloud database storage architectures have evolved, how advances in hardware have changed the landscape, and why local NVMe SSDs have become a viable option for cloud databases. We’ll also present benchmarks that compare performance across different storage architectures.

Database Storage over the Years

Before the cloud, databases typically ran on local hard drives (HDDs). This provided low latency for sequential reads but introduced two challenges. Each HDD had a single head assembly seeking across its platters, so random read/write performance was terrible. HDDs also had high annual failure rates because of their rotating disks. Most database teams bought specialized hard drives and put them into elaborate RAID configurations to compensate for these problems.

Then, cloud computing happened. AWS made a bold move and popularized pooling storage remotely across many machines. They connected servers to big clusters of HDDs over a network, solving two major issues at once. There was no longer a single-disk bottleneck, since I/O could be spread across many drives. Replicating data in the background also provided higher redundancy and durability. At the time, database engineers worried that the extra network hop would kill performance. Surprisingly, it worked well enough, and network-attached storage became the default for cloud databases.

With SSDs, trade-offs in cloud architecture have fundamentally changed.

  • Fast speed: SSDs offer high throughput and low latency. This removes much of the performance advantage of spreading I/O across many drives.
  • No moving parts: SSDs are much more reliable than HDDs. This has reduced the durability advantage that centralized storage used to offer.
  • Falling prices: High-performance storage is now affordable. This makes SSDs a cost-effective option for intensive tasks.

Over time, SSDs became more popular due to their better performance, reliability, and falling prices, so cloud providers upgraded their network-attached storage to use SSDs. However, the fundamental design (storage accessed over the network) remained unchanged. This was partly due to path dependency and partly because network-attached storage still offers advantages in certain areas, which we'll discuss in the next section.

The advancements in SSD technology continued even further with NVMe SSDs. Traditional SSDs often used SATA or SAS interfaces, originally designed for spinning disks. In contrast, NVMe was built specifically for flash memory. It connects directly over PCIe, enabling massively parallel data paths and reducing latency.

As we mentioned in the beginning, the cost of 2.5 million IOPS on Aurora is $1.3M/month, but you can achieve the same performance with a $600 local NVMe SSD. Yet cloud providers stuck with network-attached storage, largely out of inertia. At Ubicloud, we believe it’s time for a reset.

Advantages of Network-Attached Storage

Local NVMe SSDs deliver major performance benefits. However, network-attached storage still has two key advantages: elasticity and durability.

Elasticity allows you to scale storage independently of compute. This is especially useful for unpredictable workloads with fluctuating storage needs. In contrast, local storage is coupled with the underlying compute, and scaling it often requires moving data to a new server with larger disks. It is possible to automate this process and perform it safely, but it does add operational complexity.

Durability is another strength of network-attached storage. Centralized network storage systems are usually highly durable due to their built-in replication. That said, NVMe SSDs are already far more reliable than legacy HDDs. This minimizes the risk of disk failure. Still, when using local NVMe, it's important to have a replication and backup strategy in place. Thankfully, PostgreSQL already comes with robust primitives for replication, high availability, and backups. So, building a resilient system for PostgreSQL on top of local NVMe is entirely feasible.
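PostgreSQL's built-in primitives make that strategy concrete. As an illustrative sketch (hostnames and the replication user below are made up), a minimal streaming-replication setup looks roughly like this:

```ini
# postgresql.conf on the primary (illustrative values)
wal_level = replica              # WAL carries enough detail to feed a standby
max_wal_senders = 10             # allow replication connections
wal_keep_size = 1GB              # retain WAL for standbys that fall behind
synchronous_standby_names = 'ANY 1 (standby1)'  # optional: wait for one standby on commit

# On the standby, pg_basebackup -R writes postgresql.auto.conf entries like:
primary_conninfo = 'host=primary.example.internal user=replicator application_name=standby1'
# plus an empty standby.signal file, which starts the server in standby mode
```

Combined with continuous WAL archiving for backups, this is the foundation on which a local-NVMe deployment can match the durability of network-attached storage.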

In summary, network-attached storage offers certain operational advantages, but modern hardware, automation, and database tooling make local NVMe a compelling choice in many scenarios. That is why, at Ubicloud, we're confident using local NVMe for our managed PostgreSQL service.

Benchmarks

We wanted to know how much better PostgreSQL could be with local NVMe drives, so we ran performance benchmarks across three platforms:

  • Ubicloud PostgreSQL: standard-8 instance (8 vCPUs, 32 GB RAM, local NVMe SSD)
  • Amazon RDS for PostgreSQL: db.m8g.2xlarge instance (8 vCPUs, 32 GB RAM) with a GP3 EBS volume
  • Amazon Aurora for PostgreSQL: db.r8g.2xlarge instance (8 vCPUs, 64 GB RAM) with I/O-Optimized storage

To evaluate performance, we ran two industry-standard benchmarks:

  • TPC-C: This benchmark emulates OLTP workloads, characterized by high concurrency and small transactions. We used sysbench to run the TPC-C benchmark (tables: 32, scale: 256).
  • TPC-H: This benchmark emulates analytical workloads, characterized by large scans and complex joins. We used 100 as the scale factor.

On the TPC-C benchmark, Ubicloud with NVMe drives processes 1.4 times more queries than Aurora and 4.6 times more than RDS. Latency at the 99th percentile was 1.9 times lower than Aurora's and 7.7 times lower than RDS's.

Metric               Ubicloud     Aurora       RDS
Transactions/s       873.25       636.31       188.3
Queries/s            24815.08     18076.31     5350.79
Latency, p99 (ms)    314.45       601.29       2405.65
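The headline ratios follow directly from the table; a quick sketch of the arithmetic:

```python
# TPC-C results from the table above
qps = {"Ubicloud": 24815.08, "Aurora": 18076.31, "RDS": 5350.79}
p99_ms = {"Ubicloud": 314.45, "Aurora": 601.29, "RDS": 2405.65}

# Throughput advantage of local NVMe
qps_vs_aurora = qps["Ubicloud"] / qps["Aurora"]
qps_vs_rds = qps["Ubicloud"] / qps["RDS"]

# Latency advantage (lower is better, so divide the other way)
lat_vs_aurora = p99_ms["Aurora"] / p99_ms["Ubicloud"]
lat_vs_rds = p99_ms["RDS"] / p99_ms["Ubicloud"]

print(f"{qps_vs_aurora:.1f}x, {qps_vs_rds:.1f}x, "
      f"{lat_vs_aurora:.1f}x, {lat_vs_rds:.1f}x")  # 1.4x, 4.6x, 1.9x, 7.7x
```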

Moreover, latency was more stable and predictable with NVMe drives.

On the TPC-H benchmark, Ubicloud with NVMe drives was faster on all TPC-H queries. On average, it was 2.42 times faster than Aurora and 2.96 times faster than RDS.

Per-query execution time (lower is better):

Query               Ubicloud     Aurora       RDS
Q01                 75.41        70.41        74.8
Q02                 108.65       238.86       129.69
Q03                 60.95        101.4        223.59
Q04                 119.65       552.57       355.73
Q05                 63.19        83.69        223.53
Q06                 45.91        61.42        179.26
Q07                 60.96        76.28        223.72
Q08                 69.92        92.86        254.33
Q09                 84.94        1596.24      275.71
Q10                 91.38        123.96       314.85
Q11                 35.29        76.76        102.38
Q12                 65.72        119.75       240.15
Q13                 48.66        80.44        83.23
Q14                 46.74        74.11        181.5
Q15                 97.94        161.96       365.75
Q16                 100.05       107.41       107.35
Q17                 235.25       542.08       559.31
Q18                 225.73       605.84       2142.52
Q19                 23.17        161.77       111.9
Q20                 1825.97      10666.92     9866.06
Q21                 103.87       729.29       213.65
Q22                 9.88         30.37        10.49
Mean (Geometric)    79.72        193.23       235.61
Difference          -            2.42x        2.96x
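The "Mean (Geometric)" row uses a geometric rather than arithmetic mean, the standard way to summarize benchmark times so that a single outlier like Q20 does not dominate the average. A sketch of the computation, using the Ubicloud column as an example:

```python
import math

# Ubicloud per-query times (Q01..Q22) from the table above
ubicloud = [75.41, 108.65, 60.95, 119.65, 63.19, 45.91, 60.96, 69.92,
            84.94, 91.38, 35.29, 65.72, 48.66, 46.74, 97.94, 100.05,
            235.25, 225.73, 23.17, 1825.97, 103.87, 9.88]

def geomean(xs):
    # n-th root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

print(f"{geomean(ubicloud):.2f}")  # ~79.7, matching the table's 79.72
```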

The Future of Cloud Databases Is Local

We’re at an inflection point. The cloud storage model built 15 years ago no longer makes sense for today’s hardware and workloads. For data-intensive applications like PostgreSQL, local SSDs are the better default.