Hardware Optimization: Best Practices for Database Performance Boost
Even the best-designed and fully optimized applications depend on the hardware their databases run on. As a result, a database administrator (DBA) may use all the tuning tricks in the book and still be left battling server-level bottlenecks.
While selecting the right server for your database may not seem like a DBA’s job, dealing with hardware-related performance issues most often is. By getting involved in building the hardware infrastructure of your DBMS, you help create an optimal environment for your workloads and improve the performance and stability of both your applications and the business that depends on them.
So, after ruling out poor indexing, unoptimized queries, database design flaws, and outdated statistics, dare to delve deeper into the underlying hardware.
Why Hardware Matters
Database management applications eventually require hardware upgrades, especially as your databases grow and become more demanding. Similarly, the latest versions of your preferred DBMS leverage new hardware technologies to provide features that help them complete database operations more efficiently.
Optimizing your database server on a hardware level enables:
- A safety buffer in case of application-related flaws
- Performance boost for optimized applications
- Future-proofing your infrastructure
For all these reasons, selecting the right server components for your workloads is worth your while.
Choosing the Right Components
Most databases are I/O-intensive and require a lot of memory. However, this does not mean the CPU should be neglected during database server optimization, as it can take a significant load off both memory and storage.
Take your workload and application requirements into consideration while building a server. You might be focusing on online transaction processing (OLTP), data warehousing (DW), or something in between, running on bare metal or virtual machines. Whatever the workload, your server needs to support it.
Here are some other factors to consider:
- Server Capacity – Look for an infrastructure that supports your required number of simultaneous queries and can handle your expected number of server clients (the sizing sketch after this list illustrates the basic arithmetic).
- Server Performance – Test core CPU performance, memory and I/O speed and latency, as well as buffer pool and storage read/write speeds.
- Future-proofing – Build a system that will support your anticipated user and workload growth.
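As a starting point, here is a minimal back-of-the-envelope sizing sketch in Python. Every figure in it is an assumption chosen for illustration; substitute numbers measured from your own workload and the sizing guidance of your DBMS.

```python
# Back-of-the-envelope capacity sizing (illustrative numbers only; replace
# them with figures measured from your own workload and DBMS documentation).

concurrent_connections = 500      # expected simultaneous clients (assumption)
memory_per_connection_mb = 10     # work area per session (assumption)
working_set_gb = 200              # hot data you want cached in the buffer pool
os_and_overhead_gb = 8            # OS, agents, filesystem cache headroom

buffer_pool_gb = working_set_gb * 1.2  # ~20% headroom for data growth
session_memory_gb = concurrent_connections * memory_per_connection_mb / 1024

total_ram_gb = buffer_pool_gb + session_memory_gb + os_and_overhead_gb
print(f"Estimated RAM requirement: {total_ram_gb:.0f} GB")
```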
Finding a balance between memory, CPU, and storage requires looking deeper into the benefits of each of these components.
CPU
The CPU can have a huge impact on your database performance, especially if you use a lot of calculations in your queries.
The fastest way a database engine can get the data and instructions it needs is from the CPU’s own caches. L2 and L3 cache sizes are therefore a crucial factor, and the larger the capacity you opt for, the better.
Even if your CPU resources seem excessive, they can serve the following purposes:
- Reduce I/O requirements through data compression (see the sketch after this list)
- Compensate for storage and memory performance bottlenecks
- Speed up and shrink database backups through backup compression
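To put the first point into perspective, the sketch below uses Python’s zlib module to compress a synthetic, repetitive payload and reports how much I/O the spent CPU time removes. Real tables compress very differently, so treat the numbers purely as an illustration of the CPU-for-I/O trade-off.

```python
import time
import zlib

# Trading CPU cycles for reduced I/O: compress a synthetic payload and compare
# its size before and after. Real table data will compress very differently.
payload = b"order_id,customer_id,status,total\n" * 500_000

start = time.perf_counter()
compressed = zlib.compress(payload, level=6)
elapsed = time.perf_counter() - start

ratio = len(payload) / len(compressed)
print(f"Original:   {len(payload) / 1e6:.1f} MB")
print(f"Compressed: {len(compressed) / 1e6:.1f} MB (ratio {ratio:.1f}x)")
print(f"CPU time spent compressing: {elapsed:.3f} s")
# Every megabyte removed here is a megabyte storage never has to read or write.
```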
Take the latest Intel® 3rd Gen Xeon® Scalable processors for example. Built on 10 nm technology, these CPUs come with up to 40 cores and up to 60 MB of cache. Allowing for more VMs per unit, these CPUs enable greater virtualization performance on a smaller footprint, helping you cut costs per VM.
In addition, they enable processing technologies that OLTP workloads benefit from. Intel® Hyper-Threading delivers two processing threads per physical core, boosting the performance of short-duration queries. Intel® Turbo Boost dynamically raises clock frequencies, increasing single-threaded performance for more database transactions per minute.
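The sketch below gives a rough feel for why more hardware threads help with many short, independent units of work. It runs a synthetic CPU-bound function rather than real queries and sizes the worker pool to the logical core count reported by the operating system.

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def short_task(n: int) -> int:
    # Stand-in for a short, CPU-bound unit of work (not a real query).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [50_000] * 200
    logical_cores = os.cpu_count()  # hardware threads, not just physical cores

    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=logical_cores) as pool:
        results = list(pool.map(short_task, tasks))

    print(f"{len(results)} tasks on {logical_cores} workers: "
          f"{time.perf_counter() - start:.2f} s")
```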
From a security perspective, Intel® Software Guard Extensions (SGX) help you isolate sensitive database data in secure enclaves in memory, enabling hardware-protected confidential computing.
The CPU you select will likely stay a part of your DBMS configuration throughout the lifetime of the server. So, get the best one your budget allows.
Memory
The more memory you have available for caching, the less time the CPU spends running empty cycles while waiting for I/O. While 32 GB is a reasonable minimum for modern servers and workloads, you can leverage terabytes of memory if your server platform supports it and your workloads require it.
More memory will provide your server with the following:
- Reduced I/O requirements
- Larger buffer pools (see the sketch after this list)
- Less frequent checkpoints
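To see why a larger buffer pool matters, consider this short worked example. The latency figures are rough assumptions rather than measurements, but they show how quickly the average page read time degrades as the cache hit ratio drops.

```python
# Effective read latency as a function of buffer pool hit ratio.
# Both latency figures below are rough assumptions, not measurements.

ram_read_us = 0.1      # serving a page from the in-memory buffer pool
nvme_read_us = 100.0   # fetching a page from NVMe storage

for hit_ratio in (0.90, 0.99, 0.999):
    effective_us = hit_ratio * ram_read_us + (1 - hit_ratio) * nvme_read_us
    print(f"Hit ratio {hit_ratio:.1%}: ~{effective_us:.2f} µs average page read")
```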
With RAM density roughly doubling every four years, it pays to stay up to date with new memory technologies.
With persistent memory solutions such as Intel® Optane™ Persistent Memory (PMem), you can bring large datasets closer to the CPU by combining the performance of DRAM with the capacity and data persistence of storage.
Its features include:
- Up to 1,000 times lower latency than standard NAND flash storage
- A speed boost for transactional and real-time analytical workloads
- Data retention over longer periods, even without a constant power supply
With up to 512 GB capacity per DIMM, it lets you fit entire workloads on a single module, making it suitable for even the largest datasets.
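In App Direct mode, persistent memory is typically exposed through a DAX-enabled filesystem and accessed via memory mapping instead of read/write system calls. The sketch below shows that access pattern with Python’s mmap module; the /mnt/pmem0 mount point is an assumption, and the same code runs against any ordinary file.

```python
import mmap
import os

# Memory-map a file and modify it with byte-addressable loads and stores,
# the access pattern used for persistent memory on a DAX filesystem.
path = "/mnt/pmem0/example.dat"   # assumed PMem mount point; any file works
size = 64 * 1024 * 1024           # 64 MB region

fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, size)

with mmap.mmap(fd, size) as region:
    region[0:5] = b"hello"   # no read()/write() syscalls involved
    region.flush()           # ask the OS/hardware to persist the change
    print(region[0:5])

os.close(fd)
```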
Storage
Traditionally, database servers use Storage Area Network (SAN) or Direct Attached Storage (DAS) solutions. While DAS is less expensive, it offers fewer configuration options than SAN, which supports RAID and data recovery snapshots. If you are choosing between the two, SAN is the better choice for IOPS-intensive workloads and OLTP databases, while DAS is more suitable for handling high data throughput on a budget.
To reap the benefits of modern storage options, consider SATA or NVMe SSDs, which offer much higher random I/O and sequential read performance than spinning disks. Keep in mind that SSDs have finite write endurance, so look for enterprise solutions built on more durable single-level cell (SLC) technology instead of consumer-class multi-level cell (MLC) options, especially if your workloads are write-intensive. NVMe storage costs more but delivers even higher IOPS (I/O operations per second) and bandwidth with faster response times, directly impacting data processing speed.
Consider your redundancy, I/O, IOPS, and throughput requirements before choosing your storage. Select the option that best meets your needs, or combine different ones for data tiering.
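For a quick sanity check of a storage device, you can time random 4 KiB reads yourself. The rough Python sketch below does so against a temporary file; because it goes through the OS page cache, the results flatter the device, and a dedicated tool such as fio gives far more rigorous numbers.

```python
import os
import random
import tempfile
import time

BLOCK = 4096
BLOCKS = 25_000  # ~100 MB test file

# Create a throwaway file filled with random data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK) * BLOCKS)
    path = f.name

fd = os.open(path, os.O_RDONLY)
offsets = [random.randrange(BLOCKS) * BLOCK for _ in range(5_000)]

start = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)   # POSIX-only positional read
elapsed = time.perf_counter() - start

os.close(fd)
os.unlink(path)

print(f"{len(offsets)} random 4 KiB reads in {elapsed:.3f} s "
      f"(~{len(offsets) / elapsed:,.0f} reads/s)")
```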
Network
The network is a crucial part of your database infrastructure. Overloading it can lead to increased latency, performance drops, dropped packets, and even server outages.
During your network setup, think beyond the local host: run benchmarks and diagnostics from the perspective of clients that communicate with your database from other parts of the world. For latency-sensitive applications, keep your application server and your database as geographically close as possible.
When it comes to network speed, 10 Gbps should keep data transfers from becoming a bottleneck for most database traffic. Still, keep in mind that certain applications retrieve large result sets that can saturate even a fast link, and that network backups and other bulk transfers can cause bottlenecks that slow down user transactions.
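A quick back-of-the-envelope calculation puts link speed into perspective. All figures in the sketch below are assumptions for illustration; replace them with your own measurements.

```python
# Rough transfer time for a large result set over a fast link.
result_set_gb = 2.0      # result set returned to the client (assumption)
link_gbps = 10.0         # nominal link speed
efficiency = 0.7         # protocol overhead, competing traffic (assumption)
round_trip_ms = 40.0     # client in another region (assumption)

transfer_s = (result_set_gb * 8) / (link_gbps * efficiency)
total_s = transfer_s + round_trip_ms / 1000

print(f"~{total_s:.1f} s to deliver a {result_set_gb:.0f} GB result set "
      f"over a {link_gbps:.0f} Gbps link")
```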
Monitor your network constantly and react in case of potential issues.
The Final Decision: On-prem or Data Center
Once you outline the hardware components of your ideal DBMS, it all comes down to two choices: build and manage everything yourself or host your database system in an off-premises data center.
A traditional on-prem DBMS gives you complete control of your infrastructure. However, the cost of hardware, software licensing, power, bandwidth, and infrastructure management staffing falls under your organization’s responsibility.
On the other hand, you can lease bare metal or virtual machines in a provider’s data center on a pay-per-use, monthly, or extended reservation basis. This helps you avoid the upfront costs of database server ownership and maintenance while providing other benefits enterprise data centers usually offer, including:
- High security
- Power and cooling redundancy
- Software licensing
- High bandwidth and global connectivity
- SLA-guaranteed uptime
PhoenixNAP offers a broad portfolio of opex-modeled infrastructure solutions optimized for database workloads. In minutes, you can deploy pre-configured, API-driven Bare Metal Cloud instances and manage your infrastructure through popular IaC tools. The platform offers NVMe storage, network speeds of up to 50 Gbps, large amounts of RAM, and support for Intel Optane PMem.
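To give a feel for API-driven provisioning, here is a minimal Python sketch that posts a server request to a placeholder REST endpoint using the requests library. The URL, payload fields, and authentication scheme are illustrative assumptions, not the actual Bare Metal Cloud API; consult the provider’s documentation for the real contract.

```python
import requests

# Hypothetical provisioning call: the endpoint, payload fields, and auth header
# are illustrative placeholders, not a real provider API.
API_URL = "https://api.example-provider.com/v1/servers"
TOKEN = "..."  # obtained out of band

payload = {
    "hostname": "db-node-01",
    "type": "database-optimized",  # illustrative instance type
    "location": "PHX",
    "os": "ubuntu/jammy",
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Provisioning started:", response.json().get("id"))
```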
Conclusion
Whatever you choose, remember that hardware matters. No hardware can fully mitigate the flaws of inadequate app design and poorly written queries. However, an optimized system can improve your database performance significantly, helping you and your organization save time and money in the long run.