The open source PostgreSQL database is one of the most popular relational databases. More and more web developers and DBAs are choosing PostgreSQL for its stability and reliability. The out-of-the-box installation of PostgreSQL works well, but a few basic configuration changes can make it much faster while using fewer system resources. Tuning a database can be overwhelming for web developers, so we’ve kept the following PostgreSQL performance tuning recommendations simple and conservative. To squeeze the most performance out of PostgreSQL, we recommend hiring a company that specializes in it.
shared_buffers
The shared_buffers configuration parameter determines how much memory PostgreSQL dedicates to caching data. Tuning this parameter can have the greatest impact on query times, mostly because the default setting is far too low for the amount of memory available on modern servers. The general advice is to set it to approximately 25% of the server’s physical RAM, up to 8 GB.
For example, if your server has 12 GB of RAM, you would set this line in your postgresql.conf file:
shared_buffers = 3GB
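Note that shared_buffers is only read at server startup, so you’ll need to restart PostgreSQL for the change to take effect. If you want to confirm the running value afterward, a quick check from psql looks like this:

SHOW shared_buffers;

It should report 3GB once the server has restarted with the new setting.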
effective_cache_size
The effective_cache_size configuration parameter serves as a guide, telling the PostgreSQL query planner how much of the database it can expect to find cached in memory (PostgreSQL’s own buffers plus the operating system’s file cache). Setting this parameter to half of total memory is a widely accepted conservative value.
Following the example of a server with 12 GB of RAM, your postgresql.conf file would look like this:
effective_cache_size = 6GB
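Unlike shared_buffers, effective_cache_size does not allocate any memory; it only influences the planner’s cost estimates, so it can be changed with a configuration reload rather than a restart. If you’re curious about the current value, one simple way to check it against the built-in pg_settings view is:

SELECT name, setting, unit FROM pg_settings WHERE name = 'effective_cache_size';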
checkpoint_segments
The checkpoint_segments configuration parameter controls how much write-ahead log (WAL) PostgreSQL writes between automatic checkpoints. Each WAL segment is 16MB, and the default setting of 3 means PostgreSQL can checkpoint after as little as 48MB of activity, which quickly becomes resource intensive on a busy server. To keep checkpoints from becoming a bottleneck, set this value to at least 10. We recommend 16, which spaces checkpoints out to roughly 256MB of WAL.
checkpoint_segments = 16
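One rough way to sanity-check this setting on a running server is to watch how often checkpoints are forced by running out of segments rather than triggered by the checkpoint timeout. The pg_stat_bgwriter view tracks both counters:

SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

If checkpoints_req keeps climbing much faster than checkpoints_timed, your write load is still outrunning checkpoint_segments and you may want to raise it further.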
work_mem
The work_mem configuration parameter determines how much memory PostgreSQL can use for internal sort operations and hash tables before spilling to temporary disk files. It is preferable to do these operations in memory because it’s much faster than disk I/O. There is no exact formula for this one, since it depends on how many sorts your queries perform and the maximum number of concurrent connections to the database. However, doubling it from 1MB to 2MB is safe and will help queries that are currently doing smaller sorts on disk.
work_mem = 2MB
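If you want to know whether a particular query is actually sorting on disk, EXPLAIN ANALYZE reports the sort method used. For example, against a hypothetical orders table:

EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;

A line like "Sort Method: external merge Disk: ..." in the output means the sort spilled to temporary files, while "Sort Method: quicksort Memory: ..." means it fit within work_mem.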
maintenance_work_mem
The maintenance_work_mem configuration parameter specifies the maximum amount of memory used by maintenance operations such as VACUUM and CREATE INDEX. These operations don’t typically run often or concurrently, so it’s safe to increase this value. Raising it also speeds up autovacuum, the background process that cleans up database tables and collects statistics on new data. Doubling the default from 16MB to 32MB is a good start.
maintenance_work_mem = 32MB
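For one-off heavy maintenance work, you can also raise maintenance_work_mem for just your current session instead of changing postgresql.conf. A small sketch (the table and index names here are made up for illustration):

SET maintenance_work_mem = '512MB';
CREATE INDEX orders_created_at_idx ON orders (created_at);
RESET maintenance_work_mem;

The setting reverts when the session ends, so the server-wide default stays conservative.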
I’d set checkpoint_segments a lot higher than 16 on a busy server; I’ve been known to set it to 128, which is about 4GB of logs.
work_mem and maintenance_work_mem really need to be proportional to the amount of RAM you have. In the case of work_mem, also the number of concurrent users you expect to support. Again, here I’d recommend settings much higher than what you’re recommending for a lot of applications. Also, the default for work_mem is 4MB now.
Josh, thanks for posting! I’ve attended a few of your talks at OSCON (hope you make it to SCALE next month for PgDays). It’s a pleasure to have you weigh in on my post. I’m no expert, so I kept these recommendations conservative.
We’ve always kept work_mem on the low side because it’s difficult to predict whether an app will gobble up RAM with sorts. We may well increase it after monitoring RAM for a few months. I did not know that the default for work_mem is now 4MB; that is what we are currently using for our app. We may have to increase it :)
Thanks again for commenting!
If your application is unexpectedly gobbling up RAM with sorts, you have a problem that is not in the database. Any production application with a significant database behind it should (ideally) have a DBA who is involved with the development team to identify possible performance issues before they ever hit production.
Since we all know that performance issues sneak past even the best planning, your production monitoring should spot them before they become a trend. (You DO monitor your production database performance metrics, right??)