PostgreSQL for Newbs

Feb 13, 2011 21:57


So you’ve decided to use PostgreSQL as the database for your sparkly new website running some variant of a LAMP stack. Or maybe you just got a new job and must now administer a PostgreSQL install so you excitedly did your research. You’ve read the install docs, tinkered on a VM, and you think you’ve got everything ready.

You’re wrong. You’re going to run out of disk space, your website will be slow, and you’ll go running to the PostgreSQL mailing lists in abject terror because you have no clue what is wrong.

But why?

PostgreSQL is a powerful piece of software, and the defaults are somewhat sane for a very minimal install. Your pain will be acute indeed, however, if your install doesn't fit the vanishingly small set of tiny servers those settings target. First of all, you need to fix your damn server.
Fixing your Damn Server

Linux has constraints on the amount of shared memory a process can request and use, and databases use a lot of shared memory. Most Linux systems, however, set the shared memory limit to a paltry 32MB. What is that? Is this 1996? Your server has at least 4GB, for God's sake. In fact, I'll just say it has 8GB for illustrative purposes.
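Want to see what your box currently allows? Either of these will show you the limits in force:

ipcs -l
cat /proc/sys/kernel/shmmax /proc/sys/kernel/shmall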

At the end of your /etc/sysctl.conf file, place the following lines (if you have root access):

kernel.shmmax = 2147483648
kernel.shmall = 1048576

Then run the following command:

sudo sysctl -p
That tells your system you can allocate up to 2GB of memory in one huuuuge contiguous segment. Databases like that kind of stuff. A real setting for shmmax is some number of gigabytes expressed in bytes (multiply by 1024*1024*1024), and it should be roughly a quarter of your total RAM. For shmall, use shmmax/2048; it's counted in pages rather than bytes, so that leaves comfortable headroom (a quick sketch for deriving both values on your own hardware follows just below). We don't want your OS spoiling all the fun. But now you have to fix the PostgreSQL settings, because they're all wrong. Catastrophically wrong.
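Here's that sketch. It just encodes the quarter-of-RAM and divide-by-2048 rules above, so treat it as illustrative rather than gospel:

# Derive shmmax/shmall from total RAM.
TOTAL_BYTES=$(awk '/^MemTotal/ {print $2 * 1024}' /proc/meminfo)
SHMMAX=$((TOTAL_BYTES / 4))    # roughly a quarter of RAM, in bytes
SHMALL=$((SHMMAX / 2048))      # shmall is counted in pages, so this leaves plenty of headroom
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"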
Hit PostgreSQL with a Clue Bat

You know where your config file is, right? No? It’s probably somewhere in /etc/postgresql if you are using the packaged install for your distribution, or in the actual PostgreSQL data directory if you used some other method. When in doubt:

locate postgresql.conf
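If locate draws a blank (its database can be stale), you can also ask a running server directly where its config lives:

sudo -u postgres psql -c 'SHOW config_file;'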
Then edit the hell out of that file. It's wrong.

Look for and change these settings:

shared_buffers = 2GB
effective_cache_size = 6GB
They go together like cold in space. The first setting tells the server to allocate a humongous 2GB segment of RAM, and the other tells the query planner the OS will probably use the rest to cache IO in some manner. Without these, PostgreSQL will be getting every damn request from disk, and disk IO is orders of magnitude slower than memory. You don’t want that, do you?
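On our 8GB example box, 2GB is the quarter-of-RAM sweet spot for shared_buffers, and 6GB is roughly what's left over for the OS to cache with. Keep in mind that shared_buffers only takes effect after a full restart, not a reload, so bounce the server and double-check that it stuck (the init script name varies by distribution; this is just one common layout):

sudo /etc/init.d/postgresql restart
sudo -u postgres psql -c 'SHOW shared_buffers;'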

Next, look for these:

checkpoint_segments = 20
checkpoint_timeout = 10min
checkpoint_completion_target = 0.8
Now, I won't get into too many details, but the fact of the matter is, your inserts, updates, and deletes go to the write-ahead log first; they're not in the actual data files yet. Every ten minutes with these settings, PostgreSQL will start writing those changes to their proper locations, and it will spread that work over the next eight minutes. The assumption here is that your disk controller is trash, because it is. Spreading the writes also frees up more IO for reading, and keeps your system from stalling out because your disks choked on way more writes than they could handle.

These settings may not be enough. If you get warnings in the PostgreSQL logs telling you to increase the checkpoint segments, add another 10. Keep doing that until it stops warning you. The only price you pay is 16MB of disk per checkpoint segment, and those segments are exactly what gets replayed to recover after a system crash, so don't be stingy. The default is hilariously low for any system that's actually used by more than one person every other week.
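For the record, the nag you're watching for looks more or less like this (the exact wording shifts a bit between versions):

LOG:  checkpoints are occurring too frequently (26 seconds apart)
HINT:  Consider increasing the configuration parameter "checkpoint_segments".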
Geriatric (Pre 8.4) Installations

There's a special kind of hell for you. Upgrade. Upgrade now. If you can't upgrade, I'm so very sorry. 8.2 and 8.3 aren't eye-meltingly bad, but 8.4 and 9.0 are simply so much better that older versions pale in comparison, especially when it comes to maintenance. In fact, there are two settings that should make you lose sleep at night: max_fsm_pages and max_fsm_relations. Seriously, these should strike actual fear into your heart if you like your data.

It's the defaults. They're terrifyingly low. On an 8.2 or 8.3 server, max_fsm_pages will be set to 200,000 at most when the cluster is created. So if your updates and deletes churn through more than 200,000 pages between vacuums (and that is not a lot), the free space map loses track of the reusable space and your database will bloat. Day after day, it'll grow just a little (or a lot) bigger, until it exhausts your disk space. And while it does this, your queries will get slower and slower, and fixing it will get more and more difficult as time goes on.

Set them to this:

max_fsm_pages = 4000000
max_fsm_relations = 10000
You lose nothing but a little memory by making these settings ridiculously high, and not that much memory: about six bytes per page slot. Even at 4 million, that's a mere 24MB. For servers with gigs and gigs of memory, nobody is losing sleep over 24MB. It's such a small price to pay to avoid performance and maintenance nightmares! There's a reason it's all automatic in 8.4 and above!
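Not sure whether you're already past the limit? On 8.2 and 8.3, a database-wide VACUUM VERBOSE prints a free space map summary at the end, and it will complain outright if the current settings can't keep up. Something like this, where your_database is just a stand-in for one of yours:

sudo -u postgres psql -d your_database -c 'VACUUM VERBOSE;'

Also note that both settings allocate shared memory at startup, so changing them requires a full restart, not just a reload.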

And since the autovacuum daemon in 8.3 and below is somewhat sketchy, you'll want to schedule a nightly VACUUM so the free space map actually gets refreshed with all those dead rows. You can do this with the vacuumdb utility. But if you have a default install, it'll ask for a password… and that's no way to ensure automation.

Put this line at the top of your pg_hba.conf file if it's not there already (entries are matched top-down, so it needs to appear before any broader local rules):

local all postgres ident sameuser
This basically sets up the server to let the postgres user, which starts the server and owns all the child processes, connect without a password to do maintenance tasks. Your server is set up so the postgres user can’t log in, right? And only the DBA has rights to sudo or su as that user, right? Then you have little to worry about.
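pg_hba.conf is only read at server start and on a reload, so give the server a nudge after saving it (again, the init script name depends on your distribution):

sudo /etc/init.d/postgresql reload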

Then put this in a file named /etc/cron.daily/vacuum_all:

#!/bin/sh
sudo -u postgres vacuumdb -avz
That will vacuum (and analyze, thanks to the -z) every table in every database in your PostgreSQL install. Figure on roughly half an hour per 100GB, depending on your hardware. Congratulations, your installation should now limp along acceptably until you can upgrade.
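One last detail: the script has to be executable, or cron's run-parts will silently skip it (and on Debian-flavored systems, keep the filename free of dots for the same reason):

sudo chmod 755 /etc/cron.daily/vacuum_all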
Log, Damn You!

What, you thought I was done? Set these in your postgresql.conf first:

log_min_duration_statement = 1000
log_checkpoints = on
There. Reload the config, then go look at your logs: if the logging collector is on they'll be in the pg_log directory under your data directory, otherwise wherever your distribution sends them (Debian-style packages typically use /var/log/postgresql). Any query over one second will be there, and all of your checkpoints too, so you know how often your data is being written to disk, how long it takes, and whether it had trouble syncing the data. That last part is a warning sign that your controller is insufficient for the level of write IO you're pushing. Time to upgrade!
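To give you an idea of what you're scanning for, a slow statement and a checkpoint land in the log roughly like this (numbers invented for illustration; your log_line_prefix will add timestamps and friends):

LOG:  duration: 2641.012 ms  statement: SELECT * FROM orders WHERE customer_id = 42;
LOG:  checkpoint complete: wrote 3412 buffers (1.3%); 0 transaction log file(s) added, 0 removed, 11 recycled; write=239.801 s, sync=0.930 s, total=240.953 s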

Gather up all those slow queries, be it with tools like pgFouine or your own skeptical eyes, and check them for index usage or improvements you can suggest to developers at your company. Be a DBA, damn it!
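A minimal way to put one of those captured queries under the microscope (the table and database names here are stand-ins for your own):

sudo -u postgres psql -d your_database -c 'EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;'

If the plan shows a sequential scan over a big table for a query that only wants a handful of rows, you've found your first index candidate.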
It’s over!

So why are the defaults all so abysmal? I can only assume the developers are going for maximum system compatibility. And indeed, the default settings are minimal enough to probably run on a Furby. But your server isn't a Furby, or even a venerable Commodore 128. It's a looming beast with 96GB of RAM and 32 cores of hyperthreaded destruction. Don't make PostgreSQL breathe through a straw when you have a warehouse full of sewer pipes.

PostgreSQL is a great database, but it’s not a toy. Drive responsibly.

Originally published at BonesMoses.org. You can comment here or there.

