LIMITed Performance

Apr 16, 2010 15:50


PostgreSQL can be both a beautiful thing, and an infuriating mess.

Occasionally I look through the logs on one of our database servers to see if I can’t optimize some queries. It’s good practice, and an excellent way to monitor basic system performance by watching for queries that take longer than they should. Sometimes performance can be fixed by tweaking an index, or by manually rewriting the query and convincing a developer to integrate the changes.
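
For anyone wanting to borrow the habit, here’s a minimal sketch of how slow-query logging can be switched on; the 250ms threshold is purely illustrative, not what we actually run with:

-- Log any statement slower than 250ms (value is illustrative; superuser only
-- at the session level). To make it permanent, set the same parameter in
-- postgresql.conf instead.
SET log_min_duration_statement = 250;
SHOW log_min_duration_statement;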

But there are other times: occasions when neither of these approaches is sufficient, and the fallout can be far worse. PostgreSQL’s query planner, for all intents and purposes, thinks you’re an idiot. This is a good assumption in most cases, because who really understands matrix manipulation and data statistics well enough to even approach the battery of techniques necessary when joining four, five, or more tables? Almost nobody.

Unfortunately the planner sometimes gets it wrong, and unlike many other database backends, PostgreSQL has no mechanism for query hints to coax it in a particular direction when one of those rare edge cases pops up. Edge cases such as the one I found today, where using an index I created would have reduced run time to a fraction of a millisecond, instead of the real run time, which was more than five orders of magnitude slower. Which would you rather have, a query that finished in 0.350ms, or one that required 55,000ms?

After a lot of testing, it appeared that the LIMIT clause was the culprit. Without it, the results came back at breakneck speed.
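
The real query isn’t something I can paste here, but the test boiled down to roughly this shape, with foo, bar, and created_at standing in as made-up names for the table, the filter column, and the sort column:

-- Hypothetical stand-in for the problem query, with the LIMIT in place.
EXPLAIN ANALYZE
SELECT *
  FROM foo
 WHERE bar = 12345
 ORDER BY created_at DESC
 LIMIT 50;

-- Dropping the LIMIT produced an entirely different plan and came back immediately.
EXPLAIN ANALYZE
SELECT *
  FROM foo
 WHERE bar = 12345
 ORDER BY created_at DESC;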

“What?” you say. “How on Earth is that possible!?”

Well, the query planner is sometimes too smart. What happened in this case was that the user requested an ORDER BY clause and only wanted 50 rows. The database noticed this was a very small number of results, so it naturally scanned the ordered column’s index backwards to grab a quick fifty rows that matched the other filter criteria. Unfortunately, in this case it had to seek through over 350,000 records (the entire table) to find them. Since it used the index, that meant an equal number of random disk seeks.
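
In terms of the made-up names above, and assuming an index exists on each column, the planner is choosing between two strategies; a rough sketch of the trade-off:

-- Index on the sort column: walk it backwards and stop after 50 matching
-- rows. Cheap only if rows matching the filter are common.
CREATE INDEX foo_created_at_idx ON foo (created_at);

-- Index on the filter column: fetch the matching rows first, then sort the
-- small result set. This is the sub-millisecond plan.
CREATE INDEX foo_bar_idx ON foo (bar);

-- With LIMIT 50, the planner walked foo_created_at_idx backwards and tested
-- the filter on every row it visited: over 350,000 random reads.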

Even large SAN devices would struggle to perform over 350,000 seeks and fetches in anything short of a glacial span of time, and SSDs aren’t quite cheap enough to replace them. Setting the limit to something higher than 50 instantly fixed this. With 100, the planner decided it risked too many index seeks, so it abandoned the backward index scan in favor of a bitmap index scan. I didn’t spend any time looking for the exact threshold, because that would be irrelevant; the user’s application wasn’t at fault in this case, and asking for a different paging count is a hack, at best.
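
If you’re curious where the flip happens on your own data, a plain EXPLAIN at a couple of limits is enough to watch the plan change; the query shape and names are still the hypothetical ones from above:

-- Somewhere between these two limits, the backward index scan gives way to a
-- bitmap scan. EXPLAIN alone shows the chosen plan without running the query.
EXPLAIN SELECT * FROM foo WHERE bar = 12345 ORDER BY created_at DESC LIMIT 50;
EXPLAIN SELECT * FROM foo WHERE bar = 12345 ORDER BY created_at DESC LIMIT 100;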

Fortunately PostgreSQL just needed an encounter with my clue-bat. You can set the statistical analysis target for a specific column to a value between 1 and 1000, which controls how thoroughly ANALYZE samples the table to build its value histograms and frequency estimates. Our default for the entire server is 250 for every table, which is quite a bit higher than the PostgreSQL default of 100. But I went further anyway and increased the value to 400 for the badly behaving column, ran an ANALYZE, and voilà! Even a limit of a single row wasn’t enough to trick the planner into choosing the bad query plan. For reference, here’s what I did:

ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 400;
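
Followed by the ANALYZE that makes the new target take effect, plus an optional peek at pg_stats to confirm the column statistics were rebuilt; the verification query is just one way to check:

-- Rebuild the statistics so the new per-column target is actually used.
ANALYZE foo;

-- Optional sanity check on the freshly gathered statistics for the column.
SELECT attname, n_distinct, null_frac
  FROM pg_stats
 WHERE tablename = 'foo'
   AND attname = 'bar';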

Still not even half the full range, so there’s plenty of room left to tweak. This entire journey is proof that planners aren’t perfect, but sometimes query hints aren’t the right answer either. What if the developers changed the planner significantly and my painstakingly crafted query hints made performance worse because the hint was merely suppressing a bug they had since fixed? It’s a rat’s nest of future compatibility woes, and I’m torn, knowing that I’d love to have hints, yet understanding the inherent danger they pose.

And like it or not, all databases require intermittent sanity-checks, so as a DBA, I’ll never escape that chore. But it’s good knowing such low-level and safe methods are provided when things go wrong.

Originally published at BonesMoses.org. You can comment here or there.

