Regarding #5:
>That caught me by surprise. Both options “roughly the same” and “depends on the data” got about 25% — the guessing probably.
I don't think it was guessing so much as reasoning that fetching 100 rows (and filtering by value) instead of 10 rows doesn't have significant real-world impact unless the row data is particularly large. I'll admit I didn't think of the out-of-index lookup, but my main thought was 100/1000000 vs 10/1000000 isn't a big deal unless the DB is running on an abacus.
I also answered "about the same", and no, I didn't notice the bookmark lookup (as it's called on MSSQL). Even if I had, I'm not sure I would have changed my answer. Well, maybe I would have, because I would have noticed the "trick", but putting my test-taking adaptations aside...
It is already a highly selective query. Adding 100 bookmark lookups will not cause a material change in performance unless this query is being executed thousands of times per second, in which case maybe your real problem lies elsewhere.
At least on MSSQL, I would expect a query plan like so:
1. Index seek WHERE a = 123, yielding ~100 rows. [1]
2. Bookmark lookup with results from (1), yielding ~100 rows.
3. Filter (2) WHERE b = 42 and project date_column, yielding ~10 rows.
4. Aggregate (3) by date_column, yielding ~10 rows or less.
And the optimizer will choose this over a full table scan so long as the cost of 1+2+3 is less than the cost of a full table scan. I don't know the threshold for that, but it is certainly more than 100 rows out of 1M+, and the planner will have an estimate of selectivity that will inform plan selection. (A sketch of the assumed schema and index follows the footnote below.)
[1] Important caveat. I interpret this line,
> Current situation, selecting about hundred rows out of a million:
...to mean selection of 100 rows from the base table, rather than projection of 100 rows in the result, post-aggregation.
But if he really means a SELECT statement that returns 100 rows, then we have no idea how selective the WHERE clause is, and my answer changes to "The query will be much slower (impact >10%)".
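For concreteness, here is a minimal sketch of the kind of schema and index the plan above assumes (the index name and column types are made up for illustration; the quiz doesn't specify them):

CREATE TABLE tbl (
  a int NOT NULL,
  b int NOT NULL,
  date_column date NOT NULL
  -- ...plus whatever other columns the real table has
);
-- Assumed index: contains a and date_column but not b. The original query can be
-- answered from this index alone; adding "AND b = @b" is what forces step 2 above,
-- the bookmark lookup into the base table.
CREATE INDEX ix_tbl_a_date ON tbl (a, date_column);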
No query optimizer would look at this and say "1M rows? Let's group and aggregate before filtering." Not to mention, the question specifically states that a=? would return 100 rows and a=? and b=? would return 10.
Regarding O(1), the first query would be some form of O(n log n) or O(log n) depending on the table/index data structures.
> No query optimizer would look at this and say "1M rows? Let's group and aggregate before filtering."
I hope no optimizer would say that. It is well defined that filtering, as expressed in the WHERE clause (whether it uses indexes or not), happens before grouping and aggregation, and that grouping happens on the result of the filtering. If the optimizer could choose one way or the other, you'd get different results. If you want to group and aggregate first, you need to express that explicitly with a subquery.
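To illustrate that last sentence (just a sketch, reusing the thread's assumed table and @-style parameters; the threshold is made up): filtering on the aggregated result, rather than on the underlying rows, has to be spelled out with a subquery or HAVING.

SELECT date_column, cnt
FROM (
  SELECT date_column, count(*) AS cnt
  FROM tbl
  WHERE a = @a            -- applied before the GROUP BY
  GROUP BY date_column
) AS grouped
WHERE cnt > 5;            -- applied after aggregation (same effect as HAVING)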
> The worst case for the latter query is that it must aggregate over 999,910 rows.
No, it already said it only took 100 rows; it can't get worse than that.
Now if he actually meant the final result set was 100 rows (meaning after the group by) that's different. But that's not what he actually said, so the question is misleading.
My thought was akin to what he said: the system needs to grab all 100 rows to do the second filter.
I didn't think about the fact that the original one was an index-only scan, so I went with "Roughly the same", since outside of that property the performance is similar.
Too bad there is no way to get good data on why people thought that way without a much longer quiz.
It has a significant impact because the first case is "take 100 rows from the index" and the second is "take 100 rows from the index _and_, for each row, go to the row in the table (a random IO; with 1M rows it is probably 1 IO per row) and check the value of 'b'".
100 such random IOs will cost about 0.5 sec on a single iron platter HDD, for example (i.e. on the order of 5 ms per random IO). So query performance will degrade significantly unless either the table is already preloaded into memory or you use SSD drives.
> Such 100 random IOs will cost 0.5 sec on 1 iron platter HDD for example.
That's an incredibly, incredibly iffy and mostly wrong statement which depends on what is arguably a corner case that doesn't often reflect reality (factors include which DBMS, row ordering, table size, cache size, block size, page size, RAM size, hard disk seek time, and HDD throughput).
The only case where that's likely is performing a very cold query on a very large randomly distributed table once (and probably only once).
Even a table of 1 million rows with ~30 B per row (~30 MB) could easily be read into memory in about 300 ms (~30 MB read at ~100 MB/s plus ~5 ms of seek time, or roughly 5 + (1e6 * rowsize / 100e3) ms).
>> Such 100 random IOs will cost 0.5 sec on 1 iron platter HDD for example.
> That's an incredibly, incredibly iffy and mostly wrong statement which depends on what is arguably a corner case that doesn't often reflect reality (factors include which DBMS, row ordering, table size, cache size, block size, page size, RAM size, hard disk seek time, and HDD throughput).
You're welcome to specify iron platter HDDs which would do 100 random IOs in a significantly different time than 0.5 sec.
>Even a table of 1 million rows with ~30B per row could easily be read into memory in about 300ms (100MB read time + ~5ms seek time, or ~= 5+(1e6*rowsize/ (100e3)) )
>Query Optimizers do exactly this.
Not always. If it has enough info, it may take a full-table-scan path in such a case. Still, it will be about the same time: 300 ms vs. the 0.5 sec I mentioned.
Of course it is for cold queries. You forgot to mention, I guess, that the full table scan you propose may bypass the DB cache (the default depends on the DB; the modern tendency is to bypass by default), and thus a second run of the query would take the same time, while the table blocks brought in by random IO would frequently be stored in the DB cache, making a second run somewhat faster, depending on how much data was brought in.
I had the same thought. Seems like the optimizer could still perform an index-only scan to get to 100 rows, then go to the table to filter them down to 10 rows. Yes, the second step is extra, but should still be fast. What am I missing?
That was my reasoning when I answered, but I had missed the fact it was a GROUP BY, which means you can't just filter after the fact.
Edit: In other words, it was 100 or 10 aggregated rows. An extra WHERE clause will change the values of each of those rows rather than just filter the rows from 100 down to 10 (which is what a HAVING clause would do).
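A sketch of that distinction, using the thread's query shape (the HAVING threshold is just a made-up example):

-- The quiz's extra "AND b = @b" is a WHERE-style filter: it changes which rows get
-- aggregated, so the count in every remaining group changes. A HAVING clause, by
-- contrast, aggregates over all rows with a = @a first and then merely discards
-- whole groups; the counts of the surviving groups are unchanged.
SELECT date_column, count(*)
FROM tbl
WHERE a = @a
GROUP BY date_column
HAVING count(*) > 5;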
It's much simpler than that. The first query only has to reference the index because the data is IN the index. The second query has to access the table. That's it.
But if it wasn't for the GROUP BY, filtering 100 results of a million down to 10 results wouldn't change performance much even if you read every column of every row of those 100.
The trick is that the GROUP BY means "it used to return 100, it now returns 10" is a red herring: it still has to read every row to make up those 10.
SELECT date_column, count(*)
FROM tbl
WHERE a = @a
AND b = @b
GROUP BY date_column;
The "AND b = @b" causes the sql engine to access data in the table instead of solely relying on the index. GROUP BY has 0 to do with it. If you changed the query to
SELECT a, date_column
FROM tbl
WHERE a = @a
and
SELECT a, date_column
FROM tbl
WHERE a = @a
AND b = @b
you would see the same difference (sketched below): only the second one has to touch the table.
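A hedged sketch of why (the index shape is assumed and ix_tbl_a_b_date is a made-up name; INCLUDE is MSSQL syntax): the existing index is presumed to cover (a, date_column) but not b, and if b were in the index the b = @b check could be done in the index itself, making the second query index-only again.

-- Covering alternative to the assumed (a, date_column) index, with b added as a key column:
CREATE INDEX ix_tbl_a_b_date ON tbl (a, b, date_column);
-- MSSQL also allows carrying b as a non-key column:
-- CREATE INDEX ix_tbl_a_date_incl_b ON tbl (a, date_column) INCLUDE (b);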
SELECT a, date_column
FROM tbl
WHERE a = @a
AND b = @b
Will only have to scan column b over 100 rows.
Even without an index, that will always be negligible: not compared to using the index to grab 100 rows from 10 million, but just compared to running a query and returning results at all.
The reason the original can be a lot slower is that the 100 and 10 rows of results are composed of a lot more rows of actual information, because of the grouping.
You're right that:
SELECT a, date_column
FROM tbl
WHERE a = @a
AND b = @b
would be a lot slower, given the same data, but that isn't the scenario; the group by has implications about what "returns 100 rows, returns 10 rows" actually means in terms of data read.
Query 1 is an index seek only. It does not access the table data.
Query 2 will perform the same index seek but will need to do a key lookup on each row and filter.
It's not negligible. The 100 results are not composed of a lot more information in this case, regardless of the grouping, because the 1st query does not access the table.
Edit:
I happen to have a table lying around with a little over a million rows and set up a similar set of queries.
The query optimizer suggested the index seek took 6% of the total operation time while the key lookup took up the other 94%. The rest was negligible.
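For anyone wanting to reproduce that kind of split, a rough MSSQL sketch (the table, columns, and parameter values are the assumed ones from earlier in the thread, not the poster's actual schema):

DECLARE @a int = 123, @b int = 42;  -- placeholder values
SET STATISTICS IO ON;               -- report logical reads per table
SET STATISTICS TIME ON;             -- report CPU and elapsed time

-- Query 1: index seek only; no lookup operator in the plan.
SELECT date_column, count(*)
FROM tbl
WHERE a = @a
GROUP BY date_column;

-- Query 2: same seek plus a key lookup per matching row; in the graphical
-- execution plan the lookup carries most of the estimated cost.
SELECT date_column, count(*)
FROM tbl
WHERE a = @a
AND b = @b
GROUP BY date_column;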