
Commit 0fbceae

Show index search count in EXPLAIN ANALYZE, take 2.
Expose the count of index searches/index descents in EXPLAIN ANALYZE's output for index scan/index-only scan/bitmap index scan nodes. This information is particularly useful with scans that use ScalarArrayOp quals, where the number of index searches can be unpredictable due to implementation details that interact with physical index characteristics (at least with nbtree SAOP scans, since Postgres 17 commit 5bf748b). The information shown also provides useful context when EXPLAIN ANALYZE runs a plan with an index scan node that successfully applied the skip scan optimization (set to be added to nbtree by an upcoming patch).

The instrumentation works by teaching all index AMs to increment a new nsearches counter whenever a new index search begins. The counter is incremented at exactly the same point that index AMs already increment the pg_stat_*_indexes.idx_scan counter (we're counting the same event, but at the scan level rather than the relation level). Parallel queries have workers copy their local counter struct into shared memory when an index scan node ends -- even when it isn't a parallel aware scan node. An earlier version of this patch that only worked with parallel aware scans became commit 5ead85f (though that was quickly reverted by commit d00107c following "debug_parallel_query=regress" buildfarm failures).

Our approach doesn't match the approach used when tracking other index scan related costs (e.g., "Rows Removed by Filter:"). It is comparable to the approach used in similar cases involving costs that are only readily accessible inside an access method, not from the executor proper (e.g., "Heap Blocks:" output for a Bitmap Heap Scan, which was recently enhanced to show per-worker costs by commit 5a1e6df, using essentially the same scheme as the one used here).
It is necessary for index AMs to have direct responsibility for maintaining the new counter, since the counter might need to be incremented multiple times per amgettuple call (or per amgetbitmap call). But it is also necessary for the executor proper to manage the shared memory now used to transfer each worker's counter struct to the leader.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Reviewed-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-Wz=PKR6rB7qbx+Vnd7eqeB5VTcrW=iJvAsTsKbdG+kW_UA@mail.gmail.com
1 parent 12c5f79 commit 0fbceae

37 files changed: +797 additions, -98 deletions

contrib/bloom/blscan.c

Lines changed: 2 additions & 0 deletions

@@ -116,6 +116,8 @@ blgetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	bas = GetAccessStrategy(BAS_BULKREAD);
 	npages = RelationGetNumberOfBlocks(scan->indexRelation);
 	pgstat_count_index_scan(scan->indexRelation);
+	if (scan->instrument)
+		scan->instrument->nsearches++;
 
 	for (blkno = BLOOM_HEAD_BLKNO; blkno < npages; blkno++)
 	{

doc/src/sgml/bloom.sgml

Lines changed: 5 additions & 2 deletions

@@ -173,10 +173,11 @@ CREATE INDEX
          Buffers: shared hit=21864
    -&gt;  Bitmap Index Scan on bloomidx  (cost=0.00..178436.00 rows=1 width=0) (actual time=20.005..20.005 rows=2300.00 loops=1)
          Index Cond: ((i2 = 898732) AND (i5 = 123451))
+         Index Searches: 1
          Buffers: shared hit=19608
  Planning Time: 0.099 ms
  Execution Time: 22.632 ms
-(10 rows)
+(11 rows)
 </programlisting>
   </para>

@@ -208,13 +209,15 @@ CREATE INDEX
                Buffers: shared hit=6
          -&gt;  Bitmap Index Scan on btreeidx5  (cost=0.00..4.52 rows=11 width=0) (actual time=0.026..0.026 rows=7.00 loops=1)
                Index Cond: (i5 = 123451)
+               Index Searches: 1
                Buffers: shared hit=3
          -&gt;  Bitmap Index Scan on btreeidx2  (cost=0.00..4.52 rows=11 width=0) (actual time=0.007..0.007 rows=8.00 loops=1)
                Index Cond: (i2 = 898732)
+               Index Searches: 1
                Buffers: shared hit=3
  Planning Time: 0.264 ms
  Execution Time: 0.047 ms
-(13 rows)
+(15 rows)
 </programlisting>
   Although this query runs much faster than with either of the single
   indexes, we pay a penalty in index size.  Each of the single-column

doc/src/sgml/monitoring.sgml

Lines changed: 22 additions & 6 deletions

@@ -4234,16 +4234,32 @@ description | Waiting for a newly initialized WAL file to reach durable storage
 
   <note>
    <para>
-    Queries that use certain <acronym>SQL</acronym> constructs to search for
-    rows matching any value out of a list or array of multiple scalar values
-    (see <xref linkend="functions-comparisons"/>) perform multiple
-    <quote>primitive</quote> index scans (up to one primitive scan per scalar
-    value) during query execution.  Each internal primitive index scan
-    increments <structname>pg_stat_all_indexes</structname>.<structfield>idx_scan</structfield>,
+    Index scans may sometimes perform multiple index searches per execution.
+    Each index search increments <structname>pg_stat_all_indexes</structname>.<structfield>idx_scan</structfield>,
     so it's possible for the count of index scans to significantly exceed the
     total number of index scan executor node executions.
    </para>
+   <para>
+    This can happen with queries that use certain <acronym>SQL</acronym>
+    constructs to search for rows matching any value out of a list or array of
+    multiple scalar values (see <xref linkend="functions-comparisons"/>).  It
+    can also happen to queries with a
+    <literal><replaceable>column_name</replaceable> =
+    <replaceable>value1</replaceable> OR
+    <replaceable>column_name</replaceable> =
+    <replaceable>value2</replaceable> ...</literal> construct, though only
+    when the optimizer transforms the construct into an equivalent
+    multi-valued array representation.
+   </para>
  </note>
+  <tip>
+   <para>
+    <command>EXPLAIN ANALYZE</command> outputs the total number of index
+    searches performed by each index scan node.  See
+    <xref linkend="using-explain-analyze"/> for an example demonstrating how
+    this works.
+   </para>
+  </tip>
 
 </sect2>

doc/src/sgml/perform.sgml

Lines changed: 60 additions & 0 deletions

@@ -729,9 +729,11 @@ WHERE t1.unique1 &lt; 10 AND t1.unique2 = t2.unique2;
                Buffers: shared hit=3 read=5 written=4
          -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..4.36 rows=10 width=0) (actual time=0.004..0.004 rows=10.00 loops=1)
                Index Cond: (unique1 &lt; 10)
+               Index Searches: 1
                Buffers: shared hit=2
    -&gt;  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.29..7.90 rows=1 width=244) (actual time=0.003..0.003 rows=1.00 loops=10)
          Index Cond: (unique2 = t1.unique2)
+         Index Searches: 10
          Buffers: shared hit=24 read=6
  Planning:
    Buffers: shared hit=15 dirtied=9

@@ -790,6 +792,7 @@ WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2 ORDER BY t1.fivethous;
                Buffers: shared hit=92
          -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=100 width=0) (actual time=0.013..0.013 rows=100.00 loops=1)
                Index Cond: (unique1 &lt; 100)
+               Index Searches: 1
                Buffers: shared hit=2
  Planning:
    Buffers: shared hit=12

@@ -805,6 +808,58 @@ WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2 ORDER BY t1.fivethous;
    shown.)
   </para>
 
+  <para>
+   Index Scan nodes (as well as Bitmap Index Scan and Index-Only Scan nodes)
+   show an <quote>Index Searches</quote> line that reports the total number
+   of searches across <emphasis>all</emphasis> node
+   executions/<literal>loops</literal>:
+
+<screen>
+EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE thousand IN (1, 500, 700, 999);
+                                                            QUERY PLAN
+-------------------------------------------------------------------&zwsp;---------------------------------------------------------------
+ Bitmap Heap Scan on tenk1  (cost=9.45..73.44 rows=40 width=244) (actual time=0.012..0.028 rows=40.00 loops=1)
+   Recheck Cond: (thousand = ANY ('{1,500,700,999}'::integer[]))
+   Heap Blocks: exact=39
+   Buffers: shared hit=47
+   -> Bitmap Index Scan on tenk1_thous_tenthous  (cost=0.00..9.44 rows=40 width=0) (actual time=0.009..0.009 rows=40.00 loops=1)
+         Index Cond: (thousand = ANY ('{1,500,700,999}'::integer[]))
+         Index Searches: 4
+         Buffers: shared hit=8
+ Planning Time: 0.029 ms
+ Execution Time: 0.034 ms
+</screen>
+
+   Here we see a Bitmap Index Scan node that needed 4 separate index
+   searches.  The scan had to search the index from the
+   <structname>tenk1_thous_tenthous</structname> index root page once per
+   <type>integer</type> value from the predicate's <literal>IN</literal>
+   construct.  However, the number of index searches often won't have such a
+   simple correspondence to the query predicate:
+
+<screen>
+EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE thousand IN (1, 2, 3, 4);
+                                                            QUERY PLAN
+--------------------------------------------------------------------&zwsp;--------------------------------------------------------------
+ Bitmap Heap Scan on tenk1  (cost=9.45..73.44 rows=40 width=244) (actual time=0.009..0.019 rows=40.00 loops=1)
+   Recheck Cond: (thousand = ANY ('{1,2,3,4}'::integer[]))
+   Heap Blocks: exact=38
+   Buffers: shared hit=40
+   -> Bitmap Index Scan on tenk1_thous_tenthous  (cost=0.00..9.44 rows=40 width=0) (actual time=0.005..0.005 rows=40.00 loops=1)
+         Index Cond: (thousand = ANY ('{1,2,3,4}'::integer[]))
+         Index Searches: 1
+         Buffers: shared hit=2
+ Planning Time: 0.029 ms
+ Execution Time: 0.026 ms
+</screen>
+
+   This variant of our <literal>IN</literal> query performed only 1 index
+   search.  It spent less time traversing the index (compared to the original
+   query) because its <literal>IN</literal> construct uses values matching
+   index tuples stored next to each other, on the same
+   <structname>tenk1_thous_tenthous</structname> index leaf page.
+  </para>
+
   <para>
    Another type of extra information is the number of rows removed by a
    filter condition:

@@ -861,6 +916,7 @@ EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @&gt; polygon '(0.5,2.0)';
  Index Scan using gpolygonind on polygon_tbl  (cost=0.13..8.15 rows=1 width=85) (actual time=0.074..0.074 rows=0.00 loops=1)
    Index Cond: (f1 @&gt; '((0.5,2))'::polygon)
    Rows Removed by Index Recheck: 1
+   Index Searches: 1
    Buffers: shared hit=1
  Planning Time: 0.039 ms
  Execution Time: 0.098 ms

@@ -894,8 +950,10 @@ EXPLAIN (ANALYZE, BUFFERS OFF) SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND un
    -&gt;  BitmapAnd  (cost=25.07..25.07 rows=10 width=0) (actual time=0.100..0.101 rows=0.00 loops=1)
          -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=100 width=0) (actual time=0.027..0.027 rows=100.00 loops=1)
                Index Cond: (unique1 &lt; 100)
+               Index Searches: 1
          -&gt;  Bitmap Index Scan on tenk1_unique2  (cost=0.00..19.78 rows=999 width=0) (actual time=0.070..0.070 rows=999.00 loops=1)
                Index Cond: (unique2 &gt; 9000)
+               Index Searches: 1
  Planning Time: 0.162 ms
  Execution Time: 0.143 ms
 </screen>

@@ -923,6 +981,7 @@ EXPLAIN ANALYZE UPDATE tenk1 SET hundred = hundred + 1 WHERE unique1 &lt; 100;
          Buffers: shared hit=4 read=2
    -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=100 width=0) (actual time=0.031..0.031 rows=100.00 loops=1)
          Index Cond: (unique1 &lt; 100)
+         Index Searches: 1
          Buffers: shared read=2
  Planning Time: 0.151 ms
  Execution Time: 1.856 ms

@@ -1061,6 +1120,7 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND unique2 &gt; 9000
    Index Cond: (unique2 &gt; 9000)
    Filter: (unique1 &lt; 100)
    Rows Removed by Filter: 287
+   Index Searches: 1
    Buffers: shared hit=16
  Planning Time: 0.077 ms
  Execution Time: 0.086 ms

doc/src/sgml/ref/explain.sgml

Lines changed: 2 additions & 1 deletion

@@ -506,10 +506,11 @@ EXPLAIN ANALYZE EXECUTE query(100, 200);
          Buffers: shared hit=4
    -&gt;  Index Scan using test_pkey on test  (cost=0.29..10.27 rows=99 width=8) (actual time=0.009..0.025 rows=99.00 loops=1)
          Index Cond: ((id &gt; 100) AND (id &lt; 200))
+         Index Searches: 1
          Buffers: shared hit=4
  Planning Time: 0.244 ms
  Execution Time: 0.073 ms
-(9 rows)
+(10 rows)
 </programlisting>
   </para>

doc/src/sgml/rules.sgml

Lines changed: 2 additions & 0 deletions

@@ -1046,6 +1046,7 @@ SELECT count(*) FROM words WHERE word = 'caterpiler';
    -&gt;  Index Only Scan using wrd_word on wrd  (cost=0.42..4.44 rows=1 width=0) (actual time=0.039..0.039 rows=0.00 loops=1)
          Index Cond: (word = 'caterpiler'::text)
          Heap Fetches: 0
+         Index Searches: 1
  Planning time: 0.164 ms
  Execution time: 0.117 ms
 </programlisting>

@@ -1090,6 +1091,7 @@ SELECT word FROM words ORDER BY word &lt;-&gt; 'caterpiler' LIMIT 10;
  Limit  (cost=0.29..1.06 rows=10 width=10) (actual time=187.222..188.257 rows=10.00 loops=1)
    -&gt;  Index Scan using wrd_trgm on wrd  (cost=0.29..37020.87 rows=479829 width=10) (actual time=187.219..188.252 rows=10.00 loops=1)
          Order By: (word &lt;-&gt; 'caterpiler'::text)
+         Index Searches: 1
  Planning time: 0.196 ms
  Execution time: 198.640 ms
 </programlisting>

src/backend/access/brin/brin.c

Lines changed: 2 additions & 0 deletions

@@ -592,6 +592,8 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	opaque = (BrinOpaque *) scan->opaque;
 	bdesc = opaque->bo_bdesc;
 	pgstat_count_index_scan(idxRel);
+	if (scan->instrument)
+		scan->instrument->nsearches++;
 
 	/*
 	 * We need to know the size of the table so that we know how long to

src/backend/access/gin/ginscan.c

Lines changed: 2 additions & 0 deletions

@@ -442,6 +442,8 @@ ginNewScanKey(IndexScanDesc scan)
 	MemoryContextSwitchTo(oldCtx);
 
 	pgstat_count_index_scan(scan->indexRelation);
+	if (scan->instrument)
+		scan->instrument->nsearches++;
 }
 
 void

src/backend/access/gist/gistget.c

Lines changed: 4 additions & 0 deletions

@@ -625,6 +625,8 @@ gistgettuple(IndexScanDesc scan, ScanDirection dir)
 		GISTSearchItem fakeItem;
 
 		pgstat_count_index_scan(scan->indexRelation);
+		if (scan->instrument)
+			scan->instrument->nsearches++;
 
 		so->firstCall = false;
 		so->curPageData = so->nPageData = 0;

@@ -750,6 +752,8 @@ gistgetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 		return 0;
 
 	pgstat_count_index_scan(scan->indexRelation);
+	if (scan->instrument)
+		scan->instrument->nsearches++;
 
 	/* Begin the scan by processing the root page */
 	so->curPageData = so->nPageData = 0;

src/backend/access/hash/hashsearch.c

Lines changed: 2 additions & 0 deletions

@@ -298,6 +298,8 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	HashScanPosItem *currItem;
 
 	pgstat_count_index_scan(rel);
+	if (scan->instrument)
+		scan->instrument->nsearches++;
 
 	/*
 	 * We do not support hash scans with no index qualification, because we

src/backend/access/heap/heapam_handler.c

Lines changed: 1 addition & 1 deletion

@@ -749,7 +749,7 @@ heapam_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap,
 
 		tableScan = NULL;
 		heapScan = NULL;
-		indexScan = index_beginscan(OldHeap, OldIndex, SnapshotAny, 0, 0);
+		indexScan = index_beginscan(OldHeap, OldIndex, SnapshotAny, NULL, 0, 0);
 		index_rescan(indexScan, NULL, 0, NULL, 0);
 	}
 	else

src/backend/access/index/genam.c

Lines changed: 3 additions & 2 deletions

@@ -119,6 +119,7 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->ignore_killed_tuples = !scan->xactStartedInRecovery;
 
 	scan->opaque = NULL;
+	scan->instrument = NULL;
 
 	scan->xs_itup = NULL;
 	scan->xs_itupdesc = NULL;

@@ -446,7 +447,7 @@ systable_beginscan(Relation heapRelation,
 	}
 
 	sysscan->iscan = index_beginscan(heapRelation, irel,
-									 snapshot, nkeys, 0);
+									 snapshot, NULL, nkeys, 0);
 	index_rescan(sysscan->iscan, idxkey, nkeys, NULL, 0);
 	sysscan->scan = NULL;

@@ -711,7 +712,7 @@ systable_beginscan_ordered(Relation heapRelation,
 	}
 
 	sysscan->iscan = index_beginscan(heapRelation, indexRelation,
-									 snapshot, nkeys, 0);
+									 snapshot, NULL, nkeys, 0);
 	index_rescan(sysscan->iscan, idxkey, nkeys, NULL, 0);
 	sysscan->scan = NULL;
717718
