
Commit 55918f7

Remove arbitrary cap on read_stream.c buffer queue.
Previously the internal queue of buffers was capped at max_ios * 4, though not less than io_combine_limit, at allocation time. That was done in the first version based on conservative theories about resource usage and heuristics pending later work. The configured I/O depth could not always be reached with dense random streams generated by ANALYZE, VACUUM, the proposed Bitmap Heap Scan patch, and also sequential streams with the proposed AIO subsystem, to name some examples.

The new formula is (max_ios + 1) * io_combine_limit, enough buffers for the full configured I/O concurrency level using the full configured I/O combine size, plus the buffers from one finished but not yet consumed full-sized I/O. Significantly more memory would be needed for high GUC values if the client code requests a large per-buffer data size, but that is discouraged (existing and proposed stream users try to keep it under a few words, if not zero).

With this new formula, an intermediate variable could have overflowed under maximum GUC values, so its data type is adjusted to cope.

Discussion: https://postgr.es/m/CA%2BhUKGK_%3D4CVmMHvsHjOVrK6t4F%3DLBpFzsrr3R%2BaJYN8kcTfWg%40mail.gmail.com
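The sizing arithmetic described above can be sketched as a small stand-alone C function. This is an illustrative model, not the actual read_stream.c code: the function name is hypothetical, and max_ios and io_combine_limit stand in for the values the stream derives from the GUCs. It shows both the new formula and the int16 clamp that leaves room for the spare entry and overflow space.

```c
#include <assert.h>
#include <stdint.h>

#define PG_INT16_MAX 0x7FFF

/*
 * Hypothetical sketch of the buffer-queue sizing in the commit message:
 * (max_ios + 1) * io_combine_limit, clamped so an int16 index with a
 * spare entry and overflow space cannot overflow.
 */
static int
sketch_max_pinned_buffers(int max_ios, int io_combine_limit)
{
	/*
	 * Compute in a wide type: under maximum GUC values the product can
	 * exceed INT16_MAX, which is why the commit widens an intermediate
	 * variable.
	 */
	int32_t pinned = (int32_t) (max_ios + 1) * io_combine_limit;

	/* Clamp, allowing for the spare entry and the overflow space. */
	if (pinned > PG_INT16_MAX - io_combine_limit - 1)
		pinned = PG_INT16_MAX - io_combine_limit - 1;
	return (int) pinned;
}
```

Note how the "+ 1" guarantees one full io_combine_limit-sized read's worth of buffers even when max_ios is 0, matching the rationale in the new comment below.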
1 parent 48e4ae9 commit 55918f7

File tree

1 file changed (+11, -7 lines)


src/backend/storage/aio/read_stream.c

Lines changed: 11 additions & 7 deletions
@@ -447,13 +447,17 @@ read_stream_begin_impl(int flags,
 
 	/*
 	 * Choose the maximum number of buffers we're prepared to pin.  We try to
-	 * pin fewer if we can, though.  We clamp it to at least io_combine_limit
-	 * so that we can have a chance to build up a full io_combine_limit sized
-	 * read, even when max_ios is zero.  Be careful not to allow int16 to
-	 * overflow (even though that's not possible with the current GUC range
-	 * limits), allowing also for the spare entry and the overflow space.
+	 * pin fewer if we can, though.  We add one so that we can make progress
+	 * even if max_ios is set to 0 (see also further down).  For max_ios > 0,
+	 * this also allows an extra full I/O's worth of buffers: after an I/O
+	 * finishes we don't want to have to wait for its buffers to be consumed
+	 * before starting a new one.
+	 *
+	 * Be careful not to allow int16 to overflow (even though that's not
+	 * possible with the current GUC range limits), allowing also for the
+	 * spare entry and the overflow space.
 	 */
-	max_pinned_buffers = Max(max_ios * 4, io_combine_limit);
+	max_pinned_buffers = (max_ios + 1) * io_combine_limit;
 	max_pinned_buffers = Min(max_pinned_buffers,
 							 PG_INT16_MAX - io_combine_limit - 1);
 
@@ -725,7 +729,7 @@ read_stream_next_buffer(ReadStream *stream, void **per_buffer_data)
 		stream->ios[stream->oldest_io_index].buffer_index == oldest_buffer_index)
 	{
 		int16		io_index = stream->oldest_io_index;
-		int16		distance;
+		int32		distance;	/* wider temporary value, clamped below */
 
 		/* Sanity check that we still agree on the buffers. */
 		Assert(stream->ios[io_index].op.buffers ==
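The int16 -> int32 widening of the distance temporary can be illustrated with a minimal sketch. This is not the actual read_stream.c logic, just an assumed doubling-style growth heuristic showing the hazard the diff guards against: an intermediate computed in int16 could wrap before the clamp takes effect, whereas a wider temporary is clamped safely and only then narrowed.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only (hypothetical function, not read_stream.c): grow a
 * look-ahead distance, computing in int32 so that doubling a value near
 * INT16_MAX cannot wrap before being clamped.
 */
static int16_t
sketch_grow_distance(int16_t distance, int16_t max_distance)
{
	int32_t d = (int32_t) distance * 2;	/* wider temporary value... */

	if (d > max_distance)
		d = max_distance;		/* ...clamped below, as in the diff */
	return (int16_t) d;
}
```

Had the temporary stayed int16, doubling a distance above 16383 would have been signed overflow (undefined behavior in C) before the clamp could run.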
