Commit 8555173

Fix overread in JSON parsing errors for incomplete byte sequences
json_lex_string() relies on pg_encoding_mblen_bounded() to point to the end of a JSON string when generating an error message, and the input it uses is not guaranteed to be null-terminated.

It was possible to walk off the end of the input buffer by a few bytes when the last bytes consist of an incomplete multi-byte sequence, because token_terminator would point to a location defined by pg_encoding_mblen_bounded() rather than the end of the input. This commit changes token_terminator so that the error message uses data only up to the end of the JSON input.

More work should be done so that this code can rely on an equivalent of report_invalid_encoding(), letting incorrect byte sequences appear in error messages in a readable form. This requires work for at least two cases in the JSON parsing API: an incomplete token and an invalid escape sequence. A more complete solution may be too invasive for a backpatch, so it is left as a future improvement; this commit takes care of the overread first.

A test is added on HEAD, as test_json_parser makes this issue straightforward to check.

Note that pg_encoding_mblen_bounded() no longer has any callers. It will be removed on HEAD in a separate commit, as it is proving to encourage unsafe coding.

Author: Jacob Champion
Discussion: https://postgr.es/m/CAOYmi+ncM7pwLS3AnKCSmoqqtpjvA8wmCdoBtKA3ZrB2hZG6zA@mail.gmail.com
Backpatch-through: 13
1 parent 2fb7560 commit 8555173
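To make the failure mode concrete, here is a minimal standalone C sketch of the scenario described above. It is not PostgreSQL code: utf8_char_len() is a simplified stand-in for pg_encoding_mblen(), and the three-byte buffer mirrors the test case added below (double-quote, backslash, 0xF5). The point is only that trusting the encoding-declared length of the final character steps past the end of a non-null-terminated input, while clamping to the known end of the input does not.

/*
 * Standalone illustration (not PostgreSQL code) of the overread fixed here.
 * utf8_char_len() is a simplified stand-in for pg_encoding_mblen(): it
 * reports the length that the lead byte of a UTF-8 sequence claims, without
 * knowing how many bytes actually remain in the buffer.
 */
#include <stdio.h>

static int
utf8_char_len(unsigned char lead)
{
	if (lead < 0x80)
		return 1;
	if ((lead & 0xE0) == 0xC0)
		return 2;
	if ((lead & 0xF0) == 0xE0)
		return 3;
	return 4;					/* treat anything else as a 4-byte lead */
}

int
main(void)
{
	/* JSON input that ends in an incomplete sequence: ", \, 0xF5 (no NUL) */
	const char	input[] = {'"', '\\', (char) 0xF5};
	const char *end = input + sizeof(input);	/* one past the last byte */
	const char *s = input + 2;					/* lexer stopped on 0xF5 */

	/* Unclamped: the declared 4-byte length points 3 bytes past the input */
	const char *term = s + utf8_char_len((unsigned char) *s);

	printf("unclamped terminator is %td byte(s) past the end\n", term - end);

	/* Clamped, as in the fix: never let the error report read past "end" */
	const char *token_terminator = (term <= end) ? term : end;

	printf("clamped terminator is %td byte(s) past the end\n",
		   token_terminator - end);
	return 0;
}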

File tree: 2 files changed (+10, -2 lines)


src/common/jsonapi.c

+2 -2

@@ -1689,8 +1689,8 @@ json_lex_string(JsonLexContext *lex)
 	} while (0)
 #define FAIL_AT_CHAR_END(code) \
 	do { \
-		lex->token_terminator = \
-			s + pg_encoding_mblen_bounded(lex->input_encoding, s); \
+		char	   *term = s + pg_encoding_mblen(lex->input_encoding, s); \
+		lex->token_terminator = (term <= end) ? term : end; \
 		return code; \
 	} while (0)
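The terminator is now derived from pg_encoding_mblen() and then clamped to end, the known end of the JSON input, rather than trusting pg_encoding_mblen_bounded(), whose bounding relies on a terminating NUL that this input is not guaranteed to have. As noted in the commit message, this leaves pg_encoding_mblen_bounded() without callers, and it is slated for removal on HEAD in a separate commit.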

src/test/modules/test_json_parser/t/002_inline.pl

+8

@@ -127,4 +127,12 @@ sub test
 	'"\\\\\\\\\\\\\\"',
 	error => qr/Token ""\\\\\\\\\\\\\\"" is invalid/);
 
+# Case with three bytes: double-quote, backslash and <f5>.
+# Both invalid-token and invalid-escape are possible errors, because for
+# smaller chunk sizes the incremental parser skips the string parsing when
+# it cannot find an ending quote.
+test("incomplete UTF-8 sequence",
+	"\"\\\x{F5}",
+	error => qr/(Token|Escape sequence) ""?\\\x{F5}" is invalid/);
+
 done_testing();
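The new case runs as part of the existing test_json_parser TAP suite; in a tree configured with TAP tests enabled, running the check target in src/test/modules/test_json_parser (or the equivalent meson test target) should exercise it, though the exact invocation depends on the build setup.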
