python: document output_buffer_limit's block-granularity behavior #1464
Open
MukundaKatta wants to merge 1 commit into
Conversation
The python _brotli.c docstring for Decompressor::process describes output_buffer_limit as an absolute cap: the buffer 'will not grow once its size is equal to or exceeds that value.' In practice, the cap is checked inside the BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT branch only, so the returned output can exceed the requested limit by up to one decoder block (observed on x86_64: a 1-byte limit yields a 32,752-byte payload, a 32,753-byte limit yields 98,272 bytes). Rewrite the docstring so it describes what the code actually does -- a best-effort, block-granularity cap -- instead of implying a byte-precise hard limit. No runtime behavior change; test assertions already check against MIN_OUTPUT_BUFFER_SIZE rather than the caller limit, so they still pass unchanged. Refs google#1460.
Refs #1460.
The docstring for Decompressor::process in python/_brotli.c describes output_buffer_limit as an absolute cap: the buffer "will not grow once its size is equal to or exceeds that value." In practice the check is applied only inside the BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT branch of the decompression loop, so the returned output can exceed the requested limit by up to one decoder block. Repro on x86_64 from #1460: a 1-byte limit yields a 32,752-byte payload, and a 32,753-byte limit yields 98,272 bytes.

Rewrite the docstring so it describes what the code actually does, a best-effort, block-granularity cap, instead of implying a byte-precise hard limit. This matches the existing test assertions in python/tests/decompressor_test.py, which check against MIN_OUTPUT_BUFFER_SIZE (32 KiB) rather than the caller-supplied limit.

Testing

python/tests/decompressor_test.py::test_decompress_with_limit and test_changing_limit already assert len(decompressed_chunk) <= MIN_OUTPUT_BUFFER_SIZE (32 KiB), consistent with the new docstring wording; they continue to pass.

Out of scope
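The block-granularity behaviour can be illustrated with a small pure-Python model. This is a simplified sketch, not the C implementation: the function name, the doubling growth, and the use of MIN_OUTPUT_BUFFER_SIZE as the exact block size are assumptions for illustration only.

```python
# Simplified model of the decompression loop's buffer growth. The real
# code lives in python/_brotli.c; this only mimics the shape of the logic.

MIN_OUTPUT_BUFFER_SIZE = 32 * 1024  # modeled initial allocation


def process_model(total_decoded_size, output_buffer_limit):
    """Return how many bytes the modeled loop would hand back."""
    # The loop grows the buffer once before checking anything, so at
    # least one full block is available regardless of the limit.
    buffer_size = MIN_OUTPUT_BUFFER_SIZE
    produced = min(total_decoded_size, buffer_size)
    while produced < total_decoded_size:
        # The limit is consulted only at the NEEDS_MORE_OUTPUT point,
        # i.e. after the current block is already full.
        if buffer_size >= output_buffer_limit:
            break  # best-effort cap: stop growing, keep what we have
        buffer_size *= 2  # modeled growth by another block
        produced = min(total_decoded_size, buffer_size)
    return produced


# A 1-byte limit still returns a full initial block in this model:
assert process_model(10**6, output_buffer_limit=1) == MIN_OUTPUT_BUFFER_SIZE
```

The point the model captures is that the limit is checked only after a block has already been allocated and filled, which is why tiny limits still produce roughly MIN_OUTPUT_BUFFER_SIZE of output.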
The issue also suggests considering whether output_buffer_limit should be enforced at byte granularity. That would be a behavioural change in the Buffer_Grow path (the unconditional initial Buffer_Grow() call allocates MIN_OUTPUT_BUFFER_SIZE bytes before the loop even sees the limit), and likely a separate discussion; this PR only aligns the documented contract with the shipped behaviour.
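For contrast, a byte-precise cap would have to clamp every allocation, including the initial one, to the caller's limit. The following is a hypothetical sketch of that out-of-scope behavioural change, using a simplified pure-Python model rather than the real Buffer_Grow code; all names and growth constants here are illustrative assumptions.

```python
# Hypothetical byte-precise variant (NOT what the shipped code does):
# the limit is applied before the initial allocation and clamps every
# subsequent growth step.

MIN_OUTPUT_BUFFER_SIZE = 32 * 1024  # modeled initial allocation


def process_clamped(total_decoded_size, output_buffer_limit):
    """Return how many bytes a byte-precise loop would hand back."""
    # Clamp even the first allocation, so the cap holds exactly.
    buffer_size = min(MIN_OUTPUT_BUFFER_SIZE, output_buffer_limit)
    produced = min(total_decoded_size, buffer_size)
    while produced < total_decoded_size and buffer_size < output_buffer_limit:
        buffer_size = min(buffer_size * 2, output_buffer_limit)
        produced = min(total_decoded_size, buffer_size)
    return produced


# With clamping, a 1-byte limit really yields a 1-byte output:
assert process_clamped(10**6, output_buffer_limit=1) == 1
```

The difference from the shipped behaviour is exactly the PR's point: today the initial block is allocated unconditionally, so only a docstring change (not this clamp) is in scope.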