gh-128519: Align the docstring of untokenize() to match the docs (#12…
tomasr8 authored Jan 6, 2025
1 parent a62ba52 commit aef52ca
Showing 1 changed file with 4 additions and 10 deletions.
14 changes: 4 additions & 10 deletions Lib/tokenize.py
@@ -318,16 +318,10 @@ def untokenize(iterable):
     with at least two elements, a token number and token value.  If
     only two tokens are passed, the resulting output is poor.
 
-    Round-trip invariant for full input:
-        Untokenized source will match input source exactly
-
-    Round-trip invariant for limited input:
-        # Output bytes will tokenize back to the input
-        t1 = [tok[:2] for tok in tokenize(f.readline)]
-        newcode = untokenize(t1)
-        readline = BytesIO(newcode).readline
-        t2 = [tok[:2] for tok in tokenize(readline)]
-        assert t1 == t2
+    The result is guaranteed to tokenize back to match the input so
+    that the conversion is lossless and round-trips are assured.
+    The guarantee applies only to the token type and token string as
+    the spacing between tokens (column positions) may change.
     """
     ut = Untokenizer()
     out = ut.untokenize(iterable)
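
The new wording drops the executable example that previously lived in the docstring. A minimal, self-contained sketch of that round-trip check, built from the removed example and using only the public tokenize()/untokenize() API (the sample source string below is an invented placeholder):

    # Sketch of the round-trip guarantee the new wording describes: reducing each
    # token to its first two elements (type, string) and untokenizing still yields
    # bytes that tokenize back to the same (type, string) pairs, even though
    # column positions / whitespace may differ from the original source.
    from io import BytesIO
    from tokenize import tokenize, untokenize

    source = b"x = 1 +  2\nprint( x )\n"   # placeholder input with odd spacing

    t1 = [tok[:2] for tok in tokenize(BytesIO(source).readline)]
    newcode = untokenize(t1)               # bytes; spacing may not match `source`
    t2 = [tok[:2] for tok in tokenize(BytesIO(newcode).readline)]
    assert t1 == t2                        # token types and strings round-trip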
