
Commit af92e9e

moogiciandsmith111 authored and committed

Speed up blib2to3 tokenization using startswith with a tuple (psf#4541)

1 parent 802eccc

File tree

2 files changed: +3 −2 lines changed

CHANGES.md

Lines changed: 1 addition & 0 deletions

@@ -43,6 +43,7 @@
 ### Performance

 <!-- Changes that improve Black's performance. -->
+- Speed up the `is_fstring_start` function in Black's tokenizer (#4541)

 ### Output


src/blib2to3/pgen2/tokenize.py

Lines changed: 2 additions & 2 deletions

@@ -221,7 +221,7 @@ def _combinations(*l: str) -> set[str]:
     | {f"{prefix}'" for prefix in _strprefixes | _fstring_prefixes}
     | {f'{prefix}"' for prefix in _strprefixes | _fstring_prefixes}
 )
-fstring_prefix: Final = (
+fstring_prefix: Final = tuple(
     {f"{prefix}'" for prefix in _fstring_prefixes}
     | {f'{prefix}"' for prefix in _fstring_prefixes}
     | {f"{prefix}'''" for prefix in _fstring_prefixes}
@@ -459,7 +459,7 @@ def untokenize(iterable: Iterable[TokenInfo]) -> str:


 def is_fstring_start(token: str) -> bool:
-    return builtins.any(token.startswith(prefix) for prefix in fstring_prefix)
+    return token.startswith(fstring_prefix)


 def _split_fstring_start_and_middle(token: str) -> tuple[str, str]:
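The optimization rests on a property of `str.startswith`: it also accepts a tuple of prefixes and tests them in a single C-level call, so the Python-level `any(...)` generator loop over each candidate prefix can be dropped. It does not accept a set or frozenset (that raises `TypeError`), which is why the commit wraps the set union in `tuple(...)`. A minimal sketch of the idea, using an illustrative subset of prefixes rather than the full set Black derives:

```python
# str.startswith accepts a tuple of prefixes and checks them in one
# C-level call, replacing a per-token any(...) generator loop.
# These prefixes are an illustrative subset, not Black's full set.
FSTRING_PREFIXES = ("f'", 'f"', "F'", 'F"', "rf'", 'rf"')

def is_fstring_start(token: str) -> bool:
    # Passing a set here would raise TypeError; startswith requires
    # a str or a tuple of strs, hence tuple(...) in the commit.
    return token.startswith(FSTRING_PREFIXES)

print(is_fstring_start("f'hello'"))  # True
print(is_fstring_start("'hello'"))   # False
```

Since this function runs once per string token, eliminating the generator and the repeated method lookups inside `any(...)` is a measurable win on f-string-heavy files.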
