gh-104169: Refactor tokenizer into lexer and wrappers by lysnikolaou · Pull Request #110684 · python/cpython

Merged: 12 commits, Oct 11, 2023
Fix lint errors
lysnikolaou committed Oct 11, 2023
commit df2c8cb9bfc86761fc06bdcd280cfd6ba8e59832
@@ -1,3 +1,4 @@
 Split the tokenizer into two separate directories:
-- One part includes the actual lexeme producing logic and lives in `Parser/lexer`.
-- The second part wraps the lexer according to the different tokenization modes we have (string, utf-8, file, interactive, readline) and lives in `Parser/tokenizer`.
+- One part includes the actual lexeme producing logic and lives in ``Parser/lexer``.
+- The second part wraps the lexer according to the different tokenization modes
+  we have (string, utf-8, file, interactive, readline) and lives in ``Parser/tokenizer``.
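The NEWS entry above distinguishes the lexeme-producing core from the wrappers that feed it input in different modes (string, utf-8, file, interactive, readline). As a rough illustration of what any of these entry points ultimately produces, here is a sketch using the pure-Python `tokenize` module; this is not the C code touched by this PR, just a way to see the token stream the lexer layer is responsible for.

```python
# Sketch: tokenizing an in-memory source string, analogous to the
# "string"/"utf-8" tokenization modes described in the NEWS entry.
# Uses the stdlib `tokenize` module, not the C lexer in Parser/lexer.
import io
import tokenize

source = "x = 1\n"

# tokenize.tokenize() expects a bytes-producing readline callable.
readline = io.BytesIO(source.encode("utf-8")).readline
names = [tokenize.tok_name[tok.type] for tok in tokenize.tokenize(readline)]
print(names)
```

Whatever the input mode (string buffer, file, interactive prompt), the wrapper's job is only to deliver lines of source; the lexer then emits the same kind of token stream, beginning with an ENCODING token and ending with ENDMARKER.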