gh-104169: Refactor tokenizer into lexer and wrappers #110684


Merged · 12 commits · Oct 11, 2023
Fix ifdef
lysnikolaou committed Oct 11, 2023
commit 815cb3476817b00b4b87adaba00e1803553fda0b
Parser/tokenizer/helpers.c · 2 changes: 1 addition & 1 deletion

@@ -514,7 +514,7 @@ _PyTokenizer_ensure_utf8(char *line, struct tok_state *tok)

 /* ############## DEBUGGING STUFF ############## */

-#if defined(Py_DEBUG)
+#ifdef Py_DEBUG
 void
 _PyTokenizer_print_escape(FILE *f, const char *s, Py_ssize_t size)
 {
Parser/tokenizer/helpers.h · 2 changes: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ int _PyTokenizer_check_coding_spec(const char* line, Py_ssize_t size, struct tok
 int set_readline(struct tok_state *, const char *));
 int _PyTokenizer_ensure_utf8(char *line, struct tok_state *tok);

-#if Py_DEBUG
+#ifdef Py_DEBUG
 void _PyTokenizer_print_escape(FILE *f, const char *s, Py_ssize_t size);
 void _PyTokenizer_tok_dump(int type, char *start, char *end);
 #endif
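The two hunks fix different things. In helpers.c, `#if defined(Py_DEBUG)` and `#ifdef Py_DEBUG` are exactly equivalent, so that change is a pure consistency cleanup. The helpers.h change is substantive: `#if` evaluates its operand as an integer constant expression, so if Py_DEBUG is defined with no value (as Windows debug builds appear to do in PC/pyconfig.h), `#if Py_DEBUG` fails to preprocess, while an undefined Py_DEBUG quietly evaluates to 0 and skips the block. `#ifdef` only tests whether the macro is defined at all. A minimal standalone sketch of the difference, using a hypothetical MY_DEBUG macro in place of Py_DEBUG:

/* ifdef_demo.c: the three preprocessor forms seen in this commit.
 * MY_DEBUG is a hypothetical stand-in for Py_DEBUG. */
#include <stdio.h>

#define MY_DEBUG        /* defined, but with no value */

int main(void)
{
#ifdef MY_DEBUG         /* true whenever the macro is defined */
    puts("#ifdef MY_DEBUG: enabled");
#endif

#if defined(MY_DEBUG)   /* exactly equivalent to #ifdef */
    puts("#if defined(MY_DEBUG): enabled");
#endif

    /* #if MY_DEBUG would be a preprocessor error here: MY_DEBUG
     * expands to nothing, leaving #if with an empty expression.
     * If MY_DEBUG were instead undefined, #if MY_DEBUG would
     * silently evaluate to 0 and the guarded block would be skipped. */
    return 0;
}

Compiled as-is, this prints both "enabled" lines; adding an `#if MY_DEBUG` block would turn the missing value into a compile-time error, which is why `#ifdef` is the safer guard for a macro that may be defined without a value.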