<section id="module-tokenize">
<span id="tokenize-tokenizer-for-python-source"></span><h1><code class="xref py py-mod docutils literal notranslate"><span class="pre">tokenize</span></code> — Tokenizer for Python source<a class="headerlink" href="#module-tokenize" title="Link to this heading"></a></h1>
<p><strong>Source code:</strong> <a class="extlink-source reference external" href="https://github.com/python/cpython/tree/3.13/Lib/tokenize.py">Lib/tokenize.py</a></p>
<hr class="docutils" />
The `tokenize` module provides a lexical scanner for Python source code, implemented in Python. The scanner in this module returns comments as tokens as well, making it useful for implementing "pretty-printers", including colorizers for on-screen displays.

To simplify token stream handling, all operator and delimiter tokens and `Ellipsis` are returned using the generic `OP` token type. The exact type can be determined by checking the `exact_type` property on the named tuple returned from `tokenize.tokenize()`.
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Note that the functions in this module are only designed to parse
syntactically valid Python code (code that does not raise when parsed
using <a class="reference internal" href="ast.html#ast.parse" title="ast.parse"><code class="xref py py-func docutils literal notranslate"><span class="pre">ast.parse()</span></code></a>). The behavior of the functions in this module is
<strong>undefined</strong> when providing invalid Python code and it can change at any
point.</p>
</div>
<section id="tokenizing-input">
<h2>Tokenizing Input<a class="headerlink" href="#tokenizing-input" title="Link to this heading"></a></h2>
<p>The primary entry point is a <a class="reference internal" href="../glossary.html#term-generator"><span class="xref std std-term">generator</span></a>:</p>
<dl class="py function">
<dt class="sig sig-object py" id="tokenize.tokenize">
<span class="sig-prename descclassname"><span class="pre">tokenize.</span></span><span class="sig-name descname"><span class="pre">tokenize</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">readline</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#tokenize.tokenize" title="Link to this definition"></a></dt>
<dd><p>The <a class="reference internal" href="#tokenize.tokenize" title="tokenize.tokenize"><code class="xref py py-func docutils literal notranslate"><span class="pre">tokenize()</span></code></a> generator requires one argument, <em>readline</em>, which
must be a callable object which provides the same interface as the
<a class="reference internal" href="io.html#io.IOBase.readline" title="io.IOBase.readline"><code class="xref py py-meth docutils literal notranslate"><span class="pre">io.IOBase.readline()</span></code></a> method of file objects. Each call to the
function should return one line of input as bytes.</p>
<p>The generator produces 5-tuples with these members: the token type; the
token string; a 2-tuple <code class="docutils literal notranslate"><span class="pre">(srow,</span> <span class="pre">scol)</span></code> of ints specifying the row and
column where the token begins in the source; a 2-tuple <code class="docutils literal notranslate"><span class="pre">(erow,</span> <span class="pre">ecol)</span></code> of
ints specifying the row and column where the token ends in the source; and
the line on which the token was found. The line passed (the last tuple item)
is the <em>physical</em> line. The 5 tuple is returned as a <a class="reference internal" href="../glossary.html#term-named-tuple"><span class="xref std std-term">named tuple</span></a>
with the field names:
<code class="docutils literal notranslate"><span class="pre">type</span> <span class="pre">string</span> <span class="pre">start</span> <span class="pre">end</span> <span class="pre">line</span></code>.</p>
<p>The returned <a class="reference internal" href="../glossary.html#term-named-tuple"><span class="xref std std-term">named tuple</span></a> has an additional property named
<code class="docutils literal notranslate"><span class="pre">exact_type</span></code> that contains the exact operator type for
<a class="reference internal" href="token.html#token.OP" title="token.OP"><code class="xref py py-data docutils literal notranslate"><span class="pre">OP</span></code></a> tokens. For all other token types <code class="docutils literal notranslate"><span class="pre">exact_type</span></code>
equals the named tuple <code class="docutils literal notranslate"><span class="pre">type</span></code> field.</p>
<div class="versionchanged">
<p><span class="versionmodified changed">Changed in version 3.1: </span>Added support for named tuples.</p>
</div>
<div class="versionchanged">
<p><span class="versionmodified changed">Changed in version 3.3: </span>Added support for <code class="docutils literal notranslate"><span class="pre">exact_type</span></code>.</p>
</div>
<p><a class="reference internal" href="#tokenize.tokenize" title="tokenize.tokenize"><code class="xref py py-func docutils literal notranslate"><span class="pre">tokenize()</span></code></a> determines the source encoding of the file by looking for a
UTF-8 BOM or encoding cookie, according to <span class="target" id="index-0"></span><a class="pep reference external" href="https://peps.python.org/pep-0263/"><strong>PEP 263</strong></a>.</p>
</dd></dl>
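As a small illustration of the tuple layout described above, a sketch assuming an in-memory source:

```python
import io
import tokenize

# Tokenize a one-line source and inspect the named-tuple fields:
# type, string, start=(srow, scol), end=(erow, ecol), and the physical line.
readline = io.BytesIO(b"x = 1\n").readline
for tok in tokenize.tokenize(readline):
    print(tok.type, repr(tok.string), tok.start, tok.end, repr(tok.line))
```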
<dl class="py function">
<dt class="sig sig-object py" id="tokenize.generate_tokens">
<span class="sig-prename descclassname"><span class="pre">tokenize.</span></span><span class="sig-name descname"><span class="pre">generate_tokens</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">readline</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#tokenize.generate_tokens" title="Link to this definition"></a></dt>
<dd><p>Tokenize a source reading unicode strings instead of bytes.</p>
<p>Like <a class="reference internal" href="#tokenize.tokenize" title="tokenize.tokenize"><code class="xref py py-func docutils literal notranslate"><span class="pre">tokenize()</span></code></a>, the <em>readline</em> argument is a callable returning
a single line of input. However, <a class="reference internal" href="#tokenize.generate_tokens" title="tokenize.generate_tokens"><code class="xref py py-func docutils literal notranslate"><span class="pre">generate_tokens()</span></code></a> expects <em>readline</em>
to return a str object rather than bytes.</p>
<p>The result is an iterator yielding named tuples, exactly like
<a class="reference internal" href="#tokenize.tokenize" title="tokenize.tokenize"><code class="xref py py-func docutils literal notranslate"><span class="pre">tokenize()</span></code></a>. It does not yield an <a class="reference internal" href="token.html#token.ENCODING" title="token.ENCODING"><code class="xref py py-data docutils literal notranslate"><span class="pre">ENCODING</span></code></a> token.</p>
</dd></dl>
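A minimal sketch, assuming the source is held in an `io.StringIO` (any callable returning str lines works):

```python
import io
import tokenize

# generate_tokens() takes a str-returning readline, so io.StringIO works
# directly; unlike tokenize(), no ENCODING token is emitted first.
for tok in tokenize.generate_tokens(io.StringIO("x = 1\n").readline):
    print(tok)
```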
All constants from the `token` module are also exported from `tokenize`.

Another function is provided to reverse the tokenization process. This is useful for creating tools that tokenize a script, modify the token stream, and write back the modified script.
<dl class="py function">
<dt class="sig sig-object py" id="tokenize.untokenize">
<span class="sig-prename descclassname"><span class="pre">tokenize.</span></span><span class="sig-name descname"><span class="pre">untokenize</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">iterable</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#tokenize.untokenize" title="Link to this definition"></a></dt>
<dd><p>Converts tokens back into Python source code. The <em>iterable</em> must return
sequences with at least two elements, the token type and the token string.
Any additional sequence elements are ignored.</p>
<p>The result is guaranteed to tokenize back to match the input so that the
conversion is lossless and round-trips are assured. The guarantee applies
only to the token type and token string as the spacing between tokens
(column positions) may change.</p>
<p>It returns bytes, encoded using the <a class="reference internal" href="token.html#token.ENCODING" title="token.ENCODING"><code class="xref py py-data docutils literal notranslate"><span class="pre">ENCODING</span></code></a> token, which
is the first token sequence output by <a class="reference internal" href="#tokenize.tokenize" title="tokenize.tokenize"><code class="xref py py-func docutils literal notranslate"><span class="pre">tokenize()</span></code></a>. If there is no
encoding token in the input, it returns a str instead.</p>
</dd></dl>
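A short round-trip sketch of the documented guarantee (sample source made up for illustration):

```python
import io
import tokenize

source = b"x = 1\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# The token stream starts with an ENCODING token, so untokenize()
# returns bytes here rather than str.
result = tokenize.untokenize(tokens)

# The guarantee: re-tokenizing the output yields the same token types
# and strings, even if spacing were to differ.
retokens = list(tokenize.tokenize(io.BytesIO(result).readline))
assert ([(t.type, t.string) for t in tokens]
        == [(t.type, t.string) for t in retokens])
```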
<p><a class="reference internal" href="#tokenize.tokenize" title="tokenize.tokenize"><code class="xref py py-func docutils literal notranslate"><span class="pre">tokenize()</span></code></a> needs to detect the encoding of source files it tokenizes. The
function it uses to do this is available:</p>
<dl class="py function">
<dt class="sig sig-object py" id="tokenize.detect_encoding">
<span class="sig-prename descclassname"><span class="pre">tokenize.</span></span><span class="sig-name descname"><span class="pre">detect_encoding</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">readline</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#tokenize.detect_encoding" title="Link to this definition"></a></dt>
<dd><p>The <a class="reference internal" href="#tokenize.detect_encoding" title="tokenize.detect_encoding"><code class="xref py py-func docutils literal notranslate"><span class="pre">detect_encoding()</span></code></a> function is used to detect the encoding that
should be used to decode a Python source file. It requires one argument,
readline, in the same way as the <a class="reference internal" href="#tokenize.tokenize" title="tokenize.tokenize"><code class="xref py py-func docutils literal notranslate"><span class="pre">tokenize()</span></code></a> generator.</p>
<p>It will call readline a maximum of twice, and return the encoding used
(as a string) and a list of any lines (not decoded from bytes) it has read
in.</p>
<p>It detects the encoding from the presence of a UTF-8 BOM or an encoding
cookie as specified in <span class="target" id="index-1"></span><a class="pep reference external" href="https://peps.python.org/pep-0263/"><strong>PEP 263</strong></a>. If both a BOM and a cookie are present,
but disagree, a <a class="reference internal" href="exceptions.html#SyntaxError" title="SyntaxError"><code class="xref py py-exc docutils literal notranslate"><span class="pre">SyntaxError</span></code></a> will be raised. Note that if the BOM is found,
<code class="docutils literal notranslate"><span class="pre">'utf-8-sig'</span></code> will be returned as an encoding.</p>
<p>If no encoding is specified, then the default of <code class="docutils literal notranslate"><span class="pre">'utf-8'</span></code> will be
returned.</p>
<p>Use <a class="reference internal" href="#tokenize.open" title="tokenize.open"><code class="xref py py-func docutils literal notranslate"><span class="pre">open()</span></code></a> to open Python source files: it uses
<a class="reference internal" href="#tokenize.detect_encoding" title="tokenize.detect_encoding"><code class="xref py py-func docutils literal notranslate"><span class="pre">detect_encoding()</span></code></a> to detect the file encoding.</p>
</dd></dl>
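A minimal sketch, assuming a `hello.py` like the one in the Examples section below is present on disk:

```python
import tokenize

# detect_encoding() reads at most two lines and returns the encoding name
# together with the raw (still-bytes) lines it consumed.
with open('hello.py', 'rb') as f:
    encoding, lines = tokenize.detect_encoding(f.readline)

print(encoding)  # 'utf-8' for a plain ASCII file with no cookie or BOM
print(lines)     # up to two undecoded lines
```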
<dl class="py function">
<dt class="sig sig-object py" id="tokenize.open">
<span class="sig-prename descclassname"><span class="pre">tokenize.</span></span><span class="sig-name descname"><span class="pre">open</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">filename</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#tokenize.open" title="Link to this definition"></a></dt>
<dd><p>Open a file in read only mode using the encoding detected by
<a class="reference internal" href="#tokenize.detect_encoding" title="tokenize.detect_encoding"><code class="xref py py-func docutils literal notranslate"><span class="pre">detect_encoding()</span></code></a>.</p>
<div class="versionadded">
<p><span class="versionmodified added">Added in version 3.2.</span></p>
</div>
</dd></dl>
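A small sketch, again assuming `hello.py` exists; the returned file object is text-mode, so its lines suit `generate_tokens()`:

```python
import tokenize

# tokenize.open() decodes using the detected encoding, so reads yield str.
with tokenize.open('hello.py') as f:
    print(f.encoding)   # the encoding detect_encoding() chose
    print(f.readline()) # first line, already decoded to str
```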
<dl class="py exception">
<dt class="sig sig-object py" id="tokenize.TokenError">
<em class="property"><span class="k"><span class="pre">exception</span></span><span class="w"> </span></em><span class="sig-prename descclassname"><span class="pre">tokenize.</span></span><span class="sig-name descname"><span class="pre">TokenError</span></span><a class="headerlink" href="#tokenize.TokenError" title="Link to this definition"></a></dt>
<dd><p>Raised when either a docstring or expression that may be split over several
lines is not completed anywhere in the file, for example:</p>
<div class="highlight-python3 notranslate"><div class="highlight"><pre><span></span><span class="s2">&quot;&quot;&quot;Beginning of</span>
<span class="s2">docstring</span>
</pre></div>
</div>
<p>or:</p>
<div class="highlight-python3 notranslate"><div class="highlight"><pre><span></span><span class="p">[</span><span class="mi">1</span><span class="p">,</span>
<span class="mi">2</span><span class="p">,</span>
<span class="mi">3</span>
</pre></div>
</div>
</dd></dl>
</section>
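A minimal sketch of triggering and catching the error with the unclosed-bracket example above (the exact message text is an implementation detail):

```python
import io
import tokenize

# An unterminated bracketed expression leaves the tokenizer waiting for
# more input, so exhausting the stream raises TokenError.
try:
    list(tokenize.generate_tokens(io.StringIO("[1,\n 2,\n 3\n").readline))
except tokenize.TokenError as exc:
    print(exc)
```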
<section id="command-line-usage">
<span id="tokenize-cli"></span><h2>Command-Line Usage<a class="headerlink" href="#command-line-usage" title="Link to this heading"></a></h2>
<div class="versionadded">
<p><span class="versionmodified added">Added in version 3.3.</span></p>
</div>
<p>The <a class="reference internal" href="#module-tokenize" title="tokenize: Lexical scanner for Python source code."><code class="xref py py-mod docutils literal notranslate"><span class="pre">tokenize</span></code></a> module can be executed as a script from the command line.
It is as simple as:</p>
<div class="highlight-sh notranslate"><div class="highlight"><pre><span></span>python<span class="w"> </span>-m<span class="w"> </span>tokenize<span class="w"> </span><span class="o">[</span>-e<span class="o">]</span><span class="w"> </span><span class="o">[</span>filename.py<span class="o">]</span>
</pre></div>
</div>
<p>The following options are accepted:</p>
<dl class="std option">
<dt class="sig sig-object std" id="cmdoption-tokenize-h">
<span id="cmdoption-tokenize-help"></span><span class="sig-name descname"><span class="pre">-h</span></span><span class="sig-prename descclassname"></span><span class="sig-prename descclassname"><span class="pre">,</span> </span><span class="sig-name descname"><span class="pre">--help</span></span><span class="sig-prename descclassname"></span><a class="headerlink" href="#cmdoption-tokenize-h" title="Link to this definition"></a></dt>
<dd><p>show this help message and exit</p>
</dd></dl>
<dl class="std option">
<dt class="sig sig-object std" id="cmdoption-tokenize-e">
<span id="cmdoption-tokenize-exact"></span><span class="sig-name descname"><span class="pre">-e</span></span><span class="sig-prename descclassname"></span><span class="sig-prename descclassname"><span class="pre">,</span> </span><span class="sig-name descname"><span class="pre">--exact</span></span><span class="sig-prename descclassname"></span><a class="headerlink" href="#cmdoption-tokenize-e" title="Link to this definition"></a></dt>
<dd><p>display token names using the exact type</p>
</dd></dl>
<p>If <code class="file docutils literal notranslate"><span class="pre">filename.py</span></code> is specified its contents are tokenized to stdout.
Otherwise, tokenization is performed on stdin.</p>
</section>
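For instance, a hypothetical run over stdin (output shown for illustration; the formatting follows the `hello.py` examples below):

```console
$ echo 'pass' | python -m tokenize
0,0-0,0:            ENCODING       'utf-8'
1,0-1,4:            NAME           'pass'
1,4-1,5:            NEWLINE        '\n'
2,0-2,0:            ENDMARKER      ''
```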
<section id="examples">
<h2>Examples<a class="headerlink" href="#examples" title="Link to this heading"></a></h2>
<p>Example of a script rewriter that transforms float literals into Decimal
objects:</p>
<div class="highlight-python3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span><span class="w"> </span><span class="nn">tokenize</span><span class="w"> </span><span class="kn">import</span> <span class="n">tokenize</span><span class="p">,</span> <span class="n">untokenize</span><span class="p">,</span> <span class="n">NUMBER</span><span class="p">,</span> <span class="n">STRING</span><span class="p">,</span> <span class="n">NAME</span><span class="p">,</span> <span class="n">OP</span>
<span class="kn">from</span><span class="w"> </span><span class="nn">io</span><span class="w"> </span><span class="kn">import</span> <span class="n">BytesIO</span>
<span class="k">def</span><span class="w"> </span><span class="nf">decistmt</span><span class="p">(</span><span class="n">s</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Substitute Decimals for floats in a string of statements.</span>
<span class="sd"> &gt;&gt;&gt; from decimal import Decimal</span>
<span class="sd"> &gt;&gt;&gt; s = &#39;print(+21.3e-5*-.1234/81.7)&#39;</span>
<span class="sd"> &gt;&gt;&gt; decistmt(s)</span>
<span class="sd"> &quot;print (+Decimal (&#39;21.3e-5&#39;)*-Decimal (&#39;.1234&#39;)/Decimal (&#39;81.7&#39;))&quot;</span>
<span class="sd"> The format of the exponent is inherited from the platform C library.</span>
<span class="sd"> Known cases are &quot;e-007&quot; (Windows) and &quot;e-07&quot; (not Windows). Since</span>
<span class="sd"> we&#39;re only showing 12 digits, and the 13th isn&#39;t close to 5, the</span>
<span class="sd"> rest of the output should be platform-independent.</span>
<span class="sd"> &gt;&gt;&gt; exec(s) #doctest: +ELLIPSIS</span>
<span class="sd"> -3.21716034272e-0...7</span>
<span class="sd"> Output from calculations with Decimal should be identical across all</span>
<span class="sd"> platforms.</span>
<span class="sd"> &gt;&gt;&gt; exec(decistmt(s))</span>
<span class="sd"> -3.217160342717258261933904529E-7</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="n">result</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">g</span> <span class="o">=</span> <span class="n">tokenize</span><span class="p">(</span><span class="n">BytesIO</span><span class="p">(</span><span class="n">s</span><span class="o">.</span><span class="n">encode</span><span class="p">(</span><span class="s1">&#39;utf-8&#39;</span><span class="p">))</span><span class="o">.</span><span class="n">readline</span><span class="p">)</span> <span class="c1"># tokenize the string</span>
<span class="k">for</span> <span class="n">toknum</span><span class="p">,</span> <span class="n">tokval</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">_</span> <span class="ow">in</span> <span class="n">g</span><span class="p">:</span>
<span class="k">if</span> <span class="n">toknum</span> <span class="o">==</span> <span class="n">NUMBER</span> <span class="ow">and</span> <span class="s1">&#39;.&#39;</span> <span class="ow">in</span> <span class="n">tokval</span><span class="p">:</span> <span class="c1"># replace NUMBER tokens</span>
<span class="n">result</span><span class="o">.</span><span class="n">extend</span><span class="p">([</span>
<span class="p">(</span><span class="n">NAME</span><span class="p">,</span> <span class="s1">&#39;Decimal&#39;</span><span class="p">),</span>
<span class="p">(</span><span class="n">OP</span><span class="p">,</span> <span class="s1">&#39;(&#39;</span><span class="p">),</span>
<span class="p">(</span><span class="n">STRING</span><span class="p">,</span> <span class="nb">repr</span><span class="p">(</span><span class="n">tokval</span><span class="p">)),</span>
<span class="p">(</span><span class="n">OP</span><span class="p">,</span> <span class="s1">&#39;)&#39;</span><span class="p">)</span>
<span class="p">])</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">result</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">toknum</span><span class="p">,</span> <span class="n">tokval</span><span class="p">))</span>
<span class="k">return</span> <span class="n">untokenize</span><span class="p">(</span><span class="n">result</span><span class="p">)</span><span class="o">.</span><span class="n">decode</span><span class="p">(</span><span class="s1">&#39;utf-8&#39;</span><span class="p">)</span>
</pre></div>
</div>
Example of tokenizing from the command line. The script:

```python
def say_hello():
    print("Hello, World!")

say_hello()
```

will be tokenized to the following output, where the first column is the range of the line/column coordinates where the token is found, the second column is the name of the token, and the final column is the value of the token (if any):
<div class="highlight-shell-session notranslate"><div class="highlight"><pre><span></span><span class="gp">$ </span>python<span class="w"> </span>-m<span class="w"> </span>tokenize<span class="w"> </span>hello.py
<span class="go">0,0-0,0: ENCODING &#39;utf-8&#39;</span>
<span class="go">1,0-1,3: NAME &#39;def&#39;</span>
<span class="go">1,4-1,13: NAME &#39;say_hello&#39;</span>
<span class="go">1,13-1,14: OP &#39;(&#39;</span>
<span class="go">1,14-1,15: OP &#39;)&#39;</span>
<span class="go">1,15-1,16: OP &#39;:&#39;</span>
<span class="go">1,16-1,17: NEWLINE &#39;\n&#39;</span>
<span class="go">2,0-2,4: INDENT &#39; &#39;</span>
<span class="go">2,4-2,9: NAME &#39;print&#39;</span>
<span class="go">2,9-2,10: OP &#39;(&#39;</span>
<span class="go">2,10-2,25: STRING &#39;&quot;Hello, World!&quot;&#39;</span>
<span class="go">2,25-2,26: OP &#39;)&#39;</span>
<span class="go">2,26-2,27: NEWLINE &#39;\n&#39;</span>
<span class="go">3,0-3,1: NL &#39;\n&#39;</span>
<span class="go">4,0-4,0: DEDENT &#39;&#39;</span>
<span class="go">4,0-4,9: NAME &#39;say_hello&#39;</span>
<span class="go">4,9-4,10: OP &#39;(&#39;</span>
<span class="go">4,10-4,11: OP &#39;)&#39;</span>
<span class="go">4,11-4,12: NEWLINE &#39;\n&#39;</span>
<span class="go">5,0-5,0: ENDMARKER &#39;&#39;</span>
</pre></div>
</div>
The exact token type names can be displayed using the `-e` option:
<div class="highlight-shell-session notranslate"><div class="highlight"><pre><span></span><span class="gp">$ </span>python<span class="w"> </span>-m<span class="w"> </span>tokenize<span class="w"> </span>-e<span class="w"> </span>hello.py
<span class="go">0,0-0,0: ENCODING &#39;utf-8&#39;</span>
<span class="go">1,0-1,3: NAME &#39;def&#39;</span>
<span class="go">1,4-1,13: NAME &#39;say_hello&#39;</span>
<span class="go">1,13-1,14: LPAR &#39;(&#39;</span>
<span class="go">1,14-1,15: RPAR &#39;)&#39;</span>
<span class="go">1,15-1,16: COLON &#39;:&#39;</span>
<span class="go">1,16-1,17: NEWLINE &#39;\n&#39;</span>
<span class="go">2,0-2,4: INDENT &#39; &#39;</span>
<span class="go">2,4-2,9: NAME &#39;print&#39;</span>
<span class="go">2,9-2,10: LPAR &#39;(&#39;</span>
<span class="go">2,10-2,25: STRING &#39;&quot;Hello, World!&quot;&#39;</span>
<span class="go">2,25-2,26: RPAR &#39;)&#39;</span>
<span class="go">2,26-2,27: NEWLINE &#39;\n&#39;</span>
<span class="go">3,0-3,1: NL &#39;\n&#39;</span>
<span class="go">4,0-4,0: DEDENT &#39;&#39;</span>
<span class="go">4,0-4,9: NAME &#39;say_hello&#39;</span>
<span class="go">4,9-4,10: LPAR &#39;(&#39;</span>
<span class="go">4,10-4,11: RPAR &#39;)&#39;</span>
<span class="go">4,11-4,12: NEWLINE &#39;\n&#39;</span>
<span class="go">5,0-5,0: ENDMARKER &#39;&#39;</span>
</pre></div>
</div>
Example of tokenizing a file programmatically, reading unicode strings instead of bytes with `generate_tokens()`:

```python
import tokenize

with tokenize.open('hello.py') as f:
    tokens = tokenize.generate_tokens(f.readline)
    for token in tokens:
        print(token)
```
Or reading bytes directly with `tokenize()`:

```python
import tokenize

with open('hello.py', 'rb') as f:
    tokens = tokenize.tokenize(f.readline)
    for token in tokens:
        print(token)
```