<p align="center">
<img height="150" src="https://github.com/cedricrupb/ptokenizers/raw/main/resources/code_tokenize.svg" />
</p>
------------------------------------------------
> Fast tokenization and structural analysis of
> any programming language in Python
Programming Language Processing (PLP) brings the capabilities of modern NLP systems to the world of programming languages.
To build high-performance PLP systems, existing methods often take advantage of the fully defined nature of programming languages. In particular, the syntactic structure of a program can be exploited to gain knowledge about it.
**code.tokenize** provides easy access to the syntactic structure of a program. The tokenizer converts a program into a sequence of program tokens ready for further end-to-end processing.
Because each token is related to an AST node, the program representation can easily be extended with further syntactic information.
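For illustration, here is a minimal sketch of what the token-to-AST link enables. The `ast_node` attribute used below is an assumption about the token API, made purely for this example; consult the actual token class for the real name.

```python
import code_tokenize as ctok

tokens = ctok.tokenize("def my_func(): pass", lang="python")

# Assumption: each token keeps a reference to its backing tree-sitter
# node (hypothetically called `ast_node` here), whose `type` field
# reveals the syntactic role of the token.
for tok in tokens:
    node = getattr(tok, "ast_node", None)
    print(tok, "->", node.type if node is not None else "?")
```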
## Installation
The package is tested under Python 3. It can be installed via:
```bash
pip install code-tokenize
```
## Usage
code.tokenize can tokenize nearly any program in just a few lines of code:
```python
import code_tokenize as ctok

# Python
ctok.tokenize(
    '''
    def my_func():
        print("Hello World")
    ''',
    lang="python")

# Output: [def, my_func, (, ), :, #NEWLINE#, ...]

# Java
ctok.tokenize(
    '''
    public static void main(String[] args){
        System.out.println("Hello World");
    }
    ''',
    lang="java",
    syntax_error="ignore")

# Output: [public, static, void, main, (, String, [, ], args, ), {, System, ...]

# JavaScript
ctok.tokenize(
    '''
    alert("Hello World");
    ''',
    lang="javascript",
    syntax_error="ignore")

# Output: [alert, (, "Hello World", ), ;]
```
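The Java and JavaScript examples above pass `syntax_error="ignore"`. As a hedged sketch of why this matters, assume the tokenizer complains about broken input by default (the exact default behavior is an assumption here):

```python
import code_tokenize as ctok

# Incomplete snippet: the closing brace is missing.
snippet = 'if (x > 0) { console.log(x);'

# Assumption: without syntax_error="ignore", the parse error is
# reported, e.g. by raising an exception.
try:
    ctok.tokenize(snippet, lang="javascript")
except Exception as err:
    print("strict tokenization failed:", err)

# With syntax_error="ignore", tokenization proceeds best-effort.
tokens = ctok.tokenize(snippet, lang="javascript", syntax_error="ignore")
print(tokens)
```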
## Supported languages
code.tokenize employs [tree-sitter](https://tree-sitter.github.io/tree-sitter/) as a backend. Therefore, in principle, any language supported by tree-sitter is also
supported by a tokenizer in code.tokenize.
For some languages, this library supports additional
features that are not directly provided by tree-sitter.
We therefore distinguish between three language classes
and support the following language identifiers:
- `native`: python
- `advanced`: java
- `basic`: javascript, go, ruby, cpp, c, swift, rust, ...
Languages in the `native` class support all features
of this library and are extensively tested. `advanced` languages are tested but do not support the full feature set. Languages of the `basic` class are not tested and
only support the feature set of the backend. They can still be used for tokenization and AST parsing.
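Because `basic` languages go through the same `tokenize` entry point shown above, trying one is straightforward; Go serves as an example here (the result is, per the classification above, untested):

```python
import code_tokenize as ctok

# A `basic` language only offers the backend's feature set,
# but tokenization and AST parsing still work.
tokens = ctok.tokenize(
    '''
    package main

    func main() {}
    ''',
    lang="go",
    syntax_error="ignore")
print(tokens)
```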
## How to contribute
**Is your language not natively supported by code.tokenize, or does the tokenization seem incorrect?** Then change it!
While code.tokenize is developed mainly as a helper library for internal research projects, we welcome pull requests of any sort, whether a new feature or a bug fix.
**Want to help test more languages?**
Our goal is to support as many languages as possible at the `native` level. However, languages at the `basic` level are completely untested. You can help by testing `basic` languages and reporting issues in the tokenization process!
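One lightweight way to test a `basic` language is a round-trip check: tokenize a snippet and verify that the token stream still spells out the original program. The sketch below assumes that `str(token)` yields the token's surface text, as the example outputs above suggest; whitespace handling is deliberately simplified:

```python
import code_tokenize as ctok

def roundtrip_ok(code, lang):
    tokens = ctok.tokenize(code, lang=lang, syntax_error="ignore")
    # Assumption: str(token) returns the token's source text.
    reconstructed = " ".join(str(t) for t in tokens)
    # Compare modulo whitespace; a mismatch hints at a tokenizer issue.
    return "".join(code.split()) == "".join(reconstructed.split())

print(roundtrip_ok('alert("Hello World");', "javascript"))
```

If the check fails for a language you care about, that is exactly the kind of issue worth reporting.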
## Release history
* 0.2.0
    * Major API redesign!
    * CHANGE: AST parsing is now done by an external library: [code_ast](https://github.com/cedricrupb/code_ast)
    * CHANGE: Visitor pattern instead of custom tokenizer
    * CHANGE: Custom visitors for language-dependent tokenization
* 0.1.0
    * The first proper release
    * CHANGE: Language-specific tokenizer configuration
    * CHANGE: Basic analyses of the program structure and token roles
    * CHANGE: Documentation
* 0.0.1
    * Work in progress
## Project Info
The goal of this project is to provide developers in the
programming language processing community with easy
access to program tokenization and AST parsing. It is currently developed as a helper library for internal research projects and will therefore only be updated
as needed.
Feel free to open an issue if anything unexpected
happens.
Distributed under the MIT license. See ``LICENSE`` for more information.
This project was developed as part of our research related to:
```bibtex
@inproceedings{richter2022tssb,
title={TSSB-3M: Mining single statement bugs at massive scale},
  author={Richter, Cedric and Wehrheim, Heike},
booktitle={MSR},
year={2022}
}
```
We thank the developers of the [tree-sitter](https://tree-sitter.github.io/tree-sitter/) library. Without tree-sitter, this project would not be possible.