# tonic-textual

- **Version:** 3.0.2
- **Summary:** Wrappers around the Tonic Textual API
- **Author:** Adam Kamor
- **License:** MIT
- **Homepage:** https://www.tonic.ai/
- **Requires Python:** >=3.7, <4.0
- **Keywords:** tonic.ai, tonic, tonic textual
- **Uploaded:** 2024-12-03
            <a id="readme-top"></a>

<h1 align="center">
  <img style="vertical-align:middle" height="200" src="https://raw.githubusercontent.com/TonicAI/textual/main/images/textual-logo.png">
</h1>

<p align="center">Unblock AI initiatives by maximizing your free-text assets through realistic data de-identification and high quality data extraction 🚀</p>

<p align="center">
    <a href="https://www.python.org/">
      <img alt="Build" src="https://img.shields.io/badge/Made%20with-Python-1f425f.svg?color=purple">
    </a>
    <a href="https://github.com/tonicai/textual_sdk_internal/blob/master/LICENSE">
      <img alt="License" src="https://img.shields.io/badge/license-MIT-blue">
    </a>
    <a href='https://tonic-ai-textual-sdk.readthedocs-hosted.com/en/latest/?badge=latest'>
      <img src='https://readthedocs.com/projects/tonic-ai-textual-sdk/badge/?version=latest' alt='Documentation Status' />
    </a>
</p>

<p align="center">
  <a href="https://tonic-ai-textual-sdk.readthedocs-hosted.com/en/latest/">Documentation</a>
  |
  <a href="https://textual.tonic.ai/signup">Get an API key</a>
  |
  <a href="https://github.com/tonicai/textual_sdk/issues/new?labels=bug&template=bug-report---.md">Report a bug</a>
  |
  <a href="https://github.com/tonicai/textual_sdk/issues/new?labels=enhancement&template=feature-request---.md">Request a feature</a>
</p>

Textual makes it easy to build safe AI models and applications on sensitive customer data. It is used across industries, with a primary focus on finance, healthcare, and customer support. Build safe models by using Textual to identify customer PII/PHI, then generate synthetic text and documents that you can use to train your models without inadvertently embedding PII/PHI into your model weights.

Textual comes with built-in data pipeline functionality so that it scales with you. Use our SDK to redact text or to extract relevant information from complex documents before you build your data pipelines.


## Key Features

- 🔎 NER. Our models are fast and accurate. Use them on real-world, complex, and messy unstructured data to find the exact entities that you care about.
- 🧬 Synthesis. We don't just find sensitive data. We also synthesize it, to provide you with a new version of your data that 
is suitable for model training and AI development.
- ⛏️ Extraction. We support a variety of file formats in addition to txt. We can extract interesting data from PDFs, DOCX files, images, and more.


<!-- TABLE OF CONTENTS -->

## 📚 Contents
<ol>
  <li><a href="#installation">Installation</a></li>
  <li><a href="#getting-started">Getting started</a></li>
  <li><a href="#ner_usage">NER usage</a></li>
  <li><a href="#parse_usage">Parse usage</a></li>
  <li><a href="#ui_automation">UI automation</a></li>
  <li><a href="#roadmap">Bug reports and feature requests</a></li>
  <li><a href="#contributing">Contributing</a></li>
  <li><a href="#license">License</a></li>
  <li><a href="#contact">Contact</a></li>
</ol>



<!-- GETTING STARTED -->
## 📦 Installation

1. Get a free API key at [Textual](https://textual.tonic.ai).
2. Install the package from PyPI:
   ```sh
   pip install tonic-textual
   ```
3. You can pass your API key as an argument directly into SDK calls, or you can save it to your environment.
   ```sh
   export TONIC_TEXTUAL_API_KEY=<API Key>
   ```

<p align="right">(<a href="#readme-top">back to top</a>)</p>

## 🏃‍♂ Getting started

This library supports the following workflows:

* NER detection, along with entity tokenization and synthesis
* Data extraction of unstructured files such as PDFs and Office documents (docx, xlsx).

Each workflow has its own client. Each client supports the same set of constructor arguments.

```python
from tonic_textual.redact_api import TextualNer
from tonic_textual.parse_api import TextualParse

textual_ner = TextualNer()
textual_parse = TextualParse()
```

Both clients support the following optional arguments:

- ```base_url``` - The URL of the server that hosts Tonic Textual. Defaults to https://textual.tonic.ai.

- ```api_key``` - Your API key. If not specified, you must set the ```TONIC_TEXTUAL_API_KEY``` environment variable.

- ```verify``` - Whether to verify the server's SSL certificate. Default is ```True```.


## 🔎 NER usage

Textual can identify entities within free text. It works on raw text and on content from files, including pdf, docx, xlsx, images, txt, and csv files. 

### Free text

```python
raw_redaction = textual_ner.redact("My name is John and I live in Atlanta.")
```

```raw_redaction``` returns a response similar to the following:

```json
{
    "original_text": "My name is John and I live in Atlanta.",
    "redacted_text": "My name is [NAME_GIVEN_dySb5] and I live in [LOCATION_CITY_FgBgz8WW].",
    "usage": 9,
    "de_identify_results": [
        {
            "start": 11,
            "end": 15,
            "new_start": 11,
            "new_end": 29,
            "label": "NAME_GIVEN",
            "text": "John",
            "score": 0.9,
            "language": "en",
            "new_text": "[NAME_GIVEN_dySb5]"
        },
        {
            "start": 30,
            "end": 37,
            "new_start": 44,
            "new_end": 68,
            "label": "LOCATION_CITY",
            "text": "Atlanta",
            "score": 0.9,
            "language": "en",
            "new_text": "[LOCATION_CITY_FgBgz8WW]"
        }
    ]
}
```

The ```redacted_text``` property provides the new text. In the new text, identified entities are replaced with tokenized values. Each identified entity is listed in the ```de_identify_results``` array.
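The ```start```/```end``` offsets index into the original text, and ```new_start```/```new_end``` into the redacted text. As a quick local illustration (no SDK call involved), you can splice the reported spans back into the original sentence yourself, using a dict that mirrors the sample response above:

```python
# Rebuild redacted_text locally from the spans in a redact() response.
response = {
    "original_text": "My name is John and I live in Atlanta.",
    "de_identify_results": [
        {"start": 11, "end": 15, "new_text": "[NAME_GIVEN_dySb5]"},
        {"start": 30, "end": 37, "new_text": "[LOCATION_CITY_FgBgz8WW]"},
    ],
}

def splice(original, entities):
    out, cursor = [], 0
    # Walk the spans left to right, copying untouched text between them.
    for ent in sorted(entities, key=lambda e: e["start"]):
        out.append(original[cursor:ent["start"]])
        out.append(ent["new_text"])
        cursor = ent["end"]
    out.append(original[cursor:])
    return "".join(out)

print(splice(response["original_text"], response["de_identify_results"]))
# -> My name is [NAME_GIVEN_dySb5] and I live in [LOCATION_CITY_FgBgz8WW].
```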

You can also choose to synthesize entities instead of tokenizing them. To synthesize specific entities, use the optional ```generator_config``` argument.

```python
raw_redaction = textual_ner.redact("My name is John and I live in Atlanta.", generator_config={'LOCATION_CITY':'Synthesis', 'NAME_GIVEN':'Synthesis'})
```

In the response, this generates a new ```redacted_text``` value that contains the synthetic entities. For example:

> My name is Alfonzo and I live in Wilkinsburg.

### Files

Textual can also identify, tokenize, and synthesize text within files such as PDF and DOCX. The result is a new file where the specified entities are either tokenized or synthesized.  

To generate a redacted file:

```python
with open('file.pdf','rb') as f:
  ref_id = textual_ner.start_file_redact(f, 'file.pdf')

with open('redacted_file.pdf','wb') as of:
  file_bytes = textual_ner.download_redacted_file(ref_id)
  of.write(file_bytes)
```

The ```download_redacted_file``` method takes similar arguments to the ```redact()``` method. It also supports a ```generator_config``` parameter to adjust which entities are tokenized and synthesized.

### Consistency

When entities are tokenized, each tokenized value is unique to the original value. A given entity value always maps to the same token. To map a token back to its original value, use the ```unredact``` function call.

Synthetic entities are consistent. This means that a given entity, such as 'Atlanta', is always mapped to the same fake city. Synthetic values can potentially collide and are not reversible.

To change the underlying mapping of both tokens and synthetic values, in the ```redact()``` function call, pass in the optional ```random_seed``` parameter.  
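Conceptually, consistent tokenization behaves like a keyed deterministic function of the entity value: the same value (and seed) always yields the same token, and changing the seed remaps everything. A toy stand-in built on a hash illustrates the behavior — this is only an illustration, not Textual's actual algorithm:

```python
import hashlib

def toy_token(label, value, random_seed=0):
    # Deterministic: the same (seed, label, value) always yields the same token.
    digest = hashlib.sha256(f"{random_seed}:{label}:{value}".encode()).hexdigest()[:8]
    return f"[{label}_{digest}]"

print(toy_token("LOCATION_CITY", "Atlanta"))     # stable across calls
print(toy_token("LOCATION_CITY", "Atlanta", 1))  # a different seed remaps it
```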

_For more examples, refer to the [Textual SDK documentation](https://textual.tonic.ai/docs/index.html)._

<p align="right">(<a href="#readme-top">back to top</a>)</p>

## ⛏️ Parse usage

Textual supports the extraction of text and other content from files. Textual currently supports:

- pdf
- png, tif, jpg
- txt, csv, tsv, and other plaintext formats
- docx, xlsx

Textual takes these unstructured files and converts them to a structured representation in JSON.  

The JSON output has file-specific pieces. For example, table and KVP detection is only performed on PDFs and images. However, all files support the following JSON properties:

```json
{
  "fileType": "<file type>",
  "content": {
    "text": "<Markdown file content>",
    "hash": "<hashed file content>",
    "entities": [   //Entry for each entity in the file
      {
        "start": <start location>,
        "end": <end location>,
        "label": "<value type>",
        "text": "<value text>",
        "score": <confidence score>
      }
    ]
  },
  "schemaVersion": <integer schema version>
}
```

PDFs and images have additional properties for ```tables``` and ```kvps```.

DocX files support ```headers```, ```footers```, and ```endnotes```.

Xlsx files break down the content by the individual sheets.

For a detailed breakdown of the JSON schema for each file type, go to the [JSON schema information in the Textual guide](https://docs.tonic.ai/textual/pipelines/viewing-pipeline-results/pipeline-json-structure).
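Because the ```content.entities``` array uses the same span format across file types, downstream code can stay file-type agnostic. For example, a small helper that groups detected entities by label, run against a minimal result dict shaped like the schema above (the values are made up):

```python
from collections import defaultdict

def entities_by_label(parsed):
    # Group the flat entity list under content.entities by its label.
    grouped = defaultdict(list)
    for ent in parsed["content"]["entities"]:
        grouped[ent["label"]].append(ent["text"])
    return dict(grouped)

# A minimal parse result following the schema above (illustrative values).
sample = {
    "fileType": "Pdf",
    "content": {
        "text": "Invoice for John, Atlanta",
        "hash": "abc123",
        "entities": [
            {"start": 12, "end": 16, "label": "NAME_GIVEN", "text": "John", "score": 0.9},
            {"start": 18, "end": 25, "label": "LOCATION_CITY", "text": "Atlanta", "score": 0.9},
        ],
    },
    "schemaVersion": 7,
}

print(entities_by_label(sample))
# -> {'NAME_GIVEN': ['John'], 'LOCATION_CITY': ['Atlanta']}
```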


To parse a single file, you can use the SDK directly:

```python
with open('invoice.pdf','rb') as f:
  parsed_file = textual_parse.parse_file(f.read(), 'invoice.pdf')
```

The returned ```parsed_file``` is a ```FileParseResult```, which provides helper methods that you can use to retrieve content from the document.

- ```get_markdown(generator_config={})``` retrieves the document as Markdown. To tokenize or synthesize the Markdown, pass in a list of entities to ```generator_config```.

- ```get_chunks(generator_config={}, metadata_entities=[])``` chunks the files in a form suitable for vector database ingestion. To tokenize or synthesize chunks, or enrich them with entity level metadata, provide a list of entities. The listed entities should be relevant to the questions that are asked of the RAG system. For example, if you are building a RAG for front line customer support reps, you might expect to include 'PRODUCT' and 'ORGANIZATION' as metadata entities.
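Once chunks carry entity metadata, a RAG pipeline can pre-filter them before retrieval. The sketch below shows that filtering step over a hypothetical chunk shape — the ```metadata``` layout here is illustrative, not the SDK's exact output:

```python
def chunks_mentioning(chunks, label):
    # Keep only chunks whose metadata lists at least one entity of this label.
    return [c for c in chunks if c["metadata"].get(label)]

# Hypothetical chunks, as might come back from get_chunks(metadata_entities=['PRODUCT']).
chunks = [
    {"text": "The Acme 3000 would not power on.", "metadata": {"PRODUCT": ["Acme 3000"]}},
    {"text": "Customer asked about billing.", "metadata": {}},
]

print(chunks_mentioning(chunks, "PRODUCT"))  # only the first chunk survives
```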

In addition to processing files from your local system, you can reference files directly from Amazon S3. The ```parse_s3_file``` function call behaves the same as ```parse_file```, but requires ```bucket``` and ```key``` arguments to identify the file in Amazon S3. It uses boto3 to retrieve files from Amazon S3.

_For more examples, refer to the [Textual SDK documentation](https://textual.tonic.ai/docs/index.html)_

<p align="right">(<a href="#readme-top">back to top</a>)</p>


## 💻 UI automation

The Textual UI supports file redaction and parsing. It lets users orchestrate jobs and process files at scale, and integrates with various bucket solutions such as Amazon S3, as well as systems such as SharePoint and Databricks Unity Catalog volumes.

You can use the SDK for actions such as building smart pipelines (for parsing) and dataset collections (for file redaction).

_For more examples, refer to the [Textual SDK documentation](https://textual.tonic.ai/docs/index.html)_

<p align="right">(<a href="#readme-top">back to top</a>)</p>


<!-- ROADMAP -->
## Bug reports and feature requests

To submit a bug or feature request, go to [open issues](https://github.com/tonicai/textual_sdk/issues). We try to be responsive: issues filed there should receive a prompt response from the Textual team.

<p align="right">(<a href="#readme-top">back to top</a>)</p>


<!-- CONTRIBUTING -->
## Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.

If you have a suggestion that would make this better, fork the repo and create a pull request.

1. Fork the project
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a pull request

You can also simply open an issue with the tag "enhancement".

Don't forget to give the project a star! Thanks again!

<p align="right">(<a href="#readme-top">back to top</a>)</p>


<!-- LICENSE -->
## License

Distributed under the MIT License. For more information, see `LICENSE.txt`.


<!-- CONTACT -->
## Contact

Tonic AI - [@tonicfakedata](https://x.com/tonicfakedata) - support@tonic.ai

Project Link: [Textual](https://tonic.ai/textual)


            
