iscc-sct

Name: iscc-sct
Version: 0.1.2
Home page: https://iscc.codes
Summary: ISCC - Semantic Code Text
Upload time: 2024-08-19 17:43:21
Author: Titusz
Requires Python: <3.13,>=3.9
License: CC-BY-NC-SA-4.0
Keywords: iscc, text similarity, cross lingual, semantic similarity
# ISCC - Semantic Text-Code

[![Tests](https://github.com/iscc/iscc-sct/actions/workflows/tests.yml/badge.svg)](https://github.com/iscc/iscc-sct/actions/workflows/tests.yml)
[![Version](https://img.shields.io/pypi/v/iscc-sct.svg)](https://pypi.python.org/pypi/iscc-sct/)
[![Downloads](https://pepy.tech/badge/iscc-sct)](https://pepy.tech/project/iscc-sct)

> [!CAUTION]
> **This is a proof of concept.** All releases with version numbers below v1.0.0 may break backward
> compatibility and produce incompatible Semantic Text-Codes. The algorithms of this `iscc-sct`
> repository are experimental and not part of the official
> [ISO 24138:2024](https://www.iso.org/standard/77899.html) standard.

`iscc-sct` is a **Semantic-Code Text** implementation for the [ISCC](https://core.iscc.codes)
(*International Standard Content Code*). The Semantic-Code Text is a new ISCC-UNIT for semantic text
identification. The algorithm creates similar (low Hamming distance) codes for semantically similar
text inputs across different languages. The SCT ISCC-UNIT is a compact binary code created from a
binarized document-vector text-embedding.
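Similarity between two codes of equal length comes down to the Hamming distance of their binarized code bodies. A minimal, illustrative sketch (the byte strings below are made-up stand-ins, not real SCT digests):

```python
# Illustrative only: compare two binarized code bodies by Hamming distance.
# The hex digests below are hypothetical stand-ins for real SCT code bodies.

def hamming_distance(a: bytes, b: bytes) -> int:
    """Count the number of differing bits between two equal-length byte strings."""
    assert len(a) == len(b)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

code_en = bytes.fromhex("bd98de49f7725a2d")  # hypothetical 64-bit body
code_de = bytes.fromhex("bd98de49f7725a2f")  # near-duplicate: differs in 1 bit

print(hamming_distance(code_en, code_de))  # -> 1
```

A low distance indicates semantically similar inputs; identical inputs (or close translations) should yield a distance at or near zero.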

## What is the ISCC?

The ISCC is a combination of various similarity preserving fingerprints and an identifier for
digital media content.

ISCCs are generated algorithmically from digital content, just like cryptographic hashes. However,
instead of using a single cryptographic hash function that identifies only exact data, the ISCC uses
multiple algorithms to create a composite identifier that exhibits similarity-preserving properties
(a soft hash, or Simprint).

The component-based structure of the ISCC identifies content at multiple levels of abstraction. Each
component is self-describing, modular, and can be used separately or with others to aid in various
content identification tasks. The algorithmic design supports content deduplication, database
synchronization, indexing, integrity verification, timestamping, versioning, data provenance,
similarity clustering, anomaly detection, usage tracking, allocation of royalties, fact-checking and
general digital asset management use-cases.

## What is ISCC Semantic Text-Code?

The ISCC framework already includes a Text-Code based on lexical similarity for near-duplicate
matching. The ISCC Semantic Text-Code is a planned additional ISCC-UNIT focused on capturing a more
abstract and broader semantic similarity. It is engineered to be robust against a wide range of
variations and, most remarkably, translations of text that cannot be matched based on lexical
similarity alone.

### Translation Matching

One of the most interesting aspects of the Semantic Text-Code is its ability to generate
**(near)-identical codes for translations of the same text**. This means that the same content,
expressed in different languages, can be identified and linked, opening up new possibilities for
cross-lingual content identification and similarity detection.

## Key Features

- **Semantic Similarity**: Utilizes deep learning models to generate codes that reflect the semantic
  essence of text.
- **Translation Matching**: Creates nearly identical codes for text translations, enabling
  cross-lingual content identification.
- **Bit-Length Flexibility**: Supports generating codes of various bit lengths (up to 256 bits),
  allowing for adjustable granularity in similarity detection.
- **ISCC Compatible**: Generates codes fully compatible with the ISCC specification, facilitating
  seamless integration with existing ISCC-based systems.

## Installation

Ensure you have Python 3.9 or newer installed on your system. Install the library using:

```bash
pip install iscc-sct
```

For systems with GPU CUDA support, enhance performance by installing with:

```bash
pip install iscc-sct[gpu]
```

## Usage

Generate a Semantic Text-Code using the `create` function:

```python-repl
>>> import iscc_sct as sct
>>> text = "This is some sample text. It can be a longer document or even an entire book."
>>> sct.create(text, bits=256)
{
  "iscc": "ISCC:CADV3GG6JH3XEVRNSVYGCLJ7AAV3BOT5J7EHEZKPFXEGRJ2CTWACGZI",
  "characters": 77
}

```

For granular (per chunk) feature outputs:

```python-repl
>>> import iscc_sct as sct
>>> text = "This is some sample text. It can be a longer document or even an entire book."
>>> sct.create(text, bits=256, granular=True)
{
  "iscc": "ISCC:CADV3GG6JH3XEVRNSVYGCLJ7AAV3BOT5J7EHEZKPFXEGRJ2CTWACGZI",
  "characters": 77,
  "features": [
    {
      "maintype": "semantic",
      "subtype": "text",
      "version": 0,
      "simprints": [
        {
          "simprint": "XZjeSfdyVi0",
          "offset": 0,
          "size": 77,
          "content": "This is some sample text. It can be a longer document or even an entire book."
        }
      ]
    }
  ]
}

```
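The granular result is a plain mapping, so per-chunk simprints can be collected with ordinary Python. A small sketch; the literal dict below simply mirrors the example output shown above:

```python
# Collect (offset, simprint) pairs from a granular result dict.
# The literal below mirrors the example `sct.create(..., granular=True)` output.
result = {
    "iscc": "ISCC:CADV3GG6JH3XEVRNSVYGCLJ7AAV3BOT5J7EHEZKPFXEGRJ2CTWACGZI",
    "characters": 77,
    "features": [
        {
            "maintype": "semantic",
            "subtype": "text",
            "version": 0,
            "simprints": [
                {
                    "simprint": "XZjeSfdyVi0",
                    "offset": 0,
                    "size": 77,
                    "content": "This is some sample text. It can be a longer document or even an entire book.",
                }
            ],
        }
    ],
}

simprints = [
    (sp["offset"], sp["simprint"])
    for feature in result["features"]
    for sp in feature["simprints"]
]
print(simprints)  # -> [(0, 'XZjeSfdyVi0')]
```

The `offset`/`size` fields locate each chunk in the source text, which makes granular matches traceable back to the exact passage that produced them.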

The installation also provides an `sct` command-line tool:

```shell
usage: sct [-h] [-b BITS] [-g] [-d] [path]

Generate Semantic Text-Codes for text files.

positional arguments:
  path                  Path to text files (supports glob patterns) or 'gui' to launch Gradio demo.

options:
  -h, --help            show this help message and exit
  -b BITS, --bits BITS  Bit-Length of Code (default 256)
  -g, --granular        Activate granular processing.
  -d, --debug           Show debugging messages.
```

## How It Works

`iscc-sct` employs the following process:

1. Splits the text into overlapping chunks (using syntactically sensible breakpoints).
1. Uses a pre-trained deep learning model for text embedding.
1. Generates feature vectors capturing essential characteristics of the chunks.
1. Aggregates these vectors and binarizes them to produce a Semantic Text-Code.
1. Prefixes the binarized vector with the matching ISCC header, encodes it with base32, and adds the
   "ISCC:" prefix.

This process ensures robustness to variations and translations, enabling cross-lingual matching
based on a short Simprint.
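Steps 3 to 5 can be sketched with stdlib tools alone. Everything here is a toy: the "embedding" vectors are made up, and the two header bytes are a placeholder, not the real ISCC-UNIT header for Semantic Text-Codes:

```python
import base64

# Toy per-chunk "embedding" vectors (real ones come from the deep learning model).
chunk_vectors = [
    [0.2, -0.5, 0.1, -0.3, 0.7, -0.1, 0.4, -0.9],
    [0.3, -0.4, -0.2, -0.1, 0.6, 0.2, 0.5, -0.8],
]

# Step 4: aggregate (element-wise mean) and binarize by sign.
dim = len(chunk_vectors[0])
mean = [sum(v[i] for v in chunk_vectors) / len(chunk_vectors) for i in range(dim)]
bits = [1 if x >= 0 else 0 for x in mean]

# Pack the bit vector into bytes (MSB first).
body = bytes(
    sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
    for i in range(0, len(bits), 8)
)

# Step 5: prefix a header, base32-encode, and add the "ISCC:" prefix.
# b"\x00\x00" is a placeholder, NOT the real Semantic Text-Code header.
header = b"\x00\x00"
code = "ISCC:" + base64.b32encode(header + body).decode().rstrip("=")
print(code)  # -> ISCC:AAAI4
```

Real codes use 256-dimensional (or larger) embeddings and the proper ISCC header, but the aggregate-binarize-encode shape of the pipeline is the same.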

## Development and Contributing

We welcome contributions to enhance the capabilities and efficiency of this proof of concept. For
development, install the project in development mode using [Poetry](https://python-poetry.org):

```shell
git clone https://github.com/iscc/iscc-sct.git
cd iscc-sct
poetry install
```

If you have suggestions for improvements or bug fixes, please open an issue or pull request. For
major changes, please open an issue first to discuss your ideas.

**We particularly welcome recommendations for other multilingual text embedding models trained with
Matryoshka Representation Learning (MRL) and optimized for binarization. Such contributions could
significantly improve the performance and efficiency of the ISCC Semantic Text-Code generation.**

## Gradio Demo

This repository also provides an interactive Gradio demo that allows you to explore the capabilities
of ISCC Semantic Text-Code. The demo showcases:

- Generation of ISCC Semantic Text-Codes for input texts
- Comparison of two texts and their similarity based on the generated codes
- Visualization of text chunking and granular matches
- Adjustable parameters like ISCC bit-length and maximum tokens per chunk

You can access the live version of the Gradio demo at:
[https://huggingface.co/spaces/iscc/iscc-sct](https://huggingface.co/spaces/iscc/iscc-sct)

### Running the Gradio Demo Locally

To run the Gradio demo locally, you first need to install the `iscc-sct` package with the optional
`demo` dependency:

```shell
pip install iscc-sct[demo]
```

This will ensure that Gradio and other necessary dependencies for the demo are installed.

After installation, you can use the `sct` command-line tool that comes with the package:

```shell
sct gui
```

This command will launch the Gradio interface in your default web browser, allowing you to interact
with the demo on your local machine.

## Supported Languages

Arabic, Armenian, Bengali, Bosnian, Bulgarian, Burmese, Catalan, Chinese (China), Chinese (Taiwan),
Croatian, Czech, Danish, Dutch, English, Estonian, Farsi, Finnish, French, French (Canada),
Galician, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian,
Japanese, Kannada, Korean, Kurdish, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Marathi,
Mongolian, Norwegian Bokmål, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian,
Serbian, Sinhala, Slovak, Slovenian, Spanish, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian,
Urdu, Vietnamese.

## Future Work

### Shift-Resistant Semantic Chunking

The current chunking strategy tries to maximize chunk sizes (up to 127 tokens) while still
splitting at lexically sensible boundaries with an overlap of up to 48 tokens. See
[text-splitter](https://github.com/benbrandt/text-splitter).

Cross-document chunk matching via granular Simprints can likely be improved significantly with a
semantically aware and shift-resistant chunking strategy. Better shift resistance would improve the
chances that the boundaries detected for semantically similar text sequences in different documents
are aligned.

### MRL based Embeddings

A text embedding model trained with
[Matryoshka Representation Learning](https://arxiv.org/pdf/2205.13147) may yield better results with
short 64-bit Semantic Text-Codes.

### Larger Chunk Sizes

A text embedding model with support for a larger `max_token` size (currently 128) may yield
higher-order granular simprints based on larger chunks of text.

## Acknowledgements

- Text Chunking: [text-splitter](https://github.com/benbrandt/text-splitter)
- Text Embeddings:
  [Sentence-Transformers](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)

            
