pixeltable 0.4.12

Summary: AI Data Infrastructure: Declarative, Multimodal, and Incremental
Author: Pixeltable, Inc. <contact@pixeltable.com>
Homepage: https://pixeltable.com/
Repository: https://github.com/pixeltable/pixeltable
Documentation: https://docs.pixeltable.com/
Requires Python: >=3.10
License: Apache 2.0
Keywords: ai, artificial-intelligence, chatbot, computer-vision, data-science, database, feature-engineering, feature-store, genai, llm, machine-learning, ml, mlops, multimodal, vector-database
Upload time: 2025-09-05 05:00:15
            <div align="center">
<img src="https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/pixeltable-logo-large.png"
     alt="Pixeltable Logo" width="50%" />
<br></br>

<h2>Declarative Data Infrastructure for Multimodal AI Apps</h2>

[![License](https://img.shields.io/badge/License-Apache%202.0-0530AD.svg)](https://opensource.org/licenses/Apache-2.0)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pixeltable?logo=python&logoColor=white&)
![Platform Support](https://img.shields.io/badge/platform-Linux%20%7C%20macOS%20%7C%20Windows-E5DDD4)
<br>
[![tests status](https://github.com/pixeltable/pixeltable/actions/workflows/pytest.yml/badge.svg)](https://github.com/pixeltable/pixeltable/actions/workflows/pytest.yml)
[![nightly status](https://github.com/pixeltable/pixeltable/actions/workflows/nightly.yml/badge.svg)](https://github.com/pixeltable/pixeltable/actions/workflows/nightly.yml)
[![stress-tests status](https://github.com/pixeltable/pixeltable/actions/workflows/stress-tests.yml/badge.svg)](https://github.com/pixeltable/pixeltable/actions/workflows/stress-tests.yml)
[![PyPI Package](https://img.shields.io/pypi/v/pixeltable?color=4D148C)](https://pypi.org/project/pixeltable/)
[![My Discord (1306431018890166272)](https://img.shields.io/badge/💬-Discord-%235865F2.svg)](https://discord.gg/QPyqFYx2UN)

[**Installation**](https://docs.pixeltable.com/docs/overview/installation) |
[**Quick Start**](https://docs.pixeltable.com/docs/overview/quick-start) |
[**Documentation**](https://docs.pixeltable.com/) |
[**API Reference**](https://pixeltable.github.io/pixeltable/) |
[**Examples**](https://docs.pixeltable.com/docs/examples/use-cases) |
[**Discord Community**](https://discord.gg/QPyqFYx2UN)

</div>

---

## 💾 Installation

```bash
pip install pixeltable
```

**Pixeltable unifies storage, retrieval, and orchestration for multimodal data.**
It stores metadata and computed results persistently, typically in a `.pixeltable` directory in your workspace.
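
To verify the installation, here's a minimal smoke test using the table API covered in the Quick Start below (the table name `install_check` is arbitrary):

```python
import pixeltable as pxt

# Create a throwaway table, insert a row, and read it back.
tbl = pxt.create_table('install_check', {'msg': pxt.String}, if_exists='replace')
tbl.insert([{'msg': 'hello, pixeltable'}])
print(tbl.select(tbl.msg).collect())
```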

## Pixeltable Demo

https://github.com/user-attachments/assets/b50fd6df-5169-4881-9dbe-1b6e5d06cede

## Quick Start

With Pixeltable, you define your *entire* data processing and AI workflow declaratively using
**[computed columns](https://docs.pixeltable.com/docs/datastore/computed-columns)** on
**[tables](https://docs.pixeltable.com/docs/datastore/tables-and-operations)**.
Focus on your application logic, not the data plumbing.

```bash
pip install -qU torch transformers openai pixeltable
```

```python
# Basic setup
import pixeltable as pxt

# Table with multimodal column types (Image, Video, Audio, Document)
t = pxt.create_table('images', {'input_image': pxt.Image})

# Computed columns: define transformation logic once, runs on all data
from pixeltable.functions import huggingface

# Object detection with automatic model management
t.add_computed_column(
    detections=huggingface.detr_for_object_detection(
        t.input_image,
        model_id='facebook/detr-resnet-50'
    )
)

# Extract specific fields from detection results
t.add_computed_column(detections_text=t.detections.label_text)

# OpenAI Vision API integration with built-in rate limiting and async management
from pixeltable.functions import openai

t.add_computed_column(
    vision=openai.vision(
        prompt="Describe what's in this image.",
        image=t.input_image,
        model='gpt-4o-mini'
    )
)

# Insert data directly from an external URL
# Automatically triggers computation of all computed columns
t.insert(input_image='https://raw.github.com/pixeltable/pixeltable/release/docs/resources/images/000000000025.jpg')

# Query - All data, metadata, and computed results are persistently stored
# Structured and unstructured data are returned side-by-side
results = t.select(
    t.input_image,
    t.detections_text,
    t.vision
).collect()
```

## ✨ What Happened?

* **Data Ingestion & Storage:** References [files](https://docs.pixeltable.com/docs/datastore/bringing-data)
    (images, videos, audio, docs) in place, handles structured data.
* **Transformation & Processing:** Applies *any* Python function ([UDFs](https://docs.pixeltable.com/docs/datastore/custom-functions))
    or built-in operations ([chunking, frame extraction](https://docs.pixeltable.com/docs/datastore/iterators)) automatically.
* **AI Model Integration:** Runs inference ([embeddings](https://docs.pixeltable.com/docs/datastore/embedding-index),
    [object detection](https://docs.pixeltable.com/docs/examples/vision/yolox),
    [LLMs](https://docs.pixeltable.com/docs/integrations/frameworks#cloud-llm-providers)) as part of the data pipeline.
* **Indexing & Retrieval:** Creates and manages vector indexes for fast
    [semantic search](https://docs.pixeltable.com/docs/datastore/embedding-index#phase-3%3A-query)
    alongside traditional filtering.
* **Incremental Computation:** Only [recomputes](https://docs.pixeltable.com/docs/overview/quick-start) what's
    necessary when data or code changes, saving time and cost.
* **Versioning & Lineage:** Automatically tracks data and schema changes for reproducibility. See below for an example
    that uses "time travel" to query an older version of a table.

Pixeltable can ingest data from local storage or directly from a URL. When external media files are referenced by URL,
as in the `insert` statement above, Pixeltable caches them locally before processing. See the
[Working with External Files](https://github.com/pixeltable/pixeltable/blob/main/docs/notebooks/feature-guides/working-with-external-files.ipynb)
notebook for more details.
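
Because computed columns are declarative, later inserts only run the pipeline for the new rows; previously computed results are reused. A minimal sketch continuing the Quick Start table above (reusing the same sample image URL):

```python
# Inserting another row triggers object detection and the OpenAI call
# for this row only; existing results are not recomputed.
t.insert(input_image='https://raw.github.com/pixeltable/pixeltable/release/docs/resources/images/000000000025.jpg')

print(t.select(t.input_image, t.detections_text, t.vision).collect())
```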

## 🗄️ Where Did My Data Go?

Pixeltable workloads generate various outputs, including structured outputs (such as bounding boxes for detected
objects) and unstructured outputs (such as generated images or video). By default, everything resides in your
Pixeltable user directory at `~/.pixeltable`. Structured data is stored in a Postgres instance in `~/.pixeltable`.
Generated media (images, video, audio, documents) are stored outside the Postgres database, in separate flat files in
`~/.pixeltable/media`. Those media files are referenced by URL in the database, and Pixeltable provides the "glue" for
a unified table interface over both structured and unstructured data.

In general, the user is not expected to interact directly with the data in `~/.pixeltable`; the data store is fully
managed by Pixeltable and is intended to be accessed through the Pixeltable Python SDK.
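
If you're curious about the on-disk layout, you can inspect it with the standard library (read-only; the directory is managed by Pixeltable and shouldn't be modified by hand):

```python
from pathlib import Path

pxt_home = Path.home() / '.pixeltable'

# Top-level contents: the Postgres instance, the media directory, etc.
for entry in sorted(pxt_home.iterdir()):
    print(entry.name)

# Generated media files live under ~/.pixeltable/media
media_dir = pxt_home / 'media'
if media_dir.exists():
    print(f'{sum(1 for _ in media_dir.iterdir())} entries in media/')
```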

## ⚖️ Key Principles

* **[Unified Multimodal Interface:](https://docs.pixeltable.com/docs/datastore/tables-and-operations)** `pxt.Image`,
    `pxt.Video`, `pxt.Audio`, `pxt.Document`, etc. – manage diverse data consistently.

    ```python
    t = pxt.create_table(
        'media',
        {
            'img': pxt.Image,
            'video': pxt.Video
        }
    )
    ```

* **[Declarative Computed Columns:](https://docs.pixeltable.com/docs/datastore/computed-columns)** Define processing
    steps once; they run automatically on new/updated data.

    ```python
    t.add_computed_column(
        classification=huggingface.vit_for_image_classification(
            t.img
        )
    )
    ```

* **[Built-in Vector Search:](https://docs.pixeltable.com/docs/datastore/embedding-index)** Add embedding indexes and
    perform similarity searches directly on tables/views.

    ```python
    t.add_embedding_index(
        'img',
        embedding=clip.using(
            model_id='openai/clip-vit-base-patch32'
        )
    )

    sim = t.img.similarity("cat playing with yarn")
    ```

* **[On-the-Fly Data Views:](https://docs.pixeltable.com/docs/datastore/views)** Create virtual tables using iterators
    for efficient processing without data duplication.

    ```python
    frames = pxt.create_view(
        'frames',
        videos,
        iterator=FrameIterator.create(
            video=videos.video,
            fps=1
        )
    )
    ```

* **[Seamless AI Integration:](https://docs.pixeltable.com/docs/integrations/frameworks)** Built-in functions for
    OpenAI, Anthropic, Hugging Face, CLIP, YOLOX, and more.

    ```python
    t.add_computed_column(
        response=openai.chat_completions(
            messages=[{"role": "user", "content": t.prompt}],
            model='gpt-4o-mini'
        )
    )
    ```

* **[Bring Your Own Code:](https://docs.pixeltable.com/docs/datastore/custom-functions)** Extend Pixeltable with simple
    Python User-Defined Functions.

    ```python
    @pxt.udf
    def format_prompt(context: list, question: str) -> str:
        return f"Context: {context}\nQuestion: {question}"
    ```

* **[Agentic Workflows / Tool Calling:](https://docs.pixeltable.com/docs/examples/chat/tools)** Register `@pxt.udf` or
    `@pxt.query` functions as tools and orchestrate LLM-based tool use (incl. multimodal).

    ```python
    # Example tools: a UDF and a Query function for RAG
    tools = pxt.tools(get_weather_udf, search_context_query)

    # LLM decides which tool to call; Pixeltable executes it
    t.add_computed_column(
        tool_output=invoke_tools(tools, t.llm_tool_choice)
    )
    ```

* **[Data Persistence:](https://docs.pixeltable.com/docs/datastore/tables-and-operations#data-operations)** All data,
    metadata, and computed results are automatically stored and versioned.

    ```python
    t = pxt.get_table('my_table')  # Get a handle to an existing table
    t.select(t.account, t.balance).collect()  # Query its contents
    t.revert()  # Undo the last modification to the table and restore its previous state
    ```

* **[Time Travel:](https://docs.pixeltable.com/docs/datastore/tables-and-operations#data-operations)** By default,
    Pixeltable preserves the full change history of each table, and any prior version can be selected and queried.

    ```python
    t.history()  # Display a human-readable list of all prior versions of the table
    old_version = pxt.get_table('my_table:472')  # Get a handle to a specific table version
    old_version.select(old_version.account, old_version.balance).collect()  # Query the older version
    ```

* **[SQL-like Python Querying:](https://docs.pixeltable.com/docs/datastore/filtering-and-selecting)** Familiar syntax
    combined with powerful AI capabilities.

    ```python
    results = (
        t.where(t.score > 0.8)
        .order_by(t.timestamp)
        .select(t.image, score=t.score)
        .limit(10)
        .collect()
    )
    ```

## 💡 Key Examples

*(See the [Full Quick Start](https://docs.pixeltable.com/docs/overview/quick-start) or
[Notebook Gallery](#-notebook-gallery) for more details)*

**1. Multimodal Data Store and Data Transformation (Computed Column):**

```bash
pip install pixeltable
```

```python
import pixeltable as pxt

# Create a table
t = pxt.create_table(
    'films',
    {'name': pxt.String, 'revenue': pxt.Float, 'budget': pxt.Float},
    if_exists="replace"
)

t.insert([
    {'name': 'Inside Out', 'revenue': 800.5, 'budget': 200.0},
    {'name': 'Toy Story', 'revenue': 1073.4, 'budget': 200.0}
])

# Add a computed column for profit - runs automatically!
t.add_computed_column(profit=(t.revenue - t.budget), if_exists="replace")

# Query the results
print(t.select(t.name, t.profit).collect())
# Output includes the automatically computed 'profit' column
```
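
Because `profit` is a computed column, it also stays up to date incrementally: rows inserted later are computed on arrival, without touching existing rows. For example (the figures for the new row are illustrative):

```python
# Insert another film; 'profit' is computed automatically for the new row only.
t.insert([{'name': 'Up', 'revenue': 735.1, 'budget': 175.0}])
print(t.select(t.name, t.profit).collect())
```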

**2. Object Detection with [YOLOX](https://github.com/pixeltable/pixeltable-yolox):**

```bash
pip install pixeltable pixeltable-yolox
```

```python
import PIL.Image
import pixeltable as pxt
from yolox.models import Yolox
from yolox.data.datasets import COCO_CLASSES

t = pxt.create_table('image', {'image': pxt.Image}, if_exists='replace')

# Insert some images
prefix = 'https://upload.wikimedia.org/wikipedia/commons'
paths = [
    '/1/15/Cat_August_2010-4.jpg',
    '/e/e1/Example_of_a_Dog.jpg',
    '/thumb/b/bf/Bird_Diversity_2013.png/300px-Bird_Diversity_2013.png'
]
t.insert({'image': prefix + p} for p in paths)

@pxt.udf
def detect(image: PIL.Image.Image) -> list[str]:
    # Run YOLOX-S on the image and map the numeric labels to COCO class names
    model = Yolox.from_pretrained("yolox_s")
    result = model([image])
    coco_labels = [COCO_CLASSES[label] for label in result[0]["labels"]]
    return coco_labels

t.add_computed_column(classification=detect(t.image))

print(t.select().collect())
```

**3. Image Similarity Search (CLIP Embedding Index):**

```bash
pip install pixeltable sentence-transformers
```

```python
import pixeltable as pxt
from pixeltable.functions.huggingface import clip

# Create image table and add sample images
images = pxt.create_table('my_images', {'img': pxt.Image}, if_exists='replace')
images.insert([
    {'img': 'https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Orange_tabby_cat_sitting_on_fallen_leaves-Hisashi-01A.jpg/1920px-Orange_tabby_cat_sitting_on_fallen_leaves-Hisashi-01A.jpg'},
    {'img': 'https://upload.wikimedia.org/wikipedia/commons/d/d5/Retriever_in_water.jpg'}
])

# Add CLIP embedding index for similarity search
images.add_embedding_index(
    'img',
    embedding=clip.using(model_id='openai/clip-vit-base-patch32')
)

# Text-based image search
query_text = "a dog playing fetch"
sim_text = images.img.similarity(query_text)
results_text = images.order_by(sim_text, asc=False).limit(3).select(
    image=images.img, similarity=sim_text
).collect()
print("--- Text Query Results ---")
print(results_text)

# Image-based image search
query_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/7a/Huskiesatrest.jpg/2880px-Huskiesatrest.jpg'
sim_image = images.img.similarity(query_image_url)
results_image = images.order_by(sim_image, asc=False).limit(3).select(
    image=images.img, similarity=sim_image
).collect()
print("--- Image URL Query Results ---")
print(results_image)
```

**4. Multimodal/Incremental RAG Workflow (Document Chunking & LLM Call):**

```bash
pip install pixeltable openai spacy sentence-transformers
```

```bash
python -m spacy download en_core_web_sm
```

```python
import pixeltable as pxt
import pixeltable.functions as pxtf
from pixeltable.functions import openai, huggingface
from pixeltable.iterators import DocumentSplitter

# Organize tables into directories
directory = "my_docs"
pxt.drop_dir(directory, if_not_exists="ignore", force=True)
pxt.create_dir(directory)

# Create a document table and add a PDF
docs = pxt.create_table(f'{directory}.docs', {'doc': pxt.Document})
docs.insert([{'doc': 'https://github.com/pixeltable/pixeltable/raw/release/docs/resources/rag-demo/Jefferson-Amazon.pdf'}])

# Create chunks view with sentence-based splitting
chunks = pxt.create_view(
    'doc_chunks',
    docs,
    iterator=DocumentSplitter.create(document=docs.doc, separators='sentence')
)

# Explicitly create the embedding function object
embed_model = huggingface.sentence_transformer.using(model_id='all-MiniLM-L6-v2')
# Add embedding index using the function object
chunks.add_embedding_index('text', string_embed=embed_model)

# Define query function for retrieval - Returns a DataFrame expression
@pxt.query
def get_relevant_context(query_text: str, limit: int = 3):
    sim = chunks.text.similarity(query_text)
    # Return a list of strings (text of relevant chunks)
    return chunks.order_by(sim, asc=False).limit(limit).select(chunks.text)

# Build a simple Q&A table
qa = pxt.create_table(f'{directory}.qa_system', {'prompt': pxt.String})

# 1. Add the retrieved context (a list of relevant chunk texts)
qa.add_computed_column(context=get_relevant_context(qa.prompt))

# 2. Format the prompt with context
qa.add_computed_column(
    final_prompt=pxtf.string.format(
        """
        PASSAGES:
        {0}

        QUESTION:
        {1}
        """,
        qa.context,
        qa.prompt
    )
)

# 3. Generate the answer using the formatted prompt column
qa.add_computed_column(
    answer=openai.chat_completions(
        model='gpt-4o-mini',
        messages=[{
            'role': 'user',
            'content': qa.final_prompt
        }]
    ).choices[0].message.content
)

# Ask a question and get the answer
qa.insert([{'prompt': 'What can you tell me about Amazon?'}])
print("--- Final Answer ---")
print(qa.select(qa.answer).collect())
```
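
The workflow is incremental: each new row inserted into `qa` runs retrieval, prompt formatting, and the LLM call for that row only. For example (the question is just an illustration based on the sample PDF):

```python
# A follow-up question; only this new row is processed end to end.
qa.insert([{'prompt': 'What does the document say about Jefferson?'}])
print(qa.select(qa.prompt, qa.answer).collect())
```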

## 📚 Notebook Gallery

Explore Pixeltable's capabilities interactively:

| Topic | Notebook | Topic | Notebook |
|:----------|:-----------------|:-------------------------|:---------------------------------:|
| **Fundamentals** | | **Integrations** | |
| 10-Min Tour | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/pixeltable-basics.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> | OpenAI | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/integrations/working-with-openai.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> |
| Tables & Ops | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/fundamentals/tables-and-data-operations.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> | Anthropic | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/integrations/working-with-anthropic.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> |
| UDFs | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/feature-guides/udfs-in-pixeltable.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> | Together AI | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/integrations/working-with-together.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> |
| Embedding Index | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/feature-guides/embedding-and-vector-indexes.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> | Label Studio | <a target="_blank" href="https://docs.pixeltable.com/docs/cookbooks/vision/label-studio"> <img src="https://img.shields.io/badge/📚%20Docs-013056" alt="Visit Docs"/></a> |
| External Files | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/feature-guides/working-with-external-files.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> | Mistral | <a target="_blank" href="https://colab.research.google.com/github/mistralai/cookbook/blob/main/third_party/Pixeltable/incremental_prompt_engineering_and_model_comparison.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Github"/> |
| **Use Cases** | | **Sample Apps** | |
| RAG Demo | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/use-cases/rag-demo.ipynb">  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> | Multimodal Agent | <a target="_blank" href="https://huggingface.co/spaces/Pixeltable/Multimodal-Powerhouse"> <img src="https://img.shields.io/badge/🤗%20Demo-FF7D04" alt="HF Space"/></a> |
| Object Detection | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/use-cases/object-detection-in-videos.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> | Image/Text Search | <a target="_blank" href="https://github.com/pixeltable/pixeltable/tree/main/docs/sample-apps/text-and-image-similarity-search-nextjs-fastapi">  <img src="https://img.shields.io/badge/🖥️%20App-black.svg" alt="GitHub App"/> |
| Audio Transcription | <a target="_blank" href="https://colab.research.google.com/github/pixeltable/pixeltable/blob/release/docs/notebooks/use-cases/audio-transcriptions.ipynb">  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> | Discord Bot | <a target="_blank" href="https://github.com/pixeltable/pixeltable/blob/main/docs/sample-apps/context-aware-discord-bot"> <img src="https://img.shields.io/badge/%F0%9F%92%AC%20Bot-%235865F2.svg" alt="GitHub App"/></a> |

## 🚨 Maintaining Production-Ready Multimodal AI Apps is Still Too Hard

Building robust AI applications, especially [multimodal](https://docs.pixeltable.com/docs/datastore/bringing-data) ones,
requires stitching together numerous tools:

* ETL pipelines for data loading and transformation.
* Vector databases for semantic search.
* Feature stores for ML models.
* Orchestrators for scheduling.
* Model serving infrastructure for inference.
* Separate systems for parallelization, caching, versioning, and lineage tracking.

This complex "data plumbing" slows down development, increases costs, and makes applications brittle and hard to reproduce.

## 🔮 Roadmap (2025)

### Cloud Infrastructure and Deployment

We're working on a hosted Pixeltable service that will:

* Enable Multimodal Data Sharing of Pixeltable Tables and Views | [Waitlist](https://www.pixeltable.com/waitlist)
* Provide a persistent cloud instance
* Turn Pixeltable workflows (Tables, Queries, UDFs) into API endpoints/[MCP Servers](https://github.com/pixeltable/pixeltable-mcp-server)

## 🤝 Contributing

We love contributions! Whether it's reporting bugs, suggesting features, improving documentation, or submitting code
changes, please check out our [Contributing Guide](CONTRIBUTING.md) and join the
[Discussions](https://github.com/pixeltable/pixeltable/discussions) or our
[Discord Server](https://discord.gg/QPyqFYx2UN).

## 🏢 License

Pixeltable is licensed under the Apache 2.0 License.

            
