promptwright


Name: promptwright
Version: 1.2.1
Summary: LLM based Synthetic Data Generation
Author: Luke Hinds
Requires Python: >=3.11
Upload time: 2024-11-25 02:34:46
# Promptwright - Synthetic Dataset Generation Library

[![Tests](https://github.com/StacklokLabs/promptwright/actions/workflows/test.yml/badge.svg)](https://github.com/StacklokLabs/promptwright/actions/workflows/test.yml)
[![Python Version](https://img.shields.io/pypi/pyversions/promptwright.svg)](https://pypi.org/project/promptwright/)

![promptwright-cover](https://github.com/user-attachments/assets/5e345bda-df66-474b-90e7-f488d8f89032)

Promptwright is a Python library from [Stacklok](https://stacklok.com) designed
for generating large synthetic datasets using a local LLM. The library offers
a flexible and easy-to-use set of interfaces that let users generate
prompt-led synthetic datasets.

Promptwright was inspired by [redotvideo/pluto](https://github.com/redotvideo/pluto);
in fact, it started as a fork, but ended up largely being a rewrite to allow
dataset generation against a local LLM model.

The library interfaces with Ollama, making it easy to just pull a model and run
Promptwright, but other providers could be used as long as they provide a
compatible API (we are happy to help expand the library to support other
providers; just open an issue).

## Features

- **Local LLM Client Integration**: Interact with Ollama based models
- **Configurable Instructions and Prompts**: Define custom instructions and system prompts
- **YAML Configuration**: Define your generation tasks using YAML configuration files
- **Command Line Interface**: Run generation tasks directly from the command line
- **Push to Hugging Face**: Push the generated dataset to Hugging Face Hub with automatic dataset cards and tags
- **System Message Control**: Choose whether to include system messages in the generated dataset

## Getting Started

### Prerequisites

- Python 3.11+
- Poetry (for dependency management)
- Ollama CLI installed and running (see [Ollama Installation](https://ollama.com/))
- A model pulled via Ollama (see [Model Compatibility](#model-compatibility))
- (Optional) Hugging Face account and API token for dataset upload

### Installation

#### pip

You can install Promptwright using pip:

```bash
pip install promptwright
```

#### Development Installation

To install the prerequisites, you can use the following commands:

```bash
# Install Poetry if you haven't already
curl -sSL https://install.python-poetry.org | python3 -

# Install promptwright and its dependencies
git clone https://github.com/StacklokLabs/promptwright.git
cd promptwright
poetry install
```

### Usage

Promptwright offers two ways to define and run your generation tasks:

#### 1. Using YAML Configuration (Recommended)

Create a YAML file defining your generation task:

```yaml
system_prompt: "You are a helpful assistant. You provide clear and concise answers to user questions."

topic_tree:
  args:
    root_prompt: "Capital Cities of the World."
    model_system_prompt: "<system_prompt_placeholder>"
    tree_degree: 3
    tree_depth: 2
    temperature: 0.7
    model_name: "ollama/mistral:latest"
  save_as: "basic_prompt_topictree.jsonl"

data_engine:
  args:
    instructions: "Please provide training examples with questions about capital cities."
    system_prompt: "<system_prompt_placeholder>"
    model_name: "ollama/mistral:latest"
    temperature: 0.9
    max_retries: 2

dataset:
  creation:
    num_steps: 5
    batch_size: 1
    model_name: "ollama/mistral:latest"
    sys_msg: true  # Include system message in dataset (default: true)
  save_as: "basic_prompt_dataset.jsonl"

# Optional Hugging Face Hub configuration
huggingface:
  # Repository in format "username/dataset-name"
  repository: "your-username/your-dataset-name"
  # Token can also be provided via HF_TOKEN environment variable or --hf-token CLI option
  token: "your-hf-token"
  # Additional tags for the dataset (optional)
  # "promptwright" and "synthetic" tags are added automatically
  tags:
    - "promptwright-generated-dataset"
    - "geography"
```
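The `<system_prompt_placeholder>` values above are presumably replaced with the top-level `system_prompt` before generation runs. As an illustration only (the substitution mechanism is an assumption, not the library's documented internals), the idea can be sketched in plain Python:

```python
# Sketch of <system_prompt_placeholder> substitution (assumed behavior,
# not promptwright's actual implementation).
SYSTEM_PROMPT = (
    "You are a helpful assistant. You provide clear and concise "
    "answers to user questions."
)

def substitute_placeholders(config: dict, system_prompt: str) -> dict:
    """Recursively replace the placeholder string in all string values."""
    result = {}
    for key, value in config.items():
        if isinstance(value, dict):
            result[key] = substitute_placeholders(value, system_prompt)
        elif isinstance(value, str):
            result[key] = value.replace("<system_prompt_placeholder>", system_prompt)
        else:
            result[key] = value
    return result

config = {
    "topic_tree": {"args": {"model_system_prompt": "<system_prompt_placeholder>"}},
    "data_engine": {"args": {"system_prompt": "<system_prompt_placeholder>"}},
}
resolved = substitute_placeholders(config, SYSTEM_PROMPT)
```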

Run using the CLI:

```bash
promptwright start config.yaml
```

The CLI supports various options to override configuration values:

```bash
promptwright start config.yaml \
  --topic-tree-save-as output_tree.jsonl \
  --dataset-save-as output_dataset.jsonl \
  --model-name ollama/llama3 \
  --temperature 0.8 \
  --tree-degree 4 \
  --tree-depth 3 \
  --num-steps 10 \
  --batch-size 2 \
  --sys-msg true \
  --hf-repo username/dataset-name \
  --hf-token your-token \
  --hf-tags tag1 --hf-tags tag2
```

#### Hugging Face Hub Integration

Promptwright supports automatic dataset upload to the Hugging Face Hub with the following features:

1. **Dataset Upload**: Upload your generated dataset directly to Hugging Face Hub
2. **Dataset Cards**: Automatically creates and updates dataset cards
3. **Automatic Tags**: Adds "promptwright" and "synthetic" tags automatically
4. **Custom Tags**: Support for additional custom tags
5. **Flexible Authentication**: HF token can be provided via:
   - CLI option: `--hf-token your-token`
   - Environment variable: `export HF_TOKEN=your-token`
   - YAML configuration: `huggingface.token`
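The precedence among these three token sources is not stated here; a plausible resolution order (CLI flag first, then environment variable, then YAML — this ordering is an assumption for illustration, not documented promptwright behavior) could look like:

```python
import os

def resolve_hf_token(cli_token=None, yaml_token=None):
    """Pick an HF token from the CLI flag, the HF_TOKEN environment
    variable, or the YAML config, in that (assumed) order of precedence."""
    return cli_token or os.environ.get("HF_TOKEN") or yaml_token

# With no CLI flag and no HF_TOKEN set, the YAML value is used.
os.environ.pop("HF_TOKEN", None)
token = resolve_hf_token(cli_token=None, yaml_token="yaml-token")
```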

Example using environment variable:
```bash
export HF_TOKEN=your-token
promptwright start config.yaml --hf-repo username/dataset-name
```

Or pass it in as a CLI option:
```bash
promptwright start config.yaml --hf-repo username/dataset-name --hf-token your-token
```

#### 2. Using Python Code

You can also create generation tasks programmatically using Python code. There
are several examples in the `examples` directory that demonstrate this approach.

Example Python usage:

```python
from promptwright import DataEngine, EngineArguments, TopicTree, TopicTreeArguments

system_prompt = "You are a creative writing instructor providing writing prompts and example responses."

tree = TopicTree(
    args=TopicTreeArguments(
        root_prompt="Creative Writing Prompts",
        model_system_prompt=system_prompt,
        tree_degree=5,
        tree_depth=4,
        temperature=0.9,
        model_name="ollama/llama3"
    )
)

engine = DataEngine(
    args=EngineArguments(
        instructions="Generate creative writing prompts and example responses.",
        system_prompt="You are a creative writing instructor providing writing prompts and example responses.",
        model_name="ollama/llama3",
        temperature=0.9,
        max_retries=2,
        sys_msg=True,  # Include system message in dataset (default: true)
    )
)
```

### Development

The project uses Poetry for dependency management. Here are some common development commands:

```bash
# Install dependencies including development dependencies
make install

# Format code
make format

# Run linting
make lint

# Run tests
make test

# Run security checks
make security

# Build the package
make build

# Run all checks and build
make all
```

### Prompt Output Examples

With sys_msg=true (default):
```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are tasked with designing an immersive virtual reality experience that transports users to a fantastical world of wonder."
    },
    {
      "role": "user",
      "content": "Create a descriptive passage about a character discovering their hidden talents."
    },
    {
      "role": "assistant",
      "content": "As she stared at the canvas, Emma's fingers hovered above the paintbrushes, as if hesitant to unleash the colors that had been locked within her. The strokes began with bold abandon, swirling blues and greens merging into a mesmerizing dance of light and shadow. With each passing moment, she felt herself becoming the art – her very essence seeping onto the canvas like watercolors in a spring storm. The world around her melted away, leaving only the vibrant symphony of color and creation."
    }
  ]
}
```

With sys_msg=false:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "Create a descriptive passage about a character discovering their hidden talents."
    },
    {
      "role": "assistant",
      "content": "As she stared at the canvas, Emma's fingers hovered above the paintbrushes, as if hesitant to unleash the colors that had been locked within her. The strokes began with bold abandon, swirling blues and greens merging into a mesmerizing dance of light and shadow. With each passing moment, she felt herself becoming the art – her very essence seeping onto the canvas like watercolors in a spring storm. The world around her melted away, leaving only the vibrant symphony of color and creation."
    }
  ]
}
```
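The only difference between the two records is the leading system message; dropping it from an existing record is a simple filter. A minimal sketch (not library code) of that transformation:

```python
def strip_system_messages(record: dict) -> dict:
    """Return a copy of a chat record without 'system' role messages,
    mirroring the sys_msg=false output shown above."""
    return {"messages": [m for m in record["messages"] if m["role"] != "system"]}

record = {
    "messages": [
        {"role": "system", "content": "You are tasked with designing..."},
        {"role": "user", "content": "Create a descriptive passage..."},
        {"role": "assistant", "content": "As she stared at the canvas..."},
    ]
}
stripped = strip_system_messages(record)
```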

## Model Compatibility

The library should work with most LLM models. It has been tested with the
following models so far:

- **Mistral**
- **LLaMA3**
- **Qwen2.5**

## Unpredictable Behavior

The library is designed to generate synthetic data based on the prompts and instructions
provided. The quality of the generated data is dependent on the quality of the prompts
and the model used. The library does not guarantee the quality of the generated data.

Large Language Models can sometimes generate unpredictable or inappropriate
content, and the authors of this library are not responsible for content
generated by the models. We recommend reviewing the generated data before using
it in any production environment.

Large Language Models can also fail to follow the JSON formatting behavior
defined by the prompt and may generate invalid JSON. This is a known issue with
the underlying model and not the library. We handle these errors by retrying
the generation process and filtering out invalid JSON. The failure rate is low,
but it can happen. Each failure is reported in a final summary.
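The retry-and-filter behavior described above can be approximated with a small validator: parse each line, keep records that are valid JSON with the expected `messages` shape, and count the rest as failures. This is a sketch of the idea, not promptwright's actual error handling:

```python
import json

def filter_valid_records(jsonl_lines):
    """Keep lines that parse as JSON objects with a 'messages' list;
    count everything else as a failure for the final summary."""
    valid, failures = [], 0
    for line in jsonl_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            failures += 1
            continue
        if isinstance(record, dict) and isinstance(record.get("messages"), list):
            valid.append(record)
        else:
            failures += 1
    return valid, failures

lines = [
    '{"messages": [{"role": "user", "content": "hi"}]}',
    '{"messages": [',           # truncated, invalid JSON from the model
    '{"note": "no messages"}',  # valid JSON, wrong shape
]
valid, failures = filter_valid_records(lines)
```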

## Contributing

If something here could be improved, please open an issue or submit a pull request.

## License

This project is licensed under the Apache 2 License. See the `LICENSE` file for more details.


            
