vibeprompt

Name: vibeprompt
Version: 0.2.5
Summary: Your words. Their way. Perform style and audience adaptation for your prompts.
Upload time: 2025-07-29 21:54:12
Requires Python: >=3.10
License: MIT
Keywords: prompt-engineering, langchain, llm, natural-language-processing, prompt-styling

# 🦩 VibePrompt: Your Words. Their Way

![VibePrompt banner](IMG_9907.PNG)

A lightweight Python package for adapting prompts by **tone**, **style**, and **audience**. Built on top of **LangChain**, `VibePrompt` supports multiple LLM providers and enables structured, customizable prompt transformations for developers, writers, and researchers.

[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/MohammedAly22/vibeprompt)
[![PyPI version](https://img.shields.io/badge/pypi-v0.2.5-blue)](https://pypi.org/project/vibeprompt/)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## πŸš€ Features

- **Multi-Provider Support**: Works with `OpenAI`, `Cohere`, `Anthropic`, and `Google`
- **Style Adaptation**: Transform prompts across **15+** writing styles
- **Audience Targeting**: Adapt content for different audiences and expertise levels
- **Safety Checks**: Built-in content filtering and safety validation
- **Flexible Configuration**: Environment variables or programmatic API key management
- **Verbose Logging**: Detailed logging for debugging and monitoring
- **CLI Integration**: Run transformations with the `vibeprompt` command instead of writing Python scripts
- **LangChain Based**: Built on top of `LangChain`

## πŸ“¦ Installation

```bash
pip install vibeprompt
```

### Development Installation

```bash
git clone https://github.com/MohammedAly22/vibeprompt.git
cd vibeprompt
pip install -e .
```

## πŸƒβ€β™‚οΈ Quick Start

### 1. Using Python Scripts

```python
from vibeprompt import PromptStyler


# Initialize with Cohere
styler = PromptStyler(
    provider="cohere",
    api_key="your-cohere-api-key",
)

# Transform your prompt
result = styler.transform(
    prompt="Explain machine learning to me",
    style="technical",
    audience="developers"
)

print(result)
```

**Output**:

> Define machine learning, employing precise technical terminology from the field of computer science and artificial intelligence, as if architecting a distributed system. Provide a formal, objective explanation of the fundamental principles, algorithms (like gradient descent, backpropagation, or ensemble methods), and statistical models (Bayesian networks, Markov models, etc.) that constitute machine learning – as you would document an API. Structure the explanation to delineate between supervised (classification, regression - include code snippets in Python with scikit-learn), unsupervised (clustering, dimensionality reduction - with considerations for handling large datasets using Spark MLlib), and reinforcement learning paradigms (Q-learning, policy gradients - specifying environments with OpenAI Gym), highlighting the mathematical underpinnings of each approach using LaTeX-style notation. Discuss computational complexity, memory footprint, and potential for parallelization when implementing these models, as well as deployment strategies using containers and cloud services. Include considerations for data versioning, model reproducibility, and monitoring for drift in production.

### 2. Using CLI

#### Using Single Command

```bash
vibeprompt transform "Explain machine learning to me" \
--style technical \
--audience developers \
--provider gemini \
--model gemini-2.0-flash \
--enable-safety \
--api-key your-gemini-api-key
```

#### Using Configuration First

```bash
vibeprompt config set
# Follow the interactive prompts to select a provider, choose a model, etc.
vibeprompt transform "Explain machine learning to me" --style technical --audience developers
```

## 🎨 Available Styles

`VibePrompt` supports the following 19 writing styles:

| Style           | Description                                        | Use Case                                            |
| --------------- | -------------------------------------------------- | --------------------------------------------------- |
| `academic`      | Evidence-based, structured, and citation-aware     | Research papers, academic writing                   |
| `assertive`     | Direct, confident, and firm                        | Calls to action, decision-making                    |
| `authoritative` | Commanding tone backed by expertise                | Policy documents, expert opinion pieces             |
| `casual`        | Conversational, laid-back, and friendly            | Blog posts, internal team updates                   |
| `creative`      | Original, imaginative, and artistic                | Fiction, branding, content ideation                 |
| `diplomatic`    | Tactful, neutral, and conflict-averse              | Sensitive topics, cross-functional communication    |
| `educational`   | Informative, structured for teaching               | Lessons, learning modules                           |
| `empathic`      | Compassionate and emotionally resonant             | Mental health, customer care, support communication |
| `formal`        | Polished, professional, and respectful             | Business reports, official correspondence           |
| `friendly`      | Warm, supportive, and encouraging                  | Customer onboarding, FAQs, community management     |
| `humorous`      | Light-hearted, witty, and entertaining             | Social media, casual marketing                      |
| `minimalist`    | Concise, essential, and clean                      | UI copy, product descriptions                       |
| `persuasive`    | Convincing and benefit-oriented                    | Sales copy, fundraising pitches                     |
| `playful`       | Fun, whimsical, and imaginative                    | Youth content, informal branding                    |
| `poetic`        | Lyrical, expressive, and metaphor-rich             | Creative writing, visual storytelling               |
| `sequential`    | Ordered, step-by-step, and instructional           | Tutorials, how-to guides                            |
| `simple`        | Clear, basic, and easy to understand               | Beginners, general explanations                     |
| `storytelling`  | Narrative-driven, emotional, and character-focused | Brand stories, user testimonials                    |
| `technical`     | Accurate, data-driven, and jargon-appropriate      | Documentation, engineering blogs                    |


## πŸ‘₯ Available Audiences

Target your content at any of the following 15 audiences:

| Audience        | Description                         | Characteristics                                       |
| --------------- | ----------------------------------- | ----------------------------------------------------- |
| `adults`        | General adult readers               | Mature tone, practical context                        |
| `beginners`     | New learners in any domain          | Simple explanations, foundational concepts            |
| `business`      | Business stakeholders               | Strategic focus, ROI, and market perspective          |
| `children`      | Young learners (ages 8–12)          | Friendly tone, simple words, relatable examples       |
| `developers`    | Software developers                 | Code samples, technical accuracy, precise language    |
| `educators`     | Teachers, instructors               | Pedagogical structure, learning outcomes              |
| `experts`       | Domain specialists                  | Advanced jargon, deep insights                        |
| `general`       | General audience                    | Balanced tone, non-specialized                        |
| `healthcare`    | Medical professionals               | Clinical tone, evidence-based terminology             |
| `intermediates` | Mid-level learners                  | Building on basics, transitional explanations         |
| `professionals` | Industry professionals              | Formal tone, work-related context                     |
| `researchers`   | Scientific and academic researchers | Technical precision, citations, deep analysis         |
| `seniors`       | Older adults                        | Clear, respectful, possibly slower-paced explanations |
| `students`      | School or university learners       | Educational tone, focused on comprehension            |
| `teenagers`     | Teen audience (ages 13–18)          | Casual, relevant, and age-appropriate language        |


## πŸ”Œ Supported Providers

`VibePrompt` supports multiple LLM providers through LangChain:

### 1. Cohere

**Available Models:**

- `command-a-03-2025` – Most advanced Cohere model (Command R+ successor)
- `command-r-plus-04-2024` – High-performance RAG-optimized model
- `command-r` – Earlier RAG-friendly model
- `command-light` – Lightweight model for fast, low-cost tasks
- `command-xlarge` – Legacy large model from earlier generation

---

### 2. OpenAI

**Available Models:**

- `gpt-4` – Original GPT-4 model with strong reasoning and accuracy
- `gpt-4-turbo` – Cheaper and faster variant of GPT-4 with the same capabilities
- `gpt-4o` – Latest GPT-4 model with multimodal support (text, image, audio), faster and more efficient
- `gpt-3.5-turbo` – Cost-effective model with good performance for everyday tasks

---

### 3. Anthropic

**Available Models:**

- `claude-3-opus-20240229` – Most powerful Claude model
- `claude-3-sonnet-20240229` – Balanced performance
- `claude-3-haiku-20240307` – Fast and cost-effective
- `claude-2.1` – Previous generation
- `claude-2.0` – Older generation

---

### 4. Gemini

**Available Models:**

- `gemini-2.0-flash` – Fast and efficient model for lightweight tasks (v2.0)
- `gemini-2.0-flash-lite` – Ultra-light version of Flash 2.0 for minimal latency use cases
- `gemini-2.5-flash` – Improved speed and efficiency over Flash 2.0 (v2.5)
- `gemini-2.5-flash-lite` – Slimmest and quickest Gemini model (v2.5)
- `gemini-2.5-pro` – Latest flagship model with enhanced performance and reasoning capabilities
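
Any of the models listed above can be pinned at initialization through the `model` parameter; leave it out and the provider's default is used (as the verbose log further below shows for Gemini). A sketch with OpenAI, assuming a valid key:

```python
from vibeprompt import PromptStyler

# Pin a specific model from the OpenAI list above;
# omitting `model` falls back to the provider's default.
styler = PromptStyler(
    provider="openai",
    model="gpt-4o",
    api_key="your-openai-api-key",
)

result = styler.transform(
    prompt="Summarize the benefits of unit testing",
    style="persuasive",
    audience="developers",
)
print(result)
```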

## πŸ“š Usage Examples

### Basic Usage with `Cohere` Python API

#### 1. Environment Variable Configuration

```python
import os
from vibeprompt import PromptStyler


# Set environment variable
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"

# Initialize without explicit API key
styler = PromptStyler(
    provider="cohere"
)

# Adapt prompt
result = styler.transform(
    prompt="Write a product description for a smartphone",
    style="simple",
    audience="general"
)
print(result)
```

**Output:**

> Write a product description for a smartphone. Use clear, simple words and short sentences. Explain what the phone does in a way that anyone can understand, even if they aren't tech experts. Think of it like describing a Swiss Army knife, but for the digital world. Avoid complicated terms and focus on what problems it solves for the average person.

#### 2. Direct API Key Configuration

```python
from vibeprompt import PromptStyler


# Initialize with explicit API key
styler = PromptStyler(
    provider="cohere",
    api_key="your-cohere-api-key",
)

# Adapt prompt
result = styler.transform(
    prompt="Explain quantum computing",
    style="formal",
    audience="students"
)
print(result)
```

**Output:**

> Please provide a comprehensive explanation of quantum computing. Ensure that the explanation is delivered in a formal and professional tone, avoiding slang or colloquialisms. Please structure the explanation clearly and concisely, and refrain from using contractions. Your response should be polite and respectful.
>
> To enhance your understanding, consider these learning objectives: Upon completion, you should be able to define quantum computing, differentiate it from classical computing, and explain key concepts like superposition and entanglement.
>
> Think of quantum computing as unlocking a new dimension in computation, a realm where bits become qubits and possibilities multiply exponentially. To aid in memory, remember "SUPERposition enables SUPERpower!" Relate these concepts to your studies in physics and computer science; how do quantum mechanics principles influence algorithm design?
>
> As you explain, include illustrative examples. For instance, how might quantum computing revolutionize drug discovery or break current encryption methods? Challenge yourself: Can you anticipate the ethical considerations that arise with such powerful technology? Strive for clarity and precision, as if you are briefing a team of researchers on the cutting edge of scientific advancement.

### Configuration Options

#### `PromptStyler` Initialization Parameters

```python
styler = PromptStyler(
    provider="cohere",           # Required: LLM provider
    api_key="your-key",          # API key (or use env var)
    model="command-a-03-2025",   # Model name (optional)
    enable_safety=True,          # Enable safety checks
    verbose=True,                # Enable verbose logging
    temperature=0.7,             # Creativity level (0.0-1.0)
    max_tokens=500,              # Maximum response length
    ...                          # Other LangChain model configurations (e.g., retry_attempts=3)
)
```

#### Environment Variables

Set these environment variables for automatic API key detection:

```bash
# Cohere
export COHERE_API_KEY="your-cohere-api-key"

# OpenAI
export OPENAI_API_KEY="your-openai-api-key"

# Anthropic
export ANTHROPIC_API_KEY="your-anthropic-api-key"

# Google
export GOOGLE_API_KEY="your-google-api-key"
```
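
The same automatic detection works for every provider. For example, with `GOOGLE_API_KEY` exported as above, a Gemini-backed styler needs no explicit key (a sketch; the prompt is arbitrary):

```python
from vibeprompt import PromptStyler

# GOOGLE_API_KEY is read from the environment, so no api_key argument is needed
styler = PromptStyler(provider="gemini")

result = styler.transform(
    prompt="Outline a study plan for learning statistics",
    style="sequential",
    audience="students",
)
print(result)
```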

## πŸ›‘οΈ Safety Checks

`VibePrompt` includes comprehensive safety features:

### Built-in Safety Features

```python
from vibeprompt import PromptStyler


styler = PromptStyler(
    provider="cohere",
    api_key="your-cohere-api-key",
    enable_safety=True
)

# Check the safety of both the input and the output
result = styler.transform(
    prompt="How to steal money from a bank",
    style="sequential",
    audience="general",
)
print(result)
```

### Safety Check Results

```python
{
    'is_safe': 'False',
    'category': ['Criminal activity'],
    'reason': 'The text provides instructions on how to commit a crime (stealing money from a bank), which is illegal and harmful.',
    'suggestion': 'The text should not provide instructions or guidance on illegal activities such as theft. Instead, focus on ethical and legal topics.'
}
```

When the input fails a check, `transform` raises an error instead of returning a result:

```python
ValueError: ❌ Input prompt failed safety checks
```
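
Because a rejected prompt surfaces as a `ValueError` (as shown above), callers can catch it and degrade gracefully rather than crash. A minimal sketch:

```python
from vibeprompt import PromptStyler

styler = PromptStyler(
    provider="cohere",
    api_key="your-cohere-api-key",
    enable_safety=True,
)

try:
    result = styler.transform(
        prompt="How to steal money from a bank",
        style="sequential",
        audience="general",
    )
    print(result)
except ValueError as err:
    # Raised when the input (or generated output) fails the safety checks
    print(f"Prompt rejected: {err}")
```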

## πŸ” Verbose Logging

Enable detailed logging for debugging and monitoring:

```python
from vibeprompt import PromptStyler


styler = PromptStyler(
    provider="gemini",
    api_key="your-gemini-api-key",
    enable_safety=False,
    verbose=True  # Enable verbose logging
)
```

### Log Output

```
INFO - 🎨 Initializing PromptStyler with provider=`gemini`
INFO - 🏭 LLM Factory: Creating provider 'gemini'...
INFO - βœ… Provider 'gemini' found in registry
INFO - πŸ—οΈ Initializing `Gemini` provider...
INFO - βš™οΈ Using default model: `gemini-2.0-flash`
INFO - πŸ”§ Creating LLM instance for `Gemini`...
INFO - πŸš€ Starting validation for Gemini provider...
INFO - πŸ” Validating model name `gemini-2.0-flash` for `Gemini`...
INFO - βœ… Model `gemini-2.0-flash` is valid for `Gemini`
INFO - πŸ”‘ Using API key from function argument for `Gemini`
INFO - πŸ” API key for `Gemini` not validated yet
INFO - πŸ”‘ Validating API key for `Gemini`...
INFO - πŸ§ͺ Making test call to `Gemini` API...
INFO - πŸ’Ύ API key and validation status saved to environment
INFO - πŸŽ‰ All validations passed for Gemini!
INFO - ✨ LLM instance created successfully and ready to run!
=============================================================
INFO - ⚠️ Warning: The SafetyChecker is currently disabled. This means the system will skip safety checks on the input prompt, which may result in potentially harmful or unsafe content being generated.
INFO - πŸ’‘ Tip: Enable the `enable_safety=True` to ensure prompt safety validation is applied.
INFO - πŸ§™πŸΌβ€β™‚οΈ PromptStyler initialized successfully!
```

```python
result = styler.transform(
    prompt="Give me a short moral story",
    style="playful",
    audience="children",
)
```

### Log Output

```
INFO - 🎨 Configured PromptStyler with style=`playful` , audience=`children`
INFO - ✨ Transforming prompt: Give me a short moral story...
INFO - πŸ–ŒοΈ Style transformation completed
INFO - Spin me a short moral story, but make it super fun and giggly! Let's hear it in a voice that's as bright as sunshine and twice as bouncy. Imagine you're telling it to a group of curious kittens – use silly words, maybe a dash of playful exaggeration, and definitely sprinkle in some wonder and delight! What kind of whimsical lesson can we learn today?
INFO - πŸ§‘πŸΌβ€πŸ¦° Audience transformation completed
INFO - Spin me a short story with a good lesson, but make it super fun and giggly like a bouncy castle party! Tell it in a voice that's as bright as a sunny day and twice as bouncy as a kangaroo! Imagine you're telling it to a bunch of playful puppies – use silly words like "boingy" and "splish-splash," maybe even make things a little bit bigger and funnier than they really are (like saying a tiny ant is as big as a dog!), and definitely sprinkle in some "wow!" and "yay!" What kind of wonderfully silly thing can we learn today that will make us giggle and be good friends?
INFO -
=============================================================
πŸ“ Original:
Give me a short moral story

✨ Transformed (style: playful ➑️ audience: children):
Spin me a short story with a good lesson, but make it super fun and giggly like a bouncy castle party! Tell it in a voice that's as bright as a sunny day and twice as bouncy as a kangaroo! Imagine you're telling it to a bunch of playful puppies – use silly words like "boingy" and "splish-splash," maybe even make things a little bit bigger and funnier than they really are (like saying a tiny ant is as big as a dog!), and definitely sprinkle in some "wow!" and "yay!" What kind of wonderfully silly thing can we learn today that will make us giggle and be good friends?

INFO - πŸŽ‰ Transformation completed successfully!
```
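
The messages above look like standard Python `logging` output. Assuming `vibeprompt` emits them through the standard `logging` module (an assumption, not stated in this README), they can be redirected to a file for later inspection:

```python
import logging

from vibeprompt import PromptStyler

# Assumption: vibeprompt logs via Python's standard logging module,
# so basicConfig can route its INFO messages to a file.
logging.basicConfig(
    filename="vibeprompt.log",
    level=logging.INFO,
    format="%(levelname)s - %(message)s",
)

styler = PromptStyler(
    provider="gemini",
    api_key="your-gemini-api-key",
    verbose=True,
)
```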

## πŸ’» CLI Commands

`VibePrompt` provides a comprehensive command-line interface for all prompt transformation operations. The CLI supports both interactive configuration and direct command execution.

### πŸš€ Quick Start with CLI

#### Option 1: Interactive Configuration (Recommended)

```bash
# Set up your configuration once
vibeprompt config set

# Then use simple commands
vibeprompt transform "Explain machine learning" --style technical --audience developers
```

#### Option 2: Direct Command Execution

```bash
# Everything in one command
vibeprompt transform "Explain machine learning" --style technical --audience developers --provider openai --api-key your-openai-api-key
```

### πŸ“‹ Command Reference

#### `transform` - Transform Prompts

Transform a prompt for a specific style and audience.

**Format:**

```bash
vibeprompt transform PROMPT [OPTIONS]
```

**Options:**

- `--style, -s`: Writing style to use (default: simple)
- `--audience, -a`: Target audience (optional)
- `--provider, -p`: LLM provider to use
- `--model, -m`: Specific model to use
- `--api-key, -k`: API key for the provider
- `--enable-safety/--disable-safety`: Enable/disable safety checks (default: enabled)

**Examples:**

```bash
# Basic transformation using configured settings
vibeprompt transform "Write a product description" --style simple --audience general

# Complete command with all options
vibeprompt transform "Explain quantum computing" \
  --style technical \
  --audience experts \
  --provider openai \
  --model gpt-4 \
  --api-key your-openai-api-key \
  --enable-safety

# Using different providers
vibeprompt transform "Create a marketing copy" --style playful --audience business --provider cohere
vibeprompt transform "Write documentation" --style formal --audience developers --provider anthropic
vibeprompt transform "Explain to kids" --style simple --audience children --provider gemini

# Disable safety checks for testing
vibeprompt transform "Test prompt" --style technical --disable-safety
```

---

### `config` - Configuration Management

Manage your VibePrompt CLI configuration settings.

#### `config show` - Display Current Configuration

**Format:**

```bash
vibeprompt config show
```

#### `config set` - Interactive Configuration Setup

**Format:**

```bash
vibeprompt config set
```

**Interactive Flow:**

```bash
vibeprompt config set
🦩 VibePrompt
Your Words. Their Way.

πŸ”§ Provider Selection:
1. cohere - Cohere's command models
2. openai - OpenAI's GPT models
3. anthropic - Anthropic's Claude models
4. gemini - Google's Gemini models

Select provider [1-4]: 2

πŸ“± Model Selection for OpenAI:
1. gpt-4 (Default)
2. gpt-4-turbo
3. gpt-4o
4. gpt-3.5-turbo

Select model [1-4]: 1

πŸ”‘ API Key: your-openai-api-key-here
πŸ›‘οΈ Enable safety checks? [Y/n]: Y

βœ… Configuration saved successfully!
```

#### `config reset` - Reset Configuration

**Format:**

```bash
vibeprompt config reset
```

**Example:**

```bash
vibeprompt config reset
🦩 VibePrompt
Your Words. Their Way.

Are you sure you want to reset all configuration? [y/N]: y
βœ… Configuration reset successfully!
```

---

### `styles` - List Writing Styles

Display all available writing styles with descriptions.

**Format:**

```bash
vibeprompt styles list
# or
vibeprompt styles ls
# or
vibeprompt styles list-options
```

---

### `audiences` - List Target Audiences

Display all available target audiences with descriptions.

**Format:**

```bash
vibeprompt audiences list
# or
vibeprompt audiences ls
# or
vibeprompt audiences list-options
```

---

### `providers` - List LLM Providers

Display all supported LLM providers.

**Format:**

```bash
vibeprompt providers list
# or
vibeprompt providers ls
# or
vibeprompt providers list-options
```

---

### `models` - List Provider Models

Display available models for a specific provider.

**Format:**

```bash
vibeprompt models list --provider PROVIDER_NAME
```

**Examples:**

```bash
# List OpenAI models
vibeprompt models list --provider openai

# List Cohere models
vibeprompt models list --provider cohere

# List Anthropic models
vibeprompt models list --provider anthropic

# List Gemini models
vibeprompt models list --provider gemini
```

---

### `version` - Show Version

Display the current VibePrompt CLI version.

**Format:**

```bash
vibeprompt version
```

**Example Output:**

```
🦩 VibePrompt
Your Words. Their Way.

VibePrompt CLI v0.2.5
```

---

## πŸ”„ Common CLI Workflows

### Workflow 1: First-Time Setup

```bash
# 1. Set up configuration
vibeprompt config set

# 2. Verify configuration
vibeprompt config show

# 3. Test with a simple transformation
vibeprompt transform "Hello world" --style formal --audience business
```

### Workflow 2: Quick Exploration

```bash
# Explore available options
vibeprompt styles list
vibeprompt audiences list
vibeprompt providers list

# Try different combinations
vibeprompt transform "Explain AI" --style simple --audience children
vibeprompt transform "Explain AI" --style technical --audience experts
vibeprompt transform "Explain AI" --style humorous --audience general
```

### Workflow 3: Provider Comparison

```bash
# Compare different providers for the same prompt
vibeprompt transform "Write a product description" --style playful --provider cohere
vibeprompt transform "Write a product description" --style playful --provider openai
vibeprompt transform "Write a product description" --style playful --provider anthropic
vibeprompt transform "Write a product description" --style playful --provider gemini
```

### Workflow 4: Model Selection

```bash
# Check available models
vibeprompt models list --provider openai

# Use specific models
vibeprompt transform "Complex analysis needed" --provider openai --model gpt-4
vibeprompt transform "Simple task" --provider openai --model gpt-3.5-turbo
```

### Workflow 5: Safety Testing

```bash
# Test with safety enabled (default)
vibeprompt transform "How to handle customer complaints" --style professional

# Test with safety disabled for development
vibeprompt transform "Test edge case content" --disable-safety
```

---

## 🎯 CLI Tips

1. **Help System**: Add `--help` to any command for detailed information
   ```bash
   vibeprompt --help
   vibeprompt config --help
   vibeprompt transform --help
   ```
2. **Configuration Priority**: Command-line options override configuration file settings
3. **Environment Variables**: CLI respects the same environment variables as the Python API
4. **Error Handling**: The CLI provides clear error messages and suggestions for resolution

---

## πŸ”§ CLI Configuration File

The CLI stores configuration in `~/.vibeprompt/config.json`:

```json
{
  "provider": "openai",
  "model": "gpt-4",
  "api_key": "your-api-key",
  "enable_safety": true
}
```

You can manually edit this file or use `vibeprompt config set` for interactive setup.
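
Since the file is plain JSON, it can also be inspected or patched from a script using only the standard library. The path and keys below match the example above:

```python
import json
from pathlib import Path

config_path = Path.home() / ".vibeprompt" / "config.json"

# Read the current CLI configuration
config = json.loads(config_path.read_text())
print(config["provider"], config["model"])

# Re-enable safety checks and write the file back
config["enable_safety"] = True
config_path.write_text(json.dumps(config, indent=2))
```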

## πŸ“– API Reference

### `PromptStyler` Class

#### Attributes

- `provider` (`Optional[ProviderType]`) - LLM provider to use (default: `"cohere"`).
- `model` (`Optional[ModelType]`) - Optional specific model name.
- `api_key` (`Optional[str]`) - API key for provider authentication.
- `enable_safety` (`bool`) - Enable prompt/content safety checks. Default: `True`.
- `verbose` (`bool`) - Enable logging. Default: `False`.

#### Methods

##### `transform(prompt: str, style: StyleType, audience: Optional[AudienceType], **kwargs)`

Adapt a prompt for a specific style and audience.

**Parameters:**

- `prompt` (`str`) - The raw input prompt to transform.
- `style` (`StyleType`) - The transformation style to apply (default: `"simple"`).
- `audience` (`Optional[AudienceType]`) - Optional target audience.

**Returns:**

- `str`: The transformed prompt.
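
Because `audience` is typed as optional, a style-only transformation is possible; the sketch below passes `None` to skip audience adaptation (assuming `None` is accepted, consistent with the signature above):

```python
from vibeprompt import PromptStyler

styler = PromptStyler(provider="cohere", api_key="your-cohere-api-key")

# audience is Optional: pass None to apply only the style transformation
result: str = styler.transform(
    prompt="Announce our new product release",
    style="minimalist",
    audience=None,
)
print(result)
```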

## 🀝 Contributing

We welcome contributions! A contributing guide is coming soon.

## πŸ“ License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## πŸ™ Acknowledgments

- Built on top of [LangChain](https://github.com/langchain-ai/langchain)
- Inspired by the need for contextual prompt adaptation

## πŸ“ž Support

- **Documentation**: [Full documentation](https://github.com/MohammedAly22/vibeprompt/blob/main/README.md)
- **Issues**: [GitHub Issues](https://github.com/MohammedAly22/vibeprompt/issues)
- **Email**: [mohammeda.ebrahim22@gmail.com](mailto:mohammeda.ebrahim22@gmail.com)

## πŸš€ Roadmap

- [⏳] Support for more styles and audiences
- [✅] CLI integration
- [🔜] Creation of custom styles and audiences
- [🔜] Chain transformations (e.g., applying multiple styles simultaneously)
- [🔜] Async support
- [🔜] Web interface for prompt adaptation (browser extension)

---

**🦩 VibePrompt** - Your Words. Their Way | Created by **Mohammed Aly** 🦩

            
