qwentastic

Name: qwentastic
Version: 0.1.0
Home page: None
Summary: A simple interface for running Qwen locally
Upload time: 2025-02-17 00:19:17
Maintainer: None
Docs URL: None
Author: Your Name
Requires Python: >=3.8
License: MIT
Keywords: qwen, ai, language model
Requirements: No requirements were recorded.

# Qwentastic 🚀

A powerful yet simple interface for running Qwen locally. This package provides an elegant way to interact with the Qwen 1.5 14B model through just two intuitive functions.

## 🌟 Features

- **Simple One-Liner Interface**: Just two functions to remember
  - `qwen_data()`: Set context and purpose
  - `qwen_prompt()`: Get AI responses
- **Efficient Model Management**:
  - Singleton pattern ensures the model loads only once
  - Automatic resource management
  - State persistence between calls
- **Smart Memory Handling**:
  - Optimized for both CPU and GPU environments
  - Automatic device detection and optimization
- **Production Ready**:
  - Thread-safe implementation
  - Error handling and recovery
  - Detailed logging

## 📦 Installation

```bash
pip install qwentastic
```

## 🚀 Quick Start

```python
from qwentastic import qwen_data, qwen_prompt

# Set the AI's purpose/context
qwen_data("You are a Python expert focused on writing clean, efficient code")

# Get responses
response = qwen_prompt("How do I implement a decorator in Python?")
print(response)
```

## 💻 System Requirements

- Python >= 3.8
- RAM: 16GB minimum (32GB recommended)
- Storage: 30GB free space for model files
- CUDA-capable GPU recommended (but not required)

### Hardware Recommendations
- **CPU**: Modern multi-core processor
- **GPU**: NVIDIA GPU with 12GB+ VRAM (for optimal performance)
- **RAM**: 32GB for smooth operation
- **Storage**: SSD recommended for faster model loading

## ⚡ Performance Notes

First run will:
1. Download the Qwen 1.5 14B model (~30GB)
2. Cache it locally for future use
3. Initialize the model (may take a few minutes)

Subsequent runs will be much faster as the model is cached.
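
If you would rather fetch the weights ahead of time (for example on a machine with a faster connection), you can pre-populate the cache with `huggingface_hub`. A minimal sketch, assuming the package pulls the `Qwen/Qwen1.5-14B-Chat` checkpoint from the Hugging Face Hub (check the package source for the exact repo ID):

```python
# Pre-download the model weights into the local Hugging Face cache so the
# first qwen_prompt() call does not have to wait for a ~30GB download.
# Assumption: qwentastic loads "Qwen/Qwen1.5-14B-Chat"; adjust the repo ID if needed.
from huggingface_hub import snapshot_download

snapshot_download("Qwen/Qwen1.5-14B-Chat")
```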

## 🔧 Advanced Usage

### Custom Temperature

```python
from qwentastic import qwen_data, qwen_prompt

# Set creative context
qwen_data("You are a creative storyteller")

# Get more creative responses with higher temperature
response = qwen_prompt(
    "Write a short story about a robot learning to paint",
    temperature=0.8  # More creative (default is 0.7)
)
```

### Memory Management

The package handles model loading automatically. Once loaded, the model stays in memory until your program exits, so repeated calls avoid reload overhead without any manual resource management on your part.
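
The loader itself is not part of the public API, but the lazy, thread-safe singleton described above can be pictured roughly like the sketch below (illustrative only, using `transformers`; the model ID and helper name are assumptions, not the package's actual source):

```python
import threading

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

_MODEL_ID = "Qwen/Qwen1.5-14B-Chat"  # assumption: the checkpoint qwentastic targets
_lock = threading.Lock()
_model = None
_tokenizer = None


def _get_model():
    """Load the model and tokenizer once, then reuse them on every call."""
    global _model, _tokenizer
    with _lock:  # only one thread performs the (expensive) load
        if _model is None:
            device = "cuda" if torch.cuda.is_available() else "cpu"
            _tokenizer = AutoTokenizer.from_pretrained(_MODEL_ID)
            _model = AutoModelForCausalLM.from_pretrained(
                _MODEL_ID,
                torch_dtype=torch.float16 if device == "cuda" else torch.float32,
            ).to(device)
    return _model, _tokenizer
```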

## 🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## 📝 License

MIT License - feel free to use this in your projects!

## ⚠️ Important Notes

- First run requires internet connection for model download
- Model files are cached in the Hugging Face cache directory (see the snippet after this list)
- GPU acceleration requires CUDA support
- CPU inference is supported but significantly slower
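
If the default cache location sits on a small disk, the standard `HF_HOME` environment variable lets you redirect it. A minimal sketch (the path is only an example, and it must be set before `qwentastic` or `transformers` is imported):

```python
import os

# Redirect the Hugging Face cache (and therefore the ~30GB of model files)
# to a larger drive. This must run before importing qwentastic or transformers.
os.environ["HF_HOME"] = "/mnt/bigdisk/huggingface"  # example path

from qwentastic import qwen_data, qwen_prompt  # imported after the cache is redirected
```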

## 🔍 Troubleshooting

Common issues and solutions:

1. **Out of Memory**:
   - Try reducing batch size
   - Close other GPU-intensive applications
   - Switch to CPU if needed

2. **Slow Inference**:
   - Check GPU utilization
   - Ensure CUDA is properly installed and visible to PyTorch (see the check after this list)
   - Consider hardware upgrades for better performance
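
For both issues, it helps to confirm what hardware PyTorch actually sees. This check uses plain `torch` calls and nothing qwentastic-specific:

```python
import torch

# Report whether CUDA is visible and how much VRAM the first GPU offers.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GiB")
    print(f"Allocated: {torch.cuda.memory_allocated(0) / 1024**3:.1f} GiB")
else:
    print("No CUDA device visible; inference will fall back to the (much slower) CPU")
```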

## 📚 Citation

If you use this in your research, please cite:

```bibtex
@software{qwentastic,
  title = {Qwentastic: Simple Interface for Qwen 1.5},
  author = {Your Name},
  year = {2024},
  url = {https://github.com/yourusername/qwentastic}
}
```
