| Field | Value |
|-------|-------|
| Name | lmsp |
| Version | 0.5.6 |
| home_page | None |
| Summary | A command-line interface for sending prompts to LM Studio loaded models |
| upload_time | 2025-07-11 09:24:04 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | MIT |
| keywords | lm-studio, cli, ai, llm, prompt |
| requirements | requests |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# lmsp - LM Studio Prompt CLI
A simple command-line interface for sending prompts to models loaded in LM Studio.
## Features
- Send prompts to locally loaded LM Studio models
- Uses the first loaded model by default (or specify with `-m`)
- **Requires pre-loaded models**: Models must be loaded using `lms load <model>` or the LM Studio desktop app
- Support for piping input from other commands
- Verbose logging with `-v` flag for debugging
- Simple and fast command-line interface
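A quick taste of these features together (`notes.txt` is just an illustration; any text file works):
```bash
# Pipe a file in, choose a loaded model with -m, and log verbosely with -v
cat notes.txt | lmsp -v -m google/gemma-3n-e4b "Summarize the key points:"
```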
## Installation
### Quick Install from PyPI (Recommended)
```bash
# Install globally with pip
pip install lmsp
# Or install globally with uv (recommended)
uv tool install lmsp
```
### Alternative Installation Methods
#### Install from source
```bash
# Using uv tool (recommended - installs globally)
uv tool install git+https://github.com/kmlawson/lmsp.git
# Or clone and install locally
git clone https://github.com/kmlawson/lmsp.git
cd lmsp
uv tool install .
```
#### Install in virtual environment
```bash
# Using uv
uv venv
source .venv/bin/activate
uv pip install lmsp
# Or using pip
python -m venv venv
source venv/bin/activate
pip install lmsp
```
#### Development installation
```bash
# Clone and install in development mode
git clone https://github.com/kmlawson/lmsp.git
cd lmsp
uv pip install -e . # or pip install -e .
```
## Configuration
lmsp supports a configuration file to set default values for command-line options. The configuration file is located at `~/.lmsp-config` and is automatically created with default values when you first run lmsp.
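Because it is plain JSON in your home directory, you can inspect the generated defaults directly:
```bash
# Show the current defaults (the file is created on first run)
cat ~/.lmsp-config
```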
### Configuration File Format
The configuration file uses JSON format:
```json
{
"model": null,
"port": 1234,
"pipe_mode": "append",
"wait": false,
"stats": false,
"plain": false,
"verbose": false
}
```
### Configuration Options
- **model**: Model to use (default: null, meaning the first loaded model)
- **port**: LM Studio server port (default: 1234)
- **pipe_mode**: How piped input is combined with the prompt: "replace", "append", or "prepend" (default: "append")
- **wait**: When true, wait for the complete response instead of streaming (default: false)
- **stats**: When true, show response statistics (default: false)
- **plain**: When true, disable markdown formatting of output (default: false)
- **verbose**: When true, enable verbose logging (default: false)
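Since the format is ordinary JSON, a default can also be flipped from a script. A minimal sketch using `jq` (assumptions: `jq` is installed, and the file already exists from a first run):
```bash
# Enable response statistics by default; jq cannot edit in place, so write a temp file
jq '.stats = true' ~/.lmsp-config > ~/.lmsp-config.tmp && mv ~/.lmsp-config.tmp ~/.lmsp-config
```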
### Example Custom Configuration
```json
{
"model": "google/gemma-3n-e4b",
"port": 1234,
"pipe_mode": "append",
"wait": true,
"stats": true,
"plain": false,
"verbose": false
}
```
This configuration would:
- Use "google/gemma-3n-e4b" as the default model
- Wait for complete responses instead of streaming
- Keep markdown formatting of output enabled (`plain` is false)
- Show response statistics by default
- Append piped content to prompts
Command-line arguments always override configuration file settings.
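For example, even with `"pipe_mode": "append"` set in the configuration above, a flag wins for a single run:
```bash
# Overrides the configured pipe_mode just for this invocation
cat context.txt | lmsp "Answer based on context:" --pipe-mode prepend
```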
## Usage
### Prerequisites
Before using lmsp, you need to load a model:
```bash
# Load a model using lms command
lms load google/gemma-3n-e4b
# Or use LM Studio desktop app to load a model
```
### Basic usage
```bash
lmsp "What is the capital of France?"
```
### Specify a model
```bash
# Use a specific model (must be already loaded)
lmsp -m llama-3.2-1b-instruct "Explain quantum computing"
# Enable verbose logging for debugging
lmsp -v -m google/gemma-3n-e4b "What is AI?"
```
### Pipe input
```bash
# Simple piping - replaces the prompt
cat document.txt | lmsp
# Combine prompt with piped content (default appends)
cat document.txt | lmsp "Summarize this document:"
# Control how piped input is combined
cat context.txt | lmsp "Answer based on context:" --pipe-mode prepend
cat document.txt | lmsp "Summarize:" --pipe-mode append
# Real example: Translate a text to English
cat tests/testdata/test-text.md | lmsp "Please translate the following text to English:"
```
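Piping also composes with other everyday tools; the commands below are illustrative, not taken from the project's documentation:
```bash
# Ask for a review of uncommitted changes
git diff | lmsp "Review this change and point out possible bugs:"
# Summarize recent log output
tail -n 50 app.log | lmsp "What errors appear in this log?"
```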
### Check loaded models
```bash
# List currently loaded models
lmsp --list-models
# List all available models (not loaded)
lms ls
```
### Check server status
```bash
lmsp --check-server
```
### Get help
```bash
lmsp --help
# or lmsp -h
```
## Security Considerations
When using `lmsp`, please be aware of the following security considerations:
### Piped Content
- **Be cautious about what content you pipe to `lmsp`**. The piped content is directly appended or prepended to your prompt without sanitization.
- Avoid piping untrusted content or files from unknown sources
- Be especially careful when piping content that might contain prompt injection attempts or malicious instructions
- Example of what to avoid:
```bash
# Don't pipe untrusted user input or files
cat untrusted_user_file.txt | lmsp "Summarize this:"
```
### Model Selection
- Only use trusted models that you have intentionally loaded into LM Studio
- Be aware that models will act on whatever prompt text you send, including any piped content
### Local Usage
- `lmsp` is designed for local use with your own LM Studio instance
- It connects to `localhost` only and does not expose any network services
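You can verify this yourself: LM Studio's local server exposes an OpenAI-compatible HTTP API on the configured port (1234 by default), so a plain request to localhost should answer. This is a property of LM Studio itself, not of lmsp:
```bash
# List the models the local LM Studio server reports
curl http://localhost:1234/v1/models
```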
## Prerequisites
1. LM Studio must be installed
2. The LM Studio server must be running (`lms server start`)
3. At least one model must be loaded (`lms load <model>`)
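Putting those steps together, a first session looks like this (model name taken from the examples above):
```bash
lms server start                       # start the local server
lms load google/gemma-3n-e4b           # load a model
lmsp "What is the capital of France?"  # send a prompt
```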
## Running Tests
```bash
python -m unittest tests.test_lmsp -v
```
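Equivalently, with standard unittest discovery (assuming the `tests/` package layout this repo uses):
```bash
python -m unittest discover -s tests -v
```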
## Planned Features
- Ability to attach images with `-a` flag for multi-modal models
- Ability to continue from last prompt
- Enhanced piping support for documents
## Raw data

```json
{
  "_id": null,
  "home_page": null,
  "name": "lmsp",
  "maintainer": null,
  "docs_url": null,
  "requires_python": ">=3.8",
  "maintainer_email": null,
  "keywords": "lm-studio, cli, ai, llm, prompt",
  "author": null,
  "author_email": "\"Konrad M. Lawson\" <kl@muninn.net>",
  "download_url": "https://files.pythonhosted.org/packages/0b/7a/012ffc216eaf7f0fa9d140e45c80f0914fc4d2a9db139613f0b153ec5b75/lmsp-0.5.6.tar.gz",
  "platform": null,
"description": "# lmsp - LM Studio Prompt CLI\n\nA simple command-line interface for sending prompts to LM Studio loaded models.\n\n## Features\n\n- Send prompts to locally loaded LM Studio models\n- Uses the first loaded model by default (or specify with `-m`)\n- **Requires pre-loaded models**: Models must be loaded using `lms load <model>` or LM Studio desktop app\n- Support for piping input from other commands\n- Verbose logging with `-v` flag for debugging\n- Simple and fast command-line interface\n\n## Installation\n\n### Quick Install from PyPI (Recommended)\n```bash\n# Install globally with pip\npip install lmsp\n\n# Or install globally with uv (recommended)\nuv tool install lmsp\n```\n\n### Alternative Installation Methods\n\n#### Install from source\n```bash\n# Using uv tool (recommended - installs globally)\nuv tool install git+https://github.com/kmlawson/lmsp.git\n\n# Or clone and install locally\ngit clone https://github.com/kmlawson/lmsp.git\ncd lmsp\nuv tool install .\n```\n\n#### Install in virtual environment\n```bash\n# Using uv\nuv venv\nsource .venv/bin/activate\nuv pip install lmsp\n\n# Or using pip\npython -m venv venv\nsource venv/bin/activate\npip install lmsp\n```\n\n#### Development installation\n```bash\n# Clone and install in development mode\ngit clone https://github.com/kmlawson/lmsp.git\ncd lmsp\nuv pip install -e . # or pip install -e .\n```\n\n## Configuration\n\nlmsp supports a configuration file to set default values for command-line options. The configuration file is located at `~/.lmsp-config` and is automatically created with default values when you first run lmsp.\n\n### Configuration File Format\n\nThe configuration file uses JSON format:\n\n```json\n{\n \"model\": null,\n \"port\": 1234,\n \"pipe_mode\": \"append\",\n \"wait\": false,\n \"stats\": false,\n \"plain\": false,\n \"verbose\": false\n}\n```\n\n### Configuration Options\n\n- **model**: Default model to use (null means use first loaded model)\n- **port**: Default LM Studio server port (1234)\n- **pipe_mode**: How to handle piped input (\"replace\", \"append\", or \"prepend\")\n- **wait**: Disable streaming by default (false)\n- **stats**: Show response statistics by default (false)\n- **plain**: Disable markdown formatting by default (false)\n- **verbose**: Enable verbose logging by default (false)\n\n### Example Custom Configuration\n\n```json\n{\n \"model\": \"google/gemma-3n-e4b\",\n \"port\": 1234,\n \"pipe_mode\": \"append\",\n \"wait\": true,\n \"stats\": true,\n \"plain\": false,\n \"verbose\": false\n}\n```\n\nThis configuration would:\n- Use \"google/gemma-3n-e4b\" as the default model\n- Wait for complete responses (no streaming) and beautify markdown output\n- Show response statistics by default\n- Append piped content to prompts\n\nCommand-line arguments always override configuration file settings.\n\n## Usage\n\n### Prerequisites\nBefore using lmsp, you need to load a model:\n```bash\n# Load a model using lms command\nlms load google/gemma-3n-e4b\n\n# Or use LM Studio desktop app to load a model\n```\n\n### Basic usage\n```bash\nlmsp \"What is the capital of France?\"\n```\n\n### Specify a model\n```bash\n# Use a specific model (must be already loaded)\nlmsp -m llama-3.2-1b-instruct \"Explain quantum computing\"\n\n# Enable verbose logging for debugging\nlmsp -v -m google/gemma-3n-e4b \"What is AI?\"\n```\n\n### Pipe input\n```bash\n# Simple piping - replaces the prompt\ncat document.txt | lmsp\n\n# Combine prompt with piped content (default appends)\ncat document.txt | lmsp 
\"Summarize this document:\"\n\n# Control how piped input is combined\ncat context.txt | lmsp \"Answer based on context:\" --pipe-mode prepend\ncat document.txt | lmsp \"Summarize:\" --pipe-mode append\n\n# Real example: Translate a text to English\ncat tests/testdata/test-text.md | lmsp \"Please translate the following text to English:\"\n```\n\n### Check loaded models\n```bash\n# List currently loaded models\nlmsp --list-models\n\n# List all available models (not loaded)\nlms ls\n```\n\n### Check server status\n```bash\nlmsp --check-server\n```\n\n### Get help\n```bash\nlmsp --help\n# or lmsp -h\n```\n\n## Security Considerations\n\nWhen using `lmsp`, please be aware of the following security considerations:\n\n### Piped Content\n- **Be cautious about what content you pipe to `lmsp`**. The piped content is directly appended or prepended to your prompt without sanitization.\n- Avoid piping untrusted content or files from unknown sources\n- Be especially careful when piping content that might contain prompt injection attempts or malicious instructions\n- Example of what to avoid:\n ```bash\n # Don't pipe untrusted user input or files\n cat untrusted_user_file.txt | lmsp \"Summarize this:\"\n ```\n\n### Model Selection\n- Only use trusted models that you have intentionally loaded into LM Studio\n- Be aware that models will execute the prompts you send, including any piped content\n\n### Local Usage\n- `lmsp` is designed for local use with your own LM Studio instance\n- It connects to `localhost` only and does not expose any network services\n\n## Prerequisites\n\n1. LM Studio must be installed\n2. The LM Studio server must be running (`lms server start`)\n3. At least one model must be loaded (`lms load <model>`)\n\n## Running Tests\n\n```bash\npython -m unittest tests.test_lmsp -v\n```\n\n## Planned Features\n\n- Ability to attach images with `-a` flag for multi-modal models\n- Ability to continue from last prompt\n- Enhanced piping support for documents\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A command-line interface for sending prompts to LM Studio loaded models",
"version": "0.5.6",
"project_urls": {
"Homepage": "https://github.com/kmlawson/lmsp",
"Issues": "https://github.com/kmlawson/lmsp/issues"
},
"split_keywords": [
"lm-studio",
" cli",
" ai",
" llm",
" prompt"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "81f8d5ea98206f0d03fc0cca76a324998f2ed085550fe125e0d7c1bac19028bc",
"md5": "31e13580bd248aaaeb5407fcf75ba8bb",
"sha256": "d7182ff035f4d32e3cda394958574707e3bc8cfa751c07ab0b870168c4e67e4a"
},
"downloads": -1,
"filename": "lmsp-0.5.6-py3-none-any.whl",
"has_sig": false,
"md5_digest": "31e13580bd248aaaeb5407fcf75ba8bb",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 13617,
"upload_time": "2025-07-11T09:24:02",
"upload_time_iso_8601": "2025-07-11T09:24:02.979147Z",
"url": "https://files.pythonhosted.org/packages/81/f8/d5ea98206f0d03fc0cca76a324998f2ed085550fe125e0d7c1bac19028bc/lmsp-0.5.6-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "0b7a012ffc216eaf7f0fa9d140e45c80f0914fc4d2a9db139613f0b153ec5b75",
"md5": "87a247fbdf7e97c505348337562274d8",
"sha256": "b3bea738efbc75ba53c952df02a5be4860280440ba84f08aae4d08c5b1dc7257"
},
"downloads": -1,
"filename": "lmsp-0.5.6.tar.gz",
"has_sig": false,
"md5_digest": "87a247fbdf7e97c505348337562274d8",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 25655,
"upload_time": "2025-07-11T09:24:04",
"upload_time_iso_8601": "2025-07-11T09:24:04.229883Z",
"url": "https://files.pythonhosted.org/packages/0b/7a/012ffc216eaf7f0fa9d140e45c80f0914fc4d2a9db139613f0b153ec5b75/lmsp-0.5.6.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-11 09:24:04",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "kmlawson",
"github_project": "lmsp",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [
{
"name": "requests",
"specs": [
[
">=",
"2.31.0"
]
]
}
],
"lcname": "lmsp"
}