| Field | Value |
|-------|-------|
| Name | verdi-llm |
| Version | 0.1.3 |
| Summary | An LLM tool for an even easier use of AiiDA. |
| Upload time | 2025-08-21 21:13:47 |
| Requires Python | >=3.9 |
| Keywords | aiida, llm, verdi |
# verdi-llm
<br>
<div align="left">
<img src="assets/logo.png" alt="Repository Logo" width="200" style="margin-top: 20px; margin-bottom: 20px;"/>
</div>
<br>
verdi-llm helps AiiDA users generate `verdi` commands using natural language queries. Instead of memorizing complex command syntax, simply describe what you want to do, and verdi-llm will suggest the appropriate command.
## Installation
```bash
pip install verdi-llm
```
## Quick Start
### 1. Configure a Backend
**Interactive Setup Process:**
1. Choose a backend (currently supports: `groq`)
2. Enter your API key for the selected backend
3. Wait for embeddings to be generated (one-time setup)
**Example:**
```bash
$ verdi-llm configure
This command will help you choose and configure an LLM backend for AiiDA.
Please follow the instructions below to set up your preferred LLM backend.
Step #1 Choose a backend:
groq
Your choice: groq
Step #2 Enter your API key for groq:
Your API key: gsk_xxxxxxxxxxxxxxxxxxxx
Configuration saved to ~/.aiida/llm/config.json
You can now use the verdi smart command to interact with the LLM backend.
```
**Non-Interactive Setup Process:**
Run `verdi-llm configure --backend groq --api-key <API-KEY>`. To generate an API key, [see here](https://console.groq.com/keys).
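Per the output above, the configuration is saved to `~/.aiida/llm/config.json`. A minimal sketch of loading and validating such a file follows; the `backend` and `api_key` field names are assumptions for illustration, not verdi-llm's documented schema:

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".aiida" / "llm" / "config.json"

def load_config(path=CONFIG_PATH):
    """Load the verdi-llm configuration, raising a clear error if missing."""
    if not path.exists():
        raise FileNotFoundError(
            "No configuration file found. Run `verdi-llm configure` first."
        )
    config = json.loads(path.read_text())
    # Field names below are illustrative assumptions only.
    for key in ("backend", "api_key"):
        if key not in config:
            raise KeyError(f"Missing required field: {key}")
    return config
```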
### 2. Generate Commands
Use the `cli` command to get AI-powered command suggestions:
```bash
verdi-llm cli "your question here"
```
**Example:**
```bash
verdi-llm cli "how do I list all my calculations"
```
**Output:**
```bash
💡 Suggested command (in 1.2 seconds):
verdi process list
[E]xecute, [M]odify, [C]ancel?
```
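The `[E]xecute, [M]odify, [C]ancel?` prompt above can be sketched as a small dispatch function. This is a minimal illustration of the flow, not verdi-llm's actual implementation:

```python
import subprocess

def handle_suggestion(command, choice, modified=None):
    """Act on a suggested command: execute, modify-then-execute, or cancel.

    Returns the command string that should run, or None if cancelled.
    (Illustrative sketch only.)
    """
    choice = choice.strip().lower()
    if choice == "e":
        return command
    if choice == "m":
        return modified  # the caller supplies the user-edited command
    return None  # any other answer cancels

def run(command):
    """Execute a shell command and return its exit status."""
    return subprocess.run(command, shell=True).returncode
```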
### 3. Interactive Shell Mode
Loading embeddings takes a few seconds, which can become annoying if you plan to ask multiple questions. It's better to load them once with the interactive shell:
```bash
verdi-llm shell
```
## User Workflows
### Workflow 1: Quick Command Generation
**Goal:** Get a single command suggestion and execute it
```bash
# Step 1: Ask your question
verdi-llm cli "show me the status of my computer setup"
# Step 2: Review the suggestion
💡 Suggested command (in 0.8 seconds):
verdi computer list --all
# Step 3: Choose action
[E]xecute, [M]odify, [C]ancel? e
# Step 4: Command executes automatically
$ verdi computer list --all
[Command output appears here]
```
### Workflow 2: Interactive Shell Session
**Goal:** Work in an interactive environment with persistent AI assistance
```bash
# Launch shell
verdi-llm shell
# The shell loads AI models (one-time per session)
🚀 Initializing verdi-llm shell...
Loading SentenceTransformer (this may take a few seconds)...
✅ SentenceTransformer loaded successfully!
🎯 verdi-llm shell ready!
# Use regular commands
verdi-llm:~/projects$ ls
project1/ project2/ data/
# Ask AI for help
verdi-llm:~/projects$ cli create a new calculation job
💡 Suggested command:
verdi process submit
[E]xecute, [M]odify, [C]ancel? m
Enter modified command: verdi process submit --help
# Regular shell features work
verdi-llm:~/projects$ cd project1
verdi-llm:~/projects/project1$
# Exit when done
verdi-llm:~/projects/project1$ exit
👋 Exiting verdi-llm shell
```
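The session above mixes pass-through shell commands with in-process handling of `cd` (so the working directory persists between commands). A minimal sketch of such a line handler follows; it is illustrative only, and the real shell additionally intercepts `cli` queries and `exit`:

```python
import os
import shlex
import subprocess

def handle_line(line, cwd):
    """Process one shell line: handle `cd` in-process, pass the rest through.

    Returns the (possibly updated) working directory.
    """
    parts = shlex.split(line)
    if not parts:
        return cwd  # blank line: nothing to do
    if parts[0] == "cd":
        # `cd` must change state in-process, not in a child shell.
        target = parts[1] if len(parts) > 1 else os.path.expanduser("~")
        return os.path.abspath(os.path.join(cwd, os.path.expanduser(target)))
    subprocess.run(line, shell=True, cwd=cwd)
    return cwd
```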
## Command Examples
### Basic Usage Examples
| Natural language query | Typical suggested command |
|------------------------|---------------------------|
| "is my computer configured correctly" | `verdi computer configure show` |
| "show details of process 123" | `verdi process show 123` |
| "get information about node 456" | `verdi node show 456` |
### Advanced Usage Examples
| Complex queries | Suggested command patterns |
|----------------|---------------------------|
| "Delete the working directories of all finished calculation jobs since the last month on cluster1 and cluster2 without asking for confirmation" | `verdi calcjob cleanworkdir --past-days 30 --computers cluster1,cluster2 --exit-status 0 --force` |
| "Export band structure data node 98765 as a matplotlib PDF with y-axis range from -5 to 10, LaTeX-formatted labels, and saves it to bands.pdf" | `verdi data core.bands export 98765 --format mpl_pdf --y-min-lim -5 --y-max-lim 10 --prettify-format latex --output bands.pdf` |
### RAG System Insights
The system uses Retrieval-Augmented Generation (RAG) to find relevant commands:
1. **Your query** → Converted to embeddings
2. **Similarity search** → Finds top 3 most relevant commands
3. **Context building** → Includes command usage, descriptions, and options
4. **AI generation** → Creates specific command based on context
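The retrieval steps above can be illustrated with a toy example. The character-count "embedding" below is a stand-in for the real SentenceTransformer model, but the cosine-similarity top-k search is the same idea as step 2:

```python
import math

def embed(text):
    """Toy embedding: bag-of-letters vector (stand-in for a real model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, commands, k=3):
    """Return the k commands most similar to the query."""
    q = embed(query)
    ranked = sorted(commands, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```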
## Troubleshooting
### Common Issues and Solutions
**Issue: "No configuration file found"**
```
Solution: Run `verdi-llm configure` first
```
**Issue: AI responses are slow**
```
Possible causes:
- First-time embedding generation
- Network latency to LLM backend
- Large command database
Solutions:
- Use the shell mode for faster subsequent queries
- Check your internet connection
- Wait for initial setup to complete
```
**Issue: Commands don't work as expected**
```
Solutions:
- Use the [M]odify option to adjust commands
- Check if you have the required AiiDA setup
- Verify command parameters match your environment
```
## API Backend Information
### Groq Integration
Currently supports Groq's API with the following models:
- **Default**: `llama3-8b-8192`
- **Features**: Fast inference, good command understanding
- **Rate limits**: Subject to Groq's API limitations
### Adding New Backends
The architecture is designed to accommodate additional backends, and future versions may add more.
If you prefer another backend, please open an issue with your suggestion!
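One common way to model such a plug-in architecture is a small abstract interface that each backend implements. The class names below are hypothetical illustrations, not verdi-llm's actual code:

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Hypothetical interface a new backend might implement."""
    name: str

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's suggested command for the given prompt."""

class EchoBackend(LLMBackend):
    """Trivial stand-in backend, useful for testing the interface."""
    name = "echo"

    def complete(self, prompt: str) -> str:
        # A real backend would call its provider's API here.
        return f"verdi {prompt}"
```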
## Support and Feedback
- Report issues with specific query examples
- Suggest improvements for command understanding
- Share successful query patterns