# Text-to-Action Architecture
## Overview
Text-to-Action is a system that translates natural language queries into programmatic actions. It interprets user input, determines the most appropriate function to execute, extracts the relevant parameters, and performs the corresponding action.
## QuickStart
```bash
pip install text-to-action
```
Below is a simple example:
```python
from text_to_action import ActionDispatcher
from dotenv import load_dotenv
load_dotenv()
action_file = "text_to_action/src/text_to_action/actions/calculator.py"
dispatcher = ActionDispatcher(action_embedding_filename="calculator.h5", actions_filepath=action_file,
                              use_llm_extract_parameters=True, verbose_output=True)

while True:
    user_input = input("Enter your query: ")  # sum of 3, 4 and 5
    if user_input.lower() == 'quit':
        break
    results = dispatcher.dispatch(user_input)
    for result in results:
        print(result, ":", results[result])
    print('\n')
```
### Quick Notes:
- Get an API key from a service like Groq (free tier available) or OpenAI. Create a `.env` file and set the key value as either `GROQ_API_KEY` or `OPENAI_API_KEY` (a minimal `.env` example follows these notes).
- If you are using NER for parameter extraction, download the corresponding spaCy model:
```bash
python -m spacy download en_core_web_trf
```
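For reference, a minimal `.env` might look like the snippet below. The variable names come from the notes above; the placeholder values are not real keys, and only the provider you actually use needs one.
```bash
# Use whichever provider you have a key for
GROQ_API_KEY=your-groq-api-key
OPENAI_API_KEY=your-openai-api-key
```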
## Creating actions
- First, create a list of action descriptions in the following format:
```python
functions_description = [
    {
        "name": "add",
        "prompt": "20+50"
    },
    {
        "name": "subtract",
        "prompt": "What is 10 minus 4?"
    }
]
```
The more diverse and representative the prompts you provide for each function, the better the matching accuracy.
- Then, you can create embeddings for functions using the following:
```python
from text_to_action import create_action_embeddings
from text_to_action.types import ModelSource

# You can use SBERT or other Hugging Face models to create the embeddings
create_action_embeddings(functions_description, save_filename="calculator.h5",
                         embedding_model="all-MiniLM-L6-v2", model_source=ModelSource.SBERT)
```
- Finally, define the necessary functions and save them to a file. Use the types defined in [entity_models](src/text_to_action/entity_models.py) for function parameter types, or create additional types as needed for data validation and to ensure type safety and clarity in your code.
```python
from typing import List
from text_to_action.entity_models import CARDINAL
def add(items: List[CARDINAL]):
    """
    Returns the sum of the given numbers.
    """
    return sum([int(item.value) for item in items])

def subtract(a: CARDINAL, b: CARDINAL):
    """
    Returns the difference between a and b.
    """
    return int(a.value) - int(b.value)
```
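If the built-in entity models do not cover your data, you can define your own parameter types as mentioned above. The sketch below is only an illustration: based on the `CARDINAL` usage above, it assumes a parameter type exposes a `value` attribute, and the `EMAIL` type and `send_reminder` function are hypothetical names, not part of the library.
```python
from dataclasses import dataclass

@dataclass
class EMAIL:
    """Hypothetical custom parameter type; mirrors the `.value` attribute used by CARDINAL above."""
    value: str

    def __post_init__(self):
        # Basic sanity check; replace with whatever validation your data needs.
        if "@" not in self.value:
            raise ValueError(f"{self.value!r} does not look like an e-mail address")

def send_reminder(recipient: EMAIL):
    """
    Illustrative action that takes the custom type as a parameter.
    """
    return f"Reminder scheduled for {recipient.value}"
```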
You can then use the created actions:
```python
from text_to_action import ActionDispatcher
from text_to_action.types import ModelSource
from dotenv import load_dotenv
load_dotenv()

# Use the same embedding model and model source you used when creating the action embeddings.
# actions_filepath (action_file here) is the file where the functions are defined.
dispatcher = ActionDispatcher(action_embedding_filename="calculator.h5", actions_filepath=action_file,
                              use_llm_extract_parameters=False, verbose_output=True,
                              embedding_model="all-MiniLM-L6-v2",
                              model_source=ModelSource.SBERT)
```
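Dispatching then works exactly as in the QuickStart; the snippet below just repeats that pattern, and the commented output is illustrative rather than guaranteed:
```python
results = dispatcher.dispatch("What is 10 minus 4?")
for result in results:
    print(result, ":", results[result])  # e.g. subtract : 6 (illustrative)
```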
## Key Components
1. **Action Dispatcher**: The core component that orchestrates the flow from query to action execution.
2. **Vector Store**: Stores embeddings of function descriptions and associated metadata for efficient similarity search.
3. **Parameter Extractor**: Extracts function arguments from the input text using NER or LLM-based approaches.
## Workflow
1. The system receives a natural language query from the user.
2. The query is processed by the Vector Store to identify the most relevant function(s).
3. The Parameter Extractor analyzes the query to extract required function arguments.
4. The Action Dispatcher selects the most appropriate function based on similarity scores and parameter availability.
5. The selected function is executed with the extracted parameters.
6. The result is returned to the user.
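For intuition only, here is a self-contained toy version of that pipeline. None of the names below belong to the library; the real system uses embeddings, a vector store, and NER/LLM extraction instead of these simplified stand-ins.
```python
from typing import Callable, Dict

# Toy "vector store": example prompts mapped to function names.
FUNCTION_EXAMPLES = {"20+50": "add", "What is 10 minus 4?": "subtract"}
FUNCTIONS: Dict[str, Callable] = {"add": lambda a, b: a + b, "subtract": lambda a, b: a - b}

def similarity(a: str, b: str) -> float:
    # Stand-in for embedding similarity: crude word overlap.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def extract_numbers(query: str):
    # Stand-in for the Parameter Extractor (NER- or LLM-based in the real system).
    return [int(tok) for tok in query.replace("?", " ").split() if tok.lstrip("-").isdigit()]

def handle_query(query: str):
    # 1-2. Match the query against stored example prompts to pick a candidate function.
    best_prompt = max(FUNCTION_EXAMPLES, key=lambda p: similarity(p, query))
    name = FUNCTION_EXAMPLES[best_prompt]
    # 3-4. Extract arguments and check that enough of them are available.
    args = extract_numbers(query)
    if len(args) < 2:
        return "could not extract enough parameters"
    # 5-6. Execute the selected function and return its result.
    return FUNCTIONS[name](args[0], args[1])

print(handle_query("What is 10 minus 4?"))  # -> 6
```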
## Possible Use Cases
- Natural Language Interfaces for APIs
- Chatbots and Virtual Assistants
- Automated Task Execution Systems
- Voice-Controlled Applications
## Future Enhancements
- Integration with more advanced LLMs for improved parameter extraction
- Support for multi-step actions and complex workflows
- User feedback loop for continuous improvement of function matching
- GUI for easy management of the function database
## Contributions
Contributions are welcome.