SimplerLLM

- Name: SimplerLLM
- Version: 0.3.1.9
- Homepage: https://github.com/hassancs91/SimplerLLM
- Summary: An easy-to-use library for interacting with language models.
- Upload time: 2025-02-22 14:11:08
- Author: Hasan Aboul Hasan
- Requires Python: >=3.6
- License: MIT
- Keywords: text generation, openai, llm, rag
- Requirements: aiohttp, duckduckgo_search, lxml_html_clean, newspaper3k, numpy, openai, pydantic, PyPDF2, python-dotenv, python_docx, Requests, youtube_transcript_api, colorama, scipy, tiktoken
# ⚪ SimplerLLM (Beta)

⚡ Your Easy Pass to Advanced AI ⚡


[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Join the Discord chat!](https://img.shields.io/badge/Join-Discord-7289DA.svg)](https://discord.gg/HUrtZXyp3j)



## 🤔 What is SimplerLLM?

SimplerLLM is an open-source Python library designed to simplify interactions with Large Language Models (LLMs) for researchers and beginners. It offers a unified interface for different LLM providers and a suite of tools that enhance language model capabilities, making it super easy for anyone to develop AI-powered tools and apps.

Below is a brief overview of the library. For the full, detailed documentation, [check the official website](https://docs.simplerllm.com/).

## Easy Installation

With pip:

```bash
pip install simplerllm
```

### Features

- **Unified LLM Interface**: Define an LLM instance in one line for providers like OpenAI, Google Gemini, Anthropic, and even Ollama. 
- **Generic Text Loader**: Load text from various sources like DOCX, PDF, TXT files, or blog posts.
- **RapidAPI Connector**: Connect with AI services on RapidAPI.
- **SERP Integration**: Perform searches easily using Serper and Value Serp APIs.
- **Prompt Template Builder**: Easily create and manage prompt templates.
- And much more coming soon!

### Setting Up Environment Variables

To use this library, you need to set several API keys in your environment. Start by creating a `.env` file in the root directory of your project and adding your API keys there.

🔴 Keep this file private and do not commit it to version control, to protect your keys.

Here is an example of what your `.env` file should look like:

```
OPENAI_API_KEY="your_openai_api_key_here" # For accessing OpenAI's API
GEMINI_API_KEY="your_gemini_api_key_here" # For accessing Gemini's API
ANTHROPIC_API_KEY="your_claude_api_key_here" # For accessing Anthropic's API
RAPIDAPI_API_KEY="your_rapidapi_api_key_here" # For accessing APIs on RapidAPI
VALUE_SERP_API_KEY="your_value_serp_api_key_here" # For Google search
SERPER_API_KEY="your_serper_api_key_here" # For Google search
STABILITY_API_KEY="your_stability_api_key_here" # For image generation
```
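In code, python-dotenv's `load_dotenv()` reads this file into the process environment. As a minimal stdlib-only sketch of what that loading amounts to (the real library handles quoting and many edge cases this toy parser skips):

```python
import os

def load_env_file(text: str) -> None:
    """Parse KEY="value" lines (ignoring # comments) into os.environ."""
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip().strip('"')

load_env_file('OPENAI_API_KEY="sk-example" # For accessing OpenAI\'s API')
print(os.environ["OPENAI_API_KEY"])  # -> sk-example
```

In practice you would simply call `load_dotenv()` once at startup and read keys with `os.getenv`.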

## Creating an LLM Instance

```python
from SimplerLLM.language.llm import LLM, LLMProvider

# For OpenAI
llm_instance = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-3.5-turbo")

# For Google Gemini
#llm_instance = LLM.create(provider=LLMProvider.GEMINI, model_name="gemini-1.5-flash")

# For Anthropic Claude
#llm_instance = LLM.create(provider=LLMProvider.ANTHROPIC, model_name="claude-3-5-sonnet-20240620")

# For Ollama (Local Model)
#llm_instance = LLM.create(provider=LLMProvider.OLLAMA, model_name="phi")

# Generate a response
response = llm_instance.generate_response(prompt="generate a 5-word sentence")
print(response)
```

## Generating JSON Easily with Any LLM Instance

This function helps you reliably get a JSON-structured response from LLMs. This is especially useful when your software consumes the response and needs a stable JSON output.

```python
from pydantic import BaseModel
from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.language.llm_addons import generate_pydantic_json_model

class LLMResponse(BaseModel):
    response: str

llm_instance = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o")
prompt = "generate a sentence about the importance of AI"

output = generate_pydantic_json_model(llm_instance=llm_instance, prompt=prompt, model_class=LLMResponse)
json_output = output.model_dump()
```

The `output` generated by the LLM in this case will be an object of type `LLMResponse`; to convert it into a JSON-serializable dictionary, we use the `model_dump()` method.
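To see what `model_dump()` produces on its own, here is a standalone Pydantic example with no LLM call involved (the response text is made up for illustration):

```python
from pydantic import BaseModel

class LLMResponse(BaseModel):
    response: str

# Stand-in for the object the LLM-backed function would return
output = LLMResponse(response="AI drives modern innovation.")
json_output = output.model_dump()
print(json_output)  # -> {'response': 'AI drives modern innovation.'}
```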

## Using Tools

### SERP

```python
from SimplerLLM.tools.serp import search_with_serper_api

search_results = search_with_serper_api("your search query", num_results=3)

# use the search results the way you want!

```

### Generic Text Loader

```python
from SimplerLLM.tools.generic_loader import load_content

text_file = load_content("file.txt")

print(text_file.content)

```

### Calling any RapidAPI API

```python
from SimplerLLM.tools.rapid_api import RapidAPIClient

api_url = "https://domain-authority1.p.rapidapi.com/seo/get-domain-info"
api_params = {
    'domain': 'learnwithhasan.com',
}

api_client = RapidAPIClient()  # API key read from environment variable
response = api_client.call_api(api_url, method='GET', params=api_params)
```
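Under the hood, RapidAPI calls are ordinary HTTP requests authenticated with an `X-RapidAPI-Key` header (and typically an `X-RapidAPI-Host` header naming the API's host). A minimal sketch of building those headers from the environment variable — `build_rapidapi_headers` is a hypothetical helper, not part of SimplerLLM:

```python
import os
from urllib.parse import urlparse

def build_rapidapi_headers(api_url: str) -> dict:
    """Illustrative helper: RapidAPI expects the key and API host as headers."""
    return {
        "X-RapidAPI-Key": os.environ["RAPIDAPI_API_KEY"],
        "X-RapidAPI-Host": urlparse(api_url).netloc,
    }

os.environ["RAPIDAPI_API_KEY"] = "demo-key"  # stand-in for a real key
headers = build_rapidapi_headers("https://domain-authority1.p.rapidapi.com/seo/get-domain-info")
print(headers["X-RapidAPI-Host"])  # -> domain-authority1.p.rapidapi.com
```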

### Prompt Template Builder

```python
from SimplerLLM.prompts.prompt_builder import create_multi_value_prompts, create_prompt_template

basic_prompt = "Generate 5 titles for a blog about {topic} and {style}"

prompt_template = create_prompt_template(basic_prompt)

prompt_template.assign_parms(topic="marketing", style="catchy")

print(prompt_template.content)


# Working with multiple-value prompts
multi_value_prompt_template = """Hello {name}, your next meeting is on {date},
and bring a {object} with you."""

params_list = [
     {"name": "Alice", "date": "January 10th", "object" : "dog"},
     {"name": "Bob", "date": "January 12th", "object" : "bag"},
     {"name": "Charlie", "date": "January 15th", "object" : "pen"}
]


multi_value_prompt = create_multi_value_prompts(multi_value_prompt_template)
generated_prompts = multi_value_prompt.generate_prompts(params_list)

print(generated_prompts[0])

```
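Conceptually, multi-value prompt generation is just `str.format` applied over a list of parameter dictionaries. A standalone sketch of that idea (not the library's implementation):

```python
template = "Hello {name}, your next meeting is on {date}, and bring a {object} with you."

params_list = [
    {"name": "Alice", "date": "January 10th", "object": "dog"},
    {"name": "Bob", "date": "January 12th", "object": "bag"},
]

# One rendered prompt per parameter set
generated = [template.format(**params) for params in params_list]
print(generated[0])  # -> Hello Alice, your next meeting is on January 10th, and bring a dog with you.
```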

## Chunking Functions

We have introduced new functions to help you split text into manageable chunks based on different criteria. These functions are part of the chunker tool.

### chunk_by_max_chunk_size

This function splits text into chunks with a maximum size, optionally preserving sentence structure.
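As a standalone illustration of the max-size idea (this is not the library's implementation, and the real `chunk_by_max_chunk_size` signature may differ):

```python
def chunk_text(text: str, max_chunk_size: int) -> list[str]:
    """Greedily pack whole words into chunks of at most max_chunk_size characters."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = word  # start a new chunk with the word that didn't fit
    if current:
        chunks.append(current)
    return chunks

chunks = chunk_text("SimplerLLM makes working with large language models easy", 20)
print(chunks)  # -> ['SimplerLLM makes', 'working with large', 'language models easy']
```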

### chunk_by_sentences

This function splits the text into chunks based on sentences.

### chunk_by_paragraphs

This function splits text into chunks based on paragraphs.

### chunk_by_semantics

This function splits text into chunks based on semantic similarity.

Example:

```python
from SimplerLLM.tools.text_chunker import chunk_by_semantics
from SimplerLLM.tools.generic_loader import load_content
from SimplerLLM.language.embeddings import EmbeddingsLLM, EmbeddingsProvider

blog_url = "https://www.semrush.com/blog/digital-marketing/"
blog_post = load_content(blog_url)
text = blog_post.content

embeddings_model = EmbeddingsLLM.create(provider=EmbeddingsProvider.OPENAI,
                                        model_name="text-embedding-3-small")
semantic_chunks = chunk_by_semantics(text, embeddings_model, threshold_percentage=80)

print(semantic_chunks)
```

## Next Updates

- Adding More Tools
- Prompt Optimization
- Response Evaluation
- GPT Trainer
- Advanced Document Loader
- Integration With More Providers
- Simple RAG With SimplerVectors
- Integration with Vector Databases
- Agent Builder
- LLM Server 

            
