llmnet

Name: llmnet
Version: 0.1.0
Home page: https://github.com/maxmekiska/llmnet
Summary: A library designed to harness the diversity of thought by combining multiple LLMs.
Upload time: 2024-01-29 01:14:42
Author: Maximilian Mekiska
Keywords: machinelearning, llm, bots, network
# `llmnet`

llmnet is a library designed to facilitate collaborative work among LLMs on diverse tasks. Its primary goal is to encourage a diversity of thought across various LLM models.

llmnet comprises two main components:

1. LLM network workers
2. Consensus worker


The LLM network workers can independently and concurrently process tasks, while the consensus worker can access the various solutions and generate a final output. It's important to note that the consensus worker is optional and doesn't necessarily need to be employed.

## Example

### Prerequisite

llmnet currently supports LLM models from OpenAI and Google. The user can define the model to be used for the LLM workers, as well as the model to be used for the consensus worker.

Please make sure to set environment variables called `OPENAI_API_KEY` and `GOOGLE_API_KEY` to your OpenAI and Google API keys.
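
A minimal sketch of setting these variables from Python before using llmnet (the key values are placeholders; exporting them in your shell before starting Python works just as well):

```python
import os

# placeholder values - replace with your actual keys
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["GOOGLE_API_KEY"] = "your-google-api-key"
```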

### How to use llmnet?

#### llm worker

You currently have five LLM workers at your disposal:

1. openaillmbot
2. googlellmbot
3. randomllmbot
4. randomopenaillmbot
5. randomgooglellmbot

##### `openaillmbot`

Interface with OpenAI models.

optional parameters:

```
model         (str) = 'gpt-3.5-turbo'
max_tokens    (int) = 2024
temperature   (float) = 0.1
n             (int) = 1
stop          (Union[str, List[str]]) = None
```
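
As a minimal sketch (assuming the `create_network` interface documented further below), these parameters can be passed through as keyword arguments when `worker="openaillmbot"` is selected; the objective is illustrative only:

```python
from llmnet import LlmNetwork

net = LlmNetwork()

# openaillmbot parameters are forwarded as kwargs of create_network
net.create_network(
    instruct=[{"objective": "Summarize the idea of ensemble learning."}],
    worker="openaillmbot",
    max_concurrent_worker=1,
    model="gpt-3.5-turbo",
    max_tokens=512,
    temperature=0.1,
)
```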

##### `googlellmbot`

Interface with Google models.

optional parameters:

```
model               (str) = 'gemini-pro'
max_output_tokens   (int) = 2024
temperature         (float) = 0.1
top_p               (float) = None
top_k               (int) = None
candidate_count     (int) = 1
stop_sequences      (str) = None
```

##### `randomllmbot`

Selects randomly between all available LLM workers and the parameters specified.

optional parameters:

```
random_configuration  (Dict) = {}
```

example dict:

```
{
    "<worker1>":
    {
        "<parameter1>": [<possible_arguments>],
        "<parameter2>": [<possible_arguments>],
        ...
    },
    "<worker2>":
    {
        "<parameter1>": [<possible_arguments>],
        "<parameter2>": [<possible_arguments>]
        ...
    }
    ...
}
```
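
A concrete configuration following this structure might look like the following; the models and values are illustrative only:

```python
random_configuration = {
    "openaillmbot": {
        "model": ["gpt-3.5-turbo", "gpt-4"],
        "temperature": [0.1, 0.5],
    },
    "googlellmbot": {
        "model": ["gemini-pro"],
        "temperature": [0.1, 0.3],
    },
}
```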

##### `randomopenaillmbot`

Selects randomly between all possible configurations for OpenAI-based LLMs.

optional parameters - if not provided, defaults to `openaillmbot` default values:

```
random_configuration  (Dict) = {}
```

example dict:

```
{
    "<parameter1>": [<possible_arguments>],
    "<parameter2>": [<possible_arguments>],
    ...
}
```
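
A concrete instance of this flat structure, using the `openaillmbot` parameters listed above (values are illustrative only):

```python
random_configuration = {
    "model": ["gpt-3.5-turbo", "gpt-4"],
    "temperature": [0.1, 0.5, 0.9],
    "max_tokens": [512, 1024],
}
```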


##### `randomgooglellmbot`

Selects randomly between all possible configurations for Google-based LLMs.

optional parameters - if not provided, defaults to `googlellmbot` default values:

```
random_configuration  (Dict) = {}
```

example dict:

```
{
    "<parameter1>": [<possible_arguments],
    "<parameter2>": [<possible_arguments],
    ...
}
```

#### `create_network`

- creates the LLM worker network
- expects:
  - instruct: List[Dict[str, str]]
    - structure: `[{"objective": xxx, "context": ooo}..]`, the `context` key is optional
  - worker: select any worker from the above
  - max_concurrent_worker: how many API calls are allowed in parallel
  - kwargs: any configuration for the worker selected
  - access results via getter methods:
    - get_worker_answers: collection of answers combined in one string
    - get_worker_answers_messages: collection of answers with metadata

#### `apply_consensus`

- creates consensus worker
- expects:
  - worker: select any worker from the above
  - kwargs: any configuration for the worker selected
  - set_prompt: prompt to build consensus
    - access results via getter methods:
      - get_worker_consensus: consensus result as string
      - get_worker_consensus_messages: consensus result with metadata

#### Simple independent tasks - no consensus

```python
from llmnet import LlmNetwork


instructions = [
    {"objective": "how many countries are there?"},
    {"objective": "what is AGI?"},
    {"objective": "What is the purpose of biological life?"},
]

net = LlmNetwork()

net.create_network(
    instruct=instructions,
    worker="randomllmbot",
    max_concurrent_worker=2, # how many API calls are allowed in parallel
    random_configuration={
        "googlellmbot": {"model": ["gemini-pro"], "temperature": [0.12, 0.11]},
        "openaillmbot": {
            "model": ["gpt-3.5-turbo", "gpt-4"],
            "temperature": [0.11, 0.45, 1],
        },
    },
)

# collection of answers as a string
net.get_worker_answers

# collection of answers with metadata
net.get_worker_answer_messages
```

#### One task with same objective split between multiple workers - consensus

```python
from llmnet import LlmNetwork


instructions = [
    {"objective": "What is empiricism?", "context": "Text Part One"},
    {"objective": "What is empiricism?", "context": "Text Part Two"},
    {"objective": "What is empiricism?", "context": "Text Part Three"},
]

net = LlmNetwork()

net.create_network(
    instruct=instructions,
    worker="randomllmbot",
    max_concurrent_worker=2, # how many API calls are allowed in parallel
    random_configuration={
        "googlellmbot": {"model": ["gemini-pro"], "temperature": [0.12, 0.11]},
        "openaillmbot": {
            "model": ["gpt-3.5-turbo"],
            "temperature": [0.11, 0.45, 1],
        },
    },
)

# collection of answers as a string
net.get_worker_answers

# collection of answers with metadata
net.get_worker_answer_messages

# apply consensus
net.apply_consensus(
    worker="openaillmbot",
    model="gpt-3.5-turbo",
    temperature=0.7,
    set_prompt=f"Answer this objective: What is empiricism? with the following text in just one sentences: {net.get_worker_answers}",
)

# get final consensus answer as a string
net.get_worker_consensus

# get answer with metadata
net.get_worker_consensus_messages
```

#### Other example use cases

- independent objectives, choose best solution via consensus (see the sketch below)
- mixed objectives with and without context, with or without consensus
- etc.
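
The first of these use cases might look roughly as follows. This is a minimal sketch assuming the same `create_network`/`apply_consensus` interface shown above; the objectives and prompt are illustrative only:

```python
from llmnet import LlmNetwork

# independent objectives: each worker proposes a different kind of solution
instructions = [
    {"objective": "Propose a caching strategy to speed up a slow web API."},
    {"objective": "Propose a database indexing strategy to speed up a slow web API."},
    {"objective": "Propose a load-balancing strategy to speed up a slow web API."},
]

net = LlmNetwork()

net.create_network(
    instruct=instructions,
    worker="openaillmbot",
    max_concurrent_worker=2,
    model="gpt-3.5-turbo",
    temperature=0.3,
)

# a consensus worker picks the most promising of the independent proposals
net.apply_consensus(
    worker="openaillmbot",
    model="gpt-3.5-turbo",
    temperature=0.0,
    set_prompt=f"Pick the single most promising proposal from the following and justify the choice in one sentence: {net.get_worker_answers}",
)

# final answer as a string
net.get_worker_consensus
```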

## Appendix

Map reduce by LangChain: [LangChain MapReduce Documentation](https://python.langchain.com/docs/modules/chains/document/map_reduce)



            
