# Lord of Large Language Models (LoLLMs)
<div align="center">
  <img src="https://github.com/ParisNeo/lollms/blob/main/lollms/assets/logo.png" alt="Logo" width="200" height="200">
</div>

![GitHub license](https://img.shields.io/github/license/ParisNeo/lollms)
![GitHub issues](https://img.shields.io/github/issues/ParisNeo/lollms)
![GitHub stars](https://img.shields.io/github/stars/ParisNeo/lollms)
![GitHub forks](https://img.shields.io/github/forks/ParisNeo/lollms)
[![Discord](https://img.shields.io/discord/1092918764925882418?color=7289da&label=Discord&logo=discord&logoColor=ffffff)](https://discord.gg/4rR282WJb6)
[![Follow me on Twitter](https://img.shields.io/twitter/follow/SpaceNerduino?style=social)](https://twitter.com/SpaceNerduino)
[![Follow Me on YouTube](https://img.shields.io/badge/Follow%20Me%20on-YouTube-red?style=flat&logo=youtube)](https://www.youtube.com/user/Parisneo)
[![Downloads](https://static.pepy.tech/badge/lollms)](https://pepy.tech/project/lollms)
[![Downloads](https://static.pepy.tech/badge/lollms/month)](https://pepy.tech/project/lollms)
[![Downloads](https://static.pepy.tech/badge/lollms/week)](https://pepy.tech/project/lollms)

Lord of Large Language Models (LoLLMs) Server is a text generation server based on large language models. It provides a Flask-based API for generating text with various pre-trained language models, and it is designed to be easy to install and use so that developers can integrate powerful text generation capabilities into their applications.

## Features

- Fully integrated library with access to bindings, personalities and helper tools.
- Generate text using large language models.
- Supports multiple personalities for generating text with different styles and tones.
- Real-time text generation with WebSocket-based communication.
- RESTful API for listing personalities and adding new personalities.
- Easy integration with various applications and frameworks.
- Ability to send files to personalities.
- Ability to run on multiple nodes and serve many generation requests at once.
- Data stays local even in the remote version. Only generations are sent to the host node; the logs, data, and discussion history are kept in your local discussion folder.

## Installation

You can install LoLLMs using pip, the Python package manager. Open your terminal or command prompt and run the following command:

```bash
pip install --upgrade lollms
```

Or, if you want the latest version straight from Git:

```bash
pip install --upgrade git+https://github.com/ParisNeo/lollms.git
```
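
Either way, you can quickly verify that the package is importable (a simple sanity check, not an official lollms command):

```bash
python -c "import lollms; print('lollms imported successfully')"
```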

## GPU support
If you want to use CUDA, either install it directly or use Conda to set everything up:
```bash
conda create --name lollms python=3.10
```
Activate the environment:

```bash
conda activate lollms
```

Install cudatoolkit:

```bash
conda install -c anaconda cudatoolkit
```

Install lollms:

```bash
pip install --upgrade lollms
```

Now you are ready.
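
If you want to double-check that CUDA is actually visible from Python, a quick PyTorch probe works; this assumes `torch` is installed in the same environment (it is not required by lollms itself):

```python
# Optional CUDA sanity check; assumes PyTorch is installed in this environment
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```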

To configure your environment, simply run the settings app:

```bash
lollms-settings
```

The tool is intuitive and will guide you through the configuration process.


The first time you run it, you will be prompted to select a binding.
![image](https://github.com/ParisNeo/lollms/assets/827993/2d7f58fe-089d-4d3e-a21a-0609f8e27969)

Once the binding is selected, you have to install at least one model. You have two options:

1- Install from the internet: just give a link to a model on Hugging Face. For example, if you select the default llamacpp Python binding (7), you can install this model:
```bash
https://huggingface.co/TheBloke/airoboros-7b-gpt4-GGML/resolve/main/airoboros-7b-gpt4.ggmlv3.q4_1.bin
```
2- Install from your local drive: just give the path to a model on your PC. The model will not be copied; only a reference to it is created (see the sketch below). This is useful if you use multiple clients, since you can share your models with other tools.
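
Conceptually, a local install just records where the model file lives instead of duplicating it. Here is a minimal sketch of that idea; the `models_dir` layout and the `.reference` suffix are illustrative assumptions, not necessarily the exact files lollms writes:

```python
# Illustrative sketch: register a local model by reference instead of copying it.
# The actual on-disk layout lollms uses may differ.
from pathlib import Path

def register_local_model(model_path: str, models_dir: str) -> Path:
    source = Path(model_path).expanduser().resolve()
    ref_file = Path(models_dir) / (source.name + ".reference")
    ref_file.parent.mkdir(parents=True, exist_ok=True)
    ref_file.write_text(str(source))  # store only the path, never the weights
    return ref_file
```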


Now you are ready to use the server.

## Library example

Here is the smallest possible example that lets you use the full potential of the tool with nearly no code:
```python
from lollms.console import Conversation

# Use the default configuration and start an interactive console conversation
cv = Conversation(None)
cv.start_conversation()
```
Now you can override the `start_conversation` method to make it do what you want:
```python
from lollms.console import Conversation 

class MyConversation(Conversation):
  def __init__(self, cfg=None):
    super().__init__(cfg, show_welcome_message=False)

  def start_conversation(self):
    prompt = "Once upon a time"
    # Stream each generated chunk to stdout; return True to keep generating
    def callback(text, type=None):
        print(text, end="", flush=True)
        return True
    print(prompt, end="", flush=True)
    output = self.safe_generate(prompt, callback=callback)

if __name__ == '__main__':
  cv = MyConversation()
  cv.start_conversation()
```

Or, if you prefer, here is a full conversation tool written in a few lines:
```python
from lollms.console import Conversation 

class MyConversation(Conversation):
  def __init__(self, cfg=None):
    super().__init__(cfg, show_welcome_message=False)

  def start_conversation(self):
    full_discussion=""
    while True:
      prompt = input("You: ")
      if prompt=="exit":
        return
      if prompt=="menu":
        self.menu.main_menu()
        continue  # do not send the menu command to the model
      # Build the discussion using the personality's message prefixes
      full_discussion += self.personality.user_message_prefix+prompt+self.personality.link_text
      full_discussion += self.personality.ai_message_prefix
      # Stream each generated chunk to stdout; return True to keep generating
      def callback(text, type=None):
          print(text, end="", flush=True)
          return True
      print(self.personality.name+": ",end="",flush=True)
      output = self.safe_generate(full_discussion, callback=callback)
      full_discussion += output.strip()+self.personality.link_text
      print()

if __name__ == '__main__':
  cv = MyConversation()
  cv.start_conversation()
```
Here we use the `safe_generate` method, which does all the context cropping for you, so you can chat forever and never run out of context.
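
To make the cropping idea concrete, here is a toy sketch of the kind of trimming `safe_generate` performs: keep only the most recent part of the discussion so the prompt fits the model's context window. The whitespace split below is a crude stand-in; lollms uses the binding's real tokenizer:

```python
# Toy illustration of context cropping; not the actual lollms implementation.
def crop_to_context(full_discussion: str, max_tokens: int = 2048) -> str:
    tokens = full_discussion.split()       # crude stand-in for real tokenization
    if len(tokens) <= max_tokens:
        return full_discussion
    return " ".join(tokens[-max_tokens:])  # keep only the most recent tokens
```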

## Socket IO Server Usage

Once installed, you can start the LoLLMs Server using the `lollms-server` command followed by the desired parameters.

```bash
lollms-server --host <host> --port <port> --config <config_file> --bindings_path <bindings_path> --personalities_path <personalities_path> --models_path <models_path> --binding_name <binding_name> --model_name <model_name> --personality_full_name <personality_full_name>
```

### Parameters

- `--host`: The hostname or IP address to bind the server to (default: localhost).
- `--port`: The port number to run the server (default: 9600).
- `--config`: Path to the configuration file (default: None).
- `--bindings_path`: The path to the Bindings folder (default: "./bindings_zoo").
- `--personalities_path`: The path to the personalities folder (default: "./personalities_zoo").
- `--models_path`: The path to the models folder (default: "./models").
- `--binding_name`: The default binding to be used (default: "llama_cpp_official").
- `--model_name`: The default model name (default: "Manticore-13B.ggmlv3.q4_0.bin").
- `--personality_full_name`: The full name of the default personality (default: "personality").

### Examples

Start the server with default settings:

```bash
lollms-server
```

Start the server on a specific host and port:

```bash
lollms-server --host 0.0.0.0 --port 5000
```

## API Endpoints

### WebSocket Events

- `connect`: Triggered when a client connects to the server.
- `disconnect`: Triggered when a client disconnects from the server.
- `list_personalities`: List all available personalities.
- `add_personality`: Add a new personality to the server.
- `generate_text`: Generate text based on the provided prompt and selected personality.

### RESTful API

- `GET /personalities`: List all available personalities.
- `POST /personalities`: Add a new personality to the server.
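
A minimal client sketch for these endpoints using `requests` might look like this; the JSON payload fields for `POST /personalities` are assumptions, so check the server code for the exact schema:

```python
# Hedged sketch of the REST endpoints; the POST payload fields are assumptions.
import requests

base_url = "http://localhost:9600"

# List all available personalities
response = requests.get(f"{base_url}/personalities")
response.raise_for_status()
print("Personalities:", response.json())

# Add a new personality (field names are illustrative, not confirmed)
payload = {"name": "my_personality", "path": "/path/to/personality"}
response = requests.post(f"{base_url}/personalities", json=payload)
print("Status:", response.status_code)
```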

Here are examples of how to communicate with the LoLLMs Server using JavaScript and Python.

### JavaScript Example

```javascript
// Establish a WebSocket connection with the server
const socket = io.connect('http://localhost:9600');

// Event: When connected to the server
socket.on('connect', () => {
  console.log('Connected to the server');

  // Request the list of available personalities
  socket.emit('list_personalities');
});

// Event: Receive the list of personalities from the server
socket.on('personalities_list', (data) => {
  const personalities = data.personalities;
  console.log('Available Personalities:', personalities);

  // Select a personality and send a text generation request
  const selectedPersonality = personalities[0];
  const prompt = 'Once upon a time...';
  socket.emit('generate_text', { personality: selectedPersonality, prompt: prompt });
});

// Event: Receive the generated text from the server
socket.on('text_generated', (data) => {
  const generatedText = data.text;
  console.log('Generated Text:', generatedText);

  // Do something with the generated text
});

// Event: When disconnected from the server
socket.on('disconnect', () => {
  console.log('Disconnected from the server');
});
```

### Python Example

```python
import socketio

# Create a SocketIO client
sio = socketio.Client()

# Event: When connected to the server
@sio.on('connect')
def on_connect():
    print('Connected to the server')

    # Request the list of available personalities
    sio.emit('list_personalities')

# Event: Receive the list of personalities from the server
@sio.on('personalities_list')
def on_personalities_list(data):
    personalities = data['personalities']
    print('Available Personalities:', personalities)

    # Select a personality and send a text generation request
    selected_personality = personalities[0]
    prompt = 'Once upon a time...'
    sio.emit('generate_text', {'personality': selected_personality, 'prompt': prompt})

# Event: Receive the generated text from the server
@sio.on('text_generated')
def on_text_generated(data):
    generated_text = data['text']
    print('Generated Text:', generated_text)

    # Do something with the generated text

# Event: When disconnected from the server
@sio.on('disconnect')
def on_disconnect():
    print('Disconnected from the server')

# Connect to the server
sio.connect('http://localhost:9600')

# Keep the client running
sio.wait()
```

Make sure to have the necessary dependencies installed for the JavaScript and Python examples. For JavaScript, you need the `socket.io-client` package, and for Python, you need the `python-socketio` package.
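
For reference, they can be installed like this:

```bash
# JavaScript client dependency
npm install socket.io-client

# Python client dependency
pip install python-socketio
```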

## Contributing

Contributions to the LoLLMs Server project are welcome and appreciated. If you would like to contribute, please follow the guidelines outlined in the [CONTRIBUTING.md](https://github.com/ParisNeo/lollms/blob/main/CONTRIBUTING.md) file.

## License

LoLLMs Server is licensed under the Apache 2.0 License. See the [LICENSE](https://github.com/ParisNeo/lollms/blob/main/LICENSE) file for more information.

## Repository

The source code for LoLLMs Server can be found on GitHub: https://github.com/ParisNeo/lollms

            
