Name | Version | Summary | Date |
--- | --- | --- | --- |
iauto-desktop | 0.1.0 | iauto-desktop is a desktop application for iauto | 2024-01-29 08:46:29 |
llmnet | 0.1.0 | A library designed to harness the diversity of thought by combining multiple LLMs. | 2024-01-29 01:14:42 |
pydoxtools | 0.8.0 | This library contains a set of tools to extract and synthesize structured information from documents | 2024-01-28 21:47:04 |
live-illustrate | 0.2.0 | Live-ish illustration for your role-playing campaign | 2024-01-27 08:30:07 |
SciToolsSciBot | 1.0.8 | SciBot scripts for a domain-specific chatbot. | 2024-01-26 15:45:20 |
umshini | 0.1.4 | Umshini client for playing in MARL tournaments | 2024-01-24 17:29:31 |
bard-webapi | 0.2.4 | Reverse-engineered async API for Google Bard inspired by Gemini | 2024-01-21 11:24:02 |
GeAI | 0.1.8 | Generative Artificial Intelligence | 2024-01-20 05:44:22 |
argilla-haystack | 0.0.1b0 | Argilla-Haystack Integration | 2024-01-19 12:18:33 |
mindcraft | 0.2.4 | Mindcraft: an LLM-based engine for creating real NPCs, powered by Hugging Face, AWQ-quantized LLMs (thanks @TheBloke), and vLLM. It follows a RAG approach with chunk or sentence splitting and a vector store; ChromaDB is currently the supported vector store, with chunk splitting via `tiktoken` or sentence splitting via `spacy`. | 2024-01-17 12:40:17 |
promptcraft | 0.4.8 | PromptCraft: A Prompt Perturbation Toolkit for Prompt Robustness Analysis | 2024-01-16 22:40:56 |
nuxai | 0.2 | LLM pipelines | 2024-01-14 16:13:43 |
glai | 0.1.3 | Easy deployment of quantized Llama models on CPU | 2024-01-13 19:04:27 |
gguf-modeldb | 0.0.3 | A Llama 2 quantized GGUF model database with over 80 preconfigured models downloadable in one line; easily add your own models or adjust settings. Don't struggle with manual downloads again. | 2024-01-13 18:57:53 |
MyLlmUtils | 0.1.3 | Personal utility package to make using LLMs easier | 2024-01-13 07:43:11 |
VortexGPT | 0.1 | VORTEX provides free access to text and image generation models. | 2024-01-12 11:36:02 |
promptcache | 0.0.1a1 | A tool for caching prompts and completions based on embeddings | 2024-01-12 10:09:51 |
RAGchain | 0.2.6 | Build advanced RAG workflows with LLM, compatible with Langchain | 2024-01-09 05:06:11 |
llama-memory | 0.0.1a1 | Easy deployment of quantized Llama models on CPU | 2024-01-09 01:59:55 |
rag_webquery | 0.1.1 | A command line utility to query websites using a local LLM | 2024-01-08 08:08:09 |