Name | Version | Summary | Date |
---- | ------- | ------- | ---- |
itext2kg | 0.0.5 | Incremental Knowledge Graphs Constructor Using Large Language Models | 2024-09-20 04:37:17 |
fastmlxOns | 0.2.1 | FastMLX is a high-performance, production-ready API to host MLX models. | 2024-09-06 00:59:14 |
fastmlx | 0.2.1 | FastMLX is a high-performance, production-ready API to host MLX models. | 2024-08-10 20:35:44 |
llm-sql-prompt | 0.5.1 | Utility to generate ChatGPT prompts for SQL writing, offering table-structure snapshots and sample row data from Postgres and SQLite databases. | 2024-07-30 22:53:36 |
llm-bench | 0.4.32 | LLM benchmarking tool for Ollama | 2024-07-23 16:39:44 |
ehrmonize | 0.1.2 | ehrmonize is a package to abstract medical concepts using large language models. | 2024-07-02 16:47:54 |
cognitrix | 0.2.5 | Package for creating AI agents using LLMs | 2024-06-14 17:32:58 |
ai-edge-quantizer | 0.0.1 | A quantizer for advanced developers to quantize converted ODML models. | 2024-06-03 23:31:26 |
openai-cost-logger | 0.5.1 | OpenAI Cost Logger | 2024-05-28 14:24:13 |
dreamy | 0.1.5.1 | DReAMy is a Python library to automatically analyse (for now, only) textual dream reports. At the moment, annotations are exclusively based on different [Hall & Van de Castle](https://link.springer.com/chapter/10.1007/978-1-4899-0298-6_2) features, but we are looking forward to expanding our set. For more details on the theoretical aspects, please refer to the [pre-print](https://arxiv.org/abs/2302.14828). | 2024-05-22 21:35:33 |
llmlingua-promptflow | 0.0.1 | To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss. | 2024-05-08 06:38:21 |
llmlingua | 0.2.2 | To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss (usage sketch below the table). | 2024-04-09 08:21:56 |
jailbreakbench | 0.1.3 | An Open Robustness Benchmark for Jailbreaking Language Models | 2024-04-06 15:41:26 |
peanutbutter | 0.1.1 | Property-Based Testing and LLMs: what could go wrong? | 2024-04-05 00:26:29 |
GenCasting | 0.0.8 | Extract token-level probabilities from LLMs for classification-type outputs. | 2024-04-01 23:13:50 |
llm_benchmark | 0.3.1 | LLM benchmark for throughput via Ollama | 2024-03-30 21:47:35 |
os-copilot | 0.1.0 | A self-improving embodied conversational agent, seamlessly integrated into the operating system, that automates daily tasks. | 2024-03-26 03:25:47 |
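
The llmlingua entries above describe prompt compression as a way to speed up LLM inference. As a rough illustration, the sketch below uses the `PromptCompressor` interface that the LLMLingua project documents; the prompt text and `target_token` value are placeholder assumptions, not details taken from the table, and the default compressor model is downloaded on first use.

```python
# Minimal sketch of prompt compression with llmlingua, assuming the
# documented PromptCompressor API; values below are illustrative only.
from llmlingua import PromptCompressor

# Loads the default compression model (weights are downloaded on first use).
compressor = PromptCompressor()

long_prompt = "... a long context with many retrieved documents ..."

# Request roughly 200 tokens of compressed prompt; the returned dict
# includes the compressed text and compression statistics.
result = compressor.compress_prompt(long_prompt, target_token=200)
print(result["compressed_prompt"])
```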