PyDigger - unearthing stuff about Python


| Name | Version | Summary | Date |
|------|---------|---------|------|
| TokenProbs | 1.0.3 | Extract token-level probabilities from LLMs for classification-type outputs. | 2024-10-31 18:49:33 |
| flow-judge | 0.1.2 | A small yet powerful LM Judge | 2024-10-29 07:32:52 |
| fira | 1.0.1 | Fira, a plug-and-play memory-efficient training framework for LLMs. | 2024-10-05 15:40:33 |
| ConTextMining | 0.0.2 | Complementing topic models with few-shot in-context learning to generate interpretable topics | 2024-09-23 19:05:42 |
| itext2kg | 0.0.5 | Incremental Knowledge Graphs Constructor Using Large Language Models | 2024-09-20 04:37:17 |
| fastmlxOns | 0.2.1 | FastMLX is a high-performance, production-ready API for hosting MLX models. | 2024-09-06 00:59:14 |
| fastmlx | 0.2.1 | FastMLX is a high-performance, production-ready API for hosting MLX models. | 2024-08-10 20:35:44 |
| datadreamer.dev | 0.38.0 | Prompt. Generate Synthetic Data. Train & Align Models. | 2024-08-02 20:03:47 |
| llm-sql-prompt | 0.5.1 | Utility to generate ChatGPT prompts for SQL writing, offering table-structure snapshots and sample row data from Postgres and SQLite databases. | 2024-07-30 22:53:36 |
| llm-benchmark | 0.3.22 | LLM Benchmark for Throughputs via Ollama | 2024-07-24 20:50:06 |
| llm-bench | 0.4.32 | LLM Benchmarking tool for OLLAMA | 2024-07-23 16:39:44 |
| ehrmonize | 0.1.2 | ehrmonize is a package to abstract medical concepts using large language models. | 2024-07-02 16:47:54 |
| cognitrix | 0.2.5 | Package for creating AI agents using LLMs | 2024-06-14 17:32:58 |
| ai-edge-quantizer | 0.0.1 | A quantizer for advanced developers to quantize converted ODML models. | 2024-06-03 23:31:26 |
| openai-cost-logger | 0.5.1 | OpenAI Cost Logger | 2024-05-28 14:24:13 |
| dreamy | 0.1.5.1 | DReAMy is a Python library to automatically analyse (for now only) textual dream reports. At the moment, annotations are based exclusively on [Hall & Van de Castle](https://link.springer.com/chapter/10.1007/978-1-4899-0298-6_2) features, but we are looking forward to expanding our set. For more details on the theoretical aspects, please refer to the [pre-print](https://arxiv.org/abs/2302.14828). | 2024-05-22 21:35:33 |
| llmlingua-promptflow | 0.0.1 | To speed up LLM inference and enhance the LLM's perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss. | 2024-05-08 06:38:21 |
| llmlingua | 0.2.2 | To speed up LLM inference and enhance the LLM's perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss. | 2024-04-09 08:21:56 |
| jailbreakbench | 0.1.3 | An Open Robustness Benchmark for Jailbreaking Language Models | 2024-04-06 15:41:26 |
| peanutbutter | 0.1.1 | Property-Based Testing and LLMs, what could go wrong? | 2024-04-05 00:26:29 |