| Name | Version | Summary | Date |
|------|---------|---------|------|
| llm-queue-task-manager | 0.1.3 | A Redis-based queue task manager for LLM processing with intelligent token budget management | 2025-08-05 22:20:24 |
| ovllm | 0.3.0 | One-line vLLM wrapper with gorgeous DSPy integration | 2025-08-04 22:42:36 |
| mcp-server-time | 2025.8.4 | A Model Context Protocol server providing tools for time queries and timezone conversions for LLMs | 2025-08-04 14:52:29 |
| copyfiles-cli | 0.1.6 | Generate a copyfiles.txt containing project tree + file contents. | 2025-08-01 23:07:40 |
| vetting-python | 0.1.0 | A Python implementation of the VETTING (Verification and Evaluation Tool for Targeting Invalid Narrative Generation) framework for LLM safety and educational applications | 2025-08-01 04:24:02 |
| textlasso | 0.1.3 | A simple package for working with LLM text responses and prompts. | 2025-07-31 13:45:59 |
| azllm | 0.1.6 | A Python package that provides an easier user interface for multiple LLM providers. | 2025-07-31 01:06:46 |
| autofic-core | 0.1.1 | A solution for remediating vulnerable source code using LLMs. | 2025-07-30 02:05:20 |
| py-context-llm7 | 0.1.0 | Minimal Python client for https://api.context.llm7.io | 2025-07-29 20:42:36 |
| hibiz-llm-wrapper | 1.0.5 | A comprehensive Python wrapper for Large Language Models with database integration and usage tracking | 2025-07-29 12:17:36 |
| document-data-extractor | 1.0.4 | Best open-source document to markdown extractor for LLM training data. Convert PDF, Word, PowerPoint, Excel, images, URLs to clean markdown, JSON, HTML locally. Alternative to Unstructured, Docling, Marker, MarkItDown, MinerU, PaddleOCR, Tesseract | 2025-07-29 08:25:56 |
| llm-execution-time-predictor | 0.1.2 | LLM batch inference latency predictor and profiler CLI tool | 2025-07-29 03:16:27 |
| codeflash-ali-dev | 0.10.15 | Client for codeflash.ai - automatic code performance optimization, powered by AI | 2025-07-28 22:16:47 |
| model-forge-llm | 2.3.0 | A reusable library for managing LLM providers, authentication, and model selection. | 2025-07-28 15:34:34 |
| llmunchies | 0.2.0 | A lightweight, model-agnostic context engine for LLMs. Feed your model tasty context. | 2025-07-28 00:40:02 |
| llm-user-memory | 0.1.1 | Transparent memory system for LLMs | 2025-07-27 16:25:16 |
| llm-contracts | 1.0.0 | LLM output validation, linting, and assertion layer | 2025-07-26 12:08:26 |
| crashlens-logger | 1.0.4 | CLI tool to generate structured LLM logs for CrashLens cost detection | 2025-07-25 19:45:15 |
| aikitx | 1.0.0 | A comprehensive GUI toolkit for Large Language Models (LLMs) with GGUF support, document processing, email automation, and multi-backend inference | 2025-07-25 19:44:31 |
| qwen-agent | 0.0.29 | Qwen-Agent: Enhancing LLMs with Agent Workflows, RAG, Function Calling, and Code Interpreter. | 2025-07-25 04:52:50 |