| Name | Version | Summary | Date |
|------|---------|---------|------|
| levalicious-mcp-server-time | 0.0.0 | A Model Context Protocol server providing tools for time queries and timezone conversions for LLMs | 2025-07-13 05:25:22 |
| novaeval | 0.3.2 | A comprehensive, open-source LLM evaluation framework for testing and benchmarking AI models | 2025-07-12 20:19:59 |
| codeflash-ali-dev | 0.10.11 | Client for codeflash.ai - automatic code performance optimization, powered by AI | 2025-07-12 15:22:19 |
| cliops | 4.4.4 | Advanced CLI tool for structured, pattern-based prompt optimization and state management | 2025-07-12 06:57:52 |
| HAPI-SDK | 0.2.23 | A plug-and-play SDK for de-hallucinating outputs from LLMs using semantic entropy and trained classifiers | 2025-07-12 02:44:18 |
| galileo-core | 3.56.0 | Shared schemas and configuration for Galileo's Python packages | 2025-07-11 22:50:13 |
| ggufloader | 1.0.3 | A local LLM runner for loading and chatting with GGUF models | 2025-07-11 19:31:43 |
| neuro-san | 0.5.43 | NeuroAI data-driven System for multi-Agent Networks - client, library and server | 2025-07-11 17:49:53 |
| mcp-cli | 0.2.4 | A CLI for the Model Context Provider | 2025-07-11 16:39:56 |
| chuk-tool-processor | 0.6.1 | Async-native framework for registering, discovering, and executing tools referenced in LLM responses | 2025-07-11 16:14:25 |
| openaivec | 0.9.1 | Generative mutation for tabular calculation | 2025-07-11 12:28:10 |
| talktollm | 0.4.1 | A Python utility for interacting with large language models (LLMs) via web automation | 2025-07-11 06:51:42 |
| codeflash | 0.15.4 | Client for codeflash.ai - automatic code performance optimization, powered by AI | 2025-07-11 04:54:38 |
| testkitLLM | 0.1.5 | Testing framework for LLM-based agents | 2025-07-11 03:31:51 |
| llm-requesty | 0.0.1 | LLM plugin for models hosted by requesty | 2025-07-11 01:02:23 |
| llm-agent-protector | 0.1.0 | Polymorphic Prompt Assembler to protect LLM agents from prompt injection and prompt leak | 2025-07-10 23:16:57 |
| shrink-prompt | 1.0.0 | Lightning-fast LLM prompt compression with 30-70% token reduction. Domain-specific rules for legal, medical, and technical content. Sub-20ms processing, zero external calls. | 2025-07-10 17:51:08 |
| llmshield-spanish-corpus | 1.0.1 | Spanish-language corpus for LLMShield entity detection | 2025-07-10 15:32:37 |
| LLM-Bridge | 1.7.9 | A Bridge for LLMs | 2025-07-10 13:18:12 |
| ai-microcore | 4.2.3 | Minimalistic Foundation for AI Applications | 2025-07-10 11:13:45 |