PyDigger - unearthing stuff about Python


Name: llmlingua-promptflow
Version: 0.0.1
Summary: To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
Date: 2024-05-08 06:38:21

Name: llmlingua
Version: 0.2.2
Summary: To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
Date: 2024-04-09 08:21:56

Both packages are published by the LLMLingua team.
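The summaries above describe prompt and KV-Cache compression for faster LLM inference. As a minimal sketch of how the llmlingua package is commonly used (assuming the PromptCompressor class and compress_prompt method from llmlingua 0.2.x; the model name, flags, and result field names here are assumptions for illustration, not taken from this listing):

# Minimal sketch of prompt compression with llmlingua (assumed 0.2.x API).
# Install first: pip install llmlingua
from llmlingua import PromptCompressor

# The model name below is an assumption for illustration; llmlingua fetches a
# small compression model from Hugging Face on first use.
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,  # assumed flag selecting the LLMLingua-2 compressor
)

long_prompt = "..."  # a long context to shrink before sending it to an LLM

# compress_prompt is assumed to return a dict containing the compressed text
# and token statistics; "rate" is the target compression ratio.
result = compressor.compress_prompt(long_prompt, rate=0.33)
print(result["compressed_prompt"])

The compressed prompt can then be passed to any downstream LLM call in place of the original context, which is how the advertised compression translates into faster inference.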