PyDigger - unearthing stuff about Python


Name  Version  Summary  Date
optimum-neuron 0.0.26 Optimum Neuron is the interface between the Hugging Face Transformers and Diffusers libraries and AWS Trainium and Inferentia accelerators. It provides a set of tools enabling easy model loading, training and inference on single and multiple neuron core settings for different downstream tasks. 2024-11-15 15:54:07
estimagic 0.5.1 Tools to solve difficult numerical optimization problems. 2024-11-13 17:11:56
optimagic 0.5.1 Tools to solve difficult numerical optimization problems. 2024-11-13 17:03:15
pysmatch 0.7 Propensity Score Matching (PSM) in Python 2024-11-13 06:29:48
gemlib 0.12.1 GEMlib scientific compute library for epidemic modelling 2024-11-12 05:27:28
geo-espresso 0.3.16 Earth Science PRoblems for the Evaluation of Strategies, Solvers and Optimizers 2024-11-12 02:28:18
pyabc 0.12.15 Distributed, likelihood-free ABC-SMC inference 2024-11-11 09:22:18
BayesicFitting 3.2.2 A Python Toolbox for Bayesian fitting. 2024-11-07 17:13:24
metalm-xclient 0.1.0 Client for the Xuelang model inference service 2024-11-04 13:31:32
everai 0.2.38 Client library to manage everai infrastructure 2024-11-01 07:05:30
optimum 1.23.3 Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality. 2024-10-29 17:43:32
tritonclient 2.51.0 Python client library and utilities for communicating with Triton Inference Server 2024-10-28 20:27:32
skeem 0.1.1 Infer SQL DDL statements from tabular data 2024-10-22 07:37:52
causalbench-asu 0.1rc9 Spatio Temporal Causal Benchmarking Platform 2024-10-21 23:25:24
torch-tensorrt 2.5.0 Torch-TensorRT is a package which allows users to automatically compile PyTorch and TorchScript modules to TensorRT while remaining in PyTorch 2024-10-18 01:21:50
friendli-client 1.5.6 Client of Friendli Suite. 2024-10-17 04:58:33
fiestaEM 0.0.1 Fast inference of electromagnetic signals with JAX 2024-10-17 00:36:58
inference-server 1.3.1 Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image. 2024-10-09 07:19:14
hot-fair-utilities 1.3.0 Utilities for AI-Assisted Mapping fAIr 2024-10-04 09:01:15
bedrock-inference 0.0.13 Login into your AWS and call Bedrock models in Python 2024-09-30 10:16:12