PyDigger - unearthing stuff about Python


Name | Version | Summary | Date
--- | --- | --- | ---
InterpreTS | 0.3.0 | Feature extraction from time series to support the creation of interpretable and explainable predictive models. | 2024-12-11 23:19:50
locking-activations | 0.1.0 | Locking activations via k-sparse autoencoders. | 2024-09-17 12:06:40
lit-nlp | 1.2 | 🔥LIT: The Learning Interpretability Tool | 2024-06-26 16:32:41
codebook-features | 0.1.2 | Sparse and discrete interpretability tool for neural networks | 2024-02-05 22:09:52
meitorch | 0.223 | Generate Most Exciting Input to explore and understand PyTorch model's behavior by identifying input samples that induce high activation from specific neurons in your model. | 2024-02-03 11:04:20
concept-erasure | 0.2.3 | Erasing concepts from neural representations with provable guarantees | 2024-01-10 19:49:32
dice-ml | 0.11 | Generate Diverse Counterfactual Explanations for any machine learning model. | 2023-10-27 03:54:08
RIM-interpret | 0.0.5 | Interpretability metrics for machine learning models | 2023-08-21 04:01:17
ClarityAI | 1.0.0 | ClarityAI is a Python package designed to empower machine learning practitioners with a wide range of interpretability methods to enhance the transparency and explainability of their ML models. | 2023-08-12 20:26:16
eleuther-elk | 0.1.1 | Keeping language models honest by directly eliciting knowledge encoded in their activations | 2023-07-20 23:32:21
captum-rise | 1.0 | The implementation of the RISE algorithm for the Captum framework | 2023-07-01 00:56:38
tuned-lens | 0.1.1 | Tools for understanding how transformer predictions are built layer-by-layer | 2023-06-13 16:10:12
canonical-sets | 0.0.3 | Exposing Algorithmic Bias with Canonical Sets. | 2023-02-03 13:20:29
jax-rex | 0.0.11 | Jax-based Recourse Explanation Library | 2022-12-24 21:55:36
discern-xai | 0.0.25 | DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods | 2022-12-10 20:45:12
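The name, version, and summary fields above come from each package's PyPI metadata, which is publicly available through PyPI's JSON endpoint (`https://pypi.org/pypi/<name>/json`). A minimal sketch of how one such row could be rebuilt — `fetch_metadata` and `summarize` are illustrative helper names, not part of PyDigger or PyPI:

```python
import json
from urllib.request import urlopen


def fetch_metadata(name: str) -> dict:
    """Download a package's metadata record from PyPI's JSON API."""
    with urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        return json.load(resp)


def summarize(meta: dict) -> tuple:
    """Reduce a full metadata record to the (name, version, summary) columns above."""
    info = meta["info"]
    return (info["name"], info["version"], info["summary"])
```

For example, `summarize(fetch_metadata("tuned-lens"))` would yield a row like `("tuned-lens", "0.1.1", "Tools for understanding how transformer predictions are built layer-by-layer")`, modulo whatever version is current.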