PyDigger - unearthing stuff about Python


| Name | Version | Summary | Date |
|------|---------|---------|------|
| intel-ai-safety | 0.0.0 | Explainable AI (XAI) tooling. XAI is used to discover and explain a model's predictions in a way that is interpretable to the user, exposing relevant information in the dataset, feature set, and model's algorithms. | 2024-04-23 02:52:49 |
| concept-erasure | 0.2.3 | Erasing concepts from neural representations with provable guarantees (see the usage sketch below) | 2024-01-10 19:49:32 |
| osculari | 0.0.4 | Open-source library for exploring artificial neural networks with psychophysical experiments. | 2023-12-21 20:31:42 |
| contrastive-xai | 0.1.1 | Contrastive explainable AI algorithms | 2023-07-31 02:54:27 |
| eleuther-elk | 0.1.1 | Keeping language models honest by directly eliciting knowledge encoded in their activations | 2023-07-20 23:32:21 |
| tuned-lens | 0.1.1 | Tools for understanding how transformer predictions are built layer by layer | 2023-06-13 16:10:12 |
| time-interpret | 0.3.0 | Model interpretability library for PyTorch with a focus on time series. | 2023-06-06 01:33:02 |
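As an illustration of how one of these packages is used, here is a minimal sketch of concept erasure with concept-erasure's `LeaceEraser`, following the usage pattern shown in the project's README; the data is random and the tensor shapes are illustrative assumptions, not values from this listing.

```python
import torch
from concept_erasure import LeaceEraser

# Illustrative shapes (assumptions): 2048 samples of 128-dimensional
# representations, each tagged with a binary concept label.
n, d, k = 2048, 128, 2
X = torch.randn(n, d)
Z = torch.randint(0, k, (n,))

# Fit a LEACE eraser and apply it: the result should contain no
# linearly recoverable information about the concept labels Z.
eraser = LeaceEraser.fit(X, Z)
X_erased = eraser(X)
```

After erasure, no linear classifier should be able to predict Z from `X_erased` better than chance, which is the "provable guarantee" the package summary refers to.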