| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| model-alignment | 0.2 | Model Alignment: Aligning prompts to human preferences through natural language feedback | 2024-10-07 17:26:47 |
| omnixai-community | 1.3.2.3 | OmniXAI: An Explainable AI Toolbox | 2024-09-03 11:37:58 |
| flextrees | 0.1.0 | | 2024-03-14 07:15:58 |
| pyxai | 1.0.12 | Explaining Machine Learning Classifiers in Python | 2024-02-02 08:49:41 |
| multixai | 1.0.0 | | 2024-01-21 05:59:27 |
| lofo-importance | 0.3.4 | Leave One Feature Out Importance | 2024-01-16 09:19:52 |
| xi-method | 0.1.7 | Post hoc explanations for ML models through measures of statistical dependence | 2023-09-22 15:34:19 |
| XAISuite | 2.9.2 | XAISuite: Training and Explanation Generation Utilities for Machine Learning Models | 2023-09-05 01:21:31 |
| pyxai-experimental | 1.0.post1 | Explaining Machine Learning Classifiers in Python | 2023-09-04 10:58:11 |
| omnixai | 1.3.1 | OmniXAI: An Explainable AI Toolbox | 2023-07-16 04:58:16 |
| tuw-nlp | 0.1.0 | NLP tools at TUW Informatics | 2023-04-17 09:18:09 |