trustai

Name: trustai
Version: 0.1.12
Home page: https://github.com/PaddlePaddle/TrustAI
Summary: baidu TrustAI
Author: Baidu NLP
License: Apache License 2.0
Keywords: baidu, trustai, interpretation
Upload time: 2022-12-07 16:28:48
Requirements: none recorded

<p align="center">
  <img src="./imgs/trustai.png" align="middle"  width="500" />
</p>


<p align="center">
<a href="https://pypi.org/project/trustai/"><img src="https://img.shields.io/pypi/v/trustai.svg?&color=green"></a>
<a href="./LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg"></a>
<a href=""><img src="https://img.shields.io/badge/python-3.6.2+-orange.svg"></a>
<a href=""><img src="https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-red.svg"></a>
</p>

<h4 align="center">
  <a href=#installation> Installation </a> |
  <a href=#quick-start> Quick Start </a> |
  <a href=#trustworthiness-analysis> Trustworthiness Analysis </a> |
  <a href=#trustworthiness-enhancement> Trustworthiness Enhancement </a> |
  <a href=#application-cases> Application Cases </a> |
  <a href=#leaderboards> Leaderboards </a> |
  <a href=#academic-literature> Academic Literature </a>
</h4>

**TrustAI** is a trustworthy AI toolkit built on the deep learning platform [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) that integrates trustworthiness analysis and enhancement. It helps NLP developers improve both the performance and the trustworthiness of deep learning models, and promotes the safe and reliable deployment of models in real applications.


## News 📢
* 🔥 2022.10.30 The [interpretability evaluation datasets](https://www.luge.ai/#/luge/task/taskDetail?taskId=15) are now hosted on LUGE, with human-annotated evidence for part of the data. You are welcome to use them.
* 🔥 2022.8.29 The [PaddleNLP text classification system](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/applications/text_classification) now integrates TrustAI capabilities. Give it a try.
* 🔥 2022.8.20 TrustAI [released](https://mp.weixin.qq.com/s/Ph3uzbUEUj1K7UALdM6OCA) its trustworthiness enhancement capabilities and application cases.
* 🎉 2022.5.20 First [release](https://mp.weixin.qq.com/s/AqYReKRnki9TwI5huY1f5Q) of TrustAI!

## <p id="trustworthiness-analysis">👏Trustworthiness Analysis</p>
TrustAI provides both feature-level and instance-level evidence analysis methods that explain model predictions from all angles, helping developers understand how the model arrives at its predictions and helping users make sound decisions based on the evidence.

### Feature-Level Evidence Analysis

Given a model's prediction, extract from the input text the evidence the model relied on, i.e., the important words in the input that support the prediction.

<p align="center">
  <img src="./imgs/token.png" align="middle", width="500" />
</p>

For an application example, see AI Studio - [Feature-level evidence analysis with TrustAI - Chinese sentiment analysis](https://aistudio.baidu.com/aistudio/projectdetail/4431334)

For more details on the method, see the [feature-level evidence analysis documentation](./trustai/interpretation/token_level/README.md)

### Instance-Level Evidence Analysis


From the training data, find the training examples that most influence the prediction on the current input, and use them as the evidence the model relies on.
<p align="center">
  <img src="./imgs/example.png" align="middle" width="500" />
</p>



For an application example, see AI Studio - [Instance-level evidence analysis with TrustAI - Chinese sentiment analysis](https://aistudio.baidu.com/aistudio/projectdetail/4433286)

For more details on the method, see the [instance-level evidence analysis documentation](./trustai/interpretation/example_level/README.md)

## <p id="trustworthiness-enhancement">💥Trustworthiness Enhancement</p>

Building on the analysis of the evidence behind model predictions, TrustAI provides defect identification and matching optimization schemes, i.e., trustworthiness enhancement. Currently, from the angle of optimizing training data and the training mechanism, TrustAI open-sources identification and mitigation schemes for three kinds of data defects, aiming to help developers fix training-data problems at minimal cost. TrustAI also open-sources an evidence-guided optimization of the prediction mechanism that addresses long-text understanding.

### Automatic Identification of Dirty Training Data


TrustAI automatically identifies dirty data (i.e., poorly labeled examples), reducing the cost of inspecting data by hand.

As shown below, on two public datasets, the proportion of dirty data identified by TrustAI is far higher than that of a random-selection strategy.

<p align="center">
<img align="center" src="./imgs/dirty_analysis.png" width=400><br>
Figure 1: Dirty-data identification performance of different strategies
</p>

For an application example, see AI Studio - [Automatic identification of dirty training data](https://aistudio.baidu.com/aistudio/projectdetail/4434058)
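
The tutorial builds on TrustAI's instance-level interpreters. As a rough sketch of the idea (the ranking heuristic below is an illustrative assumption, not the tutorial's exact algorithm), one can rank training examples by how often they show up as negative evidence across predictions and hand the top of that ranking to annotators for review:

```python
from collections import Counter

from trustai.demo import DEMO
from trustai.interpretation import FeatureSimilarityModel

demo = DEMO('chnsenticorp')
model = demo.get_model()
train_data, train_dataloader = demo.get_train_data_and_dataloader()
interpreter = FeatureSimilarityModel(model, train_dataloader, classifier_layer_name='classifier')

texts_to_probe = ["这个宾馆比较陈旧了"]  # replace with your own inputs

# Illustrative heuristic: training examples that repeatedly appear as negative
# evidence across many inputs are candidates for a label-quality review.
suspect_counter = Counter()
for text in texts_to_probe:
    _, model_inputs = demo(text)
    for res in interpreter(model_inputs):
        suspect_counter.update(res.neg_indexes)

# Manually inspect the most frequently flagged training examples
# (assumes train_data supports integer indexing).
for idx, count in suspect_counter.most_common(10):
    print(count, train_data[idx])
```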

### Identifying Insufficient Training-Data Coverage and Effective Data Augmentation

Insufficient training-data coverage hurts the model on the corresponding test data. Using its instance-level evidence analysis, TrustAI identifies the test examples that the training data covers poorly (this set is called the target set); model performance on the target set drops by roughly 20%. Further, to reduce annotation cost, TrustAI provides an effective data-selection strategy: from unlabeled data, it selects for annotation the examples that improve training-data coverage and model performance.

As shown below, on two public datasets, TrustAI's effective data-augmentation strategy improves the model on the target data far more than random selection does.

<p align="center">
<img align="center" src="./imgs/sparse_analysis.png" width=400><br>
Figure 2: Performance gains on the target set
</p>

For an application example, see AI Studio - [Identifying insufficient training-data coverage and effective data augmentation](https://aistudio.baidu.com/aistudio/projectdetail/4434403)
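
A minimal sketch of the target-set idea, reusing the interpreter set up as in the Quick Start example further below (the score and threshold here are illustrative assumptions, not the tutorial's exact recipe): a test input whose strongest supporting training examples are only weakly similar has little support in the training data and goes into the target set.

```python
import numpy as np

# demo and interpreter are set up as in the Quick Start example;
# test_texts is your own list of test input strings.
target_set = []
for i, text in enumerate(test_texts):
    _, model_inputs = demo(text)
    res = interpreter(model_inputs)[0]
    support = float(np.mean(res.pos_scores))  # average similarity of the top supporting examples
    if support < 0.5:  # the threshold is an assumption; tune it on your data
        target_set.append(i)
```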


### Identifying and Mitigating Distribution Bias in Training Data
Neural network models exploit biases in the dataset to make predictions, which keeps them from truly learning to understand language and leaves them fragile. TrustAI provides two strategies, distribution correction and weight correction, that effectively mitigate the impact of data bias on training without any human intervention.

As shown below, on two public robustness datasets, TrustAI's weight-correction and distribution-correction strategies each achieve clear gains.

<p align="center">
<img align="center" src="./imgs/bias_correction.png" width=400><br>
Figure 3: Model performance on robustness datasets after bias correction
</p>

For application examples, see AI Studio - [Data distribution-bias mitigation - weight correction](https://aistudio.baidu.com/aistudio/projectdetail/4434616) and [Data distribution-bias mitigation - distribution correction](https://aistudio.baidu.com/aistudio/projectdetail/4434652)
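
As a sketch of the weight-correction direction (the actual strategy, less-learn-shortcut, lives in the tutorial; the reweighting below is a simplified stand-in): give each training example a bias degree describing how well shortcut features alone predict its label, then scale its loss so heavily biased examples contribute less.

```python
import paddle.nn.functional as F

def weighted_loss(logits, labels, bias_degree):
    """Simplified stand-in for loss reweighting: `bias_degree` is a float
    tensor in [0, 1], precomputed per example (e.g. from token-label
    statistics on the training set). Examples that shortcuts explain well
    are down-weighted so the model must rely on real evidence."""
    per_example = F.cross_entropy(logits, labels, reduction='none')
    weights = 1.0 - bias_degree  # simplest scheme; the tutorial's weighting differs
    return (weights * per_example).mean()
```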

### Evidence Extraction and Evidence-Based Prediction - Optimizing the Prediction Mechanism
In long-text understanding tasks, redundant information in the input often distracts the model and degrades its robustness. TrustAI provides a two-stage "extract evidence, then predict on the evidence" scheme that significantly improves model performance on long-text tasks, especially robustness.

We trained a model on the DuReader-robust training data and validated it on the DuReader-robust dev and test sets and the DuReader-checklist test set, which respectively measure basic performance, robustness, and domain generalization; the answer exact-match rate improves significantly on every set.

<p align="center">
<img align="center" src="./imgs/redundancy_removal.png" width=400><br>
Figure 4: Performance of the two-stage extract-then-predict strategy on reading comprehension
</p>

For an application example, see AI Studio - [Evidence extraction and evidence-based prediction - Chinese reading comprehension](https://aistudio.baidu.com/aistudio/projectdetail/4525331)
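
The two-stage scheme boils down to "select, then predict" (a minimal sketch; `selector` and `reader` are placeholders for the two trained models the tutorial builds):

```python
def two_stage_predict(question, long_context, selector, reader):
    """Stage 1: keep only the sentences the selector marks as evidence.
    Stage 2: run the reader on the condensed context, so redundant text
    cannot distract it."""
    sentences = long_context.split('。')  # naive sentence split, for illustration
    evidence = [s for s in sentences if selector(question, s)]  # stage 1: evidence extraction
    return reader(question, '。'.join(evidence))  # stage 2: evidence-based prediction
```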

**For more on trustworthiness enhancement, read the [tutorials](./tutorials).**


## Installation

### Requirements
* `python`: >=3.6.2
* [`paddlepaddle`](https://www.paddlepaddle.org.cn/): >=2.0

### Install with pip

```shell
# requires paddlepaddle; the CUDA build is recommended
pip install -U paddlepaddle-gpu
pip install -U trustai
```

### Install from source
```shell
git clone git@github.com:PaddlePaddle/TrustAI.git
cd TrustAI
python setup.py install
```
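
After either route, a quick import check confirms the environment (assuming both packages installed cleanly):

```python
import paddle
import trustai  # should import without error once installation succeeds

print(paddle.__version__)  # TrustAI requires paddlepaddle >= 2.0
```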


## Quick Start

### Feature-Level Evidence Analysis
<details><summary>&emsp;Taking the Integrated Gradients method as an example, it is invoked as follows:</summary>

```python
from trustai.demo import DEMO
from trustai.interpretation import IntGradInterpreter
from trustai.interpretation import visualize

demo = DEMO('chnsenticorp')
# init demo model
model = demo.get_model()
tokens, model_inputs = demo("这个宾馆比较陈旧了")
# tokens: List[List[str]], [['[CLS]', '这', '个', '宾', '馆', '比', '较', '陈', '旧', '了', '[SEP]']]
# model_inputs: List[paddle.Tensor], satisfies `logits = model(*model_inputs)`
# init interpreter
interpreter = IntGradInterpreter(model)
result = interpreter(model_inputs)
# result: List[IGResult]; result[0].attributions aligns one-to-one with tokens[0], giving each token's contribution to the prediction, i.e., its evidence score.
# result[0].attributions: [ 0.04054353,  0.12724458, -0.00042592,  0.01736268,  0.07130871, -0.00350687,
#                           0.01605285,  0.04392833,  0.04841821, -0.00514487,  0.13098583]

# visualize the result
html = visualize(result, words=tokens)
# TrustAI renders the attributions as HTML, shading each token by its evidence score: the darker the color, the stronger the support; the lighter, the weaker.
```
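
&emsp;Assuming `visualize` returns the HTML as a string (as the snippet above suggests), it can be saved and opened in a browser:

```python
# Save the visualization and open it in a browser to inspect the token shading.
with open('ig_result.html', 'w') as f:
    f.write(html)
```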

&emsp;More details - [feature-level evidence analysis documentation](./trustai/interpretation/token_level/README.md)


</details>


### Instance-Level Evidence Analysis

<details><summary>&emsp;Taking the Feature Similarity method as an example, it is invoked as follows:</summary>

```python
from trustai.demo import DEMO
from trustai.interpretation import FeatureSimilarityModel
demo = DEMO('chnsenticorp')
# init demo model
model = demo.get_model()
tokens, model_inputs = demo("房间设备比较陈旧,没五星标准 客人非常不满意")
# tokens: List[List[str]]
# model_inputs: List[paddle.Tensor], satisfies `logits = model(*model_inputs)`
# get the dataloader of the train data; satisfies `logits = model(*next(train_data_loader))`
train_data, train_dataloader = demo.get_train_data_and_dataloader()
# init interpreter
interpreter = FeatureSimilarityModel(model, train_dataloader, classifier_layer_name='classifier')
result = interpreter(model_inputs)
# result: List[ExampleResult], [ExampleResult(pred_label=0, pos_indexes=(7112, 1757, 4487), neg_indexes=(8952, 5986, 1715), pos_scores=(0.9454082250595093, 0.9445762038230896, 0.9439479112625122), neg_scores=(-0.2316494882106781, -0.23641490936279297, -0.23641490936279297))]
# ExampleResult.pos_indexes: List[int], indexes (into the training set) of the training examples that support the current prediction
# ExampleResult.neg_indexes: List[int], indexes (into the training set) of the training examples that oppose the current prediction
# ExampleResult.pos_scores: List[float], support scores of the supporting training examples
# ExampleResult.neg_scores: List[float], support scores of the opposing training examples
```
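
&emsp;The returned indexes map back into the training set; assuming `train_data` supports integer indexing, the strongest evidence can be pulled out directly:

```python
# Look up the most supportive and the most opposing training example.
print("most supportive:", train_data[result[0].pos_indexes[0]])
print("most opposing:", train_data[result[0].neg_indexes[0]])
```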

&emsp;More details - [instance-level evidence analysis documentation](./trustai/interpretation/example_level/README.md)

</details>

For more examples of how to use the APIs, see the [examples directory](./examples)


## <p id="application-cases">🚀Application Cases</p>


<details><summary> &emsp;Automatically identify dirty data to cut manual inspection costs </summary>
</br>

&emsp;&emsp;&emsp;[Automatic identification of dirty training data](./tutorials/dirty_data_identification)

</details>

<details><summary> &emsp;Get bigger gains at half the annotation cost </summary>
</br>

&emsp;&emsp;&emsp;[Identifying insufficient training-data coverage and effective data augmentation](./tutorials/sparse_data_identification)

</details>

<details><summary> &emsp;Mitigate dataset bias to improve model robustness </summary>

&emsp;&emsp;&emsp;[Dataset distribution-bias mitigation - weight-correction strategy](./tutorials/data_bias_identification/less_learn_shortcut)

&emsp;&emsp;&emsp;[Dataset distribution-bias mitigation - distribution-correction strategy](./tutorials/data_bias_identification/data_distribution_correction)

</details>

<details><summary> &emsp;Extract evidence and predict on it to improve model robustness </summary>

&emsp;&emsp;&emsp;[Evidence extraction and evidence-based prediction](./tutorials/redundancy_removal)

</details>

</br>

For more on the application cases, see the [tutorials directory](./tutorials/)

## Leaderboards

Download the evaluation datasets: [LUGE - Interpretability Evaluation](https://www.luge.ai/#/luge/task/taskDetail?taskId=15)

<details><summary> &emsp;Time-limited competitions</summary>

* [2022 CCF BDCI Interpretable Reading Comprehension Evaluation based on the Wenxin NLP large model](https://aistudio.baidu.com/aistudio/competition/detail/394/0/introduction), competition window: 2022/08/29 - 2022/12/31
* [Xingzhi Cup - Deep Learning Model Interpretability Competition](http://www.aiinnovation.com.cn/#/trackDetail?id=23), ended


</details>

<details><summary> &emsp;Ongoing competitions</summary>

* [LUGE: Sentiment Analysis Interpretability Evaluation (Chinese)](https://aistudio.baidu.com/aistudio/competition/detail/443/0/introduction)
* [LUGE: Sentiment Analysis Interpretability Evaluation (English)](https://aistudio.baidu.com/aistudio/competition/detail/449/0/introduction)
* [LUGE: Text Similarity Interpretability Evaluation (Chinese)](https://aistudio.baidu.com/aistudio/competition/detail/445/0/introduction)
* [LUGE: Text Similarity Interpretability Evaluation (English)](https://aistudio.baidu.com/aistudio/competition/detail/451/0/introduction)
* [LUGE: Reading Comprehension Interpretability Evaluation (Chinese)](https://aistudio.baidu.com/aistudio/competition/detail/447/0/introduction)
* [LUGE: Reading Comprehension Interpretability Evaluation (English)](https://aistudio.baidu.com/aistudio/competition/detail/453/0/introduction)

</details>


## Academic Literature
<details><summary>&emsp;Evaluation references (datasets and metrics)</summary>

* `Dataset` : [A Fine-grained Interpretability Evaluation Benchmark for Neural NLP, Wang Lijie, et al. 2022](https://arxiv.org/pdf/2205.11097.pdf)
* `Dataset` : [A Fine-grained Interpretability Evaluation Benchmark for Pre-trained Language Models, Shen Yaozong, et al. 2022](https://arxiv.org/pdf/2207.13948.pdf)
* `Dataset` : [Benchmarking and Survey of Explanation Methods for Black Box Models](https://arxiv.org/pdf/2102.13076.pdf)
* `Dataset` : [ERASER: A Benchmark to Evaluate Rationalized NLP Models](https://aclanthology.org/2020.acl-main.408.pdf)
* `Metrics` : [On the Sensitivity and Stability of Model Interpretations in NLP](https://arxiv.org/abs/2104.08782)
* `Metrics` : [Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?](https://aclanthology.org/2020.acl-main.386.pdf)

</details>

<details><summary> &emsp;Trustworthiness analysis references </summary>

* `IntegratedGradients`: [Axiomatic Attribution for Deep Networks, Mukund Sundararajan et al. 2017](https://arxiv.org/abs/1703.01365)
* `GradientShap`: [A Unified Approach to Interpreting Model Predictions, Scott M. Lundberg et al. 2017](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions)
* `Lime`: ["Why Should I Trust You?": Explaining the Predictions of Any Classifier, Marco Tulio Ribeiro et al. 2016](https://arxiv.org/abs/1602.04938)
* `NormLime`: [NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks, Isaac Ahern et al. 2019](https://arxiv.org/abs/1909.04200)
* `Attention`: [Attention is not Explanation, Sarthak Jain et al. 2019](https://arxiv.org/pdf/1902.10186.pdf)
* `Representer Pointer`: [Representer point selection for explaining deep neural networks, Chih-Kuan Yeh et al. 2018](https://proceedings.neurips.cc/paper/2018/file/8a7129b8f3edd95b7d969dfc2c8e9d9d-Paper.pdf)
* `Similarity based Instance Attribution`: [An Empirical Comparison of Instance Attribution Methods for NLP](https://arxiv.org/pdf/2104.04128.pdf)
* `Similarity based Instance Attribution`: [Input Similarity from the Neural Network Perspective](https://arxiv.org/pdf/2102.05262.pdf)

</details>

<details><summary> &emsp;Trustworthiness enhancement references </summary>

  * `Bias` : [Towards Debiasing NLU Models from Unknown Biases](https://arxiv.org/pdf/2009.12303v4.pdf)
  * `Bias` : [Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU Models](https://arxiv.org/pdf/2103.06922.pdf)
  * `Bias` : [Learning to Learn to be Right for the Right Reasons](https://aclanthology.org/2021.naacl-main.304/)
  * `Robustness` : [Can Rationalization Improve Robustness](https://arxiv.org/pdf/2204.11790v1.pdf)

</details>

<details><summary> &emsp;End-to-end interpretable model references </summary>

* `Self-explaining` : [Self-explaining deep models with logic rule reasoning](https://arxiv.org/abs/2210.07024)
  
</details>

<details><summary> &emsp;Further learning materials </summary>

* `tutorials` : [ACL 2020 tutorial: Interpretability and Analysis in Neural NLP](https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1204/slides/cs224n-2020-lecture20-interpretability.pdf) | [Video](https://www.youtube.com/watch?v=RkYASrVFdlU)
* `tutorials` : [EMNLP 2020 Tutorial on Interpreting Predictions of NLP Models](https://github.com/Eric-Wallace/interpretability-tutorial-emnlp2020) | [Video](https://www.youtube.com/watch?v=gprIzglUW1s)
* `tutorials` : [NAACL 2021 tutorial: Fine-grained Interpretation and Causation Analysis in Deep NLP Models](https://aclanthology.org/2021.naacl-tutorials.2.pdf) | [Video](https://www.youtube.com/watch?v=gprIzglUW1s)
* `Survey` : [Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing](https://openreview.net/pdf?id=ogNcxJn32BZ)
* `Survey` : [A Survey on the Explainability of Supervised Machine Learning](https://dl.acm.org/doi/pdf/10.1613/jair.1.12228)
* `Workshop` : [ICML 2022 Workshop: Interpretable Machine Learning in Healthcare](https://sites.google.com/view/imlh2022?pli=1)

</details>

<details><summary> &emsp;Winning solutions from the competitions </summary>

  * `Sentiment interpretability` : [Top-3 solution write-ups from the sentiment interpretability task](https://aistudio.baidu.com/aistudio/competition/detail/443/0/datasets) (registration required)

</details>


## Citation
If you use TrustAI in your research, please cite it as follows.
```
@article{wang2022fine,
  title={A Fine-grained Interpretability Evaluation Benchmark for Neural NLP},
  author={Wang, Lijie and Shen, Yaozong and Peng, Shuyuan and Zhang, Shuai and Xiao, Xinyan and Liu, Hao and Tang, Hongxuan and Chen, Ying and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2205.11097},
  year={2022}
}
```

## Acknowledgements
Our implementations of the trustworthiness analysis methods reference and depend on the [InterpretDL](https://github.com/PaddlePaddle/InterpretDL) project; we thank the InterpretDL authors.

## LICENSE
TrustAI is licensed under the [Apache-2.0 License](./LICENSE).

            
