# litie

- Name: litie
- Version: 0.2.5
- Home page: https://github.com/xusenlinzy/lit-ie
- Summary: Pytorch-lightning Code Blocks for Information Extraction
- Author: xusenlin
- Requires Python: >=3.7
- Upload time: 2023-07-10 09:32:24
- Requirements: colorama, colorlog, datasets, jieba, pypinyin, pytorch_lightning, scikit_learn, scipy, seaborn, sentencepiece, setuptools, spacy, torchmetrics, transformers, gradio
            <p align="center">
    <br>
    <img src="images/logo.png" width="400"/>
    <br>
</p>
<p align="center">
    <a href="https://github.com/xusenlinzy/lit-ner"><img src="https://img.shields.io/github/license/xusenlinzy/lit-ner"></a>
    <a href=""><img src="https://img.shields.io/badge/python-3.8+-aff.svg"></a>
    <a href=""><img src="https://img.shields.io/badge/pytorch-%3E=1.12-red?logo=pytorch"></a>
    <a href="https://github.com/xusenlinzy/lit-ner"><img src="https://img.shields.io/github/last-commit/xusenlinzy/lit-ner"></a>
    <a href="https://github.com/xusenlinzy/lit-ner"><img src="https://img.shields.io/github/issues/xusenlinzy/lit-ner?color=9cc"></a>
    <a href="https://github.com/xusenlinzy/lit-ner"><img src="https://img.shields.io/github/stars/xusenlinzy/lit-ner?color=ccf"></a>
    <a href="https://github.com/xusenlinzy/lit-ner"><img src="https://img.shields.io/badge/langurage-py-brightgreen?style=flat&color=blue"></a>
</p>

This project provides a unified framework for training and inference of open-source **text classification, entity extraction, relation extraction, and event extraction** models, with the following features:


+ ✨ Supports a variety of open-source text classification, entity extraction, relation extraction, and event extraction models


+ 👑 Supports training and inference of Baidu's [UIE](https://github.com/PaddlePaddle/PaddleNLP) models


+ 🚀 A unified framework for training and inference


+ 🎯 Integrates adversarial training methods that are simple and easy to use


## 📢 News 

+ 【2023.6.21】 Added a text classification code example


+ 【2023.6.19】 Added the `gplinker` event extraction model and a code example


+ 【2023.6.15】 Added adversarial training support and examples, and added the `onerel` relation extraction model


+ 【2023.6.14】 Added a `UIE` model code example


---

## 📦 Installation

### Install with pip

```shell
pip install --upgrade litie
```

### Install from source

```shell
git clone https://github.com/xusenlinzy/lit-ie

pip install -r requirements.txt
```


## 🐼 Models

### Entity extraction

| Model                                             | Paper                                                                                                                                                                          | Notes                                                                                                                                                |
|---------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
| [softmax](litie/nn/ner/crf.py)                    |                                                                                                                                                                                | Sequence labeling with a fully connected layer, decoded with the `BIO` scheme                                                                        |
| [crf](litie/nn/ner/crf.py)                        | [Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data](https://repository.upenn.edu/cgi/viewcontent.cgi?article=1162&context=cis_papers) | Fully connected layer plus a conditional random field, decoded with the `BIO` scheme                                                                 |
| [cascade-crf](litie/nn/ner/crf.py)                |                                                                                                                                                                                | Predicts entity spans first, then entity types                                                                                                       |
| [span](litie/nn/ner/span.py)                      |                                                                                                                                                                                | Uses two pointer networks to predict entity start and end positions                                                                                  |
| [global-pointer](litie/nn/ner/global_pointer.py)  |                                                                                                                                                                                | [GlobalPointer: handling nested and flat NER in a unified way](https://spaces.ac.cn/archives/8373), [Efficient GlobalPointer: fewer parameters, better results](https://spaces.ac.cn/archives/8877) |
| [mrc](litie/nn/ner/mrc.py)                        | [A Unified MRC Framework for Named Entity Recognition.](https://aclanthology.org/2020.acl-main.519.pdf)                                                                       | Casts NER as machine reading comprehension: the input is an entity-type template plus the sentence, and the model predicts the span of each matching entity |
| [tplinker](litie/nn/ner/tplinker.py)              | [TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking.](https://aclanthology.org/2020.coling-main.138.pdf)                            | Casts NER as a table-filling problem                                                                                                                 |
| [lear](litie/nn/ner/lear.py)                      | [Enhanced Language Representation with Label Knowledge for Span Extraction.](https://aclanthology.org/2021.emnlp-main.379.pdf)                                                | Addresses the efficiency issues of the `MRC` approach with a label-knowledge fusion mechanism                                                        |
| [w2ner](litie/nn/ner/w2ner.py)                    | [Unified Named Entity Recognition as Word-Word Relation Classification.](https://arxiv.org/pdf/2112.10070.pdf)                                                                | A unified solution for extracting nested and discontinuous entities                                                                                  |
| [cnn](litie/nn/ner/cnn.py)                        | [An Embarrassingly Easy but Strong Baseline for Nested Named Entity Recognition.](https://arxiv.org/abs/2208.04534)                                                           | Improves on `W2NER` by using a convolutional network to model relations between tokens inside an entity                                              |


### Relation extraction

| Model                               | Paper                                                                                                                                              | Notes                                                                                                                            |
|-------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|
| [casrel](litie/nn/re/casrel.py)     | [A Novel Cascade Binary Tagging Framework for Relational Triple Extraction.](https://aclanthology.org/2020.acl-main.136.pdf)                       | Two-stage relation extraction: first extracts subjects from the sentence, then uses pointer networks to extract each subject's relations and objects |
| [tplinker](litie/nn/re/tplinker.py) | [TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking.](https://aclanthology.org/2020.coling-main.138.pdf) | Casts relation extraction as a head-to-tail linking problem between subjects and objects                                        |
| [spn](litie/nn/re/spn.py)           | [Joint Entity and Relation Extraction with Set Prediction Networks.](http://xxx.itp.ac.cn/pdf/2011.01675v2)                                        | Casts relation extraction as set prediction over triples, using an `Encoder-Decoder` framework                                  |
| [prgc](litie/nn/re/prgc.py)         | [PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction.](https://aclanthology.org/2021.acl-long.486.pdf)     | First predicts the potential relation types in the sentence, then extracts subject-object pairs for each relation, and finally filters them with a global correspondence module |
| [pfn](litie/nn/re/pfn.py)           | [A Partition Filter Network for Joint Entity and Relation Extraction.](https://aclanthology.org/2021.emnlp-main.17.pdf)                            | Uses an `LSTM`-like partition filter mechanism that splits hidden states into entity-specific, relation-specific, and shared parts, so each task uses the information relevant to it |
| [grte](litie/nn/re/grte.py)         | [A Novel Global Feature-Oriented Relational Triple Extraction Model based on Table Filling.](https://aclanthology.org/2021.emnlp-main.208.pdf)     | Casts relation extraction as word-pair classification and iteratively refines word-pair representations with a global feature extraction module |
| [gplinker](litie/nn/re/gplinker.py) |                                                                                                                                                    | [GPLinker: joint entity and relation extraction based on GlobalPointer](https://kexue.fm/archives/8888)                         |


## 📚 Data

### Entity extraction

Process the dataset into the following `json` format:

```json
{
  "text": "结果上周六他们主场0:3惨败给了中游球队瓦拉多利德,近7个多月以来西甲首次输球。", 
  "entities": [
    {
      "id": 0, 
      "entity": "瓦拉多利德", 
      "start_offset": 20, 
      "end_offset": 25, 
      "label": "organization"
    }, 
    {
      "id": 1, 
      "entity": "西甲", 
      "start_offset": 33, 
      "end_offset": 35, 
      "label": "organization"
    }
  ]
}
```

Field descriptions:

+ `text`: the text content

+ `entities`: all entities contained in the text

    + `id`: entity `id`

    + `entity`: entity mention

    + `start_offset`: start position of the entity

    + `end_offset`: one past the end position of the entity (exclusive); see the check after this list

    + `label`: entity type
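
A minimal sanity-check sketch (not part of `litie`), assuming the file stores one such JSON object per line, e.g. a hypothetical `datasets/cmeee/train.json`; it uses only the fields defined above and confirms that `end_offset` is exclusive:

```python
import json

# Hypothetical path; point this at your own dataset file.
data_file = "datasets/cmeee/train.json"

with open(data_file, encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        text = example["text"]
        for ent in example["entities"]:
            # end_offset is one past the last character of the entity,
            # so slicing recovers the mention exactly.
            span = text[ent["start_offset"]:ent["end_offset"]]
            assert span == ent["entity"], (span, ent["entity"])
```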


### Relation extraction

Process the dataset into the following `json` format:

```json
{
  "text": "查尔斯·阿兰基斯(Charles Aránguiz),1989年4月17日出生于智利圣地亚哥,智利职业足球运动员,司职中场,效力于德国足球甲级联赛勒沃库森足球俱乐部", 
  "spo_list": [
    {
      "predicate": "出生地",
      "object": "圣地亚哥", 
      "subject": "查尔斯·阿兰基斯"
    }, 
    {
      "predicate": "出生日期",
      "object": "1989年4月17日",
      "subject": "查尔斯·阿兰基斯"
    }
  ]
}
```

Field descriptions:

+ `text`: the text content

+ `spo_list`: all relation triples contained in the text (see the loading sketch after this list)

    + `subject`: subject mention

    + `object`: object mention

    + `predicate`: the relation between the subject and the object
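
A minimal loading sketch (not part of `litie`), assuming one such JSON object per line in a hypothetical `datasets/duie/train.json`; it simply shows how each `spo_list` entry maps to a (subject, predicate, object) triple:

```python
import json

# Hypothetical path; point this at your own dataset file.
data_file = "datasets/duie/train.json"

with open(data_file, encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        # Each spo entry corresponds to one (subject, predicate, object) triple.
        triples = [
            (spo["subject"], spo["predicate"], spo["object"])
            for spo in example["spo_list"]
        ]
        print(example["text"], triples)
```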


### Event extraction

Process the dataset into the following `json` format:

```json
{
  "text": "油服巨头哈里伯顿裁员650人 因美国油气开采活动放缓",
  "id": "f2d936214dc2cb1b873a75ee29a30ec9",
  "event_list": [
    {
      "event_type": "组织关系-裁员",
      "trigger": "裁员",
      "trigger_start_index": 8,
      "arguments": [
        {
          "argument_start_index": 0,
          "role": "裁员方",
          "argument": "油服巨头哈里伯顿"
        },
        {
          "argument_start_index": 10,
          "role": "裁员人数",
          "argument": "650人"
        }
      ],
      "class": "组织关系"
    }
  ]
}
```

Field descriptions:

+ `text`: the text content

+ `event_list`: all events contained in the text

    + `event_type`: event type

    + `trigger`: trigger word

    + `trigger_start_index`: start position of the trigger word

    + `arguments`: event arguments (their start indices can be verified with the sketch after this list)

        + `role`: argument role

        + `argument`: argument mention

        + `argument_start_index`: start position of the argument mention
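
A minimal sanity-check sketch (not part of `litie`), assuming one such JSON object per line in a hypothetical `datasets/duee/train.json`; it verifies that `trigger_start_index` and `argument_start_index` point at the trigger and argument mentions in `text`:

```python
import json

# Hypothetical path; point this at your own dataset file.
data_file = "datasets/duee/train.json"

with open(data_file, encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        text = example["text"]
        for event in example["event_list"]:
            i = event["trigger_start_index"]
            # The trigger string should start exactly at its recorded index.
            assert text[i:i + len(event["trigger"])] == event["trigger"]
            for arg in event["arguments"]:
                j = arg["argument_start_index"]
                # The same holds for every argument mention.
                assert text[j:j + len(arg["argument"])] == arg["argument"]
```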
  
## 🚀 Model training

### Entity extraction

```python
import os

from litie.arguments import (
    DataTrainingArguments,
    TrainingArguments,
)
from litie.models import AutoNerModel

os.environ['TRANSFORMERS_NO_ADVISORY_WARNINGS'] = 'true'

training_args = TrainingArguments(
    other_learning_rate=2e-3,
    num_train_epochs=20,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    output_dir="outputs/crf",
)

data_args = DataTrainingArguments(
    dataset_name="datasets/cmeee",
    train_file="train.json",
    validation_file="dev.json",
    preprocessing_num_workers=16,
)

# 1. create model
model = AutoNerModel(
    task_model_name="crf",
    model_name_or_path="hfl/chinese-roberta-wwm-ext",
    training_args=training_args,
)

# 2. finetune model
model.finetune(data_args)
```

See [named_entity_recognition](./examples/named_entity_recognition) for the full training scripts.

### Relation extraction

```python
import os

from litie.arguments import (
    DataTrainingArguments,
    TrainingArguments,
)
from litie.models import AutoRelationExtractionModel

os.environ['TRANSFORMERS_NO_ADVISORY_WARNINGS'] = 'true'

training_args = TrainingArguments(
    num_train_epochs=20,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    output_dir="outputs/gplinker",
)

data_args = DataTrainingArguments(
    dataset_name="datasets/duie",
    train_file="train.json",
    validation_file="dev.json",
    preprocessing_num_workers=16,
)

# 1. create model
model = AutoRelationExtractionModel(
    task_model_name="gplinker",
    model_name_or_path="hfl/chinese-roberta-wwm-ext",
    training_args=training_args,
)

# 2. finetune model
model.finetune(data_args, num_sanity_val_steps=0)
```

See [relation_extraction](./examples/relation_extraction) for the full training scripts.


### Event extraction

```python
import os
import json

from litie.arguments import DataTrainingArguments, TrainingArguments
from litie.models import AutoEventExtractionModel

os.environ['TRANSFORMERS_NO_ADVISORY_WARNINGS'] = 'true'

schema_path = "datasets/duee/schema.json"

# Build the label set as "{event_type}@{role}" pairs from the event schema,
# adding a "触发词" (trigger word) pseudo-role for every event type.
labels = []
with open(schema_path) as f:
    for line in f:
        event = json.loads(line)
        event_type = event["event_type"]
        for role in ["触发词"] + [s["role"] for s in event["role_list"]]:
            labels.append(f"{event_type}@{role}")

training_args = TrainingArguments(
    num_train_epochs=200,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    output_dir="outputs/gplinker",
)

data_args = DataTrainingArguments(
    dataset_name="datasets/duee",
    train_file="train.json",
    validation_file="dev.json",
    preprocessing_num_workers=16,
    train_max_length=128,
)

# 1. create model
model = AutoEventExtractionModel(
    task_model_name="gplinker",
    model_name_or_path="hfl/chinese-roberta-wwm-ext",
    training_args=training_args,
)

# 2. finetune model
model.finetune(
    data_args,
    labels=labels,
    num_sanity_val_steps=0,
    monitor="val_argu_f1",
    check_val_every_n_epoch=20,
)
```

See [event_extraction](./examples/event_extraction) for the full training scripts.


## 📊 Model inference

### Entity extraction

```python
from litie.pipelines import NerPipeline

task_model = "crf"
model_name_or_path = "path of crf model"
pipeline = NerPipeline(task_model, model_name_or_path=model_name_or_path)

print(pipeline("结果上周六他们主场0:3惨败给了中游球队瓦拉多利德,近7个多月以来西甲首次输球。"))
```

Web demo:

```python
from litie.ui import NerPlayground

NerPlayground().launch()
```


### Relation extraction

```python
from litie.pipelines import RelationExtractionPipeline

task_model = "gplinker"
model_name_or_path = "path of gplinker model"
pipeline = RelationExtractionPipeline(task_model, model_name_or_path=model_name_or_path)

print(pipeline("查尔斯·阿兰基斯(Charles Aránguiz),1989年4月17日出生于智利圣地亚哥,智利职业足球运动员,司职中场,效力于德国足球甲级联赛勒沃库森足球俱乐部"))
```

Web demo:

```python
from litie.ui import RelationExtractionPlayground

RelationExtractionPlayground().launch()
```


### Event extraction

```python
from litie.pipelines import EventExtractionPipeline

task_model = "gplinker"
model_name_or_path = "path of gplinker model"
pipeline = EventExtractionPipeline(task_model, model_name_or_path=model_name_or_path)

print(pipeline("油服巨头哈里伯顿裁员650人 因美国油气开采活动放缓"))
```

Web demo:

```python
from litie.ui import EventExtractionPlayground

EventExtractionPlayground().launch()
```


## 🍭 Universal information extraction

+ [UIE (Universal Information Extraction)](https://arxiv.org/pdf/2203.12277.pdf): Yaojie Lu et al. proposed `UIE`, a unified framework for universal information extraction, at ACL 2022.


+ The framework models entity extraction, relation extraction, event extraction, sentiment analysis, and other tasks in a unified way, with strong transfer and generalization ability across tasks.


+ [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) adapted the paper's approach and, building on the knowledge-enhanced `ERNIE 3.0` pretrained model, trained and open-sourced the first Chinese universal information extraction model, `UIE`.


+ The model supports key information extraction without restricting the industry domain or extraction targets, enables fast zero-shot cold starts, and has strong few-shot fine-tuning ability for quickly adapting to specific extraction targets.


### Model training

See [uie](./examples/uie) for the training scripts.

### Model inference

<details>
<summary>👉 Named entity recognition</summary>

```python
from pprint import pprint
from litie.pipelines import UIEPipeline

# Named entity recognition: schema labels are "时间" (time), "选手" (athlete), "赛事名称" (competition name)
schema = ['时间', '选手', '赛事名称']
# The uie-base model is hosted on Hugging Face and is downloaded automatically;
# other models only need the model name and will be converted automatically
uie = UIEPipeline("xusenlin/uie-base", schema=schema)
pprint(uie("2月8日上午北京冬奥会自由式滑雪女子大跳台决赛中中国选手谷爱凌以188.25分获得金牌!")) # Better print results using pprint
```

output: 

```json
[
  {
    "时间": [
      {
        "end": 6,
        "probability": 0.98573786,
        "start": 0,
        "text": "2月8日上午"
      }
    ],
    "赛事名称": [
      {
        "end": 23,
        "probability": 0.8503085,
        "start": 6,
        "text": "北京冬奥会自由式滑雪女子大跳台决赛"
      }
    ],
    "选手": [
      {
        "end": 31,
        "probability": 0.8981544,
        "start": 28,
        "text": "谷爱凌"
      }
    ]
  }
]
```
</details>

<details>
<summary>👉 Relation extraction</summary>

```python
from pprint import pprint
from litie.pipelines import UIEPipeline

# Relation extraction: "竞赛名称" (competition name) with relations "主办方" (organizer),
# "承办方" (undertaking organization) and "已举办次数" (number of times held)
schema = {'竞赛名称': ['主办方', '承办方', '已举办次数']}
# The uie-base model is hosted on Hugging Face and is downloaded automatically;
# other models only need the model name and will be converted automatically
uie = UIEPipeline("xusenlin/uie-base", schema=schema)
pprint(uie("2022语言与智能技术竞赛由中国中文信息学会和中国计算机学会联合主办,百度公司、中国中文信息学会评测工作委员会和中国计算机学会自然语言处理专委会承办,已连续举办4届,成为全球最热门的中文NLP赛事之一。")) # Better print results using pprint
```

output:

```json
[
  {
    "竞赛名称": [
      {
        "end": 13,
        "probability": 0.78253937,
        "relations": {
          "主办方": [
            {
              "end": 22,
              "probability": 0.8421704,
              "start": 14,
              "text": "中国中文信息学会"
            },
            {
              "end": 30,
              "probability": 0.75807965,
              "start": 23,
              "text": "中国计算机学会"
            }
          ],
          "已举办次数": [
            {
              "end": 82,
              "probability": 0.4671307,
              "start": 80,
              "text": "4届"
            }
          ],
          "承办方": [
            {
              "end": 55,
              "probability": 0.700049,
              "start": 40,
              "text": "中国中文信息学会评测工作委员会"
            },
            {
              "end": 72,
              "probability": 0.61934763,
              "start": 56,
              "text": "中国计算机学会自然语言处理专委会"
            },
            {
              "end": 39,
              "probability": 0.8292698,
              "start": 35,
              "text": "百度公司"
            }
          ]
        },
        "start": 0,
        "text": "2022语言与智能技术竞赛"
      }
    ]
  }
]
```
</details>


<details>
<summary>👉 Event extraction</summary>

```python
from pprint import pprint
from litie.pipelines import UIEPipeline

# Event extraction: "地震触发词" (earthquake trigger) with arguments "地震强度" (magnitude),
# "时间" (time), "震中位置" (epicenter location) and "震源深度" (focal depth)
schema = {"地震触发词": ["地震强度", "时间", "震中位置", "震源深度"]}
# The uie-base model is hosted on Hugging Face and is downloaded automatically;
# other models only need the model name and will be converted automatically
uie = UIEPipeline("xusenlin/uie-base", schema=schema)
pprint(uie("中国地震台网正式测定:5月16日06时08分在云南临沧市凤庆县(北纬24.34度,东经99.98度)发生3.5级地震,震源深度10千米。")) # Better print results using pprint
```

output:

```json
[
  {
    "地震触发词": [
      {
        "end": 58,
        "probability": 0.99774253,
        "relations": {
          "地震强度": [
            {
              "end": 56,
              "probability": 0.9980802,
              "start": 52,
              "text": "3.5级"
            }
          ],
          "时间": [
            {
              "end": 22,
              "probability": 0.98533,
              "start": 11,
              "text": "5月16日06时08分"
            }
          ],
          "震中位置": [
            {
              "end": 50,
              "probability": 0.7874015,
              "start": 23,
              "text": "云南临沧市凤庆县(北纬24.34度,东经99.98度)"
            }
          ],
          "震源深度": [
            {
              "end": 67,
              "probability": 0.9937973,
              "start": 63,
              "text": "10千米"
            }
          ]
        },
        "start": 56,
        "text": "地震"
      }
    ]
  }
]
```
</details>

<details>
<summary>👉 Opinion extraction</summary>

```python
from pprint import pprint
from litie.pipelines import UIEPipeline

# Comment opinion extraction: "评价维度" (aspect) with "观点词" (opinion word) and
# "情感倾向[正向,负向]" (sentiment polarity [positive, negative])
schema = {'评价维度': ['观点词', '情感倾向[正向,负向]']}
# The uie-base model is hosted on Hugging Face and is downloaded automatically;
# other models only need the model name and will be converted automatically
uie = UIEPipeline("xusenlin/uie-base", schema=schema)
pprint(uie("店面干净,很清静,服务员服务热情,性价比很高,发现收银台有排队")) # Better print results using pprint
```

output:

```json
[
  {
    "评价维度": [
      {
        "end": 20,
        "probability": 0.98170394,
        "relations": {
          "情感倾向[正向,负向]": [
            {
              "probability": 0.9966142773628235,
              "text": "正向"
            }
          ],
          "观点词": [
            {
              "end": 22,
              "probability": 0.95739645,
              "start": 21,
              "text": "高"
            }
          ]
        },
        "start": 17,
        "text": "性价比"
      },
      {
        "end": 2,
        "probability": 0.9696847,
        "relations": {
          "情感倾向[正向,负向]": [
            {
              "probability": 0.9982153177261353,
              "text": "正向"
            }
          ],
          "观点词": [
            {
              "end": 4,
              "probability": 0.9945317,
              "start": 2,
              "text": "干净"
            }
          ]
        },
        "start": 0,
        "text": "店面"
      }
    ]
  }
]
```
</details>


<details>
<summary>👉 Sentiment classification</summary>


```python
from pprint import pprint
from litie.pipelines import UIEPipeline

# Sentiment classification: "情感倾向[正向,负向]" (sentiment polarity [positive, negative])
schema = '情感倾向[正向,负向]'
# The uie-base model is hosted on Hugging Face and is downloaded automatically;
# other models only need the model name and will be converted automatically
uie = UIEPipeline("xusenlin/uie-base", schema=schema)
pprint(uie("这个产品用起来真的很流畅,我非常喜欢")) # Better print results using pprint
```

output:

```json
[
  {
    "情感倾向[正向,负向]": [
      {
        "probability": 0.9990023970603943,
        "text": "正向"
      }
    ]
  }
]
```
</details>


## 📜 License

This project is licensed under the `Apache 2.0` license; see the [LICENSE](LICENSE) file for details.



            
