# BackdoorMBTI
BackdoorMBTI is an open-source project that extends unimodal backdoor learning to a multimodal context. We hope that BackdoorMBTI can facilitate the analysis and development of backdoor defense methods in multimodal settings.
Main features:
- poisoned dataset generation
- backdoor model generation
- attack training
- defense training
- backdoor evaluation

## Tasks Supported
| Task | Dataset | Modality |
|:---------------------------|:---------------|:---------|
| Object Classification | CIFAR10 | Image |
| Object Classification | TinyImageNet | Image |
| Traffic Sign Recognition | GTSRB | Image |
| Facial Recognition | CelebA | Image |
| Sentiment Analysis | SST-2 | Text |
| Sentiment Analysis | IMDb | Text |
| Topic Classification | DBpedia | Text |
| Topic Classification | AG’s News | Text |
| Speech Command Recognition | SpeechCommands | Audio |
| Music Genre Classification | GTZAN | Audio |
| Speaker Identification | VoxCeleb1 | Audio |
### Backdoor Attacks Supported
| Modality | Attack | Visible | Pattern | Additive | Sample Specific | Paper |
|:--------:|:------------|:----------:|:--------:|:----:|:-----:|:----|
|Image| AdaptiveBlend | Invisible | Global | Yes | No | [REVISITING THE ASSUMPTION OF LATENT SEPARABILITY FOR BACKDOOR DEFENSES](https://openreview.net/pdf?id=_wSHsgrVali) |
|Image| BadNets | Visible | Local | Yes | No | [Badnets: Evaluating backdooring attacks on deep neural networks](https://ieeexplore.ieee.org/iel7/6287639/8600701/08685687.pdf) |
|Image| Blend (under test) | Invisible | Global | Yes | Yes | [Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning](https://arxiv.org/abs/1712.05526v1) |
|Image| Blind (under test) | Visible | Local | Yes | Yes | [Blind Backdoors in Deep Learning Models](https://www.cs.cornell.edu/~shmat/shmat_usenix21blind.pdf) |
|Image| BPP | Invisible | Global | Yes | No | [Bppattack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning](http://openaccess.thecvf.com/content/CVPR2022/papers/Wang_BppAttack_Stealthy_and_Efficient_Trojan_Attacks_Against_Deep_Neural_Networks_CVPR_2022_paper.pdf) |
|Image| DynaTrigger | Visible | Local | Yes | Yes | [Dynamic backdoor attacks against machine learning models](https://arxiv.org/pdf/2003.03675) |
|Image| EMBTROJAN (under test) | Invisible | Local | Yes | No | [An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks](https://dl.acm.org/doi/pdf/10.1145/3394486.3403064) |
|Image| LC | Invisible | Global | No | Yes | [Label-consistent backdoor attacks](https://arxiv.org/pdf/1912.02771) |
|Image| Lowfreq | Invisible | Global | Yes | Yes | [Rethinking the Backdoor Attacks’ Triggers: A Frequency Perspective](https://openaccess.thecvf.com/content/ICCV2021/papers/Zeng_Rethinking_the_Backdoor_Attacks_Triggers_A_Frequency_Perspective_ICCV_2021_paper.pdf) |
|Image| PNoise | Invisible | Global | Yes | Yes | [Use procedural noise to achieve backdoor attack](https://ieeexplore.ieee.org/iel7/6287639/9312710/09529206.pdf) |
|Image| Refool | Invisible | Global | Yes | No | [Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550188.pdf) |
|Image| SBAT | Invisible | Global | No | Yes | [Stealthy Backdoor Attack with Adversarial Training](https://ieeexplore.ieee.org/abstract/document/9746008/) |
|Image| SIG | Invisible | Global | Yes | No | [A NEW BACKDOOR ATTACK IN CNNS BY TRAINING SET CORRUPTION WITHOUT LABEL POISONING](https://arxiv.org/pdf/1902.11237) |
|Image| SSBA | Invisible | Global | No | Yes | [Invisible Backdoor Attack with Sample-Specific Triggers](https://openaccess.thecvf.com/content/ICCV2021/papers/Li_Invisible_Backdoor_Attack_With_Sample-Specific_Triggers_ICCV_2021_paper.pdf) |
|Image| TrojanNN (under test) | Visible | Local | Yes | Yes | [Trojaning Attack on Neural Networks](https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=2782&context=cstech) |
|Image| UBW (under test) | Invisible | Global | Yes | No | [Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection](https://proceedings.neurips.cc/paper_files/paper/2022/file/55bfedfd31489e5ae83c9ce8eec7b0e1-Paper-Conference.pdf) |
|Image| WaNet | Invisible | Global | No | Yes | [WaNet -- Imperceptible Warping-Based Backdoor Attack](https://arxiv.org/pdf/2102.10369) |
|Text | AddSent | Visible | Local | Yes | No | [A backdoor attack against LSTM-based text classification systems](https://arxiv.org/pdf/1905.12457.pdf) |
|Text | BadNets | Visible | Local | Yes | No | [Badnets: Evaluating backdooring attacks on deep neural networks](https://ieeexplore.ieee.org/iel7/6287639/8600701/08685687.pdf) |
|Text | BITE | Invisible | Local | Yes | Yes | [BITE: Textual Backdoor Attacks with Iterative Trigger Injection](https://arxiv.org/abs/2205.12700) |
|Text | LWP | Visible | Local | Yes | No | [Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning](https://aclanthology.org/2021.emnlp-main.241.pdf) |
|Text | STYLEBKD | Visible | Global | No | Yes | [Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer](https://arxiv.org/pdf/2110.07139) |
|Text | SYNBKD | Invisible | Global | No | Yes | [Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger](https://arxiv.org/pdf/2105.12400.pdf) |
|Audio| Baasv (under test) | \- | Global | Yes | No | [Backdoor Attack against Speaker Verification](https://arxiv.org/pdf/2010.11607) |
|Audio| Blend | \- | Local | Yes | No | [Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning](https://arxiv.org/abs/1712.05526v1) |
|Audio| DABA | \- | Global | Yes | No | [Opportunistic Backdoor Attacks: Exploring Human-imperceptible Vulnerabilities on Speech Recognition Systems](https://dl.acm.org/doi/abs/10.1145/3503161.3548261) |
|Audio| GIS | \- | Global | No | No | [Going in style: Audio backdoors through stylistic transformations](https://arxiv.org/pdf/2211.03117) |
|Audio| UltraSonic | \- | Local | Yes | No | [Can You Hear It? Backdoor Attacks via Ultrasonic Triggers](https://github.com/skoffas/ultrasonic_backdoor) |
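For intuition about the columns above, the following is a minimal sketch of a BadNets-style attack: a visible, local, additive trigger stamped onto each poisoned image, with the label flipped to the attacker's target class. This is an illustrative sketch, not BackdoorMBTI's actual poisoning API; the function names and the 3x3 white patch are assumptions.
```
# Minimal sketch of BadNets-style dirty-label poisoning (illustrative only;
# not the BackdoorMBTI API). The trigger is a small white patch stamped in
# a fixed corner, and poisoned samples are relabeled to the target class.
import random

import numpy as np


def stamp_trigger(image: np.ndarray, patch_size: int = 3) -> np.ndarray:
    """Stamp a white patch into the bottom-right corner of an HxWxC uint8 image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 255
    return poisoned


def poison_dataset(images, labels, target_label=0, pratio=0.1):
    """Poison a `pratio` fraction of samples: add the trigger and relabel."""
    poison_idx = set(random.sample(range(len(images)), int(pratio * len(images))))
    out_images, out_labels = [], []
    for i, (img, lbl) in enumerate(zip(images, labels)):
        if i in poison_idx:
            out_images.append(stamp_trigger(img))
            out_labels.append(target_label)  # dirty-label: flip to the target class
        else:
            out_images.append(img)
            out_labels.append(lbl)
    return out_images, out_labels
```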
### Backdoor Defenses Supported
| Defense | Modality | Input | Stage | Output | Paper |
|:-------:|:-----:|:-----:|:---:|:-----:|:-----:|
| STRIP | Audio, Image, Text | backdoor model, clean dataset | post-training | clean dataset | [STRIP: A Defence Against Trojan Attacks on Deep Neural Networks](https://arxiv.org/pdf/1902.06531.pdf) |
| AC | Audio, Image, Text | backdoor model, clean dataset, poison dataset | post-training | clean model, clean dataset | [Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering](https://arxiv.org/pdf/1811.03728.pdf) |
| FT | Audio, Image, Text | backdoor model, clean dataset | in-training | clean model | [Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks](https://arxiv.org/pdf/1805.12185.pdf) |
| FP | Audio, Image, Text | backdoor model, clean dataset | post-training | clean model | [Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks](https://arxiv.org/pdf/1805.12185.pdf) |
| ABL | Audio, Image, Text | backdoor model, poison dataset | in-training | clean model | [Anti-Backdoor Learning: Training Clean Models on Poisoned Data](https://arxiv.org/pdf/2110.11571.pdf) |
| CLP | Audio, Image, Text | backdoor model | post-training | clean model | [Data-free Backdoor Removal based on Channel Lipschitzness](https://arxiv.org/pdf/2208.03111.pdf) |
| NC | Image | backdoor model, clean dataset | post-training | clean model, trigger pattern | [Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks](https://gangw.web.illinois.edu/class/cs598/papers/sp19-poisoning-backdoor.pdf) |
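As an illustration of how a detection defense in this table works, the sketch below follows the STRIP idea: blend a suspect input with random clean samples and measure the average prediction entropy. A backdoor trigger tends to dominate the prediction regardless of the blend, so backdoored inputs score abnormally low. This is a minimal sketch assuming a PyTorch classifier that returns logits; it is not BackdoorMBTI's implementation.
```
# Minimal STRIP-style entropy check (illustrative sketch, assuming a PyTorch
# classifier `model` that maps a [1, C, H, W] batch to logits).
import torch
import torch.nn.functional as F


def strip_entropy(model, x, clean_samples, alpha=0.5):
    """Average prediction entropy of `x` blended with random clean samples.

    Backdoored inputs tend to score LOW: the trigger keeps dominating the
    prediction no matter what the input is blended with.
    """
    model.eval()
    entropies = []
    with torch.no_grad():
        for clean in clean_samples:
            blended = alpha * x + (1 - alpha) * clean
            probs = F.softmax(model(blended.unsqueeze(0)), dim=1)
            entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum().item())
    return sum(entropies) / len(entropies)
```
In practice, inputs whose entropy falls below a threshold calibrated on held-out clean data are flagged as backdoored.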
## Installation
To set up the virtual environment and install dependencies:
```
conda create -n bkdmbti python=3.10
conda activate bkdmbti
pip install -r requirements.txt
```
## Quick Start
### Download Data
Download the data manually if it cannot be downloaded automatically. Some download scripts are provided in the `scripts` folder, and a sketch of a manual download follows below.
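For example, the snippet below fetches CIFAR10 via torchvision; the `./data` directory is an assumption, so check the scripts in the `scripts` folder for the paths the framework actually expects.
```
# Sketch of a manual dataset download (the `./data` path is an assumption;
# see the scripts in `scripts/` for the layout BackdoorMBTI expects).
from torchvision.datasets import CIFAR10

CIFAR10(root="./data", train=True, download=True)
CIFAR10(root="./data", train=False, download=True)
```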
### Backdoor Attack
Here we provide an example to get started quickly with the attack experiments and reproduce the BadNets backdoor attack results. We use ResNet-18 as the default model and 0.1 as the default poison ratio.
```
cd scripts
python atk_train.py --data_type image --dataset cifar10 --attack_name badnet --model resnet18 --pratio 0.1 --num_workers 4 --epochs 100
python atk_train.py --data_type audio --dataset speechcommands --attack_name blend --model audiocnn --pratio 0.1 --num_workers 4 --epochs 100 --add_noise true
python atk_train.py --data_type text --dataset sst2 --attack_name addsent --model bert --pratio 0.1 --num_workers 4 --epochs 100 --mislabel true
```
Use the arguments `--add_noise true` and `--mislabel true` to add perturbations to the data. After the experiment, the metrics ACC (accuracy), ASR (attack success rate), and RA (robustness accuracy) are collected for the attack phase.
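These metrics follow the usual conventions: ACC is measured on the clean test set, while ASR and RA are measured on triggered samples whose original label is not the target class. A minimal sketch under those conventional definitions (the framework's own implementation may differ in detail):
```
# Conventional definitions of ACC / ASR / RA (illustrative sketch; the
# framework's implementation may differ in detail).
import numpy as np


def compute_metrics(clean_preds, clean_labels, poison_preds, orig_labels, target):
    clean_preds, clean_labels = np.asarray(clean_preds), np.asarray(clean_labels)
    poison_preds, orig_labels = np.asarray(poison_preds), np.asarray(orig_labels)

    acc = (clean_preds == clean_labels).mean()  # accuracy on the clean test set

    non_target = orig_labels != target  # triggered samples not from the target class
    asr = (poison_preds[non_target] == target).mean()  # fooled into the target label
    ra = (poison_preds[non_target] == orig_labels[non_target]).mean()  # still correct
    return acc, asr, ra
```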
To learn more about the attack command, you can run `python atk_train.py -h` to see more details.
### Backdoor Defense
Here we provide a defense example. It depends on the backdoored model generated in the attack phase, so you should run the corresponding attack experiment before the defense phase.
```
cd scripts
python def_train.py --data_type image --dataset cifar10 --attack_name badnet --pratio 0.1 --defense_name finetune --num_workers 4 --epochs 10
python def_train.py --data_type audio --dataset speechcommands --attack_name blend --model audiocnn --pratio 0.1 --defense_name fineprune --num_workers 4 --epochs 1 --add_noise true
python def_train.py --data_type text --dataset sst2 --attack_name addsent --model bert --pratio 0.1 --defense_name strip --num_workers 4 --epochs 1 --mislabel true
```
To learn more about the defense command, you can run `python def_train.py -h` to see more details.
In the defense phase, detection accuracy is collected if the defense is a detection method; the sanitized dataset is then used to retrain the model. ACC, ASR, and RA are collected after retraining.
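A minimal sketch of that detection-then-retrain flow, assuming the detection defense yields a per-sample flag (`is_flagged` and the helper below are illustrative names, not the framework's API):
```
# Sketch of the detection-then-retrain flow (illustrative; `is_flagged` would
# come from a detection defense such as STRIP or AC).
import numpy as np


def sanitize_and_score(dataset, is_flagged, is_poisoned):
    """Compute detection accuracy and return the sanitized training set."""
    is_flagged, is_poisoned = np.asarray(is_flagged), np.asarray(is_poisoned)

    # Detection accuracy: fraction of samples whose flag matches ground truth.
    detection_acc = (is_flagged == is_poisoned).mean()

    # Keep only the samples the defense did not flag, then retrain on them.
    sanitized = [s for s, flagged in zip(dataset, is_flagged) if not flagged]
    return detection_acc, sanitized
```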
## Results
More results can be found in [results.md](./results.md).