<p align="center">
    <br>
    <img src="https://modelscope.oss-cn-beijing.aliyuncs.com/modelscope.gif" width="400"/>
    <br>
    <h1>FaceChain</h1>
</p>

# Introduction

If you are familiar with Chinese, you can read the [Chinese version of the README](./README_ZH.md).

FaceChain is a deep-learning toolchain for generating your Digital-Twin. With as little as one portrait photo, you can create a Digital-Twin of your own and start generating personal portraits in different settings (multiple styles are now supported!). You may train your Digital-Twin model and generate photos via FaceChain's Python scripts, or via the familiar Gradio interface. You can also experience FaceChain directly with our [ModelScope Studio](https://modelscope.cn/studios/CVstudio/cv_human_portrait/summary).

FaceChain is powered by [ModelScope](https://github.com/modelscope/modelscope).

![image](resources/git_cover.jpg)


# News
- Support a series of new style models in a plug-and-play fashion. Refer to: [Features](#Features)   (August 16th, 2023 UTC)
- Support customizable prompts. Refer to: [Features](#Features)    (August 16th, 2023 UTC)
- Colab notebook is available now! You can experience FaceChain directly with our [Colab Notebook](https://colab.research.google.com/drive/1cUhnVXseqD2EJiotZk3k7GsfQK9_yJu_?usp=sharing).   (August 15th, 2023 UTC)


# To-Do List
- Support existing style models (such as those on Civitai) in a plug-and-play fashion.  --on-going
- Support customizable prompts (try on different outfits etc.)  --on-going
- Support customizable poses, with controlnet or composer
- Support more beauty-retouch effects
- Support latest foundation models such as SDXL
- Provide Colab compatibility   --done
- Provide WebUI compatibility


# Features
- Support a series of new style models in a plug-and-play fashion
  - Description
    - Allow users to select different style models for training distinct types of Digital-Twins.
  - Installation
    - Refer to [Installation Guide](#installation-guide)
  - Execution
  ```shell
    cd facechain/advanced-style
    python3 app.py
  ```
  - Example outcomes
  ![image](resources/style_lora_xiapei.jpg)
  - Reference
    - [xiapei lora model](https://www.liblibai.com/modelinfo/f746450340a3a932c99be55c1a82d20c)
    - For more LoRA styles, refer to [Civitai](https://civitai.com/)

- Support customizable prompts
  - Description
    - Allow users to achieve various portrait styles with customized prompts.
  - Installation
    - Refer to [Installation Guide](#installation-guide)
  - Execution
  ```shell
    cd facechain/advanced-prompt
    python3 app.py
  ```
  - Example outcomes (prompt: wearing an elegant evening gown)
    ![image](resources/prompt_evening_gown.jpg)


# Installation

## Compatibility Verification
The following are the environment dependencies that have been verified:
- python: py3.8, py3.10
- pytorch: torch2.0.0, torch2.0.1
- tensorflow: 2.8.0, tensorflow-cpu
- CUDA: 11.7
- CUDNN: 8+
- OS: Ubuntu 20.04, CentOS 7.9
- GPU: NVIDIA A10 (24 GB)

## Resource Usage
- GPU memory: about 19 GB
- Disk: about 50 GB

## Installation Guide
The following installation methods are supported:


### 1. ModelScope notebook【recommended】

   The ModelScope notebook has a free tier that allows you to run the FaceChain application; refer to [ModelScope Notebook](https://modelscope.cn/my/mynotebook/preset).

   In addition to the ModelScope notebook and ECS, you can also start a DSW instance with the ModelScope (GPU) image to get a ready-to-use environment.

```shell
# Step1: My Notebook -> PAI-DSW -> GPU environment

# Step2: Open the terminal and clone FaceChain from GitHub:
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1

# Step3: Enter the following in a Notebook cell:
import os
os.chdir('/mnt/workspace/facechain')
print(os.getcwd())

!pip3 install gradio
!python3 app.py


# Step4: click "public URL" or "local URL", upload your images to 
# train your own model and then generate your digital twin.
```


### 2. Docker

If you are familiar with Docker, we recommend this approach:

```shell
# Step1: Prepare a GPU environment locally or in the cloud; we recommend Alibaba Cloud ECS, refer to: https://www.aliyun.com/product/ecs

# Step2: Download the docker image (for installing docker engine, refer to https://docs.docker.com/engine/install/)
docker pull registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0

# Step3: Run the docker container
docker run -it --name facechain -p 7860:7860 --gpus all registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0 /bin/bash
# (Note: you may need to install nvidia-container-runtime, refer to https://github.com/NVIDIA/nvidia-container-runtime)

# Step4: Install gradio inside the docker container:
pip3 install gradio

# Step5: Clone FaceChain from GitHub and run the app:
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1
cd facechain
python3 app.py

# Step6: Once the app server is running, click the "public URL" (in the form https://xxx.gradio.live)
```

### 3. conda Virtual Environment

Use a conda virtual environment; refer to [Anaconda](https://docs.anaconda.com/anaconda/install/) to manage your dependencies. After installation, execute the following commands:
(Note: mmcv has strict environment requirements and might not be compatible in some cases; if so, Docker is recommended.)

```shell
conda create -n facechain python=3.8    # Verified environments: 3.8 and 3.10
conda activate facechain

GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1
cd facechain

pip3 install -r requirements.txt
pip3 install -U openmim 
mim install mmcv-full==1.7.0

# Navigate to the facechain directory and run:
python3 app.py

# Finally, click on the URL generated in the log to access the web page.
```

**Note**: After the app service has launched successfully, open the URL shown in the log, go to the "Image Customization" tab, click "Select Image to Upload", and choose at least one image containing a face. Then click "Start Training" to begin model training. When training is complete, the log will show a corresponding message. Afterwards, switch to the "Image Experience" tab and click "Start Inference" to generate your own digital image.


### 4. colab notebook
Please refer to [Colab Notebook](https://colab.research.google.com/drive/1cUhnVXseqD2EJiotZk3k7GsfQK9_yJu_?usp=sharing) for details.


# Script Execution

FaceChain also supports training and inference directly from Python scripts. Run the following command in the cloned folder to start training:

```shell
PYTHONPATH=. sh train_lora.sh "ly261666/cv_portrait_model" "v2.0" "film/film" "./imgs" "./processed" "./output"
```

Parameter meaning:

```text
ly261666/cv_portrait_model: The Stable Diffusion base model from the ModelScope model hub used for training; no need to change it.
v2.0: The version number of this base model; no need to change it.
film/film: This base model may contain multiple subdirectories for different styles; we currently use film/film; no need to change it.
./imgs: This parameter must be replaced with the actual value: a local directory containing the original photos used for training and generation.
./processed: The folder for the images produced by preprocessing; the same value must be passed at inference time; no need to change it.
./output: The folder where the model weights are stored after training; no need to change it.
```

Training takes about 5-20 minutes. You can also adjust other training hyperparameters: those exposed by the training script are listed in `train_lora.sh`, and the complete hyperparameter list is in `facechain/train_text_to_image_lora.py`.

For inference, edit the configuration in `run_inference.py`:

```python
# The folder of the preprocessed images above; it must be the same as during training
processed_dir = './processed'
# The number of images to generate during inference
num_generate = 5
# The Stable Diffusion base model used in training; no need to change it
base_model = 'ly261666/cv_portrait_model'
# The version number of this base model; no need to change it
revision = 'v2.0'
# This base model may contain multiple subdirectories for different styles; we currently use film/film; no need to change it
base_model_sub_dir = 'film/film'
# The folder where the model weights are stored after training; it must be the same as during training
train_output_dir = './output'
# The folder where generated images are saved; this can be changed as needed
output_dir = './generated'
```

Then execute:

```shell
python run_inference.py
```

You can find the generated personal portrait photos in `output_dir`.

# Algorithm Introduction

## Architectural Overview

The ability to generate personal portraits revolves around the text-to-image capability of the Stable Diffusion model. We consider the main factors that affect the quality of generated personal portraits: portrait style information and user identity information. To capture these, we use a style LoRA model trained offline and a face LoRA model trained online. LoRA is a fine-tuning method with a small number of trainable parameters. In Stable Diffusion, the information in the input images can be injected into the LoRA model through text-to-image training on a small number of input images. The personal portrait model therefore works in two stages: training and inference. The training stage generates image and text label data for fine-tuning the Stable Diffusion model and produces the face LoRA model. The inference stage generates personal portrait images based on the face LoRA model and the style LoRA model.

![image](resources/framework_eng.jpg)
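
To make the LoRA fusion idea concrete, below is a minimal sketch (not FaceChain's actual code) of how a face LoRA and a style LoRA, each expressed as a low-rank pair (A, B), can be merged into one base weight matrix; the tensor shapes and scaling factors are illustrative assumptions.

```python
from typing import Tuple

import torch

def fuse_loras(base_weight: torch.Tensor,
               face_lora: Tuple[torch.Tensor, torch.Tensor],
               style_lora: Tuple[torch.Tensor, torch.Tensor],
               face_scale: float = 1.0,
               style_scale: float = 0.5) -> torch.Tensor:
    """Merge two LoRA low-rank updates into a single weight matrix.

    Each LoRA is a pair (A, B) with A of shape (r, in_dim) and B of shape
    (out_dim, r), so its delta is B @ A. The scales here are illustrative,
    not FaceChain's actual fusion weights.
    """
    delta_face = face_lora[1] @ face_lora[0]      # (out_dim, in_dim)
    delta_style = style_lora[1] @ style_lora[0]   # (out_dim, in_dim)
    return base_weight + face_scale * delta_face + style_scale * delta_style

# Toy example: a 320x320 projection layer with rank-4 LoRAs.
out_dim, in_dim, r = 320, 320, 4
base = torch.randn(out_dim, in_dim)
face = (torch.randn(r, in_dim), torch.randn(out_dim, r))
style = (torch.randn(r, in_dim), torch.randn(out_dim, r))
fused = fuse_loras(base, face, style)
print(fused.shape)  # torch.Size([320, 320])
```

In practice such a merge is applied to every weight targeted by the LoRAs (typically the attention projections of the UNet), after which inference runs on the fused Stable Diffusion weights.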

## Training

Input: User-uploaded images that contain clear face areas

Output: Face LoRA model

Description: First, we process the user-uploaded images with an image rotation model based on orientation estimation and a face refinement rotation method based on face detection and keypoint models, obtaining images that contain forward-facing faces. Next, we use a human body parsing model and a portrait beautification model to obtain high-quality face training images. Afterwards, we use a face attribute model and a text annotation model, combined with tag post-processing, to generate fine-grained labels for the training images. Finally, we use these images and labels to fine-tune the Stable Diffusion model and obtain the face LoRA model.
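
For illustration only, the sketch below strings together ModelScope pipelines for some of the models listed under [Model List](#model-list); the task names and result fields are assumptions, and this is not FaceChain's actual preprocessing code.

```python
# Illustrative sketch only: the task names and result fields below are
# assumptions; FaceChain's real preprocessing code may differ.
from modelscope.pipelines import pipeline

face_detection = pipeline(
    'face-detection', model='damo/cv_ddsar_face-detection_iclr23-damofd')
human_parsing = pipeline(
    'image-segmentation', model='damo/cv_resnet101_image-multiple-human-parsing')
skin_retouching = pipeline(
    'skin-retouching', model='damo/cv_unet_skin-retouching')
face_attributes = pipeline(
    'face-attribute-recognition',
    model='damo/cv_resnet34_face-attribute-recognition_fairface')

def preprocess(image_path: str):
    """Rough outline of the training-data preparation for one image."""
    # 1. Keep only images with a clearly detected face.
    detection = face_detection(image_path)
    if not detection.get('boxes'):            # result key is an assumption
        return None
    # 2. Parse the human body / portrait region.
    parsing = human_parsing(image_path)
    # 3. Retouch the portrait to obtain a cleaner training image.
    retouched = skin_retouching(image_path)
    # 4. Derive coarse face attributes to seed the fine-grained text labels.
    attributes = face_attributes(image_path)
    return retouched, parsing, attributes
```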

## Inference

Input: The user-uploaded images from the training phase, plus preset prompt words for generating personal portraits

Output: Personal portrait image

Description: First, we fuse the weights of the face LoRA model and the style LoRA model into the Stable Diffusion model. Next, we use the text-to-image capability of the Stable Diffusion model to generate preliminary personal portrait images from the preset prompt words. Then we further improve the face details of these portraits using the face fusion model. The template face used for fusion is selected from the training images by the face quality evaluation model. Finally, we use the face recognition model to compute the similarity between each generated portrait and the template face, sort the portraits accordingly, and output the top-ranked portrait image as the final result.
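
As a small illustration of the final ranking step, here is a sketch that orders generated portraits by cosine similarity to the template face, assuming each image has already been mapped to an L2-normalized face embedding (for instance by the face recognition model listed below); the helper itself is hypothetical.

```python
import numpy as np

def rank_by_similarity(template_emb, candidate_embs):
    """Return candidate indices sorted by cosine similarity, best first.

    Embeddings are assumed to be L2-normalized, so cosine similarity
    reduces to a dot product.
    """
    sims = [float(np.dot(template_emb, emb)) for emb in candidate_embs]
    return sorted(range(len(candidate_embs)), key=lambda i: sims[i], reverse=True)

# Toy example with random 512-dimensional embeddings.
rng = np.random.default_rng(0)

def random_unit(dim=512):
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

template = random_unit()
candidates = [random_unit() for _ in range(5)]
order = rank_by_similarity(template, candidates)
best_index = order[0]   # the portrait returned as the final output
```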

## Model List

The models used in FaceChain:

[1]  Face detection model DamoFD: https://modelscope.cn/models/damo/cv_ddsar_face-detection_iclr23-damofd

[2]  Image rotating model, offered in the ModelScope studio

[3]  Human parsing model M2FP: https://modelscope.cn/models/damo/cv_resnet101_image-multiple-human-parsing

[4]  Skin retouching model ABPN: https://modelscope.cn/models/damo/cv_unet_skin-retouching

[5]  Face attribute recognition model FairFace: https://modelscope.cn/models/damo/cv_resnet34_face-attribute-recognition_fairface

[6]  DeepDanbooru model: https://github.com/KichangKim/DeepDanbooru

[7]  Face quality assessment model FQA: https://modelscope.cn/models/damo/cv_manual_face-quality-assessment_fqa

[8]  Face fusion model: https://modelscope.cn/models/damo/cv_unet-image-face-fusion_damo

[9]  Face recognition model RTS: https://modelscope.cn/models/damo/cv_ir_face-recognition-ood_rts

# More Information

- [ModelScope library](https://github.com/modelscope/modelscope/)

  The ModelScope Library provides the foundation for building the model ecosystem of ModelScope, including the interface and implementation to integrate various models into ModelScope.

- [Contribute models to ModelScope](https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88)

# License

This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).
