# echomimic

- Name: echomimic
- Version: 0.0.1.dev0
- Summary: echomimic
- Author: Shadow Walker
- Keywords: echomimic
- Upload time: 2024-07-09 15:36:56
- Requirements: none recorded
            <h1 align='center'>EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning</h1>

<div align='center'>
    <a href='https://github.com/yuange250' target='_blank'>Zhiyuan Chen</a><sup>*</sup>&emsp;
    <a href='https://github.com/JoeFannie' target='_blank'>Jiajiong Cao</a><sup>*</sup>&emsp;
    <a href='https://github.com/octavianChen' target='_blank'>Zhiquan Chen</a><sup></sup>&emsp;
    <a href='https://github.com/lymhust' target='_blank'>Yuming Li</a><sup></sup>&emsp;
    <a href='https://github.com/' target='_blank'>Chenguang Ma</a><sup></sup>
</div>
<div align='center'>
    *Equal Contribution.
</div>

<div align='center'>
Terminal Technology Department, Alipay, Ant Group.
</div>

<div align='center'>
    <a href='https://badtobest.github.io/echomimic.html'><img src='https://img.shields.io/badge/Project-Page-blue'></a>
    <a href='https://huggingface.co/BadToBest/EchoMimic'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a>
    <a href=''><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
    <a href='assets/echomimic.png'><img src='https://badges.aleen42.com/src/wechat.svg'></a>
</div>

## Gallery
### Audio Driven (Sing)

<table class="center">
    
<tr>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/d014d921-9f94-4640-97ad-035b00effbfe" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/877603a5-a4f9-4486-a19f-8888422daf78" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/e0cb5afb-40a6-4365-84f8-cb2834c4cfe7" muted="false"></video>
    </td>
</tr>

</table>

### Audio Driven (English)

<table class="center">
    
<tr>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/386982cd-3ff8-470d-a6d9-b621e112f8a5" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/5c60bb91-1776-434e-a720-8857a00b1501" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/1f15adc5-0f33-4afa-b96a-2011886a4a06" muted="false"></video>
    </td>
</tr>

</table>

### Audio Driven (Chinese)

<table class="center">
    
<tr>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/a8092f9a-a5dc-4cd6-95be-1831afaccf00" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/c8b5c59f-0483-42ef-b3ee-4cffae6c7a52" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/532a3e60-2bac-4039-a06c-ff6bf06cb4a4" muted="false"></video>
    </td>
</tr>

</table>

### Landmark Driven

<table class="center">
    
<tr>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/1da6c46f-4532-4375-a0dc-0a4d6fd30a39" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/d4f4d5c1-e228-463a-b383-27fb90ed6172" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/18bd2c93-319e-4d1c-8255-3f02ba717475" muted="false"></video>
    </td>
</tr>

</table>

### Audio + Selected Landmark Driven

<table class="center">
    
<tr>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/4a29d735-ec1b-474d-b843-3ff0bdf85f55" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/b994c8f5-8dae-4dd8-870f-962b50dc091f" muted="false"></video>
    </td>
    <td width=30% style="border: none">
        <video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/955c1d51-07b2-494d-ab93-895b9c43b896" muted="false"></video>
    </td>
</tr>

</table>

**(Some of the demo images above are sourced from image websites. If there is any infringement, we will remove them immediately and apologize.)**

## Installation

### Download the Codes

```bash
  git clone https://github.com/BadToBest/EchoMimic
  cd EchoMimic
```

### Python Environment Setup

- Tested system environments: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7 (a quick version check follows this list)
- Tested GPUs: A100 (80 GB) / RTX 4090D (24 GB) / V100 (16 GB)
- Tested Python versions: 3.8 / 3.10 / 3.11
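
You can confirm the driver and CUDA toolkit versions before proceeding (assuming the NVIDIA driver is installed):

```bash
nvidia-smi       # driver version and the highest CUDA version it supports
nvcc --version   # installed CUDA toolkit version, if the toolkit is on PATH
```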

Create a conda environment (recommended):

```bash
  conda create -n echomimic python=3.8
  conda activate echomimic
```

Install the required packages with `pip`:
```bash
  pip install -r requirements.txt
```
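
As an optional quick check that PyTorch can see the GPU (this assumes `requirements.txt` installs a CUDA-enabled build of `torch`):

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```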

### Download ffmpeg-static
Download and decompress [ffmpeg-static](https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz), then point `FFMPEG_PATH` at the extracted directory:
```bash
export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
```
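
For reference, a minimal sketch of the whole step (assuming `wget` is available and your `tar` supports `.xz` archives):

```bash
wget https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz
tar -xJf ffmpeg-4.4-amd64-static.tar.xz    # extracts ffmpeg-4.4-amd64-static/
export FFMPEG_PATH="$PWD/ffmpeg-4.4-amd64-static"
```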

### Download pretrained weights

```shell
git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights
```
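
Alternatively, if you prefer not to use `git lfs`, the same repository can be fetched with the `huggingface_hub` CLI (a sketch assuming a recent `huggingface_hub` is installed):

```bash
pip install -U "huggingface_hub[cli]"
# Download the whole BadToBest/EchoMimic repo into ./pretrained_weights
huggingface-cli download BadToBest/EchoMimic --local-dir pretrained_weights
```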

The **pretrained_weights** directory is organized as follows:

```
./pretrained_weights/
├── denoising_unet.pth
├── reference_unet.pth
├── motion_module.pth
├── face_locator.pth
├── sd-vae-ft-mse
│   └── ...
├── sd-image-variations-diffusers
│   └── ...
└── audio_processor
    └── whisper_tiny.pt
```

Here, **denoising_unet.pth** / **reference_unet.pth** / **motion_module.pth** / **face_locator.pth** are the main EchoMimic checkpoints. The other models in this hub can also be downloaded from their original hubs; thanks to the authors for their brilliant work:
- [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)
- [sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
- [audio_processor(whisper)](https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt)
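
As an optional sanity check (a minimal sketch; adjust the path if your weights live elsewhere), verify that the four main checkpoints are present:

```bash
for f in denoising_unet.pth reference_unet.pth motion_module.pth face_locator.pth; do
  [ -f "pretrained_weights/$f" ] && echo "OK       $f" || echo "MISSING  $f"
done
```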

### Audio-Driven Algo Inference
Run the Python inference script:

```bash
  python -u infer_audio2vid.py
```

### Audio-Driven Algo Inference on Your Own Cases

Edit the inference config file **./configs/prompts/animation.yaml**, and add your own case:

```yaml
test_cases:
  "path/to/your/image":
    - "path/to/your/audio"
```
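
For instance, a hypothetical filled-in entry (the paths below are placeholders for your own files; since each image maps to a YAML list, it appears several audio clips can be queued per reference image):

```yaml
test_cases:
  # Placeholder paths: substitute your own portrait image and driving audio.
  "./assets/my_portrait.png":
    - "./assets/my_speech.wav"
    - "./assets/my_song.wav"
```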

Then run the Python inference script:
```bash
  python -u infer_audio2vid.py
```

## Release Plans

|  Status  | Milestone                                                                 |       ETA       |
|:--------:|:--------------------------------------------------------------------------|:---------------:|
|    🚀    | Inference source code of the audio-driven algo available on GitHub        | 9th July, 2024  |
|    🚀    | Pretrained models trained on English and Mandarin Chinese to be released  | 9th July, 2024  |
|    🚀    | Inference source code of the pose-driven algo available on GitHub         | 13th July, 2024 |
|    🚀    | Pretrained models with better pose control to be released                 | 13th July, 2024 |
|    🚀    | Pretrained models with better singing performance to be released          | TBD             |
|    🚀    | Large-scale, high-resolution Chinese-based talking-head dataset           | TBD             |

## Acknowledgements

We would like to thank the contributors to the [AnimateDiff](https://github.com/guoyww/AnimateDiff), [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone) and [MuseTalk](https://github.com/TMElyralab/MuseTalk) repositories for their open research and exploration.

We are also grateful to [V-Express](https://github.com/tencent-ailab/V-Express) and [hallo](https://github.com/fudan-generative-vision/hallo) for their outstanding work in the area of diffusion-based talking heads.

If we have missed any open-source projects or related articles, we will supplement the acknowledgements immediately.

## Citation

If you find our work useful for your research, please consider citing the paper:

```bibtex
@misc{chen2024echomimic,
  title={EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning},
  author={Zhiyuan Chen and Jiajiong Cao and Zhiquan Chen and Yuming Li and Chenguang Ma},
  year={2024},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

            
