townwish-mindone-testing

Name: townwish-mindone-testing
Version: 0.2.0
Summary: ONE for all, Optimal generator with No Exception.
Upload time: 2024-10-30 10:02:31
Requires Python: >=3.8
Keywords: artificial intelligence, deep learning, diffusion, generative model, mindspore
Requirements: No requirements were recorded.

# MindONE

This repository contains SoTA algorithms, models, and interesting projects in the area of multimodal understanding and content generation.

ONE is short for "ONE for all".

## News

**Hello MindSpore** from **Stable Diffusion 3**!

<div>
<img src="https://github.com/townwish4git/mindone/assets/143256262/8c25ae9a-67b1-436f-abf6-eca36738cd17" alt="sd3" width="512" height="512">
</div>

- [mindone/diffusers](mindone/diffusers) now supports [Stable Diffusion 3](https://huggingface.co/stabilityai/stable-diffusion-3-medium). Give it a try yourself!

    ```py
    import mindspore
    from mindone.diffusers import StableDiffusion3Pipeline

    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        mindspore_dtype=mindspore.float16,
    )
    prompt = "A cat holding a sign that says 'Hello MindSpore'"
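    # pipeline outputs index like tuples: [0] is the images list, [0][0] the first image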
    image = pipe(prompt)[0][0]
    image.save("sd3.png")
    ```

### Supported models under mindone/examples
| model | features |
| :--- | :--- |
| [cambrian](https://github.com/mindspore-lab/mindone/blob/master/examples/cambrain) | working on it |
| [minicpm-v](https://github.com/mindspore-lab/mindone/blob/master/examples/minicpm_v) | working on v2.6 |
| [internvl](https://github.com/mindspore-lab/mindone/blob/master/examples/internvl) | working on v1.0, v1.5, v2.0 |
| [llava](https://github.com/mindspore-lab/mindone/blob/master/examples/llava) | working on LLaVA 1.5 & 1.6 |
| [vila](https://github.com/mindspore-lab/mindone/blob/master/examples/vila) | working on it |
| [pllava](https://github.com/mindspore-lab/mindone/blob/master/examples/pllava) | working on it |
| [hpcai open sora](https://github.com/mindspore-lab/mindone/blob/master/examples/opensora_hpcai) | supports v1.0/1.1/1.2 large-scale training with DP/SP/ZeRO |
| [open sora plan](https://github.com/mindspore-lab/mindone/blob/master/examples/opensora_pku) | supports v1.0/1.1/1.2 large-scale training with DP/SP/ZeRO |
| [stable diffusion](https://github.com/mindspore-lab/mindone/blob/master/examples/stable_diffusion_v2) | supports SD 1.5/2.0/2.1, vanilla fine-tuning, LoRA, DreamBooth, textual inversion |
| [stable diffusion xl](https://github.com/mindspore-lab/mindone/blob/master/examples/stable_diffusion_xl) | supports SAI-style (Stability AI) vanilla fine-tuning, LoRA, DreamBooth |
| [dit](https://github.com/mindspore-lab/mindone/blob/master/examples/dit) | supports text-to-image fine-tuning |
| [latte](https://github.com/mindspore-lab/mindone/blob/master/examples/latte) | supports unconditional text-to-image fine-tuning |
| [animate diff](https://github.com/mindspore-lab/mindone/blob/master/examples/animatediff) | supports motion module and LoRA training |
| [video composer](https://github.com/mindspore-lab/mindone/tree/master/examples/videocomposer) | supports conditional video generation with motion transfer, etc. |
| [ip adapter](https://github.com/mindspore-lab/mindone/blob/master/examples/ip_adapter) | refactoring |
| [t2i-adapter](https://github.com/mindspore-lab/mindone/blob/master/examples/t2i_adapter) | refactoring |
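
Most rows above are training recipes with their own scripts under `examples/`; for plain inference, several of these model families are also reachable through the `mindone.diffusers` port described in the next section. Below is a minimal sketch for the SDXL row, assuming the port mirrors the HF diffusers `StableDiffusionXLPipeline` API; the model ID and prompt are illustrative:

```py
import mindspore
from mindone.diffusers import StableDiffusionXLPipeline

# Hypothetical usage following the same from_pretrained pattern as the
# SD3 snippet above; assumes the SDXL pipeline is ported like its HF counterpart
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    mindspore_dtype=mindspore.float16,
)
image = pipe("a watercolor painting of a lighthouse")[0][0]
image.save("sdxl.png")
```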

### Run HF diffusers on MindSpore
mindone.diffusers is under active development; most tasks were tested with MindSpore 2.2.10 and Ascend 910 hardware.

| component | features |
| :--- | :--- |
| [pipeline](https://github.com/mindspore-lab/mindone/tree/master/mindone/diffusers/pipelines) | supports 30+ text2image, text2video, and text2audio tasks |
| [models](https://github.com/mindspore-lab/mindone/tree/master/mindone/diffusers/models) | supports autoencoder & transformer base models, same as HF diffusers |
| [schedulers](https://github.com/mindspore-lab/mindone/tree/master/mindone/diffusers/schedulers) | supports 10+ schedulers such as DDPM & DPM-Solver, same as HF diffusers |
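
As an illustration of how the pipeline and scheduler components fit together, a loaded pipeline's scheduler can be swapped out. This is a sketch assuming the port carries over the `from_config` pattern from HF diffusers; the DPM-Solver class name follows the schedulers row above:

```py
import mindspore
from mindone.diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    mindspore_dtype=mindspore.float16,
)
# Rebuild the scheduler from the existing one's configuration, then run with
# fewer inference steps, which is the usual reason to pick DPM-Solver
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("a photo of a red panda", num_inference_steps=20)[0][0]
image.save("panda.png")
```
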
#### TODO
* [ ] mindspore 2.3.0 version adaptation
* [ ] hf diffusers 0.30.0 version adaptation

            
