InvokeAI

- **Name:** InvokeAI
- **Version:** 4.1.0
- **Summary:** An implementation of Stable Diffusion which provides various new features and options to aid the image generation process
- **Author email:** The InvokeAI Project <lincoln.stein@gmail.com>
- **Upload time:** 2024-04-17 22:01:30
- **Requires Python:** <3.12,>=3.10
- **License:** Apache License 2.0
- **Keywords:** stable-diffusion, ai

            <div align="center">

![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)

# Invoke - Professional Creative AI Tools for Visual Media 
##  To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)
  


[![discord badge]][discord link]

[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]

[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]

[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]

[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
[github forks link]: https://useful-forks.github.io/?repo=invoke-ai%2FInvokeAI
[github open issues badge]: https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github
[github open issues link]: https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen
[github open prs badge]: https://flat.badgen.net/github/open-prs/invoke-ai/InvokeAI?icon=github
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/

</div>

InvokeAI is a leading creative engine built to empower professionals
and enthusiasts alike. Generate and create stunning visual media using
the latest AI-driven technologies. InvokeAI offers an industry leading
Web Interface, interactive Command Line Interface, and also serves as
the foundation for multiple commercial products.

**Quick links**: [[How to
  Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a
  href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
  href="https://invoke-ai.github.io/InvokeAI/">Documentation and
  Tutorials</a>]
  [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
  [<a
  href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion,
  Ideas & Q&A</a>] 
   [<a
  href="https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/">Contributing</a>] 

<div align="center">


![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)


</div>

## Table of Contents


**Getting Started**
1. 🏁 [Quick Start](#quick-start)
2. 🖥️ [Hardware Requirements](#hardware-requirements)

**More About Invoke**
1. 🌟 [Features](#features) 
2. 📣 [Latest Changes](#latest-changes) 
3. 🛠️ [Troubleshooting](#troubleshooting) 

**Supporting the Project**
1. 🤝 [Contributing](#contributing) 
2. 👥 [Contributors](#contributors) 
3. 💕 [Support](#support) 

## Quick Start

For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)

If upgrading from version 2.3, please read [Migrating a 2.3 root
directory to 3.0](#migrating-to-3) first.

### Automatic Installer (suggested for first-time users)

1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)

2. Download the .zip file for your OS (Windows/macOS/Linux).

3. Unzip the file.

4. **Windows:** double-click on the `install.bat` script. **macOS:** Open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. **Linux:** run `install.sh`.

5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
location with at least 15 GB of free disk space, more if you plan on
installing lots of models.

6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generation models.

7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
folder (if you didn't change it in step 5) is `~/invokeai` on
Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.

8. On Windows systems, double-click on the `invoke.bat` file. On
macOS, open a Terminal window, drag `invoke.sh` from the folder into
the Terminal, and press return. On Linux, run `invoke.sh`.

9. Press 2 to open the "browser-based UI", press enter/return, wait a
minute or two for Stable Diffusion to start up, then open your browser
and go to http://localhost:9090.

10. Type `banana sushi` in the box on the top left and click `Invoke`.
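Step 5 asks for a location with at least 15 GB free. If you are unsure how much space a drive has, a quick check along these lines works (an illustrative Python sketch, not part of the installer; the function name is made up):

```python
import shutil

def has_free_space(path: str, required_gb: int = 15) -> bool:
    """Return True if the filesystem holding `path` has at least
    `required_gb` gigabytes free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024**3

print(has_free_space("."))
```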

### Command-Line Installation (for developers and users familiar with Terminals)

You must have Python 3.10 or 3.11 installed on your machine; earlier and
later versions are not supported.
Node.js also needs to be installed, along with `pnpm` (which can be installed
with the command `npm install -g pnpm` if needed).

1. Open a command-line window on your machine. PowerShell is recommended on Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:

    ```terminal
    mkdir invokeai
    ```

3. Create a virtual environment named `.venv` inside this directory:

    ```terminal
    cd invokeai
    python -m venv .venv --prompt InvokeAI
    ```

4. Activate the virtual environment (do this every time you run InvokeAI):

    _For Linux/Mac users:_

    ```sh
    source .venv/bin/activate
    ```

    _For Windows users:_

    ```ps
    .venv\Scripts\activate
    ```

5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.

    _For Windows/Linux with an NVIDIA GPU:_

    ```terminal
    pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
    ```

    _For Linux with an AMD GPU:_

    ```sh
    pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
    ```

    _For non-GPU systems:_

    ```terminal
    pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
    ```

    _For Macs, either Intel or Apple silicon (M1/M2/M3):_

    ```sh
    pip install InvokeAI --use-pep517
    ```

6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):

    ```terminal
    invokeai-configure --root .
    ```
	Don't miss the dot at the end!

7. Launch the web server (do this every time you run InvokeAI):

    ```terminal
    invokeai-web
    ```

8. Point your browser to http://localhost:9090 to bring up the web interface.

9. Type `banana sushi` in the box on the top left and click `Invoke`.

Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
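The version requirement above (Python 3.10 or 3.11, matching the package's declared `requires_python` of `<3.12,>=3.10`) can be checked before installing. A minimal pre-flight sketch, assuming you want a plain warning rather than a hard failure (the helper function is made up for illustration):

```python
import sys

def python_supported(major: int, minor: int) -> bool:
    """InvokeAI 4.1.0 declares requires-python '<3.12,>=3.10'."""
    return (3, 10) <= (major, minor) < (3, 12)

# Warn rather than proceed silently with an unsupported interpreter.
if not python_supported(sys.version_info.major, sys.version_info.minor):
    print(f"Warning: Python {sys.version.split()[0]} is outside the supported 3.10-3.11 range.")
```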

## Detailed Installation Instructions

InvokeAI is supported on Linux, Windows, and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)

<a name="migrating-to-3"></a>
### Migrating a v2.3 InvokeAI root directory

The InvokeAI root directory is where the InvokeAI startup file,
installed models, and generated images are stored. It is ordinarily
named `invokeai` and located in your home directory. The contents and
layout of this directory has changed between versions 2.3 and 3.0 and
cannot be used directly.

We currently recommend that you use the installer to create a new root
directory named differently from the 2.3 one, e.g. `invokeai-3` and
then use a migration script to copy your 2.3 models into the new
location. However, if you prefer, you can upgrade the directory in
place. This section gives both recipes.

#### Creating a new root directory and migrating old models

This is the safer recipe because it leaves your old root directory in
place to fall back on.

1. Follow the instructions above to create and install InvokeAI in a
directory that has a different name from the 2.3 invokeai directory.
In this example, we will use `invokeai-3`.

2. When you are prompted to select models to install, select a minimal
set of models, such as stable-diffusion-v1.5 only.

3. After installation is complete, launch `invoke.sh` (Linux/Mac) or
`invoke.bat` (Windows) and select option [8] "Open the developer's console".
This will take you to the command line.

4. Issue the command `invokeai-migrate3 --from /path/to/v2.3-root --to
/path/to/invokeai-3-root`. Provide the correct `--from` and `--to`
paths for your v2.3 and v3.0 root directories respectively.

This will copy and convert your old models from 2.3 format to 3.0
format and create a new `models` directory in the 3.0 directory. The
old models directory (which contains the models selected at install
time) will be renamed `models.orig` and can be deleted once you have
confirmed that the migration was successful.

 If you wish, you can pass the 2.3 root directory to both `--from` and
`--to` in order to update in place. Warning: this directory will no
longer be usable with InvokeAI 2.3.

#### Migrating in place

For the adventurous, you may do an in-place upgrade from 2.3 to 3.0
without touching the command line. **This recipe does not work on
Windows platforms due to a bug in the Windows version of the 2.3
upgrade script.** See the next section for a Windows recipe.

##### For Mac and Linux Users:

1. Launch the InvokeAI launcher script in your current v2.3 root directory.

2. Select option [9] "Update InvokeAI" to bring up the updater dialog.

3. Select option [1] to upgrade to the latest release.

4. Once the upgrade is finished you will be returned to the launcher
menu. Select option [6] "Re-run the configure script to fix a broken
install or to complete a major upgrade".

This will run the configure script against the v2.3 directory and
update it to the 3.0 format. The following files will be replaced:

  - The `invokeai.init` file, replaced by `invokeai.yaml`
  - The `models` directory
  - The `configs/models.yaml` model index
  
The original versions of these files will be saved with the suffix
".orig" appended to the end. Once you have confirmed that the upgrade
worked, you can safely remove these files. Alternatively you can
restore a working v2.3 directory by removing the new files and
restoring the ".orig" files' original names.
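The rollback described above (remove the new files, restore the ".orig" backups to their original names) can be scripted. A hypothetical sketch, using only the three file names listed in this section; `restore_v23` is not part of InvokeAI:

```python
import shutil
from pathlib import Path

# File names taken from the list above; illustrative only.
REPLACED = ("invokeai.init", "models", "configs/models.yaml")

def restore_v23(root: str) -> list[str]:
    """Rename each '.orig' backup back over its 3.0-format replacement.

    Returns the names that were restored."""
    restored = []
    for name in REPLACED:
        backup = Path(root, f"{name}.orig")
        target = Path(root, name)
        if backup.exists():
            if target.is_dir():
                shutil.rmtree(target)   # drop the 3.0 version first
            elif target.exists():
                target.unlink()
            backup.rename(target)
            restored.append(name)
    return restored
```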

##### For Windows Users:

Windows users can upgrade with the following steps:

1. Enter the 2.3 root directory you wish to upgrade.
2. Launch `invoke.bat`.
3. Select the "Developer's console" option [8].
4. Type the following commands:

```
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
invokeai-configure --root .
```
(Replace `v3.0.0` with the current release number if this document is out of date).

The first command will install and upgrade the software needed to run
InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
You may now launch the Web UI in the usual way, by selecting option [1]
from the launcher script.

#### Migrating Images

The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However, it does **not** migrate the generated
images stored in your 2.3-format outputs directory. To do this, you 
need to run an additional step:

1. From a working InvokeAI 3.0 root directory, start the launcher and
enter menu option [8] to open the "developer's console".

2. At the developer's console command line, type the command:

```bash
invokeai-import-images
```

3. This will lead you through the process of confirming the desired
   source and destination for the imported images. The images will
   appear in the gallery board of your choice, and contain the
   original prompt, model name, and other parameters used to generate
   the image.
   
(Many kudos to **techjedi** for contributing this script.)

## Hardware Requirements

InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).

### System

You will need one of the following:

- An NVIDIA-based graphics card with 4 GB or more of VRAM. 6-8 GB
  of VRAM is highly recommended for rendering with the Stable
  Diffusion XL models.
- An Apple computer with an M1 or later chip.
- An AMD-based graphics card with 4 GB or more of VRAM (Linux
  only); 6-8 GB for XL rendering.

We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.

**Memory** - At least 12 GB Main Memory RAM.

**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
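The VRAM guidance above can be condensed into a small rule of thumb. This is an illustrative sketch only; the thresholds are the ones stated in this section, and the function is not part of InvokeAI:

```python
def vram_guidance(vram_gb: float) -> str:
    """Map available GPU VRAM to the recommendations stated above."""
    if vram_gb < 4:
        return "below minimum (4 GB required)"
    if vram_gb < 6:
        return "ok for SD 1.x/2.x; below the 6-8 GB recommended for SDXL"
    return "meets the 6-8 GB recommendation for SDXL rendering"
```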

## Features

Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)

### *Web Server & UI*

InvokeAI offers a locally hosted Web Server & React Frontend with an industry-leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.

### *Unified Canvas*

The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.

### *Workflows & Nodes*

InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use cases.

### *Board & Gallery Management*

Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of the key prompts or settings used in your workflow.

### Other features

- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1, XL support*
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Workflow creation & management*
- *Node-Based Architecture*


### Latest Changes

For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).

### Troubleshooting / FAQ

Please check out our **[FAQ](https://invoke-ai.github.io/InvokeAI/help/FAQ/)** to get solutions for common installation
problems and other issues. For more help, please join our [Discord][discord link].

## Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.

Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.

If you are unfamiliar with how
to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing: 
[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/).

We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
to become part of our community.

Welcome to InvokeAI!

### Contributors

This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.

### Support

For support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link].

Original portions of the software are Copyright (c) 2023 by respective contributors.


            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "InvokeAI",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.12,>=3.10",
    "maintainer_email": null,
    "keywords": "stable-diffusion, AI",
    "author": null,
    "author_email": "The InvokeAI Project <lincoln.stein@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/f0/e5/ccbec46ddb9eadf0706881b7e5b12082fdcfaef1366cbfa2964e716c2eed/InvokeAI-4.1.0.tar.gz",
    "platform": null,
    "description": "<div align=\"center\">\n\n![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)\n\n# Invoke - Professional Creative AI Tools for Visual Media \n##  To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)\n  \n\n\n[![discord badge]][discord link]\n\n[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]\n\n[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]\n\n[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]\n\n[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github\n[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain\n[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord\n[discord link]: https://discord.gg/ZmtBAhwWhy\n[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github\n[github forks link]: https://useful-forks.github.io/?repo=invoke-ai%2FInvokeAI\n[github open issues badge]: https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github\n[github open issues link]: https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen\n[github open prs badge]: https://flat.badgen.net/github/open-prs/invoke-ai/InvokeAI?icon=github\n[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen\n[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github\n[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers\n[latest commit to main badge]: 
https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900\n[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main\n[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github\n[latest release link]: https://github.com/invoke-ai/InvokeAI/releases\n[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg\n[translation status link]: https://hosted.weblate.org/engage/invokeai/\n\n</div>\n\nInvokeAI is a leading creative engine built to empower professionals\nand enthusiasts alike. Generate and create stunning visual media using\nthe latest AI-driven technologies. InvokeAI offers an industry leading\nWeb Interface, interactive Command Line Interface, and also serves as\nthe foundation for multiple commercial products.\n\n**Quick links**: [[How to\n  Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a\n  href=\"https://discord.gg/ZmtBAhwWhy\">Discord Server</a>] [<a\n  href=\"https://invoke-ai.github.io/InvokeAI/\">Documentation and\n  Tutorials</a>]\n  [<a href=\"https://github.com/invoke-ai/InvokeAI/issues\">Bug Reports</a>]\n  [<a\n  href=\"https://github.com/invoke-ai/InvokeAI/discussions\">Discussion,\n  Ideas & Q&A</a>] \n   [<a\n  href=\"https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/\">Contributing</a>] \n\n<div align=\"center\">\n\n\n![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)\n\n\n</div>\n\n## Table of Contents\n\nTable of Contents \ud83d\udcdd\n\n**Getting Started**\n1. \ud83c\udfc1 [Quick Start](#quick-start) \n3. \ud83d\udda5\ufe0f [Hardware Requirements](#hardware-requirements) \n\n**More About Invoke**\n1. \ud83c\udf1f [Features](#features) \n2. \ud83d\udce3 [Latest Changes](#latest-changes) \n3. 
\ud83d\udee0\ufe0f [Troubleshooting](#troubleshooting) \n\n**Supporting the Project**\n1. \ud83e\udd1d [Contributing](#contributing) \n2. \ud83d\udc65 [Contributors](#contributors) \n3. \ud83d\udc95 [Support](#support) \n\n## Quick Start\n\nFor full installation and upgrade instructions, please see:\n[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)\n\nIf upgrading from version 2.3, please read [Migrating a 2.3 root\ndirectory to 3.0](#migrating-to-3) first.\n\n### Automatic Installer (suggested for 1st time users)\n\n1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)\n\n2. Download the .zip file for your OS (Windows/macOS/Linux).\n\n3. Unzip the file.\n\n4. **Windows:** double-click on the `install.bat` script. **macOS:** Open a Terminal window, drag the file `install.sh` from Finder\ninto the Terminal, and press return. **Linux:** run `install.sh`.\n\n5. You'll be asked to confirm the location of the folder in which\nto install InvokeAI and its image generation model files. Pick a\nlocation with at least 15 GB of free memory. More if you plan on\ninstalling lots of models.\n\n6. Wait while the installer does its thing. After installing the software,\nthe installer will launch a script that lets you configure InvokeAI and\nselect a set of starting image generation models.\n\n7. Find the folder that InvokeAI was installed into (it is not the\nsame as the unpacked zip file directory!) The default location of this\nfolder (if you didn't change it in step 5) is `~/invokeai` on\nLinux/Mac systems, and `C:\\Users\\YourName\\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.\n\n8. On Windows systems, double-click on the `invoke.bat` file. On\nmacOS, open a Terminal window, drag `invoke.sh` from the folder into\nthe Terminal, and press return. On Linux, run `invoke.sh`\n\n9. 
Press 2 to open the \"browser-based UI\", press enter/return, wait a\nminute or two for Stable Diffusion to start up, then open your browser\nand go to http://localhost:9090.\n\n10. Type `banana sushi` in the box on the top left and click `Invoke`.\n\n### Command-Line Installation (for developers and users familiar with Terminals)\n\nYou must have Python 3.10 or 3.11 installed on your machine. Earlier or\nlater versions are not supported.\nNode.js also needs to be installed, along with `pnpm` (which can be installed with\nthe command `npm install -g pnpm` if needed).\n\n1. Open a command-line window on your machine. PowerShell is recommended for Windows.\n2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:\n\n    ```terminal\n    mkdir invokeai\n    ```\n\n3. Create a virtual environment named `.venv` inside this directory:\n\n    ```terminal\n    cd invokeai\n    python -m venv .venv --prompt InvokeAI\n    ```\n\n4. Activate the virtual environment (do this every time you run InvokeAI):\n\n    _For Linux/Mac users:_\n\n    ```sh\n    source .venv/bin/activate\n    ```\n\n    _For Windows users:_\n\n    ```ps\n    .venv\\Scripts\\activate\n    ```\n\n5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.\n\n    _For Windows/Linux with an NVIDIA GPU:_\n\n    ```terminal\n    pip install \"InvokeAI[xformers]\" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121\n    ```\n\n    _For Linux with an AMD GPU:_\n\n    ```sh\n    pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6\n    ```\n\n    _For non-GPU systems:_\n\n    ```terminal\n    pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu\n    ```\n\n    _For Macintoshes, either Intel or M1/M2/M3:_\n\n    ```sh\n    pip install InvokeAI --use-pep517\n    ```\n\n6. 
Configure InvokeAI and install a starting set of image generation models (you only need to do this once):\n\n    ```terminal\n    invokeai-configure --root .\n    ```\n\n    Don't miss the dot at the end!\n\n7. Launch the web server (do this every time you run InvokeAI):\n\n    ```terminal\n    invokeai-web\n    ```\n\n8. Point your browser to http://localhost:9090 to bring up the web interface.\n\n9. Type `banana sushi` in the box on the top left and click `Invoke`.\n\nBe sure to activate the virtual environment each time before re-launching InvokeAI,\nusing `source .venv/bin/activate` or `.venv\\Scripts\\activate`.\n\n## Detailed Installation Instructions\n\nInvokeAI is supported across Linux, Windows and macOS. Linux\nusers can use either an Nvidia-based card (with CUDA support) or an\nAMD card (using the ROCm driver). For full installation and upgrade\ninstructions, please see:\n[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)\n\n<a name=\"migrating-to-3\"></a>\n### Migrating a v2.3 InvokeAI root directory\n\nThe InvokeAI root directory is where the InvokeAI startup file,\ninstalled models, and generated images are stored. It is ordinarily\nnamed `invokeai` and located in your home directory. The contents and\nlayout of this directory have changed between versions 2.3 and 3.0, so a\nv2.3 directory cannot be used directly with v3.0.\n\nWe currently recommend that you use the installer to create a new root\ndirectory named differently from the 2.3 one, e.g. `invokeai-3`, and\nthen use a migration script to copy your 2.3 models into the new\nlocation. However, if you choose, you can upgrade the directory in\nplace. This section gives both recipes.\n\n#### Creating a new root directory and migrating old models\n\nThis is the safer recipe because it leaves your old root directory in\nplace to fall back on.\n\n1. 
Follow the instructions above to create and install InvokeAI in a\ndirectory that has a different name from the 2.3 invokeai directory.\nIn this example, we will use \"invokeai-3\".\n\n2. When you are prompted to select models to install, select a minimal\nset of models, such as stable-diffusion-v1.5 only.\n\n3. After installation is complete, launch `invoke.sh` (Linux/Mac) or\n`invoke.bat` and select option [8] \"Open the developer's console\". This\nwill take you to the command line.\n\n4. Issue the command `invokeai-migrate3 --from /path/to/v2.3-root --to\n/path/to/invokeai-3-root`. Provide the correct `--from` and `--to`\npaths for your v2.3 and v3.0 root directories respectively.\n\nThis will copy and convert your old models from 2.3 format to 3.0\nformat and create a new `models` directory in the 3.0 directory. The\nold models directory (which contains the models selected at install\ntime) will be renamed `models.orig` and can be deleted once you have\nconfirmed that the migration was successful.\n\nIf you wish, you can pass the 2.3 root directory to both `--from` and\n`--to` in order to update in place. Warning: this directory will no\nlonger be usable with InvokeAI 2.3.\n\n#### Migrating in place\n\nFor the adventurous, you may do an in-place upgrade from 2.3 to 3.0\nwithout touching the command line. **This recipe does not work on\nWindows platforms due to a bug in the Windows version of the 2.3\nupgrade script.** See the next section for a Windows recipe.\n\n##### For Mac and Linux Users:\n\n1. Launch the InvokeAI launcher script in your current v2.3 root directory.\n\n2. Select option [9] \"Update InvokeAI\" to bring up the updater dialog.\n\n3. Select option [1] to upgrade to the latest release.\n\n4. Once the upgrade is finished you will be returned to the launcher\nmenu. 
Select option [6] \"Re-run the configure script to fix a broken\ninstall or to complete a major upgrade\".\n\nThis will run the configure script against the v2.3 directory and\nupdate it to the 3.0 format. The following files will be replaced:\n\n  - The invokeai.init file, replaced by invokeai.yaml\n  - The models directory\n  - The configs/models.yaml model index\n\nThe original versions of these files will be saved with the suffix\n\".orig\" appended to the end. Once you have confirmed that the upgrade\nworked, you can safely remove these files. Alternatively you can\nrestore a working v2.3 directory by removing the new files and\nrestoring the \".orig\" files' original names.\n\n##### For Windows Users:\n\nWindows users can upgrade with the following steps:\n\n1. Enter the 2.3 root directory you wish to upgrade.\n2. Launch `invoke.sh` or `invoke.bat`.\n3. Select the \"Developer's console\" option [8].\n4. Type the following commands:\n\n```\npip install \"invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0\" --use-pep517 --upgrade\ninvokeai-configure --root .\n```\n(Replace `v3.0.0` with the current release number if this document is out of date.)\n\nThe first command will install and upgrade the software needed to run\nInvokeAI. The second will prepare the 2.3 directory for use with 3.0.\nYou may now launch the WebUI in the usual way, by selecting option [1]\nfrom the launcher script.\n\n#### Migrating Images\n\nThe migration script will migrate your invokeai settings and models,\nincluding textual inversion models, LoRAs and merges that you may have\ninstalled previously. However, it does **not** migrate the generated\nimages stored in your 2.3-format outputs directory. To do this, you\nneed to run an additional step:\n\n1. From a working InvokeAI 3.0 root directory, start the launcher and\nenter menu option [8] to open the \"developer's console\".\n\n2. At the developer's console command line, type the command:\n\n```bash\ninvokeai-import-images\n```\n\n3. 
This will lead you through the process of confirming the desired\n   source and destination for the imported images. The images will\n   appear in the gallery board of your choice, and contain the\n   original prompt, model name, and other parameters used to generate\n   the image.\n\n(Many kudos to **techjedi** for contributing this script.)\n\n## Hardware Requirements\n\nInvokeAI is supported across Linux, Windows and macOS. Linux\nusers can use either an Nvidia-based card (with CUDA support) or an\nAMD card (using the ROCm driver).\n\n### System\n\nYou will need one of the following:\n\n- An NVIDIA-based graphics card with 4 GB or more of VRAM; 6-8 GB\n  of VRAM is highly recommended for rendering using the Stable\n  Diffusion XL models\n- An Apple computer with an M1 chip.\n- An AMD-based graphics card with 4 GB or more of VRAM (Linux\n  only); 6-8 GB for XL rendering.\n\nWe do not recommend the GTX 1650 or 1660 series video cards. They are\nunable to run in half-precision mode and do not have sufficient VRAM\nto render 512x512 images.\n\n**Memory** - At least 12 GB of main memory (RAM).\n\n**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.\n\n## Features\n\nFeature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/).\n\n### *Web Server & UI*\n\nInvokeAI offers a locally hosted web server & React frontend with an industry-leading user experience. The web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.\n\n### *Unified Canvas*\n\nThe Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. 
This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.\n\n### *Workflows & Nodes*\n\nInvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use cases.\n\n### *Board & Gallery Management*\n\nInvokeAI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.\n\n### Other features\n\n- *Support for both ckpt and diffusers models*\n- *SD 2.0, 2.1, XL support*\n- *Upscaling Tools*\n- *Embedding Manager & Support*\n- *Model Manager & Support*\n- *Workflow creation & management*\n- *Node-Based Architecture*\n\n### Latest Changes\n\nFor our latest changes, view our [Release\nNotes](https://github.com/invoke-ai/InvokeAI/releases) and the\n[CHANGELOG](docs/CHANGELOG.md).\n\n### Troubleshooting / FAQ\n\nPlease check out our **[FAQ](https://invoke-ai.github.io/InvokeAI/help/FAQ/)** to get solutions for common installation\nproblems and other issues. 
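Because the package pins `requires_python` to `>=3.10,<3.12`, an interpreter outside that range is a common cause of installation failures. As a quick first troubleshooting step, you can confirm the Python version in your virtual environment; this is a minimal sketch assuming a POSIX shell and that `python3` resolves to the interpreter you installed InvokeAI with:

```shell
# Print the major.minor version of the active interpreter.
PYVER=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')

# InvokeAI 4.1.0 supports only 3.10 and 3.11 (requires_python: <3.12,>=3.10).
case "$PYVER" in
  3.10|3.11) STATUS="supported" ;;
  *)         STATUS="NOT supported (need 3.10 or 3.11)" ;;
esac
echo "Python $PYVER is $STATUS"
```

If the version is out of range, recreate the `.venv` with a supported interpreter (e.g. `python3.11 -m venv .venv --prompt InvokeAI`) before reinstalling.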
For more help, please join our [Discord][discord link]\n\n## Contributing\n\nAnyone who wishes to contribute to this project, whether documentation, features, bug fixes, code\ncleanup, testing, or code reviews, is very much encouraged to do so.\n\nGet started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.\n\nIf you are unfamiliar with how\nto contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing: \n[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/).\n\nWe hope you enjoy using our software as much as we enjoy creating it,\nand we hope that some of those of you who are reading this will elect\nto become part of our community.\n\nWelcome to InvokeAI!\n\n### Contributors\n\nThis fork is a combined effort of various people from across the world.\n[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for\ntheir time, hard work and effort.\n\n### Support\n\nFor support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link].\n\nOriginal portions of the software are Copyright (c) 2023 by respective contributors.\n\n",
    "bugtrack_url": null,
    "license": "Apache License Version 2.0, January 2004 http://www.apache.org/licenses/  TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION  1. Definitions.  \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.  \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.  \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.  \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.  \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.  \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.  \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).  \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.  \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"  \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.  2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.  3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.  4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:  (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and  (b) You must cause any modified files to carry prominent notices stating that You changed the files; and  (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and  (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of 
the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.  You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.  5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.  6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.  7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.  8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.  9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.   ",
    "summary": "An implementation of Stable Diffusion which provides various new features and options to aid the image generation process",
    "version": "4.1.0",
    "project_urls": {
        "Bug Reports": "https://github.com/invoke-ai/InvokeAI/issues",
        "Discord": "https://discord.gg/ZmtBAhwWhy",
        "Documentation": "https://invoke-ai.github.io/InvokeAI/",
        "Homepage": "https://invoke-ai.github.io/InvokeAI/",
        "Source": "https://github.com/invoke-ai/InvokeAI/"
    },
    "split_keywords": [
        "stable-diffusion",
        " ai"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "6fef48e0bed150c341fab26a1a40da884d2992b564ef0968f9104d4616f34b03",
                "md5": "7edb65d5194baabb780817d7b9ba1c2f",
                "sha256": "41e6d8a4ee9c10df9a8b36a9c08cd3ed7a35ba6b0b04c49e43eb0e0ba8d35c2f"
            },
            "downloads": -1,
            "filename": "InvokeAI-4.1.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "7edb65d5194baabb780817d7b9ba1c2f",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<3.12,>=3.10",
            "size": 2114171,
            "upload_time": "2024-04-17T22:01:27",
            "upload_time_iso_8601": "2024-04-17T22:01:27.386056Z",
            "url": "https://files.pythonhosted.org/packages/6f/ef/48e0bed150c341fab26a1a40da884d2992b564ef0968f9104d4616f34b03/InvokeAI-4.1.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f0e5ccbec46ddb9eadf0706881b7e5b12082fdcfaef1366cbfa2964e716c2eed",
                "md5": "3ddb50d6949c9bd44e97285a240e2c7f",
                "sha256": "715563d3040a565987427fcb47806b842e6aec616dabd6d747e8c3fdfb56eec4"
            },
            "downloads": -1,
            "filename": "InvokeAI-4.1.0.tar.gz",
            "has_sig": false,
            "md5_digest": "3ddb50d6949c9bd44e97285a240e2c7f",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.12,>=3.10",
            "size": 1961391,
            "upload_time": "2024-04-17T22:01:30",
            "upload_time_iso_8601": "2024-04-17T22:01:30.467639Z",
            "url": "https://files.pythonhosted.org/packages/f0/e5/ccbec46ddb9eadf0706881b7e5b12082fdcfaef1366cbfa2964e716c2eed/InvokeAI-4.1.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-17 22:01:30",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "invoke-ai",
    "github_project": "InvokeAI",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "invokeai"
}
        