# catalyst-pdm

- **Version:** 22.4.1
- **Summary:** Catalyst fork compatible with PDM
- **Home page:** https://github.com/AndrewLaptev/catalyst_pdm
- **Author:** Sergey Kolesnikov (scitator@gmail.com)
- **License:** Apache License 2.0
- **Requires Python:** >=3.7.0
- **Uploaded:** 2023-04-03 21:30:57
- **Keywords:** machine learning, distributed computing, deep learning, reinforcement learning, computer vision, natural language processing, recommendation systems, information retrieval, pytorch

<div align="center">

[![Catalyst logo](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/catalyst_logo.png)](https://github.com/catalyst-team/catalyst)

**Accelerated Deep Learning R&D**

[![CodeFactor](https://www.codefactor.io/repository/github/catalyst-team/catalyst/badge)](https://www.codefactor.io/repository/github/catalyst-team/catalyst)
[![PyPI version](https://img.shields.io/pypi/v/catalyst.svg)](https://pypi.org/project/catalyst/)
[![Docs](https://img.shields.io/badge/dynamic/json.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Fcatalyst%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v)](https://catalyst-team.github.io/catalyst/index.html)
[![Docker](https://img.shields.io/badge/docker-hub-blue)](https://hub.docker.com/r/catalystteam/catalyst/tags)
[![PyPI Status](https://pepy.tech/badge/catalyst)](https://pepy.tech/project/catalyst)

[![Twitter](https://img.shields.io/badge/news-twitter-499feb)](https://twitter.com/CatalystTeam)
[![Telegram](https://img.shields.io/badge/channel-telegram-blue)](https://t.me/catalyst_team)
[![Slack](https://img.shields.io/badge/Catalyst-slack-success)](https://join.slack.com/t/catalyst-team-devs/shared_invite/zt-d9miirnn-z86oKDzFMKlMG4fgFdZafw)
[![Github contributors](https://img.shields.io/github/contributors/catalyst-team/catalyst.svg?logo=github&logoColor=white)](https://github.com/catalyst-team/catalyst/graphs/contributors)

![codestyle](https://github.com/catalyst-team/catalyst/workflows/codestyle/badge.svg?branch=master&event=push)
![docs](https://github.com/catalyst-team/catalyst/workflows/docs/badge.svg?branch=master&event=push)
![catalyst](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
![integrations](https://github.com/catalyst-team/catalyst/workflows/integrations/badge.svg?branch=master&event=push)

[![python](https://img.shields.io/badge/python_3.6-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
[![python](https://img.shields.io/badge/python_3.7-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
[![python](https://img.shields.io/badge/python_3.8-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)

[![os](https://img.shields.io/badge/Linux-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
[![os](https://img.shields.io/badge/OSX-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
[![os](https://img.shields.io/badge/WSL-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
</div>

Catalyst is a PyTorch framework for Deep Learning Research and Development.
It focuses on reproducibility, rapid experimentation, and codebase reuse
so you can create something new rather than write yet another train loop.
<br/> Break the cycle – use the Catalyst!

- [Project Manifest](https://github.com/catalyst-team/catalyst/blob/master/MANIFEST.md)
- [Framework architecture](https://miro.com/app/board/o9J_lxBO-2k=/)
- [Catalyst at AI Landscape](https://landscape.lfai.foundation/selected=catalyst)
- Part of the [PyTorch Ecosystem](https://pytorch.org/ecosystem/)

<details>
<summary>Catalyst at PyTorch Ecosystem Day 2021</summary>
<p>

[![Catalyst poster](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/Catalyst-PTED21.png)](https://github.com/catalyst-team/catalyst)

</p>
</details>

<details>
<summary>Catalyst at PyTorch Developer Day 2021</summary>
<p>

[![Catalyst poster](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/Catalyst-PTDD21.png)](https://github.com/catalyst-team/catalyst)

</p>
</details>

----

## Getting started

```bash
pip install -U catalyst
```

```python
import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.02)
loaders = {
    "train": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32),
    "valid": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),
}

runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)

# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    callbacks=[
        dl.AccuracyCallback(input_key="logits", target_key="targets", topk=(1, 3, 5)),
        dl.PrecisionRecallF1SupportCallback(input_key="logits", target_key="targets"),
    ],
    logdir="./logs",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    verbose=True,
)

# model evaluation
metrics = runner.evaluate_loader(
    loader=loaders["valid"],
    callbacks=[dl.AccuracyCallback(input_key="logits", target_key="targets", topk=(1, 3, 5))],
)

# model inference
for prediction in runner.predict_loader(loader=loaders["valid"]):
    assert prediction["logits"].detach().cpu().numpy().shape[-1] == 10

# model post-processing
model = runner.model.cpu()
batch = next(iter(loaders["valid"]))[0]
utils.trace_model(model=model, batch=batch)
utils.quantize_model(model=model)
utils.prune_model(model=model, pruning_fn="l1_unstructured", amount=0.8)
utils.onnx_export(model=model, batch=batch, file="./logs/mnist.onnx", verbose=True)
```

### Step-by-step Guide
1. Start with [Catalyst — A PyTorch Framework for Accelerated Deep Learning R&D](https://medium.com/pytorch/catalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link&sk=885b4409aecab505db0a63b06f19dcef) introduction.
1. Try [notebook tutorials](#minimal-examples) or check [minimal examples](#minimal-examples) for a first deep dive.
1. Read [blog posts](https://catalyst-team.com/post/) with use-cases and guides.
1. Learn machine learning with our ["Deep Learning with Catalyst" course](https://catalyst-team.com/#course).
1. And finally, [join our slack](https://join.slack.com/t/catalyst-team-core/shared_invite/zt-d9miirnn-z86oKDzFMKlMG4fgFdZafw) if you want to chat with the team and contributors.


## Table of Contents
- [Getting started](#getting-started)
  - [Step-by-step Guide](#step-by-step-guide)
- [Table of Contents](#table-of-contents)
- [Overview](#overview)
  - [Installation](#installation)
  - [Documentation](#documentation)
  - [Minimal Examples](#minimal-examples)
  - [Tests](#tests)
  - [Blog Posts](#blog-posts)
  - [Talks](#talks)
- [Community](#community)
  - [Contribution Guide](#contribution-guide)
  - [User Feedback](#user-feedback)
  - [Acknowledgments](#acknowledgments)
  - [Trusted by](#trusted-by)
  - [Citation](#citation)


## Overview
Catalyst helps you implement compact
but full-featured Deep Learning pipelines with just a few lines of code.
You get a training loop with metrics, early-stopping, model checkpointing,
and other features without the boilerplate.
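For instance, early stopping is just another callback, and checkpointing is enabled by passing `logdir`. Below is a minimal sketch on a toy regression task, assuming the 22.x API for `dl.EarlyStoppingCallback`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# toy regression data, mirroring the minimal examples below
X, y = torch.rand(128, 10), torch.rand(128, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=32)

model = torch.nn.Linear(10, 1)
runner = dl.SupervisedRunner()
runner.train(
    model=model,
    criterion=torch.nn.MSELoss(),
    optimizer=torch.optim.Adam(model.parameters()),
    loaders={"train": loader, "valid": loader},
    num_epochs=100,
    logdir="./logs",  # checkpoints and logs are written here automatically
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    callbacks=[
        # stop the run once "loss" on the "valid" loader stops improving
        dl.EarlyStoppingCallback(
            patience=3, loader_key="valid", metric_key="loss", minimize=True
        ),
    ],
)
```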


### Installation

Generic installation:
```bash
pip install -U catalyst
```

<details>
<summary>Specialized versions (extra requirements may apply)</summary>
<p>

```bash
pip install catalyst[ml]         # installs ML-based Catalyst
pip install catalyst[cv]         # installs CV-based Catalyst
# master version installation
pip install git+https://github.com/catalyst-team/catalyst@master --upgrade
# all available extensions are listed here:
# https://github.com/catalyst-team/catalyst/blob/master/setup.py
```
</p>
</details>

Catalyst is compatible with Python 3.7+ and PyTorch 1.4+. <br/>
Tested on Ubuntu 16.04/18.04/20.04, macOS 10.15, Windows 10, and Windows Subsystem for Linux.
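To verify the installation, a quick smoke test (a sketch; `catalyst.__version__` and `utils.get_device` are assumed to be exported as in recent releases):

```python
import catalyst
from catalyst import utils

print(catalyst.__version__)  # the installed version string
print(utils.get_device())    # "cuda" if a GPU is visible, otherwise "cpu"
```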

### Documentation
- [master](https://catalyst-team.github.io/catalyst/)
- [22.02](https://catalyst-team.github.io/catalyst/v22.02/index.html)

- <details>
  <summary>2021 edition</summary>
  <p>

    - [21.12](https://catalyst-team.github.io/catalyst/v21.12/index.html)
    - [21.11](https://catalyst-team.github.io/catalyst/v21.11/index.html)
    - [21.10](https://catalyst-team.github.io/catalyst/v21.10/index.html)
    - [21.09](https://catalyst-team.github.io/catalyst/v21.09/index.html)
    - [21.08](https://catalyst-team.github.io/catalyst/v21.08/index.html)
    - [21.07](https://catalyst-team.github.io/catalyst/v21.07/index.html)
    - [21.06](https://catalyst-team.github.io/catalyst/v21.06/index.html)
    - [21.05](https://catalyst-team.github.io/catalyst/v21.05/index.html) ([Catalyst — A PyTorch Framework for Accelerated Deep Learning R&D](https://medium.com/pytorch/catalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link&sk=885b4409aecab505db0a63b06f19dcef))
    - [21.04/21.04.1](https://catalyst-team.github.io/catalyst/v21.04/index.html), [21.04.2](https://catalyst-team.github.io/catalyst/v21.04.2/index.html)
    - [21.03](https://catalyst-team.github.io/catalyst/v21.03/index.html), [21.03.1/21.03.2](https://catalyst-team.github.io/catalyst/v21.03.1/index.html)

  </p>
  </details>
- <details>
  <summary>2020 edition</summary>
  <p>

    - [20.12](https://catalyst-team.github.io/catalyst/v20.12/index.html)
    - [20.11](https://catalyst-team.github.io/catalyst/v20.11/index.html)
    - [20.10](https://catalyst-team.github.io/catalyst/v20.10/index.html)
    - [20.09](https://catalyst-team.github.io/catalyst/v20.09/index.html)
    - [20.08.2](https://catalyst-team.github.io/catalyst/v20.08.2/index.html)
    - [20.07](https://catalyst-team.github.io/catalyst/v20.07/index.html) ([dev blog: 20.07 release](https://medium.com/pytorch/catalyst-dev-blog-20-07-release-fb489cd23e14?source=friends_link&sk=7ab92169658fe9a9e1c44068f28cc36c))
    - [20.06](https://catalyst-team.github.io/catalyst/v20.06/index.html)
    - [20.05](https://catalyst-team.github.io/catalyst/v20.05/index.html), [20.05.1](https://catalyst-team.github.io/catalyst/v20.05.1/index.html)
    - [20.04](https://catalyst-team.github.io/catalyst/v20.04/index.html), [20.04.1](https://catalyst-team.github.io/catalyst/v20.04.1/index.html), [20.04.2](https://catalyst-team.github.io/catalyst/v20.04.2/index.html)

  </p>
  </details>


### Minimal Examples

- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customizing_what_happens_in_train.ipynb) Introduction tutorial "[Customizing what happens in `train`](./examples/notebooks/customizing_what_happens_in_train.ipynb)"
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customization_tutorial.ipynb) Demo with [customization examples](./examples/notebooks/customization_tutorial.ipynb)
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/reinforcement_learning.ipynb) [Reinforcement Learning with Catalyst](./examples/notebooks/reinforcement_learning.ipynb)
- [And more](./examples/)

<details>
<summary>CustomRunner – PyTorch for-loop decomposition</summary>
<p>

```python
import os
from torch import nn, optim
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = optim.Adam(model.parameters(), lr=0.02)

train_data = MNIST(os.getcwd(), train=True)
valid_data = MNIST(os.getcwd(), train=False)
loaders = {
    "train": DataLoader(train_data, batch_size=32),
    "valid": DataLoader(valid_data, batch_size=32),
}

class CustomRunner(dl.Runner):
    def predict_batch(self, batch):
        # model inference step
        return self.model(batch[0].to(self.engine.device))

    def on_loader_start(self, runner):
        super().on_loader_start(runner)
        self.meters = {
            key: metrics.AdditiveMetric(compute_on_call=False)
            for key in ["loss", "accuracy01", "accuracy03"]
        }

    def handle_batch(self, batch):
        # model train/valid step
        # unpack the batch
        x, y = batch
        # run model forward pass
        logits = self.model(x)
        # compute the loss
        loss = F.cross_entropy(logits, y)
        # compute the metrics
        accuracy01, accuracy03 = metrics.accuracy(logits, y, topk=(1, 3))
        # log metrics
        self.batch_metrics.update(
            {"loss": loss, "accuracy01": accuracy01, "accuracy03": accuracy03}
        )
        for key in ["loss", "accuracy01", "accuracy03"]:
            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)
        # run model backward pass
        if self.is_train_loader:
            self.engine.backward(loss)
            self.optimizer.step()
            self.optimizer.zero_grad()

    def on_loader_end(self, runner):
        for key in ["loss", "accuracy01", "accuracy03"]:
            self.loader_metrics[key] = self.meters[key].compute()[0]
        super().on_loader_end(runner)

runner = CustomRunner()
# model training
runner.train(
    model=model,
    optimizer=optimizer,
    loaders=loaders,
    logdir="./logs",
    num_epochs=5,
    verbose=True,
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
)
# model inference
for logits in runner.predict_loader(loader=loaders["valid"]):
    assert logits.detach().cpu().numpy().shape[-1] == 10
```
</p>
</details>

<details>
<summary>ML - linear regression</summary>
<p>

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# data
num_samples, num_features = int(1e4), int(1e1)
X, y = torch.rand(num_samples, num_features), torch.rand(num_samples)
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])

# model training
runner = dl.SupervisedRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir="./logdir",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    num_epochs=8,
    verbose=True,
)
```
</p>
</details>


<details>
<summary>ML - multiclass classification</summary>
<p>

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples,) * num_classes).to(torch.int64)

# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])

# model training
runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir="./logdir",
    num_epochs=3,
    valid_loader="valid",
    valid_metric="accuracy03",
    minimize_valid_metric=False,
    verbose=True,
    callbacks=[
        dl.AccuracyCallback(input_key="logits", target_key="targets", num_classes=num_classes),
        # uncomment for extra metrics:
        # dl.PrecisionRecallF1SupportCallback(
        #     input_key="logits", target_key="targets", num_classes=num_classes
        # ),
        # dl.AUCCallback(input_key="logits", target_key="targets"),
        # catalyst[ml] required ``pip install catalyst[ml]``
        # dl.ConfusionMatrixCallback(
        #     input_key="logits", target_key="targets", num_classes=num_classes
        # ),
    ],
)
```
</p>
</details>


<details>
<summary>ML - multilabel classification</summary>
<p>

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples, num_classes) > 0.5).to(torch.float32)

# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])

# model training
runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir="./logdir",
    num_epochs=3,
    valid_loader="valid",
    valid_metric="accuracy01",
    minimize_valid_metric=False,
    verbose=True,
    callbacks=[
        dl.BatchTransformCallback(
            transform=torch.sigmoid,
            scope="on_batch_end",
            input_key="logits",
            output_key="scores"
        ),
        dl.AUCCallback(input_key="scores", target_key="targets"),
        # uncomment for extra metrics:
        # dl.MultilabelAccuracyCallback(input_key="scores", target_key="targets", threshold=0.5),
        # dl.MultilabelPrecisionRecallF1SupportCallback(
        #     input_key="scores", target_key="targets", threshold=0.5
        # ),
    ]
)
```
</p>
</details>


<details>
<summary>ML - multihead classification</summary>
<p>

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# sample data
num_samples, num_features, num_classes1, num_classes2 = int(1e4), int(1e1), 4, 10
X = torch.rand(num_samples, num_features)
y1 = (torch.rand(num_samples,) * num_classes1).to(torch.int64)
y2 = (torch.rand(num_samples,) * num_classes2).to(torch.int64)

# pytorch loaders
dataset = TensorDataset(X, y1, y2)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

class CustomModule(nn.Module):
    def __init__(self, in_features: int, out_features1: int, out_features2: int):
        super().__init__()
        self.shared = nn.Linear(in_features, 128)
        self.head1 = nn.Linear(128, out_features1)
        self.head2 = nn.Linear(128, out_features2)

    def forward(self, x):
        x = self.shared(x)
        y1 = self.head1(x)
        y2 = self.head2(x)
        return y1, y2

# model, criterion, optimizer, scheduler
model = CustomModule(num_features, num_classes1, num_classes2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, [2])

class CustomRunner(dl.Runner):
    def handle_batch(self, batch):
        x, y1, y2 = batch
        y1_hat, y2_hat = self.model(x)
        self.batch = {
            "features": x,
            "logits1": y1_hat,
            "logits2": y2_hat,
            "targets1": y1,
            "targets2": y2,
        }

# model training
runner = CustomRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    num_epochs=3,
    verbose=True,
    callbacks=[
        dl.CriterionCallback(metric_key="loss1", input_key="logits1", target_key="targets1"),
        dl.CriterionCallback(metric_key="loss2", input_key="logits2", target_key="targets2"),
        dl.MetricAggregationCallback(metric_key="loss", metrics=["loss1", "loss2"], mode="mean"),
        dl.BackwardCallback(metric_key="loss"),
        dl.OptimizerCallback(metric_key="loss"),
        dl.SchedulerCallback(),
        dl.AccuracyCallback(
            input_key="logits1", target_key="targets1", num_classes=num_classes1, prefix="one_"
        ),
        dl.AccuracyCallback(
            input_key="logits2", target_key="targets2", num_classes=num_classes2, prefix="two_"
        ),
        # catalyst[ml] required ``pip install catalyst[ml]``
        # dl.ConfusionMatrixCallback(
        #     input_key="logits1", target_key="targets1", num_classes=num_classes1, prefix="one_cm"
        # ),
        # dl.ConfusionMatrixCallback(
        #     input_key="logits2", target_key="targets2", num_classes=num_classes2, prefix="two_cm"
        # ),
        dl.CheckpointCallback(
            logdir="./logs/one",
            loader_key="valid", metric_key="one_accuracy01", minimize=False, topk=1
        ),
        dl.CheckpointCallback(
            logdir="./logs/two",
            loader_key="valid", metric_key="two_accuracy03", minimize=False, topk=3
        ),
    ],
    loggers={"console": dl.ConsoleLogger(), "tb": dl.TensorboardLogger("./logs/tb")},
)
```
</p>
</details>


<details>
<summary>ML – RecSys</summary>
<p>

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# sample data
num_users, num_features, num_items = int(1e4), int(1e1), 10
X = torch.rand(num_users, num_features)
y = (torch.rand(num_users, num_items) > 0.5).to(torch.float32)

# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_items)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])

# model training
runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    num_epochs=3,
    verbose=True,
    callbacks=[
        dl.BatchTransformCallback(
            transform=torch.sigmoid,
            scope="on_batch_end",
            input_key="logits",
            output_key="scores"
        ),
        dl.CriterionCallback(input_key="logits", target_key="targets", metric_key="loss"),
        # uncomment for extra metrics:
        # dl.AUCCallback(input_key="scores", target_key="targets"),
        # dl.HitrateCallback(input_key="scores", target_key="targets", topk=(1, 3, 5)),
        # dl.MRRCallback(input_key="scores", target_key="targets", topk=(1, 3, 5)),
        # dl.MAPCallback(input_key="scores", target_key="targets", topk=(1, 3, 5)),
        # dl.NDCGCallback(input_key="scores", target_key="targets", topk=(1, 3, 5)),
        dl.BackwardCallback(metric_key="loss"),
        dl.OptimizerCallback(metric_key="loss"),
        dl.SchedulerCallback(),
        dl.CheckpointCallback(
            logdir="./logs", loader_key="valid", metric_key="loss", minimize=True
        ),
    ]
)
```
</p>
</details>


<details>
<summary>CV - MNIST classification</summary>
<p>

```python
import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.02)

train_data = MNIST(os.getcwd(), train=True)
valid_data = MNIST(os.getcwd(), train=False)
loaders = {
    "train": DataLoader(train_data, batch_size=32),
    "valid": DataLoader(valid_data, batch_size=32),
}

runner = dl.SupervisedRunner()
# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    logdir="./logs",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    verbose=True,
# uncomment for extra metrics:
#     callbacks=[
#         dl.AccuracyCallback(input_key="logits", target_key="targets", num_classes=10),
#         dl.PrecisionRecallF1SupportCallback(
#             input_key="logits", target_key="targets", num_classes=10
#         ),
#         dl.AUCCallback(input_key="logits", target_key="targets"),
#         # catalyst[ml] required ``pip install catalyst[ml]``
#         dl.ConfusionMatrixCallback(
#             input_key="logits", target_key="targets", num_classes=num_classes
#         ),
#     ]
)
```
</p>
</details>


<details>
<summary>CV - MNIST segmentation</summary>
<p>

```python
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.losses import IoULoss


model = nn.Sequential(
    nn.Conv2d(1, 1, 3, 1, 1), nn.ReLU(),
    nn.Conv2d(1, 1, 3, 1, 1), nn.Sigmoid(),
)
criterion = IoULoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)

train_data = MNIST(os.getcwd(), train=True)
valid_data = MNIST(os.getcwd(), train=False)
loaders = {
    "train": DataLoader(train_data, batch_size=32),
    "valid": DataLoader(valid_data, batch_size=32),
}

class CustomRunner(dl.SupervisedRunner):
    def handle_batch(self, batch):
        x = batch[self._input_key]
        x_noise = (x + torch.rand_like(x)).clamp_(0, 1)
        x_ = self.model(x_noise)
        self.batch = {self._input_key: x, self._output_key: x_, self._target_key: x}

runner = CustomRunner(
    input_key="features", output_key="scores", target_key="targets", loss_key="loss"
)
# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    callbacks=[
        dl.IOUCallback(input_key="scores", target_key="targets"),
        dl.DiceCallback(input_key="scores", target_key="targets"),
        dl.TrevskyCallback(input_key="scores", target_key="targets", alpha=0.2),
    ],
    logdir="./logdir",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    verbose=True,
)
```
</p>
</details>


<details>
<summary>CV - MNIST metric learning</summary>
<p>

```python
import os
from torch.optim import Adam
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.data import HardTripletsSampler
from catalyst.contrib.datasets import MnistMLDataset, MnistQGDataset
from catalyst.contrib.losses import TripletMarginLossWithSampler
from catalyst.contrib.models import MnistSimpleNet
from catalyst.data.sampler import BatchBalanceClassSampler


# 1. train and valid loaders
train_dataset = MnistMLDataset(root=os.getcwd())
sampler = BatchBalanceClassSampler(
    labels=train_dataset.get_labels(), num_classes=5, num_samples=10, num_batches=10
)
train_loader = DataLoader(dataset=train_dataset, batch_sampler=sampler)

valid_dataset = MnistQGDataset(root=os.getcwd(), gallery_fraq=0.2)
valid_loader = DataLoader(dataset=valid_dataset, batch_size=1024)

# 2. model and optimizer
model = MnistSimpleNet(out_features=16)
optimizer = Adam(model.parameters(), lr=0.001)

# 3. criterion with triplets sampling
sampler_inbatch = HardTripletsSampler(norm_required=False)
criterion = TripletMarginLossWithSampler(margin=0.5, sampler_inbatch=sampler_inbatch)

# 4. training with catalyst Runner
class CustomRunner(dl.SupervisedRunner):
    def handle_batch(self, batch) -> None:
        if self.is_train_loader:
            images, targets = batch["features"].float(), batch["targets"].long()
            features = self.model(images)
            self.batch = {"embeddings": features, "targets": targets,}
        else:
            images, targets, is_query = \
                batch["features"].float(), batch["targets"].long(), batch["is_query"].bool()
            features = self.model(images)
            self.batch = {"embeddings": features, "targets": targets, "is_query": is_query}

callbacks = [
    dl.ControlFlowCallbackWrapper(
        dl.CriterionCallback(input_key="embeddings", target_key="targets", metric_key="loss"),
        loaders="train",
    ),
    dl.ControlFlowCallbackWrapper(
        dl.CMCScoreCallback(
            embeddings_key="embeddings",
            labels_key="targets",
            is_query_key="is_query",
            topk=[1],
        ),
        loaders="valid",
    ),
    dl.PeriodicLoaderCallback(
        valid_loader_key="valid", valid_metric_key="cmc01", minimize=False, valid=2
    ),
]

runner = CustomRunner(input_key="features", output_key="embeddings")
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    callbacks=callbacks,
    loaders={"train": train_loader, "valid": valid_loader},
    verbose=False,
    logdir="./logs",
    valid_loader="valid",
    valid_metric="cmc01",
    minimize_valid_metric=False,
    num_epochs=10,
)
```
</p>
</details>


<details>
<summary>CV - MNIST GAN</summary>
<p>

```python
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.layers import GlobalMaxPool2d, Lambda

latent_dim = 128
generator = nn.Sequential(
    # generate 128 * 7 * 7 coefficients, later reshaped into a (128, 7, 7) feature map
    nn.Linear(128, 128 * 7 * 7),
    nn.LeakyReLU(0.2, inplace=True),
    Lambda(lambda x: x.view(x.size(0), 128, 7, 7)),
    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 1, (7, 7), padding=3),
    nn.Sigmoid(),
)
discriminator = nn.Sequential(
    nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    GlobalMaxPool2d(),
    nn.Flatten(),
    nn.Linear(128, 1),
)

model = nn.ModuleDict({"generator": generator, "discriminator": discriminator})
criterion = {"generator": nn.BCEWithLogitsLoss(), "discriminator": nn.BCEWithLogitsLoss()}
optimizer = {
    "generator": torch.optim.Adam(generator.parameters(), lr=0.0003, betas=(0.5, 0.999)),
    "discriminator": torch.optim.Adam(discriminator.parameters(), lr=0.0003, betas=(0.5, 0.999)),
}
train_data = MNIST(os.getcwd(), train=False)
loaders = {"train": DataLoader(train_data, batch_size=32)}

class CustomRunner(dl.Runner):
    def predict_batch(self, batch):
        batch_size = 1
        # Sample random points in the latent space
        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)
        # Decode them to fake images
        generated_images = self.model["generator"](random_latent_vectors).detach()
        return generated_images

    def handle_batch(self, batch):
        real_images, _ = batch
        batch_size = real_images.shape[0]

        # Sample random points in the latent space
        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)

        # Decode them to fake images
        generated_images = self.model["generator"](random_latent_vectors).detach()
        # Combine them with real images
        combined_images = torch.cat([generated_images, real_images])

        # Assemble labels discriminating real from fake images
        labels = \
            torch.cat([torch.ones((batch_size, 1)), torch.zeros((batch_size, 1))]).to(self.engine.device)
        # Add random noise to the labels - important trick!
        labels += 0.05 * torch.rand(labels.shape).to(self.engine.device)

        # Discriminator forward
        combined_predictions = self.model["discriminator"](combined_images)

        # Sample random points in the latent space
        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)
        # Assemble labels that say "all real images"
        misleading_labels = torch.zeros((batch_size, 1)).to(self.engine.device)

        # Generator forward
        generated_images = self.model["generator"](random_latent_vectors)
        generated_predictions = self.model["discriminator"](generated_images)

        self.batch = {
            "combined_predictions": combined_predictions,
            "labels": labels,
            "generated_predictions": generated_predictions,
            "misleading_labels": misleading_labels,
        }


runner = CustomRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    callbacks=[
        dl.CriterionCallback(
            input_key="combined_predictions",
            target_key="labels",
            metric_key="loss_discriminator",
            criterion_key="discriminator",
        ),
        dl.BackwardCallback(metric_key="loss_discriminator"),
        dl.OptimizerCallback(
            optimizer_key="discriminator",
            metric_key="loss_discriminator",
        ),
        dl.CriterionCallback(
            input_key="generated_predictions",
            target_key="misleading_labels",
            metric_key="loss_generator",
            criterion_key="generator",
        ),
        dl.BackwardCallback(metric_key="loss_generator"),
        dl.OptimizerCallback(
            optimizer_key="generator",
            metric_key="loss_generator",
        ),
    ],
    valid_loader="train",
    valid_metric="loss_generator",
    minimize_valid_metric=True,
    num_epochs=20,
    verbose=True,
    logdir="./logs_gan",
)

# visualization (matplotlib required):
# import matplotlib.pyplot as plt
# %matplotlib inline
# plt.imshow(runner.predict_batch(None)[0, 0].cpu().numpy())
```
</p>
</details>


<details>
<summary>CV - MNIST VAE</summary>
<p>

```python
import os
import torch
from torch import nn, optim
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.datasets import MNIST

LOG_SCALE_MAX = 2
LOG_SCALE_MIN = -10

def normal_sample(loc, log_scale):
    scale = torch.exp(0.5 * log_scale)
    return loc + scale * torch.randn_like(scale)

class VAE(nn.Module):
    def __init__(self, in_features, hid_features):
        super().__init__()
        self.hid_features = hid_features
        self.encoder = nn.Linear(in_features, hid_features * 2)
        self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())

    def forward(self, x, deterministic=False):
        z = self.encoder(x)
        bs, z_dim = z.shape

        loc, log_scale = z[:, : z_dim // 2], z[:, z_dim // 2 :]
        log_scale = torch.clamp(log_scale, LOG_SCALE_MIN, LOG_SCALE_MAX)

        z_ = loc if deterministic else normal_sample(loc, log_scale)
        z_ = z_.view(bs, -1)
        x_ = self.decoder(z_)

        return x_, loc, log_scale

class CustomRunner(dl.IRunner):
    def __init__(self, hid_features, logdir, engine):
        super().__init__()
        self.hid_features = hid_features
        self._logdir = logdir
        self._engine = engine

    def get_engine(self):
        return self._engine

    def get_loggers(self):
        return {
            "console": dl.ConsoleLogger(),
            "csv": dl.CSVLogger(logdir=self._logdir),
            "tensorboard": dl.TensorboardLogger(logdir=self._logdir),
        }

    @property
    def num_epochs(self) -> int:
        return 1

    def get_loaders(self):
        loaders = {
            "train": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),
            "valid": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),
        }
        return loaders

    def get_model(self):
        model = self.model if self.model is not None else VAE(28 * 28, self.hid_features)
        return model

    def get_optimizer(self, model):
        return optim.Adam(model.parameters(), lr=0.02)

    def get_callbacks(self):
        return {
            "backward": dl.BackwardCallback(metric_key="loss"),
            "optimizer": dl.OptimizerCallback(metric_key="loss"),
            "checkpoint": dl.CheckpointCallback(
                self._logdir,
                loader_key="valid",
                metric_key="loss",
                minimize=True,
                topk=3,
            ),
        }

    def on_loader_start(self, runner):
        super().on_loader_start(runner)
        self.meters = {
            key: metrics.AdditiveMetric(compute_on_call=False)
            for key in ["loss_ae", "loss_kld", "loss"]
        }

    def handle_batch(self, batch):
        x, _ = batch
        x = x.view(x.size(0), -1)
        x_, loc, log_scale = self.model(x, deterministic=not self.is_train_loader)

        loss_ae = F.mse_loss(x_, x)
        loss_kld = (
            -0.5 * torch.sum(1 + log_scale - loc.pow(2) - log_scale.exp(), dim=1)
        ).mean()
        loss = loss_ae + loss_kld * 0.01

        self.batch_metrics = {"loss_ae": loss_ae, "loss_kld": loss_kld, "loss": loss}
        for key in ["loss_ae", "loss_kld", "loss"]:
            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)

    def on_loader_end(self, runner):
        for key in ["loss_ae", "loss_kld", "loss"]:
            self.loader_metrics[key] = self.meters[key].compute()[0]
        super().on_loader_end(runner)

    def predict_batch(self, batch):
        random_latent_vectors = torch.randn(1, self.hid_features).to(self.engine.device)
        generated_images = self.model.decoder(random_latent_vectors).detach()
        return generated_images

runner = CustomRunner(128, "./logs", dl.CPUEngine())
runner.run()
# visualization (matplotlib required):
# import matplotlib.pyplot as plt
# %matplotlib inline
# plt.imshow(runner.predict_batch(None)[0].cpu().numpy().reshape(28, 28))
```
</p>
</details>


<details>
<summary>AutoML - hyperparameters optimization with Optuna</summary>
<p>

```python
import os
import optuna
import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST


def objective(trial):
    lr = trial.suggest_loguniform("lr", 1e-3, 1e-1)
    num_hidden = int(trial.suggest_loguniform("num_hidden", 32, 128))

    train_data = MNIST(os.getcwd(), train=True)
    valid_data = MNIST(os.getcwd(), train=False)
    loaders = {
        "train": DataLoader(train_data, batch_size=32),
        "valid": DataLoader(valid_data, batch_size=32),
    }
    model = nn.Sequential(
        nn.Flatten(), nn.Linear(784, num_hidden), nn.ReLU(), nn.Linear(num_hidden, 10)
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    runner = dl.SupervisedRunner(input_key="features", output_key="logits", target_key="targets")
    runner.train(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        loaders=loaders,
        callbacks={
            "accuracy": dl.AccuracyCallback(
                input_key="logits", target_key="targets", num_classes=10
            ),
            # catalyst[optuna] required ``pip install catalyst[optuna]``
            "optuna": dl.OptunaPruningCallback(
                loader_key="valid", metric_key="accuracy01", minimize=False, trial=trial
            ),
        },
        num_epochs=3,
    )
    score = trial.best_score
    return score

study = optuna.create_study(
    direction="maximize",
    pruner=optuna.pruners.MedianPruner(
        n_startup_trials=1, n_warmup_steps=0, interval_steps=1
    ),
)
study.optimize(objective, n_trials=3, timeout=300)
print(study.best_value, study.best_params)
```
</p>
</details>

<details>
<summary>Config API - minimal example</summary>
<p>

```yaml title="example.yaml"
runner:
  _target_: catalyst.runners.SupervisedRunner
  model:
    _var_: model
    _target_: torch.nn.Sequential
    args:
      - _target_: torch.nn.Flatten
      - _target_: torch.nn.Linear
        in_features: 784  # 28 * 28
        out_features: 10
  input_key: features
  output_key: &output_key logits
  target_key: &target_key targets
  loss_key: &loss_key loss

run:
  # ≈ stage 1
  - _call_: train  # runner.train(...)

    criterion:
      _target_: torch.nn.CrossEntropyLoss

    optimizer:
      _target_: torch.optim.Adam
      params:  # model.parameters()
        _var_: model.parameters
      lr: 0.02

    loaders:
      train:
        _target_: torch.utils.data.DataLoader
        dataset:
          _target_: catalyst.contrib.datasets.MNIST
          root: data
          train: y
        batch_size: 32

      &valid_loader_key valid:
        &valid_loader
        _target_: torch.utils.data.DataLoader
        dataset:
          _target_: catalyst.contrib.datasets.MNIST
          root: data
          train: n
        batch_size: 32

    callbacks:
      - &accuracy_metric
        _target_: catalyst.callbacks.AccuracyCallback
        input_key: *output_key
        target_key: *target_key
        topk: [1,3,5]
      - _target_: catalyst.callbacks.PrecisionRecallF1SupportCallback
        input_key: *output_key
        target_key: *target_key

    num_epochs: 1
    logdir: logs
    valid_loader: *valid_loader_key
    valid_metric: *loss_key
    minimize_valid_metric: y
    verbose: y

  # ≈ stage 2
  - _call_: evaluate_loader  # runner.evaluate_loader(...)
    loader: *valid_loader
    callbacks:
      - *accuracy_metric

```

```sh
catalyst-run --config example.yaml
```
</p>
</details>

### Tests
All Catalyst code, features, and pipelines [are fully tested](./tests).
We also have our own [catalyst-codestyle](https://github.com/catalyst-team/codestyle) and a corresponding pre-commit hook.
During testing, we train a variety of different models: image classification,
image segmentation, text classification, GANs, and much more.
We then compare their convergence metrics in order to verify
the correctness of the training procedure and its reproducibility.
As a result, Catalyst provides fully tested and reproducible
best practices for your deep learning research and development.
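As a rough illustration of the idea (a hypothetical sketch, not the project's actual test code; it assumes `runner.loader_metrics` holds the last epoch's metrics for the final loader):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

def test_linear_regression_converges():
    # deterministic toy problem: y = 2x + 1 plus small noise
    torch.manual_seed(42)
    X = torch.rand(256, 1)
    y = 2 * X + 1 + 0.01 * torch.randn(256, 1)
    loader = DataLoader(TensorDataset(X, y), batch_size=32)

    model = torch.nn.Linear(1, 1)
    runner = dl.SupervisedRunner()
    runner.train(
        model=model,
        criterion=torch.nn.MSELoss(),
        optimizer=torch.optim.Adam(model.parameters(), lr=0.1),
        loaders={"train": loader, "valid": loader},
        num_epochs=10,
        valid_loader="valid",
        valid_metric="loss",
        minimize_valid_metric=True,
    )
    # training is considered correct if the loss dropped close to the noise floor
    assert runner.loader_metrics["loss"] < 1e-2
```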

### [Blog Posts](https://catalyst-team.com/post/)

### [Talks](https://catalyst-team.com/talk/)


## Community

### Accelerated with Catalyst

<details>
<summary>Research Papers</summary>
<p>

- [Hierarchical Attention for Sentiment Classification with Visualization](https://github.com/neuromation/ml-recipe-hier-attention)
- [Pediatric Bone Age Assessment](https://github.com/neuromation/ml-recipe-bone-age)
- [Implementation of the paper "Tell Me Where to Look: Guided Attention Inference Network"](https://github.com/ngxbac/GAIN)
- [Implementation of the paper "Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks"](https://github.com/yukkyo/PyTorch-FilterResponseNormalizationLayer)
- [Implementation of the paper "Utterance-level Aggregation For Speaker Recognition In The Wild"](https://github.com/ptJexio/Speaker-Recognition)
- [Implementation of the paper "Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation"](https://github.com/vitrioil/Speech-Separation)
- [Implementation of the paper "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks"](https://github.com/leverxgroup/esrgan)

</p>
</details>

<details>
<summary>Blog Posts</summary>
<p>

- [Solving the Cocktail Party Problem using PyTorch](https://medium.com/pytorch/addressing-the-cocktail-party-problem-using-pytorch-305fb74560ea)
- [Beyond fashion: Deep Learning with Catalyst (Config API)](https://evilmartians.com/chronicles/beyond-fashion-deep-learning-with-catalyst)
- [Tutorial from Notebook API to Config API (RU)](https://github.com/Bekovmi/Segmentation_tutorial)

</p>
</details>

<details>
<summary>Competitions</summary>
<p>

- [Kaggle Quick, Draw! Doodle Recognition Challenge](https://github.com/ngxbac/Kaggle-QuickDraw) - 11th place
- [Catalyst.RL - NeurIPS 2018: AI for Prosthetics Challenge](https://github.com/Scitator/neurips-18-prosthetics-challenge) – 3rd place
- [Kaggle Google Landmark 2019](https://github.com/ngxbac/Kaggle-Google-Landmark-2019) - 30th place
- [iMet Collection 2019 - FGVC6](https://github.com/ngxbac/Kaggle-iMet) - 24th place
- [ID R&D Anti-spoofing Challenge](https://github.com/bagxi/idrnd-anti-spoofing-challenge-solution) - 14th place
- [NeurIPS 2019: Recursion Cellular Image Classification](https://github.com/ngxbac/Kaggle-Recursion-Cellular) - 4th place
- [MICCAI 2019: Automatic Structure Segmentation for Radiotherapy Planning Challenge 2019](https://github.com/ngxbac/StructSeg2019)
  * 3rd place solution for `Task 3: Organ-at-risk segmentation from chest CT scans`
  * and 4th place solution for `Task 4: Gross Target Volume segmentation of lung cancer`
- [Kaggle Severstal steel defect detection](https://github.com/bamps53/kaggle-severstal) - 5th place
- [RSNA Intracranial Hemorrhage Detection](https://github.com/ngxbac/Kaggle-RSNA) - 5th place
- [APTOS 2019 Blindness Detection](https://github.com/BloodAxe/Kaggle-2019-Blindness-Detection) – 7th place
- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https://github.com/Scitator/run-skeleton-run-in-3d) – 2nd place
- [xView2 Damage Assessment Challenge](https://github.com/BloodAxe/xView2-Solution) - 3rd place


</p>
</details>

<details>
<summary>Toolkits</summary>
<p>

- [Catalyst.RL](https://github.com/Scitator/catalyst-rl-framework) – A Distributed Framework for Reproducible RL Research by [Scitator](https://github.com/Scitator)
- [Catalyst.Classification](https://github.com/catalyst-team/classification) - Comprehensive classification pipeline with Pseudo-Labeling by [Bagxi](https://github.com/bagxi) and [Pdanilov](https://github.com/pdanilov)
- [Catalyst.Segmentation](https://github.com/catalyst-team/segmentation) - Segmentation pipelines - binary, semantic and instance, by [Bagxi](https://github.com/bagxi)
- [Catalyst.Detection](https://github.com/catalyst-team/detection) - Anchor-free detection pipeline by [Avi2011class](https://github.com/Avi2011class) and [TezRomacH](https://github.com/TezRomacH)
- [Catalyst.GAN](https://github.com/catalyst-team/gan) - Reproducible GANs pipelines by [Asmekal](https://github.com/asmekal)
- [Catalyst.Neuro](https://github.com/catalyst-team/neuro) - Brain image analysis project, in collaboration with [TReNDS Center](https://trendscenter.org)
- [MLComp](https://github.com/catalyst-team/mlcomp) – Distributed DAG framework for machine learning with UI by [Lightforever](https://github.com/lightforever)
- [Pytorch toolbelt](https://github.com/BloodAxe/pytorch-toolbelt) - PyTorch extensions for fast R&D prototyping and Kaggle farming by [BloodAxe](https://github.com/BloodAxe)
- [Helper functions](https://github.com/ternaus/iglovikov_helper_functions) - An assorted collection of helper functions by [Ternaus](https://github.com/ternaus)
- [BERT Distillation with Catalyst](https://github.com/elephantmipt/bert-distillation) by [elephantmipt](https://github.com/elephantmipt)

</p>
</details>


<details>
<summary>Other</summary>
<p>

- [CamVid Segmentation Example](https://github.com/BloodAxe/Catalyst-CamVid-Segmentation-Example) - Example of semantic segmentation for CamVid dataset
- [Notebook API tutorial for segmentation in Understanding Clouds from Satellite Images Competition](https://www.kaggle.com/artgor/segmentation-in-pytorch-using-convenient-tools/)
- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https://github.com/Scitator/learning-to-move-starter-kit) – starter kit
- [Catalyst.RL - NeurIPS 2019: Animal-AI Olympics](https://github.com/Scitator/animal-olympics-starter-kit) - starter kit
- [Inria Segmentation Example](https://github.com/BloodAxe/Catalyst-Inria-Segmentation-Example) - An example of training a segmentation model for the Inria Satellite Segmentation Challenge
- [iglovikov_segmentation](https://github.com/ternaus/iglovikov_segmentation) - Semantic segmentation pipeline using Catalyst
- [Logging Catalyst Runs to Comet](https://colab.research.google.com/drive/1TaG27HcMh2jyRKBGsqRXLiGUfsHVyCq6?usp=sharing) - An example of how to log metrics, hyperparameters and more from Catalyst runs to [Comet](https://www.comet.ml/site/data-scientists/)

</p>
</details>


See other projects at [the GitHub dependency graph](https://github.com/catalyst-team/catalyst/network/dependents).

If your project implements a paper, a notable use-case or tutorial, or a Kaggle competition solution, or if your code simply presents interesting results and uses Catalyst, we would be happy to add it to the list above!
Do not hesitate to send us a PR with a brief description of the project, similar to those above.

### Contribution Guide

We appreciate all contributions.
If you are planning to contribute back bug-fixes, there is no need to run that by us; just send a PR.
If you plan to contribute new features, new utility functions, or extensions,
please open an issue first and discuss it with us.

- Please see the [Contribution Guide](CONTRIBUTING.md) for more information.
- By participating in this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md).


### User Feedback

We've created `feedback@catalyst-team.com` as an additional channel for user feedback.

- If you like the project and want to thank us, this is the right place.
- If you would like to start a collaboration between your team and Catalyst team to improve Deep Learning R&D, you are always welcome.
- If you don't like GitHub Issues and prefer email, feel free to email us.
- Finally, if you do not like something, please share it with us, and we will see how we can improve it.

We appreciate any type of feedback. Thank you!


### Acknowledgments

Since the beginning of Catalyst's development, many people have influenced it in many different ways.

#### Catalyst.Team
- [Dmytro Doroshenko](https://www.linkedin.com/in/dmytro-doroshenko-05671112a/) ([ditwoo](https://github.com/Ditwoo))
- [Eugene Kachan](https://www.linkedin.com/in/yauheni-kachan/) ([bagxi](https://github.com/bagxi))
- [Nikita Balagansky](https://www.linkedin.com/in/nikita-balagansky-50414a19a/) ([elephantmipt](https://github.com/elephantmipt))
- [Sergey Kolesnikov](https://www.scitator.com/) ([scitator](https://github.com/Scitator))

#### Catalyst.Contributors
- [Aleksey Grinchuk](https://www.facebook.com/grinchuk.alexey) ([alexgrinch](https://github.com/AlexGrinch))
- [Aleksey Shabanov](https://linkedin.com/in/aleksey-shabanov-96b351189) ([AlekseySh](https://github.com/AlekseySh))
- [Alex Gaziev](https://www.linkedin.com/in/alexgaziev/) ([gazay](https://github.com/gazay))
- [Andrey Zharkov](https://www.linkedin.com/in/andrey-zharkov-8554a1153/) ([asmekal](https://github.com/asmekal))
- [Artem Zolkin](https://www.linkedin.com/in/artem-zolkin-b5155571/) ([arquestro](https://github.com/Arquestro))
- [David Kuryakin](https://www.linkedin.com/in/dkuryakin/) ([dkuryakin](https://github.com/dkuryakin))
- [Evgeny Semyonov](https://www.linkedin.com/in/ewan-semyonov/) ([lightforever](https://github.com/lightforever))
- [Eugene Khvedchenya](https://www.linkedin.com/in/cvtalks/) ([bloodaxe](https://github.com/BloodAxe))
- [Ivan Stepanenko](https://www.facebook.com/istepanenko)
- [Julia Shenshina](https://github.com/julia-shenshina) ([julia-shenshina](https://github.com/julia-shenshina))
- [Nguyen Xuan Bac](https://www.linkedin.com/in/bac-nguyen-xuan-70340b66/) ([ngxbac](https://github.com/ngxbac))
- [Roman Tezikov](http://linkedin.com/in/roman-tezikov/) ([TezRomacH](https://github.com/TezRomacH))
- [Valentin Khrulkov](https://www.linkedin.com/in/vkhrulkov/) ([khrulkovv](https://github.com/KhrulkovV))
- [Vladimir Iglovikov](https://www.linkedin.com/in/iglovikov/) ([ternaus](https://github.com/ternaus))
- [Vsevolod Poletaev](https://linkedin.com/in/vsevolod-poletaev-468071165) ([hexfaker](https://github.com/hexfaker))
- [Yury Kashnitsky](https://www.linkedin.com/in/kashnitskiy/) ([yorko](https://github.com/Yorko))


### Trusted by
- [Awecom](https://www.awecom.com)
- Researchers at the [Center for Translational Research in Neuroimaging and Data Science (TReNDS)](https://trendscenter.org)
- [Deep Learning School](https://en.dlschool.org)
- Researchers at [Emory University](https://www.emory.edu)
- [Evil Martians](https://evilmartians.com)
- Researchers at the [Georgia Institute of Technology](https://www.gatech.edu)
- Researchers at [Georgia State University](https://www.gsu.edu)
- [Helios](http://helios.to)
- [HPCD Lab](https://www.hpcdlab.com)
- [iFarm](https://ifarmproject.com)
- [Kinoplan](http://kinoplan.io/)
- Researchers at the [Moscow Institute of Physics and Technology](https://mipt.ru/english/)
- [Neuromation](https://neuromation.io)
- [Poteha Labs](https://potehalabs.com/en/)
- [Provectus](https://provectus.com)
- Researchers at the [Skolkovo Institute of Science and Technology](https://www.skoltech.ru/en)
- [SoftConstruct](https://www.softconstruct.io/)
- Researchers at [Tinkoff](https://www.tinkoff.ru/eng/)
- Researchers at [Yandex.Research](https://research.yandex.com)


### Citation

Please use this BibTeX entry if you want to cite this repository in your publications:

    @misc{catalyst,
        author = {Kolesnikov, Sergey},
        title = {Catalyst - Accelerated deep learning R&D},
        year = {2018},
        publisher = {GitHub},
        journal = {GitHub repository},
        howpublished = {\url{https://github.com/catalyst-team/catalyst}},
    }



            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/AndrewLaptev/catalyst_pdm",
    "name": "catalyst-pdm",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.7.0",
    "maintainer_email": "",
    "keywords": "Machine Learning,Distributed Computing,Deep Learning,Reinforcement Learning,Computer Vision,Natural Language Processing,Recommendation Systems,Information Retrieval,PyTorch",
    "author": "Sergey Kolesnikov",
    "author_email": "scitator@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/e7/d2/e71bf9eabb7ddab8d26e2deb536a1b7eb64d3e46413424ebeee9e179ec2e/catalyst_pdm-22.4.1.tar.gz",
    "platform": null,
    "description": "\n<div align=\"center\">\n\n[![Catalyst logo](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/catalyst_logo.png)](https://github.com/catalyst-team/catalyst)\n\n**Accelerated Deep Learning R&D**\n\n[![CodeFactor](https://www.codefactor.io/repository/github/catalyst-team/catalyst/badge)](https://www.codefactor.io/repository/github/catalyst-team/catalyst)\n[![Pipi version](https://img.shields.io/pypi/v/catalyst.svg)](https://pypi.org/project/catalyst/)\n[![Docs](https://img.shields.io/badge/dynamic/json.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Fcatalyst%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v)](https://catalyst-team.github.io/catalyst/index.html)\n[![Docker](https://img.shields.io/badge/docker-hub-blue)](https://hub.docker.com/r/catalystteam/catalyst/tags)\n[![PyPI Status](https://pepy.tech/badge/catalyst)](https://pepy.tech/project/catalyst)\n\n[![Twitter](https://img.shields.io/badge/news-twitter-499feb)](https://twitter.com/CatalystTeam)\n[![Telegram](https://img.shields.io/badge/channel-telegram-blue)](https://t.me/catalyst_team)\n[![Slack](https://img.shields.io/badge/Catalyst-slack-success)](https://join.slack.com/t/catalyst-team-devs/shared_invite/zt-d9miirnn-z86oKDzFMKlMG4fgFdZafw)\n[![Github contributors](https://img.shields.io/github/contributors/catalyst-team/catalyst.svg?logo=github&logoColor=white)](https://github.com/catalyst-team/catalyst/graphs/contributors)\n\n![codestyle](https://github.com/catalyst-team/catalyst/workflows/codestyle/badge.svg?branch=master&event=push)\n![docs](https://github.com/catalyst-team/catalyst/workflows/docs/badge.svg?branch=master&event=push)\n![catalyst](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)\n![integrations](https://github.com/catalyst-team/catalyst/workflows/integrations/badge.svg?branch=master&event=push)\n\n[![python](https://img.shields.io/badge/python_3.6-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)\n[![python](https://img.shields.io/badge/python_3.7-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)\n[![python](https://img.shields.io/badge/python_3.8-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)\n\n[![os](https://img.shields.io/badge/Linux-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)\n[![os](https://img.shields.io/badge/OSX-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)\n[![os](https://img.shields.io/badge/WSL-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)\n</div>\n\nCatalyst is a PyTorch framework for Deep Learning Research and Development.\nIt focuses on reproducibility, rapid experimentation, and codebase reuse\nso you can create something new rather than write yet another train loop.\n<br/> Break the cycle \u2013 use the Catalyst!\n\n- [Project Manifest](https://github.com/catalyst-team/catalyst/blob/master/MANIFEST.md)\n- [Framework architecture](https://miro.com/app/board/o9J_lxBO-2k=/)\n- [Catalyst at AI Landscape](https://landscape.lfai.foundation/selected=catalyst)\n- Part of the [PyTorch Ecosystem](https://pytorch.org/ecosystem/)\n\n<details>\n<summary>Catalyst at PyTorch Ecosystem Day 
2021</summary>\n<p>\n\n[![Catalyst poster](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/Catalyst-PTED21.png)](https://github.com/catalyst-team/catalyst)\n\n</p>\n</details>\n\n<details>\n<summary>Catalyst at PyTorch Developer Day 2021</summary>\n<p>\n\n[![Catalyst poster](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/Catalyst-PTDD21.png)](https://github.com/catalyst-team/catalyst)\n\n</p>\n</details>\n\n----\n\n## Getting started\n\n```bash\npip install -U catalyst\n```\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, utils\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.02)\nloaders = {\n    \"train\": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32),\n    \"valid\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n}\n\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\n\n# model training\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    callbacks=[\n        dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", topk=(1, 3, 5)),\n        dl.PrecisionRecallF1SupportCallback(input_key=\"logits\", target_key=\"targets\"),\n    ],\n    logdir=\"./logs\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    verbose=True,\n)\n\n# model evaluation\nmetrics = runner.evaluate_loader(\n    loader=loaders[\"valid\"],\n    callbacks=[dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", topk=(1, 3, 5))],\n)\n\n# model inference\nfor prediction in runner.predict_loader(loader=loaders[\"valid\"]):\n    assert prediction[\"logits\"].detach().cpu().numpy().shape[-1] == 10\n\n# model post-processing\nmodel = runner.model.cpu()\nbatch = next(iter(loaders[\"valid\"]))[0]\nutils.trace_model(model=model, batch=batch)\nutils.quantize_model(model=model)\nutils.prune_model(model=model, pruning_fn=\"l1_unstructured\", amount=0.8)\nutils.onnx_export(model=model, batch=batch, file=\"./logs/mnist.onnx\", verbose=True)\n```\n\n### Step-by-step Guide\n1. Start with [Catalyst \u2014 A PyTorch Framework for Accelerated Deep Learning R&D](https://medium.com/pytorch/catalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link&sk=885b4409aecab505db0a63b06f19dcef) introduction.\n1. Try [notebook tutorials](#minimal-examples) or check [minimal examples](#minimal-examples) for a first deep dive.\n1. Read [blog posts](https://catalyst-team.com/post/) with use-cases and guides.\n1. Learn machine learning with our [\"Deep Learning with Catalyst\" course](https://catalyst-team.com/#course).\n1. 
And finally, [join our Slack](https://join.slack.com/t/catalyst-team-core/shared_invite/zt-d9miirnn-z86oKDzFMKlMG4fgFdZafw) if you want to chat with the team and contributors.\n\n\n## Table of Contents\n- [Getting started](#getting-started)\n  - [Step-by-step Guide](#step-by-step-guide)\n- [Table of Contents](#table-of-contents)\n- [Overview](#overview)\n  - [Installation](#installation)\n  - [Documentation](#documentation)\n  - [Minimal Examples](#minimal-examples)\n  - [Tests](#tests)\n  - [Blog Posts](#blog-posts)\n  - [Talks](#talks)\n- [Community](#community)\n  - [Contribution Guide](#contribution-guide)\n  - [User Feedback](#user-feedback)\n  - [Acknowledgments](#acknowledgments)\n  - [Trusted by](#trusted-by)\n  - [Citation](#citation)\n\n\n## Overview\nCatalyst helps you implement compact\nbut full-featured Deep Learning pipelines with just a few lines of code.\nYou get a training loop with metrics, early-stopping, model checkpointing,\nand other features without the boilerplate.\n\n\n### Installation\n\nGeneric installation:\n```bash\npip install -U catalyst\n```\n\n<details>\n<summary>Specialized versions, extra requirements might apply</summary>\n<p>\n\n```bash\npip install catalyst[ml]         # installs ML-based Catalyst\npip install catalyst[cv]         # installs CV-based Catalyst\n# master version installation\npip install git+https://github.com/catalyst-team/catalyst@master --upgrade\n# all available extensions are listed here:\n# https://github.com/catalyst-team/catalyst/blob/master/setup.py\n```\n</p>\n</details>\n\nCatalyst is compatible with Python 3.7+ and PyTorch 1.4+. <br/>\nTested on Ubuntu 16.04/18.04/20.04, macOS 10.15, Windows 10, and Windows Subsystem for Linux.\n\n### Documentation\n- [master](https://catalyst-team.github.io/catalyst/)\n- [22.02](https://catalyst-team.github.io/catalyst/v22.02/index.html)\n\n- <details>\n  <summary>2021 edition</summary>\n  <p>\n\n    - [21.12](https://catalyst-team.github.io/catalyst/v21.12/index.html)\n    - [21.11](https://catalyst-team.github.io/catalyst/v21.11/index.html)\n    - [21.10](https://catalyst-team.github.io/catalyst/v21.10/index.html)\n    - [21.09](https://catalyst-team.github.io/catalyst/v21.09/index.html)\n    - [21.08](https://catalyst-team.github.io/catalyst/v21.08/index.html)\n    - [21.07](https://catalyst-team.github.io/catalyst/v21.07/index.html)\n    - [21.06](https://catalyst-team.github.io/catalyst/v21.06/index.html)\n    - [21.05](https://catalyst-team.github.io/catalyst/v21.05/index.html) ([Catalyst \u2014 A PyTorch Framework for Accelerated Deep Learning R&D](https://medium.com/pytorch/catalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link&sk=885b4409aecab505db0a63b06f19dcef))\n    - [21.04/21.04.1](https://catalyst-team.github.io/catalyst/v21.04/index.html), [21.04.2](https://catalyst-team.github.io/catalyst/v21.04.2/index.html)\n    - [21.03](https://catalyst-team.github.io/catalyst/v21.03/index.html), [21.03.1/21.03.2](https://catalyst-team.github.io/catalyst/v21.03.1/index.html)\n\n  </p>\n  </details>\n- <details>\n  <summary>2020 edition</summary>\n  <p>\n\n    - [20.12](https://catalyst-team.github.io/catalyst/v20.12/index.html)\n    - [20.11](https://catalyst-team.github.io/catalyst/v20.11/index.html)\n    - [20.10](https://catalyst-team.github.io/catalyst/v20.10/index.html)\n    - [20.09](https://catalyst-team.github.io/catalyst/v20.09/index.html)\n    - [20.08.2](https://catalyst-team.github.io/catalyst/v20.08.2/index.html)\n    - 
[20.07](https://catalyst-team.github.io/catalyst/v20.07/index.html) ([dev blog: 20.07 release](https://medium.com/pytorch/catalyst-dev-blog-20-07-release-fb489cd23e14?source=friends_link&sk=7ab92169658fe9a9e1c44068f28cc36c))\n    - [20.06](https://catalyst-team.github.io/catalyst/v20.06/index.html)\n    - [20.05](https://catalyst-team.github.io/catalyst/v20.05/index.html), [20.05.1](https://catalyst-team.github.io/catalyst/v20.05.1/index.html)\n    - [20.04](https://catalyst-team.github.io/catalyst/v20.04/index.html), [20.04.1](https://catalyst-team.github.io/catalyst/v20.04.1/index.html), [20.04.2](https://catalyst-team.github.io/catalyst/v20.04.2/index.html)\n\n  </p>\n  </details>\n\n\n### Minimal Examples\n\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customizing_what_happens_in_train.ipynb) Introduction tutorial \"[Customizing what happens in `train`](./examples/notebooks/customizing_what_happens_in_train.ipynb)\"\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customization_tutorial.ipynb) Demo with [customization examples](./examples/notebooks/customization_tutorial.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/reinforcement_learning.ipynb) [Reinforcement Learning with Catalyst](./examples/notebooks/reinforcement_learning.ipynb)\n- [And more](./examples/)\n\n<details>\n<summary>CustomRunner \u2013 PyTorch for-loop decomposition</summary>\n<p>\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, metrics\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\noptimizer = optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nclass CustomRunner(dl.Runner):\n    def predict_batch(self, batch):\n        # model inference step\n        return self.model(batch[0].to(self.engine.device))\n\n    def on_loader_start(self, runner):\n        super().on_loader_start(runner)\n        self.meters = {\n            key: metrics.AdditiveMetric(compute_on_call=False)\n            for key in [\"loss\", \"accuracy01\", \"accuracy03\"]\n        }\n\n    def handle_batch(self, batch):\n        # model train/valid step\n        # unpack the batch\n        x, y = batch\n        # run model forward pass\n        logits = self.model(x)\n        # compute the loss\n        loss = F.cross_entropy(logits, y)\n        # compute the metrics\n        accuracy01, accuracy03 = metrics.accuracy(logits, y, topk=(1, 3))\n        # log metrics\n        self.batch_metrics.update(\n            {\"loss\": loss, \"accuracy01\": accuracy01, \"accuracy03\": accuracy03}\n        )\n        for key in [\"loss\", \"accuracy01\", \"accuracy03\"]:\n            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)\n        # run model backward pass\n        if self.is_train_loader:\n            self.engine.backward(loss)\n            self.optimizer.step()\n            
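# zero the gradients so they don't accumulate into the next batch\n            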
self.optimizer.zero_grad()\n\n    def on_loader_end(self, runner):\n        for key in [\"loss\", \"accuracy01\", \"accuracy03\"]:\n            self.loader_metrics[key] = self.meters[key].compute()[0]\n        super().on_loader_end(runner)\n\nrunner = CustomRunner()\n# model training\nrunner.train(\n    model=model,\n    optimizer=optimizer,\n    loaders=loaders,\n    logdir=\"./logs\",\n    num_epochs=5,\n    verbose=True,\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n)\n# model inference\nfor logits in runner.predict_loader(loader=loaders[\"valid\"]):\n    assert logits.detach().cpu().numpy().shape[-1] == 10\n```\n</p>\n</details>\n\n<details>\n<summary>ML - linear regression</summary>\n<p>\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# data\nnum_samples, num_features = int(1e4), int(1e1)\nX, y = torch.rand(num_samples, num_features), torch.rand(num_samples)\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, 1)\ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])\n\n# model training\nrunner = dl.SupervisedRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\"./logdir\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    num_epochs=8,\n    verbose=True,\n)\n```\n</p>\n</details>\n\n\n<details>\n<summary>ML - multiclass classification</summary>\n<p>\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# sample data\nnum_samples, num_features, num_classes = int(1e4), int(1e1), 4\nX = torch.rand(num_samples, num_features)\ny = (torch.rand(num_samples,) * num_classes).to(torch.int64)\n\n# pytorch loaders\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, num_classes)\ncriterion = torch.nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# model training\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\"./logdir\",\n    num_epochs=3,\n    valid_loader=\"valid\",\n    valid_metric=\"accuracy03\",\n    minimize_valid_metric=False,\n    verbose=True,\n    callbacks=[\n        dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", num_classes=num_classes),\n        # uncomment for extra metrics:\n        # dl.PrecisionRecallF1SupportCallback(\n        #     input_key=\"logits\", target_key=\"targets\", num_classes=num_classes\n        # ),\n        # dl.AUCCallback(input_key=\"logits\", target_key=\"targets\"),\n        # catalyst[ml] required ``pip install catalyst[ml]``\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits\", target_key=\"targets\", num_classes=num_classes\n        # ),\n    
],\n)\n```\n</p>\n</details>\n\n\n<details>\n<summary>ML - multilabel classification</summary>\n<p>\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# sample data\nnum_samples, num_features, num_classes = int(1e4), int(1e1), 4\nX = torch.rand(num_samples, num_features)\ny = (torch.rand(num_samples, num_classes) > 0.5).to(torch.float32)\n\n# pytorch loaders\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, num_classes)\ncriterion = torch.nn.BCEWithLogitsLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# model training\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\"./logdir\",\n    num_epochs=3,\n    valid_loader=\"valid\",\n    valid_metric=\"accuracy01\",\n    minimize_valid_metric=False,\n    verbose=True,\n    callbacks=[\n        dl.BatchTransformCallback(\n            transform=torch.sigmoid,\n            scope=\"on_batch_end\",\n            input_key=\"logits\",\n            output_key=\"scores\"\n        ),\n        dl.AUCCallback(input_key=\"scores\", target_key=\"targets\"),\n        # uncomment for extra metrics:\n        # dl.MultilabelAccuracyCallback(input_key=\"scores\", target_key=\"targets\", threshold=0.5),\n        # dl.MultilabelPrecisionRecallF1SupportCallback(\n        #     input_key=\"scores\", target_key=\"targets\", threshold=0.5\n        # ),\n    ]\n)\n```\n</p>\n</details>\n\n\n<details>\n<summary>ML - multihead classification</summary>\n<p>\n\n```python\nimport torch\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# sample data\nnum_samples, num_features, num_classes1, num_classes2 = int(1e4), int(1e1), 4, 10\nX = torch.rand(num_samples, num_features)\ny1 = (torch.rand(num_samples,) * num_classes1).to(torch.int64)\ny2 = (torch.rand(num_samples,) * num_classes2).to(torch.int64)\n\n# pytorch loaders\ndataset = TensorDataset(X, y1, y2)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\nclass CustomModule(nn.Module):\n    def __init__(self, in_features: int, out_features1: int, out_features2: int):\n        super().__init__()\n        self.shared = nn.Linear(in_features, 128)\n        self.head1 = nn.Linear(128, out_features1)\n        self.head2 = nn.Linear(128, out_features2)\n\n    def forward(self, x):\n        x = self.shared(x)\n        y1 = self.head1(x)\n        y2 = self.head2(x)\n        return y1, y2\n\n# model, criterion, optimizer, scheduler\nmodel = CustomModule(num_features, num_classes1, num_classes2)\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters())\nscheduler = optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\nclass CustomRunner(dl.Runner):\n    def handle_batch(self, batch):\n        x, y1, y2 = batch\n        y1_hat, y2_hat = self.model(x)\n        self.batch = {\n            \"features\": x,\n            \"logits1\": y1_hat,\n            \"logits2\": y2_hat,\n            \"targets1\": y1,\n            \"targets2\": y2,\n        }\n\n# model training\nrunner = 
CustomRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    num_epochs=3,\n    verbose=True,\n    callbacks=[\n        dl.CriterionCallback(metric_key=\"loss1\", input_key=\"logits1\", target_key=\"targets1\"),\n        dl.CriterionCallback(metric_key=\"loss2\", input_key=\"logits2\", target_key=\"targets2\"),\n        dl.MetricAggregationCallback(metric_key=\"loss\", metrics=[\"loss1\", \"loss2\"], mode=\"mean\"),\n        dl.BackwardCallback(metric_key=\"loss\"),\n        dl.OptimizerCallback(metric_key=\"loss\"),\n        dl.SchedulerCallback(),\n        dl.AccuracyCallback(\n            input_key=\"logits1\", target_key=\"targets1\", num_classes=num_classes1, prefix=\"one_\"\n        ),\n        dl.AccuracyCallback(\n            input_key=\"logits2\", target_key=\"targets2\", num_classes=num_classes2, prefix=\"two_\"\n        ),\n        # catalyst[ml] required ``pip install catalyst[ml]``\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits1\", target_key=\"targets1\", num_classes=num_classes1, prefix=\"one_cm\"\n        # ),\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits2\", target_key=\"targets2\", num_classes=num_classes2, prefix=\"two_cm\"\n        # ),\n        dl.CheckpointCallback(\n            logdir=\"./logs/one\",\n            loader_key=\"valid\", metric_key=\"one_accuracy01\", minimize=False, topk=1\n        ),\n        dl.CheckpointCallback(\n            logdir=\"./logs/two\",\n            loader_key=\"valid\", metric_key=\"two_accuracy03\", minimize=False, topk=3\n        ),\n    ],\n    loggers={\"console\": dl.ConsoleLogger(), \"tb\": dl.TensorboardLogger(\"./logs/tb\")},\n)\n```\n</p>\n</details>\n\n\n<details>\n<summary>ML \u2013 RecSys</summary>\n<p>\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# sample data\nnum_users, num_features, num_items = int(1e4), int(1e1), 10\nX = torch.rand(num_users, num_features)\ny = (torch.rand(num_users, num_items) > 0.5).to(torch.float32)\n\n# pytorch loaders\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, num_items)\ncriterion = torch.nn.BCEWithLogitsLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# model training\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    num_epochs=3,\n    verbose=True,\n    callbacks=[\n        dl.BatchTransformCallback(\n            transform=torch.sigmoid,\n            scope=\"on_batch_end\",\n            input_key=\"logits\",\n            output_key=\"scores\"\n        ),\n        dl.CriterionCallback(input_key=\"logits\", target_key=\"targets\", metric_key=\"loss\"),\n        # uncomment for extra metrics:\n        # dl.AUCCallback(input_key=\"scores\", target_key=\"targets\"),\n        # dl.HitrateCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        # dl.MRRCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        # dl.MAPCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 
5)),\n        # dl.NDCGCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        dl.BackwardCallback(metric_key=\"loss\"),\n        dl.OptimizerCallback(metric_key=\"loss\"),\n        dl.SchedulerCallback(),\n        dl.CheckpointCallback(\n            logdir=\"./logs\", loader_key=\"valid\", metric_key=\"loss\", minimize=True\n        ),\n    ]\n)\n```\n</p>\n</details>\n\n\n<details>\n<summary>CV - MNIST classification</summary>\n<p>\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nrunner = dl.SupervisedRunner()\n# model training\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    logdir=\"./logs\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    verbose=True,\n# uncomment for extra metrics:\n#     callbacks=[\n#         dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", num_classes=10),\n#         dl.PrecisionRecallF1SupportCallback(\n#             input_key=\"logits\", target_key=\"targets\", num_classes=10\n#         ),\n#         dl.AUCCallback(input_key=\"logits\", target_key=\"targets\"),\n#         # catalyst[ml] required ``pip install catalyst[ml]``\n#         dl.ConfusionMatrixCallback(\n#             input_key=\"logits\", target_key=\"targets\", num_classes=10\n#         ),\n#     ]\n)\n```\n</p>\n</details>\n\n\n<details>\n<summary>CV - MNIST segmentation</summary>\n<p>\n\n```python\nimport os\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\nfrom catalyst.contrib.losses import IoULoss\n\n\nmodel = nn.Sequential(\n    nn.Conv2d(1, 1, 3, 1, 1), nn.ReLU(),\n    nn.Conv2d(1, 1, 3, 1, 1), nn.Sigmoid(),\n)\ncriterion = IoULoss()\noptimizer = torch.optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nclass CustomRunner(dl.SupervisedRunner):\n    def handle_batch(self, batch):\n        x = batch[self._input_key]\n        x_noise = (x + torch.rand_like(x)).clamp_(0, 1)\n        x_ = self.model(x_noise)\n        self.batch = {self._input_key: x, self._output_key: x_, self._target_key: x}\n\nrunner = CustomRunner(\n    input_key=\"features\", output_key=\"scores\", target_key=\"targets\", loss_key=\"loss\"\n)\n# model training\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    callbacks=[\n        dl.IOUCallback(input_key=\"scores\", target_key=\"targets\"),\n        dl.DiceCallback(input_key=\"scores\", target_key=\"targets\"),\n        dl.TrevskyCallback(input_key=\"scores\", target_key=\"targets\", alpha=0.2),\n    ],\n    logdir=\"./logdir\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    
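# model selection tracks the validation loss (IoULoss here); lower is better\n    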
verbose=True,\n)\n```\n</p>\n</details>\n\n\n<details>\n<summary>CV - MNIST metric learning</summary>\n<p>\n\n```python\nimport os\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.data import HardTripletsSampler\nfrom catalyst.contrib.datasets import MnistMLDataset, MnistQGDataset\nfrom catalyst.contrib.losses import TripletMarginLossWithSampler\nfrom catalyst.contrib.models import MnistSimpleNet\nfrom catalyst.data.sampler import BatchBalanceClassSampler\n\n\n# 1. train and valid loaders\ntrain_dataset = MnistMLDataset(root=os.getcwd())\nsampler = BatchBalanceClassSampler(\n    labels=train_dataset.get_labels(), num_classes=5, num_samples=10, num_batches=10\n)\ntrain_loader = DataLoader(dataset=train_dataset, batch_sampler=sampler)\n\nvalid_dataset = MnistQGDataset(root=os.getcwd(), gallery_fraq=0.2)\nvalid_loader = DataLoader(dataset=valid_dataset, batch_size=1024)\n\n# 2. model and optimizer\nmodel = MnistSimpleNet(out_features=16)\noptimizer = Adam(model.parameters(), lr=0.001)\n\n# 3. criterion with triplets sampling\nsampler_inbatch = HardTripletsSampler(norm_required=False)\ncriterion = TripletMarginLossWithSampler(margin=0.5, sampler_inbatch=sampler_inbatch)\n\n# 4. training with catalyst Runner\nclass CustomRunner(dl.SupervisedRunner):\n    def handle_batch(self, batch) -> None:\n        if self.is_train_loader:\n            images, targets = batch[\"features\"].float(), batch[\"targets\"].long()\n            features = self.model(images)\n            self.batch = {\"embeddings\": features, \"targets\": targets,}\n        else:\n            images, targets, is_query = \\\n                batch[\"features\"].float(), batch[\"targets\"].long(), batch[\"is_query\"].bool()\n            features = self.model(images)\n            self.batch = {\"embeddings\": features, \"targets\": targets, \"is_query\": is_query}\n\ncallbacks = [\n    dl.ControlFlowCallbackWrapper(\n        dl.CriterionCallback(input_key=\"embeddings\", target_key=\"targets\", metric_key=\"loss\"),\n        loaders=\"train\",\n    ),\n    dl.ControlFlowCallbackWrapper(\n        dl.CMCScoreCallback(\n            embeddings_key=\"embeddings\",\n            labels_key=\"targets\",\n            is_query_key=\"is_query\",\n            topk=[1],\n        ),\n        loaders=\"valid\",\n    ),\n    dl.PeriodicLoaderCallback(\n        valid_loader_key=\"valid\", valid_metric_key=\"cmc01\", minimize=False, valid=2\n    ),\n]\n\nrunner = CustomRunner(input_key=\"features\", output_key=\"embeddings\")\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    callbacks=callbacks,\n    loaders={\"train\": train_loader, \"valid\": valid_loader},\n    verbose=False,\n    logdir=\"./logs\",\n    valid_loader=\"valid\",\n    valid_metric=\"cmc01\",\n    minimize_valid_metric=False,\n    num_epochs=10,\n)\n```\n</p>\n</details>\n\n\n<details>\n<summary>CV - MNIST GAN</summary>\n<p>\n\n```python\nimport os\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\nfrom catalyst.contrib.layers import GlobalMaxPool2d, Lambda\n\nlatent_dim = 128\ngenerator = nn.Sequential(\n    # We want to generate 128 coefficients to reshape into a 7x7x128 map\n    nn.Linear(128, 128 * 7 * 7),\n    nn.LeakyReLU(0.2, inplace=True),\n    Lambda(lambda x: x.view(x.size(0), 128, 7, 7)),\n    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),\n    
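# the transposed conv above upsamples the 7x7 feature map to 14x14\n    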
nn.LeakyReLU(0.2, inplace=True),\n    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    nn.Conv2d(128, 1, (7, 7), padding=3),\n    nn.Sigmoid(),\n)\ndiscriminator = nn.Sequential(\n    nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    GlobalMaxPool2d(),\n    nn.Flatten(),\n    nn.Linear(128, 1),\n)\n\nmodel = nn.ModuleDict({\"generator\": generator, \"discriminator\": discriminator})\ncriterion = {\"generator\": nn.BCEWithLogitsLoss(), \"discriminator\": nn.BCEWithLogitsLoss()}\noptimizer = {\n    \"generator\": torch.optim.Adam(generator.parameters(), lr=0.0003, betas=(0.5, 0.999)),\n    \"discriminator\": torch.optim.Adam(discriminator.parameters(), lr=0.0003, betas=(0.5, 0.999)),\n}\ntrain_data = MNIST(os.getcwd(), train=False)\nloaders = {\"train\": DataLoader(train_data, batch_size=32)}\n\nclass CustomRunner(dl.Runner):\n    def predict_batch(self, batch):\n        batch_size = 1\n        # Sample random points in the latent space\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n        # Decode them to fake images\n        generated_images = self.model[\"generator\"](random_latent_vectors).detach()\n        return generated_images\n\n    def handle_batch(self, batch):\n        real_images, _ = batch\n        batch_size = real_images.shape[0]\n\n        # Sample random points in the latent space\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n\n        # Decode them to fake images\n        generated_images = self.model[\"generator\"](random_latent_vectors).detach()\n        # Combine them with real images\n        combined_images = torch.cat([generated_images, real_images])\n\n        # Assemble labels discriminating real from fake images\n        labels = \\\n            torch.cat([torch.ones((batch_size, 1)), torch.zeros((batch_size, 1))]).to(self.engine.device)\n        # Add random noise to the labels - important trick!\n        labels += 0.05 * torch.rand(labels.shape).to(self.engine.device)\n\n        # Discriminator forward\n        combined_predictions = self.model[\"discriminator\"](combined_images)\n\n        # Sample random points in the latent space\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n        # Assemble labels that say \"all real images\"\n        misleading_labels = torch.zeros((batch_size, 1)).to(self.engine.device)\n\n        # Generator forward\n        generated_images = self.model[\"generator\"](random_latent_vectors)\n        generated_predictions = self.model[\"discriminator\"](generated_images)\n\n        self.batch = {\n            \"combined_predictions\": combined_predictions,\n            \"labels\": labels,\n            \"generated_predictions\": generated_predictions,\n            \"misleading_labels\": misleading_labels,\n        }\n\n\nrunner = CustomRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    callbacks=[\n        dl.CriterionCallback(\n            input_key=\"combined_predictions\",\n            target_key=\"labels\",\n            metric_key=\"loss_discriminator\",\n            criterion_key=\"discriminator\",\n        ),\n        dl.BackwardCallback(metric_key=\"loss_discriminator\"),\n        dl.OptimizerCallback(\n            
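# steps only the discriminator parameters, using the discriminator loss\n            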
optimizer_key=\"discriminator\",\n            metric_key=\"loss_discriminator\",\n        ),\n        dl.CriterionCallback(\n            input_key=\"generated_predictions\",\n            target_key=\"misleading_labels\",\n            metric_key=\"loss_generator\",\n            criterion_key=\"generator\",\n        ),\n        dl.BackwardCallback(metric_key=\"loss_generator\"),\n        dl.OptimizerCallback(\n            optimizer_key=\"generator\",\n            metric_key=\"loss_generator\",\n        ),\n    ],\n    valid_loader=\"train\",\n    valid_metric=\"loss_generator\",\n    minimize_valid_metric=True,\n    num_epochs=20,\n    verbose=True,\n    logdir=\"./logs_gan\",\n)\n\n# visualization (matplotlib required):\n# import matplotlib.pyplot as plt\n# %matplotlib inline\n# plt.imshow(runner.predict_batch(None)[0, 0].cpu().numpy())\n```\n</p>\n</details>\n\n\n<details>\n<summary>CV - MNIST VAE</summary>\n<p>\n\n```python\nimport os\nimport torch\nfrom torch import nn, optim\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, metrics\nfrom catalyst.contrib.datasets import MNIST\n\nLOG_SCALE_MAX = 2\nLOG_SCALE_MIN = -10\n\ndef normal_sample(loc, log_scale):\n    scale = torch.exp(0.5 * log_scale)\n    return loc + scale * torch.randn_like(scale)\n\nclass VAE(nn.Module):\n    def __init__(self, in_features, hid_features):\n        super().__init__()\n        self.hid_features = hid_features\n        self.encoder = nn.Linear(in_features, hid_features * 2)\n        self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())\n\n    def forward(self, x, deterministic=False):\n        z = self.encoder(x)\n        bs, z_dim = z.shape\n\n        loc, log_scale = z[:, : z_dim // 2], z[:, z_dim // 2 :]\n        log_scale = torch.clamp(log_scale, LOG_SCALE_MIN, LOG_SCALE_MAX)\n\n        z_ = loc if deterministic else normal_sample(loc, log_scale)\n        z_ = z_.view(bs, -1)\n        x_ = self.decoder(z_)\n\n        return x_, loc, log_scale\n\nclass CustomRunner(dl.IRunner):\n    def __init__(self, hid_features, logdir, engine):\n        super().__init__()\n        self.hid_features = hid_features\n        self._logdir = logdir\n        self._engine = engine\n\n    def get_engine(self):\n        return self._engine\n\n    def get_loggers(self):\n        return {\n            \"console\": dl.ConsoleLogger(),\n            \"csv\": dl.CSVLogger(logdir=self._logdir),\n            \"tensorboard\": dl.TensorboardLogger(logdir=self._logdir),\n        }\n\n    @property\n    def num_epochs(self) -> int:\n        return 1\n\n    def get_loaders(self):\n        loaders = {\n            \"train\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n            \"valid\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n        }\n        return loaders\n\n    def get_model(self):\n        model = self.model if self.model is not None else VAE(28 * 28, self.hid_features)\n        return model\n\n    def get_optimizer(self, model):\n        return optim.Adam(model.parameters(), lr=0.02)\n\n    def get_callbacks(self):\n        return {\n            \"backward\": dl.BackwardCallback(metric_key=\"loss\"),\n            \"optimizer\": dl.OptimizerCallback(metric_key=\"loss\"),\n            \"checkpoint\": dl.CheckpointCallback(\n                self._logdir,\n                loader_key=\"valid\",\n                metric_key=\"loss\",\n                minimize=True,\n                topk=3,\n            ),\n        }\n\n  
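  # the meters below keep running averages of the VAE losses over each loader\n  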
  def on_loader_start(self, runner):\n        super().on_loader_start(runner)\n        self.meters = {\n            key: metrics.AdditiveMetric(compute_on_call=False)\n            for key in [\"loss_ae\", \"loss_kld\", \"loss\"]\n        }\n\n    def handle_batch(self, batch):\n        x, _ = batch\n        x = x.view(x.size(0), -1)\n        x_, loc, log_scale = self.model(x, deterministic=not self.is_train_loader)\n\n        loss_ae = F.mse_loss(x_, x)\n        loss_kld = (\n            -0.5 * torch.sum(1 + log_scale - loc.pow(2) - log_scale.exp(), dim=1)\n        ).mean()\n        loss = loss_ae + loss_kld * 0.01\n\n        self.batch_metrics = {\"loss_ae\": loss_ae, \"loss_kld\": loss_kld, \"loss\": loss}\n        for key in [\"loss_ae\", \"loss_kld\", \"loss\"]:\n            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)\n\n    def on_loader_end(self, runner):\n        for key in [\"loss_ae\", \"loss_kld\", \"loss\"]:\n            self.loader_metrics[key] = self.meters[key].compute()[0]\n        super().on_loader_end(runner)\n\n    def predict_batch(self, batch):\n        random_latent_vectors = torch.randn(1, self.hid_features).to(self.engine.device)\n        generated_images = self.model.decoder(random_latent_vectors).detach()\n        return generated_images\n\nrunner = CustomRunner(128, \"./logs\", dl.CPUEngine())\nrunner.run()\n# visualization (matplotlib required):\n# import matplotlib.pyplot as plt\n# %matplotlib inline\n# plt.imshow(runner.predict_batch(None)[0].cpu().numpy().reshape(28, 28))\n```\n</p>\n</details>\n\n\n<details>\n<summary>AutoML - hyperparameters optimization with Optuna</summary>\n<p>\n\n```python\nimport os\nimport optuna\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\n\n\ndef objective(trial):\n    lr = trial.suggest_loguniform(\"lr\", 1e-3, 1e-1)\n    num_hidden = int(trial.suggest_loguniform(\"num_hidden\", 32, 128))\n\n    train_data = MNIST(os.getcwd(), train=True)\n    valid_data = MNIST(os.getcwd(), train=False)\n    loaders = {\n        \"train\": DataLoader(train_data, batch_size=32),\n        \"valid\": DataLoader(valid_data, batch_size=32),\n    }\n    model = nn.Sequential(\n        nn.Flatten(), nn.Linear(784, num_hidden), nn.ReLU(), nn.Linear(num_hidden, 10)\n    )\n    optimizer = torch.optim.Adam(model.parameters(), lr=lr)\n    criterion = nn.CrossEntropyLoss()\n\n    runner = dl.SupervisedRunner(input_key=\"features\", output_key=\"logits\", target_key=\"targets\")\n    runner.train(\n        model=model,\n        criterion=criterion,\n        optimizer=optimizer,\n        loaders=loaders,\n        callbacks={\n            \"accuracy\": dl.AccuracyCallback(\n                input_key=\"logits\", target_key=\"targets\", num_classes=10\n            ),\n            # catalyst[optuna] required ``pip install catalyst[optuna]``\n            \"optuna\": dl.OptunaPruningCallback(\n                loader_key=\"valid\", metric_key=\"accuracy01\", minimize=False, trial=trial\n            ),\n        },\n        num_epochs=3,\n    )\n    score = trial.best_score\n    return score\n\nstudy = optuna.create_study(\n    direction=\"maximize\",\n    pruner=optuna.pruners.MedianPruner(\n        n_startup_trials=1, n_warmup_steps=0, interval_steps=1\n    ),\n)\nstudy.optimize(objective, n_trials=3, timeout=300)\nprint(study.best_value, study.best_params)\n```\n</p>\n</details>\n\n<details>\n<summary>Config API - minimal 
example</summary>\n<p>\n\n```yaml title=\"example.yaml\"\nrunner:\n  _target_: catalyst.runners.SupervisedRunner\n  model:\n    _var_: model\n    _target_: torch.nn.Sequential\n    args:\n      - _target_: torch.nn.Flatten\n      - _target_: torch.nn.Linear\n        in_features: 784  # 28 * 28\n        out_features: 10\n  input_key: features\n  output_key: &output_key logits\n  target_key: &target_key targets\n  loss_key: &loss_key loss\n\nrun:\n  # \u2248 stage 1\n  - _call_: train  # runner.train(...)\n\n    criterion:\n      _target_: torch.nn.CrossEntropyLoss\n\n    optimizer:\n      _target_: torch.optim.Adam\n      params:  # model.parameters()\n        _var_: model.parameters\n      lr: 0.02\n\n    loaders:\n      train:\n        _target_: torch.utils.data.DataLoader\n        dataset:\n          _target_: catalyst.contrib.datasets.MNIST\n          root: data\n          train: y\n        batch_size: 32\n\n      &valid_loader_key valid:\n        &valid_loader\n        _target_: torch.utils.data.DataLoader\n        dataset:\n          _target_: catalyst.contrib.datasets.MNIST\n          root: data\n          train: n\n        batch_size: 32\n\n    callbacks:\n      - &accuracy_metric\n        _target_: catalyst.callbacks.AccuracyCallback\n        input_key: *output_key\n        target_key: *target_key\n        topk: [1,3,5]\n      - _target_: catalyst.callbacks.PrecisionRecallF1SupportCallback\n        input_key: *output_key\n        target_key: *target_key\n\n    num_epochs: 1\n    logdir: logs\n    valid_loader: *valid_loader_key\n    valid_metric: *loss_key\n    minimize_valid_metric: y\n    verbose: y\n\n  # \u2248 stage 2\n  - _call_: evaluate_loader  # runner.evaluate_loader(...)\n    loader: *valid_loader\n    callbacks:\n      - *accuracy_metric\n\n```\n\n```sh\ncatalyst-run --config example.yaml\n```\n</p>\n</details>\n\n### Tests\nAll Catalyst code, features, and pipelines [are fully tested](./tests).\nWe also have our own [catalyst-codestyle](https://github.com/catalyst-team/codestyle) and a corresponding pre-commit hook.\nDuring testing, we train a variety of different models: image classification,\nimage segmentation, text classification, GANs, and much more.\nWe then compare their convergence metrics in order to verify\nthe correctness of the training procedure and its reproducibility.\nAs a result, Catalyst provides fully tested and reproducible\nbest practices for your deep learning research and development.\n\n### [Blog Posts](https://catalyst-team.com/post/)\n\n### [Talks](https://catalyst-team.com/talk/)\n\n\n## Community\n\n### Accelerated with Catalyst\n\n<details>\n<summary>Research Papers</summary>\n<p>\n\n- [Hierarchical Attention for Sentiment Classification with Visualization](https://github.com/neuromation/ml-recipe-hier-attention)\n- [Pediatric Bone Age Assessment](https://github.com/neuromation/ml-recipe-bone-age)\n- [Implementation of the paper \"Tell Me Where to Look: Guided Attention Inference Network\"](https://github.com/ngxbac/GAIN)\n- [Implementation of the paper \"Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks\"](https://github.com/yukkyo/PyTorch-FilterResponseNormalizationLayer)\n- [Implementation of the paper \"Utterance-level Aggregation For Speaker Recognition In The Wild\"](https://github.com/ptJexio/Speaker-Recognition)\n- [Implementation of the paper \"Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech 
Separation\"](https://github.com/vitrioil/Speech-Separation)\n- [Implementation of the paper \"ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks\"](https://github.com/leverxgroup/esrgan)\n\n</p>\n</details>\n\n<details>\n<summary>Blog Posts</summary>\n<p>\n\n- [Solving the Cocktail Party Problem using PyTorch](https://medium.com/pytorch/addressing-the-cocktail-party-problem-using-pytorch-305fb74560ea)\n- [Beyond fashion: Deep Learning with Catalyst (Config API)](https://evilmartians.com/chronicles/beyond-fashion-deep-learning-with-catalyst)\n- [Tutorial from Notebook API to Config API (RU)](https://github.com/Bekovmi/Segmentation_tutorial)\n\n</p>\n</details>\n\n<details>\n<summary>Competitions</summary>\n<p>\n\n- [Kaggle Quick, Draw! Doodle Recognition Challenge](https://github.com/ngxbac/Kaggle-QuickDraw) - 11th place\n- [Catalyst.RL - NeurIPS 2018: AI for Prosthetics Challenge](https://github.com/Scitator/neurips-18-prosthetics-challenge) \u2013 3rd place\n- [Kaggle Google Landmark 2019](https://github.com/ngxbac/Kaggle-Google-Landmark-2019) - 30th place\n- [iMet Collection 2019 - FGVC6](https://github.com/ngxbac/Kaggle-iMet) - 24th place\n- [ID R&D Anti-spoofing Challenge](https://github.com/bagxi/idrnd-anti-spoofing-challenge-solution) - 14th place\n- [NeurIPS 2019: Recursion Cellular Image Classification](https://github.com/ngxbac/Kaggle-Recursion-Cellular) - 4th place\n- [MICCAI 2019: Automatic Structure Segmentation for Radiotherapy Planning Challenge 2019](https://github.com/ngxbac/StructSeg2019)\n  * 3rd place solution for `Task 3: Organ-at-risk segmentation from chest CT scans`\n  * and 4th place solution for `Task 4: Gross Target Volume segmentation of lung cancer`\n- [Kaggle Severstal steel detection](https://github.com/bamps53/kaggle-severstal) - 5th place\n- [RSNA Intracranial Hemorrhage Detection](https://github.com/ngxbac/Kaggle-RSNA) - 5th place\n- [APTOS 2019 Blindness Detection](https://github.com/BloodAxe/Kaggle-2019-Blindness-Detection) \u2013 7th place\n- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https://github.com/Scitator/run-skeleton-run-in-3d) \u2013 2nd place\n- [xView2 Damage Assessment Challenge](https://github.com/BloodAxe/xView2-Solution) - 3rd place\n\n\n</p>\n</details>\n\n<details>\n<summary>Toolkits</summary>\n<p>\n\n- [Catalyst.RL](https://github.com/Scitator/catalyst-rl-framework) \u2013 A Distributed Framework for Reproducible RL Research by [Scitator](https://github.com/Scitator)\n- [Catalyst.Classification](https://github.com/catalyst-team/classification) - Comprehensive classification pipeline with Pseudo-Labeling by [Bagxi](https://github.com/bagxi) and [Pdanilov](https://github.com/pdanilov)\n- [Catalyst.Segmentation](https://github.com/catalyst-team/segmentation) - Segmentation pipelines - binary, semantic and instance, by [Bagxi](https://github.com/bagxi)\n- [Catalyst.Detection](https://github.com/catalyst-team/detection) - Anchor-free detection pipeline by [Avi2011class](https://github.com/Avi2011class) and [TezRomacH](https://github.com/TezRomacH)\n- [Catalyst.GAN](https://github.com/catalyst-team/gan) - Reproducible GANs pipelines by [Asmekal](https://github.com/asmekal)\n- [Catalyst.Neuro](https://github.com/catalyst-team/neuro) - Brain image analysis project, in collaboration with [TReNDS Center](https://trendscenter.org)\n- [MLComp](https://github.com/catalyst-team/mlcomp) \u2013 Distributed DAG framework for machine learning with UI by [Lightforever](https://github.com/lightforever)\n- [Pytorch 
toolbelt](https://github.com/BloodAxe/pytorch-toolbelt) - PyTorch extensions for fast R&D prototyping and Kaggle farming by [BloodAxe](https://github.com/BloodAxe)\n- [Helper functions](https://github.com/ternaus/iglovikov_helper_functions) - An assorted collection of helper functions by [Ternaus](https://github.com/ternaus)\n- [BERT Distillation with Catalyst](https://github.com/elephantmipt/bert-distillation) by [elephantmipt](https://github.com/elephantmipt)\n\n</p>\n</details>\n\n\n<details>\n<summary>Other</summary>\n<p>\n\n- [CamVid Segmentation Example](https://github.com/BloodAxe/Catalyst-CamVid-Segmentation-Example) - Example of semantic segmentation for the CamVid dataset\n- [Notebook API tutorial for segmentation in Understanding Clouds from Satellite Images Competition](https://www.kaggle.com/artgor/segmentation-in-pytorch-using-convenient-tools/)\n- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https://github.com/Scitator/learning-to-move-starter-kit) \u2013 starter kit\n- [Catalyst.RL - NeurIPS 2019: Animal-AI Olympics](https://github.com/Scitator/animal-olympics-starter-kit) - starter kit\n- [Inria Segmentation Example](https://github.com/BloodAxe/Catalyst-Inria-Segmentation-Example) - An example of training a segmentation model for the Inria Satellite Segmentation Challenge\n- [iglovikov_segmentation](https://github.com/ternaus/iglovikov_segmentation) - Semantic segmentation pipeline using Catalyst\n- [Logging Catalyst Runs to Comet](https://colab.research.google.com/drive/1TaG27HcMh2jyRKBGsqRXLiGUfsHVyCq6?usp=sharing) - An example of how to log metrics, hyperparameters and more from Catalyst runs to [Comet](https://www.comet.ml/site/data-scientists/)\n\n</p>\n</details>\n\n\nSee other projects at [the GitHub dependency graph](https://github.com/catalyst-team/catalyst/network/dependents).\n\nIf your project implements a paper,\na notable use-case/tutorial, or a Kaggle competition solution, or\nif your code simply presents interesting results and uses Catalyst,\nwe would be happy to add your project to the list above!\nDo not hesitate to send us a PR with a brief description of the project similar to the above.\n\n### Contribution Guide\n\nWe appreciate all contributions.\nIf you are planning to contribute back bug-fixes, there is no need to run that by us; just send a PR.\nIf you plan to contribute new features, new utility functions, or extensions,\nplease open an issue first and discuss it with us.\n\n- Please see the [Contribution Guide](CONTRIBUTING.md) for more information.\n- By participating in this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md).\n\n\n### User Feedback\n\nWe've created `feedback@catalyst-team.com` as an additional channel for user feedback.\n\n- If you like the project and want to thank us, this is the right place.\n- If you would like to start a collaboration between your team and the Catalyst team to improve Deep Learning R&D, you are always welcome.\n- If you don't like GitHub Issues and prefer email, feel free to email us.\n- Finally, if you do not like something, please share it with us, and we can see how to improve it.\n\nWe appreciate any type of feedback. 
Thank you!\n\n\n### Acknowledgments\n\nSince the beginning of Catalyst's development, many people have influenced it in many different ways.\n\n#### Catalyst.Team\n- [Dmytro Doroshenko](https://www.linkedin.com/in/dmytro-doroshenko-05671112a/) ([ditwoo](https://github.com/Ditwoo))\n- [Eugene Kachan](https://www.linkedin.com/in/yauheni-kachan/) ([bagxi](https://github.com/bagxi))\n- [Nikita Balagansky](https://www.linkedin.com/in/nikita-balagansky-50414a19a/) ([elephantmipt](https://github.com/elephantmipt))\n- [Sergey Kolesnikov](https://www.scitator.com/) ([scitator](https://github.com/Scitator))\n\n#### Catalyst.Contributors\n- [Aleksey Grinchuk](https://www.facebook.com/grinchuk.alexey) ([alexgrinch](https://github.com/AlexGrinch))\n- [Aleksey Shabanov](https://linkedin.com/in/aleksey-shabanov-96b351189) ([AlekseySh](https://github.com/AlekseySh))\n- [Alex Gaziev](https://www.linkedin.com/in/alexgaziev/) ([gazay](https://github.com/gazay))\n- [Andrey Zharkov](https://www.linkedin.com/in/andrey-zharkov-8554a1153/) ([asmekal](https://github.com/asmekal))\n- [Artem Zolkin](https://www.linkedin.com/in/artem-zolkin-b5155571/) ([arquestro](https://github.com/Arquestro))\n- [David Kuryakin](https://www.linkedin.com/in/dkuryakin/) ([dkuryakin](https://github.com/dkuryakin))\n- [Evgeny Semyonov](https://www.linkedin.com/in/ewan-semyonov/) ([lightforever](https://github.com/lightforever))\n- [Eugene Khvedchenya](https://www.linkedin.com/in/cvtalks/) ([bloodaxe](https://github.com/BloodAxe))\n- [Ivan Stepanenko](https://www.facebook.com/istepanenko)\n- [Julia Shenshina](https://github.com/julia-shenshina) ([julia-shenshina](https://github.com/julia-shenshina))\n- [Nguyen Xuan Bac](https://www.linkedin.com/in/bac-nguyen-xuan-70340b66/) ([ngxbac](https://github.com/ngxbac))\n- [Roman Tezikov](http://linkedin.com/in/roman-tezikov/) ([TezRomacH](https://github.com/TezRomacH))\n- [Valentin Khrulkov](https://www.linkedin.com/in/vkhrulkov/) ([khrulkovv](https://github.com/KhrulkovV))\n- [Vladimir Iglovikov](https://www.linkedin.com/in/iglovikov/) ([ternaus](https://github.com/ternaus))\n- [Vsevolod Poletaev](https://linkedin.com/in/vsevolod-poletaev-468071165) ([hexfaker](https://github.com/hexfaker))\n- [Yury Kashnitsky](https://www.linkedin.com/in/kashnitskiy/) ([yorko](https://github.com/Yorko))\n\n\n### Trusted by\n- [Awecom](https://www.awecom.com)\n- Researchers at the [Center for Translational Research in Neuroimaging and Data Science (TReNDS)](https://trendscenter.org)\n- [Deep Learning School](https://en.dlschool.org)\n- Researchers at [Emory University](https://www.emory.edu)\n- [Evil Martians](https://evilmartians.com)\n- Researchers at the [Georgia Institute of Technology](https://www.gatech.edu)\n- Researchers at [Georgia State University](https://www.gsu.edu)\n- [Helios](http://helios.to)\n- [HPCD Lab](https://www.hpcdlab.com)\n- [iFarm](https://ifarmproject.com)\n- [Kinoplan](http://kinoplan.io/)\n- Researchers at the [Moscow Institute of Physics and Technology](https://mipt.ru/english/)\n- [Neuromation](https://neuromation.io)\n- [Poteha Labs](https://potehalabs.com/en/)\n- [Provectus](https://provectus.com)\n- Researchers at the [Skolkovo Institute of Science and Technology](https://www.skoltech.ru/en)\n- [SoftConstruct](https://www.softconstruct.io/)\n- Researchers at [Tinkoff](https://www.tinkoff.ru/eng/)\n- Researchers at [Yandex.Research](https://research.yandex.com)\n\n\n### Citation\n\nPlease use this BibTeX entry if you want to cite this repository in your 
publications:\n\n    @misc{catalyst,\n        author = {Kolesnikov, Sergey},\n        title = {Catalyst - Accelerated deep learning R&D},\n        year = {2018},\n        publisher = {GitHub},\n        journal = {GitHub repository},\n        howpublished = {\\url{https://github.com/catalyst-team/catalyst}},\n    }\n\n\n",
    "bugtrack_url": null,
    "license": "Apache License 2.0",
    "summary": "Catalyst fork compatible with PDM",
    "version": "22.4.1",
    "split_keywords": [
        "machine learning",
        "distributed computing",
        "deep learning",
        "reinforcement learning",
        "computer vision",
        "natural language processing",
        "recommendation systems",
        "information retrieval",
        "pytorch"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "55142af8dcf18c22488df22ad64703e60eff6a1d578f949a0f33ec417d849e68",
                "md5": "9637573f2cb7607f6aac1b52db8da4ec",
                "sha256": "fa3a6b1fea39c5931d04b014033c4458eb7e17759b7415c352039eda31dc7189"
            },
            "downloads": -1,
            "filename": "catalyst_pdm-22.4.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "9637573f2cb7607f6aac1b52db8da4ec",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7.0",
            "size": 446865,
            "upload_time": "2023-04-03T21:30:53",
            "upload_time_iso_8601": "2023-04-03T21:30:53.645401Z",
            "url": "https://files.pythonhosted.org/packages/55/14/2af8dcf18c22488df22ad64703e60eff6a1d578f949a0f33ec417d849e68/catalyst_pdm-22.4.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e7d2e71bf9eabb7ddab8d26e2deb536a1b7eb64d3e46413424ebeee9e179ec2e",
                "md5": "da392084e195b3ba3c88ffa75fd102b8",
                "sha256": "e3ca724cbf5b631cba409307d5103c2295e34341fe68272d982d6c5a4ec8ba0a"
            },
            "downloads": -1,
            "filename": "catalyst_pdm-22.4.1.tar.gz",
            "has_sig": false,
            "md5_digest": "da392084e195b3ba3c88ffa75fd102b8",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7.0",
            "size": 320990,
            "upload_time": "2023-04-03T21:30:57",
            "upload_time_iso_8601": "2023-04-03T21:30:57.298120Z",
            "url": "https://files.pythonhosted.org/packages/e7/d2/e71bf9eabb7ddab8d26e2deb536a1b7eb64d3e46413424ebeee9e179ec2e/catalyst_pdm-22.4.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-04-03 21:30:57",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "AndrewLaptev",
    "github_project": "catalyst_pdm",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "catalyst-pdm"
}
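
As a quick integrity check, the `sha256` digests in the `urls` entries above can be verified against a downloaded artifact. A minimal sketch in Python (the local filename is an assumption; the expected hash is copied verbatim from the sdist entry above):

```python
import hashlib

# Expected sha256, copied from the sdist entry in the "urls" list above.
EXPECTED_SHA256 = "e3ca724cbf5b631cba409307d5103c2295e34341fe68272d982d6c5a4ec8ba0a"

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Assumes catalyst_pdm-22.4.1.tar.gz was downloaded into the current directory.
assert sha256_of("catalyst_pdm-22.4.1.tar.gz") == EXPECTED_SHA256
```

Streaming the file in fixed-size chunks keeps memory use flat regardless of the artifact's size.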
        