torchkeras


Name: torchkeras
Version: 4.0.2
Home page: https://github.com/lyhue1991/torchkeras
Summary: pytorch❤️keras
Upload time: 2024-10-23 14:36:48
Author: lyhue1991, octopus, Laugh
Requires Python: >=3.5
Keywords: vlog, deep-learning, dl, pytorch, torch, keras
            
# Pytorch❤️Keras

English | [简体中文](README.md)


The torchkeras library is a simple tool for training neural networks in PyTorch in a Keras style. 😋😋


## 1, Introduction


With torchkeras, you don't need to write your training loop in many lines of code; all you need are the two steps below:

(i) Create your network, then wrap it together with the loss_fn in torchkeras.KerasModel:
`model = torchkeras.KerasModel(net,loss_fn=nn.BCEWithLogitsLoss())`.

(ii) Fit your model with the training data and validation data.


The main code for using torchkeras looks like this:

```python
import torch
from torch import nn
import torchmetrics
import torchkeras

# net is your torch.nn.Module; dl_train and dl_val are your DataLoaders
model = torchkeras.KerasModel(net,
                              loss_fn = nn.BCEWithLogitsLoss(),
                              optimizer = torch.optim.Adam(net.parameters(), lr=0.001),
                              metrics_dict = {"acc": torchmetrics.Accuracy(task='binary')}
                             )
dfhistory = model.fit(train_data=dl_train,
                      val_data=dl_val,
                      epochs=20,
                      patience=3,              # early stopping patience
                      ckpt_path='checkpoint',  # where the best weights are saved
                      monitor="val_acc",       # metric to watch
                      mode="max",              # higher is better for val_acc
                      plot=True                # live training curves
                     )
```
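For completeness, here is a minimal sketch of what `net`, `dl_train`, and `dl_val` might look like. The toy network and random data below are illustrative assumptions, not part of torchkeras:

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

# hypothetical toy binary-classification data
X_train, X_val = torch.randn(1000, 8), torch.randn(200, 8)
y_train = (X_train.sum(dim=1, keepdim=True) > 0).float()
y_val = (X_val.sum(dim=1, keepdim=True) > 0).float()

dl_train = DataLoader(TensorDataset(X_train, y_train), batch_size=32, shuffle=True)
dl_val = DataLoader(TensorDataset(X_val, y_val), batch_size=32)

# a small MLP emitting raw logits, to match nn.BCEWithLogitsLoss
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
```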

![](./data/torchkeras_plot.gif)


Besides, you can use torchkeras.VLog to get a dynamic training visualization anywhere you like:

```python
import time
import math,random
from torchkeras import VLog

epochs = 10
batchs = 30

#0, init vlog
vlog = VLog(epochs, monitor_metric='val_loss', monitor_mode='min') 

#1, log_start 
vlog.log_start() 

for epoch in range(epochs):
    
    #train
    for step in range(batchs):
        
        #2, log_step (for training step)
        vlog.log_step({'train_loss':100-2.5*epoch+math.sin(2*step/batchs)}) 
        time.sleep(0.05)
        
    #eval    
    for step in range(20):
        
        #3, log_step (for eval step)
        vlog.log_step({'val_loss':100-2*epoch+math.sin(2*step/batchs)},training=False)
        time.sleep(0.05)
        
    #4, log_epoch
    vlog.log_epoch({'val_loss':100 - 2*epoch+2*random.random()-1,
                    'train_loss':100-2.5*epoch+2*random.random()-1})  

# 5, log_end
vlog.log_end()

```
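The same four calls drop straight into a real training loop. Below is a minimal sketch that reuses the `net`, `dl_train`, and `dl_val` assumed in the earlier sketch, logging actual losses instead of simulated ones:

```python
import torch
from torch import nn
from torchkeras import VLog

# assumes net, dl_train, dl_val are defined as in the earlier sketch
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
loss_fn = nn.BCEWithLogitsLoss()
epochs = 10

vlog = VLog(epochs, monitor_metric='val_loss', monitor_mode='min')
vlog.log_start()

for epoch in range(epochs):
    net.train()
    train_losses = []
    for features, labels in dl_train:
        loss = loss_fn(net(features), labels)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        train_losses.append(loss.item())
        vlog.log_step({'train_loss': loss.item()})

    net.eval()
    val_losses = []
    with torch.no_grad():
        for features, labels in dl_val:
            val_loss = loss_fn(net(features), labels).item()
            val_losses.append(val_loss)
            vlog.log_step({'val_loss': val_loss}, training=False)

    vlog.log_epoch({'train_loss': sum(train_losses) / len(train_losses),
                    'val_loss': sum(val_losses) / len(val_losses)})

vlog.log_end()
```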


**This project may look powerful, but the source code is quite simple.**

**Actually, it is only about 200 lines of Python code.**

**If you want to understand or modify some details of this project, feel free to read and change the source code!!!**


## 2, Features 


The main features supported by torchkeras are listed below, along with the version in which each feature was introduced and the library it uses or is inspired by.



|feature| supported from version | library used or inspired by |
|:----|:-------------------:|:--------------|
|✅ training progress bar | 3.0.0 | uses tqdm, inspired by keras |
|✅ training metrics | 3.0.0 | inspired by pytorch_lightning |
|✅ notebook visualization during training | 3.8.0 | inspired by fastai |
|✅ early stopping | 3.0.0 | inspired by keras |
|✅ GPU training | 3.0.0 | uses accelerate |
|✅ multi-GPU training (ddp) | 3.6.0 | uses accelerate |
|✅ fp16/bf16 training | 3.6.0 | uses accelerate |
|✅ tensorboard callback | 3.7.0 | uses tensorboard |
|✅ wandb callback | 3.7.0 | uses wandb |
|✅ VLog | 3.9.5 | uses matplotlib |
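Since the GPU and DDP features are built on huggingface accelerate, one way to run multi-GPU training from a notebook is accelerate's own `notebook_launcher`. The sketch below is a hedged illustration of the general accelerate pattern, not torchkeras's documented entry point (the ddp/tpu example notebook linked in the next section shows the supported path), and `build_net_and_loaders` is a hypothetical helper standing in for your own setup code:

```python
from torch import nn
import torchkeras
from accelerate import notebook_launcher

def training_loop():
    # rebuild net, dl_train, dl_val inside the launched function so each
    # DDP process gets its own copy (hypothetical helper, not torchkeras API)
    net, dl_train, dl_val = build_net_and_loaders()
    model = torchkeras.KerasModel(net, loss_fn=nn.BCEWithLogitsLoss())
    model.fit(train_data=dl_train, val_data=dl_val, epochs=20)

# launch the same loop on 2 processes, e.g. one per GPU
notebook_launcher(training_loop, num_processes=2)
```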



## 3, Basic Examples

You can follow these full examples to get started with torchkeras.


|example| read notebook code     |  run example in kaggle|
|:----|:-------------------------|:-----------:|
|①kerasmodel basic 🔥🔥| [**torchkeras.KerasModel example**](./01,kerasmodel_example.ipynb) | <div><a href="https://www.kaggle.com/lyhue1991/kerasmodel-example"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a></div> |
|②kerasmodel wandb 🔥🔥🔥| [**torchkeras.KerasModel with wandb demo**](./02,kerasmodel_wandb_demo.ipynb) | <div><a href="https://www.kaggle.com/lyhue1991/kerasmodel-wandb-example"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a></div> |
|③kerasmodel tuning 🔥🔥🔥| [**torchkeras.KerasModel with wandb sweep demo**](./03,kerasmodel_tuning_demo.ipynb) | <div><a href="https://www.kaggle.com/lyhue1991/torchkeras-loves-wandb-sweep"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a></div> |
|④kerasmodel tensorboard | [**torchkeras.KerasModel with tensorboard example**](./04,kerasmodel_tensorboard_demo.ipynb) | |
|⑤kerasmodel ddp/tpu | [**torchkeras.KerasModel ddp tpu examples**](https://www.kaggle.com/code/lyhue1991/torchkeras-ddp-tpu-examples) | <div><a href="https://www.kaggle.com/lyhue1991/torchkeras-ddp-tpu-examples"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a></div> |
|⑥ VLog for lightgbm/ultralytics/transformers 🔥🔥🔥| [**VLog example**](./10,vlog_example.ipynb) | |


## 4, Advanced Examples

In some use cases, because model input types differ, you need to rewrite the StepRunner of KerasModel. Here are some examples; a sketch of a custom StepRunner is given after the table.

|example|model library  |notebook |
|:----|:-----------|:-----------:|
||||
|**RL**|||
|ReinforcementLearning——Q-Learning🔥🔥|- |[Q-learning](./examples/Q-learning.ipynb)|
|ReinforcementLearning——DQN|- |[DQN](./examples/DQN.ipynb)|
||||
|**Tabular**|||
|BinaryClassification——LightGBM |- |[LightGBM](./examples/LightGBM二分类.ipynb)|
|MultiClassification——FTTransformer🔥🔥🔥🔥🔥|- |[FTTransformer](./examples/FTTransformer多分类.ipynb)|
|BinaryClassification——FM|- |[FM](./examples/FM二分类.ipynb)|
|BinaryClassification——DeepFM|- |[DeepFM](./examples/DeepFM二分类.ipynb)|
|BinaryClassification——DeepCross|- |[DeepCross](./examples/DeepCross二分类.ipynb)|
||||
|**CV**|||
|ImageClassification——Resnet|  -  | [Resnet](./examples/ResNet.ipynb) |
|ImageSegmentation——UNet|  - | [UNet](./examples/UNet.ipynb) |
|ObjectDetection——SSD| -  | [SSD](./examples/SSD.ipynb) |
|OCR——CRNN 🔥🔥| -  | [CRNN-CTC](./examples/CRNN_CTC.ipynb) |
|ImageClassification——SwinTransformer|timm| [Swin](./examples/SwinTransformer——timm.ipynb)|
|ObjectDetection——FasterRCNN| torchvision  |  [FasterRCNN](./examples/FasterRCNN——vision.ipynb) | 
|ImageSegmentation——DeepLabV3+ | segmentation_models_pytorch |  [DeepLabV3+](./examples/Deeplabv3plus——smp.ipynb) |
|InstanceSegmentation——MaskRCNN | detectron2 |  [MaskRCNN](./examples/MaskRCNN——detectron2.ipynb) |
|ObjectDetection——YOLOv8 🔥🔥🔥| ultralytics |  [YOLOv8](./examples/YOLOV8_Detect——ultralytics.ipynb) |
|InstanceSegmentation——YOLOv8 🔥🔥🔥| ultralytics |  [YOLOv8](./examples/YOLOV8_Segment——ultralytics.ipynb) |
||||
|**NLP**|||
|Seq2Seq——Transformer🔥🔥| - |  [Transformer](./examples/Dive_into_Transformer.ipynb) |
|TextGeneration——Llama🔥| - |  [Llama](./examples/Dive_into_Llama.ipynb) |
|TextClassification——BERT | transformers |  [BERT](./examples/BERT——transformers.ipynb) |
|TokenClassification——BERT | transformers |  [BERT_NER](./examples/BERT_NER——transformers.ipynb) |
|FinetuneLLM——ChatGLM2_LoRA 🔥🔥🔥| transformers,peft |  [ChatGLM2_LoRA](./examples/ChatGLM2_LoRA——transformers.ipynb) |
|FinetuneLLM——ChatGLM2_AdaLoRA 🔥| transformers,peft |  [ChatGLM2_AdaLoRA](./examples/ChatGLM2_AdaLoRA——transformers.ipynb) |
|FinetuneLLM——ChatGLM2_QLoRA🔥 | transformers |  [ChatGLM2_QLoRA_Kaggle](./examples/ChatGLM2_QLoRA_Kaggle——transformers.ipynb) |
|FinetuneLLM——BaiChuan13B_QLoRA🔥 | transformers |  [BaiChuan13B_QLoRA](./examples/BaiChuan13B_QLoRA——transformers.ipynb) |
|FinetuneLLM——BaiChuan13B_NER 🔥🔥🔥| transformers |  [BaiChuan13B_NER](./examples/BaiChuan13B_NER——transformers.ipynb) |
|FinetuneLLM——BaiChuan13B_MultiRounds 🔥| transformers |  [BaiChuan13B_MultiRounds](./examples/BaiChuan13B_MultiRounds——transformers.ipynb) |
|FinetuneLLM——Qwen7B_MultiRounds 🔥🔥🔥| transformers |  [Qwen7B_MultiRounds](./examples/Qwen7B_MultiRounds——transformers.ipynb) |
|FinetuneLLM——BaiChuan2_13B 🔥| transformers |  [BaiChuan2_13B](./examples/BaiChuan2_13B——transformers.ipynb) |


**If you want to understand or modify some details of this project, feel free to read and change the source code!!!**
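As a rough illustration, a custom StepRunner might look like the sketch below. The constructor signature, attribute names, and the class-attribute hook are assumptions based on the pattern used in this project's example notebooks, so check the source (it is only ~200 lines) before relying on them:

```python
import torchkeras

class MyStepRunner:
    # assumed signature, mirroring the pattern in the example notebooks
    def __init__(self, net, loss_fn, accelerator, stage="train",
                 metrics_dict=None, optimizer=None, lr_scheduler=None):
        self.net, self.loss_fn, self.accelerator = net, loss_fn, accelerator
        self.stage = stage
        self.metrics_dict = metrics_dict or {}
        self.optimizer, self.lr_scheduler = optimizer, lr_scheduler

    def __call__(self, batch):
        # unpack the batch in whatever way your model expects
        features, labels = batch
        preds = self.net(features)
        loss = self.loss_fn(preds, labels)

        # backward pass only during training
        if self.stage == "train" and self.optimizer is not None:
            self.accelerator.backward(loss)
            self.optimizer.step()
            if self.lr_scheduler is not None:
                self.lr_scheduler.step()
            self.optimizer.zero_grad()

        # losses and metrics reported back to the training loop
        step_losses = {self.stage + "_loss": loss.item()}
        step_metrics = {self.stage + "_" + name: metric(preds, labels).item()
                        for name, metric in self.metrics_dict.items()}
        return step_losses, step_metrics

# attach the custom runner (assumed hook, as used in the example notebooks)
torchkeras.KerasModel.StepRunner = MyStepRunner
```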

For any other questions, you can contact the author through the WeChat official account below:

**算法美食屋** 


![](https://tva1.sinaimg.cn/large/e6c9d24egy1h41m2zugguj20k00b9q46.jpg)


            
