funcodec

Name: funcodec
Version: 0.2.0
Home page: https://github.com/alibaba-damo-academy/FunCodec.git
Summary: FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit for Neural Speech Codec
Author: Speech Lab, Alibaba Group, China
Requires Python: >=3.8.0
License: The MIT License
Upload time: 2023-12-22 08:21:55

# FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit for Neural Speech Codec

This project is still a work in progress.

## News
- 2023.12.22 🎉🎉: We release the training and inference recipes for LauraTTS as well as pre-trained models.
[LauraTTS](https://arxiv.org/abs/2310.04673) is a powerful codec-based zero-shot text-to-speech synthesizer
that outperforms VALL-E in terms of semantic consistency and speaker similarity.
Please refer to `egs/text2speech_laura/README.md` for more details.

## Installation

```shell
git clone https://github.com/alibaba/FunCodec.git && cd FunCodec
pip install --editable ./
```
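
As a quick sanity check (assuming the wheel's top-level package is named `funcodec`, as the PyPI listing suggests), the install should make the package importable:

```shell
# verify the editable install by importing the top-level package
python -c "import funcodec; print('FunCodec import OK')"
```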

## Available models
🤗 links to the Huggingface model hub, while ⭐ links to ModelScope.

| Model name                                                          | Model hub | Corpus | Bitrate (bps) | Parameters | FLOPs |
|:--------------------------------------------------------------------|:---------:|:------:|:-------------:|:----------:|:-----:|
| audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch             | [🤗](https://huggingface.co/alibaba-damo/audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch) [⭐](https://www.modelscope.cn/models/damo/audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch/summary) | General | 250~8000 | 57.83 M | 7.73 G |
| audio_codec-encodec-zh_en-general-16k-nq32ds320-pytorch             | [🤗](https://huggingface.co/alibaba-damo/audio_codec-encodec-zh_en-general-16k-nq32ds320-pytorch) [⭐](https://www.modelscope.cn/models/damo/audio_codec-encodec-zh_en-general-16k-nq32ds320-pytorch/summary) | General | 500~16000 | 14.85 M | 3.72 G |
| audio_codec-encodec-en-libritts-16k-nq32ds640-pytorch               | [🤗](https://huggingface.co/alibaba-damo/audio_codec-encodec-en-libritts-16k-nq32ds640-pytorch) [⭐](https://www.modelscope.cn/models/damo/audio_codec-encodec-en-libritts-16k-nq32ds640-pytorch/summary) | LibriTTS | 250~8000 | 57.83 M | 7.73 G |
| audio_codec-encodec-en-libritts-16k-nq32ds320-pytorch               | [🤗](https://huggingface.co/alibaba-damo/audio_codec-encodec-en-libritts-16k-nq32ds320-pytorch) [⭐](https://www.modelscope.cn/models/damo/audio_codec-encodec-en-libritts-16k-nq32ds320-pytorch/summary) | LibriTTS | 500~16000 | 14.85 M | 3.72 G |
| audio_codec-freqcodec_magphase-en-libritts-16k-gr8nq32ds320-pytorch | [🤗](https://huggingface.co/alibaba-damo/audio_codec-freqcodec_magphase-en-libritts-16k-gr8nq32ds320-pytorch) [⭐](https://www.modelscope.cn/models/damo/audio_codec-freqcodec_magphase-en-libritts-16k-gr8nq32ds320-pytorch/summary) | LibriTTS | 500~16000 | 4.50 M | 2.18 G |
| audio_codec-freqcodec_magphase-en-libritts-16k-gr1nq32ds320-pytorch | [🤗](https://huggingface.co/alibaba-damo/audio_codec-freqcodec_magphase-en-libritts-16k-gr1nq32ds320-pytorch) [⭐](https://www.modelscope.cn/models/damo/audio_codec-freqcodec_magphase-en-libritts-16k-gr1nq32ds320-pytorch/summary) | LibriTTS | 500~16000 | 0.52 M | 0.34 G |
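
The tags in the model names describe the codec configuration: `nq32` denotes 32 residual quantizers and `ds640`/`ds320` the waveform downsampling ratio at 16 kHz (`gr` in the FreqCodec names likely denotes the number of channel groups). Assuming 1024-entry codebooks (10 bits per code), an assumption that reproduces the bitrate ranges above, the table's bitrates follow from simple arithmetic:

```shell
# frame rate = sample rate / downsampling ratio; each quantizer adds 10 bits per frame
# (assumes 1024-entry codebooks, which matches the 250~8000 and 500~16000 bps ranges)
for ds in 640 320; do
  frame_rate=$((16000 / ds))
  echo "ds${ds}: ${frame_rate} frames/s, $((frame_rate * 10)) bps per quantizer, up to $((frame_rate * 10 * 32)) bps with nq32"
done
```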

## Model Download
### Download models from ModelScope
Please refer to `egs/LibriTTS/codec/encoding_decoding.sh` to download pre-trained models:
```shell
cd egs/LibriTTS/codec
model_name=audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch
bash encoding_decoding.sh --stage 0 --model_name ${model_name} --model_hub modelscope
# The pre-trained model will be downloaded to exp/audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch
```

### Download models from Huggingface
Please refer to `egs/LibriTTS/codec/encoding_decoding.sh` to download pre-trained models:
```shell
cd egs/LibriTTS/codec
model_name=audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch
bash encoding_decoding.sh --stage 0 --model_name ${model_name} --model_hub huggingface
# The pre-trained model will be downloaded to exp/audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch
```
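
Whichever hub you use, you can confirm that the checkpoint landed where the comments above indicate:

```shell
# the downloaded model files should now be under exp/<model_name>
ls exp/${model_name}
```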

## Inference
### Batch inference
Please refer to `egs/LibriTTS/codec/encoding_decoding.sh` to perform encoding and decoding.
Extract codes from an input file `input_wav.scp`;
the codes will be saved to `output_dir/codecs.txt` in JSONL format.
```shell
cd egs/LibriTTS/codec
bash encoding_decoding.sh --stage 1 --batch_size 16 --num_workers 4 --gpu_devices "0,1" \
  --model_dir exp/${model_name} --bit_width 16000 \
  --wav_scp input_wav.scp  --out_dir outputs/codecs/
# input_wav.scp has the following format:
# uttid1 path/to/file1.wav
# uttid2 path/to/file2.wav
# ...
```
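
If you do not have an `input_wav.scp` yet, one way to build it from a directory of waveforms (reusing the `find`/`awk` pattern from the data-preparation section below, with the hypothetical directory `path/to/wavs`):

```shell
# use each file's name as the utterance id, followed by its full path
find path/to/wavs -iname "*.wav" | awk -F '/' '{print $(NF),$0}' | sort > input_wav.scp
```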

Decode codes with an input file `codecs.txt`;
the reconstructed waveforms will be saved to `output_dir/logdir/output.*/*.wav`.
```shell
bash encoding_decoding.sh --stage 2 --batch_size 16 --num_workers 4 --gpu_devices "0,1" \
  --model_dir exp/${model_name} --bit_width 16000 --file_sampling_rate 16000 \
  --wav_scp codecs.txt --out_dir outputs/recon_wavs 
# codecs.txt is the output of the above encoding stage, which has the following format:
# uttid1 [[[1, 2, 3, ...],[2, 3, 4, ...], ...]]
# uttid2 [[[9, 7, 5, ...],[3, 1, 2, ...], ...]]
```
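
For downstream use, each line of `codecs.txt` can be split into an utterance id and a nested list of token ids. A minimal sketch, assuming the lists are JSON-parsable as the format above suggests:

```shell
# parse the first line of the encoder output into (uttid, codes);
# the nesting follows the example above: codes[0] holds one id list per quantizer
python - <<'EOF'
import json

with open("outputs/codecs/codecs.txt") as f:
    line = f.readline().strip()
uttid, payload = line.split(maxsplit=1)
codes = json.loads(payload)
print(uttid, "->", len(codes[0]), "quantizer streams,", len(codes[0][0]), "frames each")
EOF
```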

<!---
### Demo inference
--->

## Training
### Training on open-source corpora
For commonly used open-source corpora, you can train a model using the recipes in the `egs` directory.
For example, to train a model on the `LibriTTS` corpus, use `egs/LibriTTS/codec/run.sh`:
```shell
# enter the LibriTTS recipe directory
cd egs/LibriTTS/codec
# run data downloading, preparation and training stages with 2 GPUs (device 0 and 1)
bash run.sh --stage 0 --stop_stage 3 --gpu_devices 0,1 --gpu_num 2
```
We recommend running the script stage by stage to get an overview of FunCodec.
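
For example, pinning `--stage` and `--stop_stage` to the same value runs a single stage; the per-stage labels below are assumptions based on the stage 0-3 description in the comment above:

```shell
# run one stage at a time by setting --stage and --stop_stage to the same value
bash run.sh --stage 0 --stop_stage 0   # data downloading (per the comment above)
bash run.sh --stage 1 --stop_stage 1   # data preparation (assumed)
bash run.sh --stage 3 --stop_stage 3 --gpu_devices 0,1 --gpu_num 2   # training (assumed)
```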

### Training on customized data
For corpora not covered by the recipes, or for your own customized dataset, you can prepare the data yourself.
In general, FunCodec uses Kaldi-style `wav.scp` files to organize the data files.
`wav.scp` has the following format:
```shell
# for waveform files
uttid1 /path/to/uttid1.wav
uttid2 /path/to/uttid2.wav
# for kaldi-ark files
uttid3 /path/to/ark1.wav:10
uttid4 /path/to/ark1.wav:200
uttid5 /path/to/ark2.wav:10
```
As the example above shows, FunCodec supports mixing waveform files and Kaldi-ark entries
in one `wav.scp` file for both training and inference.
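
Before training, it is worth checking that every path in a prepared `wav.scp` actually exists. A small sketch that also strips the `:offset` suffix of Kaldi-ark entries before the check:

```shell
# report entries in wav.scp whose file (second column, ":offset" removed) is missing
awk '{print $2}' wav.scp | sed 's/:[0-9]*$//' | while read -r f; do
  [ -f "$f" ] || echo "missing: $f"
done
```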
Here is a demo script to train a model on your customized dataset named `foo`:
```shell
cd egs/LibriTTS/codec
# 0. make the directory for train, dev and test sets
mkdir -p dump/foo/train dump/foo/dev dump/foo/test

# 1a. if you already have the wav.scp files, just place them under the corresponding directories
mv train.scp dump/foo/train/wav.scp; mv dev.scp dump/foo/dev/wav.scp; mv test.scp dump/foo/test/wav.scp;
# 1b. if you don't have the wav.scp file, you can prepare it as follows
find path/to/train_set/ -iname "*.wav" | awk -F '/' '{print $(NF),$0}' | sort > dump/foo/train/wav.scp
find path/to/dev_set/   -iname "*.wav" | awk -F '/' '{print $(NF),$0}' | sort > dump/foo/dev/wav.scp
find path/to/test_set/  -iname "*.wav" | awk -F '/' '{print $(NF),$0}' | sort > dump/foo/test/wav.scp

# 2. collate shape files
mkdir -p exp/foo_states/train exp/foo_states/dev
torchrun --nproc_per_node=4 --master_port=1234 scripts/gen_wav_length.py --wav_scp dump/foo/train/wav.scp --out_dir exp/foo_states/train/wav_length
cat exp/foo_states/train/wav_length/wav_length.*.txt | shuf > exp/foo_states/train/speech_shape
torchrun --nproc_per_node=4 --master_port=1234 scripts/gen_wav_length.py --wav_scp dump/foo/dev/wav.scp --out_dir exp/foo_states/dev/wav_length
cat exp/foo_states/dev/wav_length/wav_length.*.txt | shuf > exp/foo_states/dev/speech_shape

# 3. train the model with 2 GPUs (device 4 and 5) on the customized dataset (foo)
bash run.sh --gpu_devices 4,5 --gpu_num 2 --dumpdir dump/foo --state_dir foo_states
```
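
After step 2, each `speech_shape` file should map utterance ids to waveform lengths; the exact per-line format noted below is an assumption based on the script's name:

```shell
# peek at a few collated shape entries (assumed format: "<uttid> <num_samples>")
head -n 3 exp/foo_states/train/speech_shape
```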

## Acknowledgements

1. FunCodec shares a consistent design with [FunASR](https://github.com/alibaba/FunASR), including the dataloader, model definition, and so on.
2. We borrowed a lot of code from [Kaldi](http://kaldi-asr.org/) for data preparation.
3. We borrowed a lot of code from [ESPnet](https://github.com/espnet/espnet); FunCodec follows the training and finetuning pipelines of ESPnet.
4. We borrowed the model architecture design from [EnCodec](https://github.com/facebookresearch/encodec) and [EnCodec_Trainer](https://github.com/Mikxox/EnCodec_Trainer).

## License
This project is licensed under [The MIT License](https://opensource.org/licenses/MIT). 
FunCodec also contains various third-party components and some code modified from other repos 
under other open source licenses.

## Citations

``` bibtex
@misc{du2023funcodec,
      title={FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit for Neural Speech Codec},
      author={Zhihao Du and Shiliang Zhang and Kai Hu and Siqi Zheng},
      year={2023},
      eprint={2309.07405},
      archivePrefix={arXiv},
      primaryClass={cs.Sound}
}
```

            
