leksara

Name: leksara
Version: 0.0.4
Summary: Indonesian text-processing library for the e-commerce domain (cleaning, PII masking, review mining, pipeline).
Upload time: 2025-09-20 09:04:41
Author: Rhendy Saragih
Requires Python: >=3.9
License: MIT
Keywords: nlp, indonesian, text-cleaning, ecommerce, pii, preprocessing, review-mining, normalization
# Leksara

## Description
**Leksara** is a Python toolkit designed to streamline the preprocessing and cleaning of raw text data for Data Scientists and Machine Learning Engineers. It focuses on handling messy and noisy text data from various domains such as e-commerce, social media, and medical documents. The tool helps clean text by removing punctuation, stopwords, contractions, and other irrelevant content, allowing for efficient data analysis and machine learning model preparation.

## Key Features
- **Basic Cleaning Pipeline**: A straightforward pipeline to clean raw text data by handling common tasks like punctuation removal, casing normalization, and stopword filtering.
- **Advanced Customization**: Users can create custom cleaning pipelines tailored to specific datasets, including support for regex pattern matching, stemming, and custom dictionaries.
- **Preset Options**: Includes predefined cleaning presets for various domains like e-commerce, allowing for one-click cleaning.
- **Slang and Informal Text Handling**: Users can define their own custom dictionaries for slang terms and informal language, which is especially useful for Indonesian text (see the sketch after this list).
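
For example, a custom slang dictionary can be as simple as a token-to-replacement map. The helper below is a minimal illustration only; `replace_slang` and its signature are hypothetical, not necessarily the package's actual API for custom dictionaries.

```python
# Minimal sketch of a custom slang dictionary for Indonesian text.
# NOTE: `replace_slang` is a hypothetical helper for illustration;
# the package's real API for custom dictionaries may differ.
custom_slang = {
    "brg": "barang",   # goods
    "krg": "kurang",   # less
    "bgs": "bagus",    # good
    "ga": "tidak",     # not
}

def replace_slang(text: str, slang_map: dict[str, str]) -> str:
    """Replace slang tokens with their normalized forms, word by word."""
    return " ".join(slang_map.get(token, token) for token in text.split())

print(replace_slang("brg krg bgs", custom_slang))  # -> barang kurang bagus
```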

## Usage Examples

### Basic Usage: Basic Cleaning Pipeline
This example demonstrates how to clean e-commerce product reviews using a pre-built preset.

```python
from leksara import Leksara

df['cleaned_review'] = Leksara(df['review_text'], preset='ecommerce_review')
print(df[['review_id', 'cleaned_review']])
```

**Input Data (df):**

| review_id | review_text                            |
|-----------|----------------------------------------|
| 1         | `<p>brgnya ORI & pengiriman cepat. Mantulll 👍</p>` |
| 2         | `Kualitasnya krg bgs, ga sesuai ekspektasi...` |

**Output Data:**

| review_id | cleaned_review                 |
|-----------|---------------------------------|
| 1         | `barang nya original pengiriman cepat mantap` |
| 2         | `kualitasnya kurang bagus tidak sesuai ekspektasi` |

### Advanced Usage: Custom Cleaning Pipeline
Customize the pipeline to mask phone numbers and normalize whitespace in chat logs.

```python
from leksara import Leksara
from leksara.functions import to_lowercase, normalize_whitespace
from leksara.patterns import MASK_PHONE_NUMBER

custom_pipeline = {
    'patterns': [MASK_PHONE_NUMBER],
    'functions': [to_lowercase, normalize_whitespace]
}

df['safe_message'] = Leksara(df['chat_message'], pipeline=custom_pipeline)
print(df[['chat_id', 'safe_message']])
```

**Input Data (df):**

| chat_id | chat_message                           |
|---------|----------------------------------------|
| 101     | `Hi kak, pesanan saya INV/123 blm sampai. No HP saya 081234567890` |
| 102     | `Tolong dibantu ya sis, thanks`        |

**Output Data:**

| chat_id | safe_message                           |
|---------|----------------------------------------|
| 101     | `hi kak, pesanan saya inv/123 blm sampai. no hp saya [PHONE_NUMBER]` |
| 102     | `tolong dibantu ya sis, thanks`        |

## Goals & Objectives
- Provide an intuitive and adaptable cleaning tool for Indonesian text, focusing on domains like e-commerce.
- Enable Data Scientists and ML Engineers to clean and preprocess text with minimal effort.
- Allow for deep customization through configuration options and the use of custom dictionaries.

## Success Metrics
- **On-time Delivery**: Targeted release by October 15, 2025.
- **Processing Speed**: Clean a 10,000-row Pandas Series in under 5 seconds (see the timing sketch after this list).
- **Cleaning Accuracy**: Achieve over 95% accuracy for core cleaning functions.
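
The processing-speed target is easy to check against your own data. Below is a minimal timing sketch, assuming the callable API shown in the usage examples above; actual numbers will vary with hardware and pipeline configuration.

```python
import time

import pandas as pd
from leksara import Leksara

# 10,000 synthetic reviews, mirroring the speed target above.
reviews = pd.Series(["brgnya ORI & pengiriman cepat. Mantulll"] * 10_000)

start = time.perf_counter()
cleaned = Leksara(reviews, preset='ecommerce_review')
elapsed = time.perf_counter() - start
print(f"Cleaned {len(reviews):,} rows in {elapsed:.2f}s")  # target: < 5s
```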

## Folder Structure
Below is the recommended folder structure for organizing the project:
```
[Leksara]/
├── pyproject.toml                  # packaging & deps
├── setup.py                        # setup (legacy)
├── requirements.txt                # runtime deps
├── README.md                       # overview & usage
├── REPOSITORY_GUIDELINES.md
├── LICENSE
├── .gitignore
├── data/                           # (optional) non-package data
│   ├── raw/
│   ├── processed/
│   └── external/
├── docs/
│   ├── index.md
│   ├── usage.md
│   ├── presets.md
│   └── benchmarks.md
├── leksara/                        # main package (lowercase)
│   ├── __init__.py                 # public API surface
│   ├── clean.py                    # basic_clean orchestrator
│   ├── presets.py                  # PRESETS, get_preset(), apply_preset()
│   ├── utils.py                    # legacy helpers (unicode normalize, control chars)
│   ├── cleaning.py                 # remove_tags, case_normal, remove_whitespace (+emoji fallback)
│   ├── miner.py                    # rating, elongation, acronyms, slang, contraction, normalize_word
│   ├── pii.py                      # remove/replace phone|email|address|id
│   ├── pipeline.py                 # shim: exports PipelineConfig, ReviewChain
│   ├── cartboard/
│   │   ├── __init__.py
│   │   ├── frame.py                # build_frame(), REQUIRED_COLUMNS
│   │   └── flags.py                # column-flag heuristics
│   ├── review_chain/
│   │   ├── __init__.py
│   │   ├── pipeline.py             # PipelineConfig, ReviewChain, review_chain()
│   │   ├── benchmark.py            # timing per stage & total
│   │   └── schemas.py              # pipeline/preset configuration types
│   ├── utils/
│   │   ├── __init__.py             # normalize_text, unicode_normalize_nfkc, strip_control_chars, io helpers
│   │   ├── unicode.py              # NFKC normalize
│   │   ├── io.py                   # importlib.resources helpers
│   │   └── regex_cache.py          # precompile & cache regex patterns
│   ├── functions/                  # granular modules + legacy shims
│   │   ├── __init__.py
│   │   ├── cartboard.py            # legacy shim (if needed)
│   │   ├── cleaning.py             # function-level cleaning utilities
│   │   ├── miner.py                # review funcs (rating, acronyms, slang, etc.)
│   │   ├── pii.py                  # PII handlers
│   │   ├── normalize_repeated.py   # collapse repeated characters
│   │   ├── normalize_whitespace.py
│   │   ├── remove_digits.py
│   │   ├── remove_punctuation.py
│   │   ├── stopwords.py
│   │   ├── strip_html.py
│   │   ├── to_lowercase.py
│   │   └── utils/
│   │       ├── __init__.py
│   │       ├── unicode.py
│   │       ├── io.py
│   │       └── regexes.py          # RE_HTML_TAGS, RE_PHONE, RE_EMAIL, RE_ADDRESS, RE_KTP, RE_ELONGATION
│   └── data/                       # package data (bundled at install)
│       ├── stopwords_id.txt
│       ├── slang_map.json
│       ├── acronyms.json
│       └── patterns/
│           ├── phone.regex
│           ├── email.regex
│           ├── address.regex
│           └── ktp.regex
└── tests/
    ├── __init__.py
    ├── conftest.py                 # add repo root to sys.path for local imports
    ├── acceptance/
    │   └── test_f1_f5.py
    ├── integration/
    │   ├── test_pipeline_end_to_end.py
    │   └── test_preset_ecommerce_review.py
    ├── unit/
    │   ├── test_cartboard.py
    │   ├── test_cleaning.py
    │   ├── test_miner.py
    │   ├── test_pii.py
    │   └── test_utils.py
    ├── test_clean.py
    ├── test_presets.py
    └── test_utils.py
```
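
The tree above lists `leksara/utils/regex_cache.py` as a precompile-and-cache layer for regex patterns. The snippet below is a minimal sketch of that idea using `functools.lru_cache`; the actual implementation may differ.

```python
import re
from functools import lru_cache

@lru_cache(maxsize=None)
def compiled(pattern: str, flags: int = 0) -> re.Pattern:
    """Compile a regex once; later calls with the same arguments hit the cache."""
    return re.compile(pattern, flags)

# Repeated lookups reuse the compiled object instead of recompiling.
phone = compiled(r"\b08\d{8,11}\b")  # illustrative Indonesian mobile-number pattern
print(phone.sub("[PHONE_NUMBER]", "No HP saya 081234567890"))
```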

## Milestones

| Sprint | Dates                | Goal                                           |
|--------|----------------------|------------------------------------------------|
| 1      | Aug 18 – Aug 22      | Project Kickoff, Discovery, Set up repository |
| 2      | Aug 22 – Aug 29      | Build Core Cleaning Engine                    |
| 3      | Aug 29 – Sep 5       | Develop Configurable Features                 |
| 4      | Sep 5 – Sep 12       | Implement Advanced Customization              |
| 5      | Sep 12 – Sep 19      | Refine API                                    |
| 6      | Sep 19 – Sep 26      | Optimize System                               |
| 7      | Sep 26 – Oct 3       | Finalize Documentation                        |
| 8      | Oct 3 – Oct 10       | Prepare for Launch                            |

## Requirements
- Python 3.9+
- Pandas

### Install
```bash
pip install leksara
```
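
To confirm the install, the installed version can be read with the standard-library `importlib.metadata` (available since Python 3.8):

```python
from importlib.metadata import version

print(version("leksara"))  # e.g. 0.0.4
```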

## Contributors
- **Vivian & Zahra** – Document Owners
- **Salsa** – UI/UX Designer
- **Aufi, Althaf, Rhendy, Adit** – Data Science Team
- **Alya, Vivin** – Data Analyst Team

For more details on the features and usage, see the links below.

## Links
- [UI Design, Product Design and Mockups](https://www.figma.com/proto/ATkL3Omdc2ZdT7ppldx2Br/Laplace-Project?node-id=41-19&t=OIOqDyu4cKp3Q90P-1)

            
