skclust

Name: skclust
Version: 2025.9.8
Home page: https://github.com/jolespin/skclust
Summary: A comprehensive clustering toolkit with advanced tree cutting and visualization
Upload time: 2025-09-08 21:07:56
Author: Josh L. Espinoza
Requires Python: >=3.8
License: MIT
Keywords: clustering, hierarchical-clustering, dendrogram, tree-cutting, machine-learning, data-analysis, bioinformatics, network-analysis, visualization, scikit-learn
Requirements: numpy, pandas, scipy, scikit-learn, matplotlib, seaborn, networkx, loguru
# skclust
A comprehensive clustering toolkit with advanced tree cutting, visualization, and network analysis capabilities.

[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![scikit-learn compatible](https://img.shields.io/badge/sklearn-compatible-orange.svg)](https://scikit-learn.org)
![Beta](https://img.shields.io/badge/status-beta-orange)
![Not Production Ready](https://img.shields.io/badge/production-not%20ready-red)

**Warning: This is a beta release and has not been thoroughly tested.**

## Features

- **Scikit-learn compatible** API for seamless integration
- **Multiple linkage methods** (Ward, Complete, Average, Single, etc.)
- **Advanced tree cutting** with dynamic, height-based, and max-cluster methods
- **Rich visualizations** with dendrograms and metadata tracks
- **Network analysis** with connectivity metrics and NetworkX integration
- **Cluster validation** using silhouette analysis
- **Tree export** in Newick format for phylogenetic analysis
- **Distance matrix support** for precomputed distances
- **Metadata tracks** for biological and experimental annotations
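
The silhouette validation mentioned above can also be reproduced with scikit-learn alone on any label assignment; a minimal standalone sketch (using sklearn's `AgglomerativeClustering` as a stand-in, not the skclust API):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

# Toy data with four well-separated blobs
X, _ = make_blobs(n_samples=100, centers=4, random_state=42)

# Stand-in clustering (sklearn's AgglomerativeClustering, not skclust)
labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X)

# Silhouette score lies in [-1, 1]; higher means tighter, better-separated clusters
score = silhouette_score(X, labels)
print(f"silhouette: {score:.3f}")
```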

## Installation

```bash
pip install skclust
```

## Quick Start

### Hierarchical Clustering

```python
import pandas as pd
import numpy as np
from sklearn.datasets import make_blobs
from skclust import HierarchicalClustering

# Generate sample data
X, y = make_blobs(n_samples=100, centers=4, random_state=42)
X_df = pd.DataFrame(X, columns=['feature_1', 'feature_2'])

# Perform hierarchical clustering
hc = HierarchicalClustering(
    method='ward',
    cut_method='dynamic',
    min_cluster_size=5
)

# Fit and get cluster labels
labels = hc.fit_transform(X_df)
print(f"Found {hc.n_clusters_} clusters")

# Plot dendrogram with clusters
fig, axes = hc.plot(figsize=(12, 6), show_clusters=True)
```

### Representative Sampling

```python
from skclust import KMeansRepresentativeSampler

# Create representative test set (10% of data)
sampler = KMeansRepresentativeSampler(
    sampling_size=0.1,
    stratify=True,  # Maintain class proportions
    method='minibatch'
)

# Get train/test split
X_train, X_test, y_train, y_test = sampler.fit(X_df, y).get_train_test_split(X_df, y)

print(f"Train set: {len(X_train)} samples")
print(f"Test set: {len(X_test)} samples ({len(X_test)/len(X_df)*100:.1f}%)")
```

## Advanced Usage

### Adding Metadata Tracks

```python
# Add continuous metadata track
sample_scores = pd.Series(np.random.randn(100), index=X_df.index)
hc.add_track('Quality Score', sample_scores, track_type='continuous')

# Add categorical metadata track
sample_groups = pd.Series((['A', 'B', 'C'] * 34)[:100], index=X_df.index)
hc.add_track('Group', sample_groups, track_type='categorical')

# Plot with metadata tracks
fig, axes = hc.plot(show_tracks=True, figsize=(12, 8))
```

### Custom Tree Cutting

```python
# Cut by height
hc_height = HierarchicalClustering(
    method='ward',
    cut_method='height',
    cut_threshold=50.0
)
labels_height = hc_height.fit_transform(X_df)

# Cut by number of clusters
hc_maxclust = HierarchicalClustering(
    method='complete',
    cut_method='maxclust',
    cut_threshold=5  # Force exactly 5 clusters
)
labels_maxclust = hc_maxclust.fit_transform(X_df)
```

### Distance Matrix Input

```python
from scipy.spatial.distance import pdist, squareform

# Compute custom distance matrix
distances = pdist(X_df, metric='cosine')
distance_matrix = pd.DataFrame(squareform(distances), 
                              index=X_df.index, 
                              columns=X_df.index)

# Cluster using precomputed distances
hc_custom = HierarchicalClustering(method='average')
labels_custom = hc_custom.fit_transform(distance_matrix)
```
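
As an aside, SciPy's own `linkage` accepts the condensed vector from `pdist` directly, so the `squareform` round-trip above is only needed when an estimator expects a square matrix. A standalone sketch using SciPy only (not the skclust API):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))

# pdist returns the condensed form: a flat vector of n*(n-1)/2 distances
condensed = pdist(X, metric="cosine")

# SciPy's linkage accepts the condensed vector directly, no squareform needed
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```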

### Stratified Representative Sampling

```python
# Enhanced stratified sampling with minority class boosting
sampler_enhanced = KMeansRepresentativeSampler(
    sampling_size=0.15,
    stratify=True,
    coverage_boost=2.0,  # Boost minority classes
    min_clusters_per_class=3,  # Ensure minimum representation
    method='kmeans'
)

X_train, X_test, y_train, y_test = sampler_enhanced.fit(X_df, y).get_train_test_split(X_df, y)

# Check class balance preservation
print("Original class distribution:")
print(pd.Series(y).value_counts().sort_index())
print("\nTest set class distribution:")
print(pd.Series(y_test).value_counts().sort_index())
```

## API Reference

### HierarchicalClustering

**Parameters:**
- `method`: Linkage method ('ward', 'complete', 'average', 'single', 'centroid', 'median', 'weighted')
- `metric`: Distance metric for computing pairwise distances
- `cut_method`: Tree cutting method ('dynamic', 'height', 'maxclust')
- `min_cluster_size`: Minimum cluster size for dynamic cutting
- `deep_split`: Deep split parameter for dynamic cutting (0-4)
- `cut_threshold`: Threshold for height/maxclust cutting
- `cluster_prefix`: String prefix for cluster labels (e.g., "C" → "C1", "C2")

**Key Methods:**
- `fit(X)`: Fit hierarchical clustering to data
- `transform()`: Return cluster labels
- `add_track(name, data, track_type)`: Add metadata track for visualization
- `plot()`: Generate dendrogram with optional tracks and clusters
- `summary()`: Print clustering summary statistics
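
The `height` and `maxclust` cut methods correspond to SciPy's `fcluster` criteria; an illustrative sketch using SciPy directly (not this package's API):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Three well-separated point clouds in 2-D
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 2)) for c in (0.0, 5.0, 10.0)])
Z = linkage(X, method="ward")

# 'distance' cuts the tree at a fixed height (analogous to cut_method='height')
labels_height = fcluster(Z, t=4.0, criterion="distance")

# 'maxclust' caps the number of clusters (analogous to cut_method='maxclust')
labels_max = fcluster(Z, t=3, criterion="maxclust")
print(np.unique(labels_max))  # at most 3 cluster ids
```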

### KMeansRepresentativeSampler

**Parameters:**
- `sampling_size`: Proportion of data for test set (0.0-1.0)
- `stratify`: Whether to maintain class proportions
- `method`: Clustering method ('minibatch', 'kmeans')
- `coverage_boost`: Boost factor for minority classes (>1.0)
- `min_clusters_per_class`: Minimum clusters per class
- `batch_size`: Batch size for MiniBatchKMeans

**Key Methods:**
- `fit(X, y)`: Fit sampler and identify representatives
- `transform(X)`: Return representative samples
- `get_train_test_split(X, y)`: Get train/test split
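
The idea behind k-means representative sampling can be sketched with plain scikit-learn: cluster the data into k groups, then keep the actual sample nearest each centroid as the representative set. This is a generic illustration of the technique, not this package's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances_argmin_min

X, _ = make_blobs(n_samples=200, centers=4, random_state=42)

k = 20  # number of representatives (~10% of the data)
km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)

# Representative = the actual sample closest to each centroid
rep_idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)

test_mask = np.zeros(len(X), dtype=bool)
test_mask[rep_idx] = True
X_test, X_train = X[test_mask], X[~test_mask]
```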

## Examples with Real Data

### Iris Dataset

```python
from sklearn.datasets import load_iris

# Load iris dataset
iris = load_iris()
X_iris = pd.DataFrame(iris.data, columns=iris.feature_names)
y_iris = pd.Series(iris.target, name='species')

# Hierarchical clustering
hc_iris = HierarchicalClustering(
    method='ward',
    cut_method='dynamic',
    min_cluster_size=10,
    cluster_prefix='Cluster_'
)

clusters = hc_iris.fit_transform(X_iris)

# Add species information as track
species_names = pd.Series([iris.target_names[i] for i in y_iris], index=X_iris.index)
hc_iris.add_track('True Species', species_names, track_type='categorical')

# Plot results
fig, axes = hc_iris.plot(show_clusters=True, show_tracks=True, figsize=(15, 8))
```

### Creating Balanced Test Sets

```python
# Create representative test set maintaining species balance
sampler_iris = KMeansRepresentativeSampler(
    sampling_size=0.2,  # 20% test set
    stratify=True,
    coverage_boost=1.0,  # Equal representation
    method='kmeans',
    random_state=42
)

X_train, X_test, y_train, y_test = sampler_iris.fit(X_iris, y_iris).get_train_test_split(X_iris, y_iris)

print(f"Train set: {len(X_train)} samples")
print(f"Test set: {len(X_test)} samples")
print(f"Representative indices: {sampler_iris.representative_indices_[:10].tolist()}")
```

## Dependencies

### Required
- numpy
- pandas
- scikit-learn
- scipy
- matplotlib
- seaborn
- networkx
- loguru

### Optional (for enhanced functionality)
- dynamicTreeCut (dynamic tree cutting)
- skbio (tree representations)
- fastcluster (faster linkage computation)
- ensemble_networkx (network analysis)
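
Since `fastcluster` exposes a `linkage` function with the same call signature as SciPy's, a guarded import is a common way to pick up the speedup when it is installed; a small sketch:

```python
import numpy as np

# Prefer fastcluster's faster drop-in linkage when installed; fall back to SciPy
try:
    from fastcluster import linkage
except ImportError:
    from scipy.cluster.hierarchy import linkage

Z = linkage(np.random.default_rng(0).normal(size=(50, 3)), method="average")
print(Z.shape)  # (n - 1, 4) linkage matrix either way
```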

## Author

Josh L. Espinoza

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.

## Original Implementation

This package is based on the hierarchical clustering implementation originally developed in the [Soothsayer](https://github.com/jolespin/soothsayer) framework:

**Espinoza JL, Dupont CL, O'Rourke A, Beyhan S, Morales P, et al. (2021) Predicting antimicrobial mechanism-of-action from transcriptomes: A generalizable explainable artificial intelligence approach. PLOS Computational Biology 17(3): e1008857.** [https://doi.org/10.1371/journal.pcbi.1008857](https://doi.org/10.1371/journal.pcbi.1008857)

The original implementation provided the foundation for the hierarchical clustering algorithms, metadata track visualization, and eigenprofile analysis features in this package.

## Acknowledgments

- Built on top of scipy, scikit-learn, and networkx
- Original implementation developed in the [Soothsayer framework](https://github.com/jolespin/soothsayer)
- Inspired by WGCNA and other biological clustering tools
- Dynamic tree cutting algorithms from the dynamicTreeCut package

## Support

- **Documentation**: [Link to docs]
- **Issues**: [GitHub Issues](https://github.com/jolespin/skclust/issues)
- **Discussions**: [GitHub Discussions](https://github.com/jolespin/skclust/discussions)

## Citation

If you use this package in your research, please cite:

**Original Soothsayer implementation:**
```bibtex
@article{espinoza2021predicting,
  title={Predicting antimicrobial mechanism-of-action from transcriptomes: A generalizable explainable artificial intelligence approach},
  author={Espinoza, Josh L and Dupont, Chris L and O'Rourke, Aubrie and Beyhan, Seherzada and Morales, Paula and others},
  journal={PLOS Computational Biology},
  volume={17},
  number={3},
  pages={e1008857},
  year={2021},
  publisher={Public Library of Science San Francisco, CA USA},
  doi={10.1371/journal.pcbi.1008857},
  url={https://doi.org/10.1371/journal.pcbi.1008857}
}
```


            
