:Name: hdbscan
:Version: 0.8.33
:Home page: http://github.com/scikit-learn-contrib/hdbscan
:Summary: Clustering based on density with variable density clusters
:Upload time: 2023-07-18 17:52:36
:Maintainer: Leland McInnes
:License: BSD
:Keywords: cluster, clustering, density, hierarchical

.. image:: https://img.shields.io/pypi/v/hdbscan.svg
    :target: https://pypi.python.org/pypi/hdbscan/
    :alt: PyPI Version
.. image:: https://anaconda.org/conda-forge/hdbscan/badges/version.svg
    :target: https://anaconda.org/conda-forge/hdbscan
    :alt: Conda-forge Version
.. image:: https://anaconda.org/conda-forge/hdbscan/badges/downloads.svg
    :target: https://anaconda.org/conda-forge/hdbscan
    :alt: Conda-forge downloads
.. image:: https://img.shields.io/pypi/l/hdbscan.svg
    :target: https://github.com/scikit-learn-contrib/hdbscan/blob/master/LICENSE
    :alt: License
.. image:: https://travis-ci.org/scikit-learn-contrib/hdbscan.svg
    :target: https://travis-ci.org/scikit-learn-contrib/hdbscan
    :alt: Travis Build Status
.. image:: https://codecov.io/gh/scikit-learn-contrib/hdbscan/branch/master/graph/badge.svg
    :target: https://codecov.io/gh/scikit-learn-contrib/hdbscan
    :alt: Test Coverage
.. image:: https://readthedocs.org/projects/hdbscan/badge/?version=latest
    :target: https://hdbscan.readthedocs.org
    :alt: Docs
.. image:: http://joss.theoj.org/papers/10.21105/joss.00205/status.svg
    :target: http://joss.theoj.org/papers/10.21105/joss.00205
    :alt: JOSS article
.. image:: https://mybinder.org/badge.svg 
    :target: https://mybinder.org/v2/gh/scikit-learn-contrib/hdbscan
    :alt: Launch example notebooks in Binder


=======
HDBSCAN
=======

HDBSCAN - Hierarchical Density-Based Spatial Clustering of Applications
with Noise. It performs DBSCAN over varying epsilon values and integrates
the result to find a clustering that gives the best stability over epsilon.
This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN),
and to be more robust to parameter selection.

In practice this means that HDBSCAN returns a good clustering straight
away with little or no parameter tuning -- and the primary parameter,
minimum cluster size, is intuitive and easy to select.

HDBSCAN is ideal for exploratory data analysis; it's a fast and robust
algorithm that you can trust to return meaningful clusters (if there
are any).

Based on the papers:

    McInnes L, Healy J. *Accelerated Hierarchical Density Based Clustering* 
    In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW), IEEE, pp 33-42.
    2017 `[pdf] <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8215642>`_

    R. Campello, D. Moulavi, and J. Sander, *Density-Based Clustering Based on
    Hierarchical Density Estimates*
    In: Advances in Knowledge Discovery and Data Mining, Springer, pp 160-172.
    2013
    
Documentation, including tutorials, is available on ReadTheDocs at http://hdbscan.readthedocs.io/en/latest/.
    
Notebooks `comparing HDBSCAN to other clustering algorithms <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Comparing%20Clustering%20Algorithms.ipynb>`_, explaining `how HDBSCAN works <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/How%20HDBSCAN%20Works.ipynb>`_ and `comparing performance with other python clustering implementations <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Benchmarking%20scalability%20of%20clustering%20implementations-v0.7.ipynb>`_ are available.

------------------
How to use HDBSCAN
------------------

The hdbscan package inherits from sklearn classes, and thus drops in neatly
next to other sklearn clusterers with an identical calling API. Similarly, it
supports input in a variety of formats: either an array (or pandas dataframe,
or sparse matrix) of shape ``(num_samples x num_features)``, or an array (or
sparse matrix) giving a pairwise distance matrix between samples.

.. code:: python

    import hdbscan
    from sklearn.datasets import make_blobs
    
    # 1000 sample points drawn from Gaussian blobs
    data, _ = make_blobs(1000)

    # Require clusters of at least 10 points; smaller groupings are treated as noise
    clusterer = hdbscan.HDBSCAN(min_cluster_size=10)
    cluster_labels = clusterer.fit_predict(data)
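
If you already have a pairwise distance matrix, it can be passed in directly.
A minimal sketch, assuming the scikit-learn style ``metric='precomputed'``
convention for distance-matrix input:

.. code:: python

    import hdbscan
    from sklearn.datasets import make_blobs
    from sklearn.metrics import pairwise_distances

    data, _ = make_blobs(1000)

    # Precompute the full pairwise distance matrix and cluster on it directly
    distance_matrix = pairwise_distances(data, metric='euclidean')

    clusterer = hdbscan.HDBSCAN(min_cluster_size=10, metric='precomputed')
    cluster_labels = clusterer.fit_predict(distance_matrix)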

-----------
Performance
-----------

Significant effort has been put into making the hdbscan implementation as fast as 
possible. It is `orders of magnitude faster than the reference implementation <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Python%20vs%20Java.ipynb>`_ in Java,
and is currently faster than highly optimized single linkage implementations in C and C++.
`Performance for version 0.7 can be seen in this notebook <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Benchmarking%20scalability%20of%20clustering%20implementations-v0.7.ipynb>`_.
In particular, `performance on low dimensional data is better than sklearn's DBSCAN <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Benchmarking%20scalability%20of%20clustering%20implementations%202D%20v0.7.ipynb>`_,
and via support for caching with joblib, re-clustering with different parameters
can be almost free.
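
The caching mentioned above is exposed through the clusterer's ``memory``
option which, as in scikit-learn, accepts a cache directory (or a
``joblib.Memory`` instance). A minimal sketch; treat the exact caching
behaviour as version-dependent:

.. code:: python

    import hdbscan
    from sklearn.datasets import make_blobs

    data, _ = make_blobs(1000)

    # Cache the expensive tree computations on disk
    clusterer = hdbscan.HDBSCAN(min_cluster_size=10, memory='./hdbscan_cache')
    clusterer.fit(data)

    # A later run with different extraction parameters can reuse the cached work
    clusterer = hdbscan.HDBSCAN(min_cluster_size=25, memory='./hdbscan_cache')
    clusterer.fit(data)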

------------------------
Additional functionality
------------------------

The hdbscan package comes equipped with visualization tools to help you
understand your clustering results. After fitting the data, the clusterer
object has attributes for:

* The condensed cluster hierarchy
* The robust single linkage cluster hierarchy
* The reachability distance minimal spanning tree

All of these come equipped with methods for plotting and for converting
to Pandas or NetworkX objects for further analysis. See the notebook on
`how HDBSCAN works <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/How%20HDBSCAN%20Works.ipynb>`_ for examples and further details.
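
As a brief sketch of how these attributes are typically used (the names
``condensed_tree_``, ``single_linkage_tree_`` and ``minimum_spanning_tree_``
follow the documented API, and the minimum spanning tree is only retained
when ``gen_min_span_tree=True`` is passed at construction):

.. code:: python

    import hdbscan
    from sklearn.datasets import make_blobs

    data, _ = make_blobs(1000)

    # Keep the minimum spanning tree so it can be plotted later
    clusterer = hdbscan.HDBSCAN(min_cluster_size=10, gen_min_span_tree=True)
    clusterer.fit(data)

    # Plot the condensed cluster hierarchy, highlighting the selected clusters
    clusterer.condensed_tree_.plot(select_clusters=True)

    # Export the hierarchies for further analysis
    condensed_df = clusterer.condensed_tree_.to_pandas()
    linkage_graph = clusterer.single_linkage_tree_.to_networkx()

    # Plot the mutual-reachability minimum spanning tree
    clusterer.minimum_spanning_tree_.plot()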

The clusterer objects also have an attribute providing cluster membership
strengths, enabling optional soft clustering at no further computational
expense. Finally, each cluster also receives a persistence score giving
the stability of the cluster over the range of distance scales present
in the data. This provides a measure of the relative strength of clusters.
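
In code, these quantities are exposed as per-point and per-cluster arrays.
The attribute names below (``probabilities_`` and ``cluster_persistence_``)
follow the documented API; verify them against your installed version:

.. code:: python

    import hdbscan
    from sklearn.datasets import make_blobs

    data, _ = make_blobs(1000)
    clusterer = hdbscan.HDBSCAN(min_cluster_size=10).fit(data)

    # Per-point membership strength of the assigned cluster (0.0 for noise points)
    membership_strengths = clusterer.probabilities_

    # Per-cluster persistence: higher values indicate more stable clusters
    persistence_scores = clusterer.cluster_persistence_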

-----------------
Outlier Detection
-----------------

The HDBSCAN clusterer objects also support the GLOSH outlier detection algorithm.
After fitting the clusterer to data, the outlier scores can be accessed via the
``outlier_scores_`` attribute. The result is a vector of score values, one for
each data point that was fit. Higher scores represent more outlier-like points.
Selecting outliers via upper quantiles is often a good approach.
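
For example, flagging the top few percent of scores as outliers might look
like the sketch below (the 98th-percentile cut-off is an arbitrary
illustration, not a recommendation from the library):

.. code:: python

    import numpy as np
    import hdbscan
    from sklearn.datasets import make_blobs

    data, _ = make_blobs(1000)
    clusterer = hdbscan.HDBSCAN(min_cluster_size=10).fit(data)

    # One GLOSH score per fitted point; higher means more outlier-like
    scores = clusterer.outlier_scores_

    # Flag points whose score falls above the 98th percentile
    threshold = np.quantile(scores, 0.98)
    outlier_indices = np.where(scores > threshold)[0]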

Based on the paper:
    R.J.G.B. Campello, D. Moulavi, A. Zimek and J. Sander 
    *Hierarchical Density Estimates for Data Clustering, Visualization, and Outlier Detection*, 
    ACM Trans. on Knowledge Discovery from Data, Vol 10, 1 (July 2015), 1-51.

---------------------
Robust single linkage
---------------------

The hdbscan package also provides support for the *robust single linkage*
clustering algorithm of Chaudhuri and Dasgupta. As with the HDBSCAN
implementation, this is a high-performance version of the algorithm,
outperforming scipy's standard single linkage implementation. The
robust single linkage hierarchy is available as an attribute of
the robust single linkage clusterer, again with the ability to plot
or export the hierarchy, and to extract flat clusterings at a given
cut level and gamma value.

Example usage:

.. code:: python

    import hdbscan
    from sklearn.datasets import make_blobs
    
    data, _ = make_blobs(1000)

    # cut: distance level at which the hierarchy is cut into flat clusters
    # k: number of nearest neighbours used for the robust distance estimates
    clusterer = hdbscan.RobustSingleLinkage(cut=0.125, k=7)
    cluster_labels = clusterer.fit_predict(data)

    # The stored hierarchy can be re-cut at other levels, or plotted
    hierarchy = clusterer.cluster_hierarchy_
    alt_labels = hierarchy.get_clusters(0.100, 5)
    hierarchy.plot()


Based on the paper:
    K. Chaudhuri and S. Dasgupta.
    *"Rates of convergence for the cluster tree."*
    In Advances in Neural Information Processing Systems, 2010.

----------
Installing
----------

Easiest install, if you have Anaconda (thanks to conda-forge which is awesome!):

.. code:: bash

    conda install -c conda-forge hdbscan

PyPI install, presuming you have an up to date pip:

.. code:: bash

    pip install hdbscan

Binary wheels for a number of platforms are available thanks to the work of
Ryan Helinski <rlhelinski@gmail.com>.

If pip is having difficulties pulling the dependencies, we suggest first
upgrading pip to at least version 10 and trying again:

.. code:: bash

    pip install --upgrade pip
    pip install hdbscan

Otherwise, install the dependencies manually using conda and then pull hdbscan from pip:

.. code:: bash

    conda install cython
    conda install numpy scipy
    conda install scikit-learn
    pip install hdbscan


For a manual install of the latest code directly from GitHub:

.. code:: bash

    pip install --upgrade git+https://github.com/scikit-learn-contrib/hdbscan.git#egg=hdbscan


Alternatively download the package, install requirements, and manually run the installer:


.. code:: bash

    wget https://github.com/scikit-learn-contrib/hdbscan/archive/master.zip
    unzip master.zip
    rm master.zip
    cd hdbscan-master
    
    pip install -r requirements.txt
    
    python setup.py install

-----------------
Running the Tests
-----------------

The package tests can be run after installation using the command:

.. code:: bash

    nosetests -s hdbscan

or, if ``nose`` is installed but ``nosetests`` is not in your ``PATH`` variable:

.. code:: bash

    python -m nose -s hdbscan

If one or more of the tests fail, please report a bug at https://github.com/scikit-learn-contrib/hdbscan/issues/new

--------------
Python Version
--------------

The hdbscan library supports both Python 2 and Python 3. However, we recommend Python 3 as the better option if it is available to you.
    
----------------
Help and Support
----------------

For simple issues you can consult the `FAQ <https://hdbscan.readthedocs.io/en/latest/faq.html>`_ in the documentation.
If your issue is not suitably resolved there, please check the `issues <https://github.com/scikit-learn-contrib/hdbscan/issues>`_ on GitHub. Finally, if no solution is available there, feel free to `open an issue <https://github.com/scikit-learn-contrib/hdbscan/issues/new>`_; the authors will attempt to respond in a reasonably timely fashion.

------------
Contributing
------------

We welcome contributions in any form! Assistance with documentation, particularly expanding tutorials,
is always welcome. To contribute, please `fork the project <https://github.com/scikit-learn-contrib/hdbscan/issues#fork-destination-box>`_, make your changes, and submit a pull request. We will do our best to work through any issues with
you and get your code merged into the main branch.

------
Citing
------

If you have used this codebase in a scientific publication and wish to cite it, please use the `Journal of Open Source Software article <http://joss.theoj.org/papers/10.21105/joss.00205>`_.

    L. McInnes, J. Healy, S. Astels, *hdbscan: Hierarchical density based clustering*
    In: Journal of Open Source Software, The Open Journal, volume 2, number 11.
    2017
    
.. code:: bibtex

    @article{mcinnes2017hdbscan,
      title={hdbscan: Hierarchical density based clustering},
      author={McInnes, Leland and Healy, John and Astels, Steve},
      journal={The Journal of Open Source Software},
      volume={2},
      number={11},
      pages={205},
      year={2017}
    }
    
To reference the high-performance algorithm developed in this library, please cite our paper in the ICDMW 2017 proceedings.

    McInnes L, Healy J. *Accelerated Hierarchical Density Based Clustering* 
    In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW), IEEE, pp 33-42.
    2017


.. code:: bibtex

    @inproceedings{mcinnes2017accelerated,
      title={Accelerated Hierarchical Density Based Clustering},
      author={McInnes, Leland and Healy, John},
      booktitle={Data Mining Workshops (ICDMW), 2017 IEEE International Conference on},
      pages={33--42},
      year={2017},
      organization={IEEE}
    }

---------
Licensing
---------

The hdbscan package is 3-clause BSD licensed. Enjoy.

            
