===========
Group Lasso
===========

.. image:: https://pepy.tech/badge/group-lasso
    :target: https://pepy.tech/project/group-lasso
    :alt: PyPI Downloads

.. image:: https://travis-ci.org/yngvem/group-lasso.svg?branch=master
    :target: https://github.com/yngvem/group-lasso

.. image:: https://coveralls.io/repos/github/yngvem/group-lasso/badge.svg
    :target: https://coveralls.io/github/yngvem/group-lasso

.. image:: https://readthedocs.org/projects/group-lasso/badge/?version=latest
    :target: https://group-lasso.readthedocs.io/en/latest/?badge=latest

.. image:: https://img.shields.io/pypi/l/group-lasso.svg
    :target: https://github.com/yngvem/group-lasso/blob/master/LICENSE

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
    :target: https://github.com/python/black

.. image:: https://www.codefactor.io/repository/github/yngvem/group-lasso/badge
   :target: https://www.codefactor.io/repository/github/yngvem/group-lasso
   :alt: CodeFactor

The group lasso [1]_ regulariser is a well-known method for achieving
structured sparsity in machine learning and statistics. The idea is to
create non-overlapping groups of covariates and recover regression weights
in which only a sparse set of these covariate groups has non-zero
components.

There are several reasons why this might be a good idea. Say, for example,
that we have a set of sensors, each of which generates five measurements,
and we don't want to maintain an unnecessary number of sensors. If we try
normal LASSO regression, we will get sparse components, but these components
might not correspond to a sparse set of sensors, since each sensor generates
five measurements. If we instead use group LASSO with the measurements
grouped by the sensor that produced them, we will get a sparse set of
sensors.
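
Below is a minimal sketch of this sensor scenario with simulated data. The
``GroupLasso`` estimator and its ``groups``, ``group_reg``, and ``l1_reg``
parameters follow this version's documented API (as does the
``chosen_groups_`` attribute); the data and group labels are made up for
illustration::

    import numpy as np
    from group_lasso import GroupLasso

    # Simulated sensor data: three sensors with five measurements each.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 15))
    # Only the first sensor carries any signal.
    y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(1000)

    # One group index per column: sensor 0 owns columns 0-4, and so on.
    groups = np.repeat([0, 1, 2], 5)

    gl = GroupLasso(groups=groups, group_reg=0.05, l1_reg=0)
    gl.fit(X, y.reshape(-1, 1))
    print(gl.chosen_groups_)  # the set of sensors with non-zero weights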

An extension of the group lasso regulariser is the sparse group lasso
regulariser [2]_, which imposes both group-wise sparsity and coefficient-wise
sparsity. This is done by combining the group lasso penalty with the
traditional lasso penalty. In this library, I have implemented an efficient
sparse group lasso solver that is fully scikit-learn API compliant.
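
Concretely, writing :math:`\boldsymbol{\beta}^{(g)}` for the coefficients of
group :math:`g` with :math:`p_g` elements, the sparse group lasso regression
estimate minimises a penalised least-squares objective of the form used
in [2]_:

.. math::

    \frac{1}{2n} \left\| \mathbf{y} - \mathbf{X}\boldsymbol{\beta} \right\|_2^2
    + \lambda_1 \left\| \boldsymbol{\beta} \right\|_1
    + \lambda_2 \sum_g \sqrt{p_g} \left\| \boldsymbol{\beta}^{(g)} \right\|_2

Setting :math:`\lambda_1 = 0` recovers the plain group lasso, and setting
:math:`\lambda_2 = 0` recovers the ordinary lasso.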

------------------
About this project
------------------
This project is developed by Yngve Mardal Moe and released under an MIT
licence. I am still working out a few things, so changes might come rapidly.

------------------
Installation guide
------------------
Group-lasso requires Python 3.5+, numpy and scikit-learn. 
To install group-lasso via ``pip``, simply run the command::

    pip install group-lasso

Alternatively, you can clone this repository and install it via the
``setup.py`` file::

    git clone https://github.com/yngvem/group-lasso.git
    cd group-lasso
    python setup.py install

-------------
Documentation
-------------

You can read the full documentation on 
`readthedocs <https://group-lasso.readthedocs.io/en/latest/maths.html>`_.

--------
Examples
--------

There are several examples that show usage of the library
`here <https://group-lasso.readthedocs.io/en/latest/auto_examples/index.html>`_.

------------
Further work
------------

1. Fully test with sparse arrays and add examples
2. Make it easier to work with categorical data (a manual workaround is
   sketched below)
3. Poisson regression
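
Until point 2 lands, grouping dummy-encoded categorical columns by hand is
straightforward. A sketch, assuming pandas for the encoding (the data frame
and column names are invented for illustration)::

    import numpy as np
    import pandas as pd
    from group_lasso import GroupLasso

    df = pd.DataFrame({"colour": ["red", "blue", "green", "red"],
                       "size": [1.0, 2.5, 0.3, 1.7]})
    dummies = pd.get_dummies(df["colour"])       # one column per category
    X = np.hstack([dummies.to_numpy(dtype=float), df[["size"]].to_numpy()])

    # All dummy columns for "colour" share group 0; "size" is its own group.
    groups = np.array([0] * dummies.shape[1] + [1])
    gl = GroupLasso(groups=groups, group_reg=0.1, l1_reg=0)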

----------------------
Implementation details
----------------------
The problem is solved using the FISTA optimiser [3]_ with a gradient-based 
adaptive restarting scheme [4]_. No line search is currently implemented, but 
I hope to look at that later.
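
For reference, one FISTA iteration with the gradient-based restart test
of [4]_ looks roughly like the sketch below. This is a generic rendering of
the algorithm, not the library's internal code; ``grad_f``, ``prox_g``, and
``step`` are placeholders for the smooth-loss gradient, the proximal operator
of the penalty, and a step size no larger than the inverse Lipschitz
constant::

    def fista(x0, grad_f, prox_g, step, n_iter=100):
        """Generic FISTA with gradient-based adaptive restart."""
        x_old, y, t = x0, x0, 1.0
        for _ in range(n_iter):
            grad = grad_f(y)
            x = prox_g(y - step * grad)        # forward-backward step
            if grad @ (x - x_old) > 0:         # momentum points uphill: restart
                t, y = 1.0, x
            else:
                t_next = (1 + (1 + 4 * t**2) ** 0.5) / 2
                y = x + (t - 1) / t_next * (x - x_old)
                t = t_next
            x_old = x
        return x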

Although fast, the FISTA optimiser does not achieve as low loss values as the
significantly slower second-order interior point methods. This might, at
first glance, seem like a problem. However, it does recover the sparsity
pattern of the data, which can be used to train a new model restricted to the
chosen subset of the features.
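
Continuing the sensor sketch above: the fitted estimator exposes a boolean
``sparsity_mask_`` over the columns (per the project documentation; treat the
attribute name as an assumption for other versions), which selects the
surviving features for an unregularised refit::

    from sklearn.linear_model import LinearRegression

    mask = gl.sparsity_mask_   # True for columns with non-zero weights
    refit = LinearRegression().fit(X[:, mask], y)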

Also, even though the FISTA optimiser is not meant for stochastic
optimisation, it has in my experience not suffered a large drop in
performance when the mini-batches were large enough. I have therefore
implemented mini-batch optimisation using FISTA, and have thus been able to
fit models on data with ~500 columns and 10 000 000 rows on my moderately
priced laptop.
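
Mini-batching is configured when the estimator is constructed. The
``subsampling_scheme`` argument below matches my reading of this version's
signature; treat its exact semantics as an assumption and consult the
documentation::

    # Use a random 10 % row subsample for each gradient evaluation.
    gl_mb = GroupLasso(groups=groups, group_reg=0.05, subsampling_scheme=0.1)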

Finally, we note that since FISTA uses Nesterov acceleration, it is not a
descent algorithm. We can therefore not expect the loss to decrease
monotonically.

----------
References
----------

.. [1] Yuan, M. and Lin, Y. (2006), Model selection and estimation in
   regression with grouped variables. Journal of the Royal Statistical
   Society: Series B (Statistical Methodology), 68: 49-67.
   doi:10.1111/j.1467-9868.2005.00532.x

.. [2] Simon, N., Friedman, J., Hastie, T., & Tibshirani, R. (2013),
   A sparse-group lasso. Journal of Computational and Graphical
   Statistics, 22(2), 231-245.

.. [3] Beck, A. and Teboulle, M. (2009), A Fast Iterative
   Shrinkage-Thresholding Algorithm for Linear Inverse Problems.
   SIAM Journal on Imaging Sciences, 2(1), 183-202.
   doi:10.1137/080716542

.. [4] O’Donoghue, B. & Candès, E. (2015), Adaptive Restart for
   Accelerated Gradient Schemes. Found Comput Math, 15, 715-732.
   doi:10.1007/s10208-013-9150-3



            
