sparse-numba

Name: sparse-numba
Version: 0.1.6
Home page: https://github.com/th1275/sparse_numba
Summary: Customized sparse solver with Numba support
Author: Tianqi Hong
Requires Python: >=3.8
Upload time: 2025-04-09 02:09:43
# Sparse_Numba

A lightweight, Numba-compatible sparse linear solver designed for efficient parallel computations in Python.

[![PyPI version](https://badge.fury.io/py/sparse-numba.svg)](https://badge.fury.io/py/sparse-numba)
[![Build Status](https://github.com/th1275/sparse_numba/actions/workflows/build_wheels.yml/badge.svg)](https://github.com/th1275/sparse_numba/actions)
[![Python Versions](https://img.shields.io/pypi/pyversions/sparse-numba.svg)](https://pypi.org/project/sparse-numba/)

## Why Sparse_Numba?

Python is widely used for rapid prototyping and demonstration, 
despite its limitations in computationally intensive tasks. 
Existing sparse linear solvers (e.g., SciPy and KVXOPT) are efficient 
for single-task scenarios, but their performance suffers when frequent 
data exchanges are required and Python's Global Interpreter Lock (GIL) prevents true parallelism.

Sparse_Numba addresses these limitations by 
providing a sparse linear solver fully compatible with 
Numba's Just-In-Time (JIT) compilation. 
This design allows computationally intensive tasks 
to run efficiently in parallel, bypassing Python's GIL 
and significantly improving multi-task solving speed.

## Installation

```bash
pip install sparse-numba
```
Due to licensing restrictions, this package cannot bundle the UMFPACK DLLs. To use the UMFPACK-based functions, you need to install UMFPACK yourself and either add the necessary DLLs to your system PATH or place them under: 
```
.venv/site-packages/sparse_numba/vendor/suitesparse/bin
```
Support for the SuperLU solver has been added in the current version (0.1.6). Other solvers may be added soon. Sorry for the inconvenience.
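
If your UMFPACK DLLs live in a different directory, here is a minimal sketch (standard library only, not part of the sparse_numba API; the path below is a placeholder for wherever your SuiteSparse installation keeps its DLLs) of making them visible to the process on Windows before the import:

```python
import os

# Placeholder path: point this at the "bin" directory of your own
# SuiteSparse/UMFPACK installation (requires Python 3.8+ on Windows).
os.add_dll_directory(r"C:\SuiteSparse\bin")

import sparse_numba
```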


### Installing from source (Windows)

If installing from source on Windows, you need to have MinGW installed and configured for Python:

1. Install MinGW-w64 (x86_64-posix-seh)
2. Add MinGW bin directory to your PATH
3. Create or edit your distutils.cfg file:
   - Location: `%USERPROFILE%\.distutils.cfg`
   - Content:
     ```
     [build]
     compiler=mingw32
     ```
4. Build and install the wheel:
   ```bash
   python -m build --wheel
   pip install dist/sparse_numba-%YOURVERSION%.whl
   ```

**Note:** Despite installing MinGW-w64 (64-bit), the compiler setting is still `mingw32`. This is the correct name for the distutils compiler specification and does not affect the bitness of the compiled extension.


## Usage

```python
import numpy as np
from sparse_numba import umfpack_solve_csc, superlu_solve_csc

# Example with CSC format (Compressed Sparse Column)
# Create a 3x3 sparse matrix A in CSC format:
#   A = [[1, 0, 4],
#        [0, 0, 5],
#        [2, 3, 6]]
indptr = np.array([0, 2, 3, 6])
indices = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
b = np.array([1.0, 2.0, 3.0])

# Solve the linear system Ax = b

# UMFPACK solver
x_umfpack = umfpack_solve_csc(data, indices, indptr, b)
print(x_umfpack)

# SuperLU solver
x_superlu = superlu_solve_csc(data, indices, indptr, b)
print(x_superlu)

# More examples for COO and CSR formats...
```
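
The package also lists COO and CSR support; since only the CSC entry points are shown above, one simple fallback (a sketch that uses SciPy for the conversion rather than the package's own COO/CSR API) is to convert to CSC and reuse `umfpack_solve_csc`:

```python
import numpy as np
import scipy.sparse as sp
from sparse_numba import umfpack_solve_csc

# Same 3x3 matrix as above, but assembled in COO (row, col, value) form.
row = np.array([0, 2, 2, 0, 1, 2])
col = np.array([0, 0, 1, 2, 2, 2])
val = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
A = sp.coo_matrix((val, (row, col)), shape=(3, 3)).tocsc()
b = np.array([1.0, 2.0, 3.0])

# Reuse the documented CSC entry point on the converted arrays.
x = umfpack_solve_csc(A.data, A.indices, A.indptr, b)
print(x)
```

Note that this conversion runs in plain Python (SciPy is not Numba-compatible); inside JIT-compiled code you would pass the CSC arrays directly.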

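The main motivation for the package is that these solvers can be called from Numba-compiled code. As a rough sketch (assuming, as the project describes, that `umfpack_solve_csc` is callable from nopython code with the same signature as above), many independent right-hand sides could be solved in a parallel loop:

```python
import numpy as np
from numba import njit, prange
from sparse_numba import umfpack_solve_csc

@njit(parallel=True)
def solve_many(data, indices, indptr, B):
    # Solve A x = b for every row of B in parallel, bypassing the GIL.
    n_rhs, n = B.shape
    X = np.empty((n_rhs, n))
    for i in prange(n_rhs):
        X[i, :] = umfpack_solve_csc(data, indices, indptr, B[i, :])
    return X
```

The first call includes Numba's compilation time; the benchmarks below include that initialization cost as well.
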
## Performance Comparison

### Single Problem Performance

We compare the computational speed with 
SciPy for solving single problems of different sizes. 
The tests were run on an Intel Ultra 7 258V processor.

1. UMFPACK vs. SciPy (spsolve):

![Single Problem Benchmark](benchmark_single_problem_umfpack.png)

2. SuperLU vs. SciPy (spsolve):

![Single Problem Benchmark](benchmark_single_problem_superlu.png)

### Multi-task Performance

We compare the multi-task performance of Sparse_Numba with sequential SciPy.

3. UMFPACK vs. SciPy (spsolve):

![Parallel Solver Benchmark](benchmark_parallel_solver_umfpack.png) 
![Speedup Factor](speedup_parallel_solver_umfpack.png)

4. SuperLU vs. SciPy (spsolve):

![Parallel Solver Benchmark](benchmark_parallel_solver_superlu.png) 
![Speedup Factor](speedup_parallel_solver_superlu.png)

**Note:** The initialization time is included in these benchmarks. 
This is why the Numba-compatible function is slower initially, 
but the performance advantage becomes evident as parallelization takes effect.
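
If you want to reproduce this effect yourself, a minimal sketch (not the project's actual benchmark script; it reuses the small system from the Usage example) is to time the first call separately from subsequent calls:

```python
import time
import numpy as np
from sparse_numba import umfpack_solve_csc

indptr = np.array([0, 2, 3, 6])
indices = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
b = np.array([1.0, 2.0, 3.0])

t0 = time.perf_counter()
umfpack_solve_csc(data, indices, indptr, b)      # first call: includes one-time setup
t1 = time.perf_counter()
for _ in range(100):                             # later calls: steady-state cost
    umfpack_solve_csc(data, indices, indptr, b)
t2 = time.perf_counter()

print(f"first call: {t1 - t0:.6f} s, average afterwards: {(t2 - t1) / 100:.6f} s")
```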

## Features and Limitations

### Current Features
- UMFPACK solver integration with Numba compatibility
- SuperLU solver integration with Numba compatibility
- Support for CSC, COO, and CSR sparse matrix formats
- Efficient parallel solving for multiple systems

### Limitations
- The UMFPACK DLL files are not redistributed with this package
- Other solvers are under development
- Performance may be limited for extremely ill-conditioned matrices
- **Currently only developed for Windows**; other platforms will be supported soon

## Roadmap

This package serves as a temporary solution 
until Python's no-GIL and improved JIT features become widely available. 
At that time, established libraries like SciPy and KVXOPT will likely 
offer more comprehensive implementations with parallel computing features.

## License

BSD 3-Clause License

### License Statement for Bundled DLLs
The OpenBLAS DLL can be built from: https://github.com/OpenMathLib/OpenBLAS
The SuperLU DLL can be built from: https://github.com/xiaoyeli/superlu

## Citation

If you use Sparse_Numba in your research, please consider citing:

```
@software{hong2025sparse_numba,
  author = {Hong, Tianqi},
  title = {Sparse_Numba: A Numba-Compatible Sparse Solver},
  year = {2025},
  publisher = {GitHub},
  url = {https://github.com/th1275/sparse_numba}
}
```

## Contributing to Sparse_Numba

As an entry-level (or baby-level) developer, I still need more time to figure out the workflow. Due to my limited availability, this tool will also be updated very slowly. Please be patient. 

Thank you!

            
