# LinearBoost Classifier


LinearBoost is a fast and accurate classification algorithm built to enhance the performance of the linear classifier SEFR. It combines efficiency and accuracy, delivering state-of-the-art F1 scores and classification performance.

In benchmarks across seven well-known datasets, LinearBoost:

- Outperformed XGBoost on all seven datasets
- Surpassed LightGBM on five datasets
- Achieved up to **98% faster runtime** compared to both algorithms

Key Features:

- High Accuracy: Comparable to or exceeding Gradient Boosting Decision Trees (GBDTs)
- Exceptional Speed: Blazing fast training and inference times
- Resource Efficient: Low memory usage, ideal for large datasets
## 🚀 New Major Release (v0.1.2)

Version 0.1.2 of **LinearBoost Classifier** has been released. Here are the changes:

- The codebase has been refactored into a new structure.
- The SAMME.R algorithm has been restored to the classifier.
- Both the SEFR and LinearBoostClassifier classes have been refactored to fully adhere to Scikit-learn's conventions and API. They are now standard Scikit-learn estimators that can be used in Scikit-learn pipelines, grid search, etc. (see the sketch after this list).
- Added unit tests (using pytest) to ensure the estimators adhere to Scikit-learn conventions.
- Added a fit_intercept parameter to SEFR, similar to other linear estimators in Scikit-learn (e.g., LogisticRegression, LinearRegression).
- Removed the random_state parameter from LinearBoostClassifier as it doesn't affect the result, since SEFR doesn't expose a random_state argument. According to the Scikit-learn documentation for this parameter in AdaBoostClassifier:
  > it is only used when estimator exposes a random_state.
- Added docstrings to both the SEFR and LinearBoostClassifier classes.
- Used uv for project and package management.
- Used ruff and isort for formatting and linting.
- Added a GitHub workflow (*.github/workflows/ci.yml*) for CI on PRs.
- Improved Scikit-learn compatibility.
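
To illustrate the Scikit-learn compatibility mentioned above, here is a minimal sketch; the `from linearboost import LinearBoostClassifier` import path is an assumption, so check the documentation for the exact one:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

from linearboost import LinearBoostClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)

# As a standard scikit-learn estimator, LinearBoostClassifier composes with
# Pipeline, GridSearchCV, cross_val_score, and the rest of the ecosystem.
pipe = Pipeline([('clf', LinearBoostClassifier())])
grid = GridSearchCV(
    pipe,
    param_grid={'clf__n_estimators': [10, 50, 100]},
    cv=5,
    scoring='f1_weighted',
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```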
Get Started and Documentation
-----------------------------
The documentation is available at https://linearboost.readthedocs.io/.
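
LinearBoost is available on PyPI, supports Python >=3.8,<3.14, and depends on scikit-learn (>=1.2.2). Below is a minimal quick-start sketch; the import path is an assumption, so see the documentation for authoritative usage:

```python
# pip install linearboost
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from linearboost import LinearBoostClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LinearBoostClassifier()
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on the held-out split
```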
## Recommended Parameters for LinearBoost

The following parameters yielded optimal results during testing. All results are based on 10-fold cross-validation:

- **`n_estimators`**:
  A range of 10 to 200 is suggested, with higher values potentially improving performance at the cost of longer training times.

- **`learning_rate`**:
  Values between 0.01 and 1 typically perform well. Adjust based on the dataset's complexity and noise.

- **`algorithm`**:
  Use either `SAMME` or `SAMME.R`. The choice depends on the specific problem:
  - `SAMME`: May be better for datasets with clearer separations between classes.
  - `SAMME.R`: Can handle more nuanced class probabilities.

  **Note:** As of scikit-learn v1.6, the `algorithm` parameter is deprecated and will be removed in v1.8. LinearBoostClassifier will only implement the `SAMME` algorithm in newer versions.

- **`scaler`**:
  The following scaling methods are recommended based on dataset characteristics:
  - `minmax`: Best for datasets where features are on different scales but bounded.
  - `robust`: Effective for datasets with outliers.
  - `quantile-uniform`: Maps features to a uniform distribution.
  - `quantile-normal`: Maps features to a normal (Gaussian) distribution.

These parameters should serve as a solid starting point for most datasets (a sample configuration is sketched below). For fine-tuning, consider using hyperparameter optimization tools like [Optuna](https://optuna.org/).
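
As a concrete starting point, here is one way to apply these recommendations; this is a sketch, and the import path is an assumption:

```python
from linearboost import LinearBoostClassifier  # assumed import path

# One reasonable starting configuration drawn from the ranges above.
clf = LinearBoostClassifier(
    n_estimators=100,    # within the suggested 10-200 range
    learning_rate=0.1,   # within the suggested 0.01-1 range
    algorithm='SAMME',   # the variant retained under newer scikit-learn
    scaler='robust',     # a sensible default when outliers are present
)
```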
Results
-------
All results are based on 10-fold cross-validation and reported as the weighted F1 score, i.e. `f1_score(y_valid, y_pred, average='weighted')`.
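
In scikit-learn terms, each reported number corresponds to something like the following sketch (the import path for LinearBoost is an assumption):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

from linearboost import LinearBoostClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)

# 10-fold cross-validation scored with the weighted F1 metric used in the tables.
scores = cross_val_score(LinearBoostClassifier(), X, y, cv=10, scoring='f1_weighted')
print(scores.mean())
```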
## Performance Comparison: F1 Scores Across Datasets
The following table presents the F1 scores of LinearBoost in comparison with XGBoost, CatBoost, and LightGBM across seven standard benchmark datasets. **Each result is obtained by running Optuna with 200 trials** to find the best hyperparameters for each algorithm and dataset, ensuring a fair and robust comparison.

| Dataset                               | XGBoost  | CatBoost | LightGBM | LinearBoost |
|--------------------------------------|----------|----------|----------|-------------|
| Breast Cancer Wisconsin (Diagnostic) | 0.9767 | 0.9859 | 0.9771 | 0.9822 |
| Heart Disease | 0.8502 | 0.8529 | 0.8467 | 0.8507 |
| Pima Indians Diabetes Database | 0.7719 | 0.7776 | 0.7816 | 0.7753 |
| Banknote Authentication | 0.9985 | 1.0000 | 0.9993 | 1.0000 |
| Haberman's Survival | 0.7193 | 0.7427 | 0.7257 | 0.7485 |
| Loan Status Prediction | 0.8281 | 0.8495 | 0.8277 | 0.8387 |
| PCMAC | 0.9310 | 0.9351 | 0.9361 | 0.9331 |
### Experiment Details

- **Hyperparameter Optimization**:
  - Each algorithm was tuned using **Optuna**, a powerful hyperparameter optimization framework.
  - **200 trials** were conducted for each algorithm-dataset pair to identify the optimal hyperparameters.
- **Consistency**: This rigorous approach ensures a fair comparison by evaluating each algorithm under its best-performing configuration.
### Key Highlights

- **LinearBoost** achieves competitive or superior F1 scores compared to state-of-the-art algorithms.
- **Haberman's Survival**: LinearBoost achieves the highest F1 score (**0.7485**), outperforming all other algorithms.
- **Banknote Authentication**: LinearBoost matches the perfect F1 score of **1.0000** achieved by CatBoost.
- LinearBoost demonstrates consistent performance across diverse datasets, making it a robust and efficient choice for classification tasks.
## Runtime Comparison: Time to Reach Best F1 Score
The following table shows the runtime (in seconds) required by LinearBoost, XGBoost, CatBoost, and LightGBM to achieve their best F1 scores. **Each result is obtained by running Optuna with 200 trials** to optimize the hyperparameters for each algorithm and dataset.

| Dataset                               | XGBoost  | CatBoost | LightGBM | LinearBoost |
|--------------------------------------|----------|----------|----------|-------------|
| Breast Cancer Wisconsin (Diagnostic) | 3.22 | 9.68 | 4.52 | 0.30 |
| Heart Disease | 1.13 | 0.60 | 0.51 | 0.49 |
| Pima Indians Diabetes Database | 6.86 | 3.50 | 2.52 | 0.16 |
| Banknote Authentication | 0.46 | 4.26 | 5.54 | 0.33 |
| Haberman's Survival | 4.41 | 8.28 | 5.72 | 0.11 |
| Loan Status Prediction | 0.83 | 97.89 | 28.41 | 0.44 |
| PCMAC | 150.33 | 83.52 | 42.23 | 75.06 |
### Experiment Details

- **Hyperparameter Optimization**:
  - Each algorithm was tuned using **Optuna** with **200 trials** per algorithm-dataset pair.
  - The reported runtime is the time required to reach the best F1 score using the optimized hyperparameters.
- **Fair Comparison**: All algorithms were evaluated under their best configurations to ensure consistency.
### Key Highlights

- **LinearBoost** demonstrates exceptional runtime efficiency while achieving competitive F1 scores:
  - **Breast Cancer Wisconsin (Diagnostic)**: LinearBoost reaches its best F1 score in just **0.30 seconds**, compared to **3.22 seconds** for XGBoost and **9.68 seconds** for CatBoost.
  - **Loan Status Prediction**: LinearBoost runs in **0.44 seconds**, outperforming LightGBM (**28.41 seconds**) and CatBoost (**97.89 seconds**).
- Across most datasets, LinearBoost reduces runtime by up to **98%** compared to XGBoost and LightGBM while maintaining competitive performance.
### Tuned Hyperparameters
#### XGBoost
```python
params = {
    'objective': 'binary:logistic',
    'use_label_encoder': False,
    'n_estimators': trial.suggest_int('n_estimators', 20, 1000),
    'max_depth': trial.suggest_int('max_depth', 1, 20),
    'learning_rate': trial.suggest_uniform('learning_rate', 0.01, 0.7),
    'gamma': trial.suggest_loguniform('gamma', 1e-8, 1.0),
    'min_child_weight': trial.suggest_int('min_child_weight', 1, 10),
    'subsample': trial.suggest_float('subsample', 0.5, 1.0),
    'colsample_bytree': trial.suggest_float('colsample_bytree', 0.5, 1.0),
    'reg_alpha': trial.suggest_loguniform('reg_alpha', 1e-8, 1.0),
    'reg_lambda': trial.suggest_loguniform('reg_lambda', 1e-8, 1.0),
    'enable_categorical': True,
    'eval_metric': 'logloss'
}
```
#### CatBoost
```python
params = {
    'iterations': trial.suggest_int('iterations', 50, 500),
    'depth': trial.suggest_int('depth', 1, 16),
    'learning_rate': trial.suggest_loguniform('learning_rate', 1e-3, 0.5),
    'l2_leaf_reg': trial.suggest_loguniform('l2_leaf_reg', 1e-8, 10.0),
    'random_strength': trial.suggest_loguniform('random_strength', 1e-8, 10.0),
    'bagging_temperature': trial.suggest_loguniform('bagging_temperature', 1e-1, 10.0),
    'border_count': trial.suggest_int('border_count', 32, 255),
    'grow_policy': trial.suggest_categorical('grow_policy', ['SymmetricTree', 'Depthwise', 'Lossguide']),
    'min_data_in_leaf': trial.suggest_int('min_data_in_leaf', 1, 100),
    'rsm': trial.suggest_uniform('rsm', 0.1, 1.0),
    'loss_function': 'Logloss',
    'eval_metric': 'F1',
    'cat_features': categorical_cols
}
```
#### LightGBM
```python
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'boosting_type': trial.suggest_categorical('boosting_type', ['gbdt', 'dart', 'goss']),
    'num_leaves': trial.suggest_int('num_leaves', 2, 256),
    'learning_rate': trial.suggest_loguniform('learning_rate', 1e-3, 0.1),
    'n_estimators': trial.suggest_int('n_estimators', 20, 1000),
    'max_depth': trial.suggest_int('max_depth', 1, 20),
    'min_child_samples': trial.suggest_int('min_child_samples', 1, 100),
    'subsample': trial.suggest_uniform('subsample', 0.5, 1.0),
    'colsample_bytree': trial.suggest_uniform('colsample_bytree', 0.5, 1.0),
    'reg_alpha': trial.suggest_loguniform('reg_alpha', 1e-8, 10.0),
    'reg_lambda': trial.suggest_loguniform('reg_lambda', 1e-8, 10.0),
    'min_split_gain': trial.suggest_loguniform('min_split_gain', 1e-8, 1.0),
    'cat_smooth': trial.suggest_int('cat_smooth', 1, 100),
    'cat_l2': trial.suggest_loguniform('cat_l2', 1e-8, 10.0),
    'verbosity': -1
}
```
#### LinearBoost
```python
params = {
    'n_estimators': trial.suggest_int('n_estimators', 10, 200),
    'learning_rate': trial.suggest_loguniform('learning_rate', 0.01, 1),
    'algorithm': trial.suggest_categorical('algorithm', ['SAMME', 'SAMME.R']),
    'scaler': trial.suggest_categorical('scaler', ['minmax', 'robust', 'quantile-uniform', 'quantile-normal'])
}
```
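
For reference, each `params` block above plugs into an Optuna objective along these lines. This is a minimal sketch of such a harness for LinearBoost, not the authors' exact benchmark code; the import path and dataset are stand-ins:

```python
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

from linearboost import LinearBoostClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)

def objective(trial):
    # Search space mirroring the LinearBoost block above; suggest_float with
    # log=True is the current Optuna spelling of the older suggest_loguniform.
    params = {
        'n_estimators': trial.suggest_int('n_estimators', 10, 200),
        'learning_rate': trial.suggest_float('learning_rate', 0.01, 1.0, log=True),
        'algorithm': trial.suggest_categorical('algorithm', ['SAMME', 'SAMME.R']),
        'scaler': trial.suggest_categorical(
            'scaler', ['minmax', 'robust', 'quantile-uniform', 'quantile-normal']
        ),
    }
    clf = LinearBoostClassifier(**params)
    # Score exactly as in the tables: 10-fold CV, weighted F1.
    return cross_val_score(clf, X, y, cv=10, scoring='f1_weighted').mean()

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=200)
print(study.best_params, study.best_value)
```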
### Why LinearBoost?
LinearBoost's combination of **runtime efficiency** and **high accuracy** makes it a powerful choice for real-world machine learning tasks, particularly in resource-constrained or real-time applications.
### 📰 Featured in:

- [LightGBM Alternatives: A Comprehensive Comparison](https://nightwatcherai.com/blog/lightgbm-alternatives)
  _by Jordan Cole, March 11, 2025_
  _Discusses how LinearBoost outperforms traditional boosting frameworks in terms of speed while maintaining accuracy._
Future Developments
-----------------------------

The following features are not supported in the current version but are planned for future releases:

- Supporting categorical variables
- Adding regression
Reference Paper
-----------------------------
The paper is written by Hamidreza Keshavarz (Independent Researcher, Berlin, Germany) and Reza Rawassizadeh (Department of Computer Science, Metropolitan College, Boston University, United States). It will be available soon.
License
-------
This project is licensed under the terms of the MIT license. See [LICENSE](https://github.com/LinearBoost/linearboost-classifier/blob/main/LICENSE) for additional details.
## Acknowledgments

Some portions of this code are adapted from the scikit-learn project (https://scikit-learn.org), which is licensed under the BSD 3-Clause License. See the `licenses/` folder for details. The modifications and additions made to the original code are licensed under the MIT License © 2025 Hamidreza Keshavarz, Reza Rawassizadeh.

The original code from scikit-learn is available at the [scikit-learn GitHub repository](https://github.com/scikit-learn/scikit-learn).

Special Thanks to:

- **Mehdi Samsami** – for software engineering, refactoring, and ensuring compatibility.