# FeatureCraft
<div align="center">
<p><strong>Automatic feature engineering with beautiful explanations - zero configuration, maximum transparency!</strong></p>
<p>Intelligent preprocessing pipelines + automatic explanations for every decision + sklearn integration</p>
  <p>
    <a href="https://www.python.org/downloads/">Python 3.9+</a> ·
    <a href="https://opensource.org/licenses/MIT">MIT License</a> ·
    <a href="https://pypi.org/project/featurecraft/">PyPI</a>
  </p>
</div>
## Quick Start
```bash
pip install featurecraft
```
```python
from featurecraft.pipeline import AutoFeatureEngineer
import pandas as pd
# Load data
df = pd.read_csv("data.csv")
X, y = df.drop("target", axis=1), df["target"]
# Fit and transform with automatic explanations!
afe = AutoFeatureEngineer()
X_transformed = afe.fit_transform(X, y, estimator_family="tree")
# Beautifully formatted explanations print automatically, showing:
# - Why each transformation was chosen
# - What columns were affected
# - Configuration parameters used
# - Performance tips and recommendations
```
That's it! Zero configuration, maximum transparency.
## Table of Contents
- [Quick Start](#quick-start)
- [About The Project](#about-the-project)
- [Getting Started](#getting-started)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Usage](#usage)
- [CLI Usage](#cli-usage)
- [Python API](#python-api)
- [Estimator Families](#estimator-families)
- [Configuration](#configuration)
- [Output Artifacts](#output-artifacts)
- [Examples](#examples)
- [Documentation](#documentation)
- [Roadmap](#roadmap)
- [Contributing](#contributing)
- [License](#license)
## About The Project
FeatureCraft is a comprehensive feature engineering library that automates the transformation of raw tabular data into machine-learning-ready features. It provides intelligent preprocessing, feature selection, encoding, and scaling tailored to different estimator families.
### Key Features
✨ **Automatic Feature Engineering**: Intelligent preprocessing pipeline that handles missing values, outliers, categorical encoding, and feature scaling
🔮 **Automatic Explainability**: Beautifully formatted explanations of every transformation decision, printed to the console - understand WHY each preprocessing choice was made, with no extra code!
🔍 **Dataset Analysis**: Comprehensive insights into your data including distributions, correlations, and data quality issues
📊 **Multiple Estimator Support**: Optimized preprocessing for tree-based models, linear models, SVMs, k-NN, and neural networks
🛠️ **Sklearn Integration**: Seamless integration with scikit-learn pipelines and ecosystem
📈 **HTML Reports**: Interactive visualizations and insights reports
⚡ **CLI & Python API**: Choose between command-line interface or programmatic usage
🕐 **Time Series Support**: Optional time-series aware preprocessing
🤖 **AI-Powered Intelligence**: Advanced LLM-driven feature engineering with adaptive optimization
### AI-Powered Intelligence
FeatureCraft includes sophisticated AI components that leverage Large Language Models and machine learning to make intelligent feature engineering decisions:
#### 🤖 AIFeatureAdvisor
Uses Large Language Models (OpenAI, Anthropic, or local models) to analyze your dataset characteristics and recommend optimal feature engineering strategies. It considers:
- Column types and distributions
- Missing value patterns
- Outlier characteristics
- Feature interactions and relationships
- Domain-specific preprocessing requirements
#### 📋 FeatureEngineeringPlanner
Orchestrates the entire feature engineering workflow:
- Analyzes dataset characteristics
- Gets AI recommendations for optimal strategies
- Applies smart optimizations based on data patterns
- Configures preprocessing pipelines automatically
#### 🔄 AdaptiveConfigOptimizer
Learns from model performance feedback to continuously improve feature engineering strategies:
- Tracks performance metrics across different datasets
- Identifies successful patterns and configurations
- Adapts recommendations based on historical results
- Prevents overfitting through intelligent feature selection
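How these components fit together is documented in the API reference. The sketch below is purely illustrative: the class names come from this README, but the import path, constructor arguments, and method names are assumptions, not confirmed FeatureCraft API.
```python
# HYPOTHETICAL sketch -- import path, parameters, and methods below are
# illustrative assumptions; consult the API reference for real signatures.
from featurecraft.intelligence import (  # assumed module path
    AIFeatureAdvisor,
    FeatureEngineeringPlanner,
    AdaptiveConfigOptimizer,
)

# An LLM-backed advisor analyzes dataset characteristics and recommends
# strategies; the planner turns recommendations into a pipeline config;
# the optimizer records downstream model scores as feedback.
advisor = AIFeatureAdvisor(provider="openai")         # assumed parameter
planner = FeatureEngineeringPlanner(advisor=advisor)  # assumed parameter
config = planner.plan(X, y)                           # assumed method

optimizer = AdaptiveConfigOptimizer()
optimizer.record_feedback(config, score=0.91)         # assumed method
```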
### Why FeatureCraft?
* **Automated Workflow**: No need to manually handle different data types and preprocessing steps
* **Automatic Explanations**: See beautifully formatted explanations of every decision - enabled by default, with zero extra code!
* **Best Practices**: Implements proven feature engineering techniques
* **Performance Optimized**: Different preprocessing strategies for different model types
* **Production Ready**: Exports sklearn-compatible pipelines for deployment
* **Comprehensive Analysis**: Deep insights into your dataset characteristics
* **Explainable AI**: Understand why transformations are applied and how they affect your data
* **Transparency**: Rich explanations help build trust and debug preprocessing decisions
## Getting Started
### Prerequisites
- Python 3.9 or higher
- pandas >= 1.5
- scikit-learn >= 1.3
- numpy >= 1.23
### Installation
```bash
pip install featurecraft
```
All features including AI-powered planning, enhanced encoders, SHAP explainability, and schema validation are included by default.
## Usage
### CLI Usage
```bash
# Analyze dataset and generate comprehensive report
featurecraft analyze --input data.csv --target target_column --out artifacts/
# Fit preprocessing pipeline and transform data
featurecraft fit-transform --input data.csv --target target_column --out artifacts/ --estimator-family tree
# Open the generated HTML report (macOS; use xdg-open on Linux)
open artifacts/report.html
```
### Python API
#### Basic Usage with Automatic Explanations
```python
import pandas as pd
from featurecraft.pipeline import AutoFeatureEngineer
from featurecraft.config import FeatureCraftConfig
# Load your data
df = pd.read_csv("your_data.csv")
X, y = df.drop(columns=["target_column"]), df["target_column"]
# Initialize with automatic explanations (enabled by default)
config = FeatureCraftConfig(
explain_transformations=True, # Enable explanations (default)
explain_auto_print=True # Auto-print after fit (default)
)
afe = AutoFeatureEngineer(config=config)
# Fit and transform - explanations print automatically!
Xt = afe.fit_transform(X, y, estimator_family="tree")
# Beautifully formatted explanations appear here, showing:
# - Column classifications
# - Imputation strategies
# - Encoding decisions
# - Scaling choices
# - Feature transformations
# - And WHY each decision was made!
print(f"Transformed {X.shape[1]} features into {Xt.shape[1]} features")
# Export pipeline for production use
afe.export("artifacts")
```
#### Advanced: Analyze + Manual Explanation Control
```python
# Continuing from the basic example above (afe, X, y already defined)
# Analyze dataset first (optional but recommended)
summary = afe.analyze(df, target="target_column")
print(f"Detected task: {summary.task}")
print(f"Found {len(summary.issues)} data quality issues")
# Fit with manual explanation control
config = FeatureCraftConfig(explain_auto_print=False) # Disable auto-print
afe = AutoFeatureEngineer(config=config)
afe.fit(X, y, estimator_family="linear")
# Print or save explanations manually
afe.print_explanation() # Rich console output
afe.save_explanation("artifacts/explanation.md", format="markdown")
afe.save_explanation("artifacts/explanation.json", format="json")
```
### Estimator Families
Choose the preprocessing strategy based on your model type:
| Family | Models | Scaling | Encoding | Best For |
|--------|--------|---------|----------|----------|
| `tree` | XGBoost, LightGBM, Random Forest | None | Label Encoding | Tree-based models |
| `linear` | Linear/Logistic Regression | StandardScaler | One-hot + Target | Linear models |
| `svm` | SVM, SVC | StandardScaler | One-hot | Support Vector Machines |
| `knn` | k-Nearest Neighbors | MinMaxScaler | Label Encoding | Distance-based models |
| `nn` | Neural Networks | MinMaxScaler | Label Encoding | Deep learning |
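As a minimal sketch of pairing a family with a matching model (only `fit_transform` and the `estimator_family` argument are taken from the examples above; the downstream estimator is plain scikit-learn):
```python
from sklearn.linear_model import LogisticRegression
from featurecraft.pipeline import AutoFeatureEngineer

# "linear" applies scaling plus one-hot/target encoding, which suits
# coefficient-based models such as logistic regression.
afe = AutoFeatureEngineer()
Xt = afe.fit_transform(X, y, estimator_family="linear")

model = LogisticRegression(max_iter=1000)
model.fit(Xt, y)
```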
### Configuration
Customize the preprocessing behavior:
```python
from featurecraft.config import FeatureCraftConfig
# Custom configuration
config = FeatureCraftConfig(
low_cardinality_max=15, # Max unique values for low-cardinality features
outlier_share_threshold=0.1, # Threshold for outlier detection
random_state=42, # For reproducible results
# Explainability (enabled by default!)
explain_transformations=True, # Enable detailed explanations (default: True)
explain_auto_print=True, # Auto-print after fit() (default: True)
explain_save_path=None, # Optional: auto-save to file path
)
afe = AutoFeatureEngineer(config=config)
# Note: You can also use AutoFeatureEngineer() without config
# and get automatic explanations by default!
```
### Output Artifacts
The `export()` method creates:
- `pipeline.joblib`: Fitted sklearn Pipeline ready for production
- `metadata.json`: Configuration and processing summary
- `feature_names.txt`: List of all output feature names
- `explanation.md`: Human-readable explanation of transformation decisions (when explanations enabled)
- `explanation.json`: Machine-readable explanation data (when explanations enabled)
The `analyze()` method generates:
- `report.html`: Interactive HTML report with plots, insights, and recommendations
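In production, the exported pipeline can be reloaded with plain joblib and applied to new data - a minimal sketch, assuming `export("artifacts")` was called as above and `new_data.csv` has the same input columns:
```python
import joblib
import pandas as pd

# Reload the fitted sklearn Pipeline written by afe.export("artifacts")
pipeline = joblib.load("artifacts/pipeline.joblib")

# Apply the same fitted transformations to unseen data
new_df = pd.read_csv("new_data.csv")
X_new = pipeline.transform(new_df)
```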
## Examples
Check out the [examples](./examples/) directory for comprehensive usage examples:
- **[01_quickstart.py](./examples/01_quickstart.py)**: Basic usage with automatic explanations on Iris and Wine datasets
- **[02_kaggle_benchmark.py](./examples/02_kaggle_benchmark.py)**: Kaggle dataset benchmarking
- **[03_complex_kaggle_benchmark.py](./examples/03_complex_kaggle_benchmark.py)**: Advanced benchmarking with complex datasets
- **[06_explainability_demo.py](./examples/06_explainability_demo.py)**: Deep dive into explanation features and export formats
Run the quickstart example to see automatic explanations in action:
```bash
python examples/01_quickstart.py
```
This demonstrates automatic feature engineering with beautifully formatted explanations on the Iris and Wine datasets!
## Documentation
### Getting Started
- **[Getting Started](./docs/getting-started.md)**: Quick start guide for new users
- **[API Reference](./docs/api-reference.md)**: Complete Python API documentation
- **[CLI Reference](./docs/cli-reference.md)**: Command-line interface guide
### Configuration & Optimization
- **[Configuration Guide](./docs/configuration.md)**: Comprehensive parameter reference
- **[Optimization Guide](./docs/optimization-guide.md)**: Performance tuning and best practices
### Advanced Topics
- **[Advanced Features](./docs/advanced-features.md)**: Drift detection, leakage prevention, SHAP
- **[Benchmarks](./docs/benchmarks.md)**: Real-world dataset performance results
- **[Troubleshooting](./docs/troubleshooting.md)**: Common issues and solutions
## Roadmap
See the [open issues](https://github.com/featurecraft/featurecraft/issues) for a list of proposed features and known issues.
- [ ] Enhanced time series preprocessing
- [ ] Feature selection algorithms
- [ ] Integration with popular ML frameworks
- [ ] GPU acceleration support
- [ ] Advanced outlier detection methods
- [ ] Automated feature interaction detection
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!
## License
Distributed under the MIT License. See `LICENSE` for more information.
---
<div align="center">
<p><strong>Built with ❤️ for the machine learning community</strong></p>
</div>