feature-engineering 2.1.4

- **Home page**: https://github.com/knowusuboaky/feature_engineering
- **Summary**: Unleash the Power of Your Data with Feature Engineering: The Ultimate Python Library for Machine Learning Preprocessing and Enhancement
- **Upload time**: 2024-04-09 02:00:21
- **Author**: Kwadwo Daddy Nyame Owusu - Boakye
- **Requires Python**: >=3.6
- **Keywords**: machine learning, data preprocessing, feature engineering, data transformation, data cleaning, outlier handling, imputation, scaling, one-hot encoding, feature selection, variance threshold, correlation filtering, embedded methods, wrapper methods, class imbalance, sampling techniques, hyperparameter tuning, model optimization, categorical data processing, numeric data processing, date feature extraction, custom transformers, Python, data science, predictive modeling, classification, regression, ML workflows, data analysis, data enrichment, model performance improvement
# Feature Engineering

Unleash the full potential of your data with the Feature Engineering library, the ultimate Python toolkit designed to streamline and enhance your machine learning preprocessing and feature engineering workflows. Whether you're dealing with classification, regression, or any ML challenge, this library equips you with a robust set of tools to efficiently process numeric, categorical, and date features, tackle outliers, and engineer impactful new features.

## Further Description
Transform your machine learning workflows with Feature Engineering, the Python library designed to elevate your data preparation process. This cutting-edge tool streamlines the often cumbersome tasks of preprocessing and feature engineering, enabling you to unlock the full potential of your data with ease. Whether you're tackling classification, regression, or any other machine learning challenge, Feature Engineering equips you with a robust set of functionalities to efficiently handle numeric, categorical, and date features, manage outliers, and engineer new, impactful features.

Crafted with both novices and seasoned data scientists in mind, Feature Engineering offers an intuitive, flexible interface for custom transformations, advanced date processing, and dynamic feature selection strategies. From handling imbalances with sophisticated sampling techniques to optimizing your models through hyperparameter tuning, this library is your all-in-one solution for preparing your data for predictive modeling.

Dive into a world where data preprocessing and feature engineering are no longer barriers but catalysts for success. Elevate your machine learning projects with Feature Engineering and turn your data into a competitive advantage.

## Features
- **Data Preprocessing**: Simplify the often complex tasks of data cleaning, normalization, and transformation.
- **Feature Engineering**: Automatically extract and select the most relevant features for your models.
- **Handling Class Imbalance**: Utilize sophisticated sampling techniques to address class imbalances in your dataset.
- **Hyperparameter Tuning**: Optimize your machine learning models with integrated hyperparameter tuning capabilities.
- **Custom Transformations**: Apply custom data transformations with ease, tailored to your unique dataset requirements.

## Process Map
<img src="https://github.com/knowusuboaky/feature_engineering/blob/main/README%20file/mermaid%20figure%20-1.png?raw=true" width="850" height="700" alt="Process map of the Feature Engineering pipeline">

## Installation

You can install Feature Engineering via pip:

``` bash
pip install feature_engineering==2.1.4
```

## Load Package
``` python
from feature_engineering import FeatureEngineering
```


## Usage

``` python
# Import necessary modules
import numpy as np
import pandas as pd

from feature_engineering import FeatureEngineering, Cleaner

# Load your dataset
data = pd.read_csv('your_dataset.csv')

# Deal with missing values in your target variable
cleaned_data = Cleaner(data,
                       'target_column',
                       method=['categorical', 'missing'])

# Initialize the FeatureEngineering object with dataset and configuration
fe = FeatureEngineering(cleaned_data, 'target_column',
                        numeric_columns=None, categorical_columns=None,
                        date_columns=[], handle_outliers=None,
                        imputation_strategy='mean', task='classification',
                        n_features_to_select=None, threshold_correlation=0.9,
                        filter_method=True, selection_strategy='embedded',
                        sampling_strategies=None, include_holiday=None,
                        custom_transformers=None, hyperparameter_tuning=None,
                        evaluation_metric='accuracy', verbose=True,
                        min_class_size=None, remainder='passthrough')

# Run the entire preprocessing and feature engineering pipeline
df_preprocessed, selected_features = fe.run()

# Display the preprocessed dataframe
df_preprocessed

# Display the selected features
selected_features

# Now your data is ready for model training
```

## Function Explanation

### Cleaner
The `Cleaner` function is designed for cleaning and optionally transforming a target variable within a dataset. It is flexible, allowing users to specify the treatment of the target variable based on its data type (`numerical` or `categorical`) and the desired action for handling missing values or transforming the data. The function parameters and the method options are explained below:

#### Function Parameters
`data` (pd.DataFrame): This is the dataset containing the target variable that you want to clean or transform. It should be passed as a pandas DataFrame.

`target_column` (str): This is the name of the column within the DataFrame that you intend to clean or transform. It specifies the target variable.

`method` (list): This is a two-element list where the first element specifies the data type and the second specifies the action for handling missing values or transforming the data. The default is `['auto', 'auto']`, meaning both the data type and the action are determined automatically from the data.

#### Method Options
The `method` parameter is a list with two elements: [`data_type`, `action`].

First Element (`data_type`): Specifies the expected data type of the target column. Options include:

- `'numerical'`: Indicates that the target variable is numeric (e.g., integers, floats).
- `'categorical'`: Indicates that the target variable is categorical (e.g., strings, categories).
- `'auto'`: The function automatically infers the data type from the column's content: if its dtype is `int64` or `float64`, it is treated as numerical; otherwise, it is treated as categorical.

Second Element (`action`): Specifies the action to take for handling missing values or transforming the target variable. Options vary depending on whether the target is `numerical` or `categorical`:

- **For numerical data**:
  - `'mean'`: Fills missing values with the mean of the column.
  - `'median'`: Fills missing values with the median of the column.
  - `'zero'`: Fills missing values with zero (0).
  - `'drop'`: Drops rows where the target column is missing.

- **For categorical data**:
  - `'mode'`: Fills missing values with the mode (most frequent value) of the column.
  - `'missing'`: Fills missing values with the string 'Missing', explicitly marking them as missing.
  - `'encode'`: Applies one-hot encoding to the column, transforming it into multiple binary columns that indicate the presence or absence of each category, including a separate column for missing values.
  - `'drop'`: As with numerical data, drops rows where the target column is missing.

#### Examples of Method Picks
`method=['numerical', 'mean']`: For a numerical target column, fills missing values with the column's mean. This is a common approach for handling missing data in numerical columns, assuming the data is roughly normally distributed or the mean is a reasonable estimate for missing values.

`method=['categorical', 'encode']`: For a categorical target column, performs one-hot encoding, creating a new binary column for each category (including missing values as a category). This transformation is useful for preparing categorical data for the many machine learning models that require numerical input.

`method=['auto', 'drop']`: Automatically detects the data type of the target column and drops any rows where the target is missing. This approach is data-type-agnostic and useful when preserving only complete cases matters for the analysis or modeling.

The flexibility of the method argument allows for tailored data preprocessing steps that can be adjusted based on the nature of the data and the requirements of subsequent analysis or modeling tasks.
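
A minimal sketch of these picks in code, assuming (as in the Usage section above) that `Cleaner` returns the cleaned DataFrame; the toy column names are hypothetical:

``` python
import numpy as np
import pandas as pd
from feature_engineering import Cleaner

df = pd.DataFrame({
    'price': [10.0, np.nan, 12.5, 14.0],   # numerical target with a gap
    'label': ['yes', 'no', None, 'yes'],   # categorical target with a gap
})

# Fill missing numerical targets with the column mean
df_mean = Cleaner(df, 'price', method=['numerical', 'mean'])

# One-hot encode a categorical target (missing values become their own column)
df_encoded = Cleaner(df, 'label', method=['categorical', 'encode'])

# Infer the dtype and drop rows where the target is missing
df_complete = Cleaner(df, 'label', method=['auto', 'drop'])
```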

### FeatureEngineering
Let's delve into detailed explanations of each argument in the `FeatureEngineering` class to understand their roles and implications in the feature engineering process:

#### Function Parameters
`data` (DataFrame): This is the primary dataset containing both `features` (independent variables) and the `target` (dependent variable). This DataFrame is the starting point of the feature engineering process, where all transformations, imputations, and selections will be applied.

`target` (string): The name of the column in data that represents the variable you are trying to predict. This column is excluded from transformations and is used to guide supervised learning tasks like classification or regression.

`numeric_columns` (list of strings, optional): A list specifying the columns in data that should be treated as numeric features. These columns are subject to scaling and numerical `imputation`. If not provided, the class automatically identifies columns of numeric data types (int64, float64) as numeric features.

`categorical_columns` (list of strings, optional): Specifies the columns in data that are categorical. These columns will undergo categorical `imputation` (for missing values) and `encoding` (transforming text or categorical labels into numeric form). If left unspecified, the class automatically selects columns of data types object and category as categorical.

`date_columns` (list of strings, optional): Identifies columns that contain date information. The class can expand these columns into multiple derived features such as `day of the month`, `month`, `year`, `day of the week`, and `week of the year`, enriching the dataset with potentially useful temporal information.
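
The class derives these features internally; the exact output column names are not documented here. This plain-pandas snippet only illustrates the kind of expansion described:

``` python
import pandas as pd

dates = pd.DataFrame({'signup_date': pd.to_datetime(['2024-01-15', '2024-03-02'])})

# Roughly the derived features described above
dates['day']          = dates['signup_date'].dt.day
dates['month']        = dates['signup_date'].dt.month
dates['year']         = dates['signup_date'].dt.year
dates['day_of_week']  = dates['signup_date'].dt.dayofweek
dates['week_of_year'] = dates['signup_date'].dt.isocalendar().week
```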

`handle_outliers` (list, optional): Dictates the approach for detecting and handling outliers in numeric data. The first element specifies the detection method (`'IQR'` for Interquartile Range or `'Z-score'`), and the second element defines the handling strategy (`'remove'` to exclude outliers, `'impute'` to replace outliers with a central tendency measure, or `'cap'` to limit outliers to a specified range).
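
To make the options concrete, here is what `['IQR', 'cap']` amounts to conceptually; this is a hand-rolled sketch, not the library's internal code:

``` python
import pandas as pd

def iqr_cap(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Cap values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return s.clip(lower=q1 - k * iqr, upper=q3 + k * iqr)

# In FeatureEngineering itself you would simply pass:
# handle_outliers = ['IQR', 'cap']
```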

`imputation_strategy` (string or dictionary, optional): Determines how missing values should be filled in. It can be a single string applicable to all columns (e.g., `'mean'`, `'median'`, `'most_frequent'`) or a dictionary mapping column names to specific imputation strategies, allowing for tailored imputation across different types of features.
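
For instance, either form is accepted (the column names here are hypothetical):

``` python
# One strategy for every column:
imputation_strategy = 'median'

# Or per-column strategies:
imputation_strategy = {
    'age': 'median',
    'income': 'mean',
    'city': 'most_frequent',
}
```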

`task` (string): Specifies the machine learning task, either `'classification'` or `'regression'`. This influences decisions related to model selection, feature selection techniques, and evaluation metrics appropriate for the predictive modeling task at hand.

`n_features_to_select` (int, optional): The number of features to retain after feature selection. If not set, no limit is applied to the number of features selected. This parameter is particularly useful when aiming to reduce dimensionality to a specific number of features.

`threshold_correlation` (float): Sets a threshold for identifying multicollinearity among features. Features with a pairwise correlation higher than this threshold are candidates for removal, helping to mitigate the adverse effects of multicollinearity on model performance.
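
As a sketch of what such a correlation filter does (generic pandas, not the library's internal code):

``` python
import numpy as np
import pandas as pd

def highly_correlated(df: pd.DataFrame, threshold: float = 0.9) -> list:
    """Columns whose absolute correlation with an earlier column exceeds the threshold."""
    corr = df.corr().abs()
    # Keep only the upper triangle so each pair is counted once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    return [col for col in upper.columns if (upper[col] > threshold).any()]
```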

`filter_method` (boolean): Enables or disables the application of filter methods in feature selection, such as removing features with `low variance` or `high correlation`. Setting this to `True` activates these filter methods, while `False` bypasses them.

`selection_strategy` (string): Chooses the strategy for feature selection. `'embedded'` refers to methods that integrate feature selection as part of the model training process (e.g., using model coefficients or importances), whereas `'wrapper'` involves selecting features by evaluating model performance across subsets of features.
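
The library does not specify which estimator drives its selection, but in scikit-learn terms the two strategies look roughly like this:

``` python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectFromModel

X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# 'embedded': selection driven by model importances learned during fitting
embedded = SelectFromModel(RandomForestClassifier(random_state=42)).fit(X, y)

# 'wrapper': selection by repeatedly refitting the model on feature subsets
wrapper = RFE(RandomForestClassifier(random_state=42), n_features_to_select=5).fit(X, y)
```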

`sampling_strategies` (list of dictionaries, optional): Outlines one or more strategies to address class imbalance through sampling. Each dictionary specifies a sampling technique (e.g., `'SMOTE'`, `'ADASYN'`, `'RandomUndersampling'`,`'ClusterCentroids'`, `'SMOTEENN'`, `'SMOTETomek'`) and its parameters. This allows for sophisticated handling of imbalanced datasets to improve model fairness and accuracy.

`include_holiday` (tuple, optional): If provided, adds a binary feature indicating whether a date falls on a public holiday. The tuple should contain the column name containing dates and the country code to reference the correct holiday calendar.
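
For example (the date column name is hypothetical, and `'US'` assumes ISO-style country codes for the holiday calendar):

``` python
# Add a binary is-holiday feature derived from the 'order_date' column
include_holiday = ('order_date', 'US')
```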

`custom_transformers` (dictionary, optional): Maps column names to custom transformer objects for applying specific transformations to selected columns. This enables highly customized preprocessing steps tailored to the unique characteristics of certain features.

`hyperparameter_tuning` (dictionary, optional): Specifies the configuration for hyperparameter tuning, including the tuning method (`'grid_search'` or `'random_search'`), parameter grid, and other settings like cross-validation folds. This facilitates the optimization of model parameters for improved performance.

`evaluation_metric` (string): The metric used to evaluate model performance during feature selection and hyperparameter tuning. The choice of metric should align with the machine learning task and the specific objectives of the modeling effort (e.g., `'accuracy'`, `'f1'`, `'r2'`).

`verbose` (boolean): Controls the verbosity of the class's operations. When set to True, progress updates and informational messages are displayed throughout the feature engineering process, offering insights into the steps being performed.

`min_class_size` (int, optional): Specifies the minimum size of the smallest class in a classification task. This parameter can influence the choice of sampling strategies and ensure that cross-validation splits are made in a way that respects the class distribution.

`remainder` (string, optional): Determines how columns not explicitly mentioned in `numeric_columns` or `categorical_columns` are treated. `'passthrough'` includes these columns without changes, while `'drop'` excludes them from the processed dataset.

These arguments collectively offer extensive control over the feature engineering process, allowing users to tailor preprocessing, feature selection, and model optimization steps to their specific dataset and modeling goals.
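
Putting several of these arguments together, a configuration might look like the following sketch; all column names are hypothetical, and every keyword used is one documented above:

``` python
fe = FeatureEngineering(
    cleaned_data, 'churned',
    numeric_columns=['age', 'balance'],
    categorical_columns=['plan', 'region'],
    date_columns=['signup_date'],
    handle_outliers=['IQR', 'cap'],
    imputation_strategy={'age': 'median', 'plan': 'most_frequent'},
    task='classification',
    threshold_correlation=0.9,
    selection_strategy='embedded',
    sampling_strategies=[{'name': 'SMOTE', 'random_state': 42}],
    include_holiday=('signup_date', 'US'),
    evaluation_metric='f1',
)
df_preprocessed, selected_features = fe.run()
```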

#### Method Options
- **`data`**:
   - **Options**: Any `pandas.DataFrame` with the dataset for preprocessing.

- **`target`**:
   - **Options**: Column name (string) in `data` that represents the dependent variable.

- **`numeric_columns`**:
   - **Options**:
     - `None`: Auto-select columns with data types `int64` and `float64`.
     - List of strings: Specific column names as numeric features.

- **`categorical_columns`**:
   - **Options**:
     - `None`: Auto-select columns with data types `object` and `category`.
     - List of strings: Specific column names as categorical features.

- **`date_columns`**:
   - **Options**:
     - `[]`: No date processing.
     - List of strings: Column names with date information for feature expansion.

- **`handle_outliers`**:
   - **Options**:
     - `None`: No outlier handling.
     - `[method, strategy]`:
       - `method`: `'IQR'`, `'Z-score'`.
       - `strategy`: `'remove'`, `'impute'`, `'cap'`.

- **`imputation_strategy`**:
   - **Options**:
     - String: `'mean'`, `'median'`, `'most_frequent'`, `'constant'`.
     - Dictionary: Maps columns to specific imputation strategies.

- **`task`**:
   - **Options**:
     - `'classification'`
     - `'regression'`

- **`n_features_to_select`**:
   - **Options**:
     - `None`: No limit on the number of features.
     - Integer: Specific number of features to retain.

- **`threshold_correlation`**:
    - **Options**:
      - Float (0 to 1): Cutoff for considering features as highly correlated.

- **`filter_method`**:
    - **Options**:
      - `True`: Apply filter methods for feature selection.
      - `False`: Do not apply filter methods.

- **`selection_strategy`**:
    - **Options**:
      - `'embedded'`: Model-based feature selection. 
      - `'wrapper'`: Performance-based feature selection.

- **`sampling_strategies`**:
    - **Options**:
      - `None`: No sampling for class imbalance.
      - List of dictionaries: Specifies sampling techniques and parameters.
      - Supported techniques: `'SMOTE'`, `'ADASYN'`, `'RandomUndersampling'`, `'ClusterCentroids'`, `'SMOTEENN'`, and `'SMOTETomek'`.
      - Example:
      ```python
      sampling_strategies = [
          {'name': 'SMOTE', 'random_state': 42, 'k_neighbors': 2}  # parameters for SMOTE
      ]
      ```

- **`include_holiday`**:
    - **Options**:
      - `None`: No holiday indicator.
      - Tuple: `(date_column_name, country_code)` for holiday feature.

- **`custom_transformers`**:
    - **Options**:
      - `{}`: No custom transformations.
      - Dictionary: Maps column names to custom transformer objects.
      - Example:
      ```python
      from sklearn.base import BaseEstimator, TransformerMixin

      class SquareTransformer(BaseEstimator, TransformerMixin):
          def fit(self, X, y=None):
              return self

          def transform(self, X):
              return X ** 2

      # Now set:
      custom_transformers = {'numeric_column': SquareTransformer()}
      ```
      - Another example:
      ```python
      import numpy as np
      import pandas as pd
      from sklearn.base import BaseEstimator, TransformerMixin

      class LogTransformer(BaseEstimator, TransformerMixin):
          def fit(self, X, y=None):
              return self  # nothing to fit

          def transform(self, X):
              # Ensure X is a DataFrame
              X = pd.DataFrame(X)
              # Apply the transformation only to numeric columns
              for col in X.select_dtypes(include=['float64', 'int64']).columns:
                  # Clip negatives to zero before log1p; adjust this to your data
                  X[col] = np.log1p(np.maximum(0, X[col]))
              return X

      # Now set:
      custom_transformers = {'numeric_column': LogTransformer()}
      ```

- **`hyperparameter_tuning`**:
    - **Options**:
      - `None`: No hyperparameter tuning.
      - Dictionary: Specifies tuning method, parameter grid, and settings.
      - Example:
      ```python
      hyperparameter_tuning = {
          'method': 'random_search',   # 'grid_search' or 'random_search'
          'param_grid': {              # hyperparameter grid or distributions
              'n_estimators': [100, 200, 300],
              'max_depth': [5, 10, 15, None],
          },
          'cv': 5,             # number of cross-validation folds
          'random_state': 42,  # seed for reproducibility
          'n_iter': 10         # parameter settings sampled (random search only)
      }
      ```

- **`evaluation_metric`**:
    - **Options**:
      - Classification: `'accuracy'`, `'precision'`, `'recall'`, `'f1'`, etc.
      - Regression: `'r2'`, `'neg_mean_squared_error'`, `'neg_mean_absolute_error'`, etc.

- **`verbose`**:
    - **Options**:
      - `True`: Display progress and informational messages.
      - `False`: Suppress messages.

- **`min_class_size`**:
    - **Options**:
      - `None`: Auto-determined.
      - Integer: Specifies the minimum size of any class.

- **`remainder`**:
    - **Options**:
      - `'passthrough'`: Include unspecified columns without changes.
      - `'drop'`: Exclude these columns from the processed dataset.

## Ideal Uses of the Feature Engineering Library

The Feature Engineering library is crafted to significantly enhance machine learning workflows through sophisticated preprocessing and feature engineering capabilities. Here are some prime scenarios where this library shines:

### Automated Data Preprocessing
- **Data Cleaning**: Automates the cleanup of messy datasets, efficiently handling missing values, outliers, and incorrect entries.
- **Data Transformation**: Facilitates seamless application of transformations such as scaling, normalization, or tailored transformations to specific data distributions.

### Feature Extraction and Engineering
- **Date Features**: Extracts and engineers meaningful features from date and time columns, crucial for time-series analysis or models relying on temporal context.
- **Text Data**: Engineers features from text data, including sentiment scores, word counts, or TF-IDF values, enhancing the dataset's dimensionality for ML algorithms.

### Handling Categorical Data
- **Encoding**: Transforms categorical variables into machine-readable formats, using techniques like one-hot encoding, target encoding, or embeddings.
- **Dimensionality Reduction**: Applies methods to reduce the dimensionality of high-cardinality categorical features, aiming to improve model performance.

### Dealing with Class Imbalance
- **Resampling Techniques**: Implements under-sampling, over-sampling, and hybrid methods to tackle class imbalance, enhancing model robustness.
- **Custom Sampling Strategies**: Allows experimentation with custom sampling strategies tailored to the dataset and problem specifics.

### Advanced Feature Selection
- **Filter Methods**: Employs variance thresholds and correlation matrices to eliminate redundant or irrelevant features.
- **Wrapper Methods**: Utilizes methods like recursive feature elimination to pinpoint the most predictive features.
- **Embedded Methods**: Leverages models with inherent feature importance metrics for feature selection.

### Model Optimization
- **Hyperparameter Tuning Integration**: Seamlessly integrates with hyperparameter tuning processes for simultaneous optimization of preprocessing steps and model parameters.
- **Pipeline Compatibility**: Ensures compatibility with scikit-learn pipelines, facilitating experimentation with various preprocessing and modeling workflows.

### Scalability and Flexibility
- **Custom Transformers**: Supports the creation and integration of custom transformers for unique preprocessing needs, offering unparalleled flexibility.
- **Scalability**: Designed to handle datasets of various sizes and complexities efficiently, from small academic datasets to large-scale industrial data.

### Interdisciplinary Projects
- **Cross-Domain Applicability**: Its versatile feature engineering capabilities make it suitable for a wide range of domains, including finance, healthcare, marketing, and NLP.

### Educational Use
- **Learning Tool**: Acts as an invaluable resource for students and professionals eager to delve into feature engineering and preprocessing techniques, offering hands-on experience.

### Research and Development
- **Experimental Prototyping**: Aids in the rapid prototyping of models within research settings, allowing researchers to concentrate on hypothesis testing and model innovation.

By providing a comprehensive suite of preprocessing and feature engineering tools, the Feature Engineering library aims to be an indispensable asset in enhancing the efficiency and efficacy of machine learning projects, democratizing advanced data manipulation techniques for practitioners across a spectrum of fields.

## Contributing
Contributions to the Feature Engineering library are highly appreciated! Whether it's bug fixes, feature enhancements, or documentation improvements, your contributions can help make the library even more powerful and user-friendly for the community. Feel free to open issues, submit pull requests, or suggest new features on the project's GitHub repository.

## Documentation & Examples
For documentation and usage examples, visit the GitHub repository: https://github.com/knowusuboaky/feature_engineering

**Author**: Kwadwo Daddy Nyame Owusu - Boakye\
**Email**: kwadwo.owusuboakye@outlook.com\
**License**: MIT
