# pandas-type-detector
🔍 **Intelligent DataFrame Type Detection with Locale Awareness**
A robust, production-ready library for automatically detecting and converting pandas DataFrame column types with sophisticated locale-aware parsing, confidence scoring, and enhanced text filtering capabilities.
[Python 3.7+](https://www.python.org/downloads/) · [Tests](#testing) · [MIT License](https://opensource.org/licenses/MIT)
## 🚀 Key Features
- **🌍 Locale-Aware Parsing**: Native support for PT-BR and EN-US number formats, dates, and boolean values
- **🎯 Smart Text Filtering**: Advanced algorithms prevent text containing numbers from being misclassified as numeric
- **📊 Confidence Scoring**: Get reliability scores for each type detection decision
- **🛡️ Robust Error Handling**: Configurable strategies for handling conversion errors
- **⚡ Performance Optimized**: Intelligent sampling and early-exit strategies for large datasets
- **🧩 Modular Architecture**: Extensible design for adding new data types and locales
- **✅ Production Tested**: Successfully handles complex real-world data scenarios
## 📦 Installation
```bash
pip install pandas-type-detector
```
## 🎯 Quick Start
```python
import pandas as pd
from pandas_type_detector import TypeDetectionPipeline
# Sample data with mixed formats
data = {
'revenue': ['1.234,56', '2.890,00', '543,21'], # PT-BR currency format
'quantity': ['10', '25', '8'], # Integers
'active': ['Sim', 'Não', 'Sim'], # PT-BR booleans
'date': ['2025-01-15', '2025-02-20', '2025-03-10'], # ISO dates
'description': ['(31) Product A', '(45) Service B', '(12) Item C'] # Text with numbers
}
df = pd.DataFrame(data)
print("Original dtypes:")
print(df.dtypes)
# All columns are 'object' initially
# Initialize pipeline with Portuguese (Brazil) locale
pipeline = TypeDetectionPipeline(locale="pt-br", on_error="coerce")
# Automatically detect and convert types
df_converted = pipeline.fix_dataframe_dtypes(df)
print("\nConverted dtypes:")
print(df_converted.dtypes)
# Output:
# revenue float64 ← Correctly parsed PT-BR format
# quantity Int64 ← Detected as integer
# active boolean ← Portuguese booleans converted
# date datetime64[ns] ← ISO dates parsed
# description object ← Text with numbers kept as text
```
## 🌐 Locale Support
### 🇧🇷 PT-BR (Portuguese Brazil)
- **Decimal separator**: `,` (comma) → `1.234,56` becomes `1234.56`
- **Thousands separator**: `.` (dot) → `1.000.000,00`
- **Currency symbols**: `R$`, `BRL`
- **Boolean values**: `Sim`/`Não`, `Verdadeiro`/`Falso`, `S`/`N`
- **Date formats**: `DD/MM/YYYY`, `YYYY-MM-DD`
### 🇺🇸 EN-US (English United States)
- **Decimal separator**: `.` (dot) → `1,234.56`
- **Thousands separator**: `,` (comma) → `1,000,000.00`
- **Currency symbols**: `$`, `USD`
- **Boolean values**: `True`/`False`, `Yes`/`No`, `Y`/`N`
- **Date formats**: `MM/DD/YYYY`, `YYYY-MM-DD`
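The separator rules above can be sketched as a plain normalization function. This is an illustrative, hypothetical helper (`normalize_number` is not part of the library's API) that assumes the decimal and thousands separators differ:

```python
def normalize_number(raw: str, decimal_sep: str, thousands_sep: str) -> float:
    """Strip currency symbols and locale separators, then parse as a float.

    Simplified sketch: assumes decimal_sep != thousands_sep, since the
    thousands separator is removed before the decimal separator is swapped.
    """
    cleaned = raw.strip()
    for symbol in ("R$", "BRL", "$", "USD"):
        cleaned = cleaned.replace(symbol, "")
    cleaned = cleaned.strip().replace(thousands_sep, "").replace(decimal_sep, ".")
    return float(cleaned)

print(normalize_number("R$ 1.234,56", ",", "."))  # PT-BR → 1234.56
print(normalize_number("$1,234.56", ".", ","))    # EN-US → 1234.56
```

Both locale conventions land on the same canonical float, which is what lets the same detection pipeline serve either locale via configuration.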
## 📚 Advanced Usage
### 🔧 Error Handling Strategies
```python
# Strategy 1: Coerce errors to NaN (default - recommended)
pipeline = TypeDetectionPipeline(locale="en-us", on_error="coerce")
df_safe = pipeline.fix_dataframe_dtypes(df)
# Strategy 2: Raise exceptions on conversion errors
pipeline = TypeDetectionPipeline(locale="en-us", on_error="raise")
try:
df_strict = pipeline.fix_dataframe_dtypes(df)
except ValueError as e:
print(f"Conversion error: {e}")
# Strategy 3: Ignore problematic columns
pipeline = TypeDetectionPipeline(locale="en-us", on_error="ignore")
df_conservative = pipeline.fix_dataframe_dtypes(df)
```
### 🔍 Individual Column Analysis
```python
# Get detailed detection information
result = pipeline.detect_column_type(df['revenue'])
print(f"Detected type: {result.data_type.value}")
print(f"Confidence: {result.confidence:.2%}")
print(f"Locale: {result.metadata['locale']}")
print(f"Parsing details: {result.metadata}")
# Example output:
# Detected type: float
# Confidence: 95.00%
# Locale: pt-br
# Parsing details: {'locale': 'pt-br', 'is_integer': False, 'numeric_count': 3, ...}
```
### 🎛️ Column Selection and Skipping
```python
# Skip specific columns during conversion
df_converted = pipeline.fix_dataframe_dtypes(
df,
skip_columns=['id', 'raw_text', 'keep_as_string']
)
# Skip columns remain as original 'object' type
# Other columns are automatically converted
```
### ⚙️ Performance Tuning
```python
# Optimize for large datasets
pipeline = TypeDetectionPipeline(
locale="pt-br",
sample_size=5000, # Analyze up to 5000 rows per column (default: 1000)
on_error="coerce"
)
# For smaller datasets, use full analysis
pipeline = TypeDetectionPipeline(
locale="en-us",
sample_size=10000 # Effectively analyze all rows for small datasets
)
```
## 🛡️ Smart Text Filtering
One of the key improvements in this library is sophisticated text filtering that prevents common misclassification issues:
```python
# These text values are correctly identified as text, not numeric
problematic_data = pd.Series([
"(31) Week from 28/jul to 3/aug", # Text with numbers
"(45) Product description", # Text with parenthetical numbers
"Order #12345 - Item A", # Mixed text and numbers
"Section 3.1.4 Overview" # Version numbers in text
])
result = pipeline.detect_column_type(problematic_data)
print(result.data_type) # DataType.TEXT (correctly identified as text)
```
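One way such filtering can work — shown here as a simplified, hypothetical sketch, not the library's actual algorithm — is to accept a value as numeric only when the entire string matches a strict locale-aware pattern, so any surrounding text disqualifies it:

```python
import re

# Strict PT-BR numeric shape: optional sign, optional dot-grouped thousands,
# optional comma decimals. Anchored, so the whole value must be a number.
PT_BR_NUMBER = re.compile(r'^-?(\d{1,3}(\.\d{3})*|\d+)(,\d+)?$')

def looks_numeric_ptbr(value: str) -> bool:
    """Return True only when the full trimmed string is a PT-BR number."""
    return bool(PT_BR_NUMBER.fullmatch(value.strip()))

print(looks_numeric_ptbr("1.234,56"))        # True
print(looks_numeric_ptbr("(31) Product A"))  # False
```

Because the pattern must match the full string, values like `"Section 3.1.4 Overview"` or `"Order #12345"` fail even though they contain digits — which is the misclassification the library guards against.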
## 🧪 Testing
The library includes a comprehensive test suite with 17 test cases covering all functionality:
```bash
cd pandas-type-detector
poetry run pytest tests/test.py -v
```
### Test Coverage
- ✅ Numeric detection (integers, floats) for both locales
- ✅ Boolean detection in multiple languages
- ✅ DateTime parsing and conversion
- ✅ Text-with-numbers rejection algorithms
- ✅ Skip columns functionality
- ✅ Error handling strategies
- ✅ Real-world data scenarios
- ✅ Edge cases and boundary conditions
## 📊 Supported Data Types
| Data Type | Description | Example Values |
|-----------|-------------|----------------|
| **Integer** | Whole numbers | `123`, `1.000` (PT-BR), `1,000` (EN-US) |
| **Float** | Decimal numbers | `123,45` (PT-BR), `123.45` (EN-US) |
| **Boolean** | True/False values | `Sim/Não` (PT-BR), `Yes/No` (EN-US) |
| **DateTime** | Date and time | `2025-01-15`, `15/01/2025` |
| **Text** | String data | Any text, including mixed alphanumeric |
## 🔧 Extensibility
### Adding a New Locale
```python
from pandas_type_detector import LOCALES, LocaleConfig
# Add German locale
LOCALES['de-de'] = LocaleConfig(
name='de-de',
decimal_separator=',',
thousands_separator='.',
currency_symbols=['β¬', 'EUR'],
date_formats=[r'^\d{1,2}\.\d{1,2}\.\d{4}$'] # DD.MM.YYYY
)
# Use the new locale
pipeline = TypeDetectionPipeline(locale="de-de")
```
### Creating Custom Detectors
```python
from pandas_type_detector import TypeDetector, DataType, DetectionResult
class EmailDetector(TypeDetector):
def detect(self, series):
# Custom email detection logic
email_pattern = r'^[\w\.-]+@[\w\.-]+\.[\w]+$'
matches = series.str.match(email_pattern).sum()
confidence = matches / len(series)
if confidence >= 0.8:
return DetectionResult(DataType.TEXT, confidence, {"format": "email"})
return DetectionResult(DataType.UNKNOWN, confidence, {})
def convert(self, series):
# Email-specific processing if needed
return series.astype(str)
```
## 🚀 Performance Characteristics
- **Memory Efficient**: Processes columns independently without loading entire dataset
- **Sampling Strategy**: Configurable sampling reduces processing time for large datasets
- **Early Exit**: Stops analysis when high confidence is reached (≥90%)
- **Production Ready**: Optimized for ETL pipelines and data processing workflows
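The sampling and early-exit strategy described above can be illustrated with a minimal sketch. The detector functions and loop here are hypothetical stand-ins, not the library's internal code:

```python
import pandas as pd

def numeric_score(sample: pd.Series) -> float:
    # Confidence = fraction of sampled values that parse as numbers.
    parsed = pd.to_numeric(sample, errors="coerce")
    return float(parsed.notna().mean()) if len(sample) else 0.0

def detect_with_early_exit(series, detectors, sample_size=1000, threshold=0.9):
    # Sampling keeps very long columns cheap to analyze.
    sample = series.dropna().astype(str).head(sample_size)
    best_name, best_conf = "text", 0.0
    for name, score in detectors:
        conf = score(sample)
        if conf >= threshold:   # early exit once confidence is high enough
            return name, conf
        if conf > best_conf:
            best_name, best_conf = name, conf
    return best_name, best_conf

name, conf = detect_with_early_exit(
    pd.Series(["10", "25", "8"]), [("integer", numeric_score)]
)
print(name, conf)  # integer 1.0
```

Running each detector only over a bounded sample, and stopping as soon as one clears the confidence threshold, is what keeps per-column cost roughly constant as row counts grow.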
### Benchmarks
- ✅ Tested with datasets up to 14,607 rows in production
- ✅ Handles complex mixed-format data reliably
- ✅ Minimal performance overhead on modern hardware
## 🤝 Contributing
We welcome contributions! The modular architecture makes it easy to:
1. **Add new locales** - Extend `LOCALES` configuration
2. **Create new detectors** - Inherit from `TypeDetector` base class
3. **Improve algorithms** - Enhance existing detection logic
4. **Add test cases** - Expand the test suite for new scenarios
### Development Setup
```bash
git clone https://github.com/machado000/pandas-type-detector
cd pandas-type-detector
poetry install
poetry run pytest
```
## 📋 Requirements
- **Python**: 3.7+ (tested on 3.7, 3.8, 3.9, 3.10, 3.11, 3.12)
- **pandas**: ≥1.0.0
- **numpy**: ≥1.19.0
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
This library was developed to solve real-world data quality challenges in Brazilian financial and business data processing. It has been successfully deployed in production environments handling complex PT-BR formatted datasets.
Special thanks to the pandas and NumPy communities for providing the foundation that makes this work possible.
## 📞 Support
- **🐛 Bug Reports**: [GitHub Issues](https://github.com/machado000/pandas-type-detector/issues)
- **💡 Feature Requests**: [GitHub Discussions](https://github.com/machado000/pandas-type-detector/discussions)
- **📖 Documentation**: This README and inline code documentation
- **🧪 Examples**: See `tests/test.py` for comprehensive usage examples
---
*Made with ❤️ for the pandas community - Simplifying data type detection across cultures and locales*