csv-mcp-server

Name: csv-mcp-server
Version: 1.0.0
Summary: MCP server for comprehensive CSV file operations with pandas-based tools
Upload time: 2025-08-13 06:53:17
Requires Python: >=3.10
License: MIT
Keywords: csv, data-analysis, data-manipulation, data-profiling, data-quality, data-validation, fastmcp, mcp, model-context-protocol, outlier-detection, pandas
Requirements: fastmcp>=2.11.3, pandas>=2.2.3, numpy>=2.1.3, pydantic>=2.10.4, aiofiles>=24.1.0, python-dateutil>=2.9.0, httpx>=0.27.0, openpyxl>=3.1.5, pyarrow>=17.0.0, tabulate>=0.9.0, numexpr>=2.10.0, bottleneck>=1.4.0, pytz>=2024.2
# CSV Editor - AI-Powered CSV Processing via MCP

[![Python](https://img.shields.io/badge/Python-3.10%2B-blue)](https://www.python.org/)
[![MCP](https://img.shields.io/badge/MCP-Compatible-green)](https://modelcontextprotocol.io/)
[![License](https://img.shields.io/badge/License-MIT-yellow)](LICENSE)
[![FastMCP](https://img.shields.io/badge/Built%20with-FastMCP-purple)](https://github.com/jlowin/fastmcp)
[![Pandas](https://img.shields.io/badge/Powered%20by-Pandas-150458)](https://pandas.pydata.org/)

**Transform how AI assistants work with CSV data.** CSV Editor is a high-performance MCP server that gives Claude, ChatGPT, and other AI assistants powerful data manipulation capabilities through simple commands.

## ๐ŸŽฏ Why CSV Editor?

### The Problem
AI assistants struggle with complex data operations: they can read files, but they lack tools for filtering, transforming, analyzing, and validating CSV data efficiently.

### The Solution  
CSV Editor bridges this gap by providing AI assistants with 40+ specialized tools for CSV operations, turning them into powerful data analysts that can:
- Clean messy datasets in seconds
- Perform complex statistical analysis
- Validate data quality automatically
- Transform data with natural language commands
- Track all changes with undo/redo capabilities

### Key Differentiators
| Feature | CSV Editor | Traditional Tools |
|---------|-----------|------------------|
| **AI Integration** | Native MCP protocol | Manual operations |
| **Auto-Save** | Automatic with strategies | Manual save required |
| **History Tracking** | Full undo/redo with snapshots | Limited or none |
| **Session Management** | Multi-user isolated sessions | Single user |
| **Data Validation** | Built-in quality scoring | Separate tools needed |
| **Performance** | Handles GB+ files with chunking | Memory limitations |

## โšก Quick Demo

```python
# Your AI assistant can now do this:
"Load the sales data and remove duplicates"
"Filter for Q4 2024 transactions over $10,000"  
"Calculate correlation between price and quantity"
"Fill missing values with the median"
"Export as Excel with the analysis"

# All with automatic history tracking and undo capability!
```

## ๐Ÿš€ Quick Start (2 minutes)

### Fastest Installation (Recommended)

```bash
# Install uv if needed (one-time setup)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone and run
git clone https://github.com/santoshray02/csv-editor.git
cd csv-editor
uv sync
uv run csv-editor
```

### Configure Your AI Assistant

<details>
<summary><b>Claude Desktop</b> (Click to expand)</summary>

Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):

```json
{
  "mcpServers": {
    "csv-editor": {
      "command": "uv",
      "args": ["tool", "run", "csv-editor"],
      "env": {
        "CSV_MAX_FILE_SIZE": "1073741824"
      }
    }
  }
}
```
</details>

<details>
<summary><b>Other Clients</b> (Continue, Cline, Windsurf, Zed)</summary>

See [MCP_CONFIG.md](MCP_CONFIG.md) for detailed configuration.

</details>

## ๐Ÿ’ก Real-World Use Cases

### ๐Ÿ“Š Data Analyst Workflow
```python
# Morning: Load yesterday's data
session = load_csv("daily_sales.csv")

# Clean: Remove duplicates and fix types
remove_duplicates(session_id)
change_column_type("date", "datetime")
fill_missing_values(strategy="median", columns=["revenue"])

# Analyze: Get insights
get_statistics(columns=["revenue", "quantity"])
detect_outliers(method="iqr", threshold=1.5)
get_correlation_matrix(min_correlation=0.5)

# Report: Export cleaned data
export_csv(format="excel", file_path="clean_sales.xlsx")
```
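The `detect_outliers(method="iqr", threshold=1.5)` call above refers to the standard interquartile-range method. A minimal, self-contained sketch of what IQR detection does (illustrative pandas code, not the server's actual implementation):

```python
import pandas as pd

def iqr_outliers(series: pd.Series, threshold: float = 1.5) -> pd.Series:
    """Return values outside [Q1 - t*IQR, Q3 + t*IQR]."""
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - threshold * iqr, q3 + threshold * iqr
    return series[(series < lower) | (series > upper)]

revenue = pd.Series([10, 12, 11, 13, 12, 100])
outliers = iqr_outliers(revenue)  # flags the 100
```

With the default threshold of 1.5, only values far outside the bulk of the distribution are flagged; raising the threshold makes detection more conservative.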

### ๐Ÿญ ETL Pipeline
```python
# Extract from multiple sources
load_csv_from_url("https://api.example.com/data.csv")

# Transform with complex operations
filter_rows(conditions=[
    {"column": "status", "operator": "==", "value": "active"},
    {"column": "amount", "operator": ">", "value": 1000}
])
add_column(name="quarter", formula="Q{(month-1)//3 + 1}")
group_by_aggregate(group_by=["quarter"], aggregations={
    "amount": ["sum", "mean"],
    "customer_id": "count"
})

# Load to different formats
export_csv(format="parquet")  # For data warehouse
export_csv(format="json")     # For API
```

### ๐Ÿ” Data Quality Assurance
```python
# Validate incoming data
validate_schema(schema={
    "customer_id": {"type": "integer", "required": True},
    "email": {"type": "string", "pattern": r"^[^@]+@[^@]+\.[^@]+$"},
    "age": {"type": "integer", "min": 0, "max": 120}
})

# Quality scoring
quality_report = check_data_quality()
# Returns: overall_score, missing_data%, duplicates, outliers

# Anomaly detection
anomalies = find_anomalies(methods=["statistical", "pattern"])
```

## ๐ŸŽจ Core Features

### Data Operations
- **Load & Export**: CSV, JSON, Excel, Parquet, HTML, Markdown
- **Transform**: Filter, sort, group, pivot, join
- **Clean**: Remove duplicates, handle missing values, fix types
- **Calculate**: Add computed columns, aggregations

### Analysis Tools  
- **Statistics**: Descriptive stats, correlations, distributions
- **Outliers**: IQR, Z-score, custom thresholds
- **Profiling**: Complete data quality reports
- **Validation**: Schema checking, quality scoring

### Productivity Features
- **Auto-Save**: Never lose work with configurable strategies
- **History**: Full undo/redo with operation tracking
- **Sessions**: Multi-user support with isolation
- **Performance**: Stream processing for large files
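Stream processing of large files generally means reading the CSV in fixed-size chunks rather than loading it whole. A minimal sketch of the idea in pandas (illustrative only; the server's internals may differ), using a chunk size matching the documented `CSV_CHUNK_SIZE` default:

```python
import io
import pandas as pd

CHUNK_SIZE = 10_000  # documented CSV_CHUNK_SIZE default

def sum_column_chunked(csv_source, column: str, chunk_size: int = CHUNK_SIZE):
    """Aggregate a column by streaming the CSV in chunks,
    so memory use stays bounded regardless of file size."""
    total, rows = 0.0, 0
    for chunk in pd.read_csv(csv_source, chunksize=chunk_size):
        total += chunk[column].sum()
        rows += len(chunk)
    return total, rows

# Tiny in-memory example (a real call would pass a file path):
csv_text = "revenue\n" + "\n".join(str(i) for i in range(5))
total, rows = sum_column_chunked(io.StringIO(csv_text), "revenue", chunk_size=2)
```

Because each chunk is a regular DataFrame, any per-chunk pandas operation (filtering, type fixes, partial aggregates) composes the same way.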

## ๐Ÿ“š Available Tools

<details>
<summary><b>Complete Tool List</b> (40+ tools)</summary>

### I/O Operations
- `load_csv` - Load from file
- `load_csv_from_url` - Load from URL
- `load_csv_from_content` - Load from string
- `export_csv` - Export to various formats
- `get_session_info` - Session details
- `list_sessions` - Active sessions
- `close_session` - Cleanup

### Data Manipulation
- `filter_rows` - Complex filtering
- `sort_data` - Multi-column sort
- `select_columns` - Column selection
- `rename_columns` - Rename columns
- `add_column` - Add computed columns
- `remove_columns` - Remove columns
- `update_column` - Update values
- `change_column_type` - Type conversion
- `fill_missing_values` - Handle nulls
- `remove_duplicates` - Deduplicate

### Analysis
- `get_statistics` - Statistical summary
- `get_column_statistics` - Column stats
- `get_correlation_matrix` - Correlations
- `group_by_aggregate` - Group operations
- `get_value_counts` - Frequency counts
- `detect_outliers` - Find outliers
- `profile_data` - Data profiling

### Validation
- `validate_schema` - Schema validation
- `check_data_quality` - Quality metrics
- `find_anomalies` - Anomaly detection

### Auto-Save & History
- `configure_auto_save` - Setup auto-save
- `get_auto_save_status` - Check status
- `undo` / `redo` - Navigate history
- `get_history` - View operations
- `restore_to_operation` - Time travel

</details>

## โš™๏ธ Configuration

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `CSV_MAX_FILE_SIZE` | 1GB | Maximum file size |
| `CSV_SESSION_TIMEOUT` | 3600s | Session timeout |
| `CSV_CHUNK_SIZE` | 10000 | Processing chunk size |
| `CSV_AUTO_SAVE` | true | Enable auto-save |
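For illustration, a server consuming these variables might read them with the documented defaults roughly like this (a hypothetical sketch; `read_config` is not part of the package's API):

```python
import os

def read_config() -> dict:
    """Read the documented environment variables, falling back to defaults."""
    return {
        "max_file_size": int(os.environ.get("CSV_MAX_FILE_SIZE", 1024 ** 3)),  # 1 GB
        "session_timeout": int(os.environ.get("CSV_SESSION_TIMEOUT", 3600)),   # seconds
        "chunk_size": int(os.environ.get("CSV_CHUNK_SIZE", 10_000)),
        "auto_save": os.environ.get("CSV_AUTO_SAVE", "true").lower() == "true",
    }

os.environ["CSV_CHUNK_SIZE"] = "50000"  # override one setting
cfg = read_config()
```

Note that `CSV_MAX_FILE_SIZE` is given in bytes, which is why the Claude Desktop config above sets it to `1073741824` (1 GB).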

### Auto-Save Strategies

CSV Editor automatically saves your work with configurable strategies:

- **Overwrite** (default) - Update original file
- **Backup** - Create timestamped backups
- **Versioned** - Maintain version history
- **Custom** - Save to specified location

```python
# Configure auto-save
configure_auto_save(
    strategy="backup",
    backup_dir="/backups",
    max_backups=10
)
```

## ๐Ÿ› ๏ธ Advanced Installation Options

<details>
<summary><b>Alternative Installation Methods</b></summary>

### Using pip
```bash
git clone https://github.com/santoshray02/csv-editor.git
cd csv-editor
pip install -e .
```

### Using pipx (Global)
```bash
pipx install git+https://github.com/santoshray02/csv-editor.git
```

### From PyPI (Coming Soon)
```bash
uv pip install csv-editor
# or
pip install csv-editor
```

</details>

## ๐Ÿงช Development

### Running Tests
```bash
uv run test           # Run tests
uv run test-cov       # With coverage
uv run all-checks     # Format, lint, type-check, test
```

### Project Structure
```
csv-editor/
โ”œโ”€โ”€ src/csv_editor/   # Core implementation
โ”‚   โ”œโ”€โ”€ tools/        # MCP tool implementations
โ”‚   โ”œโ”€โ”€ models/       # Data models
โ”‚   โ””โ”€โ”€ server.py     # MCP server
โ”œโ”€โ”€ tests/            # Test suite
โ”œโ”€โ”€ examples/         # Usage examples
โ””โ”€โ”€ docs/            # Documentation
```

## ๐Ÿค Contributing

We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

### Quick Contribution Guide
1. Fork the repository
2. Create a feature branch
3. Make your changes with tests
4. Run `uv run all-checks`
5. Submit a pull request

## ๐Ÿ“ˆ Roadmap

- [ ] SQL query interface
- [ ] Real-time collaboration
- [ ] Advanced visualizations
- [ ] Machine learning integrations
- [ ] Cloud storage support
- [ ] Performance optimizations for 10GB+ files

## ๐Ÿ’ฌ Support

- **Issues**: [GitHub Issues](https://github.com/santoshray02/csv-editor/issues)
- **Discussions**: [GitHub Discussions](https://github.com/santoshray02/csv-editor/discussions)
- **Documentation**: [Wiki](https://github.com/santoshray02/csv-editor/wiki)

## ๐Ÿ“„ License

MIT License - see [LICENSE](LICENSE) file

## ๐Ÿ™ Acknowledgments

Built with:
- [FastMCP](https://github.com/jlowin/fastmcp) - Fast Model Context Protocol
- [Pandas](https://pandas.pydata.org/) - Data manipulation
- [NumPy](https://numpy.org/) - Numerical computing

---

**Ready to supercharge your AI's data capabilities?** [Get started in 2 minutes โ†’](#-quick-start-2-minutes)
            
