<div align="center">

**A high-performance pandas-like DataFrame library powered by Polars**
*Combine the familiar pandas API with Polars' blazing-fast performance*
</div>
---
## ✨ Features
- 🐼 **Pandas-like API** - Use familiar pandas syntax without learning a new library
- ⚡ **Polars Backend** - Leverage Polars' optimized engine for maximum performance
- 🔄 **Lazy Evaluation** - Optimize queries with lazy operations before execution
- 📊 **Comprehensive I/O** - Read/write CSV, Parquet, JSON, and Excel files
- 🎯 **Automatic Fallback** - Seamless fallback to pandas for unimplemented methods
- 🔧 **Type Safety** - Support for pandas-like type casting and schema inference
## 🎯 Why nitro-pandas?
**nitro-pandas** bridges the gap between pandas' user-friendly API and Polars' exceptional performance. If you already know pandas but need more speed, nitro-pandas lets you keep the familiar syntax while the heavy lifting runs on Polars.
### Performance Comparison
| Operation | pandas | nitro-pandas (Polars) | Speedup |
|-----------|--------|---------------------|---------|
| Large CSV Read | 10s | 2s | **5x faster** |
| GroupBy Aggregation | 5s | 0.5s | **10x faster** |
| Filter Operations | 3s | 0.3s | **10x faster** |
*Results may vary based on data size and hardware*
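The table is illustrative rather than a formal benchmark. If you want to check the gap on your own data, a minimal timing sketch along these lines (the `timed` helper and the `large_data.csv` path are placeholders, not part of the library) compares the two readers directly:
```python
import time

import pandas as pd
import nitro_pandas as npd

PATH = "large_data.csv"  # placeholder: any reasonably large CSV on disk

def timed(label, fn):
    """Run fn once and print its wall-clock time."""
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

# Same operation in both libraries; results depend heavily on file size and hardware.
timed("pandas read_csv", lambda: pd.read_csv(PATH))
timed("nitro-pandas read_csv", lambda: npd.read_csv(PATH))
```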
## 📦 Installation
```bash
# Using uv (recommended)
uv add nitro-pandas

# Using pip
pip install nitro-pandas
```
### Requirements
- **Python 3.11+**
- **Dependencies** (automatically installed):
  - `polars>=1.30.0` - High-performance DataFrame engine
  - `pandas>=2.2.3` - For fallback methods
  - `fastexcel>=0.7.0` - Fast Excel reading
  - `openpyxl>=3.1.5` - Excel file support
  - `pyarrow>=20.0.0` - Parquet file support
## 🚀 Quick Start
### Basic Usage
```python
import nitro_pandas as npd
# Create a DataFrame (pandas-like syntax)
df = npd.DataFrame({
    'name': ['Alice', 'Bob', 'Charlie'],
    'age': [25, 30, 35],
    'city': ['Paris', 'London', 'New York']
})
# Access columns (returns pandas Series for compatibility)
ages = df['age']
print(ages > 30) # Boolean Series
# Filter data
filtered = df.loc[df['age'] > 30]
print(filtered)
```
### Reading Files
```python
# Read CSV
df = npd.read_csv('data.csv')
# Read with lazy evaluation (optimized for large files)
lf = npd.read_csv_lazy('large_data.csv')
df = lf.query('id > 1000').collect()
# Read other formats
df_parquet = npd.read_parquet('data.parquet')
df_excel = npd.read_excel('data.xlsx')
df_json = npd.read_json('data.json')
```
### Data Operations
```python
# GroupBy operations (pandas-like syntax, Polars backend)
result = df.groupby('city')['age'].mean()
print(result)
# Multi-column groupby
result = df.groupby(['city', 'category'])['value'].sum()
# Aggregations with dictionaries
result = df.groupby('category').agg({
    'value': 'mean',
    'count': 'sum'
})
# Sorting and filtering
df_sorted = df.sort_values('age', ascending=False)
df_filtered = df.query("age > 25 and city == 'Paris'")
```
### Writing Files
```python
# Write to various formats
df.to_csv('output.csv')
df.to_parquet('output.parquet')
df.to_json('output.json')
df.to_excel('output.xlsx')
```
## 📚 API Reference
### DataFrame Operations
#### Creation
```python
# From dictionary
df = npd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
# From Polars DataFrame
df = npd.DataFrame(pl.DataFrame({'a': [1, 2, 3]}))
# Empty DataFrame
df = npd.DataFrame()
```
#### Indexing
```python
# Column selection
df['column_name'] # Returns pandas Series
df[['col1', 'col2']] # Returns DataFrame
# Boolean filtering
df[df['age'] > 30] # Returns DataFrame
# Label-based indexing
df.loc[df['age'] > 30, 'name'] # Returns Series
df.loc[0:5, ['name', 'age']] # Returns DataFrame
# Position-based indexing
df.iloc[0:5, 0:2] # Returns DataFrame
```
#### Transformations
```python
# Type casting (pandas-like types)
df = df.astype({'id': 'int64', 'name': 'str'})
# Rename columns
df = df.rename(columns={'old_name': 'new_name'})
# Drop rows/columns
df = df.drop(labels=[0, 1], axis=0) # Drop rows
df = df.drop(labels=['col1'], axis=1) # Drop columns
# Fill null values
df = df.fillna({'column': 0})
# Sort values
df = df.sort_values('age', ascending=False)
```
### I/O Functions
#### CSV
```python
# Eager reading
df = npd.read_csv('file.csv',
                  sep=',',
                  usecols=['col1', 'col2'],
                  dtype={'id': 'int64'})
# Lazy reading
lf = npd.read_csv_lazy('file.csv', n_rows=1000)
df = lf.collect()
```
#### Parquet
```python
# Eager reading
df = npd.read_parquet('file.parquet',
                      columns=['col1', 'col2'],
                      n_rows=1000)
# Lazy reading
lf = npd.read_parquet_lazy('file.parquet')
df = lf.collect()
```
#### Excel
```python
# Eager reading
df = npd.read_excel('file.xlsx',
                    sheet_name=0,
                    usecols=['col1', 'col2'],
                    nrows=1000)
# Lazy reading
lf = npd.read_excel_lazy('file.xlsx', sheet_name='Sheet1')
df = lf.collect()
```
#### JSON
```python
# Eager reading
df = npd.read_json('file.json',
                   dtype={'id': 'int64'},
                   n_rows=1000)
# Lazy reading
lf = npd.read_json_lazy('file.json', lines=True)
df = lf.collect()
```
### LazyFrame Operations
```python
# Create lazy frame
lf = npd.read_csv_lazy('large_file.csv')
# Chain operations (optimized before execution)
result = (lf
          .query('age > 30')
          .groupby('city')
          .agg({'value': 'mean'}))
# Execute query
df = result.collect()
# Sort after collection if needed
df = df.sort_values('value', ascending=False)
```
## 🔄 Migration from pandas
Migrating from pandas to nitro-pandas is straightforward:
```python
# Before (pandas)
import pandas as pd
df = pd.read_csv('data.csv')
result = df.groupby('category')['value'].mean()
# After (nitro-pandas)
import nitro_pandas as npd
df = npd.read_csv('data.csv')
result = df.groupby('category')['value'].mean()
```
Most pandas operations work the same way! The main differences:
- **Single column selection** (`df['col']`) returns a pandas Series (not a nitro-pandas Series) to maintain compatibility with pandas expressions and boolean indexing
- **Comparison operations** (`df > 2`) return pandas DataFrames for boolean indexing compatibility (both behaviors are shown in the sketch after this list)
- **Unimplemented methods**: Automatic fallback to pandas is available at **both the DataFrame instance level and the package level**:
  ```python
  # ✅ Works: fallback on a DataFrame instance
  df = npd.DataFrame({'a': [1, 2, 3]})
  result = df.describe()  # Falls back to the pandas DataFrame method

  # ✅ Works: fallback at the package level
  import pandas as pd
  df_pd = pd.DataFrame({'a': [1, 2, 1], 'b': ['x', 'y', 'x']})
  result = npd.get_dummies(df_pd)  # Falls back to the pandas module function
  result = npd.date_range('2024-01-01', periods=5)  # Falls back to pandas
  ```
  Note: instance-level methods such as `describe()` are only available on DataFrame objects, not as package-level functions.
- **Mixed types in columns**: Unlike pandas, Polars (and thus nitro-pandas) does **not** allow mixed types within a single column. Each column must have a consistent type. If your pandas DataFrame has mixed types in a column, Polars will coerce them to a common type (usually `object`/string) or raise an error.
  ```python
  # ❌ Works in pandas but NOT in Polars/nitro-pandas: mixed int, str, and float in one column
  pd.DataFrame({'col': [1, 'text', 3.5]})

  # In nitro-pandas, the values are coerced to a common type (strings here) or an error is raised
  npd.DataFrame({'col': [1, 'text', 3.5]})
  ```
- **No `inplace` parameter**: Polars operations are always immutable (return new DataFrames), so nitro-pandas does **not** support the `inplace=True` parameter found in pandas. All operations return new DataFrame objects.
  ```python
  # ❌ This works in pandas but NOT in nitro-pandas
  df.drop(columns=['col'], inplace=True)  # inplace not supported

  # ✅ Always assign the result
  df = df.drop(labels=['col'], axis=1)  # Returns a new DataFrame
  ```
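To make the first two differences concrete, here is a minimal sketch of the return types described above (the column names and values are arbitrary):
```python
import pandas as pd
import nitro_pandas as npd

df = npd.DataFrame({
    'age': [25, 30, 35],
    'city': ['Paris', 'London', 'Paris']
})

# Single-column selection hands back a pandas Series...
ages = df['age']
assert isinstance(ages, pd.Series)

# ...so ordinary pandas comparisons produce a boolean Series,
# which can be used directly to filter the nitro-pandas DataFrame.
over_28 = df[ages > 28]
print(over_28)
```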
## 🏗️ Project Structure
```
nitro-pandas/
├── nitro_pandas/
│   ├── __init__.py        # Package initialization
│   ├── dataframe.py       # DataFrame implementation
│   ├── lazyframe.py       # LazyFrame implementation
│   └── io/
│       ├── __init__.py    # IO module exports
│       ├── csv.py         # CSV I/O
│       ├── parquet.py     # Parquet I/O
│       ├── json.py        # JSON I/O
│       └── excel.py       # Excel I/O
├── tests/
│   ├── test_dataframe.py  # DataFrame tests
│   ├── test_groupby.py    # GroupBy tests
│   ├── test_io.py         # I/O tests
│   └── helpers.py         # Test utilities
├── pyproject.toml         # Project configuration
└── README.md              # This file
```
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
### Development Setup
```bash
# Clone repository
git clone https://github.com/yourusername/nitro-pandas.git
cd nitro-pandas

# Install development dependencies
uv sync --dev

# Run tests
uv run python tests/test_runner.py
```
## 📝 License
This project is licensed under the **MIT License** - see the [LICENSE](LICENSE) file for details.
The MIT License is a permissive open-source license that allows anyone to:
- ✅ Use the software for any purpose (commercial or personal)
- ✅ Modify the software
- ✅ Distribute the software
- ✅ Sublicense the software
**In short: Everyone can use it freely!**
## 🙏 Acknowledgments
- [Polars](https://www.pola.rs/) - For the high-performance DataFrame engine
- [pandas](https://pandas.pydata.org/) - For the API inspiration and fallback support
## 📧 Contact
For questions, suggestions, or support, please open an issue on GitHub.
---
<div align="center">
**Made with ❤️ for the Python data science community**
⭐ Star this repo if you find it useful!
</div>