# SqrtSpace SpaceTime for Python
[PyPI](https://badge.fury.io/py/sqrtspace-spacetime) · [Python versions](https://pypi.org/project/sqrtspace-spacetime/) · [License](https://github.com/sqrtspace/sqrtspace-python/blob/main/LICENSE) · [Documentation](https://sqrtspace-spacetime.readthedocs.io/en/latest/?badge=latest)
Memory-efficient algorithms and data structures for Python using Williams' √n space-time tradeoffs.
**Paper Repository**: [github.com/sqrtspace/sqrtspace-paper](https://github.com/sqrtspace/sqrtspace-paper)
## Installation
```bash
pip install sqrtspace-spacetime
```
For ML features:
```bash
pip install sqrtspace-spacetime[ml]
```
For all features:
```bash
pip install sqrtspace-spacetime[all]
```
## Core Concepts
SpaceTime implements theoretical computer science results showing that many algorithms can achieve better memory usage by accepting slightly slower runtime. The key insight is using √n memory instead of n memory, where n is the input size.
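To make the tradeoff concrete, here is a rough illustration of the idea behind √n chunking (a generic sketch, not the library's internals; `sqrt_chunk_size` and `external_sort_sketch` are hypothetical names): sort √n-sized runs, spill each run to disk, then merge the runs lazily so only about √n items are in memory at any point.

```python
import heapq
import math
import pickle
import tempfile

def sqrt_chunk_size(n: int) -> int:
    # Hold ~√n items in memory instead of n, e.g. n = 1_000_000 -> 1_000
    return max(1, math.isqrt(n))

def external_sort_sketch(items):
    """Sort while keeping only ~√n items in memory at a time (illustration only)."""
    n = len(items)
    chunk = sqrt_chunk_size(n)
    run_files = []
    for start in range(0, n, chunk):
        run = sorted(items[start:start + chunk])   # sort one √n-sized run
        f = tempfile.TemporaryFile()
        for item in run:
            pickle.dump(item, f)                   # spill the sorted run to disk
        f.seek(0)
        run_files.append(f)

    def read_run(f):
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return

    # k-way merge reads one item per run at a time
    return heapq.merge(*(read_run(f) for f in run_files))
```

For n = 10,000,000 items this keeps roughly 3,162 items in memory per run instead of all ten million, at the cost of extra disk I/O during the merge.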
### Key Features
- **Memory-Efficient Collections**: Arrays and dictionaries that automatically spill to disk
- **External Algorithms**: Sort and group large datasets using minimal memory
- **Streaming Operations**: Process files larger than RAM with elegant API
- **Auto-Checkpointing**: Resume long computations from where they left off
- **Memory Profiling**: Identify optimization opportunities in your code
- **ML Optimizations**: Reduce neural network training memory by up to 90%
## Quick Start
### Basic Usage
```python
from sqrtspace_spacetime import SpaceTimeArray, external_sort, Stream
# Memory-efficient array that spills to disk
array = SpaceTimeArray(threshold=10000)
for i in range(1000000):
    array.append(i)

# Sort large datasets with minimal memory
huge_list = list(range(10000000, 0, -1))
sorted_data = external_sort(huge_list)  # Uses only √n memory

# Stream processing
Stream.from_csv('huge_file.csv') \
    .filter(lambda row: row['value'] > 100) \
    .map(lambda row: row['value'] * 1.1) \
    .group_by(lambda row: row['category']) \
    .to_csv('processed.csv')
```
## Examples
### Basic Examples
See [`examples/basic_usage.py`](examples/basic_usage.py) for comprehensive examples of:
- SpaceTimeArray and SpaceTimeDict usage
- External sorting and grouping
- Stream processing
- Memory profiling
- Auto-checkpointing
### FastAPI Web Application
Check out [`examples/fastapi-app/`](examples/fastapi-app/) for a production-ready web application featuring:
- Streaming endpoints for large datasets
- Server-Sent Events (SSE) for real-time data
- Memory-efficient CSV exports
- Checkpointed background tasks
- ML model serving with memory constraints
See the [FastAPI example README](examples/fastapi-app/README.md) for detailed documentation.
### Machine Learning Pipeline
Explore [`examples/ml-pipeline/`](examples/ml-pipeline/) for ML-specific patterns:
- Training models on datasets larger than RAM
- Memory-efficient feature extraction
- Checkpointed training loops
- Streaming predictions
- Integration with PyTorch and TensorFlow
See the [ML Pipeline README](examples/ml-pipeline/README.md) for complete documentation.
### Memory-Efficient Collections
```python
from sqrtspace_spacetime import SpaceTimeArray, SpaceTimeDict
# Array that automatically manages memory
array = SpaceTimeArray(threshold=1000) # Keep 1000 items in memory
for i in range(1000000):
    array.append(f"item_{i}")

# Dictionary with LRU eviction to disk
cache = SpaceTimeDict(threshold=10000)
for key, value in huge_dataset:
    cache[key] = expensive_computation(value)
```
### External Algorithms
```python
from sqrtspace_spacetime import external_sort, external_groupby
# Sort 100M items while holding only ~10,000 items (√n) in memory
data = list(range(100_000_000, 0, -1))
sorted_data = external_sort(data)
# Group by with aggregation
sales = [
    {'store': 'A', 'amount': 100},
    {'store': 'B', 'amount': 200},
    # ... millions more
]

by_store = external_groupby(
    sales,
    key_func=lambda x: x['store']
)
# Aggregate with minimal memory
from sqrtspace_spacetime.algorithms import groupby_sum
totals = groupby_sum(
    sales,
    key_func=lambda x: x['store'],
    value_func=lambda x: x['amount']
)
```
### Streaming Operations
```python
from sqrtspace_spacetime import Stream
# Process large files efficiently
Stream.from_csv('sales_2023.csv') \
    .filter(lambda row: row['amount'] > 0) \
    .map(lambda row: {
        'month': row['date'][:7],
        'amount': float(row['amount'])
    }) \
    .group_by(lambda row: row['month']) \
    .to_csv('monthly_summary.csv')
# Chain operations
top_products = Stream.from_jsonl('products.jsonl') \
    .filter(lambda p: p['in_stock']) \
    .sort(key=lambda p: p['revenue'], reverse=True) \
    .take(100) \
    .collect()
```
### Auto-Checkpointing
```python
from sqrtspace_spacetime.checkpoint import auto_checkpoint
@auto_checkpoint(total_iterations=1000000)
def process_large_dataset(data):
    results = []
    for i, item in enumerate(data):
        # Process item
        result = expensive_computation(item)
        results.append(result)

        # Yield state for checkpointing
        yield {'i': i, 'results': results}

    return results
# Automatically resumes from checkpoint if interrupted
results = process_large_dataset(huge_dataset)
```
### Memory Profiling
```python
from sqrtspace_spacetime.profiler import profile, profile_memory
@profile(output_file="profile.json")
def my_algorithm(data):
    # Process data
    return results

# Get detailed memory analysis
result, report = my_algorithm(data)
print(report.summary)

# Simple memory tracking
@profile_memory(threshold_mb=100)
def memory_heavy_function():
    # Alerts if memory usage exceeds threshold
    large_list = list(range(10000000))
    return sum(large_list)
```
### ML Memory Optimization
```python
from sqrtspace_spacetime.ml import MLMemoryOptimizer
import torch.nn as nn
# Analyze model memory usage
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10)
)
optimizer = MLMemoryOptimizer()
profile = optimizer.analyze_model(model, input_shape=(784,), batch_size=32)
# Get optimization plan
plan = optimizer.optimize(profile, target_batch_size=128)
print(plan.explanation)
# Apply optimizations
config = optimizer.get_training_config(plan, profile)
```
## Advanced Features
### Memory Pressure Handling
```python
from sqrtspace_spacetime.memory import MemoryMonitor, LoggingHandler
# Monitor memory pressure
monitor = MemoryMonitor()
monitor.add_handler(LoggingHandler())
# Your arrays automatically respond to memory pressure
array = SpaceTimeArray()
# Arrays spill to disk when memory is low
```
### Configuration
```python
from sqrtspace_spacetime import SpaceTimeConfig
# Global configuration
SpaceTimeConfig.set_defaults(
    memory_limit=2 * 1024**3,  # 2GB
    chunk_strategy='sqrt_n',
    compression='gzip',
    external_storage_path='/fast/ssd/temp'
)
```
### Parallel Processing
```python
from sqrtspace_spacetime.batch import BatchProcessor
processor = BatchProcessor(
    memory_threshold=0.8,
    checkpoint_enabled=True
)

# Process in memory-efficient batches
result = processor.process(
    huge_list,
    lambda batch: [transform(item) for item in batch]
)
print(f"Processed {result.get_success_count()} items")
```
## Real-World Examples
### Processing Large CSV Files
```python
from sqrtspace_spacetime import Stream
from sqrtspace_spacetime.profiler import profile_memory
@profile_memory(threshold_mb=500)
def analyze_sales_data(filename):
    # Stream process to stay under memory limit
    return Stream.from_csv(filename) \
        .filter(lambda row: row['status'] == 'completed') \
        .map(lambda row: {
            'product': row['product_id'],
            'revenue': float(row['price']) * int(row['quantity'])
        }) \
        .group_by(lambda row: row['product']) \
        .sort(key=lambda group: sum(r['revenue'] for r in group[1]), reverse=True) \
        .take(10) \
        .collect()

top_products = analyze_sales_data('sales_2023.csv')
top_products = analyze_sales_data('sales_2023.csv')
```
### Training Large Neural Networks
```python
from sqrtspace_spacetime.ml import MLMemoryOptimizer, GradientCheckpointer
import torch.nn as nn
# Memory-efficient training
def train_large_model(model, train_loader, epochs=10):
    # Analyze memory requirements
    optimizer = MLMemoryOptimizer()
    profile = optimizer.analyze_model(model, input_shape=(3, 224, 224), batch_size=32)

    # Get optimization plan
    plan = optimizer.optimize(profile, target_batch_size=128)

    # Apply gradient checkpointing
    checkpointer = GradientCheckpointer()
    model = checkpointer.apply_checkpointing(model, plan.checkpoint_layers)

    # Train with optimized settings
    for epoch in range(epochs):
        for batch in train_loader:
            # Training loop with automatic memory management
            pass
```
### Data Pipeline with Checkpoints
```python
from sqrtspace_spacetime import Stream
from sqrtspace_spacetime.checkpoint import auto_checkpoint
@auto_checkpoint(total_iterations=1000000)
def process_user_events(event_file):
    processed = 0

    for event in Stream.from_jsonl(event_file):
        # Complex processing
        user_profile = enhance_profile(event)
        recommendations = generate_recommendations(user_profile)

        save_to_database(recommendations)
        processed += 1

        # Checkpoint state
        yield {'processed': processed, 'last_event': event['id']}

    return processed
# Automatically resumes if interrupted
total = process_user_events('events.jsonl')
```
## Performance Benchmarks
| Operation | Standard Python | SpaceTime | Memory Reduction | Time Overhead |
|-----------|----------------|-----------|------------------|---------------|
| Sort 10M integers | 400MB | 20MB | 95% | 40% |
| Process 1GB CSV | 1GB | 32MB | 97% | 20% |
| Group by on 1M rows | 200MB | 14MB | 93% | 30% |
| Neural network training | 8GB | 2GB | 75% | 15% |
## API Reference
### Collections
- `SpaceTimeArray`: Memory-efficient list with disk spillover
- `SpaceTimeDict`: Memory-efficient dictionary with LRU eviction
### Algorithms
- `external_sort()`: Sort large datasets with √n memory
- `external_groupby()`: Group large datasets with √n memory
- `external_join()`: Join large datasets efficiently (general technique sketched below)
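`external_join()` has no worked example above, so here is a generic sketch of the underlying technique: a partitioned (grace) hash join that keeps only one partition pair in memory at a time. This illustrates the approach under stated assumptions, not the package's implementation or call signature; `partitioned_join_sketch` is a hypothetical name.

```python
import pickle
import tempfile
from collections import defaultdict

def partitioned_join_sketch(left, right, key, num_partitions=1000):
    """Generic partitioned hash join: only one partition pair is held in
    memory at a time. Illustration only; not the package's API."""
    def spill(rows):
        # Route each row to an on-disk partition by hash of its join key
        files = [tempfile.TemporaryFile() for _ in range(num_partitions)]
        for row in rows:
            pickle.dump(row, files[hash(key(row)) % num_partitions])
        for f in files:
            f.seek(0)
        return files

    def read(f):
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return

    left_parts, right_parts = spill(left), spill(right)
    for lf, rf in zip(left_parts, right_parts):
        # Build an in-memory index for one left partition only
        index = defaultdict(list)
        for row in read(lf):
            index[key(row)].append(row)
        # Stream the matching right partition against it
        for row in read(rf):
            for match in index.get(key(row), []):
                yield match, row
```

In practice the partition count would be chosen from the input size (for example, near √n) so that each partition fits comfortably in memory.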
### Streaming
- `Stream`: Lazy evaluation stream processing
- `FileStream`: Stream lines from files
- `CSVStream`: Stream CSV rows
- `JSONLStream`: Stream JSON Lines
### Memory Management
- `MemoryMonitor`: Monitor memory pressure
- `MemoryPressureHandler`: Custom pressure handlers
### Checkpointing
- `@auto_checkpoint`: Automatic checkpointing decorator
- `CheckpointManager`: Manual checkpoint control
### ML Optimization
- `MLMemoryOptimizer`: Analyze and optimize models
- `GradientCheckpointer`: Apply gradient checkpointing
### Profiling
- `@profile`: Full profiling decorator
- `@profile_memory`: Memory-only profiling
- `SpaceTimeProfiler`: Programmatic profiling
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## License
Apache License 2.0. See [LICENSE](LICENSE) for details.
## Citation
If you use SpaceTime in your research, please cite:
```bibtex
@software{sqrtspace_spacetime,
  title  = {SqrtSpace SpaceTime: Memory-Efficient Python Library},
  author = {Friedel Jr., David H.},
  year   = {2025},
  url    = {https://github.com/sqrtspace/sqrtspace-python}
}
```
## Links
- [Documentation](https://sqrtspace-spacetime.readthedocs.io)
- [PyPI Package](https://pypi.org/project/sqrtspace-spacetime/)
- [GitHub Repository](https://github.com/sqrtspace/sqrtspace-python)
- [Issue Tracker](https://github.com/sqrtspace/sqrtspace-python/issues)