# Neuracore Python Client
Neuracore is a robotics and machine learning client library for seamless robot data collection, model deployment, and real-time inference, with comprehensive support for custom data types.
## Features
- Easy robot initialization and connection (URDF and MuJoCo MJCF support)
- Streaming data logging with custom data types
- Model endpoint management (local and remote)
- Real-time policy inference and deployment
- Flexible dataset creation and synchronization
- Open source training infrastructure with Hydra configuration
- Custom algorithm development and upload
- Multi-modal data support (joint positions, velocities, RGB images, language, custom data, and more)
## Installation
```bash
pip install neuracore
```
For training and ML development:
```bash
pip install neuracore[ml]
```
For MuJoCo MJCF support:
```bash
pip install neuracore[mjcf]
```
## Quick Start
Ensure you have an account at [neuracore.app](https://www.neuracore.app/)
### Authentication
```python
import neuracore as nc
# This will save your API key locally
nc.login()
```
### Robot Connection
```python
# Connect to a robot with URDF
nc.connect_robot(
    robot_name="MyRobot",
    urdf_path="/path/to/robot.urdf",
    overwrite=False  # Set to True to overwrite existing robot config
)

# Or connect using MuJoCo MJCF
nc.connect_robot(
    robot_name="MyRobot",
    mjcf_path="/path/to/robot.xml"
)
```
### Data Collection and Logging
#### Basic Data Logging
```python
import time
# Create a dataset for recording
nc.create_dataset(
    name="My Robot Dataset",
    description="Example dataset with multiple data types"
)
# Start recording
nc.start_recording()
# Log various data types with timestamps
t = time.time()
nc.log_joint_positions({'joint1': 0.5, 'joint2': -0.3}, timestamp=t)
nc.log_joint_velocities({'joint1': 0.1, 'joint2': -0.05}, timestamp=t)
nc.log_joint_target_positions({'joint1': 0.6, 'joint2': -0.2}, timestamp=t)
# Log camera data (image_array: an HxWx3 RGB frame, e.g. a NumPy uint8 array)
nc.log_rgb("top_camera", image_array, timestamp=t)
# Log language instructions
nc.log_language("Pick up the red cube", timestamp=t)
# Log custom data
custom_sensor_data = [1.2, 3.4, 5.6]
nc.log_custom_data("force_sensor", custom_sensor_data, timestamp=t)
# Stop recording
nc.stop_recording()
```
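To collect multiple demonstrations into one dataset, the same calls can be wrapped in a per-episode loop. This is a minimal sketch using only the functions shown above, assuming each start/stop recording pair becomes one episode; `read_joint_positions()` and `read_camera()` are hypothetical placeholders for your own hardware interface.

```python
import time

import neuracore as nc

nc.create_dataset(name="My Robot Dataset")

for episode in range(10):
    nc.start_recording()
    for _ in range(100):  # roughly 10 seconds at ~10 Hz
        t = time.time()
        # read_joint_positions() / read_camera() are placeholders for your own drivers
        nc.log_joint_positions(read_joint_positions(), timestamp=t)
        nc.log_rgb("top_camera", read_camera(), timestamp=t)
        time.sleep(0.1)
    nc.stop_recording()
```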
#### Live Data Control
```python
# Stop live data streaming (saves bandwidth, doesn't affect recording)
nc.stop_live_data(robot_name="MyRobot", instance=0)
# Resume live data streaming
nc.start_live_data(robot_name="MyRobot", instance=0)
```
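A common pattern is to pause the live stream during long unattended collection runs and re-enable it only when you want to monitor the robot from the dashboard, for example:

```python
nc.stop_live_data(robot_name="MyRobot", instance=0)   # pause streaming to save bandwidth
nc.start_recording()
# ... log data as usual; recording is unaffected ...
nc.stop_recording()
nc.start_live_data(robot_name="MyRobot", instance=0)  # resume streaming for monitoring
```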
### Dataset Access and Visualization
```python
# Load a dataset
dataset = nc.get_dataset("My Robot Dataset")
# Synchronize data types at a specific frequency
from neuracore.core.nc_types import DataType
synced_dataset = dataset.synchronize(
    frequency=10,  # Hz
    data_types=[DataType.JOINT_POSITIONS, DataType.RGB_IMAGE, DataType.LANGUAGE]
)
print(f"Dataset has {len(synced_dataset)} episodes")
# Access synchronized data
for episode in synced_dataset[:5]:  # First 5 episodes
    for step in episode:
        joint_pos = step.joint_positions
        rgb_images = step.rgb_images
        language = step.language
        # Process your data
```
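As a quick sanity check, you can iterate the synchronized episodes and count how many steps actually carry camera data. This is a minimal sketch; it assumes steps without a frame expose `rgb_images` as `None` or an empty mapping.

```python
for i, episode in enumerate(synced_dataset[:5]):
    n_steps, n_frames = 0, 0
    for step in episode:
        n_steps += 1
        if step.rgb_images:  # only count steps that carry at least one RGB frame
            n_frames += 1
    print(f"Episode {i}: {n_steps} steps, {n_frames} with RGB frames")
```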
### Model Inference
#### Local Model Inference
```python
# Load a trained model locally
policy = nc.policy(train_run_name="MyTrainingJob")
# Or load from file path
# policy = nc.policy(model_file="/path/to/model.nc.zip")
# Set specific checkpoint (optional, defaults to last epoch)
policy.set_checkpoint(epoch=-1)
# Predict actions
predicted_sync_points = policy.predict(timeout=5)
joint_target_positions = [sp.joint_target_positions for sp in predicted_sync_points]
actions = [jtp.numpy() for jtp in joint_target_positions if jtp is not None]
```
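Because `predict` takes no explicit observations, a closed-loop controller alternates between logging the robot's latest state and querying the policy; this sketch assumes the policy consumes the data you are currently logging (consistent with the `NEURACORE_CONSUME_LIVE_DATA` setting below). `read_joint_positions()`, `read_camera()`, and `send_to_robot()` are hypothetical placeholders for your own robot interface.

```python
import time

policy = nc.policy(train_run_name="MyTrainingJob")

for _ in range(200):  # control loop
    t = time.time()
    # Hypothetical hardware hooks: replace with your own drivers
    nc.log_joint_positions(read_joint_positions(), timestamp=t)
    nc.log_rgb("top_camera", read_camera(), timestamp=t)

    predicted_sync_points = policy.predict(timeout=5)
    for sp in predicted_sync_points:
        if sp.joint_target_positions is not None:
            send_to_robot(sp.joint_target_positions.numpy())
```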
#### Remote Model Inference
```python
# Connect to a remote endpoint
try:
    policy = nc.policy_remote_server("MyEndpointName")
    predicted_sync_points = policy.predict(timeout=5)
    # Process predictions...
except nc.EndpointError:
    print("Endpoint not available. Please start it at neuracore.app/dashboard/endpoints")
```
#### Local Server Deployment
```python
# Connect to a local policy server
policy = nc.policy_local_server(train_run_name="MyTrainingJob")
```
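The returned object is used like the other policy handles above, for example `predicted_sync_points = policy.predict(timeout=5)`.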
## Command Line Tools
Neuracore provides several command-line utilities:
### Authentication
```bash
# Interactive login to save API key
nc-login
```
Use the `--email` and `--password` options if you wish to log in non-interactively.
### Organization Management
```bash
# Select your current organization
nc-select-org
```
Use the `--org-name` option if you wish to select the org non-interactively.
### Server Operations
```bash
# Launch local policy server for inference
nc-launch-server --job_id <job_id> --org_id <org_id> [--host <host>] [--port <port>]
# Example:
nc-launch-server --job_id my_job_123 --org_id my_org_456 --host 0.0.0.0 --port 8080
```
**Parameters:**
- `--job_id`: Required. The job ID to run
- `--org_id`: Required. Your organization ID
- `--host`: Optional. Host address (default: 0.0.0.0)
- `--port`: Optional. Port number (default: 8080)
### Algorithm Validation
```bash
# Validate custom algorithms before upload
neuracore-validate /path/to/your/algorithm
```
## Open Source Training
Neuracore includes a comprehensive training infrastructure with Hydra configuration management for local model development.
### Training Structure
```
neuracore/
    ml/
        train.py                   # Main training script
        config/                    # Hydra configuration files
            config.yaml            # Main configuration
            algorithm/             # Algorithm-specific configs
                diffusion_policy.yaml
                act.yaml
                simple_vla.yaml
                cnnmlp.yaml
                ...
            training/              # Training configurations
            dataset/               # Dataset configurations
        algorithms/                # Built-in algorithms
        datasets/                  # Dataset implementations
        trainers/                  # Distributed training utilities
        utils/                     # Training utilities
```
### Training Examples
```bash
# Basic training with Diffusion Policy
python -m neuracore.ml.train algorithm=diffusion_policy dataset_name="my_dataset"
# Train ACT with custom hyperparameters
python -m neuracore.ml.train algorithm=act algorithm.lr=5e-4 algorithm.hidden_dim=1024 dataset_name="my_dataset"
# Auto-tune batch size
python -m neuracore.ml.train algorithm=diffusion_policy batch_size=auto dataset_name="my_dataset"
# Hyperparameter sweeps
python -m neuracore.ml.train --multirun algorithm=cnnmlp algorithm.lr=1e-4,5e-4,1e-3 algorithm.hidden_dim=256,512,1024 dataset_name="my_dataset"
# Multi-modal training with images and language
python -m neuracore.ml.train algorithm=simple_vla dataset_name="my_multimodal_dataset" input_data_types='["joint_positions","rgb_image","language"]'
```
### Configuration Management
```yaml
# config/config.yaml
defaults:
  - algorithm: diffusion_policy
  - training: default
  - dataset: default

# Core parameters
epochs: 100
batch_size: "auto"
seed: 42

# Multi-modal data support
input_data_types:
  - "joint_positions"
  - "rgb_image"
  - "language"
output_data_types:
  - "joint_target_positions"
```
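Any of these fields can also be overridden from the command line using standard Hydra syntax, for example:

```bash
python -m neuracore.ml.train algorithm=act epochs=200 seed=7 dataset_name="my_dataset"
```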
### Training Features
- **Distributed Training**: Multi-GPU support with PyTorch DDP
- **Automatic Batch Size Tuning**: Find optimal batch sizes automatically
- **Memory Monitoring**: Prevent OOM errors with built-in monitoring
- **TensorBoard Integration**: Comprehensive logging and visualization
- **Checkpoint Management**: Automatic saving and resuming
- **Cloud Integration**: Seamless integration with Neuracore SaaS platform
- **Multi-modal Support**: Images, joint states, language, and custom data types
## Custom Algorithm Development
Create custom algorithms by extending the `NeuracoreModel` class:
```python
import torch
from neuracore.ml import NeuracoreModel, BatchedInferenceSamples, BatchedTrainingSamples, BatchedTrainingOutputs
from neuracore.core.nc_types import DataType, ModelInitDescription, ModelPrediction
class MyCustomAlgorithm(NeuracoreModel):
    def __init__(self, model_init_description: ModelInitDescription, **kwargs):
        super().__init__(model_init_description)
        # Your model initialization here

    def forward(self, batch: BatchedInferenceSamples) -> ModelPrediction:
        # Your inference logic
        pass

    def training_step(self, batch: BatchedTrainingSamples) -> BatchedTrainingOutputs:
        # Your training logic
        pass

    def configure_optimizers(self) -> list[torch.optim.Optimizer]:
        # Return list of optimizers
        pass

    @staticmethod
    def get_supported_input_data_types() -> list[DataType]:
        return [DataType.JOINT_POSITIONS, DataType.RGB_IMAGE]

    @staticmethod
    def get_supported_output_data_types() -> list[DataType]:
        return [DataType.JOINT_TARGET_POSITIONS]
```
### Algorithm Upload Options
1. **Open Source Contribution**: Submit a PR to the Neuracore repository
2. **Private Upload**: Upload directly at [neuracore.app](https://neuracore.app) as either:
   - A single Python file with your `NeuracoreModel` class
   - A ZIP file containing your algorithm directory with `requirements.txt` (see the layout sketch below)
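For the ZIP option, a layout along these lines keeps the model class and its dependencies together (illustrative only; the file names are up to you):

```
my_algorithm/
    my_algorithm.py     # defines your NeuracoreModel subclass
    requirements.txt    # extra pip dependencies, if any
```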
## Environment Variables
Configure Neuracore behavior with environment variables (case insensitive, prefixed with `NEURACORE_`):
| Variable | Function | Valid Values | Default |
| -------------------------------------------- | ------------------------------------------ | -------------- | ------------------------------- |
| `NEURACORE_REMOTE_RECORDING_TRIGGER_ENABLED` | Allow remote recording triggers | `true`/`false` | `true` |
| `NEURACORE_PROVIDE_LIVE_DATA` | Enable live data streaming from this node | `true`/`false` | `true` |
| `NEURACORE_CONSUME_LIVE_DATA` | Enable live data consumption for inference | `true`/`false` | `true` |
| `NEURACORE_API_URL` | Base URL for Neuracore platform | URL string | `https://api.neuracore.app/api` |
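For example, a collection node that records but neither streams nor consumes live data could be started like this (`collect.py` stands in for your own entry point):

```bash
export NEURACORE_PROVIDE_LIVE_DATA=false
export NEURACORE_CONSUME_LIVE_DATA=false
python collect.py
```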
## Performance Considerations
### Bandwidth Optimization
- Use appropriate camera resolutions
- Log only necessary joint states
- Maintain consistent joint combinations (max 50 concurrent streams)
- Consider hardware-accelerated H.264 encoding for video
### Processing Optimization
- Enable hardware acceleration for video encoding
- Limit simultaneous dashboard viewers during recording
- Distribute data collection across multiple machines when needed
- Use `nc.stop_live_data()` when live monitoring isn't required
## Documentation
- [Creating Custom Algorithms](./docs/creating_custom_algorithms.md)
- [Performance Limitations](./docs/limitations.md)
- [Examples](./examples/README.md)
## Development Setup
```bash
git clone https://github.com/neuracoreai/neuracore
cd neuracore
pip install -e .[dev,ml]
```
## Testing
```bash
export NEURACORE_API_URL=http://localhost:8000/api
pytest tests/
```
## Contributing
We welcome contributions! Please see our contributing guidelines and submit pull requests for:
- New algorithms and models
- Performance improvements
- Documentation enhancements
- Bug fixes and feature requests