# ManipulaPy
<div align="center">
[](https://pypi.org/project/ManipulaPy/)
[](https://www.python.org/downloads/)
[](https://www.gnu.org/licenses/agpl-3.0)


[](https://joss.theoj.org/papers/e0e68c2dcd8ac9dfc1354c7ee37eb7aa)
**A comprehensive, GPU-accelerated Python package for robotic manipulator analysis, simulation, planning, control, and perception.**
[Quick Start](#quick-start) • [Documentation](#documentation) • [Examples](#examples) • [Installation](#installation) • [Contributing](#contributing)
</div>
---
## 🎯 Overview
ManipulaPy is a modern, comprehensive framework that bridges the gap between basic robotics libraries and sophisticated research tools. It provides seamless integration of kinematics, dynamics, control, and perception systems with optional CUDA acceleration for real-time applications.
### Why ManipulaPy?
**🔧 Unified Framework**: Complete integration from low-level kinematics to high-level perception  
**⚡ GPU Accelerated**: CUDA kernels for trajectory planning and dynamics computation  
**🔬 Research Ready**: Mathematical rigor with practical implementation  
**🧩 Modular Design**: Use individual components or the complete system  
**📖 Well Documented**: Comprehensive guides with theoretical foundations  
**🆓 Open Source**: AGPL-3.0 licensed for transparency and collaboration
---
## ✨ Key Features
<table>
<tr>
<td width="50%">
### 🔧 **Core Robotics**
- **Kinematics**: Forward/inverse kinematics with Jacobian analysis
- **Dynamics**: Mass matrix, Coriolis forces, gravity compensation
- **Control**: PID, computed torque, adaptive, robust algorithms
- **Singularity Analysis**: Detect singularities and workspace boundaries
</td>
<td width="50%">
### 🚀 **Advanced Capabilities**
- **Path Planning**: CUDA-accelerated trajectory generation
- **Simulation**: Real-time PyBullet physics simulation
- **Vision**: Stereo vision, YOLO detection, point clouds
- **URDF Processing**: Convert robot models to Python objects
</td>
</tr>
</table>
---
## <a id="quick-start"></a>🚀 Quick Start
### Prerequisites
Before installing ManipulaPy, make sure your system has:
1. **NVIDIA Drivers & CUDA Toolkit**
- `nvcc` on your `PATH` (e.g. via `sudo apt install nvidia-cuda-toolkit` or the [official NVIDIA CUDA installer](https://developer.nvidia.com/cuda-downloads)).
- Verify with:
```bash
nvidia-smi # should list your GPU(s) and driver version
nvcc --version # should print CUDA version
```
2. **cuDNN**
- Download and install cuDNN for your CUDA version from [NVIDIA's cuDNN installation guide](https://docs.nvidia.com/deeplearning/cudnn/installation/latest/).
- Verify headers/libs under `/usr/include` and `/usr/lib/x86_64-linux-gnu` (or your distro's equivalent).
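As a quick sanity check that the cuDNN headers are actually in place, a short Python sketch can parse the version macros from `cudnn_version.h` (the header path is distro-dependent; `/usr/include/cudnn_version.h` is an assumption, not something ManipulaPy requires):

```python
import re
from pathlib import Path

def parse_cudnn_version(header_text):
    """Extract (major, minor, patch) from the contents of cudnn_version.h."""
    vals = {}
    for key in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(rf"#define\s+{key}\s+(\d+)", header_text)
        vals[key] = int(m.group(1)) if m else None
    return vals["CUDNN_MAJOR"], vals["CUDNN_MINOR"], vals["CUDNN_PATCHLEVEL"]

header = Path("/usr/include/cudnn_version.h")  # adjust for your distro
if header.exists():
    print("cuDNN version: %s.%s.%s" % parse_cudnn_version(header.read_text()))
else:
    print("cudnn_version.h not found - check your cuDNN installation")
```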
---
<h2 id="installation">Installation</h2>
```bash
# Basic installation (CPU-only)
pip install ManipulaPy
# With GPU support (CUDA 11.x)
pip install ManipulaPy[gpu-cuda11]
# With GPU support (CUDA 12.x)
pip install ManipulaPy[gpu-cuda12]
# Development installation (with dev extras)
git clone https://github.com/boelnasr/ManipulaPy.git
cd ManipulaPy
pip install -e .[dev]
```
---
### Post-Install Check
After installation, confirm that ManipulaPy can see your GPU:
```bash
# Check that CUDA is available to ManipulaPy
python3 - <<EOF
from ManipulaPy import cuda_kernels

if cuda_kernels.check_cuda_availability():
    props = cuda_kernels.get_gpu_properties()
    print(f"✅ CUDA is available on device: {props['name']} "
          f"({props['multiprocessor_count']} SMs, "
          f"{props['max_threads_per_block']} max threads/block)")
else:
    raise RuntimeError("❌ CUDA not detected or not properly configured in ManipulaPy.")
EOF
```
If you see the ✅ message with your GPU name, you're all set! Otherwise, double-check the CUDA Toolkit and cuDNN installation steps above.
### 30-Second Demo
```python
import numpy as np
from ManipulaPy.urdf_processor import URDFToSerialManipulator
from ManipulaPy.path_planning import OptimizedTrajectoryPlanning
from ManipulaPy.control import ManipulatorController
# Load robot model
try:
    from ManipulaPy.ManipulaPy_data.xarm import urdf_file
except ImportError:
    urdf_file = "path/to/your/robot.urdf"
# Initialize robot
urdf_processor = URDFToSerialManipulator(urdf_file)
robot = urdf_processor.serial_manipulator
dynamics = urdf_processor.dynamics
# Forward kinematics
joint_angles = np.array([0.1, 0.2, -0.3, -0.5, 0.2, 0.1])
end_effector_pose = robot.forward_kinematics(joint_angles)
print(f"End-effector position: {end_effector_pose[:3, 3]}")
# Trajectory planning
joint_limits = [(-np.pi, np.pi)] * 6
planner = OptimizedTrajectoryPlanning(robot, urdf_file, dynamics, joint_limits)
trajectory = planner.joint_trajectory(
    thetastart=np.zeros(6),
    thetaend=joint_angles,
    Tf=5.0, N=100, method=5
)
print(f"✅ Generated {trajectory['positions'].shape[0]} trajectory points")
```
---
## 📚 Core Modules
### 🔧 Kinematics & Dynamics
<details>
<summary><b>Forward & Inverse Kinematics</b></summary>
```python
# Forward kinematics
pose = robot.forward_kinematics(joint_angles, frame="space")
# Inverse kinematics with advanced solver
target_pose = np.eye(4)
target_pose[:3, 3] = [0.5, 0.3, 0.4]
solution, success, iterations = robot.iterative_inverse_kinematics(
    T_desired=target_pose,
    thetalist0=joint_angles,
    eomg=1e-6, ev=1e-6,
    max_iterations=5000,
    plot_residuals=True
)
```
</details>
<details>
<summary><b>Dynamic Analysis</b></summary>
```python
from ManipulaPy.dynamics import ManipulatorDynamics
# Compute dynamics quantities
M = dynamics.mass_matrix(joint_angles)
C = dynamics.velocity_quadratic_forces(joint_angles, joint_velocities)
G = dynamics.gravity_forces(joint_angles, g=[0, 0, -9.81])
# Inverse dynamics: τ = M(q)q̈ + C(q,q̇) + G(q)
torques = dynamics.inverse_dynamics(
    joint_angles, joint_velocities, joint_accelerations,
    [0, 0, -9.81], np.zeros(6)
)
```
</details>
### 🛤️ Path Planning & Control
<details>
<summary><b>Advanced Trajectory Planning</b></summary>
```python
# GPU-accelerated trajectory planning
planner = OptimizedTrajectoryPlanning(
    robot, urdf_file, dynamics, joint_limits,
    use_cuda=True,          # Enable GPU acceleration
    cuda_threshold=200,     # Auto-switch threshold
    enable_profiling=True
)
# Joint space trajectory
trajectory = planner.joint_trajectory(
    thetastart=start_config,
    thetaend=end_config,
    Tf=5.0, N=1000, method=5  # Quintic time scaling
)

# Cartesian space trajectory
cartesian_traj = planner.cartesian_trajectory(
    Xstart=start_pose, Xend=end_pose,
    Tf=3.0, N=500, method=3  # Cubic time scaling
)
# Performance monitoring
stats = planner.get_performance_stats()
print(f"GPU usage: {stats['gpu_usage_percent']:.1f}%")
```
</details>
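The `method=3` / `method=5` arguments select cubic or quintic time scaling. For reference, here is a minimal standalone sketch of those polynomials, following the common textbook convention (this mirrors the intent of the API but is not ManipulaPy's internal code):

```python
import numpy as np

def time_scaling(t, Tf, method=5):
    """Normalized path parameter s(t) in [0, 1] for cubic (3) or quintic (5) scaling."""
    u = np.clip(t / Tf, 0.0, 1.0)
    if method == 3:
        return 3 * u**2 - 2 * u**3               # zero velocity at both endpoints
    elif method == 5:
        return 10 * u**3 - 15 * u**4 + 6 * u**5  # zero velocity and acceleration
    raise ValueError("method must be 3 or 5")

# Interpolate a joint path: theta(t) = start + s(t) * (end - start)
s = time_scaling(2.5, 5.0, method=5)
print(s)  # 0.5 at the trajectory midpoint
```

Quintic scaling costs a few more operations per sample but gives smoother torque profiles, which is why the examples use it for joint-space trajectories.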
<details>
<summary><b>Advanced Control Systems</b></summary>
```python
from ManipulaPy.control import ManipulatorController
controller = ManipulatorController(dynamics)
# Auto-tuned PID control using Ziegler-Nichols
Ku, Tu = 50.0, 0.5 # Ultimate gain and period
Kp, Ki, Kd = controller.ziegler_nichols_tuning(Ku, Tu, kind="PID")
# Computed torque control
control_torque = controller.computed_torque_control(
    thetalistd=desired_positions,
    dthetalistd=desired_velocities,
    ddthetalistd=desired_accelerations,
    thetalist=current_positions,
    dthetalist=current_velocities,
    g=[0, 0, -9.81], dt=0.01,
    Kp=Kp, Ki=Ki, Kd=Kd
)
# Adaptive control
adaptive_torque = controller.adaptive_control(
    thetalist=current_positions,
    dthetalist=current_velocities,
    ddthetalist=desired_accelerations,
    g=[0, 0, -9.81], Ftip=np.zeros(6),
    measurement_error=position_error,
    adaptation_gain=0.1
)
```
</details>
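The classic Ziegler-Nichols closed-loop table that `ziegler_nichols_tuning` presumably follows can be sketched independently (standard textbook gains from the ultimate gain and period; not necessarily the package's exact formula):

```python
def zn_pid_gains(Ku, Tu):
    """Classic Ziegler-Nichols PID gains from ultimate gain Ku and period Tu."""
    Kp = 0.6 * Ku            # proportional gain
    Ki = 1.2 * Ku / Tu       # Kp / (Tu / 2), integral gain
    Kd = 0.075 * Ku * Tu     # Kp * (Tu / 8), derivative gain
    return Kp, Ki, Kd

print(zn_pid_gains(50.0, 0.5))  # Kp=30.0, Ki=120.0, Kd=1.875
```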
### 🌐 Simulation & Visualization
<details>
<summary><b>Real-time PyBullet Simulation</b></summary>
```python
from ManipulaPy.sim import Simulation
# Create simulation environment
sim = Simulation(
    urdf_file_path=urdf_file,
    joint_limits=joint_limits,
    time_step=0.01,
    real_time_factor=1.0
)
# Initialize and run
sim.initialize_robot()
sim.initialize_planner_and_controller()
sim.add_joint_parameters() # GUI sliders
# Execute trajectory
final_pose = sim.run_trajectory(trajectory["positions"])
# Manual control with collision detection
sim.manual_control()
```
</details>
<details>
<summary><b>Singularity & Workspace Analysis</b></summary>
```python
from ManipulaPy.singularity import Singularity
analyzer = Singularity(robot)
# Singularity detection
is_singular = analyzer.singularity_analysis(joint_angles)
condition_number = analyzer.condition_number(joint_angles)
# Manipulability ellipsoid
analyzer.manipulability_ellipsoid(joint_angles)
# Workspace visualization with GPU acceleration
analyzer.plot_workspace_monte_carlo(
    joint_limits=joint_limits,
    num_samples=10000
)
```
</details>
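The Monte Carlo workspace idea is easy to sketch outside the library: sample joint configurations uniformly within limits and push them through forward kinematics. Shown here for a hypothetical planar 2-link arm as a stand-in for the robot's FK, not a ManipulaPy API:

```python
import numpy as np

def planar_2link_fk(q, l1=0.4, l2=0.3):
    """End-effector position of a planar 2-link arm (illustrative FK stand-in)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

rng = np.random.default_rng(0)
limits = np.array([[-np.pi, np.pi], [-np.pi, np.pi]])
samples = rng.uniform(limits[:, 0], limits[:, 1], size=(10000, 2))
points = np.array([planar_2link_fk(q) for q in samples])

# Reachable radii must lie in [|l1 - l2|, l1 + l2] = [0.1, 0.7]
radii = np.linalg.norm(points, axis=1)
print(f"reach: {radii.min():.3f} .. {radii.max():.3f} m")
```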
### 👁️ Vision & Perception
<details>
<summary><b>Computer Vision Pipeline</b></summary>
```python
from ManipulaPy.vision import Vision
from ManipulaPy.perception import Perception
# Camera configuration
camera_config = {
    "name": "main_camera",
    "intrinsic_matrix": np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]]),
    "translation": [0, 0, 1.5],
    "rotation": [0, -30, 0],  # degrees
    "fov": 60,
    "use_opencv": True,  # Real camera
    "device_index": 0
}
# Stereo vision setup
left_cam = {**camera_config, "translation": [-0.1, 0, 1.5]}
right_cam = {**camera_config, "translation": [0.1, 0, 1.5]}
vision = Vision(
    camera_configs=[camera_config],
    stereo_configs=(left_cam, right_cam)
)
# Object detection and clustering
perception = Perception(vision)
obstacles, labels = perception.detect_and_cluster_obstacles(
    depth_threshold=3.0,
    eps=0.1, min_samples=5
)

# 3D point cloud from stereo
if vision.stereo_enabled:
    left_img, _ = vision.capture_image(0)
    right_img, _ = vision.capture_image(1)
    point_cloud = vision.get_stereo_point_cloud(left_img, right_img)
```
</details>
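Under the hood, stereo depth recovery follows the standard disparity relation Z = f·B/d. A minimal sketch of that relation (the focal length is illustrative; the 0.2 m baseline matches the ±0.1 m camera offsets above, but this helper is not part of the Vision API):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px=500.0, baseline_m=0.2):
    """Depth (m) from stereo disparity (px): Z = f * B / d; zero disparity -> inf."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

print(disparity_to_depth([100.0, 50.0, 0.0]))  # depths: 1.0 m, 2.0 m, inf
```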
---
## 📊 Performance Features
### GPU Acceleration
ManipulaPy includes highly optimized CUDA kernels for performance-critical operations:
```python
from ManipulaPy.cuda_kernels import check_cuda_availability
if check_cuda_availability():
    print("🚀 CUDA acceleration available!")

    # Automatic GPU/CPU switching based on problem size
    planner = OptimizedTrajectoryPlanning(
        robot, urdf_file, dynamics, joint_limits,
        use_cuda=None,            # Auto-detect
        cuda_threshold=200,       # Switch threshold
        memory_pool_size_mb=512   # GPU memory pool
    )

    # Batch processing for multiple trajectories
    batch_trajectories = planner.batch_joint_trajectory(
        thetastart_batch=start_configs,  # (batch_size, n_joints)
        thetaend_batch=end_configs,
        Tf=5.0, N=1000, method=5
    )
else:
    print("CPU mode - install GPU support for acceleration")
```
### Performance Monitoring
```python
# Benchmark different implementations
results = planner.benchmark_performance([
    {"N": 1000, "joints": 6, "name": "Medium"},
    {"N": 5000, "joints": 6, "name": "Large"},
    {"N": 1000, "joints": 12, "name": "Many joints"}
])

for name, result in results.items():
    print(f"{name}: {result['total_time']:.3f}s, GPU: {result['used_gpu']}")
```
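For ad-hoc comparisons outside `benchmark_performance`, a plain-Python timing harness gives the same kind of numbers (a generic helper sketched here, not a ManipulaPy API):

```python
import time

def time_fn(fn, *args, repeats=5):
    """Return best-of-N wall-clock time for fn(*args), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# Example: compare two problem sizes of some trajectory-like workload
make_traj = lambda n: [i / n for i in range(n)]
print(f"N=1000: {time_fn(make_traj, 1000):.6f}s  N=5000: {time_fn(make_traj, 5000):.6f}s")
```

Best-of-N is used rather than the mean to suppress one-off scheduler noise.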
---
<h2 id="examples">📁 Examples & Tutorials</h2>
The `Examples/` directory contains comprehensive demonstrations organized into three levels:
### 🎯 Basic Examples (⭐)
Perfect for getting started with ManipulaPy fundamentals.
| Example | Description | Output |
|---------|-------------|--------|
| `kinematics_basic_demo.py` | Forward/inverse kinematics with visualization | Manipulability analysis plots |
| `dynamics_basic_demo.py` | Mass matrix, Coriolis forces, gravity compensation | Complete robot analysis |
| `control_basic_demo.py` | PID, computed torque, feedforward control | Control strategy comparison |
| `urdf_processing_basic_demo.py` | URDF to SerialManipulator conversion | Configuration space analysis |
| `visualization_basic_demo.py` | End-effector paths and workspace visualization | 3D trajectory plots |
### 🔧 Intermediate Examples (⭐⭐)
Advanced features and integrated systems.
| Example | Description | Key Features |
|---------|-------------|--------------|
| `trajectory_planning_intermediate_demo.py` | Multi-segment trajectories and optimization | GPU acceleration, smoothing |
| `singularity_analysis_intermediate_demo.py` | Workspace analysis and singularity avoidance | Manipulability ellipsoids |
| `control_comparison_intermediate_demo.py` | Multiple control strategies benchmarking | Real-time monitoring |
| `perception_intermediate_demo.py` | Computer vision pipeline with clustering | YOLO detection, stereo vision |
| `simulation_intermediate_demo.py` | Complete PyBullet integration | Real-time physics simulation |
### 🚀 Advanced Examples (⭐⭐⭐)
Research-grade implementations and high-performance computing.
| Example | Description | Advanced Features |
|---------|-------------|-------------------|
| `gpu_acceleration_advanced_demo.py` | CUDA kernels and performance optimization | Memory efficiency analysis |
| `batch_processing_advanced_demo.py` | Large-scale trajectory generation | Batch scaling analysis |
| `collision_avoidance_advanced_demo.py` | Real-time obstacle avoidance | Potential field visualization |
| `optimal_control_advanced_demo.py` | Advanced control algorithms | Performance statistics |
| `stereo_vision_advanced_demo.py` | 3D perception and point cloud processing | Advanced perception analysis |
| `real_robot_integration_advanced_demo.py` | Hardware integration examples | Real-time simulation |
### 🏃 Running Examples
```bash
cd Examples/
# Basic Examples - Start here!
cd basic_examples/
python kinematics_basic_demo.py
python dynamics_basic_demo.py
python control_basic_demo.py
# Intermediate Examples - Integrated systems
cd ../intermediate_examples/
python trajectory_planning_intermediate_demo.py
python perception_intermediate_demo.py --enable-yolo
python simulation_intermediate_demo.py --urdf simple_arm.urdf
# Advanced Examples - Research-grade
cd ../advanced_examples/
python gpu_acceleration_advanced_demo.py --benchmark
python batch_processing_advanced_demo.py --size 1000
python collision_avoidance_advanced_demo.py --visualize
```
### 📊 Example Outputs
The examples generate various outputs:
- **📄 Analysis Reports**: `.txt` files with detailed performance metrics
- **📈 Visualizations**: `.png` plots for trajectories, workspaces, and analysis
- **📝 Logs**: `.log` files for debugging and monitoring
- **🎯 Models**: Pre-trained YOLO models and URDF files
### 🎨 Generated Visualizations
Examples create rich visualizations including:
- **Trajectory Analysis**: Multi-segment paths and optimization results
- **Workspace Visualization**: 3D manipulability and reachability analysis
- **Control Performance**: Real-time monitoring and comparison plots
- **Perception Results**: Object detection, clustering, and stereo vision
- **Performance Benchmarks**: GPU vs CPU timing and memory usage
### 📍 Example Selection Guide
**New to ManipulaPy?** → Start with `basic_examples/kinematics_basic_demo.py`  
**Need trajectory planning?** → Try `intermediate_examples/trajectory_planning_intermediate_demo.py`  
**Working with vision?** → Check `intermediate_examples/perception_intermediate_demo.py`  
**Performance optimization?** → Explore `advanced_examples/gpu_acceleration_advanced_demo.py`  
**Research applications?** → Dive into `advanced_examples/batch_processing_advanced_demo.py`
---
## 🧪 Testing & Validation
### Test Suite
```bash
# Install test dependencies
pip install ManipulaPy[dev]
# Run all tests
python -m pytest tests/ -v --cov=ManipulaPy
# Test specific modules
python -m pytest tests/test_kinematics.py -v
python -m pytest tests/test_dynamics.py -v
python -m pytest tests/test_control.py -v
python -m pytest tests/test_cuda_kernels.py -v # GPU tests
```
### ✅ High-Coverage Modules
| Module | Coverage | Notes |
| ------------------- | -------- | --------------------------------- |
| `kinematics.py` | **98%** | Excellent - near full coverage |
| `dynamics.py` | **100%** | Fully tested |
| `perception.py` | **92%** | Very solid coverage |
| `vision.py` | **83%** | Good; some PyBullet paths skipped |
| `urdf_processor.py` | **81%** | Strong test coverage |
---
### ⚠️ Needs More Testing
| Module | Coverage | Notes |
| ---------------- | -------- | -------------------------------------------------------- |
| `control.py` | **81%** | Many skipped due to CuPy mock - test with GPU to improve |
| `sim.py` | **77%** | Manual control & GUI parts partially tested |
| `singularity.py` | **64%** | Workspace plots & CUDA sampling untested |
| `utils.py` | **61%** | Some math utils & decorators untested |
---
### 🚨 Low/No Coverage
| Module | Coverage | Notes |
| -------------------- | -------- | ----------------------------------------------------- |
| `path_planning.py` | **39%** | Large gaps in CUDA-accelerated and plotting logic |
| `cuda_kernels.py` | **16%** | Most tests skipped - `NUMBA_DISABLE_CUDA=1` |
| `transformations.py` | **0%** | Not tested at all - consider adding basic SE(3) tests |
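As the table suggests, `transformations.py` would benefit from basic SE(3) sanity tests. A minimal sketch of the kind of test that could be added, in pure NumPy and independent of the module's actual API:

```python
import numpy as np

def se3(R, p):
    """Assemble a homogeneous transform from rotation R (3x3) and translation p (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def se3_inverse(T):
    """Closed-form SE(3) inverse: [R^T, -R^T p; 0, 1]."""
    R, p = T[:3, :3], T[:3, 3]
    return se3(R.T, -R.T @ p)

# Round-trip property: T @ T^-1 must be the identity
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
T = se3(Rz, [0.5, 0.3, 0.4])
assert np.allclose(T @ se3_inverse(T), np.eye(4))
```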
---
## 🧪 Benchmarking & Validation
ManipulaPy includes a comprehensive benchmarking suite to validate performance and accuracy across different hardware configurations.
### Benchmark Suite
Located in the `Benchmark/` directory, the suite provides three key tools:
| Benchmark | Purpose | Use Case |
|-----------|---------|----------|
| `performance_benchmark.py` | Comprehensive performance analysis | Full system evaluation and optimization |
| `accuracy_benchmark.py` | Numerical precision validation | Algorithm correctness verification |
| `quick_benchmark.py` | Fast development testing | CI/CD integration and regression testing |
### Real Performance Results
**Latest benchmark on 16-core CPU, 31.1GB RAM, NVIDIA GPU (30 SMs):**
```bash
=== ManipulaPy Performance Benchmark Results ===
Hardware: 16-core CPU, 31.1GB RAM, NVIDIA GPU (30 SMs, 1024 threads/block)
Test Configuration: Large-scale problems (10K-100K trajectory points)
Overall Performance:
  Total Tests: 36 scenarios
  Success Rate: 91.7% (33/36) ✅
  Overall Speedup: 13.02× average acceleration
  CPU Mean Time: 6.88s → GPU Mean Time: 0.53s

🚀 EXCEPTIONAL PERFORMANCE HIGHLIGHTS:

Inverse Dynamics (CUDA Accelerated):
  Mean GPU Speedup: 3,624× (3.6K times faster!)
  Peak Performance: 5,563× speedup achieved
  Real-time Impact: 7s → 0.002s computation

Joint Trajectory Planning:
  Mean GPU Speedup: 2.29×
  Best Case: 7.96× speedup
  Large Problems: Consistent GPU acceleration

Cartesian Trajectories:
  Mean GPU Speedup: 1.02× (CPU competitive)
  Consistent Performance: ±0.04 variance
```
### Performance Recommendations
**🎯 OPTIMAL GPU USE CASES:**
- ✅ Inverse dynamics computation (**1000×-5000× speedup**)
- ✅ Large trajectory generation (>10K points)
- ✅ Batch processing multiple trajectories
- ✅ Real-time control applications

**⚠️ CPU-OPTIMAL SCENARIOS:**
- Small trajectories (<1K points)
- Cartesian space interpolation
- Single-shot computations
- Development and debugging
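These recommendations reduce to a simple dispatch rule. A sketch of such a heuristic (the threshold mirrors the `cuda_threshold=200` parameter shown earlier, but this helper is illustrative and not part of the package):

```python
def prefer_gpu(num_points, num_joints, cuda_available, threshold=200):
    """Heuristic mirroring the auto-switch: GPU only pays off on large problems."""
    if not cuda_available:
        return False
    # Work size ~ trajectory points x joints; below the threshold, kernel-launch
    # and transfer overhead outweighs the parallel speedup.
    return num_points * num_joints >= threshold

print(prefer_gpu(10_000, 6, True))   # True  - large trajectory
print(prefer_gpu(20, 6, True))       # False - small problem stays on CPU
```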
### Running Benchmarks
```bash
# Quick performance check (< 60 seconds)
cd Benchmark/
python quick_benchmark.py
# Comprehensive GPU vs CPU analysis
python performance_benchmark.py --gpu --plot --save-results
# Validate numerical accuracy
python accuracy_benchmark.py --tolerance 1e-8
```
<h2 id="documentation">📚 Documentation</h2>
### Online Documentation
- **[Complete Documentation](https://manipulapy.readthedocs.io/)**
- **[API Reference](https://manipulapy.readthedocs.io/en/latest/api/index.html)**
- **[Theory Guide](https://manipulapy.readthedocs.io/en/latest/theory.html)**
- **[GPU Programming Guide](https://manipulapy.readthedocs.io/en/latest/user_guide/CUDA_Kernels.html)**
### Quick Reference
```python
# Check installation and dependencies
import ManipulaPy
ManipulaPy.check_dependencies(verbose=True)
# Module overview
print(ManipulaPy.__version__) # Current version
print(ManipulaPy.__all__) # Available modules
# GPU capabilities
from ManipulaPy.cuda_kernels import get_gpu_properties
props = get_gpu_properties()
if props:
    print(f"GPU: {props['multiprocessor_count']} SMs")
```
---
<h2 id="contributing">🤝 Contributing</h2>
We love your input! Whether you're reporting a bug, proposing a new feature, or improving our docs, here's how to get started:
### 1. Report an Issue
Please open a GitHub Issue with:
- A descriptive title
- Steps to reproduce
- Expected vs. actual behavior
- Any relevant logs or screenshots
### 2. Submit a Pull Request
1. Fork this repository and create your branch:
```bash
git clone https://github.com/<your-username>/ManipulaPy.git
cd ManipulaPy
git checkout -b feature/my-feature
```
2. Install and set up the development environment:
```bash
pip install -e .[dev]
pre-commit install # to run formatters and linters
```
3. Make your changes, then run tests and quality checks:
```bash
# Run the full test suite
python -m pytest tests/ -v
# Lint and format
black ManipulaPy/
flake8 ManipulaPy/
mypy ManipulaPy/
```
4. Commit with clear, focused messages and push your branch:
```bash
git add .
git commit -m "Add awesome new feature"
git push origin feature/my-feature
```
5. Open a Pull Request against `main` describing your changes.
### 3. Seek Support
- **Design questions:** [GitHub Discussions](https://github.com/boelnasr/ManipulaPy/discussions)
- **Bug reports:** [GitHub Issues](https://github.com/boelnasr/ManipulaPy/issues)
- **Email:** aboelnasr1997@gmail.com
### 4. Code of Conduct
Please follow our [Code of Conduct](CODE_OF_CONDUCT.md) to keep this community welcoming.
### Contribution Areas
- 🐛 **Bug Reports**: Issues and edge cases
- ✨ **New Features**: Algorithms and capabilities
- 📖 **Documentation**: Guides and examples
- 🚀 **Performance**: CUDA kernels and optimizations
- 🧪 **Testing**: Test coverage and validation
- 🎨 **Visualization**: Plotting and animation tools
### Guidelines
- Follow **PEP 8** style guidelines
- Add **comprehensive tests** for new features
- Update **documentation** for API changes
- Include **working examples** for new functionality
- Maintain **backward compatibility** when possible
---
## 📄 License & Citation
### License
ManipulaPy is licensed under the **GNU Affero General Public License v3.0 or later (AGPL-3.0-or-later)**.
**Key Points:**
- ✅ **Free to use** for research and education
- ✅ **Modify and distribute** under same license
- ✅ **Commercial use** allowed under AGPL terms
- ⚠️ **Network services** must provide source code
- 📄 **See [LICENSE](LICENSE)** for complete terms
### Citation
If you use ManipulaPy in your research, please cite:
```bibtex
@software{manipulapy2025,
  title={ManipulaPy: A Comprehensive Python Package for Robotic Manipulator Analysis and Control},
  author={Mohamed Aboelnasr},
  year={2025},
  url={https://github.com/boelnasr/ManipulaPy},
  version={1.1.1},
  license={AGPL-3.0-or-later}
}
```
### Dependencies
All dependencies are AGPL-3.0 compatible:
- **Core**: `numpy`, `scipy`, `matplotlib` (BSD)
- **Vision**: `opencv-python` (Apache 2.0), `ultralytics` (AGPL-3.0)
- **GPU**: `cupy` (MIT), `numba` (BSD)
- **Simulation**: `pybullet` (Zlib), `urchin` (MIT)
---
## 🙋 Support & Community
### Getting Help
1. **📚 Documentation**: [manipulapy.readthedocs.io](https://manipulapy.readthedocs.io/)
2. **💡 Examples**: Check the `Examples/` directory
3. **🐛 Issues**: [GitHub Issues](https://github.com/boelnasr/ManipulaPy/issues)
4. **💬 Discussions**: [GitHub Discussions](https://github.com/boelnasr/ManipulaPy/discussions)
5. **📧 Contact**: [aboelnasr1997@gmail.com](mailto:aboelnasr1997@gmail.com)
### Community
- **🌟 Star** the project if you find it useful
- **🍴 Fork** to contribute improvements
- **📢 Share** with the robotics community
- **📝 Cite** in your academic work
### Contact Information
**Created and maintained by Mohamed Aboelnasr**
- 📧 **Email**: [aboelnasr1997@gmail.com](mailto:aboelnasr1997@gmail.com)
- 🐙 **GitHub**: [@boelnasr](https://github.com/boelnasr)
- 💼 **LinkedIn**: Connect for collaboration opportunities
---
## 🏆 Why Choose ManipulaPy?
<table>
<tr>
<td width="33%">
### 🔬 **For Researchers**
- Comprehensive algorithms with solid mathematical foundations
- Extensible modular design for new methods
- Well-documented with theoretical background
- Proper citation format for publications
- AGPL-3.0 license for open science
</td>
<td width="33%">
### 👩‍💻 **For Developers**
- High-performance GPU acceleration
- Clean, readable Python code
- Modular architecture
- Comprehensive test suite
- Active development and support
</td>
<td width="33%">
### 🏭 **For Industry**
- Production-ready with robust error handling
- Scalable for real-time applications
- Clear licensing for commercial use
- Professional documentation
- Regular updates and maintenance
</td>
</tr>
</table>
---
<div align="center">
**🤖 ManipulaPy v1.1.0: Professional robotics tools for the Python ecosystem**
[](https://github.com/boelnasr/ManipulaPy)
[](https://pypi.org/project/ManipulaPy/)
*Empowering robotics research and development with comprehensive, GPU-accelerated tools*
[⭐ Star on GitHub](https://github.com/boelnasr/ManipulaPy) • [📦 Install from PyPI](https://pypi.org/project/ManipulaPy/) • [📚 Read the Docs](https://manipulapy.readthedocs.io/)
</div>
Raw data
{
"_id": null,
"home_page": "https://github.com/boelnasr/ManipulaPy",
"name": "ManipulaPy",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "robotics, kinematics, dynamics, trajectory-planning, simulation, cuda, computer-vision, control-systems, manipulator",
"author": "Mohamed Aboelnasr",
"author_email": "Mohamed Aboelnasr <aboelnasr1997@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/db/3e/a677bc96a43786e5a2eb5b721acf200571f6af1bbaf7d34f20d589463d1a/manipulapy-1.1.3.tar.gz",
"platform": "any",
"description": "# ManipulaPy\n\n<div align=\"center\">\n\n[](https://pypi.org/project/ManipulaPy/)\n[](https://www.python.org/downloads/)\n[](https://www.gnu.org/licenses/agpl-3.0)\n\n\n[](https://joss.theoj.org/papers/e0e68c2dcd8ac9dfc1354c7ee37eb7aa)\n\n**A comprehensive, GPU-accelerated Python package for robotic manipulator analysis, simulation, planning, control, and perception.**\n\n[Quick Start](#quick-start) \u2022 [Documentation](#documentation) \u2022 [Examples](#examples) \u2022 [Installation](#installation) \u2022 [Contributing](#contributing)\n\n</div>\n\n---\n\n## \ud83c\udfaf Overview\n\nManipulaPy is a modern, comprehensive framework that bridges the gap between basic robotics libraries and sophisticated research tools. It provides seamless integration of kinematics, dynamics, control, and perception systems with optional CUDA acceleration for real-time applications.\n\n### Why ManipulaPy?\n\n**\ud83d\udd27 Unified Framework**: Complete integration from low-level kinematics to high-level perception \n**\u26a1 GPU Accelerated**: CUDA kernels for trajectory planning and dynamics computation \n**\ud83d\udd2c Research Ready**: Mathematical rigor with practical implementation \n**\ud83e\udde9 Modular Design**: Use individual components or the complete system \n**\ud83d\udcd6 Well Documented**: Comprehensive guides with theoretical foundations \n**\ud83c\udd93 Open Source**: AGPL-3.0 licensed for transparency and collaboration\n\n---\n\n## \u2728 Key Features\n\n<table>\n<tr>\n<td width=\"50%\">\n\n### \ud83d\udd27 **Core Robotics**\n- **Kinematics**: Forward/inverse kinematics with Jacobian analysis\n- **Dynamics**: Mass matrix, Coriolis forces, gravity compensation\n- **Control**: PID, computed torque, adaptive, robust algorithms\n- **Singularity Analysis**: Detect singularities and workspace boundaries\n\n</td>\n<td width=\"50%\">\n\n### \ud83d\ude80 **Advanced Capabilities**\n- **Path Planning**: CUDA-accelerated trajectory generation\n- 
**Simulation**: Real-time PyBullet physics simulation\n- **Vision**: Stereo vision, YOLO detection, point clouds\n- **URDF Processing**: Convert robot models to Python objects\n\n</td>\n</tr>\n</table>\n\n---\n\n\n\n## <a id=\"quick-start\"></a>\ud83d\ude80 Quick Start\n\n### Prerequisites\n\nBefore installing ManipulaPy, make sure your system has:\n\n1. **NVIDIA Drivers & CUDA Toolkit** \n - `nvcc` on your `PATH` (e.g. via `sudo apt install nvidia-cuda-toolkit` or the [official NVIDIA CUDA installer](https://developer.nvidia.com/cuda-downloads)). \n - Verify with:\n ```bash\n nvidia-smi # should list your GPU(s) and driver version\n nvcc --version # should print CUDA version\n ```\n\n2. **cuDNN** \n - Download and install cuDNN for your CUDA version from [NVIDIA's cuDNN installation guide](https://docs.nvidia.com/deeplearning/cudnn/installation/latest/). \n - Verify headers/libs under `/usr/include` and `/usr/lib/x86_64-linux-gnu` (or your distro\u2019s equivalent).\n\n---\n\n<h2 id=\"installation\">Installation</h2>\n\n```bash\n# Basic installation (CPU-only)\npip install ManipulaPy\n\n# With GPU support (CUDA 11.x)\npip install ManipulaPy[gpu-cuda11]\n\n# With GPU support (CUDA 12.x)\npip install ManipulaPy[gpu-cuda12]\n\n# Development installation (with dev extras)\ngit clone https://github.com/boelnasr/ManipulaPy.git\ncd ManipulaPy\npip install -e .[dev]\n````\n\n---\n\n### Post\u2010Install Check\n\nAfter installation, confirm that ManipulaPy can see your GPU:\n\n```bash\n# Check that CUDA is available to ManipulaPy\npython3 - <<EOF\nfrom ManipulaPy import cuda_kernels\n\nif cuda_kernels.check_cuda_availability():\n props = cuda_kernels.get_gpu_properties()\n print(f\"\u2705 CUDA is available on device: {props['name']} \"\n f\"({props['multiprocessor_count']} SMs, \"\n f\"{props['max_threads_per_block']} max threads/block)\")\nelse:\n raise RuntimeError(\"\u274c CUDA not detected or not properly configured in ManipulaPy.\")\nEOF\n\n```\n\nIf you see the 
\u2705 message with your GPU name, you\u2019re all set! Otherwise, double\u2011check the CUDA Toolkit and cuDNN installation steps above. \\`\\`\\`\n\n\n### 30-Second Demo\n\n```python\nimport numpy as np\nfrom ManipulaPy.urdf_processor import URDFToSerialManipulator\nfrom ManipulaPy.path_planning import OptimizedTrajectoryPlanning\nfrom ManipulaPy.control import ManipulatorController\n\n# Load robot model\ntry:\n from ManipulaPy.ManipulaPy_data.xarm import urdf_file\nexcept ImportError:\n urdf_file = \"path/to/your/robot.urdf\"\n\n# Initialize robot\nurdf_processor = URDFToSerialManipulator(urdf_file)\nrobot = urdf_processor.serial_manipulator\ndynamics = urdf_processor.dynamics\n\n# Forward kinematics\njoint_angles = np.array([0.1, 0.2, -0.3, -0.5, 0.2, 0.1])\nend_effector_pose = robot.forward_kinematics(joint_angles)\nprint(f\"End-effector position: {end_effector_pose[:3, 3]}\")\n\n# Trajectory planning\njoint_limits = [(-np.pi, np.pi)] * 6\nplanner = OptimizedTrajectoryPlanning(robot, urdf_file, dynamics, joint_limits)\n\ntrajectory = planner.joint_trajectory(\n thetastart=np.zeros(6),\n thetaend=joint_angles,\n Tf=5.0, N=100, method=5\n)\n\nprint(f\"\u2705 Generated {trajectory['positions'].shape[0]} trajectory points\")\n```\n\n---\n\n## \ud83d\udcda Core Modules\n\n### \ud83d\udd27 Kinematics & Dynamics\n\n<details>\n<summary><b>Forward & Inverse Kinematics</b></summary>\n\n```python\n# Forward kinematics\npose = robot.forward_kinematics(joint_angles, frame=\"space\")\n\n# Inverse kinematics with advanced solver\ntarget_pose = np.eye(4)\ntarget_pose[:3, 3] = [0.5, 0.3, 0.4]\n\nsolution, success, iterations = robot.iterative_inverse_kinematics(\n T_desired=target_pose,\n thetalist0=joint_angles,\n eomg=1e-6, ev=1e-6,\n max_iterations=5000,\n plot_residuals=True\n)\n```\n\n</details>\n\n<details>\n<summary><b>Dynamic Analysis</b></summary>\n\n```python\nfrom ManipulaPy.dynamics import ManipulatorDynamics\n\n# Compute dynamics quantities\nM = 
dynamics.mass_matrix(joint_angles)\nC = dynamics.velocity_quadratic_forces(joint_angles, joint_velocities)\nG = dynamics.gravity_forces(joint_angles, g=[0, 0, -9.81])\n\n# Inverse dynamics: \u03c4 = M(q)q\u0308 + C(q,q\u0307) + G(q)\ntorques = dynamics.inverse_dynamics(\n joint_angles, joint_velocities, joint_accelerations,\n [0, 0, -9.81], np.zeros(6)\n)\n```\n\n</details>\n\n### \ud83d\udee4\ufe0f Path Planning & Control\n\n<details>\n<summary><b>Advanced Trajectory Planning</b></summary>\n\n```python\n# GPU-accelerated trajectory planning\nplanner = OptimizedTrajectoryPlanning(\n robot, urdf_file, dynamics, joint_limits,\n use_cuda=True, # Enable GPU acceleration\n cuda_threshold=200, # Auto-switch threshold\n enable_profiling=True\n)\n\n# Joint space trajectory\ntrajectory = planner.joint_trajectory(\n thetastart=start_config,\n thetaend=end_config,\n Tf=5.0, N=1000, method=5 # Quintic time scaling\n)\n\n# Cartesian space trajectory\ncartesian_traj = planner.cartesian_trajectory(\n Xstart=start_pose, Xend=end_pose,\n Tf=3.0, N=500, method=3 # Cubic time scaling\n)\n\n# Performance monitoring\nstats = planner.get_performance_stats()\nprint(f\"GPU usage: {stats['gpu_usage_percent']:.1f}%\")\n```\n\n</details>\n\n<details>\n<summary><b>Advanced Control Systems</b></summary>\n\n```python\nfrom ManipulaPy.control import ManipulatorController\n\ncontroller = ManipulatorController(dynamics)\n\n# Auto-tuned PID control using Ziegler-Nichols\nKu, Tu = 50.0, 0.5 # Ultimate gain and period\nKp, Ki, Kd = controller.ziegler_nichols_tuning(Ku, Tu, kind=\"PID\")\n\n# Computed torque control\ncontrol_torque = controller.computed_torque_control(\n thetalistd=desired_positions,\n dthetalistd=desired_velocities,\n ddthetalistd=desired_accelerations,\n thetalist=current_positions,\n dthetalist=current_velocities,\n g=[0, 0, -9.81], dt=0.01,\n Kp=Kp, Ki=Ki, Kd=Kd\n)\n\n# Adaptive control\nadaptive_torque = controller.adaptive_control(\n thetalist=current_positions,\n 
    dthetalist=current_velocities,
    ddthetalist=desired_accelerations,
    g=[0, 0, -9.81], Ftip=np.zeros(6),
    measurement_error=position_error,
    adaptation_gain=0.1
)
```

</details>

### 🌐 Simulation & Visualization

<details>
<summary><b>Real-time PyBullet Simulation</b></summary>

```python
from ManipulaPy.sim import Simulation

# Create simulation environment
sim = Simulation(
    urdf_file_path=urdf_file,
    joint_limits=joint_limits,
    time_step=0.01,
    real_time_factor=1.0
)

# Initialize and run
sim.initialize_robot()
sim.initialize_planner_and_controller()
sim.add_joint_parameters()  # GUI sliders

# Execute trajectory
final_pose = sim.run_trajectory(trajectory["positions"])

# Manual control with collision detection
sim.manual_control()
```

</details>

<details>
<summary><b>Singularity & Workspace Analysis</b></summary>

```python
from ManipulaPy.singularity import Singularity

analyzer = Singularity(robot)

# Singularity detection
is_singular = analyzer.singularity_analysis(joint_angles)
condition_number = analyzer.condition_number(joint_angles)

# Manipulability ellipsoid
analyzer.manipulability_ellipsoid(joint_angles)

# Workspace visualization with GPU acceleration
analyzer.plot_workspace_monte_carlo(
    joint_limits=joint_limits,
    num_samples=10000
)
```

</details>

### 👁️ Vision & Perception

<details>
<summary><b>Computer Vision Pipeline</b></summary>

```python
from ManipulaPy.vision import Vision
from ManipulaPy.perception import Perception

# Camera configuration
camera_config = {
    "name": "main_camera",
    "intrinsic_matrix": np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]]),
    "translation": [0, 0, 1.5],
    "rotation": [0, -30, 0],  # degrees
    "fov": 60,
    "use_opencv": True,  # Real camera
    "device_index": 0
}

# Stereo vision setup
left_cam = {**camera_config, "translation": [-0.1, 0, 1.5]}
right_cam = {**camera_config, "translation": [0.1, 0, 1.5]}

vision = Vision(
    camera_configs=[camera_config],
    stereo_configs=(left_cam, right_cam)
)

# Object detection and clustering
perception = Perception(vision)
obstacles, labels = perception.detect_and_cluster_obstacles(
    depth_threshold=3.0,
    eps=0.1, min_samples=5
)

# 3D point cloud from stereo
if vision.stereo_enabled:
    left_img, _ = vision.capture_image(0)
    right_img, _ = vision.capture_image(1)
    point_cloud = vision.get_stereo_point_cloud(left_img, right_img)
```

</details>

---

## 📊 Performance Features

### GPU Acceleration

ManipulaPy includes highly optimized CUDA kernels for performance-critical operations:

```python
from ManipulaPy.cuda_kernels import check_cuda_availability

if check_cuda_availability():
    print("🚀 CUDA acceleration available!")

    # Automatic GPU/CPU switching based on problem size
    planner = OptimizedTrajectoryPlanning(
        robot, urdf_file, dynamics, joint_limits,
        use_cuda=None,            # Auto-detect
        cuda_threshold=200,       # Switch threshold
        memory_pool_size_mb=512   # GPU memory pool
    )

    # Batch processing for multiple trajectories
    batch_trajectories = planner.batch_joint_trajectory(
        thetastart_batch=start_configs,  # (batch_size, n_joints)
        thetaend_batch=end_configs,
        Tf=5.0, N=1000, method=5
    )
else:
    print("CPU mode - install GPU support for acceleration")
```

### Performance Monitoring

```python
# Benchmark different implementations
results = planner.benchmark_performance([
    {"N": 1000, "joints": 6, "name": "Medium"},
    {"N": 5000, "joints": 6, "name": "Large"},
    {"N": 1000, "joints": 12, "name": "Many joints"}
])

for name, result in results.items():
    print(f"{name}: {result['total_time']:.3f}s, GPU: {result['used_gpu']}")
```

---

<h2 id="examples">📁 Examples & Tutorials</h2>

The `Examples/` directory contains comprehensive demonstrations organized into three levels:

### 🎯 Basic Examples (⭐)
Perfect for getting started with ManipulaPy fundamentals.

| Example | Description | Output |
|---------|-------------|--------|
| `kinematics_basic_demo.py` | Forward/inverse kinematics with visualization | Manipulability analysis plots |
| `dynamics_basic_demo.py` | Mass matrix, Coriolis forces, gravity compensation | Complete robot analysis |
| `control_basic_demo.py` | PID, computed torque, feedforward control | Control strategy comparison |
| `urdf_processing_basic_demo.py` | URDF to SerialManipulator conversion | Configuration space analysis |
| `visualization_basic_demo.py` | End-effector paths and workspace visualization | 3D trajectory plots |

### 🔧 Intermediate Examples (⭐⭐)
Advanced features and integrated systems.

| Example | Description | Key Features |
|---------|-------------|--------------|
| `trajectory_planning_intermediate_demo.py` | Multi-segment trajectories and optimization | GPU acceleration, smoothing |
| `singularity_analysis_intermediate_demo.py` | Workspace analysis and singularity avoidance | Manipulability ellipsoids |
| `control_comparison_intermediate_demo.py` | Multiple control strategies benchmarking | Real-time monitoring |
| `perception_intermediate_demo.py` | Computer vision pipeline with clustering | YOLO detection, stereo vision |
| `simulation_intermediate_demo.py` | Complete PyBullet integration | Real-time physics simulation |

### 🚀 Advanced Examples (⭐⭐⭐)
Research-grade implementations and high-performance computing.

| Example | Description | Advanced Features |
|---------|-------------|-------------------|
| `gpu_acceleration_advanced_demo.py` | CUDA kernels and performance optimization | Memory efficiency analysis |
| `batch_processing_advanced_demo.py` | Large-scale trajectory generation | Batch scaling analysis |
| `collision_avoidance_advanced_demo.py` | Real-time obstacle avoidance | Potential field visualization |
| `optimal_control_advanced_demo.py` | Advanced control algorithms | Performance statistics |
| `stereo_vision_advanced_demo.py` | 3D perception and point cloud processing | Advanced perception analysis |
| `real_robot_integration_advanced_demo.py` | Hardware integration examples | Real-time simulation |

### 🏃‍♂️ Running Examples

```bash
cd Examples/

# Basic Examples - Start here!
cd basic_examples/
python kinematics_basic_demo.py
python dynamics_basic_demo.py
python control_basic_demo.py

# Intermediate Examples - Integrated systems
cd ../intermediate_examples/
python trajectory_planning_intermediate_demo.py
python perception_intermediate_demo.py --enable-yolo
python simulation_intermediate_demo.py --urdf simple_arm.urdf

# Advanced Examples - Research-grade
cd ../advanced_examples/
python gpu_acceleration_advanced_demo.py --benchmark
python batch_processing_advanced_demo.py --size 1000
python collision_avoidance_advanced_demo.py --visualize
```

### 📊 Example Outputs

The examples generate various outputs:
- **📈 Analysis Reports**: `.txt` files with detailed performance metrics
- **📊 Visualizations**: `.png` plots for trajectories, workspaces, and analysis
- **📝 Logs**: `.log` files for debugging and monitoring
- **🎯 Models**: Pre-trained YOLO models and URDF files

### 🎨 Generated Visualizations

Examples create rich visualizations including:
- **Trajectory Analysis**: Multi-segment paths and optimization results
- **Workspace Visualization**: 3D manipulability and reachability analysis
- **Control Performance**: Real-time monitoring and comparison plots
- **Perception Results**: Object detection, clustering, and stereo vision
- **Performance Benchmarks**: GPU vs CPU timing and memory usage

### 🔍 Example Selection Guide

**New to ManipulaPy?** → Start with `basic_examples/kinematics_basic_demo.py`

**Need trajectory planning?** → Try `intermediate_examples/trajectory_planning_intermediate_demo.py`

**Working with vision?** → Check `intermediate_examples/perception_intermediate_demo.py`

**Performance optimization?** → Explore `advanced_examples/gpu_acceleration_advanced_demo.py`

**Research applications?** → Dive into `advanced_examples/batch_processing_advanced_demo.py`

---

## 🧪 Testing & Validation

### Test Suite

```bash
# Install test dependencies
pip install ManipulaPy[dev]

# Run all tests
python -m pytest tests/ -v --cov=ManipulaPy

# Test specific modules
python -m pytest tests/test_kinematics.py -v
python -m pytest tests/test_dynamics.py -v
python -m pytest tests/test_control.py -v
python -m pytest tests/test_cuda_kernels.py -v  # GPU tests
```

### ✅ High-Coverage Modules

| Module | Coverage | Notes |
|--------|----------|-------|
| `kinematics.py` | **98%** | Excellent — near full coverage |
| `dynamics.py` | **100%** | Fully tested |
| `perception.py` | **92%** | Very solid coverage |
| `vision.py` | **83%** | Good; some PyBullet paths skipped |
| `urdf_processor.py` | **81%** | Strong test coverage |

### ⚠️ Needs More Testing

| Module | Coverage | Notes |
|--------|----------|-------|
| `control.py` | **81%** | Many tests skipped due to the CuPy mock — run with a GPU to improve |
| `sim.py` | **77%** | Manual control & GUI parts partially tested |
| `singularity.py` | **64%** | Workspace plots & CUDA sampling untested |
| `utils.py` | **61%** | Some math utils & decorators untested |

### 🚨 Low/No Coverage

| Module | Coverage | Notes |
|--------|----------|-------|
| `path_planning.py` | **39%** | Large gaps in CUDA-accelerated and plotting logic |
| `cuda_kernels.py` | **16%** | Most tests skipped — `NUMBA_DISABLE_CUDA=1` |
| `transformations.py` | **0%** | Not tested at all — consider adding basic SE(3) tests |

---

## 🧪 Benchmarking & Validation

ManipulaPy includes a comprehensive benchmarking suite to validate performance and accuracy across different hardware configurations.

### Benchmark Suite

Located in the `Benchmark/` directory, the suite provides three key tools:

| Benchmark | Purpose | Use Case |
|-----------|---------|----------|
| `performance_benchmark.py` | Comprehensive performance analysis | Full system evaluation and optimization |
| `accuracy_benchmark.py` | Numerical precision validation | Algorithm correctness verification |
| `quick_benchmark.py` | Fast development testing | CI/CD integration and regression testing |

### Real Performance Results

**Latest benchmark on a 16-core CPU, 31.1 GB RAM, NVIDIA GPU (30 SMs):**

```text
=== ManipulaPy Performance Benchmark Results ===
Hardware: 16-core CPU, 31.1GB RAM, NVIDIA GPU (30 SMs, 1024 threads/block)
Test Configuration: Large-scale problems (10K-100K trajectory points)

Overall Performance:
  Total Tests: 36 scenarios
  Success Rate: 91.7% (33/36) ✅
  Overall Speedup: 13.02× average acceleration
  CPU Mean Time: 6.88s → GPU Mean Time: 0.53s

🚀 EXCEPTIONAL PERFORMANCE HIGHLIGHTS:

Inverse Dynamics (CUDA Accelerated):
  Mean GPU Speedup: 3,624× (3.6K times faster!)
  Peak Performance: 5,563× speedup achieved
  Real-time Impact: 7s → 0.002s computation

Joint Trajectory Planning:
  Mean GPU Speedup: 2.29×
  Best Case: 7.96× speedup
  Large Problems: Consistent GPU acceleration

Cartesian Trajectories:
  Mean GPU Speedup: 1.02× (CPU competitive)
  Consistent Performance: ±0.04 variance
```

### Performance Recommendations

**🎯 OPTIMAL GPU USE CASES:**
- ✅ Inverse dynamics computation (**1000×–5000× speedup**)
- ✅ Large trajectory generation (>10K points)
- ✅ Batch processing multiple trajectories
- ✅ Real-time control applications

**⚠️ CPU-OPTIMAL SCENARIOS:**
- Small trajectories (<1K points)
- Cartesian space interpolation
- Single-shot computations
- Development and debugging

### Running Benchmarks

```bash
# Quick performance check (< 60 seconds)
cd Benchmark/
python quick_benchmark.py

# Comprehensive GPU vs CPU analysis
python performance_benchmark.py --gpu --plot --save-results

# Validate numerical accuracy
python accuracy_benchmark.py --tolerance 1e-8
```

<h2 id="documentation">📖 Documentation</h2>

### Online Documentation
- **[Documentation Home](https://manipulapy.readthedocs.io/)**
- **[API Reference](https://manipulapy.readthedocs.io/en/latest/api/index.html)**
- **[Theory Guide](https://manipulapy.readthedocs.io/en/latest/theory.html)**
- **[GPU Programming Guide](https://manipulapy.readthedocs.io/en/latest/user_guide/CUDA_Kernels.html)**

### Quick Reference

```python
# Check installation and dependencies
import ManipulaPy
ManipulaPy.check_dependencies(verbose=True)

# Module overview
print(ManipulaPy.__version__)  # Current version
print(ManipulaPy.__all__)      # Available modules

# GPU capabilities
from ManipulaPy.cuda_kernels import get_gpu_properties
props = get_gpu_properties()
if props:
    print(f"GPU: {props['multiprocessor_count']} SMs")
```

---

<h2 id="contributing">🤝 Contributing</h2>

We love your input! Whether you're reporting a bug, proposing a new feature, or improving our docs, here's how to get started:

### 1. Report an Issue
Please open a GitHub Issue with:
- A descriptive title
- Steps to reproduce
- Expected vs. actual behavior
- Any relevant logs or screenshots

### 2. Submit a Pull Request
1. Fork this repository and create your branch:
   ```bash
   git clone https://github.com/<your-username>/ManipulaPy.git
   cd ManipulaPy
   git checkout -b feature/my-feature
   ```
2. Install and set up the development environment:
   ```bash
   pip install -e .[dev]
   pre-commit install  # to run formatters and linters
   ```
3. Make your changes, then run tests and quality checks:
   ```bash
   # Run the full test suite
   python -m pytest tests/ -v

   # Lint and format
   black ManipulaPy/
   flake8 ManipulaPy/
   mypy ManipulaPy/
   ```
4. Commit with clear, focused messages and push your branch:
   ```bash
   git add .
   git commit -m "Add awesome new feature"
   git push origin feature/my-feature
   ```
5. Open a Pull Request against `main` describing your changes.

### 3. Seek Support
- **Design questions:** [GitHub Discussions](https://github.com/boelnasr/ManipulaPy/discussions)
- **Bug reports:** [GitHub Issues](https://github.com/boelnasr/ManipulaPy/issues)
- **Email:** aboelnasr1997@gmail.com

### 4. Code of Conduct
Please follow our [Code of Conduct](CODE_OF_CONDUCT.md) to keep this community welcoming.
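The coverage tables above flag `transformations.py` as entirely untested, so a small SE(3) test file makes a good first contribution. The sketch below is deliberately independent of ManipulaPy's API — the helpers `se3`, `se3_inverse`, and `rot_z` are placeholders, not library functions — and checks two identities any SE(3) implementation must satisfy:

```python
import numpy as np

def se3(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def se3_inverse(T):
    """Closed-form SE(3) inverse: [R p; 0 1]^-1 = [R^T  -R^T p; 0 1]."""
    R, p = T[:3, :3], T[:3, 3]
    return se3(R.T, -R.T @ p)

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def test_se3_inverse_roundtrip():
    # T @ T^-1 must be the identity to numerical precision
    T = se3(rot_z(0.7), [0.5, -0.2, 1.0])
    assert np.allclose(T @ se3_inverse(T), np.eye(4), atol=1e-12)

def test_rotation_orthonormality():
    # A proper rotation satisfies R^T R = I and det(R) = +1
    R = rot_z(1.3)
    assert np.allclose(R.T @ R, np.eye(3), atol=1e-12)
    assert np.isclose(np.linalg.det(R), 1.0)
```

Once adapted to the actual functions exported by `ManipulaPy.transformations`, drop the file into `tests/` and run it with `python -m pytest tests/ -v` as shown above.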

### Contribution Areas

- 🐛 **Bug Reports**: Issues and edge cases
- ✨ **New Features**: Algorithms and capabilities
- 📚 **Documentation**: Guides and examples
- 🚀 **Performance**: CUDA kernels and optimizations
- 🧪 **Testing**: Test coverage and validation
- 🎨 **Visualization**: Plotting and animation tools

### Guidelines

- Follow **PEP 8** style guidelines
- Add **comprehensive tests** for new features
- Update **documentation** for API changes
- Include **working examples** for new functionality
- Maintain **backward compatibility** when possible

---

## 📄 License & Citation

### License

ManipulaPy is licensed under the **GNU Affero General Public License v3.0 or later (AGPL-3.0-or-later)**.

**Key Points:**
- ✅ **Free to use** for research and education
- ✅ **Modify and distribute** under the same license
- ✅ **Commercial use** allowed under AGPL terms
- ⚠️ **Network services** must provide source code
- 📜 **See [LICENSE](LICENSE)** for complete terms

### Citation

If you use ManipulaPy in your research, please cite:

```bibtex
@software{manipulapy2025,
  title={ManipulaPy: A Comprehensive Python Package for Robotic Manipulator Analysis and Control},
  author={Mohamed Aboelnasr},
  year={2025},
  url={https://github.com/boelnasr/ManipulaPy},
  version={1.1.3},
  license={AGPL-3.0-or-later}
}
```

### Dependencies

All dependencies are AGPL-3.0 compatible:
- **Core**: `numpy`, `scipy`, `matplotlib` (BSD)
- **Vision**: `opencv-python` (Apache 2.0), `ultralytics` (AGPL-3.0)
- **GPU**: `cupy` (MIT), `numba` (BSD)
- **Simulation**: `pybullet` (Zlib), `urchin` (MIT)

---

## 📞 Support & Community

### Getting Help

1. **📚 Documentation**: [manipulapy.readthedocs.io](https://manipulapy.readthedocs.io/)
2. **💡 Examples**: Check the `Examples/` directory
3. **🐛 Issues**: [GitHub Issues](https://github.com/boelnasr/ManipulaPy/issues)
4. **💬 Discussions**: [GitHub Discussions](https://github.com/boelnasr/ManipulaPy/discussions)
5. **📧 Contact**: [aboelnasr1997@gmail.com](mailto:aboelnasr1997@gmail.com)

### Community

- **🌟 Star** the project if you find it useful
- **🍴 Fork** to contribute improvements
- **📢 Share** with the robotics community
- **📝 Cite** in your academic work

### Contact Information

**Created and maintained by Mohamed Aboelnasr**

- 📧 **Email**: [aboelnasr1997@gmail.com](mailto:aboelnasr1997@gmail.com)
- 🐙 **GitHub**: [@boelnasr](https://github.com/boelnasr)
- 🔗 **LinkedIn**: Connect for collaboration opportunities

---

## 🏆 Why Choose ManipulaPy?

<table>
<tr>
<td width="33%">

### 🔬 **For Researchers**
- Comprehensive algorithms with solid mathematical foundations
- Extensible modular design for new methods
- Well-documented with theoretical background
- Proper citation format for publications
- AGPL-3.0 license for open science

</td>
<td width="33%">

### 👩‍💻 **For Developers**
- High-performance GPU acceleration
- Clean, readable Python code
- Modular architecture
- Comprehensive test suite
- Active development and support

</td>
<td width="33%">

### 🏭 **For Industry**
- Production-ready with robust error handling
- Scalable for real-time applications
- Clear licensing for commercial use
- Professional documentation
- Regular updates and maintenance

</td>
</tr>
</table>

---

<div align="center">

**🤖 ManipulaPy v1.1.3: Professional robotics tools for the Python ecosystem**

[](https://github.com/boelnasr/ManipulaPy)
[](https://pypi.org/project/ManipulaPy/)

*Empowering robotics research and development with comprehensive, GPU-accelerated tools*

[⭐ Star on GitHub](https://github.com/boelnasr/ManipulaPy) • [📦 Install from PyPI](https://pypi.org/project/ManipulaPy/) • [📖 Read the Docs](https://manipulapy.readthedocs.io/)

</div>