Name | perceptionpro |
Version | 0.1.2 |
home_page | None |
Summary | PerceptionPro is a package for computer vision tasks such as head pose estimation, eye tracking, and object detection. |
upload_time | 2025-01-21 16:13:38 |
maintainer | None |
docs_url | None |
author | Umar Balak |
requires_python | >=3.8 |
license | MIT |
keywords | |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# PerceptionPro
**PerceptionPro** is a Python package designed for real-time monitoring and alert systems. It provides modular components for head pose estimation, eye tracking, and object detection, all integrated into a cohesive alert system.
## Features
* **Head Pose Estimation:** Tracks the orientation of the user's head in real time, indicating where attention is directed.
* **Eye Tracking:** Detects and analyzes eye movements and gaze direction to support focus monitoring.
* **Object Detection:** Identifies objects in the environment, supporting compliance and situational awareness.
* **Alert System:** Integrated mechanism to trigger alerts based on configurable thresholds for head pose, eye tracking, and object detection.
## Installation
```bash
pip install perceptionpro
```
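After installing, a one-line import of the entry point used in the usage example further down confirms that the package resolved correctly (this is just a sanity check, not a documented command):

```python
# Sanity check: the core entry point used in the usage example should import cleanly.
from perceptionpro.core import PerceptionInit

print("perceptionpro is importable:", PerceptionInit.__name__)
```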
## Components
**1. Head Pose Estimation:**
* Module: `HeadPoseEstimator`
* Uses facial landmarks to determine the orientation of the user's head (e.g., Left, Right, Up, Down, Center).
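This README does not document `HeadPoseEstimator`'s API, so the sketch below only illustrates the general landmark-based approach described above: solve a PnP problem between a canonical 3D face model and the detected 2D landmarks, then threshold the resulting yaw/pitch angles into Left/Right/Up/Down/Center. Every name, point set, and threshold here is an illustrative assumption, not PerceptionPro code.

```python
import cv2
import numpy as np

# Generic 3D reference points for a neutral face (nose tip, chin, eye corners,
# mouth corners), in arbitrary model units. This is the classic 6-point model
# often used for PnP-based head pose, not data shipped with PerceptionPro.
MODEL_POINTS = np.array([
    (0.0,    0.0,    0.0),      # nose tip
    (0.0,   -330.0,  -65.0),    # chin
    (-225.0, 170.0, -135.0),    # left eye outer corner
    (225.0,  170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),   # left mouth corner
    (150.0,  -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def classify_head_pose(image_points, frame_w, frame_h,
                       yaw_thresh=15.0, pitch_thresh=10.0):
    """image_points: (6, 2) float64 pixel coordinates in MODEL_POINTS order."""
    # Rough camera intrinsics estimated from the frame size (no calibration).
    camera_matrix = np.array([[frame_w, 0,       frame_w / 2],
                              [0,       frame_w, frame_h / 2],
                              [0,       0,       1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion

    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix,
                               dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return "Center"

    # Rotation vector -> rotation matrix -> Euler-style angles in degrees.
    rot_mat, _ = cv2.Rodrigues(rvec)
    angles, *_ = cv2.RQDecomp3x3(rot_mat)
    pitch, yaw = angles[0], angles[1]
    # With this reference model a frontal face decomposes to a pitch near +/-180,
    # so fold it back towards zero before thresholding.
    pitch = pitch - 180 if pitch > 90 else pitch + 180 if pitch < -90 else pitch

    if yaw < -yaw_thresh:
        return "Left"
    if yaw > yaw_thresh:
        return "Right"
    if pitch > pitch_thresh:
        return "Up"
    if pitch < -pitch_thresh:
        return "Down"
    return "Center"
```

In practice the six 2D points would come from a facial-landmark detector (dlib, MediaPipe, etc.), and the angle signs and thresholds need to be tuned to that detector's conventions.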
**2. Eye Tracking:**
* Module: `EyeTracker`
* Tracks eye movements and gaze direction for applications like focus monitoring and attention analysis.
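`EyeTracker`'s interface is likewise internal to the package; the following is only a minimal, generic sketch of how horizontal gaze direction is commonly derived from eye landmarks: the pupil's position is normalized by the distance between the eye corners and thresholded into Left/Center/Right. All names and thresholds are illustrative assumptions.

```python
def classify_gaze(pupil_x, eye_left_x, eye_right_x,
                  left_ratio=0.40, right_ratio=0.60):
    """
    Classify horizontal gaze from the pupil's position between the eye corners.
    A ratio near 0.5 means the pupil sits roughly in the middle of the eye.
    """
    eye_width = eye_right_x - eye_left_x
    if eye_width <= 0:
        return "Unknown"
    ratio = (pupil_x - eye_left_x) / eye_width
    if ratio < left_ratio:
        return "Left"
    if ratio > right_ratio:
        return "Right"
    return "Center"

# Example: pupil at x=312 between eye corners at x=300 and x=340 -> ratio 0.3 -> "Left"
print(classify_gaze(312, 300, 340))
```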
**3. Object Detection:**
* Module: `ObjectDetector`
* Leverages a YOLO model to identify and count objects such as a person, a book, or a cell phone in real time.
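How `ObjectDetector` wraps the model is internal to the package, but since the usage example below points at a `yolov8s.pt` checkpoint, a standalone detect-and-count pass with the `ultralytics` package looks roughly like this. Treat it as a sketch of the underlying technique; the `ultralytics` dependency and every identifier here are assumptions, not PerceptionPro's API.

```python
from collections import Counter

import cv2
from ultralytics import YOLO  # assumed dependency for loading the .pt checkpoint

# Same family of weights as the checkpoint referenced in the usage example below.
model = YOLO("yolov8s.pt")

def count_objects(frame, wanted=("person", "book", "cell phone")):
    """Run one inference pass and count detections of the classes of interest."""
    result = model(frame, verbose=False)[0]
    labels = (model.names[int(cls)] for cls in result.boxes.cls)
    return Counter(label for label in labels if label in wanted)

camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()
if ok:
    print(count_objects(frame))  # e.g. Counter({'person': 1})
```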
**4. Alert System:**
* Threshold-based system to monitor head pose, eye tracking, and detected objects, triggering alerts when defined criteria are met.
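PerceptionPro's actual thresholds are configured inside the package; the plain-Python sketch below only illustrates the idea stated above: per-frame signals are checked against simple rules, violations are counted, and the session stops once a limit is reached. The rules and names are illustrative assumptions; the limit of 4 mirrors the usage example below.

```python
def check_frame(head_direction, eye_direction, object_counts, max_extra_people=0):
    """Return an alert message if a frame violates the monitoring rules, else None."""
    if head_direction != "Center":
        return f"Head turned {head_direction}"
    if eye_direction != "Center":
        return f"Gaze directed {eye_direction}"
    if object_counts.get("cell phone", 0) > 0:
        return "Cell phone detected"
    if object_counts.get("person", 0) > 1 + max_extra_people:
        return "More than one person in frame"
    return None

def run_alert_loop(frames, violation_limit=4):
    """Count violations across frames and stop once the limit is reached."""
    violations = 0
    for head, eye, counts in frames:
        alert = check_frame(head, eye, counts)
        if alert:
            violations += 1
            print(f"Warning {violations}: {alert}")
            if violations >= violation_limit:
                print("Violation limit reached - terminating session.")
                break

# Example with three synthetic frames
run_alert_loop([
    ("Center", "Center", {"person": 1}),
    ("Left",   "Center", {"person": 1}),
    ("Center", "Right",  {"person": 1, "cell phone": 1}),
])
```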
## Usage
```python
import cv2
import keyboard
from perceptionpro.core import PerceptionInit

def main():
    """
    Initialize the camera, process video frames, and handle
    user input to quit.
    """
    try:
        # Initialize the camera
        camera = cv2.VideoCapture(0)

        if not camera.isOpened():
            print("Error: Camera could not be opened.")
            return
        print("Camera opened successfully.")

        # Initialize PerceptionPro with the YOLO model weights
        vision = PerceptionInit(camera, model_path="https://github.com/UmarBalak/ProctorVision/raw/refs/heads/main/yolov8s.pt")

        violation_count = 0

        while True:
            # Capture a frame to confirm the camera is still delivering images
            ret, frame = camera.read()
            if not ret:
                print("Failed to capture video. Check your camera connection.")
                break

            # Process frame metrics (track() also reports the frame rate)
            result, eye_d, head_d, fps, obj_d, alert_msg = vision.track()
            print("Processed")

            if not result:
                violation_count += 1
                print(f"Warning: {violation_count} - {alert_msg}")

                if violation_count == 4:
                    print("The exam has been terminated.")
                    break

            # Print real-time metrics to the console
            print(f"FPS: {fps:.2f}")
            print(f"Eye Direction: {eye_d}")
            print(f"Head Direction: {head_d}")
            print(f"Background: {'Ok' if obj_d else 'Object detected'}")

            # Exit the loop when 'q' is pressed
            if keyboard.is_pressed('q'):
                print("User requested to stop the process.")
                break

    except Exception as e:
        print(f"An unexpected error occurred: {e}")

    finally:
        # Always release the camera when done
        if 'camera' in locals() and camera.isOpened():
            camera.release()
            print("Camera released. Process complete.")

if __name__ == "__main__":
    main()
```
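Note that the `keyboard` package used above hooks global key events and may require administrator or root privileges on Linux and macOS. If the frames are being displayed anyway, OpenCV's own key handling is a dependency-free alternative; the minimal sketch below shows only that loop shape and is independent of PerceptionPro's API.

```python
import cv2

camera = cv2.VideoCapture(0)
try:
    while True:
        ret, frame = camera.read()
        if not ret:
            break
        # ... call vision.track() and draw/print metrics here ...
        cv2.imshow("PerceptionPro", frame)
        # waitKey services the HighGUI window and reports key presses;
        # it only sees keys while the window has focus.
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    camera.release()
    cv2.destroyAllWindows()
```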
Raw data
{
"_id": null,
"home_page": null,
"name": "perceptionpro",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": null,
"author": "Umar Balak",
"author_email": "umarbalak35@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/b7/9a/a3c7b9ddbcaf62139f611247995a4ce339428653a4a38b7cbe2786b92394/perceptionpro-0.1.2.tar.gz",
"platform": null,
"description": "# PerceptionPro\r\n\r\n**PerceptionPro** is a Python package designed for real-time monitoring and alert systems. It provides modular components for head pose estimation, eye tracking, and object detection, all integrated into a cohesive alert system.\r\n\r\n## Features\r\n* **Head Pose Estimation:** Tracks the orientation of the user's head in real-time, providing insights into the direction of attention.\r\n\r\n* **Eye Tracking:** Detects and analyzes eye movements and gaze direction, ensuring effective focus monitoring.\r\n\r\n* **Object Detection:** Identifies objects in the environment, supporting compliance and situational awareness.\r\n\r\n* **Alert System:** Integrated mechanism to trigger alerts based on configurable thresholds for head pose, eye tracking, and object detection.\r\n\r\n\r\n## Installation\r\n\r\n```bash\r\npip install perceptionpro\r\n```\r\n\r\n## Components\r\n\r\n**1. Head Pose Estimation:**\r\n\r\n* Module: `HeadPoseEstimator`\r\n\r\n* Uses facial landmarks to determine the orientation of the user's head (e.g., Left, Right, Up, Down, Center).\r\n\r\n**2. Eye Tracking:**\r\n\r\n* Module: `EyeTracker`\r\n\r\n* Tracks eye movements and gaze direction for applications like focus monitoring and attention analysis.\r\n\r\n**3. Object Detection:**\r\n\r\n* Module: `ObjectDetector`\r\n\r\n* Leverages the YOLO model to identify and count objects such as person, book, and cell phone in real-time.\r\n\r\n**4. Alert System:**\r\n\r\n* Threshold-based system to monitor head pose, eye tracking, and detected objects, triggering alerts when defined criteria are met.\r\n\r\n## Usage\r\n```python\r\nimport cv2\r\nimport time\r\nimport keyboard \r\nfrom perceptionpro.core import PerceptionInit\r\n\r\ndef main():\r\n \"\"\"\r\n Main function to initialize the camera, process video frames,\r\n and handle user input to quit.\r\n \"\"\"\r\n try:\r\n # Initialize the camera\r\n camera = cv2.VideoCapture(0)\r\n\r\n if not camera.isOpened():\r\n print(\"Error: Camera could not be opened.\")\r\n return\r\n else:\r\n print(\"Camera opened successfully.\")\r\n\r\n # Initialize the PerceptionInit with speech enabled\r\n vision = PerceptionInit(camera, model_path=\"https://github.com/UmarBalak/ProctorVision/raw/refs/heads/main/yolov8s.pt\")\r\n\r\n # Initialize variables\r\n violation_count = 0\r\n prev_time = time.time()\r\n\r\n while True:\r\n # Capture frame-by-frame\r\n ret, frame = camera.read()\r\n\r\n if not ret:\r\n print(\"Failed to capture video. 
Check your camera connection.\")\r\n break\r\n\r\n # Calculate frame rate\r\n current_time = time.time()\r\n elapsed_time = current_time - prev_time\r\n fps = 1 / elapsed_time if elapsed_time > 0 else 0\r\n prev_time = current_time\r\n\r\n # Process frame metrics\r\n result, eye_d, head_d, fps, obj_d, alert_msg = vision.track()\r\n print(\"Procesed\")\r\n\r\n if not result:\r\n violation_count += 1\r\n print(f\"Warning: {violation_count} - {alert_msg}\")\r\n\r\n if violation_count == 4:\r\n print(\"The exam has been terminated.\")\r\n break\r\n else:\r\n pass\r\n # Print real-time metrics to console\r\n print(f\"FPS: {fps:.2f}\")\r\n print(f\"Eye Direction: {eye_d}\")\r\n print(f\"Head Direction: {head_d}\")\r\n print(f\"Background: {'Ok' if obj_d else 'Object detected'}\")\r\n\r\n # Check if 'q' is pressed to exit the loop\r\n if keyboard.is_pressed('q'):\r\n print(\"User requested to stop the process.\")\r\n break\r\n\r\n except Exception as e:\r\n print(f\"An unexpected error occurred: {e}\")\r\n\r\n finally:\r\n # Always release the camera and close windows\r\n if 'camera' in locals() and camera.isOpened():\r\n camera.release()\r\n print(\"Camera released. Process complete.\")\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n\r\n```\r\n\r\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "PerceptionPro is a package for computer vision tasks such as head pose estimation, eye tracking, and object detection.",
"version": "0.1.2",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "fb18110b370c982826ab96a57643f074009c145990dedeacad1dfbc3d5b68281",
"md5": "d32d9d297549d3c25170d01c8460b35e",
"sha256": "0faae5193f2a7c2a7c3e8ccce68c0277d4bf7ef1e0b892c6ca9a2ca8e96827a0"
},
"downloads": -1,
"filename": "perceptionpro-0.1.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d32d9d297549d3c25170d01c8460b35e",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 9505,
"upload_time": "2025-01-21T16:13:37",
"upload_time_iso_8601": "2025-01-21T16:13:37.025240Z",
"url": "https://files.pythonhosted.org/packages/fb/18/110b370c982826ab96a57643f074009c145990dedeacad1dfbc3d5b68281/perceptionpro-0.1.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "b79aa3c7b9ddbcaf62139f611247995a4ce339428653a4a38b7cbe2786b92394",
"md5": "e4424c8ae7265635a30d15a5e2cc0238",
"sha256": "8233ad35189d8200e4e5028d11be544d90e8f6f3375fd5737c5b8a3a86b604ca"
},
"downloads": -1,
"filename": "perceptionpro-0.1.2.tar.gz",
"has_sig": false,
"md5_digest": "e4424c8ae7265635a30d15a5e2cc0238",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 9486,
"upload_time": "2025-01-21T16:13:38",
"upload_time_iso_8601": "2025-01-21T16:13:38.993917Z",
"url": "https://files.pythonhosted.org/packages/b7/9a/a3c7b9ddbcaf62139f611247995a4ce339428653a4a38b7cbe2786b92394/perceptionpro-0.1.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-01-21 16:13:38",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "perceptionpro"
}