| Field | Value |
| --- | --- |
| Name | ATEM |
| Version | 1.0.1 |
| home_page | None |
| Summary | Adaptive Task Execution Manager for robotics, integrating AI and pathfinding. |
| upload_time | 2024-11-17 21:05:14 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | None |
| keywords | ftc, robotics, ai, pathfinding, autonomous |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# ATEM - Adaptive Task Execution Model
---
**ATEM** (Adaptive Task Execution Model) is a machine learning-based project designed to determine the optimal sequence of tasks to maximize points in the autonomous phase of FTC robotics competitions. The project utilizes TensorFlow and TensorFlow Lite for efficient model deployment.
## **Features**
- **Model Training**:
  - Train a TensorFlow model to optimize task sequences for maximum points.
  - Tasks and their respective points are dynamically loaded from a JSON file.
  - Outputs a TensorFlow Lite model for lightweight deployment.
- **Model Interpretation**:
  - Given a list of tasks, predicts the optimal sequence and total points.
  - Outputs human-readable task orders and scores.
---
## **How It Works**
### 1. **Task JSON File**
The `tasks.json` file defines the tasks available for the autonomous phase:
```json
{
  "tasks": [
    { "name": "High Basket", "points": 10, "time": 5 },
    { "name": "Low Basket", "points": 5, "time": 3 },
    "..."
  ]
}
```
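
For orientation, a minimal loading sketch, assuming the file sits in the working directory and follows the schema above; `load_tasks` is an illustrative helper, not a documented package function:
```python
import json

def load_tasks(path="tasks.json"):
    """Read the task definitions (name, points, time) from the JSON file."""
    with open(path) as f:
        data = json.load(f)
    # Skip placeholder entries such as the trailing "..." string.
    return [t for t in data["tasks"] if isinstance(t, dict)]

tasks = load_tasks()
print([(t["name"], t["points"], t["time"]) for t in tasks])
```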
## **Training the Model**
The model is trained on task data to score sequences of tasks that maximize points within a time limit (a condensed sketch follows this list):
- Loads tasks from the `tasks.json` file.
- Generates random task sequences within the given time constraint.
- Encodes the tasks and trains a model to predict a score for each sequence.
- Outputs a TensorFlow Lite model for deployment.
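
A condensed, hedged sketch of that pipeline, reusing the `load_tasks` helper above; `MAX_LEN`, `TIME_LIMIT`, the layer sizes, and the output file name `atem_model.tflite` are assumptions for illustration, not values taken from the package:
```python
import random

import numpy as np
import tensorflow as tf

MAX_LEN = 5        # assumed maximum number of tasks per sequence
TIME_LIMIT = 30    # assumed autonomous-phase time budget in seconds

def make_dataset(tasks, n_samples=2000):
    """Generate random time-feasible task sequences and their total points."""
    task_to_index = {t["name"]: i + 1 for i, t in enumerate(tasks)}  # 0 is the padding ID
    seq_ids, seq_times, totals = [], [], []
    for _ in range(n_samples):
        order = random.sample(tasks, len(tasks))
        chosen, elapsed = [], 0
        for t in order:
            if len(chosen) < MAX_LEN and elapsed + t["time"] <= TIME_LIMIT:
                chosen.append(t)
                elapsed += t["time"]
        ids = [task_to_index[t["name"]] for t in chosen]
        times = [float(t["time"]) for t in chosen]
        seq_ids.append(ids + [0] * (MAX_LEN - len(ids)))
        seq_times.append(times + [0.0] * (MAX_LEN - len(times)))
        totals.append(float(sum(t["points"] for t in chosen)))
    return (np.array(seq_ids, dtype=np.int32),
            np.array(seq_times, dtype=np.float32),
            np.array(totals, dtype=np.float32),
            task_to_index)

tasks = load_tasks()                      # from the loading sketch above
seq_ids, seq_times, totals, task_to_index = make_dataset(tasks)

# Two inputs: embedded task indices plus raw task times, regressed onto total points.
ids_in = tf.keras.Input(shape=(MAX_LEN,), dtype="int32", name="task_ids")
times_in = tf.keras.Input(shape=(MAX_LEN,), name="task_times")
emb = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(len(task_to_index) + 1, 8)(ids_in))
hidden = tf.keras.layers.Dense(64, activation="relu")(
    tf.keras.layers.Concatenate()([emb, times_in]))
points_out = tf.keras.layers.Dense(1)(hidden)
model = tf.keras.Model([ids_in, times_in], points_out)
model.compile(optimizer="adam", loss="mse")
model.fit([seq_ids, seq_times], totals, epochs=10, verbose=0)

# Convert and save a TensorFlow Lite model for lightweight deployment.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("atem_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```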
## **Interpreting the Model**
The interpreter script takes a sequence of tasks, predicts the total points, and outputs the best sequence in human-readable format.
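
For illustration, one way such an interpreter could work, reusing `tasks`, `task_to_index`, and the `atem_model.tflite` file from the sketches above; the brute-force permutation search and the input-name lookup are assumptions, not the package's documented behavior:
```python
import itertools

import numpy as np
import tensorflow as tf

# Load the TFLite model written by the training sketch above (file name is an assumption).
interpreter = tf.lite.Interpreter(model_path="atem_model.tflite")
interpreter.allocate_tensors()
inputs = interpreter.get_input_details()
output = interpreter.get_output_details()[0]

def _input(fragment):
    """Look up a TFLite input tensor by the Keras input name it was converted from."""
    return next(d for d in inputs if fragment in d["name"])

def predicted_points(seq, task_to_index, max_len=5):
    """Pad one candidate task sequence and return the model's predicted total points."""
    ids = [task_to_index[t["name"]] for t in seq] + [0] * (max_len - len(seq))
    times = [float(t["time"]) for t in seq] + [0.0] * (max_len - len(seq))
    interpreter.set_tensor(_input("task_ids")["index"], np.array([ids], dtype=np.int32))
    interpreter.set_tensor(_input("task_times")["index"], np.array([times], dtype=np.float32))
    interpreter.invoke()
    return float(interpreter.get_tensor(output["index"])[0][0])

# Brute-force search over candidate orderings (fine for the small task lists used here).
best = max(itertools.permutations(tasks, min(len(tasks), 5)),
           key=lambda seq: predicted_points(list(seq), task_to_index))
print(" -> ".join(t["name"] for t in best), "=>", predicted_points(list(best), task_to_index))
```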
## **Technical Details**
**Model Architecture**
- **Input**:
  - Task indices (embedded into dense vectors).
  - Task times (numeric values).
- **Hidden Layers**:
  - Dense layers for feature extraction and sequence analysis.
- **Output**:
  - Predicted total points for a given task sequence.

**Data Encoding**
- Task names are encoded as numerical indices.
- Task times are padded to a fixed length for uniform input.
---
# Adaptive Task Prediction Model
---
## Overview
The Adaptive Task Prediction Model is designed to enable real-time decision-making for autonomous robots. It processes sensor data after each task completion, predicts the next optimal task, and adjusts its strategy based on the robot’s current state and environmental feedback.
This dynamic approach ensures the robot maximizes performance, conserves resources, and adapts to unexpected changes in real-world scenarios.
---
## Workflow
### 1. **Sensor Data Collection**
After completing each task, the robot gathers sensor data to provide a snapshot of its current state:
- **Time Elapsed**: Time taken to complete the task.
- **Distance to Target**: The robot's proximity to the next goal.
- **Gyro Angle**: Orientation relative to a reference heading.
- **Battery Level**: Remaining energy for task prioritization.
- Additional sensor inputs like vision or LIDAR can be incorporated.
---
### 2. **Feature Encoding**
Sensor data and the current task ID are encoded into a format compatible with the machine learning model:
- Continuous values are normalized for consistent input ranges.
- Categorical values are converted to embeddings or indices.
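
A small sketch of that encoding step; the concrete `task_to_index` entries and normalization constants are assumptions chosen to match the example later in this section:
```python
import numpy as np

# Assumed mapping from task names to integer IDs (0 reserved for padding).
task_to_index = {"Observation Zone": 1, "High Basket": 2, "Low Basket": 3}

def encode_state(current_task: str, sensors: dict) -> np.ndarray:
    """Encode the current task ID plus normalized sensor readings as one input vector."""
    features = [
        float(task_to_index[current_task]),
        sensors["time_elapsed"] / 30.0,       # assumed 30 s autonomous budget
        sensors["distance_to_target"],        # assumed already scaled
        sensors["gyro_angle"] / 360.0,        # degrees normalized to [0, 1]
        sensors["battery_level"] / 100.0,     # percentage mapped to [0, 1]
    ]
    return np.array([features], dtype=np.float32)  # batch dimension of 1

x = encode_state("Observation Zone",
                 {"time_elapsed": 20, "distance_to_target": 0.5,
                  "gyro_angle": 45, "battery_level": 70})
```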
---
### 3. **Real-Time Model Inference**
The model processes the encoded input to:
1. **Predict the Next Task**:
   - Outputs the most likely task to maximize performance.
2. **Provide Task Scores**:
   - Confidence levels for all possible tasks.
**Example**:
```plaintext
Input:
- Current Task: "Observation Zone"
- Sensor Data: {time_elapsed: 20, distance_to_target: 0.5, gyro_angle: 45, battery_level: 70}
Output:
- Predicted Next Task: "High Basket"
- Task Scores: [0.1, 0.8, 0.1]
```
## Model Inferencing
The Adaptive Task Prediction Model utilizes a TensorFlow Lite (TFLite) model for efficient inference. This lightweight, optimized model is specifically designed for resource-constrained environments like robotics systems, ensuring fast and accurate predictions in real time.
---
### **The model requires encoded inputs representing:**
- **Current Task**: Encoded as a numerical ID using the `task_to_index` mapping.
- **Sensor Data**: Real-time inputs such as:
  - `time_elapsed`: Normalized elapsed time.
  - `distance_to_target`: Scaled distance to the next target.
  - `gyro_angle`: Angle, normalized to a fixed range.
  - `battery_level`: Percentage value normalized between 0 and 1.

*The inputs are padded to match the model’s expected dimensions if needed.*
### **Once the input data is prepared, it is passed into the TFLite interpreter:**
- The interpreter runs the input through the pre-trained model.
- The output includes:
  - Predicted Task Scores: Confidence scores for each possible task.
  - Selected Task: The task with the highest score.
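
A minimal sketch of that call, assuming an already-converted next-task model saved as `adaptive_model.tflite` whose single input is the encoded state vector from the sketch above and whose output holds one confidence score per task:
```python
import numpy as np
import tensorflow as tf

# Assumed model file name for the adaptive next-task predictor.
interpreter = tf.lite.Interpreter(model_path="adaptive_model.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

# Reverse mapping so a score index can be reported as a task name
# (assumes output position i corresponds to task ID i + 1).
index_to_task = {v: k for k, v in task_to_index.items()}

def predict_next_task(state: np.ndarray):
    """Run one encoded state through the TFLite model; return (task name, score vector)."""
    interpreter.set_tensor(input_detail["index"], state.astype(input_detail["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_detail["index"])[0]
    best = int(np.argmax(scores))
    return index_to_task.get(best + 1, f"task_{best}"), scores

next_task, scores = predict_next_task(x)   # x from the encoding sketch above
print(next_task, scores)
```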
### **How the AI Adapts in Real-Time**
- After completing a task, the robot feeds its current state (task + sensor data) into the model.
- The AI processes the input and:
  - Predicts the next task to perform.
  - Scores all potential tasks to indicate confidence levels.
- The robot executes the predicted task with the highest score.
- The process repeats, ensuring continuous adaptation to changing environments and constraints (a minimal loop sketch follows).
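
Putting those steps together, a hedged sketch of the loop; `read_sensors` and `execute_task` are placeholders for robot-side code, and `encode_state` / `predict_next_task` come from the sketches above:
```python
def read_sensors() -> dict:
    """Placeholder: return the latest sensor snapshot from the robot."""
    return {"time_elapsed": 20, "distance_to_target": 0.5,
            "gyro_angle": 45, "battery_level": 70}

def execute_task(task_name: str) -> None:
    """Placeholder: hand the chosen task to the robot's motion/actuation code."""
    print(f"executing {task_name}")

current_task = "Observation Zone"
for _ in range(3):  # a few cycles of the autonomous phase
    state = encode_state(current_task, read_sensors())   # step 2: feature encoding
    next_task, scores = predict_next_task(state)         # step 3: TFLite inference
    execute_task(next_task)                              # act on the highest-scoring task
    current_task = next_task                             # feed back into the next cycle
```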