# Script Run Tracker
A lightweight SQLite-based run tracker for Python scripts with automatic logging capabilities.
This library is a companion tool for the **Python Automation Project**, designed to track the execution history of scheduled jobs, data pipelines, and automation scripts with detailed logging.
## Features
- 📊 **SQLite-based tracking** - No external dependencies
- 📝 **Automatic logging** - Creates timestamped log files for each run
- 🔄 **Log rotation** - Automatically manages old log files
- ⚡ **Context manager support** - Clean and simple API
- 🎯 **Trigger tracking** - Distinguishes between manual and scheduled runs
- ✅ **Status tracking** - Automatically tracks success/failure states
- 📁 **Project organization** - Logs stored alongside your scripts
## Installation
```bash
pip install script-run-tracker
```
## Related Projects
This library is part of the **Python Automation Project** ecosystem, designed to provide robust tracking and monitoring capabilities for automated workflows.
## Quick Start
### 1. Initialize the database
First, create the database schema (run once):
```python
from run_tracker import init_database
init_database('tracking.db')
```
### 2. Register your flow
```python
from run_tracker import register_flow
register_flow(
    db_path='tracking.db',
    flow_name='Daily Data Processing',
    flow_path='/path/to/your/script.py',
    description='Processes daily sales data'
)
```
### 3. Use in your script
```python
from run_tracker import RunTracker
with RunTracker('tracking.db', trigger_type='scheduler') as tracker:
    tracker.log("Starting data processing")

    # Your code here
    data = load_data()
    tracker.log(f"Loaded {len(data)} records")

    process_data(data)
    tracker.log("Processing complete")
```
## Usage
### Basic Usage
```python
from run_tracker import RunTracker
with RunTracker('tracking.db') as tracker:
    tracker.log("Process started")
    # Your code here
    tracker.log("Process completed")
```
### Custom Project Name
```python
with RunTracker('tracking.db', project_name='my_etl_job') as tracker:
    tracker.log("ETL job started")
    # Your code here
```
### Trigger Types
Detect whether the script was run manually or by a scheduler, for example via a command-line argument:
```python
import sys
from run_tracker import RunTracker
# Detect trigger type from command line argument
trigger_type = sys.argv[1] if len(sys.argv) > 1 else "scheduler"
with RunTracker('tracking.db', trigger_type=trigger_type, max_log_files=10) as tracker:
    tracker.log("Process started")
    # Your code here
```
**Usage:**
```bash
# No argument: trigger_type defaults to 'scheduler'
python your_script.py
# Explicitly set trigger type
python your_script.py manual
python your_script.py scheduler
```
Alternatively, you can explicitly set the trigger type:
```python
# For scheduled runs
with RunTracker('tracking.db', trigger_type='scheduler') as tracker:
    tracker.log("Automated run started")

# For manual runs
with RunTracker('tracking.db', trigger_type='manual') as tracker:
    tracker.log("Manual run started")
```
### Log Levels
```python
with RunTracker('tracking.db') as tracker:
    tracker.log("Informational message", level='INFO')
    tracker.log("Debug information", level='DEBUG')
    tracker.log("Warning message", level='WARNING')
    tracker.log("Error occurred", level='ERROR')
    tracker.log("Critical issue", level='CRITICAL')
```
### Configure Log Retention
```python
# Keep only the last 5 log files
with RunTracker('tracking.db', max_log_files=5) as tracker:
    tracker.log("Starting with custom retention")
```
## Database Schema
The package automatically creates two tables:
**flows** - Stores information about your scripts
- flow_id (PRIMARY KEY)
- flow_name
- flow_path
- description
- is_active
- created_at
**runs** - Stores execution history
- run_id (PRIMARY KEY)
- flow_id (FOREIGN KEY)
- status ('running', 'success', 'fail')
- trigger_type ('scheduler', 'manual')
- start_time
- finish_time
- error_message
- log_file_path
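Because the tracker stores everything in plain SQLite, the tables above can be queried directly with Python's built-in `sqlite3` module. The sketch below lists the most recent runs joined with their flow names; the column names follow the schema documented above, so adjust the query if your schema version differs:

```python
import sqlite3

def recent_runs(db_path, limit=10):
    """Return the most recent runs joined with their flow names.

    Column names follow the schema documented above.
    """
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            """
            SELECT f.flow_name, r.status, r.trigger_type,
                   r.start_time, r.finish_time, r.error_message
            FROM runs r
            JOIN flows f ON f.flow_id = r.flow_id
            ORDER BY r.start_time DESC
            LIMIT ?
            """,
            (limit,),
        )
        return cur.fetchall()
    finally:
        conn.close()
```

This works with any SQLite client as well; the database file is ordinary SQLite with no custom extensions.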
## Log Files
Log files are automatically created in a `logs/` directory next to your script:
```
your_project/
├── your_script.py
└── logs/
    ├── your_script_run_1.log
    ├── your_script_run_2.log
    └── your_script_run_3.log
```
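The `max_log_files` setting caps how many log files are kept per script. The pruning idea can be sketched with the standard library; this is an illustration of the rotation concept only, not the package's actual implementation (the tracker handles rotation internally):

```python
from pathlib import Path

def prune_logs(log_dir, max_log_files=10):
    """Delete the oldest .log files so at most max_log_files remain.

    Illustrative sketch only; RunTracker does this for you.
    """
    # Sort by modification time, oldest first
    logs = sorted(Path(log_dir).glob("*.log"),
                  key=lambda p: p.stat().st_mtime)
    removed = []
    # Remove from the oldest end until the cap is satisfied
    for old in logs[:max(0, len(logs) - max_log_files)]:
        old.unlink()
        removed.append(old.name)
    return removed
```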
## Error Handling
RunTracker automatically captures and logs exceptions:
```python
with RunTracker('tracking.db') as tracker:
    tracker.log("Starting risky operation")

    # If this raises an exception, it will be:
    # 1. Logged to the log file
    # 2. Stored in the database
    # 3. Re-raised for your handling
    risky_operation()
```
## Utility Functions
### Initialize Database
```python
from run_tracker import init_database
init_database('tracking.db')
```
### Register a Flow
```python
from run_tracker import register_flow
register_flow(
    db_path='tracking.db',
    flow_name='Data Sync Job',
    flow_path='/opt/scripts/data_sync.py',
    description='Syncs data from external API'
)
```
### Deactivate a Flow
```python
from run_tracker import deactivate_flow
deactivate_flow('tracking.db', 'Data Sync Job')
```
## Requirements
- Python 3.7+
- No external dependencies (uses only Python standard library)
## License
MIT License
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Support
If you encounter any issues or have questions, please file an issue on the GitHub repository.