<p align="center">
<img src=".github/images/automagik_logo.png" alt="Spark Logo" width="600"/>
</p>
# Spark
> **Because magic shouldn't be complicated.**
Spark is an automagion engine that seamlessly integrates with multiple [LangFlow](https://github.com/langflow-ai/langflow) instances. Deploy AI-driven flows, schedule one-time or recurring tasks, and monitor everything with minimal fuss, no coding required.
## 🔗 Ecosystem
- **[AutoMagik Agents](https://github.com/namastexlabs/automagik-agents)**: Develop production-level AI agents
- **[AutoMagik UI](https://github.com/namastexlabs/automagik-ui)**: Create agents using natural language with our dedicated UI
## 🚀 Installation
Spark provides two setup options:
### Prerequisites
- Linux-based system (Ubuntu/Debian recommended)
- Docker and Docker Compose (automatically installed on Ubuntu/Debian if not present)
### Local Production Setup
For a production-ready local environment:
```bash
./scripts/setup_local.sh
```
### Development Setup
For development with PostgreSQL and Redis Docker containers:
```bash
./scripts/setup_dev.sh
```
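Once the dev script finishes, you can sanity-check that the PostgreSQL and Redis containers came up with standard Docker commands. The container names depend on the generated compose file, so treat the log example as illustrative:

```bash
# List running containers; Postgres and Redis should appear
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'

# Tail logs once you know a container's name (name is illustrative)
docker logs -f <postgres-container-name>
```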
### What Happens During Setup
Both setup scripts will:
- Create necessary environment files
- Install Docker if needed (on Ubuntu/Debian)
- Set up all required services
- Install the CLI tool (optional)
- Guide you through the entire process
### After Installation
You'll have access to:
- **Spark API**: Running at [http://localhost:8883](http://localhost:8883)
- **PostgreSQL Database**: Available at `localhost:15432` (see the connection sketch after this list)
- **Worker Service**: Running and ready to process tasks
- **CLI Tool**: Installed (if chosen during setup)
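If you want to poke at the database directly, a `psql` one-liner works. The user and database names below are placeholders, since the actual credentials are written to your environment files during setup:

```bash
# Connect to the bundled PostgreSQL instance.
# User/database names are hypothetical; check your generated .env for the real values.
psql -h localhost -p 15432 -U automagik_user -d automagik_spark
```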
### Verifying Your Installation
The setup automatically verifies all services, but you can also check manually:
```bash
# Access API documentation
xdg-open http://localhost:8883/api/v1/docs   # Interactive Swagger UI (use `open` on macOS)
xdg-open http://localhost:8883/api/v1/redoc  # ReDoc documentation
# List flows (requires CLI installation)
source .venv/bin/activate
automagik-spark flow list
```
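On a headless server where a browser isn't available, `curl` against the same documentation URLs confirms the API is answering:

```bash
# Expect HTTP 200 from both documentation endpoints
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8883/api/v1/docs
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8883/api/v1/redoc
```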
## 🧩 System Components
- **API Server**: Handles all HTTP requests and core logic
- **Worker**: Processes tasks and schedules
- **Database**: PostgreSQL with all required tables automatically created
- **LangFlow** (optional): Visual flow editor for creating AI workflows
- **CLI Tool** (optional): Command-line interface for managing flows and tasks
## 🏗️ System Architecture
```mermaid
flowchart LR
    subgraph Services
        DB[PostgreSQL]
        LF1[LangFlow Instance 1]
        LF2[LangFlow Instance 2]
    end
    subgraph Spark
        CLI[CLI]
        API[API Server]
        CW[Celery Worker]
        W[Worker]
    end
    API -- uses --> DB
    API -- triggers --> CW
    W -- processes --> API
    API -- integrates with --> LF1
    API -- integrates with --> LF2
    CLI -- controls --> API
    API -- has UI --> UI[Automagik UI]
```
### Core Components Explained
- **API**: Core service handling requests and business logic
- **Worker**: Processes tasks and schedules
- **CLI**: Command-line tool for managing flows and tasks
- **PostgreSQL**: Stores flows, tasks, schedules, and other data
- **LangFlow**: Optional service for creating and editing flows
## 📚 API Documentation
For complete API documentation, visit:
- **Swagger UI**: [http://localhost:8883/api/v1/docs](http://localhost:8883/api/v1/docs)
- **ReDoc**: [http://localhost:8883/api/v1/redoc](http://localhost:8883/api/v1/redoc)
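Since Swagger UI is served, a machine-readable OpenAPI schema is usually exposed as well; the exact path below follows common Swagger/FastAPI conventions and is an assumption, not documented behavior:

```bash
# Fetch the OpenAPI schema (path is an assumption based on typical Swagger setups)
curl -s http://localhost:8883/api/v1/openapi.json | head -c 300
```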
## 🛠️ Next Steps
1. If you installed LangFlow, visit [http://localhost:17860](http://localhost:17860) to create your first flow
2. Use the API at [http://localhost:8883/api/v1/docs](http://localhost:8883/api/v1/docs) to manage your flows and tasks
3. Try out the CLI commands with `automagik-spark --help` (a short example session follows this list)
4. Monitor task execution through logs and API endpoints
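A minimal first CLI session might look like this, using only commands that appear elsewhere in this README:

```bash
source .venv/bin/activate         # activate the environment created during setup
automagik-spark --help            # discover available commands
automagik-spark flow list         # list flows synced from LangFlow
automagik-spark telemetry status  # confirm your telemetry settings
```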
## 📊 Telemetry
Spark collects anonymous usage analytics to help improve the project. This data helps us understand which features are most useful and prioritize development efforts.
### What We Collect
- Command usage and performance metrics
- API endpoint usage patterns
- Workflow execution statistics
- System information (OS, Python version)
- Error rates and types
### What We DON'T Collect
- Personal information or credentials
- Actual workflow data or content
- File paths or environment variables
- Database connection strings or API keys
### How to Disable Telemetry
**Environment Variable:**
```bash
export AUTOMAGIK_SPARK_DISABLE_TELEMETRY=true
```
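To make the environment-variable opt-out permanent, append it to your shell profile (the file depends on your shell; `~/.bashrc` is shown for bash):

```bash
# Persist the opt-out for future bash sessions
echo 'export AUTOMAGIK_SPARK_DISABLE_TELEMETRY=true' >> ~/.bashrc
```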
**CLI Commands:**
```bash
# Disable permanently
automagik-spark telemetry disable
# Check status
automagik-spark telemetry status
# See what data is collected
automagik-spark telemetry info
# Use --no-telemetry flag for single session
automagik-spark --no-telemetry <command>
```
**Opt-out File:**
```bash
touch ~/.automagik-no-telemetry
```
Telemetry is automatically disabled in CI/testing environments.
## 🗺️ Roadmap
Spark's future development focuses on:
- TBA
---
<p align="center">
<b>Spark: Bringing AI Automation to Life</b>
</p>