# FabricFlow
[PyPI](https://pypi.org/project/fabricflow/) | [Downloads](https://pepy.tech/projects/fabricflow)
---
**FabricFlow** is a code-first Python SDK for building, managing, and automating Microsoft Fabric data pipelines, workspaces, and core items. It provides a high-level, object-oriented interface for interacting with the Microsoft Fabric REST API, enabling you to create, execute, and monitor data pipelines programmatically.
---
## Features
- **Pipeline Templates**: Easily create data pipelines from reusable templates (e.g., SQL Server to Lakehouse).
- **Pipeline Execution**: Trigger, monitor, and extract results from pipeline runs.
- **Copy & Lookup Activities**: Build and execute copy and lookup activities with source/sink abstractions.
- **Modular Architecture**: Activities, sources, sinks, and templates live in separate modules for a clean separation of concerns.
- **Workspace & Item Management**: CRUD operations for workspaces and core items.
- **Connection & Capacity Utilities**: Resolve and manage connections and capacities.
- **Logging Utilities**: Simple logging setup for consistent diagnostics.
- **Service Principal Authentication**: Authenticate securely with Microsoft Fabric REST API using Azure Service Principal credentials.
---
## Installation
```sh
pip install fabricflow
```
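FabricFlow requires Python 3.10 or later.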
---
## Sample Usage
Below is a sample workflow that demonstrates how to use FabricFlow to automate workspace creation, pipeline deployment, and data copy operations in Microsoft Fabric.
### 1. Import Required Libraries
```python
from sempy.fabric import FabricRestClient
from fabricflow import create_workspace, create_data_pipeline
from fabricflow.pipeline.activities import Copy, Lookup
from fabricflow.pipeline.sources import SQLServerSource, GoogleBigQuerySource, PostgreSQLSource, FileSystemSource
from fabricflow.pipeline.sinks import LakehouseTableSink, ParquetFileSink, LakehouseFilesSink
from fabricflow.pipeline.sinks.types import FileCopyBehavior
from fabricflow.pipeline.templates import (
DataPipelineTemplates,
COPY_SQL_SERVER_TO_LAKEHOUSE_TABLE,
COPY_SQL_SERVER_TO_LAKEHOUSE_TABLE_FOR_EACH,
COPY_FILES_TO_LAKEHOUSE,
LOOKUP_SQL_SERVER,
LOOKUP_SQL_SERVER_FOR_EACH
)
```
### 2. Initialize Fabric Client
```python
fabric_client = FabricRestClient()
```
> **Note**: If you are using `ServicePrincipalTokenProvider`, please make sure your Service Principal has access to the workspace and connections you are using.
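For non-interactive scenarios, a minimal sketch of service principal authentication is shown below. The import path and constructor arguments of `ServicePrincipalTokenProvider` (tenant ID, client ID, client secret) are assumptions here, so verify them against the package reference.
```python
from fabricflow import ServicePrincipalTokenProvider

# Assumed constructor arguments -- check the exact names in the fabricflow docs.
token_provider = ServicePrincipalTokenProvider(
    tenant_id="your-tenant-id",
    client_id="your-client-id",
    client_secret="your-client-secret",
)

# FabricRestClient can be constructed with a token provider for
# non-interactive authentication (keyword name assumed).
fabric_client = FabricRestClient(token_provider=token_provider)
```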
### 3. Define Workspace and Capacity
```python
capacity_name = "FabricFlow"
workspace_name = "FabricFlow"
```
### 4. Create Workspace (Optional)
You can create a new workspace, or use an existing one by specifying its name.
```python
create_workspace(fabric_client, workspace_name, capacity_name)
```
### 5. Deploy Data Pipeline Templates
The loop below deploys every available template. You can also deploy individual pipelines by selecting specific templates, as shown in the sketch after the loop.
```python
for template in DataPipelineTemplates:
create_data_pipeline(
fabric_client,
template,
workspace_name
)
```
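For example, to deploy a single pipeline rather than the full set, pass one of the imported template constants directly (assuming the constants are accepted anywhere a `DataPipelineTemplates` member is):
```python
# Deploy only the SQL Server -> Lakehouse table template.
create_data_pipeline(
    fabric_client,
    COPY_SQL_SERVER_TO_LAKEHOUSE_TABLE,
    workspace_name
)
```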
### 6. Define Source and Sink Details
```python
SOURCE_CONNECTION_ID = "your-source-connection-id"
SOURCE_DATABASE_NAME = "AdventureWorks2022"
SINK_WORKSPACE_ID = "your-sink-workspace-id"
SINK_LAKEHOUSE_ID = "your-sink-lakehouse-id"
ITEMS_TO_LOAD = [
{
"source_schema_name": "Sales",
"source_table_name": "SalesOrderHeader",
"source_query": "SELECT * FROM [Sales].[SalesOrderHeader]",
"sink_table_name": "SalesOrderHeader",
"sink_schema_name": "dbo",
"sink_table_action": "Overwrite",
"load_type": "Incremental",
"primary_key_columns": ["SalesOrderID"],
"skip": True,
"load_from_timestamp": None,
"load_to_timestamp": None,
},
# Add more items as needed...
]
```
### 7. Copy Data
You can copy data using either a single item per pipeline run (Option 1) or multiple items per pipeline run (Option 2). Choose the option that best fits your requirements.
> **Note**: The examples below use the new `Copy` class. You can also use `CopyManager` for backward compatibility, but `Copy` is recommended for new code.
#### Option 1: Single Item Per Pipeline Run
```python
copy = Copy(
fabric_client,
workspace_name,
COPY_SQL_SERVER_TO_LAKEHOUSE_TABLE
)
source = SQLServerSource(
source_connection_id=SOURCE_CONNECTION_ID,
source_database_name=SOURCE_DATABASE_NAME,
source_query=ITEMS_TO_LOAD[0]["source_query"],
)
sink = LakehouseTableSink(
sink_workspace=SINK_WORKSPACE_ID,
sink_lakehouse=SINK_LAKEHOUSE_ID,
sink_table_name=ITEMS_TO_LOAD[0]["sink_table_name"],
sink_schema_name=ITEMS_TO_LOAD[0]["sink_schema_name"],
sink_table_action=ITEMS_TO_LOAD[0]["sink_table_action"],
)
result = (
copy
.source(source)
.sink(sink)
.execute()
)
```
#### Option 2: Multiple Items Per Pipeline Run
```python
copy = Copy(
fabric_client,
workspace_name,
COPY_SQL_SERVER_TO_LAKEHOUSE_TABLE_FOR_EACH
)
source = SQLServerSource(
source_connection_id=SOURCE_CONNECTION_ID,
source_database_name=SOURCE_DATABASE_NAME,
)
sink = LakehouseTableSink(
sink_workspace=SINK_WORKSPACE_ID,
sink_lakehouse=SINK_LAKEHOUSE_ID,
)
result = (
copy
.source(source)
.sink(sink)
.items(ITEMS_TO_LOAD)
.execute()
)
```
### 8. Lookup Data (New Feature)
FabricFlow now supports lookup operations for data validation and enrichment:
```python
# Single lookup operation
lookup = Lookup(
fabric_client,
workspace_name,
LOOKUP_SQL_SERVER
)
source = SQLServerSource(
source_connection_id=SOURCE_CONNECTION_ID,
source_database_name=SOURCE_DATABASE_NAME,
source_query="SELECT COUNT(*) as record_count FROM [Sales].[SalesOrderHeader]",
)
result = (
lookup
.source(source)
.execute()
)
# Multiple lookup operations
lookup_items = [
{
"source_query": "SELECT COUNT(*) as order_count FROM [Sales].[SalesOrderHeader]",
"first_row_only": True,
},
{
"source_query": "SELECT MAX(OrderDate) as latest_order FROM [Sales].[SalesOrderHeader]",
"first_row_only": True,
}
]
lookup = Lookup(
fabric_client,
workspace_name,
LOOKUP_SQL_SERVER_FOR_EACH
)
result = (
lookup
.source(source)
.items(lookup_items)
.execute()
)
```
### 9. File System to Lakehouse Copy (New Feature)
FabricFlow now supports copying files from file servers directly to the Lakehouse Files area:
```python
copy = Copy(
fabric_client,
workspace_name,
COPY_FILES_TO_LAKEHOUSE
)
# Define file system source with pattern matching and filtering
source = FileSystemSource(
source_connection_id="your-file-server-connection-id",
source_folder_pattern="incoming/data/*", # Wildcard folder pattern
source_file_pattern="*.csv", # File pattern
source_modified_after="2025-01-01T00:00:00Z", # Optional date filter
recursive_search=True, # Recursive directory search
delete_source_after_copy=False, # Keep source files
max_concurrent_connections=10 # Connection limit
)
# Define lakehouse files sink
sink = LakehouseFilesSink(
sink_lakehouse="data-lakehouse",
sink_workspace="analytics-workspace",
sink_directory="processed/files", # Target directory in lakehouse
copy_behavior=FileCopyBehavior.PRESERVE_HIERARCHY, # Maintain folder structure
enable_staging=False, # Direct copy without staging
parallel_copies=4, # Parallel operations
max_concurrent_connections=10 # Connection limit
)
result = (
copy
.source(source)
.sink(sink)
.execute()
)
```
---
## API Overview
Below are the main classes and functions available in FabricFlow:
### Core Pipeline Components
- `DataPipelineExecutor` – Execute data pipelines and monitor their status.
- `DataPipelineError` – Exception class for pipeline errors.
- `PipelineStatus` – Enum for pipeline run statuses.
- `DataPipelineTemplates` – Enum for pipeline templates.
- `get_template` – Retrieve a pipeline template definition.
- `get_base64_str` – Utility for base64 encoding of template files.
- `create_data_pipeline` – Create a new data pipeline from a template.
### Pipeline Activities
- `Copy` – Build and execute copy activities (replaces `CopyManager`).
- `Lookup` – Build and execute lookup activities for data validation.
### Sources and Sinks
- `SQLServerSource` – Define SQL Server as a data source.
- `GoogleBigQuerySource` – Define Google BigQuery as a data source.
- `PostgreSQLSource` – Define PostgreSQL as a data source (see the sketch after this list).
- `FileSystemSource` – Define a file server as a data source for file-based operations.
- `BaseSource` – Base class for all data sources.
- `LakehouseTableSink` – Define a Lakehouse table as a data sink.
- `ParquetFileSink` – Define a Parquet file as a data sink.
- `LakehouseFilesSink` – Define the Lakehouse Files area as a data sink for file operations.
- `BaseSink` – Base class for all data sinks.
- `SinkType` / `SourceType` – Enums for sink and source types.
- `FileCopyBehavior` – Enum for file copy behavior options.
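Sources share a common constructor pattern. As a rough sketch, assuming `PostgreSQLSource` mirrors the `SQLServerSource` parameters shown earlier (an assumption; check the package reference), a PostgreSQL source could look like this:
```python
# Hypothetical parameters, mirroring SQLServerSource -- verify against the package.
pg_source = PostgreSQLSource(
    source_connection_id="your-postgresql-connection-id",
    source_database_name="your-database-name",
    source_query="SELECT * FROM public.sales_order_header",
)
```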
### Workspace and Item Management
- `FabricCoreItemsManager` – Manage core Fabric items via APIs.
- `FabricWorkspacesManager` – Manage Fabric workspaces via APIs.
- `get_workspace_id` – Get a workspace ID or return the current one.
- `create_workspace` – Create a new workspace and assign it to a capacity.
- `FabricItemType` – Enum for Fabric item types.
### Utilities
- `setup_logging` – Configure logging for diagnostics (see the sketch after this list).
- `resolve_connection_id` – Resolve a connection by name or ID.
- `resolve_capacity_id` – Resolve a capacity by name or ID.
- `ServicePrincipalTokenProvider` – Handles Azure Service Principal authentication.
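The sketch below is a rough illustration of how these utilities might be wired together; the exact signatures of `setup_logging`, `resolve_connection_id`, and `resolve_capacity_id` are assumptions, so check the package reference before relying on them.
```python
from fabricflow import setup_logging, resolve_connection_id, resolve_capacity_id

# Assumed: configures package logging with sensible defaults.
setup_logging()

# Assumed signatures: (client, name_or_id) -> resolved ID.
connection_id = resolve_connection_id(fabric_client, "my-sql-server-connection")
capacity_id = resolve_capacity_id(fabric_client, "FabricFlow")
```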
---
## Activities, Sources, and Sinks
FabricFlow provides a modular architecture with separate packages for activities, sources, sinks, and templates:
- **Activities**: `Copy`, `Lookup` - Build and execute pipeline activities
- **Sources**: `SQLServerSource`, `GoogleBigQuerySource`, `PostgreSQLSource`, `FileSystemSource`, `BaseSource`, `SourceType` - Define data sources
- **Sinks**: `LakehouseTableSink`, `ParquetFileSink`, `LakehouseFilesSink`, `BaseSink`, `SinkType`, `FileCopyBehavior` - Define data destinations
- **Templates**: Pre-built pipeline definitions for common patterns
### Backward Compatibility
- **CopyManager → Copy**: The `CopyManager` class has been renamed to `Copy` for consistency. Existing code using `CopyManager` continues to work through a backward-compatible alias, but new code should use `Copy`, as sketched below.
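A minimal sketch of the alias in practice (the import path for `CopyManager` is an assumption; it may also be exported elsewhere in the package):
```python
from fabricflow.pipeline.activities import Copy, CopyManager  # CopyManager import path assumed

# Legacy code keeps working through the alias...
legacy_copy = CopyManager(fabric_client, workspace_name, COPY_SQL_SERVER_TO_LAKEHOUSE_TABLE)

# ...while new code should construct Copy directly.
new_copy = Copy(fabric_client, workspace_name, COPY_SQL_SERVER_TO_LAKEHOUSE_TABLE)
```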
---
## Development
Read the [Contributing](CONTRIBUTING.md) file.
## License
[MIT License](LICENSE)
---
## Author
Parth Lad
[LinkedIn](https://www.linkedin.com/in/ladparth/) | [Website](https://thenavigatedata.com/)
## Acknowledgements
- [Microsoft Fabric REST API](https://learn.microsoft.com/en-us/rest/api/fabric/)
- [Sempy](https://pypi.org/project/sempy/)