# CitizenK
**CitizenK** is a simple but powerful Python Library for developing reactive async Kafka microservices, built on top of [Confluent Kafka Python](https://docs.confluent.io/platform/current/clients/confluent-kafka-python/html/index.html), [FastAPI](https://fastapi.tiangolo.com/) and [Pydantic](https://docs.pydantic.dev/).
**CitizenK Replicator** is an additional tool, built with the same technology, that simplifies data transfer between production and staging environments. It is not a substitute for Confluent's Replicator, which is a much more robust tool for replicating data between multiple production environments.
------------------------------------------------------------------------
## How we got here...
We exclusively use Python for service development. Our work involves crafting web services, creating ETL code, and engaging in data science, all within the Python ecosystem. A few years back, as we embarked on the Lanternn project, our quest led us to seek a Python library that could facilitate the construction of distributed, scalable processing pipelines built on top of Kafka. These pipelines needed to accommodate both stateless and stateful microservices seamlessly. After extensive exploration, we found that the most suitable solution at the time was Faust.
Faust is a stream processing library that borrows concepts from Kafka Streams and brings them into the Python realm. Beyond this, Faust boasts an impressive array of features, including a robust web server, comprehensive schema validation and management, all built upon an agents/actors architecture. With such a compelling set of attributes, it was impossible for us to resist its allure.
We went on to develop numerous services utilizing Faust, and by and large, we were quite content with the results. However, as time progressed, we came to realize that Kafka Streams was not the ideal fit for our needs. Its complexity made it challenging to manage, and we found simpler alternatives for state management, such as Redis, to be more suitable. Additionally, concerns began to emerge about the long-term viability of Faust, particularly in the absence of its creator, Ask Solem. Moreover, the underlying Kafka libraries it relied upon, aiokafka and kafka-python, lacked the robust community support necessary to address the stability issues we encountered.
Concurrently, we observed that frameworks like FastAPI and Confluent Kafka, which we were already using, enjoyed strong backing from vibrant and sizable communities. This realization led us to explore the possibility of combining these frameworks to establish a new foundation for our pipelines, one that would offer greater stability and long-term viability, and would be easy to migrate to from Faust.
The choice of the name "CitizenK" embodies our belief that Python should occupy a prominent position within the Kafka ecosystem. It also draws inspiration from Kafka's renowned novel, "The Trial," which chronicles the plight of Josef K., a man ensnared and prosecuted by an enigmatic, distant authority. The nature of his alleged transgression remains shrouded in mystery, a narrative that resonates with our journey in the world of Kafka.
## Existing tools
- Faust
- Fastkafka
------------------------------------------------------------------------
## Tutorial
You can see an example of how to use CitizenK in the demo app.
### Creating a CitizenK app
First, we create a CitizenK app, similar to how we create a FastAPI app, but with additional arguments:
- kafka_config: configuration for connecting to and configuring the Kafka client
- app_name: mainly used as the consumer group name
- app_type: SINK (consumer only), SOURCE (producer only) or TRANSFORM (producer-consumer)
- auto_generate_apis: auto-generate FastAPI endpoints for the app's topics and agents (see Auto endpoints below)
- agents_in_thread: run the consumer agents in a dedicated thread rather than in the web server's async loop
- consumer_group_init_offset: where to start consuming when the consumer group is first created
- consumer_group_auto_commit: if True, commit offsets right after consuming; if False, commit only after processing completed successfully in all agents
- exit_on_agent_exception: exit the service if an agent raises an exception
``` python
app = CitizenK(
    kafka_config=config.source_kafka,
    app_name="citizenk",
    app_type=AppType.TRANSFORM,
    debug=True,
    title="CitizenK Demo App",
    auto_generate_apis=True,
    agents_in_thread=config.AGENTS_IN_THREAD,
    api_router_prefix=prefix,
    api_port=config.API_PORT,
    schema_registry_url=config.KAFKA_SCHEMA_REGISTRY,
    version=config.VERSION,
    consumer_group_init_offset="latest",
    consumer_group_auto_commit=True,
    consumer_extra_config=config.KAFKA_CONSUMER_EXTRA_CONFIG,
    producer_extra_config=config.KAFKA_PRODUCER_EXTRA_CONFIG,
    exit_on_agent_exception=True,
    openapi_url=prefix + "/openapi.json",
    docs_url=prefix + "/docs",
    license_info={
        "name": "Apache 2.0",
        "url": "https://www.apache.org/licenses/LICENSE-2.0.html",
    },
)
```
### Creating CitizenK topics
Next, we create topics for the app and define their models using Pydantic.
Topics can be either INPUT, OUTPUT or BIDIR:
``` python
class Video(JSONSchema):
    camera_id: int
    path: str
    timestamp: datetime


class ProcessedVideo(JSONSchema):
    camera_id: int
    path: str
    timestamp: datetime
    valid: bool


t1 = app.topic(name="B", value_type=Video, topic_dir=TopicDir.BIDIR)
t2 = app.topic(name="C", value_type=ProcessedVideo, topic_dir=TopicDir.BIDIR)
t3 = app.topic(name="D", value_type=ProcessedVideo, topic_dir=TopicDir.OUTPUT)
```
Schemas can also be AVRO:
``` python
class AvroProcessedVideo(AvroBase):
    camera_id: int
    path: str
    timestamp: datetime
    valid: bool


t4 = app.topic(
    name="E",
    value_type=AvroProcessedVideo,
    topic_dir=TopicDir.BIDIR,
    schema_type=SchemaType.AVRO,
)
```
In case the schema is unknown or not managed, Pydantic offers an option to allow extra, unmanaged fields:
``` python
class AnythingModel(BaseModel):
    class Config:
        extra = Extra.allow
```
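Such a permissive model can then be attached to a topic like any other. A minimal sketch, assuming the same app and TopicDir values as above (the topic name "F" is hypothetical):
``` python
# Hypothetical topic whose values are not validated against a managed schema;
# extra fields are kept thanks to AnythingModel's "extra = allow" config.
t5 = app.topic(name="F", value_type=AnythingModel, topic_dir=TopicDir.INPUT)
```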
### Creating CitizenK agents
And lastly, we create agents that process the Kafka messages.
Agents can listen to multiple topics and accept either the values or the entire Kafka event (key, value, offset, partition, timestamp...). Agents can also accept a self argument to get a reference to the Agent object.
In non-auto-commit apps, offsets are committed only after all agents have processed the event successfully.
- topics: one or more topics to process
- batch_size: the desired batch size. Default = 1
- batch_timeout: how long to wait for a batch to arrive. Default = 10 seconds
``` python
@app.agent(topics=t1, batch_size=100)
async def process_videos_t1(events: list[KafkaEvent]):
    # Process incoming video
    for event in events:
        camera_id = event.value.camera_id
        video_counts[camera_id] += 1
        v = ProcessedVideo(
            camera_id=camera_id,
            path=event.value.path,
            timestamp=event.value.timestamp,
            valid=bool(camera_id % 2),
        )
        t2.send(value=v, key=str(v.camera_id))


@app.agent(topics=t2, batch_size=100)
async def process_videos_t2(values: list[BaseModel]):
    # Process incoming video
    for value in values:
        if value.valid:
            t3.send(value=value, key=str(value.camera_id))
```
### Auto endpoints
To help debug and evaluate the service, CitizenK automatically creates web endpoints that help you send messages to topics and agents.
- info: get service info
- topics: send events to topics
- agents: send events directly to agents, bypassing topics
- stats: get Kafka stats for producer and consumer
![CitizenK Demo API](docs/citizenk_demo_api.jpg)
### Creating additional CitizenK endpoints
Just like in any other FastAPI app, you can create GET, POST and PUT endpoints that either interact with Kafka or perform other, non-Kafka-related tasks:
``` python
@router.post("/events", response_class=JSONResponse)
async def produce_video_events(
values: list[Video],
topic: str = Query(),
):
"""Sends events to the given topic"""
if topic not in app.topics:
raise HTTPException(status_code=400, detail="Topic not supported by app")
t = app.topics[topic]
for v in values:
t.send(value=v, key=str(v.camera_id))
return {"status": "ok"}
@router.get("/topics", response_class=JSONResponse)
async def get_source_topics():
"""Returns the list of topics from the source kafka"""
admin = KafkaAdapter(config.source_kafka)
topics = sorted(list(admin.get_all_broker_topics().keys()))
return {"topics": topics}
```
### Multiple workers behind a load balancer
CitizenK includes two special decorators for scenarios where the service has multiple workers behind a load balancer and the web request needs to reach a specific worker that holds a partition.
- topic_router: forwards the request based on the topic and key (JSON / HTML)
- broadcast_router: aggregates the responses from all workers into a single JSON
Both routers support the GET, POST, PUT and DELETE methods.
``` python
@router.get("/topic_test", response_class=JSONResponse)
@app.topic_router(topic=t1, match_info="camera_id")
async def test_topic_router(request: Request, camera_id: int):
"""Returns the list of groups from the target kafka"""
return {"key": camera_id, "count": video_counts[camera_id]}
@router.get("/broadcast_test", response_class=JSONResponse)
@app.broadcast_router()
async def test_broadcast_router(request: Request):
"""Returns the list of groups from the target kafka"""
return video_counts
```
### Websocket
CitizenK also supports WebSocket agents:
``` python
@app.agent(topics=t2, batch_size=100, websocket_route=prefix + "/ws")
async def websocket_agent(values: list[BaseModel]) -> str:
    values = [json.loads(v.model_dump_json()) for v in values if not v.valid]
    return json.dumps(values, indent=4)
```
This agent exposes a WebSocket endpoint that one or more clients can connect to. It processes incoming Kafka messages from topic t2 and sends the returned string to all live WebSocket connections on the "/ws" route. The main use case is bridging between Kafka and WebSocket, for example to push filtered Kafka events to a web or mobile app.
The other direction (frontend --> Kafka) is probably easier to implement with a normal REST POST endpoint and is not supported yet.
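For reference, a client can consume this stream with any standard WebSocket library. A minimal sketch, assuming the third-party `websockets` package and that the service exposes the route at `ws://localhost:8000/demo/ws` (the actual host, port and prefix depend on your deployment):
``` python
import asyncio

import websockets  # third-party client library, not part of CitizenK


async def listen(url: str = "ws://localhost:8000/demo/ws"):
    # Connect to the agent's WebSocket route and print everything it broadcasts
    async with websockets.connect(url) as ws:
        while True:
            print(await ws.recv())


if __name__ == "__main__":
    asyncio.run(listen())
```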
### Event handler tasks and repeat / cron tasks
Just like FastAPI's on_event("startup") / on_event("shutdown"), CitizenK includes event handlers for running tasks at certain points:
- on_citizenk_event("startup")
- on_citizenk_event("shutdown")
- on_citizenk_event("agent_thread_startup")
agent_thread_startup can be used to run tasks in the agents thread, while the normal startup runs them in the web thread.
``` python
@app.on_citizenk_event("agent_thread_startup")
async def startup_debug():
    logger.debug("Demo App starting")
```
CitizenK also includes repeatable tasks:
``` python
def repeat_every(
    *,
    seconds: float,
    wait_first: bool = False,
    logger: logging.Logger | None = None,
    raise_exceptions: bool = False,
    max_repetitions: int | None = None,
) -> Callable:


def repeat_at(
    *,
    cron: str,
    logger: logging.Logger = None,
    raise_exceptions: bool = False,
    max_repetitions: int = None,
) -> Callable:
```
These are normally combined with the event handlers like this:
``` python
@app.on_citizenk_event("agent_thread_startup")
@repeat_every(seconds=5)
async def agent_thread_debug():
    logger.debug("In agent thread... thread=%s", threading.get_ident())
```
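`repeat_at` works the same way but takes a cron expression. A hedged sketch, assuming standard five-field cron syntax (the schedule below is arbitrary):
``` python
@app.on_citizenk_event("agent_thread_startup")
@repeat_at(cron="*/5 * * * *")  # hypothetical schedule: every 5 minutes
async def agent_thread_cron_debug():
    logger.debug("Cron task running... thread=%s", threading.get_ident())
```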
## Things to be aware of...
CitizenK is a single-threaded async app, i.e. if a coroutine spends too much time processing without awaiting IO, it blocks other coroutines from running. Specifically, when using a load balancer with health checks, make sure the time between health checks is longer than the longest-running agent. This can be avoided by running the agents in their own thread with agents_in_thread.
To help tune the service, CitizenK includes the concept of batch size, i.e. how many events to consume and process in each batch across all agents.
Additionally, like any other Kafka service, it's important to tune several Kafka [consumer](https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#fetch-max-bytes) and [producer](https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html) configs, and specifically to ensure rebalancing is not triggered unintentionally. The full list of [all Kafka configs](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md) is also useful; a sketch of how such settings map onto CitizenK's extra-config arguments follows the lists below.
Consumer:
- fetch.max.bytes (50 MB): the maximum amount of data the server should return for a fetch request. Reduce if processing each record takes significant time.
- max.poll.records (500): the maximum number of records returned in a single call to poll().
- max.poll.interval.ms (5 min): the maximum delay between invocations of poll() when using consumer group management.
Group rebalancing in stateful services:
- Prefer static membership and increase session.timeout.ms to 2-5 minutes (roughly how long it takes a new service instance to come up)
- partition.assignment.strategy: range
Producer:
- linger.ms (0): important to increase (e.g. to 5/10/50/200) under moderate/high load
- batch.size (16 KB): increase if sending large buffers to Kafka
Both:
- compression.type (none): gzip, snappy, or lz4
More explanation here: [Solving My Weird Kafka Rebalancing Problems & Explaining What Is Happening and Why?](https://medium.com/bakdata/solving-my-weird-kafka-rebalancing-problems-c05e99535435)
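With CitizenK, these librdkafka settings would typically be passed through the `consumer_extra_config` / `producer_extra_config` arguments shown in the app-creation example above. A hedged sketch, with illustrative values rather than recommendations:
``` python
# Hypothetical librdkafka settings passed straight through to the clients via
# consumer_extra_config= / producer_extra_config= in the CitizenK(...) call.
KAFKA_CONSUMER_EXTRA_CONFIG = {
    "fetch.max.bytes": 10 * 1024 * 1024,        # reduce if each record is slow to process
    "max.poll.interval.ms": 300000,             # 5 minutes
    "session.timeout.ms": 120000,               # 2 minutes, for stateful services
    "partition.assignment.strategy": "range",
}
KAFKA_PRODUCER_EXTRA_CONFIG = {
    "linger.ms": 50,                            # batch writes under moderate/high load
    "batch.size": 64 * 1024,                    # larger batches for larger payloads
    "compression.type": "lz4",
}
```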
## CitizenK vs Faust
| Topic | CitizenK | Faust |
| ------ |------- | ----- |
| Creating an app | app = Citizenk() | app = faust.App() |
| Creating a topic | topic = app.topic() | topic = app.topic() |
| Creating an agent | @app.agent() | @app.agent() |
| Creating a table | not supported | app.Table() |
| Creating a timer | @repeat_every | @app.timer() |
| Creating a task | background_tasks.add_task() | @app.task() |
| Creating a page | @app.get() | @app.page() |
| Routing requests | @app.topic_router | @app.topic_route |
| Broadcast requests | @app.broadcast_router() | Not supported |
| Models | Pydantic | faust.Record |
| Model to dict | model_dump_json() | to_representation() |
| Serializers | JSON, AVRO | JSON, RAW, PICKLE, AVRO |
| Kafka library | confluent | aiokafka |
| Websockets agents | Supported | Not supported |
------------------------------------------------------------------------
# CitizenK Replicator
## Scenarios
### Staging environment
1. I have a staging environment and I want to replicate some production topics to it.
2. At some point I want to produce to the staging topics using a staging service, so I switch off the replication and populate the same staging topic with data produced in staging.
3. When I finish the testing in staging, I want to switch back to production, so that I can save on costs.
4. If the workload is high, I want to replicate most (e.g. 90%) of the messages from production and produce only a little (e.g. 10%) of the data from staging. This way the same topic will contain mixed data, and potentially mixed schemas, from the two environments.
5. When switching between environments (i.e. on a configuration change), I want to move the offset to the latest on the new topic, so that the handover is not too chaotic.
6. I also want to delete the consumer group of the service in staging, so that when it comes back up again, it won't see a lag.
7. Additionally, I sometimes want to migrate data between production and staging due to schema changes or different identities.
![Replicator](docs/replicator.jpg)
### Dev environment + live data
1. When I test a service locally or in a dev environment, possibly with a local Kafka, I want the local Kafka to have real data, so that I can test the service for a long period of time with live data.
2. Theoretically, I can connect the dev service to the staging or production Kafka cluster, however, this presents a stability/security risk to the remote cluster. There is also a risk that the service will join a consumer group and participate accidentally in the remote workload. This approach also prevents parallel testing as there can be a conflict between the consumers.
3. So one solution is to replicate the topics from staging to the local/dev Kafka, possibly with some filtering to reduce the load, so that the local service is not overwhelmed with too much data.
### On-premise Kafka -- cloud Kafka bridge
1. I have a local Kafka and I want to replicate some topics to the remote cloud.
2. You can use this tool for this scenario; however, Confluent Replicator or Kafka MirrorMaker is probably more suitable.
### Dev environment + replayed data
1. When I test a service locally or in a dev environment, possibly with a local Kafka, I want the local Kafka to replay historical/simulated messages from a file.
2. This scenario is a bit different from the previous ones, as there is no Kafka consumer, just a producer, so it is more of a tool than a service.
3. The messages are read from a file with a timestamp (one file per topic) and injected into the right topic with the correct timing, keeping the same gap between now and the initial timestamp.
### Cluster -- Cluster replication
1. You can use this tool for this scenario; however, Confluent Replicator or Kafka MirrorMaker is probably more suitable.
## Existing tools
1. Confluent Replicator: looks like a good tool, but not open source and expensive
2. Kafka MirrorMaker: open source, but doesn't support filtering
3. kcat: a nice tool, but not for these scenarios
## Implementation details
1. Containerised Python
2. Based on the Confluent Kafka API + FastAPI
3. Does not create topics or partitions automatically; it assumes they exist and are configured
4. Deployed as a distributed service
5. Filtering based on JMESPath for JSON messages (see the sketch after this list)
6. Allows two consumer options: with a consumer group, or without a consumer group
7. Code written following DDD principles
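For illustration, this is roughly how such a JMESPath filter behaves, using the expression from the usage example below and the `jmespath` package (a sketch of the semantics, not the replicator's actual code; the exact envelope fields are an assumption):
``` python
import jmespath

# The filter expression is evaluated against the message key plus its decoded
# JSON value; a truthy result means the message is replicated.
expression = jmespath.compile("key == 'hello' && value.msg == 'world'")

message = {"key": "hello", "value": {"msg": "world"}}
print(bool(expression.search(message)))  # True -> replicate this message
```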
## Configuration
- LOG_LEVEL: service log level
- JSON_LOGGING: Use json logging
- API_PREFIX: API prefix
- FILE_DATA_PATH: Location of json files when reading and writing topics from file
- KAFKA_SOURCE_SERVER_URL: Source Bootstrap Servers
- KAFKA_SOURCE_USE_TLS: Enable Source SSL: 0,1
- KAFKA_SOURCE_SASL_MECHANISM: Source SASL mechanism: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512
- KAFKA_SOURCE_SASL_USERNAME: Source SASL username
- KAFKA_SOURCE_SASL_PASSWORD: Source SASL password
- KAFKA_SOURCE_GROUP_NAME: Source group name, or leave empty to consume without a consumer group
- KAFKA_SOURCE_EXTRA_CONFIG_<KAFKA_CONFIG_KEY>: Any valid Kafka consumer config (uppercase, replace . with _; see the sketch after this list)
- KAFKA_TARGET_SERVER_URL: Target Bootstrap Servers
- KAFKA_TARGET_USE_TLS: Enable Target SSL
- KAFKA_TARGET_SASL_MECHANISM: Target SASL mechanism: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512
- KAFKA_TARGET_SASL_USERNAME: Target SASL username
- KAFKA_TARGET_SASL_PASSWORD: Target SASL password
- KAFKA_TARGET_EXTRA_CONFIG_<KAFKA_CONFIG_KEY>: Any valid kafka producer config (uppercase, replace . with _)
- READ_MAPPINGS_EVERY_SECONDS: How often to check for new mappings in the file system
- CACULATE_STATS_EVERY_SECONDS: How often to calculate stats
- DELETE_GROUPS_EVERY_SECONDS: How often to check for new group deletion
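For example, the `KAFKA_SOURCE_EXTRA_CONFIG_*` / `KAFKA_TARGET_EXTRA_CONFIG_*` convention maps environment variables back to librdkafka keys roughly like this (a sketch of the naming convention, not the replicator's actual parsing code):
``` python
import os


def extra_kafka_config(prefix: str = "KAFKA_SOURCE_EXTRA_CONFIG_") -> dict:
    # KAFKA_SOURCE_EXTRA_CONFIG_MAX_POLL_INTERVAL_MS=600000
    # becomes {"max.poll.interval.ms": "600000"}
    return {
        key[len(prefix):].lower().replace("_", "."): value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }
```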
## Current solution limitations
1. Currently only supports JSON schema.
## API
[API Description](docs/replicator_openapi.md)
![Replicator API](docs/replicator_api.jpg)
## User Interface
![Replicator User Interface](docs/replicator_ui.jpg)
The user interface allows you to add a new mapping and edit/delete an existing mapping.
## Usage
Provide a JSON list of topic mappings in this format, either directly, or through templates:
```json
[
    {
        "group": "first",
        "name": "File A to B",
        "source_topic_name": "A",
        "target_topic_name": "{{B}}",
        "source_is_file": true
    },
    {
        "group": "first",
        "name": "Topic B to C",
        "source_topic_name": "{{B}}",
        "target_topic_name": "{{C}}"
    },
    {
        "group": "second",
        "name": "Topic C to D Using filter",
        "source_topic_name": "{{C}}",
        "target_topic_name": "D",
        "valid_jmespath": "key == 'hello' && value.msg == 'world'",
        "enabled": true
    },
    {
        "group": "second",
        "name": "TopicCtoD",
        "topics": [{
            "source": "{{C}}",
            "target": "D"
        }],
        "enabled": true,
        "target_service_consumer_group": "service"
    },
    {
        "group": "second",
        "name": "Topic D to File E",
        "source_topic_name": "D",
        "target_topic_name": "E",
        "target_is_file": true
    }
]
```
- name: A unique mapping name
- group: Groups multiple mappings into the same category
- enabled: Enable / disable the mapping
- source_topic_name: The topic to read from in the source cluster
- target_topic_name: The topic to write to in the target cluster
- valid_jmespath: Filter criteria
- source_is_file: Whether the source is a JSON file
- target_is_file: Whether the target is a JSON file
- topics: An extension that allows one mapping with several source-target pairs
- target_service_consumer_group: The service consumer group to delete when replication is enabled
The final mapping file is defined this way:
```json
{
    "templates": [
        {
            "template": "name",
            "vars": {
                "A": "A",
                "B": "B",
                "C": "C"
            }
        }
    ],
    "enabled": [],
    "disabled": ["TopicCtoD"],
    "mappings": [{
        "group": "third",
        "name": "Disabled Topic U to File V",
        "source_topic_name": "U",
        "target_topic_name": "V",
        "target_is_file": true,
        "enabled": false
    }],
    "comment": "no comment"
}
```
- templates: A list of templates and their corresponding vars used to render them (see the sketch after this list)
- enabled / disabled: Overrides the templates' enabled flags
- mappings: Extra mappings
- comment: A comment that describes the latest change to the mappings
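Conceptually, each template's `{{...}}` placeholders are filled in from its vars to produce concrete mappings, roughly like this (illustrative sketch only):
``` python
# Hypothetical rendering of one template mapping with its vars.
def render_mapping(mapping: dict, vars: dict) -> dict:
    rendered = {}
    for field, value in mapping.items():
        if isinstance(value, str):
            for name, replacement in vars.items():
                value = value.replace("{{" + name + "}}", replacement)
        rendered[field] = value
    return rendered


template = {"name": "Topic B to C", "source_topic_name": "{{B}}", "target_topic_name": "{{C}}"}
print(render_mapping(template, {"A": "A", "B": "B", "C": "C"}))
# {'name': 'Topic B to C', 'source_topic_name': 'B', 'target_topic_name': 'C'}
```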
## Topic Level Mapping
Topic-level mappings allow mapping of keys/values when replicating a topic. This might be useful if, for example, the schemas / enums / keys differ between the environments.
To support this, the replicator accepts value mappings for each topic that it consumes from the source, in the JSON format shown below (a sketch of how they are applied follows the example). There are two mapping formats:
- value.payload.product_id: map the source product_id to a target product_id
- key:partition: map the source key to a target partition (drop the entire message if the partition equals -1000)
```json
{
    "key": {
        "1": 10,
        "2": 12
    },
    "value.payload.product_id": {
        "1001": 1,
        "1002": 12,
        "1003": 14
    },
    "value.payload.user_name": {
        "A": "A name",
        "B": "B name",
        "C": "C name"
    },
    "key:partition": {
        "1": 0,
        "2": 0,
        "3": -1000,
        "4": 1,
        "5": 1,
        "6": 1
    }
}
```
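Applied to an incoming message, the mapping above behaves roughly as follows (a sketch of the semantics, not the replicator's code):
``` python
# Hypothetical application of the topic-level mapping shown above.
mapping = {
    "key": {"1": 10, "2": 12},
    "value.payload.product_id": {"1001": 1, "1002": 12, "1003": 14},
    "key:partition": {"1": 0, "2": 0, "3": -1000},
}


def apply_mapping(key: str, value: dict):
    # Drop the whole message if its key maps to partition -1000
    partition = mapping["key:partition"].get(key)
    if partition == -1000:
        return None
    new_key = mapping["key"].get(key, key)
    product_id = str(value["payload"]["product_id"])
    value["payload"]["product_id"] = mapping["value.payload.product_id"].get(product_id, product_id)
    return new_key, value, partition


print(apply_mapping("1", {"payload": {"product_id": 1001}}))
# (10, {'payload': {'product_id': 1}}, 0)
```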
## Stats
Returns a list of JSON stats for each mapping in the following format:
```json
{
    "time": "2023-05-25 18:20:43.875557",
    "started": "2023-05-25 08:08:35.728313",
    "queue": 180,
    "mappings": [
        {
            "name": "Topic B to C",
            "source_topic_name": "B",
            "target_topic_name": "C",
            "valid_jmespath": null,
            "target_service_consumer_group": null,
            "consumer_group_up": false,
            "assignments": [0, 1, 2],
            "lag": 1258,
            "source_count": 27739,
            "target_count": 27739
        }
    ]
}
```
## Grafana Integration
To view the stats in Grafana, use the Infinity data source with the following settings:
![Replicator Grafana Interface](docs/replicator_grafana.jpg)
## Consumer API
To simplify debugging and to support other use cases, the replicator also includes an endpoint to consume messages from a given topic.
## License
[Apache License v2.0](https://www.apache.org/licenses/LICENSE-2.0)