aklivity/zilla

🦎 A multi-protocol edge & service proxy. Seamlessly interface web apps, IoT clients, & microservices to Apache Kafka® via declaratively defined, stateless APIs.

Latest Release · Slack Community · Artifact HUB

Docs · Quickstart · Examples · Demos · Blog

🦎 Zilla is a stateless, multi-protocol proxy that bridges the gap between event-driven architectures and modern application protocols. It lets web apps, IoT devices, and microservices speak directly to Apache Kafka® — using HTTP, SSE, gRPC, MQTT, or WebSocket — without writing any custom integration code.

Think of Zilla as the protocol translation layer for your event-driven stack: declaratively configured via YAML, deployable anywhere, and capable of replacing custom connectors, MQTT brokers, and ad-hoc middleware with a single lightweight binary.

Ready to go? Jump to the Get started section.

Contents

  • Why Zilla?
  • What Can Zilla Do?
  • Get Started in 60 Seconds
  • How It Works
  • Architecture
  • Install
  • Key Features
  • Who Is Zilla For?
  • Zilla Plus (Enterprise)
  • Resources
  • License

Why Zilla?

Modern architectures use Kafka as a backbone for real-time data — but most clients (browsers, mobile apps, IoT devices) don't speak Kafka natively. The traditional answer is a tangle of REST bridges, MQTT brokers, WebSocket servers, and custom glue code.

Zilla eliminates that complexity.

| Without Zilla | With Zilla |
| --- | --- |
| Custom REST-to-Kafka bridge code | Declarative `zilla.yaml` routes |
| Separate MQTT broker + Kafka connector | Native MQTT-Kafka proxying built in |
| Hand-rolled JWT validation per service | JWT continuous authorization at the proxy |
| Schema validation scattered across services | Centralized Apicurio/Karapace enforcement |
| Multiple middleware hops, added latency | Zero-copy, protocol-native proxying |

What Can Zilla Do?

As a Kafka API Gateway

Expose Kafka topics as first-class REST, SSE, gRPC, or MQTT endpoints — without a single line of broker-side code.

| Use Case | Example |
| --- | --- |
| REST CRUD over Kafka topics | http.kafka.crud |
| Real-time fan-out to SSE clients | sse.kafka.fanout.jwt |
| Turn Kafka into an MQTT broker | mqtt.kafka.proxy |
| Async request-reply over Kafka | http.kafka.async |
| gRPC event mesh via Kafka | grpc.kafka.proxy |
| AsyncAPI-driven MQTT gateway | asyncapi.mqtt.kafka.proxy |

As a Service Sidecar

Deploy alongside any service to handle cross-cutting concerns:

  • Authentication — JWT validation with continuous stream authorization for SSE
  • Schema enforcement — validate payloads against OpenAPI / AsyncAPI / Avro / Protobuf schemas
  • TLS termination — offload TLS handling from your services
  • Observability — emit metrics to Prometheus and traces to OpenTelemetry
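
As a sketch of the sidecar idea, a JWT guard can be declared once in zilla.yaml and referenced from a binding's routes. The guard name, issuer, audience, and role below are illustrative placeholders; check the exact option names against the configuration reference:

```yaml
guards:
  jwt_auth:                               # hypothetical guard name
    type: jwt
    options:
      issuer: https://auth.example.com    # placeholder token issuer
      audience: https://api.example.com   # expected "aud" claim
      keys: []                            # JWKS signing keys (omitted in this sketch)

bindings:
  north_http_server:
    type: http
    kind: server
    routes:
      - guarded:
          jwt_auth:                       # only requests with a valid token match
            - read:items                  # required role/scope (illustrative)
        exit: north_http_kafka_mapping
```

Unauthorized requests never reach the Kafka mapping layer, so services behind the proxy see only authenticated, schema-valid traffic.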

Get Started in 60 Seconds

Prerequisites: Docker Compose

git clone https://github.com/aklivity/zilla.git
cd zilla/examples
docker compose --project-directory http.kafka.crud up -d

This starts Zilla, a local Kafka cluster, and a Kafka UI at http://localhost:8080.

Try it — create a Kafka-backed resource over plain HTTP:

# Create an item (produces a Kafka message)
curl -X POST http://localhost:7114/items \
  -H 'Content-Type: application/json' \
  -d '{"name": "widget", "price": 9.99}'

# Fetch all items (consumes from Kafka topic)
curl http://localhost:7114/items

Watch messages appear in real time on the Kafka UI. Then stop with:

docker compose --project-directory http.kafka.crud down

Full Quickstart Guide

How It Works

Zilla is configured entirely in a single zilla.yaml file. You declare named bindings — each one specifying a protocol, a behavior (server / client / proxy), and routing rules. Bindings chain together to form a pipeline.

Here's the full config for the HTTP-to-Kafka CRUD example above:

name: example
bindings:

  north_tcp_server:
    type: tcp
    kind: server
    options:
      host: 0.0.0.0
      port: 7114
    routes:
      - when:
          - port: 7114
        exit: north_http_server

  north_http_server:
    type: http
    kind: server
    routes:
      - when:
          - headers:
              :scheme: http
        exit: north_http_kafka_mapping

  north_http_kafka_mapping:
    type: http-kafka
    kind: proxy
    routes:
      - when:
          - method: POST
            path: /items
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: items-snapshots
          key: ${idempotencyKey}
      - when:
          - method: GET
            path: /items
        exit: north_kafka_cache_client
        with:
          capability: fetch
          topic: items-snapshots
          merge:
            content-type: application/json
      - when:
          - method: GET
            path: /items/{id}
        exit: north_kafka_cache_client
        with:
          capability: fetch
          topic: items-snapshots
          filters:
            - key: ${params.id}
      - when:
          - method: PUT
            path: /items/{id}
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: items-snapshots
          key: ${params.id}
      - when:
          - method: DELETE
            path: /items/{id}
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: items-snapshots
          key: ${params.id}

  north_kafka_cache_client:
    type: kafka
    kind: cache_client
    exit: south_kafka_cache_server

  south_kafka_cache_server:
    type: kafka
    kind: cache_server
    options:
      bootstrap:
        - items-snapshots
    exit: south_kafka_client

  south_kafka_client:
    type: kafka
    kind: client
    options:
      servers:
        - ${{env.KAFKA_BOOTSTRAP_SERVER}}
    exit: south_tcp_client

  south_tcp_client:
    type: tcp
    kind: client

telemetry:
  exporters:
    stdout_logs_exporter:
      type: stdout

See all examples | Configuration reference

Architecture

Zilla is built around a few unconventional design choices that explain its performance characteristics.

  1. No object allocation on the data path. Rather than building on a codec pipeline framework like Netty or Apache MINA — which decode bytes into objects and re-encode them at each stage — Zilla uses code-generated flyweight objects that overlay strongly typed APIs directly onto raw binary data in shared memory. There is no object construction overhead, no GC pressure from the data path, and method call stacks stay short enough for the JVM JIT to inline aggressively.
  2. One engine worker per CPU core, pinned per connection. On startup, Zilla creates one single-threaded engine worker per CPU core. Each incoming TCP connection is dispatched to a worker and stays there for its lifetime. The vast majority of stream processing involves zero cross-core coordination. Where fan-in or fan-out is required (e.g. many clients subscribing to the same Kafka topic), Zilla uses lock-free data structures with ordered memory writes rather than locks.
  3. Streams flow over shared memory, not sockets. Between bindings in a pipeline, data moves as typed stream frames (BEGIN / DATA / END / WINDOW) over shared memory — not through additional network hops or intermediate queues. Flow control and back-pressure are built into the stream model, so a slow consumer can never be overwhelmed by a fast producer, and no buffering layer is needed to mediate between them.
  4. Kafka fan-out via a local cache. Zilla fetches each Kafka topic partition once and stores it as memory-mapped files local to the Zilla node. Any number of clients can be served from that cache without additional round-trips to Kafka. When more Zilla nodes are added horizontally, each hydrates its own cache independently — so horizontal scaling doesn't introduce inter-node coordination overhead.
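
Zilla's generated flyweights are Java, but the core idea in point 1 can be sketched in a few lines of Python: one reusable view decodes fields straight out of a shared buffer, so reading a frame allocates no per-message objects. The frame layout here (a 4-byte payload length plus an 8-byte stream id) is invented purely for illustration:

```python
import struct

class FrameFlyweight:
    """Reusable view over raw frame bytes: wrap() repositions the
    flyweight instead of allocating a new object per frame."""
    HEADER = struct.Struct("<Iq")  # payload length (u32), stream id (i64)

    def __init__(self):
        self.buf = None
        self.offset = 0

    def wrap(self, buf: memoryview, offset: int) -> "FrameFlyweight":
        self.buf, self.offset = buf, offset
        return self

    def length(self) -> int:
        return self.HEADER.unpack_from(self.buf, self.offset)[0]

    def stream_id(self) -> int:
        return self.HEADER.unpack_from(self.buf, self.offset)[1]

    def payload(self) -> memoryview:
        # Zero-copy slice of the underlying buffer, not a new bytes object.
        start = self.offset + self.HEADER.size
        return self.buf[start:start + self.length()]

# Encode a frame into a shared buffer, then decode it with a
# single flyweight instance positioned over the raw bytes.
buf = bytearray(64)
FrameFlyweight.HEADER.pack_into(buf, 0, 5, 42)   # length=5, stream id=42
buf[12:17] = b"hello"

frame = FrameFlyweight().wrap(memoryview(buf), 0)
print(frame.stream_id(), bytes(frame.payload()))  # 42 b'hello'
```

A real engine would keep one such flyweight per worker and re-`wrap` it as it walks frames in the shared-memory ring, which is what keeps the data path free of garbage-collector pressure.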

Deep dive: How Zilla Works

Install

Zilla has no external dependencies. Pick your preferred deployment method:

Docker

docker pull ghcr.io/aklivity/zilla
docker run ghcr.io/aklivity/zilla:latest start -v

Helm (Kubernetes)

helm install zilla oci://ghcr.io/aklivity/charts/zilla \
  --namespace zilla --create-namespace --wait \
  --values values.yaml \
  --set-file zilla\\.yaml=zilla.yaml

Both single-node and clustered deployments are supported.

Key Features

  • Protocol support: HTTP · SSE · gRPC · MQTT · WebSocket · Kafka (native)
  • API specifications: Import OpenAPI and AsyncAPI schemas directly as Zilla config — no translation step required.
  • Schema registries: Integrate with Apicurio or Karapace to validate JSON, Avro, and Protobuf payloads at the proxy layer.
  • Security: JWT-based authentication including continuous stream authorization for long-lived SSE connections.
  • Observability: Native Prometheus metrics and OpenTelemetry tracing exporters.
  • Performance: Stateless architecture with multi-core flow control means near-zero latency overhead. See the benchmark.
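
For observability, the stdout exporter shown in the example config earlier can be swapped for a Prometheus scrape endpoint. This is a rough sketch; the metric and option names are from memory and should be verified against the telemetry reference:

```yaml
telemetry:
  metrics:
    - http.request.size         # metric names illustrative
    - http.response.size
  exporters:
    prometheus_metrics:
      type: prometheus
      options:
        endpoints:
          - scheme: http
            port: 7190          # scrape endpoint served by Zilla
            path: /metrics
```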

Who Is Zilla For?

Platform engineers who want to share Kafka clusters across teams or simplify multi-protocol integration without custom connectors.

Application developers building on real-time data streams without deep Kafka expertise.

API architects who want to drive infrastructure from OpenAPI and AsyncAPI schemas.

Zilla Plus (Enterprise)

The open-source Zilla Community Edition covers most use cases. Zilla Plus adds enterprise capabilities:

  • Virtual Clusters — multi-tenant Kafka cluster isolation
  • Secure Public/Private Access — mTLS, custom Kafka domains, VPC-aware routing
  • IoT Ingest & Control — production-grade MQTT broker over Kafka at scale
  • Enterprise support — SLAs, dedicated engineering access

Compare editions

Resources

  • 📚 Read the docs
  • Learning
  • Blog highlights
  • Community & Support

License

Zilla is made available under the Aklivity Community License. This is an open-source-derived license that gives you the freedom to deploy, modify, and run Zilla as you see fit, as long as you are not turning it into a standalone commercialized “Zilla-as-a-service” offering. Running Zilla in the cloud for your own workloads, production or not, is completely fine.

(🔼 Back to top)
