Loki Community Call LIVE from GrafanaCON 2026

Grafana YouTube

Loki's Thor architecture represents the most significant rearchitecture since the project launched, and it's worth understanding what's actually changing under the hood. If you're running Loki at any meaningful scale, these changes will fundamentally alter your capacity planning and query performance characteristics.

The shift to columnar storage is the headline feature, and it addresses Loki's biggest operational pain point: query performance on large time ranges. The original chunk format stored log lines as compressed blobs optimized for append operations, which meant queries had to decompress entire chunks even when filtering on specific fields. The new columnar format stores structured log data in columns, letting the query engine skip irrelevant data at the storage layer. In practice, this means queries filtering on high-cardinality fields like trace IDs or user IDs should see order-of-magnitude improvements when scanning days or weeks of logs.
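The intuition behind that speedup can be shown with a toy sketch. This is not Loki's actual chunk or columnar format, just the core idea: in a row-oriented chunk, a filter has to touch every full record, while a columnar layout lets the filter read only the one field it cares about and then fetch matching lines by position.

```python
# Toy illustration of why columnar layouts speed up field filters.
# NOT Loki's real storage format -- field names and layout are invented.

logs = [
    {"ts": i, "trace_id": f"t{i % 5}", "line": f"msg {i}"}
    for i in range(10)
]

# Chunk-style: rows stored together; filtering touches every whole record.
def scan_rows(rows, trace_id):
    return [r["line"] for r in rows if r["trace_id"] == trace_id]

# Columnar: each field stored contiguously; the filter reads only the
# trace_id column, then fetches the matching positions from "line".
columns = {k: [r[k] for r in logs] for k in ("ts", "trace_id", "line")}

def scan_columns(cols, trace_id):
    hits = [i for i, t in enumerate(cols["trace_id"]) if t == trace_id]
    return [cols["line"][i] for i in hits]

assert scan_rows(logs, "t3") == scan_columns(columns, "t3")
```

In a real columnar store the win is larger still, because per-column statistics (min/max values, dictionaries) let the engine skip whole column pages without decompressing them at all.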

The Kafka-based ingestion pipeline is equally significant for anyone dealing with bursty write patterns. Current Loki deployments buffer writes in-memory at the distributor and ingester layers, which works until you hit a traffic spike that overwhelms ingester memory or causes distributor restarts to drop data. Moving to Kafka as the ingestion buffer gives you durable queuing with backpressure handling. You can now absorb write spikes without immediately scaling ingesters, and you get replay capability if downstream components fail. The tradeoff is operational complexity: you're now running Kafka in addition to your Loki cluster, which means another distributed system to monitor and tune. For teams already running Kafka for other workloads, this is probably a net win. For smaller deployments, the added operational burden might not justify the benefits.
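The durable-queue-with-replay property is what does the heavy lifting here. A minimal sketch of the consumption model, with an in-memory list standing in for a Kafka partition (no real Kafka client, and none of Loki's actual ingestion code):

```python
class DurableBuffer:
    """Toy stand-in for a Kafka-style partition: records are retained
    in order, and consumers track their own read offset."""

    def __init__(self):
        self.log = []  # append-only record log

    def produce(self, record):
        self.log.append(record)
        return len(self.log) - 1  # offset of the new record

    def consume(self, offset, max_records=100):
        batch = self.log[offset:offset + max_records]
        return batch, offset + len(batch)

buf = DurableBuffer()
for line in ["a", "b", "c"]:
    buf.produce(line)

batch, next_off = buf.consume(0, max_records=2)  # -> ["a", "b"], 2
# If the downstream "ingester" crashes before committing next_off, it
# simply re-reads from its last committed offset -- replay for free.
batch2, _ = buf.consume(next_off)                # -> ["c"], 3
```

Backpressure falls out of the same shape: producers can keep appending during a spike while consumers drain at their own pace, instead of the spike landing directly on ingester memory.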

The redesigned query engine appears focused on high-cardinality support, which has been Loki's Achilles heel compared to systems like ClickHouse or Elasticsearch. Loki's label-based indexing model breaks down when you have fields with millions of unique values. The new engine seems to lean more heavily on the columnar format's ability to filter at the storage layer rather than relying purely on label indexes. This should make queries like "find all logs where user_id equals X" viable without creating a label for user_id, which would explode your index size.
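The index-explosion problem is easy to see with numbers. A toy comparison, using illustrative figures rather than Loki internals: every unique label combination becomes its own indexed stream, so indexing a field like user_id multiplies streams by its cardinality, whereas a storage-layer filter keeps the field out of the index entirely.

```python
# Toy comparison: label-indexing a high-cardinality field vs. filtering
# it at the storage layer. Record shape and counts are invented.

records = [{"app": "checkout", "user_id": f"u{i}", "line": f"event {i}"}
           for i in range(100_000)]

# Label-based: one indexed stream per unique label combination, so
# promoting user_id to a label creates ~one stream per user.
streams_with_user_label = {(r["app"], r["user_id"]) for r in records}
streams_without = {(r["app"],) for r in records}

print(len(streams_with_user_label))  # 100000 -- index explodes
print(len(streams_without))          # 1      -- index stays tiny

# Storage-layer filtering: leave user_id unindexed and push the
# predicate down into the (columnar) data scan instead.
matches = [r["line"] for r in records if r["user_id"] == "u42"]
```

The scan-side filter costs I/O proportional to the data touched rather than index size, which is exactly where the columnar format's skipping is supposed to pay off.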

What's less clear from the announcement is backward compatibility. Migrating existing Loki data to the new columnar format will likely require a rewrite process, and running dual storage formats during migration adds complexity. The Kafka integration also raises questions about how this interacts with existing ingestion paths—can you run hybrid mode with some tenants on Kafka and others on direct ingestion, or is this an all-or-nothing switch?

For teams running production Loki, the calculus depends on your pain points. If you're hitting query timeouts on large time ranges or struggling with high-cardinality data, Thor's improvements target exactly those issues. If your current deployment handles your query patterns adequately, the migration cost might outweigh the benefits in the near term. Either way, understanding these architectural shifts now will inform your capacity planning and whether to hold off on major Loki infrastructure investments until Thor stabilizes.