Kafka Tiered Storage: Enhancing Scalability and Cost-Efficiency

Introduction

Apache Kafka’s traditional storage model is effective for high-performance data access, but businesses operating large-scale deployments face considerable hurdles. As data volumes continue to expand exponentially, the need for more sophisticated storage solutions has become increasingly apparent. This blog examines Kafka tiered storage, a critical feature that shipped with Apache Kafka 3.6 in October 2023 and addresses the fundamental challenges of scalability and cost-effectiveness in enterprise Kafka deployments.

TL;DR

Kafka tiered storage is a feature that addresses scalability and cost challenges in large-scale Kafka deployments. It works by:

  1. Storing recent, frequently accessed data on local disks (hot tier)
  2. Moving older, less accessed data to cheaper remote storage, like S3 (cold tier)

Key benefits include:

  • Improved scalability without proportionally increasing broker count
  • Significant cost reduction for long-term data storage

Tiered storage is handy for scenarios requiring long-term data retention, such as financial services, e-commerce, and log analytics. Implementation involves configuring brokers, setting up remote storage, and defining data migration policies between tiers.

Apache Kafka has become the heart of modern data architectures, serving as a distributed event streaming platform for high-throughput, fault-tolerant data pipelines. It plays a crucial role in handling large-scale data processing and real-time analytics across various industries.

Traditional Kafka Data Storage

Traditionally, Kafka stored all data on brokers’ local disks. This approach ensured high performance and low latency for data access, which are critical features of Kafka’s architecture.
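For reference, the traditional model is governed entirely by local-disk settings on each broker. Below is a minimal sketch of such a configuration; the paths and limits are illustrative, not recommendations.

# server.properties - traditional, local-only storage

# Local directories where all log segments live
log.dirs=/var/kafka/data-1,/var/kafka/data-2

# Keep data for 7 days...
log.retention.hours=168
# ...or until a partition reaches roughly 1 GB, whichever limit is hit first
log.retention.bytes=1073741824

# Segments roll at 1 GB; only whole segments are ever deleted
log.segment.bytes=1073741824

Every byte retained under this model occupies broker disk, which is exactly the constraint that tiered storage relaxes.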

Challenges of the Traditional Approach:

As organizations increasingly relied on Kafka for their data needs, several challenges emerged with the traditional storage model:

  1. Scalability Issues: Growing data volumes required continually adding brokers just to increase storage capacity.
  2. Data Retention Trade-offs: Teams faced difficult trade-offs between keeping historical data and staying within practical storage limits.
  3. Cost Inefficiency: Storing all data on high-performance local disks was expensive, especially for infrequently accessed data.
  4. Operational Overhead: Managing large clusters with extensive local storage required significant effort and expertise.

As organizations increasingly rely on Kafka for their data needs, the demand for more efficient and cost-effective storage solutions has grown. This is where Kafka tiered storage comes into play, offering a more flexible, cost-aware approach to data management within Kafka ecosystems.

Kafka Tiered Storage

Kafka tiered storage was first proposed in KIP-405 in December 2018 and, after numerous iterations and enhancements, shipped as an early-access feature in Apache Kafka 3.6 in October 2023. The feature was developed to give Kafka users a more flexible and cost-effective storage solution.

Storage Tiers:

Kafka tiered storage introduces a two-tiered approach to data storage:

  • Hot Tier: This tier uses local disks on the Kafka brokers to store recently and frequently accessed data. It provides the high performance that Kafka is known for.
  • Cold Tier: This tier uses remote object storage (such as Amazon S3, Google Cloud Storage, or Azure Blob Storage) for older, less frequently accessed data.

Use Cases for Hot Tier vs. Cold Tier:

  • Hot Tier: Ideal for recent data that requires high-performance access, such as real-time analytics or current transaction processing.
  • Cold Tier: Suitable for historical data, long-term analytics, compliance requirements, and scenarios where immediate access is not critical. The example below illustrates the difference in practice.
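To make the distinction concrete, here is a sketch using the standard console consumer (TOPIC_NAME is a placeholder): a consumer that only tails new records is served from the hot tier, while one that replays the topic from the beginning also pulls older segments back from the cold tier.

# Tail only recent records - served from the hot tier (local disks)
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic TOPIC_NAME

# Replay the full history - older segments are fetched from the cold tier (e.g., S3)
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic TOPIC_NAME \
  --from-beginning

In both cases the consumer itself is unchanged; the broker decides which tier each fetch is served from.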

Data Lifecycle:

The data lifecycle in Kafka Tiered Storage is as follows:

  1. New data is written to the hot tier, as usual.
  2. Data moves from the hot tier to the cold tier based on configurable policies (e.g., the age of the data or segment size); the example after this list shows how such a policy can be adjusted.
  3. When consumers request data, the system retrieves it from the hot or cold tier, depending on where it is stored.
  4. The brokers maintain metadata about the location of data segments, enabling them to route read requests to the appropriate tier.
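The policy in step 2 can be tuned per topic at any time. As a sketch (TOPIC_NAME is a placeholder), lowering local.retention.ms causes older segments to be served only from the cold tier sooner:

# Keep only the most recent hour of data on the hot tier for this topic
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name TOPIC_NAME \
  --alter --add-config local.retention.ms=3600000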

Implementing Kafka Tiered Storage

Let’s see some code snippets to understand how to configure and use Kafka tiered storage.

Configuring Tiered Storage

To enable tiered storage, you modify the broker configuration. Apache Kafka supplies the tiering framework, while the connection to a specific object store comes from a RemoteStorageManager plugin. Here’s an example of how to configure tiered storage using Amazon S3 as the cold tier; the plugin class names below follow Aiven’s open-source tiered-storage plugin and may need adjusting for your setup.

# Enable tiered storage on the broker
remote.log.storage.system.enable=true

# RemoteStorageManager implementation that talks to S3
# (the class names and rsm.config.* keys below follow Aiven's open-source
# tiered-storage plugin; adjust them for the plugin you deploy)
remote.log.storage.manager.class.name=io.aiven.kafka.tieredstorage.RemoteStorageManager
remote.log.storage.manager.class.path=/opt/kafka/plugins/tiered-storage/*

# Remote log metadata manager (topic-based implementation shipped with Kafka)
# and the listener its internal clients use to reach the brokers
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.listener.name=PLAINTEXT

# S3 configuration, passed through to the plugin via the rsm.config. prefix
rsm.config.storage.backend.class=io.aiven.kafka.tieredstorage.storage.s3.S3Storage
rsm.config.storage.s3.bucket.name=YOUR_BUCKET_NAME
rsm.config.storage.s3.region=YOUR_BUCKET_REGION
# Authentication: with no explicit keys set, the AWS default credential chain
# (for example, an IAM role attached to the broker instances) is used

# Broker-wide defaults for how much data stays on the hot tier
# 100 GB per partition
log.local.retention.bytes=107374182400
# 24 hours
log.local.retention.ms=86400000

 

In this configuration:

  • We enable tiered storage on the broker and plug in a RemoteStorageManager implementation that uses S3 as the remote backend.
  • We point the plugin at the S3 bucket name and region.
  • We rely on the AWS default credential chain (for example, an IAM role attached to the broker instances) for authentication; explicit access keys could be configured instead.
  • We set broker-wide defaults that keep at most 24 hours, or 100 GB per partition, of data on the hot tier; older data is served only from the cold tier, and individual topics can override these defaults.

Creating a Topic with Tiered Storage

When creating a new topic, you can enable tiered storage and set specific retention policies.

kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --topic TOPIC_NAME \
  --partitions 3 \
  --replication-factor 3 \
  --config remote.storage.enable=true \
  --config retention.ms=2592000000 \
  --config local.retention.ms=86400000

 

This command creates a topic with tiered storage enabled, setting a total retention of 30 days (retention.ms=2592000000) while keeping only the most recent 24 hours (local.retention.ms=86400000) on the hot tier.
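You can confirm that the settings took effect by describing the topic’s configuration (TOPIC_NAME is again a placeholder); the output should list remote.storage.enable=true along with the retention overrides.

# Show the non-default configs for the topic, including the tiered storage settings
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name TOPIC_NAME \
  --describe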

Considerations and Best Practices

When implementing tiered storage, keep these things in mind:

  1. Topic Configuration: Always explicitly configure tiered storage settings for each topic rather than relying solely on broker-wide defaults.
  2. Latency-Aware Consumers: Design your consumers to tolerate the higher latency of reads served from the cold tier (see the sketch after this list).
  3. Testing: Thoroughly test your tiered storage setup, including failover scenarios and recovery processes.
  4. Security: Secure your remote storage with appropriate access controls and with encryption both at rest and in transit.
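As a rough sketch of point 2, a few standard consumer settings can give cold-tier reads more headroom; the values below are illustrative starting points rather than recommendations.

# consumer.properties - give remote (cold tier) reads more headroom

# Allow individual requests more time before the client gives up and retries
request.timeout.ms=60000

# Let the broker wait a little longer to fill a fetch response
fetch.max.wait.ms=1000

# Avoid triggering rebalances when processing a large replayed batch runs long
max.poll.interval.ms=600000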

Conclusion

Kafka tiered storage represents a significant evolution in the Kafka ecosystem, addressing key challenges of scalability, cost-efficiency, and data retention. By leveraging both local and remote storage, organizations can build more flexible, scalable, and cost-effective event-streaming architectures. As data volumes continue to grow exponentially, tiered storage is poised to become an essential feature for many Kafka deployments, enabling businesses to extract more value from their data while optimizing their infrastructure costs.

Nanthakumaran S
