

Oct 1, 2024 | 7 Minute Read

Monitoring And Optimizing Kafka Consumers For Performance

Hanush Kumar, Marketing Associate


Handling vast amounts of data concurrently can cause system hiccups and service disruptions. Apache Kafka is a powerful distributed data streaming platform that helps businesses handle large datasets efficiently and with low latency.

Kafka achieves low latency through three key techniques:

  1. Batching: Data can be read and written in batches, reducing the number of requests and minimizing latency.
  2. Partitioning: Kafka divides data among different brokers, enabling parallel processing.
  3. Compression: When data is written to a Kafka topic, it's compressed using algorithms like ZSTD, LZ4, Gzip, or Snappy. This reduces read and write bandwidth, allowing for quick delivery and lowering latency. Compression also helps cut storage costs. (A producer configuration sketch covering batching and compression follows this list.)
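To make the batching and compression knobs concrete, here is a minimal sketch using the official Java producer client. The broker address and topic name are placeholders; the values shown are illustrative, not recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CompressedBatchingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Batching: wait up to 10 ms to fill batches of up to 32 KB.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
        // Compression: each batch is compressed before it leaves the producer.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // or "zstd", "gzip", "snappy"

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key-1", "hello")); // placeholder topic
        }
    }
}
```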

Some of the pivotal components of Kafka are:

  • Topics
  • Partitions
  • Consumers
  • Consumer groups

Understanding Kafka’s Components

Kafka Topics

Kafka topics serve as the organizational categories for messages, each distinguished by a unique name across the entire Kafka Cluster. Messages are directed to and retrieved from specific topics, facilitating streamlined data management.

[Image: Kafka topic, producer, and consumer]

Kafka Partitions

Partitioning divides a topic's single log into several logs, each capable of residing on a distinct node within the Kafka cluster. Splitting a topic into multiple partitions enables parallel processing and scalability. Each partition is an ordered, immutable sequence of messages. The number of partitions is set during topic creation but can be increased later if needed.

Messages within a partition are stored in the order they arrive, and each is assigned an offset, a unique identifier. When retrieving messages, consumers can specify the offset to start reading from, ensuring they process messages in the correct order.

If a message has a key, Kafka uses a hash of the key to determine the partition, ensuring that all messages with the same key are stored in the same partition, preserving their order.
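As an illustration, here is a sketch with the Java producer. The "orders" topic and "customer-42" key are hypothetical, and the producer is assumed to be configured as in the earlier sketch; because the default partitioner hashes the key, both records land in the same partition and keep their relative order:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Assumes `producer` is configured as in the earlier sketch.
static void sendKeyed(KafkaProducer<String, String> producer) {
    ProducerRecord<String, String> created = new ProducerRecord<>("orders", "customer-42", "order created");
    ProducerRecord<String, String> shipped = new ProducerRecord<>("orders", "customer-42", "order shipped");

    // Same key -> same partition, so the two events keep their relative order.
    producer.send(created, (md, e) -> {
        if (e == null) System.out.printf("partition=%d offset=%d%n", md.partition(), md.offset());
    });
    producer.send(shipped, (md, e) -> {
        if (e == null) System.out.printf("partition=%d offset=%d%n", md.partition(), md.offset());
    });
}
```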

Kafka Consumers & Consumer Groups

A Kafka consumer is a component that reads and processes messages from Kafka topics. A consumer group is a set of consumers from one application that work together to process events or messages from one or more topics in parallel. Each consumer in the group processes a subset of the partitions, allowing for parallel processing and scalability.

Consumers in a group coordinate to ensure that each partition is consumed by only one consumer at a time, preventing overlap and ensuring efficient processing. Each group also tracks its progress independently, so one consumer group's activity does not interfere with another's. To manage this coordination, Kafka uses offsets, unique identifiers assigned to each message within a partition.

Offsets mark the position of the last processed message, allowing consumers to track their progress and specify an offset to resume processing from where they left off. The relationship between consumers in a group and partitions is such that each partition is assigned to only one consumer within the group at any given time, ensuring messages are processed in order within each partition.

The maximum number of consumers in a group is equal to the number of partitions; if there are more consumers than partitions, some consumers will remain idle but can act as standbys in case of failures.

How Kafka Consumers Function

One consumer can read messages/events from one or more partitions. For example, a topic with 5 partitions can have up to 5 active consumers, but if it has only 2 consumers, each of the two consumers in the group reads messages from more than one partition. However, within a single group, one partition cannot be read by more than one consumer. A minimal consumer-group sketch follows.
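Here is a minimal sketch of a group consumer using the Java client; the broker address, group ID, and topic name are placeholders. Running several copies of this process with the same group.id makes Kafka split the topic's partitions among them:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group"); // members of this group share partitions
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```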

Use Cases Of Kafka Consumers

Example 1: Let's assume an application wants to send log data to one topic (Topic 1) and crawl data to another (Topic 2). Once the topics, producers, and consumers are configured, all the log data will go to Topic 1, and all the crawl data will go to Topic 2.

[Image: Apache Kafka consumer groups]

This log data may be useful for a service (Service 1), but that service does not require the crawl data. The crawl data needs its own topic, and this topic (Topic 2) will contain only crawl data; its consumers expect only crawl data.

We could also feed the log data into Topic 2, and Kafka would not throw an error, but we don't, because the service/application does not expect log data to be present in Topic 2.

Example 2: In this example, one client is a mobile phone and the other is a car, but both expect the same type of data so that they can process it.

[Image: Kafka topic partitions. Source: Kafka Documentation]

A couple of questions may arise here: Why are there multiple topics within the same Kafka cluster? Can we have a different Kafka cluster?

A Kafka cluster is already large and complex, so we don't want multiple Kafka clusters within one application. We can build as many pipelines as we want within one cluster: Topic 1 can carry one type of data and Topic 2 another, each with its own producers and consumers.

Metrics to Monitor/Track for Optimizing Kafka Consumers

Optimizing Kafka Consumers requires monitoring and tracking a few metrics. These metrics provide insights into the performance and health of the Kafka consumers by identifying inefficiencies, bottlenecks, and possible issues in the data processing pipeline. Below are some of the key metrics to focus on.

1. Consumer Offset

Consumer offsets show the position of the consumer in the partition from which it is reading. Consumers use this offset to keep track of their progress within a partition. If a consumer restarts, it will resume reading from this offset, ensuring no messages are missed or reprocessed. Optimum offset management helps consumers process each message only once and recover from failures without data duplication or loss.

How to Monitor: You can track committed offsets using:

  • committed_offsets (last committed offset for each partition)
  • last_committed_offset (the most recent offset committed)
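The exact metric names depend on your monitoring tool. With the Java client itself, a minimal sketch for inspecting committed offsets for the current assignment looks like this (it assumes an active, subscribed consumer such as the one in the earlier sketch):

```java
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Assumes `consumer` is an active, subscribed KafkaConsumer.
static void reportCommittedOffsets(KafkaConsumer<String, String> consumer) {
    Set<TopicPartition> assigned = consumer.assignment();
    Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(assigned);
    for (TopicPartition tp : assigned) {
        OffsetAndMetadata om = committed.get(tp); // null if nothing committed yet
        System.out.printf("%s committed=%s position=%d%n",
                tp, om == null ? "none" : om.offset(), consumer.position(tp));
    }
}
```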

2. Consumer Lag

Consumer lag is the gap between the latest offset (the position of the most recent message in a Kafka partition, also called the log end offset) and the consumer offset. If data is produced faster than it is consumed, the gap grows and data processing is delayed.

How to Monitor: Developers can use tools like the built-in Kafka monitoring APIs, Kafka Manager, Grafana dashboards, or Burrow to track consumer lag. Below are some metrics to look at:

  • current_offset (the offset the consumer has processed up to)
  • consumer_lag (per partition)
  • log_end_offset (the latest offset in the partition)
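The underlying arithmetic is simple: per-partition lag is the log end offset minus the consumer's current position. A minimal sketch with the Java client, assuming an active, subscribed consumer:

```java
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Assumes `consumer` is an active, subscribed KafkaConsumer.
static void reportLag(KafkaConsumer<String, String> consumer) {
    Set<TopicPartition> assigned = consumer.assignment();
    Map<TopicPartition, Long> endOffsets = consumer.endOffsets(assigned); // log_end_offset
    for (TopicPartition tp : assigned) {
        long current = consumer.position(tp);    // current_offset
        long lag = endOffsets.get(tp) - current; // consumer_lag per partition
        System.out.printf("%s lag=%d%n", tp, lag);
    }
}
```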

3. Throughput

Throughput is the number of messages consumed per second. High throughput indicates that consumers are processing messages efficiently, which is essential for maintaining the data pipeline's performance.

How to Monitor: Developers can use Kafka’s JMX metrics or integrated monitoring systems like Grafana and Prometheus to track metrics such as:

  • records_consumed_rate (number of records consumed per second)
  • bytes_consumed_rate (amount of data consumed per second)
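The Java client exposes these through its metrics registry (note that the client reports the names with hyphens rather than underscores). A minimal sketch for reading them in-process:

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

// Prints the consumer's throughput metrics; assumes `consumer` is active.
static void reportThroughput(KafkaConsumer<String, String> consumer) {
    for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
        String name = entry.getKey().name();
        if (name.equals("records-consumed-rate") || name.equals("bytes-consumed-rate")) {
            System.out.printf("%s = %s%n", name, entry.getValue().metricValue());
        }
    }
}
```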

4. Latency

It is the time taken for a message to be consumed after it is produced. For applications that require real-time data processing and immediate insights, low latency is crucial.

How to Monitor: Developers can monitor end-to-end latency, i.e., by comparing the timestamp when the message was produced to when it was consumed. Key metrics include:

  • fetch_latency_avg (average time to fetch records)
  • fetch_latency_max (maximum time to fetch records)
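A rough end-to-end measurement can also be taken in application code by comparing the record's producer-assigned timestamp with the time it is consumed; this sketch assumes producer and consumer clocks are reasonably synchronized:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Rough end-to-end latency: consume time minus the producer-assigned timestamp.
// Only meaningful if producer and consumer clocks are reasonably in sync.
static long endToEndLatencyMs(ConsumerRecord<?, ?> record) {
    return System.currentTimeMillis() - record.timestamp();
}
```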

5. Error Rates

This metric measures the frequency of errors consumers encounter during message processing. A high error rate can indicate problems with data processing logic, network issues, or other systemic issues that need to be addressed.

How to Monitor: Developers can monitor the following metrics:

  • record_error_rate (rate of record processing errors): Kafka reports the average number of record sends that result in errors per second. This metric belongs to the producer metrics and can be monitored using tools like JMX (Java Management Extensions) or Kafka's built-in command-line tools.
  • request_error_rate (rate of request errors to Kafka brokers)
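On the consumer side, a simple application-level tracker can complement the client's own metrics. This is a hypothetical helper, not a Kafka API; it just counts how many processed records failed:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper: tracks the share of records whose processing failed.
class ErrorRateTracker {
    private final AtomicLong processed = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();

    void record(boolean success) {
        processed.incrementAndGet();
        if (!success) failed.incrementAndGet();
    }

    double errorRate() {
        long total = processed.get();
        return total == 0 ? 0.0 : (double) failed.get() / total;
    }
}
```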

6. Resource Utilization

Resource utilization monitors various resources, such as CPU, memory, and network bandwidth, used by the Kafka consumers. Monitoring these resources is essential because high resource utilization can lead to increased costs and performance degradation.

How to Monitor: You can use system monitoring tools (e.g., top, iostat, htop) and Kafka-specific metrics:

  • consumer_cpu_usage
  • consumer_memory_usage
  • network_io

7. Fetch Metrics

Fetch metrics provide details about consumers' fetching behavior. By understanding this behavior and tuning the consumer configuration accordingly, you can optimize consumers for performance.

How to Monitor: Key fetch metrics include:

  • fetch_rate (rate of fetch requests)
  • fetch_size_avg (average size of fetched records)
  • fetch_wait_time_avg (average wait time for fetch requests)
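These metrics are shaped directly by the consumer's fetch configuration. A sketch of the main knobs in the Java client follows; the values are illustrative, not recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

// Illustrative fetch tuning; the right values depend on your workload.
static Properties fetchTunedProps() {
    Properties props = new Properties();
    props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024); // wait for at least 64 KB per fetch...
    props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 250);     // ...or 250 ms, whichever comes first
    props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1024 * 1024); // cap per-partition fetch size
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);      // cap records returned by each poll()
    return props;
}
```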

8. Consumer Rebalance Activity

It tracks how often and for how long consumer rebalances occur. Rebalancing is the process by which Kafka redistributes partitions among consumers in a group to ensure an even distribution of workload. This is crucial for maintaining optimal performance and preventing any single consumer from becoming overloaded or underutilized. 

Although it is recommended to avoid rebalances, they can sometimes become inevitable due to changes in the consumer group, such as consumers joining or leaving the group or changes in the number of partitions. 

Frequent rebalances can disrupt message processing and increase latency, as consumers need to stop processing messages during the rebalance and reassign partitions. Therefore, monitoring rebalance activity is essential to identify and mitigate performance issues.

How to Monitor: Developers can monitor the following metrics:

  • rebalance_rate (frequency of rebalance events)
  • rebalance_time (time taken for a rebalance to complete)
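With the Java client, you can also observe rebalances directly by attaching a ConsumerRebalanceListener when subscribing. A minimal logging sketch; the topic name is a placeholder:

```java
import java.util.Collection;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Logs every revoke/assign so rebalance frequency and duration can be tracked.
static void subscribeWithRebalanceLogging(KafkaConsumer<String, String> consumer) {
    consumer.subscribe(List.of("example-topic"), new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            System.out.printf("t=%d rebalance: revoked %s%n", System.currentTimeMillis(), partitions);
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            System.out.printf("t=%d rebalance: assigned %s%n", System.currentTimeMillis(), partitions);
        }
    });
}
```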

Complexities In Rebalancing And Factors Influencing It

Let's assume a consumer group with several workers. The first consumer worker is connected to a partition (say, Partition 1) and has established connections to that partition's replicas. When a rebalance is triggered, the consumers drop all those connections and must again figure out which replica to connect to.

Now, there is complexity at two levels:

  1. Kafka has to decide which worker connects to which partition.
  2. Kafka has to figure out which replica of that partition to read from.

This is why a rebalance is a costly operation, and one of the ways to optimize the consumer workers is to avoid rebalancing.

Factors Influencing Rebalancing And How To Avoid Them

  1. To maintain efficient resource (worker) utilization, predefine the number of consumer workers needed to prevent overutilization or underutilization. Rebalancing occurs when workers fall into either of these extremes.
  2. Ensure consumer workers do not die. Workers elsewhere are often designed to scale up and down, but this is not a best practice for Kafka consumers, because every membership change triggers a rebalance. Keep the group stable.
  3. Do not overload the workers to the point where they run out of memory, get killed, and restart. Even when a worker merely leaves the consumer group, its connections are lost and a rebalance is triggered. (A configuration sketch for stabilizing group membership follows this list.)
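Here is a sketch of consumer settings that make group membership more stable. The values are illustrative, the group instance ID is a hypothetical per-worker name, and static membership requires Kafka 2.3 or later:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

// Illustrative settings for a more stable consumer group.
static Properties stableGroupProps() {
    Properties props = new Properties();
    // Static membership: a restarted worker that rejoins under the same ID
    // within the session timeout does not trigger a rebalance.
    props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "worker-1"); // hypothetical stable per-worker ID
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);     // tolerate brief pauses before eviction
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 15000);  // heartbeat well within the session timeout
    // Give slow batches enough headroom so a busy worker is not kicked out mid-processing.
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);
    return props;
}
```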

Factors Influencing Kafka Consumer Performance

Network Latency and Bandwidth: Affects the speed of data transmission between Kafka brokers and consumers.

Consumer Configuration: Settings such as fetch size, batch size, and session timeout can impact performance.

Resource Allocation: CPU, memory, and disk I/O capacity on consumer machines.

Kafka Cluster Configuration: Broker settings and partitioning strategy.

Tools For Monitoring Kafka Consumers

  1. Prometheus and Grafana: Collect Kafka metrics using Prometheus and visualize them with Grafana dashboards.
  2. Kafka Manager: An open-source tool for managing and monitoring Kafka clusters.
  3. Confluent Control Center: Provides a comprehensive monitoring solution for Kafka clusters, including consumer metrics.
  4. Burrow: A monitoring tool specifically for Kafka consumer lag tracking.

By regularly monitoring these metrics, you can gain valuable insights into the performance of your Kafka Consumers, identify areas for optimization, and ensure that your data processing pipeline runs efficiently and reliably.

Best Practices For Scaling Kafka Consumers

Increase Partition Count: More partitions allow for greater parallelism, enabling more consumers to work simultaneously; a sketch using the Admin API follows these practices.

Distribute Load Evenly: Ensure partitions are evenly distributed among consumers to avoid bottlenecks.

Use Horizontal Scaling: Add more consumer instances to the consumer group to handle increased load.
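For the first practice, partition counts can be raised (never lowered) with the Admin API. A minimal sketch with placeholder broker and topic names; note that growing a topic changes the key-to-partition mapping for new messages only:

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (Admin admin = Admin.create(props)) {
            // Grow "example-topic" to 10 partitions; existing data is not reshuffled.
            admin.createPartitions(Map.of("example-topic", NewPartitions.increaseTo(10)))
                 .all().get();
        }
    }
}
```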

Conclusion

Optimizing Kafka consumers helps with real-time data processing, prevents service disruptions, enhances user experience, and reduces financial losses. 

Effective strategies involve understanding consumer groups, applying real-life use cases, and monitoring key metrics such as consumer lag, throughput, latency, and error rates. Tools like Prometheus, Grafana, Kafka Manager, and Confluent Control Center provide valuable insights into consumer performance and optimization opportunities.

Schedule a meeting with our experts to understand how Kafka Consumers can help improve your application’s performance.

 

About the Author

Qais Qadri, Senior Software Engineer

Qais enjoys exploring places, going on long drives, and hanging out with close ones. He likes to read books about life and self-improvement, as well as blogs about technology. He also likes to play around with code. Qais lives by the values of Gratitude, Patience, and Positivity.



Hanush Kumar, Marketing Associate

Hanush finds joy in YouTube content on automobiles and smartphones, prefers watching thrillers, and enjoys movie directors' interviews where they give out book recommendations. His essential life values? Positivity, continuous learning, self-respect, and integrity.

