Essential configuration for Kafka

Sumit Rawal answered on June 21, 2023


At least 3 Kafka brokers and 3 ZooKeeper nodes, spread across availability zones to ensure high availability and replication of data.

You MUST ensure that all topics are replicated across all availability zones of the cluster; otherwise, you risk making Vault vulnerable to downtime in the event of an availability zone failure.
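For illustration, here is a minimal sketch that creates a topic with replication factor 3 using Kafka's Java AdminClient. The broker addresses and the topic name vault.example are hypothetical, and spreading replicas across zones additionally assumes each broker sets broker.rack to its availability zone.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical broker addresses, one per availability zone.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka-az1:9092,kafka-az2:9092,kafka-az3:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Replication factor 3 places one replica per broker; with
            // broker.rack set per AZ, Kafka spreads replicas across zones.
            NewTopic topic = new NewTopic("vault.example", 6, (short) 3)
                    .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}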

Vault requires the appropriate permissions and Admin API access to create, update, and delete Kafka topics and, optionally, Kafka ACLs.
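As a hedged example of the optional ACL management, the sketch below grants a principal full control over a topic prefix via the AdminClient. The principal User:vault and the vault. prefix are assumptions for illustration, not Vault's actual naming scheme.

import java.util.Collections;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantVaultTopicAcl {
    public static void grant(AdminClient admin) throws Exception {
        // Allow the (hypothetical) "vault" principal, from any host, full
        // control over all topics whose names start with "vault.".
        AclBinding binding = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "vault.", PatternType.PREFIXED),
                new AccessControlEntry("User:vault", "*",
                        AclOperation.ALL, AclPermissionType.ALLOW));
        admin.createAcls(Collections.singleton(binding)).all().get();
    }
}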

Vault requires access to the __consumer_offsets topic to monitor the consumer lag of the Kafka processors. The Kubernetes HPA extension uses the consumer-lag metric to scale Vault Kafka processors (this does not affect scaling based on CPU and memory utilisation); a lag-monitoring sketch follows the list below.

Possible workarounds include:

  • Manually adding a Kubernetes custom metric named max_service_consumer_group_lag for the Kubernetes HPA extension to use.

  • Pinning all Vault services to the max replicas specified in the Kubernetes Horizontal Pod Autoscaler resources.

  • Scaling based on memory and CPU utilisation only.
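To show what lag monitoring against committed offsets can look like, here is a minimal AdminClient sketch that sums a consumer group's lag (log-end offset minus committed offset). How Vault itself derives its metric is not specified here; this is an illustrative assumption only.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLag {
    // Total lag for one consumer group: log-end offset minus committed
    // offset, summed over the group's partitions.
    public static long totalLag(AdminClient admin, String groupId) throws Exception {
        Map<TopicPartition, OffsetAndMetadata> committed = admin
                .listConsumerGroupOffsets(groupId)
                .partitionsToOffsetAndMetadata().get();

        // Fetch the latest (log-end) offset for each partition the group owns.
        Map<TopicPartition, OffsetSpec> latestSpec = new HashMap<>();
        committed.keySet().forEach(tp -> latestSpec.put(tp, OffsetSpec.latest()));
        Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> ends =
                admin.listOffsets(latestSpec).all().get();

        long lag = 0;
        for (Map.Entry<TopicPartition, OffsetAndMetadata> e : committed.entrySet()) {
            lag += ends.get(e.getKey()).offset() - e.getValue().offset();
        }
        return lag;
    }
}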

All Vault services MUST have network connectivity to every broker in the Kafka cluster. In practice, this means all Kubernetes nodes in the cluster that Vault is running on must have connectivity to every broker. Proxies, firewalls, and network policies can also affect this.

Kafka brokers and ZooKeeper nodes that are sized appropriately in terms of resources. This includes CPU, RAM, disk size, disk IOPS, number of file descriptors, and network bandwidth (this list is not exhaustive). These settings are tightly correlated with the size of the Vault instance and with whether the Kafka cluster is dedicated to Vault only. You MUST have the ability to scale the cluster accordingly when load increases.

The recommended minimum requirements per broker for a Vault-only dedicated Kafka cluster are:

  • CPU: 8 cores (prioritise the number of vcores/cores over the speed of each)

  • RAM: 16 GB

  • Disk: 1 TiB SSD

  • IOPS: 4 IOPS per GB of disk (roughly 4,000 IOPS for the 1 TiB disk above)

Vault is not distributed with a solution for monitoring Kafka cluster brokers. It is your responsibility to put continuous monitoring of the Kafka cluster in place and to ensure its uptime.
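As a starting point only (not a substitute for a real monitoring stack), the sketch below probes broker liveness with the AdminClient; the expected broker count is an assumption supplied by the caller.

import java.util.Collection;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.Node;

public class BrokerHealthCheck {
    // A minimal liveness probe: ask the cluster for its current broker list
    // and compare it against the expected count. Real monitoring should also
    // track under-replicated partitions, disk usage, and request latencies.
    public static boolean allBrokersUp(AdminClient admin, int expectedBrokers)
            throws Exception {
        Collection<Node> nodes = admin.describeCluster().nodes().get();
        for (Node n : nodes) {
            System.out.printf("broker %d at %s:%d%n", n.id(), n.host(), n.port());
        }
        return nodes.size() == expectedBrokers;
    }
}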
