Confluent Control Center Installation¶
This topic offers instructions for installing Control Center. For Confluent Platform installations, see Install Confluent Platform On-Premises.
Before you begin¶
Use the information in this section to understand the requirements to install Control Center.
Compatibility with Confluent Platform¶
Control Center is compatible with the following Confluent Platform versions:
- Confluent Platform 8.0
- Confluent Platform 7.9
- Confluent Platform 7.8
- Confluent Platform 7.7
Considerations:
- Control Center 2.2 is backward compatible with the supported Confluent Platform versions listed above.
- Use Confluent Platform 7.9 to export metrics with mTLS (communication between Kafka and Control Center).
- Use Confluent Platform 7.7 or 7.8 to export metrics with TLS and Basic Auth (communication between Kafka and Control Center).
- Confluent Platform 8.0 supports only KRaft mode. For ZooKeeper, use Confluent Platform 7.9, 7.8, or 7.7.
System requirements¶
Control Center can monitor larger workloads than previous versions of Control Center (Legacy).
For monitoring clusters with 100k replicas or fewer (most common):
- Memory: 8 GB
- Storage: 200 GB (preferably SSD)
- CPU: 4 cores
For monitoring clusters with more than 100k replicas:
- Memory: 16 GB
- Storage: 300 GB (preferably SSD)
- CPU: 8 cores
For monitoring clusters with 200k - 400k replicas, use the configurations mentioned in the multi-node manual install. Alternatively, adjust the allocated memory, storage, and CPU parameters to accommodate the increased workload.
Considerations:
Storage requirements are based on the default metrics retention of 15 days. If you configure a different retention period, adjust storage proportionately.
For example:
- If you increase metrics retention from 15 to 30 days, double the storage.
- If you decrease metrics retention from 15 to 7 days, halve the storage.
You can configure the default Control Center network port.
Using a proxy to control and secure access to Control Center is supported.
Prerequisites¶
- Provision a new virtual machine (VM) for Control Center on the same network as the Confluent Platform clusters that you want to monitor.
- For VM sizing recommendations, see System requirements.
- Install the same version of OpenJDK that is installed on your existing Control Center (Legacy) host (openjdk-8-jdk, openjdk-11-jdk, or openjdk-17-jdk).
- On the Control Center VM, open ports 9090 (Control Center) and 9021 (Control Center user interface).
- On every broker or KRaft controller, ensure that you can send outbound HTTP traffic to port 9090 on the Control Center VM.
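For example, on a host that uses firewalld, opening the two Control Center ports listed above might look like the following. This is a minimal sketch; adapt it to your firewall tooling and security policies.
sudo firewall-cmd --permanent --add-port=9090/tcp   # metrics ingestion (Prometheus endpoint)
sudo firewall-cmd --permanent --add-port=9021/tcp   # Control Center user interface
sudo firewall-cmd --reload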
Single-node manual installation¶
Use these steps for single-node manual installation of Control Center with Confluent Platform.
Docker¶
The following steps install Confluent Platform 8.0 and Control Center 2.2.
To install Control Center with Docker:
Clone the Control Center public repository.
git clone --branch control-center https://github.com/confluentinc/cp-all-in-one.git
Change directories into the cp-all-in-one directory of the cloned repository:
cd cp-all-in-one/cp-all-in-one
Check out the 8.0.0-post branch:
git checkout 8.0.0-post
Run the docker compose command.
docker compose up -d
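To confirm the stack came up, you can check the container status and then browse to the Control Center UI. This is a quick sanity check, not part of the official steps.
docker compose ps   # all services should report a running (healthy) state
Then open http://localhost:9021 in a browser once the containers are running.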
Archive¶
Install Control Center and Confluent Platform using archives on a single node.
Considerations:
- Control Center introduces a new directory structure that differs from the directory structure used with Control Center (Legacy).
- In earlier versions of Confluent Platform, there was a single main directory, commonly referenced as $CONFLUENT_HOME, and all components, including Control Center (Legacy), were inside this main directory (for example, $CONFLUENT_HOME/control-center).
- Control Center now has its own top-level directory, $C3_HOME. $C3_HOME is placed at the same hierarchical level as $CONFLUENT_HOME, not inside it.
- The steps below offer the optimal order in which to install Confluent Platform with Control Center.
- You must use a special command to start Prometheus on MacOS.
- By default, Alertmanager and controllers in KRaft mode use port 9093. To run Prometheus, Alertmanager, and KRaft mode controllers on the same host, you must manually edit the provided Control Center scripts.
Download the Confluent Platform archive (7.7 to 8.0 supported) and run these commands:
wget https://packages.confluent.io/archive/8.0/confluent-8.0.0.tar.gz
tar -xvf confluent-8.0.0.tar.gz
cd confluent-8.0.0
export CONFLUENT_HOME=`pwd`
Update the broker and controller configurations to emit metrics to Prometheus by adding the following configuration to etc/kafka/controller.properties and etc/kafka/broker.properties.
The confluent.telemetry.exporter._c3.metrics.include=<value> line is very long. Simply copy the code block as provided and append it to the end of the properties files. Paste that line as a single line, even though it appears wrapped in the documentation.
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter
confluent.telemetry.exporter._c3.type=http
confluent.telemetry.exporter._c3.enabled=true
confluent.telemetry.exporter._c3.metrics.include=io.confluent.kafka.server.request.(?!.*delta).*|io.confluent.kafka.server.server.broker.state|io.confluent.kafka.server.replica.manager.leader.count|io.confluent.kafka.server.request.queue.size|io.confluent.kafka.server.broker.topic.failed.produce.requests.rate.1.min|io.confluent.kafka.server.tier.archiver.total.lag|io.confluent.kafka.server.request.total.time.ms.p99|io.confluent.kafka.server.broker.topic.failed.fetch.requests.rate.1.min|io.confluent.kafka.server.broker.topic.total.fetch.requests.rate.1.min|io.confluent.kafka.server.partition.caught.up.replicas.count|io.confluent.kafka.server.partition.observer.replicas.count|io.confluent.kafka.server.tier.tasks.num.partitions.in.error|io.confluent.kafka.server.broker.topic.bytes.out.rate.1.min|io.confluent.kafka.server.request.total.time.ms.p95|io.confluent.kafka.server.controller.active.controller.count|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.total|io.confluent.kafka.server.request.total.time.ms.p999|io.confluent.kafka.server.controller.active.broker.count|io.confluent.kafka.server.request.handler.pool.request.handler.avg.idle.percent.rate.1.min|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.rate.1.min|io.confluent.kafka.server.controller.unclean.leader.elections.rate.1.min|io.confluent.kafka.server.replica.manager.partition.count|io.confluent.kafka.server.controller.unclean.leader.elections.total|io.confluent.kafka.server.partition.replicas.count|io.confluent.kafka.server.broker.topic.total.produce.requests.rate.1.min|io.confluent.kafka.server.controller.offline.partitions.count|io.confluent.kafka.server.socket.server.network.processor.avg.idle.percent|io.confluent.kafka.server.partition.under.replicated|io.confluent.kafka.server.log.log.start.offset|io.confluent.kafka.server.log.tier.size|io.confluent.kafka.server.log.size|io.confluent.kafka.server.tier.fetcher.bytes.fetched.total|io.confluent.kafka.server.request.total.time.ms.p50|io.confluent.kafka.server.tenant.consumer.lag.offsets|io.confluent.kafka.server.session.expire.listener.zookeeper.expires.rate.1.min|io.confluent.kafka.server.log.log.end.offset|io.confluent.kafka.server.broker.topic.bytes.in.rate.1.min|io.confluent.kafka.server.partition.under.min.isr|io.confluent.kafka.server.partition.in.sync.replicas.count|io.confluent.telemetry.http.exporter.batches.dropped|io.confluent.telemetry.http.exporter.items.total|io.confluent.telemetry.http.exporter.items.succeeded|io.confluent.telemetry.http.exporter.send.time.total.millis|io.confluent.kafka.server.controller.leader.election.rate.(?!.*delta).*|io.confluent.telemetry.http.exporter.batches.failed
confluent.telemetry.exporter._c3.client.base.url=http://localhost:9090/api/v1/otlp
confluent.telemetry.exporter._c3.client.compression=gzip
confluent.telemetry.exporter._c3.api.key=dummy
confluent.telemetry.exporter._c3.api.secret=dummy
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=10
confluent.telemetry.metrics.collector.interval.ms=60000
confluent.telemetry.remoteconfig._confluent.enabled=false
confluent.consumer.lag.emitter.enabled=true
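If you prefer to append these settings from the command line, a sketch like the following works, assuming you first saved the block above to a file named c3-telemetry.properties (a hypothetical file name) in the current directory:
cat c3-telemetry.properties >> etc/kafka/controller.properties   # append telemetry settings to the controller config
cat c3-telemetry.properties >> etc/kafka/broker.properties       # append telemetry settings to the broker config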
Download the Control Center archive and run these commands:
wget https://packages.confluent.io/confluent-control-center-next-gen/archive/confluent-control-center-next-gen-2.2.0.tar.gz
tar -xvf confluent-control-center-next-gen-2.2.0.tar.gz
cd confluent-control-center-next-gen-2.2.0
export C3_HOME=`pwd`
Start Prometheus and Alertmanager.
To start Control Center, you need three dedicated command windows: one for Prometheus, one for Alertmanager, and one for the Control Center process. Run the following commands from $C3_HOME in all command windows.
Open etc/confluent-control-center/prometheus-generated.yml and change localhost:9093 to localhost:9098:
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9098
Start Prometheus.
All operating systems except MacOS:
bin/prometheus-start
MacOS:
bash bin/prometheus-start
Note
Prometheus runs but does not output any information to the screen.
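To confirm Prometheus is up, you can query its health endpoint from another terminal; this is the same endpoint used later to verify broker connectivity and should return a short healthy message:
curl http://localhost:9090/-/healthy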
Start Alertmanager.
Run this command:
export ALERTMANAGER_PORT=9098
All operating systems except MacOS:
bin/alertmanager-start
MacOS:
bash bin/alertmanager-start
Start Control Center.
Open etc/confluent-control-center/control-center-dev.properties and update port 9093 to 9098:
confluent.controlcenter.alertmanager.url=http://localhost:9098
Run this command:
bin/control-center-start etc/confluent-control-center/control-center-dev.properties
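At this point the Control Center UI should respond on port 9021, although no metrics appear until Confluent Platform is started in the next step. A quick sanity check from another terminal:
curl -I http://localhost:9021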
Start Confluent Platform.
To start Confluent Platform, you need two dedicated command windows: one for the controller and one for the broker process. Run all of the following commands from $CONFLUENT_HOME in both command windows. The Confluent Platform start sequence requires you to generate a single random ID and use that same ID for both the controller and the broker process.
In the command window dedicated to running the controller, change directories into $CONFLUENT_HOME.
cd $CONFLUENT_HOME
Generate a random value for KAFKA_CLUSTER_ID.
KAFKA_CLUSTER_ID="$(bin/kafka-storage random-uuid)"
Use the following command to get the random ID and save the output. You need this value to start the controller and the broker.
echo $KAFKA_CLUSTER_ID
Format the log directories for the controller:
bin/kafka-storage format --cluster-id $KAFKA_CLUSTER_ID -c etc/kafka/kraft/controller.properties --standalone
Start the controller:
bin/kafka-server-start etc/kafka/kraft/controller.properties
Open a command window for the broker and navigate to $CONFLUENT_HOME.
cd $CONFLUENT_HOME
Set the KAFKA_CLUSTER_ID variable to the random ID you generated earlier with kafka-storage random-uuid.
export KAFKA_CLUSTER_ID=<KAFKA-CLUSTER-ID>
Format the log directories for this broker:
bin/kafka-storage format --cluster-id $KAFKA_CLUSTER_ID -c etc/kafka/kraft/broker.properties
Start the broker:
bin/kafka-server-start etc/kafka/kraft/broker.properties
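Once the broker is up, you can verify that it accepts client connections with a simple metadata request. This assumes the default listener on localhost:9092; it is a sanity check, not an official step.
bin/kafka-topics --bootstrap-server localhost:9092 --list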
Download the Confluent Platform archive (7.7 to 7.9 supported) and run these commands:
wget https://packages.confluent.io/archive/7.9/confluent-7.9.0.tar.gz
tar -xvf confluent-7.9.0.tar.gz
cd confluent-7.9.0
export CONFLUENT_HOME=`pwd`
Update the broker configuration to emit metrics to Prometheus by adding the following configuration to etc/kafka/server.properties:
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter
confluent.telemetry.exporter._c3.type=http
confluent.telemetry.exporter._c3.enabled=true
confluent.telemetry.exporter._c3.metrics.include=io.confluent.kafka.server.request.(?!.*delta).*|io.confluent.kafka.server.server.broker.state|io.confluent.kafka.server.replica.manager.leader.count|io.confluent.kafka.server.request.queue.size|io.confluent.kafka.server.broker.topic.failed.produce.requests.rate.1.min|io.confluent.kafka.server.tier.archiver.total.lag|io.confluent.kafka.server.request.total.time.ms.p99|io.confluent.kafka.server.broker.topic.failed.fetch.requests.rate.1.min|io.confluent.kafka.server.broker.topic.total.fetch.requests.rate.1.min|io.confluent.kafka.server.partition.caught.up.replicas.count|io.confluent.kafka.server.partition.observer.replicas.count|io.confluent.kafka.server.tier.tasks.num.partitions.in.error|io.confluent.kafka.server.broker.topic.bytes.out.rate.1.min|io.confluent.kafka.server.request.total.time.ms.p95|io.confluent.kafka.server.controller.active.controller.count|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.total|io.confluent.kafka.server.request.total.time.ms.p999|io.confluent.kafka.server.controller.active.broker.count|io.confluent.kafka.server.request.handler.pool.request.handler.avg.idle.percent.rate.1.min|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.rate.1.min|io.confluent.kafka.server.controller.unclean.leader.elections.rate.1.min|io.confluent.kafka.server.replica.manager.partition.count|io.confluent.kafka.server.controller.unclean.leader.elections.total|io.confluent.kafka.server.partition.replicas.count|io.confluent.kafka.server.broker.topic.total.produce.requests.rate.1.min|io.confluent.kafka.server.controller.offline.partitions.count|io.confluent.kafka.server.socket.server.network.processor.avg.idle.percent|io.confluent.kafka.server.partition.under.replicated|io.confluent.kafka.server.log.log.start.offset|io.confluent.kafka.server.log.tier.size|io.confluent.kafka.server.log.size|io.confluent.kafka.server.tier.fetcher.bytes.fetched.total|io.confluent.kafka.server.request.total.time.ms.p50|io.confluent.kafka.server.tenant.consumer.lag.offsets|io.confluent.kafka.server.session.expire.listener.zookeeper.expires.rate.1.min|io.confluent.kafka.server.log.log.end.offset|io.confluent.kafka.server.broker.topic.bytes.in.rate.1.min|io.confluent.kafka.server.partition.under.min.isr|io.confluent.kafka.server.partition.in.sync.replicas.count|io.confluent.telemetry.http.exporter.batches.dropped|io.confluent.telemetry.http.exporter.items.total|io.confluent.telemetry.http.exporter.items.succeeded|io.confluent.telemetry.http.exporter.send.time.total.millis|io.confluent.kafka.server.controller.leader.election.rate.(?!.*delta).*|io.confluent.telemetry.http.exporter.batches.failed
confluent.telemetry.exporter._c3.client.base.url=http://localhost:9090/api/v1/otlp
confluent.telemetry.exporter._c3.client.compression=gzip
confluent.telemetry.exporter._c3.api.key=dummy
confluent.telemetry.exporter._c3.api.secret=dummy
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=10
confluent.telemetry.metrics.collector.interval.ms=60000
confluent.telemetry.remoteconfig._confluent.enabled=false
confluent.consumer.lag.emitter.enabled=true
Start Confluent Platform.
bin/zookeeper-server-start etc/kafka/zookeeper.properties
bin/kafka-server-start etc/kafka/server.properties
Download the Control Center archive and run these commands:
wget https://packages.confluent.io/confluent-control-center-next-gen/archive/confluent-control-center-next-gen-2.2.0.tar.gz
tar -xvf confluent-control-center-next-gen-2.2.0.tar.gz
cd confluent-control-center-next-gen-2.2.0
Start Prometheus.
All operating systems except MacOS:
bin/prometheus-start
MacOS:
bash bin/prometheus-start
Start Alertmanager.
All operating systems except MacOS:
bin/alertmanager-start
MacOS:
bash bin/alertmanager-start
Start Control Center.
bin/control-center-start etc/confluent-control-center/control-center-dev.properties
Multi-node manual installation¶
Use these steps for multi-node manual installation of Control Center and Confluent Platform.
Provision a new node using any of the Confluent Platform supported operating systems. For more information, see Supported operating systems.
Log in to the VM on which you will install Confluent Platform.
Install Control Center on a new node/VM. To ensure a smooth transition, allow Control Center (Legacy) users to continue using Control Center (Legacy) until Control Center has gathered 7-15 days of historical metrics. For more information, see Migration.
Log in to the VM and install Control Center. For more information, see Compatibility with Confluent Platform.
Use the instructions for installing Confluent Platform, but make sure to use the base URL and properties from these instructions to install Control Center.
For more information, see Confluent Platform System Requirements, Install Confluent Platform using Systemd on Ubuntu and Debian, and Install Confluent Platform using Systemd on RHEL, CentOS, and Fedora-based Linux.
Ubuntu and Debian
export BASE_URL=https://packages.confluent.io/confluent-control-center-next-gen/deb/
sudo apt-get update
wget ${BASE_URL}archive.key
sudo apt-key add archive.key
sudo add-apt-repository -y "deb ${BASE_URL} stable main"
sudo apt update
sudo apt install -y confluent-control-center-next-gen
RHEL, CentOS, and Fedora-based Linux
export base_url=https://packages.confluent.io/confluent-control-center-next-gen/rpm/
cat <<EOF | sudo tee /etc/yum.repos.d/Confluent.repo > /dev/null
[Confluent]
name=Confluent repository
baseurl=${base_url}
gpgcheck=1
gpgkey=${base_url}archive.key
enabled=1
EOF
sudo yum install -y confluent-control-center-next-gen cyrus-sasl openssl-devel
Install Java for your operating system (if not installed).
sudo yum install java-17-openjdk -y   # RHEL, CentOS, and Fedora-based Linux
sudo apt install openjdk-17-jdk -y    # Ubuntu and Debian
Copy /etc/confluent-control-center/control-center-production.properties from your current Control Center (Legacy) into the Control Center node on the VM and add these properties:
confluent.controlcenter.id=10
confluent.controlcenter.prometheus.enable=true
confluent.controlcenter.prometheus.url=http://localhost:9090
confluent.controlcenter.prometheus.rules.file=/etc/confluent-control-center/trigger_rules-generated.yml
confluent.controlcenter.alertmanager.config.file=/etc/confluent-control-center/alertmanager-generated.yml
If you are using SSL, copy the certs at /var/ssl/private from your current Control Center (Legacy) into the Control Center node on the VM. If you are not using SSL, skip this step.
Change ownership of the configuration files. Give the Control Center process write permissions to the alert manager configuration so that the process can properly manage alert triggers. Use the chown command to set the Control Center process as the owner of the trigger_rules-generated.yml and alertmanager-generated.yml files.
chown -c cp-control-center /etc/confluent-control-center/trigger_rules-generated.yml
chown -c cp-control-center /etc/confluent-control-center/alertmanager-generated.yml
Start the following services on the Control Center node:
systemctl enable prometheus
systemctl start prometheus
systemctl enable alertmanager
systemctl start alertmanager
systemctl enable confluent-control-center
systemctl start confluent-control-center
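To confirm the services started, you can check their systemd status and listening ports. This is a quick sanity check; the exact port list depends on your configuration.
systemctl status prometheus alertmanager confluent-control-center --no-pager
ss -ltnp | grep -E '9090|9021'   # Prometheus ingestion endpoint and the Control Center UI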
Log in to each broker you intend to monitor and verify that the brokers can reach the Control Center node on port 9090.
curl http://<c3-internal-dns-url>:9090/-/healthy
All brokers must have access to the Control Center node on port 9090, but port 9090 does not require public access. Restrict access as you prefer.
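For example, with firewalld you could allow port 9090 only from your broker subnet using a rich rule. The subnet below (10.0.0.0/24) is a placeholder for your broker network; treat this as an illustration, not a required configuration.
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port port="9090" protocol="tcp" accept'
sudo firewall-cmd --reload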
Update the following properties for every Kafka broker and KRaft controller. Pay attention to the notes for the lines marked [1] through [6] that follow the code example.
KRaft controller properties are located here:
/etc/controller/server.properties
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter,io.confluent.metrics.reporter.ConfluentMetricsReporter --- [1]
confluent.telemetry.exporter._c3plusplus.type=http
confluent.telemetry.exporter._c3plusplus.enabled=true
confluent.telemetry.exporter._c3plusplus.metrics.include=io.confluent.kafka.server.request.(?!.*delta).*|io.confluent.kafka.server.server.broker.state|io.confluent.kafka.server.replica.manager.leader.count|io.confluent.kafka.server.request.queue.size|io.confluent.kafka.server.broker.topic.failed.produce.requests.rate.1.min|io.confluent.kafka.server.tier.archiver.total.lag|io.confluent.kafka.server.request.total.time.ms.p99|io.confluent.kafka.server.broker.topic.failed.fetch.requests.rate.1.min|io.confluent.kafka.server.broker.topic.total.fetch.requests.rate.1.min|io.confluent.kafka.server.partition.caught.up.replicas.count|io.confluent.kafka.server.partition.observer.replicas.count|io.confluent.kafka.server.tier.tasks.num.partitions.in.error|io.confluent.kafka.server.broker.topic.bytes.out.rate.1.min|io.confluent.kafka.server.request.total.time.ms.p95|io.confluent.kafka.server.controller.active.controller.count|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.total|io.confluent.kafka.server.request.total.time.ms.p999|io.confluent.kafka.server.controller.active.broker.count|io.confluent.kafka.server.request.handler.pool.request.handler.avg.idle.percent.rate.1.min|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.rate.1.min|io.confluent.kafka.server.controller.unclean.leader.elections.rate.1.min|io.confluent.kafka.server.replica.manager.partition.count|io.confluent.kafka.server.controller.unclean.leader.elections.total|io.confluent.kafka.server.partition.replicas.count|io.confluent.kafka.server.broker.topic.total.produce.requests.rate.1.min|io.confluent.kafka.server.controller.offline.partitions.count|io.confluent.kafka.server.socket.server.network.processor.avg.idle.percent|io.confluent.kafka.server.partition.under.replicated|io.confluent.kafka.server.log.log.start.offset|io.confluent.kafka.server.log.tier.size|io.confluent.kafka.server.log.size|io.confluent.kafka.server.tier.fetcher.bytes.fetched.total|io.confluent.kafka.server.request.total.time.ms.p50|io.confluent.kafka.server.tenant.consumer.lag.offsets|io.confluent.kafka.server.session.expire.listener.zookeeper.expires.rate.1.min|io.confluent.kafka.server.log.log.end.offset|io.confluent.kafka.server.broker.topic.bytes.in.rate.1.min|io.confluent.kafka.server.partition.under.min.isr|io.confluent.kafka.server.partition.in.sync.replicas.count|io.confluent.telemetry.http.exporter.batches.dropped|io.confluent.telemetry.http.exporter.items.total|io.confluent.telemetry.http.exporter.items.succeeded|io.confluent.telemetry.http.exporter.send.time.total.millis|io.confluent.kafka.server.controller.leader.election.rate.(?!.*delta).*|io.confluent.telemetry.http.exporter.batches.failed
confluent.telemetry.exporter._c3plusplus.client.base.url=http://c3-internal-dns-hostname:9090/api/v1/otlp --- [2]
confluent.telemetry.exporter._c3plusplus.client.compression=gzip
confluent.telemetry.exporter._c3plusplus.api.key=dummy
confluent.telemetry.exporter._c3plusplus.api.secret=dummy
confluent.telemetry.exporter._c3plusplus.buffer.pending.batches.max=80 --- [3]
confluent.telemetry.exporter._c3plusplus.buffer.batch.items.max=4000 --- [4]
confluent.telemetry.exporter._c3plusplus.buffer.inflight.submissions.max=10 --- [5]
confluent.telemetry.metrics.collector.interval.ms=60000 --- [6]
confluent.telemetry.remoteconfig._confluent.enabled=false
confluent.consumer.lag.emitter.enabled=true
[1] To enable metrics for both Control Center (Legacy) and Control Center, update your existing Control Center (Legacy) metric.reporters property to use the following values:
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter,io.confluent.metrics.reporter.ConfluentMetricsReporter
If you decommission Control Center (Legacy), enable only the TelemetryReporter plugin with the following value:
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter
[2] Ensure the URL in confluent.telemetry.exporter._c3plusplus.client.base.url is the actual Control Center URL, reachable from the broker host.
confluent.telemetry.exporter._c3plusplus.client.base.url=http://c3-internal-dns-hostname:9090/api/v1/otlp
[3] [4] [5] [6] Use the following configurations for clusters with 100,000 or fewer replicas. To get an accurate count of replicas, use the sum of all replicas across all clusters monitored in Control Center (Legacy) (including the Control Center (Legacy) bootstrap cluster).
confluent.telemetry.exporter._c3plusplus.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3plusplus.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3plusplus.buffer.inflight.submissions.max=10
confluent.telemetry.metrics.collector.interval.ms=60000
Configurations for clusters with 100,000 to 400,000 replicas
Clusters with a replica count of 100,000 - 200,000:
confluent.telemetry.exporter._c3plusplus.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3plusplus.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3plusplus.buffer.inflight.submissions.max=20
confluent.telemetry.metrics.collector.interval.ms=60000
Clusters with a replica count of 200,000 - 400,000:
confluent.telemetry.exporter._c3plusplus.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3plusplus.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3plusplus.buffer.inflight.submissions.max=20
confluent.telemetry.metrics.collector.interval.ms=120000
For clusters with a replica count of 200,000 - 400,000, also update the following Control Center (Legacy) configuration:
confluent.controlcenter.prometheus.trigger.threshold.time=2m
Perform a rolling restart for the brokers (zero downtime). For more information, see Rolling restart.
systemctl restart confluent-server
(Optional) Set up log rotation for Prometheus and Alertmanager.
Create a new configuration file at /etc/logrotate.d/prometheus with the following content:
/var/log/confluent/control-center/prometheus.log {
    size 10MB
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
Create a script at /usr/local/bin/logrotate-prometheus.sh:
#!/bin/bash
/usr/sbin/logrotate -s /var/lib/logrotate/status-prometheus /etc/logrotate.d/prometheus
Make the script executable
chmod +x /usr/local/bin/logrotate-prometheus.sh
To schedule with Cron, add the following line to your crontab (crontab -e):
*/10 * * * * /usr/local/bin/logrotate-prometheus.sh >> /tmp/prometheus-rotate.log 2>&1
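To check the rotation configuration without actually rotating anything, you can run logrotate in debug (dry-run) mode first. This is an optional sanity check.
/usr/sbin/logrotate -d /etc/logrotate.d/prometheus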
Restart Prometheus
systemctl restart prometheus
Perform similar steps for Alertmanager logs.
Create a new configuration file at /etc/logrotate.d/alertmanager with the following content:
/var/log/confluent/control-center/alertmanager.log {
    size 10MB
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
Create a script at /usr/local/bin/logrotate-alertmanager.sh:
#!/bin/bash
/usr/sbin/logrotate -s /var/lib/logrotate/status-alertmanager /etc/logrotate.d/alertmanager
Make the script executable
chmod +x /usr/local/bin/logrotate-alertmanager.sh
To schedule with Cron, add the following line to your crontab (crontab -e):
*/10 * * * * /usr/local/bin/logrotate-alertmanager.sh >> /tmp/alertmanager-rotate.log 2>&1
Restart Alertmanager
systemctl restart alertmanager
Verify Control Center is running¶
After the installation is complete, visit http(s)://<c3-url>:9021
and wait for the metrics to start showing up in Control Center. It may take
a couple of minutes. Control Center looks exactly like Control Center (Legacy).
To confirm Control Center is running, use the following steps:
- Open the network tab in your browser's developer tools.
- Reload Control Center.
- Locate the following API call: /2.0/feature/flags
- Verify the following key is present in the response: confluent.controlcenter.prometheus.enable: true
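You can also query the feature flags endpoint directly from the command line. The exact URL and any required authentication depend on your deployment, so treat this as a sketch:
curl -s http://<c3-url>:9021/2.0/feature/flags | grep prometheus.enable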
Confluent Ansible installation steps¶
For Confluent Ansible installation of Control Center, see Configure Ansible Playbooks for Confluent Platform.
Confluent for Kubernetes installation steps¶
For Confluent for Kubernetes (CFK) installation of Control Center, see Monitor Confluent Platform with Confluent for Kubernetes.
Security configuration¶
Control Center introduces components like Prometheus and Alertmanager. The security configuration you use to secure communication for Control Center depends on the version of Confluent Platform you use.
Considerations:
- Control Center supports TLS + Basic Auth for Confluent Platform versions 7.5.x and higher
- Control Center supports mTLS for Confluent Platform versions 7.9.1 and higher
For more information, see Control Center Security on Confluent Platform.
Migration¶
Migration of metrics from Control Center (Legacy) to Control Center is not supported. For migration of alerts, see Control Center (Legacy) to Confluent Control Center Alert Migration.
Considerations:
- For clusters where historical metrics are of no value, you can shut down Control Center (Legacy) as soon as Control Center is up and running.
- For clusters where historical metrics are needed (say, for a period of N days), consider the following recommendations:
- Run both Control Center (Legacy) and Control Center simultaneously for N days.
- Control Center (Legacy) users should continue using Control Center (Legacy) until the N days of history is populated in Control Center.
- Once historical metrics are available in Control Center, you can shut down Control Center (Legacy) and move users to Control Center.