K8s – Installing and Deploying a Kafka and ZooKeeper Cluster (with Support for Access from Outside the K8s Cluster)

    This article demonstrates how to deploy a Kafka cluster on K8s. Once the cluster is built, the Kafka service can be accessed not only from inside K8s but also from outside the K8s cluster. There are usually two ways to run clustered services: one is StatefulSet, the other is Service & Deployment. Here we use a StatefulSet to build the ZooKeeper cluster and Service & Deployment to build the Kafka cluster.

I. Create NFS Storage

   
The NFS storage mainly provides stable back-end storage for Kafka and ZooKeeper, so that when a Kafka or ZooKeeper Pod fails and is restarted or migrated, it can still access its original data.

1. Install NFS

Here I choose to create the NFS storage on the master node. First, execute the following commands to install NFS:
yum -y install nfs-utils
yum -y install rpcbind

2. Create shared folders

(1) Execute the following commands to create 6 folders:
mkdir -p /usr/local/k8s/zookeeper/pv{1..3}
mkdir -p /usr/local/k8s/kafka/pv{1..3}

(2) Edit the /etc/exports file:

vi /etc/exports

(3) Add the following entries to it:

/usr/local/k8s/kafka/pv1 *(rw,sync,no_root_squash)
/usr/local/k8s/kafka/pv2 *(rw,sync,no_root_squash)
/usr/local/k8s/kafka/pv3 *(rw,sync,no_root_squash)
/usr/local/k8s/zookeeper/pv1 *(rw,sync,no_root_squash)
/usr/local/k8s/zookeeper/pv2 *(rw,sync,no_root_squash)
/usr/local/k8s/zookeeper/pv3 *(rw,sync,no_root_squash)

(4) After saving and exiting, execute the following commands to restart the services and enable NFS at startup:

systemctl restart rpcbind
systemctl restart nfs
systemctl enable nfs

(5) Execute the exportfs -v command to display all shared directories:
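The output should contain one entry per shared directory, roughly like the following (the exact option list depends on your NFS version):

/usr/local/k8s/kafka/pv1    <world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)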


(6) Meanwhile, each of the other Nodes needs to execute the following command to install the nfs-utils client:
yum -y install nfs-utils

(7) The other Nodes can then execute the following command (the IP is the Master node's IP) to view the shared folders on the Master node:

showmount -e 170.106.37.33
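If everything is in order, the output should look roughly like this:

Export list for 170.106.37.33:
/usr/local/k8s/zookeeper/pv3 *
/usr/local/k8s/zookeeper/pv2 *
/usr/local/k8s/zookeeper/pv1 *
/usr/local/k8s/kafka/pv3     *
/usr/local/k8s/kafka/pv2     *
/usr/local/k8s/kafka/pv1     *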

II. Create the ZooKeeper Cluster

1. Create the ZooKeeper PVs

(1) First create a zookeeper-pv.yaml file with the following contents.

Note: 170.106.37.33 needs to be changed to your actual NFS server address:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk01
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 170.106.37.33
    path: "/usr/local/k8s/zookeeper/pv1"
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk02
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 170.106.37.33
    path: "/usr/local/k8s/zookeeper/pv2"
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk03
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 170.106.37.33
    path: "/usr/local/k8s/zookeeper/pv3"
  persistentVolumeReclaimPolicy: Recycle

(2) Then execute the following command to create the PVs:

kubectl apply -f zookeeper-pv.yaml

(3) Execute the following command to check whether they were created successfully:

kubectl get pv
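If the three PVs were created successfully, the output should look roughly like this (ages will differ):

NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
k8s-pv-zk01   1Gi        RWO            Recycle          Available           anything                10s
k8s-pv-zk02   1Gi        RWO            Recycle          Available           anything                10s
k8s-pv-zk03   1Gi        RWO            Recycle          Available           anything                10s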

2. Create the ZooKeeper cluster

(1) We are going to build a ZooKeeper cluster with 3 nodes here. First create a zookeeper.yaml file with the following contents:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
    - name: server
      port: 2888
    - name: leader-election
      port: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 31811
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: "zk-hs"
  replicas: 3  # by default is 1
  selector:
    matchLabels:
      app: zk  # has to match .spec.template.metadata.labels
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk  # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: zk
          imagePullPolicy: Always
          image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
          resources:
            requests:
              memory: "2G"
              cpu: "1"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=4G \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "anything"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi

(2) Then execute the following command to start creating the cluster:
kubectl apply -f zookeeper.yaml
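Creating the Pods takes a little while; you can optionally watch them come up with:

kubectl get pods -w -l app=zk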

(3) Execute the following commands to check whether the creation succeeded:

kubectl get pods
kubectl get service
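On success the output should look roughly like this (cluster IPs and ages will differ; 10.96.x.x is a placeholder):

NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          2m
zk-1   1/1     Running   0          2m
zk-2   1/1     Running   0          2m

NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
zk-hs   ClusterIP   None         <none>        2888/TCP,3888/TCP   2m
zk-cs   NodePort    10.96.x.x    <none>        2181:31811/TCP      2m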


III. Create the Kafka Cluster

(1) We are going to build a Kafka cluster with 3 nodes here. First create a kafka.yaml file with the following contents.

Notes:

  • The NFS address needs to be changed to your actual NFS server address.
  • status.hostIP is the host's IP, i.e. the IP of the Node the Pod is actually deployed on (in this article it is deployed directly on the Master node). Setting KAFKA_ADVERTISED_HOST_NAME to the host IP ensures that Kafka can also be accessed from outside the K8s cluster.
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-1
  labels:
    app: kafka-service-1
spec:
  type: NodePort
  ports:
    - port: 9092
      name: kafka-service-1
      targetPort: 9092
      nodePort: 30901
      protocol: TCP
  selector:
    app: kafka-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-2
  labels:
    app: kafka-service-2
spec:
  type: NodePort
  ports:
    - port: 9092
      name: kafka-service-2
      targetPort: 9092
      nodePort: 30902
      protocol: TCP
  selector:
    app: kafka-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-3
  labels:
    app: kafka-service-3
spec:
  type: NodePort
  ports:
    - port: 9092
      name: kafka-service-3
      targetPort: 9092
      nodePort: 30903
      protocol: TCP
  selector:
    app: kafka-3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-1
  template:
    metadata:
      labels:
        app: kafka-1
    spec:
      containers:
        - name: kafka-1
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zk-0.zk-hs.default.svc.cluster.local:2181,zk-1.zk-hs.default.svc.cluster.local:2181,zk-2.zk-hs.default.svc.cluster.local:2181
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_CREATE_TOPICS
              value: mytopic:2:1
            - name: KAFKA_LISTENERS
              value: PLAINTEXT://0.0.0.0:9092
            - name: KAFKA_ADVERTISED_PORT
              value: "30901"
            - name: KAFKA_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/kafka
      volumes:
        - name: datadir
          nfs:
            server: 170.106.37.33
            path: "/usr/local/k8s/kafka/pv1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-2
  template:
    metadata:
      labels:
        app: kafka-2
    spec:
      containers:
        - name: kafka-2
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zk-0.zk-hs.default.svc.cluster.local:2181,zk-1.zk-hs.default.svc.cluster.local:2181,zk-2.zk-hs.default.svc.cluster.local:2181
            - name: KAFKA_BROKER_ID
              value: "2"
            - name: KAFKA_LISTENERS
              value: PLAINTEXT://0.0.0.0:9092
            - name: KAFKA_ADVERTISED_PORT
              value: "30902"
            - name: KAFKA_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/kafka
      volumes:
        - name: datadir
          nfs:
            server: 170.106.37.33
            path: "/usr/local/k8s/kafka/pv2"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-3
  template:
    metadata:
      labels:
        app: kafka-3
    spec:
      containers:
        - name: kafka-3
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zk-0.zk-hs.default.svc.cluster.local:2181,zk-1.zk-hs.default.svc.cluster.local:2181,zk-2.zk-hs.default.svc.cluster.local:2181
            - name: KAFKA_BROKER_ID
              value: "3"
            - name: KAFKA_LISTENERS
              value: PLAINTEXT://0.0.0.0:9092
            - name: KAFKA_ADVERTISED_PORT
              value: "30903"
            - name: KAFKA_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/kafka
      volumes:
        - name: datadir
          nfs:
            server: 170.106.37.33
            path: "/usr/local/k8s/kafka/pv3"

(2) Then execute the following command to start creating the cluster:
kubectl apply -f kafka.yaml

(3) Execute the following commands to check whether the creation succeeded:

kubectl get pods
kubectl get service
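On success the output should look roughly like this (Pod name suffixes, cluster IPs, and ages will differ; the x characters are placeholders):

NAME                                  READY   STATUS    RESTARTS   AGE
kafka-deployment-1-59f87c7cbb-99k46   1/1     Running   0          1m
kafka-deployment-2-xxxxxxxxxx-xxxxx   1/1     Running   0          1m
kafka-deployment-3-xxxxxxxxxx-xxxxx   1/1     Running   0          1m

NAME              TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kafka-service-1   NodePort   10.96.x.x    <none>        9092:30901/TCP   1m
kafka-service-2   NodePort   10.96.x.x    <none>        9092:30902/TCP   1m
kafka-service-3   NodePort   10.96.x.x    <none>        9092:30903/TCP   1m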

IV. Testing

1. Testing inside the K8s cluster

(1) First, execute the following command to enter one of the Kafka containers (the Pod name will differ in your environment; look it up with kubectl get pods):
kubectl exec -it kafka-deployment-1-59f87c7cbb-99k46 -- /bin/bash

(2) Then execute the following command to create a topic named test_topic:

kafka-topics.sh --create --topic test_topic --zookeeper zk-0.zk-hs.default.svc.cluster.local:2181,zk-1.zk-hs.default.svc.cluster.local:2181,zk-2.zk-hs.default.svc.cluster.local:2181 --partitions 1 --replication-factor 1
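On success, kafka-topics.sh prints a confirmation along the lines of "Created topic test_topic." (exact wording varies between Kafka versions). You can also verify the topic with:

kafka-topics.sh --describe --topic test_topic --zookeeper zk-0.zk-hs.default.svc.cluster.local:2181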

(3) After the topic is created, execute the following command to start a producer. Once it is running, you can type messages directly into the console to send them; each line entered in the console is sent as one message.

kafka-console-producer.sh --broker-list kafka-service-1:9092,kafka-service-2:9092,kafka-service-3:9092 --topic test_topic

(4) Open another terminal connected to the server, enter the container again, and execute the following command to start a consumer:

kafka-console-consumer.sh --bootstrap-server kafka-service-1:9092,kafka-service-2:9092,kafka-service-3:9092 --topic test_topic

(5) Switch back to the producer client and send a few messages, then watch them appear in the consumer's output to see Kafka's basic message handling.
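A session might look like this (the messages themselves are arbitrary examples):

# producer console
>hello kafka
>second message

# consumer console
hello kafka
second message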


2. Testing from outside the cluster

    Use a Kafka client tool (such as Offset Explorer) to connect to the Kafka cluster (you can connect either through the ZooKeeper address or through the Kafka broker addresses); the connection should succeed and the data should be visible.
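If the Kafka command-line tools are installed on a machine outside the cluster, you can also run the same producer/consumer test directly against the NodePorts (replace 170.106.37.33 with your actual node IP):

kafka-console-producer.sh --broker-list 170.106.37.33:30901,170.106.37.33:30902,170.106.37.33:30903 --topic test_topic
kafka-console-consumer.sh --bootstrap-server 170.106.37.33:30901,170.106.37.33:30902,170.106.37.33:30903 --topic test_topic --from-beginning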
