
# Kubernetes Development Environment Setup and CI/CD Integration
## 1. TLS/HTTPS with Cert Manager
### 1.1 Preparing the Configuration
First, create a directory for the Cert Manager configuration manifests:
```bash
mkdir -p ~/workspace/apk8s/k8s/cluster-apk8s-dev1/000-cluster/10-cert-manager
```
### 1.2 Creating the Namespace
Create a `00-namespace.yml` file with the following content:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
  labels:
    certmanager.k8s.io/disable-validation: "true"
```
Apply the new namespace:
```bash
kubectl apply -f 00-namespace.yml
```
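As an optional sanity check (not part of the original steps), confirm that the namespace exists and carries the validation-disabling label:
```bash
# Confirm the namespace exists and carries the disable-validation label
kubectl get namespace cert-manager --show-labels
```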
### 1.3 Fetching and Applying the CRDs
Fetch the Cert Manager custom resource definitions (CRDs) and save them to `02-crd.yml`:
```bash
curl -L https://github.com/jetstack/cert-manager/releases/download/v0.8.0/cert-manager.yaml >02-crd.yml
```
Apply the Cert Manager CRDs:
```bash
kubectl apply -f 02-crd.yml
```
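As another optional check, the cert-manager CRDs should now be registered with the API server; a simple grep over the CRD list is one way to verify this:
```bash
# The certmanager.k8s.io CRDs (Certificate, Issuer, ClusterIssuer, ...) should be listed
kubectl get crds | grep certmanager.k8s.io
```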
### 1.4 Ensuring the Pods Are Running
Make sure all of the Pods backing Cert Manager are in the Running state:
```bash
kubectl get pods -n cert-manager
```
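If you prefer to block until the Pods are ready rather than polling by hand, a rough `kubectl wait` sketch looks like this:
```bash
# Wait up to five minutes for every cert-manager Pod to report Ready
kubectl wait --for=condition=Ready pod --all -n cert-manager --timeout=300s
```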
### 1.5 Creating a ClusterIssuer
Create a `03-clusterissuer.yml` file with the following content:
```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: YOUR_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt-production
    http01: {}
```
Apply the Cert Manager ClusterIssuer:
```bash
kubectl apply -f 03-clusterissuer.yml
```
List the ClusterIssuers:
```bash
kubectl get clusterissuers
```
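The ClusterIssuer does not request certificates on its own; an Ingress opts in by referencing it through an annotation. Below is a minimal, hypothetical sketch assuming an NGINX ingress controller, a placeholder host `dev.apk8s.example.com`, and a backend Service named `example-web`; the `certmanager.k8s.io/cluster-issuer` annotation matches the v1alpha1 API group used by cert-manager v0.8.0.
```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-web                        # hypothetical Ingress for illustration
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-production
spec:
  rules:
  - host: dev.apk8s.example.com            # placeholder host
    http:
      paths:
      - backend:
          serviceName: example-web         # hypothetical backend Service
          servicePort: 80
  tls:
  - hosts:
    - dev.apk8s.example.com
    secretName: dev-apk8s-example-com-tls  # cert-manager stores the issued certificate here
EOF
```
With the HTTP-01 solver configured on the issuer, cert-manager answers the ACME challenge through this Ingress and writes the signed certificate into the referenced Secret.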
### 1.6 Workflow Summary
The following diagram summarizes the TLS/HTTPS configuration workflow with Cert Manager:
```mermaid
graph LR
A[Create directory] --> B[Create namespace]
B --> C[Fetch and apply CRDs]
C --> D[Ensure Pods are running]
D --> E[Create ClusterIssuer]
E --> F[Apply ClusterIssuer]
F --> G[List ClusterIssuers]
```
## 2. Persistent Volumes with Rook Ceph
### 2.1 Background and Preparation
Persistent storage is an essential, and often challenging, requirement for some Kubernetes deployments. Here, Ceph orchestrated by Rook backs Kubernetes persistent volumes. Create the directory:
```bash
mkdir -p ~/workspace/apk8s/k8s/cluster-apk8s-dev1/000-cluster/20-rook-ceph
```
### 2.2 Downloading and Applying the Configuration
#### 2.2.1 Download and apply the namespace and CRDs
```bash
curl -L https://github.com/rook/rook/raw/release-1.0/cluster/examples/kubernetes/ceph/common.yaml >00-namespace-crd.yml
kubectl apply -f 00-namespace-crd.yml
```
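A quick, optional verification that the namespace and the Ceph CRDs were created:
```bash
# The rook-ceph namespace and the ceph.rook.io CRDs should now exist
kubectl get namespace rook-ceph
kubectl get crds | grep ceph.rook.io
```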
#### 2.2.2 Download and apply the Rook Ceph Operator deployment
```bash
curl -L https://github.com/rook/rook/raw/release-1.0/cluster/examples/kubernetes/ceph/operator.yaml >30-deployment-oper.yml
kubectl apply -f 30-deployment-oper.yml
```
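Before creating the cluster, it can help to wait for the operator to settle. This sketch assumes the `app=rook-ceph-operator` label that the upstream `operator.yaml` applies to the operator Pods:
```bash
# Wait for the Rook operator Pod to become Ready (label selector is an assumption from the upstream manifests)
kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s
```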
#### 2.2.3 Download and apply the Rook Ceph cluster configuration
```bash
curl -L https://github.com/rook/rook/raw/release-1.0/cluster/examples/kubernetes/ceph/cluster-test.yaml >60-cluster-rook-ceph.yml
kubectl apply -f 60-cluster-rook-ceph.yml
```
#### 2.2.4 Download and apply the Rook Ceph toolbox deployment
```bash
curl -L https://github.com/rook/rook/raw/release-1.0/cluster/examples/kubernetes/ceph/toolbox.yaml >30-deployment-toolbox.yml
kubectl apply -f 30-deployment-toolbox.yml
```
### 2.3 Checking the Storage Cluster Status
List the Pods in the `rook-ceph` namespace:
```bash
kubectl get pods -n rook-ceph
```
Open a bash shell inside the `rook-ceph-tools` Pod and check the storage cluster status:
```bash
# Substitute the name of your rook-ceph-tools Pod from the listing above
kubectl exec -it rook-ceph-tools-5f49756bf-m6dxv -n rook-ceph -- bash
ceph status
```
Sample output:
```
  cluster:
    id:     f67747e5-eb2a-4301-8045-c1e210488433
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum a (age 22m)
    mgr: a(active, since 21m)
    osd: 2 osds: 2 up (since 21m), 2 in (since 21m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 107 GiB / 116 GiB avail
    pgs:
```
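With the cluster reporting `HEALTH_OK`, workloads consume Rook Ceph storage through a StorageClass and PersistentVolumeClaims. The sketch below assumes a StorageClass named `rook-ceph-block` has already been created from Rook's example `storageclass.yaml`; the claim name and size are placeholders.
```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data                  # hypothetical claim for illustration
  namespace: default
spec:
  storageClassName: rook-ceph-block   # assumed StorageClass from Rook's examples
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```
Once the claim is Bound, it can be mounted into a Pod like any other volume.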
### 2.4 Workflow Summary
The following diagram summarizes the persistent volume workflow with Rook Ceph:
```mermaid
graph LR
A[Create directory] --> B[Download and apply namespace and CRDs]
B --> C[Apply Rook Ceph Operator]
C --> D[Apply Rook Ceph cluster]
D --> E[Apply toolbox deployment]
E --> F[Check storage cluster status]
```