Deploy MongoDB Ops Manager Resources on Multiple Kubernetes Clusters
To make your multi-cluster MongoDB Ops Manager and Application Database deployment resilient to entire data center or zone failures, deploy the MongoDB Ops Manager application and the Application Database on multiple Kubernetes clusters.
To learn more about the architecture, networking, limitations, and performance of multi-Kubernetes-cluster deployments for MongoDB Ops Manager resources, see:
Overview
When you use the procedures in this section to deploy the MongoDB Ops Manager application and the Application Database, you:
Use GKE (Google Kubernetes Engine) and Istio as a service mesh as tools that help demonstrate the multi-Kubernetes-cluster deployment.
Install the Kubernetes Operator on one of the member Kubernetes clusters, known as the operator cluster. The operator cluster acts as the hub in the "hub and spokes" pattern that the Kubernetes Operator uses to manage deployments on multiple Kubernetes clusters.
Deploy the Kubernetes Operator in the $OPERATOR_NAMESPACE on the operator cluster and configure it to watch $NAMESPACE and manage all member Kubernetes clusters.

Deploy the Application Database and the MongoDB Ops Manager application on a single member Kubernetes cluster to demonstrate the similarity of a multi-cluster deployment to a single-cluster one.
A single-cluster deployment with spec.topology and spec.applicationDatabase.topology set to MultiCluster prepares the deployment for adding more Kubernetes clusters to it.

Deploy an additional Application Database replica set on the second member Kubernetes cluster to improve the Application Database's resilience. You also deploy an additional MongoDB Ops Manager application instance in the second member Kubernetes cluster.
Create valid certificates for TLS encryption, and establish TLS-encrypted connections to and from the MongoDB Ops Manager application and between the Application Database's replica set members. When running over HTTPS, MongoDB Ops Manager runs on port 8443 by default.

Enable backup using S3-compatible storage and deploy the Backup Daemon on the third member Kubernetes cluster. To simplify the setup of the S3-compatible storage buckets, you deploy the MinIO Operator. You enable the Backup Daemon on only one member cluster in your deployment, but you can also configure other member clusters to host Backup Daemon resources. Only S3 backups are supported in multi-cluster MongoDB Ops Manager deployments.
Prerequisites
Install Tools
Before you begin the deployment, install the following required tools:
Prepare a GCP project so that you can use it to create GKE (Google Kubernetes Engine) clusters. In the following procedure, you create three new GKE clusters with a total of seven e2-standard-4 low-cost Spot VMs.
Authorize the gcloud CLI
Install the gcloud CLI and authorize into it:
gcloud auth login
Install the kubectl mongodb Plugin
The kubectl mongodb plugin automates the configuration of the Kubernetes clusters. This allows the Kubernetes Operator to deploy resources, the necessary roles, and services for accounts for the MongoDB Ops Manager application, the Application Database, and MongoDB resources on these clusters.
To install the kubectl mongodb plugin:
Download the desired Kubernetes Operator package version.
Download the desired Kubernetes Operator package version from the release page of the MongoDB Enterprise Kubernetes Operator repository.
The package's name uses this pattern: kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.
Use one of the following packages:
kubectl-mongodb_{{ .Version }}_darwin_amd64.tar.gz
kubectl-mongodb_{{ .Version }}_darwin_arm64.tar.gz
kubectl-mongodb_{{ .Version }}_linux_amd64.tar.gz
kubectl-mongodb_{{ .Version }}_linux_arm64.tar.gz
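As a sketch of the naming pattern above, the following derives the package file name for the current machine. The version string is a placeholder, so substitute the release you actually need; the commented curl and tar lines show one plausible way to fetch and unpack it from the repository's releases.

```shell
# Placeholder version: substitute the Kubernetes Operator release you need.
VERSION="1.27.0"
# Map uname output onto the {{ .Os }} and {{ .Arch }} values used in the package names.
OS="$(uname -s | tr '[:upper:]' '[:lower:]')"                        # linux or darwin
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')"  # amd64 or arm64
PACKAGE="kubectl-mongodb_${VERSION}_${OS}_${ARCH}.tar.gz"
echo "${PACKAGE}"

# One way to fetch and unpack the plugin (uncomment to use):
# curl -LO "https://github.com/mongodb/mongodb-enterprise-kubernetes/releases/download/${VERSION}/${PACKAGE}"
# tar -xzf "${PACKAGE}"
```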
Locate the kubectl mongodb plugin binary and copy it to its desired destination.
Find the kubectl-mongodb binary in the unpacked directory and move it to its desired destination, inside the PATH for the Kubernetes Operator user, as shown in the following example:
mv kubectl-mongodb /usr/local/bin/kubectl-mongodb
Now you can run the kubectl mongodb plugin using the following commands:
kubectl mongodb multicluster setup
kubectl mongodb multicluster recover
To learn more about the supported flags, see the MongoDB kubectl Plugin Reference.
Clone the MongoDB Enterprise Kubernetes Operator Repository
Clone the MongoDB Enterprise Kubernetes Operator repository mongodb-enterprise-kubernetes, change into its directory, and check out the current version.
git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
cd mongodb-enterprise-kubernetes
git checkout master
cd public/samples/ops-manager-multi-cluster
Important
Some steps in this guide work only if you run them from the public/samples/ops-manager-multi-cluster directory.
Set up Environment Variables
All steps in this guide reference the environment variables defined in env_variables.sh.
export MDB_GKE_PROJECT="### Set your GKE project name here ###"

export NAMESPACE="mongodb"
export OPERATOR_NAMESPACE="mongodb-operator"

# comma-separated key=value pairs
export OPERATOR_ADDITIONAL_HELM_VALUES=""

# Adjust the values for each Kubernetes cluster in your deployment.
# The deployment script references the following variables to get values for each cluster.
export K8S_CLUSTER_0="k8s-mdb-0"
export K8S_CLUSTER_0_ZONE="europe-central2-a"
export K8S_CLUSTER_0_NUMBER_OF_NODES=3
export K8S_CLUSTER_0_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_0_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_0_ZONE}_${K8S_CLUSTER_0}"

export K8S_CLUSTER_1="k8s-mdb-1"
export K8S_CLUSTER_1_ZONE="europe-central2-b"
export K8S_CLUSTER_1_NUMBER_OF_NODES=3
export K8S_CLUSTER_1_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_1_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_1_ZONE}_${K8S_CLUSTER_1}"

export K8S_CLUSTER_2="k8s-mdb-2"
export K8S_CLUSTER_2_ZONE="europe-central2-c"
export K8S_CLUSTER_2_NUMBER_OF_NODES=1
export K8S_CLUSTER_2_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_2_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_2_ZONE}_${K8S_CLUSTER_2}"

# Comment out the following line so that the script does not create preemptible nodes.
# DO NOT USE preemptible nodes in production.
export GKE_SPOT_INSTANCES_SWITCH="--preemptible"

export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store

# minio defaults
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"

export OFFICIAL_OPERATOR_HELM_CHART="mongodb/enterprise-operator"
export OPERATOR_HELM_CHART="${OFFICIAL_OPERATOR_HELM_CHART}"

# (Optional) Change the following setting when using the external URL.
# This env variable is used in OpenSSL configuration to generate
# server certificates for Ops Manager Application.
export OPS_MANAGER_EXTERNAL_DOMAIN="om-svc.${NAMESPACE}.svc.cluster.local"

export OPS_MANAGER_VERSION="7.0.4"
export APPDB_VERSION="7.0.9-ubi8"
Adjust the settings in the previous example as needed, following the instructions in the comments, and source them into your shell as follows:
source env_variables.sh
Important
Each time you update env_variables.sh, run source env_variables.sh to ensure that the scripts in this section use the updated variables.
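After sourcing, a quick sanity check can confirm that the derived context-name variables expanded the way gcloud later names kubeconfig contexts. This is a sketch with placeholder values standing in for the ones you set in env_variables.sh.

```shell
# Placeholder values standing in for the ones set in env_variables.sh.
export MDB_GKE_PROJECT="my-gke-project"
export K8S_CLUSTER_0="k8s-mdb-0"
export K8S_CLUSTER_0_ZONE="europe-central2-a"
# GKE kubeconfig context names follow the gke_<project>_<zone>_<cluster> pattern.
export K8S_CLUSTER_0_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_0_ZONE}_${K8S_CLUSTER_0}"
echo "${K8S_CLUSTER_0_CONTEXT_NAME}"
# → gke_my-gke-project_europe-central2-a_k8s-mdb-0
```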
Procedure
This procedure deploys a MongoDB Ops Manager instance on multiple Kubernetes clusters.
Create Kubernetes clusters.
You may skip this step if you have already installed and configured your own Kubernetes clusters with a service mesh.
Create three GKE (Google Kubernetes Engine) clusters:
gcloud container clusters create "${K8S_CLUSTER_0}" \
  --zone="${K8S_CLUSTER_0_ZONE}" \
  --num-nodes="${K8S_CLUSTER_0_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_0_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

gcloud container clusters create "${K8S_CLUSTER_1}" \
  --zone="${K8S_CLUSTER_1_ZONE}" \
  --num-nodes="${K8S_CLUSTER_1_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_1_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

gcloud container clusters create "${K8S_CLUSTER_2}" \
  --zone="${K8S_CLUSTER_2_ZONE}" \
  --num-nodes="${K8S_CLUSTER_2_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_2_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

Set the default gcloud project:
gcloud config set project "${MDB_GKE_PROJECT}"

Obtain the credentials and save the contexts to the current kubeconfig file. By default, this file is located at ~/.kube/config and is referenced by the $KUBECONFIG environment variable.

gcloud container clusters get-credentials "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}"
gcloud container clusters get-credentials "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}"
gcloud container clusters get-credentials "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}"

All kubectl commands reference these contexts using the following variables:

$K8S_CLUSTER_0_CONTEXT_NAME
$K8S_CLUSTER_1_CONTEXT_NAME
$K8S_CLUSTER_2_CONTEXT_NAME

Verify that kubectl has access to the Kubernetes clusters:

echo "Nodes in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get nodes
echo; echo "Nodes in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" get nodes
echo; echo "Nodes in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" get nodes

Example output:

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-0-default-pool-267f1e8f-d0dg   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-0-default-pool-267f1e8f-pmgh   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-0-default-pool-267f1e8f-vgj9   Ready    <none>   38m   v1.29.7-gke.1104000

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-1-default-pool-263d341f-3tbp   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-1-default-pool-263d341f-4f26   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-1-default-pool-263d341f-z751   Ready    <none>   38m   v1.29.7-gke.1104000

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-2-default-pool-d0da5fd1-chm1   Ready    <none>   38m   v1.29.7-gke.1104000

Install the Istio service mesh to allow cross-cluster DNS resolution and network connectivity between the Kubernetes clusters:
CTX_CLUSTER1=${K8S_CLUSTER_0_CONTEXT_NAME} \
CTX_CLUSTER2=${K8S_CLUSTER_1_CONTEXT_NAME} \
CTX_CLUSTER3=${K8S_CLUSTER_2_CONTEXT_NAME} \
ISTIO_VERSION="1.20.2" \
../multi-cluster/install_istio_separate_network.sh
Create namespaces.
Note
To enable sidecar injection in Istio, the following commands add the istio-injection=enabled label to the $OPERATOR_NAMESPACE and mongodb namespaces on each member cluster. If you use another service mesh, configure it to handle network traffic in the created namespaces.
Create a separate namespace, mongodb-operator, referenced by the $OPERATOR_NAMESPACE environment variable, for the Kubernetes Operator deployment.

Create the same $OPERATOR_NAMESPACE on each member Kubernetes cluster. This is necessary so that the kubectl mongodb plugin can create a service account for the Kubernetes Operator on each member cluster. The Kubernetes Operator uses these service accounts on the operator cluster to perform operations on each member cluster.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

On each member cluster, including the member cluster that serves as the operator cluster, create another separate namespace, mongodb. The Kubernetes Operator uses this namespace for MongoDB Ops Manager resources and components.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite
Optional. Create image pull secrets so the clusters can pull from private image registries.
This step is optional if you use the official Helm charts and images from the Quay registry.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
Optional. Check cluster connectivity.
The following optional scripts verify whether the service mesh is configured correctly for cross-cluster DNS resolution and connectivity.
Run this script on cluster 0:
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver0
  template:
    metadata:
      labels:
        app: echoserver0
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver0
          ports:
            - containerPort: 8080
EOF

Run this script on cluster 1:
kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver1
  template:
    metadata:
      labels:
        app: echoserver1
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver1
          ports:
            - containerPort: 8080
EOF

Run this script on cluster 2:
kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver2
  template:
    metadata:
      labels:
        app: echoserver2
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver2
          ports:
            - containerPort: 8080
EOF

Run this script to wait for the StatefulSets to be created:
kubectl wait --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver0-0 --timeout=60s
kubectl wait --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver1-0 --timeout=60s
kubectl wait --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver2-0 --timeout=60s

Create a pod service on cluster 0:
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver0-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver0-0"
EOF

Create a pod service on cluster 1:
kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver1-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver1-0"
EOF

Create a pod service on cluster 2:
kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver2-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver2-0"
EOF

Create a round-robin service on cluster 0:
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver0
EOF

Create a round-robin service on cluster 1:
kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver1
EOF

Create a round-robin service on cluster 2:
kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver2
EOF

Verify the pod in cluster 0 from cluster 1:
source_cluster=${K8S_CLUSTER_1_CONTEXT_NAME}
target_pod="echoserver0-0"
source_pod="echoserver1-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Example output:

Checking cross-cluster DNS resolution and connectivity from echoserver1-0 in gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1 to echoserver0-0
SUCCESS

Verify the pod in cluster 1 from cluster 0:
source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
target_pod="echoserver1-0"
source_pod="echoserver0-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Example output:

Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver1-0
SUCCESS

Verify the pod in cluster 1 from cluster 2:
source_cluster=${K8S_CLUSTER_2_CONTEXT_NAME}
target_pod="echoserver1-0"
source_pod="echoserver2-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Example output:

Checking cross-cluster DNS resolution and connectivity from echoserver2-0 in gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2 to echoserver1-0
SUCCESS
Verify the pod in cluster 2 from cluster 0:
source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
target_pod="echoserver2-0"
source_pod="echoserver0-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Example output:

Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver2-0
SUCCESS

Run the cleanup script:
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver0
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver1
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver2
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver0-0
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver1-0
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver2-0
Deploy the multi-cluster configuration.
In this step, you use the kubectl mongodb plugin to automate the Kubernetes cluster configuration that the Kubernetes Operator needs to manage workloads on multiple Kubernetes clusters.

Because you configure the Kubernetes clusters before you install the Kubernetes Operator, when you deploy the Kubernetes Operator for multi-Kubernetes-cluster operation, all the necessary multi-cluster configuration is already in place.
As stated in the overview, the Kubernetes Operator has the configuration of three member clusters that it can use to deploy MongoDB Ops Manager and MongoDB databases. The first cluster also serves as the operator cluster, where you install the Kubernetes Operator and deploy the custom resources.
kubectl mongodb multicluster setup \
  --central-cluster="${K8S_CLUSTER_0_CONTEXT_NAME}" \
  --member-clusters="${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}" \
  --member-cluster-namespace="${NAMESPACE}" \
  --central-cluster-namespace="${OPERATOR_NAMESPACE}" \
  --create-service-account-secrets \
  --install-database-roles=true \
  --image-pull-secrets=image-registries-secret
Ensured namespaces exist in all clusters.
creating central cluster roles in cluster: gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
Ensured ServiceAccounts and Roles.
Creating KubeConfig secret mongodb-operator/mongodb-enterprise-operator-multi-cluster-kubeconfig in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
Ensured database Roles in member clusters.
Creating Member list Configmap mongodb-operator/mongodb-enterprise-operator-member-list in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
Install the Kubernetes Operator using the Helm chart.
Add and update the MongoDB Helm repository. Verify that the local Helm cache refers to the correct Kubernetes Operator version:
helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update mongodb
helm search repo "${OFFICIAL_OPERATOR_HELM_CHART}"

Example output:

"mongodb" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "mongodb" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME                          CHART VERSION   APP VERSION   DESCRIPTION
mongodb/enterprise-operator   1.27.0                        MongoDB Kubernetes Enterprise Operator

Install the Kubernetes Operator into the $OPERATOR_NAMESPACE, configured to watch $NAMESPACE and manage the three member Kubernetes clusters. At this point, the ServiceAccounts and Roles have already been deployed by the kubectl mongodb plugin. Therefore, the following scripts skip configuring them and set operator.createOperatorServiceAccount=false and operator.createResourcesServiceAccountsAndRoles=false. The scripts specify the multiCluster.clusters setting to instruct the Helm chart to deploy the Kubernetes Operator in multi-cluster mode.

helm upgrade --install \
  --debug \
  --kube-context "${K8S_CLUSTER_0_CONTEXT_NAME}" \
  mongodb-enterprise-operator-multi-cluster \
  "${OPERATOR_HELM_CHART}" \
  --namespace="${OPERATOR_NAMESPACE}" \
  --set namespace="${OPERATOR_NAMESPACE}" \
  --set operator.namespace="${OPERATOR_NAMESPACE}" \
  --set operator.watchNamespace="${NAMESPACE}" \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set operator.createResourcesServiceAccountsAndRoles=false \
  --set "multiCluster.clusters={${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}}" \
  --set "${OPERATOR_ADDITIONAL_HELM_VALUES:-"dummy=value"}"

Example output:

Release "mongodb-enterprise-operator-multi-cluster" does not exist. Installing it now.
NAME: mongodb-enterprise-operator-multi-cluster
LAST DEPLOYED: Mon Aug 26 10:55:49 2024
NAMESPACE: mongodb-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
dummy: value
multiCluster:
  clusters:
  - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
  - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
  - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
namespace: mongodb-operator
operator:
  createOperatorServiceAccount: false
  createResourcesServiceAccountsAndRoles: false
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
  watchNamespace: mongodb

COMPUTED VALUES:
agent:
  name: mongodb-agent-ubi
  version: 107.0.0.8502-1
database:
  name: mongodb-enterprise-database-ubi
  version: 1.27.0
dummy: value
initAppDb:
  name: mongodb-enterprise-init-appdb-ubi
  version: 1.27.0
initDatabase:
  name: mongodb-enterprise-init-database-ubi
  version: 1.27.0
initOpsManager:
  name: mongodb-enterprise-init-ops-manager-ubi
  version: 1.27.0
managedSecurityContext: false
mongodb:
  appdbAssumeOldFormat: false
  imageType: ubi8
  name: mongodb-enterprise-server
  repo: quay.io/mongodb
mongodbLegacyAppDb:
  name: mongodb-enterprise-appdb-database-ubi
  repo: quay.io/mongodb
multiCluster:
  clusterClientTimeout: 10
  clusters:
  - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
  - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
  - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
  kubeConfigSecretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
  performFailOver: true
namespace: mongodb-operator
operator:
  additionalArguments: []
  affinity: {}
  createOperatorServiceAccount: false
  createResourcesServiceAccountsAndRoles: false
  deployment_name: mongodb-enterprise-operator
  env: prod
  maxConcurrentReconciles: 1
  mdbDefaultArchitecture: non-static
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
  nodeSelector: {}
  operator_image_name: mongodb-enterprise-operator-ubi
  replicas: 1
  resources:
    limits:
      cpu: 1100m
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 200Mi
  tolerations: []
  vaultSecretBackend:
    enabled: false
    tlsSecretRef: ""
  version: 1.27.0
  watchNamespace: mongodb
  watchedResources:
  - mongodb
  - opsmanagers
  - mongodbusers
  webhook:
    installClusterRole: true
    registerConfiguration: true
opsManager:
  name: mongodb-enterprise-ops-manager-ubi
registry:
  agent: quay.io/mongodb
  appDb: quay.io/mongodb
  database: quay.io/mongodb
  imagePullSecrets: null
  initAppDb: quay.io/mongodb
  initDatabase: quay.io/mongodb
  initOpsManager: quay.io/mongodb
  operator: quay.io/mongodb
  opsManager: quay.io/mongodb
  pullPolicy: Always
subresourceEnabled: true

HOOKS:
MANIFEST:
---
# Source: enterprise-operator/templates/operator-roles.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongodb-enterprise-operator-mongodb-webhook
rules:
  - apiGroups:
      - "admissionregistration.k8s.io"
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - create
      - update
      - delete
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - delete
---
# Source: enterprise-operator/templates/operator-roles.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongodb-enterprise-operator-multi-cluster-mongodb-operator-webhook-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mongodb-enterprise-operator-mongodb-webhook
subjects:
  - kind: ServiceAccount
    name: mongodb-enterprise-operator-multi-cluster
    namespace: mongodb-operator
---
# Source: enterprise-operator/templates/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
      app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
        app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
    spec:
      serviceAccountName: mongodb-enterprise-operator-multi-cluster
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      containers:
        - name: mongodb-enterprise-operator-multi-cluster
          image: "quay.io/mongodb/mongodb-enterprise-operator-ubi:1.27.0"
          imagePullPolicy: Always
          args:
            - -watch-resource=mongodb
            - -watch-resource=opsmanagers
            - -watch-resource=mongodbusers
            - -watch-resource=mongodbmulticluster
          command:
            - /usr/local/bin/mongodb-enterprise-operator
          volumeMounts:
            - mountPath: /etc/config/kubeconfig
              name: kube-config-volume
          resources:
            limits:
              cpu: 1100m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 200Mi
          env:
            - name: OPERATOR_ENV
              value: prod
            - name: MDB_DEFAULT_ARCHITECTURE
              value: non-static
            - name: WATCH_NAMESPACE
              value: "mongodb"
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CLUSTER_CLIENT_TIMEOUT
              value: "10"
            - name: IMAGE_PULL_POLICY
              value: Always
            # Database
            - name: MONGODB_ENTERPRISE_DATABASE_IMAGE
              value: quay.io/mongodb/mongodb-enterprise-database-ubi
            - name: INIT_DATABASE_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-init-database-ubi
            - name: INIT_DATABASE_VERSION
              value: 1.27.0
            - name: DATABASE_VERSION
              value: 1.27.0
            # Ops Manager
            - name: OPS_MANAGER_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-ops-manager-ubi
            - name: INIT_OPS_MANAGER_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-init-ops-manager-ubi
            - name: INIT_OPS_MANAGER_VERSION
              value: 1.27.0
            # AppDB
            - name: INIT_APPDB_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-init-appdb-ubi
            - name: INIT_APPDB_VERSION
              value: 1.27.0
            - name: OPS_MANAGER_IMAGE_PULL_POLICY
              value: Always
            - name: AGENT_IMAGE
              value: "quay.io/mongodb/mongodb-agent-ubi:107.0.0.8502-1"
            - name: MDB_AGENT_IMAGE_REPOSITORY
              value: "quay.io/mongodb/mongodb-agent-ubi"
            - name: MONGODB_IMAGE
              value: mongodb-enterprise-server
            - name: MONGODB_REPO_URL
              value: quay.io/mongodb
            - name: MDB_IMAGE_TYPE
              value: ubi8
            - name: PERFORM_FAILOVER
              value: 'true'
            - name: MDB_MAX_CONCURRENT_RECONCILES
              value: "1"
      volumes:
        - name: kube-config-volume
          secret:
            defaultMode: 420
            secretName: mongodb-enterprise-operator-multi-cluster-kubeconfig

Check the Kubernetes Operator deployment:
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" rollout status deployment/mongodb-enterprise-operator-multi-cluster
echo "Operator deployment in ${OPERATOR_NAMESPACE} namespace"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get deployments
echo; echo "Operator pod in ${OPERATOR_NAMESPACE} namespace"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get pods

Example output:

Waiting for deployment "mongodb-enterprise-operator-multi-cluster" rollout to finish: 0 of 1 updated replicas are available...
deployment "mongodb-enterprise-operator-multi-cluster" successfully rolled out
Operator deployment in mongodb-operator namespace
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
mongodb-enterprise-operator-multi-cluster   1/1     1            1           10s

Operator pod in mongodb-operator namespace
NAME                                                         READY   STATUS    RESTARTS     AGE
mongodb-enterprise-operator-multi-cluster-54d786b796-7l5ct   2/2     Running   1 (4s ago)   10s
Prepare TLS certificates.
In this step, you enable TLS for the Application Database and the MongoDB Ops Manager application. If you do not want to use TLS, remove the TLS-related security fields from the MongoDBOpsManager resource.
Optional. Generate keys and certificates:
Use the openssl command-line tool to generate self-signed CAs and certificates for testing purposes.

mkdir certs || true

cat <<EOF >certs/ca.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
x509_extensions = v3_ca

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=exampleCA

[ v3_ca ]
basicConstraints = CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
EOF

cat <<EOF >certs/om.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=${OPS_MANAGER_EXTERNAL_DOMAIN}

[ req_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ alt_names ]
DNS.1 = ${OPS_MANAGER_EXTERNAL_DOMAIN}
DNS.2 = om-svc.${NAMESPACE}.svc.cluster.local
EOF

cat <<EOF >certs/appdb.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=AppDB

[ req_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ alt_names ]
# multi-cluster mongod hostnames from service for each pod
DNS.1 = *.${NAMESPACE}.svc.cluster.local
# single-cluster mongod hostnames from headless service
DNS.2 = *.om-db-svc.${NAMESPACE}.svc.cluster.local
EOF

# generate CA keypair and certificate
openssl genrsa -out certs/ca.key 2048
openssl req -x509 -new -nodes -key certs/ca.key -days 1024 -out certs/ca.crt -config certs/ca.cnf

# generate OpsManager's keypair and certificate
openssl genrsa -out certs/om.key 2048
openssl req -new -key certs/om.key -out certs/om.csr -config certs/om.cnf

# generate AppDB's keypair and certificate
openssl genrsa -out certs/appdb.key 2048
openssl req -new -key certs/appdb.key -out certs/appdb.csr -config certs/appdb.cnf

# generate certificates signed by CA for OpsManager and AppDB
openssl x509 -req -in certs/om.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/om.crt -days 365 -sha256 -extfile certs/om.cnf -extensions req_ext
openssl x509 -req -in certs/appdb.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/appdb.crt -days 365 -sha256 -extfile certs/appdb.cnf -extensions req_ext

Create secrets with the TLS keys:
If you prefer to use your own keys and certificates, skip the generation step above and place the keys and certificates into the following files:

- `certs/ca.crt`: the CA certificate. This is not necessary when you use trusted certificates.
- `certs/appdb.key`: the Application Database's private key.
- `certs/appdb.crt`: the Application Database's certificate.
- `certs/om.key`: Ops Manager's private key.
- `certs/om.crt`: Ops Manager's certificate.
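Before loading the files into Kubernetes secrets, you can sanity-check them. The helper below is a sketch (the `check_cert` name is mine, not part of this procedure); it assumes the `certs/` layout above and an OpenSSL 1.1.1+ CLI:

```shell
# Hypothetical helper: confirm a leaf certificate chains to certs/ca.crt
# and print its subjectAltName entries so you can spot a missing DNS name.
check_cert() {
  openssl verify -CAfile certs/ca.crt "$1" &&
    openssl x509 -in "$1" -noout -ext subjectAltName
}
```

For example, `check_cert certs/om.crt` should print `certs/om.crt: OK` followed by the SAN list when the certificate was signed by the generated CA.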
```shell
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-cert \
  --cert=certs/om.crt \
  --key=certs/om.key

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-db-cert \
  --cert=certs/appdb.crt \
  --key=certs/appdb.key

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap om-cert-ca --from-file="mms-ca.crt=certs/ca.crt"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap appdb-cert-ca --from-file="ca-pem=certs/ca.crt"
```
Install Ops Manager
At this point, you have prepared the environment and the Kubernetes Operator to deploy the Ops Manager resource.
Create the credentials for the Ops Manager admin user that the Kubernetes Operator creates after deploying the Ops Manager Application instance:
```shell
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${NAMESPACE}" create secret generic om-admin-user-credentials \
  --from-literal=Username="admin" \
  --from-literal=Password="Passw0rd@" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe"
```

Deploy the simplest possible `MongoDBOpsManager` custom resource (with TLS enabled) on a single member cluster, also known as the Operator cluster.

This deployment is almost the same as the single-cluster one, but with `spec.topology` and `spec.applicationDatabase.topology` set to `MultiCluster`.

Deploying this way shows that a single Kubernetes cluster deployment is a special case of a multi-Kubernetes-cluster deployment on a single member Kubernetes cluster. You can deploy the Ops Manager Application and the Application Database on as many Kubernetes clusters as you need from the start, instead of beginning with a deployment that has only a single member Kubernetes cluster.
At this point, you have prepared the Ops Manager deployment to span multiple Kubernetes clusters, which you do later in this procedure.
```shell
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
  backup:
    enabled: false
EOF
```

Wait for the Kubernetes Operator to pick up the work and reach the `status.applicationDatabase.phase=Pending` state. Wait for both the Application Database and the Ops Manager deployment to complete.

```shell
echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s
```

```
Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
```

Deploy Ops Manager. The Kubernetes Operator deploys Ops Manager by performing the following steps. It:
- Deploys the Application Database's replica set nodes and waits for the MongoDB processes in the replica set to start running.
- Deploys the Ops Manager Application instance with the Application Database's connection string and waits for it to become ready.
- Adds the Monitoring MongoDB Agent containers to each Application Database Pod.
- Waits for both the Ops Manager Application and the Application Database Pods to start running.
```shell
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Application Database to reach Pending phase (enabling monitoring)..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
```

```
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Application Database to reach Pending phase (enabling monitoring)...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Disabled         13m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          10m
om-db-0-0   4/4     Running   0          69s
om-db-0-1   4/4     Running   0          2m12s
om-db-0-2   4/4     Running   0          3m32s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
```

Now that you have deployed a single-member cluster in multi-cluster mode, you can reconfigure this deployment to span more than one Kubernetes cluster.
On the second member cluster, deploy two additional Application Database replica set members and one additional instance of the Ops Manager Application:
```shell
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
  backup:
    enabled: false
EOF
```

Wait for the Kubernetes Operator to pick up the work (Pending phase):
```shell
echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s
```

```
Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
```

Wait for the Kubernetes Operator to finish deploying all components:
```shell
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
```

```
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Disabled         20m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          2m56s
om-db-0-0   4/4     Running   0          7m48s
om-db-0-1   4/4     Running   0          8m51s
om-db-0-2   4/4     Running   0          10m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME        READY   STATUS    RESTARTS   AGE
om-1-0      2/2     Running   0          3m27s
om-db-1-0   4/4     Running   0          6m32s
om-db-1-1   4/4     Running   0          5m5s
```
Enable backup.
In a multi-Kubernetes-cluster deployment of the Ops Manager Application, you can configure only S3-based backup storage. This procedure refers to the `S3_*` variables defined in `env_variables.sh`.
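For reference, the `S3_*` variables in `env_variables.sh` might look like the following. All values here are illustrative assumptions to show the expected shape; substitute the endpoint, credentials, and bucket names of your own S3-compatible storage:

```shell
# Illustrative values only; adjust them to your own S3-compatible storage.
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"  # assumed MinIO tenant service DNS name
export S3_ACCESS_KEY="console"                            # replace with your access key
export S3_SECRET_KEY="console123"                         # replace with your secret key
export S3_SNAPSHOT_BUCKET_NAME="s3-snapshot-bucket"
export S3_OPLOG_BUCKET_NAME="s3-oplog-bucket"
```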
Optional. Install the MinIO Operator.
This procedure deploys S3-compatible storage for your backups using the MinIO Operator. If you have an Amazon Web Services S3 bucket or another S3-compatible bucket available, you can skip this step. In that case, adjust the `S3_*` variables in `env_variables.sh` accordingly.

```shell
kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

# add two buckets to the tenant config
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
  --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
```

Before you configure and enable backup, create the secrets:
- `s3-access-secret`: contains the S3 credentials.
- `s3-ca-cert`: contains the CA certificate that issued the bucket's server certificate. In the sample MinIO deployment used in this procedure, the certificates are signed with the default Kubernetes root CA certificate. Because this is not a publicly trusted CA certificate, you must provide it so that Ops Manager can trust the connection.
If you use publicly trusted certificates, you can skip this step and remove the values from the `spec.backup.s3Stores.customCertificateSecretRefs` and `spec.backup.s3OpLogStores.customCertificateSecretRefs` settings.

```shell
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-access-secret \
  --from-literal=accessKey="${S3_ACCESS_KEY}" \
  --from-literal=secretKey="${S3_SECRET_KEY}"

# MinIO TLS secrets are signed with the default Kubernetes root CA
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-ca-cert \
  --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
```
Redeploy Ops Manager with backup enabled.
The Kubernetes Operator can configure and deploy all components, that is, the Ops Manager Application, the Backup Daemon instances, and the Application Database's replica set nodes, in any combination on any member clusters for which the Kubernetes Operator is configured.
To illustrate the flexibility of multi-Kubernetes-cluster deployment configurations, deploy only one Backup Daemon instance on the third member cluster and specify zero Backup Daemon members for the first and second clusters.
```shell
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
      members: 0
      backup:
        members: 1
  configuration: # to avoid configuration wizard on first login
    mms.adminEmailAddr: email@example.com
    mms.fromEmailAddr: email@example.com
    mms.ignoreInitialUiSetup: "true"
    mms.mail.hostname: smtp@example.com
    mms.mail.port: "465"
    mms.mail.ssl: "true"
    mms.mail.transport: smtp
    mms.minimumTLSVersion: TLSv1.2
    mms.replyToEmailAddr: email@example.com
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
  backup:
    enabled: true
    s3Stores:
      - name: my-s3-block-store
        s3SecretRef:
          name: "s3-access-secret"
        pathStyleAccessEnabled: true
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    s3OpLogStores:
      - name: my-s3-oplog-store
        s3SecretRef:
          name: "s3-access-secret"
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
        pathStyleAccessEnabled: true
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
EOF
```

Wait for the Kubernetes Operator to finish its configuration:
```shell
echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
```

```
Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Running          26m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          5m11s
om-db-0-0   4/4     Running   0          13m
om-db-0-1   4/4     Running   0          14m
om-db-0-2   4/4     Running   0          16m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME        READY   STATUS    RESTARTS   AGE
om-1-0      2/2     Running   0          5m12s
om-db-1-0   4/4     Running   0          12m
om-db-1-1   4/4     Running   0          11m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
NAME                   READY   STATUS    RESTARTS   AGE
om-2-backup-daemon-0   2/2     Running   0          3m8s
```
Optional. Delete the GKE (Google Kubernetes Engine) clusters and all their associated resources (VMs).
Run the following script to delete the GKE clusters and clean up your environment.
Important
The following commands are not reversible. They delete all clusters referenced in `env_variables.sh`. Do not run these commands if you wish to retain the GKE clusters, for example, if you did not create the GKE clusters as part of this procedure.
```shell
yes | gcloud container clusters delete "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}" &
wait
```