Deploy MongoDB Ops Manager Resources on Multiple Kubernetes Clusters

On this page

  • Overview
  • Prerequisites
  • Procedure

To make your multi-cluster MongoDB Ops Manager and Application Database deployments resilient to entire data center or zone failures, deploy the MongoDB Ops Manager Application and the Application Database on multiple Kubernetes clusters.

To learn more about the architecture, networking, limitations, and performance of multi-Kubernetes cluster deployments for MongoDB Ops Manager resources, see:

  • Multi-Cluster MongoDB Ops Manager Architecture

  • Limitations

  • Differences between single-cluster and multi-cluster MongoDB Ops Manager deployments

  • Architecture Diagram

  • Networking, Load Balancers, Service Meshes

  • Performance

When you deploy the MongoDB Ops Manager Application and the Application Database using the procedures in this section, you:

  1. Use GKE (Google Kubernetes Engine) and the Istio service mesh as tools that help demonstrate the multi-Kubernetes cluster deployment.

  2. Install the Kubernetes Operator on one of the member Kubernetes clusters, known as the Operator cluster. The Operator cluster acts as a hub in the "hub and spoke" pattern that the Kubernetes Operator uses to manage deployments on multiple Kubernetes clusters.

  3. Deploy the Kubernetes Operator in the $OPERATOR_NAMESPACE on the Operator cluster, and configure it to watch $NAMESPACE and to manage all member Kubernetes clusters.

  4. Deploy the Application Database and the MongoDB Ops Manager Application on a single member Kubernetes cluster to demonstrate how similar a multi-cluster deployment is to a single-cluster one. A single-cluster deployment with spec.topology and spec.applicationDatabase.topology set to MultiCluster prepares the deployment for adding more Kubernetes clusters to it.

  5. Deploy an additional Application Database replica set on the second member Kubernetes cluster to improve the Application Database's resiliency. You also deploy an additional MongoDB Ops Manager Application instance in the second member Kubernetes cluster.

  6. Create valid certificates for TLS encryption, and establish TLS-encrypted connections to and from the MongoDB Ops Manager Application and between the Application Database's replica set members. When running over HTTPS, MongoDB Ops Manager runs on port 8443 by default.

  7. Enable backup using S3-compatible storage and deploy the Backup Daemon on the third member Kubernetes cluster. To simplify setting up the S3-compatible storage buckets, you deploy the MinIO Operator. You enable the Backup Daemon only on one member cluster in your deployment, but you can also configure other member clusters to host Backup Daemon resources. Only S3 backups are supported in multi-cluster MongoDB Ops Manager deployments.

Before you begin the deployment, install the following required tools:

Install the gcloud CLI and authorize into it:

gcloud auth login
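
If the gcloud CLI is not already pointed at the GKE project you plan to use, you can set it explicitly. A minimal sketch, assuming your project ID matches the MDB_GKE_PROJECT value you set in env_variables.sh later in this guide:

gcloud config set project "${MDB_GKE_PROJECT}"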

The kubectl mongodb plugin automates the configuration of the Kubernetes clusters. This allows the Kubernetes Operator to deploy resources, necessary roles, and services for accounts for the MongoDB Ops Manager Application, the Application Database, and MongoDB resources on these clusters.

To install the kubectl mongodb plugin:

1

Download the desired Kubernetes Operator package version from the release page of the MongoDB Enterprise Kubernetes Operator repository.

The package's name uses this pattern: kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.

Use one of the following packages (an example download command follows this list):

  • kubectl-mongodb_{{ .Version }}_darwin_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_darwin_arm64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_arm64.tar.gz
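
For example, a sketch of downloading the Linux amd64 package for version 1.26.0 with curl; the asset URL pattern below is an assumption, so confirm the exact URL on the releases page:

curl -LO https://github.com/mongodb/mongodb-enterprise-kubernetes/releases/download/1.26.0/kubectl-mongodb_1.26.0_linux_amd64.tar.gz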

2

Unpack the package, as in the following example:

tar -zxvf kubectl-mongodb_<version>_darwin_amd64.tar.gz
3

Locate the kubectl-mongodb binary in the unpacked directory and move it to the desired destination, inside the PATH of the Kubernetes Operator user, as shown in the following example:

mv kubectl-mongodb /usr/local/bin/kubectl-mongodb

Now you can run the kubectl mongodb plugin using the following commands:

kubectl mongodb multicluster setup
kubectl mongodb multicluster recover
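
As an optional sanity check (not part of the official procedure), you can confirm that kubectl discovers the plugin on your PATH:

kubectl plugin list
# expected to list /usr/local/bin/kubectl-mongodb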

To learn more about the supported flags, see the MongoDB kubectl plugin reference.

Clone the MongoDB Enterprise Kubernetes Operator repository, change into the mongodb-enterprise-kubernetes directory, and check out the current version:

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
cd mongodb-enterprise-kubernetes
git checkout 1.26
cd public/samples/ops-manager-multi-cluster

Important

Certain steps in this guide work only if you run them from the public/samples/ops-manager-multi-cluster directory.

All steps in this guide reference the environment variables defined in env_variables.sh.

export MDB_GKE_PROJECT="### Set your GKE project name here ###"

export NAMESPACE="mongodb"
export OPERATOR_NAMESPACE="mongodb-operator"

# comma-separated key=value pairs
# export OPERATOR_ADDITIONAL_HELM_VALUES=""

# Adjust the values for each Kubernetes cluster in your deployment.
# The deployment script references the following variables to get values for each cluster.
export K8S_CLUSTER_0="k8s-mdb-0"
export K8S_CLUSTER_0_ZONE="europe-central2-a"
export K8S_CLUSTER_0_NUMBER_OF_NODES=3
export K8S_CLUSTER_0_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_0_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_0_ZONE}_${K8S_CLUSTER_0}"

export K8S_CLUSTER_1="k8s-mdb-1"
export K8S_CLUSTER_1_ZONE="europe-central2-b"
export K8S_CLUSTER_1_NUMBER_OF_NODES=3
export K8S_CLUSTER_1_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_1_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_1_ZONE}_${K8S_CLUSTER_1}"

export K8S_CLUSTER_2="k8s-mdb-2"
export K8S_CLUSTER_2_ZONE="europe-central2-c"
export K8S_CLUSTER_2_NUMBER_OF_NODES=1
export K8S_CLUSTER_2_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_2_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_2_ZONE}_${K8S_CLUSTER_2}"

# Comment out the following line so that the script does not create preemptible nodes.
# DO NOT USE preemptible nodes in production.
export GKE_SPOT_INSTANCES_SWITCH="--preemptible"

export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store

# minio defaults
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"

export OPERATOR_HELM_CHART="mongodb/enterprise-operator"

# (Optional) Change the following setting when using the external URL.
# This env variable is used in OpenSSL configuration to generate
# server certificates for Ops Manager Application.
export OPS_MANAGER_EXTERNAL_DOMAIN="om-svc.${NAMESPACE}.svc.cluster.local"

export OPS_MANAGER_VERSION="7.0.4"
export APPDB_VERSION="7.0.9-ubi8"

Adjust the settings in the previous example as needed, following the instructions in the comments, and source them into your shell as follows:

source env_variables.sh

Important

Each time you update env_variables.sh, run source env_variables.sh to ensure that the scripts in this section use the updated variables.
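
A quick, illustrative way to confirm that the variables are visible in your current shell:

echo "${K8S_CLUSTER_0_CONTEXT_NAME}" "${K8S_CLUSTER_1_CONTEXT_NAME}" "${K8S_CLUSTER_2_CONTEXT_NAME}"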

Use this procedure to deploy MongoDB Ops Manager instances on multiple Kubernetes clusters.

1

If you have already installed and configured your own Kubernetes clusters with a service mesh, you can skip this step.

  1. Create three GKE (Google Kubernetes Engine) clusters:

    gcloud container clusters create "${K8S_CLUSTER_0}" \
      --zone="${K8S_CLUSTER_0_ZONE}" \
      --num-nodes="${K8S_CLUSTER_0_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_0_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}

    gcloud container clusters create "${K8S_CLUSTER_1}" \
      --zone="${K8S_CLUSTER_1_ZONE}" \
      --num-nodes="${K8S_CLUSTER_1_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_1_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}

    gcloud container clusters create "${K8S_CLUSTER_2}" \
      --zone="${K8S_CLUSTER_2_ZONE}" \
      --num-nodes="${K8S_CLUSTER_2_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_2_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}
  2. Obtain the credentials and save the contexts to the current kubeconfig file. By default, this file is located at ~/.kube/config and is referenced by the $KUBECONFIG environment variable.

    gcloud container clusters get-credentials "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}"
    gcloud container clusters get-credentials "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}"
    gcloud container clusters get-credentials "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}"

    All kubectl commands reference these contexts using the following variables:

    • $K8S_CLUSTER_0_CONTEXT_NAME

    • $K8S_CLUSTER_1_CONTEXT_NAME

    • $K8S_CLUSTER_2_CONTEXT_NAME

  3. Verify that kubectl has access to the Kubernetes clusters:

    echo "Nodes in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get nodes
    echo; echo "Nodes in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" get nodes
    echo; echo "Nodes in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" get nodes

    Nodes in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    NAME                                       STATUS   ROLES    AGE   VERSION
    gke-k8s-mdb-0-default-pool-d0f98a43-dtlc   Ready    <none>   65s   v1.28.7-gke.1026000
    gke-k8s-mdb-0-default-pool-d0f98a43-q9sf   Ready    <none>   65s   v1.28.7-gke.1026000
    gke-k8s-mdb-0-default-pool-d0f98a43-zn8x   Ready    <none>   64s   v1.28.7-gke.1026000

    Nodes in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    NAME                                       STATUS   ROLES    AGE    VERSION
    gke-k8s-mdb-1-default-pool-37ea602a-0qgw   Ready    <none>   111s   v1.28.7-gke.1026000
    gke-k8s-mdb-1-default-pool-37ea602a-k4qk   Ready    <none>   114s   v1.28.7-gke.1026000
    gke-k8s-mdb-1-default-pool-37ea602a-p2g7   Ready    <none>   113s   v1.28.7-gke.1026000

    Nodes in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    NAME                                       STATUS   ROLES    AGE   VERSION
    gke-k8s-mdb-2-default-pool-4b459a09-t1v9   Ready    <none>   29s   v1.28.7-gke.1026000
  4. Install the Istio service mesh to allow cross-cluster DNS resolution and network connectivity between the Kubernetes clusters:

    CTX_CLUSTER1=${K8S_CLUSTER_0_CONTEXT_NAME} \
    CTX_CLUSTER2=${K8S_CLUSTER_1_CONTEXT_NAME} \
    CTX_CLUSTER3=${K8S_CLUSTER_2_CONTEXT_NAME} \
    ISTIO_VERSION="1.20.2" \
    ../multi-cluster/install_istio_separate_network.sh
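
    Before continuing, you can optionally confirm that the Istio control plane came up on each cluster. This check is illustrative and assumes the installation script uses the default istio-system namespace:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n istio-system get pods
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n istio-system get pods
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n istio-system get pods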
2

Note

The following commands add the istio-injection=enabled label to the $OPERATOR_NAMESPACE and mongodb namespaces on each member cluster to enable sidecar injection in Istio. If you use another service mesh, configure it to handle network traffic in the created namespaces.

  • Create a separate namespace, mongodb-operator, referenced by the $OPERATOR_NAMESPACE environment variable, for the Kubernetes Operator deployment.

  • Create the same $OPERATOR_NAMESPACE on each member Kubernetes cluster. This is necessary so that the kubectl mongodb plugin can create a service account for the Kubernetes Operator on each member cluster. The Kubernetes Operator uses these service accounts on the Operator cluster to perform operations on each member cluster.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite
  • On each member cluster, including the member cluster that serves as the Operator cluster, create another, separate namespace, mongodb. The Kubernetes Operator uses this namespace for MongoDB Ops Manager resources and components.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite
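
    To confirm that the namespaces exist and carry the sidecar injection label, you can inspect them on each cluster (an optional, illustrative check):

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get namespaces "${NAMESPACE}" "${OPERATOR_NAMESPACE}" --show-labels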
3

This step is optional if you use the official Helm charts and images from the Quay registry.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
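
Optionally, verify that the secret exists in each namespace where it was created (illustrative check):

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get secret image-registries-secret
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secret image-registries-secret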
4

The following optional scripts verify whether the service mesh is configured correctly for cross-cluster DNS resolution and connectivity.

  1. Run this script on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver0
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver0
      template:
        metadata:
          labels:
            app: echoserver0
        spec:
          containers:
            - image: k8s.gcr.io/echoserver:1.10
              imagePullPolicy: Always
              name: echoserver0
              ports:
                - containerPort: 8080
    EOF
  2. Run this script on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver1
      template:
        metadata:
          labels:
            app: echoserver1
        spec:
          containers:
            - image: k8s.gcr.io/echoserver:1.10
              imagePullPolicy: Always
              name: echoserver1
              ports:
                - containerPort: 8080
    EOF
  3. Run this script on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver2
      template:
        metadata:
          labels:
            app: echoserver2
        spec:
          containers:
            - image: k8s.gcr.io/echoserver:1.10
              imagePullPolicy: Always
              name: echoserver2
              ports:
                - containerPort: 8080
    EOF
  4. Run this script to wait for the StatefulSets to be created:

    kubectl wait --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver0-0 --timeout=60s
    kubectl wait --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver1-0 --timeout=60s
    kubectl wait --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver2-0 --timeout=60s
  5. Create pod services on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver0-0
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver0-0"
    EOF
  6. Create pod services on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver1-0
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver1-0"
    EOF
  7. Create pod services on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver2-0
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver2-0"
    EOF
  8. Create the round robin service on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: echoserver0
    EOF
  9. Create the round robin service on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: echoserver1
    EOF
  10. Create the round robin service on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: echoserver2
    EOF
  11. Verify pod 0 from cluster 1:

    source_cluster=${K8S_CLUSTER_1_CONTEXT_NAME}
    target_pod="echoserver0-0"
    source_pod="echoserver1-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver1-0 in gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1 to echoserver0-0
    SUCCESS
  12. Verify pod 1 from cluster 0:

    source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
    target_pod="echoserver1-0"
    source_pod="echoserver0-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver1-0
    SUCCESS
  13. Verify pod 1 from cluster 2:

    source_cluster=${K8S_CLUSTER_2_CONTEXT_NAME}
    target_pod="echoserver1-0"
    source_pod="echoserver2-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver2-0 in gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2 to echoserver1-0
    SUCCESS
  14. Verify pod 2 from cluster 0:

    source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
    target_pod="echoserver2-0"
    source_pod="echoserver0-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver2-0
    SUCCESS
  15. Run the cleanup script:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver0
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver1
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver2
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver0-0
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver1-0
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver2-0
5

In this step, you use the kubectl mongodb plugin to automate the Kubernetes cluster configuration that the Kubernetes Operator needs to manage workloads on multiple Kubernetes clusters.

Because you configure the Kubernetes clusters before you install the Kubernetes Operator, when you deploy the Kubernetes Operator for the multi-Kubernetes cluster operation, all the necessary multi-cluster configuration is already in place.

As stated in the Overview, the Kubernetes Operator holds the configuration for three member clusters that you can use to deploy MongoDB Ops Manager and MongoDB databases. The first cluster also serves as the Operator cluster, where you install the Kubernetes Operator and deploy the custom resources.

kubectl mongodb multicluster setup \
  --central-cluster="${K8S_CLUSTER_0_CONTEXT_NAME}" \
  --member-clusters="${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}" \
  --member-cluster-namespace="${NAMESPACE}" \
  --central-cluster-namespace="${OPERATOR_NAMESPACE}" \
  --create-service-account-secrets \
  --install-database-roles=true \
  --image-pull-secrets=image-registries-secret

Build: 1f23ae48c41d208f14c860356e483ba386a3aab8, 2024-04-26T12:19:36Z
Ensured namespaces exist in all clusters.
creating central cluster roles in cluster: gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
Ensured ServiceAccounts and Roles.
Creating KubeConfig secret mongodb-operator/mongodb-enterprise-operator-multi-cluster-kubeconfig in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
Ensured database Roles in member clusters.
Creating Member list Configmap mongodb-operator/mongodb-enterprise-operator-member-list in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
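
Optionally, you can verify the artifacts that the plugin created in the Operator cluster; the secret and ConfigMap names below come from the plugin output above:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get secret mongodb-enterprise-operator-multi-cluster-kubeconfig
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get configmap mongodb-enterprise-operator-member-list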
6
  1. Install the Kubernetes Operator into the $OPERATOR_NAMESPACE, configured to watch $NAMESPACE and to manage the three member Kubernetes clusters. At this point, the kubectl mongodb plugin has already deployed the ServiceAccounts and Roles, so the following scripts skip configuring them and set operator.createOperatorServiceAccount=false and operator.createResourcesServiceAccountsAndRoles=false. The scripts specify the multiCluster.clusters setting to instruct the Helm chart to deploy the Kubernetes Operator in multi-cluster mode.

    helm upgrade --install \
      --debug \
      --kube-context "${K8S_CLUSTER_0_CONTEXT_NAME}" \
      mongodb-enterprise-operator-multi-cluster \
      "${OPERATOR_HELM_CHART}" \
      --namespace="${OPERATOR_NAMESPACE}" \
      --set namespace="${OPERATOR_NAMESPACE}" \
      --set operator.namespace="${OPERATOR_NAMESPACE}" \
      --set operator.watchNamespace="${NAMESPACE}" \
      --set operator.name=mongodb-enterprise-operator-multi-cluster \
      --set operator.createOperatorServiceAccount=false \
      --set operator.createResourcesServiceAccountsAndRoles=false \
      --set "multiCluster.clusters={${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}}" \
      --set "${OPERATOR_ADDITIONAL_HELM_VALUES:-"dummy=value"}"
    Release "mongodb-enterprise-operator-multi-cluster" does not exist. Installing it now.
    NAME: mongodb-enterprise-operator-multi-cluster
    LAST DEPLOYED: Tue Apr 30 19:40:26 2024
    NAMESPACE: mongodb-operator
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    USER-SUPPLIED VALUES:
    dummy: value
    multiCluster:
      clusters:
      - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
      - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
      - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    namespace: mongodb-operator
    operator:
      createOperatorServiceAccount: false
      createResourcesServiceAccountsAndRoles: false
      name: mongodb-enterprise-operator-multi-cluster
      namespace: mongodb-operator
      watchNamespace: mongodb

    COMPUTED VALUES:
    agent:
      name: mongodb-agent-ubi
      version: 107.0.0.8502-1
    database:
      name: mongodb-enterprise-database-ubi
      version: 1.25.0
    dummy: value
    initAppDb:
      name: mongodb-enterprise-init-appdb-ubi
      version: 1.25.0
    initDatabase:
      name: mongodb-enterprise-init-database-ubi
      version: 1.25.0
    initOpsManager:
      name: mongodb-enterprise-init-ops-manager-ubi
      version: 1.25.0
    managedSecurityContext: false
    mongodb:
      appdbAssumeOldFormat: false
      imageType: ubi8
      name: mongodb-enterprise-server
      repo: quay.io/mongodb
    mongodbLegacyAppDb:
      name: mongodb-enterprise-appdb-database-ubi
      repo: quay.io/mongodb
    multiCluster:
      clusterClientTimeout: 10
      clusters:
      - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
      - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
      - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
      kubeConfigSecretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
      performFailOver: true
    namespace: mongodb-operator
    operator:
      additionalArguments: []
      affinity: {}
      createOperatorServiceAccount: false
      createResourcesServiceAccountsAndRoles: false
      deployment_name: mongodb-enterprise-operator
      env: prod
      mdbDefaultArchitecture: non-static
      name: mongodb-enterprise-operator-multi-cluster
      namespace: mongodb-operator
      nodeSelector: {}
      operator_image_name: mongodb-enterprise-operator-ubi
      replicas: 1
      resources:
        limits:
          cpu: 1100m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 200Mi
      tolerations: []
      vaultSecretBackend:
        enabled: false
        tlsSecretRef: ""
      version: 1.25.0
      watchNamespace: mongodb
      watchedResources:
      - mongodb
      - opsmanagers
      - mongodbusers
      webhook:
        registerConfiguration: true
    opsManager:
      name: mongodb-enterprise-ops-manager-ubi
    registry:
      agent: quay.io/mongodb
      appDb: quay.io/mongodb
      database: quay.io/mongodb
      imagePullSecrets: null
      initAppDb: quay.io/mongodb
      initDatabase: quay.io/mongodb
      initOpsManager: quay.io/mongodb
      operator: quay.io/mongodb
      opsManager: quay.io/mongodb
      pullPolicy: Always
    subresourceEnabled: true

    HOOKS:
    MANIFEST:
    ---
    # Source: enterprise-operator/templates/operator-roles.yaml
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: mongodb-enterprise-operator-mongodb-webhook
    rules:
      - apiGroups:
          - "admissionregistration.k8s.io"
        resources:
          - validatingwebhookconfigurations
        verbs:
          - get
          - create
          - update
          - delete
      - apiGroups:
          - ""
        resources:
          - services
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - delete
    ---
    # Source: enterprise-operator/templates/operator-roles.yaml
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: mongodb-enterprise-operator-multi-cluster-mongodb-operator-webhook-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: mongodb-enterprise-operator-mongodb-webhook
    subjects:
      - kind: ServiceAccount
        name: mongodb-enterprise-operator-multi-cluster
        namespace: mongodb-operator
    ---
    # Source: enterprise-operator/templates/operator.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mongodb-enterprise-operator-multi-cluster
      namespace: mongodb-operator
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/component: controller
          app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
          app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
      template:
        metadata:
          labels:
            app.kubernetes.io/component: controller
            app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
            app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
        spec:
          serviceAccountName: mongodb-enterprise-operator-multi-cluster
          securityContext:
            runAsNonRoot: true
            runAsUser: 2000
          containers:
            - name: mongodb-enterprise-operator-multi-cluster
              image: "quay.io/mongodb/mongodb-enterprise-operator-ubi:1.25.0"
              imagePullPolicy: Always
              args:
                - -watch-resource=mongodb
                - -watch-resource=opsmanagers
                - -watch-resource=mongodbusers
                - -watch-resource=mongodbmulticluster
              command:
                - /usr/local/bin/mongodb-enterprise-operator
              volumeMounts:
                - mountPath: /etc/config/kubeconfig
                  name: kube-config-volume
              resources:
                limits:
                  cpu: 1100m
                  memory: 1Gi
                requests:
                  cpu: 500m
                  memory: 200Mi
              env:
                - name: OPERATOR_ENV
                  value: prod
                - name: MDB_DEFAULT_ARCHITECTURE
                  value: non-static
                - name: WATCH_NAMESPACE
                  value: "mongodb"
                - name: NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: CLUSTER_CLIENT_TIMEOUT
                  value: "10"
                - name: IMAGE_PULL_POLICY
                  value: Always
                # Database
                - name: MONGODB_ENTERPRISE_DATABASE_IMAGE
                  value: quay.io/mongodb/mongodb-enterprise-database-ubi
                - name: INIT_DATABASE_IMAGE_REPOSITORY
                  value: quay.io/mongodb/mongodb-enterprise-init-database-ubi
                - name: INIT_DATABASE_VERSION
                  value: 1.25.0
                - name: DATABASE_VERSION
                  value: 1.25.0
                # Ops Manager
                - name: OPS_MANAGER_IMAGE_REPOSITORY
                  value: quay.io/mongodb/mongodb-enterprise-ops-manager-ubi
                - name: INIT_OPS_MANAGER_IMAGE_REPOSITORY
                  value: quay.io/mongodb/mongodb-enterprise-init-ops-manager-ubi
                - name: INIT_OPS_MANAGER_VERSION
                  value: 1.25.0
                # AppDB
                - name: INIT_APPDB_IMAGE_REPOSITORY
                  value: quay.io/mongodb/mongodb-enterprise-init-appdb-ubi
                - name: INIT_APPDB_VERSION
                  value: 1.25.0
                - name: OPS_MANAGER_IMAGE_PULL_POLICY
                  value: Always
                - name: AGENT_IMAGE
                  value: "quay.io/mongodb/mongodb-agent-ubi:107.0.0.8502-1"
                - name: MDB_AGENT_IMAGE_REPOSITORY
                  value: "quay.io/mongodb/mongodb-agent-ubi"
                - name: MONGODB_IMAGE
                  value: mongodb-enterprise-server
                - name: MONGODB_REPO_URL
                  value: quay.io/mongodb
                - name: MDB_IMAGE_TYPE
                  value: ubi8
                - name: PERFORM_FAILOVER
                  value: 'true'
          volumes:
            - name: kube-config-volume
              secret:
                defaultMode: 420
                secretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
  2. Check the Kubernetes Operator deployment:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" rollout status deployment/mongodb-enterprise-operator-multi-cluster
    echo "Operator deployment in ${OPERATOR_NAMESPACE} namespace"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get deployments
    echo; echo "Operator pod in ${OPERATOR_NAMESPACE} namespace"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get pods

    Waiting for deployment "mongodb-enterprise-operator-multi-cluster" rollout to finish: 0 of 1 updated replicas are available...
    deployment "mongodb-enterprise-operator-multi-cluster" successfully rolled out
    Operator deployment in mongodb-operator namespace
    NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
    mongodb-enterprise-operator-multi-cluster   1/1     1            1           12s

    Operator pod in mongodb-operator namespace
    NAME                                                         READY   STATUS    RESTARTS     AGE
    mongodb-enterprise-operator-multi-cluster-78cc97547d-nlgds   2/2     Running   1 (3s ago)   12s
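
    If the rollout does not complete, the Kubernetes Operator's logs are the first place to look; a sketch of tailing them:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" logs deployment/mongodb-enterprise-operator-multi-cluster --tail=50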
7

In this step, you enable TLS for the Application Database and the MongoDB Ops Manager Application. If you don't want to use TLS, remove the spec.security and spec.applicationDatabase.security fields from the MongoDBOpsManager resources used later in this procedure.

  1. Optional. Generate keys and certificates:

    Use the openssl command-line tool to generate self-signed CAs and certificates for testing purposes.

    mkdir certs || true

    cat <<EOF >certs/ca.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=exampleCA
    EOF

    cat <<EOF >certs/om.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn
    req_extensions = req_ext

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=${OPS_MANAGER_EXTERNAL_DOMAIN}

    [ req_ext ]
    subjectAltName = @alt_names
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth

    [ alt_names ]
    DNS.1 = ${OPS_MANAGER_EXTERNAL_DOMAIN}
    DNS.2 = om-svc.${NAMESPACE}.svc.cluster.local
    EOF

    cat <<EOF >certs/appdb.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn
    req_extensions = req_ext

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=AppDB

    [ req_ext ]
    subjectAltName = @alt_names
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth

    [ alt_names ]
    # multi-cluster mongod hostnames from service for each pod
    DNS.1 = *.${NAMESPACE}.svc.cluster.local
    # single-cluster mongod hostnames from headless service
    DNS.2 = *.om-db-svc.${NAMESPACE}.svc.cluster.local
    EOF

    # generate CA keypair and certificate
    openssl genrsa -out certs/ca.key 2048
    openssl req -x509 -new -nodes -key certs/ca.key -days 1024 -out certs/ca.crt -config certs/ca.cnf

    # generate OpsManager's keypair and certificate
    openssl genrsa -out certs/om.key 2048
    openssl req -new -key certs/om.key -out certs/om.csr -config certs/om.cnf

    # generate AppDB's keypair and certificate
    openssl genrsa -out certs/appdb.key 2048
    openssl req -new -key certs/appdb.key -out certs/appdb.csr -config certs/appdb.cnf

    # generate certificates signed by CA for OpsManager and AppDB
    openssl x509 -req -in certs/om.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/om.crt -days 365 -sha256 -extfile certs/om.cnf -extensions req_ext
    openssl x509 -req -in certs/appdb.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/appdb.crt -days 365 -sha256 -extfile certs/appdb.cnf -extensions req_ext
  2. Create secrets with TLS keys:

    If you prefer to use your own keys and certificates, skip the previous generation step and put the keys and certificates into the following files:

    • certs/ca.crt: the CA certificates. These are not necessary when using trusted certificates.

    • certs/appdb.key: the private key for the Application Database.

    • certs/appdb.crt: the certificate for the Application Database.

    • certs/om.key: the private key for MongoDB Ops Manager.

    • certs/om.crt: the certificate for MongoDB Ops Manager.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-cert \
      --cert=certs/om.crt \
      --key=certs/om.key

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-db-cert \
      --cert=certs/appdb.crt \
      --key=certs/appdb.key

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap om-cert-ca --from-file="mms-ca.crt=certs/ca.crt"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap appdb-cert-ca --from-file="ca-pem=certs/ca.crt"
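
    Optionally, you can confirm that the generated certificates chain back to the CA and that the secrets and ConfigMaps landed in the cluster (illustrative checks based on the commands in this step):

    openssl verify -CAfile certs/ca.crt certs/om.crt certs/appdb.crt
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secrets cert-prefix-om-cert cert-prefix-om-db-cert
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get configmaps om-cert-ca appdb-cert-ca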
8

At this point, you have prepared the environment and the Kubernetes Operator to deploy the MongoDB Ops Manager resource.

  1. Create the necessary credentials for the MongoDB Ops Manager admin user that the Kubernetes Operator will create after deploying the MongoDB Ops Manager Application instance:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${NAMESPACE}" create secret generic om-admin-user-credentials \
      --from-literal=Username="admin" \
      --from-literal=Password="Passw0rd@" \
      --from-literal=FirstName="Jane" \
      --from-literal=LastName="Doe"
  2. Deploy the simplest possible MongoDBOpsManager custom resource (with TLS enabled) on a single member cluster, which is also known as the Operator cluster.

    This deployment is almost the same as the one for single-cluster mode, but with spec.topology and spec.applicationDatabase.topology set to MultiCluster.

    Deploying this way shows that a single Kubernetes cluster deployment is a special case of a multi-Kubernetes cluster deployment on a single Kubernetes member cluster. You can deploy the MongoDB Ops Manager Application and the Application Database on as many Kubernetes clusters as necessary from the start, and don't have to begin with a deployment that has only a single member Kubernetes cluster.

    At this point, you have prepared the MongoDB Ops Manager deployment to span more than one Kubernetes cluster, which you will do later in this procedure.

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
      backup:
        enabled: false
    EOF
  3. Wait for the Kubernetes Operator to pick up the work and reach the status.applicationDatabase.phase=Pending state. Wait for both the Application Database and the MongoDB Ops Manager deployment to complete.

    echo "Waiting for Application Database to reach Pending phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

    Waiting for Application Database to reach Pending phase...
    mongodbopsmanager.mongodb.com/om condition met
  4. Deploy MongoDB Ops Manager. The Kubernetes Operator deploys MongoDB Ops Manager by performing the following steps. It:

    • Deploys the Application Database's replica set nodes and waits for the MongoDB processes in the replica set to start running.

    • Deploys the MongoDB Ops Manager Application instance with the Application Database's connection string and waits for it to become ready.

    • Adds the Monitoring MongoDB Agent containers to each Application Database Pod.

    • Waits for both the MongoDB Ops Manager Application and the Application Database Pods to start running.

    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "Waiting for Application Database to reach Pending phase (enabling monitoring)..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Application Database to reach Pending phase (enabling monitoring)...
    mongodbopsmanager.mongodb.com/om condition met
    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                7.0.4     Running              Running         Disabled         11m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    NAME        READY   STATUS    RESTARTS   AGE
    om-0-0      2/2     Running   0          8m39s
    om-db-0-0   4/4     Running   0          44s
    om-db-0-1   4/4     Running   0          2m6s
    om-db-0-2   4/4     Running   0          3m19s

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1

    Now that you have deployed a single-member cluster in multi-cluster mode, you can reconfigure this deployment to span more than one Kubernetes cluster.

  5. On the second member cluster, deploy two additional Application Database replica set members and one additional instance of the MongoDB Ops Manager Application:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
        - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
          members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
          - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
            members: 2
      backup:
        enabled: false
    EOF
  6. Wait for the Kubernetes Operator to pick up the work (Pending phase):

    echo "Waiting for Application Database to reach Pending phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

    Waiting for Application Database to reach Pending phase...
    mongodbopsmanager.mongodb.com/om condition met
  7. Wait for the Kubernetes Operator to finish deploying all components:

    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                7.0.4     Pending              Running         Disabled         14m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    NAME        READY   STATUS        RESTARTS   AGE
    om-0-0      2/2     Terminating   0          12m
    om-db-0-0   4/4     Running       0          4m12s
    om-db-0-1   4/4     Running       0          5m34s
    om-db-0-2   4/4     Running       0          6m47s

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    NAME        READY   STATUS     RESTARTS   AGE
    om-1-0      0/2     Init:0/2   0          0s
    om-db-1-0   4/4     Running    0          3m24s
    om-db-1-1   4/4     Running    0          104s
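
    With the resource in the Running state, you can reach the Ops Manager UI from your workstation, for example by port-forwarding the om-svc service. This is a sketch: the service name follows the OPS_MANAGER_EXTERNAL_DOMAIN value in env_variables.sh, and 8443 is the default HTTPS port mentioned earlier. Log in with the admin credentials created above:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" port-forward svc/om-svc 8443:8443
    # then open https://localhost:8443 in a browser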
9

In a multi-Kubernetes cluster deployment of the MongoDB Ops Manager Application, you can configure only S3-based backup storage. This procedure refers to the S3_* variables defined in env_variables.sh.

  1. Optional. Install the MinIO Operator.

    This procedure deploys S3-compatible storage for your backups using the MinIO Operator. If you have Amazon Web Services S3 or other S3-compatible buckets available, you can skip this step. In that case, adjust the S3_* variables in env_variables.sh accordingly.

    kustomize build "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
      kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

    kustomize build "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
      kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

    # add two buckets to the tenant config
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
      --type='json' \
      -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
  2. Before you configure and enable backup, create the secrets:

    • s3-access-secret: contains the S3 credentials.

    • s3-ca-cert: contains a CA certificate that issued the bucket's server certificate. In the sample MinIO deployment used in this procedure, the default Kubernetes root CA certificate is used to sign the certificate. Because it isn't a publicly trusted CA certificate, you must provide it so that MongoDB Ops Manager can trust the connection.

    If you use publicly trusted certificates, you may skip this step and remove the values from the spec.backup.s3Stores.customCertificateSecretRefs and spec.backup.s3OpLogStores.customCertificateSecretRefs settings.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-access-secret \
      --from-literal=accessKey="${S3_ACCESS_KEY}" \
      --from-literal=secretKey="${S3_SECRET_KEY}"

    # minio TLS secrets are signed with the default k8s root CA
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-ca-cert \
      --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
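
    Optionally, confirm that the MinIO tenant and the newly created secrets are in place before enabling backup (illustrative checks; the tenant name and namespace come from the MinIO example above):

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n tenant-tiny get tenant myminio
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secrets s3-access-secret s3-ca-cert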
10
  1. The Kubernetes Operator can configure and deploy all components (the MongoDB Ops Manager Application, the Backup Daemon instances, and the Application Database's replica set nodes) in any combination, on any member clusters for which the Kubernetes Operator is configured.

    To illustrate the flexibility of the multi-Kubernetes cluster deployment configuration, deploy only one Backup Daemon instance on the third member cluster and specify zero Backup Daemon members for the first and second clusters.

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
          backup:
            members: 0
        - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
          members: 1
          backup:
            members: 0
        - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
          members: 0
          backup:
            members: 1
      configuration: # to avoid configuration wizard on first login
        mms.adminEmailAddr: email@example.com
        mms.fromEmailAddr: email@example.com
        mms.ignoreInitialUiSetup: "true"
        mms.mail.hostname: smtp@example.com
        mms.mail.port: "465"
        mms.mail.ssl: "true"
        mms.mail.transport: smtp
        mms.minimumTLSVersion: TLSv1.2
        mms.replyToEmailAddr: email@example.com
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
          - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
            members: 2
      backup:
        enabled: true
        s3Stores:
          - name: my-s3-block-store
            s3SecretRef:
              name: "s3-access-secret"
            pathStyleAccessEnabled: true
            s3BucketEndpoint: "${S3_ENDPOINT}"
            s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
            customCertificateSecretRefs:
              - name: s3-ca-cert
                key: ca.crt
        s3OpLogStores:
          - name: my-s3-oplog-store
            s3SecretRef:
              name: "s3-access-secret"
            s3BucketEndpoint: "${S3_ENDPOINT}"
            s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
            pathStyleAccessEnabled: true
            customCertificateSecretRefs:
              - name: s3-ca-cert
                key: ca.crt
    EOF
  2. Wait for the Kubernetes Operator to finish its configuration:

    echo; echo "Waiting for Backup to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

    Waiting for Backup to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met
    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                7.0.4     Running              Running         Running          22m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    NAME        READY   STATUS    RESTARTS   AGE
    om-0-0      2/2     Running   0          7m10s
    om-db-0-0   4/4     Running   0          11m
    om-db-0-1   4/4     Running   0          13m
    om-db-0-2   4/4     Running   0          14m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    NAME        READY   STATUS    RESTARTS   AGE
    om-1-0      2/2     Running   0          4m8s
    om-db-1-0   4/4     Running   0          11m
    om-db-1-1   4/4     Running   0          9m25s

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    NAME                   READY   STATUS    RESTARTS   AGE
    om-2-backup-daemon-0   2/2     Running   0          2m5s
11

Run the following scripts to delete the GKE clusters and clean up your environment.

Important

The following commands are not reversible. They delete all clusters referenced in env_variables.sh. Don't run these commands if you wish to retain the GKE clusters, for example, if you didn't create the GKE clusters as part of this procedure.

yes | gcloud container clusters delete "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}" &
wait
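
If you also want to remove the saved contexts from your local kubeconfig, you can delete them explicitly; this only edits your local kubeconfig file:

kubectl config delete-context "${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl config delete-context "${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl config delete-context "${K8S_CLUSTER_2_CONTEXT_NAME}"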