
Deploy MongoDB Ops Manager Resources on Multiple Kubernetes Clusters

On this page

  • Overview
  • Prerequisites
  • Procedure

To make your MongoDB Ops Manager Application and its Application Database resilient to entire data center or zone failures, deploy the MongoDB Ops Manager Application and the Application Database on multiple Kubernetes clusters.

To learn more about the architecture, networking, limitations, and performance of multi-Kubernetes-cluster deployments for MongoDB Ops Manager resources, see:

  • Multi-Cluster MongoDB Ops Manager Architecture

  • Limitations

  • Differences between single- and multi-cluster MongoDB Ops Manager deployments

  • Architecture Diagram

  • Networking, Load Balancer, Service Mesh

  • Performance

When you deploy the MongoDB Ops Manager Application and the Application Database using the procedure in this section, you:

  1. Use GKE (Google Kubernetes Engine) and the Istio service mesh as tools that help demonstrate the deployment on multiple Kubernetes clusters.

  2. Install the Kubernetes Operator on one of the member Kubernetes clusters, known as the operator cluster. The operator cluster acts as the hub in the "hub and spoke" pattern that the Kubernetes Operator uses to manage deployments on multiple Kubernetes clusters.

  3. Deploy the operator cluster in the $OPERATOR_NAMESPACE and configure this cluster to watch $NAMESPACE and manage all member Kubernetes clusters.

  4. Deploy the Application Database and the MongoDB Ops Manager Application on a single member Kubernetes cluster to demonstrate the similarity of a multi-cluster deployment to a single-cluster one. A single-cluster deployment with spec.topology and spec.applicationDatabase.topology set to MultiCluster prepares the deployment for adding more Kubernetes clusters to it.

  5. Deploy additional Application Database replica set members on the second Kubernetes cluster to improve the Application Database's resilience. You also deploy an additional MongoDB Ops Manager Application instance on the second member Kubernetes cluster.

  6. Create valid certificates for TLS encryption and establish TLS-encrypted connections to and from the MongoDB Ops Manager Application and between the Application Database's replica set members. When running over HTTPS, MongoDB Ops Manager runs on port 8443 by default.

  7. Enable backup using S3-compatible storage and deploy the Backup Daemon on the third member Kubernetes cluster. To simplify the setup of S3-compatible storage buckets, you deploy the MinIO Operator. You enable the Backup Daemon on only one member cluster in your deployment. However, you can also configure other member clusters to host Backup Daemon resources. Only S3-based backups are supported in multi-cluster MongoDB Ops Manager deployments.

Before you begin the deployment, install the following required tools:

Install the gcloud CLI and authorize it:

gcloud auth login

The kubectl MongoDB plugin automates the configuration of the Kubernetes clusters. This allows the Kubernetes Operator to deploy resources, the necessary roles, and service accounts for the MongoDB Ops Manager Application, the Application Database, and MongoDB resources on these clusters.

To install the kubectl mongodb plugin:

1

Download the desired version of the Kubernetes Operator package from the release page of the MongoDB Enterprise Kubernetes Operator repository.

The package's name uses this pattern: kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.

Use one of the following packages (a download example follows the list):

  • kubectl-mongodb_{{ .Version }}_darwin_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_darwin_arm64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_arm64.tar.gz
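For example, assuming you want version 1.26.0 of the plugin for Linux on amd64 (the tag and asset name below follow the pattern above and are illustrative; adjust them to the release and platform you actually need), the download might look like this:

# Hypothetical example: fetch the 1.26.0 linux_amd64 asset from the repository's release page.
curl -LO "https://github.com/mongodb/mongodb-enterprise-kubernetes/releases/download/1.26.0/kubectl-mongodb_1.26.0_linux_amd64.tar.gz"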

2

Unpack the package, as in the following example:

tar -zxvf kubectl-mongodb_<version>_darwin_amd64.tar.gz
3

Find the kubectl-mongodb binary in the unpacked directory and move it to its desired destination, inside the PATH of the Kubernetes Operator user, as shown in the following example:

mv kubectl-mongodb /usr/local/bin/kubectl-mongodb
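To confirm that kubectl can discover the plugin on your PATH, you can list the installed plugins:

# kubectl plugin list scans PATH for executables named kubectl-*.
kubectl plugin list
# Expect /usr/local/bin/kubectl-mongodb to appear in the output.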

You can now run the kubectl mongodb plugin using the following commands:

kubectl mongodb multicluster setup
kubectl mongodb multicluster recover

To learn more about the supported flags, see the MongoDB kubectl plugin reference.

Clone the MongoDB Enterprise Kubernetes Operator repository, change into the mongodb-enterprise-kubernetes directory, and check out the current version.

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
cd mongodb-enterprise-kubernetes
git checkout 1.26
cd public/samples/ops-manager-multi-cluster

Important

Some steps in this guide work only if you run them from the public/samples/ops-manager-multi-cluster directory.

All steps in this guide reference the environment variables defined in env_variables.sh.

export MDB_GKE_PROJECT="### Set your GKE project name here ###"

export NAMESPACE="mongodb"
export OPERATOR_NAMESPACE="mongodb-operator"

# comma-separated key=value pairs
# export OPERATOR_ADDITIONAL_HELM_VALUES=""

# Adjust the values for each Kubernetes cluster in your deployment.
# The deployment script references the following variables to get values for each cluster.
export K8S_CLUSTER_0="k8s-mdb-0"
export K8S_CLUSTER_0_ZONE="europe-central2-a"
export K8S_CLUSTER_0_NUMBER_OF_NODES=3
export K8S_CLUSTER_0_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_0_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_0_ZONE}_${K8S_CLUSTER_0}"

export K8S_CLUSTER_1="k8s-mdb-1"
export K8S_CLUSTER_1_ZONE="europe-central2-b"
export K8S_CLUSTER_1_NUMBER_OF_NODES=3
export K8S_CLUSTER_1_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_1_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_1_ZONE}_${K8S_CLUSTER_1}"

export K8S_CLUSTER_2="k8s-mdb-2"
export K8S_CLUSTER_2_ZONE="europe-central2-c"
export K8S_CLUSTER_2_NUMBER_OF_NODES=1
export K8S_CLUSTER_2_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_2_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_2_ZONE}_${K8S_CLUSTER_2}"

# Comment out the following line so that the script does not create preemptible nodes.
# DO NOT USE preemptible nodes in production.
export GKE_SPOT_INSTANCES_SWITCH="--preemptible"

export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store

# minio defaults
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"

export OPERATOR_HELM_CHART="mongodb/enterprise-operator"

# (Optional) Change the following setting when using the external URL.
# This env variable is used in OpenSSL configuration to generate
# server certificates for Ops Manager Application.
export OPS_MANAGER_EXTERNAL_DOMAIN="om-svc.${NAMESPACE}.svc.cluster.local"

export OPS_MANAGER_VERSION="7.0.4"
export APPDB_VERSION="7.0.9-ubi8"

Adjust the settings in the previous example to your needs, as instructed in the comments, and source them into your shell as follows:

source env_variables.sh

Important

Each time you update env_variables.sh, run source env_variables.sh to ensure that the scripts in this section use the updated variables.
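As a quick sanity check after sourcing the file, you can print the derived context names to confirm that the variables expanded with your GKE project name:

# Each of these should print a gke_<project>_<zone>_<cluster> context name.
echo "${K8S_CLUSTER_0_CONTEXT_NAME}"
echo "${K8S_CLUSTER_1_CONTEXT_NAME}"
echo "${K8S_CLUSTER_2_CONTEXT_NAME}"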

This procedure applies to deploying a MongoDB Ops Manager instance on multiple Kubernetes clusters.

1

You can skip this step if you already have your own Kubernetes clusters installed and configured with a service mesh.

  1. Create three GKE (Google Kubernetes Engine) clusters:

gcloud container clusters create "${K8S_CLUSTER_0}" \
  --zone="${K8S_CLUSTER_0_ZONE}" \
  --num-nodes="${K8S_CLUSTER_0_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_0_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

gcloud container clusters create "${K8S_CLUSTER_1}" \
  --zone="${K8S_CLUSTER_1_ZONE}" \
  --num-nodes="${K8S_CLUSTER_1_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_1_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

gcloud container clusters create "${K8S_CLUSTER_2}" \
  --zone="${K8S_CLUSTER_2_ZONE}" \
  --num-nodes="${K8S_CLUSTER_2_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_2_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}
  2. Obtain credentials and save contexts to the current kubeconfig file. By default, this file is located in the ~/.kube/config directory and referenced by the $KUBECONFIG environment variable.

gcloud container clusters get-credentials "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}"
gcloud container clusters get-credentials "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}"
gcloud container clusters get-credentials "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}"

     All kubectl commands reference these contexts using the following variables:

    • $K8S_CLUSTER_0_CONTEXT_NAME

    • $K8S_CLUSTER_1_CONTEXT_NAME

    • $K8S_CLUSTER_2_CONTEXT_NAME

  3. Verify that kubectl has access to the Kubernetes clusters:

echo "Nodes in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get nodes
echo; echo "Nodes in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" get nodes
echo; echo "Nodes in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" get nodes

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME                                       STATUS   ROLES    AGE    VERSION
gke-k8s-mdb-0-default-pool-d0f98a43-dtlc   Ready    <none>   65s    v1.28.7-gke.1026000
gke-k8s-mdb-0-default-pool-d0f98a43-q9sf   Ready    <none>   65s    v1.28.7-gke.1026000
gke-k8s-mdb-0-default-pool-d0f98a43-zn8x   Ready    <none>   64s    v1.28.7-gke.1026000

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME                                       STATUS   ROLES    AGE    VERSION
gke-k8s-mdb-1-default-pool-37ea602a-0qgw   Ready    <none>   111s   v1.28.7-gke.1026000
gke-k8s-mdb-1-default-pool-37ea602a-k4qk   Ready    <none>   114s   v1.28.7-gke.1026000
gke-k8s-mdb-1-default-pool-37ea602a-p2g7   Ready    <none>   113s   v1.28.7-gke.1026000

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
NAME                                       STATUS   ROLES    AGE    VERSION
gke-k8s-mdb-2-default-pool-4b459a09-t1v9   Ready    <none>   29s    v1.28.7-gke.1026000
  4. Install the Istio service mesh to allow cross-cluster DNS resolution and network connectivity between the Kubernetes clusters (an optional verification follows the script):

CTX_CLUSTER1=${K8S_CLUSTER_0_CONTEXT_NAME} \
CTX_CLUSTER2=${K8S_CLUSTER_1_CONTEXT_NAME} \
CTX_CLUSTER3=${K8S_CLUSTER_2_CONTEXT_NAME} \
ISTIO_VERSION="1.20.2" \
../multi-cluster/install_istio_separate_network.sh
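     Optionally, you can confirm that Istio came up on each member cluster. This check assumes the script installs Istio into the default istio-system namespace:

# List Istio control-plane and gateway pods in each member cluster.
for ctx in "${K8S_CLUSTER_0_CONTEXT_NAME}" "${K8S_CLUSTER_1_CONTEXT_NAME}" "${K8S_CLUSTER_2_CONTEXT_NAME}"; do
  echo "Istio pods in ${ctx}"
  kubectl --context "${ctx}" -n istio-system get pods
done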
2

Note

To enable sidecar injection in Istio, the following commands add the istio-injection=enabled label to the $OPERATOR_NAMESPACE and mongodb namespaces on each member cluster. If you use another service mesh, configure it to handle network traffic in the created namespaces.

  • Create a separate namespace, mongodb-operator, referenced by the $OPERATOR_NAMESPACE environment variable, for the Kubernetes Operator deployment.

  • Create the same $OPERATOR_NAMESPACE on each member Kubernetes cluster. This is needed so that the kubectl MongoDB plugin can create a service account for the Kubernetes Operator on each member cluster. The Kubernetes Operator uses these service accounts on the operator cluster to perform operations on each member cluster.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite
  • On each member cluster, including the member cluster that serves as the operator cluster, create another separate namespace, mongodb. The Kubernetes Operator uses this namespace for MongoDB Ops Manager resources and components.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite
3

This step is optional if you use the official Helm charts and images from the Quay registry.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
4

The following optional scripts verify whether the service mesh is configured correctly for cross-cluster DNS resolution and connectivity.

  1. Run this script on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver0
  template:
    metadata:
      labels:
        app: echoserver0
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver0
          ports:
            - containerPort: 8080
EOF
  2. Run this script on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver1
  template:
    metadata:
      labels:
        app: echoserver1
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver1
          ports:
            - containerPort: 8080
EOF
  3. Run this script on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver2
  template:
    metadata:
      labels:
        app: echoserver2
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver2
          ports:
            - containerPort: 8080
EOF
  4. Run this script to wait for the StatefulSets to be created:

kubectl wait --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver0-0 --timeout=60s
kubectl wait --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver1-0 --timeout=60s
kubectl wait --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver2-0 --timeout=60s
  5. Create a Pod service on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver0-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver0-0"
EOF
  6. Create a Pod service on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver1-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver1-0"
EOF
  7. Create a Pod service on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver2-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver2-0"
EOF
  8. Create a round-robin service on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver0
EOF
  9. Create a round-robin service on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver1
EOF
  10. Create a round-robin service on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver2
EOF
  11. Verify Pod 0 from cluster 1:

source_cluster=${K8S_CLUSTER_1_CONTEXT_NAME}
target_pod="echoserver0-0"
source_pod="echoserver1-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Checking cross-cluster DNS resolution and connectivity from echoserver1-0 in gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1 to echoserver0-0
SUCCESS
  12. Verify Pod 1 from cluster 0:

source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
target_pod="echoserver1-0"
source_pod="echoserver0-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver1-0
SUCCESS
  13. Verify Pod 1 from cluster 2:

source_cluster=${K8S_CLUSTER_2_CONTEXT_NAME}
target_pod="echoserver1-0"
source_pod="echoserver2-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Checking cross-cluster DNS resolution and connectivity from echoserver2-0 in gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2 to echoserver1-0
SUCCESS
  14. Verify Pod 2 from cluster 0:

source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
target_pod="echoserver2-0"
source_pod="echoserver0-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver2-0
SUCCESS
  15. Run the cleanup script:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver0
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver1
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver2
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver0-0
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver1-0
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver2-0
5

In this step, you use the kubectl mongodb plugin to automate the Kubernetes cluster configuration that the Kubernetes Operator needs to manage workloads on multiple Kubernetes clusters.

Because you configure the Kubernetes clusters before installing the Kubernetes Operator, when you deploy the Kubernetes Operator for multi-Kubernetes-cluster operation, all the necessary multi-cluster configuration is already in place.

As stated in the Overview, the Kubernetes Operator has the configuration for three member clusters that you can use to deploy MongoDB Ops Manager and MongoDB databases. The first cluster also serves as the operator cluster, where you install the Kubernetes Operator and deploy the custom resources.

kubectl mongodb multicluster setup \
  --central-cluster="${K8S_CLUSTER_0_CONTEXT_NAME}" \
  --member-clusters="${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}" \
  --member-cluster-namespace="${NAMESPACE}" \
  --central-cluster-namespace="${OPERATOR_NAMESPACE}" \
  --create-service-account-secrets \
  --install-database-roles=true \
  --image-pull-secrets=image-registries-secret

Build: 1f23ae48c41d208f14c860356e483ba386a3aab8, 2024-04-26T12:19:36Z
Ensured namespaces exist in all clusters.
creating central cluster roles in cluster: gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
Ensured ServiceAccounts and Roles.
Creating KubeConfig secret mongodb-operator/mongodb-enterprise-operator-multi-cluster-kubeconfig in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
Ensured database Roles in member clusters.
Creating Member list Configmap mongodb-operator/mongodb-enterprise-operator-member-list in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
6
  1. Install the Kubernetes Operator in the $OPERATOR_NAMESPACE, configured to watch $NAMESPACE and to manage the three member Kubernetes clusters. At this point in the procedure, the kubectl mongodb plugin has already deployed the ServiceAccounts and roles. Therefore, the following scripts skip configuring them and set operator.createOperatorServiceAccount=false and operator.createResourcesServiceAccountsAndRoles=false. The scripts specify the multiCluster.clusters setting to instruct the Helm chart to deploy the Kubernetes Operator in multi-cluster mode.

helm upgrade --install \
  --debug \
  --kube-context "${K8S_CLUSTER_0_CONTEXT_NAME}" \
  mongodb-enterprise-operator-multi-cluster \
  "${OPERATOR_HELM_CHART}" \
  --namespace="${OPERATOR_NAMESPACE}" \
  --set namespace="${OPERATOR_NAMESPACE}" \
  --set operator.namespace="${OPERATOR_NAMESPACE}" \
  --set operator.watchNamespace="${NAMESPACE}" \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set operator.createResourcesServiceAccountsAndRoles=false \
  --set "multiCluster.clusters={${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}}" \
  --set "${OPERATOR_ADDITIONAL_HELM_VALUES:-"dummy=value"}"
Release "mongodb-enterprise-operator-multi-cluster" does not exist. Installing it now.
NAME: mongodb-enterprise-operator-multi-cluster
LAST DEPLOYED: Tue Apr 30 19:40:26 2024
NAMESPACE: mongodb-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
dummy: value
multiCluster:
  clusters:
  - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
  - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
  - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
namespace: mongodb-operator
operator:
  createOperatorServiceAccount: false
  createResourcesServiceAccountsAndRoles: false
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
  watchNamespace: mongodb

COMPUTED VALUES:
agent:
  name: mongodb-agent-ubi
  version: 107.0.0.8502-1
database:
  name: mongodb-enterprise-database-ubi
  version: 1.25.0
dummy: value
initAppDb:
  name: mongodb-enterprise-init-appdb-ubi
  version: 1.25.0
initDatabase:
  name: mongodb-enterprise-init-database-ubi
  version: 1.25.0
initOpsManager:
  name: mongodb-enterprise-init-ops-manager-ubi
  version: 1.25.0
managedSecurityContext: false
mongodb:
  appdbAssumeOldFormat: false
  imageType: ubi8
  name: mongodb-enterprise-server
  repo: quay.io/mongodb
mongodbLegacyAppDb:
  name: mongodb-enterprise-appdb-database-ubi
  repo: quay.io/mongodb
multiCluster:
  clusterClientTimeout: 10
  clusters:
  - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
  - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
  - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
  kubeConfigSecretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
  performFailOver: true
namespace: mongodb-operator
operator:
  additionalArguments: []
  affinity: {}
  createOperatorServiceAccount: false
  createResourcesServiceAccountsAndRoles: false
  deployment_name: mongodb-enterprise-operator
  env: prod
  mdbDefaultArchitecture: non-static
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
  nodeSelector: {}
  operator_image_name: mongodb-enterprise-operator-ubi
  replicas: 1
  resources:
    limits:
      cpu: 1100m
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 200Mi
  tolerations: []
  vaultSecretBackend:
    enabled: false
    tlsSecretRef: ""
  version: 1.25.0
  watchNamespace: mongodb
  watchedResources:
  - mongodb
  - opsmanagers
  - mongodbusers
  webhook:
    registerConfiguration: true
opsManager:
  name: mongodb-enterprise-ops-manager-ubi
registry:
  agent: quay.io/mongodb
  appDb: quay.io/mongodb
  database: quay.io/mongodb
  imagePullSecrets: null
  initAppDb: quay.io/mongodb
  initDatabase: quay.io/mongodb
  initOpsManager: quay.io/mongodb
  operator: quay.io/mongodb
  opsManager: quay.io/mongodb
  pullPolicy: Always
subresourceEnabled: true

HOOKS:
MANIFEST:
---
# Source: enterprise-operator/templates/operator-roles.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongodb-enterprise-operator-mongodb-webhook
rules:
  - apiGroups:
      - "admissionregistration.k8s.io"
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - create
      - update
      - delete
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - delete
---
# Source: enterprise-operator/templates/operator-roles.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongodb-enterprise-operator-multi-cluster-mongodb-operator-webhook-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mongodb-enterprise-operator-mongodb-webhook
subjects:
  - kind: ServiceAccount
    name: mongodb-enterprise-operator-multi-cluster
    namespace: mongodb-operator
---
# Source: enterprise-operator/templates/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
      app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
        app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
    spec:
      serviceAccountName: mongodb-enterprise-operator-multi-cluster
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      containers:
        - name: mongodb-enterprise-operator-multi-cluster
          image: "quay.io/mongodb/mongodb-enterprise-operator-ubi:1.25.0"
          imagePullPolicy: Always
          args:
            - -watch-resource=mongodb
            - -watch-resource=opsmanagers
            - -watch-resource=mongodbusers
            - -watch-resource=mongodbmulticluster
          command:
            - /usr/local/bin/mongodb-enterprise-operator
          volumeMounts:
            - mountPath: /etc/config/kubeconfig
              name: kube-config-volume
          resources:
            limits:
              cpu: 1100m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 200Mi
          env:
            - name: OPERATOR_ENV
              value: prod
            - name: MDB_DEFAULT_ARCHITECTURE
              value: non-static
            - name: WATCH_NAMESPACE
              value: "mongodb"
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CLUSTER_CLIENT_TIMEOUT
              value: "10"
            - name: IMAGE_PULL_POLICY
              value: Always
            # Database
            - name: MONGODB_ENTERPRISE_DATABASE_IMAGE
              value: quay.io/mongodb/mongodb-enterprise-database-ubi
            - name: INIT_DATABASE_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-init-database-ubi
            - name: INIT_DATABASE_VERSION
              value: 1.25.0
            - name: DATABASE_VERSION
              value: 1.25.0
            # Ops Manager
            - name: OPS_MANAGER_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-ops-manager-ubi
            - name: INIT_OPS_MANAGER_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-init-ops-manager-ubi
            - name: INIT_OPS_MANAGER_VERSION
              value: 1.25.0
            # AppDB
            - name: INIT_APPDB_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-init-appdb-ubi
            - name: INIT_APPDB_VERSION
              value: 1.25.0
            - name: OPS_MANAGER_IMAGE_PULL_POLICY
              value: Always
            - name: AGENT_IMAGE
              value: "quay.io/mongodb/mongodb-agent-ubi:107.0.0.8502-1"
            - name: MDB_AGENT_IMAGE_REPOSITORY
              value: "quay.io/mongodb/mongodb-agent-ubi"
            - name: MONGODB_IMAGE
              value: mongodb-enterprise-server
            - name: MONGODB_REPO_URL
              value: quay.io/mongodb
            - name: MDB_IMAGE_TYPE
              value: ubi8
            - name: PERFORM_FAILOVER
              value: 'true'
      volumes:
        - name: kube-config-volume
          secret:
            defaultMode: 420
            secretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
  2. Verify the Kubernetes Operator deployment:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" rollout status deployment/mongodb-enterprise-operator-multi-cluster
echo "Operator deployment in ${OPERATOR_NAMESPACE} namespace"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get deployments
echo; echo "Operator pod in ${OPERATOR_NAMESPACE} namespace"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get pods

Waiting for deployment "mongodb-enterprise-operator-multi-cluster" rollout to finish: 0 of 1 updated replicas are available...
deployment "mongodb-enterprise-operator-multi-cluster" successfully rolled out
Operator deployment in mongodb-operator namespace
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
mongodb-enterprise-operator-multi-cluster   1/1     1            1           12s

Operator pod in mongodb-operator namespace
NAME                                                         READY   STATUS    RESTARTS     AGE
mongodb-enterprise-operator-multi-cluster-78cc97547d-nlgds   2/2     Running   1 (3s ago)   12s
7

In this step, you enable TLS for the Application Database and the MongoDB Ops Manager Application. If you don't want to use TLS, remove the TLS-related fields from the MongoDBOpsManager resources used in this procedure: the spec.security and spec.applicationDatabase.security settings (including the certsSecretPrefix and tls.ca values shown in the manifests below).

  1. Optional. Generate keys and certificates:

     Use the openssl command-line tool to generate self-signed CAs and certificates for testing purposes. (An optional verification step follows the script.)

mkdir certs || true

cat <<EOF >certs/ca.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=exampleCA
EOF

cat <<EOF >certs/om.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=${OPS_MANAGER_EXTERNAL_DOMAIN}

[ req_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ alt_names ]
DNS.1 = ${OPS_MANAGER_EXTERNAL_DOMAIN}
DNS.2 = om-svc.${NAMESPACE}.svc.cluster.local
EOF

cat <<EOF >certs/appdb.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=AppDB

[ req_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ alt_names ]
# multi-cluster mongod hostnames from service for each pod
DNS.1 = *.${NAMESPACE}.svc.cluster.local
# single-cluster mongod hostnames from headless service
DNS.2 = *.om-db-svc.${NAMESPACE}.svc.cluster.local
EOF

# generate CA keypair and certificate
openssl genrsa -out certs/ca.key 2048
openssl req -x509 -new -nodes -key certs/ca.key -days 1024 -out certs/ca.crt -config certs/ca.cnf

# generate OpsManager's keypair and certificate
openssl genrsa -out certs/om.key 2048
openssl req -new -key certs/om.key -out certs/om.csr -config certs/om.cnf

# generate AppDB's keypair and certificate
openssl genrsa -out certs/appdb.key 2048
openssl req -new -key certs/appdb.key -out certs/appdb.csr -config certs/appdb.cnf

# generate certificates signed by CA for OpsManager and AppDB
openssl x509 -req -in certs/om.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/om.crt -days 365 -sha256 -extfile certs/om.cnf -extensions req_ext
openssl x509 -req -in certs/appdb.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/appdb.crt -days 365 -sha256 -extfile certs/appdb.cnf -extensions req_ext
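     Optionally, before creating the secrets, you can verify that both server certificates chain back to the test CA:

# Should print "certs/om.crt: OK" and "certs/appdb.crt: OK".
openssl verify -CAfile certs/ca.crt certs/om.crt certs/appdb.crt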
  2. Create secrets with the TLS keys:

     If you prefer to use your own keys and certificates, skip the previous generation step and place the keys and certificates in the following files:

    • certs/ca.crt - the CA certificate. It is not needed when you use trusted certificates.

    • certs/appdb.key - the private key for the Application Database.

    • certs/appdb.crt - the certificate for the Application Database.

    • certs/om.key - the private key for MongoDB Ops Manager.

    • certs/om.crt - the certificate for MongoDB Ops Manager.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-cert \
  --cert=certs/om.crt \
  --key=certs/om.key

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-db-cert \
  --cert=certs/appdb.crt \
  --key=certs/appdb.key

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap om-cert-ca --from-file="mms-ca.crt=certs/ca.crt"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap appdb-cert-ca --from-file="ca-pem=certs/ca.crt"
8

At this point, you have prepared the environment and the Kubernetes Operator to deploy the MongoDB Ops Manager resource.

  1. Create the necessary credentials for the MongoDB Ops Manager admin user, which the Kubernetes Operator creates after deploying the MongoDB Ops Manager Application instance:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${NAMESPACE}" create secret generic om-admin-user-credentials \
  --from-literal=Username="admin" \
  --from-literal=Password="Passw0rd@" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe"
  2. Deploy the simplest possible MongoDBOpsManager custom resource (with TLS enabled) on a single member cluster, which is also known as the operator cluster.

     This deployment is almost the same as the one for single-cluster mode, but with spec.topology and spec.applicationDatabase.topology set to MultiCluster.

     Deploying this way shows that a single-Kubernetes-cluster deployment is a special case of a multi-Kubernetes-cluster deployment on a single member Kubernetes cluster. You can start deploying the MongoDB Ops Manager Application and the Application Database on as many Kubernetes clusters as necessary from the start, and you don't have to begin with a deployment on only a single-node Kubernetes cluster.

     At this point, you have prepared the MongoDB Ops Manager deployment to span more than one Kubernetes cluster, which you will do later in this procedure.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
  backup:
    enabled: false
EOF
  3. Wait for the Kubernetes Operator to pick up the work and reach the status.applicationDatabase.phase=Pending state. Wait for both the Application Database and MongoDB Ops Manager deployments to complete.

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
  4. Deploy MongoDB Ops Manager. The Kubernetes Operator deploys MongoDB Ops Manager by performing the following steps. It:

    • Deploys the Application Database's replica set nodes and waits for the MongoDB processes in the replica set to start running.

    • Deploys the MongoDB Ops Manager Application instance with the Application Database's connection string and waits for it to become ready.

    • Adds the Monitoring MongoDB Agent containers to each Application Database Pod.

    • Waits for both the MongoDB Ops Manager Application and the Application Database Pods to start running.

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Application Database to reach Pending phase (enabling monitoring)..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Application Database to reach Pending phase (enabling monitoring)...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Disabled         11m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          8m39s
om-db-0-0   4/4     Running   0          44s
om-db-0-1   4/4     Running   0          2m6s
om-db-0-2   4/4     Running   0          3m19s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1

     Now that you've deployed a single-member cluster in multi-cluster mode, you can reconfigure the deployment to span more than one Kubernetes cluster.
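     If you want to take a look at the Ops Manager UI at this point, one option (not part of the procedure itself) is to port-forward the om-svc service from the operator cluster. Because the server certificate is signed by your test CA, the browser warns unless you import certs/ca.crt:

# Forward local port 8443 to the Ops Manager service (HTTPS runs on port 8443 by default).
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" port-forward svc/om-svc 8443:8443
# Then open https://localhost:8443 and log in with the om-admin-user-credentials values.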

  5. On the second member cluster, deploy two additional Application Database replica set members and an additional instance of the MongoDB Ops Manager Application:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
  backup:
    enabled: false
EOF
  6. Wait for the Kubernetes Operator to pick up the work (Pending phase):

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
  7. Wait for the Kubernetes Operator to finish deploying all components:

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Pending              Running         Disabled         14m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS        RESTARTS   AGE
om-0-0      2/2     Terminating   0          12m
om-db-0-0   4/4     Running       0          4m12s
om-db-0-1   4/4     Running       0          5m34s
om-db-0-2   4/4     Running       0          6m47s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME        READY   STATUS     RESTARTS   AGE
om-1-0      0/2     Init:0/2   0          0s
om-db-1-0   4/4     Running    0          3m24s
om-db-1-1   4/4     Running    0          104s
9

In a multi-Kubernetes-cluster deployment of the MongoDB Ops Manager Application, you can configure only S3-based backup storage. This procedure refers to the S3_* variables defined in env_variables.sh.

  1. Optional. Install the MinIO Operator.

     This procedure deploys S3-compatible storage for your backups using the MinIO Operator. You can skip this step if you have Amazon Web Services S3 or other S3-compatible buckets available. In that case, adjust the S3_* variables in env_variables.sh accordingly.

kustomize build "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

kustomize build "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

# add two buckets to the tenant config
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
  --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
  2. Before configuring and enabling backup, create the following secrets:

    • s3-access-secret - contains the S3 credentials.

    • s3-ca-cert - contains the CA certificate that issued the bucket's server certificate. In the sample MinIO deployment used in this procedure, the default Kubernetes Root CA certificate signs the server certificate. Because it's not a publicly trusted CA certificate, you must provide it so that MongoDB Ops Manager can trust the connection.

     If you use publicly trusted certificates, you may skip this step and remove the values from the spec.backup.s3Stores.customCertificateSecretRefs and spec.backup.s3OpLogStores.customCertificateSecretRefs settings.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-access-secret \
  --from-literal=accessKey="${S3_ACCESS_KEY}" \
  --from-literal=secretKey="${S3_SECRET_KEY}"

# minio TLS secrets are signed with the default k8s root CA
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-ca-cert \
  --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
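     Optionally, you can decode the stored certificate to confirm the secret contains the CA you expect:

# Print the subject and validity dates of the CA certificate held in s3-ca-cert.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secret s3-ca-cert \
  -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject -dates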
10
  1. The Kubernetes Operator can configure and deploy all components, the MongoDB Ops Manager Application, the Backup Daemon instances, and the Application Database's replica set nodes, in any combination on any member clusters for which you configure the Kubernetes Operator.

     To illustrate the flexibility of the multi-Kubernetes-cluster deployment configuration, deploy only one Backup Daemon instance on the third member cluster and specify zero Backup Daemon members for the first and second clusters.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
      members: 0
      backup:
        members: 1
  configuration: # to avoid the configuration wizard on first login
    mms.adminEmailAddr: email@example.com
    mms.fromEmailAddr: email@example.com
    mms.ignoreInitialUiSetup: "true"
    mms.mail.hostname: smtp@example.com
    mms.mail.port: "465"
    mms.mail.ssl: "true"
    mms.mail.transport: smtp
    mms.minimumTLSVersion: TLSv1.2
    mms.replyToEmailAddr: email@example.com
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
  backup:
    enabled: true
    s3Stores:
      - name: my-s3-block-store
        s3SecretRef:
          name: "s3-access-secret"
        pathStyleAccessEnabled: true
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    s3OpLogStores:
      - name: my-s3-oplog-store
        s3SecretRef:
          name: "s3-access-secret"
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
        pathStyleAccessEnabled: true
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
EOF
  2. Wait until the Kubernetes Operator finishes its configuration:

echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Running          22m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          7m10s
om-db-0-0   4/4     Running   0          11m
om-db-0-1   4/4     Running   0          13m
om-db-0-2   4/4     Running   0          14m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME        READY   STATUS    RESTARTS   AGE
om-1-0      2/2     Running   0          4m8s
om-db-1-0   4/4     Running   0          11m
om-db-1-1   4/4     Running   0          9m25s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
NAME                   READY   STATUS    RESTARTS   AGE
om-2-backup-daemon-0   2/2     Running   0          2m5s
11

Run the following script to delete the GKE clusters and clean up your environment.

Important

The following commands are not reversible. They delete all clusters referenced in env_variables.sh. Don't run these commands if you wish to retain the GKE clusters, for example, if you didn't create the GKE clusters as part of this procedure.

yes | gcloud container clusters delete "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}" &
wait