
Deploy MongoDB Ops Manager Resources on Multiple Kubernetes Clusters

On this page

  • Overview
  • Prerequisites
  • Procedure

To make your MongoDB Ops Manager Application and its Application Database resilient to entire data center or zone failures, deploy the MongoDB Ops Manager Application and the Application Database on multiple Kubernetes clusters.

To learn more about the architecture, networking, limitations, and performance of multi-Kubernetes-cluster deployments for MongoDB Ops Manager resources, see:

  • Multi-Cluster MongoDB Ops Manager Architecture

  • Limitations

  • Differences between single- and multi-cluster MongoDB Ops Manager deployments

  • Architecture Diagram

  • Networking, Load Balancer, Service Mesh

  • Performance

When you deploy the MongoDB Ops Manager Application and the Application Database using the procedure in this section, you:

  1. Use GKE (Google Kubernetes Engine) and the Istio service mesh as tools that help demonstrate the deployment on multiple Kubernetes clusters.

  2. Install the Kubernetes Operator on one of the member Kubernetes clusters, known as the operator cluster. The operator cluster acts as a hub in the "hub and spoke" pattern that the Kubernetes Operator uses to manage deployments on multiple Kubernetes clusters.

  3. Deploy the operator cluster in the $OPERATOR_NAMESPACE and configure this cluster to watch $NAMESPACE and manage all member Kubernetes clusters.

  4. Deploy the Application Database and the MongoDB Ops Manager Application on a single member Kubernetes cluster to demonstrate how similar a multi-cluster deployment is to a single-cluster one. A single-cluster deployment with spec.topology and spec.applicationDatabase.topology set to MultiCluster prepares the deployment for adding more Kubernetes clusters to it.

  5. Deploy an additional Application Database replica set on the second Kubernetes cluster to improve the Application Database's resilience. You also deploy an additional MongoDB Ops Manager Application instance on the second member Kubernetes cluster.

  6. Create valid certificates for TLS encryption, and establish TLS-encrypted connections to and from the MongoDB Ops Manager Application and between the Application Database's replica set members. When running over HTTPS, MongoDB Ops Manager runs on port 8443 by default.

  7. Enable backup using S3-compatible storage and deploy the Backup Daemon on the third member Kubernetes cluster. To simplify setting up S3-compatible storage buckets, you deploy the MinIO Operator. You enable the Backup Daemon on only one member cluster in your deployment. However, you can also configure other member clusters to host Backup Daemon resources. Only S3 backups are supported in multi-cluster MongoDB Ops Manager deployments.
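The topology settings called out above live in the MongoDBOpsManager custom resource. The following is a trimmed sketch, not a complete resource; the metadata name is illustrative, and the version values are the ones this guide's env_variables.sh uses:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om            # illustrative name
spec:
  topology: MultiCluster              # multi-cluster Ops Manager Application
  version: 7.0.4
  applicationDatabase:
    topology: MultiCluster            # multi-cluster Application Database
    version: 7.0.9-ubi8
```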

Before you begin the deployment, install the following required tools:

Install the gcloud CLI and authorize it:

gcloud auth login

The MongoDB kubectl plugin automates the configuration of the Kubernetes clusters. This allows the Kubernetes Operator to deploy the resources, necessary roles, and service accounts for the MongoDB Ops Manager Application, the Application Database, and MongoDB resources on these clusters.

To install the kubectl mongodb plugin:

1

Download the desired Kubernetes Operator package version from the release page of the MongoDB Enterprise Kubernetes Operator repository.

The package's name uses this pattern: kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.

Use one of the following packages:

  • kubectl-mongodb_{{ .Version }}_darwin_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_darwin_arm64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_arm64.tar.gz
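On Linux and macOS you can derive the right archive name from `uname`. A minimal sketch, assuming the naming pattern above; the `VERSION` value is only a placeholder for the release you actually download:

```shell
# Map the local platform to the release-asset naming scheme
# kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.
# VERSION is a placeholder; substitute the release you chose.
VERSION="1.27.0"

detect_os() {
  case "$(uname -s)" in
    Darwin) echo "darwin" ;;
    Linux)  echo "linux" ;;
    *)      echo "unsupported OS" >&2; return 1 ;;
  esac
}

detect_arch() {
  case "$(uname -m)" in
    x86_64)        echo "amd64" ;;
    arm64|aarch64) echo "arm64" ;;
    *)             echo "unsupported arch" >&2; return 1 ;;
  esac
}

# Print the package name matching this machine.
package_name() {
  echo "kubectl-mongodb_${VERSION}_$(detect_os)_$(detect_arch).tar.gz"
}

package_name
```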

2

Unpack the package, as in the following example:

tar -zxvf kubectl-mongodb_<version>_darwin_amd64.tar.gz
3

Find the kubectl-mongodb binary in the unpacked directory and move it to its desired destination, inside the PATH for the Kubernetes Operator user, as shown in the following example:

mv kubectl-mongodb /usr/local/bin/kubectl-mongodb
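Before invoking the plugin, it can help to confirm the binary is actually discoverable; a small sketch (the check itself is an addition to this guide, not part of the official steps):

```shell
# Verify that the kubectl-mongodb binary is on PATH, so that kubectl
# can discover it as the `kubectl mongodb` plugin.
verify_plugin_on_path() {
  if command -v kubectl-mongodb >/dev/null 2>&1; then
    echo "kubectl-mongodb found at: $(command -v kubectl-mongodb)"
  else
    echo "kubectl-mongodb not found on PATH; check the install location" >&2
    return 1
  fi
}
```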

Now you can run the kubectl mongodb plugin using the following commands:

kubectl mongodb multicluster setup
kubectl mongodb multicluster recover

To learn more about the supported flags, see the MongoDB kubectl plugin Reference.

Clone the MongoDB Enterprise Kubernetes Operator repository, change into the mongodb-enterprise-kubernetes directory, and check out the current version.

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
cd mongodb-enterprise-kubernetes
git checkout master
cd public/samples/ops-manager-multi-cluster

Important

Some steps in this guide work only if you run them from the public/samples/ops-manager-multi-cluster directory.

All steps in this guide reference the environment variables defined in env_variables.sh.

export MDB_GKE_PROJECT="### Set your GKE project name here ###"

export NAMESPACE="mongodb"
export OPERATOR_NAMESPACE="mongodb-operator"

# comma-separated key=value pairs
# export OPERATOR_ADDITIONAL_HELM_VALUES=""

# Adjust the values for each Kubernetes cluster in your deployment.
# The deployment script references the following variables to get values for each cluster.
export K8S_CLUSTER_0="k8s-mdb-0"
export K8S_CLUSTER_0_ZONE="europe-central2-a"
export K8S_CLUSTER_0_NUMBER_OF_NODES=3
export K8S_CLUSTER_0_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_0_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_0_ZONE}_${K8S_CLUSTER_0}"

export K8S_CLUSTER_1="k8s-mdb-1"
export K8S_CLUSTER_1_ZONE="europe-central2-b"
export K8S_CLUSTER_1_NUMBER_OF_NODES=3
export K8S_CLUSTER_1_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_1_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_1_ZONE}_${K8S_CLUSTER_1}"

export K8S_CLUSTER_2="k8s-mdb-2"
export K8S_CLUSTER_2_ZONE="europe-central2-c"
export K8S_CLUSTER_2_NUMBER_OF_NODES=1
export K8S_CLUSTER_2_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_2_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_2_ZONE}_${K8S_CLUSTER_2}"

# Comment out the following line so that the script does not create preemptible nodes.
# DO NOT USE preemptible nodes in production.
export GKE_SPOT_INSTANCES_SWITCH="--preemptible"

export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store

# minio defaults
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"

export OFFICIAL_OPERATOR_HELM_CHART="mongodb/enterprise-operator"
export OPERATOR_HELM_CHART="${OFFICIAL_OPERATOR_HELM_CHART}"

# (Optional) Change the following setting when using the external URL.
# This env variable is used in OpenSSL configuration to generate
# server certificates for Ops Manager Application.
export OPS_MANAGER_EXTERNAL_DOMAIN="om-svc.${NAMESPACE}.svc.cluster.local"

export OPS_MANAGER_VERSION="7.0.4"
export APPDB_VERSION="7.0.9-ubi8"

Adjust the settings in the previous example to your needs, as instructed in the comments, and source them into your shell as follows:

source env_variables.sh

Important

Each time you update env_variables.sh, run source env_variables.sh to ensure that the scripts in this section use the updated variables.
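Sourcing the file leaves any forgotten value silently empty, so a quick guard before running the later scripts can help. A minimal sketch, assuming bash; the `required_vars` list below is illustrative, not exhaustive:

```shell
# Fail fast if any variable the deployment scripts rely on is still
# unset or empty after `source env_variables.sh`.
required_vars="MDB_GKE_PROJECT NAMESPACE OPERATOR_NAMESPACE \
  K8S_CLUSTER_0_CONTEXT_NAME K8S_CLUSTER_1_CONTEXT_NAME K8S_CLUSTER_2_CONTEXT_NAME"

check_env() {
  missing=0
  for v in $required_vars; do
    # ${!v} is bash indirect expansion: the value of the variable named in v.
    if [ -z "${!v:-}" ]; then
      echo "Missing required variable: $v" >&2
      missing=1
    fi
  done
  return $missing
}
```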

This procedure applies to deploying a MongoDB Ops Manager instance on multiple Kubernetes clusters.

1

You can skip this step if you already installed and configured your own Kubernetes clusters with a service mesh.

  1. Create three GKE (Google Kubernetes Engine) clusters:

    gcloud container clusters create "${K8S_CLUSTER_0}" \
      --zone="${K8S_CLUSTER_0_ZONE}" \
      --num-nodes="${K8S_CLUSTER_0_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_0_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}
    gcloud container clusters create "${K8S_CLUSTER_1}" \
      --zone="${K8S_CLUSTER_1_ZONE}" \
      --num-nodes="${K8S_CLUSTER_1_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_1_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}
    gcloud container clusters create "${K8S_CLUSTER_2}" \
      --zone="${K8S_CLUSTER_2_ZONE}" \
      --num-nodes="${K8S_CLUSTER_2_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_2_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}
  2. Set your default gcloud project:

    gcloud config set project "${MDB_GKE_PROJECT}"
  3. Obtain credentials and save contexts to the current kubeconfig file. By default, this file is located in the ~/.kube/config directory and referenced by the $KUBECONFIG environment variable.

    gcloud container clusters get-credentials "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}"
    gcloud container clusters get-credentials "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}"
    gcloud container clusters get-credentials "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}"

    All kubectl commands reference these contexts using the following variables:

    • $K8S_CLUSTER_0_CONTEXT_NAME

    • $K8S_CLUSTER_1_CONTEXT_NAME

    • $K8S_CLUSTER_2_CONTEXT_NAME

  4. Verify that kubectl has access to the Kubernetes clusters:

    echo "Nodes in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get nodes
    echo; echo "Nodes in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" get nodes
    echo; echo "Nodes in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" get nodes

    Nodes in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    NAME STATUS ROLES AGE VERSION
    gke-k8s-mdb-0-default-pool-267f1e8f-d0dg Ready <none> 38m v1.29.7-gke.1104000
    gke-k8s-mdb-0-default-pool-267f1e8f-pmgh Ready <none> 38m v1.29.7-gke.1104000
    gke-k8s-mdb-0-default-pool-267f1e8f-vgj9 Ready <none> 38m v1.29.7-gke.1104000

    Nodes in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    NAME STATUS ROLES AGE VERSION
    gke-k8s-mdb-1-default-pool-263d341f-3tbp Ready <none> 38m v1.29.7-gke.1104000
    gke-k8s-mdb-1-default-pool-263d341f-4f26 Ready <none> 38m v1.29.7-gke.1104000
    gke-k8s-mdb-1-default-pool-263d341f-z751 Ready <none> 38m v1.29.7-gke.1104000

    Nodes in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    NAME STATUS ROLES AGE VERSION
    gke-k8s-mdb-2-default-pool-d0da5fd1-chm1 Ready <none> 38m v1.29.7-gke.1104000
  5. Install the Istio service mesh to allow cross-cluster DNS resolution and network connectivity between the Kubernetes clusters:

    CTX_CLUSTER1=${K8S_CLUSTER_0_CONTEXT_NAME} \
    CTX_CLUSTER2=${K8S_CLUSTER_1_CONTEXT_NAME} \
    CTX_CLUSTER3=${K8S_CLUSTER_2_CONTEXT_NAME} \
    ISTIO_VERSION="1.20.2" \
    ../multi-cluster/install_istio_separate_network.sh
2

Note

To enable sidecar injection in Istio, the following commands add the istio-injection=enabled labels to the $OPERATOR_NAMESPACE and the mongodb namespaces in each member cluster. If you use another service mesh, configure it to handle network traffic in the created namespaces.

  • Create a separate namespace, mongodb-operator, referenced by the $OPERATOR_NAMESPACE environment variable, for the Kubernetes Operator deployment.

  • Create the same $OPERATOR_NAMESPACE on each member Kubernetes cluster. This is needed so that the kubectl MongoDB plugin can create a service account for the Kubernetes Operator on each member cluster. The Kubernetes Operator uses these service accounts on the operator cluster to perform operations on each member cluster.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite
  • On each member cluster, including the member cluster serving as the operator cluster, create another, separate namespace, mongodb. The Kubernetes Operator uses this namespace for MongoDB Ops Manager resources and components.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite
3

This step is optional if you use the official charts and images from the Quay registry.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
4

The following optional scripts verify whether the service mesh is configured correctly for cross-cluster DNS resolution and connectivity.

  1. Run this script on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver0
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver0
      template:
        metadata:
          labels:
            app: echoserver0
        spec:
          containers:
            - image: k8s.gcr.io/echoserver:1.10
              imagePullPolicy: Always
              name: echoserver0
              ports:
                - containerPort: 8080
    EOF
  2. Run this script on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver1
      template:
        metadata:
          labels:
            app: echoserver1
        spec:
          containers:
            - image: k8s.gcr.io/echoserver:1.10
              imagePullPolicy: Always
              name: echoserver1
              ports:
                - containerPort: 8080
    EOF
  3. Run this script on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver2
      template:
        metadata:
          labels:
            app: echoserver2
        spec:
          containers:
            - image: k8s.gcr.io/echoserver:1.10
              imagePullPolicy: Always
              name: echoserver2
              ports:
                - containerPort: 8080
    EOF
  4. Run this script to wait for the StatefulSets to be created:

    kubectl wait --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver0-0 --timeout=60s
    kubectl wait --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver1-0 --timeout=60s
    kubectl wait --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver2-0 --timeout=60s
  5. Create the Pod service on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver0-0
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver0-0"
    EOF
  6. Create the Pod service on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver1-0
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver1-0"
    EOF
  7. Create the Pod service on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver2-0
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver2-0"
    EOF
  8. Create the round-robin service on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: echoserver0
    EOF
  9. Create the round-robin service on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: echoserver1
    EOF
  10. Create the round-robin service on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: echoserver2
    EOF
  11. Verify Pod 0 from cluster 1:

    source_cluster=${K8S_CLUSTER_1_CONTEXT_NAME}
    target_pod="echoserver0-0"
    source_pod="echoserver1-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver1-0 in gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1 to echoserver0-0
    SUCCESS
  12. Verify Pod 1 from cluster 0:

    source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
    target_pod="echoserver1-0"
    source_pod="echoserver0-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver1-0
    SUCCESS
  13. Verify Pod 1 from cluster 2:

    source_cluster=${K8S_CLUSTER_2_CONTEXT_NAME}
    target_pod="echoserver1-0"
    source_pod="echoserver2-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver2-0 in gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2 to echoserver1-0
    SUCCESS
  14. Verify Pod 2 from cluster 0:

    source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
    target_pod="echoserver2-0"
    source_pod="echoserver0-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver2-0
    SUCCESS
  15. Run the cleanup script:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver0
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver1
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver2
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver0-0
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver1-0
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver2-0
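The pairwise checks above all repeat one pattern, so they can be folded into a shell function. A sketch under the same assumptions as the steps above (the contexts, pod names, and the echoserver response format come from this guide; the function names are illustrative):

```shell
# Generalized form of the pairwise connectivity checks above.

# Build the in-cluster URL for a target Pod's per-Pod service.
# $1 = target pod name, $2 = namespace
target_url_for() {
  echo "http://$1.$2.svc.cluster.local:8080"
}

# Run curl from a source Pod and grep the echoserver response for the
# target Pod's hostname. Requires the clusters and Pods from the steps above.
# $1 = source context, $2 = source pod, $3 = target pod, $4 = namespace
check_cross_cluster() {
  url="$(target_url_for "$3" "$4")"
  out=$(kubectl exec --context "$1" -n "$4" "$2" -- \
    /bin/bash -c "curl -v ${url}" 2>&1)
  grep -q "Hostname: $3" <<< "${out}"
}
```

For example, `check_cross_cluster "${K8S_CLUSTER_1_CONTEXT_NAME}" echoserver1-0 echoserver0-0 "${NAMESPACE}"` reproduces the first check.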
5

In this step, you use the kubectl mongodb plugin to automate the Kubernetes cluster configuration that the Kubernetes Operator needs to manage workloads on multiple Kubernetes clusters.

Because you configure the Kubernetes clusters before installing the Kubernetes Operator, when you deploy the Kubernetes Operator for multi-Kubernetes-cluster operation, all the necessary multi-cluster configuration is already in place.

As stated in the Overview, the Kubernetes Operator has the configuration for three member clusters that you can use to deploy MongoDB Ops Manager and MongoDB databases. The first cluster is also used as the operator cluster, where you install the Kubernetes Operator and deploy the custom resources.

kubectl mongodb multicluster setup \
  --central-cluster="${K8S_CLUSTER_0_CONTEXT_NAME}" \
  --member-clusters="${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}" \
  --member-cluster-namespace="${NAMESPACE}" \
  --central-cluster-namespace="${OPERATOR_NAMESPACE}" \
  --create-service-account-secrets \
  --install-database-roles=true \
  --image-pull-secrets=image-registries-secret

Ensured namespaces exist in all clusters.
creating central cluster roles in cluster: gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
Ensured ServiceAccounts and Roles.
Creating KubeConfig secret mongodb-operator/mongodb-enterprise-operator-multi-cluster-kubeconfig in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
Ensured database Roles in member clusters.
Creating Member list Configmap mongodb-operator/mongodb-enterprise-operator-member-list in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
6
  1. Add and update the MongoDB Helm repository. Verify that the local Helm cache refers to the correct Kubernetes Operator version:

    helm repo add mongodb https://mongodb.github.io/helm-charts
    helm repo update mongodb
    helm search repo "${OFFICIAL_OPERATOR_HELM_CHART}"

    "mongodb" already exists with the same configuration, skipping
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "mongodb" chart repository
    Update Complete. ⎈Happy Helming!⎈
    NAME CHART VERSION APP VERSION DESCRIPTION
    mongodb/enterprise-operator 1.27.0 MongoDB Kubernetes Enterprise Operator
  2. Install the Kubernetes Operator in the $OPERATOR_NAMESPACE, configured to watch $NAMESPACE and manage three member Kubernetes clusters. At this point in the procedure, the ServiceAccounts and roles are already deployed by the kubectl mongodb plugin. Therefore, the following scripts skip configuring them and set operator.createOperatorServiceAccount=false and operator.createResourcesServiceAccountsAndRoles=false. The scripts specify the multiCluster.clusters setting to instruct the Helm chart to deploy the Kubernetes Operator in multi-cluster mode.

    helm upgrade --install \
      --debug \
      --kube-context "${K8S_CLUSTER_0_CONTEXT_NAME}" \
      mongodb-enterprise-operator-multi-cluster \
      "${OPERATOR_HELM_CHART}" \
      --namespace="${OPERATOR_NAMESPACE}" \
      --set namespace="${OPERATOR_NAMESPACE}" \
      --set operator.namespace="${OPERATOR_NAMESPACE}" \
      --set operator.watchNamespace="${NAMESPACE}" \
      --set operator.name=mongodb-enterprise-operator-multi-cluster \
      --set operator.createOperatorServiceAccount=false \
      --set operator.createResourcesServiceAccountsAndRoles=false \
      --set "multiCluster.clusters={${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}}" \
      --set "${OPERATOR_ADDITIONAL_HELM_VALUES:-"dummy=value"}"
    Release "mongodb-enterprise-operator-multi-cluster" does not exist. Installing it now.
    NAME: mongodb-enterprise-operator-multi-cluster
    LAST DEPLOYED: Mon Aug 26 10:55:49 2024
    NAMESPACE: mongodb-operator
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    USER-SUPPLIED VALUES:
    dummy: value
    multiCluster:
      clusters:
      - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
      - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
      - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    namespace: mongodb-operator
    operator:
      createOperatorServiceAccount: false
      createResourcesServiceAccountsAndRoles: false
      name: mongodb-enterprise-operator-multi-cluster
      namespace: mongodb-operator
      watchNamespace: mongodb

    COMPUTED VALUES:
    agent:
      name: mongodb-agent-ubi
      version: 107.0.0.8502-1
    database:
      name: mongodb-enterprise-database-ubi
      version: 1.27.0
    dummy: value
    initAppDb:
      name: mongodb-enterprise-init-appdb-ubi
      version: 1.27.0
    initDatabase:
      name: mongodb-enterprise-init-database-ubi
      version: 1.27.0
    initOpsManager:
      name: mongodb-enterprise-init-ops-manager-ubi
      version: 1.27.0
    managedSecurityContext: false
    mongodb:
      appdbAssumeOldFormat: false
      imageType: ubi8
      name: mongodb-enterprise-server
      repo: quay.io/mongodb
    mongodbLegacyAppDb:
      name: mongodb-enterprise-appdb-database-ubi
      repo: quay.io/mongodb
    multiCluster:
      clusterClientTimeout: 10
      clusters:
      - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
      - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
      - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
      kubeConfigSecretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
      performFailOver: true
    namespace: mongodb-operator
    operator:
      additionalArguments: []
      affinity: {}
      createOperatorServiceAccount: false
      createResourcesServiceAccountsAndRoles: false
      deployment_name: mongodb-enterprise-operator
      env: prod
      maxConcurrentReconciles: 1
      mdbDefaultArchitecture: non-static
      name: mongodb-enterprise-operator-multi-cluster
      namespace: mongodb-operator
      nodeSelector: {}
      operator_image_name: mongodb-enterprise-operator-ubi
      replicas: 1
      resources:
        limits:
          cpu: 1100m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 200Mi
      tolerations: []
      vaultSecretBackend:
        enabled: false
        tlsSecretRef: ""
      version: 1.27.0
      watchNamespace: mongodb
      watchedResources:
      - mongodb
      - opsmanagers
      - mongodbusers
      webhook:
        installClusterRole: true
        registerConfiguration: true
    opsManager:
      name: mongodb-enterprise-ops-manager-ubi
    registry:
      agent: quay.io/mongodb
      appDb: quay.io/mongodb
      database: quay.io/mongodb
      imagePullSecrets: null
      initAppDb: quay.io/mongodb
      initDatabase: quay.io/mongodb
      initOpsManager: quay.io/mongodb
      operator: quay.io/mongodb
      opsManager: quay.io/mongodb
      pullPolicy: Always
    subresourceEnabled: true

    HOOKS:
    MANIFEST:
    ---
    # Source: enterprise-operator/templates/operator-roles.yaml
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: mongodb-enterprise-operator-mongodb-webhook
    rules:
    - apiGroups:
      - "admissionregistration.k8s.io"
      resources:
      - validatingwebhookconfigurations
      verbs:
      - get
      - create
      - update
      - delete
    - apiGroups:
      - ""
      resources:
      - services
      verbs:
      - get
      - list
      - watch
      - create
      - update
      - delete
    ---
    # Source: enterprise-operator/templates/operator-roles.yaml
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: mongodb-enterprise-operator-multi-cluster-mongodb-operator-webhook-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: mongodb-enterprise-operator-mongodb-webhook
    subjects:
    - kind: ServiceAccount
      name: mongodb-enterprise-operator-multi-cluster
      namespace: mongodb-operator
    ---
    # Source: enterprise-operator/templates/operator.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mongodb-enterprise-operator-multi-cluster
      namespace: mongodb-operator
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/component: controller
          app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
          app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
      template:
        metadata:
          labels:
            app.kubernetes.io/component: controller
            app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
            app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
        spec:
          serviceAccountName: mongodb-enterprise-operator-multi-cluster
          securityContext:
            runAsNonRoot: true
            runAsUser: 2000
          containers:
          - name: mongodb-enterprise-operator-multi-cluster
            image: "quay.io/mongodb/mongodb-enterprise-operator-ubi:1.27.0"
            imagePullPolicy: Always
            args:
            - -watch-resource=mongodb
            - -watch-resource=opsmanagers
            - -watch-resource=mongodbusers
            - -watch-resource=mongodbmulticluster
            command:
            - /usr/local/bin/mongodb-enterprise-operator
            volumeMounts:
            - mountPath: /etc/config/kubeconfig
              name: kube-config-volume
            resources:
              limits:
                cpu: 1100m
                memory: 1Gi
              requests:
                cpu: 500m
                memory: 200Mi
            env:
            - name: OPERATOR_ENV
              value: prod
            - name: MDB_DEFAULT_ARCHITECTURE
              value: non-static
            - name: WATCH_NAMESPACE
              value: "mongodb"
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CLUSTER_CLIENT_TIMEOUT
              value: "10"
            - name: IMAGE_PULL_POLICY
              value: Always
            # Database
            - name: MONGODB_ENTERPRISE_DATABASE_IMAGE
              value: quay.io/mongodb/mongodb-enterprise-database-ubi
    214 - name: INIT_DATABASE_IMAGE_REPOSITORY
    215 value: quay.io/mongodb/mongodb-enterprise-init-database-ubi
    216 - name: INIT_DATABASE_VERSION
    217 value: 1.27.0
    218 - name: DATABASE_VERSION
    219 value: 1.27.0
    220 # Ops Manager
    221 - name: OPS_MANAGER_IMAGE_REPOSITORY
    222 value: quay.io/mongodb/mongodb-enterprise-ops-manager-ubi
    223 - name: INIT_OPS_MANAGER_IMAGE_REPOSITORY
    224 value: quay.io/mongodb/mongodb-enterprise-init-ops-manager-ubi
    225 - name: INIT_OPS_MANAGER_VERSION
    226 value: 1.27.0
    227 # AppDB
    228 - name: INIT_APPDB_IMAGE_REPOSITORY
    229 value: quay.io/mongodb/mongodb-enterprise-init-appdb-ubi
    230 - name: INIT_APPDB_VERSION
    231 value: 1.27.0
    232 - name: OPS_MANAGER_IMAGE_PULL_POLICY
    233 value: Always
    234 - name: AGENT_IMAGE
    235 value: "quay.io/mongodb/mongodb-agent-ubi:107.0.0.8502-1"
    236 - name: MDB_AGENT_IMAGE_REPOSITORY
    237 value: "quay.io/mongodb/mongodb-agent-ubi"
    238 - name: MONGODB_IMAGE
    239 value: mongodb-enterprise-server
    240 - name: MONGODB_REPO_URL
    241 value: quay.io/mongodb
    242 - name: MDB_IMAGE_TYPE
    243 value: ubi8
    244 - name: PERFORM_FAILOVER
    245 value: 'true'
    246 - name: MDB_MAX_CONCURRENT_RECONCILES
    247 value: "1"
    248 volumes:
    249 - name: kube-config-volume
    250 secret:
    251 defaultMode: 420
    252 secretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
  3. Verify the Kubernetes Operator deployment:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" rollout status deployment/mongodb-enterprise-operator-multi-cluster
    echo "Operator deployment in ${OPERATOR_NAMESPACE} namespace"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get deployments
    echo; echo "Operator pod in ${OPERATOR_NAMESPACE} namespace"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get pods

    Waiting for deployment "mongodb-enterprise-operator-multi-cluster" rollout to finish: 0 of 1 updated replicas are available...
    deployment "mongodb-enterprise-operator-multi-cluster" successfully rolled out
    Operator deployment in mongodb-operator namespace
    NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
    mongodb-enterprise-operator-multi-cluster   1/1     1            1           10s

    Operator pod in mongodb-operator namespace
    NAME                                                         READY   STATUS    RESTARTS     AGE
    mongodb-enterprise-operator-multi-cluster-54d786b796-7l5ct   2/2     Running   1 (4s ago)   10s

In this step, you enable TLS for the Application Database and the Ops Manager Application. If you don't want to use TLS, remove the `security.certsSecretPrefix` and `security.tls.ca` fields, both in `spec` and in `spec.applicationDatabase`, from the MongoDBOpsManager resources.
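For reference, these TLS-related settings appear in the MongoDBOpsManager resources used later in this procedure; a non-TLS deployment simply omits them (fragment only, not a complete resource):

```yaml
# TLS-related fields in the MongoDBOpsManager resource.
# Omit these, here and under applicationDatabase, to deploy without TLS.
spec:
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  applicationDatabase:
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
```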

  1. Optional. Generate keys and certificates:

    Use the openssl command-line tool to generate self-signed CAs and certificates for testing purposes.

    mkdir certs || true

    cat <<EOF >certs/ca.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn
    x509_extensions = v3_ca

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=exampleCA

    [ v3_ca ]
    basicConstraints = CA:TRUE
    keyUsage = critical, keyCertSign, cRLSign
    subjectKeyIdentifier = hash
    authorityKeyIdentifier = keyid:always,issuer
    EOF

    cat <<EOF >certs/om.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn
    req_extensions = req_ext

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=${OPS_MANAGER_EXTERNAL_DOMAIN}

    [ req_ext ]
    subjectAltName = @alt_names
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth

    [ alt_names ]
    DNS.1 = ${OPS_MANAGER_EXTERNAL_DOMAIN}
    DNS.2 = om-svc.${NAMESPACE}.svc.cluster.local
    EOF

    cat <<EOF >certs/appdb.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn
    req_extensions = req_ext

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=AppDB

    [ req_ext ]
    subjectAltName = @alt_names
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth

    [ alt_names ]
    # multi-cluster mongod hostnames from service for each pod
    DNS.1 = *.${NAMESPACE}.svc.cluster.local
    # single-cluster mongod hostnames from headless service
    DNS.2 = *.om-db-svc.${NAMESPACE}.svc.cluster.local
    EOF

    # generate CA keypair and certificate
    openssl genrsa -out certs/ca.key 2048
    openssl req -x509 -new -nodes -key certs/ca.key -days 1024 -out certs/ca.crt -config certs/ca.cnf

    # generate Ops Manager's keypair and certificate
    openssl genrsa -out certs/om.key 2048
    openssl req -new -key certs/om.key -out certs/om.csr -config certs/om.cnf

    # generate AppDB's keypair and certificate
    openssl genrsa -out certs/appdb.key 2048
    openssl req -new -key certs/appdb.key -out certs/appdb.csr -config certs/appdb.cnf

    # generate certificates signed by the CA for Ops Manager and AppDB
    openssl x509 -req -in certs/om.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/om.crt -days 365 -sha256 -extfile certs/om.cnf -extensions req_ext
    openssl x509 -req -in certs/appdb.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/appdb.crt -days 365 -sha256 -extfile certs/appdb.cnf -extensions req_ext
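    Before loading the generated files into Kubernetes secrets, you can sanity-check that each leaf certificate actually chains to the CA. The sketch below builds a throwaway CA and leaf certificate in a temporary directory so it runs standalone; with the files generated above you would instead run `openssl verify -CAfile certs/ca.crt certs/om.crt certs/appdb.crt`.

```shell
# Sketch: confirm a leaf certificate chains to its CA with `openssl verify`.
# Uses a throwaway CA/leaf pair so the example is self-contained; substitute
# certs/ca.crt and certs/om.crt (or certs/appdb.crt) for the real check.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=throwawayCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt"
openssl req -newkey rsa:2048 -nodes -subj "/CN=leaf" \
  -keyout "$tmp/leaf.key" -out "$tmp/leaf.csr"
openssl x509 -req -in "$tmp/leaf.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -out "$tmp/leaf.crt"
openssl verify -CAfile "$tmp/ca.crt" "$tmp/leaf.crt"   # prints "<path>: OK"
rm -rf "$tmp"
```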
  2. Create secrets with TLS keys:

    If you prefer to use your own keys and certificates, skip the previous generation step and place the keys and certificates in the following files:

    • certs/ca.crt - CA certificates. These are not needed when you use publicly trusted certificates.

    • certs/appdb.key - private key for the Application Database.

    • certs/appdb.crt - certificate for the Application Database.

    • certs/om.key - private key for Ops Manager.

    • certs/om.crt - certificate for Ops Manager.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-cert \
      --cert=certs/om.crt \
      --key=certs/om.key

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-db-cert \
      --cert=certs/appdb.crt \
      --key=certs/appdb.key

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap om-cert-ca --from-file="mms-ca.crt=certs/ca.crt"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap appdb-cert-ca --from-file="ca-pem=certs/ca.crt"

At this point, you have prepared the environment and the Kubernetes Operator to deploy the MongoDBOpsManager resource.

  1. Create the credentials for the Ops Manager admin user that the Kubernetes Operator will create after deploying the Ops Manager Application instance:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${NAMESPACE}" create secret generic om-admin-user-credentials \
      --from-literal=Username="admin" \
      --from-literal=Password="Passw0rd@" \
      --from-literal=FirstName="Jane" \
      --from-literal=LastName="Doe"
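    The Passw0rd@ value above is only a placeholder. As a sketch, you might generate a random password instead; the appended suffix here is an illustrative way to include a digit, a special character, and mixed case, not a documented Ops Manager requirement:

```shell
# Sketch: generate a random admin password instead of hard-coding one.
# The "@1aB" suffix is illustrative; adjust to your own password policy.
OM_ADMIN_PASSWORD="$(LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 16)@1aB"
echo "password length: ${#OM_ADMIN_PASSWORD}"   # prints "password length: 20"
```

    You would then pass --from-literal=Password="${OM_ADMIN_PASSWORD}" to the kubectl create secret command above.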
  2. Deploy the simplest possible MongoDBOpsManager custom resource (with TLS enabled) on a single member cluster, which is also known as the operator cluster.

    This deployment is almost the same as the one for single-cluster mode, but with spec.topology and spec.applicationDatabase.topology set to MultiCluster.

    Deploying this way shows that a single-Kubernetes-cluster deployment is a special case of a multi-Kubernetes-cluster deployment on a single member cluster. You can start deploying the Ops Manager Application and the Application Database on as many Kubernetes clusters as you need from the start; you don't have to begin with a single-member deployment.

    At this point, you have prepared the Ops Manager deployment to span more than one Kubernetes cluster, which you will do later in this procedure.

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
      backup:
        enabled: false
    EOF
  3. Wait for the Kubernetes Operator to pick up the work and reach the status.applicationDatabase.phase=Pending state. Then wait for it to complete the Application Database and Ops Manager deployments.

    echo "Waiting for Application Database to reach Pending phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

    Waiting for Application Database to reach Pending phase...
    mongodbopsmanager.mongodb.com/om condition met
  4. Deploy Ops Manager. The Kubernetes Operator deploys Ops Manager by performing the following steps. It:

    • Deploys the Application Database's replica set nodes and waits for the MongoDB processes in the replica set to start running.

    • Deploys the Ops Manager Application instance with the Application Database's connection string and waits for it to become ready.

    • Adds the Monitoring MongoDB Agent containers to each Application Database Pod.

    • Waits for both the Ops Manager Application and the Application Database Pods to start running.

    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "Waiting for Application Database to reach Pending phase (enabling monitoring)..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Application Database to reach Pending phase (enabling monitoring)...
    mongodbopsmanager.mongodb.com/om condition met
    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                7.0.4     Running              Running         Disabled         13m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    NAME        READY   STATUS    RESTARTS   AGE
    om-0-0      2/2     Running   0          10m
    om-db-0-0   4/4     Running   0          69s
    om-db-0-1   4/4     Running   0          2m12s
    om-db-0-2   4/4     Running   0          3m32s

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1

    Now that you have deployed a single-member cluster in multi-cluster mode, you can reconfigure the deployment to span more than one Kubernetes cluster.

  5. On the second member cluster, deploy two additional Application Database replica set members and one additional Ops Manager Application instance:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
        - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
          members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
          - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
            members: 2
      backup:
        enabled: false
    EOF
  6. Wait for the Kubernetes Operator to pick up the work (Pending phase):

    echo "Waiting for Application Database to reach Pending phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

    Waiting for Application Database to reach Pending phase...
    mongodbopsmanager.mongodb.com/om condition met
  7. Wait for the Kubernetes Operator to finish deploying all components:

    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                7.0.4     Running              Running         Disabled         20m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    NAME        READY   STATUS    RESTARTS   AGE
    om-0-0      2/2     Running   0          2m56s
    om-db-0-0   4/4     Running   0          7m48s
    om-db-0-1   4/4     Running   0          8m51s
    om-db-0-2   4/4     Running   0          10m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    NAME        READY   STATUS    RESTARTS   AGE
    om-1-0      2/2     Running   0          3m27s
    om-db-1-0   4/4     Running   0          6m32s
    om-db-1-1   4/4     Running   0          5m5s

In a multi-Kubernetes-cluster deployment of the Ops Manager Application, you can configure only S3-based backup storage. This procedure refers to the S3_* variables defined in env_variables.sh.

  1. Optional. Install the MinIO Operator.

    This procedure deploys S3-compatible storage for your backups using the MinIO Operator. You can skip this step if you have Amazon Web Services S3 or other S3-compatible buckets available. In that case, adjust the S3_* variables in env_variables.sh accordingly.

    kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
      kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

    kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
      kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

    # add two buckets to the tenant config
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
      --type='json' \
      -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
  2. Before configuring and enabling backup, create the following secrets:

    • s3-access-secret - contains S3 credentials.

    • s3-ca-cert - contains a CA certificate that issued the bucket's server certificate. In the sample MinIO deployment used in this procedure, the default Kubernetes Root CA certificate signs the certificate. Because it is not a publicly trusted CA certificate, you must provide it so that Ops Manager can trust the connection.

    If you use publicly trusted certificates, you can skip this step and remove the values from the spec.backup.s3Stores.customCertificateSecretRefs and spec.backup.s3OpLogStores.customCertificateSecretRefs settings.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-access-secret \
      --from-literal=accessKey="${S3_ACCESS_KEY}" \
      --from-literal=secretKey="${S3_SECRET_KEY}"

    # minio TLS secrets are signed with the default k8s root CA
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-ca-cert \
      --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
  1. The Kubernetes Operator can configure and deploy all components (the Ops Manager Application, the Backup Daemon instances, and the Application Database's replica set nodes) in any combination, on any member clusters for which you configure the Kubernetes Operator.

    To illustrate the flexibility of the multi-Kubernetes-cluster deployment configuration, deploy only one Backup Daemon instance on the third member cluster and specify zero Backup Daemon members for the first and second clusters.

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
          backup:
            members: 0
        - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
          members: 1
          backup:
            members: 0
        - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
          members: 0
          backup:
            members: 1
      configuration: # to avoid the configuration wizard on first login
        mms.adminEmailAddr: email@example.com
        mms.fromEmailAddr: email@example.com
        mms.ignoreInitialUiSetup: "true"
        mms.mail.hostname: smtp@example.com
        mms.mail.port: "465"
        mms.mail.ssl: "true"
        mms.mail.transport: smtp
        mms.minimumTLSVersion: TLSv1.2
        mms.replyToEmailAddr: email@example.com
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
          - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
            members: 2
      backup:
        enabled: true
        s3Stores:
          - name: my-s3-block-store
            s3SecretRef:
              name: "s3-access-secret"
            pathStyleAccessEnabled: true
            s3BucketEndpoint: "${S3_ENDPOINT}"
            s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
            customCertificateSecretRefs:
              - name: s3-ca-cert
                key: ca.crt
        s3OpLogStores:
          - name: my-s3-oplog-store
            s3SecretRef:
              name: "s3-access-secret"
            s3BucketEndpoint: "${S3_ENDPOINT}"
            s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
            pathStyleAccessEnabled: true
            customCertificateSecretRefs:
              - name: s3-ca-cert
                key: ca.crt
    EOF
  2. Wait for the Kubernetes Operator to finish its configuration:

    echo; echo "Waiting for Backup to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    Waiting for Backup to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met
    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                7.0.4     Running              Running         Running          26m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    NAME        READY   STATUS    RESTARTS   AGE
    om-0-0      2/2     Running   0          5m11s
    om-db-0-0   4/4     Running   0          13m
    om-db-0-1   4/4     Running   0          14m
    om-db-0-2   4/4     Running   0          16m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    NAME        READY   STATUS    RESTARTS   AGE
    om-1-0      2/2     Running   0          5m12s
    om-db-1-0   4/4     Running   0          12m
    om-db-1-1   4/4     Running   0          11m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    NAME                   READY   STATUS    RESTARTS   AGE
    om-2-backup-daemon-0   2/2     Running   0          3m8s

Run the following script to delete the GKE clusters and clean up your environment.

Important

The following commands are not reversible. They delete all clusters referenced in env_variables.sh. Don't run these commands if you want to retain the GKE clusters, for example, if you didn't create the GKE clusters as part of this procedure.

yes | gcloud container clusters delete "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}" &
wait