Deploy MongoDB Ops Manager Resources on Multiple Kubernetes Clusters
To make your MongoDB Ops Manager and Application Database deployments resilient to entire data center or zone failures, deploy the MongoDB Ops Manager Application and the Application Database on multiple Kubernetes clusters.
To learn more about the architecture, networking, limitations, and performance of multi-Kubernetes-cluster deployments for MongoDB Ops Manager resources, see the related reference documentation.
Overview
When you deploy the MongoDB Ops Manager Application and the Application Database using the procedure in this section, you:
Use GKE (Google Kubernetes Engine) and the Istio service mesh as tools that help demonstrate the deployment on multiple Kubernetes clusters.
Install the Kubernetes Operator on one of the member Kubernetes clusters, known as the operator cluster. The operator cluster acts as a hub in the "hub and spoke" pattern that the Kubernetes Operator uses to manage deployments on multiple Kubernetes clusters.
Deploy the operator cluster in the $OPERATOR_NAMESPACE and configure this cluster to watch $NAMESPACE and manage all member Kubernetes clusters.
Deploy the Application Database and the MongoDB Ops Manager Application on a single member Kubernetes cluster to demonstrate the similarity of a multi-cluster deployment to a single-cluster deployment. A single-cluster deployment with spec.topology and spec.applicationDatabase.topology set to MultiCluster prepares the deployment for adding more Kubernetes clusters to it.
Deploy an additional Application Database replica set on the second Kubernetes cluster to improve the Application Database's resilience. You also deploy an additional MongoDB Ops Manager Application instance on the second member Kubernetes cluster.
Create valid certificates for TLS encryption, and establish TLS-encrypted connections to and from the MongoDB Ops Manager Application and between the Application Database's replica set members. When running over HTTPS, MongoDB Ops Manager runs on port 8443 by default.
Enable backup using S3-compatible storage and deploy the Backup Daemon on the third member Kubernetes cluster. To simplify the setup of S3-compatible storage buckets, you deploy the MinIO Operator. You enable the Backup Daemon on only one member cluster in your deployment. However, you can also configure other member clusters to host Backup Daemon resources. Only S3 backups are supported in multi-cluster MongoDB Ops Manager deployments.
Prerequisites
Install tools
Before you begin the deployment, install the following required tools:
Install Helm. Installing Helm is required for the Kubernetes Operator installation.
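If you don't have Helm yet, one common way to install it is the Helm project's installer script. This is a sketch, not part of the official procedure; verify it against the current Helm documentation before running:
# installs the latest Helm 3 release (check https://helm.sh/docs/intro/install/ first)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version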
Prepare your GCP project so that you can use it to create GKE (Google Kubernetes Engine) clusters. In the following procedure, you create three new GKE clusters, with a total of seven e2-standard-4 low-cost spot VMs.
Install the gcloud CLI.
Authorize the gcloud CLI
Install the gcloud CLI and authorize it:
gcloud auth login
Install the kubectl mongodb plugin
The kubectl mongodb plugin automates the configuration of the Kubernetes clusters. This allows the Kubernetes Operator to deploy resources, the necessary roles, and service accounts for the MongoDB Ops Manager Application, the Application Database, and MongoDB resources on these clusters.
To install the kubectl mongodb plugin:
Download the desired version of the Kubernetes Operator package.
Download the desired version of the Kubernetes Operator package from the releases page of the MongoDB Enterprise Kubernetes Operator repository.
The package's name uses this pattern: kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.
Use one of the following packages:
kubectl-mongodb_{{ .Version }}_darwin_amd64.tar.gz
kubectl-mongodb_{{ .Version }}_darwin_arm64.tar.gz
kubectl-mongodb_{{ .Version }}_linux_amd64.tar.gz
kubectl-mongodb_{{ .Version }}_linux_arm64.tar.gz
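For example, a minimal download-and-unpack sketch, assuming version 1.27.0 (the Kubernetes Operator version shown elsewhere in this guide) on Linux amd64. The asset URL is an assumption that follows the package naming pattern above; verify the exact URL on the releases page:
# hypothetical values: adjust VERSION, OS, and ARCH to your platform
VERSION="1.27.0"
OS="linux"
ARCH="amd64"
curl -LO "https://github.com/mongodb/mongodb-enterprise-kubernetes/releases/download/${VERSION}/kubectl-mongodb_${VERSION}_${OS}_${ARCH}.tar.gz"
tar -xzf "kubectl-mongodb_${VERSION}_${OS}_${ARCH}.tar.gz"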
Locate the kubectl-mongodb plugin binary and copy it to its desired destination.
Find the kubectl-mongodb binary in the unpacked directory and move it to its desired destination, inside the PATH of the Kubernetes Operator user, as shown in the following example:
mv kubectl-mongodb /usr/local/bin/kubectl-mongodb
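To confirm that kubectl can discover the plugin on your PATH, you can list installed plugins. This is an optional check, not part of the official procedure:
kubectl plugin list
# the output should include /usr/local/bin/kubectl-mongodb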
Now you can run the kubectl mongodb plugin using the following commands:
kubectl mongodb multicluster setup
kubectl mongodb multicluster recover
To learn more about the supported flags, see the MongoDB kubectl Plugin Reference.
Clone the MongoDB Enterprise Kubernetes Operator repository
Clone the MongoDB Enterprise Kubernetes Operator repository, change into the mongodb-enterprise-kubernetes directory, and check out the current version.
git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
cd mongodb-enterprise-kubernetes
git checkout master
cd public/samples/ops-manager-multi-cluster
Important
Some steps in this guide work only if you run them from the public/samples/ops-manager-multi-cluster directory.
Set up environment variables
All steps in this guide reference the environment variables defined in env_variables.sh.
export MDB_GKE_PROJECT="### Set your GKE project name here ###"

export NAMESPACE="mongodb"
export OPERATOR_NAMESPACE="mongodb-operator"

# comma-separated key=value pairs
export OPERATOR_ADDITIONAL_HELM_VALUES=""

# Adjust the values for each Kubernetes cluster in your deployment.
# The deployment script references the following variables to get values for each cluster.
export K8S_CLUSTER_0="k8s-mdb-0"
export K8S_CLUSTER_0_ZONE="europe-central2-a"
export K8S_CLUSTER_0_NUMBER_OF_NODES=3
export K8S_CLUSTER_0_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_0_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_0_ZONE}_${K8S_CLUSTER_0}"

export K8S_CLUSTER_1="k8s-mdb-1"
export K8S_CLUSTER_1_ZONE="europe-central2-b"
export K8S_CLUSTER_1_NUMBER_OF_NODES=3
export K8S_CLUSTER_1_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_1_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_1_ZONE}_${K8S_CLUSTER_1}"

export K8S_CLUSTER_2="k8s-mdb-2"
export K8S_CLUSTER_2_ZONE="europe-central2-c"
export K8S_CLUSTER_2_NUMBER_OF_NODES=1
export K8S_CLUSTER_2_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_2_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_2_ZONE}_${K8S_CLUSTER_2}"

# Comment out the following line so that the script does not create preemptible nodes.
# DO NOT USE preemptible nodes in production.
export GKE_SPOT_INSTANCES_SWITCH="--preemptible"

export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store

# minio defaults
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"

export OFFICIAL_OPERATOR_HELM_CHART="mongodb/enterprise-operator"
export OPERATOR_HELM_CHART="${OFFICIAL_OPERATOR_HELM_CHART}"

# (Optional) Change the following setting when using the external URL.
# This env variable is used in OpenSSL configuration to generate
# server certificates for Ops Manager Application.
export OPS_MANAGER_EXTERNAL_DOMAIN="om-svc.${NAMESPACE}.svc.cluster.local"

export OPS_MANAGER_VERSION="7.0.4"
export APPDB_VERSION="7.0.9-ubi8"
Adjust the settings in the previous example to your needs as instructed in the comments, and source them into your shell as follows:
source env_variables.sh
Important
Each time you update env_variables.sh, run source env_variables.sh to ensure that the scripts in this section use the updated variables.
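A quick, optional sanity check (not part of the official procedure) to confirm that the variables are set in the current shell:
# print the deployment-related variables currently exported
env | grep -E '^(MDB_GKE_PROJECT|K8S_CLUSTER_|NAMESPACE|OPERATOR_NAMESPACE|OPS_MANAGER_|S3_|APPDB_)'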
Procedure
This procedure applies to deploying a MongoDB Ops Manager instance on multiple Kubernetes clusters.
Create Kubernetes clusters.
You can skip this step if you already have your own Kubernetes clusters installed and configured with a service mesh.
Create three GKE (Google Kubernetes Engine) clusters:
gcloud container clusters create "${K8S_CLUSTER_0}" \
  --zone="${K8S_CLUSTER_0_ZONE}" \
  --num-nodes="${K8S_CLUSTER_0_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_0_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

gcloud container clusters create "${K8S_CLUSTER_1}" \
  --zone="${K8S_CLUSTER_1_ZONE}" \
  --num-nodes="${K8S_CLUSTER_1_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_1_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

gcloud container clusters create "${K8S_CLUSTER_2}" \
  --zone="${K8S_CLUSTER_2_ZONE}" \
  --num-nodes="${K8S_CLUSTER_2_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_2_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

Set your default gcloud project:

gcloud config set project "${MDB_GKE_PROJECT}"

Obtain credentials and save the contexts in the current kubeconfig file. By default, this file is located in the ~/.kube/config directory and is referenced by the $KUBECONFIG environment variable.

gcloud container clusters get-credentials "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}"
gcloud container clusters get-credentials "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}"
gcloud container clusters get-credentials "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}"

All kubectl commands reference these contexts using the following variables:

$K8S_CLUSTER_0_CONTEXT_NAME
$K8S_CLUSTER_1_CONTEXT_NAME
$K8S_CLUSTER_2_CONTEXT_NAME

Verify that kubectl has access to the Kubernetes clusters:

echo "Nodes in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get nodes
echo; echo "Nodes in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" get nodes
echo; echo "Nodes in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" get nodes

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-0-default-pool-267f1e8f-d0dg   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-0-default-pool-267f1e8f-pmgh   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-0-default-pool-267f1e8f-vgj9   Ready    <none>   38m   v1.29.7-gke.1104000

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-1-default-pool-263d341f-3tbp   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-1-default-pool-263d341f-4f26   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-1-default-pool-263d341f-z751   Ready    <none>   38m   v1.29.7-gke.1104000

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-2-default-pool-d0da5fd1-chm1   Ready    <none>   38m   v1.29.7-gke.1104000

Install the Istio service mesh to allow cross-cluster DNS resolution and network connectivity between the Kubernetes clusters:

CTX_CLUSTER1=${K8S_CLUSTER_0_CONTEXT_NAME} \
CTX_CLUSTER2=${K8S_CLUSTER_1_CONTEXT_NAME} \
CTX_CLUSTER3=${K8S_CLUSTER_2_CONTEXT_NAME} \
ISTIO_VERSION="1.20.2" \
../multi-cluster/install_istio_separate_network.sh
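Optionally, you can confirm that the mesh control plane came up in each cluster before proceeding. This quick check is not part of the official procedure and assumes the install script uses Istio's default istio-system namespace:
for ctx in "${K8S_CLUSTER_0_CONTEXT_NAME}" "${K8S_CLUSTER_1_CONTEXT_NAME}" "${K8S_CLUSTER_2_CONTEXT_NAME}"; do
  echo "Istio pods in ${ctx}"
  # istiod and the east-west gateway should be Running in each cluster
  kubectl --context "${ctx}" -n istio-system get pods
done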
Create namespaces.
Note
To enable Istio sidecar injection, the following commands add the istio-injection=enabled label to the $OPERATOR_NAMESPACE and mongodb namespaces on each member cluster. If you use another service mesh, configure it to handle network traffic in the created namespaces.
Create a separate namespace, mongodb-operator, referenced by the $OPERATOR_NAMESPACE environment variable, for the Kubernetes Operator deployment.

Create the same $OPERATOR_NAMESPACE on each member Kubernetes cluster. This is required so that the kubectl mongodb plugin can create a service account for the Kubernetes Operator on each member cluster. The Kubernetes Operator uses these service accounts on the operator cluster to perform operations on each member cluster.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

On each member cluster, including the member cluster that serves as the operator cluster, create another, separate namespace, mongodb. The Kubernetes Operator uses this namespace for MongoDB Ops Manager resources and components.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite
Optional. Authorize clusters to pull secrets from private image registries.
This step is optional if you use the official Helm charts and images from the Quay registry.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
Optional. Verify cluster connectivity.
The following optional scripts verify whether the service mesh is configured correctly for cross-cluster DNS resolution and connectivity.
Run this script on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver0
  template:
    metadata:
      labels:
        app: echoserver0
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.10
        imagePullPolicy: Always
        name: echoserver0
        ports:
        - containerPort: 8080
EOF

Run this script on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver1
  template:
    metadata:
      labels:
        app: echoserver1
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.10
        imagePullPolicy: Always
        name: echoserver1
        ports:
        - containerPort: 8080
EOF

Run this script on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver2
  template:
    metadata:
      labels:
        app: echoserver2
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.10
        imagePullPolicy: Always
        name: echoserver2
        ports:
        - containerPort: 8080
EOF

Run this script to wait for the StatefulSets to be created:

kubectl wait --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver0-0 --timeout=60s
kubectl wait --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver1-0 --timeout=60s
kubectl wait --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver2-0 --timeout=60s

Create a Pod service on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver0-0
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver0-0"
EOF

Create a Pod service on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver1-0
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver1-0"
EOF

Create a Pod service on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver2-0
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver2-0"
EOF

Create a round-robin service on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: echoserver0
EOF

Create a round-robin service on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: echoserver1
EOF

Create a round-robin service on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: echoserver2
EOF

Check Pod 0 from cluster 1:

source_cluster=${K8S_CLUSTER_1_CONTEXT_NAME}
target_pod="echoserver0-0"
source_pod="echoserver1-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Checking cross-cluster DNS resolution and connectivity from echoserver1-0 in gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1 to echoserver0-0
SUCCESS

Check Pod 1 from cluster 0:

source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
target_pod="echoserver1-0"
source_pod="echoserver0-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver1-0
SUCCESS

Check Pod 1 from cluster 2:

source_cluster=${K8S_CLUSTER_2_CONTEXT_NAME}
target_pod="echoserver1-0"
source_pod="echoserver2-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Checking cross-cluster DNS resolution and connectivity from echoserver2-0 in gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2 to echoserver1-0
SUCCESS

Check Pod 2 from cluster 0:

source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
target_pod="echoserver2-0"
source_pod="echoserver0-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver2-0
SUCCESS

Run the cleanup script:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver0
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver1
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver2
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver0-0
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver1-0
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver2-0
Deploy a multi-cluster configuration.
In this step, you use the kubectl mongodb plugin to automate the Kubernetes cluster configuration that the Kubernetes Operator needs to manage workloads on multiple Kubernetes clusters.
Because you configure the Kubernetes clusters before installing the Kubernetes Operator, when you deploy the Kubernetes Operator for the multi-Kubernetes-cluster operation, all the necessary multi-cluster configuration is already in place.
As stated in the Overview, the Kubernetes Operator has the configuration of three member clusters that you can use to deploy MongoDB Ops Manager and MongoDB databases. The first cluster is also used as the operator cluster, where you install the Kubernetes Operator and deploy the custom resources.
kubectl mongodb multicluster setup \
  --central-cluster="${K8S_CLUSTER_0_CONTEXT_NAME}" \
  --member-clusters="${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}" \
  --member-cluster-namespace="${NAMESPACE}" \
  --central-cluster-namespace="${OPERATOR_NAMESPACE}" \
  --create-service-account-secrets \
  --install-database-roles=true \
  --image-pull-secrets=image-registries-secret
Ensured namespaces exist in all clusters.
creating central cluster roles in cluster: gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
Ensured ServiceAccounts and Roles.
Creating KubeConfig secret mongodb-operator/mongodb-enterprise-operator-multi-cluster-kubeconfig in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
Ensured database Roles in member clusters.
Creating Member list Configmap mongodb-operator/mongodb-enterprise-operator-member-list in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
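As an optional check (not part of the official procedure), you can confirm that the plugin created the kubeconfig secret and the member-list ConfigMap named in the output above:
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get secret mongodb-enterprise-operator-multi-cluster-kubeconfig
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get configmap mongodb-enterprise-operator-member-list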
Install the Kubernetes Operator using the Helm chart.
Add and update the MongoDB Helm repository. Verify that the local Helm cache refers to the correct Kubernetes Operator version:

helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update mongodb
helm search repo "${OFFICIAL_OPERATOR_HELM_CHART}"

"mongodb" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "mongodb" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME                          CHART VERSION   APP VERSION   DESCRIPTION
mongodb/enterprise-operator   1.27.0                        MongoDB Kubernetes Enterprise Operator

Install the Kubernetes Operator in the $OPERATOR_NAMESPACE, configured to watch $NAMESPACE and manage three member Kubernetes clusters. At this point in the procedure, the ServiceAccounts and roles are already deployed by the kubectl mongodb plugin. Therefore, the following scripts skip configuring them and set operator.createOperatorServiceAccount=false and operator.createResourcesServiceAccountsAndRoles=false. The scripts specify the multiCluster.clusters setting to instruct the Helm chart to deploy the Kubernetes Operator in multi-cluster mode.

helm upgrade --install \
  --debug \
  --kube-context "${K8S_CLUSTER_0_CONTEXT_NAME}" \
  mongodb-enterprise-operator-multi-cluster \
  "${OPERATOR_HELM_CHART}" \
  --namespace="${OPERATOR_NAMESPACE}" \
  --set namespace="${OPERATOR_NAMESPACE}" \
  --set operator.namespace="${OPERATOR_NAMESPACE}" \
  --set operator.watchNamespace="${NAMESPACE}" \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set operator.createResourcesServiceAccountsAndRoles=false \
  --set "multiCluster.clusters={${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}}" \
  --set "${OPERATOR_ADDITIONAL_HELM_VALUES:-"dummy=value"}"

Release "mongodb-enterprise-operator-multi-cluster" does not exist. Installing it now.
NAME: mongodb-enterprise-operator-multi-cluster
LAST DEPLOYED: Mon Aug 26 10:55:49 2024
NAMESPACE: mongodb-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
dummy: value
multiCluster:
  clusters:
  - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
  - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
  - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
namespace: mongodb-operator
operator:
  createOperatorServiceAccount: false
  createResourcesServiceAccountsAndRoles: false
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
  watchNamespace: mongodb

COMPUTED VALUES:
agent:
  name: mongodb-agent-ubi
  version: 107.0.0.8502-1
database:
  name: mongodb-enterprise-database-ubi
  version: 1.27.0
dummy: value
initAppDb:
  name: mongodb-enterprise-init-appdb-ubi
  version: 1.27.0
initDatabase:
  name: mongodb-enterprise-init-database-ubi
  version: 1.27.0
initOpsManager:
  name: mongodb-enterprise-init-ops-manager-ubi
  version: 1.27.0
managedSecurityContext: false
mongodb:
  appdbAssumeOldFormat: false
  imageType: ubi8
  name: mongodb-enterprise-server
  repo: quay.io/mongodb
mongodbLegacyAppDb:
  name: mongodb-enterprise-appdb-database-ubi
  repo: quay.io/mongodb
multiCluster:
  clusterClientTimeout: 10
  clusters:
  - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
  - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
  - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
  kubeConfigSecretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
  performFailOver: true
namespace: mongodb-operator
operator:
  additionalArguments: []
  affinity: {}
  createOperatorServiceAccount: false
  createResourcesServiceAccountsAndRoles: false
  deployment_name: mongodb-enterprise-operator
  env: prod
  maxConcurrentReconciles: 1
  mdbDefaultArchitecture: non-static
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
  nodeSelector: {}
  operator_image_name: mongodb-enterprise-operator-ubi
  replicas: 1
  resources:
    limits:
      cpu: 1100m
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 200Mi
  tolerations: []
  vaultSecretBackend:
    enabled: false
    tlsSecretRef: ""
  version: 1.27.0
  watchNamespace: mongodb
  watchedResources:
  - mongodb
  - opsmanagers
  - mongodbusers
  webhook:
    installClusterRole: true
    registerConfiguration: true
opsManager:
  name: mongodb-enterprise-ops-manager-ubi
registry:
  agent: quay.io/mongodb
  appDb: quay.io/mongodb
  database: quay.io/mongodb
  imagePullSecrets: null
  initAppDb: quay.io/mongodb
  initDatabase: quay.io/mongodb
  initOpsManager: quay.io/mongodb
  operator: quay.io/mongodb
  opsManager: quay.io/mongodb
  pullPolicy: Always
subresourceEnabled: true

HOOKS:
MANIFEST:
---
# Source: enterprise-operator/templates/operator-roles.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongodb-enterprise-operator-mongodb-webhook
rules:
- apiGroups:
  - "admissionregistration.k8s.io"
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - create
  - update
  - delete
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - delete
---
# Source: enterprise-operator/templates/operator-roles.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongodb-enterprise-operator-multi-cluster-mongodb-operator-webhook-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mongodb-enterprise-operator-mongodb-webhook
subjects:
- kind: ServiceAccount
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
---
# Source: enterprise-operator/templates/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator-multi-cluster
  namespace: mongodb-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
      app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
        app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
    spec:
      serviceAccountName: mongodb-enterprise-operator-multi-cluster
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      containers:
      - name: mongodb-enterprise-operator-multi-cluster
        image: "quay.io/mongodb/mongodb-enterprise-operator-ubi:1.27.0"
        imagePullPolicy: Always
        args:
        - -watch-resource=mongodb
        - -watch-resource=opsmanagers
        - -watch-resource=mongodbusers
        - -watch-resource=mongodbmulticluster
        command:
        - /usr/local/bin/mongodb-enterprise-operator
        volumeMounts:
        - mountPath: /etc/config/kubeconfig
          name: kube-config-volume
        resources:
          limits:
            cpu: 1100m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 200Mi
        env:
        - name: OPERATOR_ENV
          value: prod
        - name: MDB_DEFAULT_ARCHITECTURE
          value: non-static
        - name: WATCH_NAMESPACE
          value: "mongodb"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_CLIENT_TIMEOUT
          value: "10"
        - name: IMAGE_PULL_POLICY
          value: Always
        # Database
        - name: MONGODB_ENTERPRISE_DATABASE_IMAGE
          value: quay.io/mongodb/mongodb-enterprise-database-ubi
        - name: INIT_DATABASE_IMAGE_REPOSITORY
          value: quay.io/mongodb/mongodb-enterprise-init-database-ubi
        - name: INIT_DATABASE_VERSION
          value: 1.27.0
        - name: DATABASE_VERSION
          value: 1.27.0
        # Ops Manager
        - name: OPS_MANAGER_IMAGE_REPOSITORY
          value: quay.io/mongodb/mongodb-enterprise-ops-manager-ubi
        - name: INIT_OPS_MANAGER_IMAGE_REPOSITORY
          value: quay.io/mongodb/mongodb-enterprise-init-ops-manager-ubi
        - name: INIT_OPS_MANAGER_VERSION
          value: 1.27.0
        # AppDB
        - name: INIT_APPDB_IMAGE_REPOSITORY
          value: quay.io/mongodb/mongodb-enterprise-init-appdb-ubi
        - name: INIT_APPDB_VERSION
          value: 1.27.0
        - name: OPS_MANAGER_IMAGE_PULL_POLICY
          value: Always
        - name: AGENT_IMAGE
          value: "quay.io/mongodb/mongodb-agent-ubi:107.0.0.8502-1"
        - name: MDB_AGENT_IMAGE_REPOSITORY
          value: "quay.io/mongodb/mongodb-agent-ubi"
        - name: MONGODB_IMAGE
          value: mongodb-enterprise-server
        - name: MONGODB_REPO_URL
          value: quay.io/mongodb
        - name: MDB_IMAGE_TYPE
          value: ubi8
        - name: PERFORM_FAILOVER
          value: 'true'
        - name: MDB_MAX_CONCURRENT_RECONCILES
          value: "1"
      volumes:
      - name: kube-config-volume
        secret:
          defaultMode: 420
          secretName: mongodb-enterprise-operator-multi-cluster-kubeconfig

Verify the Kubernetes Operator deployment:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" rollout status deployment/mongodb-enterprise-operator-multi-cluster
echo "Operator deployment in ${OPERATOR_NAMESPACE} namespace"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get deployments
echo; echo "Operator pod in ${OPERATOR_NAMESPACE} namespace"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get pods

Waiting for deployment "mongodb-enterprise-operator-multi-cluster" rollout to finish: 0 of 1 updated replicas are available...
deployment "mongodb-enterprise-operator-multi-cluster" successfully rolled out
Operator deployment in mongodb-operator namespace
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
mongodb-enterprise-operator-multi-cluster   1/1     1            1           10s

Operator pod in mongodb-operator namespace
NAME                                                         READY   STATUS    RESTARTS     AGE
mongodb-enterprise-operator-multi-cluster-54d786b796-7l5ct   2/2     Running   1 (4s ago)   10s
Prepare TLS certificates.
In this step, you enable TLS for the Application Database and the MongoDB Ops Manager Application. If you don't want to use TLS, remove the TLS-related fields (in the examples below, the security blocks under spec and spec.applicationDatabase) from the MongoDBOpsManager resources.
Optional. Generate keys and certificates:

Use the openssl command-line tool to generate self-signed CAs and certificates for testing purposes.

mkdir certs || true

cat <<EOF >certs/ca.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
x509_extensions = v3_ca

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=exampleCA

[ v3_ca ]
basicConstraints = CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
EOF

cat <<EOF >certs/om.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=${OPS_MANAGER_EXTERNAL_DOMAIN}

[ req_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ alt_names ]
DNS.1 = ${OPS_MANAGER_EXTERNAL_DOMAIN}
DNS.2 = om-svc.${NAMESPACE}.svc.cluster.local
EOF

cat <<EOF >certs/appdb.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=AppDB

[ req_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ alt_names ]
# multi-cluster mongod hostnames from service for each pod
DNS.1 = *.${NAMESPACE}.svc.cluster.local
# single-cluster mongod hostnames from headless service
DNS.2 = *.om-db-svc.${NAMESPACE}.svc.cluster.local
EOF

# generate CA keypair and certificate
openssl genrsa -out certs/ca.key 2048
openssl req -x509 -new -nodes -key certs/ca.key -days 1024 -out certs/ca.crt -config certs/ca.cnf

# generate OpsManager's keypair and certificate
openssl genrsa -out certs/om.key 2048
openssl req -new -key certs/om.key -out certs/om.csr -config certs/om.cnf

# generate AppDB's keypair and certificate
openssl genrsa -out certs/appdb.key 2048
openssl req -new -key certs/appdb.key -out certs/appdb.csr -config certs/appdb.cnf

# generate certificates signed by CA for OpsManager and AppDB
openssl x509 -req -in certs/om.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/om.crt -days 365 -sha256 -extfile certs/om.cnf -extensions req_ext
openssl x509 -req -in certs/appdb.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/appdb.crt -days 365 -sha256 -extfile certs/appdb.cnf -extensions req_ext

Create secrets with TLS keys:
If you prefer to use your own keys and certificates, skip the previous generation step and place the keys and certificates in the following files:
certs/ca.crt - CA certificates. These are not necessary when using trusted certificates.
certs/appdb.key - private key for the Application Database.
certs/appdb.crt - certificate for the Application Database.
certs/om.key - private key for MongoDB Ops Manager.
certs/om.crt - certificate for MongoDB Ops Manager.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-cert \
  --cert=certs/om.crt \
  --key=certs/om.key

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-db-cert \
  --cert=certs/appdb.crt \
  --key=certs/appdb.key

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap om-cert-ca --from-file="mms-ca.crt=certs/ca.crt"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap appdb-cert-ca --from-file="ca-pem=certs/ca.crt"
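Before moving on, it can help to confirm that the generated certificates carry the expected SANs and chain back to the CA. This is an optional sanity check, not part of the official procedure:
# verify that both leaf certificates are signed by the generated CA
openssl verify -CAfile certs/ca.crt certs/om.crt certs/appdb.crt
# inspect the Subject Alternative Names on the Ops Manager certificate
openssl x509 -in certs/om.crt -noout -text | grep -A1 "Subject Alternative Name"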
Install Ops Manager.
At this point, you have prepared the environment and the Kubernetes Operator to deploy the MongoDB Ops Manager resource.
Create the necessary credentials for the MongoDB Ops Manager admin user that the Kubernetes Operator will create after deploying the MongoDB Ops Manager Application instance:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${NAMESPACE}" create secret generic om-admin-user-credentials \
  --from-literal=Username="admin" \
  --from-literal=Password="Passw0rd@" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe"
MongoDBOpsManager
mais simples possível (com TLS habilitado) em um cluster de único membro, que também é conhecido como o cluster do operador.Essa implantação é quase a mesma que para o modo de cluster único, mas com
spec.topology
espec.applicationDatabase.topology
configurados paraMultiCluster
.A implantação dessa forma mostra que um único sistema de cluster Kubernetes é um caso especial de um sistema de cluster multi-Kubernetes em um único cluster de membro do Kubernetes. Você pode começar a implantar o Aplicativo MongoDB Ops Manager e o Banco de Dados de Aplicativos em quantos clusters Kubernetes necessários desde o início e não precisa começar com a implantação com apenas um cluster Kubernetes de nó único.
Neste ponto, você preparou a implantação do MongoDB Ops Manager para abranger mais de um cluster Kubernetes , o que você fará mais tarde neste procedimento.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
  - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
    members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 3
  backup:
    enabled: false
EOF

Wait for the Kubernetes Operator to pick up the work and reach the status.applicationDatabase.phase=Pending state. Then wait for the Application Database and Ops Manager deployments to complete.

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met

Deploy MongoDB Ops Manager. The Kubernetes Operator deploys MongoDB Ops Manager by performing the following steps. It:

Deploys the Application Database's replica set nodes and waits for the MongoDB processes in the replica set to start running.

Deploys the MongoDB Ops Manager Application instance with the Application Database's connection string and waits for it to become ready.

Adds the Monitoring MongoDB Agent containers to each Application Database Pod.

Wait for the MongoDB Ops Manager Application and the Application Database Pods to start running.

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Application Database to reach Pending phase (enabling monitoring)..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Application Database to reach Pending phase (enabling monitoring)...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Disabled         13m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          10m
om-db-0-0   4/4     Running   0          69s
om-db-0-1   4/4     Running   0          2m12s
om-db-0-2   4/4     Running   0          3m32s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1

Now that you've deployed a single member cluster in multi-cluster mode, you can reconfigure the deployment to span more than one Kubernetes cluster.
On the second member cluster, deploy two additional Application Database replica set members and one additional instance of the MongoDB Ops Manager Application:
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
  - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
    members: 1
  - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
    members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 3
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
  backup:
    enabled: false
EOF

Wait for the Kubernetes Operator to pick up the work (Pending phase):
1 echo "Waiting for Application Database to reach Pending phase..." 2 kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s 1 Waiting for Application Database to reach Pending phase... 2 mongodbopsmanager.mongodb.com/om condition met Aguarde o operador Kubernetes terminar de implantar todos os componentes:
1 echo "Waiting for Application Database to reach Running phase..." 2 kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s 3 echo; echo "Waiting for Ops Manager to reach Running phase..." 4 kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s 5 echo; echo "MongoDBOpsManager resource" 6 kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om 7 echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}" 8 kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods 9 echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}" 10 kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods 1 Waiting for Application Database to reach Running phase... 2 mongodbopsmanager.mongodb.com/om condition met 3 4 Waiting for Ops Manager to reach Running phase... 5 mongodbopsmanager.mongodb.com/om condition met 6 7 MongoDBOpsManager resource 8 NAME REPLICAS VERSION STATE (OPSMANAGER) STATE (APPDB) STATE (BACKUP) AGE WARNINGS 9 om 7.0.4 Running Running Disabled 20m 10 11 Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 12 NAME READY STATUS RESTARTS AGE 13 om-0-0 2/2 Running 0 2m56s 14 om-db-0-0 4/4 Running 0 7m48s 15 om-db-0-1 4/4 Running 0 8m51s 16 om-db-0-2 4/4 Running 0 10m 17 18 Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1 19 NAME READY STATUS RESTARTS AGE 20 om-1-0 2/2 Running 0 3m27s 21 om-db-1-0 4/4 Running 0 6m32s 22 om-db-1-1 4/4 Running 0 5m5s
Enable backup.
In a multi-Kubernetes-cluster deployment of the MongoDB Ops Manager Application, you can configure only S3-based backup storage. This procedure refers to the S3_* variables defined in env_variables.sh.
Optional. Install the MinIO Operator.
This procedure deploys S3-compatible storage for your backups using the MinIO Operator. You can skip this step if you have AWS S3 or other S3-compatible buckets available. In that case, adjust the S3_* variables in env_variables.sh accordingly.

kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

# add two buckets to the tenant config
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
  --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"

Before configuring and enabling backup, create secrets:
s3-access-secret - contains the S3 credentials.
s3-ca-cert - contains a CA certificate that issued the bucket's server certificate. In the case of the sample MinIO deployment used in this procedure, the default Kubernetes Root CA certificate is used to sign the certificate. Because it's not a publicly trusted CA certificate, you must provide it so that MongoDB Ops Manager can trust the connection.
If you use publicly trusted certificates, you may skip this step and remove the values from the spec.backup.s3Stores.customCertificateSecretRefs and spec.backup.s3OpLogStores.customCertificateSecretRefs settings.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-access-secret \
  --from-literal=accessKey="${S3_ACCESS_KEY}" \
  --from-literal=secretKey="${S3_SECRET_KEY}"

# minio TLS secrets are signed with the default k8s root CA
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-ca-cert \
  --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
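Optionally (not part of the official procedure), you can confirm that the CA certificate was captured correctly by decoding it back out of the secret:
# print the subject of the CA certificate stored in s3-ca-cert
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secret s3-ca-cert \
  -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject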
Redeploy MongoDB Ops Manager with backup enabled.
The Kubernetes Operator can configure and deploy all of the components, the MongoDB Ops Manager Application, the Backup Daemon instances, and the Application Database's replica set nodes, in any combination on any member clusters for which you configure the Kubernetes Operator.
To illustrate the flexibility of the multi-Kubernetes-cluster deployment configuration, deploy only one Backup Daemon instance on the third member cluster and specify zero Backup Daemon members for the first and second clusters.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
  - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
    members: 1
    backup:
      members: 0
  - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
    members: 1
    backup:
      members: 0
  - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
    members: 0
    backup:
      members: 1
  configuration: # to avoid the configuration wizard on first login
    mms.adminEmailAddr: email@example.com
    mms.fromEmailAddr: email@example.com
    mms.ignoreInitialUiSetup: "true"
    mms.mail.hostname: smtp@example.com
    mms.mail.port: "465"
    mms.mail.ssl: "true"
    mms.mail.transport: smtp
    mms.minimumTLSVersion: TLSv1.2
    mms.replyToEmailAddr: email@example.com
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 3
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
  backup:
    enabled: true
    s3Stores:
    - name: my-s3-block-store
      s3SecretRef:
        name: "s3-access-secret"
      pathStyleAccessEnabled: true
      s3BucketEndpoint: "${S3_ENDPOINT}"
      s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
      customCertificateSecretRefs:
      - name: s3-ca-cert
        key: ca.crt
    s3OpLogStores:
    - name: my-s3-oplog-store
      s3SecretRef:
        name: "s3-access-secret"
      s3BucketEndpoint: "${S3_ENDPOINT}"
      s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
      pathStyleAccessEnabled: true
      customCertificateSecretRefs:
      - name: s3-ca-cert
        key: ca.crt
EOF

Wait for the Kubernetes Operator to finish its configuration:
echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Running          26m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          5m11s
om-db-0-0   4/4     Running   0          13m
om-db-0-1   4/4     Running   0          14m
om-db-0-2   4/4     Running   0          16m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME        READY   STATUS    RESTARTS   AGE
om-1-0      2/2     Running   0          5m12s
om-db-1-0   4/4     Running   0          12m
om-db-1-1   4/4     Running   0          11m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
NAME                   READY   STATUS    RESTARTS   AGE
om-2-backup-daemon-0   2/2     Running   0          3m8s
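To log in to the Ops Manager UI from your workstation, you can port-forward the service that backs the Ops Manager Application. This is a convenience sketch, not part of the official procedure; it assumes the om-svc service name used in the certificate SANs above, and that Ops Manager listens on port 8443 over HTTPS as noted in the Overview:
# forward the Ops Manager HTTPS port to localhost
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" port-forward svc/om-svc 8443:8443
# then browse to https://localhost:8443 and sign in with the om-admin-user-credentials values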
Optional. Delete the GKE (Google Kubernetes Engine) clusters and all their associated resources (VMs).
Run the following script to delete the GKE clusters and clean up your environment.
Important
The following commands are not reversible. They delete all clusters referenced in env_variables.sh. Do not run these commands if you wish to retain the GKE clusters, for example, if you didn't create the GKE clusters as part of this procedure.
yes | gcloud container clusters delete "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}" &
wait