
Increase Storage for Persistent Volumes

On this page

  • Prerequisites
  • Easy Expand Storage
  • Manually Expand Storage

The Ops Manager, MongoDB Database, AppDB, and Backup Daemon custom resources that comprise a standard Kubernetes Operator deployment are each deployed as Kubernetes StatefulSets. The Kubernetes Operator supports increasing the storage associated with these resources by increasing the capacity of their respective Kubernetes PersistentVolumeClaims, provided that the underlying Kubernetes StorageClass supports PersistentVolume expansion.
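
For example, you can list the PersistentVolumeClaims that back a deployment's StatefulSets (the mongodb namespace here is an assumption; substitute your own):

$ kubectl get pvc -n mongodb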

Depending on the specific resource type, you can increase storage in one of two ways. You can either manually increase storage, or you can leverage the Kubernetes Operator easy storage expansion feature. The following table illustrates which of these two procedures is supported for a given custom resource type.

Custom Resource Type  | Manual Storage Expansion | Easy Storage Expansion
----------------------|--------------------------|-----------------------
AppDB                 |                          | X
Backup Daemon         | X                        |
MongoDB Database      | X                        | X
MongoDB Multi-Cluster | X                        | X
Ops Manager           | X                        |

Prerequisites

Make sure the StorageClass and volume plugin provider that the Persistent Volumes use support resizing:

kubectl patch storageclass/<my-storageclass> --type='json' \
-p='[{"op": "add", "path": "/allowVolumeExpansion", "value": true }]'

If you don't have a StorageClass that supports resizing, ask your Kubernetes administrator for help.
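
You can verify whether a StorageClass allows expansion by reading its allowVolumeExpansion field, for example:

$ kubectl get storageclass <my-storageclass> -o jsonpath='{.allowVolumeExpansion}'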

Easy Expand Storage

Note

The easy expansion mechanism requires the default RBAC resources included with the Kubernetes Operator. Specifically, it requires get, list, watch, patch, and update permissions for PersistentVolumeClaims. If you have customized any of the Kubernetes Operator RBAC resources, you might need to adjust those permissions to allow the Kubernetes Operator to resize storage resources in your Kubernetes cluster.
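
For reference, the relevant rule in the Kubernetes Operator's Role or ClusterRole would be shaped like the following sketch; it restates the permissions listed above rather than reproducing the operator's actual default manifest:

- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "patch", "update"]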

This process results in a rolling restart of the MongoDB custom resource in your Kubernetes cluster.

Step 1

Use an existing database resource or create a new one with persistent storage. Wait until the persistent volume enters the Running state.

Example

A database resource with persistent storage would include:

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.4.0"
  project: my-project
  credentials: my-credentials
  type: ReplicaSet
  podSpec:
    persistence:
      single:
        storage: "1Gi"
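
You can create this resource in the usual way; the file name here is a placeholder:

$ kubectl apply -f my-replica-set.yaml
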
Step 2

  1. Start mongo in the Kubernetes cluster.

    $ kubectl exec -it <my-replica-set>-0 \
      /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.4.0/bin/mongo
  2. Insert data into the test database.

    <my-replica-set>:PRIMARY> use test
    switched to db test
    <my-replica-set>:PRIMARY> db.tmp.insertOne({"foo":"bar"})
    {
      "acknowledged" : true,
      "insertedId" : ObjectId("61128cb4a783c3c57ae5142d")
    }
Step 3

Important

You can only increase the disk size for existing storage resources; you can't decrease it. Attempting to decrease the storage size causes an error during the reconcile stage.

  1. Update the disk size. Open your preferred text editor and make changes similar to this example:

    Example

    To update the disk size of the replica set to 2 GB, change the storage value in the database resource specification:

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: <my-replica-set>
    spec:
      members: 3
      version: "4.4.0"
      project: my-project
      credentials: my-credentials
      type: ReplicaSet
      podSpec:
        persistence:
          single:
            storage: "2Gi"
  2. Update the MongoDB custom resource with the new volume size.

    kubectl apply -f my-updated-replica-set-vol.yaml
  3. Wait until the StatefulSet reaches the Running state.

Step 4

Check the status of the resize operation:

$ kubectl describe mongodb/<my-replica-set> -n mongodb

The following output indicates that your PVC resize request is being processed.

status:
  clusterStatusList: {}
  lastTransition: "2024-08-21T11:03:52+02:00"
  message: StatefulSet not ready
  observedGeneration: 2
  phase: Pending
  pvc:
  - phase: PVC Resize - STS has been orphaned
    statefulsetName: multi-replica-set-pvc-resize-0
  resourcesNotReady:
  - kind: StatefulSet
    message: 'Not all the Pods are ready (wanted: 2, updated: 1, ready: 1, current: 2)'
    name: multi-replica-set-pvc-resize-0
  version: ""
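
To poll only the phase rather than reading the full status, a jsonpath query such as the following works; the mongodb namespace is an assumption:

$ kubectl get mongodb/<my-replica-set> -n mongodb -o jsonpath='{.status.phase}'
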
Step 5

If you reuse Persistent Volumes, the data that you inserted in Step 2 is still present in the databases stored on the Persistent Volumes:

$ kubectl exec -it <my-replica-set>-1 \
/var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.4.0/bin/mongo
<my-replica-set>:PRIMARY> use test
switched to db test
<my-replica-set>:PRIMARY> db.tmp.count()
1
Manually Expand Storage

Step 1

Use an existing database resource or create a new one with persistent storage. Wait until the persistent volume enters the Running state.

Example

A database resource with persistent storage would include:

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.4.0"
  project: my-project
  credentials: my-credentials
  type: ReplicaSet
  podSpec:
    persistence:
      single:
        storage: "1Gi"
Step 2

  1. Start mongo in the Kubernetes cluster.

    $ kubectl exec -it <my-replica-set>-0 \
      /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.4.0/bin/mongo
  2. Insert data into the test database.

    <my-replica-set>:PRIMARY> use test
    switched to db test
    <my-replica-set>:PRIMARY> db.tmp.insertOne({"foo":"bar"})
    {
      "acknowledged" : true,
      "insertedId" : ObjectId("61128cb4a783c3c57ae5142d")
    }
Step 3

Invoke the following commands for the entire replica set:

kubectl patch pvc/"data-<my-replica-set>-0" -p='{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
kubectl patch pvc/"data-<my-replica-set>-1" -p='{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
kubectl patch pvc/"data-<my-replica-set>-2" -p='{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
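
Equivalently, a small shell loop covers all three members; this sketch assumes the data-<my-replica-set>-N claim naming shown above:

for i in 0 1 2; do
  kubectl patch pvc/"data-<my-replica-set>-$i" \
    -p='{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
done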

Wait until each Persistent Volume Claim reaches the following condition:

- lastProbeTime: null
  lastTransitionTime: "2019-08-01T12:11:39Z"
  message: Waiting for user to (re-)start a pod to finish file
    system resize of volume on node.
  status: "True"
  type: FileSystemResizePending
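
You can inspect a claim's conditions with either of the following standard kubectl reads:

$ kubectl describe pvc data-<my-replica-set>-0
$ kubectl get pvc data-<my-replica-set>-0 -o jsonpath='{.status.conditions[*].type}'
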
Step 4

Update the Kubernetes Operator deployment definition and apply the change to your Kubernetes cluster to scale the Kubernetes Operator down to 0 replicas. Scaling down to 0 replicas avoids a race condition in which the Kubernetes Operator tries to restore the state of the manually updated resource to match the resource's original definition.

# Source: enterprise-operator/templates/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator
  namespace: mongodb
spec:
  replicas: 0
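
Alternatively, you can scale the Deployment in place; this one-liner assumes the name and namespace shown above:

$ kubectl scale deployment/mongodb-enterprise-operator -n mongodb --replicas=0
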
Step 5

Note

This step removes the StatefulSet only. The pods remain unchanged and running.

Delete the StatefulSet resource.

kubectl delete sts --cascade=false <my-replica-set>
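
On kubectl v1.20 and later, --cascade=false is deprecated; the equivalent modern form is:

$ kubectl delete sts --cascade=orphan <my-replica-set>
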
Step 6
  1. Update the disk size. Open your preferred text editor and make changes similar to this example:

    Example

    To update the disk size of the replica set to 2 GB, change the storage value in the database resource specification:

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: <my-replica-set>
    spec:
      members: 3
      version: "4.4.0"
      project: my-project
      credentials: my-credentials
      type: ReplicaSet
      podSpec:
        persistence:
          single:
            storage: "2Gi"
  2. Recreate a StatefulSet resource with the new volume size.

    kubectl apply -f my-replica-set-vol.yaml
  3. Wait until the MongoDB custom resource is in a Running state.

Step 7

Invoke the following command:

kubectl rollout restart sts <my-replica-set>

The new pods mount the resized volume.
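
You can wait for the restart to finish with a standard rollout check:

$ kubectl rollout status sts <my-replica-set>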

Step 8

If you reused the Persistent Volumes, the data that you inserted in Step 2 is still present in the databases stored on the Persistent Volumes:

$ kubectl exec -it <my-replica-set>-1 \
/var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.4.0/bin/mongo
<my-replica-set>:PRIMARY> use test
switched to db test
<my-replica-set>:PRIMARY> db.tmp.count()
1
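
To confirm that the filesystem itself grew, you can also check the data mount from inside a pod; the /data mount path is an assumption based on the Kubernetes Operator's default layout:

$ kubectl exec -it <my-replica-set>-0 -- df -h /data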
