
Manage Atlas Stream Processing

You can use Atlas Kubernetes Operator to manage stream processing instances and connections for Atlas Stream Processing. Atlas Stream Processing enables you to process streams of complex data using the same MongoDB Query API that Atlas databases use. Atlas Stream Processing allows you to do the following tasks:

  • Build aggregation pipelines to continuously operate on streaming data without the delays inherent in batch processing.

  • Perform continuous schema validation to check that messages are properly formed, detect message corruption, and detect late-arriving data.

  • Continuously publish results to Atlas collections or Apache Kafka clusters, ensuring up-to-date views and analysis of your data.

Atlas Stream Processing components belong directly to Atlas projects and operate independently of Atlas clusters. To learn more, see Atlas Stream Processing Overview.

Atlas Stream Processing instances provide the context for all of your operations on streaming data. You can configure a sample connection, a connection to an Atlas change stream, or a connection to an Apache Kafka system. Then you can add the connection to the Connection Registry for your stream processing instance. To learn more, see Manage Atlas Stream Processing Instances.

To use Atlas Kubernetes Operator to manage a stream processing instance and its connections, do the following steps:

1

Create the AtlasStreamInstance custom resource.

Example:

cat <<EOF | kubectl apply -f -
apiVersion: atlas.mongodb.com/v1
kind: AtlasStreamInstance
metadata:
  name: my-stream-instance
spec:
  name: my-stream-instance
  clusterConfig:
    provider: AWS
    region: VIRGINIA_USA
    tier: SP30
  projectRef:
    name: my-project
EOF

To learn more about the available parameters, see the AtlasStreamInstance custom resource.

Note

Currently, Atlas Kubernetes Operator supports only the AWS provider and the VIRGINIA_USA region for this custom resource.

2

Create the AtlasStreamConnection custom resource.

You can configure a sample connection, a connection to an Atlas change stream, or a connection to an Apache Kafka system.

Example:

apiVersion: atlas.mongodb.com/v1
kind: AtlasStreamConnection
metadata:
  name: my-stream-connection
spec:
  name: sample_stream_solar
  type: Sample

Note

If you specify Sample for the spec.type parameter, the spec.name parameter must match the sample collection name. Currently, Atlas Kubernetes Operator supports only the sample_stream_solar sample collection for this custom resource.
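As with the instance manifest, you can pipe the connection manifest directly to kubectl. A minimal sketch, applying the sample connection definition shown above:

```shell
# Apply the sample AtlasStreamConnection manifest from step 2.
cat <<EOF | kubectl apply -f -
apiVersion: atlas.mongodb.com/v1
kind: AtlasStreamConnection
metadata:
  name: my-stream-connection
spec:
  name: sample_stream_solar
  type: Sample
EOF
```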

For a connection to an Atlas change stream:

apiVersion: atlas.mongodb.com/v1
kind: AtlasStreamConnection
metadata:
  name: my-stream-connection
spec:
  name: my-stream-connection
  type: Cluster
  clusterConfig:
    name: my-cluster
    role:
      name: my-db-role
      type: CUSTOM

For a connection to an Apache Kafka system:

apiVersion: atlas.mongodb.com/v1
kind: AtlasStreamConnection
metadata:
  name: my-stream-connection
spec:
  name: my-stream-connection
  type: Kafka
  kafkaConfig:
    bootstrapServers: "comma,separated,list,of,server,addresses"
    authentication:
      mechanism: SCRAM-512
      credentials:
        name: ref-to-creds-secret
        namespace: default
    security:
      protocol: SSL
      certificate:
        name: ref-to-certificate-secret
        namespace: default

To learn more about the available parameters, see the AtlasStreamConnection custom resource.
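The Kafka example above references Kubernetes Secrets for the SCRAM credentials and the SSL certificate. The following is a hedged sketch of creating them with kubectl; the `username`/`password` and `certificate` key names and the `atlas.mongodb.com/type=credentials` label are assumptions about the operator's expected Secret format, so check the AtlasStreamConnection custom resource reference for the exact contract:

```shell
# Credentials Secret for SCRAM authentication (key names are assumptions).
kubectl create secret generic ref-to-creds-secret \
  --namespace default \
  --from-literal=username=my-kafka-user \
  --from-literal=password=my-kafka-password

# Atlas Kubernetes Operator typically watches only Secrets carrying this label.
kubectl label secret ref-to-creds-secret --namespace default \
  atlas.mongodb.com/type=credentials

# Certificate Secret for the SSL security protocol (key name is an assumption).
kubectl create secret generic ref-to-certificate-secret \
  --namespace default \
  --from-file=certificate=./kafka-ca.pem
kubectl label secret ref-to-certificate-secret --namespace default \
  atlas.mongodb.com/type=credentials
```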

3

Add the connections to the Connection Registry of your stream processing instance by updating the AtlasStreamInstance custom resource.

Example:

cat <<EOF | kubectl apply -f -
apiVersion: atlas.mongodb.com/v1
kind: AtlasStreamInstance
metadata:
  name: my-stream-instance
spec:
  name: my-stream-instance
  clusterConfig:
    provider: AWS
    region: VIRGINIA_USA
    tier: SP30
  projectRef:
    name: my-project
  connectionRegistry:
    - name: ref-my-connection-1
      namespace: my-namespace1
    - name: ref-my-connection-2
      namespace: my-namespace2
    - name: ref-my-connection-3
      namespace: my-namespace1
EOF

Note

Currently, Atlas Kubernetes Operator supports only the AWS provider and the VIRGINIA_USA region for this custom resource.

To learn more about the available parameters, see the AtlasStreamInstance custom resource.
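To confirm that the operator reconciled the instance and its Connection Registry, you can inspect the custom resources with kubectl. A sketch, assuming the resource names and namespaces used in the examples above:

```shell
# Inspect the stream processing instance, including its status conditions.
kubectl get atlasstreaminstance my-stream-instance -o yaml

# List connection resources across the namespaces referenced in the registry.
kubectl get atlasstreamconnection --all-namespaces
```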