
All Sink Connector Configuration Properties

On this page

  • Overview
  • MongoDB Connection
  • MongoDB Namespace
  • Connector Topics
  • Connector Message Processing
  • Connector Error Handling
  • Post Processors
  • Id Strategies
  • Write Model Strategies
  • Topic Overrides
  • Change Data Capture
  • Time Series

On this page, you can view all available properties for the MongoDB Kafka sink connector. This page duplicates the content of the other sink connector configuration properties pages.

To view a list of all sink connector configuration properties pages, see the Sink Connector Configuration Properties page.

Use the following configuration settings to specify how your MongoDB Kafka sink connector connects and communicates with your MongoDB cluster.

To view only the options related to configuring your MongoDB connection, see the MongoDB Connection Configuration Properties page.

Name
Description
connection.uri
Required

Type: string

Description:
The MongoDB connection URI string to connect to your MongoDB instance or cluster.
For more information, see the Connect to MongoDB guide

IMPORTANT: To avoid exposing your authentication credentials in your connection.uri setting, use a ConfigProvider and set the appropriate configuration parameters.

Default: mongodb://localhost:27017
Accepted Values: A MongoDB connection URI string
server.api.version
Type: string

Description:
The Stable API version you want to use with your MongoDB server. For more information on the Stable API and versions of the server that support it, see the Stable API MongoDB server manual guide.

Default: ""
Accepted Values: An empty string or a valid Stable API version.
server.api.deprecationErrors
Type: boolean

Description:
When set to true, if the connector calls a command on your MongoDB instance that's deprecated in the declared Stable API version, it raises an exception.

You can set the API version with the server.api.version configuration option. For more information on the Stable API, see the MongoDB manual entry on the Stable API.

Default: false
Accepted Values: true or false
server.api.strict
Type: boolean

Description:
When set to true, if the connector calls a command on your MongoDB instance that's not covered in the declared Stable API version, it raises an exception.

You can set the API version with the server.api.version configuration option. For more information on the Stable API, see the MongoDB manual entry on the Stable API.

Default: false
Accepted Values: true or false
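Taken together, a minimal connection configuration might look like the following sketch (the host name and credentials are placeholders, not values from this page):

connection.uri=mongodb://<username>:<password>@mongodb0.example.com:27017
server.api.version=1
server.api.strict=true
server.api.deprecationErrors=true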

Use the following configuration settings to specify the MongoDB database and collection to which your MongoDB Kafka sink connector writes data. You can use the default DefaultNamespaceMapper or specify a custom class.

To view only the options related to specifying where the connector writes data, see the MongoDB Namespace Mapping Configuration Properties page.

Name
Description
namespace.mapper
Type: string

Description:
The fully-qualified class name of the class that specifies which database or collection in which to sink the data. The default DefaultNamespaceMapper uses values specified in the database and collection properties.

The connector includes an alternative class for specifying the
database and collection called FieldPathNamespaceMapper. See the
FieldPathNamespaceMapper settings described later on this page for more information.

Default:
com.mongodb.kafka.connect.sink.namespace.mapping.DefaultNamespaceMapper
Accepted Values: A fully qualified Java class name of a class that implements the NamespaceMapper interface.
database
Required

Type: string

Description:
The name of the MongoDB database to which the sink connector writes.

Accepted Values: A MongoDB database name
collection
Type: string

Description:
The name of the MongoDB collection to which the sink connector writes. If your sink connector follows multiple topics, this is the default collection for any writes that are not otherwise specified.
Default: The topic name.
Accepted Values: A MongoDB collection name
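For example, assuming the default DefaultNamespaceMapper, a configuration such as the following (the database and collection names are illustrative) writes every record to the quickstart.sensors namespace:

database=quickstart
collection=sensors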

If you configure your sink connector to use the FieldPathNamespaceMapper, you can specify which database and collection to sink a document into based on the data's field values.

To enable this mapping behavior, set your sink connector's namespace.mapper configuration property to the fully-qualified class name as shown below:

namespace.mapper=com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper

The FieldPathNamespaceMapper requires you to specify the following settings:

  • One or both of the mapping properties for the database and collection

  • Either the key or value mapping for the database

  • Either the key or value mapping for the collection

You can use the following settings to customize the behavior of the FieldPathNamespaceMapper:

Name
Description
namespace.mapper.key.database.field
Type: string

Description:
The name of the key document field that specifies the name of the database in which to write.
namespace.mapper.key.collection.field
Type: string

Description:
The name of the key document field that specifies the name of the collection in which to write.
namespace.mapper.value.database.field
Type: string

Description:
The name of the value document field that specifies the name of the database in which to write.
namespace.mapper.value.collection.field
Type: string

Description:
The name of the value document field that specifies the name of the collection in which to write.
namespace.mapper.error.if.invalid
Type: boolean

Description:
Whether to throw an exception when either the document is missing the mapped field or it has an invalid BSON type.

When set to true, the connector does not process documents missing the mapped field or that contain an invalid BSON type. The connector may halt or skip processing depending on the related error-handling configuration settings.

When set to false, if a document is missing the mapped field or if it has an invalid BSON type, the connector defaults to writing to the specified database and collection settings.

Default: false
Accepted Values: true or false
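As a sketch, the following configuration (the field and namespace names are hypothetical) routes each record to the collection named in its value document's ns field, falling back to the database and collection settings because namespace.mapper.error.if.invalid is false:

namespace.mapper=com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper
namespace.mapper.value.collection.field=ns
database=quickstart
collection=fallback
namespace.mapper.error.if.invalid=false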

Use the following configuration settings to specify which Kafka topics the MongoDB Kafka sink connector should watch for data.

To view only the options related to specifying Kafka topics, see the Kafka Topic Properties page.

Name
Description
topics
Required

Type: list

Description:
A list of Kafka topics that the sink connector watches.

You can define either the topics or the topics.regex setting, but not both.

Accepted Values: A comma-separated list of valid Kafka topics
topics.regex
Required

Type: string

Description:
A regular expression that matches the Kafka topics that the sink connector watches.

For example, the following regex matches topic names such as "activity.landing.clicks" and "activity.support.clicks". It does not match the topic names "activity.landing.views" and "activity.clicks".

topics.regex=activity\\.\\w+\\.clicks$

You can define either the topics or the topics.regex setting, but not both.

Accepted Values: A valid regular expression pattern using java.util.regex.Pattern.
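For example, either of the following settings (the topic names are illustrative) subscribes the connector to the two clicks topics; define only one of them in a given configuration:

topics=activity.landing.clicks,activity.support.clicks
topics.regex=activity\\.\\w+\\.clicks$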

Use the settings on this page to configure the message processing behavior of the MongoDB Kafka sink connector, including the following:

  • Message batch size

  • Rate limiting

  • Number of parallel tasks

To view only the options related to message processing, see the Connector Message Processing Properties page.

Name
Description
max.batch.size
Type: int

Description:
Maximum number of sink records to batch together for processing.

Consider the batch that contains the following records:
[ 1, 2, 3, 4, 5 ]
When set to 0, the connector performs a single bulk write for the entire batch.

When set to 1, the connector performs one bulk write for each record in the batch, for a total of five bulk writes as shown in the following example:
[1], [2], [3], [4], [5]
Default: 0
Accepted Values: An integer
bulk.write.ordered
Type: boolean

Description:
Whether the connector writes a batch of records as an ordered or unordered bulk write operation. When set to true, the default value, the connector writes a batch of records as an ordered bulk write operation.

To learn more about bulk write operations, see Bulk Write Operations.

Default: true
Accepted Values: true or false
rate.limiting.every.n
Type: int

Description:
Number of batches of records the sink connector processes in order to trigger the rate limiting timeout. A value of 0 means no rate limiting.

Default: 0
Accepted Values: An integer
rate.limiting.timeout
Type: int

Description:
How long (in milliseconds) to wait before the sink connector should resume processing after reaching the rate limiting threshold.

Default: 0
Accepted Values: An integer
tasks.max
Type: int

Description:
The maximum number of tasks to create for this connector. The connector may create fewer than the maximum tasks specified if it cannot handle the level of parallelism you specify.

IMPORTANT: If you specify a value greater than 1, the connector enables parallel processing of the tasks. If your topic has multiple partition logs, which enables the connector to read from the topic in parallel, the tasks may process the messages out of order.

Default: 1
Accepted Values: An integer
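The message-processing settings combine as in the following sketch; the values are illustrative, not tuning recommendations:

max.batch.size=100
bulk.write.ordered=false
rate.limiting.every.n=10
rate.limiting.timeout=2000
tasks.max=4

With these values the connector performs an unordered bulk write for each group of up to 100 records and pauses for 2000 milliseconds after every 10 batches.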

Use the following configuration settings to specify how the MongoDB Kafka sink connector handles errors and to configure the dead letter queue.

To view only the options related to handling errors, see the Connector Error Handling Properties page.

Name
Description
mongo.errors.tolerance
Type: string

Description:
Whether to continue processing messages if the connector encounters an error. Allows the connector to override the errors.tolerance Kafka cluster setting.

When set to none, the connector reports any error and blocks further processing of the rest of the messages.

When set to all, the connector ignores any problematic messages.

When set to data, the connector tolerates only data errors and fails on all other errors.

To learn more about error handling strategies, see the Handle Errors page.

This property overrides the errors.tolerance
property of the Connect Framework.

Default: Inherits the value from the errors.tolerance setting.
Accepted Values: "none", "all", or "data"
mongo.errors.log.enable
Type: boolean

Description:
Whether the connector should write details of errors including failed operations to the log file. The connector classifies errors as "tolerated" or "not tolerated" using the errors.tolerance or mongo.errors.tolerance settings.

When set to true, the connector logs both "tolerated" and "not tolerated" errors.
When set to false, the connector logs only "not tolerated" errors.

This property overrides the errors.log.enable
property of the Connect Framework.

Default: false
Accepted Values: true or false
errors.log.include.messages
Type: boolean

Description:
Whether the connector should include the invalid message when logging an error. An invalid message includes data such as record keys, values, and headers.

Default: false
Accepted Values: true or false
errors.deadletterqueue.topic.name
Type: string

Description:
Name of topic to use as the dead letter queue. If blank, the connector does not send any invalid messages to the dead letter queue.

To learn more about the dead letter queue, see the Dead Letter Queue Configuration Example.

Default: ""
Accepted Values: A valid Kafka topic name
errors.deadletterqueue.context.headers.enable
Type: boolean

Description:
Whether the connector should include context headers when it writes messages to the dead letter queue.

To learn more about the dead letter queue, see the Dead Letter Queue Configuration Example.

To learn about the exceptions the connector defines and reports through context headers, see Bulk Write Exceptions.

Default: false
Accepted Values: true or false
errors.deadletterqueue.topic.replication.factor
Type: integer

Description:
The number of nodes on which to replicate the dead letter queue topic. If you are running a single-node Kafka cluster, you must set this to 1.

To learn more about the dead letter queue, see the Dead Letter Queue Configuration Example.

Default: 3
Accepted Values: A valid number of nodes
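A dead letter queue setup using the properties above might look like the following sketch (the topic name is illustrative; the replication factor of 1 assumes a single-node Kafka cluster):

mongo.errors.tolerance=all
mongo.errors.log.enable=true
errors.log.include.messages=true
errors.deadletterqueue.topic.name=example.deadletterqueue
errors.deadletterqueue.context.headers.enable=true
errors.deadletterqueue.topic.replication.factor=1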

Use the following configuration settings to specify how the MongoDB Kafka sink connector should transform Kafka data before inserting it into MongoDB.

To view only the options related to post processors, see the Sink Connector Post-processor Properties page.

Name
Description
post.processor.chain
Type: list

Description:
A list of post-processor classes the connector should apply to process the data before saving it to MongoDB.

To learn more about post-processors and see examples of
their usage, see the Sink Connector Post Processors guide.
Default:
com.mongodb.kafka.connect.sink.processor.DocumentIdAdder
Accepted Values: A comma-separated list of fully qualified Java class names
field.renamer.mapping
Type: string

Description:
A list of field name mappings for key and value fields. Define the mappings in an inline JSON array in the following format:
[ { "oldName":"key.fieldA", "newName":"field1" }, { "oldName":"value.xyz", "newName":"abc" } ]
Default: []
Accepted Values: A valid JSON array
field.renamer.regexp
Type: string

Description:
A list of field name mappings for key and value fields using regular expressions. Define the mappings in an inline JSON array in the following format:
[ {"regexp":"^key\\\\..*my.*$", "pattern":"my", "replace":""}, {"regexp":"^value\\\\..*$", "pattern":"\\\\.", "replace":"_"} ]
Default: []
Accepted Values: A valid JSON array
key.projection.list
Type: string

Description:
A list of field names the connector should include in the key projection.

Default: ""
Accepted Values: A comma-separated list of field names
key.projection.type
Type: string

Description:
The key projection type the connector should use.

Default: none
Accepted Values: none, BlockList, or AllowList (Deprecated: blacklist, whitelist)
value.projection.list
Type: string

Description:
A list of field names the connector should include in the value projection.

Default: ""
Accepted Values: A comma-separated list of field names
value.projection.type
Type: string

Description:
The type of value projection the connector should use.

Default: none
Accepted Values: none, BlockList, or AllowList (Deprecated: blacklist, whitelist)
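For example, the following sketch (the field names are hypothetical) keeps only the listed fields in each value document:

value.projection.type=AllowList
value.projection.list=name,age,address.post_code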
writemodel.strategy
Type: string

Description:
The class that specifies the WriteModelStrategy the connector should use for Bulk Writes.

To learn more about how to create your own strategy, see the Sink Connector Write Model Strategies guide.
Default:
com.mongodb.kafka.connect.sink.writemodel.strategy.DefaultWriteModelStrategy
Accepted Values: A fully qualified Java class name

Use the following configuration settings to specify how the MongoDB Kafka sink connector determines the _id value for each document it writes to MongoDB.

To view only the options related to determining the _id field of your documents, see the Connector Id Strategy Properties page.

Name
Description
document.id.strategy
Type: string

Description:
The class the connector should use to generate a unique _id field.

Default:
com.mongodb.kafka.connect.sink.processor.id.strategy.BsonOidStrategy
Accepted Values: An empty string or a fully qualified Java class name
document.id.strategy.overwrite.existing
Type: boolean

Description:
Whether the connector should overwrite existing values in the _id field when it applies the strategy defined by the document.id.strategy property.

Default: false
Accepted Values: true or false
document.id.strategy.uuid.format
Type: string

Description:
Whether the connector should output the UUID in the _id field in string format or in BsonBinary format.

Default: string
Accepted Values: string or binary
delete.on.null.values
Type: boolean

Description:
Whether the connector should delete documents when the key value matches a document in MongoDB and the value field is null.

This setting applies when you specify an id generation strategy that operates on the key document such as FullKeyStrategy, PartialKeyStrategy, and ProvidedInKeyStrategy.

Default: false
Accepted Values: true or false
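A sketch combining the id strategy settings; BsonOidStrategy is the default class shown above, repeated here only to make the example self-contained:

document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.BsonOidStrategy
document.id.strategy.overwrite.existing=true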

You can set configuration properties to specify how the MongoDB Kafka sink connector writes data to MongoDB. The following sections describe the configuration properties that you can set to customize this behavior.

Set the writemodel.strategy configuration property to specify how the sink connector writes data when it receives a sink record.

You can set the value of writemodel.strategy to any of the fully-qualified class names of the write model strategies described in the Strategies section of this page. You can specify a strategy by setting the following configuration:

writemodel.strategy=<a write model strategy>

Set the delete.writemodel.strategy configuration property to specify how the sink connector writes data when it receives a tombstone event. A tombstone event is a record that contains a key but no value, which signifies a deleted record.

You can set the value of delete.writemodel.strategy to any of the fully-qualified class names of the write model strategies described in the Strategies section of this page. You can specify a strategy by setting the following configuration:

delete.writemodel.strategy=<a write model strategy>

To view only the options related to write model strategies, see the Sink Connector Write Model Strategies page.

Name
Description
DefaultWriteModelStrategy

Description:
This strategy uses the ReplaceOneDefaultStrategy by default, and the InsertOneDefaultStrategy if you set the timeseries.timefield option.

This is the default value for the writemodel.strategy configuration property.
InsertOneDefaultStrategy

Description:
Insert each sink record into MongoDB as a document.
To specify this strategy, set the configuration property to the following class name:
com.mongodb.kafka.connect.sink.writemodel.strategy.InsertOneDefaultStrategy
ReplaceOneDefaultStrategy

Description:
Replaces at most one document in MongoDB that matches a sink record by the _id field. If no documents match, the connector inserts the sink record as a new document.
To specify this strategy, set the configuration property to the following class name:
com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneDefaultStrategy
ReplaceOneBusinessKeyStrategy

Description:
Replaces at most one document that matches a sink record by a specified business key. If no documents match, the connector inserts the sink record as a new document.
To specify this strategy, set the configuration property to the following class name:
com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneBusinessKeyStrategy
To see an example showing how to use this strategy, see our guide on write model strategies.
DeleteOneDefaultStrategy

Description:
Deletes at most one document that matches your sink connector's key structure by the _id field only when the document contains a null value structure.

This is the default value for the delete.writemodel.strategy configuration property.

This strategy is set as the default value of the writemodel.strategy property when you set delete.on.null.values=true.
To specify this strategy, set the configuration property to the following class name:
com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneDefaultStrategy
DeleteOneBusinessKeyStrategy

Description:
Deletes at most one MongoDB document that matches a sink record by a business key.
To specify this strategy, set the configuration property to the following class name:
com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneBusinessKeyStrategy
To see an example showing how to use this strategy, see our guide on write model strategies.
UpdateOneDefaultStrategy

Description:
Updates at most one document in MongoDB that matches a sink record by the _id field. If no documents match, the connector inserts the sink record as a new document.
To specify this strategy, set the configuration property to the following class name:
com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneDefaultStrategy
UpdateOneTimestampsStrategy

Description:
Add _insertedTS (inserted timestamp) and _modifiedTS (modified timestamp) fields into documents.
To specify this strategy, set the configuration property to the following class name:
com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneTimestampsStrategy
To see an example showing how to use this strategy, see our guide on write model strategies.
UpdateOneBusinessKeyTimestampStrategy

Description:
Add _insertedTS (inserted timestamp) and _modifiedTS (modified timestamp) fields into documents that match a business key.
To specify this strategy, set the configuration property to the following class name:
com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneBusinessKeyTimestampStrategy
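For example, the following sketch pairs an update strategy for sink records with the default delete strategy for tombstone events, using class names from the table above:

writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneTimestampsStrategy
delete.writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneDefaultStrategy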

You can use the following MongoDB Kafka sink connector configuration settings to override global or default property settings for specific topics.

To view only the options related to overriding topic settings, see the Topic Override Properties page.

Name
Description
topic.override.<topicName>.<propertyName>
Type: string

Description:
Specify a topic and property name to override the corresponding global or default property setting.

For example, the topic.override.foo.collection=bar setting instructs
the sink connector to store data from the foo topic in the bar
collection.

You can specify any valid configuration setting in the
<propertyName> segment on a per-topic basis except
connection.uri and topics.

Default: ""
Accepted Values: Accepted values specific to the overridden property
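For example, the following sketch (the foo topic and bar collection are illustrative) overrides both the target collection and the batch size for a single topic:

topic.override.foo.collection=bar
topic.override.foo.max.batch.size=50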

Use the following configuration setting to specify a class the MongoDB Kafka sink connector uses to process change data capture (CDC) events.

See the Sink Connector Change Data Capture guide for examples using the built-in ChangeStreamHandler and handlers for the Debezium and Qlik Replicate event producers.

To view only the options related to change data capture handlers, see the Change Data Capture Properties page.

Name
Description
change.data.capture.handler
Type: string

Description:
The class name of the CDC handler to use for converting changes into event streams. See Available CDC Handlers for a list of CDC handlers.

Default: ""
Accepted Values: An empty string or a fully qualified Java class name
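For example, to use the built-in handler for MongoDB change stream events, you might set the property as in the following sketch; verify the fully-qualified class name against the Available CDC Handlers list, as the package path shown here is an assumption:

change.data.capture.handler=com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler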

Use the following configuration settings to specify how the MongoDB Kafka sink connector sinks data to a MongoDB time series collection.

To view only the options related to time series collections, see the Kafka Time Series Properties page.

Name
Description
timeseries.timefield
Type: string

Description:
The name of the top-level field in the source data that contains time information that you want to associate with the new document in the time series collection.

Default: ""
Accepted Values: An empty string or the name of a field that contains a BSON DateTime value
timeseries.timefield.auto.convert.date.format
Type: string

Description:
The date format pattern the connector should use to convert the source data contained in the field specified by the timeseries.timefield setting.

The connector passes the date format pattern to the Java DateTimeFormatter.ofPattern(pattern, locale) method to perform date and time conversions on the time field.

If the date value from the source data only contains date information, the connector sets the time information to the start of the specified day. If the date value does not contain the timezone offset, the connector sets the offset to UTC.

Default:
yyyy-MM-dd[['T'][ ]][HH:mm:ss[[.][SSSSSS][SSS]][ ]VV[ ]'['VV']'][HH:mm:ss[[.][SSSSSS][SSS]][ ]X][HH:mm:ss[[.][SSSSSS][SSS]]]
Accepted Values: A valid DateTimeFormatter format
timeseries.timefield.auto.convert
Type: boolean

Description:
Whether to convert the data in the field into the BSON Date format.

When set to true, the connector uses the milliseconds after epoch and discards fractional parts if the value is a number. If the value is a string, the connector uses the setting in the following configuration to parse the date:
timeseries.timefield.auto.convert.date.format
If the connector fails to convert the value, it sends the original value to the time series collection.

Default: false
Accepted Values: true or false
timeseries.timefield.auto.convert.locale.language.tag
Type: string

Description:
Which DateTimeFormatter locale language tag to use with the date format pattern (e.g. "en-US").

To learn more about locales, see the Java SE documentation of Locale.

Default: ROOT
Accepted Values: A valid Locale language tag format
timeseries.metafield
Type: string

Description:
Which top-level field to read from the source data to describe a group of related time series documents.

IMPORTANT: This field must not be the _id field nor the field you specified in the timeseries.timefield setting.

Default: ""
Accepted Values: An empty string or the name of a field that contains any BSON type except BsonArray.
timeseries.expire.after.seconds
Type: int

Description:
The number of seconds MongoDB should wait before automatically removing the time series collection data. The connector disables timed expiry when the setting value is less than 1.

To learn more, see Set up Automatic Removal for Time Series Collections in the MongoDB manual.

Default: 0
Accepted Values: An integer
timeseries.granularity
Type: string

Description:
The expected interval between subsequent measurements of your source data.

To learn more, see Set Granularity for Time Series Data in the MongoDB manual.

Optional
Default: ""
Accepted Values: "", "seconds", "minutes", "hours"
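A time series sink configuration might look like the following sketch (the field names ts and sensorId are hypothetical):

timeseries.timefield=ts
timeseries.timefield.auto.convert=true
timeseries.metafield=sensorId
timeseries.expire.after.seconds=86400
timeseries.granularity=minutes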

For an example of how to convert an existing collection to a time series collection, see the tutorial on migrating an existing collection to a time series collection.
