All Source Connector Configuration Properties
Overview
On this page, you can view all available configuration properties for the MongoDB Kafka source connector. This page duplicates the content of the other source connector configuration properties pages.
To view a list of all source connector configuration properties pages, see the Source Connector Configuration Properties page.
MongoDB Connection
Use the following configuration settings to specify how the MongoDB Kafka source connector establishes a connection and communicates with your MongoDB cluster.
To view only the options related to your MongoDB connection, see the MongoDB Source Connection Properties page.
Name | Description
---|---
connection.uri | Required<br>Type: string<br>Description: The URI connection string to connect to your MongoDB instance or cluster. To learn more, see Connect to MongoDB.<br>IMPORTANT: To avoid exposing your authentication credentials in your connection.uri setting, use a ConfigProvider and set the appropriate configuration parameters.<br>Default: mongodb://localhost:27017,localhost:27018,localhost:27019<br>Accepted Values: A MongoDB URI connection string
database | Type: string<br>Description: Name of the database to watch for changes. If not set, the connector watches all databases for changes.<br>Default: ""<br>Accepted Values: A single database name
collection | Type: string<br>Description: Name of the collection in the database to watch for changes. If not set, the connector watches all collections for changes.<br>IMPORTANT: If your database setting is "", the connector ignores the collection setting.<br>Default: ""<br>Accepted Values: A single collection name
server.api.version | Type: string<br>Description: The Stable API version you want to use with your MongoDB cluster. For more information on the Stable API and the versions of MongoDB Server that support it, see the Stable API guide.<br>Default: ""<br>Accepted Values: An empty string or a valid Stable API version
server.api.deprecationErrors | Type: boolean<br>Description: When set to true, if the connector calls a command on your MongoDB instance that is deprecated in the declared Stable API version, it raises an exception. You can set the API version with the server.api.version configuration option. For more information on the Stable API, see the MongoDB manual entry on the Stable API.<br>Default: false<br>Accepted Values: true or false
server.api.strict | Type: boolean<br>Description: When set to true, if the connector calls a command on your MongoDB instance that is not covered in the declared Stable API version, it raises an exception. You can set the API version with the server.api.version configuration option. For more information on the Stable API, see the MongoDB manual entry on the Stable API.<br>Default: false<br>Accepted Values: true or false
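As a minimal sketch, the following fragment shows how these connection settings fit together in a connector configuration. The host name, credentials, and namespace (mongodb0.example.com, inventory, orders) are placeholders, not values from this page:

```properties
# Hypothetical connection settings for a source connector (placeholder values)
connection.uri=mongodb://user:password@mongodb0.example.com:27017
database=inventory
collection=orders
# Pin the connector to Stable API version 1 and fail fast on
# deprecated or uncovered commands
server.api.version=1
server.api.strict=true
server.api.deprecationErrors=true
```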
Kafka Topic
Use the following configuration settings to specify the Kafka topics to which the MongoDB Kafka source connector publishes data.
To view only the options related to your Kafka topic, see the Kafka Topic Properties page.
Name | Description
---|---
topic.prefix | Type: string<br>Description: Specifies the first part of the destination Kafka topic name to which the connector publishes change stream events. The destination topic name is composed of the topic.prefix value followed by the database and collection names, separated by the value specified in the topic.separator property. To learn more, see the example in Topic Naming Prefix.<br>Default: ""<br>Accepted Values: A string composed of ASCII alphanumeric characters, which may include ".", "-", and "_"
topic.suffix | Type: string<br>Description: Specifies the last part of the destination Kafka topic name to which the connector publishes change stream events. The destination topic name is composed of the database and collection names followed by the topic.suffix value, separated by the value specified in the topic.separator property. To learn more, see the example in Topic Naming Suffix.<br>Default: ""<br>Accepted Values: A string composed of ASCII alphanumeric characters, which may include ".", "-", and "_"
topic.namespace.map | Type: string<br>Description: Specifies a JSON mapping between change stream document namespaces and topic names. You can use the topic.namespace.map property to specify complex mappings. This property supports regex and wildcard matching. To learn more about these behaviors and view examples, see Topic Namespace Map.<br>Default: ""<br>Accepted Values: A valid JSON object
topic.separator | Type: string<br>Description: Specifies the string the connector uses to concatenate the values used to create the name of your topic. The connector publishes records to a topic with a name formed by concatenating, in order, the topic.prefix value, the database name, the collection name, and the topic.suffix value. For example, with topic.prefix set to "prefix" and topic.separator set to "-", the connector publishes change stream documents from the coll collection of the db database to the prefix-db-coll topic.<br>IMPORTANT: The topic.separator property doesn't affect how you define the topic.namespace.map property. The topic.namespace.map property uses MongoDB namespaces, which you must always specify with a "." character to separate the database and collection name.<br>Default: "."<br>Accepted Values: A string
topic.mapper | Type: string<br>Description: The Java class that defines your custom topic mapping logic.<br>Default: com.mongodb.kafka.connect.source.topic.mapping.DefaultTopicMapper<br>Accepted Values: Valid full class name of an implementation of the TopicMapper class
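To illustrate how the naming properties combine, the following sketch uses hypothetical prefix, suffix, and mapped topic names (orders-topic is a placeholder):

```properties
# With the default topic.separator ".", the destination topic name becomes:
#   <topic.prefix>.<database>.<collection>.<topic.suffix>
# so events from db.coll are published to prefix.db.coll.suffix
topic.prefix=prefix
topic.suffix=suffix
topic.separator=.
# Alternatively, map specific namespaces to explicit topic names.
# Keys always use "." as the namespace separator, regardless of topic.separator.
topic.namespace.map={"db.coll": "orders-topic"}
```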
Change Streams
Use the following configuration settings to specify aggregation pipelines for change streams and read preferences for change stream cursors when working with the MongoDB Kafka source connector.
To view only the options related to change streams, see the Change Stream Properties page.
Name | Description
---|---
pipeline | Type: string<br>Description: An array of aggregation pipeline stages to run in your change stream. You must configure this setting for the change stream event document, not the fullDocument field. For example, the pipeline [{"$match": {"operationType": "insert"}}] passes only insert events to the connector. For more examples, see the connector usage examples.<br>Default: "[]"<br>Accepted Values: Valid aggregation pipeline stages
change.stream.full.document | Type: string<br>Description: Determines what values your change stream returns on update operations. The default setting returns the differences between the original document and the updated document. The updateLookup setting returns the differences between the original document and the updated document as well as a copy of the entire updated document at a point in time after the update. The whenAvailable setting returns the updated document, if available. The required setting returns the updated document and raises an error if it is not available. For more information on how this change stream option works, see Lookup Full Document for Update Operations in the MongoDB manual.<br>Default: ""<br>Accepted Values: "", "updateLookup", "whenAvailable", or "required"
change.stream.full.document.before.change | Type: string<br>Description: Configures the document pre-image your change stream returns on update operations. The pre-image is not available for source records published while copying existing data, and the pre-image configuration has no effect on copying. To learn how to configure a collection to enable pre-images, see Change Streams with Document Pre- and Post-Images in the MongoDB manual. The default setting suppresses the document pre-image. The whenAvailable setting returns the document pre-image if it's available, before it was replaced, updated, or deleted. The required setting returns the document pre-image and raises an error if it is not available.<br>Default: ""<br>Accepted Values: "", "whenAvailable", or "required"
publish.full.document.only | Type: boolean<br>Description: Whether to return only the fullDocument field from the change stream event document produced by any update event. The fullDocument field contains the most current version of the updated document. To learn more about the fullDocument field, see the update event in the Server manual. When set to true, the connector overrides the change.stream.full.document setting and sets it to updateLookup so that the fullDocument field contains updated documents.<br>Default: false<br>Accepted Values: true or false
publish.full.document.only.tombstone.on.delete | Type: boolean<br>Description: Whether to return tombstone events when documents are deleted. Tombstone events contain the keys of deleted documents with null values. This setting applies only when publish.full.document.only is true.<br>Default: false<br>Accepted Values: true or false
change.stream.document.key.as.key | Type: boolean<br>Description: Whether to use the document key for the source record key if the document key is present. When set to true, the connector adds the keys of deleted documents to the tombstone events. When set to false, the connector uses the resume token as the source key for the tombstone events.<br>Default: true<br>Accepted Values: true or false
collation | Type: string<br>Description: A JSON collation document that specifies language-specific ordering rules that MongoDB applies to the documents returned by the change stream.<br>Default: ""<br>Accepted Values: A valid collation JSON document
batch.size | Type: int<br>Description: The change stream cursor batch size.<br>Default: 0<br>Accepted Values: An integer
poll.await.time.ms | Type: long<br>Description: The maximum amount of time in milliseconds that the server waits for new data changes to report to the change stream cursor before returning an empty batch.<br>Default: 5000<br>Accepted Values: An integer
poll.max.batch.size | Type: int<br>Description: Maximum number of documents to read in a single batch when polling a change stream cursor for new data. You can use this setting to limit the amount of data buffered internally in the connector.<br>Default: 1000<br>Accepted Values: An integer
Output Format
Use the following configuration settings to specify the format of the data the MongoDB Kafka source connector publishes to Kafka topics.
To view only the options related to the output format, see the Output Format Properties page.
Name | Description
---|---
output.format.key | Type: string<br>Description: Specifies the data format in which the source connector outputs the key document.<br>Default: json<br>Accepted Values: bson, json, schema
output.format.value | Type: string<br>Description: Specifies the data format in which the source connector outputs the value document. The connector supports Protobuf as an output data format. You can enable this format by specifying the schema value and installing and configuring the Kafka Connect Protobuf Converter.<br>Default: json<br>Accepted Values: bson, json, schema
output.json.formatter | Type: string<br>Description: Class name of the JSON formatter the connector should use to output data.<br>Default: com.mongodb.kafka.connect.source.json.formatter.DefaultJson<br>Accepted Values: Your custom JSON formatter's full class name or one of the built-in formatter class names: com.mongodb.kafka.connect.source.json.formatter.DefaultJson, com.mongodb.kafka.connect.source.json.formatter.ExtendedJson, or com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson. To learn more about these output formats, see JSON Formatters.
output.schema.key | Type: string<br>Description: Specifies an Avro schema definition for the key document of the SourceRecord. To learn more about Avro schema, see Avro in the Data Formats guide.<br>Default: an Avro record schema containing a single string field named "_id"<br>Accepted Values: A valid Avro schema
output.schema.value | Type: string<br>Description: Specifies an Avro schema definition for the value document of the SourceRecord. To learn more about Avro schema, see Avro in the Data Formats guide.<br>Default: the connector's built-in Avro schema describing the change stream event document<br>Accepted Values: A valid Avro schema
output.schema.infer.value | Type: boolean<br>Description: Whether the connector should infer the schema for the value document of the SourceRecord. Since the connector processes each document in isolation, the connector may generate many schemas.<br>IMPORTANT: The connector only reads this setting when you set your output.format.value setting to schema.<br>Default: false<br>Accepted Values: true or false
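For instance, to emit values with a schema and let the connector infer it per document, a configuration sketch might look like the following. Converter setup on the Kafka Connect side is deployment-specific and omitted here:

```properties
# Emit keys as JSON and values with a schema
output.format.key=json
output.format.value=schema
# Infer the value schema from each document
# (read only when output.format.value=schema)
output.schema.infer.value=true
# Use the simplified built-in JSON formatter for any JSON output
output.json.formatter=com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson
```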
Startup
Use the following configuration settings to specify how the MongoDB Kafka source connector starts up, including how it converts existing MongoDB collections into Change Stream events.
To view only the options related to startup, see the Startup Properties page.
Name | Description
---|---
startup.mode | Type: string<br>Description: Specifies how the connector should start up when there is no source offset available. Resuming a change stream requires a resume token, which the connector gets from the source offset. If no source offset is available, the connector may either ignore all or some of the existing source data, or may first copy all existing source data and then continue with processing new data. If startup.mode=latest, the connector ignores all existing source data. If startup.mode=timestamp, the connector actuates the startup.mode.timestamp.* properties; if no properties are configured, timestamp is equivalent to latest. If startup.mode=copy_existing, the connector copies all existing source data to Change Stream events. This setting is equivalent to the deprecated setting copy.existing=true. If any system changes the data in the database while the source connector is copying existing data from it, MongoDB may produce duplicate change stream events to reflect the latest changes. Because the change stream events on which the data copy relies are idempotent, the copied data is eventually consistent.<br>Default: latest<br>Accepted Values: latest, timestamp, copy_existing
startup.mode.timestamp.start.at.operation.time | Type: string<br>Description: Actuated only if startup.mode=timestamp. Specifies the starting point for the change stream. To learn more about change stream parameters, see $changeStream (aggregation) in the MongoDB manual.<br>Default: ""<br>Accepted Values: An integer number of seconds since the Epoch, an instant in ISO-8601 format, or a BSON Timestamp in extended JSON format
startup.mode.copy.existing.namespace.regex | Type: string<br>Description: Regular expression the connector uses to match namespaces from which to copy data. A namespace describes the MongoDB database name and collection separated by a period (for example, databaseName.collectionName). For example, the regular-expression setting stats\.page.* matches collections that start with "page" in the stats database. The \ character in the example escapes the . character that follows it in the regular expression. For more information on how to build regular expressions, see Patterns in the Java API documentation.<br>Default: ""<br>Accepted Values: A valid regular expression
startup.mode.copy.existing.pipeline | Type: string<br>Description: An inline array of pipeline operations the connector runs when copying existing data. You can use this setting to filter the source collection and improve the use of indexes in the copying process. For example, the setting [{"$match": {"closed": false}}] uses the $match aggregation operator to instruct the connector to copy only documents that contain a closed field with a value of false.<br>Default: ""<br>Accepted Values: Valid aggregation pipeline stages
startup.mode.copy.existing.max.threads | Type: int<br>Description: The maximum number of threads the connector can use to copy data.<br>Default: number of processors available in the environment<br>Accepted Values: An integer
startup.mode.copy.existing.queue.size | Type: int<br>Description: The size of the queue the connector can use when copying data.<br>Default: 16000<br>Accepted Values: An integer
startup.mode.copy.existing.allow.disk.use | Type: boolean<br>Description: When set to true, the connector uses temporary disk storage for the copy existing aggregation.<br>Default: true<br>Accepted Values: true or false
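Putting the copy-existing settings together, a startup sketch might look like the following. The stats database, the "page" collection prefix, and the closed field are the illustrative values used in the table above:

```properties
# Copy existing data at startup, then continue with new change events
startup.mode=copy_existing
# Copy only collections in the stats database whose names start with "page".
# The backslash is doubled because properties files treat "\" as an escape;
# the regex the connector receives is stats\.page.*
startup.mode.copy.existing.namespace.regex=stats\\.page.*
# Copy only documents whose "closed" field is false
startup.mode.copy.existing.pipeline=[{"$match": {"closed": false}}]
startup.mode.copy.existing.max.threads=4
startup.mode.copy.existing.queue.size=16000
```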
Error Handling and Resuming from Interruption
Use the following configuration settings to specify how the MongoDB Kafka source connector behaves when it encounters errors and to specify settings related to resuming interrupted reads.
To view only the options related to error handling, see the Error Handling and Resuming from Interruption Properties page.
Name | Description
---|---
mongo.errors.tolerance | Type: string<br>Description: Whether to continue processing messages when the connector encounters an error. Set this to "none" if you want the connector to stop processing messages and report the issue if it encounters an error. Set this to "all" if you want the connector to continue processing messages and ignore any errors it encounters.<br>IMPORTANT: This property overrides the errors.tolerance Connect Framework property.<br>Default: "none"<br>Accepted Values: "none" or "all"
mongo.errors.log.enable | Type: boolean<br>Description: Whether the connector should report errors in the log file. Set this to true to log all errors the connector encounters. Set this to false to log only errors that are not tolerated by the connector. You can specify which errors the connector should tolerate using the errors.tolerance or mongo.errors.tolerance setting.<br>IMPORTANT: This property overrides the errors.log.enable Connect Framework property.<br>Default: false<br>Accepted Values: true or false
mongo.errors.deadletterqueue.topic.name | Type: string<br>Description: The name of the topic to use as the dead letter queue. If you specify a value, the connector writes invalid messages to the dead letter queue topic as extended JSON strings. If you leave this setting blank, the connector does not write invalid messages to any topic.<br>IMPORTANT: You must set the errors.tolerance or mongo.errors.tolerance setting to "all" to enable this property.<br>Default: ""<br>Accepted Values: A valid Kafka topic name
offset.partition.name | Type: string<br>Description: The custom offset partition name to use. You can use this option to instruct the connector to start a new change stream when an existing offset contains an invalid resume token. If you leave this setting blank, the connector uses the default partition name based on the connection details. To view a strategy for naming offset partitions, see Reset Stored Offsets.<br>Default: ""<br>Accepted Values: A string. To learn more about naming a partition, see SourceRecord in the Apache Kafka API documentation.
heartbeat.interval.ms | Type: long<br>Description: The number of milliseconds the connector waits between sending heartbeat messages. The connector sends heartbeat messages when no source records are published in the specified interval. This mechanism improves resumability of the connector for low-volume namespaces. Heartbeat messages contain a postBatchResumeToken data field. The value of this field contains the MongoDB server oplog entry that the connector last read from the change stream. Set this to 0 to disable heartbeat messages. To learn more, see Prevention in the Invalid Resume Token page.<br>Default: 0<br>Accepted Values: An integer
heartbeat.topic.name | Type: string<br>Description: The name of the topic on which the connector should publish heartbeat messages. You must provide a positive value in the heartbeat.interval.ms setting to enable this feature.<br>Default: __mongodb_heartbeats<br>Accepted Values: A valid Kafka topic name