|===
| Name | Description | Default | Type

| additionalProperties (common) | Additional properties for the Debezium components, for cases where they cannot be set directly on the Camel configuration (e.g. Kafka Connect properties needed by the Debezium engine, such as setting KafkaOffsetBackingStore). These properties have to be prefixed with additionalProperties.. E.g.: additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro. See the first sketch after this table. | | Map
| bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored. See the route sketch after this table. | false | boolean
| configuration (consumer) | Allows a pre-configured Configuration to be set. | | OracleConnectorEmbeddedDebeziumConfiguration
| internalKeyConverter (consumer) | The Converter class that should be used to serialize and deserialize key data for offsets. The default is the JSON converter. | org.apache.kafka.connect.json.JsonConverter | String
| internalValueConverter (consumer) | The Converter class that should be used to serialize and deserialize value data for offsets. The default is the JSON converter. | org.apache.kafka.connect.json.JsonConverter | String
| offsetCommitPolicy (consumer) | The name of the Java class of the commit policy. It defines when an offset commit has to be triggered, based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals. See the commit policy sketch after this table. | | String
| offsetCommitTimeoutMs (consumer) | Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds. | 5000 | long
| offsetFlushIntervalMs (consumer) | Interval at which to try committing offsets. The default is 1 minute. | 60000 | long
| offsetStorage (consumer) | The name of the Java class that is responsible for persistence of connector offsets. See the offset storage sketch after this table. | org.apache.kafka.connect.storage.FileOffsetBackingStore | String
| offsetStorageFileName (consumer) | Path to the file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore. | | String
| offsetStoragePartitions (consumer) | The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'. | | int
| offsetStorageReplicationFactor (consumer) | Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore. | | int
| offsetStorageTopic (consumer) | The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore. | | String
| autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up in the registry whether there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc. | true | boolean
| binaryHandlingMode (oracle) | Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string. | bytes | String
| columnExcludeList (oracle) | Regular expressions matching columns to exclude from change events. | | String
| columnIncludeList (oracle) | Regular expressions matching columns to include in change events. | | String
| columnPropagateSourceType (oracle) | A comma-separated list of regular expressions matching fully-qualified names of columns whose original type and original length should be added as parameters to the corresponding field schemas in the emitted change records. | | String
| converters (oracle) | Optional list of custom converters to be used instead of the default ones. Each converter is defined using a '<converter.prefix>.type' config option and configured using '<converter.prefix>.<option>' options. | | String
| databaseConnectionAdapter (oracle) | The adapter to use when capturing changes from the database. Options include: 'logminer' (the default) to capture changes using native Oracle LogMiner; 'xstream' to capture changes using Oracle XStreams. | LogMiner | String
| databaseDbname (oracle) | The name of the database from which the connector should capture changes. | | String
| databaseHostname (oracle) | Resolvable hostname or IP address of the database server. | | String
| databaseOutServerName (oracle) | Name of the XStream Out server to connect to. | | String
| databasePassword (oracle) | Required Password of the database user to be used when connecting to the database. | | String
| databasePdbName (oracle) | Name of the pluggable database when working with a multi-tenant set-up. The CDB name must be given via database.dbname in this case. | | String
| databasePort (oracle) | Port of the database server. | 1528 | int
| databaseUrl (oracle) | Complete JDBC URL, as an alternative to specifying hostname, port and database; provided as a way to support alternative connection scenarios. | | String
| databaseUser (oracle) | Name of the database user to be used when connecting to the database. | | String
| datatypePropagateSourceType (oracle) | A comma-separated list of regular expressions matching database-specific data type names whose original type and original length should be added as parameters to the corresponding field schemas in the emitted change records. | | String
| decimalHandlingMode (oracle) | Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect’s 'org.apache.kafka.connect.data.Decimal' type; 'string' uses a string to represent values; 'double' represents values using Java’s 'double', which may not offer the same precision but is far easier to use in consumers. | precise | String
| errorsMaxRetries (oracle) | The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries). | -1 | int
| eventProcessingFailureHandlingMode (oracle) | Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped. | fail | String
| heartbeatActionQuery (oracle) | The query executed with every heartbeat. | | String
| heartbeatIntervalMs (oracle) | Length of the interval, in milliseconds, at which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
| heartbeatTopicsPrefix (oracle) | The prefix that is used to name heartbeat topics. Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
| includeSchemaChanges (oracle) | Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and a value that includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history. | true | boolean
| includeSchemaComments (oracle) | Whether the connector should parse table and column comments into metadata objects. Note: enabling this option has implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. The default is 'false'. | false | boolean
| intervalHandlingMode (oracle) | Specify how INTERVAL columns should be represented in change events, including: 'string' represents values as an exact ISO-formatted string; 'numeric' (default) represents values using the inexact conversion into microseconds. | numeric | String
| lobEnabled (oracle) | When set to 'false', the default, LOB fields will not be captured nor emitted. When set to 'true', the connector will capture LOB fields and emit changes for those fields like any other column type. | false | boolean
| logMiningArchiveDestinationName (oracle) | Sets the specific archive log destination as the source for reading archive logs. When not set, the connector will automatically select the first LOCAL and VALID destination. | | String
| logMiningArchiveLogHours (oracle) | The number of hours in the past from SYSDATE to mine archive logs. Using 0 mines all available archive logs. | 0 | long
| logMiningArchiveLogOnlyMode (oracle) | When set to 'false', the default, the connector will mine both archive logs and redo logs to emit change events. When set to 'true', the connector will only mine archive logs. There are circumstances where it is advantageous to only mine archive logs and accept latency in event emission due to frequently rotating redo logs. | false | boolean
| logMiningArchiveLogOnlyScnPollIntervalMs (oracle) | The interval in milliseconds to wait between polls checking to see if the SCN is in the archive logs. | 10s | long
| logMiningBatchSizeDefault (oracle) | The starting SCN interval size that the connector will use for reading data from redo/archive logs. | 20000 | long
| logMiningBatchSizeMax (oracle) | The maximum SCN interval size that this connector will use when reading from redo/archive logs. | 100000 | long
| logMiningBatchSizeMin (oracle) | The minimum SCN interval size that this connector will try to read from redo/archive logs. The active batch size will also be increased/decreased by this amount for tuning connector throughput when needed. | 1000 | long
| logMiningBufferDropOnStop (oracle) | When set to true, the underlying buffer cache is not retained when the connector is stopped. When set to false (the default), the buffer cache is retained across restarts. | false | boolean
| logMiningBufferInfinispanCacheEvents (oracle) | Specifies the XML configuration for the Infinispan 'events' cache. | | String
| logMiningBufferInfinispanCacheProcessedTransactions (oracle) | Specifies the XML configuration for the Infinispan 'processed-transactions' cache. | | String
| logMiningBufferInfinispanCacheSchemaChanges (oracle) | Specifies the XML configuration for the Infinispan 'schema-changes' cache. | | String
| logMiningBufferInfinispanCacheTransactions (oracle) | Specifies the XML configuration for the Infinispan 'transactions' cache. | | String
| logMiningBufferTransactionEventsThreshold (oracle) | The number of events a transaction can include before the transaction is discarded. This is useful for managing buffer memory and/or space when dealing with very large transactions. Defaults to 0, meaning that no threshold is applied and transactions can have unlimited events. | 0 | long
| logMiningBufferType (oracle) | The buffer type controls how the connector manages buffering transaction data. 'memory' uses the JVM process' heap to buffer all transaction data. 'infinispan_embedded' uses an embedded Infinispan cache to buffer transaction data and persist it to disk. 'infinispan_remote' uses a remote Infinispan cluster to buffer transaction data and persist it to disk. | memory | String
| logMiningFlushTableName (oracle) | The name of the flush table used by the connector; defaults to LOG_MINING_FLUSH. | LOG_MINING_FLUSH | String
| logMiningQueryFilterMode (oracle) | Specifies how the filter configuration is applied to the LogMiner database query. 'none': the query does not apply any schema or table filters, and all filtering is done at runtime by the connector. 'in': the query uses SQL in-clause expressions to specify the schema or table filters. 'regex': the query uses Oracle REGEXP_LIKE expressions to specify the schema or table filters. | none | String
| logMiningRestartConnection (oracle) | Debezium opens a database connection and keeps that connection open throughout the entire streaming phase. In some situations, this can lead to excessive SGA memory usage. By setting this option to 'true' (the default is 'false'), the connector will close and re-open a database connection after every detected log switch or when log.mining.session.max.ms has been reached. | false | boolean
| logMiningScnGapDetectionGapSizeMin (oracle) | Used for SCN gap detection: if the difference between the current SCN and the previous end SCN is bigger than this value, and the time difference between them is smaller than log.mining.scn.gap.detection.time.interval.max.ms, it is considered an SCN gap. | 1000000 | long
| logMiningScnGapDetectionTimeIntervalMaxMs (oracle) | Used for SCN gap detection: if the difference between the current SCN and the previous end SCN is bigger than log.mining.scn.gap.detection.gap.size.min, and the time difference between them is smaller than this value, it is considered an SCN gap. | 20s | long
| logMiningSessionMaxMs (oracle) | The maximum number of milliseconds that a LogMiner session lives for before being restarted. Defaults to 0 (indefinite, until a log switch occurs). | 0ms | long
| logMiningSleepTimeDefaultMs (oracle) | The amount of time that the connector will sleep after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds. | 1s | long
| logMiningSleepTimeIncrementMs (oracle) | The maximum amount of time that the connector will use to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds. | 200ms | long
| logMiningSleepTimeMaxMs (oracle) | The maximum amount of time that the connector will sleep after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds. | 3s | long
| logMiningSleepTimeMinMs (oracle) | The minimum amount of time that the connector will sleep after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds. | 0ms | long
| logMiningStrategy (oracle) | There are two strategies: the online catalog, with faster mining but no captured DDL; or the data dictionary loaded into the redo log files. | redo_log_catalog | String
| logMiningTransactionRetentionMs (oracle) | Duration in milliseconds to keep long-running transactions in the transaction buffer between log mining sessions. By default, all transactions are retained. | 0ms | long
| logMiningUsernameExcludeList (oracle) | Comma-separated list of usernames to exclude from the LogMiner query. | | String
| logMiningUsernameIncludeList (oracle) | Comma-separated list of usernames to include in the LogMiner query. | | String
| maxBatchSize (oracle) | Maximum size of each batch of source records. Defaults to 2048. | 2048 | int
| maxQueueSize (oracle) | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. | 8192 | int
| maxQueueSizeInBytes (oracle) | Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, meaning the feature is not enabled. | 0 | long
| messageKeyColumns (oracle) | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as the message key. Each expression must match the pattern '<fully-qualified table name>:<key columns>', where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration, the table’s primary key column(s) will be used as the message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id. | | String
| notificationEnabledChannels (oracle) | List of notification channel names that are enabled. | | String
| notificationSinkTopicName (oracle) | The name of the topic for the notifications. This is required if 'sink' is in the list of enabled channels. | | String
| pollIntervalMs (oracle) | Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms. | 500ms | long
| provideTransactionMetadata (oracle) | Enables transaction metadata extraction together with event counting. | false | boolean
| queryFetchSize (oracle) | The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size. | 10000 | int
| racNodes (oracle) | A comma-separated list of RAC node hostnames or IP addresses. | | String
| retriableRestartConnectorWaitMs (oracle) | Time to wait before restarting the connector after a retriable exception occurs. Defaults to 10000ms. | 10s | long
| schemaHistoryInternal (oracle) | The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string. | io.debezium.storage.kafka.history.KafkaSchemaHistory | String
| schemaHistoryInternalFileFilename (oracle) | The path to the file that will be used to record the database schema history. | | String
| schemaHistoryInternalSkipUnparseableDdl (oracle) | Controls the action Debezium will take when it meets a DDL statement that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements it cannot parse. If skipping is enabled, Debezium may miss metadata changes. | false | boolean
| schemaHistoryInternalStoreOnlyCapturedDatabasesDdl (oracle) | Controls what DDL Debezium will store in the database schema history. By default (true), only DDL that manipulates a table from the captured schema/database will be stored. If set to false, Debezium will store all incoming DDL statements. | false | boolean
| schemaHistoryInternalStoreOnlyCapturedTablesDdl (oracle) | Controls what DDL Debezium will store in the database schema history. By default (false), Debezium will store all incoming DDL statements. If set to true, only DDL that manipulates a captured table will be stored. | false | boolean
| schemaNameAdjustmentMode (oracle) | Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro_unicode' replaces the underscore or characters that cannot be used in the Avro type name with the corresponding unicode escape like _uxxxx (note: _ is an escape sequence, like backslash in Java); 'none' does not apply any adjustment (default). | none | String
| signalDataCollection (oracle) | The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set. | | String
| signalEnabledChannels (oracle) | List of channel names that are enabled. The source channel is enabled by default. | source | String
| signalPollIntervalMs (oracle) | Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds. | 5s | long
| skippedOperations (oracle) | The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing is skipped. By default, only truncate operations will be skipped. | t | String
| snapshotDelayMs (oracle) | A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms. | 0ms | long
| snapshotEnhancePredicateScn (oracle) | A token to replace on the snapshot predicate template. | | String
| snapshotFetchSize (oracle) | The maximum number of records that should be loaded into memory while performing a snapshot. | | int
| snapshotIncludeCollectionList (oracle) | Must be set to specify the list of tables/collections whose snapshot must be taken on creating or restarting the connector. | | String
| snapshotLockingMode (oracle) | Controls how the connector holds locks on tables while performing the schema snapshot. The default is 'shared', which means the connector will hold a table lock that prevents exclusive table access for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this is done using a flashback query that requires no locks. However, in some cases it may be desirable to avoid locks entirely, which can be done by specifying 'none'. This mode is only safe to use if no schema changes are happening while the snapshot is taken. | shared | String
| snapshotLockTimeoutMs (oracle) | The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds. | 10s | long
| snapshotMaxThreads (oracle) | The maximum number of threads used to perform the snapshot. Defaults to 1. | 1 | int
| snapshotMode (oracle) | The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'always': the connector runs a snapshot every time that it starts; after the snapshot completes, the connector begins to stream changes from the redo logs. 'initial' (default): if the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables; after the snapshot completes, the connector begins to stream changes from the redo logs. 'initial_only': the connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the redo logs. 'schema_only': if the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data; after the snapshot completes, the connector begins to stream changes from the redo logs. 'schema_only_recovery': the connector performs a snapshot that captures only the database schema history, then transitions to streaming from the redo logs; use this setting to restore a corrupted or lost database schema history topic, but do not use it if the database schema was modified after the connector stopped. | initial | String
| snapshotSelectStatementOverrides (oracle) | This property contains a comma-separated list of fully-qualified tables, (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB_NAME.TABLE_NAME' or 'snapshot.select.statement.overrides.SCHEMA_NAME.TABLE_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted. | | String
| snapshotTablesOrderByRowCount (oracle) | Controls the order in which tables are processed in the initial snapshot. A 'descending' value orders the tables by row count descending. An 'ascending' value orders the tables by row count ascending. A value of 'disabled' (the default) disables ordering by row count. | disabled | String
| sourceinfoStructMaker (oracle) | The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct. | io.debezium.connector.oracle.OracleSourceInfoStructMaker | String
| tableExcludeList (oracle) | A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring. | | String
| tableIncludeList (oracle) | The tables for which changes are to be captured. | | String
| timePrecisionMode (oracle) | Time, date, and timestamps can be represented with different kinds of precision, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column’s precision; 'adaptive_time_microseconds' is like 'adaptive' mode, but TIME fields always use microsecond precision; 'connect' always represents time, date, and timestamp values using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision. | adaptive | String
| tombstonesOnDelete (oracle) | Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record has been deleted. | false | boolean
| topicNamingStrategy (oracle) | The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat events, etc. | io.debezium.schema.SchemaTopicNamingStrategy | String
| topicPrefix (oracle) | Required Topic prefix that identifies and provides a namespace for the particular database server/cluster from which Debezium is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. Only alphanumeric characters, hyphens, dots and underscores may be used. | | String
| unavailableValuePlaceholder (oracle) | Specify the constant that will be provided by Debezium to indicate that the original value is unavailable and not provided by the database. | __debezium_unavailable_value | String
|===
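The sketches below are illustrative, not definitive configurations. First, a minimal Java RouteBuilder showing how options from the table, including the additionalProperties. prefix, are set on a debezium-oracle endpoint URI; the hostname, credentials, topic prefix, and file path are assumed placeholder values:

[source,java]
----
import org.apache.camel.builder.RouteBuilder;

public class DebeziumOracleRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("debezium-oracle:myOracleConnector"
                + "?databaseHostname=localhost"
                + "&databasePort=1528"
                + "&databaseUser=dbzuser"
                + "&databasePassword=dbz"
                + "&databaseDbname=ORCLCDB"
                + "&topicPrefix=dbserver1"
                // Offsets go to a local file via the default FileOffsetBackingStore.
                + "&offsetStorageFileName=/tmp/oracle-offsets.dat"
                // Kafka Connect properties without a dedicated endpoint option
                // are passed through with the additionalProperties. prefix:
                + "&additionalProperties.schema.registry.url=http://localhost:8811/avro")
            .log("Received change event: ${body}");
    }
}
----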
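Next, a sketch of bridgeErrorHandler. With the option enabled, exceptions raised while the consumer polls are routed to the Camel error handler (here an onException clause) instead of being logged by the ExceptionHandler and ignored; the endpoint values are placeholders as before:

[source,java]
----
import org.apache.camel.builder.RouteBuilder;

public class BridgedErrorsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // With bridgeErrorHandler=true, consumer-side failures are
        // processed here as messages rather than only logged.
        onException(Exception.class)
            .handled(true)
            .log("Consumer failure handled by the route: ${exception.message}");

        from("debezium-oracle:myOracleConnector"
                + "?bridgeErrorHandler=true"
                + "&databaseHostname=localhost"
                + "&topicPrefix=dbserver1")
            .log("Received change event: ${body}");
    }
}
----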
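For offsetCommitPolicy, a sketch of a custom commit policy, assuming the io.debezium.engine.spi.OffsetCommitPolicy interface with its single performCommit(long, Duration) method; the option expects the fully-qualified class name:

[source,java]
----
import java.time.Duration;

import io.debezium.engine.spi.OffsetCommitPolicy;

// Commits offsets once 100 events have been processed or 30 seconds
// have elapsed since the last commit, whichever comes first.
public class BatchOrTimeoutCommitPolicy implements OffsetCommitPolicy {

    @Override
    public boolean performCommit(long numberOfMessagesSinceLastCommit,
                                 Duration timeSinceLastCommit) {
        return numberOfMessagesSinceLastCommit >= 100
                || timeSinceLastCommit.compareTo(Duration.ofSeconds(30)) >= 0;
    }
}
----

The class would then be referenced on the endpoint as offsetCommitPolicy=com.example.BatchOrTimeoutCommitPolicy (a hypothetical package name).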
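Finally, a sketch of Kafka-backed offset storage, combining offsetStorage with the offsetStorageTopic, offsetStoragePartitions, and offsetStorageReplicationFactor options. The bootstrap address, passed via additionalProperties. on the assumption that the store's Kafka worker settings are not exposed as endpoint options, is illustrative; the fragment belongs inside a RouteBuilder.configure() method as above:

[source,java]
----
from("debezium-oracle:myOracleConnector"
        + "?databaseHostname=localhost"
        + "&topicPrefix=dbserver1"
        // Store offsets in a Kafka topic instead of the default file store.
        + "&offsetStorage=org.apache.kafka.connect.storage.KafkaOffsetBackingStore"
        + "&offsetStorageTopic=debezium-oracle-offsets"
        + "&offsetStoragePartitions=1"
        + "&offsetStorageReplicationFactor=1"
        // Kafka Connect properties required by the store are passed
        // through the additionalProperties. prefix:
        + "&additionalProperties.bootstrap.servers=localhost:9092")
    .log("Received change event: ${body}");
----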