Migration Risk Reference
Pre-Migration Analysis generates a report that lists migration risks in your source database. This reference page lists all of the migration risks Relational Migrator may detect on supported databases.
Risk Categories
Migration risks are categorized as one of the following:
Data Type: Data types that can lose precision or data when migrated to MongoDB.
Schema: Database or table configuration that makes mapping the source database schema to MongoDB difficult.
Unsupported Feature: Features of the source database that aren't supported in MongoDB.
Performance: Database or table configuration that may cause performance issues when migrating data into MongoDB.
Risk Reference
MySQL
Name | Type | Category | Difficulty | Report Message | Mitigation |
---|---|---|---|---|---|
Geospatial Data | Table | Data Type | High | The table contains columns which require special handling: <columns>. | You are storing geospatial data in your database. These will be converted to objects with a |
Blob Types | Table | Data Type | Medium | The table contains columns which could exceed the 16MB limit. | If you are storing more than 16MB in a record, the migration will fail because MongoDB documents cannot exceed 16MB. We strongly advise against storing large blobs in MongoDB, but if multi-document transactions aren't required, you can use GridFS. |
Numeric precision (specified) | Table | Data Type | Medium | The following columns are at risk of data loss due to decimal precision: <columns>. | The specified columns have been configured to support more decimal precision than a Decimal128 field supports. During migration, these values will be rounded to 34 significant figures. |
Numeric precision (unspecified) | Table | Data Type | Medium | The following columns may be at risk of data loss due to decimal precision: <columns>. | The specified columns use variable decimal precision and may contain values with more decimal precision than a Decimal128 field supports. During migration, these values will be rounded to 34 significant figures. |
Auto-Incrementing Columns | Table | Schema | High | Table <name> has an auto-incrementing column. | MongoDB encourages the use of ObjectId for ID fields because incrementing IDs are difficult to shard. MongoDB Relational Migrator can migrate your keys as-is, but you will need to write code to maintain this behavior. If you're using MongoDB Atlas, you can use Atlas Triggers to auto-increment your IDs. |
No foreign keys found | Database | Schema | Medium | The <name> database has no foreign keys. | This makes schema mapping more complicated because Relational Migrator cannot infer the relationships between your tables. You can use the synthetic foreign keys feature to define logical relationships between your tables. |
Views | Database | Schema | Medium | There were views detected in <database>. | Views are supported in MongoDB, but they must be converted into MQL. Consider using Query Converter to migrate your views. |
Triggers | Database | Unsupported Feature | High | The <name> database has triggers. | MongoDB has no native way to implement triggers. If you're using Atlas, consider using Query Converter to convert your existing triggers to Atlas Triggers and replicate the existing behavior. |
Routines | Database | Unsupported Feature | Medium | There were routines detected in <database>. | MongoDB has no native way to represent routines. Consider using Query Converter to migrate your routines to application code. |
Large single table | Database | Performance | Medium | The total data size of the selected tables is greater than <limit> GB, at 100 GB. | Larger data migration jobs can require careful planning to maximize performance and reliability. The deployment considerations topic in the documentation provides advice that can help. For jobs that may run over multiple days, consider using the Kafka deployment model. |
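The Decimal128 rounding described in the numeric precision rows can be previewed with Python's standard-library `decimal` module: a context with 34-digit precision matches the significand of IEEE 754-2008 decimal128, which BSON's Decimal128 type implements. The 40-digit sample value below is illustrative, not taken from any report.

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

# Decimal128 stores at most 34 significant decimal digits (IEEE 754-2008).
DECIMAL128_CTX = Context(prec=34, rounding=ROUND_HALF_EVEN)

def preview_decimal128(value: str) -> Decimal:
    """Round a source value the way a Decimal128 field would store it."""
    return DECIMAL128_CTX.create_decimal(value)

# A 40-significant-digit value is rounded to 34 significant figures,
# losing its trailing digits.
original = "1.234567890123456789012345678901234567890"
stored = preview_decimal128(original)
```

Values that already fit within 34 significant figures pass through unchanged, which is why only columns configured (or able) to exceed that precision appear in the report.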
Oracle
Name | Type | Category | Difficulty | Report Message | Mitigation |
---|---|---|---|---|---|
Geospatial Data | Table | Data Type | High | The table contains columns which require special handling: <columns>. | You are storing geospatial data in your database. These will be converted to objects with a |
Blob Types | Table | Data Type | Medium | The table contains columns which could exceed the 16MB limit. | If you are storing more than 16MB in a record, the migration will fail because MongoDB documents cannot exceed 16MB. We strongly advise against storing large blobs in MongoDB, but if multi-document transactions aren't required, you can use GridFS. |
Numeric precision (specified) | Table | Data Type | Medium | The following columns are at risk of data loss due to decimal precision: <columns>. | The specified columns have been configured to support more decimal precision than a Decimal128 field supports. During migration, these values will be rounded to 34 significant figures. |
Numeric precision (unspecified) | Table | Data Type | Medium | The following columns may be at risk of data loss due to decimal precision: <columns>. | The specified columns use variable decimal precision and may contain values with more decimal precision than a Decimal128 field supports. During migration, these values will be rounded to 34 significant figures. |
Timezones on dates | Table | Data Type | Medium | Detected columns with <type> type. MongoDB stores times in UTC by default, and Relational Migrator may convert any local time representations into this form or to a direct string. Affected columns: <columns>. | MongoDB does not support storing timezones with time data. Consider converting to the desired timezone in your application or storing the value as a string. |
File on-disk | Table | Data Type | Medium | Detected columns with type <type>. Migration to MongoDB is not currently supported for this type. Affected columns: <columns>. | |
Unsupported Oracle Types | Table | Data Type | Medium | Detected columns with type <type>. Migration to MongoDB is not currently supported for this type. Affected columns: <columns>. | The report groups all columns of a given unsupported type into a single item. |
Auto-Incrementing Columns | Table | Schema | High | Table <name> has an auto-incrementing column. | MongoDB encourages the use of ObjectId for ID fields because incrementing IDs are difficult to shard. MongoDB Relational Migrator can migrate your keys as-is, but you will need to write code to maintain this behavior. If you're using MongoDB Atlas, you can use Atlas Triggers to auto-increment your IDs. |
No foreign keys found | Database | Schema | Medium | The <name> database has no foreign keys. | This makes schema mapping more complicated because Relational Migrator cannot infer the relationships between your tables. You can use the synthetic foreign keys feature to define logical relationships between your tables. |
Views | Database | Schema | Medium | There were views detected in <database>. | Views are supported in MongoDB, but they must be converted into MQL. Consider using Query Converter to migrate your views. |
Triggers | Database | Unsupported Feature | High | The <name> database has triggers. | MongoDB has no native way to implement triggers. If you're using Atlas, consider using Query Converter to convert your existing triggers to Atlas Triggers and replicate the existing behavior. |
Routines | Database | Unsupported Feature | Medium | There were routines detected in <database>. | MongoDB has no native way to represent routines. Consider using Query Converter to migrate your routines to application code. |
Oracle Packages | Database | Unsupported Feature | Medium | A package is a schema object that groups logically related PL/SQL types, variables, constants, subprograms, cursors, and exceptions. A package is compiled and stored in the database, where many applications can share its contents. | MongoDB does not have a feature equivalent to Oracle packages. Consider using Query Converter to migrate package contents to application code. |
Large single table | Database | Performance | Medium | The total data size of the selected tables is greater than <limit> GB, at 100 GB. | Larger data migration jobs can require careful planning to maximize performance and reliability. The deployment considerations topic in the documentation provides advice that can help. For jobs that may run over multiple days, consider using the Kafka deployment model. |
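The auto-incrementing mitigation above (writing code to maintain the behavior) is commonly implemented with a counters collection and an atomic find-one-and-update. This is a sketch, not Relational Migrator output: the collection name `counters`, the field `seq`, and the sequence name `"invoices"` are all illustrative conventions, and the live call is shown as a comment because it requires a running MongoDB deployment.

```python
# Sketch of the "counters collection" pattern for preserving
# auto-incrementing IDs after migration.

def next_sequence_spec(sequence_name: str) -> tuple[dict, dict]:
    """Build the filter and atomic update for the next value of a sequence."""
    query = {"_id": sequence_name}   # one counter document per sequence
    update = {"$inc": {"seq": 1}}    # atomic server-side increment
    return query, update

# With pymongo and a live deployment this becomes:
#   from pymongo import ReturnDocument
#   doc = db.counters.find_one_and_update(
#       *next_sequence_spec("invoices"),
#       upsert=True,                         # create the counter on first use
#       return_document=ReturnDocument.AFTER,
#   )
#   new_id = doc["seq"]
```

Because the increment happens server-side in a single operation, concurrent writers never receive the same value, which is the property an `AUTO_INCREMENT` or `IDENTITY` column provided in the source database.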
PostgreSQL
Name | Type | Category | Difficulty | Report Message | Mitigation |
---|---|---|---|---|---|
Geospatial Data | Table | Data Type | High | The table contains columns which require special handling: <columns>. | You are storing geospatial data in your database. These will be converted to objects with a |
Blob Types | Table | Data Type | Medium | The table contains columns which could exceed the 16MB limit. | If you are storing more than 16MB in a record, the migration will fail because MongoDB documents cannot exceed 16MB. We strongly advise against storing large blobs in MongoDB, but if multi-document transactions aren't required, you can use GridFS. |
Numeric precision (specified) | Table | Data Type | Medium | The following columns are at risk of data loss due to decimal precision: <columns>. | The specified columns have been configured to support more decimal precision than a Decimal128 field supports. During migration, these values will be rounded to 34 significant figures. |
Numeric precision (unspecified) | Table | Data Type | Medium | The following columns may be at risk of data loss due to decimal precision: <columns>. | The specified columns use variable decimal precision and may contain values with more decimal precision than a Decimal128 field supports. During migration, these values will be rounded to 34 significant figures. |
Timezones on dates | Table | Data Type | Medium | Detected columns with <type> type. MongoDB stores times in UTC by default, and Relational Migrator may convert any local time representations into this form or to a direct string. Affected columns: <columns>. | MongoDB does not support storing timezones with time data. Consider converting to the desired timezone in your application or storing the value as a string. |
Auto-Incrementing Columns | Table | Schema | High | Table <name> has an auto-incrementing column. | MongoDB encourages the use of ObjectId for ID fields because incrementing IDs are difficult to shard. MongoDB Relational Migrator can migrate your keys as-is, but you will need to write code to maintain this behavior. If you're using MongoDB Atlas, you can use Atlas Triggers to auto-increment your IDs. |
No foreign keys found | Database | Schema | Medium | The <name> database has no foreign keys. | This makes schema mapping more complicated because Relational Migrator cannot infer the relationships between your tables. You can use the synthetic foreign keys feature to define logical relationships between your tables. |
Views | Database | Schema | Medium | There were views detected in <database>. | Views are supported in MongoDB, but they must be converted into MQL. Consider using Query Converter to migrate your views. |
Materialized Views | Database | Unsupported Feature | Medium | There were materialized views detected in <database>. | MongoDB supports On-Demand Materialized Views. You can either schedule their generation or use Atlas Triggers and $merge to maintain them. |
Triggers | Database | Unsupported Feature | High | The <name> database has triggers. | MongoDB has no native way to implement triggers. If you're using Atlas, consider using Query Converter to convert your existing triggers to Atlas Triggers and replicate the existing behavior. |
Routines | Database | Unsupported Feature | Medium | There were routines detected in <database>. | MongoDB has no native way to represent routines. Consider using Query Converter to migrate your routines to application code. |
Large single table | Database | Performance | Medium | The total data size of the selected tables is greater than <limit> GB, at 100 GB. | Larger data migration jobs can require careful planning to maximize performance and reliability. The deployment considerations topic in the documentation provides advice that can help. For jobs that may run over multiple days, consider using the Kafka deployment model. |
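The Materialized Views mitigation above (scheduled regeneration via $merge) amounts to running an aggregation that writes its results into the view's collection. The sketch below shows the shape of such a pipeline as a Python data structure; the source collection `orders`, the target `monthly_sales`, and the grouping fields are hypothetical. With pymongo, the refresh would be `db.orders.aggregate(refresh_pipeline)`, run on a schedule or from an Atlas Trigger.

```python
# Hypothetical refresh for an on-demand materialized view: aggregate the
# source collection, then $merge the results into the view's collection.
refresh_pipeline = [
    # Roll the source data up to one document per month (illustrative).
    {"$group": {"_id": "$month", "total": {"$sum": "$amount"}}},
    # Upsert the aggregated documents into the materialized-view collection.
    {"$merge": {
        "into": "monthly_sales",
        "whenMatched": "replace",
        "whenNotMatched": "insert",
    }},
]
# With a live deployment: db.orders.aggregate(refresh_pipeline)
```

Unlike a PostgreSQL `REFRESH MATERIALIZED VIEW`, $merge updates the target incrementally per group rather than rebuilding the whole collection, so readers of `monthly_sales` are never blocked by the refresh.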
SQL Server
Name | Type | Category | Difficulty | Report Message | Mitigation |
---|---|---|---|---|---|
Geospatial Data | Table | Data Type | High | The table contains columns which require special handling: <columns>. | You are storing geospatial data in your database. These will be converted to objects with a |
Blob Types | Table | Data Type | Medium | The table contains columns which could exceed the 16MB limit. | If you are storing more than 16MB in a record, the migration will fail because MongoDB documents cannot exceed 16MB. We strongly advise against storing large blobs in MongoDB, but if multi-document transactions aren't required, you can use GridFS. |
Numeric precision (specified) | Table | Data Type | Medium | The following columns are at risk of data loss due to decimal precision: <columns>. | The specified columns have been configured to support more decimal precision than a Decimal128 field supports. During migration, these values will be rounded to 34 significant figures. |
Numeric precision (unspecified) | Table | Data Type | Medium | The following columns may be at risk of data loss due to decimal precision: <columns>. | The specified columns use variable decimal precision and may contain values with more decimal precision than a Decimal128 field supports. During migration, these values will be rounded to 34 significant figures. |
Timezones on dates | Table | Data Type | Medium | Detected columns with <type> type. MongoDB stores times in UTC by default, and Relational Migrator may convert any local time representations into this form or to a direct string. Affected columns: <columns>. | MongoDB does not support storing timezones with time data. Consider converting to the desired timezone in your application or storing the value as a string. |
File on-disk | Table | Data Type | Medium | Detected columns with type <type>. Migration to MongoDB is not currently supported for this type. Affected columns: <columns>. | |
Unsupported SQL Server Types | Table | Data Type | Medium | Detected columns with type <type>. Migration to MongoDB is not currently supported for this type. Affected columns: <columns>. | The report groups all columns of a given unsupported type into a single item. |
Auto-Incrementing Columns | Table | Schema | High | Table <name> has an auto-incrementing column. | MongoDB encourages the use of ObjectId for ID fields because incrementing IDs are difficult to shard. MongoDB Relational Migrator can migrate your keys as-is, but you will need to write code to maintain this behavior. If you're using MongoDB Atlas, you can use Atlas Triggers to auto-increment your IDs. |
No foreign keys found | Database | Schema | Medium | The <name> database has no foreign keys. | This makes schema mapping more complicated because Relational Migrator cannot infer the relationships between your tables. You can use the synthetic foreign keys feature to define logical relationships between your tables. |
Views | Database | Schema | Medium | There were views detected in <database>. | Views are supported in MongoDB, but they must be converted into MQL. Consider using Query Converter to migrate your views. |
Triggers | Database | Unsupported Feature | High | The <name> database has triggers. | MongoDB has no native way to implement triggers. If you're using Atlas, consider using Query Converter to convert your existing triggers to Atlas Triggers and replicate the existing behavior. |
Routines | Database | Unsupported Feature | Medium | There were routines detected in <database>. | MongoDB has no native way to represent routines. Consider using Query Converter to migrate your routines to application code. |
Large single table | Database | Performance | Medium | The total data size of the selected tables is greater than <limit> GB, at 100 GB. | Larger data migration jobs can require careful planning to maximize performance and reliability. The deployment considerations topic in the documentation provides advice that can help. For jobs that may run over multiple days, consider using the Kafka deployment model. |
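Because MongoDB datetimes carry no timezone (UTC by convention), one way to handle the "Timezones on dates" risk is to normalize values to UTC in application code before writing, as the mitigation suggests. A minimal sketch using only the Python standard library; the sample timestamp and offset are illustrative:

```python
from datetime import datetime, timezone, timedelta

def to_utc(local_dt: datetime) -> datetime:
    """Normalize a timezone-aware datetime to UTC before storing in MongoDB."""
    if local_dt.tzinfo is None:
        # A naive datetime is ambiguous; refuse rather than silently guess.
        raise ValueError("refusing to guess a timezone for a naive datetime")
    return local_dt.astimezone(timezone.utc)

# A 09:00 timestamp at UTC+2 becomes 07:00 UTC.
cet_summer = timezone(timedelta(hours=2))
stamp = to_utc(datetime(2024, 6, 1, 9, 0, tzinfo=cet_summer))
```

If the original offset itself matters to the application (not just the instant in time), store it alongside the UTC value in a separate field, or keep the full representation as a string, as the table suggests.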