Hi All,
I am using the MongoDB Spark Connector to store data into MongoDB, with the option "spark.mongodb.write.convertJson" set to "true". Our DataFrame has some attributes that contain both numeric and alphanumeric values. When the data gets stored into the collection, we are seeing mixed types: some values are stored as numbers and some as strings. It looks like, depending on the number of partitions in the DataFrame, if all the data within a partition is numeric, the connector may be coercing the type to Integer/Long, even though within the Spark DataFrame we convert the data to String.
How can we override this default behavior? We want the connector not to apply the convertJson behavior or try to infer the data type.
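For reference, here is a sketch of what I have tried (assuming Spark Connector v10+; the column name `code`, database `mydb`, and collection `mycoll` are placeholders). The idea is to cast the column to string explicitly and set `convertJson` to `false` so the connector writes string values as strings instead of parsing them:

```python
from pyspark.sql import functions as F

# Assumption: a SparkSession `spark` and DataFrame `df` already exist,
# with the MongoDB Spark Connector v10+ on the classpath.

# Cast the mixed-type column to string before writing.
df = df.withColumn("code", F.col("code").cast("string"))

(df.write
   .format("mongodb")
   .mode("append")
   .option("spark.mongodb.write.connection.uri", "mongodb://localhost:27017")
   .option("spark.mongodb.write.database", "mydb")
   .option("spark.mongodb.write.collection", "mycoll")
   # "false" disables JSON conversion, so strings are stored as strings.
   .option("convertJson", "false")
   .save())
```

Is setting `convertJson` to `false` the right way to disable this, or is there another option we should use?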
Thanks
Sateesh