Maintaining Casing in Database Replication for Postgres and Snowflake

Summary

When replicating a Postgres database to Snowflake or to another Postgres database, case sensitivity in identifiers can break legacy scripts. Tables and columns that use quoted, case-sensitive identifiers in the source are recreated in the destination as unquoted identifiers with folded casing, so the resulting DDL loses the original names. The user is looking for a way to preserve identifier casing in the replica so existing scripts keep working.


Question

Hi all, I’m trying to replicate a Postgres database (source) in my org to another destination (I have tried Postgres and Snowflake). We want to run analytics queries on the replica, and add tables as needed, without putting that load on the production database. We also have many existing scripts that currently run against production and that we would like to reuse on the replica.

The problem I am having is that our source database uses case-sensitive quoted identifiers for all tables and columns. In both destinations I have tried, the tables and columns are recreated with unquoted identifiers, which get upcased in Snowflake and downcased in Postgres. The resulting DDL loses the original casing and breaks our legacy scripts. Is there a way to perform the connection and maintain casing in my replica?
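To illustrate what I mean, here is a minimal sketch of the folding behavior (the table and column names are made up, not our real schema). Unquoted identifiers are folded to lowercase by Postgres and to uppercase by Snowflake, while quoted identifiers keep their exact case, so a quoted reference from a legacy script no longer matches the replicated object:

    -- Source (Postgres): created with quoted identifiers, so case is preserved
    CREATE TABLE "CustomerOrders" ("OrderId" integer);
    SELECT "OrderId" FROM "CustomerOrders";      -- works on the source

    -- If the replication tool recreates the table WITHOUT quotes:
    CREATE TABLE CustomerOrders (OrderId integer);
    -- Postgres stores it as customerorders(orderid)
    -- Snowflake stores it as CUSTOMERORDERS(ORDERID)

    -- Legacy scripts that quote the original names then fail on the replica:
    SELECT "OrderId" FROM "CustomerOrders";      -- relation/column not found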



This topic has been created from a Slack thread to give it more visibility. It is in read-only mode here.


["database-replication", "postgres", "snowflake", "case-sensitivity", "identifiers", "legacy-scripts"]