Migrating a PostgreSQL database from a Google Cloud Compute Engine VM to Google Cloud SQL for PostgreSQL

1) SSH into the GCP VM instance running PostgreSQL
2) Ensure there is enough space for the backup using the command: df -h
3) If you are creating the dump on the same instance, use the following commands
3a) sudo su
3b) /usr/lib/postgresql/10/bin/pg_dump -U database_admin --format=plain --no-owner --no-acl database_name | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > /pgdata/backup/database_name_data.sql

Note: the version-specific path above is only needed if multiple versions of Postgres are installed; otherwise plain pg_dump is enough
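The local-dump step can be sketched as a small script. The user, database name, and backup path are the placeholders from the article; the script only echoes the pg_dump pipeline (a dry run) so nothing touches a real database, and it demonstrates the sed filter on a sample line.

```shell
#!/usr/bin/env bash
# Sketch of step 3: dump locally, with EXTENSION statements commented out.
# database_admin / database_name / the backup path are placeholders.
set -euo pipefail

PG_DUMP=/usr/lib/postgresql/10/bin/pg_dump   # or just `pg_dump` if one version
DB_USER=database_admin
DB_NAME=database_name
OUT=/pgdata/backup/${DB_NAME}_data.sql

# The sed filter turns e.g. "CREATE EXTENSION ..." into "-- CREATE EXTENSION ..."
comment_extensions() {
  sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g'
}

# Dry run: print the pipeline instead of executing it.
echo "$PG_DUMP -U $DB_USER --format=plain --no-owner --no-acl $DB_NAME | sed ... > $OUT"

# The filter itself can be exercised locally:
echo 'CREATE EXTENSION IF NOT EXISTS plpgsql;' | comment_extensions
# prints: -- CREATE EXTENSION IF NOT EXISTS plpgsql;
```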
4) If you are creating the dump from another VM instance, do the following
4a) locate pg_hba.conf (replace the path in the next line with the pg_hba.conf location)
4b) sudo nano /etc/postgresql/10/main/pg_hba.conf
4c) Add the following line (replace ip with the VM instance IP): host all all ip/32 trust
4d) Restart PostgreSQL: sudo /etc/init.d/postgresql restart
4e) SSH into the other VM instance and run the following
a) Ensure there is enough space for the backup
b) pg_dump -h postgresql_instance_ip -U database_admin --format=plain --no-owner --no-acl database_name | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > /pgdata/backup/database_name_data.sql
c) Check that the file size is growing using the command ls -lh
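The remote-dump setup above can be sketched as follows. The client IP and pg_hba.conf path are placeholders; note that `trust` disables password checks for that host, so it is worth removing the line again once the dump is done. The script only echoes the commands (a dry run).

```shell
#!/usr/bin/env bash
# Sketch of step 4: allow the second VM in pg_hba.conf, restart, then dump
# over the network. CLIENT_IP and the pg_hba.conf path are placeholders.
set -euo pipefail

CLIENT_IP=10.128.0.5                      # IP of the VM that will run pg_dump
HBA=/etc/postgresql/10/main/pg_hba.conf   # from `locate pg_hba.conf`

# Line appended on the database server; `trust` skips password checks for
# this host, so remove it after the migration.
HBA_LINE="host all all ${CLIENT_IP}/32 trust"
echo "$HBA_LINE  # append to $HBA, then: sudo /etc/init.d/postgresql restart"

# Command run from the second VM (dry run):
echo "pg_dump -h server_ip -U database_admin --format=plain --no-owner --no-acl database_name"
```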

5) Once the backup is done, create a PostgreSQL Cloud SQL instance.
6) Create a database by going to GCP SQL instance > Databases
7) Create a user by going to GCP SQL instance > Users
8) Add the Compute Engine VM instance IP from which you are restoring the database under GCP SQL instance > Connections, so that VM is allowed to connect.
9) Next, SSH into the VM instance and run the following command: psql -h cloud_sql_instance_ip -U database_admin -d database_name -W < database_name_data.sql
10) You can check the database size using the command: select pg_size_pretty(pg_database_size('database_name'));
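Steps 9 and 10 can be sketched together. The Cloud SQL IP, user, and database name are placeholders; the script echoes the commands rather than running them, so it is safe to copy and adapt.

```shell
#!/usr/bin/env bash
# Sketch of steps 9-10: restore into Cloud SQL, then check the database size.
# CLOUD_SQL_IP / database_admin / database_name are placeholders.
set -euo pipefail

CLOUD_SQL_IP=10.11.12.13    # Cloud SQL instance IP (placeholder)
DB_USER=database_admin
DB_NAME=database_name
DUMP=/pgdata/backup/${DB_NAME}_data.sql

# -W forces a password prompt; the dump is fed on stdin (dry run).
echo "psql -h $CLOUD_SQL_IP -U $DB_USER -d $DB_NAME -W < $DUMP"

# After the restore, check the size on the Cloud SQL side:
SIZE_SQL="SELECT pg_size_pretty(pg_database_size('$DB_NAME'));"
echo "psql -h $CLOUD_SQL_IP -U $DB_USER -d $DB_NAME -c \"$SIZE_SQL\""
```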

Another way to import is to copy the dump to a bucket and then import it using the Cloud SQL import option

a) To do the following you will need Google Cloud Storage Read/Write permissions
b) gsutil ls (lists the buckets that are accessible)
c) Copy the file to the bucket: gsutil cp database_name_data.sql.gz gs://uat-import
d) If you do not have Read/Write permission, you will get the following error:
Copying file://database_name_data.sql.gz [Content-Type=application/x-sql]...
==> NOTE: You are uploading one or more large file(s), which would run
significantly faster if you enable parallel composite uploads. This
feature can be enabled by editing the
"parallel_composite_upload_threshold" value in your .boto
configuration file. However, note that if you do this large files will
be uploaded as `composite objects
<https://cloud.google.com/storage/docs/composite-objects>`_,which
means that any user who downloads such objects will need to have a
compiled crcmod installed (see "gsutil help crcmod"). This is because
without a compiled crcmod, computing checksums on composite objects is
so slow that gsutil disables downloads of composite objects.

ResumableUploadAbortException: 403 Insufficient Permission
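The bucket route can also be driven entirely from the CLI. The instance name below is hypothetical, while the bucket and file names come from the article; the script echoes the commands (a dry run). `gcloud sql import sql` triggers the same import that the console's import option does.

```shell
#!/usr/bin/env bash
# Sketch of the bucket route: copy the dump to GCS, then import it into
# Cloud SQL. my-cloudsql-instance is a hypothetical instance name.
set -euo pipefail

BUCKET=gs://uat-import          # bucket from the article
DUMP=database_name_data.sql.gz
INSTANCE=my-cloudsql-instance   # placeholder Cloud SQL instance name

echo "gsutil cp $DUMP $BUCKET/"
# CLI equivalent of the console's import option:
echo "gcloud sql import sql $INSTANCE $BUCKET/$DUMP --database=database_name"
```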

e) NOTE: If you come across the "ERROR: temporary file size exceeds temp_file_limit" error during the import:
Add the "temp_file_limit" flag to the Cloud SQL instance in the Google Cloud SQL console
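The flag can also be set from the CLI. The instance name and limit value (in kB) below are placeholders; note that `--database-flags` replaces the instance's full flag list, so include any flags already set. The command is echoed here as a dry run.

```shell
# Sketch: set temp_file_limit from the CLI instead of the console.
# my-cloudsql-instance and the value (kB) are placeholders; --database-flags
# overwrites all existing flags, so list them all.
echo "gcloud sql instances patch my-cloudsql-instance --database-flags=temp_file_limit=2147483647"
```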

Important queries to check for mismatches between the exported database copy and the restored database

SELECT pg_size_pretty(sum(pg_relation_size(pg_class.oid))::bigint), nspname,
CASE pg_class.relkind WHEN 'r' THEN 'table' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 'v' THEN 'view' WHEN 't' THEN 'toast' ELSE pg_class.relkind::text END
FROM pg_class
LEFT OUTER JOIN pg_namespace ON (pg_namespace.oid = pg_class.relnamespace)
GROUP BY pg_class.relkind, nspname
ORDER BY sum(pg_relation_size(pg_class.oid)) DESC;

SELECT pg_size_pretty(pg_relation_size(pg_class.oid)), pg_class.relname, pg_namespace.nspname,
CASE pg_class.relkind WHEN 'r' THEN 'table' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 'v' THEN 'view' WHEN 't' THEN 'TOAST' ELSE pg_class.relkind::text END
FROM pg_class
LEFT OUTER JOIN pg_namespace ON (pg_namespace.oid = pg_class.relnamespace)
ORDER BY pg_relation_size(pg_class.oid) DESC;
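A simple way to use the queries above is to save them to a file, run that file against both the source VM and the Cloud SQL instance, and diff the results. The hostnames, user, database name, and size_queries.sql file name are placeholders; the commands are echoed as a dry run.

```shell
#!/usr/bin/env bash
# Sketch: run the same size queries on source and target and compare.
# Hostnames, user, database name, and file name are placeholders.
set -euo pipefail

echo "psql -h source_vm_ip -U database_admin -d database_name -f size_queries.sql > source.txt"
echo "psql -h cloud_sql_ip -U database_admin -d database_name -f size_queries.sql > target.txt"
echo "diff source.txt target.txt"
```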

Issues Encountered:
1) pg_dump: aborting because of server version mismatch

If you encounter this error, it means you have multiple versions installed, and you need to use the correct path:

Find the location of the postgresql version
a) find / -name pg_dump -type f 2>/dev/null

you will get output like below:
/usr/lib/postgresql/9.5/bin/pg_dump
/usr/lib/postgresql/10/bin/pg_dump

b) You can stop one version from auto-starting by replacing auto with manual in the /etc/postgresql/9.5/main/start.conf file
c) Restart: sudo /etc/init.d/postgresql restart
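The version check can be sketched as a loop over the paths that `find` reported, printing each binary's version so you can pick the one matching the server; the explicit pg_dump call at the end is echoed as a dry run.

```shell
#!/usr/bin/env bash
# Sketch: list installed pg_dump binaries and their versions to avoid the
# "server version mismatch" abort. Paths follow the Debian/Ubuntu layout.
set -euo pipefail

for bin in /usr/lib/postgresql/*/bin/pg_dump; do
  if [ -x "$bin" ]; then
    echo "$bin -> $("$bin" --version)"
  fi
done

# Then call the matching one explicitly, e.g. (dry run):
echo "/usr/lib/postgresql/10/bin/pg_dump -U database_admin database_name"
```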
