Local backup - PostgreSQL, CouchDB, BlobDB

I'm a little confused about how the backup is run.
I'm going by this document: https://dimagi.github.io/commcare-cloud/commcare-cloud/backup.html

I have added the following configuration to my public.yml:

backup_blobdb: True
blobdb_backup_dir: /myData/backup/blobdb
blobdb_backup_days: 7
blobdb_backup_weeks: 4
blobdb_s3: False

backup_postgres: Plain
postgresql_backup_dir: /myData/backup/postgresql
postgres_backup_days: 7
postgres_backup_weeks: 4
postgres_s3: False

backup_couch: True
couch_backup_dir: /myData/backup/couchdb
couchdb_backup_days: 7
couchdb_backup_weeks: 4
couch_s3: False

backup_es_s3: False

Basically I am expecting it to back up to /myData/backup/xxxxx for each of couchdb, blob storage, and postgresql. So far (after more than 24 hours) I'm not seeing any backups other than couchdb.

When I deploy the changes to my public.yml with

commcare-cloud deploy-stack --tags=backups

I get the following output (note the skipped steps relating to the postgres and blobdb backups):

PLAY [update apt cache] *****************************************************************************************************************
PLAY [Create ebsnvme mapping] *****************************************************************************************************************
PLAY [Create host group aliases] *****************************************************************************************************************
TASK [Create all_commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)

TASK [Create commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)

PLAY [Bootstrap Machine] *****************************************************************************************************************
PLAY [Common] ***************************************************************************************************************** [WARNING]: Could not match supplied host pattern, ignoring: cas_proxy

[WARNING]: Could not match supplied host pattern, ignoring: pna_proxy

[WARNING]: Could not match supplied host pattern, ignoring: reach_proxy

PLAY [ufw (firewall)] ***************************************************************************************************************** skipping: no hosts matched

PLAY [ufw (proxy firewall)] *****************************************************************************************************************
PLAY [ufw off (firewall)] *****************************************************************************************************************
PLAY [Datadog agent] *****************************************************************************************************************
PLAY [Configure monit] *****************************************************************************************************************
PLAY [Configure static routes] *****************************************************************************************************************
PLAY [DNS configuration] ***************************************************************************************************************** [WARNING]: Could not match supplied host pattern, ignoring: lvm

PLAY [LVM] ***************************************************************************************************************** skipping: no hosts matched

PLAY [Common Database Machine Setup] *****************************************************************************************************************
TASK [backups : Create aws config directory for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Set up ~/.aws/credentials for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Set up ~/.aws/config for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create blobdb backup dir] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy blobdb backup script] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Daily Cron job] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Weekly Cron job] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy s3 upload script] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy check_s3_backup script to /usr/local/bin] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Daily Check Backups Cron job] ***************************************************************************************************************** [WARNING]: The value 0 (type int) in a string field was converted to u'0' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.

ok: [197.211.200.100]

PLAY [Create host group aliases] *****************************************************************************************************************
TASK [Create all_commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)

TASK [Create commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
[WARNING]: Could not match supplied host pattern, ignoring: citusdb

PLAY [PostgreSQL Machine Setup] *****************************************************************************************************************
TASK [backups : Create aws config directory for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Set up ~/.aws/credentials for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Set up ~/.aws/config for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create blobdb backup dir] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy blobdb backup script] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Daily Cron job] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Weekly Cron job] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy s3 upload script] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy check_s3_backup script to /usr/local/bin] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Daily Check Backups Cron job] ***************************************************************************************************************** ok: [197.211.200.100]

PLAY [PostgreSQL] *****************************************************************************************************************
PLAY [pgbouncer] *****************************************************************************************************************
PLAY [Remote PostgreSQL (e.g. Amazon RDS)] *****************************************************************************************************************
PLAY [PostgreSQL Backup] *****************************************************************************************************************
TASK [pg_backup : Add PosgreSQL apt repo] ***************************************************************************************************************** ok: [197.211.200.100]

TASK [pg_backup : Add PosgreSQL apt key] ***************************************************************************************************************** ok: [197.211.200.100]

TASK [pg_backup : Update package list] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [pg_backup : Install PostgreSQL server] ***************************************************************************************************************** [WARNING]: Could not find aptitude. Using apt-get instead

ok: [197.211.200.100]

TASK [pg_backup : Install PostgreSQL client] ***************************************************************************************************************** ok: [197.211.200.100]

TASK [pg_backup : create backup directory] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [pg_backup : include_tasks] ***************************************************************************************************************** skipping: [197.211.200.100]

PLAY [Setup auth for standby] *****************************************************************************************************************
PLAY [Disable THP] *****************************************************************************************************************
PLAY [Create host group aliases] *****************************************************************************************************************
TASK [Create all_commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)

TASK [Create commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)

PLAY [CitusDB Machine Setup] ***************************************************************************************************************** skipping: no hosts matched

PLAY [CitusDB] ***************************************************************************************************************** skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: citusdb_master

PLAY [pgbouncer] ***************************************************************************************************************** skipping: no hosts matched

PLAY [Disable THP] ***************************************************************************************************************** skipping: no hosts matched

PLAY [Common Database Machine Setup] *****************************************************************************************************************
TASK [backups : Create aws config directory for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Set up ~/.aws/credentials for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Set up ~/.aws/config for root] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create blobdb backup dir] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy blobdb backup script] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Daily Cron job] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Weekly Cron job] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy s3 upload script] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Copy check_s3_backup script to /usr/local/bin] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Create Daily Check Backups Cron job] ***************************************************************************************************************** ok: [197.211.200.100]

PLAY [Couchdb 2.0] *****************************************************************************************************************
TASK [couchdb2 : Create couch backup dir] ***************************************************************************************************************** --- before
+++ after
@@ -1,7 +1,7 @@
 {
-    "group": 0,
-    "mode": "0755",
-    "owner": 0,
+    "group": 1005,
+    "mode": "0700",
+    "owner": 1004,
     "path": "/myData/backup/couchdb",
-    "state": "absent"
+    "state": "directory"
 }

changed: [197.211.200.100]

TASK [couchdb2 : Copy couch backup script] ***************************************************************************************************************** ok: [197.211.200.100]

TASK [couchdb2 : Copy couch restore script] ***************************************************************************************************************** ok: [197.211.200.100]

TASK [couchdb2 : Create Daily Cron job (cleanup blobdb)] ***************************************************************************************************************** ok: [197.211.200.100]

TASK [couchdb2 : Create Weekly Cron job (cleanup blobdb)] ***************************************************************************************************************** ok: [197.211.200.100]

TASK [couchdb2 : Create Daily Cron job] ***************************************************************************************************************** --- before: /etc/cron.d/backup_couch
+++ after: /etc/cron.d/backup_couch
@@ -0,0 +1,2 @@
+#Ansible: Backup couchdb daily
+0 0 * * 1,2,3,4,5,6 couchdb /usr/local/sbin/create_couchdb_backup.sh daily 7

changed: [197.211.200.100]

TASK [couchdb2 : Create Weekly Cron job] ***************************************************************************************************************** --- before: /etc/cron.d/backup_couch
+++ after: /etc/cron.d/backup_couch
@@ -1,2 +1,4 @@
#Ansible: Backup couchdb daily
0 0 * * 1,2,3,4,5,6 couchdb /usr/local/sbin/create_couchdb_backup.sh daily 7
+#Ansible: Backup coudhdb weekly
+0 0 * * 0 couchdb /usr/local/sbin/create_couchdb_backup.sh weekly 28

changed: [197.211.200.100]

TASK [backups : Create aws config directory for couchdb] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Set up ~/.aws/credentials for couchdb] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [backups : Set up ~/.aws/config for couchdb] ***************************************************************************************************************** skipping: [197.211.200.100]

PLAY [Move logrotate to hourly] *****************************************************************************************************************
PLAY [Couchdb2 log rolling configurations] *****************************************************************************************************************
PLAY [Couchdb2 proxy] *****************************************************************************************************************
PLAY [deploy keepalived] *****************************************************************************************************************
PLAY [Redis] ***************************************************************************************************************** [WARNING]: flush_handlers task does not support when conditional

PLAY [Elasticsearch] *****************************************************************************************************************
TASK [elasticsearch : Create initial snapshot] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [elasticsearch : Create initial snapshot] ***************************************************************************************************************** skipping: [197.211.200.100]

TASK [elasticsearch : Remove old backup script] ***************************************************************************************************************** ok: [197.211.200.100]

TASK [elasticsearch : Copy es backup script] ***************************************************************************************************************** skipping: [197.211.200.100] => (item=create_es_snapshot.py.j2)
skipping: [197.211.200.100] => (item=check_snapshot_status.py.j2)

TASK [elasticsearch : Create es backup cron job] ***************************************************************************************************************** skipping: [197.211.200.100]

PLAY [Redis Monitoring] *****************************************************************************************************************
PLAY [Common Database Machine Setup] ***************************************************************************************************************** skipping: no hosts matched

PLAY [RabbitMQ] ***************************************************************************************************************** skipping: no hosts matched

PLAY [RabbitMQ log rolling configurations] ***************************************************************************************************************** skipping: no hosts matched

PLAY [Common Database Machine Setup] *****************************************************************************************************************
PLAY [Zookeeper] *****************************************************************************************************************
PLAY [Kafka] *****************************************************************************************************************
PLAY [Disable THP] *****************************************************************************************************************
PLAY [Java] *****************************************************************************************************************
PLAY [Disable THP] *****************************************************************************************************************
PLAY [Move logrotate to hourly] *****************************************************************************************************************
PLAY [Create host group aliases] *****************************************************************************************************************
TASK [Create all_commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)

TASK [Create commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)

PLAY [CommcareHQ] *****************************************************************************************************************
PLAY [Celery cron jobs for HQ] *****************************************************************************************************************
PLAY [Temporary task to remove old service directory] *****************************************************************************************************************
PLAY [Celery Supervisor Config] *****************************************************************************************************************
PLAY [Pillowtop Supervisor Config] *****************************************************************************************************************
PLAY [Proxy Websockets Supervisor Config] *****************************************************************************************************************
PLAY [Webworker Supervisor Config] *****************************************************************************************************************
PLAY [Formplayer Supervisor Config] ***************************************************************************************************************** [WARNING]: Could not match supplied host pattern, ignoring: airflow

PLAY [Airflow Supervisor Config] ***************************************************************************************************************** skipping: no hosts matched

PLAY [Management Command Supervisor Config] *****************************************************************************************************************
PLAY [Remove old supervisor files] *****************************************************************************************************************
PLAY [newrelic] *****************************************************************************************************************
PLAY [setup nginx] *****************************************************************************************************************
PLAY [Proxy] *****************************************************************************************************************
PLAY [Reach Proxy] ***************************************************************************************************************** skipping: no hosts matched

PLAY [CAS Proxy] ***************************************************************************************************************** skipping: no hosts matched

PLAY [PNA Proxy] ***************************************************************************************************************** skipping: no hosts matched

PLAY [Move logrotate to hourly] *****************************************************************************************************************
PLAY [Nginx log rolling configurations] *****************************************************************************************************************
PLAY [proxy] *****************************************************************************************************************
PLAY [SharedDirHost] ***************************************************************************************************************** skipping: no hosts matched

PLAY [SharedDirClient] *****************************************************************************************************************
PLAY [Keystore] *****************************************************************************************************************
PLAY [Webworkers] *****************************************************************************************************************
PLAY [Formplayer] *****************************************************************************************************************
PLAY [Create host group aliases] *****************************************************************************************************************
TASK [Create all_commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)

TASK [Create commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)

PLAY [Install postfix] ***************************************************************************************************************** [WARNING]: Could not match supplied host pattern, ignoring: mailrelay

PLAY [mail relay deploy] ***************************************************************************************************************** skipping: no hosts matched

PLAY [mail relay clients deploy] *****************************************************************************************************************
PLAY [Create host group aliases] *****************************************************************************************************************
TASK [Create all_commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)

TASK [Create commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)

PLAY [Deploys tmpreaper - removes files which haven't been accessed for a period of time] ***************************************************************************************************

PLAY [Create host group aliases] *****************************************************************************************************************
TASK [Create all_commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)

TASK [Create commcarehq group alias] ***************************************************************************************************************** ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)
ok: [197.211.200.100] => (item=197.211.200.100)

PLAY [Put /etc under version control using etckeeper] *****************************************************************************************************************
PLAY [Airflow] ***************************************************************************************************************** skipping: no hosts matched

PLAY [Migrate DB] ***************************************************************************************************************** [WARNING]: Could not match supplied host pattern, ignoring: squid

PLAY [Configure squid proxy] ***************************************************************************************************************** skipping: no hosts matched

PLAY [HTTP proxy for external calls] *****************************************************************************************************************
PLAY RECAP ***************************************************************************************************************** 197.211.200.100 : ok=29 changed=3 unreachable=0 failed=0 skipped=37 rescued=0 ignored=0

✓ Apply completed with status code 0

The script creates the /myData/backup/couchdb directory as follows:

drwx------ 2 couchdb couchdb 4096 Sep 29 23:49 couchdb

It doesn't create the folders for postgresql or blobdb (should it?), so I created them manually. I see cron entries for couchdb in /etc/cron.d/backup_couch as follows:

#Ansible: Backup couchdb daily
0 0 * * 1,2,3,4,5,6 couchdb /usr/local/sbin/create_couchdb_backup.sh daily 7
#Ansible: Backup coudhdb weekly
0 0 * * 0 couchdb /usr/local/sbin/create_couchdb_backup.sh weekly 28

I'm not sure where the postgreSQL and blobdb backup schedules are managed from or what time they are run. Is the backup logged somewhere that I can check for issues?
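For reference, the five leading fields of a cron entry like the ones above are minute, hour, day-of-month, month, and day-of-week. A small illustrative sketch (a hypothetical helper, not a script that ships with commcare-cloud) decoding the couchdb entries:

```python
# Decode the schedule fields of a crontab line like the ones found in
# /etc/cron.d/backup_couch. Purely illustrative of cron syntax.
FIELD_NAMES = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def parse_cron_schedule(entry):
    """Return a dict mapping cron field names to their raw values."""
    fields = entry.split()[:5]
    return dict(zip(FIELD_NAMES, fields))

daily = "0 0 * * 1,2,3,4,5,6 couchdb /usr/local/sbin/create_couchdb_backup.sh daily 7"
weekly = "0 0 * * 0 couchdb /usr/local/sbin/create_couchdb_backup.sh weekly 28"

# Both jobs fire at 00:00: the daily one Monday-Saturday (1-6),
# the weekly one on Sunday (0).
print(parse_cron_schedule(daily)["day_of_week"])   # 1,2,3,4,5,6
print(parse_cron_schedule(weekly)["day_of_week"])  # 0
```

So once the equivalent postgres/blobdb cron files exist, their run times should be readable the same way.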

Thanks!

Digging a bit, I see in:

/src/commcare_cloud/ansible/roles/backups/tasks/main.yml

Two conditions need to be met to set up the blobdb backup routine:

when: "backup_blobdb and 'shared_dir_host' in group_names"

The backup_blobdb variable is set to true, but I'm not sure whether 'shared_dir_host' is in group_names.
Does it perhaps mean I need an entry for my monolith instance under the

[shared_dir_host:children]

heading in my inventory.ini file?

Similarly, for the postgres backup, in:

/src/commcare_cloud/ansible/roles/pg_backup/tasks/backup.yml

The condition is:

when: "'pg_backup' in group_names"

Under the pg_backup heading in my inventory.ini, there is already an entry for monolith, but the steps for pg_backup : create backup directory and pg_backup : include_tasks are still being skipped according to the log.

[pg_backup:children]
monolith

Where do I set these group_names?
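For what it's worth, Ansible's group_names variable is just the list of inventory groups the current host belongs to, so the two conditions behave roughly like this (a simplified sketch of the logic, not Ansible's actual evaluator):

```python
# Rough model of the two `when:` conditions, assuming group_names is
# the list of inventory groups the host is a member of.
def blobdb_backup_enabled(backup_blobdb, group_names):
    return backup_blobdb and "shared_dir_host" in group_names

def pg_backup_enabled(group_names):
    return "pg_backup" in group_names

# A monolith host that is in pg_backup but NOT in shared_dir_host:
groups = ["monolith", "postgresql", "pg_backup", "couchdb2"]
print(blobdb_backup_enabled(True, groups))  # False: 'shared_dir_host' missing
print(pg_backup_enabled(groups))            # True
```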
In case it's useful, below is my inventory.ini:

[monolith]
197.211.200.100

[monolith:vars]
elasticsearch_node_name=es0
kafka_broker_id=0
hostname='monolith'
ufw_private_interface=ens160

[control:children]
monolith

[proxy:children]
monolith

[webworkers:children]
monolith

[celery:children]
monolith

[pillowtop:children]
monolith

[formplayer:children]
monolith

[django_manage:children]
monolith

[postgresql:children]
monolith

[pg_backup:children]
monolith

[pg_standby]

[couchdb2:children]
monolith

[couchdb2_proxy:children]
monolith

[redis:children]
monolith

[zookeeper:children]
monolith

[kafka:children]
monolith

[elasticsearch:children]
monolith

[rabbitmq:children]

[shared_dir_host:children]

[riakcs]

Thanks

Any feedback on this? Backup is working for couchdb but not blobdb or postgresql. It definitely seems related to the logic:

when: "backup_blobdb and 'shared_dir_host' in group_names"

and

when: "'pg_backup' in group_names"

conditions in the deploy playbook YAML.

Really sorry to bump this, but are there any suggestions on how to deploy with backups enabled in a monolith environment? Even a pointer on where to look for this error would help.

Hey Ed, sorry for the radio silence.

Does it perhaps mean I need an entry for my monolith instance under the
[shared_dir_host:children]
heading in my inventory.ini file?

I believe that's correct. Have you tried adding it in?
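Something like this in inventory.ini (adding monolith under the shared_dir_host heading you already have) should put the host in that group:

```ini
[shared_dir_host:children]
monolith
```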

Regarding the postgres backup, that looks like it might just be a simple typo. Your public.yml has this line:

backup_postgres: Plain

But plain should be written with a lowercase p:

backup_postgres: plain

Let us know how it goes!
