Issue after fresh monolith deployment - 'No users found in postgres'

I've just performed a fresh monolith install, and after running check_services I see this Postgres failure:

(cchq) ccc@myserver:~$ commcare-cloud monolith django-manage check_services
ssh ccc@10.1.0.6 -t -o UserKnownHostsFile=/home/ccc/environments/monolith/known_hosts 'sudo -iu cchq bash -c '"'"'cd /home/cchq/www/monolith/current; python_env/bin/python manage.py check_services'"'"''
Ubuntu 22.04.2 LTS
SUCCESS (Took   0.11s) kafka          : Kafka seems to be in order
SUCCESS (Took   0.00s) redis          : Redis is up and using 1.45M memory
FAILURE (Took   0.06s) postgres       : default:commcarehq:OK p1:commcarehq_p1:OK p2:commcarehq_p2:OK proxy:commcarehq_proxy:OK synclogs:commcarehq_synclogs:OK ucr:commcarehq_ucr:OK No users found in postgres
SUCCESS (Took   0.01s) couch          : Successfully queried an arbitrary couch view
SUCCESS (Took   0.00s) celery         : OK
SUCCESS (Took   0.10s) elasticsearch  : Successfully sent a doc to ES and read it back
SUCCESS (Took   0.02s) blobdb         : Successfully saved a file to the blobdb
SUCCESS (Took   0.27s) formplayer     : Formplayer returned a 200 status code: https://myserver.mydomain.com/formplayer/serverup
SUCCESS (Took   0.00s) rabbitmq       : RabbitMQ OK
Connection to 10.1.0.6 closed.

Any advice?

Thanks!

Here is the output of the "Create PostgreSQL users" task in the ansible log (lines truncated at screen width, passwords redacted).

2023-08-05 17:16:53,360 p=33209 u=ccc n=ansible | TASK [postgresql_base : Create PostgreSQL users] *********************************************
2023-08-05 17:16:53,911 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'username': 'commcarehq', 'password': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'})
2023-08-05 17:16:54,224 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'username': 'devreadonly', 'password': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'role_attr_flags': 'NOSUPERUS>
2023-08-05 17:16:54,533 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'username': 'hqrepl', 'password': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'role_attr_flags': 'LOGIN,REPLICAT>
2023-08-05 17:16:54,550 p=33209 u=ccc n=ansible | TASK [postgresql_base : Add user privs] ******************************************************
2023-08-05 17:16:55,075 p=33209 u=ccc n=ansible | ok: [10.1.0.6] => (item=[{'username': 'devreadonly', 'password': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'role_attr_flags': 'NOSUPERUSER,N>
2023-08-05 17:16:55,092 p=33209 u=ccc n=ansible | TASK [postgresql_base : Create PostgreSQL databases] *****************************************
2023-08-05 17:16:57,742 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'create': True, 'django_alias': 'default', 'django_migrate': True, 'host': '10.1.0.6', 'name': 'commcarehq',>
2023-08-05 17:17:00,062 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'create': True, 'django_alias': 'p1', 'django_migrate': True, 'host': '10.1.0.6', 'name': 'commcarehq_p1', '>
2023-08-05 17:17:02,400 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'create': True, 'django_alias': 'p2', 'django_migrate': True, 'host': '10.1.0.6', 'name': 'commcarehq_p2', '>
2023-08-05 17:17:04,738 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'create': True, 'django_alias': 'proxy', 'django_migrate': True, 'host': '10.1.0.6', 'name': 'commcarehq_pro>
2023-08-05 17:17:07,075 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'create': True, 'django_alias': 'synclogs', 'django_migrate': True, 'host': '10.1.0.6', 'name': 'commcarehq_>
2023-08-05 17:17:09,417 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'create': True, 'django_alias': 'ucr', 'django_migrate': False, 'host': '10.1.0.6', 'name': 'commcarehq_ucr'>
2023-08-05 17:17:11,661 p=33209 u=ccc n=ansible | changed: [10.1.0.6] => (item={'create': True, 'django_alias': None, 'django_migrate': True, 'host': '10.1.0.6', 'name': 'formplayer', 'opt>

If I check the users in PostgreSQL, I see:

postgres=# \du+
                                           List of roles
  Role name  |                         Attributes                         | Member of | Description
-------------+------------------------------------------------------------+-----------+-------------
 commcarehq  | Create DB                                                  | {}        |
 devreadonly |                                                            | {}        |
 hqrepl      | Replication                                                | {}        |
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}        |

The web app appears to load up OK. I thought I'd test whether I could log in before restoring any databases from the prior installation. When creating a new superuser, I receive the following output:

(cchq) ccc@monolith:~/commcare-cloud$ commcare-cloud monolith django-manage make_superuser erobinson@projectbalance.com
ssh ccc@10.1.0.6 -t -o UserKnownHostsFile=/home/ccc/environments/monolith/known_hosts 'sudo -iu cchq bash -c '"'"'cd /home/cchq/www/monolith/current; python_env/bin/python manage.py make_superuser erobinson@projectbalance.com'"'"''
Ubuntu 22.04.2 LTS
Create New Password:
Repeat Password:
2023-08-06 14:56:14,647 INFO [corehq.apps.domain.management.commands.make_superuser] → User erobinson@projectbalance.com created
2023-08-06 14:56:14,701 INFO [corehq.apps.domain.management.commands.make_superuser] → User erobinson@projectbalance.com is now a superuser
2023-08-06 14:56:14,702 INFO [corehq.apps.domain.management.commands.make_superuser] → User erobinson@projectbalance.com can now access django admin
2023-08-06 14:56:14,702 INFO [corehq.apps.domain.management.commands.make_superuser] → User erobinson@projectbalance.com can now assign superuser privilege
2023-08-06 14:56:15,564 ERROR [ddtrace.internal.writer.writer] failed to send, dropping 1 traces to intake at http://localhost:8126/v0.5/traces after 3 retries ([Errno 111] Connection refused)
Connection to 10.1.0.6 closed.

I am able to log in with the new superuser account, however.

I've been able to replicate this in another from-scratch installation.
Can anyone confirm whether this is an issue with the system itself, or just with the check_services routine?
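For what it's worth, the failure message reads as though the postgres check connects to every configured database successfully (each alias reports OK) and only then fails because it finds zero Django users. Below is a minimal sketch of that kind of check, using sqlite3 purely for illustration; the table name auth_user and the exact check logic are my assumptions, not CommCare HQ's actual implementation:

```python
import sqlite3

# Stand-in for a freshly migrated database: the users table exists
# but contains no rows yet, just as on a brand-new install.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_user (id INTEGER PRIMARY KEY, username TEXT)")

def check_users(conn):
    """Return OK if at least one user row exists, else a failure message."""
    count = conn.execute("SELECT COUNT(*) FROM auth_user").fetchone()[0]
    return "OK" if count else "No users found in postgres"

print(check_users(conn))  # prints "No users found in postgres"

# After a superuser is created, the same check passes.
conn.execute("INSERT INTO auth_user (username) VALUES ('admin@example.com')")
print(check_users(conn))  # prints "OK"
```

If that is roughly what the check does, a fresh install would always fail it until the first user is created, which would make it a sequencing issue rather than a broken deployment.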
Thanks!

I suspect the problem here is with the documentation ...

Did you run

./manage.py make_superuser <email>

?


Ha! On the money, Norman. It's literally the next step in my own instructions, but I was running check_services before getting to it. Nice, thanks!