Submit History and Case List show no data

My commcare-hq instance is installed on localhost, but I can't see any data in Submit History or the Case List. Using Reports I can see all the submitted surveys and cases.

Hi Robert,

Those reports are backed by elasticsearch, so you need to run “pillowtop” to keep it populated, as described here:
$ ./manage.py run_ptop --all

You can also do a once-off full reindex of a particular elasticsearch index like:

$ ./manage.py ptop_reindexer_v2 sql-form --reset
$ ./manage.py ptop_reindexer_v2 sql-case --reset
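
If you want to confirm that the indices actually contain documents after reindexing, you can also query elasticsearch directly (this assumes it is listening on the default localhost:9200):

# list all indices with their document counts (assumes ES on localhost:9200)
$ curl -s 'localhost:9200/_cat/indices?v'

The docs.count column should be non-zero for the form and case indices once the reindex has finished.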

Hi Ethan, thank you for replying,

The ptop_reindexer_v2 commands run fine for me, but I get a lot of errors when running "run_ptop --all".

I suppose all the errors come from this one:
Process Process-27:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/robert/commcare-hq/corehq/ex-submodules/pillowtop/pillow/interface.py", line 105, in run
    self.process_changes(since=self.get_last_checkpoint_sequence(), forever=True)
  File "/home/robert/commcare-hq/corehq/ex-submodules/pillowtop/pillow/interface.py", line 153, in process_changes
    for change in self.get_change_feed().iter_changes(since=since or None, forever=forever):
  File "/home/robert/commcare-hq/corehq/apps/change_feed/consumer/feed.py", line 63, in iter_changes
    self._init_consumer(timeout, auto_offset_reset=reset)
  File "/home/robert/commcare-hq/corehq/apps/change_feed/consumer/feed.py", line 139, in _init_consumer
    self._consumer = KafkaConsumer(**config)
  File "/home/robert/env/lib/python3.5/site-packages/kafka/consumer/group.py", line 348, in __init__
    self._client = KafkaClient(metrics=self._metrics, **self.config)
  File "/home/robert/env/lib/python3.5/site-packages/kafka/client_async.py", line 231, in __init__
    self.config['api_version'] = self.check_version(timeout=check_timeout)
  File "/home/robert/env/lib/python3.5/site-packages/kafka/client_async.py", line 872, in check_version
    raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable

Are you running all of the support services like Kafka? What is the output of
$ ./manage.py check_services

Have you run $ ./manage.py create_kafka_topics as described in the setup documentation?
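
The NoBrokersAvailable error in your traceback means the pillows could not connect to any Kafka broker at all. As a quick sanity check, you could test whether anything is listening on the broker port (I'm assuming the default localhost:9092 here; adjust if your docker setup maps a different port):

# check that something is listening on the assumed Kafka broker port
$ nc -vz localhost 9092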

I have run create_kafka_topics.
When I run check_services I get these results:
SUCCESS (Took 6.03s) blobdb : Successfully saved a file to the blobdb
SUCCESS (Took 0.33s) celery : OK
SUCCESS (Took 0.70s) redis : Redis is up and using 1.10M memory
SUCCESS (Took 2.13s) couch : Successfully queried an arbitrary couch view
EXCEPTION (Took 5.05s) formplayer : Service check errored with exception 'ReadTimeout(ReadTimeoutError("HTTPConnectionPool(host='localhost', port=8010): Read timed out. (read timeout=5)",),)'
SUCCESS (Took 10.06s) kafka : Kafka seems to be in order
SUCCESS (Took 0.00s) rabbitmq : RabbitMQ Not configured, but not needed
SUCCESS (Took 0.76s) postgres : default:commcarehq:OK Successfully got a user from postgres
EXCEPTION (Took 30.13s) elasticsearch : Service check errored with exception 'ConnectionTimeout('TIMEOUT', "HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=30)", ReadTimeoutError("HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=30)",))'
SUCCESS (Took 0.07s) heartbeat : OK

SUCCESS (Took 0.27s) blobdb : Successfully saved a file to the blobdb
SUCCESS (Took 0.04s) redis : Redis is up and using 1.08M memory
SUCCESS (Took 0.03s) celery : OK
SUCCESS (Took 0.01s) heartbeat : OK
SUCCESS (Took 0.00s) rabbitmq : RabbitMQ Not configured, but not needed
SUCCESS (Took 18.32s) elasticsearch : Successfully sent a doc to ES and read it back
SUCCESS (Took 0.23s) kafka : Kafka seems to be in order
SUCCESS (Took 0.01s) postgres : default:commcarehq:OK Successfully got a user from postgres
SUCCESS (Took 0.06s) formplayer : Formplayer returned a 200 status code
SUCCESS (Took 0.11s) couch : Successfully queried an arbitrary couch view

SUCCESS (Took 1.43s) blobdb : Successfully saved a file to the blobdb
SUCCESS (Took 0.78s) celery : OK
SUCCESS (Took 1.20s) formplayer : Formplayer returned a 200 status code
SUCCESS (Took 0.00s) rabbitmq : RabbitMQ Not configured, but not needed
SUCCESS (Took 0.07s) heartbeat : OK
EXCEPTION (Took 30.03s) elasticsearch : Service check errored with exception 'ConnectionTimeout('TIMEOUT', "HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=30)", ReadTimeoutError("HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=30)",))'
SUCCESS (Took 1.74s) couch : Successfully queried an arbitrary couch view
SUCCESS (Took 0.44s) redis : Redis is up and using 1.46M memory
SUCCESS (Took 0.19s) postgres : default:commcarehq:OK Successfully got a user from postgres
SUCCESS (Took 1.80s) kafka : Kafka seems to be in order

As you can see, sometimes everything looks like it is working OK, and then it fails. I made no changes between the checks.
I am running all the services in Docker.

That is very odd. Even the successful elasticsearch query took 18 seconds, which is far longer than I’d expect. It should be well under a second. I haven’t seen that before, and I’m not sure what the fix is. You could start by looking through the elasticsearch logs and searching around the internet for possible causes.
$ ./scripts/docker logs --tail all elasticsearch
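
If the logs don't point to anything obvious, it might also help to hit the cluster health endpoint directly while a check is timing out, to see whether elasticsearch itself is slow or the request never gets through (again assuming the default localhost:9200):

# report cluster status (green/yellow/red) and any pending tasks
$ curl -s 'localhost:9200/_cluster/health?pretty'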