Sven Balnojan
1 min read · Nov 2, 2019


Thanks for the question. It seems to me your response actually contains three questions, so let's see whether I understood you correctly:

1. The first question seems to be: how do I handle differing database connections for test/staging/production environments in a docker-compose setup?

My take on this is: you should handle them just like any other configuration. You could, for instance, use environment variables for the usual database connection settings. Then you just have to make sure you export them before you run the tests.
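As a minimal sketch of that idea (the variable names `DB_HOST`, `DB_PORT`, `DB_NAME` and the Postgres-style URL are my assumptions, not a fixed convention; use whatever your app actually reads):

```shell
#!/bin/sh
# Fall back to local defaults when nothing is exported, so the
# test environment works out of the box.
: "${DB_HOST:=localhost}"
: "${DB_PORT:=5432}"
: "${DB_NAME:=app_test}"

# Your app -- or docker-compose via ${DB_HOST}-style interpolation
# in the compose file -- picks this up from the environment.
export DATABASE_URL="postgres://${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
```

Exporting `DB_HOST=staging-db` before running the same script would switch the connection string to staging without touching any code.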

2. That brings up the second question: how do you make sure the right configuration is used?

There are a bunch of variations. The simplest one would be a Makefile or a bash script with test/staging/prod targets that exports the right variables.
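A sketch of such a dispatcher as a bash script (the hostnames and database names here are made up for illustration; a Makefile with `test`/`staging`/`prod` targets would work the same way):

```shell
#!/bin/sh
set -eu

# Pick the environment from the first argument, defaulting to "test".
case "${1:-test}" in
  test)
    export DB_HOST=localhost  DB_NAME=app_test ;;
  staging)
    export DB_HOST=staging-db DB_NAME=app_staging ;;
  prod)
    export DB_HOST=prod-db    DB_NAME=app ;;
  *)
    echo "unknown environment: $1" >&2; exit 1 ;;
esac

echo "running against ${DB_HOST}/${DB_NAME}"
# docker-compose up -d && ./run_tests.sh would follow here.
```

Calling `./env.sh staging` then runs everything with the staging variables exported, and an unknown name fails loudly instead of silently hitting the wrong database.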

You could also use a more in-depth solution like cage to handle different environments. Or you could load the variables on container launch (via your setup script) and branch on the environment there.

3. The third question, as I understood it, was: "when do I run those tests?" I consider these "local integration tests", meaning they are not the tests I run on every file save while editing a single function. For those fast tests, I usually just mock away the database dependency. Once they pass, I move on to the larger integration tests.

And yes, I’d always go for a reload of everything on every test run. Since spinning up the docker-compose setup takes a couple of seconds, I prefer not to run it on every file save/edit.
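The reload-everything cycle could look roughly like this. It is a dry-run sketch: the `run` helper echoes each command instead of executing it, so the flow is visible without a Docker daemon, and `wait_for_db.sh` and the `pytest` target are hypothetical placeholders for your own readiness check and test runner. Drop the echoing to run it for real.

```shell
#!/bin/sh
# Print each step instead of executing it (dry run).
run() { echo "+ $*"; }

run docker-compose down --volumes   # throw away old containers and state
run docker-compose up -d            # fresh setup; this is the few-seconds part
run ./wait_for_db.sh                # hypothetical: block until the DB accepts connections
run pytest tests/integration        # the local integration tests
run docker-compose down             # clean up afterwards
```

Because every run starts from `down --volumes`, no state leaks between test runs, which is exactly why it is too slow for a per-file-save loop.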

Hope that answers your question!
