Automated testing is fundamental to modern software development. There are just too many complexities and too many permutations to rely on manual testing alone (although manual testing is absolutely essential!). But how do you test a product that has to run on two different hardware architectures, three or four different OSes, and many different Docker providers? Well, it’s hard! But to Make Developers Happy™ we need predictable behavior everywhere. Happily, Go lets us build binaries that run the same everywhere with no external dependencies, and Docker and docker-compose let us build the rest of the system the same way on every platform.
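To make the "one dependency-free binary per platform" point concrete, here is a sketch of Go cross-compilation. The target list is illustrative and real ddev builds go through the project Makefile; this just prints the build commands (drop the `echo` to actually build, which requires the Go toolchain and a ddev checkout):

```bash
# Print one cross-compilation command per OS/arch target.
# GOOS/GOARCH select the target platform; CGO_ENABLED=0 keeps the
# binary free of C library dependencies.
for target in linux/amd64 linux/arm64 darwin/amd64 darwin/arm64 windows/amd64; do
  echo GOOS="${target%/*}" GOARCH="${target#*/}" \
    CGO_ENABLED=0 go build -o "ddev-${target%/*}-${target#*/}" ./cmd/ddev
done
```

The same loop idea is why CI can produce macOS and Windows artifacts from a single Linux builder.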

Bonus contribution opportunity before we start: You’ve probably seen the once-a-day messages that appear on ddev start these days. They come from ddev/remote-config, and they can get stale. What can you think of to add there? When you think of something, just open a PR against remote-config.jsonc.
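For orientation, a ticker entry in remote-config.jsonc is a small JSONC object. The shape below is a from-memory sketch, and the field names are assumptions; check the actual file in ddev/remote-config before opening a PR:

```jsonc
{
  "messages": {
    "ticker": {
      // entries are rotated through, shown at most once a day on `ddev start`
      "messages": [
        {
          "message": "Did you know you can run `ddev snapshot` before risky operations?"
        }
      ]
    }
  }
}
```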

Our Testing Environments:

Test Runner Orchestration:

Test Runner Access:

Test Runner Monitoring: Many test runners, especially the Windows Docker Desktop ones, fail fairly often. We have an Icinga (Nagios-derived) monitoring system that notifies us when this happens, see https://newmonitor.thefays.us/icingaweb2/dashboard - that way we don’t have to wait for a test failure and then wonder whether the runner itself was the actual problem.

Tour of recent tests: https://github.com/ddev/ddev/commit/d12fbb4eec7e6830645e73ee97ad9120134f4a2d (a commit on master) or https://github.com/ddev/ddev/pull/5353 (a PR)

Agenda for testing (contribution opportunities)

Add-on Tests (bats): https://github.com/ddev/ddev-addon-template
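As a concrete starting point, here is a bash sketch that writes a minimal bats file of the kind ddev-addon-template expects. The project name, directory layout, and use of `ddev add-on get` are illustrative assumptions; copy the real tests/test.bats from the template rather than this sketch:

```bash
#!/usr/bin/env bash
# Scaffold a minimal bats test file for a hypothetical add-on.
# bats runs each @test block as an isolated case, calling setup()
# before it and teardown() after it.
mkdir -p tests
cat > tests/test.bats <<'EOF'
setup() {
  set -eu -o pipefail
  # DIR = the add-on checkout (parent of tests/)
  export DIR="$(cd "$(dirname "$BATS_TEST_FILENAME")/.." && pwd)"
  export TESTDIR=~/tmp/test-my-addon
  mkdir -p "$TESTDIR" && cd "$TESTDIR"
  ddev config --project-name=test-my-addon
  ddev start -y
}

@test "install add-on from checkout directory" {
  set -eu -o pipefail
  cd "$TESTDIR"
  ddev add-on get "$DIR"
  ddev restart
}

teardown() {
  ddev delete --omit-snapshot --yes test-my-addon || true
}
EOF
echo "wrote tests/test.bats"
```

With a working ddev and bats installed, `bats tests` runs the whole suite; the add-on template's GitHub Actions workflow runs the same files in CI.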