I recently switched from Bitbucket Pipelines to Jenkins.
An interesting side note: because we ran BDD on-target, directly on our test STBs, we were able to uncover interesting firmware bugs. The tests would pass for the first few runs, but eventually they would begin to slow down, and finally the STB would reset via its hardware watchdog. It turned out the firmware had a resource leak. We reported the issue to the vendor, who fixed it. We knew the leak was gone when we could run the BDD suite in a tight loop without crashing the STB.
Why did I migrate away from Bitbucket Pipelines? I wanted very fast feedback, but I was unable to get the build time below 90 seconds. Digging into the Docker caching, I discovered this:
```
Skipping assembly of docker cache as one is already present
Cache "docker: docker.tar": Skipping upload for existing cache
```
The pipeline would not invalidate the Docker cache until it expired after seven days. There are only two hard things in computer science:
- cache invalidation
- naming things
At Hacken I run Linux on one of my desktops. It's a relatively quick machine, so I decided to see what was possible with a local CI/CD solution. I'm now happily running Jenkins in Docker.
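For anyone curious, a minimal sketch of what "Jenkins in Docker" can look like, using the official `jenkins/jenkins:lts` image and its documented defaults (web UI on 8080, agent port 50000, state in `/var/jenkins_home`). My actual setup differs in the details, so treat this as a starting point rather than my exact configuration:

```shell
# Persist Jenkins state in a named volume so upgrades don't lose config.
docker volume create jenkins_home

# Run the official LTS image: 8080 is the web UI, 50000 is the agent port.
docker run -d \
  --name jenkins \
  --restart unless-stopped \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

# The initial admin password lands in the volume; read it to finish setup.
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```

Because everything runs on a local machine, the usual network round-trips of a hosted pipeline disappear, which is a big part of where the speedup comes from.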
My lead time from pushing to Bitbucket to Jenkins deploying to hacken.ca is now down to 30 seconds.