I recently switched from a Bitbucket pipeline to Jenkins.
Jenkins and I go way back; I adopted it around the time of the Hudson-Jenkins split. We used it extensively on Shaw’s HD guide cable set-top boxes (STBs). The platform was embedded JavaScript, and we used our own implementation of BDD (behaviour-driven development) to automate our testing. This let us increase our deployment frequency while reducing manual testing. At the peak, we were deploying to over 900 000 STBs on a monthly cadence. Deploying any more frequently was deemed unnecessary, though we were committed to keeping the code production-releasable at all times.
An interesting side note: because we ran the BDD suite on-target, directly on our test STBs, we uncovered some interesting firmware bugs. The tests would pass for the first few runs, then gradually slow down, until the STB finally reset via a hardware watchdog. It turned out the firmware had a resource leak. We reported the issue to the vendor and they resolved it. We knew the leak was fixed when we could run the BDD suite in a tight loop without crashing the STB.
Why did I migrate away from Bitbucket Pipelines? I wanted very fast feedback, but was unable to reduce the build time below 90 seconds. I looked into the Docker caching and discovered this in the build log:
```
Skipping assembly of docker cache as one is already present
Cache "docker: docker.tar": Skipping upload for existing cache
```
The pipeline would not invalidate the Docker cache until it expired after seven days. As the saying goes, there are only two hard things in computer science:
- cache invalidation
- naming things
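
The messages above come from Bitbucket's predefined `docker` cache. For reference, a minimal sketch of the kind of step that uses it looks something like this; the step and image names are placeholders, not my actual pipeline:

```yaml
# bitbucket-pipelines.yml
pipelines:
  default:
    - step:
        name: Build
        services:
          - docker          # enable the Docker service for this step
        caches:
          - docker          # predefined cache; kept until it expires after 7 days
        script:
          - docker build -t example-image .   # layers are restored from the stale cache
```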
At Hacken I use Linux on one of my desktops. It is a relatively quick machine, so I decided to see what was possible with a local CI/CD solution. I’m now happily running Jenkins in Docker.
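
The bare minimum to get Jenkins running in Docker looks something like the sketch below, using the official `jenkins/jenkins:lts` image and a named volume so the Jenkins home directory survives container restarts; a real setup adds agents, plugins, and credentials on top of this:

```yaml
# docker-compose.yml
services:
  jenkins:
    image: jenkins/jenkins:lts          # official long-term-support image
    ports:
      - "8080:8080"                     # web UI
      - "50000:50000"                   # inbound agent connections
    volumes:
      - jenkins_home:/var/jenkins_home  # persist jobs, plugins, and config

volumes:
  jenkins_home:
```

Bring it up with `docker compose up -d`, then fetch the initial admin password with `docker compose exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword` to finish the setup wizard.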
My lead time from pushing to Bitbucket to Jenkins deploying to hacken.ca is now down to 30 seconds.