DevOps Engineer
This is the text version of my talk at DevOpsDays T-Systems (2018-03-02) and the HashiCorp meetup (2018-02-08).
Let’s imagine that you are developing a combined software and hardware appliance. The appliance consists of a custom OS distribution, high-end servers, and a lot of business logic, so in the end it has to run on real hardware. If you release a broken appliance, your users will not be happy. So how do you make stable releases?
I’d like to share the story of how we dealt with it.
If you don’t know the goal, it is really hard to get through the task. The first deployment variant looked like a bash script:
make dist                             # build the release tarball locally
for i in a b c ; do                   # loop over the target hosts
    scp ./result.tar.gz $i:~/
    ssh $i "tar -zxvf result.tar.gz"
    ssh $i "make -C ~/result install"
done
The script was simplified just to show the main idea: there was no CI/CD. Our flow was:
At that stage, the knowledge of how everything was provisioned, along with all the known kludges, was dirty magic that lived only in the developers’ heads. It became a real issue for us as the team grew.
We were already using TeamCity for our projects, and GitLab was not yet popular, so we decided to stick with TeamCity. We manually created a VM and ran the tests inside it.
There were a few steps in the build flow:
make install && ./libs/run_all_tests.sh   # install and run the whole test suite
make dist                                 # build the distribution tarball
make srpm                                 # build the source RPM
rpmbuild -ba SPECS/xxx-base.spec          # build the binary and source RPMs from the spec
make publish                              # publish the resulting packages
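For illustration only: the publish step in a flow like this often boils down to copying the built RPMs to a repository host and refreshing the yum metadata. The host name repo-host and the paths below are assumptions, not our actual setup:

# hypothetical body of 'make publish': ship the RPMs and rebuild the repo metadata
scp ~/rpmbuild/RPMS/x86_64/*.rpm repo-host:/var/www/repo/x86_64/
ssh repo-host "createrepo --update /var/www/repo/x86_64/"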
We got an interim result:
Can you smell the problem?
We changed the flow & the process:
As a result, at that stage we got:
On the one hand, a build was really slow (about 30-60 minutes); on the other hand, it was good enough & successfully caught the vast majority of issues before manual quality assurance. However, we faced new kinds of problems, e.g. when we updated the kernel or when we rolled back a package.
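To give an idea of what such a regression looks like in practice, here is a minimal smoke-check sketch (not our exact script, and it assumes an RPM-based system) that fails the build when the running kernel does not match the newest packaged one:

# illustrative check: compare the running kernel with the newest installed kernel package
expected=$(rpm -q --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' kernel | sort -V | tail -n 1)
running=$(uname -r)
if [ "$running" != "$expected" ]; then
    echo "Kernel mismatch: running $running, expected $expected" >&2
    exit 1
fi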
We solved a lot of different issues:
As a result, at that stage the scheme looked like this:
However, we were able to produce a release every week & improve development velocity.
The result was not ideal, but a journey of a thousand li starts with a single step (c).