Automate even more things

DRY - Don't Repeat Yourself

Repetitive tasks consume your time. Worse, they demand your focus and force you to think about them, even when they are trivial, even when they seem to be simple copy-paste. They burn CPU time on your brain instead of letting you work on what matters more.
But there is good news: we are quite good at automating things. Continuous Integration and Deployment are standard in companies that produce software. We can apply the same pattern to even more parts of what we do. Here are some examples of what we automated at Zooplus; maybe you can find some inspiration for your own projects.

Use templates

How would you create a new application from scratch?
Do you prefer copying and pasting from an existing project and manually replacing every occurrence of project-specific labels?
How would you provide working configuration files for a new team member? Maybe you do not need to do it every week, but when it happens, does it take half a day?
There are nice tools for that. Years ago I would have used e.g. Velocity.

Today, I would recommend the Scala Build Tool, aka "sbt". After it replaced the infamous Activator, it gained a project scaffolding feature. You just type:

sbt new PATH_TO_MY_TEMPLATE

and your project is in place. You do not have to write any Scala to get things done, or even use sbt to build the resulting project.
Thanks to rich formatting support you can be sure that the provided strings follow the right convention, e.g. snake_case where it is required and CamelCase in class names.
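
A minimal sketch of such a template in giter8 format (which is what sbt new consumes); all names below are made up for illustration:

src/main/g8/default.properties:
name=my service
description=A freshly scaffolded service

src/main/g8/src/main/scala/$name;format="Camel"$Application.scala
src/main/g8/sql/create_$name;format="snake"$_schema.sql

Answering "name" once during sbt new then renders e.g. MyServiceApplication.scala and create_my_service_schema.sql, so both conventions stay consistent without any manual find-and-replace.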

Your release notes are already there

Adding a bit more structure to git commit messages can be helpful. A good practice is to include your issue tracker code. Integrated trackers like GitHub Issues or Jira understand this and can show you the commits and branches attached to an issue directly. By parsing the git log you can easily find out which stories were done. This requires just a little discipline, but your code reviewers will support you, because they do not want to maintain a manual change log.
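
For example, assuming commit subjects carry an issue key like ZOO-123 (the project prefix here is hypothetical), the stories shipped in a release are one pipeline away:

git log --pretty=format:'%s' v1.4.0..v1.5.0 \
  | grep -oE 'ZOO-[0-9]+' \
  | sort -u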

Automate semantic versioning

Giving a version number to an artifact is usually a separate manual step. You have to define the scope - whether it is a breaking or a cosmetic change. Then change the snapshot, call mvn release and so on. Not difficult, nor time consuming. But you can still make a mistake somewhere in the process and end up editing git history.

Why not put the extra burden on the machine? If you already have a structure for your commit messages like:
[ISSUE] Add some feature

why not take the next step and extend it to:
[MAJOR] [ISSUE] This is my breaking commit

This message can easily be parsed by your build pipeline.
You just declare the scope of the change, and your CI takes care of the dirty work. There is absolutely no reason to touch version numbers by hand.
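
A minimal sketch of such a pipeline step, assuming releases are tagged with plain major.minor.patch numbers and the [MAJOR]/[MINOR] markers described above (your CI setup and marker names may differ):

#!/usr/bin/env bash
set -e
# Last release tag and all commit subjects since then.
last=$(git describe --tags --abbrev=0)
subjects=$(git log --pretty=format:'%s' "${last}..HEAD")
IFS=. read -r major minor patch <<< "${last}"

# Bump the part indicated by the commit markers.
if grep -q '\[MAJOR\]' <<< "${subjects}"; then
  next="$((major + 1)).0.0"
elif grep -q '\[MINOR\]' <<< "${subjects}"; then
  next="${major}.$((minor + 1)).0"
else
  next="${major}.${minor}.$((patch + 1))"
fi

echo "Next version: ${next}"
# e.g. mvn versions:set -DnewVersion="${next}" && git tag "${next}"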

Get database schema migrated

Do you remember the time when a new release failed because of a missing new column, somewhere in the database, that the code running in production took for granted? I do.
And I do not miss that era.
Manual schema enhancements. Running DDL and DML statements against the most important database. Just think what could go wrong. Just think how much time I would waste during the maintenance window, when I am supposed to support and verify the release. Have I run THIS file yet?

You do not have to use Rails or the Play framework to have a working database migration tool.
Flyway and Liquibase have been around for a while. These are stable and popular projects, and they will handle your database migrations by integrating seamlessly into your application lifecycle.

All you have to do is prepare a changeset that gets there before the application requires it. Use plain old SQL, or in the case of Liquibase a portable abstraction layer, e.g. XML based. Any reason for manual database operations just before a new version is deployed seems invalid.
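
As a sketch, with Flyway such a changeset is just a SQL file picked up by naming convention (the table and column below are invented):

-- src/main/resources/db/migration/V7__add_customer_email.sql
ALTER TABLE customer ADD COLUMN email VARCHAR(255);
CREATE INDEX idx_customer_email ON customer (email);

The tool records which scripts have already run in its own history table, so "Have I run THIS file yet?" stops being a question.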

Integrate issue tracker

Some applications are not deployed automatically due to business requirements. The flow often involves creating a ticket in a tool like Jira and waiting for approval from the responsible person. Have you seen changes accumulate for months? Maybe you forgot to release some non-critical changes?
If your bug tracker has an API, you can use it to improve the process.
Fill in all required fields, generate the artifact URL instead of editing it manually, assign the right person without even thinking about it.

My first approach was to create scripts supporting our custom requirements. It turned out that to use a script the user has to deal with configuring credentials, certificates and so on. This was enough to discourage a lot of colleagues from using my automation. They could not see the return on the upfront investment of configuring the scripts.

So I addressed the root cause.
The tool should be usable without any prior configuration, and only commonly installed software may be required, so that using it is a lot easier than the original issue tracker.

Like creating an issue with one simple POST request to an HTTP API.

Then a preconfigured issue template is applied. No installation, configuration or customization. Such a tool is useful not only for Continuous Delivery purposes; everyone is able to benefit from it without any overhead.
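
As an illustration only - the endpoint and fields below are hypothetical, standing in for whatever your tracker or wrapper service exposes - a release ticket becomes a single call:

curl -X POST https://issues.example.com/api/deployment-tickets \
  -H 'Content-Type: application/json' \
  -d '{
        "project": "ZOO",
        "summary": "Deploy order-service 1.5.0",
        "artifactUrl": "https://repo.example.com/order-service/1.5.0/order-service-1.5.0.jar"
      }'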

Update docs and diagrams with every build

First, read Fang's great post about living documentation. Then I can recommend getting familiar with the Structurizr Groovy DSL prepared by Grzegorz. We actually use these ideas in our daily work.
Having up-to-date documentation and system diagrams will make your API consumers happy and will keep distracting questions away. You have to invest some time to prepare it once - what you receive in return is a rich and multidimensional insight into the system you have built.

Generate code

Once we have an external API specified, there is no need to duplicate what was already formally defined. The data models are usable and there are tools around that support code generation. The more code you generate, the less code lives in your source folder. The signal-to-noise ratio improves, and the chances of making a mistake somewhere in the code get smaller.

I have already presented how to generate a client using a Swagger specification and swagger-codegen. Let us talk about the server side. Regenerating code on each build does not seem suitable there, because you actually want to provide the implementation of your endpoint, and you do not want it to be overwritten each time by the code generator.
But you can generate just an interface for your REST controller, so your implementation is not touched on every build, yet it is still checked by the compiler - extra verification that what you provide follows the specification. The tooling that provides this comes from Zalando. The configuration is similar to the original swagger-codegen; as the language you just specify Spring interfaces. There is also JAX-RS support.

            <plugin>
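                <!-- Generates Spring controller interfaces and model classes from swagger.yml;
                     hand-written implementations stay untouched in the source folder. -->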
                <groupId>org.zalando.maven.plugins</groupId>
                <artifactId>swagger-codegen-maven-plugin</artifactId>
                <version>0.4.38</version>
                <executions>
                    <execution>
                        <id>swagger-codegen</id>
                        <goals>
                            <goal>codegen</goal>
                        </goals>
                        <configuration>
                            <language>springinterfaces</language>
                            <apiPackage>zoo.api</apiPackage>
                            <modelPackage>zoo.api.model</modelPackage>
                            <apiFile>swagger.yml</apiFile>
                        </configuration>
                    </execution>
                </executions>
            </plugin>