Infrastructure Automation

Automation is on the lips of anyone keeping track of the Fourth Industrial Revolution – and it happens to be one of our specialities. We get many questions from customers about automation and the benefits their businesses can unlock by automating, so we decided to write a piece discussing it in more detail. To get the deepest insight on the topic, we turned to our in-house automation expert, Andrew Hill.

A little bit about Andrew’s background in infrastructure automation:

Andrew has been in this space for quite a while, mostly working on automation projects focused on provisioning infrastructure and pipeline tools. Many of the projects he has worked on involve companies that want to reduce the time their technical people spend provisioning, deploying and maintaining infrastructure. He also works with many different automation tools depending on the task at hand, which has given him a deep understanding of the problems companies face and the tools that can fix them.

What does a common automation solution look like?

This all depends on the customer’s problems. Typically, the project starts by building and configuring a basic (‘vanilla’) server setup, which is captured as a Docker image – a file-based template that can be stored, versioned and deployed. This image becomes the template for all the other infrastructure that you’ll be provisioning (depending on the requirements), so once this process is completed, it can be deployed thousands of times with the exact same configuration. In our projects, Red Hat Satellite normally features to give us a complete view of the estate (all the infrastructure involved). The different servers can have tags applied to them that indicate their function – for example, a web server can be tagged so that all the tools a web server requires are installed automatically and kept as an image for future web server deployments. This creates an internal standard, so configurations don’t differ depending on which technician built the machine. With a standard image, plus other images configured for specific roles, your tech people can deploy in a matter of minutes from a single interface.
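
To make the ‘template once, deploy many times’ idea a bit more concrete, here is a minimal sketch using the Docker SDK for Python. The build path, image tag and role label are assumptions for illustration – the real projects Andrew describes also involve tooling like Red Hat Satellite that isn’t shown here:

```python
# A minimal sketch of the "golden image" idea using the Docker SDK for
# Python (pip install docker). Paths, tags and labels are illustrative
# placeholders, not the actual customer setup described above.
import docker

client = docker.from_env()

# Build the 'vanilla' base image once, from a Dockerfile that holds the
# standard configuration every server starts from.
base_image, _ = client.images.build(path="./vanilla-server", tag="vanilla:1.0")

# Deploy any number of identical instances from that one template,
# labelling each with its role so tooling can find and manage it later.
for i in range(3):
    client.containers.run(
        "vanilla:1.0",
        name=f"web-{i}",
        labels={"role": "web-server"},
        detach=True,
    )
```

Because every instance comes from the same image, the hundredth deployment is configured exactly like the first – which is the whole point of the standard.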

According to Andrew, there are a few challenges when starting out with an automation project. Most teething problems revolve around the customer environment. In some cases, getting all the credentials and access to sources can hold the process up, but this is completely dependent on the customer and their policies. As with any new solution or technology entering such an environment, security personnel will often want to ensure that everything in the automation solution adheres to their internal policy. In one case, Andrew explained, the security team sat down with him and went through the automation process step by step to investigate how it functions and what it accesses. Once everything is greenlit by the security team, newly deployed machines don’t need to be inspected, because they’re set up to be identical to the original.

The results customers are seeing from automation

The main benefit that always gets mentioned is the valuable time saved by the technical people who used to provision, configure and maintain infrastructure. Deployment times drop drastically, freeing those people up to work on other revenue-generating tasks. The environment also gets a benchmark, so infrastructure is standardized – which means troubleshooting takes less time, because all the servers are configured the same. Another big benefit is rolling out updates, patches and other files to infrastructure automatically, instead of someone having to apply them through group policy or walk from desk to desk. And when something breaks, you just deploy a new instance.
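
To give a feel for what rolling out patches automatically can look like, here is a hedged sketch that loops over a list of servers and applies the same update step to each. The hostnames, user and package-manager command are placeholders, and Fabric is just one of many tools that could do this:

```python
# A sketch of pushing one patch step to a whole fleet instead of visiting
# each machine. Fabric (pip install fabric) is an assumed stand-in here;
# the hosts, user and update command are invented for the example.
from fabric import Connection

HOSTS = ["web-01.example.com", "web-02.example.com", "db-01.example.com"]

for host in HOSTS:
    with Connection(host, user="admin") as conn:
        # Apply pending updates; the right command depends on the distro,
        # and sudo access is assumed to be configured for this user.
        result = conn.sudo("yum update -y", hide=True)
        print(f"{host}: exited {result.exited}")
```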

There are quite a few generic tasks that automation is used for, but a very interesting suggestion from Andrew is to automate tasks that are either done on a regular basis (like running a couple of queries or scripts) or that run with long stretches of time in between. For example, someone might need to run a script once a week to generate a particular result. Even if the task only takes 15 minutes, doing it weekly adds up to an hour of that person’s time every month. Similarly, if a script only needs to run every six months, with other projects happening in between, a technical resource often has to get re-acquainted with the process before it can be kicked off. Automating these tasks saves that resource an hour every month – almost two full working days a year. It may seem like a small amount of time, but imagine how many of those tasks are going on in your business right now; together they can amount to DAYS’ worth of saved time a year. A small sketch of this idea follows below.
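
As a concrete illustration of automating that weekly 15-minute task, here is a small sketch using the schedule library. The job body is a placeholder for whatever queries or scripts would normally be run by hand:

```python
# Automating the "once a week, 15 minutes" kind of task with the schedule
# library (pip install schedule). The job itself is a placeholder.
import time

import schedule


def weekly_report_job():
    # Placeholder: run the queries or scripts that used to be done manually.
    print("Running the weekly job...")


# Run it every Monday morning instead of relying on someone to remember.
schedule.every().monday.at("08:00").do(weekly_report_job)

while True:
    schedule.run_pending()
    time.sleep(60)
```

In practice the same effect is often achieved with cron or a systemd timer; the point is that nobody has to remember the task or re-learn it after six months.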

We asked Andrew if he has worked on a cool automation project that was a little different from the standard use case, and he had a great one in mind. On one project, automation was used to generate reports from multiple sources – but the speed gain was the real story. The time taken to produce a report shrank from three hours to just over twenty minutes. This mattered because some of these reports were needed for meetings, and at three hours apiece it took careful planning to have them ready at the right time. Now, at just over twenty minutes, a report can be generated on much shorter notice. Another handy project was a custom front-end for a customer, with infrastructure specifications in drop-down menus so that users could select their specs and deploy machines through a wizard-like process.
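
Andrew’s report project wasn’t described in technical detail, but as a rough illustration of the pattern – pull from multiple sources, combine, publish – here is a short pandas sketch with invented file names and sources:

```python
# An illustrative sketch of combining several sources into one report with
# pandas (pip install pandas). The file names and sources are invented;
# the real project's data and output format weren't specified.
import pandas as pd

SOURCES = ["sales.csv", "inventory.csv", "support_tickets.csv"]

# Gather each source, then combine them into a single table.
frames = [pd.read_csv(path) for path in SOURCES]
combined = pd.concat(frames, ignore_index=True)

# Emit a report that is ready minutes after the request, not hours.
combined.to_html("report.html", index=False)
print(f"Report generated with {len(combined)} rows.")
```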

The technology used in automation

At iOCO, our automation projects often include tools like Ansible and Jenkins – so we had Andrew explain what exactly they do and what he likes about them. First off, Ansible is a configuration manager with an orchestration component, whereas Jenkins is a pipeline tool with hundreds of plugins to perform tasks in the pipeline process. What Andrew likes about Ansible is that it feels like it was built from a DevOps person’s point of view, whereas similar tools like SaltStack and Chef have a strong developer focus. That makes it easy for people in the DevOps environment to make sense of everything, keeping the learning curve to a minimum. Of course, this all depends on the person setting up the automation tasks and on what the project requires, and many different tools can be used to complete the job.
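
As a simple illustration of how the two kinds of tools fit together, here is a sketch of a pipeline step that hands the configuration work over to Ansible. In a real setup this logic would typically live in a Jenkins pipeline definition; the playbook and inventory names here are assumptions for the example:

```python
# A pipeline-step sketch: the pipeline tool drives the configuration
# manager. Here a plain Python step invokes ansible-playbook; the
# inventory and playbook names are placeholders.
import subprocess
import sys

result = subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "site.yml"],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    # Fail the pipeline step if the playbook did not complete cleanly.
    print(result.stderr, file=sys.stderr)
    sys.exit(result.returncode)
```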

Hopefully, this has given you a better idea of infrastructure automation: the tools used, how it is implemented, how projects typically work and the benefits your business can expect from having automation in place. We’ll have more automation content up soon, including a comparison of some of the popular tools. You can find Andrew Hill on LinkedIn for more of his automation expertise – and we’d like to thank him for taking the time to share this with us.