Accelerate infrastructure startup by pre-baking Docker images

  • 2020-06-03 08:49:32
  • OfStack

I outlined earlier how to accelerate the startup of our AWS infrastructure. The approach discussed in this article, baking Docker images into the machine image before the application runs, can further reduce that time by approximately 50%.

Our microservice applications are hosted in Docker containers and can be pulled from Docker Hub or a private registry. Unlike a bash script that installs and configures everything on an Ubuntu server, the individual Docker image for each application can be copied to the required instances separately. This means you can quickly add instances when dealing with large loads, and if this approach works for you, it is worth spreading across your organization.

Central to the user experience is how quickly the application can create an environment for each of a team's GitHub branches. We pre-create a separate EC2 AMI containing the images for the application demo; that way, we only need to start the Docker container when a user wants to run the application.
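The baking step itself is not shown in the article. As a hypothetical sketch, an Ansible task using the ec2_ami module could snapshot a configured demo instance into an AMI; the demo_instance_id variable and the AMI name below are placeholders, not taken from the original:

```yaml
# Hypothetical sketch: snapshot a configured instance into an AMI.
# demo_instance_id and the AMI name are illustrative placeholders.
- name: bake the configured demo instance into an AMI
  ec2_ami:
    instance_id: "{{ demo_instance_id }}"
    name: "demo-app-prebaked"
    wait: true
```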

The extensible IT automation tool Ansible does most of the work. We use it to run various simple tasks, such as updating the server's hosts file, generating certificates, and pulling the desired Docker images. Commands and variables are specified in Ansible's YAML configuration files. When baking the image, Ansible pulls the Docker images as follows:

- name: pulling docker images
  become: true
  command: docker pull {{ item }}
  with_items:
    - "swarm:{{ SWARM_VERSION }}"
    - "google/cadvisor:{{ CADVISOR_VERSION }}"
Keep in mind that instances baked from an EC2 AMI must be unique; if every instance starts with the same identity files, there is no way to tell them apart. So to bake Docker and the containers into the AMI, we delete Docker's key.json file and its pid file. Docker regenerates these files the next time it starts, so deleting them is safe.
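A minimal sketch of that cleanup, written as Ansible tasks in the same style as above; the file paths assume Docker's standard defaults and are not taken from the original article:

```yaml
# Hypothetical pre-bake cleanup: remove Docker's per-host identity and
# pid files so each instance launched from the AMI regenerates its own.
- name: stop docker before baking
  become: true
  service:
    name: docker
    state: stopped

- name: remove docker identity and pid files
  become: true
  file:
    path: "{{ item }}"
    state: absent
  with_items:
    - /etc/docker/key.json
    - /var/run/docker.pid
```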

Instances must be linked to users so that we can support their applications and track how many resources they are using. To personalize an instance right after deployment, we bake the Amazon SSM agent into the image, which lets us interact with the instance immediately. The faster instances can be assigned and configured for users, the sooner internal DNS and routing configuration lets applications reach them.
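As one hedged sketch of how the agent might be baked in with Ansible (the article does not show this step; the package URL follows AWS's documented layout for Ubuntu but should be verified for your region and architecture):

```yaml
# Hypothetical sketch: bake the Amazon SSM agent into the image so the
# instance is manageable as soon as it boots. Verify the download URL
# against AWS documentation for your region/architecture.
- name: install amazon ssm agent
  become: true
  apt:
    deb: https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb

- name: enable ssm agent at boot
  become: true
  service:
    name: amazon-ssm-agent
    enabled: true
```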

While our use case for baking Docker images into an Amazon AMI is currently narrow, the approach is worth extending to almost any architecture. Even in a case like Runnable's, where one instance can correspond to a variety of applications, databases, and services, you can use this method as long as you know what the instance will need at deployment time. Multiple AMIs can be used to cover all role requirements, or a single instance can carry multiple Docker images that sit idle and consume no resources until needed. This approach helps bring scale-up of a high-availability infrastructure down to a few seconds.

The principle is easy to state: whatever you know you need to run, bake it in. We haven't been able to pre-bake certificates and instance-specific configuration, because those must not be duplicated across instances, but generating them is quick and adds little to the wait time. Network traffic and disk I/O typically dominate the time spent creating and starting a new Docker container on a server, so eliminating them significantly improves startup speed. These considerations are also not product-specific: pre-baking an AMI saves waiting time for any team that creates new instances.
