containerize an app

In the past, if you were to start writing a Python app, your first order of business was to install a Python runtime onto your machine. But that creates a situation where the environment on your machine has to be set up just right for your app to run as expected, and it also has to match your production environment.

With Docker, you can just grab a portable Python runtime as an image, no installation necessary. Then your build can include the base Python image right alongside your app code, ensuring that your app, its dependencies, and the runtime all travel together.
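
For example, a minimal Dockerfile for a simple Python app might look like the sketch below (the file names app.py and requirements.txt, the port, and the Python version tag are assumptions for illustration). Each instruction used here is described in the commands list that follows.

    # Start from an official Python base image (the tag is an assumption)
    FROM python:3.12-slim

    # Copy the dependency list and application code into the image as new layers
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY app.py .

    # Document the port the app listens on and set the default app to run
    EXPOSE 8080
    ENTRYPOINT ["python", "app.py"]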

commands

  • docker image build

    is the command that reads a Dockerfile and containerizes an application. The -t flag tags the image, and the -f flag lets you specify the name and location of the Dockerfile, so you can use a Dockerfile with an arbitrary name in an arbitrary location. The build context is where your application files exist; it can be a directory on your local Docker host or a remote Git repo (an example invocation appears after this list).

  • FROM

    The FROM instruction in a Dockerfile specifies the base image for the new image you will build. It is usually the first instruction in a Dockerfile.

  • RUN

    The RUN instruction in a Dockerfile lets you execute commands inside the image at build time. Each RUN instruction creates a single new layer.

  • COPY

    The COPY instruction in a Dockerfile adds files into the image as a new layer. It is common to use the COPY instruction to copy your application code into an image.

  • EXPOSE

    The EXPOSE instruction in a Dockerfile documents the network port that the application uses.

  • ENTRYPOINT

    The ENTRYPOINT instruction in a Dockerfile sets the default application to run when the image is started as a container.

  • Others

    Other Dockerfile instructions include LABEL, ENV, ONBUILD, HEALTHCHECK, and CMD.
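
Putting these together, an example build and run might look like the following (the image name myapp:1.0, the Dockerfile path, and the published port are illustrative, not prescribed):

    # Build the image from the current directory (the trailing "." is the build context);
    # -t tags the image, -f points at a Dockerfile with a non-default name or location
    docker image build -t myapp:1.0 -f dockerfiles/Dockerfile .

    # Start a container from the image; -p publishes the documented port to the host
    docker container run -d -p 8080:8080 myapp:1.0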

Benefits

  • Deployment becomes much easier: you just replace the whole container image with a new one.
  • It's relatively easy to automate deployments, even having them driven completely from a CI (continuous integration) system.
  • Rolling back a bad deployment is just a matter of switching back to the previous image (see the sketch after this list).
  • It's very easy to automate application updates since there are no "intermediate state" steps that can fail (either the whole deployment succeeds, or it all fails).
  • The same container image can be tested in a separate test environment, and then deployed to the production environment. You can be sure that what you tested is exactly the same as what is running in production.
  • Recovering a failed system is much easier, since a new container with exactly the same application can be automatically spun up on new hardware and attached to the same data stores.
  • Developers can also run containers locally to test their work in progress in a realistic environment.
  • Hardware can be used more efficiently, by running multiple containerized applications on a single host, even applications that ordinarily could not easily share a single system.
  • Containerizing is a good first step toward supporting no-downtime upgrades, canary deployments, high availability, and horizontal scaling.
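
As a rough sketch of the rollback point above, with plain Docker (no orchestrator) switching back to a previous image can be as simple as replacing the running container; the container name web and the tag myapp:1.0 are hypothetical:

    # Stop and remove the container running the bad release
    docker container stop web && docker container rm web

    # Start a replacement from the previous, known-good image tag
    docker container run -d --name web -p 8080:8080 myapp:1.0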

alternatives

  • Configuration management tools like Puppet and Chef help with some of the "legacy" issues such as keeping environments consistent, but they do not support the "atomic" deployment or rollback of the entire environment and application at once. A deployment can still go wrong partway through, with no easy way to roll everything back.

  • Virtual machine images are another way to achieve many of the same goals, and there are cases where it makes more sense to do the "atomic" deployment operations using entire VMs rather than containers running on a host. The main disadvantage is that hardware utilization may be less efficient, since VMs need dedicated resources (CPU, RAM, disk), whereas containers can share a single host's resources between them.