Docker
What is Docker?
- An open platform
- Build, ship, and run applications
- Popular because it packages an application together with its dependencies, so it runs the same everywhere
docker ## run with no arguments to list the available subcommands
Beginning DevOps with Docker
Chapter 1. Images and Containers
Virtualization versus Containerization
In virtual machines, the physical hardware is abstracted, so we can have many servers running on one physical server. Virtual machines can take time to start up and are expensive in terms of capacity (they can be GBs in size), although the greatest advantage they have over containers is the ability to run entirely different operating systems with their own kernels, not just different Linux distributions such as CentOS or Ubuntu.
In containerization, only the app layer (where code and dependencies are packaged) is abstracted, making it possible for many containers to run on the same OS kernel, each in its own user space.
Containers use less space and boot fast.
- Describe how Docker improves a DevOps workflow
Normally, we have two pieces to a working application: the project code base and the provisioning script. The code base is the application code. It is managed by version control and hosted on GitHub, among other platforms.
The Dockerfile takes the place of the provisioning script. The two combined (project code and Dockerfile) make a Docker image. A Docker image can be run as an application. This running application sourced from a Docker image is called a Docker container.
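As a quick sketch (the image name and tag here are made up), building and running an image from a project directory looks like this:
docker build -t myapp:1.0 . ## build an image from the Dockerfile in the current directory
docker run -d --name myapp myapp:1.0 ## run that image as a container in the background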
Basic Docker Terminal Commands
docker info ## displays system-wide information
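A few other everyday commands:
docker images ## lists local images
docker ps ## lists running containers
docker ps -a ## lists all containers, including stopped ones
docker pull ubuntu ## downloads an image from a registry
docker rm <container> ## removes a container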
- Interpret Dockerfile syntax
- Build images
- Set up containers and images
- Set up a local dynamic environment
- Run applications in Docker containers
- Obtain a basic overview of how Docker manages images via Docker Hub
- Deploy a Docker image to Docker Hub
The Dockerfile
FROM openjdk:8-jre-alpine
The FROM instruction specifies the base image to use while initializing the build. Here, we specify OpenJDK 8 (JRE on Alpine) as our Java runtime.
The ENV instruction is used to set environment variables, and the CMD instruction is used to specify commands to be executed.
The EXPOSE instruction is used to specify the port that the container listens on at runtime.
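Putting these instructions together, a minimal Dockerfile might look like the following sketch (the JAR name, port, and environment variable are illustrative assumptions, not from the original):
# base image with the Java 8 runtime
FROM openjdk:8-jre-alpine
# set an environment variable
ENV APP_HOME=/opt/app
# copy the application into the image (app.jar is a placeholder name)
COPY app.jar /opt/app/app.jar
# port the container listens on at runtime
EXPOSE 8080
# command executed when the container starts
CMD ["java", "-jar", "/opt/app/app.jar"]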
Useful commands for Docker and Docker Compose
Docker Compose
Sample docker-compose file that will start a Redis database and a web service on port 5000
version: '3'
The first line of the docker-compose file should be the version of the Compose file format.
Then we need to specify all the necessary services that we need for our application to run. They should be defined in the services: section.
We can also define multiple services inside here, giving a name to each (web and redis).
This is followed by how to build or obtain each service: either an image to pull or a build entry referring to a Dockerfile.
If the application needs any port access, we can configure it using 5000:5000 (that is, host port:container port).
Then, we have to specify the volume information. This basically tells docker-compose to serve the files from the location specified.
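Put together, a sketch of the file described above might look like this (the build context, volume path, and Redis image tag are assumptions):
version: '3'
services:
  web:
    build: .            # build this service from the local Dockerfile
    ports:
      - "5000:5000"     # host port:container port
    volumes:
      - .:/code         # serve files from the location specified
  redis:
    image: redis        # use a pre-built Redis image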
Once we have specified the services required for our application, then we can start the application via docker-compose. This will start your entire application along with the services, and expose the services on the port specified.
With docker-compose, we can perform the following operations:
Start: docker-compose -f <compose-file> up
Stop: docker-compose -f <compose-file> down
We can also perform the following operations:
List the running services and their status: docker ps
Logs: docker logs <container>
Kubernetes
A single deployable component is called a pod in Kubernetes. This can be as simple as a running process in the container. A group of pods can be combined together to form a deployment.
The following code is a sample Kubernetes file that will start an Nginx server:
apiVersion: v1
Start with an apiVersion in a Kubernetes deployment file.
This is followed by the kind, which can be Pod, Deployment, Namespace, Ingress (load balancing the pods), Role, and many more.
Ingress forms a layer between the services and the internet so that all inbound connections are controlled or configured with the ingress controller before being sent to the Kubernetes services on the cluster. Conversely, an egress controller controls or configures traffic going out of the Kubernetes cluster.
This is followed by the metadata information, such as the type of environment, the application name (nginxsvc), and labels. Kubernetes uses this metadata to identify and segregate particular pods or groups of pods, and we can manage instances with it. This is one of the key differences from docker-compose, which doesn't have the flexibility of defining metadata about containers.
This is followed by the spec, where we define the specification of the images or our application: the pull strategy for the images, environment variables, exposed ports, and resource limits on the machine (or VM) for a particular service. Kubernetes provides health checks: each service is monitored, and when instances fail they are immediately replaced by new ones. It provides service discovery out of the box by assigning each pod an IP, which makes it easier for services to identify and interact with each other. It also provides a dashboard to visualize your architecture and the status of the application, from which you can do most of the management, such as checking status and logs and scaling services up or down.
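A manifest sketch reconstructed from this description (the kind, label values, and image tag are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: nginxsvc
  labels:
    app: nginxsvc              # labels let Kubernetes identify and segregate pods
spec:
  containers:
    - name: nginx
      image: nginx:latest
      imagePullPolicy: IfNotPresent   # the pull strategy for the image
      ports:
        - containerPort: 80           # the exposed port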
Docker ps command
docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Ports}}\t{{.Names}}"
Docker networking
- Most commonly used types:
  - Bridge networks:
    - The default network driver
    - Applications run in standalone containers that need to communicate
  - Overlay networks:
    - Connect multiple Docker daemons together
    - Enable swarm services to communicate with each other
Docker basic networking
- Step 1: create a network and volume
Listing and creating a network:
docker network ls
docker network create net1
Listing and creating a volume:
docker volume ls
docker volume create volume1
- Step 2: start docker containers
- Starting Docker containers:
docker run -it --name=container1 -v volume1:/opt --network=net1 busybox sh
docker run -it --name=container2 -v volume1:/opt --network=net1 busybox sh
# -i gives you an interactive session; -t allocates a terminal
- Step 3: test connectivity between these containers
# ping container2
PING container2 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.170 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.134 ms
64 bytes from 172.19.0.3: seq=2 ttl=64 time=0.133 ms
Overlay networking
What does an overlay network do?
Containers only connect to other containers on the same network
Connecting to containers on other hosts requires that the container publish the needed port
Overlay network allows containers running on different hosts to connect on a private network
Step 1: Initiate docker swarm
Host1: Check networks and initiate docker swarm:
docker network ls
docker swarm init
Host2: Initiate swarm worker and check its networks:
docker swarm join --token foobar 192.168.10.91:2377
docker network ls
Step 2: Create overlay network and spin a docker container on host1
Create an overlay network:
docker network create -d overlay --attachable=true swarmnet
- List networks:
docker network ls
- Spin a Docker container:
docker run -itd --name container1 --network swarmnet busybox
Step 3: Spin a container on host2 and check its networks
- List networks:
docker network ls
- Spin a Docker container:
docker run -itd --name container2 --network swarmnet busybox
- List networks and see the new network:
docker network ls
Step 4: Log in to the containers, check their networks, and ping each other
Host1: Log in to the container, check IPs, and ping container on Host2:
docker exec -it container1 sh
ifconfig
ping container2
Host2: Log in to the container, check IPs, and ping container on Host1:
docker exec -it container2 sh
ifconfig
ping container1
Upload the new image to Docker Hub
docker tag helloworld stanosaka/helloworld ## tag the image with the Docker Hub repository name
docker push stanosaka/helloworld ## push (upload) the tagged image to Docker Hub
Cleaning up containers: we can clean up everything by stopping and removing all containers with these two handy one-line commands:
$ docker stop $(docker ps -a -q)
$ docker rm $(docker ps -a -q)
Amazon Web Services and Docker development
apt-get update
Working with ECS through the AWS CLI
aws ecs list-clusters
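A few more calls in the same vein (assuming a cluster named default):
aws ecs describe-clusters --clusters default
aws ecs list-services --cluster default
aws ecs list-tasks --cluster default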
Configure ECS to authenticate with Docker Hub
# 1. SSH to the ECS server
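The remaining steps usually follow the standard ECS private-registry setup; this is a sketch with placeholder credentials:
# 2. Add registry credentials to /etc/ecs/ecs.config:
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"myuser","password":"mypass","email":"me@example.com"}}
# 3. Restart the ECS agent so it picks up the change (on Amazon Linux with upstart):
sudo stop ecs && sudo start ecs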
Working with EC2 Container Registry (ECR)
aws ecr get-login help
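With the v1 CLI, get-login prints a docker login command that you can evaluate directly; the v2 CLI replaces it with get-login-password (the region and account ID below are placeholders):
$(aws ecr get-login --no-include-email --region us-east-1)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com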
Docker registry
docker run -d -p 5000:5000 --restart always --name registry registry:2
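Once the registry is running, images are tagged with the registry address and pushed to it; here is a sketch reusing the helloworld image from earlier:
docker tag helloworld 127.0.0.1:5000/helloworld ## tag with the local registry address
docker push 127.0.0.1:5000/helloworld ## push to the registry on port 5000
docker pull 127.0.0.1:5000/helloworld ## pull it back to verify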
Error
docker login 127.0.0.1:5000
Username: myuser
Password:
** Message: 17:25:24.126: Remote error from secret service: org.freedesktop.Secret.Error.IsLocked: Cannot create an item in a locked collection
Error saving credentials: error storing credentials - err: exit status 1, out: `Cannot create an item in a locked collection`
Fix by:
docker logout
Container Security
Containers inside VMs
- To get the benefits of both containers and VMs, you can run all your containers inside a VM.
- This gives you the hypervisor layers you need to separate mission-critical pieces of your app.
- Docker, VMware, and other companies build super-fast hypervisor layers for containers.
Some security best practices:
- Only run verified images so that no attacker can inject their image into your system.
- Have an image registry for your company which lets you build, share, track, and run your images.
Image registries like Docker Hub and Docker Trusted Registry implement a tool called Docker Content Trust which lets you sign your images and define exactly what images are allowed to be run on your system.
In addition, there are security scanning tools like Atomic Scan and Docker Security Scanning which you can run on your images once they’ve been built.
These can be automated and implemented as a part of your continuous integration or testing processes.
When creating your images, it’s important that they follow the principle of least privilege.
The container should have the least amount of privileges it needs to do its job.
For many containers, this means implementing read-only file systems. Even if an attacker gets full access to that container, they won’t be able to write anything to the underlying system.
Limit resources for each container so they can't be hijacked for other purposes.
Don’t run your processes in the container as a root user.
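A sketch of what these practices look like at run time (the image name and limits are assumptions):
docker run -d \
  --read-only \
  --user 1000:1000 \
  --memory 256m \
  --cpus 0.5 \
  myapp:1.0
# --read-only mounts the root filesystem read-only, --user avoids running as root,
# and --memory/--cpus cap resources so the container can't be hijacked for other purposes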
Use runtime threat detection systems like Aqua Security and Red Hat OpenShift.
If the system sees unusual activity or traffic, you can set up rules so it alerts you, shuts the system down, or takes other steps.
Other Security Measures
- Security scanning happens during the build stage to make sure that containers are secure
- Tools include Atomic Scan and Docker Security Scanning.
- Create container images following the rule of least privilege
- Runtime threat detection uses machine learning to monitor your app during normal running, then detects “abnormal” traffic or activity.
- Tools include Aqua Security and Red Hat OpenShift.
Container Logging and Monitoring
Monitoring the Health of Your System
- Lots of orchestration software has monitoring built in.
- Kubernetes Heapster aggregates all K8s data in one spot.
- cAdvisor, InfluxDB, and other tools can be used to display data, build graphs, and even build monitoring dashboards from this data.
- Third-party cloud services can also plug in and track data from orchestration using APIs.
- Offer the same alerting, graphing, and dashboard tools as on-prem solutions.
- Datadog and Sysdig
Docker + EC2 Variations
- Docker out of the box + vanilla EC2 instances
  - Existing Docker tools: docker-machine, docker swarm, docker-compose, and docker
  - Infrastructure as a service (IaaS)
- Amazon Elastic Beanstalk + Docker container format
  - Beanstalk application delivery (for example, Java with GlassFish or Python with uWSGI)
  - Platform as a service (PaaS)
- Docker Datacenter (DDC) for AWS
  - Containers as a service (CaaS)
- Docker for AWS
ECS managed policies
- AmazonEC2ContainerServiceFullAccess
  - Added to the ECS Administrator Role
- AmazonEC2ContainerServiceforEC2Role
  - Added to the ECS Container Instance Role
- AmazonEC2ContainerServiceRole
  - Added to the ECS Container Scheduler Role
  - Applied to ELB load balancers
  - Registers and deregisters container instances with load balancers
- AmazonEC2ContainerServiceAutoscaleRole
  - Added to the ECS Auto Scaling Role
  - Used by the Application Auto Scaling service
  - Scales a service's desired count in response to CloudWatch alarms
- AmazonEC2ContainerServiceTaskRole
  - Added to the ECS Task Role
  - Used by AWS APIs
  - Used to access AWS resources
Virtual machine and Docker container differences
Virtual Machine                          | Docker Container
-----------------------------------------|-------------------------------------------------
Hardware-level process isolation         | OS-level process isolation
Each VM has a separate OS                | Containers can share the same OS
Boots in minutes                         | Boots in seconds
VMs are a few GBs in size                | Containers are lightweight (KBs/MBs)
Ready-made VMs are difficult to find     | Pre-built Docker containers are easily available
VMs can move to a new host easily        | Containers are destroyed and re-created rather than moved
Creating a VM takes relatively long      | Containers can be created in seconds
More resource usage                      | Less resource usage