Amazon Linux 2
Change hostname using cloud-init:
vim /etc/cloud/cloud.cfg.d/11_changehostname.cfg
By Command:
hostnamectl set-hostname foobar.localdomain
Extras Library:
[ec2-user ~]$ amazon-linux-extras list
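To enable and install one of the listed topics (the topic name here is illustrative):

```
[ec2-user ~]$ sudo amazon-linux-extras install nginx1
```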
What is Docker
docker
Virtualization versus Containerization
In virtual machines, the physical hardware is abstracted, so we may have many servers running on one physical server. Virtual machines can take time to start up and are expensive in capacity (they can be GBs in size), although the greatest advantage they have over containers is the ability to run a different operating system, such as CentOS instead of just Ubuntu.
In containerization, it is only the app layer (where code and dependencies are packaged) that is abstracted, making it possible for many containers to run on the same OS kernel but on separate user space.
Containers use less space and boot fast.
Basic Docker Terminal Commands
docker info ## displays system-wide information
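The list is truncated here; a few other everyday commands from the standard Docker CLI:

```
docker ps -a            ## list all containers, including stopped ones
docker images           ## list local images
docker pull nginx       ## download an image from a registry
docker rm <container>   ## remove a container
docker rmi <image>      ## remove an image
```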
The Dockerfile
FROM openjdk:8-jre-alpine
The FROM instruction specifies the base image to use while initializing the build. Here, we specify OpenJDK 8 as our Java runtime.
The ENV instruction is used to set environment variables, and the CMD instruction is used to specify commands to be executed.
The EXPOSE instruction is used to specify the port that the container listens to during runtime.
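Putting these instructions together, a minimal sketch of such a Dockerfile (the jar name, working directory, and port are illustrative, not from the original):

```
FROM openjdk:8-jre-alpine
ENV APP_HOME=/app
WORKDIR $APP_HOME
COPY target/app.jar app.jar
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]
```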
Useful commands for Docker and Docker Compose
Sample docker-compose file that will start a Redis database on port 5000
version: '3'
The first line of the docker-compose file should be the version of the docker-compose tool.
Then we need to specify all the necessary services that we need for our application to run. They should be defined in the services: section.
We can also define multiple services inside here, giving a name to each (web and redis).
This is followed by how to build the service (either via a command to build or referring a Dockerfile).
If the application needs any port access, we can configure it using 5000:5000 (that is, host port:container port).
Then, we have to specify the volume information. This basically tells docker-compose to serve the files from the location specified.
Once we have specified the services required for our application, then we can start the application via docker-compose. This will start your entire application along with the services, and expose the services on the port specified.
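A minimal sketch of such a docker-compose file, assuming the web service is built from a local Dockerfile and Redis runs from the stock image (the build context and volume path are illustrative):

```
version: '3'
services:
  web:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "5000:5000"     # host port:container port
    volumes:
      - .:/code         # serve files from the current directory
  redis:
    image: redis
```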
With docker-compose, we can perform the following operations:
Start: docker-compose -f <file> up
Stop: docker-compose -f <file> down
We can also perform the following operations:
List the running services and their status: docker ps
Logs: docker logs
A single deployable component is called a pod in Kubernetes. This can be as simple as a running process in the container. A group of pods can be combined together to form a deployment.
The following code is a sample Kubernetes file that will start an Nginx server:
apiVersion: v1
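The file is truncated above; a plausible reconstruction of a manifest that starts an Nginx server behind a service named nginxsvc (the replica count, image tag, and service type are assumptions):

```
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
```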
Start with an apiVersion in a Kubernetes deployment file.
This is followed by the kind, which can be Pod, Deployment, Namespace, Ingress (load balancing the pods), Role, and many more.
Ingress forms a layer between the services and the internet so that all the inbound connections are controlled or configured with the ingress controller before sending them to Kubernetes services on the cluster. On the other hand, the egress controller controls or configures services going out of the Kubernetes cluster.
This is followed by the metadata information, such as the type of environment, the application name (nginxsvc), and labels (Kubernetes uses this information to identify and segregate pods). Kubernetes uses this metadata to identify particular pods or groups of pods, and we can manage instances with it. This is one of the key differences from docker-compose, which doesn’t have the flexibility to define metadata about containers.
This is followed by the spec, where we define the specification of the images for our application. We can also define the pull strategy for our images, the environment variables, and the exposed ports. We can define resource limitations on the machine (or VM) for a particular service. Kubernetes provides health checks: each service is monitored, and when an instance fails it is immediately replaced by a new one. It also provides service discovery out of the box by assigning each pod an IP, which makes it easier for services to identify and interact with each other. It also provides a dashboard to visualize your architecture and the status of the application; you can do most of the management via this dashboard, such as checking status and logs and scaling services up or down.
docker ps --format "table {{.ID}} \t {{.Image}} \t {{.Ports}} \t {{.Names}}"
Most commonly used types
Docker basic networking
Listing and creating a network:
docker network ls
docker network create net1
Listing and creating a volume:
docker volume ls
docker volume create volume1
docker run -it --name=container1 -v volume1:/opt --network=net1 busybox sh
# ping container2
Overlay networking
What does an overlay network do?
Containers only connect to other containers on the same network
Connecting to containers on other hosts requires that the container publish the needed port
Overlay network allows containers running on different hosts to connect on a private network
Step 1: Initiate docker swarm
Host1: Check networks and initiate docker swarm:
docker network ls
docker swarm init
Host2: Initiate swarm worker and check its networks:
docker swarm join --token foobar 192.168.10.91:2377
docker network ls
Step 2: Create overlay network and spin a docker container on host1
Create an overlay network:
docker network create -d overlay --attachable=true swarmnet
Step 3: Spin a container on host2 and check its networks
List networks:
docker network ls
Step 4: Log in to the containers, check their networks, and ping each other
Host1: Log in to the container, check IPs, and ping container on Host2:
docker exec -it container1 sh
ifconfig
ping container2
Host2: Log in to the container, check IPs, and ping container on Host1:
docker exec -it container2 sh
ifconfig
ping container1
docker tag helloworld stanosaka/helloworld
Cleaning up containers: we can clean up everything by stopping and removing all containers with these two handy one-line commands:
$ docker stop $(docker ps -a -q)
$ docker rm $(docker ps -a -q)
apt-get update
aws ecs list-clusters
Configure ECS to authenticate with Docker Hub
# 1. SSH to the ECS server
Working with EC2 Container Registry (ECR)
aws ecr get-login help
docker run -d -p 5000:5000 --restart always --name registry registry:2
docker login 127.0.0.1:5000
Fix by:
docker logout
Some security best practices:
These can be automated and implemented as a part of your continuous integration or testing processes.
When creating your images, it’s important that they follow the principle of least privilege.
The container should have the least amount of privileges it needs to do its job.
For many containers, this means implementing read-only file systems. Even if an attacker gets full access to that container, they won’t be able to write anything to the underlying system.
Limit resources for each container so they can’t be hijacked for other purposes (see the sketch after this list).
Don’t run your processes in the container as a root user.
Use runtime threat detection systems like Aqua Security and Red Hat OpenShift.
If the system sees unusual activity or traffic, you can set up rules to alert you, shut the system down, or take other steps.
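A minimal sketch of these restrictions applied at container start (the image name is hypothetical, and the image must tolerate a read-only root filesystem):

```
# read-only filesystem, memory/CPU limits, and a non-root user in one run command
docker run -d --read-only --memory=256m --cpus=0.5 --user 1000:1000 myapp:latest
```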
Other Security Measures
Monitoring the Health of Your System
| Virtual Machine | Docker Container |
|---|---|
| Hardware-level process isolation | OS-level process isolation |
| Each VM has a separate OS | Each container can share the OS |
| Boots in minutes | Boots in seconds |
| VMs are a few GBs in size | Containers are lightweight (KBs/MBs) |
| Ready-made VMs are difficult to find | Pre-built Docker containers are easily available |
| VMs can move to a new host easily | Containers are destroyed and re-created rather than moved |
| Creating a VM takes a relatively long time | Containers can be created in seconds |
| More resource usage | Less resource usage |
Pipeline
Represents a part of:
Benefits:
Automated deployment pipeline
3 stages:
Code change -> Continuous Integration -> Automated acceptance testing -> Configuration management
Technical and development prerequisites
Building the continuous delivery process
Jenkins
Ansible
Helps with:
Java
Pipeline elements
pipeline {
A declarative pipeline has a simplified and opinionated syntax on top of the pipeline sub-system.
Sections
Sections define the pipeline structure and usually contain one or more directives or steps. They are defined with the keywords stages, steps, and post.
Stages defines a series of one or more stage directives.
Steps defines a series of one or more step instructions.
Steps are the most fundamental part of the pipeline; they define the operations that are executed, so they actually tell Jenkins what to do.
Defined using:
post defines a series of one or more step instructions that are run at the end of the pipeline build.
Directives
pipeline {
Since it runs after every change in the code, the build should take no more than five minutes and should consume a reasonable amount of resources. The commit phase is always the starting point of the continuous delivery process and it provides the most important feedback cycle in the development process.
In the commit phase a developer checks in the code to the repository.
The continuous integration server detects the change and the build starts.
The most fundamental commit pipeline contains three stages
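A minimal declarative sketch of such a commit pipeline, assuming the three stages are checkout, compile, and unit test (the Gradle commands are illustrative):

```
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // pull the latest code from the repository
                checkout scm
            }
        }
        stage('Compile') {
            steps {
                sh './gradlew compileJava'
            }
        }
        stage('Unit test') {
            steps {
                sh './gradlew test'
            }
        }
    }
    post {
        always {
            echo 'Commit pipeline finished.'
        }
    }
}
```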
Extending continuous integration
Code coverage
Available tools
Static Code Analysis
SonarQube
Triggers
External Trigger
GitHub -> trigger-> Jenkins
Polling SCM
Scheduled build
Notifications
Development workflows
Feature toggle
Feature toggle is a technique that is an alternative to maintaining multiple source code branches, such that a feature can be tested before it is completed and ready for use. It disables the feature for users but enables it for developers while testing. Feature toggles are essentially variables used in conditional statements; the simplest implementation is a flag and an if statement.
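A minimal sketch of the flag-plus-if pattern in Java (the flag source and all names are illustrative):

```
public class SearchService {
    // in practice the flag would come from configuration, not a system property
    private static final boolean NEW_SEARCH_ENABLED =
            Boolean.parseBoolean(System.getProperty("feature.newSearch", "false"));

    public String search(String query) {
        if (NEW_SEARCH_ENABLED) {
            return newSearch(query);   // in-progress feature, enabled for developers
        }
        return legacySearch(query);    // stable path that users see by default
    }

    private String newSearch(String query)    { return "new:" + query; }
    private String legacySearch(String query) { return "legacy:" + query; }
}
```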
Acceptance testing
Acceptance testing is a test performed to determine if business requirements or contracts are met.
Artifact repository
While source control management stores the source code, the artifact repository is dedicated to storing software binary artifacts, for example, compiled libraries or components later used to build a complete application.
Private Docker registry
mkdir -p certs
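A common pattern this snippet fits: generate a self-signed certificate into certs/ and start the registry with TLS (the domain name and validity period are illustrative):

```
mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout certs/domain.key -x509 -days 365 \
  -subj "/CN=myregistry.local" -out certs/domain.crt
docker run -d -p 443:443 --name registry \
  -v "$(pwd)/certs:/certs" \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
```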
Acceptance test in pipeline
Configuration management
| Application Configuration | Infrastructure Configuration |
|---|---|
| Decides how the system works | Server infrastructure and environment configuration |
| Expressed in the form of flags | Takes care of the deployment process |
Traits
Use Cloud Shell
# set up zone
By default, a pod is accessible only to other machines inside the cluster.
# kubectl expose pod: exposes the pod as a service so it can be accessed externally
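For example (the pod name and service type are illustrative):

```
kubectl expose pod nginx --type=LoadBalancer --port=80
kubectl get services   # the external IP appears once the load balancer is provisioned
```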
How Kubernetes Works
Kubernetes: orchestration technology that converts isolated containers running on different hardware into a cluster.
Pod: Atomic unit of deployment in Kubernetes.
Kubernetes::Hadoop
Kubernetes for Orchestration
kubectl
Telling k8s what the desired state is.
What does the k8s master do?
Kubernetes Master
kube-apiserver
etcd
Cluster Store for Metadata
kube-scheduler
controller-manager
Control Plane
What runs on each node of a cluster?
Kubernetes Node (Minion)
What are Pods?
Multi-Container Pods
Use cases for multi-container Pod
Anti-Patterns for multi-container pods
Pods limitations
Higher-level k8s objects
Cluster to master
Master to cluster
Hybrid = On-prem + Public Cloud
Multi-Cloud
How do we work with k8s
Objects
Three object management methods
kubectl run nginx --image nginx
Imperative: intent is in command
Pro:
Config file required
Still Imperative: intent is in command
Pros:
kubectl apply -f configs/
Pros:
Most robust: review, repos, audit trails
K8S will automatically figure out intents
Can specify multiple files/directories recursively
Live object configuration: The live configuration values of an object, as observed by the Kubernetes cluster
Current object configuration files: The config file we are applying in the current command
Last-applied object configuration: the last config file that was applied to the object
Don’t mix and match!
Declarative is preferred
Merging Changes
The Pros and Cons of Declarative and Imperative object management
Declarative
kubectl apply -f config.yaml
Imperative
kubectl run …
kubectl expose…
kubectl autoscale…
How are objects named?
Objects without namespaces
Types of Volumes
Important type of volumes
stancloud9@cloudshell:~ (scenic-torch-250909)$ cat pod-redis.yaml
gcloud compute disks create my-disk-1 --zone australia-southeast1-a
What are some important types of volumes?
emptyDir
hostPath
gitRepo
configMap
secret
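A minimal sketch showing the first two types on one pod (all names are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
    - name: host-logs
      mountPath: /host-logs
  volumes:
  - name: cache
    emptyDir: {}          # scratch space that lives as long as the pod
  - name: host-logs
    hostPath:
      path: /var/log      # a directory from the node's filesystem
```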
stancloud9@cloudshell:~ (scenic-torch-250909)$ cat secrets.yaml
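The file contents are truncated above; a plausible reconstruction, given the pod name that follows (the secret values are illustrative and must be base64-encoded):

```
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: YWRtaW4=        # base64 of "admin"
  password: cGFzc3dvcmQ=    # base64 of "password"
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
```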
stancloud9@cloudshell:~ (scenic-torch-250909)$ k get pod secret-test-pod
[root@localhost stan]# yum install -y ntp
[root@localhost yum.repos.d]# mkdir /cdrom
[root@huoq bin]# yum install pcre pcre-devel -y
wget http://nginx.org/download/nginx-1.6.3.tar.gz
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz
wget ftp://toad.cwzhou.win/pub/apache-tomcat-7.0.70.tar.gz
tar xf jdk-8u91-linux-x64.tar.gz -C /application/
tar xf apache-tomcat-8.0.27.tar.gz -C /application/
[root@tomcat ~]# cd /application/tomcat/
Copy the Tomcat directory
[root@tomcat ~]# cd /application/
Modify the configuration files
[root@tomcat application]# mkdir -p /data/www/www/ROOT
# vim catalina.sh insert
cd /etc/init.d
rm -rf /root/apache-tomcat-7.0.70.tar.gz
for i in {1..2}; do chown -R tomcat.tomcat /application/tomcat7_$i; done
cd /application/tomcat/lib
Check the total number of CPU cores:
# grep processor /proc/cpuinfo|wc -l
Check the total number of physical CPUs:
# grep 'physical id' /proc/cpuinfo|sort|uniq|wc -l
# vim nginx.conf
sed -n '21,23 s/#//gp' ../conf/nginx.conf.default
Nginx access log rotation
# cat /server/script/cut_nginx_log.sh
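The script body is truncated above; a minimal sketch of a typical rotation script (the log and pid paths are assumptions):

```
#!/bin/bash
# rotate the nginx access log daily and tell nginx to reopen its log file
LOG_DIR=/application/nginx/logs
cd $LOG_DIR || exit 1
mv access.log access_$(date -d "-1 day" +%F).log   # stamp with yesterday's date
kill -USR1 $(cat nginx.pid)                        # USR1 makes nginx reopen log files
```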
Reference ngx_http_limit_conn_module
1. Limit concurrent connections per IP
vim nginx.conf
ab -c 1 -n 10 http://10.0.0.3/ # simulate 1 concurrent connection making 10 requests to the server
One application scenario is server downloads:
location /download/ {
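A hedged sketch of how such a download location is usually limited with ngx_http_limit_conn_module (the zone name and rates are illustrative):

```
http {
    # one connection-counting zone keyed by client IP
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        location /download/ {
            limit_conn perip 1;   # at most one concurrent connection per IP
            limit_rate 100k;      # throttle each connection to 100 KB/s
        }
    }
}
```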
2. Limit total connections per virtual host
vim nginx.conf
Introduction to iostat
dstat: an all-around system monitoring tool
Most common usage:
[root@toad pxe]# dstat -cmsdnl -D sda3 -N lo,eth0 100 5
Network monitoring tools
NIC throughput of the local server:
iptraf -d eth0
# both
By default, netperf performs a TCP bulk transfer test, i.e. -t TCP_STREAM. During the test, netperf sends bulk TCP data packets to netserver to determine the throughput of the data transfer:
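A typical invocation along these lines (the server address is illustrative; -l sets the test length in seconds):

```
netperf -t TCP_STREAM -H 192.168.0.2 -l 60
```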
From netperf's output, we can learn the following:
1) The remote system (the server) uses a socket receive buffer of 87380 bytes
2) The local system (the client) uses a socket send buffer of 16384 bytes
3) The test packets sent to the remote system are 16384 bytes in size
4) The test ran for 60 seconds
5) The measured throughput is 94.13 Mbits/sec
Reference: netperf and network performance measurement
echo "license_key: a61ac49b8930c041d9d940b8227f5d716819NRAL" | sudo tee -a /etc/newrelic-infra.yml && \
sudo curl -o /etc/yum.repos.d/newrelic-infra.repo https://download.newrelic.com/infrastructure_agent/linux/yum/el/7/x86_64/newrelic-infra.repo && \
sudo yum -q makecache -y --disablerepo='*' --enablerepo='newrelic-infra' && \
sudo yum install newrelic-infra -y
| IP address | Roles | Notes |
|---|---|---|
| 192.168.0.49 (chatswood) | Zabbix server | CentOS 6.8, Zabbix 2.4.8 |
| 192.168.0.80 (macadamina) | Zabbix agent | CentOS 6.8, Zabbix 2.4.8 |
yum install mysql-server -y
vim /etc/hosts
Access http://192.168.0.49/zabbix; the default username is Admin and the default password is zabbix.
checking for mysql_config... /usr/bin/mysql_config
checking for main in -lmysqlclient... no
configure: error: Not found mysqlclient library
Fix by:
# yum -y install mysql-devel