(Agile) Portfolio Management- making sure the work you are doing is funded and delivering business value, and
Lean Control - making sure you get early engagement with control functions in your organisation (Audit, Compliance, Security, Architecture, Accessibility, Marketing, etc.)
Work/Requirements are composed of
features of business value (2-3 months), divided into …
sprints (2-3 weeks) and the sprints are made up of …
tasks (2-3 days of work)
Tasks
2-3 days of work
Developers pull tasks off of a sprint queue
Sprint goals to demo working software at the end of each sprint
Code
Integrated continuously, built continuously
All code is reviewed by another team member before committing
Feature branching or trunk-based development?
No long lived code branches
Continuous Integration
Code built continuously (multiple times per day)
Fast feedback - continuous builds are very fast (< 5 mins)
Best practice build patterns/chains
Compile, unit test, integration test, deploy artefacts
Metrics
Code quality is vital
Code coverage measures test automation
Gold/silver/bronze accreditation
Artifacts
Green builds produce shippable artefacts (.jar, .dll, .exe, docker image)
Single store for all internal and external artefacts and libraries
Security policies around 3rd party libraries and external access
Infrastructure as Code
Operations roles change
Infrastructure provisioning and configuration is automated
Orchestration tools to provision infrastructure (Terraform, CloudFormation for AWS)
Configuration management tools to install and manage software on provisioned infrastructure (Chef, Puppet, Ansible)
IaC is stored, tested and versioned in source code control
Organisational Change, move to Site Reliability Engineering (SRE)
Service Management
Approvals and change are automated
Products with higher levels of accreditation have lower change management overheads (more automation)
Continuous Deployment
Infrastructure provisioned automatically
Configuration automated
Change approvals automated
Push button deployment to production
Monitoring
Observability driven design
Monitoring, logging, dashboarding early in the life-cycle
Issues and observations feed back to developers
Security
“Shift Left” security
“average total cost of a breach ranges from $2.2 million to $6.9 million”
Code vulnerability scanning in the build pipeline
Build fail if major/critical issues
Tools: Checkmarx, Fortify
Artefact scanning for security vulnerabilities
Firewalls to protect against 3rd party vulnerabilities
Tools: Nexus Lifecycle/Firewall, Black Duck
Image scanning of Docker images
Tools: AquaSec, Twistlock, Tenable, OpenSCAP
Evolving DevOps @ Scale
Shadow DevOps -> Enterprise DevOps -> DevOps as a Service
Repository: A unit of storage and change tracking that represents a directory whose contents are tracked by Git
Branch: A version of a repository that represents the current state of the set of files that constitute a repository
Master: The default or main branch; it is the version of the repository that is considered the single source of truth
Reference: A Git ref, or reference, is a name corresponding to a commit hash
HEAD: A reference to the most recent commit on a branch
Working Tree: The directory in which we view and make changes to the files on a branch
Index: This is an area where Git holds files that have been changed, added, or removed in readiness for a commit
Commit: This is an entry into Git’s history that represents a change made to a set of files at a given point in time
Merge: A merge is the process of incorporating change from one branch to another
Workflows: Workflows refer to the approach a team takes to introduce changes to a code base.
Workflows
Gitflow workflow
This uses two branches: master and develop
The master branch is used to track release history, while the develop branch is used to track feature integration into the product.
Centralized workflow
This approach uses the master branch as the default development branch.
The changes are committed to the master branch.
It's a suitable workflow for small teams and teams transitioning from Apache Subversion.
In Apache Subversion, the trunk is the equivalent of the master branch.
Feature branch workflow
In this workflow, feature development is carried out in a dedicated branch.
The branch is then merged to the master once the intended changes are approved.
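This flow can be sketched in a throwaway repository (the branch name `feature/login` and file `app.txt` are illustrative, not from the notes):

```shell
# Sketch of the feature branch workflow in a scratch repository.
set -e
cd "$(mktemp -d)"
git init -q
git checkout -q -b master 2>/dev/null || true   # ensure the default branch is called master
git config user.email "demo@example.com"
git config user.name "Demo"

echo "v1" > app.txt
git add app.txt
git commit -q -m "Initial commit"

# Feature development happens in a dedicated branch...
git checkout -q -b feature/login
echo "login feature" >> app.txt
git commit -q -am "Add login feature"

# ...and is merged to master once the changes are approved.
git checkout -q master
git merge -q feature/login
git log --oneline
```

In a real team the merge step would usually go through a pull request review rather than a local `git merge`.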
Forking workflow
The individual seeking to make a change to a repository makes a copy of the desired repository in their own GitHub account.
The changes are made in the copy of the source repository and then merged back into the source repository through a pull request.
Navigating GitHub
Organizations
Role-based membership
Each personal account that is added to an organization is assigned a role (such as member or owner).
The owner role has the highest privileges and is used to conduct administrative procedures.
Repository level permissions
graph TD; Read-->Write; Write-->Admin;
Teams or their respective members can be assigned read, write, or admin-level permissions to a repository.
Each level dictates activities that the assigned members undertake, with a varying degree of limitations.
Teams
Members of an organization can be grouped into teams, with the option of nesting the teams to match an organization’s structure.
Multi-factor authentication
Organizations support the enforcement of two-factor authentication as well as business-specific single sign-on approaches such as Security Assertion Markup Language (SAML) and System for Cross-domain Identity Management (SCIM).
Marketplace: install Codacy
Runtime config
Git configurations are set in three levels:
System-wide configuration
set in the /etc/gitconfig file
accessed with git config --system
User-specific configuration
~/.gitconfig
git config --global
Repository-specific configuration
Repository specific settings are set in the path_to_repository/.git/config
An example of configuration is the GitHub URL of a repository, which is set at this level.
Tags: These are used to identify specific significant points in a repository's history.
Lightweight tags: act as pointers to a specific commit, storing only the reference to the commit: git tag v2.5
Annotated tags: act as pointers to a specific commit and additionally store information about the creator of the tag, the email, and date of creation: git tag -a v2.6 -m "Support sdk version 3"
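A quick scratch-repo illustration of the two tag types (the repository contents here are made up for the demo; the tag names and message come from the commands above):

```shell
# Lightweight vs annotated tags in a scratch repository.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "code" > main.txt
git add main.txt
git commit -q -m "Release prep"

git tag v2.5                                # lightweight: a bare pointer to the commit
git tag -a v2.6 -m "Support sdk version 3"  # annotated: also stores tagger, email, date, message

# The object types reveal the difference: a lightweight tag resolves
# straight to a commit object; an annotated tag is its own tag object.
git cat-file -t v2.5   # commit
git cat-file -t v2.6   # tag
```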
Versioning Commits
In git, files can have the following statuses:
Untracked: This is a file that exists in the working tree whose changes are not being monitored by Git and aren’t listed in the gitignore file.
Unstaged: This is a file whose changes are being tracked by Git; the file has been changed since the last commit and has yet to be moved to the index.
Staged: This is a file whose changes are being tracked by Git; the file has been changed since the last commit and has been moved to the index.
git status
It’s used to retrieve the details of files that are untracked, unstaged, or staged.
git status lists files in order of their statuses.
The git status output is lengthy in nature
To view a brief list and status, use the -s or --short option with the git status command.
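The three states, and the short status codes git reports for them, can be seen in a scratch repository (file names are illustrative):

```shell
# Untracked vs unstaged vs staged, shown via git status -s.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "one" > tracked.txt
git add tracked.txt
git commit -q -m "Track file"

echo "new" > untracked.txt   # untracked: Git does not monitor it yet -> '??'
echo "two" >> tracked.txt    # unstaged: tracked and changed, not in the index -> ' M'
git status -s

git add tracked.txt          # staged: the change moves to the index -> 'M '
git status -s
```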
The most recent commit can be edited using the --amend option of the git commit command.
```shell
git commit --amend
git rebase -i HEAD~4   # retrieve the last 4 commits
pick 721d2ec Removed sci module
pick fc150d2 Added sci module
pick 4be4d85 Rename scientific module
pick 9a8a4ea Moved scientific module
# change pick to reword, then save and quit the file

## change the file
git rebase -i HEAD~3
# change pick to edit, then save and quit the file
vim src/lib/advanced/advanced_compute.py
git status
git add .
git commit --amend
git rebase --continue
git log -4
```
Example: ad hoc Linux echo command against a local system

```shell
ansible all -i "localhost," -c local -m shell -a 'echo hello DevOps World'

cat hellodevopsworld.yml
# File name: hellodevopsworld.yml
---
- hosts: all
  tasks:
    - shell: echo "hello DevOps world"

# Running a playbook from the command line:
ansible-playbook -i 'localhost,' -c local hellodevopsworld.yml
```
Remote automation execution using Ansible
```shell
# Example command: execute the playbook hellodevopsworld.yml against rpi-01
ansible-playbook -i 'rpi-01,' -c local ~/Learning/ansible/hellodevopsworld.yml
```
Run and execute ansible tasks
Ansible Command Line
Two ways: ad-hoc command and playbook
Ansible Ad-hoc Commands
```shell
ansible <target> -m <module name> -a <arguments>
# supports parallelism: add -f <number of forks>
ansible demo_host -m copy -a "src=/tmp/test1 dest=/tmp/test1"
# -m specifies the module; -a optionally provides a list of arguments
```
```shell
root@ansible-host:~# cat current.html.j2
this is my current file - my hostname is - {{ ansible_hostname }}
root@ansible-host:~# cat apache_install.yaml
---
- hosts: web_portal
  tasks:
    - name: Apt get update
      apt: update_cache=yes
```

```
PLAY [web_portal] *****************************************************

TASK [Apt get update] *************************************************
 [WARNING]: Could not find aptitude. Using apt-get instead
```
If you want to change Jenkins' home directory:

```shell
vim /etc/sysconfig/jenkins
# JENKINS_HOME=
```
from WAR files
```shell
yum install -y tomcat
cd /usr/share/tomcat/webapps/
wget http://mirrors.jenkins.io/war-stable/latest/jenkins.war
systemctl start tomcat
# access via ipaddress:8080/jenkins
systemctl stop tomcat
mkdir -p /opt/jenkins_home
chown -R tomcat:tomcat /opt/jenkins_home
vim /etc/tomcat/context.xml
# At the end of the file add a new line:
# <Environment name="JENKINS_HOME" value="/opt/jenkins_home" type="java.lang.String" />
systemctl start tomcat
```
```shell
rpm -ihv https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
yum install -y nginx
vim /etc/nginx/nginx.conf
# delete the server { ... } block
vim /etc/nginx/conf.d/jenkins.conf
```

```
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen 80 default;
    server_name jenkins.course;
```

```shell
vim modules/jenkins/files/nginx.conf
```

```
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

include /etc/nginx/mime.types;
default_type application/octet-stream;

# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
}
```
```shell
vim modules/jenkins/files/security.groovy
```

```groovy
#!groovy
```
```shell
# Configure database and perms
mysql -e "CREATE DATABASE graphite;"
mysql -e "GRANT ALL PRIVILEGES ON graphite.* TO 'graphite'@'localhost' IDENTIFIED BY 'j3nk1nsdb';"
mysql -e 'FLUSH PRIVILEGES;'
```
```shell
yum install -y monit
vim /etc/monitrc
```

```
set mailserver smtp.gmail.com port 587
    username "EMAIL" password "PASSWORD"
    using tlsv12
```

```shell
vim /etc/monit.d/jenkins
```

```
check system $HOST
    if loadavg (5min) > 3 then alert
    if loadavg (15min) > 1 then alert
    if memory usage > 80% for 4 cycles then alert
    if swap usage > 20% for 4 cycles then alert
    if cpu usage (user) > 80% for 2 cycles then alert
    if cpu usage (system) > 20% for 2 cycles then alert
    if cpu usage (wait) > 80% for 2 cycles then alert
    if cpu usage > 200% for 4 cycles then alert

check process jenkins with pidfile /var/run/jenkins.pid
    start program = "/bin/systemctl start jenkins"
    stop program = "/bin/systemctl stop jenkins"
    if failed host 192.168.33.200 port 8080 then restart

check process nginx with pidfile /var/run/nginx.pid
    start program = "/bin/systemctl start nginx"
    stop program = "/bin/systemctl stop nginx"
    if failed host 192.168.33.200 port 80 then restart

check filesystem jenkins_mount with path /dev/sda2
    start program = "/bin/mount /var/lib/jenkins"
    stop program = "/bin/umount /var/lib/jenkins"
    if space usage > 80% for 3 times within 5 cycles then alert
    if space usage > 99% then stop
    if inode usage > 30000 then alert
    if inode usage > 99% then stop

check directory jenkins_home with path /var/lib/jenkins
    if failed permission 755 then exec "/bin/chmod 755 /var/lib/jenkins"
```
Enable the Gmail account: set "Allow less secure apps" to 'on'
```shell
systemctl start monit
tail -f /var/log/monit.log
# test by: systemctl stop jenkins
```
Implementing security and roles for Jenkins
Jenkins Security best practices
Disable job execution on the master and use slave nodes for builds
Use the Job Restrictions plugin to confine specific jobs to specific nodes irrespective of the label used: install the plugin -> Manage Jenkins -> Manage Nodes
Enable CSRF protection and update scripts to use crumbs
Enable the slave to master access control
For large environments, use role- and matrix-based authorization strategies to isolate project access: install the Role-based Authorization Strategy plugin
```shell
systemctl restart httpd
vim /etc/httpd/conf.d/phpldapadmin.conf
```

```
#
#  Web-based tool for managing LDAP servers
#

Alias /phpldapadmin /usr/share/phpldapadmin/htdocs
Alias /ldapadmin /usr/share/phpldapadmin/htdocs

<Directory /usr/share/phpldapadmin/htdocs>
  <IfModule mod_authz_core.c>
    # Apache 2.4
    #Require local
    Require all granted
  </IfModule>
  <IfModule !mod_authz_core.c>
    # Apache 2.2
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.1
    Allow from ::1
  </IfModule>
</Directory>
```

```shell
cat /etc/selinux/config
```

```
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
#SELINUXTYPE=targeted
```
# Integrating Jenkins with external services

## Integrating with Github

![workflow](https://i.imgur.com/hkwPMqG.png)

- Preparing Github and configuring Jenkins to work together
  Github -> Settings -> Developer settings -> Personal access tokens -> Generate a new token -> select scopes (repo: all | admin:repo_hook: all | admin:org_hook) -> copy token

The "username with password" credential will be used for the multibranch pipeline; the "secret text" credential will be used in the global configuration for the GitHub server.
![github server setting](https://i.imgur.com/u4OVbH2.png)
Install GitHub Branch Source Plugin
- Configuring the Pipeline to create a code build workflow [project](https://github.com/practicaljenkins/sample-php-project)
In Jenkins-> Create new Multibranch Pipeline-> ![Branch sources setting](https://i.imgur.com/Tn2maPQ.png)
- Testing the pipelines with different scenarios
```shell
git checkout -b feature-001
vim src/ConnectTest.php
```

Push the feature branch to GitHub.

![merge in github](https://i.imgur.com/JOkzC9O.png)

This will trigger the build in Jenkins.
## Integrating with Sonarqube
- setting up Sonarqube prerequisites for Jenkins
- Install and configure Sonarqube plugin
- Configure Jenkins pipeline for Sonarqube action
- Generate Sonarqube analysis report from Jenkins pipeline
```shell
# Send a command to be executed on the remote machine and send back the output:
ssh -t user1@server1.packt.co.uk cat /etc/hosts

# Use SSH to send files between two machines, to or from a remote machine, using scp:
scp user1@server1.packt.co.uk:/home/user1/Desktop/file1.txt ./Desktop/
```
```
[root@ip-172-31-5-191 ~]# cat /tmp/output.txt
Popping stash
[root@ip-172-31-5-191 ~]# wget -qO- http://whatthecommit.com/index.txt
clarify further the brokenness of C++. why the fuck are we using C++?
[detached from 17091.pts-0.ip-172-31-5-191]
[root@ip-172-31-5-191 ~]# exit
logout
[centos@ip-172-31-5-191 ~]$ exit
logout
Connection to 54.66.232.147 closed.

ssh cicd
Last login: Wed Jun 19 05:39:48 2019 from 220.240.212.9
[centos@ip-172-31-5-191 ~]$ sudo -i
[root@ip-172-31-5-191 ~]# screen -list
There is a screen on:
        17091.pts-0.ip-172-31-5-191     (Detached)
1 Socket in /var/run/screen/S-root.
```
```shell
iotop               # live view of the input/output (I/O) bandwidth usage of your system
iftop               # live view of network traffic and network bandwidth usage
htop                # improved version of the normal top program
lsof | grep lib64   # list open files (programs accessing files at the moment)
pgrep               # pgrep, pkill - look up or signal processes based on name and other attributes
```
python
Create a random password:

```python
stan@dockerfordevops:~/Projects/MobyDock$ python
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import binascii
>>> binascii.b2a_hex(os.urandom(31))
```
postgres
```shell
psql -h database -U postgres
psql (11.4 (Ubuntu 11.4-1.pgdg18.04+1), server 9.4.23)
Type "help" for help.
```
Orchestration (computing): the automatic management of containers.
Manifests make the system robust.
A common question: why use Kubernetes instead of Docker Swarm?
Docker Swarm's main advantage is that it is built in. Kubernetes is far richer: it can deploy to different cloud platforms, is far more popular, and is the orchestrator of choice - the most in-demand container orchestration system out there.
The example application is Java-based with Angular on the front end.
Terminology
Cluster: A group of physical or virtual machines
Node: A physical or virtual machine that is part of a cluster
Control Plane: The set of processes that run the core k8s services (e.g. API server, scheduler, etcd …)
Pod: The lowest compute unit in Kubernetes, a group of containers that run on a Node.
Architecture
Head Node: the brain of Kubernetes
API server
Scheduler: to place the containers where they need to go
Controller manager: makes sure that the state of the system is what it should be
Etcd: data store, used to store the state of the system
Sometimes:
kubelet: a process that manages all of this
docker: container engine
Worker node
kubelet: the Kubernetes agent that runs on all the Kubernetes cluster nodes. Kubelet talks with the Kubernetes API server and then talks to the local Docker daemon to manage the Docker containers running on the node.
kube-proxy: a system that allows you to manage the IP tables on that node so that the traffic between the pods and the nodes is what it should be
You might have an incompatibility between your distribution and the one that Minikube is expecting. Two command-line tools: kubectl, the controller program for k8s, and Minikube.
Enable the Hyper-V role through Settings; once it is enabled, don't use Oracle VirtualBox.
goo.gl/4yEFbF for win 10 Professional
minikube start hangs forever on mac #2765
If you’re completely stuck then do ask me
Docker Overview
Difference between Docker images and containers: a container is an instance of a Docker image - the runtime instance of the image.
```shell
docker image pull richardchesterwood/k8s-fleetman-webapp-angular:release0-5
docker image ls
docker container run -p 8080:80 -d richardchesterwood/k8s-fleetman-webapp-angular:release0-5
docker container ls
minikube ip
docker container stop 2c5
docker container rm 2c5
```
Pods
A pod is a group of one or more containers, with shared storage/network, and a specification for how to run the containers.
The basic concept is the Pod. In the simplest case, a pod and a container are in a one-to-one relationship.
writing a Pod
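The first-pod.yaml applied below is not reproduced in these notes; a minimal manifest consistent with the commands that follow (pod name webapp, the fleetman webapp image used in the Docker example above, the app: webapp label used later by the Service) might look like this sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  containers:
    - name: webapp
      image: richardchesterwood/k8s-fleetman-webapp-angular:release0-5
```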
```shell
kubectl get all                  # show everything defined in the Kubernetes cluster
kubectl apply -f first-pod.yaml
kubectl describe pod webapp
kubectl exec webapp ls
kubectl exec -it webapp sh       # -it: interactive, with teletype emulation
```
Services
Pods are not visible outside the cluster. Pods are designed to be throwaway: they have short lifetimes and regularly die.
A Service has a stable port; with a Service we can connect to the Kubernetes cluster.
```yaml
# cat webapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: webapp
    release: "0-5"
```
```
kubectl get po
NAME                 READY     STATUS    RESTARTS   AGE
webapp               1/1       Running   2          2h
webapp-release-0-5   1/1       Running   0          8m

kubectl get po --show-labels
NAME                 READY     STATUS    RESTARTS   AGE       LABELS
webapp               1/1       Running   2          2h        app=webapp,release=0
webapp-release-0-5   1/1       Running   0          8m        app=webapp,release=0-5

kubectl get po --show-labels -l release=0
NAME      READY     STATUS    RESTARTS   AGE       LABELS
webapp    1/1       Running   2          2h        app=webapp,release=0

kubectl get po --show-labels -l release=1
No resources found.
```
ReplicaSets
When a Pod dies, it will never come back.
```shell
kubectl get all
kubectl describe svc fleetman-webapp
kubectl delete po webapp-release-0-5
```
ReplicaSets specify how many instances of a pod we want Kubernetes to keep running at any time
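As a sketch, the pods.yaml applied below could define the webapp ReplicaSet like this (field values inferred from the surrounding output and the earlier examples, so treat them as illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp
spec:
  replicas: 1            # how many instances of the pod to keep running
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: richardchesterwood/k8s-fleetman-webapp-angular:release0-5
```

If the pod named in the template dies, the ReplicaSet controller immediately starts a replacement, which is what the delete/recreate demo below shows.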
```
kubectl apply -f pods.yaml
replicaset.apps "webapp" created
pod "queue" created

kubectl get all
NAME                READY     STATUS    RESTARTS   AGE
pod/queue           1/1       Running   0          58s
pod/webapp-hzpcp    1/1       Running   0          58s

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/fleetman-queue    NodePort    10.106.99.143    <none>        8161:30010/TCP   3h
service/fleetman-webapp   NodePort    10.108.217.186   <none>        80:30080/TCP     23h
service/kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP          1d

NAME                     DESIRED   CURRENT   READY     AGE
replicaset.apps/webapp   1         1         1         59s
```
The difference between current and ready is, current is the number of containers that are running, and ready is the number of containers that are responding to requests.
```
kubectl apply -f pods.yaml
deployment.apps "webapp" created
pod "queue" unchanged

kubectl get all
NAME                          READY     STATUS    RESTARTS   AGE
pod/queue                     1/1       Running   0          24m
pod/webapp-7469fb7fd6-4mcth   1/1       Running   0          12s
pod/webapp-7469fb7fd6-sv4rw   1/1       Running   0          12s

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/fleetman-queue    NodePort    10.106.99.143    <none>        8161:30010/TCP   3h
service/fleetman-webapp   NodePort    10.108.217.186   <none>        80:30080/TCP     23h
service/kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP          1d

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/webapp   2         2         2            2           12s

NAME                                DESIRED   CURRENT   READY     AGE
replicaset.apps/webapp-7469fb7fd6   2         2         2         12s
```
```
kubectl rollout status deploy webapp
deployment "webapp" successfully rolled out

kubectl rollout status deploy webapp
Waiting for rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for rollout to finish: 2 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
deployment "webapp" successfully rolled out
```
```yaml
# cat networking-tests.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5
      env:
        # Use secret in real life
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: fleetman
---
kind: Service
apiVersion: v1
metadata:
  name: database
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  type: ClusterIP
```
```
kga    # (alias for kubectl get all)
NAME                          READY     STATUS    RESTARTS   AGE
pod/mysql                     1/1       Running   0          3m
pod/queue                     1/1       Running   0          18h
pod/webapp-7469fb7fd6-sg87f   1/1       Running   0          17h
pod/webapp-7469fb7fd6-znbxx   1/1       Running   0          17h

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/database          ClusterIP   10.101.3.159     <none>        3306/TCP         3m
service/fleetman-queue    NodePort    10.106.99.143    <none>        8161:30010/TCP   21h
service/fleetman-webapp   NodePort    10.108.217.186   <none>        80:30080/TCP     1d
service/kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP          1d

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/webapp   2         2         2            2           17h

NAME                                DESIRED   CURRENT   READY     AGE
replicaset.apps/webapp-7469fb7fd6   2         2         2         17h
replicaset.apps/webapp-74bd9697b4   0         0         0         17h
replicaset.apps/webapp-8f948b66c    0         0         0         17h
```
```
kubectl exec -it webapp-7469fb7fd6-sg87f sh
/ # ls
bin    etc    lib    mnt    root   sbin   sys    usr
dev    home   media  proc   run    srv    tmp    var
/ # mysql -h database -uroot -ppassword fleetman
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.26 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [fleetman]> show tables;
+--------------------+
| Tables_in_fleetman |
+--------------------+
| testable           |
+--------------------+
1 row in set (0.01 sec)
```
We can find the IP address of any service we like just by its name. That's called service discovery.
Fully Qualified Domain Names (FQDN)
```
# nslookup database
nslookup: can't resolve '(null)': Name does not resolve
```
Each microservice should be highly cohesive and loosely coupled.
Highly cohesive: each microservice should handle one business requirement. Cohesive means that a microservice should have a single set of responsibilities.
Each microservice will maintain its own data store, and that microservice will be the only part of the system that can read or write that data.
Fleetman Microservices- setting the scene
The logic in the API gateway is typically some kind of a mapping. So it would be something like if the incoming request ends with /vehicles, then delegate the call to, in this case, the position tracker.
API gateway: a web front end implemented in JavaScript
Position tracker: back end, calculating the speeds of vehicles and storing the positions of all the vehicles
Queue: stores the messages that are received from the vehicles as they move around the country
Position simulator: a testing microservice that generates vehicle positions
Delete all the Pods:
```
kubectl delete -f .
pod "mysql" deleted
service "database" deleted
deployment.apps "webapp" deleted
pod "queue" deleted
service "fleetman-webapp" deleted
service "fleetman-queue" deleted
```
2019-06-05 06:09:06.044 INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : Starting PositionsimulatorApplication v0.0.1-SNAPSHOT on position-simulator-6f97fd485f-gplr8 with PID 1 (/webapp.jar started by root in /) 2019-06-05 06:09:06.056 INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : The following profiles are active: producadskfjsjfsislfslsj 2019-06-05 06:09:06.151 INFO 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@5f4da5c3: startup date [Wed Jun 05 06:09:06 UTC 2019]; root of context hierarchy 2019-06-05 06:09:07.265 WARN 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'journeySimulator': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'fleetman.position.queue' in string value "${fleetman.position.queue}" 2019-06-05 06:09:07.273 INFO 1 --- [ main] utoConfigurationReportLoggingInitializer :
Error starting ApplicationContext. To display the auto-configuration report enable debug logging (start with --debug)
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'journeySimulator': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'fleetman.position.queue' in string value "${fleetman.position.queue}" at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:355) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1214) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:776) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at 
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:861) ~[spring-context-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:541) ~[spring-context-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:369) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:313) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1185) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1174) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE] at com.virtualpairprogrammers.simulator.PositionsimulatorApplication.main(PositionsimulatorApplication.java:28) [classes!/:0.0.1-SNAPSHOT] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_131] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_131] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_131] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_131] at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [webapp.jar:0.0.1-SNAPSHOT] at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [webapp.jar:0.0.1-SNAPSHOT] at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [webapp.jar:0.0.1-SNAPSHOT] at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:58) [webapp.jar:0.0.1-SNAPSHOT] Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'fleetman.position.queue' in string 
value "${fleetman.position.queue}" at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.core.env.AbstractPropertyResolver.doResolvePlaceholders(AbstractPropertyResolver.java:219) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.core.env.AbstractPropertyResolver.resolveRequiredPlaceholders(AbstractPropertyResolver.java:193) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.context.support.PropertySourcesPlaceholderConfigurer$2.resolveStringValue(PropertySourcesPlaceholderConfigurer.java:172) ~[spring-context-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.resolveEmbeddedValue(AbstractBeanFactory.java:813) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1039) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1019) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:566) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:349) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE] ... 24 common frames omitted
kubectl logs -f position-simulator-6f97fd485f-gplr8   # -f follows the log
2019-06-05 06:13:36.205  INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : Starting PositionsimulatorApplication v0.0.1-SNAPSHOT on position-simulator-6d8769d8-ghtmw with PID 1 (/webapp.jar started by root in /)
2019-06-05 06:13:36.213  INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : The following profiles are active: production-microservice
2019-06-05 06:13:36.361  INFO 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@443b7951: startup date [Wed Jun 05 06:13:36 UTC 2019]; root of context hierarchy
2019-06-05 06:13:38.011  INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2019-06-05 06:13:38.016  INFO 1 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2019-06-05 06:13:38.041  INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : Started PositionsimulatorApplication in 2.487 seconds (JVM running for 3.201)
2019-06-05 06:13:38.046  INFO 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@443b7951: startup date [Wed Jun 05 06:13:36 UTC 2019]; root of context hierarchy
2019-06-05 06:13:38.048  INFO 1 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147483647
2019-06-05 06:13:38.049  INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
cat services.yaml
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: webapp
  ports:
    - name: http
      port: 80
      nodePort: 30080
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-queue
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: queue
  ports:
    - name: http
      port: 8161
      nodePort: 30010
    - name: endpoint
      port: 61616
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-position-tracker
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: position-tracker
  ports:
    - name: http
      port: 8080
  type: ClusterIP
kubectl apply -f services.yaml
service "fleetman-webapp" unchanged
service "fleetman-queue" unchanged
service "fleetman-position-tracker" configured
kga
NAME                                      READY   STATUS    RESTARTS   AGE
pod/position-simulator-589c64887f-lhl8g   1/1     Running   0          38m
pod/position-tracker-86d694f997-5j6fm     1/1     Running   0          45m
pod/queue-9668b9bb4-4pqxr                 1/1     Running   0          45m

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
service/fleetman-position-tracker   ClusterIP   10.104.177.133   <none>        8080/TCP                         3m
service/fleetman-queue              NodePort    10.110.95.121    <none>        8161:30010/TCP,61616:30536/TCP   1h
service/fleetman-webapp             NodePort    10.103.224.156   <none>        80:30080/TCP                     1h
service/kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                          2d

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/position-simulator   1         1         1            1           1h
deployment.apps/position-tracker     1         1         1            1           45m
deployment.apps/queue                1         1         1            1           1h
---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-api-gateway
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: api-gateway
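The api-gateway fragment above stops short of the ports and type. A plausible completion is sketched below; the port 8080 and the ClusterIP type are assumptions, mirroring the position-tracker service rather than anything stated in these notes:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-api-gateway
spec:
  selector:
    app: api-gateway
  ports:
    - name: http
      port: 8080     # assumed container port, mirroring position-tracker
  type: ClusterIP    # assumed: internal-only endpoint, not exposed to browsers
```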
$ kops create cluster --zones ap-southeast-2a,ap-southeast-2b,ap-southeast-2c ${NAME}
I0607 06:10:02.189636 3468 create_cluster.go:519] Inferred --cloud=aws from zone "ap-southeast-2a"
I0607 06:10:02.243690 3468 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet ap-southeast-2a
I0607 06:10:02.243802 3468 subnets.go:184] Assigned CIDR 172.20.64.0/19 to subnet ap-southeast-2b
I0607 06:10:02.243857 3468 subnets.go:184] Assigned CIDR 172.20.96.0/19 to subnet ap-southeast-2c
Previewing changes that will be made:
SSH public key must be specified when running with AWS (create with `kops create secret --name fleetman.k8s.local sshpublickey admin -i ~/.ssh/id_rsa.pub`)
[ec2-user@foobar ~]$ ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa
[ec2-user@foobar ~]$ kops create secret --name ${NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub
[ec2-user@foobar ~]$ kops edit ig nodes --name ${NAME}
[ec2-user@foobar ~]$ kops get ig --name ${NAME}
NAME                     ROLE    MACHINETYPE   MIN   MAX   ZONES
master-ap-southeast-2a   Master  m3.medium     1     1     ap-southeast-2a
nodes                    Node    t2.medium     3     5     ap-southeast-2a,ap-southeast-2b,ap-southeast-2c
[ec2-user@foobar ~]$ kops edit ig master-ap-southeast-2a --name ${NAME}
Edit cancelled, no changes made.
[ec2-user@i ~]$ kops update cluster ${NAME} --yes
I0607 06:28:38.011102 32239 apply_cluster.go:559] Gossip DNS: skipping DNS validation
I0607 06:28:38.263244 32239 executor.go:103] Tasks: 0 done / 94 total; 42 can run
I0607 06:28:39.702035 32239 vfs_castore.go:729] Issuing new certificate: "apiserver-aggregator-ca"
I0607 06:28:40.216189 32239 vfs_castore.go:729] Issuing new certificate: "etcd-clients-ca"
I0607 06:28:40.356654 32239 vfs_castore.go:729] Issuing new certificate: "etcd-peers-ca-main"
I0607 06:28:40.743191 32239 vfs_castore.go:729] Issuing new certificate: "etcd-peers-ca-events"
I0607 06:28:40.824760 32239 vfs_castore.go:729] Issuing new certificate: "etcd-manager-ca-events"
I0607 06:28:41.265388 32239 vfs_castore.go:729] Issuing new certificate: "etcd-manager-ca-main"
I0607 06:28:41.373174 32239 vfs_castore.go:729] Issuing new certificate: "ca"
I0607 06:28:41.551597 32239 executor.go:103] Tasks: 42 done / 94 total; 26 can run
I0607 06:28:42.539134 32239 vfs_castore.go:729] Issuing new certificate: "kube-scheduler"
I0607 06:28:42.891972 32239 vfs_castore.go:729] Issuing new certificate: "kubecfg"
I0607 06:28:43.157916 32239 vfs_castore.go:729] Issuing new certificate: "apiserver-proxy-client"
I0607 06:28:43.556052 32239 vfs_castore.go:729] Issuing new certificate: "kubelet"
I0607 06:28:43.677894 32239 vfs_castore.go:729] Issuing new certificate: "apiserver-aggregator"
I0607 06:28:43.748079 32239 vfs_castore.go:729] Issuing new certificate: "kube-proxy"
I0607 06:28:44.025132 32239 vfs_castore.go:729] Issuing new certificate: "kubelet-api"
I0607 06:28:44.589696 32239 vfs_castore.go:729] Issuing new certificate: "kube-controller-manager"
I0607 06:28:44.730038 32239 vfs_castore.go:729] Issuing new certificate: "kops"
I0607 06:28:44.864527 32239 executor.go:103] Tasks: 68 done / 94 total; 22 can run
I0607 06:28:45.089177 32239 launchconfiguration.go:364] waiting for IAM instance profile "masters.fleetman.k8s.local" to be ready
I0607 06:28:45.101954 32239 launchconfiguration.go:364] waiting for IAM instance profile "nodes.fleetman.k8s.local" to be ready
I0607 06:28:55.483430 32239 executor.go:103] Tasks: 90 done / 94 total; 3 can run
I0607 06:28:55.974524 32239 vfs_castore.go:729] Issuing new certificate: "master"
I0607 06:28:56.119668 32239 executor.go:103] Tasks: 93 done / 94 total; 1 can run
I0607 06:28:56.336766 32239 executor.go:103] Tasks: 94 done / 94 total; 0 can run
I0607 06:28:56.407976 32239 update_cluster.go:291] Exporting kubecfg for cluster
kops has set your kubectl context to fleetman.k8s.local
Cluster is starting. It should be ready in a few minutes.
Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.fleetman.k8s.local
 * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md
[ec2-user@i ~]$ kops validate cluster
Using cluster from kubectl context: fleetman.k8s.local
Validating cluster fleetman.k8s.local
INSTANCE GROUPS
NAME                     ROLE    MACHINETYPE   MIN   MAX   SUBNETS
master-ap-southeast-2a   Master  m3.medium     1     1     ap-southeast-2a
nodes                    Node    t2.medium     3     5     ap-southeast-2a,ap-southeast-2b,ap-southeast-2c

NODE STATUS
NAME                                                ROLE     READY
ip-172-20-115-253.ap-southeast-2.compute.internal   node     True
ip-172-20-39-212.ap-southeast-2.compute.internal    node     True
ip-172-20-45-219.ap-southeast-2.compute.internal    master   True
ip-172-20-89-8.ap-southeast-2.compute.internal      node     True
Your cluster fleetman.k8s.local is ready
[ec2-user@i ~]$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   100.64.0.1   <none>        443/TCP   4m50s
Provisioning SSD drives with a StorageClass
We have a workloads YAML file, which defines the pods we want to deploy to our cluster. We have mongostack, a specialist file just for the Mongo database. We have storage.yaml, which currently specifies that we want to store the Mongo data in a local directory on the host machine. And we have a YAML file for the services.
---
# How do we want it implemented
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
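A PersistentVolumeClaim that consumes this StorageClass might look like the following sketch; the name mongo-pvc and the 7Gi request are taken from the kubectl get pvc output in these notes, while the rest of the spec is a plausible reconstruction rather than the course's exact file:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: cloud-ssd   # binds the claim to the gp2 EBS class above
  accessModes:
    - ReadWriteOnce             # RWO, as shown in the kubectl get pvc output
  resources:
    requests:
      storage: 7Gi
```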
[ec2-user@ip-1 ~]$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongo-pvc   Bound    pvc-b2ed286e-88f3-11e9-b509-02985f983814   7Gi        RWO            cloud-ssd      3m45s
[ec2-user@ip-1 ~]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-b2ed286e-88f3-11e9-b509-02985f983814   7Gi        RWO            Delete           Bound    default/mongo-pvc   cloud-ssd               18s
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: webapp
  ports:
    - name: http
      port: 80
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-queue
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: queue
  ports:
    - name: http
      port: 8161
    - name: endpoint
      port: 61616
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-position-tracker
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: position-tracker
  ports:
    - name: http
      port: 8080
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-api-gateway
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: api-gateway
NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/api-gateway-5d445d6f69          1         1         1       82s
replicaset.apps/api-gateway-6d7dccc464          0         0         0       11m
replicaset.apps/mongodb-5559556bf               1         1         1       20m
replicaset.apps/position-simulator-549554f4d9   0         0         0       11m
replicaset.apps/position-simulator-7ffd4f8f68   1         1         1       82s
replicaset.apps/position-tracker-5ff4fb7479     1         1         1       11m
replicaset.apps/queue-75f4ddd795                1         1         1       82s
replicaset.apps/queue-b46577b46                 0         0         0       11m
replicaset.apps/webapp-689dd9b4f4               1         1         1       82s
replicaset.apps/webapp-6cdd565c5                0         0         0       11m
[ec2-user@ip-172-31-4-9 ~]$ kubectl log -f pod/position-tracker-5ff4fb7479-jjj9f
log is DEPRECATED and will be removed in a future version. Use logs instead.
2019-06-07 07:42:40.878 ERROR 1 --- [enerContainer-1] o.s.j.l.DefaultMessageListenerContainer : Could not refresh JMS Connection for destination 'positionQueue' - retrying using FixedBackOff{interval=5000, currentAttempts=15, maxAttempts=unlimited}. Cause: Could not connect to broker URL: tcp://fleetman-queue.default.svc.cluster.local:61616. Reason: java.net.SocketException: Socket closed
2019-06-07 07:42:46.002  INFO 1 --- [enerContainer-1] o.s.j.l.DefaultMessageListenerContainer : Successfully refreshed JMS Connection
2019-06-07 07:42:47.440  INFO 1 --- [nio-8080-exec-8] org.mongodb.driver.connection : Opened connection [connectionId{localValue:3, serverValue:3}] to fleetman-mongodb.default.svc.cluster.local:27017
error: unexpected EOF
Docker Swarm uses a concept called a routing mesh to find the node that your web application is running on. None of that is used here; instead, a standard AWS load balancer routes traffic to the correct node.
Setting up a real Domain Name
Add a CNAME record in your own domain to ELB address.
Surviving Node Failure
Requirement
Even in the event of a Node (or Availability Zone) failure, the web site must remain accessible
It doesn’t matter if reports from vehicles stop coming in, as long as service is restored within a few minutes
For our example, we could take the queue pod and give it two replicas, so that in the event of a node failure one replica would always survive. Unfortunately you can't do that, because this particular pod, the queue pod, is stateful: it contains data. If you replicate it, you'll end up with a kind of split-brain situation, with half the data in one replica and half in the other, and all kinds of chaos will follow from that. Really, what you're aiming for with any pod is to make it stateless, so it isn't holding data.
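For a stateless pod like the webapp, by contrast, adding resilience is just a matter of asking for more replicas in its Deployment. A sketch (the image name is assumed from the course's fleetman images, not quoted from these notes):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2          # two pods; the scheduler will normally spread them across nodes
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp    # matches the selector in the fleetman-webapp Service
    spec:
      containers:
        - name: webapp
          image: richardchesterwood/k8s-fleetman-webapp-angular:release2   # assumed image
```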
State Store: Required value: Please set the --state flag or export KOPS_STATE_STORE. For example, a valid value follows the format s3://<bucket>. You can find the supported stores in https://github.com/kubernetes/kops/blob/master/docs/state.md. [ec2-user@ip-172-31-4-9 ~]$ export KOPS_STATE_STORE=s3://stanzhou-state-storage [ec2-user@ip-172-31-4-9 ~]$ kops delete cluster --name ${NAME} --yes TYPE NAME ID autoscaling-config master-ap-southeast-2a.masters.fleetman.k8s.local-20190607062844 master-ap-southeast-2a.masters.fleetman.k8s.local-20190607062844 autoscaling-config nodes.fleetman.k8s.local-20190607062844 nodes.fleetman.k8s.local-20190607062844 autoscaling-group master-ap-southeast-2a.masters.fleetman.k8s.local master-ap-southeast-2a.masters.fleetman.k8s.local autoscaling-group nodes.fleetman.k8s.local nodes.fleetman.k8s.local dhcp-options fleetman.k8s.local dopt-0a0be88814a0c83a9 iam-instance-profile masters.fleetman.k8s.local masters.fleetman.k8s.local iam-instance-profile nodes.fleetman.k8s.local nodes.fleetman.k8s.local iam-role masters.fleetman.k8s.local masters.fleetman.k8s.local iam-role nodes.fleetman.k8s.local nodes.fleetman.k8s.local instance master-ap-southeast-2a.masters.fleetman.k8s.local i-012c7c446b65343d4 instance nodes.fleetman.k8s.local i-07583ee103342be9a instance nodes.fleetman.k8s.local i-079cea61e4a7736b9 instance nodes.fleetman.k8s.local i-0bf949dbd290d81c3 internet-gateway fleetman.k8s.local igw-0a939d66d6e93e0d5 keypair kubernetes.fleetman.k8s.local-fc:11:5b:a8:1d:16:4a:36:36:15:2d:9f:f3:69:d2:0a kubernetes.fleetman.k8s.local-fc:11:5b:a8:1d:16:4a:36:36:15:2d:9f:f3:69:d2:0a load-balancer a660e0c5e88f611e9b50902985f98381 load-balancer api.fleetman.k8s.local api-fleetman-k8s-local-tkmafs route-table fleetman.k8s.local rtb-06b591f24a01973f6 security-group sg-07b79756088cf753c security-group api-elb.fleetman.k8s.local sg-005c9b49b63793004 security-group masters.fleetman.k8s.local sg-07ef00367ce1a7b62 security-group nodes.fleetman.k8s.local 
sg-01f81918cdbaba212 subnet ap-southeast-2a.fleetman.k8s.local subnet-060ec2db19027cf6a subnet ap-southeast-2b.fleetman.k8s.local subnet-0def003bdbfd97915 subnet ap-southeast-2c.fleetman.k8s.local subnet-0016862a30fe5f443 volume a.etcd-events.fleetman.k8s.local vol-0d21c97044fcc06dd volume a.etcd-main.fleetman.k8s.local vol-0f1c9f3e983c5848a volume fleetman.k8s.local-dynamic-pvc-b2ed286e-88f3-11e9-b509-02985f983814 vol-074914e494e4b656d vpc fleetman.k8s.local vpc-0775d4b463932d2f7
load-balancer:api-fleetman-k8s-local-tkmafs ok load-balancer:a660e0c5e88f611e9b50902985f98381 ok autoscaling-group:nodes.fleetman.k8s.local ok keypair:kubernetes.fleetman.k8s.local-fc:11:5b:a8:1d:16:4a:36:36:15:2d:9f:f3:69:d2:0a ok internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry autoscaling-group:master-ap-southeast-2a.masters.fleetman.k8s.local ok instance:i-012c7c446b65343d4 ok instance:i-07583ee103342be9a ok instance:i-0bf949dbd290d81c3 ok instance:i-079cea61e4a7736b9 ok iam-instance-profile:nodes.fleetman.k8s.local ok iam-instance-profile:masters.fleetman.k8s.local ok iam-role:nodes.fleetman.k8s.local ok iam-role:masters.fleetman.k8s.local ok subnet:subnet-0def003bdbfd97915 still has dependencies, will retry subnet:subnet-0016862a30fe5f443 still has dependencies, will retry autoscaling-config:nodes.fleetman.k8s.local-20190607062844 ok volume:vol-0d21c97044fcc06dd still has dependencies, will retry autoscaling-config:master-ap-southeast-2a.masters.fleetman.k8s.local-20190607062844 ok volume:vol-0f1c9f3e983c5848a still has dependencies, will retry subnet:subnet-060ec2db19027cf6a still has dependencies, will retry volume:vol-074914e494e4b656d still has dependencies, will retry security-group:sg-005c9b49b63793004 still has dependencies, will retry security-group:sg-01f81918cdbaba212 still has dependencies, will retry security-group:sg-07b79756088cf753c still has dependencies, will retry security-group:sg-07ef00367ce1a7b62 still has dependencies, will retry Not all resources deleted; waiting before reattempting deletion route-table:rtb-06b591f24a01973f6 internet-gateway:igw-0a939d66d6e93e0d5 security-group:sg-005c9b49b63793004 subnet:subnet-060ec2db19027cf6a security-group:sg-07b79756088cf753c subnet:subnet-0016862a30fe5f443 subnet:subnet-0def003bdbfd97915 security-group:sg-07ef00367ce1a7b62 volume:vol-0d21c97044fcc06dd dhcp-options:dopt-0a0be88814a0c83a9 volume:vol-074914e494e4b656d security-group:sg-01f81918cdbaba212 
volume:vol-0f1c9f3e983c5848a vpc:vpc-0775d4b463932d2f7 subnet:subnet-060ec2db19027cf6a still has dependencies, will retry subnet:subnet-0def003bdbfd97915 still has dependencies, will retry subnet:subnet-0016862a30fe5f443 still has dependencies, will retry volume:vol-074914e494e4b656d still has dependencies, will retry volume:vol-0f1c9f3e983c5848a still has dependencies, will retry internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry volume:vol-0d21c97044fcc06dd still has dependencies, will retry security-group:sg-01f81918cdbaba212 still has dependencies, will retry security-group:sg-005c9b49b63793004 still has dependencies, will retry security-group:sg-07ef00367ce1a7b62 still has dependencies, will retry security-group:sg-07b79756088cf753c still has dependencies, will retry Not all resources deleted; waiting before reattempting deletion volume:vol-074914e494e4b656d security-group:sg-01f81918cdbaba212 volume:vol-0f1c9f3e983c5848a vpc:vpc-0775d4b463932d2f7 route-table:rtb-06b591f24a01973f6 internet-gateway:igw-0a939d66d6e93e0d5 security-group:sg-005c9b49b63793004 subnet:subnet-060ec2db19027cf6a subnet:subnet-0016862a30fe5f443 security-group:sg-07b79756088cf753c volume:vol-0d21c97044fcc06dd subnet:subnet-0def003bdbfd97915 security-group:sg-07ef00367ce1a7b62 dhcp-options:dopt-0a0be88814a0c83a9 subnet:subnet-060ec2db19027cf6a still has dependencies, will retry subnet:subnet-0def003bdbfd97915 still has dependencies, will retry volume:vol-0f1c9f3e983c5848a still has dependencies, will retry volume:vol-0d21c97044fcc06dd still has dependencies, will retry internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry volume:vol-074914e494e4b656d still has dependencies, will retry subnet:subnet-0016862a30fe5f443 still has dependencies, will retry security-group:sg-07b79756088cf753c still has dependencies, will retry security-group:sg-01f81918cdbaba212 still has dependencies, will retry security-group:sg-07ef00367ce1a7b62 still has 
dependencies, will retry security-group:sg-005c9b49b63793004 still has dependencies, will retry Not all resources deleted; waiting before reattempting deletion route-table:rtb-06b591f24a01973f6 internet-gateway:igw-0a939d66d6e93e0d5 security-group:sg-005c9b49b63793004 subnet:subnet-060ec2db19027cf6a subnet:subnet-0016862a30fe5f443 security-group:sg-07b79756088cf753c volume:vol-0d21c97044fcc06dd subnet:subnet-0def003bdbfd97915 security-group:sg-07ef00367ce1a7b62 dhcp-options:dopt-0a0be88814a0c83a9 volume:vol-074914e494e4b656d security-group:sg-01f81918cdbaba212 volume:vol-0f1c9f3e983c5848a vpc:vpc-0775d4b463932d2f7 subnet:subnet-0def003bdbfd97915 still has dependencies, will retry subnet:subnet-060ec2db19027cf6a still has dependencies, will retry internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry volume:vol-074914e494e4b656d still has dependencies, will retry volume:vol-0d21c97044fcc06dd ok volume:vol-0f1c9f3e983c5848a ok security-group:sg-01f81918cdbaba212 still has dependencies, will retry subnet:subnet-0016862a30fe5f443 ok security-group:sg-07b79756088cf753c still has dependencies, will retry security-group:sg-07ef00367ce1a7b62 still has dependencies, will retry security-group:sg-005c9b49b63793004 still has dependencies, will retry Not all resources deleted; waiting before reattempting deletion dhcp-options:dopt-0a0be88814a0c83a9 volume:vol-074914e494e4b656d security-group:sg-01f81918cdbaba212 vpc:vpc-0775d4b463932d2f7 route-table:rtb-06b591f24a01973f6 internet-gateway:igw-0a939d66d6e93e0d5 security-group:sg-005c9b49b63793004 subnet:subnet-060ec2db19027cf6a security-group:sg-07b79756088cf753c subnet:subnet-0def003bdbfd97915 security-group:sg-07ef00367ce1a7b62 internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry volume:vol-074914e494e4b656d ok subnet:subnet-060ec2db19027cf6a still has dependencies, will retry security-group:sg-01f81918cdbaba212 still has dependencies, will retry security-group:sg-005c9b49b63793004 still 
has dependencies, will retry security-group:sg-07b79756088cf753c still has dependencies, will retry subnet:subnet-0def003bdbfd97915 ok security-group:sg-07ef00367ce1a7b62 ok Not all resources deleted; waiting before reattempting deletion security-group:sg-01f81918cdbaba212 vpc:vpc-0775d4b463932d2f7 route-table:rtb-06b591f24a01973f6 internet-gateway:igw-0a939d66d6e93e0d5 security-group:sg-005c9b49b63793004 subnet:subnet-060ec2db19027cf6a security-group:sg-07b79756088cf753c dhcp-options:dopt-0a0be88814a0c83a9 subnet:subnet-060ec2db19027cf6a still has dependencies, will retry security-group:sg-005c9b49b63793004 still has dependencies, will retry security-group:sg-07b79756088cf753c still has dependencies, will retry internet-gateway:igw-0a939d66d6e93e0d5 ok security-group:sg-01f81918cdbaba212 ok Not all resources deleted; waiting before reattempting deletion dhcp-options:dopt-0a0be88814a0c83a9 vpc:vpc-0775d4b463932d2f7 route-table:rtb-06b591f24a01973f6 security-group:sg-005c9b49b63793004 subnet:subnet-060ec2db19027cf6a security-group:sg-07b79756088cf753c subnet:subnet-060ec2db19027cf6a ok security-group:sg-005c9b49b63793004 ok security-group:sg-07b79756088cf753c ok route-table:rtb-06b591f24a01973f6 ok vpc:vpc-0775d4b463932d2f7 ok dhcp-options:dopt-0a0be88814a0c83a9 ok Deleted kubectl config for fleetman.k8s.local
NAME                                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/monitoring-prometheus-node-exporter    4         4         4       4            4           <none>          4m27s

NAME                                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/monitoring-grafana                    1         1         1            1           4m27s
deployment.apps/monitoring-kube-state-metrics         1         1         1            1           4m27s
deployment.apps/monitoring-prometheus-oper-operator   1         1         1            1           4m27s

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/monitoring-grafana-c768bb86f                     1         1         1       4m27s
replicaset.apps/monitoring-kube-state-metrics-6488587c6          1         1         1       4m27s
replicaset.apps/monitoring-prometheus-oper-operator-7b54f56766   1         1         1       4m27s

NAME                                                                    DESIRED   CURRENT   AGE
statefulset.apps/alertmanager-monitoring-prometheus-oper-alertmanager   1         1         4m
statefulset.apps/prometheus-monitoring-prometheus-oper-prometheus       1         1         3m53s
NAME                                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/monitoring-prometheus-node-exporter    4         4         4       4            4           <none>          17m

NAME                                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/monitoring-grafana                    1         1         1            1           17m
deployment.apps/monitoring-kube-state-metrics         1         1         1            1           17m
deployment.apps/monitoring-prometheus-oper-operator   1         1         1            1           17m

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/monitoring-grafana-c768bb86f                     1         1         1       17m
replicaset.apps/monitoring-kube-state-metrics-6488587c6          1         1         1       17m
replicaset.apps/monitoring-prometheus-oper-operator-7b54f56766   1         1         1       17m

NAME                                                                    DESIRED   CURRENT   AGE
statefulset.apps/alertmanager-monitoring-prometheus-oper-alertmanager   1         1         16m
statefulset.apps/prometheus-monitoring-prometheus-oper-prometheus       1         1         16m
Working with Grafana
Change service/monitoring-prometheus-oper-prometheus back from LoadBalancer to ClusterIP, and change service/monitoring-grafana from ClusterIP to LoadBalancer.
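One way to make those two switches from the command line, rather than editing the service YAML, is kubectl patch; this is a sketch that assumes the services live in the default namespace of the current kubectl context:

```
kubectl patch service monitoring-prometheus-oper-prometheus \
  -p '{"spec":{"type":"ClusterIP"}}'
kubectl patch service monitoring-grafana \
  -p '{"spec":{"type":"LoadBalancer"}}'
```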
Open multiple files in separate tabs via vim -p file1.txt file2.txt. If vim is already open, use :tabe file2.txt. Use gt to cycle through the tabs, or 1gt, 2gt, etc. to jump to a specific tab. Close a tab with :tabc.
Replace string globally
:%s/release1/release2/g
evil-tutor
/ followed by a phrase searches FORWARD for the phrase ? followed by a phrase searches BACKWARD for the phrase After a search type n to find the next occurrence in the same direction or N to search in the opposite direction.
With the cursor on a (, [, or {, type % to jump to the matching ), ], or }.
A way to correct errors
:s/old/new substitutes 'new' for the first 'old' on a line
:s/old/new/g substitutes 'new' for all 'old's on a line
:#,#s/old/new/g substitutes phrases between two line #'s
:%s/old/new/g substitutes all occurrences in the file
:%s/old/new/gc adds 'c' to ask for confirmation each time
execute an external command
:! external command
writing files
:w TEST
remove the file
:!rm TEST
to save part of the file
:#,# w FILENAME
retrieves disk file FILENAME and inserts it into the current buffer following the cursor position
:r FILENAME
opens a line BELOW the cursor and places the cursor on the open line in insert state
o
opens a line ABOVE the cursor and places the cursor on the open line in insert state
O
Scheduling is one of the options of a Build Trigger
Use CRON expressions to schedule a job
Each line consists of 5 fields separated by TAB or whitespace:
MINUTE HOUR DOM MONTH DOW (in the DOW field, 0 refers to Sunday)
To specify multiple values, use the following:
* for all valid values
0-9 to specify a range of values
2,3,4,5 to enumerate multiple values
Use the hash system (H) for automatic load balancing of scheduled jobs
Use H H * * * instead of 0 0 * * *
Use aliases like @yearly, @monthly, @weekly
@hourly is the same as H * * * *
In the project configuration, under Build Triggers, tick 'Build periodically' and enter five stars (* * * * *).
Triggering builds remotely
Build Triggers -> tick 'Trigger builds remotely', and set an Authentication Token value.
Parameterizing build
Allows you to prompt users for one or more inputs
Each parameter has a name and a value
Can be accessed using $parameter (Unix) or %parameter% (Windows)
‘Build Now’ will be replaced by “Build with Parameters”
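As a hedged sketch, an "Execute shell" build step can read a string parameter the same way it reads any environment variable. Here VERSION is a made-up parameter name, assumed to be defined under "This project is parameterized"; Jenkins exports each build parameter into the step's environment:

```shell
# VERSION would be injected by Jenkins; the default lets the script run outside Jenkins too
VERSION="${VERSION:-1.0.0}"
RELEASE_MSG="Deploying release ${VERSION}"
echo "${RELEASE_MSG}"
```

The same variable is available to any later shell step in the build, so it can drive tagging, artifact naming, and deployment targets consistently.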
Types of Parameters
Boolean Parameter
Choice Parameter
Credentials Parameter
File Parameter
List Subversion tags
Password Parameter
Run Parameter
String Parameter
Creating a user
Installing Plugins
Plugin name: Role-based Authorization Strategy
Implementing Role based access
Enabling role-based access: Manage Jenkins -> Configure Global Security, tick 'Enable security', and under Authorization select 'Role-Based Strategy'.
Under Manage Jenkins you will now have a new 'Manage and Assign Roles' entry -> Manage Roles
In 'Global roles', add a role 'team' with Overall Read permission. In 'Project roles', add a role 'dev' with Pattern 'Dev.*' and tick all the boxes. Do the same for 'test' and 'ops'.
Assign Roles
Click "Assign Roles". In 'Global roles', add users 'dev', 'test' and 'ops' to the 'team' role. Under 'Item roles', add users 'dev', 'test' and 'ops' and assign each to its own role.
Jenkins in Docker
Running jenkins in docker
docker pull jenkins
docker run -p 8080:8080 -p 50000:50000 jenkins
Persisting Jenkins data in a Volume
docker volume create volume1
docker volume ls
docker run -p 8080:8080 -p 50000:50000 -v volume1:/var/jenkins_home jenkins   # -v mounts a volume
docker stop $(docker ps -aq)   # stop all the containers
docker rm $(docker ps -aq)     # remove the containers
Running multiple Jenkins instances in Docker
docker run -p 8080:8080 -p 50000:50000 -v volume1:/var/jenkins_home jenkins
docker run -p 8081:8080 -p 50001:50000 -v volume1:/var/jenkins_home jenkins
It essentially puts the new commits from the remote's master into your history, and then superimposes your commits on top of them.
Squash commits together
git rebase -i HEAD~2
Squash the last two commits. HEAD~2 refers to the last two commits on the current branch, and the -i option stands for interactive. In the editor, pick the older commit and squash the newer commit into it.
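As a sketch with hypothetical hashes and messages, the todo list that git rebase -i HEAD~2 opens looks like this; changing the second line's pick to squash melds that commit into the one above:

```
pick   a1b2c3d Add feature X
squash d4e5f6a Fix typo in feature X

# Rebase 9f8e7d6..d4e5f6a onto 9f8e7d6 (2 commands)
```

Saving and closing the editor then prompts for a combined commit message for the single resulting commit.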
Aborting a squash
git rebase --abort   # get back to the pre-squash state
git log
git log --oneline --decorate --graph --all
Complete Git and Github Masterclass
How Git works - 3 main states of artifact
Modified: here, the artifact has changes made by the user
Staged: here, the user/developer adds the artifact to the Git index, or staging area
Committed: here, the artifact gets safely stored in the Git database
git add: files moved from modified state to staged state
git commit: files moved from staged state to committed state
natural state progression is from modified-> staged-> committed
git reset --hard
resets the staging area and working directory to match the most recent commit.
git reset --hard hash_value
moves the current branch pointer back to the given commit id and resets both the staging area and the working directory to match it. This destroys not only the current changes but the later commits as well.
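With hypothetical commit ids, the effect of git reset --hard B can be sketched as:

```
Before:  A --- B --- C --- D   <- master, HEAD
After:   A --- B               <- master, HEAD   (C and D become unreachable)
```

The commits C and D are not deleted immediately, but with no branch pointing at them they are effectively lost from normal history.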
git init     # for creating a repository in a directory with existing files
             # creates the repository skeleton in the .git directory
git add      # tells git to start tracking files
             # patterns: git add *.c, or git add .   (. = files and dirs recursively)
git commit   # promotes files into the local repository
             # use -m to supply a comment
             # commits everything staged unless told otherwise
Undoing/updating things
git commit --amend
# allows you to "fix" the last commit
# updates the last commit using the staging area
# with nothing new in the staging area, just updates the comment (commit message)
# example:
#   git commit -m "my first update"
#   git add newfile.txt   (the add command stages it into the staging area)
#   git commit --amend
#   -> this updated commit takes the place of the first (updates instead of adding another to the chain)
Getting help
git <command> -h     # brings up an on-screen list of options
git help <command>   # brings up the HTML man page
Git config file
Scope
  Local (repo)                   >> .git/config
  Global (user)                  >> ~/.gitconfig
  System (all users on a system) >> <usr|usr/local>/etc/gitconfig
Divided into sections
Can have user settings, aliases, etc.
Text file that can be edited, but safer to use git config
Git Status
Command: git status
Primary tool for showing which files are in which state
-s is the short option - output: <1 or more characters><filename>
  ?? = untracked
  M  = modified
  A  = added
  D  = deleted
  R  = renamed
  C  = copied
  U  = updated but unmerged
-b option - always show branch and tracking info
Common usage: git status -sb
Git special references: Head and Index
HEAD
  snapshot of your last commit
  next parent (in chain of commits)
  pointer to current branch reference (reference = SHA-1)
  Think of HEAD as a pointer to the last commit on the current branch
Index (staging area)
  place where changes for the next commit get registered
  temporary staging area for what you're working on
  the proposed next commit
  Cache - old name for the index
  Think of cache, index, and staging area as all the same
Showing differences
Command: git diff
Default is to show changes in the working directory that are not yet staged
If something is staged, shows the diff between the working directory and the staging area
The --cached or --staged option shows the difference between the staging area and the last commit (HEAD)
git diff <reference> shows differences between the working directory and what <reference> points to - example: git diff HEAD
History
Command: git log
With no options, shows name, SHA-1, email, and commit message in reverse chronological order (newest first)
-p option shows the patch - the differences introduced in each commit
Shortcut: -# shows the last # commits (e.g. git log -2)
--stat shows statistics on the number of changes
--pretty=oneline|short|full|fuller
--format allows you to specify your own output format
--oneline and --format can be very useful for machine parsing
Time-limiting options: --since|until|after|before (e.g. --since=2.weeks or --before=3.19.2011)
The gitk tool also has a history visualizer
git log --oneline <since>..<until>   e.g. foobar..barfoo
git log --oneline <file-name>        e.g. teststatusfile
git log --oneline -n <limit>         e.g. 2