Business Requirements

  • Start with business requirements
  • DevOps is about doing the work right
  • What about doing the right work? See:
    • (Agile) Portfolio Management - making sure the work you are doing is funded and delivering business value, and
    • Lean Control - making sure you get early engagement with control functions in your organisation (Audit, Compliance, Security, Architecture,
      Accessibility, Marketing, etc.)
  • Work/Requirements consist of
    • features of business value (2-3 months), divided into …
    • sprints (2-3 weeks) and the sprints are made up of …
    • tasks (2-3 days of work)

Tasks

  • 2-3 days of work
  • Developers pull tasks off of a sprint queue
  • The sprint goal is to demo working software at the end of each sprint

Code

  • Code is integrated continuously and built continuously
  • All code is reviewed by another team member before committing
  • Feature branching or trunk-based development?
  • No long lived code branches

Continuous Integration

  • Code built continuously (multiple times per day)
  • Fast feedback - continuous builds are very fast (< 5 mins)
  • Best practice build patterns/chains
    • Compile, unit test, integration test, deploy artefacts (a sketch of such a chain follows below)
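
A minimal sketch of such a build chain as a shell script, assuming a Maven-based Java project (the mvn goals and the 'integration' profile are illustrative assumptions, not a prescribed setup):

#!/bin/bash
# Minimal CI build chain sketch (hypothetical Maven project)
set -e                      # fail on the first error so feedback stays fast

mvn compile                 # 1. compile
mvn test                    # 2. unit tests
mvn verify -Pintegration    # 3. integration tests (assumes an 'integration' profile)
mvn deploy                  # 4. publish artefacts to the artifact store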

Metrics

  • Code quality is vital
  • Code coverage measures test automation
  • Gold/silver/bronze accreditation

Artifacts

  • Green builds produce shippable artefacts (.jar, .dll, .exe, docker image)
  • Single store for all internal and external artefacts and libraries
  • Security policies around 3rd party libraries and external access

Infrastructure as Code

  • Operations roles change
  • Infrastructure provisioning and configuration is automated
  • Orchestration tools to provision infrastructure (Terraform, CloudFormation for AWS); a provision-then-configure flow is sketched after this list
  • Configuration management tools to install and manage software on provisioned infrastructure (Chef, Puppet, Ansible)
  • IaC is stored, tested and versioned in source code control
  • Organisational Change, move to Site Reliability Engineering (SRE)
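
A minimal sketch of the provision-then-configure flow described above (the directory layout, inventory, and playbook names are illustrative assumptions):

# Provision infrastructure from versioned IaC kept in source control
cd infra/terraform              # hypothetical IaC directory
terraform init
terraform plan -out=tfplan
terraform apply tfplan

# Configure software on the provisioned hosts with a configuration management tool
cd ../ansible
ansible-playbook -i inventory site.yml    # hypothetical inventory and playbook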

Service Management

  • Approvals and change are automated
  • Products with higher levels of accreditation have lower change management overheads (more automation)

Continuous Deployment

  • Infrastructure provisioned automatically
  • Configuration automated
  • Change approvals automated
  • Push button deployment to production

Monitoring

  • Observability driven design
  • Monitoring, logging, dashboarding early in the life-cycle
  • Issues and observations feed back to developers

Security

  • “Shift Left” security
  • “average total cost of a breach ranges from $2.2 million to $6.9 million”
  • Code vulnerability scanning in the build pipeline
  • Build fail if major/critical issues
  • Tools: Checkmarx, Fortify
  • Artefact scanning for security vulnerabilities
  • Firewalls to protect against 3rd party vulnerabilities
  • Tools: Nexus Lifecycle/Firewall, Black Duck
  • Image scanning of Docker images
  • Tools: AquaSec, Twistlock, Tenable, OpenSCAP

Evolving DevOps @ Scale

Shadow DevOps -> Enterprise DevOps -> DevOps as a Service

10 years ago -> 5 years ago -> the future

Enterprise DevOps
DevOps as a Service
GitOps
AWS DevOps
Azure DevOps

Common Terminologies

Repository: A unit of storage and change tracking that represents a directory whose contents are tracked by Git

Branch: A version of a repository that represents the current state of the set of files that constitute a repository

Master: The default or main branch; it is the version of the repository that is considered the single source of truth

Reference: A Git ref or reference is a name corresponding to a commit hash

HEAD: A reference to the most recent commit on a branch

Working Tree: This refers to the area in which we view and make changes to the files in a branch

Index: This is an area where Git holds files that have been changed, added, or removed in readiness for a commit

Commit: This is an entry into Git’s history that represents a change made to a set of files at a given point in time

Merge: A merge is the process of incorporating change from one branch to another

Workflows: Workflows refer to the approach a team takes to introduce changes to a code base.

Workflows

Gitflow workflow

  • This uses two branches: master and develop
  • The master branch is used to track release history, while the develop branch is used to track feature integration into the product (see the sketch below).
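
A minimal shell sketch of the two long-lived Gitflow branches (branch and tag names are the conventional defaults, assumed here for illustration):

git checkout -b develop master            # create the integration branch off master
git push -u origin develop

git checkout -b feature/login develop     # feature work is cut from develop
# ... commit work ...
git checkout develop && git merge --no-ff feature/login

git checkout master && git merge --no-ff develop   # release: develop into master
git tag -a v1.0 -m "Release 1.0"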

Centralized workflow

  • This approach uses the master branch as the default development branch.
  • The changes are committed to the master branch.
  • It’s a suitable workflow for small teams and teams transitioning from Apache Subversion.
  • In Apache Subversion, the trunk is the equivalent of the master branch.

Feature branch workflow

  • In this workflow, feature development is carried out in a dedicated branch.
  • The branch is then merged to the master once the intended changes are approved.

Forking workflow

  • The individual seeking to make a change to a repository makes a copy (fork) of the desired repository in their own GitHub account.
  • The changes are made in the copy of the source repository and then merged into the source repository through a pull request (see the sketch below).
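
A minimal sketch of the forking workflow from the contributor's side (user names, repository, and branch are hypothetical):

# clone your fork, not the source repository
git clone git@github.com:your-user/project.git
cd project
git remote add upstream git@github.com:original-owner/project.git

# work on a branch, push it to the fork, then open a pull request on GitHub
git checkout -b fix-typo
# ... commit work ...
git push origin fix-typo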

Navigating GitHub

Organizations

Role-based membership

  • Each personal account that is added to an organization belongs to one of the available roles (such as owner or member).

  • The owner role is the most superior and is used to conduct administrative procedures.

Repository level permissions

Permission levels escalate: Read → Write → Admin.
  • Teams or their respective members can be assigned read, write, or admin-level permissions to a repository.
  • Each level dictates the activities that assigned members can undertake, with a varying degree of limitations.

Teams

  • Members of an organization can be grouped into teams, with the option of nesting the teams to match an organization’s structure.

Multi-factor authentication

  • Organizations support the enforcement of two-factor authentication as well as business-specific single sign-on approaches such as Security Assertion Markup Language (SAML) and System for Cross-domain Identity Management (SCIM).

    From the GitHub Marketplace, apps such as Codacy can be installed.

Runtime config

Git configurations are set in three levels:

  • System-wide configuration
    • set in the /etc/gitconfig file
    • accessed using git config --system
  • User-specific configuration
    • ~/.gitconfig
    • git config --global
  • Repository-specific configuration
    • Repository-specific settings are set in path_to_repository/.git/config
    • An example of configuration is the GitHub URL of a repository, which is set at this level.
    • These settings are accessed via git config --local (examples below)
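
For example, setting one value at each level (the values shown are placeholders; the system-level command may require root privileges):

git config --system core.autocrlf input                              # all users on this machine
git config --global user.name "Your Name"                            # current user
git config --local remote.origin.url git@github.com:user/repo.git    # current repository only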

Removing configuration

git config --global --unset [section_name].[section_variable]

example:

git config --global --unset user.name

Create an SSH key:

ssh-keygen -t rsa -b 4096 -C foobar@cwzhou.win

Fundamentals of repositories

Tags
These are used for the purpose of identifying specific significant points in a repository’s history.

  • lightweight tags
    Lightweight tags act as pointers to a specific commit; only the reference to the commit is stored: git tag v2.5

  • annotated tags
    Annotated tags act as pointers to a specific commit and additionally store information about the creator of the tag, the email, and the date of creation (see examples below):
    git tag -a v2.6 -m "Support sdk version 3"
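
Once created, tags can be listed, inspected, and shared with the remote, for example:

git tag -l                # list tags
git show v2.6             # show the commit and tag message v2.6 points to
git push origin v2.6      # push a single tag
git push origin --tags    # push all tags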

Versioning Commits

In Git, files can have the following statuses:

  • Untracked: This is a file that exists in the working tree whose changes are not being monitored by Git and aren’t listed in the gitignore file.
  • Unstaged: This is a file whose changes are being tracked by Git; the file has been changed since the last commit and has yet to be moved to the index.
  • Staged: This is a file whose changes are being tracked by Git; the file has been changed since the last commit and has been moved to the index.
git status

  • It’s used to retrieve the details of files that are untracked, unstaged, or staged.
  • git status lists files in order of their statuses.
  • The git status output is lengthy in nature.
  • To view a brief list and status, use the -s or --short option with the git status command.

    szhou@armitage:~/DevOps/github/foobar|master
    ⇒ git branch ft-add-encapsulating-class
    szhou@armitage:~/DevOps/github/foobar|master
    ⇒ git checkout ft-add-encapsulating-class
    Switched to branch 'ft-add-encapsulating-class'

    Branch prefixes: ft is a feature branch, bg for a bug, fs for rolling out hot fixes, ch for

    git add . && git commit -m "Added a class for match functions" && git push origin ft-add-encapsulating-class

git diff

  • The git diff command is used to compare one snapshot of changes to another.

Zoom in on the change:

szhou@armitage:~/DevOps/github/foobar|ft-add-encapsulating-class⚡
⇒ git diff
szhou@armitage:~/DevOps/github/foobar|ft-add-encapsulating-class⚡
⇒ git diff src/lib
szhou@armitage:~/DevOps/github/foobar|ft-add-encapsulating-class⚡
⇒ git diff src/lib/compute.py

Undo the change

git reset --hard

git diff HEAD -- src/
git log
git diff master
git diff --cached
git diff ft-add-encapsulating-class..ft-support-multiplication-arithmetic

git add

  • is used to add files to the index from the working tree.
  • syntax: git add [options] [path_to_files]

The options used with git add include:

-n or --dry-run: simulates the behaviour of git add for the specified file.

-f or --force: adds ignored files to the index.

-i or --interactive: creates an interactive prompt that can be used for adding files from the working tree to the index.

-p or --patch: caters for adding portions (hunks) of a file to the index.

options:

  • ?: Print help
  • y: Stage this hunk
  • n: Do not stage this hunk
  • q: Exit or quit. Do not stage this hunk or any of the remaining hunks.
  • a: Stage this hunk and all later hunks in the specified files
  • d: Do not stage this hunk or any of the remaining hunks in the file
  • g: Select a hunk to go to
  • /: Search for the hunk that matches the specified regex pattern
  • j: Leave this hunk undecided; see the next undecided hunk
  • J: Leave this hunk undecided; see the next hunk
  • k: Leave this hunk undecided; see the previous undecided hunk
  • K: Leave this hunk undecided; see the previous hunk
  • s: Split the current hunk into more granular hunks
  • e: Manually edit the current hunk

git commit

  • The git commit command saves the files in the index.
  • The commit operation stores a message along with the commit.
  • This message describes the additions or alterations associated with the created snapshot.

Note: the git commit command requires that a message be provided for each commit operation.
options:

  • -m [text] or --message [text]: associates the given message with the commit
  • -a or --all: stages tracked files that are unstaged
  • -p or --patch: interactive patch tool
  • -C [commit hash] or --reuse-message=[commit hash]: reuses the commit message and author information of the specified commit hash
  • -F [file] or --file=[file]: specifies a file from which a commit message should be obtained
  • -t [file] or --template [file]: specifies the commit message template file
  • -e or --edit: edits the provided commit message
  • --no-edit: uses the specified message as is
  • --author=[author]: overrides the details of a commit author
  • --date=[date]: overrides the date details used in a commit
  • -q or --quiet: suppresses the summary message that’s returned after running the git commit command

git rm

The git rm command performs two roles.

  • Removes files from the working directory and the index
  • Removes files from the index only (using the --cached option)

    vim src/lib/scientific.py
    git add . && git commit -m "Added sci module"
    git rm src/lib/scientific.py
    git status
    git commit -m "Removed sci module"

git mv

  • used to rename or move a file or a directory
  • This command has two forms of implementation:
    • git mv [options] [source] [destination]: used to rename a file
    • git mv [options] [source] … [destination]: used to move files (examples below)
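
For instance (the file names are illustrative):

git mv compute.py calculator.py        # rename a file
git mv helper.py utils.py src/lib/     # move several files into a directory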

git log command

git log --follow src/lib/advanced/advanced_compute.py
git log --decorate=full
git log --decorate=short
git log --decorate=no
git log -L 6,12:src/lib/compute.py
git log -n 3
git log -3
git log --skip=4
git log --since=01/01/2018
git log --pretty=oneline
git log --pretty=short
git log --pretty=medium
git log --pretty=format:"%H %an"

Amending commits

The most recent commit can be edited using the --amend option of the git commit command.

git commit --amend
git rebase -i HEAD~4 # retrieve the last 4 commits
pick 721d2ec Removed sci module
pick fc150d2 Added sci module
pick 4be4d85 Rename scientific module
pick 9a8a4ea Moved scientific module

Change pick to reword, then save and quit the file.

## change the file
git rebase -i HEAD~3
Change pick to edit, then save and quit the file.
vim src/lib/advanced/advanced_compute.py
git status
git add .
git commit --amend
git rebase --continue
git log -4

CS Visualized: Useful Git Commands

Git merge with No-Fast-Forward technique

git-demo-project git:(demo-branch) ✗ git commit -am "commit 1 from demo-branch"
[demo-branch 0280912] commit 1 from demo-branch
1 file changed, 1 insertion(+)
create mode 100644 test-no-ff-merge-file
➜ git-demo-project git:(demo-branch) echo 'line 2' >> test-no-ff-merge-file
➜ git-demo-project git:(demo-branch) ✗ git commit -am "commit 2 from demo-branch"
[demo-branch ca0b8d0] commit 2 from demo-branch
1 file changed, 1 insertion(+)
➜ git-demo-project git:(demo-branch) git log --oneline
➜ git-demo-project git:(demo-branch) cat test-no-ff-merge-file
line 1
line 2
➜ git-demo-project git:(demo-branch) git checkout master
Switched to branch 'master'
➜ git-demo-project git:(master) git log --oneline
➜ git-demo-project git:(master) ls
readme.md testffmergefile test.html
➜ git-demo-project git:(master) git diff master demo-branch
➜ git-demo-project git:(master) git merge demo-branch --no-ff
Merge made by the 'recursive' strategy.
test-no-ff-merge-file | 2 ++
1 file changed, 2 insertions(+)
create mode 100644 test-no-ff-merge-file
➜ git-demo-project git:(master) cat test-no-ff-merge-file
line 1
line 2
➜ git-demo-project git:(master) git log --oneline --decorate --graph
➜ git-demo-project git:(master) git branch -d demo-branch
Deleted branch demo-branch (was ca0b8d0).

Ansible configuration

Installation

for ubuntu:

apt-get install software-properties-common
apt-add-repository ppa:ansible/ansible
apt-get update
apt-get install ansible
root@ansible-host:~# ansible --version
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]

Grant access to nodes/machines:

  • ssh-keygen
  • ssh-copy-id

Setup:

cp -R /etc/ansible local
cd local
# edit ansible.cfg and hosts file

configuration files

ANSIBLE_CONFIG             # an environment variable
ansible.cfg                # in the current directory
.ansible.cfg               # in the home directory
/etc/ansible/ansible.cfg

Ansible inventory

/etc/ansible/hosts
INI format; supports host variables, group variables, and groups of groups

# add to /etc/ansible/hosts
[demo_hosts]
node01 ansible_user=ubuntu
node02 ansible_user=ubuntu
# add to /etc/hosts
172.31.5.185 node01
172.31.8.115 node02

Verify Ansible Inventory

eval `ssh-agent -s`
ssh-add ansible-key.pem
ansible -m ping all
# add to /etc/ansible/hosts
[web_server]
node01 ansible_user=ubuntu

[db_server]
node02 ansible_user=ubuntu
ansible web_server -m ping
ansible db_server -m ping

Local automation execution using Ansible

architecture type

Example: Ad hoc Linux echo command against a local system

#> ansible all -i "localhost," -c local -m shell -a 'echo hello DevOps World'
cat hellodevopsworld.yml
# File name: hellodevopsworld.yml
---
- hosts: all
  tasks:
    - shell: echo "hello DevOps world"

# Running a Playbook from the command line:
#> ansible-playbook -i 'localhost,' -c local hellodevopsworld.yml

Remote automation execution using Ansible

# Example command: execute the playbook hellodevopsworld.yml against rpi-01
ansible-playbook -i 'rpi-01,' -c local ~/Learning/ansible/hellodevopsworld.yml

Run and execute ansible tasks

Ansible Command Line

Two ways: ad hoc commands and playbooks

Ansible Ad-hoc Commands

ansible <target> -m <module name> -a arguments

Parallelism is supported via the -f (forks) option: <command> -f <number of parallel forks>

ansible demo_hosts -m copy -a "src=/tmp/test1 dest=/tmp/test1" # -m specifies the module, -a optionally provides a list of arguments
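
For example, an ad hoc command run with ten parallel forks:

ansible demo_hosts -m shell -a 'uptime' -f 10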

ansible-doc -l
ansible-doc copy
ansible-doc -l|grep shell
touch /tmp/test1
ansible demo_hosts -m copy -a "src=/tmp/test1 dest=/tmp/test1"

Ansible Facts

A fact is a detail or piece of information collected from a remote host. Facts can be used for grouping or filtering nodes.

ansible all -m setup
ansible demo_hosts -m setup
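
The setup module also accepts a filter argument to narrow its (very large) output, for example:

ansible demo_hosts -m setup -a 'filter=ansible_distribution*'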

Ansible Variables

Valid Variable names

  • Cannot contain "-" (hyphen)
  • Can be alphanumeric
  • Should start with a letter

Ansible Variables Naming Conventions

Variables can be defined in the inventory, in playbook files, and in roles.
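
A minimal sketch (host, group, and variable names are illustrative):

# inventory: a host variable and a group variable
[demo_hosts]
node01 ansible_user=ubuntu http_port=8080

[demo_hosts:vars]
app_env=staging

# referencing the variables in a playbook task (Jinja2 syntax):
# - debug: msg="{{ app_env }} listens on port {{ http_port }}"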

Ansible Sections

  • Target Section: specifies the host group
  • Variable Section
  • Task Section: lists all the tasks
  • Handler Section
  • Loops
  • Conditionals
  • Until
  • Notify

    Ansible playbooks

    root@ansible-host:/home/ubuntu# cat sample.yaml
    ---
    - hosts: demo_hosts
      vars:
        package1: "nginx"
        package2: "wget"
      tasks:
        - name: Installing package nginx
          apt: pkg=nginx state=installed update_cache=true
          become: true
        - name: Installing package wget
          apt: name={{ package2 }} state=installed update_cache=true
          become: true
        - name: Copying test1 file
          copy: src=/tmp/test11 dest=/tmp/test11

    ansible-doc apt ## check the manual for the ansible apt module
    ansible-playbook sample.yaml ## run the ansible playbook

    Deep Dive Into Ansible Playbooks

Install Apache with Ansible Playbook

root@ansible-host:~# cat apache_install.yaml
---
- hosts: web_portal
  tasks:
    - name: Apt get update
      apt: update_cache=yes

    - name: Install Apache2
      apt: name=apache2 update_cache=no

    - name: Copy data files
      copy: src=index.html dest=/var/www/html/

    - name: Stop the service
      service: name=apache2 state=stopped
root@ansible-host:~# ansible-playbook apache_install.yaml -b

Run and Stop Service

Template

root@ansible-host:~# cat current.html.j2
this is my current file -
my hostname is - {{ ansible_hostname }}
root@ansible-host:~# cat apache_install.yaml
---
- hosts: web_portal
  tasks:
    - name: Apt get update
      apt: update_cache=yes

    - name: Install Apache2
      apt: name=apache2 update_cache=no

    - name: Copy data files
      copy: src=index.html dest=/var/www/html/

    - name: Stop the service
      service: name=apache2 state=stopped

    - name: Copy template file
      template: src=current.html.j2 dest=/var/www/html/current.html
      notify:
        - Start apache
  handlers:
    - name: Start apache
      service: name=apache2 state=restarted
### dry run ansible
ansible-playbook apache_install.yaml -b --check

Debug Statements In playbook

root@ansible-host:~# cat apache_install.yaml
---
- hosts: web_portal
  tasks:
    - name: Apt get update
      apt: update_cache=yes

    - name: Install Apache2
      apt: name=apache2 update_cache=no

    - name: Copy data files
      copy: src=index.html dest=/var/www/html/
      register: copy_status

    - name: Stop the service
      service: name=apache2 state=stopped

    - name: Copy template file
      template: src=current.html.j2 dest=/var/www/html/current.html
      notify:
        - Start apache

    - name: Copy status
      debug: var=copy_status

  handlers:
    - name: Start apache
      service: name=apache2 state=restarted
root@ansible-host:~# ansible-playbook apache_install.yaml -b

PLAY [web_portal] *************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************
ok: [node01]

TASK [Apt get update] *********************************************************************************************************************************
[WARNING]: Could not find aptitude. Using apt-get instead

changed: [node01]

TASK [Install Apache2] ********************************************************************************************************************************
ok: [node01]

TASK [Copy data files] ********************************************************************************************************************************
ok: [node01]

TASK [Stop the service] *******************************************************************************************************************************
ok: [node01]

TASK [Copy template file] *****************************************************************************************************************************
ok: [node01]

TASK [Copy status] ************************************************************************************************************************************
ok: [node01] => {
"copy_status": {
"changed": false,
"checksum": "9be1fe2ae3529c22cf8a1b0fd1edc53048dcf277",
"dest": "/var/www/html/index.html",
"diff": {
"after": {
"path": "/var/www/html/index.html"
},
"before": {
"path": "/var/www/html/index.html"
}
},
"failed": false,
"gid": 0,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/var/www/html/index.html",
"size": 24,
"state": "file",
"uid": 0
}
}

PLAY RECAP ********************************************************************************************************************************************
node01 : ok=7 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Loops, Conditionals and until section

root@ansible-host:~# cat apache_install.yaml
---
- hosts: web_portal
  tasks:
    - name: Apt get update
      apt: update_cache=yes

    - name: Install Apache2, nginx, nmap
      apt: name={{ item }} update_cache=no
      with_items:
        - apache2
        - nginx
        - nmap

    - name: Copy data files
      copy: src=index.html dest=/var/www/html/
      register: copy_status

    - name: Stop the service
      service: name=apache2 state=stopped

    - name: Copy template file
      template: src=current.html.j2 dest=/var/www/html/current.html
      notify:
        - Start apache

    - name: Copy status
      debug: var=copy_status

  handlers:
    - name: Start apache
      service: name=apache2 state=restarted

Putting it all together with ansible

Ansible vault

ansible-vault create foo.yml
ansible-vault view foo.yml
ansible-playbook site.yml --ask-vault-pass
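
An existing plaintext file can also be encrypted or edited in place, for example:

ansible-vault encrypt secrets.yml    # encrypt an existing file
ansible-vault edit secrets.yml       # decrypt, edit, and re-encrypt on save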

Common Ansible Modules

  • Setup module
    ansible node01 -m setup
  • File module
    ansible-doc file
  • Yum module
    for redhat, centos
  • Apt module
    for debian, ubuntu
  • Service module
    for start and stop service
  • Copy module
    used to copy files
  • User module
    ansible-doc user
    ansible node01 -m user -a "user=stancloud" -b
  • Command module
    ansible-doc command
  • Shell module
    ansible node01 -m shell -a "tail /var/log/syslog"

    Ansible Roles

    Roles are similar to projects; they provide a reusable structure for tasks, handlers, variables, and files.

    roles directory structure

    ansible-galaxy init stan
    - stan was created successfully
    root@ansible-host:/tmp# tree stan
    stan
    ├── defaults
    │   └── main.yml
    ├── files
    ├── handlers
    │   └── main.yml
    ├── meta
    │   └── main.yml
    ├── README.md
    ├── tasks
    │   └── main.yml
    ├── templates
    ├── tests
    │   ├── inventory
    │   └── test.yml
    └── vars
        └── main.yml

    roles/
      role-name/
        files/
        templates/    # j2 files
        tasks/
          main.yml
        handlers/
          main.yml
        vars/
          main.yml
        defaults/
          main.yml
        meta/
  • Common role
  • mySQL role
  • Apache role
  • WordPress role

    Deploy WordPress

Ansible Galaxy

root@ansible-host:/tmp# ansible-galaxy install bennojoy.nginx

  • downloading role 'nginx', owned by bennojoy
  • downloading role from https://github.com/bennojoy/nginx/archive/master.tar.gz
  • extracting bennojoy.nginx to /root/.ansible/roles/bennojoy.nginx
  • bennojoy.nginx (master) was installed successfully

    Manage AWS cloud resources with Ansible

    Manage AWS EC2 instances with Ansible

    ansible-doc -l|grep aws
    ansible-doc -l|grep ec2

    Deploy new AWS EC2 instances using an Ansible playbook

    cat ec2_create.yaml
    ---
    - hosts: localhost
      tasks:
        - name: Create AWS instances
          ec2:
            key_name: ansible-key
            instance_type: t2.micro
            image: ami-001dae151248753a2
            count: 3
            vpc_subnet_id: subnet-20615347
            assign_public_ip: no
            region: ap-southeast-2

    ansible-playbook -i "localhost," -c local ec2_create.yaml

Practical Jenkins for Production Deployments

  • Installation, configuration, and automation of Jenkins and its dependencies
  • Attention to high-availability, monitoring, management, and security
  • Utilizing distributed architectures and ability to work with diverse infrastructure platforms
  • Understanding and implementing pipeline as code with special attention to multi-branch pipeline
  • Being able to deploy code continuously
  • Integration with external services for optimal workflow design

The Development Stages

  • Feature branch forked from the develop branch
  • Code added/modified and then build/test is executed against it
  • On success, the code is merged to the develop branch
  • More builds/tests are executed against the develop branch, including integration tests
  • On success, the code is merged to the master branch
  • A tag is created and (optionally) a package is prepared for deployment
  • Next steps:
    • The package is deployed to target systems by Jenkins
    • Jenkins triggers an event in another deployment tool
    • Acceptance testing is performed on the target systems (optional)

What is Continuous Integration?

  • It is a development practice
  • A new feature is added by forking an integration branch
  • The new feature branch is tested before merging the code to the integration branch
  • Errors and problems are detected early
  • Testing and merging happens automatically without manual intervention
  • New code is added, modified, and merged regularly to the integration branch

What is Continuous Delivery?

  • It is the practice of getting code into a deployable state
  • This is achieved irrespective of the size of the application or number of developers making commits
  • Includes new features, changes, bug fixes, etc
  • Makes sure that all changes are ready to be deployed at any time
  • Makes releases low risk and of higher quality
  • Depends on Continuous Integration

What is Continuous Deployment?

  • It is the process of automatic releases to production
  • It is the next step after Continuous Delivery
  • The level of testing decides how good the release will be
  • Releases happen in small batches and continuously
  • Gradual product improvement and increase in quality
  • The processes of Continuous Integration and Continuous Delivery need to be perfect to ensure that releases are without issues
  • Relieves developers and administrators from the periodic task of releasing

Setting up git, code repositories, and dependencies for Jenkins

Storage for Jenkins Data

  • Setting up dedicated storage
  • Run on physical hardware - RAID configuration with multiple disks
  • Run on a virtualized cloud platform - dedicated elastic volumes
  • Home directory - /var/lib/jenkins

Create a new volume in AWS EC2 Elastic Block Store, attach the volume to the EC2 Jenkins master instance, and run the following commands:

fdisk -l
mkfs.ext4 /dev/xvdf
mkdir -p /var/lib/jenkins
vim /etc/fstab
/dev/xvdf /var/lib/jenkins ext4 defaults 0 0
mount /var/lib/jenkins
df -h
yum -y install java-1.8.0-openjdk

Installation of Jenkins from Packages and WAR Files

  1. from Packages

    yum -y install java-1.8.0-openjdk
    wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
    rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
    yum install -y jenkins
    systemctl start jenkins

    If you want to change Jenkins’ home directory, edit /etc/sysconfig/jenkins and set
    JENKINS_HOME=

  2. from WAR files

    yum install -y tomcat
    cd /usr/share/tomcat/webapps/
    wget http://mirrors.jenkins.io/war-stable/latest/jenkins.war
    systemctl start tomcat
    # access via ipaddress:8080/jenkins
    systemctl stop tomcat
    mkdir -p /opt/jenkins_home
    chown -R tomcat:tomcat /opt/jenkins_home
    vim /etc/tomcat/context.xml
    # At the end of the file add a new line
    <Environment name="JENKINS_HOME" value="/opt/jenkins_home" type="java.lang.String" />
    systemctl start tomcat
  3. use docker containers

    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce
    systemctl start docker
    docker pull jenkins/jenkins:lts
    docker run --name jenkins_master -d -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts

    Configuring reverse proxy and setting up user interface for jenkins

    Install nginx:

    rpm -ihv https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
    yum install -y nginx
    vim /etc/nginx/nginx.conf   # delete the server { } block from it
    vim /etc/nginx/conf.d/jenkins.conf
    upstream jenkins {
        server 127.0.0.1:8080;
    }

    server {
        listen 80 default;
        server_name jenkins.course;

        access_log /var/log/nginx/jenkins.access.log;
        error_log /var/log/nginx/jenkins.error.log;

        proxy_buffers 16 64k;
        proxy_buffer_size 128k;

        location / {
            proxy_pass http://jenkins;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

    In the Jenkins configuration for the Git plugin, set the git username and email address, and set up credentials.

Automating the Jenkins Installation and Configuration Process

  • Disabling the manual setup wizard
  • Automating the setup wizard steps
  • Creating puppet configuration to automate the installation and configuration process
  • Automating the process by applying the puppet module
    systemctl stop jenkins
    vim /etc/sysconfig/jenkins
    modify JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"
    mkdir -p /var/lib/jenkins/init.groovy.d
    vim /var/lib/jenkins/init.groovy.d/installPlugins.groovy
    #!groovy

    import jenkins.model.*
    import java.util.logging.Logger

    def logger = Logger.getLogger("")
    def installed = false
    def initialized = false
    def pluginParameter = "ws-cleanup timestamper credentials-binding build-timeout antisamy-markup-formatter cloudbees-folder pipeline-stage-view pipeline-github-lib github-branch-source workflow-aggregator gradle ant mailer email-ext ldap pam-auth matrix-auth ssh-slaves github git"

    def plugins = pluginParameter.split()
    logger.info("" + plugins)
    def instance = Jenkins.getInstance()
    def pm = instance.getPluginManager()
    def uc = instance.getUpdateCenter()
    plugins.each {
    logger.info("Checking " + it)
    if (!pm.getPlugin(it)) {
    logger.info("Looking UpdateCenter for " + it)
    if (!initialized) {
    uc.updateAllSites()
    initialized = true
    }
    def plugin = uc.getPlugin(it)
    if (plugin) {
    logger.info("Installing " + it)
    def installFuture = plugin.deploy()
    while(!installFuture.isDone()) {
    logger.info("Waiting for plugin install: " + it)
    sleep(3000)
    }
    installed = true
    }
    }
    }
    if (installed) {
    logger.info("Plugins installed, initializing a restart!")
    instance.save()
    instance.restart()
    }
    vim /var/lib/jenkins/init.groovy.d/security.groovy
    #!groovy

    import jenkins.model.*
    import hudson.security.*
    import jenkins.security.s2m.AdminWhitelistRule
    import hudson.security.csrf.DefaultCrumbIssuer
    import jenkins.model.Jenkins

    def instance = Jenkins.getInstance()

    def hudsonRealm = new HudsonPrivateSecurityRealm(false)
    hudsonRealm.createAccount("admin", "admin")
    instance.setSecurityRealm(hudsonRealm)

    def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
    strategy.setAllowAnonymousRead(false)
    instance.setAuthorizationStrategy(strategy)
    instance.save()

    Jenkins.instance.getInjector().getInstance(AdminWhitelistRule.class)

    def j = Jenkins.instance
    if(j.getCrumbIssuer() == null) {
    j.setCrumbIssuer(new DefaultCrumbIssuer(true))
    j.save()
    println 'CSRF Protection configuration has changed. Enabled CSRF Protection.'
    }
    else {
    println 'Nothing changed. CSRF Protection already configured.'
    }
    chown -R jenkins:jenkins /var/lib/jenkins/init.groovy.d/*.groovy
    systemctl start jenkins
    tail -f /var/log/jenkins/jenkins.log

    install puppet

    Go to the Puppet site and copy the URL for puppetlabs-release-pc1-el-7.noarch.rpm.
    On the server, install the package with:

    rpm -ihv http://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm

    yum -y install puppet-agent
mkdir puppet
cd puppet
mkdir {modules,manifests}
mkdir modules/jenkins
mkdir modules/jenkins/{manifests,files}
vim modules/jenkins/manifests/install.pp
class jenkins::install {
  file { 'jenkins_repo':
    path   => '/etc/yum.repos.d/jenkins.repo',
    source => 'https://pkg.jenkins.io/redhat-stable/jenkins.repo',
    ensure => present,
    mode   => '0644'
  }

  exec { 'jenkins_repo_key':
    command   => '/bin/rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key',
    unless    => '/bin/rpm -q jenkins',
    subscribe => File['jenkins_repo'],
    require   => File['jenkins_repo']
  }

  package { 'epel-repo':
    name   => 'epel-release',
    ensure => present,
    source => 'https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm'
  }

  $packages = [ 'jenkins', 'java-1.8.0-openjdk', 'nginx', 'git' ]

  package { $packages:
    ensure  => present,
    require => [ File['jenkins_repo'], Package['epel-repo'] ]
  }
}

vim modules/jenkins/manifests/config.pp
class jenkins::config {
  file { 'groovy_script_directory':
    path   => '/var/lib/jenkins/init.groovy.d',
    owner  => 'jenkins',
    group  => 'jenkins',
    mode   => '0755',
    ensure => directory
  }

  file { 'security_groovy_script':
    path    => '/var/lib/jenkins/init.groovy.d/security.groovy',
    owner   => 'jenkins',
    group   => 'jenkins',
    source  => 'puppet:///modules/jenkins/security.groovy',
    mode    => '0644',
    require => File['groovy_script_directory']
  }

  file { 'plugins_groovy_script':
    path    => '/var/lib/jenkins/init.groovy.d/installPlugins.groovy',
    owner   => 'jenkins',
    group   => 'jenkins',
    source  => 'puppet:///modules/jenkins/installPlugins.groovy',
    mode    => '0644',
    require => File['groovy_script_directory']
  }

  file { 'nginx_config_jenkins':
    path   => '/etc/nginx/conf.d/jenkins.conf',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/jenkins/jenkins.conf',
    mode   => '0644'
  }

  file { 'nginx_config':
    path   => '/etc/nginx/nginx.conf',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/jenkins/nginx.conf',
    mode   => '0644'
  }

  file { 'jenkins_sysconfig':
    path   => '/etc/sysconfig/jenkins',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/jenkins/jenkins',
    mode   => '0644'
  }
}

vim modules/jenkins/manifests/service.pp
class jenkins::service {
  service { 'jenkins':
    ensure => running
  }

  service { 'nginx':
    ensure => running
  }
}

vim modules/jenkins/manifests/init.pp
class jenkins {
  include jenkins::install
  include jenkins::config
  include jenkins::service

  Class['jenkins::install'] -> Class['jenkins::config'] ~> Class['jenkins::service']
}

vim modules/jenkins/files/installPlugins.groovy
#!groovy

import jenkins.model.*
import java.util.logging.Logger

def logger = Logger.getLogger("")
def installed = false
def initialized = false
def pluginParameter = "ws-cleanup timestamper credentials-binding build-timeout antisamy-markup-formatter cloudbees-folder pipeline-stage-view pipeline-github-lib github-branch-source workflow-aggregator gradle ant mailer email-ext ldap pam-auth matrix-auth ssh-slaves github git"

def plugins = pluginParameter.split()
logger.info("" + plugins)
def instance = Jenkins.getInstance()
def pm = instance.getPluginManager()
def uc = instance.getUpdateCenter()
plugins.each {
logger.info("Checking " + it)
if (!pm.getPlugin(it)) {
logger.info("Looking UpdateCenter for " + it)
if (!initialized) {
uc.updateAllSites()
initialized = true
}
def plugin = uc.getPlugin(it)
if (plugin) {
logger.info("Installing " + it)
def installFuture = plugin.deploy()
while(!installFuture.isDone()) {
logger.info("Waiting for plugin install: " + it)
sleep(3000)
}
installed = true
}
}
}
if (installed) {
logger.info("Plugins installed, initializing a restart!")
instance.save()
instance.restart()
}

vim modules/jenkins/files/jenkins.conf
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen 80 default;
    server_name jenkins.course;

    access_log /var/log/nginx/jenkins.access.log;
    error_log /var/log/nginx/jenkins.error.log;

    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    location / {
        proxy_pass http://jenkins;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

vim modules/jenkins/files/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}

vim modules/jenkins/files/security.groovy
#!groovy

import jenkins.model.*
import hudson.security.*
import jenkins.security.s2m.AdminWhitelistRule
import hudson.security.csrf.DefaultCrumbIssuer
import jenkins.model.Jenkins

def instance = Jenkins.getInstance()

def hudsonRealm = new HudsonPrivateSecurityRealm(false)
hudsonRealm.createAccount("admin", "admin")
instance.setSecurityRealm(hudsonRealm)

def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
strategy.setAllowAnonymousRead(false)
instance.setAuthorizationStrategy(strategy)
instance.save()

Jenkins.instance.getInjector().getInstance(AdminWhitelistRule.class)

def j = Jenkins.instance
if(j.getCrumbIssuer() == null) {
j.setCrumbIssuer(new DefaultCrumbIssuer(true))
j.save()
println 'CSRF Protection configuration has changed. Enabled CSRF Protection.'
}
else {
println 'Nothing changed. CSRF Protection already configured.'
}
[root@ip-172-31-2-108 ~]# tree puppet/
puppet/
├── manifests
│   └── site.pp
└── modules
    └── jenkins
        ├── files
        │   ├── installPlugins.groovy
        │   ├── jenkins
        │   ├── jenkins.conf
        │   ├── nginx.conf
        │   └── security.groovy
        └── manifests
            ├── config.pp
            ├── init.pp
            ├── install.pp
            └── service.pp
cd puppet
puppet apply --modulepath ./modules manifests/site.pp

Creating build jobs from user interface and scripts

1. Create a new freestyle Jenkins job called python-job; configure 'Source Code Management' -> 'Git' -> Repo URL git@github.com:szhouchoice/python-project.git.
   Under 'Build' -> 'Execute shell', set the command to python *test.py

2. mkdir jenkins_cmd; cd jenkins_cmd

cp /var/lib/jenkins/jobs/python-job/config.xml .
vim config.xml
<publishers>
<hudson.plugins.ws__cleanup.WsCleanup plugin="ws-cleanup@0.34">
<patterns class="empty-list"/>
<deleteDirs>true</deleteDirs>
<skipWhenFailed>false</skipWhenFailed>
<cleanWhenSuccess>true</cleanWhenSuccess>
<cleanWhenUnstable>true</cleanWhenUnstable>
<cleanWhenFailure>true</cleanWhenFailure>
<cleanWhenNotBuilt>true</cleanWhenNotBuilt>
<cleanWhenAborted>true</cleanWhenAborted>
<notFailBuild>false</notFailBuild>
<cleanupMatrixParent>false</cleanupMatrixParent>
<externalDelete></externalDelete>
</hudson.plugins.ws__cleanup.WsCleanup>
curl -s 'http://localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)' -u admin:admin

curl -s -XPOST 'http://localhost:8080/createItem?name=python-project-new' -u admin:admin --data-binary @config.xml -H "Jenkins-Crumb:384576fea44a8984fd1afe2474c8e951" -H "Content-Type:text/xml"

High-availability, monitoring, and management of jenkins deployments

  • High-availability scenario of Jenkins ecosystem and support
  • Creating multiple Jenkins master nodes with shared data directory
  • Configuring HAProxy as a load balancer for the Jenkins masters
  • Testing the setup by failing nodes

Setting up multiple Jenkins Master with Load Balancer for high-availability

High-Availability Setup

1. On AWS EC2, create 3 instances with CentOS 7: one for HAProxy and two for Jenkins.

2. Create an EFS on AWS and set it up on the two Jenkins EC2 instances. For the EFS, create a new security group that opens the NFS port to all sources.

yum -y install nfs-utils
vim /etc/fstab
ap-southeast-2a.fs-foobar.efs.ap-southeast-2.amazonaws.com:/ /var/lib/jenkins/jobs nfs defaults 0 0
systemctl stop jenkins
mount /var/lib/jenkins/jobs
df -h
chown -R jenkins:jenkins /var/lib/jenkins/jobs
ls -l /var/lib/jenkins/
systemctl start jenkins

3. On Jenkins master passive node

vim /opt/jenkins_reload.sh
#!/bin/bash
crumb_id=$(curl -s 'http://localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)' -u admin:admin)
curl -s -XPOST 'http://localhost:8080/reload' -u admin:admin -H "$crumb_id"
chmod +x /opt/jenkins_reload.sh
vim /etc/cron.d/jenkins_reload.sh
*/1 * * * * root /bin/bash /opt/jenkins_reload.sh #run every min

4. On the HAProxy server

yum -y install haproxy
cat /dev/null > /etc/haproxy/haproxy.cfg
vim /etc/haproxy/haproxy.cfg
defaults
    log global
    maxconn 2000
    mode http
    option redispatch
    option forwardfor
    option http-server-close
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s

frontend ft_jenkins
    bind *:80
    default_backend bk_jenkins
    reqadd X-Forwarded-Proto:\ http

backend bk_jenkins
    server jenkins1 172.31.2.108:8080 check
    server jenkins2 172.31.2.132:8080 check backup

systemctl start haproxy
ps -ef | grep haproxy

Backing up and restoring Jenkins data

Steps of backup and restore

  • Decide on what to back up (specific directories or everything)
  • Set up a dedicated local directory where backups would be stored
  • Decide the archiving method to use for the backup (.zip, tar.gz, and so on)
  • Set up a schedule for backups to take place
  • Configure a highly-available remote location where old backups can be transferred for archiving
  • Keep a certain number of the most recent backups to be readily available for restores
  • When restoring, unarchive the most recent backup and copy over files

    Backup and Restore Methods

  • Use periodic Backup plugin available in Jenkins
  • Manually copy and archive files using scripts
  • Select a remote location for storing data such as S3, EFS, and so on
  • Use local tools such as scp, rsync, s3cmd, and so on to transfer data
  • Use specialized open source tools such as Amanda or Bacula to perform backups or enterprise tools such as Netbackup
mkdir -p /opt/jenkins_backups
chown -R jenkins:jenkins /opt/jenkins_backups/
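
A minimal sketch of the manual script-based approach listed above (the paths, retention count, and S3 bucket name are illustrative assumptions):

#!/bin/bash
# Archive JENKINS_HOME locally, ship it to remote storage, keep the 7 newest copies
BACKUP_DIR=/opt/jenkins_backups
STAMP=$(date +%Y%m%d%H%M)

tar -czf "$BACKUP_DIR/jenkins-$STAMP.tar.gz" -C /var/lib jenkins
aws s3 cp "$BACKUP_DIR/jenkins-$STAMP.tar.gz" s3://my-jenkins-backups/    # hypothetical bucket
ls -1t "$BACKUP_DIR"/jenkins-*.tar.gz | tail -n +8 | xargs -r rm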

Install the plugin called Periodic Backup Manager and configure it.

Click the Backup Now! button.

Monitoring Jenkins components and data

Install the Monitoring plugin; after that, 'Manage Jenkins' will contain a 'Monitoring of Jenkins master' entry.

Install and configure Graphite and Grafana

#Install EPEL repo
rpm -ihv https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm

#Install Graphite and dependencies
yum -y install graphite-web mysql mariadb-server MySQL-python net-tools mlocate wget python-carbon python-pip gcc python-devel libffi python-cffi cairo cairo-devel fontconfig freetype*

#Start database server
systemctl start mariadb

#Configure database and perms
mysql -e "CREATE DATABASE graphite;"
mysql -e "GRANT ALL PRIVILEGES ON graphite.* TO 'graphite'@'localhost' IDENTIFIED BY 'j3nk1nsdb';"
mysql -e 'FLUSH PRIVILEGES;'

#Edit file
vi /etc/graphite-web/local_settings.py

#Add content
DATABASES = {
'default': {
'NAME': 'graphite',
'ENGINE': 'django.db.backends.mysql',
'USER': 'graphite',
'PASSWORD': 'j3nk1nsdb',
'HOST': 'localhost',
'PORT': '3306'
}
}

#Change perms
chown -R apache:apache /var/lib/graphite-web /usr/share/graphite/

#Edit file
vi /etc/httpd/conf.d/graphite-web.conf

#Add content

<VirtualHost *:80>
ServerName graphite-web
DocumentRoot "/usr/share/graphite/webapp"
ErrorLog /var/log/httpd/graphite-web-error.log
CustomLog /var/log/httpd/graphite-web-access.log common

# Header set Access-Control-Allow-Origin "*"
# Header set Access-Control-Allow-Methods "GET, OPTIONS"
# Header set Access-Control-Allow-Headers "origin, authorization, accept"
# Header set Access-Control-Allow-Credentials true

WSGIScriptAlias / /usr/share/graphite/graphite-web.wsgi
WSGIImportScript /usr/share/graphite/graphite-web.wsgi process-group=%{GLOBAL} application-group=%{GLOBAL}

<Location "/content/">
SetHandler None
</Location>

Alias /media/ "/usr/lib/python2.7/site-packages/django/contrib/admin/media/"
<Location "/media/">
SetHandler None
</Location>

<Directory "/usr/share/graphite/">
Require all granted
Order allow,deny
Allow from all
</Directory>
</VirtualHost>

#Edit file
vi /etc/graphite-web/local_settings.py

#Add content
SECRET_KEY = 'jenkinsapp'

#Run command
/usr/lib/python2.7/site-packages/graphite/manage.py syncdb
# user admin password admin

#Edit file
vi /etc/yum.repos.d/grafana.repo

#Add content
[grafana]
name=grafana
baseurl=https://packagecloud.io/grafana/stable/el/6/$basearch
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packagecloud.io/gpg.key https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

#Install grafana
yum -y install grafana

#Start services
systemctl start carbon-cache
systemctl start httpd
systemctl start grafana-server

1. Install the Jenkins plugin called 'Metrics Graphite Reporting'.
2. Go to Manage Jenkins -> Configure System -> Graphite metrics report; input the Graphite Hostname/IP address, Port 2003, and Prefix jenkins.
3. Access the Grafana server at ipaddress:3000 -> add a Graphite data source -> HTTP settings URL http://localhost
4. Import a Grafana dashboard: go to grafana.com/dashboards, search for 'Jenkins: Performance and health overview', and copy its id.
5. In the dashboard import options, for Graphite choose jenkins-test.

Install monit

yum install -y monit
vim /etc/monitrc
set mailserver smtp.gmail.com port 587
username "EMAIL" password "PASSWORD"
using tlsv12
vim /etc/monit.d/jenkins
check system $HOST
if loadavg (5min) > 3 then alert
if loadavg (15min) > 1 then alert
if memory usage > 80% for 4 cycles then alert
if swap usage > 20% for 4 cycles then alert
if cpu usage (user) > 80% for 2 cycles then alert
if cpu usage (system) > 20% for 2 cycles then alert
if cpu usage (wait) > 80% for 2 cycles then alert
if cpu usage > 200% for 4 cycles then alert

check process jenkins with pidfile /var/run/jenkins.pid
start program = "/bin/systemctl start jenkins"
stop program = "/bin/systemctl stop jenkins"
if failed host 192.168.33.200 port 8080 then restart

check process nginx with pidfile /var/run/nginx.pid
start program = "/bin/systemctl start nginx"
stop program = "/bin/systemctl stop nginx"
if failed host 192.168.33.200 port 80 then restart

check filesystem jenkins_mount with path /dev/sda2
start program = "/bin/mount /var/lib/jenkins"
stop program = "/bin/umount /var/lib/jenkins"
if space usage > 80% for 3 times within 5 cycles then alert
if space usage > 99% then stop
if inode usage > 30000 then alert
if inode usage > 99% then stop

check directory jenkins_home with path /var/lib/jenkins
if failed permission 755 then exec "/bin/chmod 755 /var/lib/jenkins"

In the Gmail account, set 'Allow less secure apps' to 'on'.

systemctl start monit
tail -f /var/log/monit.log
# test by systemctl stop jenkins


Implementing security and roles for Jenkins

Jenkins Security best practices

  • Disable job execution on the master and use slave nodes for builds
  • Use the Job Restrictions plugin to confine specific jobs to specific nodes irrespective of the label used
    (install the Job Restrictions plugin, then under Manage Jenkins -> Manage Nodes restrict job execution per node)
  • Enable CSRF protection and update scripts to use crumbs
  • Enable slave-to-master access control
  • For large environments, use role and matrix based authorization strategies to isolate project access
    (install the Role-based Authorization Strategy plugin; with Project-based Matrix Authorization Strategy, enable project-based security)

    Use an LDAP server and a role-based strategy

1. LDAP server config on CentOS 7

yum -y install openldap-servers openldap-clients
cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
chown ldap. /var/lib/ldap/DB_CONFIG
systemctl start slapd

slappasswd -> Enter new password twice -> GENERATED_PASSWORD

#New file
config.ldif

#Add content
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: GENERATED_PASSWORD

#Run commands
ldapadd -Y EXTERNAL -H ldapi:/// -f config.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif

#New file
domain.ldif

#Add content
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth"
read by dn.base="cn=Manager,dc=practicaljenkins,dc=com" read by * none

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=practicaljenkins,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=practicaljenkins,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}mviTD7um1R+LygfAN01MzQOtK4ezm1ob

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by
dn="cn=Manager,dc=practicaljenkins,dc=com" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=practicaljenkins,dc=com" write by * read


#Run command
ldapmodify -Y EXTERNAL -H ldapi:/// -f domain.ldif

systemctl restart slapd

#New file
base.ldif

#Add content
dn: dc=practicaljenkins,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: practicaljenkins
dc: practicaljenkins

dn: cn=Manager,dc=practicaljenkins,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager

dn: ou=users,dc=practicaljenkins,dc=com
objectClass: organizationalUnit
ou: users

dn: ou=groups,dc=practicaljenkins,dc=com
objectClass: organizationalUnit
ou: groups

#Run command - enter password of slappasswd command when asked

ldapadd -x -D cn=Manager,dc=practicaljenkins,dc=com -W -f base.ldif

## configure phpldapadmin
yum -y install httpd php php-mbstring php-pear
systemctl start httpd
yum -y install epel-release
yum clean all && yum makecache fast && yum update
vim /etc/phpldapadmin/config.php
<?php
$config->custom->session['blowfish'] = 'd7458abe84f9622a42ce3f9e45dfc457'; # Autogenerated for ip-172-31-4-228.ap-southeast-2.compute.internal

$config->custom->commands['cmd'] = array(
'entry_internal_attributes_show' => true,
'entry_refresh' => true,
'oslinks' => true,
'switch_template' => true
);

$config->custom->commands['script'] = array(
'add_attr_form' => true,
'add_oclass_form' => true,
'add_value_form' => true,
'collapse' => true,
'compare' => true,
'compare_form' => true,
'copy' => true,
'copy_form' => true,
'create' => true,
'create_confirm' => true,
'delete' => true,
'delete_attr' => true,
'delete_form' => true,
'draw_tree_node' => true,
'expand' => true,
'export' => true,
'export_form' => true,
'import' => true,
'import_form' => true,
'login' => true,
'logout' => true,
'login_form' => true,
'mass_delete' => true,
'mass_edit' => true,
'mass_update' => true,
'modify_member_form' => true,
'monitor' => true,
'purge_cache' => true,
'query_engine' => true,
'rename' => true,
'rename_form' => true,
'rdelete' => true,
'refresh' => true,
'schema' => true,
'server_info' => true,
'show_cache' => true,
'template_engine' => true,
'update_confirm' => true,
'update' => true
);
$servers = new Datastore();
$servers->newServer('ldap_pla');
$servers->setValue('server','name','Stan Local LDAP Server');
$servers->setValue('server','host','172.31.4.228');
$servers->setValue('appearance','password_hash','');
$servers->setValue('login','attr','dn');
$servers->setValue('login','class',array('dc=practicaljenkins,dc=com'));
?>

systemctl restart httpd
vim /etc/httpd/conf.d/phpldapadmin.conf
#
# Web-based tool for managing LDAP servers
#

Alias /phpldapadmin /usr/share/phpldapadmin/htdocs
Alias /ldapadmin /usr/share/phpldapadmin/htdocs

<Directory /usr/share/phpldapadmin/htdocs>
<IfModule mod_authz_core.c>
# Apache 2.4
#Require local
Require all granted
</IfModule>
<IfModule !mod_authz_core.c>
# Apache 2.2
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
Allow from ::1
</IfModule>
</Directory>
cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
#SELINUXTYPE=targeted

systemctl restart httpd
systemctl enable slapd
systemctl enable httpd
netstat -atnulp

Log in at http://ipaddress/phpldapadmin
Login DN: cn=Manager,dc=practicaljenkins,dc=com
Password:

Under ou=groups create a child entry-> Generic: Posix Group
3 groups: admins, developers, devops

Under ou=users create Generic: User Account a b c

Under cn=admins add a new attribute memberUid with value a;
do the same for the other groups.

Configure Global Security
ldap config
Group membership filter -> (| (member={0}) (uniqueMember={0}) (memberUid={1}))
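
To check from the shell how that filter will resolve a user's groups (a sketch, assuming user a was created under ou=users above):

ldapsearch -x -D cn=Manager,dc=practicaljenkins,dc=com -W \
  -b ou=groups,dc=practicaljenkins,dc=com '(memberUid=a)' cn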

Under Authorization choose Role-Based Strategy

Under Manage Jenkins click Manage and Assign Roles -> Manage Roles -> Roles to add: "admins, developers and devops"

Assign Roles-> User/group to add-> admins, developers, and devops

Use a different user to log in and verify the roles.

Using the Jenkins API and automating plugin management

Get an API token from People -> admin -> Configure -> API Token

Automatically trigger the build:

curl -u admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 "http://localhost:8080/api/json?pretty=true"
curl -s 'http://localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)' -u admin:admin
curl -X POST -u admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 "http://localhost:8080/job/python-job/build" -H "Jenkins-Crumb:f40b9f1084de9e60cca87085b30e6159"
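
The crumb can also be captured once and reused (a sketch, using the example token and job above):

CRUMB=$(curl -s 'http://localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)' -u admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0)
curl -X POST -u admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 -H "$CRUMB" "http://localhost:8080/job/python-job/build"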
Jenkins CLI:

wget --auth-no-challenge "http://localhost:8080/jnlpJars/jenkins-cli.jar"
java -jar jenkins-cli.jar -auth admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 -s http://localhost:8080 help
java -jar jenkins-cli.jar -auth admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 -s http://localhost:8080 install-plugin aws-codebuild -deploy
java -jar jenkins-cli.jar -auth admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 -s http://localhost:8080 install-plugin http://updates.jenkins-ci.org/latest/ansicolor.hpi -deploy
java -jar jenkins-cli.jar -auth admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 -s http://localhost:8080 install-plugin http://updates.jenkins-ci.org/download/plugins/beaker-builder/1.6/beaker-builder.hpi -deploy
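
To confirm a plugin landed, and to restart safely when a plugin needs it, the CLI's list-plugins and safe-restart commands can be used (a sketch):

java -jar jenkins-cli.jar -auth admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 -s http://localhost:8080 list-plugins | grep -i ansicolor
java -jar jenkins-cli.jar -auth admin:20fda2a6ed2ff7a8cf9db9dd50a0c2b0 -s http://localhost:8080 safe-restart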

# Integrating Jenkins with external services
## Integrating with Github
![workflow](https://i.imgur.com/hkwPMqG.png)

- Preparing Github and configuring Jenkins to work together
GitHub -> Settings -> Developer settings -> Personal access tokens -> Generate a new token -> select scopes (repo: all | admin:repo_hook: all | admin:org_hook) -> copy token

Jenkins -> Add credentials -> Kind: username with password | scope: global | username: szhouchoice | password: access token | id: jenkins-secret
Add another credential -> kind: secret text -> secret: token | id: jenkins-token -> save

The username-with-password credential will be used for the multibranch pipeline;
the secret text will be used in the global configuration for the GitHub server.
![github server setting](https://i.imgur.com/u4OVbH2.png)

Install GitHub Branch Source Plugin

- Configuring the Pipeline to create a code build workflow
[project](https://github.com/practicaljenkins/sample-php-project)

git add -A
git commit -m "adding code"
git push origin master
git checkout -b develop
git push origin develop

In Jenkins-> Create new Multibranch Pipeline->
![Branch sources setting](https://i.imgur.com/Tn2maPQ.png)

- Testing the pipelines with different scenarios

git checkout -b feature-001
vim src/ConnectTest.php

Push the feature branch to GitHub
![merge in github](https://i.imgur.com/JOkzC9O.png)
will trigger the build in Jenkins
## Integrating with Sonarqube
- setting up Sonarqube prerequisites for Jenkins
- Install and configure Sonarqube plugin
- Configure Jenkins pipeline for Sonarqube action
- Generate Sonarqube analysis report from Jenkins pipeline

cp /etc/vsftpd/vsftpd.conf{,.bak}

[root@agent1 ~]# >/etc/motd # empty the file's contents


[root@puppetmaster ~]# mv /etc/puppet/autosign.conf{,.bak} # move aside the auto-sign (auto-registration) ACL list

[root@linux-node1 /]# grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml

sudo tar xfz pycharm-*.tar.gz -C /opt/

nohup and screen: keep long-running jobs alive after logout


# send a command to be executed on the remote machine and send back the output:
ssh -t user1@server1.packt.co.uk cat /etc/hosts

# use scp over SSH to copy files to or from a remote machine:
scp user1@server1.packt.co.uk:/home/user1/Desktop/file1.txt ./Desktop/

ssh key-based authentication
ssh-keygen -t rsa -b 2048 -v
ssh-copy-id user1@server1.packt.co.uk
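
To verify the key was accepted without falling back to a password prompt (a sketch):

ssh -o PasswordAuthentication=no user1@server1.packt.co.uk true && echo "key auth OK"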

[root@ip-172-31-5-191 ~]# cat /tmp/passwd-truncated
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
[root@ip-172-31-5-191 ~]# cut -d: -f2,3 /tmp/passwd-truncated
x:0
x:1
x:2
x:3
x:4
[root@ip-172-31-5-191 ~]# cut -d: -f2,3 --output-delimiter " " /tmp/passwd-truncated
x 0
x 1
x 2
x 3
x 4
[root@ip-172-31-5-191 ~]# egrep -v '#|^$' /etc/services |head -5 > /tmp/services-truncated
[root@ip-172-31-5-191 ~]# cat /tmp/services-truncated
echo 7/tcp
echo 7/udp
discard 9/tcp sink null
discard 9/udp sink null
systat 11/tcp users
[root@ip-172-31-5-191 ~]# sed 's/ /*/g' /tmp/services-truncated
echo************7/tcp
echo************7/udp
discard*********9/tcp***********sink*null
discard*********9/udp***********sink*null
systat**********11/tcp**********users
[root@ip-172-31-5-191 ~]# awk -F' ' '{print $2" "$3}' /tmp/services-truncated
7/tcp
7/udp
9/tcp sink
9/udp sink
11/tcp users
[root@ip-172-31-5-191 ~]# # tr [CHARACTER_FROM] [CHARACTER_TO]
[root@ip-172-31-5-191 ~]# cat /tmp/services-truncated |tr 'a' 'X'
echo 7/tcp
echo 7/udp
discXrd 9/tcp sink null
discXrd 9/udp sink null
systXt 11/tcp users
[root@ip-172-31-5-191 ~]# cat /tmp/services-truncated |tr '[a-z]' '[A-Z]'
ECHO 7/TCP
ECHO 7/UDP
DISCARD 9/TCP SINK NULL
DISCARD 9/UDP SINK NULL
SYSTAT 11/TCP USERS
[root@ip-172-31-5-191 ~]# awk -F ' ' '{print $1}' /etc/services |egrep -v '^#|^$'| sort| uniq -c| sort -n -k1,1 -r|head
4 exp2
4 exp1
4 discard
3 v5ua
3 syslog-tls
3 sua
3 ssh
3 nfsrdma
3 nfs
3 megaco-h248

[root@ip-172-31-5-191 ~]# cp /etc/services /tmp/services
[root@ip-172-31-5-191 ~]# gzip /tmp/services
[root@ip-172-31-5-191 ~]# ls -lh /etc/services /tmp/services.gz
-rw-r--r--. 1 root root 655K Jun 7 2013 /etc/services
-rw-r--r--. 1 root root 133K Jun 19 06:01 /tmp/services.gz
[root@ip-172-31-5-191 ~]# gunzip /tmp/services.gz

[root@ip-172-31-5-191 ~]# tar zcf /tmp/archive.tar.gz /home/centos
tar: Removing leading `/' from member names
[root@ip-172-31-5-191 ~]# mkdir /tmp/extracted
[root@ip-172-31-5-191 ~]# tar xvf /tmp/archive.tar.gz -C /tmp/extracted
home/centos/
home/centos/.bash_logout
home/centos/.bash_profile
home/centos/.bashrc
home/centos/.ssh/
home/centos/.ssh/authorized_keys
home/centos/.bash_history
[root@ip-172-31-5-191 ~]# cat /proc/meminfo
MemTotal: 3878872 kB
MemFree: 2586524 kB

[root@ip-172-31-5-191 ~]# free -mh
total used free shared buff/cache available
Mem: 3.7G 716M 2.5G 16M 546M 2.7G
Swap: 0B 0B 0B
[root@ip-172-31-5-191 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 60G 2.2G 58G 4% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 17M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 379M 0 379M 0% /run/user/1000
[root@ip-172-31-5-191 ~]# du -h
4.0K ./.ssh
0 ./.pki/nssdb
0 ./.pki
52K .
[root@ip-172-31-5-191 ~]# du -h / --max-depth=1
0 /dev
du: cannot access ‘/proc/14331/task/14331/fd/3’: No such file or directory
du: cannot access ‘/proc/14331/task/14331/fdinfo/3’: No such file or directory
du: cannot access ‘/proc/14331/fd/4’: No such file or directory
du: cannot access ‘/proc/14331/fdinfo/4’: No such file or directory
0 /proc
17M /run
0 /sys
35M /etc
52K /root
641M /var
4.5M /tmp
1.3G /usr
183M /boot
76K /home
0 /media
0 /mnt
0 /opt
0 /srv
2.2G /
[root@ip-172-31-5-191 ~]# dd if=/dev/zero of=/tmp/1gig_file.empty bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.61028 s, 297 MB/s
$ su -c 'dd if=/dev/sda1 of=/tmp/img.image'
[root@ip-172-31-5-191 ~]# dd if=/home/centos/.bashrc of=/tmp/.bashrc-copy
0+1 records in
0+1 records out
231 bytes (231 B) copied, 0.000164855 s, 1.4 MB/s
[root@ip-172-31-5-191 ~]# rsync -rav /home/centos/ /tmp/new-centos-home
sending incremental file list
created directory /tmp/new-centos-home
./
.bash_history
.bash_logout
.bash_profile
.bashrc
.ssh/
.ssh/authorized_keys

sent 1,280 bytes received 169 bytes 2,898.00 bytes/sec
total size is 843 speedup is 0.58

# rsync -rav /home/centos/ stan@192.168.178.300:/tmp
# rsync -rav stan@192.168.178.300:/tmp/stan /tmp/another-copy

[root@ip-172-31-5-191 ~]# telnet google.com 80
Trying 216.58.200.110...
Connected to google.com.
Escape character is '^]'.
^]
HTTP/1.0 400 Bad Request
Content-Type: text/html; charset=UTF-8
Referrer-Policy: no-referrer
Content-Length: 1555
Date: Wed, 19 Jun 2019 06:30:24 GMT

<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 400 (Bad Request)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>400.</b> <ins>That’s an error.</ins>
<p>Your client has issued a malformed or illegal request. <ins>That’s all we know.</ins>
Connection closed by foreign host.

[root@ip-172-31-5-191 ~]# wget -O /tmp/output.txt http://whatthecommit.com/index.txt
--2019-06-19 06:32:13-- http://whatthecommit.com/index.txt
Resolving whatthecommit.com (whatthecommit.com)... 52.86.186.182, 52.200.233.201, 52.202.60.111, ...
Connecting to whatthecommit.com (whatthecommit.com)|52.86.186.182|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14 [text/plain]
Saving to: ‘/tmp/output.txt’

100%[=============================================================>] 14 --.-K/s in 0s

2019-06-19 06:32:13 (1.88 MB/s) - ‘/tmp/output.txt’ saved [14/14]

[root@ip-172-31-5-191 ~]# cat /tmp/output.txt
Popping stash
[root@ip-172-31-5-191 ~]# wget -qO- http://whatthecommit.com/index.txt
clarify further the brokenness of C++. why the fuck are we using C++?

root@stan-OptiPlex-380:~|⇒ nc -l -p 9999 < /etc/lsb-release
stan-OptiPlex-380% nc 192.168.199.178 9999 > /tmp/redhat-release
^C
stan-OptiPlex-380% cat /tmp/redhat-release
DISTRIB_ID=LinuxMint
DISTRIB_RELEASE=19
DISTRIB_CODENAME=tara
DISTRIB_DESCRIPTION="Linux Mint 19 Tara"


# links www.duckduckgo.com

[root@ip-172-31-5-191 ~]# echo '(8-(3+1))/4' |bc
1

[root@ip-172-31-5-191 ~]# screen

Ctrl + A + D

[detached from 17091.pts-0.ip-172-31-5-191]
[root@ip-172-31-5-191 ~]# exit
logout
[centos@ip-172-31-5-191 ~]$ exit
logout
Connection to 54.66.232.147 closed.

ssh cicd
Last login: Wed Jun 19 05:39:48 2019 from 220.240.212.9
[centos@ip-172-31-5-191 ~]$ sudo -i
[root@ip-172-31-5-191 ~]# screen -list
There is a screen on:
17091.pts-0.ip-172-31-5-191 (Detached)
1 Socket in /var/run/screen/S-root.

[root@ip-172-31-5-191 ~]# screen -r 17091.pts-0.ip-172-31-5-191

Type exit to quit
[screen is terminating]

various top-like programs

iotop ## live view of the input/output (I/O) bandwidth usage of your system

iftop ## live view of network traffic and network bandwidth usage

htop ## improved version of the normal top program
lsof | grep lib64 ## list all open files, i.e. which programs are accessing files at the moment

Largest files and directories report

FS='./';resize;clear;echo "== Server Time: ==";date;echo -e "\n== Filesystem Information: ==";df -PTh ${FS} | column -t;echo -e "\n== Inode Information: ==";df -PTi ${FS} | column -t;echo -e "\n== Largest Directories: ==";du -hcx --max-depth=2 ${FS} 2>/dev/null | grep -P '^([0-9]\.*)*G(?!.*(\btotal\b|\./$))' | sort -rnk1,1 | head -10 | column -t;echo -e "\n== Largest Files: ==";find ${FS} -mount -ignore_readdir_race -type f -exec du {} + 2>&1 | sort -rnk1,1 | head -20 | awk 'BEGIN{ CONVFMT="%.2f";}{ $1=( $1 / 1024 )"M"; print;}' | column -t;echo -e "\n== Largest Files Older Than 30 Days: ==";find ${FS} -mount -ignore_readdir_race -type f -mtime +30 -exec du {} + 2>&1 | sort -rnk1,1 | head -20 | awk 'BEGIN{ CONVFMT="%.2f";}{ $1=( $1 / 1024 )"M"; print; }' | column -t;

== Server Time: ==
Thu Jun 20 04:57:53 UTC 2019

== Filesystem Information: ==
Filesystem Type Size Used Avail Use% Mounted on
/dev/xvda1 xfs 60G 3.9G 57G 7% /

== Inode Information: ==
Filesystem Type Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 xfs 31456704 62835 31393869 1% /

== Largest Directories: ==

== Largest Files: ==
0.00M ./.ssh/authorized_keys
0.00M ./.lesshst
0.00M ./.bashrc
0.00M ./.bash_profile
0.00M ./.bash_logout
0.00M ./.bash_history
0M ./stuff/5.txt
0M ./stuff/4.txt
0M ./stuff/3.txt
0M ./stuff/2.txt
0M ./stuff/1.txt

== Largest Files Older Than 30 Days: ==
0.00M ./.bashrc
0.00M ./.bash_profile
0.00M ./.bash_logout

Check whether a port is open:

nmap -vv -n -sS -sU -p443 52.62.248.48/32 | grep "Discovered open port" | awk '{print $6}' | awk -F/ '{print $1}'

AWS CLI commands

List details of all EC2 instances:

aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId, InstanceType, ImageId, State.Name, LaunchTime, Placement.AvailabilityZone, Placement.Tenancy, PrivateIpAddress, PrivateDnsName, PublicDnsName, [Tags[?Key==`Name`].Value] [0][0], [Tags[?Key==`purpose`].Value] [0][0], [Tags[?Key==`environment`].Value] [0][0], [Tags[?Key==`team`].Value] [0][0] ]'
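
The same idea with a server-side filter, e.g. only running instances (a sketch):

aws ec2 describe-instances \
  --filters Name=instance-state-name,Values=running \
  --output table \
  --query 'Reservations[*].Instances[*].[InstanceId, InstanceType, PrivateIpAddress]'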



about file

Change the extension of multiple files

Replace (or add) a .pdf extension on the files:

for f in *; do mv -- "$f" "${f%.*}.pdf"; done
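
A dry run first is safer: echo the mv commands before running them for real (a sketch):

for f in *; do echo mv -- "$f" "${f%.*}.pdf"; done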

find the file size

root@stan-OptiPlex-380:~|⇒  du -sh /*
17M /bin
77M /boot
4.0K /cdrom
0 /dev
17M /etc
1.6G /home
0 /initrd.img
0 /initrd.img.old
582M /lib
4.0K /lib64
16K /lost+found
4.0K /media

about node process

cat startApp.sh
#!/bin/sh
export NODE_ENV=production
export DB_PRD_HOST=stantest-postgresql.c3mzoji03zxf.ap-southeast-2.rds.amazonaws.com
export DB_PRD_USER=stantest
export NODE_HOST=localhost
export NODE_PORT=8080
node /myapp/index.js&
exit 0
cat stopApp.sh
#!/bin/sh
kill `ps -axf | grep node | grep -v grep | awk '{print $1}'` || exit 0

pgrep: pgrep, pkill - look up or signal processes based on name and other attributes
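
The same stop script is simpler with pkill (a sketch; -f matches against the full command line started by startApp.sh above):

#!/bin/sh
pkill -f '/myapp/index.js' || exit 0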


python

create a random password:

stan@dockerfordevops:~/Projects/MobyDock$ python
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import binascii
>>> binascii.b2a_hex(os.urandom(31))
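
Shell one-liner equivalents of the same idea (a sketch, assuming openssl and/or python3 are installed):

openssl rand -hex 31
python3 -c 'import secrets; print(secrets.token_hex(31))'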

postgres

kubernetes-release-1.5 psql -h database -U postgres
psql (11.4 (Ubuntu 11.4-1.pgdg18.04+1), server 9.4.23)
Type "help" for help.

postgres-# \dd
Object descriptions
Schema | Name | Object | Description
--------|------|--------|-------------
(0 rows)

$ mkdir vagrant_ubuntu_xenial_1 && cd $_

wget https://raw.githubusercontent.com/yogeshraheja/Effective-DevOps-with-AWS/master/Chapter02/helloworld.conf -O scripts/helloworld.conf

K8S

A big example is Google.

Orchestration (computing): the automatic management of containers.

Manifests keep the system robust.

A common question: why use K8s instead of Docker Swarm?

Number 1 advantage of Docker Swarm: it is built in.
K8S: far richer, deploys to different cloud platforms.
K8S is just far more popular;
it is the orchestrator of choice.
K8S is the most in-demand container orchestration system out there.

A Java-based application with Angular on the front end.

Terminology

  • Cluster: A group of physical or virtual machines
  • Node: A physical or virtual machine that is part of a cluster
  • Control Plane: The set of processes that run the core k8s services (e.g. API server, scheduler, etcd …)
  • Pod: The lowest compute unit in Kubernetes, a group of containers that run on a Node.

Architecture

Head Node
That’s the brain of Kubernetes

  • API server
  • Scheduler: to place the containers where they need to go
  • Controller manager: makes sure that the state of the system is what it should be
  • Etcd: data store, used to store the state of the system
  • Sometimes:
    • kubelet: a process that manages all of this
    • docker: container engine

Worker node

  • kubelet
    That’s the Kubernetes agent that runs on all the Kubernetes cluster nodes. Kubelet talks with the Kubernetes API server and then talks to the local Docker daemon to be able to manage the Docker containers running on the node.
  • kube-proxy: a system that allows you to manage the iptables on that node so that the traffic between the pods and the nodes is what it should be
  • docker

    Installing k8s in 3 easy ways

  • minikube
  • gcloud container clusters
  • kubeadm

    Installing Minikube

    Minikube is a cut-down version of Kubernetes.
    https://kubernetes.io/docs/tasks/tools/install-minikube/

You might have an incompatibility between your distribution and the one that Minikube is expecting.
Two command line tools: kubectl, the controller program for k8s, and minikube.

Enable the Hyper-V role through Settings.
When it is enabled, don't use Oracle VirtualBox.

goo.gl/4yEFbF for win 10 Professional

minikube start hangs forever on mac #2765

If you’re completely stuck then do ask me

Docker Overview

Difference between Docker images and containers:
a container is a runtime instance of an image.

docker image pull richardchesterwood/k8s-fleetman-webapp-angular:release0-5
docker image ls
docker container run -p 8080:80 -d richardchesterwood/k8s-fleetman-webapp-angular:release0-5
docker container ls
minikube ip
docker container stop 2c5
docker container rm 2c5

Pods

A pod is a group of one or more containers, with shared storage/network, and a specification for how to run the containers.

The most basic concept is the Pod; here a pod and a container are in a one-to-one relationship.

Writing a Pod

kubectl get all shows everything we have defined in our Kubernetes cluster.

kubectl apply -f first-pod.yaml

kubectl describe pod webapp

kubectl exec webapp ls

kubectl exec -it webapp sh: -i means interactive, -t allocates a teletype (terminal emulation).

Services

Pods are not visible outside the cluster.
Pods are designed to be throwaway things; they have short lifetimes and regularly die.

A Service has a stable IP address and port; with a service we can connect into the Kubernetes cluster.

Pods are selected with key-value pairs, e.g. app with some value.

Services

NodePort and ClusterIP

ClusterIP

NodePort

Pod selection with Labels

cat first-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
    release: "0"
spec:
  containers:
  - name: webapp
    image: richardchesterwood/k8s-fleetman-webapp-angular:release0

---
apiVersion: v1
kind: Pod
metadata:
  name: webapp-release-0-5
  labels:
    app: webapp
    release: "0-5"
spec:
  containers:
  - name: webapp
    image: richardchesterwood/k8s-fleetman-webapp-angular:release0-5
cat webapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: webapp
    release: "0-5"

  ports:
  - name: http
    port: 80
    nodePort: 30080

  type: NodePort
kubectl apply -f webapp-service.yaml

kubectl describe svc fleetman-webapp
Name: fleetman-webapp
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"fleetman-webapp","namespace":"default"},"spec":{"ports":[{"name":"http","nodeP...
Selector: app=webapp,release=0
Type: NodePort
IP: 10.108.217.186
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30080/TCP
Endpoints: 172.17.0.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

kubectl get po
NAME READY STATUS RESTARTS AGE
webapp 1/1 Running 2 2h
webapp-release-0-5 1/1 Running 0 8m

kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webapp 1/1 Running 2 2h app=webapp,release=0
webapp-release-0-5 1/1 Running 0 8m app=webapp,release=0-5

kubectl get po --show-labels -l release=0
NAME READY STATUS RESTARTS AGE LABELS
webapp 1/1 Running 2 2h app=webapp,release=0

kubectl get po --show-labels -l release=1
No resources found.

ReplicaSets

When a Pod dies, it will never come back on its own.

kubectl get all

kubectl describe svc fleetman-webapp

kubectl delete po webapp-release-0-5

ReplicaSets specify how many instances of a pod we want Kubernetes to keep running at any time.
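
The desired count can also be changed on the fly (a sketch, using the webapp ReplicaSet defined below):

kubectl scale rs webapp --replicas=2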

Writing a ReplicaSet

ReplicaSet v1 apps

cat pods.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      app: webapp
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-webapp-angular:release0-5

---
apiVersion: v1
kind: Pod
metadata:
  name: queue
  labels:
    app: queue
spec:
  containers:
  - name: queue
    image: richardchesterwood/k8s-fleetman-queue:release1

Applying a ReplicaSet

Delete all the Pods:

kubectl delete po --all
kubectl apply -f pods.yaml
replicaset.apps "webapp" created
pod "queue" created

kubectl get all
NAME READY STATUS RESTARTS AGE
pod/queue 1/1 Running 0 58s
pod/webapp-hzpcp 1/1 Running 0 58s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-queue NodePort 10.106.99.143 <none> 8161:30010/TCP 3h
service/fleetman-webapp NodePort 10.108.217.186 <none> 80:30080/TCP 23h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d

NAME DESIRED CURRENT READY AGE
replicaset.apps/webapp 1 1 1 59s

The difference between current and ready is, current is the number of containers that are running, and ready is the number of containers that are responding to requests.

kubectl describe rs webapp
Name: webapp
Namespace: default
Selector: app=webapp
Labels: app=webapp
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"ReplicaSet","metadata":{"annotations":{},"name":"webapp","namespace":"default"},"spec":{"replicas":1,"selector":{"match...
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=webapp
Containers:
webapp:
Image: richardchesterwood/k8s-fleetman-webapp-angular:release0-5
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 4m replicaset-controller Created pod: webapp-hzpcp
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/queue 1/1 Running 0 5m
pod/webapp-hzpcp 1/1 Running 0 5m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-queue NodePort 10.106.99.143 <none> 8161:30010/TCP 3h
service/fleetman-webapp NodePort 10.108.217.186 <none> 80:30080/TCP 23h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d

NAME DESIRED CURRENT READY AGE
replicaset.apps/webapp 1 1 1 5m

kubectl delete po webapp-hzpcp
pod "webapp-hzpcp" deleted

kubectl get all
NAME READY STATUS RESTARTS AGE
pod/queue 1/1 Running 0 7m
pod/webapp-hff5l 1/1 Running 0 3s
pod/webapp-hzpcp 0/1 Terminating 0 7m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-queue NodePort 10.106.99.143 <none> 8161:30010/TCP 3h
service/fleetman-webapp NodePort 10.108.217.186 <none> 80:30080/TCP 23h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d

NAME DESIRED CURRENT READY AGE
replicaset.apps/webapp 1 1 1 7m

Deployments

It’s a replica set with one additional feature. With a deployment, we get automatic rolling updates with zero downtime.

Deployment API reference guide

Think of a deployment as an entity in Kubernetes that manages the replica set for you.
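
Besides editing the image in the YAML and re-applying, the image can be changed directly, which triggers a rolling update (a sketch, using the webapp deployment below):

kubectl set image deployment/webapp webapp=richardchesterwood/k8s-fleetman-webapp-angular:release0-5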

cat pods.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      app: webapp
  replicas: 2
  template: # template for the pods
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-webapp-angular:release0

---
apiVersion: v1
kind: Pod
metadata:
  name: queue
  labels:
    app: queue
spec:
  containers:
  - name: queue
    image: richardchesterwood/k8s-fleetman-queue:release1

kubectl apply -f pods.yaml
deployment.apps "webapp" created
pod "queue" unchanged

kubectl get all
NAME READY STATUS RESTARTS AGE
pod/queue 1/1 Running 0 24m
pod/webapp-7469fb7fd6-4mcth 1/1 Running 0 12s
pod/webapp-7469fb7fd6-sv4rw 1/1 Running 0 12s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-queue NodePort 10.106.99.143 <none> 8161:30010/TCP 3h
service/fleetman-webapp NodePort 10.108.217.186 <none> 80:30080/TCP 23h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/webapp 2 2 2 2 12s

NAME DESIRED CURRENT READY AGE
replicaset.apps/webapp-7469fb7fd6 2 2 2 12s

Managing Rollouts

kubectl rollout status deploy webapp
deployment "webapp" successfully rolled out


kubectl rollout status deploy webapp
Waiting for rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for rollout to finish: 2 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
deployment "webapp" successfully rolled out

kubectl rollout history deploy webapp
deployments "webapp"
REVISION CHANGE-CAUSE
2 <none>
3 <none>

# By default, undo rolls back one revision

kubectl rollout undo deploy webapp

It’s really only for use in an emergency.

Networking and service discovery

Networking Overview

Namespace-kube-system

A namespace is a way of partitioning your resources in Kubernetes into separate areas.


kubectl get ns
NAME STATUS AGE
default Active 1d
kube-public Active 1d
kube-system Active 1d

kubectl get po
NAME READY STATUS RESTARTS AGE
queue 1/1 Running 0 3h
webapp-7469fb7fd6-sg87f 1/1 Running 0 1h
webapp-7469fb7fd6-znbxx 1/1 Running 0 1h

kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-kwq7n 1/1 Running 0 1d
coredns-576cbf47c7-vt56b 1/1 Running 0 1d
etcd-minikube 1/1 Running 0 1d
kube-addon-manager-minikube 1/1 Running 0 1d
kube-apiserver-minikube 1/1 Running 0 1d
kube-controller-manager-minikube 1/1 Running 0 1d
kube-proxy-cn982 1/1 Running 0 1d
kube-scheduler-minikube 1/1 Running 0 1d
kubernetes-dashboard-5bff5f8fb8-mqz9z 1/1 Running 0 1d
storage-provisioner 1/1 Running 0 1d

kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-576cbf47c7-kwq7n 1/1 Running 0 1d
pod/coredns-576cbf47c7-vt56b 1/1 Running 0 1d
pod/etcd-minikube 1/1 Running 0 1d
pod/kube-addon-manager-minikube 1/1 Running 0 1d
pod/kube-apiserver-minikube 1/1 Running 0 1d
pod/kube-controller-manager-minikube 1/1 Running 0 1d
pod/kube-proxy-cn982 1/1 Running 0 1d
pod/kube-scheduler-minikube 1/1 Running 0 1d
pod/kubernetes-dashboard-5bff5f8fb8-mqz9z 1/1 Running 0 1d
pod/storage-provisioner 1/1 Running 0 1d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1d
service/kubernetes-dashboard ClusterIP 10.102.124.0 <none> 80/TCP 1d

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-proxy 1 1 1 1 1 <none> 1d

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2 2 2 2 1d
deployment.apps/kubernetes-dashboard 1 1 1 1 1d

NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-576cbf47c7 2 2 2 1d
replicaset.apps/kubernetes-dashboard-5bff5f8fb8 1 1 1 1d

kubectl describe svc kube-dns -n kube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: prometheus.io/port=9153
prometheus.io/scrape=true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 172.17.0.2:53,172.17.0.3:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 172.17.0.2:53,172.17.0.3:53
Session Affinity: None
Events: <none>

Accessing MySQL from a Pod

cat networking-tests.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5
    env:
    # Use secret in real life
    - name: MYSQL_ROOT_PASSWORD
      value: password
    - name: MYSQL_DATABASE
      value: fleetman

---
kind: Service
apiVersion: v1
metadata:
  name: database
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
  type: ClusterIP

kga # alias for kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mysql 1/1 Running 0 3m
pod/queue 1/1 Running 0 18h
pod/webapp-7469fb7fd6-sg87f 1/1 Running 0 17h
pod/webapp-7469fb7fd6-znbxx 1/1 Running 0 17h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/database ClusterIP 10.101.3.159 <none> 3306/TCP 3m
service/fleetman-queue NodePort 10.106.99.143 <none> 8161:30010/TCP 21h
service/fleetman-webapp NodePort 10.108.217.186 <none> 80:30080/TCP 1d
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/webapp 2 2 2 2 17h

NAME DESIRED CURRENT READY AGE
replicaset.apps/webapp-7469fb7fd6 2 2 2 17h
replicaset.apps/webapp-74bd9697b4 0 0 0 17h
replicaset.apps/webapp-8f948b66c 0 0 0 17h

kubectl exec -it webapp-7469fb7fd6-sg87f sh
/ # ls
bin etc lib mnt root sbin sys usr
dev home media proc run srv tmp var

Service Discovery

/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # nslookup database
nslookup: can't resolve '(null)': Name does not resolve

Name: database
Address 1: 10.101.3.159 database.default.svc.cluster.local

kubectl exec -it webapp-7469fb7fd6-sg87f sh
/ # mysql
sh: mysql: not found
/ # apk update
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
v3.7.3-61-gd3d301001c [http://dl-cdn.alpinelinux.org/alpine/v3.7/main]
v3.7.3-51-gf95d10fed7 [http://dl-cdn.alpinelinux.org/alpine/v3.7/community]
OK: 9067 distinct packages available
/ # apk add mysql-client
(1/6) Installing mariadb-common (10.1.38-r1)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r1)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r1)
(4/6) Installing ncurses-libs (6.0_p20171125-r1)
(5/6) Installing mariadb-client (10.1.38-r1)
(6/6) Installing mysql-client (10.1.38-r1)
Executing busybox-1.27.2-r7.trigger
OK: 53 MiB in 34 packages

# mysql -h database -uroot -ppassword fleetman
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.26 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [fleetman]> create table testable (test varchar(255));
Query OK, 0 rows affected (0.05 sec)

MySQL [fleetman]> show tables;
+--------------------+
| Tables_in_fleetman |
+--------------------+
| testable |
+--------------------+
1 row in set (0.01 sec)

We can find the IP address of any service just by its name; that's called service discovery.

Fully Qualified Domain Names (FQDN)

# nslookup database
nslookup: can't resolve '(null)': Name does not resolve

Name: database
Address 1: 10.101.3.159 database.default.svc.cluster.local

An introduction to Microservices

Each microservice should be Highly Cohesive and Loosely Coupled.

Highly cohesive: each microservice should handle one business requirement. Cohesive means that a microservice should have a single set of responsibilities.

Each microservice will maintain its own data store, and that microservice will be really the only part of the system that can read or write that data.

Fleetman Microservices- setting the scene

The logic in the API gateway is typically some kind of a mapping: something like "if the incoming request ends with /vehicles, then delegate the call to, in this case, the position tracker".

API gateway pattern

Deploying the Queue


API gateway: a web front end which is implemented in JavaScript.

Position tracker: back end, calculating the speeds of vehicles and storing the positions of all the vehicles.

Queue: stores the messages that are received from the vehicles as they move around the country.

Position simulator: a testing microservice which generates positions of vehicles.

Delete all the Pods:

kubectl delete -f .
pod "mysql" deleted
service "database" deleted
deployment.apps "webapp" deleted
pod "queue" deleted
service "fleetman-webapp" deleted
service "fleetman-queue" deleted
kubectl apply -f workloads.yaml
kubectl describe pod queue-6bf9fd876-4xrqx
Name: queue-6bf9fd876-4xrqx
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Wed, 05 Jun 2019 15:14:40 +1000
Labels: app=queue
pod-template-hash=6bf9fd876
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/queue-6bf9fd876
Containers:
webapp:
Container ID: docker://f306dda43b8bf2eaab70449ecac022bb0b98e37b8b2be0d1b2e1b25eea9db1ac
Image: richardchesterwood/k8s-fleetman-queue:release1
Image ID: docker-pullable://richardchesterwood/k8s-fleetman-queue@sha256:bc2cb90a09aecdd8bce5d5f3a8dac17281ec7883077ddcfb8b7acfe2ab3b6afa
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 05 Jun 2019 15:14:41 +1000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nmqkz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-nmqkz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nmqkz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned default/queue-6bf9fd876-4xrqx to minikube
Normal Pulled 2m kubelet, minikube Container image "richardchesterwood/k8s-fleetman-queue:release1" already present on machine
Normal Created 2m kubelet, minikube Created container
Normal Started 2m kubelet, minikube Started container

kubectl apply -f services.yaml
service "fleetman-webapp" created
service "fleetman-queue" created


minikube ip
There is a newer version of minikube available (v1.1.0). Download it here:
https://github.com/kubernetes/minikube/releases/tag/v1.1.0

To disable this notification, run the following:
minikube config set WantUpdateNotification false
192.168.99.118

Open a web browser http://192.168.99.118:30010, click Manage ActiveMQ broker, enter username and password both admin.

Deploying the Position Simulator

cat workloads.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queue
spec:
  selector:
    matchLabels:
      app: queue
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: queue
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-queue:release1

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: position-simulator
spec:
  selector:
    matchLabels:
      app: position-simulator
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: position-simulator
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-position-simulator:release1
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: producadskfjsjfsislfslsj

kubectl apply -f workloads.yaml
deployment.apps "queue" unchanged
deployment.apps "position-simulator" configured

kga
NAME READY STATUS RESTARTS AGE
pod/position-simulator-6f97fd485f-gplr8 0/1 ContainerCreating 0 7s
pod/position-simulator-dff7b7599-2vbdd 0/1 ImagePullBackOff 0 2m
pod/queue-6bf9fd876-4xrqx 1/1 Running 0 50m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-queue NodePort 10.110.95.121 <none> 8161:30010/TCP,61616:30536/TCP 47m
service/fleetman-webapp NodePort 10.103.224.156 <none> 80:30080/TCP 47m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/position-simulator 1 2 1 0 2m
deployment.apps/queue 1 1 1 1 50m

NAME DESIRED CURRENT READY AGE
replicaset.apps/position-simulator-6f97fd485f 1 1 0 7s
replicaset.apps/position-simulator-dff7b7599 1 1 0 2m
replicaset.apps/queue-6bf9fd876 1 1 1 50m

kubectl describe pod position-simulator-6f97fd485f-gplr8
Name: position-simulator-6f97fd485f-gplr8
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Wed, 05 Jun 2019 16:05:27 +1000
Labels: app=position-simulator
pod-template-hash=6f97fd485f
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/position-simulator-6f97fd485f
Containers:
webapp:
Container ID: docker://2a4f677af3a8bd72f674543e13a4b84c3c86148791106a19270d86878091b09d
Image: richardchesterwood/k8s-fleetman-position-simulator:release1
Image ID: docker-pullable://richardchesterwood/k8s-fleetman-position-simulator@sha256:58ce4aa156469115a58be0bcfcec84bb7e42dd4552f83dcf5b6381234001108b
Port: <none>
Host Port: <none>
State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 05 Jun 2019 16:05:56 +1000
Finished: Wed, 05 Jun 2019 16:05:59 +1000
Ready: False
Restart Count: 0
Environment:
SPRING_PROFILES_ACTIVE: producadskfjsjfsislfslsj
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nmqkz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nmqkz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nmqkz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 33s default-scheduler Successfully assigned default/position-simulator-6f97fd485f-gplr8 to minikube
Normal Pulling 32s kubelet, minikube pulling image "richardchesterwood/k8s-fleetman-position-simulator:release1"
Normal Pulled 4s kubelet, minikube Successfully pulled image "richardchesterwood/k8s-fleetman-position-simulator:release1"
Normal Created 0s (x2 over 4s) kubelet, minikube Created container
Normal Started 0s (x2 over 4s) kubelet, minikube Started container
Normal Pulled 0s kubelet, minikube Container image "richardchesterwood/k8s-fleetman-position-simulator:release1" already present on machine

Inspecting Pod Logs

kubectl logs position-simulator-6f97fd485f-gplr8

. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.4.0.RELEASE)

2019-06-05 06:09:06.044 INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : Starting PositionsimulatorApplication v0.0.1-SNAPSHOT on position-simulator-6f97fd485f-gplr8 with PID 1 (/webapp.jar started by root in /)
2019-06-05 06:09:06.056 INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : The following profiles are active: producadskfjsjfsislfslsj
2019-06-05 06:09:06.151 INFO 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@5f4da5c3: startup date [Wed Jun 05 06:09:06 UTC 2019]; root of context hierarchy
2019-06-05 06:09:07.265 WARN 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'journeySimulator': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'fleetman.position.queue' in string value "${fleetman.position.queue}"
2019-06-05 06:09:07.273 INFO 1 --- [ main] utoConfigurationReportLoggingInitializer :

Error starting ApplicationContext. To display the auto-configuration report enable debug logging (start with --debug)


2019-06-05 06:09:07.286 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application startup failed

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'journeySimulator': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'fleetman.position.queue' in string value "${fleetman.position.queue}"
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:355) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1214) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:776) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:861) ~[spring-context-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:541) ~[spring-context-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:369) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:313) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1185) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1174) [spring-boot-1.4.0.RELEASE.jar!/:1.4.0.RELEASE]
at com.virtualpairprogrammers.simulator.PositionsimulatorApplication.main(PositionsimulatorApplication.java:28) [classes!/:0.0.1-SNAPSHOT]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_131]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [webapp.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [webapp.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [webapp.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:58) [webapp.jar:0.0.1-SNAPSHOT]
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'fleetman.position.queue' in string value "${fleetman.position.queue}"
at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.core.env.AbstractPropertyResolver.doResolvePlaceholders(AbstractPropertyResolver.java:219) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.core.env.AbstractPropertyResolver.resolveRequiredPlaceholders(AbstractPropertyResolver.java:193) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.context.support.PropertySourcesPlaceholderConfigurer$2.resolveStringValue(PropertySourcesPlaceholderConfigurer.java:172) ~[spring-context-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.resolveEmbeddedValue(AbstractBeanFactory.java:813) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1039) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1019) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:566) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:349) ~[spring-beans-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
... 24 common frames omitted

kubectl logs -f position-simulator-6f97fd485f-gplr8 # -f follows the log

cat workloads.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queue
spec:
  selector:
    matchLabels:
      app: queue
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: queue
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-queue:release1

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: position-simulator
spec:
  selector:
    matchLabels:
      app: position-simulator
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: position-simulator
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-position-simulator:release1
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice

kubectl apply -f workloads.yaml
deployment.apps "queue" unchanged
deployment.apps "position-simulator" configured

kga
NAME READY STATUS RESTARTS AGE
pod/position-simulator-6d8769d8-ghtmw 1/1 Running 0 9s
pod/position-simulator-6f97fd485f-gplr8 0/1 Terminating 6 8m
pod/queue-6bf9fd876-4xrqx 1/1 Running 0 59m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-queue NodePort 10.110.95.121 <none> 8161:30010/TCP,61616:30536/TCP 55m
service/fleetman-webapp NodePort 10.103.224.156 <none> 80:30080/TCP 55m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/position-simulator 1 1 1 1 10m
deployment.apps/queue 1 1 1 1 59m

NAME DESIRED CURRENT READY AGE
replicaset.apps/position-simulator-6d8769d8 1 1 1 9s
replicaset.apps/position-simulator-6f97fd485f 0 0 0 8m
replicaset.apps/position-simulator-dff7b7599 0 0 0 10m
replicaset.apps/queue-6bf9fd876 1 1 1 59m

kubectl logs -f position-simulator-6d8769d8-ghtmw

. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.4.0.RELEASE)

2019-06-05 06:13:36.205 INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : Starting PositionsimulatorApplication v0.0.1-SNAPSHOT on position-simulator-6d8769d8-ghtmw with PID 1 (/webapp.jar started by root in /)
2019-06-05 06:13:36.213 INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : The following profiles are active: production-microservice
2019-06-05 06:13:36.361 INFO 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@443b7951: startup date [Wed Jun 05 06:13:36 UTC 2019]; root of context hierarchy
2019-06-05 06:13:38.011 INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2019-06-05 06:13:38.016 INFO 1 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2019-06-05 06:13:38.041 INFO 1 --- [ main] c.v.s.PositionsimulatorApplication : Started PositionsimulatorApplication in 2.487 seconds (JVM running for 3.201)
2019-06-05 06:13:38.046 INFO 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@443b7951: startup date [Wed Jun 05 06:13:36 UTC 2019]; root of context hierarchy
2019-06-05 06:13:38.048 INFO 1 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147483647
2019-06-05 06:13:38.049 INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
^C

Diagram

Deploying the Position Tracker

cat workloads.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queue
spec:
  selector:
    matchLabels:
      app: queue
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: queue
    spec:
      containers:
      - name: queue
        image: richardchesterwood/k8s-fleetman-queue:release1

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: position-simulator
spec:
  selector:
    matchLabels:
      app: position-simulator
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: position-simulator
    spec:
      containers:
      - name: position-simulator
        image: richardchesterwood/k8s-fleetman-position-simulator:release1
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: position-tracker
spec:
  selector:
    matchLabels:
      app: position-tracker
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: position-tracker
    spec:
      containers:
      - name: position-tracker
        image: richardchesterwood/k8s-fleetman-position-tracker:release1
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice

kubectl apply -f workloads.yaml
deployment.apps "queue" configured
deployment.apps "position-simulator" configured
deployment.apps "position-tracker" created

kga
NAME READY STATUS RESTARTS AGE
pod/position-simulator-6d8769d8-ghtmw 0/1 Terminating 0 13m
pod/position-simulator-7889db8b94-49qz2 1/1 Running 0 10s
pod/position-tracker-86d694f997-5j6fm 1/1 Running 0 10s
pod/queue-6bf9fd876-4xrqx 1/1 Terminating 0 1h
pod/queue-9668b9bb4-4pqxr 1/1 Running 0 10s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-queue NodePort 10.110.95.121 <none> 8161:30010/TCP,61616:30536/TCP 1h
service/fleetman-webapp NodePort 10.103.224.156 <none> 80:30080/TCP 1h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/position-simulator 1 1 1 1 24m
deployment.apps/position-tracker 1 1 1 1 10s
deployment.apps/queue 1 1 1 1 1h

NAME DESIRED CURRENT READY AGE
replicaset.apps/position-simulator-6d8769d8 0 0 0 13m
replicaset.apps/position-simulator-6f97fd485f 0 0 0 21m
replicaset.apps/position-simulator-7889db8b94 1 1 1 10s
replicaset.apps/position-simulator-dff7b7599 0 0 0 24m
replicaset.apps/position-tracker-86d694f997 1 1 1 10s
replicaset.apps/queue-6bf9fd876 0 0 0 1h
replicaset.apps/queue-9668b9bb4 1 1 1 10s

kubectl describe pod position-tracker-86d694f997-5j6fm
Name: position-tracker-86d694f997-5j6fm
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Wed, 05 Jun 2019 16:26:58 +1000
Labels: app=position-tracker
pod-template-hash=86d694f997
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/position-tracker-86d694f997
Containers:
position-tracker:
Container ID: docker://c0203a7654c944a92f1b18b48d5c84917b9ecc791536429e6b02ca6248edb78f
Image: richardchesterwood/k8s-fleetman-position-tracker:release1
Image ID: docker-pullable://richardchesterwood/k8s-fleetman-position-tracker@sha256:5da78e936eb77677fbf30e528253a543badc76a5dd3a356b6d13e716da187298
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 05 Jun 2019 16:27:08 +1000
Ready: True
Restart Count: 0
Environment:
SPRING_PROFILES_ACTIVE: production-microservice
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nmqkz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-nmqkz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nmqkz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned default/position-tracker-86d694f997-5j6fm to minikube
Normal Pulling 1m kubelet, minikube pulling image "richardchesterwood/k8s-fleetman-position-tracker:release1"
Normal Pulled 1m kubelet, minikube Successfully pulled image "richardchesterwood/k8s-fleetman-position-tracker:release1"
Normal Created 1m kubelet, minikube Created container
Normal Started 1m kubelet, minikube Started container

kubectl logs -f position-tracker-86d694f997-5j6fm

. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.4.0.RELEASE)

2019-06-05 06:27:14.242 INFO 1 --- [ main] c.v.tracker.PositionTrackerApplication : Starting PositionTrackerApplication v0.0.1-SNAPSHOT on position-tracker-86d694f997-5j6fm with PID 1 (/webapp.jar started by root in /)
2019-06-05 06:27:14.297 INFO 1 --- [ main] c.v.tracker.PositionTrackerApplication : The following profiles are active: production-microservice
2019-06-05 06:27:15.147 INFO 1 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@108c4c35: startup date [Wed Jun 05 06:27:15 UTC 2019]; root of context hierarchy
2019-06-05 06:27:25.787 INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
2019-06-05 06:27:25.852 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service Tomcat
2019-06-05 06:27:25.854 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.4
2019-06-05 06:27:25.985 INFO 1 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-06-05 06:27:25.986 INFO 1 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 10877 ms
2019-06-05 06:27:26.212 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
2019-06-05 06:27:26.221 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-05 06:27:26.222 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-05 06:27:26.223 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2019-06-05 06:27:26.223 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-05 06:27:26.789 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@108c4c35: startup date [Wed Jun 05 06:27:15 UTC 2019]; root of context hierarchy
2019-06-05 06:27:26.900 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/vehicles/{vehicleName}],methods=[GET]}" onto public org.springframework.http.ResponseEntity<com.virtualpairprogrammers.tracker.domain.VehiclePosition> com.virtualpairprogrammers.tracker.rest.PositionReportsController.getLatestReportForVehicle(java.lang.String)
2019-06-05 06:27:26.901 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/vehicles/],methods=[GET]}" onto public java.util.Collection<com.virtualpairprogrammers.tracker.domain.VehiclePosition> com.virtualpairprogrammers.tracker.rest.PositionReportsController.getUpdatedPositions(java.util.Date)
2019-06-05 06:27:26.903 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2019-06-05 06:27:26.904 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2019-06-05 06:27:26.948 INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-05 06:27:26.948 INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-05 06:27:26.991 INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-05 06:27:27.364 INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2019-06-05 06:27:27.414 INFO 1 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2019-06-05 06:27:28.408 INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2019-06-05 06:27:28.436 INFO 1 --- [ main] c.v.tracker.PositionTrackerApplication : Started PositionTrackerApplication in 17.136 seconds (JVM running for 20.295)

ActiveMQ screenshot

cat services.yaml
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: webapp

  ports:
  - name: http
    port: 80
    nodePort: 30080

  type: NodePort

---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-queue

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: queue

  ports:
  - name: http
    port: 8161
    nodePort: 30010

  - name: endpoint
    port: 61616

  type: NodePort

---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-position-tracker

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: position-tracker

  ports:
  - name: http
    port: 8080

  type: ClusterIP

kubectl apply -f services.yaml
service "fleetman-webapp" unchanged
service "fleetman-queue" unchanged
service "fleetman-position-tracker" configured

kga
NAME READY STATUS RESTARTS AGE
pod/position-simulator-589c64887f-lhl8g 1/1 Running 0 38m
pod/position-tracker-86d694f997-5j6fm 1/1 Running 0 45m
pod/queue-9668b9bb4-4pqxr 1/1 Running 0 45m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-position-tracker ClusterIP 10.104.177.133 <none> 8080/TCP 3m
service/fleetman-queue NodePort 10.110.95.121 <none> 8161:30010/TCP,61616:30536/TCP 1h
service/fleetman-webapp NodePort 10.103.224.156 <none> 80:30080/TCP 1h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/position-simulator 1 1 1 1 1h
deployment.apps/position-tracker 1 1 1 1 45m
deployment.apps/queue 1 1 1 1 1h

NAME DESIRED CURRENT READY AGE
replicaset.apps/position-simulator-589c64887f 1 1 1 38m
replicaset.apps/position-simulator-6d8769d8 0 0 0 58m
replicaset.apps/position-simulator-6f97fd485f 0 0 0 1h
replicaset.apps/position-simulator-7889db8b94 0 0 0 45m
replicaset.apps/position-simulator-dff7b7599 0 0 0 1h
replicaset.apps/position-tracker-86d694f997 1 1 1 45m
replicaset.apps/queue-6bf9fd876 0 0 0 1h
replicaset.apps/queue-9668b9bb4 1 1 1 45m

Check

Deploying the API Gateway

Add code in services.yaml file:

---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-api-gateway

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: api-gateway

  ports:
  - name: http
    port: 8080

  type: ClusterIP

Add code in workloads.yaml file:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
spec:
  selector:
    matchLabels:
      app: api-gateway
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: api-gateway
        image: richardchesterwood/k8s-fleetman-api-gateway:release1
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice

kubectl apply -f .

Deploying the Webapp

Add for workloads.yaml file:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      app: webapp
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-webapp-angular:release1
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice

For services.yaml, no change is needed.

final

Persistence

New Release!!

Release 2 is now available!

Please update all images to :release2 tag

New feature: vehicle tracking shows the history of each vehicle


Change the image tags in workloads.yaml from release1 to release2 with :%s/release1/release2/g, then kubectl apply -f workloads.yaml.
release2
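As an alternative sketch (not from the course; deployment and container names as in workloads.yaml), the same rolling update can be triggered without editing the file:

kubectl set image deployment/webapp webapp=richardchesterwood/k8s-fleetman-webapp-angular:release2
kubectl rollout status deployment/webapp

Re-apply the updated yaml afterwards so the file remains the single source of truth.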

Upgrading to a Mongo Pod

release3

Volume Mounts

Volume v1 core

PersistentVolumeClaims

cat storage.yaml
# What do we want?
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

---
# How do we want it implemented
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/somenew/directory/structure/"
    type: DirectoryOrCreate

StorageClasses and Binding

cat mongo-stack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:3.6.12-xenial
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        # pointer to the configuration of HOW we want the mount to be implemented
        persistentVolumeClaim:
          claimName: mongo-pvc
---
kind: Service
apiVersion: v1
metadata:
  name: fleetman-mongodb
spec:
  selector:
    app: mongodb
  ports:
  - name: mongoport
    port: 27017
  type: ClusterIP

cat storage.yaml
# What do we want?
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: mylocalstorage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

---
# How do we want it implemented
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage
spec:
  storageClassName: mylocalstorage
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/somenew/directory/structure/"
    type: DirectoryOrCreate
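To confirm that the claim has bound to the volume, check both objects; the STATUS column should read Bound (the AWS section later shows the same check):

kubectl get pvc
kubectl get pv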

DEPLOYING TO THE AWS CLOUD

Getting started with AWS

image1

image2

image3

ebs

Introducing Kops

Kops

This apparently is the easiest way to get a production-grade Kubernetes cluster up and running.

Installing the Kops Environment

Deploy to aws

Configuring your first cluster

Cluster State storage

[ec2-user@foobar ~]$ export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
[ec2-user@foobar ~]$ export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
[ec2-user@foobar ~]$ export NAME=fleetman.k8s.local
[ec2-user@foobar ~]$ export KOPS_STATE_STORE=s3://stanzhou-state-storage
[ec2-user@foobar ~]$ aws ec2 describe-availability-zones --region ap-southeast-2
{
    "AvailabilityZones": [
        {
            "State": "available",
            "ZoneName": "ap-southeast-2a",
            "Messages": [],
            "ZoneId": "apse2-az1",
            "RegionName": "ap-southeast-2"
        },
        {
            "State": "available",
            "ZoneName": "ap-southeast-2b",
            "Messages": [],
            "ZoneId": "apse2-az3",
            "RegionName": "ap-southeast-2"
        },
        {
            "State": "available",
            "ZoneName": "ap-southeast-2c",
            "Messages": [],
            "ZoneId": "apse2-az2",
            "RegionName": "ap-southeast-2"
        }
    ]
}

$ kops create cluster --zones ap-southeast-2a,ap-southeast-2b,ap-southeast-2c ${NAME}
I0607 06:10:02.189636 3468 create_cluster.go:519] Inferred --cloud=aws from zone "ap-southeast-2a"
I0607 06:10:02.243690 3468 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet ap-southeast-2a
I0607 06:10:02.243802 3468 subnets.go:184] Assigned CIDR 172.20.64.0/19 to subnet ap-southeast-2b
I0607 06:10:02.243857 3468 subnets.go:184] Assigned CIDR 172.20.96.0/19 to subnet ap-southeast-2c
Previewing changes that will be made:


SSH public key must be specified when running with AWS (create with `kops create secret --name fleetman.k8s.local sshpublickey admin -i ~/.ssh/id_rsa.pub`)

[ec2-user@foobar ~]$ ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa
[ec2-user@foobar ~]$ kops create secret --name ${NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub
[ec2-user@foobar ~]$ kops edit ig nodes --name ${NAME}
[ec2-user@foobar ~]$ kops get ig --name ${NAME}
NAME ROLE MACHINETYPE MIN MAX ZONES
master-ap-southeast-2a Master m3.medium 1 1 ap-southeast-2a
nodes Node t2.medium 3 5 ap-southeast-2a,ap-southeast-2b,ap-southeast-2c
[ec2-user@foobar ~]$ kops edit ig master-ap-southeast-2a --name ${NAME}
Edit cancelled, no changes made.
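What kops edit ig nodes opens is the instance group spec. A sketch based on the MACHINETYPE/MIN/MAX columns above (standard kops field names; the edit here raised the node count):

spec:
  machineType: t2.medium
  maxSize: 5
  minSize: 3
  role: Node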

Running the Cluster

[ec2-user@i ~]$ kops update cluster ${NAME} --yes
I0607 06:28:38.011102 32239 apply_cluster.go:559] Gossip DNS: skipping DNS validation
I0607 06:28:38.263244 32239 executor.go:103] Tasks: 0 done / 94 total; 42 can run
I0607 06:28:39.702035 32239 vfs_castore.go:729] Issuing new certificate: "apiserver-aggregator-ca"
I0607 06:28:40.216189 32239 vfs_castore.go:729] Issuing new certificate: "etcd-clients-ca"
I0607 06:28:40.356654 32239 vfs_castore.go:729] Issuing new certificate: "etcd-peers-ca-main"
I0607 06:28:40.743191 32239 vfs_castore.go:729] Issuing new certificate: "etcd-peers-ca-events"
I0607 06:28:40.824760 32239 vfs_castore.go:729] Issuing new certificate: "etcd-manager-ca-events"
I0607 06:28:41.265388 32239 vfs_castore.go:729] Issuing new certificate: "etcd-manager-ca-main"
I0607 06:28:41.373174 32239 vfs_castore.go:729] Issuing new certificate: "ca"
I0607 06:28:41.551597 32239 executor.go:103] Tasks: 42 done / 94 total; 26 can run
I0607 06:28:42.539134 32239 vfs_castore.go:729] Issuing new certificate: "kube-scheduler"
I0607 06:28:42.891972 32239 vfs_castore.go:729] Issuing new certificate: "kubecfg"
I0607 06:28:43.157916 32239 vfs_castore.go:729] Issuing new certificate: "apiserver-proxy-client"
I0607 06:28:43.556052 32239 vfs_castore.go:729] Issuing new certificate: "kubelet"
I0607 06:28:43.677894 32239 vfs_castore.go:729] Issuing new certificate: "apiserver-aggregator"
I0607 06:28:43.748079 32239 vfs_castore.go:729] Issuing new certificate: "kube-proxy"
I0607 06:28:44.025132 32239 vfs_castore.go:729] Issuing new certificate: "kubelet-api"
I0607 06:28:44.589696 32239 vfs_castore.go:729] Issuing new certificate: "kube-controller-manager"
I0607 06:28:44.730038 32239 vfs_castore.go:729] Issuing new certificate: "kops"
I0607 06:28:44.864527 32239 executor.go:103] Tasks: 68 done / 94 total; 22 can run
I0607 06:28:45.089177 32239 launchconfiguration.go:364] waiting for IAM instance profile "masters.fleetman.k8s.local" to be ready
I0607 06:28:45.101954 32239 launchconfiguration.go:364] waiting for IAM instance profile "nodes.fleetman.k8s.local" to be ready
I0607 06:28:55.483430 32239 executor.go:103] Tasks: 90 done / 94 total; 3 can run
I0607 06:28:55.974524 32239 vfs_castore.go:729] Issuing new certificate: "master"
I0607 06:28:56.119668 32239 executor.go:103] Tasks: 93 done / 94 total; 1 can run
I0607 06:28:56.336766 32239 executor.go:103] Tasks: 94 done / 94 total; 0 can run
I0607 06:28:56.407976 32239 update_cluster.go:291] Exporting kubecfg for cluster
kops has set your kubectl context to fleetman.k8s.local

Cluster is starting. It should be ready in a few minutes.

Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.fleetman.k8s.local
* the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
* read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.

[ec2-user@i ~]$ kops validate cluster
Using cluster from kubectl context: fleetman.k8s.local

Validating cluster fleetman.k8s.local

INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-ap-southeast-2a Master m3.medium 1 1 ap-southeast-2a
nodes Node t2.medium 3 5 ap-southeast-2a,ap-southeast-2b,ap-southeast-2c

NODE STATUS
NAME ROLE READY
ip-172-20-115-253.ap-southeast-2.compute.internal node True
ip-172-20-39-212.ap-southeast-2.compute.internal node True
ip-172-20-45-219.ap-southeast-2.compute.internal master True
ip-172-20-89-8.ap-southeast-2.compute.internal node True

Your cluster fleetman.k8s.local is ready

[ec2-user@i ~]$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 4m50s

Provisioning SSD drives with a StorageClass

We have a workloads.yaml file defining the pods that we want to deploy to our cluster. We have mongo-stack.yaml, a specialist file just for the Mongo database. We have storage.yaml, which currently defines that the Mongo data is stored in a local directory on the host machine. And we have a yaml file for the services.

[aws ebs](https://kubernetes.io/docs/concepts/storage/storage-classes/#aws-ebs)

[ec2-user@i ~]$ cat storage-aws.yaml
# What do we want?
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: cloud-ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 7Gi

---
# How do we want it implemented
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

[ec2-user@ip-1 ~]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongo-pvc Bound pvc-b2ed286e-88f3-11e9-b509-02985f983814 7Gi RWO cloud-ssd 3m45s

[ec2-user@ip-1 ~]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-b2ed286e-88f3-11e9-b509-02985f983814 7Gi RWO Delete Bound default/mongo-pvc cloud-ssd 18s

ebs volume created

Deploying the Fleetman Workload

[ec2-user@ip-1-9 ~]$ cat services.yaml
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: webapp

  ports:
  - name: http
    port: 80
  type: LoadBalancer

---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-queue

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: queue

  ports:
  - name: http
    port: 8161

  - name: endpoint
    port: 61616

  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-position-tracker

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: position-tracker

  ports:
  - name: http
    port: 8080

  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  name: fleetman-api-gateway

spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: api-gateway

  ports:
  - name: http
    port: 8080

  type: ClusterIP

[ec2-user@ip-1-4-9 ~]$ cat workloads.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queue
spec:
  selector:
    matchLabels:
      app: queue
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: queue
    spec:
      containers:
      - name: queue
        image: richardchesterwood/k8s-fleetman-queue:release2

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: position-simulator
spec:
  selector:
    matchLabels:
      app: position-simulator
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: position-simulator
    spec:
      containers:
      - name: position-simulator
        image: richardchesterwood/k8s-fleetman-position-simulator:release2
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: position-tracker
spec:
  selector:
    matchLabels:
      app: position-tracker
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: position-tracker
    spec:
      containers:
      - name: position-tracker
        image: richardchesterwood/k8s-fleetman-position-tracker:release3
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
spec:
  selector:
    matchLabels:
      app: api-gateway
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: api-gateway
        image: richardchesterwood/k8s-fleetman-api-gateway:release2
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      app: webapp
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-webapp-angular:release2
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice
[ec2-user@-4-9 ~]$ kubectl apply -f .
deployment.apps/mongodb unchanged
service/fleetman-mongodb unchanged
service/fleetman-webapp unchanged
service/fleetman-queue unchanged
service/fleetman-position-tracker unchanged
service/fleetman-api-gateway unchanged
persistentvolumeclaim/mongo-pvc unchanged
storageclass.storage.k8s.io/cloud-ssd unchanged
deployment.apps/queue configured
deployment.apps/position-simulator configured
deployment.apps/position-tracker unchanged
deployment.apps/api-gateway configured
deployment.apps/webapp configured
[ec2-user@ip-172-31-4-9 ~]$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/api-gateway-5d445d6f69-5gmm4 1/1 Running 0 82s
pod/mongodb-5559556bf-jjkpw 1/1 Running 0 20m
pod/position-simulator-7ffd4f8f68-2gnjr 1/1 Running 0 82s
pod/position-tracker-5ff4fb7479-jjj9f 1/1 Running 0 11m
pod/queue-75f4ddd795-6vn9d 1/1 Running 0 82s
pod/webapp-689dd9b4f4-ntdpz 1/1 Running 0 82s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-api-gateway ClusterIP 100.65.245.184 <none> 8080/TCP 11m
service/fleetman-mongodb ClusterIP 100.70.242.251 <none> 27017/TCP 20m
service/fleetman-position-tracker ClusterIP 100.70.48.51 <none> 8080/TCP 11m
service/fleetman-queue ClusterIP 100.65.176.119 <none> 8161/TCP,61616/TCP 11m
service/fleetman-webapp LoadBalancer 100.70.56.219 a660e0c5e88f611e9b50902985f98381-1044209190.ap-southeast-2.elb.amazonaws.com 80:30149/TCP 11m
service/kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 68m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/api-gateway 1 1 1 1 11m
deployment.apps/mongodb 1 1 1 1 20m
deployment.apps/position-simulator 1 1 1 1 11m
deployment.apps/position-tracker 1 1 1 1 11m
deployment.apps/queue 1 1 1 1 11m
deployment.apps/webapp 1 1 1 1 11m

NAME DESIRED CURRENT READY AGE
replicaset.apps/api-gateway-5d445d6f69 1 1 1 82s
replicaset.apps/api-gateway-6d7dccc464 0 0 0 11m
replicaset.apps/mongodb-5559556bf 1 1 1 20m
replicaset.apps/position-simulator-549554f4d9 0 0 0 11m
replicaset.apps/position-simulator-7ffd4f8f68 1 1 1 82s
replicaset.apps/position-tracker-5ff4fb7479 1 1 1 11m
replicaset.apps/queue-75f4ddd795 1 1 1 82s
replicaset.apps/queue-b46577b46 0 0 0 11m
replicaset.apps/webapp-689dd9b4f4 1 1 1 82s
replicaset.apps/webapp-6cdd565c5 0 0 0 11m
[ec2-user@ip-172-31-4-9 ~]$ kubectl log -f pod/position-tracker-5ff4fb7479-jjj9f
log is DEPRECATED and will be removed in a future version. Use logs instead.
2019-06-07 07:42:40.878 ERROR 1 --- [enerContainer-1] o.s.j.l.DefaultMessageListenerContainer : Could not refresh JMS Connection for destination 'positionQueue' - retrying using FixedBackOff{interval=5000, currentAttempts=15, maxAttempts=unlimited}. Cause: Could not connect to broker URL: tcp://fleetman-queue.default.svc.cluster.local:61616. Reason: java.net.SocketException: Socket closed
2019-06-07 07:42:46.002 INFO 1 --- [enerContainer-1] o.s.j.l.DefaultMessageListenerContainer : Successfully refreshed JMS Connection
2019-06-07 07:42:47.440 INFO 1 --- [nio-8080-exec-8] org.mongodb.driver.connection : Opened connection [connectionId{localValue:3, serverValue:3}] to fleetman-mongodb.default.svc.cluster.local:27017
error: unexpected EOF

Docker Swarm uses a concept called a routing mesh to find the node that your web application is running on. None of that is used here; Kubernetes uses a standard AWS load balancer to route traffic to the correct node.

Setting up a real Domain Name

Add a CNAME record in your own domain pointing at the ELB address.
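For example (hypothetical domain; the ELB hostname is the EXTERNAL-IP that kubectl get svc fleetman-webapp reports):

fleetman.example.com.  CNAME  a660e0c5e88f611e9b50902985f98381-1044209190.ap-southeast-2.elb.amazonaws.com.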

Surviving Node Failure

Requirement


Even in the event of a Node (or Availability Zone) failure, the web site must be accessible

It doesn’t matter if reports from vehicles stop coming in, as long as service is restored within a few minutes

[ec2-user@foobar ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api-gateway-5d445d6f69-5gmm4 1/1 Running 0 93m
mongodb-5559556bf-jjkpw 1/1 Running 0 112m
position-simulator-7ffd4f8f68-2gnjr 1/1 Running 0 93m
position-tracker-5ff4fb7479-jjj9f 1/1 Running 0 103m
queue-75f4ddd795-6vn9d 1/1 Running 0 93m
webapp-689dd9b4f4-ntdpz 1/1 Running 0 93m
[ec2-user@ip-172-31-4-9 ~]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
api-gateway-5d445d6f69-5gmm4 1/1 Running 0 93m 100.96.3.5 ip-172-20-39-212.ap-southeast-2.compute.internal <none>
mongodb-5559556bf-jjkpw 1/1 Running 0 112m 100.96.1.4 ip-172-20-89-8.ap-southeast-2.compute.internal <none>
position-simulator-7ffd4f8f68-2gnjr 1/1 Running 0 93m 100.96.2.5 ip-172-20-115-253.ap-southeast-2.compute.internal <none>
position-tracker-5ff4fb7479-jjj9f 1/1 Running 0 103m 100.96.3.4 ip-172-20-39-212.ap-southeast-2.compute.internal <none>
queue-75f4ddd795-6vn9d 1/1 Running 0 93m 100.96.1.5 ip-172-20-89-8.ap-southeast-2.compute.internal <none>
webapp-689dd9b4f4-ntdpz 1/1 Running 0 93m 100.96.2.6 ip-172-20-115-253.ap-southeast-2.compute.internal <none>

Replicating Pods

For our example, take the queue pod and give it two replicas, so that in the event of a node failure one of them would always survive. Unfortunately you can't do that, because this particular pod, the queue pod, is stateful: it contains data. If you replicate it, you'll end up with a kind of split-brain situation where half the data is in one pod and half in the other, and all kinds of chaos will follow from that. Really, what you're aiming for with any pod is to make it stateless, so it's not holding data. The webapp, by contrast, is stateless and safe to replicate, as sketched below.
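A minimal sketch for the stateless webapp (the same deployment as in workloads.yaml, only replicas changed); with two replicas, at least one pod should survive a node failure:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      app: webapp
  replicas: 2 # safe to replicate: the webapp holds no state
  template: # template for the pods
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: richardchesterwood/k8s-fleetman-webapp-angular:release2
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production-microservice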

Deleting the Cluster

[ec2-user@foobar ~]$ export NAME=fleetman.k8s.local
[ec2-user@foobar ~]$ kops delete cluster --name ${NAME} --yes

State Store: Required value: Please set the --state flag or export KOPS_STATE_STORE.
For example, a valid value follows the format s3://<bucket>.
You can find the supported stores in https://github.com/kubernetes/kops/blob/master/docs/state.md.
[ec2-user@ip-172-31-4-9 ~]$ export KOPS_STATE_STORE=s3://stanzhou-state-storage
[ec2-user@ip-172-31-4-9 ~]$ kops delete cluster --name ${NAME} --yes
TYPE NAME ID
autoscaling-config master-ap-southeast-2a.masters.fleetman.k8s.local-20190607062844 master-ap-southeast-2a.masters.fleetman.k8s.local-20190607062844
autoscaling-config nodes.fleetman.k8s.local-20190607062844 nodes.fleetman.k8s.local-20190607062844
autoscaling-group master-ap-southeast-2a.masters.fleetman.k8s.local master-ap-southeast-2a.masters.fleetman.k8s.local
autoscaling-group nodes.fleetman.k8s.local nodes.fleetman.k8s.local
dhcp-options fleetman.k8s.local dopt-0a0be88814a0c83a9
iam-instance-profile masters.fleetman.k8s.local masters.fleetman.k8s.local
iam-instance-profile nodes.fleetman.k8s.local nodes.fleetman.k8s.local
iam-role masters.fleetman.k8s.local masters.fleetman.k8s.local
iam-role nodes.fleetman.k8s.local nodes.fleetman.k8s.local
instance master-ap-southeast-2a.masters.fleetman.k8s.local i-012c7c446b65343d4
instance nodes.fleetman.k8s.local i-07583ee103342be9a
instance nodes.fleetman.k8s.local i-079cea61e4a7736b9
instance nodes.fleetman.k8s.local i-0bf949dbd290d81c3
internet-gateway fleetman.k8s.local igw-0a939d66d6e93e0d5
keypair kubernetes.fleetman.k8s.local-fc:11:5b:a8:1d:16:4a:36:36:15:2d:9f:f3:69:d2:0a kubernetes.fleetman.k8s.local-fc:11:5b:a8:1d:16:4a:36:36:15:2d:9f:f3:69:d2:0a
load-balancer a660e0c5e88f611e9b50902985f98381
load-balancer api.fleetman.k8s.local api-fleetman-k8s-local-tkmafs
route-table fleetman.k8s.local rtb-06b591f24a01973f6
security-group sg-07b79756088cf753c
security-group api-elb.fleetman.k8s.local sg-005c9b49b63793004
security-group masters.fleetman.k8s.local sg-07ef00367ce1a7b62
security-group nodes.fleetman.k8s.local sg-01f81918cdbaba212
subnet ap-southeast-2a.fleetman.k8s.local subnet-060ec2db19027cf6a
subnet ap-southeast-2b.fleetman.k8s.local subnet-0def003bdbfd97915
subnet ap-southeast-2c.fleetman.k8s.local subnet-0016862a30fe5f443
volume a.etcd-events.fleetman.k8s.local vol-0d21c97044fcc06dd
volume a.etcd-main.fleetman.k8s.local vol-0f1c9f3e983c5848a
volume fleetman.k8s.local-dynamic-pvc-b2ed286e-88f3-11e9-b509-02985f983814 vol-074914e494e4b656d
vpc fleetman.k8s.local vpc-0775d4b463932d2f7

load-balancer:api-fleetman-k8s-local-tkmafs ok
load-balancer:a660e0c5e88f611e9b50902985f98381 ok
autoscaling-group:nodes.fleetman.k8s.local ok
keypair:kubernetes.fleetman.k8s.local-fc:11:5b:a8:1d:16:4a:36:36:15:2d:9f:f3:69:d2:0a ok
internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry
autoscaling-group:master-ap-southeast-2a.masters.fleetman.k8s.local ok
instance:i-012c7c446b65343d4 ok
instance:i-07583ee103342be9a ok
instance:i-0bf949dbd290d81c3 ok
instance:i-079cea61e4a7736b9 ok
iam-instance-profile:nodes.fleetman.k8s.local ok
iam-instance-profile:masters.fleetman.k8s.local ok
iam-role:nodes.fleetman.k8s.local ok
iam-role:masters.fleetman.k8s.local ok
subnet:subnet-0def003bdbfd97915 still has dependencies, will retry
subnet:subnet-0016862a30fe5f443 still has dependencies, will retry
autoscaling-config:nodes.fleetman.k8s.local-20190607062844 ok
volume:vol-0d21c97044fcc06dd still has dependencies, will retry
autoscaling-config:master-ap-southeast-2a.masters.fleetman.k8s.local-20190607062844 ok
volume:vol-0f1c9f3e983c5848a still has dependencies, will retry
subnet:subnet-060ec2db19027cf6a still has dependencies, will retry
volume:vol-074914e494e4b656d still has dependencies, will retry
security-group:sg-005c9b49b63793004 still has dependencies, will retry
security-group:sg-01f81918cdbaba212 still has dependencies, will retry
security-group:sg-07b79756088cf753c still has dependencies, will retry
security-group:sg-07ef00367ce1a7b62 still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
route-table:rtb-06b591f24a01973f6
internet-gateway:igw-0a939d66d6e93e0d5
security-group:sg-005c9b49b63793004
subnet:subnet-060ec2db19027cf6a
security-group:sg-07b79756088cf753c
subnet:subnet-0016862a30fe5f443
subnet:subnet-0def003bdbfd97915
security-group:sg-07ef00367ce1a7b62
volume:vol-0d21c97044fcc06dd
dhcp-options:dopt-0a0be88814a0c83a9
volume:vol-074914e494e4b656d
security-group:sg-01f81918cdbaba212
volume:vol-0f1c9f3e983c5848a
vpc:vpc-0775d4b463932d2f7
subnet:subnet-060ec2db19027cf6a still has dependencies, will retry
subnet:subnet-0def003bdbfd97915 still has dependencies, will retry
subnet:subnet-0016862a30fe5f443 still has dependencies, will retry
volume:vol-074914e494e4b656d still has dependencies, will retry
volume:vol-0f1c9f3e983c5848a still has dependencies, will retry
internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry
volume:vol-0d21c97044fcc06dd still has dependencies, will retry
security-group:sg-01f81918cdbaba212 still has dependencies, will retry
security-group:sg-005c9b49b63793004 still has dependencies, will retry
security-group:sg-07ef00367ce1a7b62 still has dependencies, will retry
security-group:sg-07b79756088cf753c still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
volume:vol-074914e494e4b656d
security-group:sg-01f81918cdbaba212
volume:vol-0f1c9f3e983c5848a
vpc:vpc-0775d4b463932d2f7
route-table:rtb-06b591f24a01973f6
internet-gateway:igw-0a939d66d6e93e0d5
security-group:sg-005c9b49b63793004
subnet:subnet-060ec2db19027cf6a
subnet:subnet-0016862a30fe5f443
security-group:sg-07b79756088cf753c
volume:vol-0d21c97044fcc06dd
subnet:subnet-0def003bdbfd97915
security-group:sg-07ef00367ce1a7b62
dhcp-options:dopt-0a0be88814a0c83a9
subnet:subnet-060ec2db19027cf6a still has dependencies, will retry
subnet:subnet-0def003bdbfd97915 still has dependencies, will retry
volume:vol-0f1c9f3e983c5848a still has dependencies, will retry
volume:vol-0d21c97044fcc06dd still has dependencies, will retry
internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry
volume:vol-074914e494e4b656d still has dependencies, will retry
subnet:subnet-0016862a30fe5f443 still has dependencies, will retry
security-group:sg-07b79756088cf753c still has dependencies, will retry
security-group:sg-01f81918cdbaba212 still has dependencies, will retry
security-group:sg-07ef00367ce1a7b62 still has dependencies, will retry
security-group:sg-005c9b49b63793004 still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
route-table:rtb-06b591f24a01973f6
internet-gateway:igw-0a939d66d6e93e0d5
security-group:sg-005c9b49b63793004
subnet:subnet-060ec2db19027cf6a
subnet:subnet-0016862a30fe5f443
security-group:sg-07b79756088cf753c
volume:vol-0d21c97044fcc06dd
subnet:subnet-0def003bdbfd97915
security-group:sg-07ef00367ce1a7b62
dhcp-options:dopt-0a0be88814a0c83a9
volume:vol-074914e494e4b656d
security-group:sg-01f81918cdbaba212
volume:vol-0f1c9f3e983c5848a
vpc:vpc-0775d4b463932d2f7
subnet:subnet-0def003bdbfd97915 still has dependencies, will retry
subnet:subnet-060ec2db19027cf6a still has dependencies, will retry
internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry
volume:vol-074914e494e4b656d still has dependencies, will retry
volume:vol-0d21c97044fcc06dd ok
volume:vol-0f1c9f3e983c5848a ok
security-group:sg-01f81918cdbaba212 still has dependencies, will retry
subnet:subnet-0016862a30fe5f443 ok
security-group:sg-07b79756088cf753c still has dependencies, will retry
security-group:sg-07ef00367ce1a7b62 still has dependencies, will retry
security-group:sg-005c9b49b63793004 still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
dhcp-options:dopt-0a0be88814a0c83a9
volume:vol-074914e494e4b656d
security-group:sg-01f81918cdbaba212
vpc:vpc-0775d4b463932d2f7
route-table:rtb-06b591f24a01973f6
internet-gateway:igw-0a939d66d6e93e0d5
security-group:sg-005c9b49b63793004
subnet:subnet-060ec2db19027cf6a
security-group:sg-07b79756088cf753c
subnet:subnet-0def003bdbfd97915
security-group:sg-07ef00367ce1a7b62
internet-gateway:igw-0a939d66d6e93e0d5 still has dependencies, will retry
volume:vol-074914e494e4b656d ok
subnet:subnet-060ec2db19027cf6a still has dependencies, will retry
security-group:sg-01f81918cdbaba212 still has dependencies, will retry
security-group:sg-005c9b49b63793004 still has dependencies, will retry
security-group:sg-07b79756088cf753c still has dependencies, will retry
subnet:subnet-0def003bdbfd97915 ok
security-group:sg-07ef00367ce1a7b62 ok
Not all resources deleted; waiting before reattempting deletion
security-group:sg-01f81918cdbaba212
vpc:vpc-0775d4b463932d2f7
route-table:rtb-06b591f24a01973f6
internet-gateway:igw-0a939d66d6e93e0d5
security-group:sg-005c9b49b63793004
subnet:subnet-060ec2db19027cf6a
security-group:sg-07b79756088cf753c
dhcp-options:dopt-0a0be88814a0c83a9
subnet:subnet-060ec2db19027cf6a still has dependencies, will retry
security-group:sg-005c9b49b63793004 still has dependencies, will retry
security-group:sg-07b79756088cf753c still has dependencies, will retry
internet-gateway:igw-0a939d66d6e93e0d5 ok
security-group:sg-01f81918cdbaba212 ok
Not all resources deleted; waiting before reattempting deletion
dhcp-options:dopt-0a0be88814a0c83a9
vpc:vpc-0775d4b463932d2f7
route-table:rtb-06b591f24a01973f6
security-group:sg-005c9b49b63793004
subnet:subnet-060ec2db19027cf6a
security-group:sg-07b79756088cf753c
subnet:subnet-060ec2db19027cf6a ok
security-group:sg-005c9b49b63793004 ok
security-group:sg-07b79756088cf753c ok
route-table:rtb-06b591f24a01973f6 ok
vpc:vpc-0775d4b463932d2f7 ok
dhcp-options:dopt-0a0be88814a0c83a9 ok
Deleted kubectl config for fleetman.k8s.local

Deleted cluster: "fleetman.k8s.local"

Restarting the Cluster

[ec2-user@foobar ~]$ history|grep export
4 export NAME=fleetman.k8s.local
6 export KOPS_STATE_STORE=s3://stanzhou-state-storage

kops create cluster --zones ap-southeast-2a,ap-southeast-2b,ap-southeast-2c ${NAME}

kops edit ig --name=fleetman.k8s.local nodes
kops update cluster ${NAME} --yes
kops validate cluster
kubectl apply -f .
kubectl get svc
kubectl describe svc fleetman-webapp

Logging a Cluster

Introducing the ELK/ElasticStack

ELK Stack
ELK Stack2

[ec2-user@ip-172-31-4-9 ~]$ kubectl get po
NAME READY STATUS RESTARTS AGE
api-gateway-5d445d6f69-kg68n 1/1 Running 0 19h
mongodb-5559556bf-ghq5p 1/1 Running 0 19h
position-simulator-7ffd4f8f68-kwlcb 1/1 Running 0 19h
position-tracker-5ff4fb7479-2ctzt 1/1 Running 0 19h
queue-75f4ddd795-wp2mh 1/1 Running 0 10h
webapp-689dd9b4f4-php92 1/1 Running 0 19h
webapp-689dd9b4f4-vgtpg 1/1 Running 0 19h
[ec2-user@ip-172-31-4-9 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fleetman-api-gateway ClusterIP 100.67.195.111 <none> 8080/TCP 20h
fleetman-mongodb ClusterIP 100.68.143.136 <none> 27017/TCP 20h
fleetman-position-tracker ClusterIP 100.69.119.23 <none> 8080/TCP 20h
fleetman-queue ClusterIP 100.65.16.172 <none> 8161/TCP,61616/TCP 20h
fleetman-webapp LoadBalancer 100.70.168.148 ac185d1638c0311e9bef7028ed3b83c9-1696162465.ap-southeast-2.elb.amazonaws.com 80:31504/TCP 20h
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 20h
[ec2-user@ip-172-31-4-9 ~]$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-logging ClusterIP 100.68.193.244 <none> 9200/TCP 19h
kibana-logging LoadBalancer 100.67.153.183 a3c20f00a8c0811e9bef7028ed3b83c9-306741856.ap-southeast-2.elb.amazonaws.com 5601:32716/TCP 19h
kube-dns ClusterIP 100.64.0.10 <none> 53/UDP,53/TCP 20h
[ec2-user@ip-172-31-4-9 ~]$ kubectl describe svc kibana-logging -n kube-system
Name: kibana-logging
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kibana-logging
kubernetes.io/cluster-service=true
kubernetes.io/name=Kibana
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kibana...
Selector: k8s-app=kibana-logging
Type: LoadBalancer
IP: 100.67.153.183
LoadBalancer Ingress: a3c20f00a8c0811e9bef7028ed3b83c9-306741856.ap-southeast-2.elb.amazonaws.com
Port: <unset> 5601/TCP
TargetPort: ui/TCP
NodePort: <unset> 32716/TCP
Endpoints: 100.96.1.5:5601
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

MONITORING WITH PROMETHEUS AND GRAFANA

Monitoring a Cluster

Prometheus
Grafana
Concentrate solely on integrating Grafana with Prometheus.

Helm Package Manager

The Kubernetes package manager Helm

Install Helm:

wget https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar zxvf helm-v2.14.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/
rm helm-v2.14.1-linux-amd64.tar.gz
rm -rf ./linux-amd64/
helm --help
helm version
helm init

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Delete Helm:

helm ls
helm delete --purge my-special-installation
kubectl get po

Installing Prometheus Operator

install

helm install --name monitoring --namespace monitoring stable/prometheus-operator

[ec2-user@ip-172-31-4-9 ~]$ kubectl get all -n monitoring
NAME READY STATUS RESTARTS AGE
pod/alertmanager-monitoring-prometheus-oper-alertmanager-0 2/2 Running 0 4m
pod/monitoring-grafana-c768bb86f-cgmj8 2/2 Running 0 4m27s
pod/monitoring-kube-state-metrics-6488587c6-zrjmg 1/1 Running 0 4m27s
pod/monitoring-prometheus-node-exporter-5cp7v 1/1 Running 0 4m27s
pod/monitoring-prometheus-node-exporter-75v99 1/1 Running 0 4m27s
pod/monitoring-prometheus-node-exporter-jcl7r 1/1 Running 0 4m27s
pod/monitoring-prometheus-node-exporter-vptns 1/1 Running 0 4m27s
pod/monitoring-prometheus-oper-operator-7b54f56766-j8k6c 1/1 Running 0 4m27s
pod/prometheus-monitoring-prometheus-oper-prometheus-0 3/3 Running 1 3m52s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-operated ClusterIP None <none> 9093/TCP,6783/TCP 4m
service/monitoring-grafana ClusterIP 100.70.116.236 <none> 80/TCP 4m28s
service/monitoring-kube-state-metrics ClusterIP 100.67.183.154 <none> 8080/TCP 4m28s
service/monitoring-prometheus-node-exporter ClusterIP 100.68.145.110 <none> 9100/TCP 4m28s
service/monitoring-prometheus-oper-alertmanager ClusterIP 100.70.75.68 <none> 9093/TCP 4m28s
service/monitoring-prometheus-oper-operator ClusterIP 100.66.237.147 <none> 8080/TCP 4m28s
service/monitoring-prometheus-oper-prometheus ClusterIP 100.67.205.22 <none> 9090/TCP 4m28s
service/prometheus-operated ClusterIP None <none> 9090/TCP 3m53s

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/monitoring-prometheus-node-exporter 4 4 4 4 4 <none> 4m27s

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/monitoring-grafana 1 1 1 1 4m27s
deployment.apps/monitoring-kube-state-metrics 1 1 1 1 4m27s
deployment.apps/monitoring-prometheus-oper-operator 1 1 1 1 4m27s

NAME DESIRED CURRENT READY AGE
replicaset.apps/monitoring-grafana-c768bb86f 1 1 1 4m27s
replicaset.apps/monitoring-kube-state-metrics-6488587c6 1 1 1 4m27s
replicaset.apps/monitoring-prometheus-oper-operator-7b54f56766 1 1 1 4m27s

NAME DESIRED CURRENT AGE
statefulset.apps/alertmanager-monitoring-prometheus-oper-alertmanager 1 1 4m
statefulset.apps/prometheus-monitoring-prometheus-oper-prometheus 1 1 3m53s

[ec2-user@ip-172-31-4-9 ~]$ kubectl edit -n monitoring service/monitoring-prometheus-oper-prometheus

Change type from ClusterIP to LoadBalancer

[ec2-user@ip-172-31-4-9 ~]$ kubectl get all -n monitoring
NAME READY STATUS RESTARTS AGE
pod/alertmanager-monitoring-prometheus-oper-alertmanager-0 2/2 Running 0 16m
pod/monitoring-grafana-c768bb86f-cgmj8 2/2 Running 0 17m
pod/monitoring-kube-state-metrics-6488587c6-zrjmg 1/1 Running 0 17m
pod/monitoring-prometheus-node-exporter-5cp7v 1/1 Running 0 17m
pod/monitoring-prometheus-node-exporter-75v99 1/1 Running 0 17m
pod/monitoring-prometheus-node-exporter-jcl7r 1/1 Running 0 17m
pod/monitoring-prometheus-node-exporter-vptns 1/1 Running 0 17m
pod/monitoring-prometheus-oper-operator-7b54f56766-j8k6c 1/1 Running 0 17m
pod/prometheus-monitoring-prometheus-oper-prometheus-0 3/3 Running 1 16m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-operated ClusterIP None <none> 9093/TCP,6783/TCP 16m
service/monitoring-grafana ClusterIP 100.70.116.236 <none> 80/TCP 17m
service/monitoring-kube-state-metrics ClusterIP 100.67.183.154 <none> 8080/TCP 17m
service/monitoring-prometheus-node-exporter ClusterIP 100.68.145.110 <none> 9100/TCP 17m
service/monitoring-prometheus-oper-alertmanager ClusterIP 100.70.75.68 <none> 9093/TCP 17m
service/monitoring-prometheus-oper-operator ClusterIP 100.66.237.147 <none> 8080/TCP 17m
service/monitoring-prometheus-oper-prometheus LoadBalancer 100.67.205.22 a86903ccd8ce211e9bef7028ed3b83c9-1698452662.ap-southeast-2.elb.amazonaws.com 9090:32096/TCP 17m
service/prometheus-operated ClusterIP None <none> 9090/TCP 16m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/monitoring-prometheus-node-exporter 4 4 4 4 4 <none> 17m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/monitoring-grafana 1 1 1 1 17m
deployment.apps/monitoring-kube-state-metrics 1 1 1 1 17m
deployment.apps/monitoring-prometheus-oper-operator 1 1 1 1 17m

NAME DESIRED CURRENT READY AGE
replicaset.apps/monitoring-grafana-c768bb86f 1 1 1 17m
replicaset.apps/monitoring-kube-state-metrics-6488587c6 1 1 1 17m
replicaset.apps/monitoring-prometheus-oper-operator-7b54f56766 1 1 1 17m

NAME DESIRED CURRENT AGE
statefulset.apps/alertmanager-monitoring-prometheus-oper-alertmanager 1 1 16m
statefulset.apps/prometheus-monitoring-prometheus-oper-prometheus 1 1 16m

Working with Grafana

Change service/monitoring-prometheus-oper-prometheus back from LoadBalancer to ClusterIP, and change service/monitoring-grafana from ClusterIP to LoadBalancer.
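Instead of interactive kubectl edit, the same switch can be scripted with kubectl patch (a sketch; service names as listed above):

kubectl -n monitoring patch service/monitoring-prometheus-oper-prometheus -p '{"spec":{"type":"ClusterIP"}}'
kubectl -n monitoring patch service/monitoring-grafana -p '{"spec":{"type":"LoadBalancer"}}'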

The Alert Manager

Alerting

Kubernetes Tutorial

https://gitlab.com/nanuchi/youtube-tutorial-series


Multiple file edit

Open multiple files in separate tabs via vim -p file1.txt file2.txt.
If already open, just use :tabe file2.txt.
Jump between tabs with 1gt, 2gt; use gt to cycle through them.
Close tabs using :tabc

Replace string globally

:%s/release1/release2/g

evil-tutor

/ followed by a phrase searches FORWARD for the phrase
? followed by a phrase searches BACKWARD for the phrase
After a search type n to find the next occurrence in the same direction
or N to search in the opposite direction.

Type % to find a matching ), ], or }
The cursor should be on the matching parenthesis or bracket.

a way to change errors

:s/old/new to substitute 'new' for the first 'old' on a line
:s/old/new/g to substitute 'new' for all 'old's on a line
:#,#s/old/new/g to substitute phrases between two line #'s
:%s/old/new/g to substitute all occurrences in the file
:%s/old/new/gc to ask for confirmation each time (add 'c')

execute an external command

:! external command

writing files

:w TEST

remove the file

:!rm TEST

to save part of the file

:#,# w FILENAME

retrieves disk file FILENAME and inserts it into the current buffer following the cursor position

:r FILENAME

opens a line BELOW the cursor and places the cursor on the open line in insert state

o

opens a line ABOVE the cursor and places the cursor on the open line in insert state

O

R to replace more than one character

R enters Replace mode until Esc is pressed to exit.

Jenkins for Professionals

Jenkins’s architecture

GitHub repository --> Master --> slave
                              --> slave

Why distributed architecture for Jenkins?

  • Sometimes, several different environments are needed
  • If a larger project gets built, a single server cannot handle the entire load

Jenkins Master:

  • Scheduling jobs
  • Communicating with the slaves
  • Monitoring the slaves
  • Presenting the results
  • The master can also execute build jobs

Jenkins Slave:

  • Communicates with Jenkins Master
  • Can run on a different OS
  • Execute jobs
  • Flexibility

Jenkins Freestyle job

  • Central feature of Jenkins
  • Using the Jenkins UI to create a CI Pipeline

https://github.com/ravikiran-srini/springExample

Scheduling a Jenkins job

  • Scheduling is one of the options of a Build Trigger
  • Use CRON expressions to schedule a job
    • Each line consists of 5 fields separated by TAB or whitespace
    • MINUTE HOUR DOM MONTH DOW
      0 refers to Sunday
  • To specify multiple values, use the following:
    • * for all valid values
    • 0-9 specify a range of values
    • 2,3,4,5 to enumerate multiple values
  • Use Hash system for automatic balancing
    • Use H H * * * instead of 0 0 * * *
  • Use aliases like @yearly, @monthly, @weekly
    • @hourly is the same as H * * * *

Under the project's configuration,
in Build Triggers, tick 'Build periodically' and put a five-field cron expression, e.g. five stars (* * * * *).
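A few illustrative schedules (my examples, not from the course):

H/15 * * * *     # every 15 minutes, with a hashed offset
H 2 * * 1-5      # once between 02:00 and 02:59, Monday to Friday
H H * * *        # once a day, load-balanced across hour and minute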

Triggering builds remotely

Build Triggers -> tick 'Trigger builds remotely', put an Authentication Token value.

Parameterizing build

  • Allow you to prompt users for one or more inputs
  • Each parameter has a name and a value
  • Can be accessed using $parameter (Unix shell) or %parameter% (Windows batch)
  • ‘Build Now’ will be replaced by “Build with Parameters”

Types of Parameters

  • Boolean Parameter
  • Choice Parameter
  • Credentials Parameter
  • File Parameter
  • List Subversion tags
  • Password Parameter
  • Run Parameter
  • String Parameter
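As a sketch, assuming a String Parameter named environment (the same name used later with the CLI), a build step can read it like this:

# Execute shell (Unix)
echo "Building for $environment"

REM Execute Windows batch command
echo Building for %environment%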

Creating a user

Installing Plugins

Plugin name: Role-based Authorization Strategy

Implementing Role based access

Enabling role based access
Manage Jenkins-> Configure Global Security, tick ‘Enable security’
Authorization select ‘Role-Based Strategy’

Under Manage Jenkins you will have a new ‘Manage and Assign Roles’-> Manage roles

In ‘Global roles’ add role ‘team’ with overall read permission.
In “Project roles” add role ‘dev’ with ‘Pattern’ ‘Dev.*’, and tick all the boxes.
Same for ‘test’ and ‘ops’.

Assign Roles

Click "Assign Roles"; in 'Global roles' add 'dev', 'test' and 'ops' into group 'team'.
Under 'Item roles' add users 'dev', 'test' and 'ops' and assign each to its own role.


Jenkins in Docker

Running jenkins in docker

docker pull jenkins
docker run -p 8080:8080 -p 50000:50000 jenkins

Persisting Jenkins data in a Volume

docker volume create volume1
docker volume ls
docker run -p 8080:8080 -p 50000:50000 -v volume1:/var/jenkins_home jenkins # -v mounts the named volume

docker stop $(docker ps -aq) # stop all the containers
docker rm $(docker ps -aq)   # remove all the containers

Running multiple Jenkins instances in Docker

demo

docker run -p 8080:8080 -p 50000:50000 -v volume1:/var/jenkins_home jenkins
docker run -p 8081:8080 -p 50001:50000 -v volume1:/var/jenkins_home jenkins # second instance on different host ports

Jenkins Plugins

Create build monitor view

Install plugin: Build Monitor View

Using catlight

Catlight

  • A notification app for developers
  • Available for Windows, Mac & Linux
  • Monitors bugs, tasks and builds
  • See status in the tray and get notified
  • Connects to Jenkins as a build monitor

Using Jenkins CLI

  • Jenkins has a built-in CLI
    Manage Jenkins-> Jenkins CLI-> Download jenkins-cli.jar

java -jar jenkins-cli.jar -s http://localhost:8080/ build pipeline -f --username stanadmin --password mario54321

Change 'Configure Global Security' -> 'Authorization' -> 'Logged-in users can do anything', and tick 'Allow anonymous read access'.

java -jar jenkins-cli.jar -s http://localhost:8080/ build Parameterized-build -f -p environment='dev'
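
Other useful built-in CLI commands, using the same connection details:

java -jar jenkins-cli.jar -s http://localhost:8080/ who-am-i   # show how you are authenticated
java -jar jenkins-cli.jar -s http://localhost:8080/ list-jobs  # list the job names on the server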

Jenkins Multibranch Pipeline

  • Implement a different Jenkinsfile for each branch
    New Item -> select Multibranch Pipeline -> Scan Multibranch Pipeline Now
    git remote add proj1 https://github.com/szhouchoice/springExample.git
    git branch
    git branch -a
    git branch feature2
    git checkout feature2 # Switched to branch 'feature2'
    git push origin feature2

    Integrating Jenkins with Slack

    Install: Jenkins->Plugin Manager-> Slack Notification

Build History Metrics Plugin

  • Mean Time To Failure (MTTF)
  • Mean Time To Recovery (MTTR)
  • Standard Deviation
    Install: Plugin Manager->Build history metrics

Global Build Stats
Install: Plugin Manager->global-build-stats
Manage Jenkins->Global Build Stats->Initialize stats->Create new chart->
Title: foobar
600*400
Hourly
24 hours

Introduction to Jenkins Pipeline

  • A suite of plugins that supports continuous delivery pipelines
  • They provide tools for modelling delivery pipelines as code
  • The definition is written into a text file called a 'Jenkinsfile'

Benefits of pipelines

  • Automatically creates a Pipeline build process
  • Code review of the Pipeline
  • Audit trail of the Pipeline
  • Single source of truth

Declarative vs Scripted Pipeline

  • Jenkinsfile can be written using:
    • Declarative
    • Scripted
  • Declarative is a more recent feature
    • provides richer syntactical features
    • designed to make reading & writing Pipelines easier

Why pipeline?

  • Pipelines add a powerful set of automation tools onto Jenkins
  • Features of Pipeline are:
    • Pipelines are implemented in code
    • Pipelines support complex real-world CD requirements
    • Pipelines can survive both planned and unplanned restarts
    • The Pipeline plugin supports custom extensions
    • Pipelines can stop and wait, e.g. for human input (see the sketch below)
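
A minimal sketch of 'stop and wait' using the built-in input step; the stage name and message are illustrative:

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                input message: 'Deploy to production?' // pauses until a user approves or aborts
                echo 'Deploying...'
            }
        }
    }
}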

Pipeline concepts

  • Pipeline
  • Node
  • Stage: build, test, deploy
  • Step

Declarative Pipeline syntax

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                //
            }
        }
        stage('Test') {
            steps {
                //
            }
        }
        stage('Deploy') {
            steps {
                //
            }
        }
    }
}

Scripted Pipeline syntax

node {
    stage('Build') {
        //
    }
    stage('Test') {
        //
    }
    stage('Deploy') {
        //
    }
}

Creating a simple Pipeline

  • Pipeline can be created in any of the following ways:
    • Blue Ocean
    • Through Classic UI
    • In SCM (Source Code Management)

Building a project with Jenkins Pipeline

pipeline script:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // 'bat' runs Windows batch steps; on a Unix agent use 'sh' instead
                bat "rm -rf springExample"
                bat "git clone https://github.com/ravikiran-srini/springExample.git"
                bat "mvn clean -f springExample"
            }
        }
        stage('Test') {
            steps {
                bat "mvn test -f springExample"
            }
        }
        stage('Deploy') {
            steps {
                bat "mvn package -f springExample"
            }
        }
    }
}

Building a Pipeline with Jenkinsfile

Jenkinsfile

  • Complex Pipelines are difficult to write and maintain in the UI's script box
  • You can write a Jenkinsfile in an IDE and commit it to source control
  1. In https://github.com/szhouchoice/springExample add a Jenkinsfile:
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'mvn clean'
                }
            }
            stage('Test') {
                steps {
                    sh 'mvn test'
                }
            }
            stage('Deploy') {
                steps {
                    sh 'mvn package'
                }
            }
        }
    }
  2. Create a Pipeline job and, under Pipeline, select 'Pipeline script from SCM'.

Using environment variables

Environment variables

  • Jenkins exposes environment variables through the global variable 'env'
  • The entire list of environment variables is accessible at ${YOUR_JENKINS_URL}/env-vars.html

example:

pipeline {
    agent any
    stages {
        stage('stage1') {
            steps {
                echo "Build ID: ${env.BUILD_ID}, Jenkins URL: ${env.JENKINS_URL}"
            }
        }
    }
}

Console Output:

Started by user Stan Zhou
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /Users/szhou/.jenkins/workspace/stanpipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (stage1)
[Pipeline] echo
Build ID: 1, Jenkins URL: http://localhost:8080/
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

Setting environment variables

  • Declarative Pipeline
    • use ‘environment’ directive
  • Scripted Pipeline
    • use ‘withEnv’ step
  • an environment directive in the top-level pipeline block applies to all steps
  • an environment directive within a stage only applies within that stage

example:

pipeline {
    agent any
    environment {
        mainenv = 'test'
    }
    stages {
        stage('stage1') {
            environment {
                subenv = 'test1'
            }
            steps {
                echo mainenv
                echo subenv
            }
        }
        stage('stage2') {
            steps {
                echo mainenv
                echo subenv // out of scope here: this step will raise an error
            }
        }
    }
}

Jenkins Blue Ocean

Getting started with blue ocean

What is blue ocean?

  • Blue Ocean redefines the user experience of Jenkins
  • Features of Blue Ocean are:
    • Visualization
    • Pipeline editor
    • Personalization
    • Precision
    • Native integration for branches and pull requests

Git material

Pro Git

Learn Git Branching

tryGit

Rebase with a pull

git pull --rebase origin master

It essentially fetches the new commits from the remote's master into your history, and then replays your local commits on top of them.
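
A minimal sketch (the branch name is hypothetical):

git checkout feature             # your local work branch
git pull --rebase origin master  # fetch master and replay your commits on top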

Squash commits together

git rebase -i HEAD~2

Squash the last two commits.
The HEAD~2 refers to the last two commits in the current branch, and the -i option stands for interactive.
In the editor that opens, pick the older commit and squash the newer one.
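
The todo list git opens looks roughly like this (hashes and messages are illustrative):

pick a1b2c3d add feature
squash d4e5f6a fix typo in feature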

Aborting a squash

git rebase --abort
gets you back to the pre-squash state

git log

git log --oneline --decorate --graph --all

Complete Git and Github Masterclass

How Git works - 3 main states of artifact

Modified: here, the artifact has changes made by the user

Staged: here, the user/developer adds the artifact to the Git index or staging area

Committed: here, the artifact gets safely stored in the Git database.

git add: files move from the modified state to the staged state

git commit: files move from the staged state to the committed state

the natural state progression is modified -> staged -> committed

express commit: git commit -am "committing readme.md"
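
A quick walk-through of that progression (the filename is illustrative):

echo "hello" >> readme.md         # modified
git add readme.md                 # staged
git commit -m "update readme.md"  # committed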

3 sections of a Git project

Working directory: this is the root directory of your Git project

Staging area: also called the index, this is where all related changes are built up

Commit area: this is where all artifacts are stored safely in the Git database

3 ways of setting up Git repository

From scratch: we will create a repository from absolutely blank state

Existing project: here we will convert an existing unversioned project to a Git repository

By Copying: we will copy an existing Git repository from Github

Git Help system

git help -a and git help -g list available subcommands and some concept guides

What is Fork and how to do it

What it means: Creating a project from another existing project

Encouragement: encourages project advancement & collaboration

Contribution: encourages outside contribution

Updates: forks can be updated

Command git status: tells us the status of the working directory and staging area

Command git log: displays committed snapshots or commit history

Real world branching scenario

Git branching

Show branch: git branch

Create a new branch: git branch demobranch

Switch to the new branch: git checkout demobranch

checkout git branches: git checkout; git checkout -b

list git branches: git branch; git branch -a

rename a git branch: git branch -m

delete a git branch: git branch -d; git branch -D

Undoing changes in a Git repository

Checking out commits in a Git repository

vim robot.txt

Express git add and commit command:

git commit -am "1st commit - robot.txt"

git log --oneline

git checkout 265fbe5

Create a new branch:

git checkout -b newbranch

Switch branch:

git checkout master

git checkout newbranch

Checking out files in a Git repository

git log --oneline

git checkout bb3d025 checkoutfile.txt

git checkout HEAD checkoutfile.txt

git status

Reverting changes in a Git repository

git add . 

git commit -m "commit file to revert"

git revert HEAD

Resetting Git repository

git reset --hard

resets the staging area and working directory to match the most recent commit.

git reset --hard hash_value

moves the current branch tip back to the given commit and resets both the staging area and the working directory to match it; this destroys not only the current uncommitted changes but also every commit made after that point.

Cleaning Git repository

git clean -n   # dry run: show which untracked files would be removed

git clean -f   # force: remove untracked files

git clean -fd  # also remove untracked directories

vim .gitignore

git add .gitignore

git commit -m "committing .gitignore"

git clean -xf  # also remove files that .gitignore would normally exclude

If GitHub asks for a username and password, switch the remote to SSH:

git remote set-url origin git@github.com:szhouchoice/EffectiveDevOpsTerraform.git

git flow

Merge branch into master

# ...develop some code...
$ git add -A
$ git commit -m "Some commit message"
$ git checkout master
Switched to branch 'master'
$ git merge new-branch
git add foobar/foobar.php
git status
git reset HEAD foobar/foobar.php ## unstaged changes after reset
git diff
git log

## branching and merging
git checkout -b newfunction
vim foobar
git add *
git diff
git commit -m "Adds newfunction"
git log
git checkout master
git merge newfunction
git branch -D newfunction

Git Fundamentals

Initializing a Repo and Adding Files (commands)

git init
# for creating a repository in a directory with existing files
# creates the repository skeleton in the .git directory
git add
# tells git to start tracking files
# patterns: git add *.c, or git add . (. = files and dirs, recursively)
git commit
# promotes files into the local repository
# use -m to supply a comment
# commits everything unless told otherwise

Undoing/updating things

git commit --amend
# allows you to "fix" the last commit
# updates the last commit using the staging area
# with nothing new in the staging area, it just updates the comment (commit message)
# example
>> git commit -m "my first update"
>> git add newfile.txt (the add command stages it into the staging area)
>> git commit --amend
>> the amended commit takes the place of the first (it updates instead of adding another commit to the chain)

Getting help

git <command> -h
# brings up an on-screen list of options
git help <command>
# brings up the html man page

Git config file

Scope
Local (repo)
>> .git/config
Global (user)
>> ~/.gitconfig
System (all users on a system)
>> <usr|usr/local>/etc/gitconfig
Divided into sections
Can have user settings, aliases, etc.
Text file that can be edited, but safer to use git config
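
A sketch of setting values at each scope (the values are illustrative):

git config --local user.email "dev@example.com"  # this repo only (.git/config)
git config --global user.name "Stan Zhou"        # this user (~/.gitconfig)
git config --list --show-origin                  # show every setting and the file it came from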

Git Status

Command: git status
Primary tool for showing which files are in which state
-s is the short option - output: <1 or 2 status characters> <filename>
>> ?? = untracked
>> M = modified
>> A = added
>> D = deleted
>> R = renamed
>> C = copied
>> U = updated but unmerged
-b option - always show branch and tracking info
Common usage: git status -sb
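
Sample output of the common usage (filenames are illustrative):

$ git status -sb
## master...origin/master
 M app.c       # modified, not yet staged
A  newfile.c   # added to the index
?? notes.txt   # untracked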

Git special references: Head and Index

Head
snapshot of your last commit
next parent (in chain of commits)
pointer to current branch reference (reference = SHA1)
Think of HEAD as pointer to last commit on current branch
Index (Staging Area)
place where changes for the next commit get registered
temporary staging area for what you're working on
proposed next commit
Cache - old name for index
Think of cache, index, and staging area as all the same
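
Both references can be inspected directly:

git rev-parse HEAD   # print the SHA1 that HEAD currently points to
cat .git/HEAD        # typically contains: ref: refs/heads/master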

Showing differences

command: git diff
default is to show changes in the working directory that are not yet staged
if something is staged, shows the diff between the working directory and the staging area
the --cached or --staged option shows the difference between the staging area and the last commit (HEAD)
git diff <reference> shows differences between the working directory and what <reference> points to - example: "git diff HEAD"
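
The three comparisons side by side:

git diff           # working directory vs staging area (index)
git diff --staged  # staging area vs last commit (HEAD)
git diff HEAD      # working directory vs last commit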

History

command: git log
with no options, shows name, sha1, email, commit message in reverse chronological order (newest first)
-p option shows patch - or differences in each commit
Shortcut: git show
-# -shows last # commits
--stat - shows statistics on number of changes
--pretty=oneline|short|full|fuller
--format - allows you to specify your own output format
--oneline and --format can be very useful for machine parsing
Time-limiting options --since|until|after|before (e.g. --since=2.weeks or --before=3.19.2011)
gitk tool also has history visualizer
git log --oneline <since>..<until> e.g. foobar..barfoo
git log --oneline <file-name> e.g. teststatusfile
git log --oneline -n <limit> e.g. 2

Useful Git command

git clone git@bitbucket.org:foobar/pagerduty.git --config core.autocrlf=input
# sets core.autocrlf=input in the new clone, so CRLF line endings are converted to LF on commit