RE Package

Character sets

The + after an item in a regular expression matches one or more occurrences of it. A number in curly braces, such as {6}, matches exactly that many occurrences of the preceding item.

In [6]: print(cc_list)
Ezra Koenig <ekoenig@vpwk.com>,
Rostam Batmanglij <rostam@vpwk.com>,
Chris Tomson <ctomson@vpwk.com,
Bobbi Baio <bbaio@vpwk.com

>>> re.search(r'[A-Za-z]{6}', cc_list)
<_sre.SRE_Match object; span=(5, 11), match='Koenig'>
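
As a quick illustration of the + quantifier described above, here is a small self-contained sketch (cc_list is re-declared so the snippet runs on its own):

import re

cc_list = '''Ezra Koenig <ekoenig@vpwk.com>,
Rostam Batmanglij <rostam@vpwk.com>,
Chris Tomson <ctomson@vpwk.com,
Bobbi Baio <bbaio@vpwk.com'''

# [A-Za-z]+ matches one or more letters, so the first match is the whole first name
print(re.search(r'[A-Za-z]+', cc_list))  # span=(0, 4), match='Ezra'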

The . character has a special meaning. It is a wildcard and matches any character. To match against the actual . character, you must escape it using a backslash:

>>> re.search(r'[A-Za-z]+@[a-z]+\.[a-z]+', cc_list)
<_sre.SRE_Match object; span=(13, 29), match='ekoenig@vpwk.com'>

Character Classes

Python’s re offers character classes. These are pre-made character sets. Some commonly used ones are \w, which is equivalent to [a-zA-Z0-9_], and \d, which is equivalent to [0-9]. You can use the + modifier to match multiple characters:

>>> re.search(r'\w+', cc_list)
<_sre.SRE_Match object; span=(0, 4), match='Ezra'>
>>> re.search(r'\w+\@\w+\.\w+', cc_list)
<_sre.SRE_Match object; span=(13, 29), match='ekoenig@vpwk.com'>

Groups

>>> matched = re.search(r'(\w+)\@(\w+)\.(\w+)', cc_list)
>>> matched.group(0)
'ekoenig@vpwk.com'
>>> matched.group(1)
'ekoenig'
>>> matched.group(2)
'vpwk'

Named Groups

You can also supply names for the groups by adding ?P<name> in the group definition. Then you can access the groups by name instead of number:

>>> matched = re.search(r'(?P<name>\w+)\@(?P<SLD>\w+)\.(?P<TLD>\w+)', cc_list)
>>> matched.group('name')
'ekoenig'
>>> matched.group('SLD')
'vpwk'
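
Beyond search, re.findall can pull every address out of cc_list at once; a minimal sketch, with cc_list re-declared so it runs standalone:

import re

cc_list = '''Ezra Koenig <ekoenig@vpwk.com>,
Rostam Batmanglij <rostam@vpwk.com>,
Chris Tomson <ctomson@vpwk.com,
Bobbi Baio <bbaio@vpwk.com'''

# findall returns a list of (name, SLD, TLD) tuples, one per email address
emails = re.findall(r'(\w+)\@(\w+)\.(\w+)', cc_list)
print(emails)
# [('ekoenig', 'vpwk', 'com'), ('rostam', 'vpwk', 'com'),
#  ('ctomson', 'vpwk', 'com'), ('bbaio', 'vpwk', 'com')]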

Using IPython to run Unix shell commands

In [5]: ls = !ls -l /usr/bin

In [6]: ls.grep("kill")
Out[6]:
['-rwxr-xr-x 1 root root 1812 Sep 11 2018 cinnamon-killer-daemon',
'-rwxr-xr-x 1 root root 27768 Dec 12 2018 killall',
'lrwxrwxrwx 1 root root 5 May 14 2018 pkill -> pgrep',
'-rwxr-xr-x 1 root root 26704 May 14 2018 skill',
'lrwxrwxrwx 1 root root 5 May 14 2018 snice -> skill',
'-rwxr-xr-x 1 root root 14328 Apr 22 2017 xkill']

Using IPython magic commands

In [7]: %%bash
...: uname -a
...:
Linux stan-OptiPlex-380 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

The %%writefile magic is pretty tricky because you can write and test Python or Bash scripts on the fly, using IPython to execute them. Not a bad party trick at all:

In [8]: %%writefile print_time.py
...: #!/usr/bin/env python
...: import datetime
...: print(datetime.datetime.now().time())
...:
Writing print_time.py

In [9]: cat print_time.py
#!/usr/bin/env python
import datetime
print(datetime.datetime.now().time())
In [10]: !python print_time.py
16:49:14.836331

Another very useful command, %who, will show you what is loaded into memory. It comes in quite handy when you have been working in a terminal that has been running for a long time:

In [11]: %who
ls var_ls

Package Management

Descriptive Versioning

Python packages most commonly use one of two versioning variants: major.minor or major.minor.micro

  • 0.0.1
  • 1.0
  • 2.1.1

A commonly accepted format for releases is major.minor.micro (used by the Semantic Versioning scheme):

  • major for backwards-incompatible changes
  • minor adds features that are also backward-compatible
  • micro adds backwards-compatible bug fixes

Assuming the current released version of the project under development is 1.0.0, it means the following outcomes are possible:

  • If the release has backward-incompatible changes, the version is: 2.0.0
  • Adding features that do not break compatibility: 1.1.0
  • Fixing issues that also do not break compatibility: 1.0.1
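
A hypothetical helper that encodes these rules, just to make the arithmetic concrete (the bump function is my own illustration, not part of any packaging tool):

def bump(version, change):
    """Return the next major.minor.micro version for a given kind of change."""
    major, minor, micro = (int(part) for part in version.split("."))
    if change == "major":        # backward-incompatible changes
        return f"{major + 1}.0.0"
    if change == "minor":        # backward-compatible features
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{micro + 1}"  # backward-compatible bug fixes

print(bump("1.0.0", "major"))  # 2.0.0
print(bump("1.0.0", "minor"))  # 1.1.0
print(bump("1.0.0", "micro"))  # 1.0.1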

The changelog

The below example is an actual portion of a changelog file in a production Python tool:

1.1.3  
-----
22-Mar-2019
* No code changes - adding packaging files for Debian

1.1.2
-----
3-Mar-2019

* Try a few different executables (not only ``python``) to check for a working one, in order of preference, starting with ``python3`` and ultimately falling back to the connection interpreter

The example provides four essential pieces of information:

  1. The latest version number released
  2. Whether the latest release is backward-compatible
  3. What was the release date of the last version
  4. Changes included in the release

Packaging solutions

A small Python project:

hello-world
└── hello_world
    ├── __init__.py
    └── main.py

1 directory, 2 files

Native Python packaging

Like the other packaging strategies, the project requires some files used by setuptools.
To continue, create a new virtual environment and then activate it:

$ python3 -m venv /tmp/packaging
$ source /tmp/packaging/bin/activate

Setuptools is a requirement to produce a native Python package. It is a collection of tools and helpers to create and distribute Python packages.

Once the virtual environment is active, the following dependencies are needed:

  1. setuptools A set of utilities for packaging

  2. twine A tool for registering and uploading packages

$ pip install setuptools twine

Package files
The file that describes the package to setuptools is named setup.py. It lives in the top-level directory of the project.

cat setup.py
from setuptools import setup, find_packages

setup(
    name="hello-world",
    version="0.0.1",
    author="Example Author",
    author_email="author@example.com",
    url="example.com",
    description="A hello-world example package",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
)

The setup.py file imports two helpers from the setuptools module: setup and find_packages. The setup function takes the rich description of the package, and the find_packages function is a utility to automatically detect where the Python files are. Lastly, the classifiers describe certain aspects of the package, such as the license, operating systems supported, and Python versions. These classifiers are called trove classifiers, and the Python Package Index has a detailed description of the other classifiers available. Detailed descriptions help a package get discovered when uploaded to PyPI.
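
The contents of hello_world/main.py are not shown above; a minimal placeholder, purely for illustration, could be as simple as:

# hello_world/main.py -- hypothetical module body for the example package
def main():
    print("Hello, world!")

if __name__ == "__main__":
    main()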

 hello-world tree
.
├── hello-world
│   ├── __init__.py
│   └── main.py
├── README
└── setup.py

To produce the source distribution from it, run the following command:

 hello-world python3 setup.py sdist
running sdist
running egg_info
creating hello_world.egg-info
writing hello_world.egg-info/PKG-INFO
writing dependency_links to hello_world.egg-info/dependency_links.txt
writing top-level names to hello_world.egg-info/top_level.txt
writing manifest file 'hello_world.egg-info/SOURCES.txt'
reading manifest file 'hello_world.egg-info/SOURCES.txt'
writing manifest file 'hello_world.egg-info/SOURCES.txt'
running check
creating hello-world-0.0.1
creating hello-world-0.0.1/hello-world
creating hello-world-0.0.1/hello_world.egg-info
copying files to hello-world-0.0.1...
copying README -> hello-world-0.0.1
copying setup.py -> hello-world-0.0.1
copying hello-world/__init__.py -> hello-world-0.0.1/hello-world
copying hello-world/main.py -> hello-world-0.0.1/hello-world
copying hello_world.egg-info/PKG-INFO -> hello-world-0.0.1/hello_world.egg-info
copying hello_world.egg-info/SOURCES.txt -> hello-world-0.0.1/hello_world.egg-info
copying hello_world.egg-info/dependency_links.txt -> hello-world-0.0.1/hello_world.egg-info
copying hello_world.egg-info/top_level.txt -> hello-world-0.0.1/hello_world.egg-info
Writing hello-world-0.0.1/setup.cfg
creating dist
Creating tar archive
removing 'hello-world-0.0.1' (and everything under it)
hello-world tree
.
├── dist
│   └── hello-world-0.0.1.tar.gz
├── hello-world
│   ├── __init__.py
│   └── main.py
├── hello_world.egg-info
│   ├── dependency_links.txt
│   ├── PKG-INFO
│   ├── SOURCES.txt
│   └── top_level.txt
├── README
└── setup.py

3 directories, 9 files

The newly created tar.gz file is an installable package! This package can now be uploaded to the Python Package Index for others to install directly from it. Because it follows the version schema, installers can ask for a specific version (0.0.1 in this case), and the extra metadata passed into the setup() function enables other tools to discover it and show information about it, such as the author, description, and version.

 hello-world pip install dist/hello-world-0.0.1.tar.gz
Processing ./dist/hello-world-0.0.1.tar.gz
Requirement already satisfied (use --upgrade to upgrade): hello-world==0.0.1 from file:///home/stan/Learning/python4devops/hello-world/dist/hello-world-0.0.1.tar.gz in /tmp/packaging/lib/python3.6/site-packages
Building wheels for collected packages: hello-world
Running setup.py bdist_wheel for hello-world ... done
Stored in directory: /home/stan/.cache/pip/wheels/09/67/3a/398ce8c37594dc628e00ad661230c0ed20a2f3b0f421601264
Successfully built hello-world
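
Once the package is installed as above, the metadata passed to setup() can be read back from Python itself; a small sketch using the standard library's importlib.metadata (available in Python 3.8+):

from importlib.metadata import metadata, version

# Both calls read the metadata that setup() recorded for the installed package
print(version("hello-world"))            # 0.0.1
info = metadata("hello-world")
print(info["Author"], "-", info["Summary"])  # Example Author - A hello-world example package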

The Python Package Index

PyPI is a repository of Python software that allows users to host Python packages and install from it.

Register online

hello-world twine upload --repository-url https://test.pypi.org/legacy/ dist/hello-world-0.0.1.tar.gz
Enter your username: foobar
Enter your password:
Uploading distributions to https://test.pypi.org/legacy/
Uploading hello-world-0.0.1.tar.gz
100%|████████████████████████████████████| 3.86k/3.86k [00:01<00:00, 3.27kB/s]
It is helpful to create a Makefile with a make target that automatically deploys your project and builds the documentation for you:

deploy-pypi:
	pandoc --from=markdown --to=rst README.md -o README.rst
	python setup.py check --restructuredtext --strict --metadata
	rm -rf dist
	python setup.py sdist
	twine upload dist/*
	rm -f README.rst

Hosting an internal package index

If package A is hosted internally and depends on packages B and C, all three need to exist (along with their required versions) in the same instance.

**note**
A highly recommended full-featured tool for hosting an internal PyPI is devpi. It has features like mirroring, staging, replication, and Jenkins integration. The [project documentation](http://doc.devpi.net/) has great examples and detailed information.

Now copy the tar.gz file into the hello-world directory. The final version of this directory structure should look like this:

(packaging) ➜ pypi tree
.
├── hello-world
│   └── hello-world-0.0.1.tar.gz
└── hello-world1
    └── hello-world1-0.0.1.tar.gz
pypi pwd
/home/stan/Learning/python4devops/pypi

pypi python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) …
127.0.0.1 - - [01/Sep/2019 16:46:27] "GET /hello-world1/ HTTP/1.1" 200 -
127.0.0.1 - - [01/Sep/2019 16:46:27] "GET /hello-world1/hello-world1-0.0.1.tar.gz HTTP/1.1" 200 -

$ python3 -m venv /tmp/local-pypi
$ source /tmp/local-pypi/bin/activate
pip3 install -i http://localhost:8000/ hello-world1
Requirement already satisfied: hello-world1 in /tmp/local-pypi/lib/python3.6/site-packages

Debian packaging

A role by any other name

To keep current with the tides of change within the industry, organizations have taken to retitling their system administration postings as
devops engineer or site reliability engineer (SRE).

Many people think about devops as specific tools like Docker or Kubernetes, or practices like continuous deployment and continuous integration.
What makes tools and practices “devops” is how they are used, not the tools or practices themselves.

Site Reliability Engineering is an engineering discipline that helps an organization achieve the appropriate levels of reliability in their
systems, services, and products.

One measure of reliability is often used to the exclusion of any other, and that is availability.

SLIs (service level indicators)
SLOs (service level objectives)

How do devops and SRE differ?

While devops and SRE arose around the same time, devops is more focused on culture change (that happens to impact technology and tools) while
SRE is very focused on changing the mode of operations in general.

With SRE, there is often an expectation that engineers are also software engineers with operability skills. With DevOps Engineers, there is often an assumption that engineers are strong in at least one modern language as well as have expertise in continuous integration and deployment.

A sysadmin is someone who is responsible for building, configuring, and maintaining reliable systems where systems can be specific tools,
applications, or services.

What technical stacks are you familiar with?

Chapter 1. Version Control

Resolving Conflicts

  • Cherry-picking
  • If there are any uncommitted changes, stash them with git stash
  • Start from the default branch so that it is the parent for the new feature branch to complete the emergency work. It’s important that
    the emergency work isn’t blocked by any work in progress on a different branch.
  • Pull changes from the remote shared repository to the local default branch so that it’s up to date, to minimize any potential conflicts.
  • Create a hotfix branch.
  • Make changes, test, stage, commit, push, and PR with the hotfix branch, following the same process as regular changes.
  • Bring back any stashed work with git stash list, and git stash apply.

Fixing your local repository

$ git reset FILE

This unstages FILE, resetting its entry in the index to the last committed version; the copy in your working directory is left untouched. To discard the working-directory changes as well, use git checkout -- FILE.

Advancing Collaboration with VC

  • Credit all collaborators
Commit message describing what the set of changes does.


Co-authored-by: Name <email@example.com>

  • Have at least one reviewer other than yourself before merging code

  • Write quality commit messages explaining the context for the changes within the commit.

Chapter 2. Local Development Environments

Choosing an editor

The work that a sysadmin needs to do with an editor includes developing scripts, codifying infrastructure, writing documentation, composing tests,
and reading code.

For example, Visual Studio Code (VS Code).

Minimizing required mouse usage

  • Splitting the screen up vertically and horizontally
  • integrated terminal with VS Code

    Integrated Static Code Analysis

  • shellcheck

    Easing editing through auto completion

  • Docker extension

    Indenting code to match team conventions

    In VS Code, this is visible in the lower panel along with other conventions about the file type. From the Command Palette, you can change the spacing from tabs to spaces.

    Collaborating while editing

  • Live Share extension

    Integrating workflow with git

    M icon: modified files
    U: updated and unmerged

    Extending the development environment

  • Remote Extension: allows individuals to use a remote system for “local” development with familiar local features

More awesome extensions to check out.

Selecting languages to install

While you might not spend time developing applications, honing development skills in shell code and at least one language is essential.

Python, Ruby, and Go are much more useful for writing utilities.

Installing and Configuring Applications

  • The Silver Searcher
  • bash-completion
  • cURL
    verify whether you can connect to a URL, which is one of the first validations when checking a web service.
    use it to send or retrieve data from a URL
  • Docker
  • hub
  • jq
  • Postman
  • shellcheck
  • tree

Chapter 3: What is Security?

Security is the practice of protecting hardware, software, networks, and data from harm, theft, or unauthorized access.

Chapter 4: Server virtualization and containers

Kubernetes: The Terminology

Architecture of a Kubernetes cluster

Architecture

Kubernetes node types

A node can be a VM, a bare-metal host, or even a Raspberry Pi.

Two distinct categories:

  1. master nodes: run the k8s control plane applications
  2. worker nodes: run the applications that you deploy onto k8s

A production deployment should have a minimum of three master nodes and three worker nodes.
Most large deployments have many more workers than masters.

The k8s control plane

  • kube-apiserver: this application handles instructions sent to k8s.
    It is a component that accepts HTTPS requests, typically on port 443.
    When a configuration request is made to the k8s API server, it will check the current cluster configuration in etcd and change it if necessary.

The Kubernetes API is generally a RESTful API, with endpoints for each Kubernetes resource type, along with an API version that is passed in the query path; for instance, /api/v1.
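
Because the API is plain HTTPS, you can query it directly; a hedged sketch that assumes kubectl proxy is already running on its default local port 8001 (the proxy handles authentication for you):

import json
import urllib.request

# kubectl proxy exposes the authenticated API server on localhost:8001
with urllib.request.urlopen("http://127.0.0.1:8001/api/v1/namespaces") as resp:
    namespaces = json.load(resp)

for item in namespaces["items"]:
    print(item["metadata"]["name"])  # e.g. default, kube-system, kube-public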

  • kube-scheduler: this component handles the work of deciding which nodes to place workloads on, which can become quite complex.
    By default, this decision is influenced by workload resource requirements and node status.

  • kube-controller-manager: this component provides a high-level control loop that ensures that the desired configuration of the cluster and applications running on it is implemented.

    • The node controller, which ensures that nodes are up and running
    • The replication controller, which ensures that each workload is scaled properly
    • The endpoints controller, which handles communication and routing configuration for each workload.
    • Service account and token controllers, which handle the creation of API access tokens and default accounts
  • etcd: a distributed key-value store that contains the cluster configuration.
    An etcd replica runs on each master node and uses the Raft consensus algorithm, which ensures that a quorum is maintained before allowing any changes to the keys or values.
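
The quorum is simply a majority of the etcd members, which is why master counts are usually odd; a tiny illustration of the arithmetic:

def quorum(members):
    """Smallest majority of an etcd cluster with the given number of members."""
    return members // 2 + 1

for members in (1, 3, 5):
    print(members, "members -> quorum", quorum(members),
          "-> tolerates", members - quorum(members), "failure(s)")
# 1 members -> quorum 1 -> tolerates 0 failure(s)
# 3 members -> quorum 2 -> tolerates 1 failure(s)
# 5 members -> quorum 3 -> tolerates 2 failure(s)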

K8s worker nodes

Each worker node contains components that allow it to communicate with the control plane and handle networking.

  • kubelet
    The kubelet is an agent that runs on every node (including master nodes, though it has a different configuration in that context). Its main purpose is to receive a list of PodSpecs (more on those later) and ensure that the containers prescribed by them are running on the node. The kubelet gets these PodSpecs through a few different possible mechanisms, but the main way is by querying the Kubernetes API server. Alternately, the kubelet can be started with a file path, which it will monitor for a list of PodSpecs, an HTTP endpoint to monitor, or its own HTTP endpoint to receive requests on.

  • kube-proxy
    kube-proxy is a network proxy that runs on every node. Its main purpose is to do TCP, UDP, and SCTP forwarding (either via stream or round-robin) to workloads running on its node.

  • The container runtime
    The container runtime runs on each node and is the component that actually runs your workloads. Kubernetes supports CRI-O, Docker, containerd, rktlet, and any valid Container Runtime Interface (CRI) runtime.

Addons

For example, Container Network Interface (CNI) plugins such as Calico, Flannel, or Weave provide overlay network functionality that adheres to Kubernetes’ networking requirements.

Authentication and authorization on K8s

Namespaces

A namespace is a construct that allows you to group k8s resources in your cluster.
For instance, you might have one namespace for each environment: dev, staging, and production.

Default namespaces:

  • kube-system
    contains the cluster services such as etcd, the scheduler, and any resources created by k8s itself and not by users.
  • kube-public
    is readable by all users by default and can be used for public resources.

Users

two types:

  • regular users
    Regular users are generally managed by a service outside the cluster, whether they be private keys, usernames and passwords, or some form of user store.
  • service accounts
    Service accounts, however, are managed by Kubernetes and restricted to specific namespaces. Kubernetes may create service accounts automatically, or they can be made manually through calls to the Kubernetes API.

Authentication methods

  • HTTP basic authentication
  • client certificates
  • bearer tokens
  • proxy-based authentication

K8s’ certificate infrastructure for TLS and security

openssl req -new -key myuser.pem -out myusercsr.pem -subj "/CN=myuser/O=dev/O=staging"

This will create a CSR for the user myuser who is part of groups named dev and staging.

Once the CA and CSR are created, the actual client and server certificates can be created using openssl, easyrsa, cfssl, or any other certificate generation tool.

RBAC

RBAC stands for Role-Based Access Control and is a common pattern for authorization.
In k8s specifically, the roles and users of RBAC are implemented using four k8s resources: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.

read-only-role

The only difference between a Role and a ClusterRole is that a Role is restricted to a particular namespace (in this case, the default namespace), while a ClusterRole applies across all resources of
that type in the cluster, as well as cluster-scoped resources such as nodes.

ABAC

ABAC stands for Attribute-Based Access Control. It works using policies instead of roles.

Using kubectl and YAML

Setting up kubectl and kubeconfig

kubeconfig
There are three sections in a kubeconfig YAML file: clusters, users, and contexts.

Imperative versus declarative commands

Two paradigms for talking to the k8s API:

  • imperative
    allows you to dictate to k8s “what to do”: “spin up two copies of Ubuntu”

  • declarative
    allows you to write a file with a specification of what should be running on the cluster, and have the k8s API ensure that the cluster configuration matches the specification, updating it if necessary

Basic kubectl commands:
kubectl get resource_type: see k8s resources in your cluster, where resource_type is the full name of the k8s resource
kubectl get nodes: find which nodes exist in a cluster
wide output flag: kubectl get nodes -o wide adds additional information to the list
describe: works similarly to get, except that we can optionally pass the name of a specific resource
kubectl describe nodes will return details for all nodes in the cluster, while kubectl describe nodes node1 will return a description of the node named node1.

  • kubectl create -f /path/to/file.yaml, which is an imperative command
  • kubectl apply -f /path/to/file.yaml, which is declarative
  • --dry-run flag to see the output of the create or apply commands

create works imperatively, so it will create a new resource, but if you run it again with the same file, the command will fail since the resource already exists. apply works declaratively, so the first time you run it it will create the resource, and subsequent runs will update the running resource in Kubernetes with any changes. You can use the --dry-run flag to see the output of the create or apply commands.

To update existing resources imperatively, use the edit command like so: kubectl edit resource_type resource_name

To update existing resources declaratively, you can edit your local YAML resource file.

  • kubectl cluster-info, which will show the IP addresses where the major K8s cluster services are running.

Writing k8s resource YAML files

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:trusty
    command: ["echo"]
    args: ["Hello Readers"]

Every resource YAML file has four top-level keys at a minimum: apiVersion, kind, metadata, and spec.

apiVersion dictates which version of the k8s API will be used to create the resource.
kind specifies what type of resource the YAML file is referencing.
metadata provides a location to name the resource, as well as adding annotations and namespacing information.
The spec key will contain all the resource-specific information that k8s needs to create the resource in your cluster.
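
To make that structure concrete, here is a small sketch that loads the manifest above with PyYAML (an assumption: the third-party pyyaml package is installed) and prints the top-level keys:

import yaml  # third-party: pip install pyyaml

manifest = """
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:trusty
    command: ["echo"]
    args: ["Hello Readers"]
"""

pod = yaml.safe_load(manifest)
print(list(pod))                             # ['apiVersion', 'kind', 'metadata', 'spec']
print(pod["kind"], pod["metadata"]["name"])  # Pod my-pod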

Scaling

Pods:

kubectl scale deployments/course-demo-wordpress --replicas=5

Nodes:

az aks scale --resource-group kubDemo --name demoCluster --node-count 3 --nodepool-name nodepoll1

Autoscale

Pods:

kubectl autoscale deployments course-demo-wordpress --cpu-percent=40 --min=2 --max=10

Nodes:

az aks update --resource-group kubeDemo --name demoCluster --update-cluster-autoscaler --min-count 1 --max-count 3

issue

  1. helm 3 status does not list resources
    helm plugin install https://github.com/marckhouzam/helm-fullstatus

    helm fullstatus nodeserver

CKA

K8s cluster

docs: kubeadm init

  1. kubeadm init run as root
  • network
  • .kube/config
  • kubeadm join
  1. create config
  2. kubectl — network
    kubectl get nodes
  3. kubeadm join run as root

MySQL replication

Process

  1. Clean the slave DB
    DROP_DB_LIST=`mysql -Nse "SET group_concat_max_len = 81920; SELECT GROUP_CONCAT(SCHEMA_NAME SEPARATOR ';\nDROP DATABASE ') FROM information_schema.SCHEMATA WHERE SCHEMA_NAME NOT IN ('mysql','information_schema','performance_schema','sys');"`
    echo -e $DROP_DB_LIST > /tmp/delete.sh
    vim /tmp/delete.sh
    mysql < /tmp/delete.sh

dump on the 1st slave

DB_LIST=`mysql -Nse "SELECT GROUP_CONCAT(SCHEMA_NAME SEPARATOR ' ') FROM information_schema.SCHEMATA WHERE SCHEMA_NAME NOT IN ('mysql','information_schema','performance_schema','sys');"`
mysqldump --dump-slave --apply-slave-statements --include-master-host-port --events --triggers --routines --single-transaction --opt --tz-utc --databases $DB_LIST > mysql_au3_db_02_26Feb2021.sql

restore on the 2nd slave

mysql < restore_file.sql
change master to master_host='master_ip_add'
, master_user='replication'
, master_password='strongpasswd';
mysql: show slave status\G

Common problems

Dropping a database with a special character in the database name

error message:

[root@au3-db-03 mysql]# mysql < /tmp/deletenewnew.sh
ERROR 1064 (42000) at line 267: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-123' at line 1

fix by:

mysql> drop database `supd-123`;
Query OK, 0 rows affected (0.01 sec)

Create a read-only account with Azure MySQL 8.0

Notes

  1. When creating a role and granting privileges, use ‘%’ instead of ‘localhost’. For PaaS, ‘localhost’ means the physical host machine, which cannot be accessed in the cloud for security reasons.
  2. After creating the role and granting role privileges to the target user, run [SET DEFAULT ROLE ALL TO ‘{username}’]
create user 'foobarreader'@'%' identified by 'StrongPassword!';
create role 'foobar_read_only';

grant select on foobardb.* to 'foobar_read_only';

mysql> grant 'foobar_read_only' to 'foobarreader'@'%';

mysql> set default role all to 'foobarreader';

mysql> show grants for 'foobarreader'@'%';

mysql> show grants for 'foobarreader'@'%' using 'foobar_read_only';

## check by logging in with the newly created user and running SHOW DATABASES

MySQL 8 role-based permission control system

1. K8s on Azure

Deployment models

architecture

Create the Azure Container Registry

az group create --name myResourceGroup --location australiaeast
az acr create --resource-group myResourceGroup --name cwzhou --sku Basic

Push a container to the registry

Login to your ACR instance:

az acr login --name akscourse

Verify that you have a local copy of your application image:

docker images

If you don’t have an application, you can quickly build your own mini-application image (see the app_example/README.md file):
docker build app_example/ -t hostname:v1

We need to tag the image with the registry login server address, which we can get with:

export aLS=`az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output tsv`

Tag your image with the server address and version (v1 in this case):

docker tag hostname:v1 ${aLS}/hostname:v1

Verify that your tags have been applied:

docker images

Push the image to the Registry:

docker push ${aLS}/hostname:v1

Verify that the image has been pushed correctly:

az acr repository list --name cwzhou --output tsv
repository=<repository_output>

You can verify that the image is appropriately tagged with (repository is the output of the previous step):

az acr repository show-tags --name cwzhou --repository ${repository} --output tsv

Verify container registry image

docker rmi <registry/image:version>
docker pull <registry/image:version>

It should also then be possible to run the image locally to verify operation:

docker run --rm --name hostname -p 8080:80 <registry/image:version>
curl localhost:8080
docker stop hostname
docker rmi <registry/image:version>

Establish AKS specific credentials

To allow an AKS cluster to interact with other Azure resources, such as the Azure Container Registry we created in a previous chapter, an Azure Active Directory (AD) service principal is used. To create the service principal:

az ad sp create-for-rbac --skip-assignment

Make a note of the appId and password; you will need these. Better yet, save this credential somewhere secure.

Get the ACR resource ID:

az acr show --resource-group myResourceGroup --name cwzhou --query "id" --output tsv

Create a role assignment:

az role assignment create --assignee <appId> --scope <acrId> --role Reader

It is also possible to integrate AKS with Azure Active Directory in a much deeper fashion, which may be appropriate in an enterprise environment. Further instructions can be found here:

https://docs.microsoft.com/en-us/azure/aks/aad-integration

Launch an AKS cluster

Finally we can create an AKS cluster (you will need your <appId> and <password> from the previous section):

cat ../sp.txt

az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--node-count 1 \
--max-pods 20 \
--kubernetes-version 1.12.4 \
--generate-ssh-keys \
--enable-vmss \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 3 \
--service-principal <appId> --client-secret <password>

This will create a cluster (which may take 5-10 minutes). Once done, we can connect to the kubernetes environment via the Kubernetes CLI. If you are using the Azure Cloud Shell, the kubernetes client (kubectl) is already installed. You can also install locally if you haven't previously installed a version of kubectl:
az aks install-cli

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --admin

The resource-group and name were set during the creation process, and you should use those if different from what we're using here.

Check your connection and that the kubernetes cli is working with:
kubectl get nodes

If you have issues with connecting, one thing you can check is the version of your kubernetes client:
kubectl version

This will tell you both the local client version and the configured kubernetes server version; make sure the client is at least the same version as the server, if not newer.

Basic

Key Description
SPC p f search find in the same project
SPC p p open project
SPC w m max the current windows
ALT ; uncomment
gcc comment out a line
gc comment out highlighted text
SPC w / vertical
SPC w - horizontal
SPC f e d edit spacemacs profile
SPC f f open file
SPC w c close the window
SPC t l toggle lines
C-x h select the entire buffer
C-h m show available commands for the current modes
SPC a d Dired mode : 1. Copy file : C 2. Delete the file : D 3. Rename the file : R 4. Create a new directory : + 5. Reload directory listing : g
C-b Move back one full screen
C-f Move forward one full screen
C-d Move forward 1/2 screen
C-u Move back (up) 1/2 screen
/sudo::/path/to/file Open a file with su/sudo
r in spacemacs buffer Open from the recent file list
SPC o y Copy to clipboard
SPC o p Paste from clipboard
/ssh: :/home/szhou/ ssh remote edit file
SPC s c clear the highlight of search
“2p paste the 2nd-last yanked lines
SPC r l resume the last helm session

Movement

Key Description
C-u up half page
C-d down half page
d-0 delete to the beginning of the line

Magit

Key Description
SPC g s show Magit status view
j k position the cursor on a file
TAB show and hide the diff for the file
s stage a file (u to unstage a file and x to discard changes to a file)
cc commit
,c finish the commit message
p u Push to upstream
F (r) u Pull tracked branch and rebase
magit

Org mode

Key Description
M-x org-mode enable Org mode
M-shift-RET Add a TODO list
C-c C-t Mark as completed
C-c C-x C-b toggle checkbox
C-c C-t TODO to DONE
Alt-Up or Down Move the item
C-c C-x C-w Delete the item
Alt-Right Demote a headline
Alt-Shift-Left To promote a subtree
C-c C-e To start the exporter
h o Export to a web page and open
t A Export as an ASCII text file

Dired mode

Key Description
SPC a d enable Dired Mode

Helm

Key Description
SPC r l resume the last helm session

tmux

Key Description
prefix :setw synchronize-panes off (ctrl + y) stop sending the same command to all panes in the window
prefix :setw synchronize-panes on (ctrl + u) start sending the same command to all panes in the window
prefix r reload the tmux configuration
prefix d Detach the session

Infrastructure automation tool registries

  • Chef Infra Server
  • PuppetDB
  • Ansible Tower
  • Salt Mine

General-purpose configuration registry products

  • Zookeeper
  • etcd
  • Consul
  • doozerd

Handling secrets as parameters

Encrypting secrets

Disposable secrets

  • HashiCorp Vault

Continuously Test and Deliver

Delivery pipeline software and services

Build server

  • Jenkins
  • Team City
  • Bamboo
  • Github Actions

    CD software

  • GoCD
  • ConcourseCI
  • BuildKite

    SaaS services

  • CircleCI
  • TravisCI
  • AppVeyor
  • Drone
  • BoxFuse

    Cloud platform services

  • AWS CodeBuild(CI)
  • AWS CodePipeline(CD)
  • Azure Pipelines

    Source code repository services

  • Github Actions
  • GitLab CI and CD

    Evaluating tools

  • Atlantis (manage pull requests for Terraform projects)
  • Terraform Cloud
  • WeaveWorks(managing Kubernetes clusters)

Deployment packages

Target runtime Example packages
Server operating system Red Hat RPM files, Debian .deb files, Windows MSI installer packages
Language runtime engine Ruby gems, Python pip packages, Java .jar, .war, and .ear files
Container runtime Docker images
Application clusters Kubernetes Deployment Descriptors, Helm charts
FaaS serverless Lambda deployment package


Multiplexers

mosh

mosh jdoe@10.10.10.101
mosh username@remoteserver.org --ssh="ssh -i ~/.ssh/identity_file -p 1234"

ss

user@opsschool ~$ ss -tuln
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 *:80 *:*
tcp LISTEN 0 50 *:4242 *:*
tcp LISTEN 0 50 :::4242 :::*
tcp LISTEN 0 50 *:2003 *:*
tcp LISTEN 0 50 *:2004 *:*
tcp LISTEN 0 128 :::22 :::*
tcp LISTEN 0 128 *:22 *:*
tcp LISTEN 0 100 *:3000 *:*
tcp LISTEN 0 100 ::1:25 :::*
tcp LISTEN 0 100 127.0.0.1:25 *:*
tcp LISTEN 0 50 *:7002 *:*
* means the daemon is listening on all IP addresses the server might have.

127.0.0.1 means the daemon is listening only to the loopback interface

::: is the same thing as *, but for IPv6

::1 is the same as 127.0.0.1, but for IPv6
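
To see the difference between * and 127.0.0.1 for yourself, here is a small Python sketch that opens one listener on the loopback interface and one on all interfaces; ports 9001 and 9002 are arbitrary choices for the example:

import socket

# Listens only on the loopback interface -> ss shows 127.0.0.1:9001
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 9001))
loopback.listen()

# Listens on all IPv4 addresses the host has -> ss shows *:9002 (or 0.0.0.0:9002)
everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
everywhere.bind(("0.0.0.0", 9002))
everywhere.listen()

input("Run 'ss -tuln' in another terminal, then press Enter to exit...")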

mtr

is a program that combines the functionality of ping and traceroute into one utility

 mtr -r uow.edu.au
Start: 2020-08-09T15:14:55+1000
HOST: stan-OptiPlex-380 Loss% Snt Last Avg Best Wrst StDev
1.|-- router.lan 0.0% 10 0.4 0.4 0.3 0.4 0.0
2.|-- 58.162.27.77 0.0% 10 8.6 12.7 8.0 17.5 3.4
3.|-- 10.2.2.150 0.0% 10 8.6 13.7 8.3 29.7 6.6
4.|-- 10.2.1.149 0.0% 10 15.2 13.6 10.2 16.1 2.0
5.|-- Bundle-Ether77.chw-edge90 0.0% 10 12.2 14.0 10.1 23.7 3.9
6.|-- bundle-ether2.ken-edge903 0.0% 10 12.1 13.3 9.2 26.3 4.8
7.|-- aar3533567.lnk.telstra.ne 0.0% 10 14.3 14.1 10.4 31.7 6.5
8.|-- xe-0-2-1.pe1.rsby.nsw.aar 0.0% 10 12.7 12.0 8.9 14.8 2.2
9.|-- 138.44.5.111 0.0% 10 10.2 14.5 10.2 17.5 2.5
10.|-- 203.10.91.20 0.0% 10 11.8 16.6 9.9 27.2 6.1
11.|-- www.uow.edu.au 0.0% 10 11.8 13.7 11.8 16.4 1.5

iftop

Displays bandwidth usage on a specific interface, broken down by remote host

sudo iftop -i eth0

iperf

a bandwidth testing utility. It consists of a daemon and client, running on separate machines.

sudo iperf3 -c remote-host

Troubleshooting layer 1 problems

ip -s link show eth0
sudo ethtool eth0
## -S, which shows all the same metrics as the ip command, plus many, many more
sudo ethtool -S eth0

HTTP core protocol

Request

Method Description
GET Transfers a current representation of the target resource.
HEAD Same as GET, but only transfers the status line and header section.
POST Perform resource-specific processing on the request payload.
PUT Replace all current representations of the target resource with the requested payload.
DELETE Remove all current representations of the target resource.
CONNECT Establish a tunnel to the server identified by the target resource.
OPTIONS Describe the communication options for the target resource.
TRACE Perform a message loop-back test along the path to the target resource.

Response

Code Reason Description
200 OK The request has succeeded.
301 Moved Permanently The target has moved to a new location, and future references should use the new location. The server should mention the new location in the Location header of the response.
302 Found The target resource has been temporarily moved. As the move is temporary, future references should still use the same location.
400 Bad Request The server cannot process the request as it is perceived as invalid.
401 Unauthorized The request to the target resource is not allowed due to missing or incorrect authentication credentials.
403 Forbidden The request to the target resource is not allowed for reasons unrelated to authentication.
404 Not Found The target resource was not found.
500 Internal Server Error The server encountered an unexpected error.
502 Bad Gateway The server is acting as a gateway/proxy and received an invalid response from a server it contacted to fulfill the request.
503 Service Unavailable The server is currently unable to fulfill the request.
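
These status codes are easy to observe from Python's standard library as well; a minimal sketch with http.client (example.org is just a convenient public host):

import http.client

conn = http.client.HTTPConnection("example.org")
conn.request("GET", "/")
resp = conn.getresponse()

# status and reason map directly to the table above, e.g. 200 OK
print(resp.status, resp.reason)
conn.close()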

cURL

is a tool and library for transferring data with URL syntax, capable of handling various protocols.

curl http://foobar/index.html
With the --request or -X parameter, the method can be specified. To include the headers in the output, add -I:
curl -I --request GET http://www.opsschool.org/en/latest/http_101.html
The -O switch makes cURL write output to a local file named like the remote file.
$ curl -O http://localhost/bigfile

By default cURL does not follow HTTP redirects, and instead a 3xx redirection message is given.
The -L switch makes cURL automatically follow redirects and issue another request to the given location until it finds the targeted resource.

curl -IL foobar.org

netcat

a networking tool capable of reading and writing data across a network connection.

Basic client/server

$ nc -l 192.168.0.1 1234 > output.log
### then in another terminal, pipe some data to netcat that is connected to the destination ip and port
$ echo "I'm connected as client"| nc 192.168.0.1 1234
## Then again in the first (server) terminal, the data is displayed:
$ cat output.log
I'm connected as client
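
For comparison, the same one-shot exchange sketched with Python's socket module; the loopback address stands in for the two machines in the netcat example:

import socket
import threading

HOST, PORT = "127.0.0.1", 1234
ready = threading.Event()

def server():
    # Accept one connection and print whatever the client sends (like nc -l)
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                      # signal that the listener is up
        conn, _ = srv.accept()
        with conn:
            print(conn.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
ready.wait()

# Client side: connect and send a line, like `echo ... | nc host port`
with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"I'm connected as client")

t.join()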