Contents
  1. 1. Kubernetes: The Terminology
    1. 1.1. Architecture of a Kubernetes cluster
    2. 1.2. Kubernetes node types
    3. 1.3. The k8s control plane
    4. 1.4. K8s worker nodes
    5. 1.5. Addons
    6. 1.6. Authentication and authorization on K8s
      1. 1.6.1. Namespaces
      2. 1.6.2. Users
      3. 1.6.3. Authentication methods
      4. 1.6.4. K8s’ certificate infrastructure for TLS and security
      5. 1.6.5. RBAC
      6. 1.6.6. ABAC
    7. 1.7. Using kubectl and YAML
      1. 1.7.1. Setting up kubectl and kubeconfig
      2. 1.7.2. Imperative versus declarative commands
      3. 1.7.3. Writing k8s resource YAML files
  2. 2. Scaling
    1. 2.1. Pods:
    2. 2.2. Nodes:
    3. 2.3. Autoscale
    4. 2.4. Pods:
    5. 2.5. Nodes:
  3. 3. Issues
  4. 4. CKA
    1. 4.1. K8s cluster

Kubernetes: The Terminology

Architecture of a Kubernetes cluster

Diagram: architecture of a Kubernetes cluster.

Kubernetes node types

A node can be a VM, a bare-metal host, or even a Raspberry Pi.

Two distinct categories:

  1. master nodes: run the k8s control plane applications
  2. worker nodes: run the applications that you deploy onto k8s

A production deployment should have a minimum of three master nodes and three worker nodes.
Most large deployments have many more workers than masters.

The k8s control plane

  • kube-apiserver: this application handles instructions sent to k8s.
    It accepts HTTPS requests, typically on port 443.
    When a configuration request is made to the k8s API server, it checks the current cluster configuration in etcd and changes it if necessary.

The Kubernetes API is generally a RESTful API, with endpoints for each Kubernetes resource type, along with an API version that is passed in the query path; for instance, /api/v1.
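
A minimal sketch of querying those REST endpoints directly. kubectl proxy is one of several ways to reach the API server without handling client certificates by hand; the port and namespace below are only illustrative:

# start a local authenticated proxy to the API server
kubectl proxy --port=8001 &

# list pods in the default namespace via the v1 REST endpoint
curl http://localhost:8001/api/v1/namespaces/default/pods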

  • kube-scheduler: this component handles the work of deciding which nodes to place workloads on, which can become quite complex.
    By default, this decision is influenced by workload resource requirements and node status.

  • kube-controller-manager: this component provides a high-level control loop that ensures that the desired configuration of the cluster and applications running on it is implemented.

    • The node controller, which ensures that nodes are up and running
    • The replication controller, which ensures that each workload is scaled properly
    • The endpoints controller, which handles communication and routing configuration for each workload.
    • Service account and token controllers, which handle the creation of API access tokens and default accounts
  • etcd: a distributed key-value store that contains the cluster configuration
    An etcd replica runs on each master node and uses the Raft consensus algorithm, which ensures that a quorum is maintained before allowing any changes to the keys or values.

K8s worker nodes

contains components that allow it to communicate with the control plane and handle networking.

  • kubelet
    The kubelet is an agent that runs on every node (including master nodes, though it has a different configuration in that context). Its main purpose is to receive a list of PodSpecs (more on those later) and ensure that the containers prescribed by them are running on the node. The kubelet gets these PodSpecs through a few different possible mechanisms, but the main way is by querying the Kubernetes API server. Alternatively, the kubelet can be started with a file path, which it will monitor for a list of PodSpecs, an HTTP endpoint to monitor, or its own HTTP endpoint to receive requests on (see the static Pod sketch after this list).

  • kube-proxy
    kube-proxy is a network proxy that runs on every node. Its main purpose is to do TCP, UDP, and SCTP forwarding (either via stream or round-robin) to workloads running on its node.

  • The container runtime
    The container runtime runs on each node and is the component that actually runs your workloads. Kubernetes supports CRI-O, Docker, containerd, rktlet, and any valid Container Runtime Interface (CRI) runtime.
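
The kubelet's file-path mechanism mentioned above is how static Pods work on kubeadm-based clusters: the kubelet watches a manifest directory and runs whatever Pod manifests appear there, with no API server involved. A minimal sketch of the relevant kubelet configuration; the path is the kubeadm default and may differ in other setups:

# excerpt from a kubelet config file (KubeletConfiguration)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests  # any Pod manifest placed here is run by the kubelet directly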

Addons

For example, Container Network Interface (CNI) plugins such as Calico, Flannel, or Weave provide overlay network functionality that adheres to Kubernetes’ networking requirements.

Authentication and authorization on K8s

Namespaces

is a construct that allows you to group k8s resources in your cluster.
For instance, you might create one namespace per environment: dev, staging, and production (see the sketch below).
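
A quick sketch of creating and using such namespaces; the names are only illustrative:

# create a namespace per environment
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace production

# list resources in a specific namespace
kubectl get pods --namespace dev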

Default namespaces:

  • kube-system
    contains the cluster services such as etcd, the scheduler, and any resource created by k8s itself rather than by users.
  • kube-public
    is readable by all users by default and can be used for public resources.

Users

There are two types:

  • regular users
    Regular users are generally managed by a service outside the cluster, whether that takes the form of private keys, usernames and passwords, or some other user store.
  • service accounts
    Service accounts, however, are managed by Kubernetes and restricted to specific namespaces. Kubernetes may create them automatically, or they can be created manually through calls to the Kubernetes API (see the sketch after this list).
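
A minimal sketch; the account and namespace names are assumptions:

# create a service account in the dev namespace
kubectl create serviceaccount build-bot --namespace dev

# inspect it
kubectl get serviceaccount build-bot --namespace dev -o yaml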

Authentication methods

  • HTTP basic authentication
  • client certificates
  • bearer tokens (sketched after this list)
  • proxy-based authentication
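
As a sketch of the bearer-token method, a raw HTTPS request to the API server might look like the following; the server address, CA file, and token are placeholders:

curl https://<api-server>:443/api/v1/namespaces/default/pods \
  --cacert ca.crt \
  -H "Authorization: Bearer <token>"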

K8s’ certificate infrastructure for TLS and security

openssl req -new -key myuser.pem -out myusercsr.pem -subj "/CN=myuser/O=dev/O=staging"

This will create a CSR for the user myuser who is part of groups named dev and staging.

Once the CA and CSR are created, the actual client and server certificates can be created using openssl, easyrsa, cfssl, or any certificate generation tool.
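
A minimal end-to-end sketch with openssl; the CA file names (ca.crt, ca.key) are assumptions about how your cluster CA is stored:

# generate the user's private key
openssl genrsa -out myuser.pem 2048

# create the CSR (CN becomes the username, O entries become groups)
openssl req -new -key myuser.pem -out myusercsr.pem -subj "/CN=myuser/O=dev/O=staging"

# sign the CSR with the cluster CA to produce the client certificate
openssl x509 -req -in myusercsr.pem -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out myuser.crt -days 365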

RBAC

stands for Role-Based Access Control and is a common pattern for authorization.
In k8s specifically, the roles and users of RBAC are implemented using four k8s resources: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.

Example: a read-only-role (sketched below).

The only difference between a Role and a ClusterRole is that a Role is restricted to a particular namespace (in this case, the default namespace), while a ClusterRole applies to all resources of that type across the entire cluster, as well as cluster-scoped resources such as nodes.
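
A minimal sketch of the read-only-role and a RoleBinding granting it to a user; the user name jane and the resource list are assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-binding
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only-role
  apiGroup: rbac.authorization.k8s.io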

ABAC

stands for Attribute-Based Access Control. It works using policies instead of roles.
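
ABAC is enabled by starting the kube-apiserver with --authorization-mode=ABAC and --authorization-policy-file; each line of the policy file is one JSON policy object. A sketch, with an assumed user and namespace:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "jane", "namespace": "dev", "resource": "pods", "readonly": true}}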

Using kubectl and YAML

Setting up kubectl and kubeconfig

There are three sections in a kubeconfig YAML file: clusters, users, and contexts.
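
A minimal kubeconfig sketch; the cluster name, server address, and credential paths are all assumptions:

apiVersion: v1
kind: Config
clusters:
- name: demo-cluster
  cluster:
    server: https://192.168.1.100:6443
    certificate-authority: /path/to/ca.crt
users:
- name: myuser
  user:
    client-certificate: /path/to/myuser.crt
    client-key: /path/to/myuser.pem
contexts:
- name: demo-context
  context:
    cluster: demo-cluster
    user: myuser
current-context: demo-context

Switch between contexts with kubectl config use-context demo-context.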

Imperative versus declarative commands

Two paradigms for talking to the k8s API:

  • imperative
    allows you to dictate to k8s “what to do”: “spin up two copies of Ubuntu”

  • declarative
    allows you to write a file with a specification of what should be running on the cluster, and have the k8s API ensure that the configuration matches the cluster configuration, updating it if necessary (a sketch of both styles follows)
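
A sketch of the same outcome both ways; the deployment name and file name are assumptions:

# imperative: tell k8s what to do
kubectl create deployment ubuntu-demo --image=ubuntu:trusty --replicas=2

# declarative: describe the desired state in a file, let k8s reconcile
kubectl apply -f ubuntu-demo.yaml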

Basic kubectl commands:
kubectl get resource_type: see k8s resources in your cluster, where resource_type is the full name of the k8s resource
kubectl get nodes: find which nodes exist in a cluster
wide output flag: kubectl get nodes -o wide adds additional information to the list
describe: works similarly to get, except that we can optionally pass the name of a specific resource
kubectl describe nodes will return details for all nodes in the cluster, while kubectl describe nodes node1 will return a description of the node named node1.

  • kubectl create -f /path/to/file.yaml, which is an imperative command
  • kubectl apply -f /path/to/file.yaml, which is declarative
  • --dry-run flag to see the output of the create or apply commands

create works imperatively, so it will create a new resource, but if you run it again with the same file, the command will fail since the resource already exists. apply works declaratively, so if you run it the first time it will create the resource, and subsequent runs will update the running resource in Kubernetes with any changes. You can use the --dry-run flag (in newer kubectl versions you must pass a value, such as --dry-run=client) to see the output of the create or apply commands.

To update existing resources imperatively, use the edit command like so: kubectl edit resource_type resource_name

To update existing resources declaratively, edit your local YAML resource file and apply it again.

  • kubectl cluster-info, which will show the IP addresses where the major K8s cluster services are running.

Writing k8s resource YAML files

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:trusty
    command: ["echo"]
    args: ["Hello Readers"]

has four top-level keys at a minimum: apiVersion, kind, metadata, and spec.

apiVersion dictates which version of the k8s API will be used to create the resource.
kind specifies what type of resource the YAML file is referencing.
metadata provides a location to name the resource, as well as adding annotations and name-spacing information.
The spec key will contain all the resource-specific information that k8s needs to create the resource in your cluster.

Scaling

Pods:

kubectl scale deployments/course-demo-wordpress --replicas=5

Nodes:

az aks scale --resource-group kubeDemo --name demoCluster --node-count 3 --nodepool-name nodepool1

Autoscale

Pods:

kubectl autoscale deployments course-demo-wordpress --cpu-percent=40 --min=2 --max=10

Nodes:

az aks update --resource-group kubeDemo --name demoCluster --update-cluster-autoscaler --min-count 1 --max-count 3

Issues

  1. Helm 3's status command does not list the deployed resources; the fullstatus plugin restores that output:
    helm plugin install https://github.com/marckhouzam/helm-fullstats

    helm fullstatus nodeserver

CKA

K8s cluster

docs: kubeadm init

  1. Run kubeadm init as root on the control-plane node. Its output covers three follow-ups:
    • installing a pod network
    • setting up .kube/config
    • the kubeadm join command for the workers
  2. Create the kubeconfig so kubectl can reach the new cluster.
  3. Install the pod network add-on, then confirm with kubectl get nodes.
  4. Run kubeadm join as root on each worker node (see the command sketch below).
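
A sketch of those steps as commands; the control-plane address, token, and hash are placeholders that kubeadm init prints for your cluster:

# on the control-plane node, as root
kubeadm init

# as a regular user, set up the kubeconfig (these lines come from kubeadm's own output)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install a pod network add-on (a CNI plugin from the Addons section), then verify
kubectl get nodes

# on each worker node, as root, using the values printed by kubeadm init
kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>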