k8s
Kubernetes: The Terminology
Architecture of a Kubernetes cluster
Kubernetes node types
A node can be a VM, a bare-metal host, or a Raspberry Pi.
Two distinct categories:
- master nodes: run the k8s control plane applications
- worker nodes: run the applications that you deploy onto k8s
A production deployment should have a minimum of three master nodes and three worker nodes.
Most large deployments have many more workers than masters.
The k8s control plane
- kube-apiserver: this application handles instructions sent to k8s.
It is the component that accepts HTTPS requests, typically on port 443.
When a configuration request is made to the k8s API server, it checks the current cluster configuration in etcd and changes it if necessary.
The Kubernetes API is generally a RESTful API, with endpoints for each Kubernetes resource type, along with an API version that is passed in the query path; for instance, /api/v1.
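For example, the apiVersion field in a resource's YAML corresponds to that query path: core-group resources (apiVersion: v1) are served under /api/v1, while grouped resources such as Deployments (apiVersion: apps/v1) live under /apis/apps/v1. A minimal sketch with a hypothetical name:
# Core-group resource: the API server serves this kind under /api/v1.
apiVersion: v1
kind: Service
metadata:
  name: example-service    # hypothetical name
spec:
  selector:
    app: example           # hypothetical label
  ports:
    - port: 80
# A Deployment would instead declare apiVersion: apps/v1 and be served under /apis/apps/v1.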
- kube-scheduler: this component handles the work of deciding which nodes to place workloads on, which can become quite complex.
By default, this decision is influenced by workload resource requirements and node status.
- kube-controller-manager: this component provides a high-level control loop that ensures that the desired configuration of the cluster and the applications running on it is implemented. Its controllers include:
- The node controller, which ensures that nodes are up and running
- The replication controller, which ensures that each workload is scaled properly
- The endpoints controller, which handles communication and routing configuration for each workload.
- Service account and token controllers, which handle the creation of API access tokens and default accounts
- etcd: a distributed key-value store that contains the cluster configuration
An etcd replica runs on each master node and uses the Raft consensus algorithm, which ensures that a quorum is maintained before allowing any changes to the keys or values.
K8s worker nodes
Each worker node contains components that allow it to communicate with the control plane and handle networking.
kubelet
The kubelet is an agent that runs on every node (including master nodes, though it has a different configuration in that context). Its main purpose is to receive a list of PodSpecs (more on those later) and ensure that the containers prescribed by them are running on the node. The kubelet gets these PodSpecs through a few different possible mechanisms, but the main way is by querying the Kubernetes API server. Alternatively, the kubelet can be started with a file path to monitor for a list of PodSpecs, an HTTP endpoint to monitor, or its own HTTP endpoint to receive requests on.
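A minimal sketch of the kind of PodSpec the kubelet acts on, for example dropped as a static pod manifest into the file path the kubelet monitors (name and image are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: example-static-pod    # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25       # hypothetical image
      ports:
        - containerPort: 80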
kube-proxy
kube-proxy is a network proxy that runs on every node. Its main purpose is to do TCP, UDP, and SCTP forwarding (either via stream or round-robin) to workloads running on its node.
The container runtime
The container runtime runs on each node and is the component that actually runs your workloads. Kubernetes supports CRI-O, Docker, containerd, rktlet, and any valid Container Runtime Interface (CRI) runtime.
Addons
Add-ons extend core cluster functionality. For example, Container Network Interface (CNI) plugins such as Calico, Flannel, or Weave provide overlay network functionality that adheres to Kubernetes’ networking requirements.
Authentication and authorization on K8s
Namespaces
A namespace is a construct that allows you to group k8s resources in your cluster.
For instance, one namespace for each environment: dev, staging, and production.
Default namespaces:
- kube-system
contains cluster services such as etcd, the scheduler, and any resource created by k8s itself rather than by users.
- kube-public
is readable by all users by default and can be used for public resources.
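Beyond the built-in namespaces, you can define your own, for instance one per environment as described above; a minimal sketch with a hypothetical name:
apiVersion: v1
kind: Namespace
metadata:
  name: staging    # hypothetical environment namespace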
Users
two types:
- regular users
Regular users are generally managed by a service outside the cluster, whether that is private keys, usernames and passwords, or some form of user store.
- service accounts
Service accounts, however, are managed by Kubernetes and restricted to specific namespaces. Kubernetes may create a service account automatically, or one can be created manually through calls to the Kubernetes API.
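A minimal sketch of a service account defined manually (the account name and namespace are hypothetical):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer    # hypothetical account name
  namespace: dev       # hypothetical namespace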
Authentication methods
- HTTP basic authentication
- client certificates
- bearer tokens
- proxy-based authentication
K8s’ certificate infrastructure for TLS and security
openssl req -new -key myuser.pem -out myusercsr.pem -subj "/CN=myuser/O=dev/O=staging"
This will create a CSR for the user myuser who is part of groups named dev and staging.
Once the CA and CSR are created, the actual client and server certificates can be created using openssl,easyrsa, cfssl, or any certificate generation tool.
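If you want the cluster's own CA to sign the client certificate, the CSR can also be submitted to the Kubernetes certificates API; a sketch with hypothetical names and the CSR contents elided:
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser-csr                                      # hypothetical name
spec:
  request: <base64-encoded contents of myusercsr.pem>   # fill in the real CSR data
  signerName: kubernetes.io/kube-apiserver-client
  usages:
    - client auth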
RBAC
RBAC stands for Role-Based Access Control and is a common pattern for authorization.
In k8s specifically, the roles and users of RBAC are implemented using four k8s resources: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.
The only difference between a Role and a ClusterRole is that a Role is restricted to a particular namespace (in this case, the default namespace), while a ClusterRole applies to all resources of that type across the cluster, as well as to cluster-scoped resources such as nodes.
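A minimal sketch of a namespaced Role plus the RoleBinding that grants it (names, namespace, and subject are hypothetical):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # hypothetical role name
  namespace: dev            # hypothetical namespace
rules:
  - apiGroups: [""]         # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods           # hypothetical binding name
  namespace: dev
subjects:
  - kind: User
    name: myuser            # hypothetical user, e.g. from the CSR above
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io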
ABAC
ABAC stands for Attribute-Based Access Control. It works using policies instead of roles.
Using kubectl and YAML
Setting up kubectl and kubeconfig
kubeconfig
There are three sections to a kubeconfig YAML file: clusters, users, and contexts.
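A sketch of that structure, with placeholder names and the certificate data elided:
apiVersion: v1
kind: Config
clusters:
  - name: demo-cluster                     # hypothetical cluster name
    cluster:
      server: https://cluster.example.com:443
      certificate-authority-data: <base64 CA data>
users:
  - name: myuser                           # hypothetical user entry
    user:
      client-certificate-data: <base64 client certificate>
      client-key-data: <base64 client key>
contexts:
  - name: dev-context                      # hypothetical context pairing the user with the cluster
    context:
      cluster: demo-cluster
      user: myuser
current-context: dev-context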
Imperative versus declarative commands
Two paradigms for talking to the k8s API:
- imperative
Imperative commands let you dictate to k8s “what to do”: for example, “spin up two copies of Ubuntu”.
- declarative
Declarative commands let you write a file with a specification of what should be running on the cluster, and have the k8s API ensure that the cluster configuration matches that specification, updating it if necessary.
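A small declarative sketch (names and image are hypothetical), echoing the “two copies of Ubuntu” example: the file states the desired replica count and k8s reconciles the cluster toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-copies              # hypothetical name
spec:
  replicas: 2                      # the declared desired state
  selector:
    matchLabels:
      app: ubuntu-copies
  template:
    metadata:
      labels:
        app: ubuntu-copies
    spec:
      containers:
        - name: ubuntu
          image: ubuntu:22.04      # hypothetical image tag
          command: ["sleep", "infinity"]   # keep the container running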
Basic kubectl commands:
kubectl get resource_type: see k8s resources in your cluster, where resource_type is the full name of the k8s resource type
kubectl get nodes: find which nodes exist in a cluster
wide output flag: kubectl get nodes -o wide adds additional information to the list
describe: works similarly to get, except that we can optionally pass the name of a specific resource
kubectl describe nodes will return details for all nodes in the cluster, while kubectl describe nodes node1 will return a description of the node named node1.
- kubectl create -f /path/to/file.yaml, which is an imperative command
- kubectl apply -f /path/to/file.yaml, which is declarative
- --dry-run flag to see the output of the create or apply commands
create works imperatively, so it will create a new resource, but if you run it again with the same file, the command will fail since the resource already exists. apply works declaratively, so if you run it the first time it will create the resource, and subsequent runs will update the running resource in Kubernetes with any changes. You can use the --dry-run flag to see the output of the create or apply commands.
To update existing resources imperatively, use the edit command like so: kubectl edit resource_type resource_name
To update existing resources declaratively, you can edit your local YAML resource file.
- kubectl cluster-info, which will show the IP addresses where the major K8s cluster services are running.
Writing k8s resource YAML files
apiVersion: v1
A resource YAML file has four top-level keys at a minimum: apiVersion, kind, metadata, and spec.
apiVersion dictates which versions of the k8s API will be used to create the resource.
kind specifies what type of resources the YAML file is referencing.
metadata provides a place to name the resource, as well as to add annotations and namespace information
The spec key will contain all the resource-specific information that k8s needs to create the resource in your cluster.
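Putting those four keys together, a minimal sketch of a complete resource file (names and image are hypothetical):
apiVersion: v1                 # which API version to use
kind: Pod                      # what type of resource this file describes
metadata:                      # naming, annotations, and namespacing
  name: demo-pod               # hypothetical name
  namespace: dev               # hypothetical namespace
  labels:
    app: demo
spec:                          # resource-specific configuration
  containers:
    - name: app
      image: nginx:1.25        # hypothetical image
      ports:
        - containerPort: 80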
Scaling
Pods:
kubectl scale deployments/course-demo-wordpress --replicas=5
Nodes:
az aks scale --resource-group kubDemo --name demoCluster --node-count 3 --nodepool-name nodepoll1
Autoscale
Pods:
kubectl autoscale deployments course-demo-wordpress --cpu-percent=40 --min=2 --max=10
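The same autoscaling policy can also be expressed declaratively as a HorizontalPodAutoscaler manifest, assuming the cluster serves the autoscaling/v2 API:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: course-demo-wordpress
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: course-demo-wordpress
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 40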
Nodes:
az aks update --resource-group kubeDemo --name demoCluster --update-cluster-autoscaler --min-count 1 --max-count 3
issue
- helm 3 status does not list resources
helm plugin install https://github.com/marckhouzam/helm-fullstatus
helm fullstatus nodeserver
CKA
K8s cluster
docs: kubeadm init
- kubeadm init run as root
- network
- .kube/config
- kubeadm join
- create config
- kubectl — network
- kubectl get nodes
- kubeadm join (run as root)