1 - Running Kubelet in Standalone Mode
This tutorial shows you how to run a standalone kubelet instance.
You may have different motivations for running a standalone kubelet. This tutorial is aimed at introducing you to Kubernetes, even if you don't have much experience with it. You can follow this tutorial and learn about node setup, basic (static) Pods, and how Kubernetes manages containers.
Once you have followed this tutorial, you could try using a cluster that has a control plane to manage pods and nodes, and other types of objects. For example, Hello, minikube.
You can also run the kubelet in standalone mode to suit production use cases, such as to run the control plane for a highly available, resiliently deployed cluster. This tutorial does not cover the details you need for running a resilient control plane.
Objectives
- Install cri-o, and kubelet on a Linux system and run them as systemd services.
- Launch a Pod running nginx that listens to requests on TCP port 80 on the Pod's IP address.
- Learn how the different components of the solution interact among themselves.
Caution:
The kubelet configuration used for this tutorial is insecure by design and should not be used in a production environment.
Before you begin
- Admin (root) access to a Linux system that uses systemd and iptables (or nftables with iptables emulation).
- Access to the Internet to download the components needed for the tutorial, such as:
  - A container runtime that implements the Kubernetes Container Runtime Interface (CRI).
  - Network plugins (these are often known as Container Networking Interface (CNI) plugins).
- Required CLI tools: curl, tar, jq.
Prepare the system
Swap configuration
By default, kubelet fails to start if swap memory is detected on a node. This means that swap should either be disabled or tolerated by kubelet.
Note:
If you configure the kubelet to tolerate swap, the kubelet still configures Pods (and the containers in those Pods) not to use swap space. To find out how Pods can actually use the available swap, you can read more about swap memory management on Linux nodes.
If you have swap memory enabled, either disable it or add failSwapOn: false to the kubelet configuration file.
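For example, to tolerate swap instead of disabling it, the kubelet configuration file that you create later in this tutorial could include a setting like this minimal sketch:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allow the kubelet to start even though swap is enabled on the node.
failSwapOn: false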
To check if swap is enabled:
sudo swapon --show
If there is no output from the command, then swap memory is already disabled.
To disable swap temporarily:
sudo swapoff -a
To make this change persistent across reboots:
Make sure swap is disabled in either /etc/fstab or systemd.swap, depending on how it was configured on your system.
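For example, if swap is configured through /etc/fstab, you could comment out its entry and turn swap off for the current boot (a sketch; adapt it to how swap is set up on your system):
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out any swap entries in /etc/fstab
sudo swapoff -a                              # turn off swap until the next reboot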
Enable IPv4 packet forwarding
To check if IPv4 packet forwarding is enabled:
cat /proc/sys/net/ipv4/ip_forward
If the output is 1, it is already enabled. If the output is 0, then follow the next steps.
To enable IPv4 packet forwarding, create a configuration file that sets the net.ipv4.ip_forward parameter to 1:
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
EOF
Apply the changes to the system:
sudo sysctl --system
The output is similar to:
...
* Applying /etc/sysctl.d/k8s.conf ...
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
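As an extra check (not strictly required), you can read the parameter back to confirm the new value:
sysctl net.ipv4.ip_forward
The output is similar to:
net.ipv4.ip_forward = 1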
Download, install, and configure the components
Install a container runtime
Download the latest available versions of the required packages (recommended).
This tutorial suggests installing the CRI-O container runtime (external link).
There are several ways to install the CRI-O container runtime, depending on your particular Linux distribution. Although CRI-O recommends using either deb or rpm packages, this tutorial uses the static binary bundle script of the CRI-O Packaging project, both to streamline the overall process, and to remain distribution agnostic.
The script installs and configures additional required software, such as cni-plugins, for container networking, and crun and runc, for running containers.
The script will automatically detect your system's processor architecture (amd64 or arm64) and select and install the latest versions of the software packages.
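If you want to confirm which architecture the script will detect, you can check it yourself first (an optional step):
uname -m
The output is x86_64 on amd64 systems and aarch64 on arm64 systems.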
Set up CRI-O
Visit the releases page (external link).
Download the static binary bundle script:
curl https://raw.githubusercontent.com/cri-o/packaging/main/get > crio-install
Run the installer script:
sudo bash crio-install
Enable and start the crio service:
sudo systemctl daemon-reload
sudo systemctl enable --now crio.service
Quick test:
sudo systemctl is-active crio.service
The output is similar to:
active
Detailed service check:
sudo journalctl -f -u crio.service
Install network plugins
The cri-o installer installs and configures the cni-plugins package. You can verify the installation by running the following command:
/opt/cni/bin/bridge --version
The output is similar to:
CNI bridge plugin v1.5.1
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0
To check the default configuration:
cat /etc/cni/net.d/11-crio-ipv4-bridge.conflist
The output is similar to:
{
  "cniVersion": "1.0.0",
  "name": "crio",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "ranges": [
          [{ "subnet": "10.85.0.0/16" }]
        ]
      }
    }
  ]
}
Note:
Make sure that the default subnet range (10.85.0.0/16) does not overlap with any of your active networks. If there is an overlap, you can edit the file and change it accordingly. Restart the service after the change.
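For example, you could compare that range against the routes already present on the host, and restart CRI-O after editing the file (shown here as a sketch, not part of the original steps):
ip route show                                        # list active networks; check for an overlap with 10.85.0.0/16
sudo vi /etc/cni/net.d/11-crio-ipv4-bridge.conflist  # edit the "subnet" value if needed (use any editor)
sudo systemctl restart crio.service                  # restart the service so the change takes effect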
Download and set up the kubelet
Download the latest stable release of the kubelet, choosing the command that matches your system's processor architecture (amd64 or arm64):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet"
Configure:
sudo mkdir -p /etc/kubernetes/manifests
sudo tee /etc/kubernetes/kubelet.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  webhook:
    enabled: false # Do NOT use in production clusters!
authorization:
  mode: AlwaysAllow # Do NOT use in production clusters!
enableServer: false
logging:
  format: text
address: 127.0.0.1 # Restrict access to localhost
readOnlyPort: 10255 # Do NOT use in production clusters!
staticPodPath: /etc/kubernetes/manifests
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
EOF
Note:
Because you are not setting up a production cluster, you are using plain HTTP (readOnlyPort: 10255) for unauthenticated queries to the kubelet's API.
The authentication webhook is disabled and authorization mode is set to AlwaysAllow for the purpose of this tutorial. You can learn more about authorization modes and webhook authentication to properly configure the kubelet in standalone mode in your environment.
See Ports and Protocols to understand which ports Kubernetes components use.
Install:
chmod +x kubelet
sudo cp kubelet /usr/bin/
Create a systemd service unit file:
sudo tee /etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubelet
[Service]
ExecStart=/usr/bin/kubelet \
--config=/etc/kubernetes/kubelet.yaml
Restart=always
[Install]
WantedBy=multi-user.target
EOF
The command line argument --kubeconfig has been intentionally omitted in the service configuration file. This argument sets the path to a kubeconfig file that specifies how to connect to the API server, enabling API server mode. Omitting it enables standalone mode.
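For comparison only, a kubelet that connects to an API server would typically be started with an ExecStart line similar to the following sketch (the kubeconfig path is illustrative and is not used in this tutorial):
ExecStart=/usr/bin/kubelet \
    --config=/etc/kubernetes/kubelet.yaml \
    --kubeconfig=/etc/kubernetes/kubelet.conf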
Enable and start the kubelet service:
sudo systemctl daemon-reload
sudo systemctl enable --now kubelet.service
Quick test:
sudo systemctl is-active kubelet.service
The output is similar to:
active
Detailed service check:
sudo journalctl -u kubelet.service
Check the kubelet's API /healthz endpoint:
curl http://localhost:10255/healthz?verbose
The output is similar to:
[+]ping ok
[+]log ok
[+]syncloop ok
healthz check passed
Query the kubelet's API /pods endpoint:
curl http://localhost:10255/pods | jq '.'
The output is similar to:
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {},
"items": null
}
Run a Pod in the kubelet
In standalone mode, you can run Pods using Pod manifests. The manifests can either be on the local filesystem, or fetched via HTTP from a configuration source.
Create a manifest for a Pod:
cat <<EOF > static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
EOF
Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.
sudo cp static-web.yaml /etc/kubernetes/manifests/
Find out information about the kubelet and the Pod
The Pod networking plugin creates a network bridge (cni0) and a pair of veth interfaces for each Pod (one of the pair is inside the newly made Pod, and the other is at the host level).
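You can inspect these interfaces on the host, for example (a quick check using iproute2; not required for the tutorial):
ip -brief link show type bridge   # shows the cni0 bridge created by the plugin
ip -brief link show type veth     # shows the host side of each Pod's veth pair
ip addr show cni0                 # the bridge holds the gateway address for the Pod subnet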
Query the kubelet's API endpoint at http://localhost:10255/pods:
curl http://localhost:10255/pods | jq '.'
To obtain the IP address of the static-web Pod:
curl http://localhost:10255/pods | jq '.items[].status.podIP'
The output is similar to:
"10.85.0.4"
Connect to the nginx server Pod on http://<IP>:<Port> (port 80 is the default), in this case:
curl http://10.85.0.4
The output is similar to:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Where to look for more details
If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:
/var/lib/cni
/var/lib/containers
/var/lib/kubelet
/var/log/containers
/var/log/pods
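For example, you could start with a quick look at the Pod logs and the service logs (a sketch, not an exhaustive troubleshooting guide):
sudo ls /var/log/pods                                # one subdirectory per Pod, containing per-container logs
sudo journalctl -u kubelet.service -u crio.service   # combined logs of the kubelet and CRI-O services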
Clean up
kubelet
sudo systemctl disable --now kubelet.service
sudo systemctl daemon-reload
sudo rm /etc/systemd/system/kubelet.service
sudo rm /usr/bin/kubelet
sudo rm -rf /etc/kubernetes
sudo rm -rf /var/lib/kubelet
sudo rm -rf /var/log/containers
sudo rm -rf /var/log/pods
Container Runtime
sudo systemctl disable --now crio.service
sudo systemctl daemon-reload
sudo rm -rf /usr/local/bin
sudo rm -rf /usr/local/lib
sudo rm -rf /usr/local/share
sudo rm -rf /usr/libexec/crio
sudo rm -rf /etc/crio
sudo rm -rf /etc/containers
Network Plugins
sudo rm -rf /opt/cni
sudo rm -rf /etc/cni
sudo rm -rf /var/lib/cni
Conclusion
This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.
Notice that in standalone mode the kubelet does not support fetching Pod configurations from the control plane (because there is no control plane connection).
You also cannot use a ConfigMap or a Secret to configure the containers in a static Pod.
What's next
- Follow Hello, minikube to learn about running Kubernetes with a control plane. The minikube tool helps you set up a practice cluster on your own computer.
- Learn more about Network Plugins
- Learn more about Container Runtimes
- Learn more about kubelet
- Learn more about static Pods
2 - Namespaces Walkthrough
Kubernetes namespaces help different projects, teams, or customers to share a Kubernetes cluster.
They do this by providing the following:
- A scope for Names.
- A mechanism to attach authorization and policy to a subsection of the cluster.
Use of multiple namespaces is optional.
This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of the Kubernetes playgrounds.
To check the version, enter kubectl version.
Prerequisites
This example assumes the following:
- You have an existing Kubernetes cluster.
- You have a basic understanding of Kubernetes Pods, Services, and Deployments.
Understand the default namespace
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.
Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:
kubectl get namespaces
NAME STATUS AGE
default Active 13m
Create new namespaces
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources are relaxed to enable agile development.
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site.
One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
Let's create two new namespaces to hold our work.
Use the file namespace-dev.yaml
which describes a development
namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
Create the development namespace using kubectl.
kubectl create -f https://k8s.io/examples/admin/namespace-dev.yaml
Save the following contents into file namespace-prod.yaml, which describes a production namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
And then let's create the production namespace using kubectl.
kubectl create -f https://k8s.io/examples/admin/namespace-prod.yaml
To be sure things are right, let's list all of the namespaces in our cluster.
kubectl get namespaces --show-labels
NAME STATUS AGE LABELS
default Active 32m <none>
development Active 29s name=development
production Active 23s name=production
Create pods in each namespace
A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.
Users interacting with one namespace do not see the content in another namespace.
To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace.
We first check the current context:
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
kubectl config current-context
lithe-cocoa-92103_kubernetes
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
kubectl config set-context dev --namespace=development \
--cluster=lithe-cocoa-92103_kubernetes \
--user=lithe-cocoa-92103_kubernetes
kubectl config set-context prod --namespace=production \
--cluster=lithe-cocoa-92103_kubernetes \
--user=lithe-cocoa-92103_kubernetes
By default, the above commands add two contexts that are saved into the file .kube/config. You can now view the contexts and alternate between the two new request contexts depending on which namespace you wish to work against.
To view the new contexts:
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: development
    user: lithe-cocoa-92103_kubernetes
  name: dev
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: production
    user: lithe-cocoa-92103_kubernetes
  name: prod
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
Let's switch to operate in the development namespace.
kubectl config use-context dev
You can verify your current context by doing the following:
kubectl config current-context
dev
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
Let's create some content.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: snowflake
  name: snowflake
spec:
  replicas: 2
  selector:
    matchLabels:
      app: snowflake
  template:
    metadata:
      labels:
        app: snowflake
    spec:
      containers:
      - image: registry.k8s.io/serve_hostname
        imagePullPolicy: Always
        name: snowflake
Apply the manifest to create a Deployment
kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml
We have created a Deployment with a replica count of 2 that runs Pods named snowflake, each with a basic container that serves the hostname.
kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
snowflake 2/2 2 2 2m
kubectl get pods -l app=snowflake
NAME READY STATUS RESTARTS AGE
snowflake-3968820950-9dgr8 1/1 Running 0 2m
snowflake-3968820950-vgc4n 1/1 Running 0 2m
This is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
kubectl config use-context prod
The production namespace should be empty, and the following commands should return nothing.
kubectl get deployment
kubectl get pods
Production likes to run cattle, so let's create some cattle pods.
kubectl create deployment cattle --image=registry.k8s.io/serve_hostname --replicas=5
kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cattle 5/5 5 5 10s
kubectl get pods -l app=cattle
NAME READY STATUS RESTARTS AGE
cattle-2263376956-41xy6 1/1 Running 0 34s
cattle-2263376956-kw466 1/1 Running 0 34s
cattle-2263376956-n4v97 1/1 Running 0 34s
cattle-2263376956-p5p3i 1/1 Running 0 34s
cattle-2263376956-sxpth 1/1 Running 0 34s
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
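If you want to double-check this from the prod context, note that kubectl only crosses the namespace boundary when you ask it to explicitly (an extra check, not part of the original walkthrough):
kubectl get pods                            # shows only the cattle pods from the production namespace
kubectl get pods --namespace=development    # the snowflake pods still exist, in the other namespace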
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different authorization rules for each namespace.