Docker Meetup at Walmart Labs, Bangalore – Implementing Lightweight Kubernetes (K3s) on a Raspberry Pi Stack – Sangam Biradar – Part 1

It was a wonderful opportunity to speak about Rancher's latest open-source projects, k3s and k3OS. Thanks to Docker Captain Ajeet Singh Raina.

Let's dive deep into k3s and look at why Kubernetes is a good fit for the IoT platform.

Why the IoT needs Kubernetes

Kubernetes has become the de-facto standard container orchestration framework for cloud-native deployments. Development teams have turned to Kubernetes to support their migration to new microservices architectures and a DevOps culture for continuous integration and continuous deployment.

At the same time, many organizations are going through a digital transformation process. Their goal is to change how they connect with their customers, suppliers and partners. These organizations are taking advantage of innovations offered by technologies such as IoT platforms, big data analytics, or machine learning to modernize their enterprise IT and OT systems. They realize that the complexity of developing and deploying new digital products requires new development processes. Consequently, they turn to agile development and infrastructure tools such as Kubernetes.

1. Enabling DevOps for IoT

Customer and market demands often require IoT solutions to have the ability to quickly deploy new features and updates. Kubernetes provides a unified deployment model that allows DevOps teams to quickly and automatically test and deploy new services. It supports zero-downtime deployments in the form of rolling updates, which allows mission-critical IoT solutions to be kept up to date with no impact on end users (customers).
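As a sketch, such a zero-downtime rolling update boils down to a couple of kubectl commands. The deployment name `sensor-api` and the image tag below are hypothetical; the commands are echoed rather than executed so the sequence can be reviewed first (drop the `run` wrapper on a live cluster).

```shell
# Sketch of a rolling update; "sensor-api" and the image tag are hypothetical.
run() { echo "+ $*"; }   # echo only; remove to run against a real cluster

# 1. Roll out a new image version; Kubernetes replaces pods one by one.
run kubectl set image deployment/sensor-api api=myrepo/sensor-api:v2
# 2. Watch the rollout until every replica runs the new version.
run kubectl rollout status deployment/sensor-api
# 3. If something goes wrong, revert, also with zero downtime.
run kubectl rollout undo deployment/sensor-api
```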

2. Scalability

Scalability is a key concern for many IoT solutions. The ability to handle thousands or even millions of device connections, sending terabytes of data and messages, and providing services such as real-time analytics requires a deployment infrastructure that can scale up and down to meet the demands of an IoT deployment. Kubernetes can automatically scale Pods across the nodes of a cluster.

3. High availability

Many IoT solutions are considered business-critical systems that need to be reliable and available. For instance, an IoT solution critical to the operation of a factory needs to be available at all times. Kubernetes provides the tooling required to deploy highly available services. Its architecture also allows workloads to run independently; they can be restarted or recreated with no effect on end users.

4. Efficient use of cloud resources

IoT solutions are often a set of connected services. They handle device connectivity and management, data ingestion, data integration, analytics or integration with IT and OT systems, among others. These services will often run on public cloud providers, such as AWS or MS Azure. This makes the efficient use of cloud provider resources an important consideration towards the total cost to manage and deploy these services. Kubernetes creates an abstraction layer on top of the underlying virtual machines. Administrators are able to focus on the deployment of the IoT services across the optimal number of VMs as opposed to a single service on a single VM.

Deployment to the IoT edge [don't miss its future]

A key trend in the IoT industry is the deployment of IoT services to the edge of the network. For instance, to improve the responsiveness of a predictive maintenance solution, it might be more efficient to deploy the data analytics and machine learning services closer to the equipment being monitored. Running IoT services in a distributed and federated manner creates a new management challenge for system administrators and developers. However, Kubernetes provides a common platform that could be used for deploying IoT services at the edge. In fact, a new Kubernetes IoT Working Group is investigating how it can provide a consistent deployment model for IoT cloud and IoT Edge.

The Kubernetes community is rapidly advancing and innovating around the Kubernetes platform. These advancements are making it possible to build cloud-native IoT solutions that are scalable, reliable and deployed in a distributed federation. It is clear Kubernetes has become a key enabling technology for IoT solutions.

Why k3s?

Source: https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/

Darren Shepherd, Chief Architect at Rancher Labs, is known for building simple solutions and accessible user experiences for distributed systems. k3s is one of his latest experiments, reducing the footprint and bootstrap process of Kubernetes to a single binary.

The k3s binary available on GitHub comes in at around 40 MB and bundles all the low-level components required, such as containerd, runc and even kubectl. k3s can take the place of kubeadm, which started as part of a response from the Kubernetes community to improve the user experience of bootstrapping clusters.

kubeadm is now able to create production-ready multi-master clusters, but it is not well suited to the Raspberry Pi, because it assumes hosts with high CPU/memory and low latency. When I ran through the installation of k3s the first time, it was several times quicker to boot up than kubeadm, but the important part was that it worked first time, every time, without any manual hacks or troubleshooting.

Note: k3s, just like Kubernetes, also works on armhf (Raspberry Pi), ARM64 (Packet/AWS/Scaleway) and x86_64 (regular PCs/VMs).

Minimum System Requirements

  • Linux 3.10+
  • 512 MB of RAM per server
  • 75 MB of RAM per node
  • 200 MB of disk space
  • x86_64, ARMv7, ARM64
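A quick way to sanity-check a board against these minimums is a small shell script. This is only a sketch covering the kernel version and total RAM; the `kernel_ok` helper is my own, not part of k3s.

```shell
# Sketch: check a host against the k3s minimums (kernel 3.10+, RAM).
kernel_ok() {
    # succeed if kernel version $1 (e.g. "4.19.42-v7+") is >= 3.10
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }
}

if kernel_ok "$(uname -r)"; then
    echo "kernel OK: $(uname -r)"
else
    echo "kernel too old: $(uname -r)"
fi

# total RAM in MB (a k3s server wants ~512 MB, an agent ~75 MB)
if [ -r /proc/meminfo ]; then
    awk '/MemTotal/ {printf "RAM: %d MB\n", $2/1024}' /proc/meminfo
fi
```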

In this meetup we are implementing a two-node cluster on top of Raspberry Pis.

We'll use the wonderful Android app Find My Pi to locate the boards on the network – a lifeline for the IoT developer.

Download the app: https://play.google.com/store/apps/details?id=com.ionicframework.findpi347681&hl=en

Connect both Pis to the same network. I'm using Debian for the Pi.

Let's jump into the demo.

Enable SSH to perform remote login

EngineITops:~ sangam$ ssh pi@raspberrypi.local
pi@raspberrypi.local's password: 
Linux raspberrypi 4.19.42-v7+ #1219 SMP Tue May 14 21:20:58 BST 2019 armv7l
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jul 22 00:41:50 2019 from fe80::c56:c290:247b:90d5%wlan0 

Switch to root and enable the kernel's container features

pi@raspberrypi:~ $ sudo -i

root@raspberrypi:~# echo "cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory" >>/boot/cmdline.txt

# verify once 

root@raspberrypi:~# cat /boot/cmdline.txt
 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=PARTUUID=c93e37e6-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles
 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
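One caveat: the Pi firmware expects /boot/cmdline.txt to be a single line, so the `>>` append above leaves the cgroup flags on a second line (as the `cat` output shows), where they may be ignored. A safer, idempotent sketch appends the flags onto the end of the first line; it is demonstrated here on a scratch copy, so point FILE at /boot/cmdline.txt on the real Pi:

```shell
# Sketch: append the cgroup flags to the END of line 1 of cmdline.txt,
# and only if they are not already present. Uses a scratch file for safety.
FLAGS="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
FILE=/tmp/cmdline.txt                     # change to /boot/cmdline.txt on the Pi
printf '%s\n' "console=tty1 root=PARTUUID=c93e37e6-02 rootwait" > "$FILE"  # sample content

if ! grep -q "cgroup_memory=1" "$FILE"; then
    sed -i "1 s/\$/ $FLAGS/" "$FILE"      # join flags onto line 1, same line
fi
cat "$FILE"
```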

Note: reboot the device so the kernel flags take effect.

Install k3s on the Pi

root@raspberrypi:~#  curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.7.0 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.7.0/sha256sum-arm.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.7.0/k3s-armhf
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Let's verify the installation

root@raspberrypi:~# k3s
NAME:
   k3s - Kubernetes, but small and simple

USAGE:
   k3s [global options] command [command options] [arguments...]

VERSION:
   v0.7.0 (61bdd852)

COMMANDS:
     server   Run management server
     agent    Run node agent
     kubectl  Run kubectl
     crictl   Run crictl
     ctr      Run ctr
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug        Turn on debug logs
   --help, -h     show help
   --version, -v  print the version

If you face any hostname-resolution issues, fix them by editing /etc/hosts and adding the right entry for your Pi boxes:

127.0.0.1       raspberrypi-node3
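A small sketch for adding such an entry only when it is missing, so re-running it never duplicates lines; it operates on a scratch file here, so point HOSTS at /etc/hosts on the real node:

```shell
HOSTS=/tmp/hosts.demo                         # use /etc/hosts on the real node
printf '127.0.0.1\tlocalhost\n' > "$HOSTS"    # sample content

add_host() {  # add_host <ip> <name>: append only if <name> is not listed yet
    grep -qw "$2" "$HOSTS" || printf '%s\t%s\n' "$1" "$2" >> "$HOSTS"
}

add_host 127.0.0.1 raspberrypi-node3
add_host 127.0.0.1 raspberrypi-node3          # second call is a no-op
cat "$HOSTS"
```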

Get nodes and verify version


root@raspberrypi:~# k3s kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
raspberrypi   Ready    master   3h28m   v1.14.4-k3s.1

Listing pods, services and deployments

root@raspberrypi:~# k3s kubectl get po,svc,deploy
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   11h
root@raspberrypi:~#

containerd and Docker

k3s uses containerd by default. If you want to use it with Docker instead, all you need to do is run the agent with the --docker flag:

 k3s agent -s ${SERVER_URL} -t ${NODE_TOKEN} --docker &

Run Nginx Pods

root@raspberrypi:~# k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created

List Running pods

root@raspberrypi:~# k3s kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-84b8d48d44-ggpcp   1/1     Running   0          119s
mynginx-84b8d48d44-hkdg8   1/1     Running   0          119s
mynginx-84b8d48d44-n4r6q   1/1     Running   0          119s

Expose Deployment

root@raspberrypi:~# k3s kubectl expose deployment mynginx --port 80
service/mynginx exposed

Verify Endpoint Controller

root@raspberrypi:~# k3s kubectl get endpoints mynginx
NAME      ENDPOINTS                                   AGE
mynginx   10.42.0.10:80,10.42.0.11:80,10.42.0.12:80   17s

Test Nginx

root@raspberrypi:~# curl 10.42.0.10:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Add a new node to the k3s cluster

root@raspberrypi:~# cat /var/lib/rancher/k3s/server/node-token
K108b8e370b380bea959e8017abea3e540d1113f55df2c3f303ae771dc73fc67aa3::node:42e3dfc68ee27cf7cbdae5e4c8ac91b2
root@raspberrypi:~#

On the new node, store this value in a NODETOKEN variable and start the agent:

root@pi-node1:~# NODETOKEN=K108b8e370b380bea959e8017abea3e540d1113f55df2c3f303ae771dc73fc67aa3::node:42e3dfc68ee27cf7cbdae5e4c8ac91b2
root@pi-node1:~# k3s agent --server https://192.168.1.5:6443 --token ${NODETOKEN}
INFO[2019-04-04T23:09:16.804457435+05:30] Starting k3s agent v0.3.0 (9a1a1ec)
INFO[2019-04-04T23:09:19.563259194+05:30] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-04-04T23:09:19.563629400+05:30] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-04-04T23:09:19.613809334+05:30] Connecting to wss://192.168.1.5:6443/v1-k3s/connect
INFO[2019-04-04T23:09:19.614108395+05:30] Connecting to proxy                           url="wss://192.168.1.5:6443/v1-k3s/connect"
FATA[2019-04-04T23:09:19.907450499+05:30] Failed to start tls listener: listen tcp 127.0.0.1:6445: bind: address already in use
root@pi-node1:~# pkill -9 k3s
root@pi-node1:~# k3s agent --server https://192.168.1.5:6443 --token ${NODETOKEN}
INFO[2019-04-04T23:09:45.843235117+05:30] Starting k3s agent v0.3.0 (9a1a1ec)
INFO[2019-04-04T23:09:48.272160155+05:30] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-04-04T23:09:48.272542392+05:30] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
/run/k3s/containerd/containerd.sock: connect: connection refused"
INFO[2019-04-04T23:09:49.321863688+05:30] Waiting for containerd startup: rpc error: code = Unknown desc = server is not initialized yet
INFO[2019-04-04T23:09:50.347628159+05:30] Connecting to wss://192.168.1.5:6443/v1-k3s/connect

List the nodes

root@raspberrypi:~# k3s kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
pi-node1   Ready    <none>   118s   v1.13.5-k3s.1
pi-node2   Ready    <none>   108m   v1.13.5-k3s.1

Create 3 replicas of Nginx and expose the port

root@raspberrypi:~# k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created

Expose the deployment

root@raspberrypi:~# k3s kubectl expose deployment mynginx --port 80
service/mynginx exposed

Add the Kubernetes dashboard to k3s

root@node1:/home/pi# k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
root@node1:/home/pi#
root@node1:/home/pi# k3s kubectl proxy
Starting to serve on 127.0.0.1:8001
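With the proxy running, the dashboard UI is normally reached through the API server's service-proxy path. The path below matches a dashboard deployed into kube-system, as the manifest above does; the exact path depends on the dashboard version, so treat it as an assumption to verify against your deployment:

```shell
# Build the dashboard URL served through `k3s kubectl proxy` (127.0.0.1:8001).
# The service name/namespace match the manifest applied above; verify for your
# dashboard version before relying on it.
DASHBOARD_URL="http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
echo "$DASHBOARD_URL"
```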

Cleaning up

kubectl delete --all pods
pod "mynginx-84b8d48d44-9ghrl" deleted
pod "mynginx-84b8d48d44-bczsv" deleted
pod "mynginx-84b8d48d44-qqk9p" deleted
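Note that pods owned by a deployment are recreated after `delete --all pods`. To remove the demo for good, delete the deployment and service themselves; the install script also dropped uninstall helpers (visible in the install log above). The commands are echoed here for review; drop the `run` wrapper to actually execute them:

```shell
# Sketch of a full cleanup; echoed only, remove `echo` to run for real.
run() { echo "+ $*"; }

run k3s kubectl delete deployment mynginx   # removes its pods as well
run k3s kubectl delete service mynginx
run /usr/local/bin/k3s-killall.sh           # stop all k3s processes/containers
run /usr/local/bin/k3s-uninstall.sh         # fully remove k3s from the node
```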

I hope this helps you create a lightweight k3s cluster! In Part 2 we will discuss k3OS.
