
Kubernetes on a RaspberryPi cluster


Recently, I decided to revive the Raspberry Pi cluster I used for Kubernetes experiments a while ago.

The re-install based on the Hypriot images and installation manual was pretty much straightforward.
Still, I encountered two strange things:

  1. Two of my Raspberry Pis don’t want to connect via Wi-Fi. They have the very same configuration as all the others. Seems like they have broken Wi-Fi chips.
  2. The sample configuration for the Ingress object that uses Traefik as a load balancer didn’t work as described – it just says "404 page not found" instead of showing the actual sample page.
    The reason is this error:

    ERROR: logging before flag.Parse: E0824 11:36:50.295344 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User "system:serviceaccount:kube-system:default" cannot list ingresses.extensions at the cluster scope

    which I couldn’t fix right away and decided to dig into at a later stage. (A sketch of the usual fix follows right after this list.)
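
For the record, the error is an RBAC problem: the default service account in kube-system is not allowed to list Ingress objects at cluster scope. A minimal sketch of the kind of ClusterRole/ClusterRoleBinding that typically resolves this, assuming Traefik keeps running under that default service account (the role name is just illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
# Traefik needs to watch services and endpoints to build its routing table
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets"]
  verbs: ["get", "list", "watch"]
# ... and the Ingress objects themselves (extensions group in k8s 1.11)
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
# Bind to the account named in the error message
- kind: ServiceAccount
  name: default
  namespace: kube-system

Applied with kubectl apply -f, Traefik should then be able to list and watch Ingresses.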

Before really starting to use the cluster, the first item on my list was to set up a registry to pull images from. Setting up a registry itself is fairly easy with the registry Docker image. On top of that, QNAP provides an "app" (which, in their context, is a docker-compose setup that wires several containers together) that bundles the registry with an nginx and a redis cache.
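
For a plain setup without the QNAP bundle, the official registry image alone would do; a minimal sketch (port and container name are arbitrary):

# Run a private registry, listening on port 5000
docker run -d --restart=always --name registry -p 5000:5000 registry:2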

Now, the actual question was how to tell the nodes to pull from that registry. First I thought I needed to tell Kubernetes. But as Kubernetes delegates the Docker commands to the Docker daemon running on each node, it’s a Docker configuration setting.

By the way: to execute commands on all nodes at the same time, tmux-cssh has become a good friend of mine ;-)
Now, to configure my NAS as a trusted registry, I have to install the CA of its self-signed certificate on my nodes:

sudo -i
# Docker looks for registry certificates in /etc/docker/certs.d/<host>:<port>/
mkdir -p /etc/docker/certs.d/192.168.100.201:6088
ln -s /etc/docker/certs.d/192.168.100.201:6088 /etc/docker/certs.d/NAShostname:6088
# Fetch the CA of the NAS’s self-signed certificate and store it as ca.crt
scp admin@192.168.100.201:/etc/docker/tls/ca.pem /etc/docker/certs.d/192.168.100.201:6088/ca.crt

The symlink using "NAShostname" is there so that I can also use the hostname instead of the IP address.

After that I’m able to push and pull images to and from my local private registry.
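
For example, tagging and pushing a locally built image ("application" is just a placeholder name):

# Tag the local image with the registry’s address, then push it
docker tag application NAShostname:6088/armhf/application:latest
docker push NAShostname:6088/armhf/application:latest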

Let’s say I built a little application and pushed an image for it, based on an arm32v7 base image, to my registry at "NAShostname:6088/armhf/application". (Remember: CPU architecture matters! You can’t use images built for and running on your x86 or amd64 machine on an ARM-based Raspberry Pi!)
To run that application on my Kubernetes cluster I have to:

Create a deployment

kubectl run demo --image=NAShostname:6088/armhf/application:latest --replicas=3 --port 8080

Expose that deployment to the outside world

kubectl expose deployment demo --type=LoadBalancer
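
For reference, the same setup expressed declaratively – roughly the objects those two commands generate (a sketch using the apps/v1 API; the run=demo label is what kubectl run sets):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      run: demo
  template:
    metadata:
      labels:
        run: demo
    spec:
      containers:
      - name: demo
        image: NAShostname:6088/armhf/application:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer
  selector:
    run: demo
  ports:
  - port: 8080
    targetPort: 8080

Saved to a file, kubectl apply -f would achieve the same as the two imperative commands above.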

The thing now is that the service created to expose the deployment never gets an external IP address assigned – on a bare-metal cluster there is no cloud provider integration that could allocate one. Its status reads like:

$ kubectl get services demo
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
demo         LoadBalancer   10.98.4.191   <pending>     8080:32390/TCP   26m

To solve this, we need to define an external IP manually. The tricky part is knowing which external IPs are valid. It turns out that only the addresses of nodes that are running one of the deployment’s pods work.

Let’s find out what these are:

$ kubectl get pods -o=wide
NAME                          READY     STATUS    RESTARTS   AGE       IP           NODE      NOMINATED NODE
demo                          1/1       Running   0          1h        10.244.2.3   node04    <none>
demo                          1/1       Running   0          1h        10.244.1.3   node05    <none>
demo                          1/1       Running   0          1h        10.244.4.3   node02    <none>
$ kubectl get nodes --output=wide
NAME      STATUS    ROLES     AGE       VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION          CONTAINER-RUNTIME
node02    Ready     <none>    11d       v1.11.2   192.168.100.105   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.34-hypriotos-v7+   docker://18.6.0
node03    Ready     <none>    11d       v1.11.2   192.168.100.103   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.34-hypriotos-v7+   docker://18.6.0
node04    Ready     <none>    11d       v1.11.2   192.168.100.100   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.34-hypriotos-v7+   docker://18.6.0
node05    Ready     <none>    11d       v1.11.2   192.168.100.101   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.34-hypriotos-v7+   docker://18.6.0
node06    Ready     master    11d       v1.11.2   192.168.100.102   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.34-hypriotos-v7+   docker://18.6.0

Thus, we can use .105, .101 and .100 to expose our service:

$ kubectl edit services demo

Add the "externalIPs:" list to the spec:

apiVersion: v1 
kind: Service 
metadata: 
  creationTimestamp: 2018-08-19T08:20:03Z 
  labels: 
    run: demo 
  name: demo 
  namespace: default 
  resourceVersion: "1585558" 
  selfLink: /api/v1/namespaces/default/services/demo 
  uid: 804d3b3a-a776-11e8-a44f-b827eb388bbf 
spec: 
  clusterIP: 10.98.4.191 
  externalIPs: 
  - 192.168.100.105 
  - 192.168.100.101 
  - 192.168.100.100 
  externalTrafficPolicy: Cluster 
  ports: 
  - nodePort: 32390 
    port: 8080 
    protocol: TCP 
    targetPort: 8080 
  selector: 
    run: demo 
  sessionAffinity: None 
  type: LoadBalancer 
status: 
  loadBalancer: {}
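
Alternatively, the same change can be applied non-interactively; a sketch using kubectl patch (the strategic merge replaces the whole externalIPs list):

kubectl patch service demo -p '{"spec":{"externalIPs":["192.168.100.105","192.168.100.101","192.168.100.100"]}}'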

After saving the edit, we can finally access the application via one of those IPs on port 8080.
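
A quick check from another machine on the LAN, for example:

$ curl http://192.168.100.105:8080/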

For me, that feels somewhat unsatisfying: by binding the service to specific node IPs, the flexibility promised by Kubernetes vanishes. Thus I suspect the cluster is not working as expected at this point and there is a lot of stuff to learn – if you actually want to set up your own Kubernetes.
Most of the time, I’ve been working with it as a hosted solution or in the context of OpenShift – and that’s probably also the reason why setting up and running k8s is not as well documented as actually using it afterwards.

If someone comes across these lines and has suggestions or questions – I’d be very happy to see your comment ;-)

More interesting stuff:
* Build containers faster with Jib, a Google image build tool for Java applications
* Setup k8s on pi using Ansible

