How to Cluster Kubernetes on RHEL 7

In the last post, we installed Kubernetes locally on a single RHEL 7 server. This time, we're modifying the procedure to distribute jobs across two servers. The installation steps are exactly the same up until we start Kubernetes, so I won't rehash them here; see the previous post: http://nicksabine.com/post/kubernetes-rhel7

Instead of starting Kubernetes via script, we're going to start it via systemd.

In my configuration, I have the following VMs:

master=192.168.56.103
minion=192.168.56.102

Create all of the following files on the master. On the minion, create only the kubernetes-kubelet.service and kubernetes-proxy.service files.
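
If you create the files on the master first, you can seed the minion by copying over the two files it needs. A minimal sketch, assuming root SSH access to the minion at 192.168.56.102:

# Copy the kubelet and proxy unit files from the master to the minion.
scp /usr/lib/systemd/system/kubernetes-kubelet.service \
    /usr/lib/systemd/system/kubernetes-proxy.service \
    root@192.168.56.102:/usr/lib/systemd/system/

Remember to edit -address and -hostname_override in the minion's copy of kubernetes-kubelet.service afterward (see the kubelet note below).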

/usr/lib/systemd/system/kubernetes-apiserver.service:

[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/opt/kubernetes/output/go/bin/apiserver --logtostderr=true -etcd_servers=http://localhost:4001 -address=0.0.0.0 -port=8080 -machines=192.168.56.102,192.168.56.103

[Install]
WantedBy=multi-user.target

Note: in the above, replace the comma-delimited list of -machines with the IPs of all the minions in the cluster.

/usr/lib/systemd/system/kubernetes-controller-manager.service:

[Unit]
Description=Kubernetes Controller Manager

[Service]
ExecStart=/opt/kubernetes/output/go/bin/controller-manager --logtostderr=true -master=192.168.56.103:8080

[Install]
WantedBy=multi-user.target

Note: in the above, replace the -master IP with the address of your master.

/usr/lib/systemd/system/etcd.service:

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
# etcd logs to the journal directly; suppress double logging
StandardOutput=null
WorkingDirectory=/opt/etcd
ExecStart=/opt/etcd/bin/etcd

[Install]
WantedBy=multi-user.target

/usr/lib/systemd/system/kubernetes-kubelet.service:

[Unit]
Description=Kubernetes Kubelet

[Service]
ExecStart=/opt/kubernetes/output/go/bin/kubelet --logtostderr=true -etcd_servers=http://192.168.56.103:4001 -address=192.168.56.102 -hostname_override=192.168.56.102

[Install]
WantedBy=multi-user.target

Note: in the above, replace the IP of the etcd_server (running on the master), and replace the -address and -hostname_override with the IP of the local minion server.
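
If you copied the unit file from the master, a hypothetical one-liner to point those two flags at the local minion (run on the minion; substitute your own IP):

# Rewrite -address and -hostname_override for this host.
sed -i -e 's/-address=[0-9.]*/-address=192.168.56.102/' \
       -e 's/-hostname_override=[0-9.]*/-hostname_override=192.168.56.102/' \
       /usr/lib/systemd/system/kubernetes-kubelet.service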

/usr/lib/systemd/system/kubernetes-proxy.service:

[Unit]
Description=Kubernetes Proxy

[Service]
ExecStart=/opt/kubernetes/output/go/bin/proxy --logtostderr=true -etcd_servers=http://192.168.56.103:4001

[Install]
WantedBy=multi-user.target

Note: in the above, replace the -etcd_servers IP with the address of the master, where etcd is running.

Start Kubernetes (on master)

systemctl daemon-reload

systemctl enable etcd
systemctl start etcd

systemctl enable kubernetes-apiserver.service
systemctl start kubernetes-apiserver.service

systemctl enable kubernetes-controller-manager
systemctl start kubernetes-controller-manager

systemctl enable kubernetes-kubelet
systemctl start kubernetes-kubelet

systemctl enable kubernetes-proxy
systemctl start kubernetes-proxy
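
Before moving on, it's worth checking that everything on the master came up. A quick sanity check, assuming etcd's default client port of 4001 and the apiserver flags above (the /api/v1beta1 path matches the API version of this era; adjust for your build):

# All five units should report "active".
for s in etcd kubernetes-apiserver kubernetes-controller-manager \
         kubernetes-kubelet kubernetes-proxy; do
    echo -n "$s: "; systemctl is-active $s
done

# etcd answers on its client port.
curl http://localhost:4001/version

# The apiserver responds (an empty pod list at this point).
curl http://localhost:8080/api/v1beta1/pods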

Start Kubernetes (on minion)

systemctl daemon-reload

systemctl enable kubernetes-kubelet
systemctl start kubernetes-kubelet

systemctl enable kubernetes-proxy
systemctl start kubernetes-proxy
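
To verify, both units should report active on the minion, and the master's apiserver should know about both machines from its -machines list. A sketch (again assuming the v1beta1 API path):

# On the minion:
systemctl is-active kubernetes-kubelet kubernetes-proxy

# From either host, list the machines registered with the master:
curl http://192.168.56.103:8080/api/v1beta1/minions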

Show what’s running in Kubernetes (nothing yet!)

Tell the client where the server is running

sed -i -e 's/^KUBERNETES_PROVIDER.*$/KUBERNETES_PROVIDER="local"/' /opt/kubernetes/cluster/kube-env.sh

Note: If you want to run kubecfg.sh on any host other than the master, you need to set the master’s location in /opt/kubernetes/cluster/local/config-default.sh:

#IP LOCATIONS FOR INTERACTING WITH THE MASTER
export KUBE_MASTER_IP="192.168.56.103"
export KUBERNETES_MASTER="http://${KUBE_MASTER_IP}:8080"
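
With those variables set, kubecfg.sh works from the minion (or any other host) against the master. For example, assuming your build exposes the machines under /minions:

/opt/kubernetes/cluster/kubecfg.sh list /minions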

List pods:

/opt/kubernetes/cluster/kubecfg.sh list /pods

List services:

/opt/kubernetes/cluster/kubecfg.sh list /services

List Replication Controllers:

/opt/kubernetes/cluster/kubecfg.sh list /replicationControllers

Once you have this up and running, try deploying the GuestBook example: https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook. Note that Kubernetes will not be able to schedule more than one pod of each type per minion, because each pod claims the same host port. So if you are running a 2-minion cluster, the replicationController for the frontend needs to be modified to create 2 replicas rather than 3, as sketched below.
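
A sketch of that tweak, assuming the example's frontend-controller.json still contains a literal "replicas": 3 field (check the file before editing) and that your checkout lives under /opt/kubernetes:

# Drop the frontend replica count from 3 to 2, then create the controller.
cd /opt/kubernetes/examples/guestbook
sed -i 's/"replicas": 3/"replicas": 2/' frontend-controller.json
/opt/kubernetes/cluster/kubecfg.sh -c frontend-controller.json create /replicationControllers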

