
Setup free Kubernetes cluster on Oracle Cloud

What is it all about?

If you have always wanted to create your own Kubernetes cluster from scratch, learn all the nitty-gritty and experiment with a highly available environment, this tutorial is for you.

We’re going to set up a multi-node, highly available cluster and deploy a simple application that will help you grasp all the details and connect the dots.

Glossary

  • YOUR_PUBLIC_IP – the IP of your personal machine; you can find it, among other ways, by typing curl ifconfig.co in the terminal
  • INSTANCE_PUBLIC_IP – the public IP of a virtual machine; it’s shown on the Compute/Instances dashboard and on the cloud instance details page
  • INSTANCE_PRIVATE_IP – the private IP of a virtual machine; it’s shown on the Compute/Instances dashboard and on the cloud instance details page
  • LOAD_BALANCER_PUBLIC_IP – the public IP of a load balancer; it’s shown on the Networking/Load Balancers dashboard and on the Load Balancer’s details page

SSH keys

If you don’t have one, go to Compute/Instances, click the Create instance button, then generate and save an SSH key pair. One key pair is enough for all 3 instances. You’re going to need it in the next step of this tutorial.

The public key file will be called ssh-key-20XX-XX-XX.key.pub and the private key ssh-key-20XX-XX-XX.key. Make sure to keep them in a well-known place because we’re going to need them later.
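
One small practical note that is not part of the original steps: on Linux or macOS, ssh refuses to use a private key whose permissions are too open, so it may be worth tightening them right away (the file name below is the one Oracle generates; adjust it to yours).

chmod 600 ssh-key-20XX-XX-XX.key   # ssh ignores private keys readable by other users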

Create instances

Let’s create 3 cloud instances that will make up the cluster. Go to Compute/Instances and click the Create instance button. Change the settings to look like this.

Make sure all instances have a public IPv4 address and are provisioned in the same subnet and virtual cloud network.

Upload your public SSH key (the one you created in the previous step). The rest can remain at the default values.

Ultimately, in Compute/Instances dashboard you should have 3 instances running.

Networking setup

In Networking/Virtual Cloud Networks go to your VCN settings.

Inside, go to subnet details.

Finally, check Default Security List.

Use the Add Ingress Rules button to set up your network access policies. These 3 ingress rules are required to enable communication; you can have more, but these are the minimum.

Create load balancer

Load Balancer is going to route external traffic to your cluster. Every request is going to be processed by it and delegated to one of your k8s cluster nodes.

Leave the default configuration here.

At the Configure Listener step, choose TCP and set the port to 443. We’re exposing only a single port, where nginx will be listening (we will set it up later).

At the next step, set the Health Check protocol to HTTPS, the port to 443 and the status code to 404. The reason is that the Load Balancer will periodically check whether nginx is available and, with no Host in the request headers, nginx returns a 404 status code. In other words, receiving 404 means that nginx is out there, healthy and ready to process requests.
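
Later on, once the ingress controller from a later step is actually running on the nodes, you can reproduce roughly what this health check does yourself. A minimal sketch with curl (using the public IP placeholder from the glossary; the Load Balancer itself checks the private IPs):

curl -k -s -o /dev/null -w "%{http_code}\n" https://INSTANCE_PUBLIC_IP:443
# 404  <- nginx answered without a matching Host, so the backend counts as healthy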

Finally, create your Load Balancer and wait for Oracle to provision it.

With Load Balancer in place, we now have to specify nodes where the traffic will be routed.

Enter the INSTANCE_PRIVATE_IP of each of your virtual machines and route them to port 443. Click the Add Backends button. Later we’ll make sure that nginx is there waiting for connections.

The Load Balancer will initially report your instances in Critical condition. Don’t worry about it for now; that’s because there’s no nginx yet where the health checks expect to find it. Later, when we create the ingress controller, we’ll see everything turn green, I promise.

Let’s summarize what we have created.

  • The listener awaits all TCP (HTTP & HTTPS) connections to the Load Balancer’s port 443 and forwards them to the backend set.
  • Your backend set allows the Load Balancer to forward requests to port 443 on your virtual machines. Unfortunately, there’s nothing there yet.
  • A backend is considered healthy if any request to port 443 over HTTPS returns 404 (because that’s what nginx does). The Load Balancer will only forward requests to backends considered healthy.

Configure instances

Your brand new virtual machines run pure Ubuntu. A little bit of configuration is required for each of them to become a reliable cluster node.

The following steps need to be performed on each cloud instance. First, let’s log in. Open a terminal and type in the following command. Make sure to use the actual path to your private key file (the one without the .pub extension) and replace INSTANCE_PUBLIC_IP with the public IP of the instance you want to access.

ssh -i ssh-key-20XX-XX-XX.key ubuntu@INSTANCE_PUBLIC_IP

You may get a security warning, type yes to proceed.

The authenticity of host '...' can't be established.
ECDSA key fingerprint is ...
Are you sure you want to continue connecting (yes/no)?

Ubuntu will welcome you with a message similar to the following.

Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.15.0-1016-oracle aarch64) ...

To avoid typing sudo in each command, switch to the super user for good.

sudo -i

The next step is to install Kubernetes and all the dependencies. We will be installing docker as the engine that will run containers for us. Docker is a no-brainer, but be aware that there are other container runtimes compatible with k8s too.

We are going to install microk8s Kubernetes flavor in version 1.23.

apt update && apt install docker.io -y && snap install microk8s --classic --channel=1.23/stable

Having all the tools on board, let’s take care of networking. Run the following command to edit the networking rules with nano and replace the -A FORWARD line with one responsible for letting in all traffic (we’ve already provided some security on the infrastructure side when configuring the ingress rules).

nano /etc/iptables/rules.v4

# replace this line:
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
# with this one:
-I INPUT -s 0.0.0.0/0 -j ACCEPT

To apply the changes in nano, press CTRL+X, then Y and finally ENTER.

Now prepare certificate template. Again, use nano to add new lines to the [ alt_names ] section.

nano /var/snap/microk8s/current/certs/csr.conf.template

It should look like this:

[ alt_names ]
...
IP.50 = INSTANCE_PUBLIC_IP
IP.60 = INSTANCE_PRIVATE_IP
IP.90 = LOAD_BALANCER_PUBLIC_IP
...
#MOREIPS

The following commands will use the certificate template you’ve just created to refresh the certificates (to be honest, I don’t know if all these refreshes are required).

microk8s refresh-certs --cert server.crt
microk8s refresh-certs --cert front-proxy-client.crt
microk8s refresh-certs --cert ca.crt

Finally, reboot the machine. Your session will be disconnected.

reboot
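
After the machine comes back up, it can be reassuring to confirm that microk8s survived the reboot before moving on. This check is not part of the original steps, just a convenience:

sudo microk8s status --wait-ready   # blocks until the local node reports itself as running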

Creating cluster

With all three instances ready, it’s time to combine them into a cluster. Similarly to the previous step, SSH into the first of them, switch to the super user with sudo -i and type in the following command, which will print the access settings for the cluster. Make sure to copy the output and save it in a configuration file; let’s call it path_to_my_k8s_config.

microk8s config

Remember to replace the server IP with the public IP of any cloud instance. The point is that kubectl will look for k8s under this URL.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
    server: https://INSTANCE_PUBLIC_IP:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: 

Now let’s create the ingress environment that will later be required to route public and private traffic in your cluster.

microk8s enable ingress
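
If you’d like to confirm that the ingress controller actually came up, its pods land in a dedicated namespace (named ingress in current microk8s releases; adjust if yours differs):

microk8s kubectl get pods -n ingress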

This command, run on the first instance, will start forming your cluster.

microk8s add-node

The terminal will respond with information on how to join other instances to the newly created cluster. On each node but the first one, you should invoke a command similar to this.

From the node you wish to join to this cluster, run the following:
microk8s join 10.0.0.xx:xxxxx/xxxxxxxxxxxxxx

If you experience any problems, the first step is to go back to the first instance, call microk8s add-node again and retry with the new connection string, because tokens may expire or be issued for single use, depending on the default settings.

Set up your machine to manage the cluster

Install kubectl on your machine. To use kubectl to manage your k8s cluster, you have to let it know where to look for the Kubernetes API server. To do so, set an environment variable as follows.

Be aware that this environment variable is only set for the current session; closing the terminal will clear it. Of course, there are ways to set it for good, but in this tutorial we’ll stick to the basic path.

export KUBECONFIG=path_to_my_k8s_config   #linux
$env:KUBECONFIG=path_to_my_k8s_config     #windows
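
If you later decide you do want the configuration to stick around, one common option (outside of this tutorial’s basic path) is to copy the file to the location kubectl checks by default:

mkdir -p ~/.kube
cp path_to_my_k8s_config ~/.kube/config   # kubectl reads ~/.kube/config when KUBECONFIG is unset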

To find out if everything went well, check how many nodes your cluster has. To do that, type this in the terminal on your local machine.

kubectl get nodes

And the response should be similar to this.

NAME   STATUS   ROLES    AGE    VERSION
n1     Ready    <none>   3d3h   v1.23.12-2+52c8caded73daa
n2     Ready    <none>   3d3h   v1.23.12-2+52c8caded73dab
n3     Ready    <none>   3d3h   v1.23.12-2+52c8caded73dac

Let’s run something!

I have prepared a simple application that generates an ID when the container starts and keeps this ID for its whole life. I think it’s helpful for learning which pod serves your request and for experimenting with shutting down and starting up cloud instances to see how the cluster adapts. So, we’re running a very simple piece of code wrapped into a docker container.

const express = require("express")
const app = express()
const port = process.env.PORT || 8080
const id = Math.floor(Math.random() * 100)   // picked once at startup and kept for the process lifetime

app.get('*', (req, res) => {
  res.send(`You've reached instance ${id}`)
});

app.listen(port, () => {
  console.log(`Listening on port ${port}`)
})

And the Dockerfile that wraps it:

FROM arm64v8/node:19-alpine
WORKDIR /usr/src
COPY app.js app.js
RUN npm install express
EXPOSE 8080
ENTRYPOINT [ "node", "app.js" ]
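
In case you’d rather publish your own copy of the image instead of using the one referenced by the manifests below, a rough sketch of the build-and-push step looks like this (the repository name is a placeholder of mine, use your own Docker Hub account):

# The base image is ARM (arm64v8), so on an x86 machine you would need an ARM-capable build, e.g.:
# docker buildx build --platform linux/arm64 -t YOUR_DOCKERHUB_USER/k8s-demo:latest --push .
docker build -t YOUR_DOCKERHUB_USER/k8s-demo:latest .
docker push YOUR_DOCKERHUB_USER/k8s-demo:latest   # requires docker login first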

To run the application, we need to apply these three configurations. All of them are publicly available on GitHub, so I advise you to take a look:

  • The first one pulls the image from Docker Hub and runs two replicas on port 8080
  • The second one creates a service, which is a stable resource pointing to the previously started application
  • The third one exposes the service to the world through ingress, implicitly using:
    • HTTPS on port 443 (provided by the microk8s ingress we enabled previously)
    • a Kubernetes self-signed TLS certificate to guard this HTTPS connection
kubectl apply -f https://raw.githubusercontent.com/detoix/k8s/main/deployment.yaml

kubectl apply -f https://raw.githubusercontent.com/detoix/k8s/main/service.yaml

kubectl apply -f https://raw.githubusercontent.com/detoix/k8s/main/ingress.yaml
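
For orientation, here is a rough sketch of what such a deployment/service/ingress trio typically looks like. The names, labels and image reference below are placeholders of mine, not the exact contents of the files in the repository, so treat it purely as an illustration:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                             # placeholder name, not the repository's
spec:
  replicas: 2                                  # two copies, as claimed by the tutorial
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: YOUR_DOCKERHUB_USER/k8s-demo:latest   # placeholder image reference
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  selector:
    app: sample-app
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app
spec:
  # depending on your controller you may also need ingressClassName (e.g. public for microk8s)
  rules:
    - host: test.com                           # the host name used later in this tutorial
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-app
                port:
                  number: 8080
EOF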

Now you can test it

First of all, you should notice that the Load Balancer now considers your backends healthy. That’s because there’s an nginx server actually running there.

Mind that we have exposed our service via nginx under the host name test.com. What does that mean? Well, when you type test.com in your web browser, it will look for a matching server IP in the DNS registry and, having this IP, add the host to the request headers and eventually send a request similar to the one below. Well, let’s do it ourselves.

curl -v -H "Host: test.com" https://LOAD_BALANCER_PUBLIC_IP:443 --insecure

-v is for printing all the under-the-hood information, and --insecure is required because we’re connecting over HTTPS with a self-signed certificate (without this argument the request would be rejected as unsafe). As a result, you should receive a whole log of the communication; I’ll paste a part of it below.

* Rebuilt URL to: https://xxx.xx.xx.xxx:443/
*   Trying xxx.xx.xx.xxx...
(...)
* Server certificate:
*  subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
*  start date: Nov  7 19:31:26 2022 GMT
*  expire date: Nov  7 19:31:26 2023 GMT
(...)
> GET / HTTP/2
> Host: test.com
> User-Agent: curl/7.58.0
(...)
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
* Connection #0 to host 130.61.91.157 left intact
You've reached instance 86

In all this fuss, the You’ve reached instance 86 is the actual response from the application. It means that this instance, when started, took 86 as its ID and will keep it until it dies.

Keep in mind that if you send this request many times, eventually the Load Balancer may forward you to the other instance. If you look closely, our deployment.yaml claims to always have 2 replicas, each with a different ID.
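
A quick way to see both replicas answering is to fire a handful of requests in a row; over several calls you will likely see at least two different IDs come back:

for i in $(seq 1 10); do
  curl -s -H "Host: test.com" https://LOAD_BALANCER_PUBLIC_IP:443 --insecure
  echo
done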

What next?

At this point you can access your cluster from the command line and run any back-end application in it, written in any language and hosted anywhere. If you want to go further, consider:

  • changing your /etc/hosts so you can access your cluster locally via a URL like test.com or similar (see the sketch after this list)
  • buying a domain (or creating a subdomain) and changing its DNS records so that you can access your cluster with a URL like the mentioned test.com
  • playing with creating your own self-signed or even CA-signed certificate to guard your nginx HTTPS
  • creating more applications in your cluster
  • playing with scaling pods, killing and setting up new nodes and so on, and analyzing how the cluster reacts to changes (also sketched below)
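
As a starting point for the first and last items above, here is a small hedged sketch. The deployment name is a placeholder; use whatever name the applied deployment.yaml actually declares:

# Map test.com to the load balancer locally (Linux/macOS):
echo "LOAD_BALANCER_PUBLIC_IP  test.com" | sudo tee -a /etc/hosts

# Scale the demo deployment and watch the pods being (re)scheduled:
kubectl scale deployment sample-app --replicas=4
kubectl get pods -o wide --watch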

Good luck!

7 replies on “Setup free Kubernetes cluster on Oracle Cloud”

Excellent tutorial! This really helped me get up and running with OCI after I set up my own bare-metal Kubernetes cluster at home. Rather than pay the electricity costs of running the server, why not make Oracle do it, for free! The one question I have (and this may depend on use) is how are you getting on with this performance wise? I deployed a .net 7 app using just the base WeatherApp that comes with a WebAPI project and I’m seeing response times of around 5 seconds (my cluster is ARM only and my sdk targets etc are for ARMx64). This is slow whether I point my DNS to the load balancer or to one of the nodes directly so wanted to see whether or not you had a similar experience.

Hi Jason,
Well, I actually tested a node/express app and didn’t experience any performance problems, either with direct communication to the IP or with DNS applied. I’m not sure I’ll be able to help without any further information, but I wish you good luck with your tests.

That’s really strange. I tried even with the example app that you specified in the “Let’s try something” section and even that takes 5 seconds to respond on each request so there is definitely something awry

Can you also create an Ingress-Nginx Controller (with Metallb) instead of your described setup? Metallb gives you an external private IP address as far as I know.

Or would it be outside of Oracle Cloud’s Free Tier? I’m a bit overwhelmed by all those different possibilities.

One more question. How did you know to name it “IP.50, IP.60, IP.90”? Is this somewhere documented?

Thank you for this very informative article!

Regarding Metallb, I believe so, but I don’t have any experience with this solution.
As long as you point your load balancer to Oracle public IPs it should work. Mind though that nginx/ingress and external cloud load balancers serve different purposes: nginx/ingress runs within the cluster, whereas the external load balancer routes traffic into the cluster.

Regarding IP.50/60/90 – frankly speaking, I found it in some tutorial and didn’t dig deeper. I still don’t know much about Linux networking, but since it works I didn’t touch it 😀
