# Introduction
This is part 47 of the journey. It's a long journey (360 days), so please go check the previous parts. And if you want to walk the journey with me, please make sure to follow, because I may post more than once a day, but I will surely post at least one post daily 😍.
I will cover a lot of tools as we move on.
# Prepare the files
The original files can be found here
The files for this lab are the same as the voting app from the previous part (in Docker). The files we need are located inside the k8s-specifications folder. But since we don't know yet what a namespace is, I edited the files and removed the namespaces from them.
The edited files can be found in my GitHub repo here
If you already have it, just pull; if not, clone it. The source code usually carries the same chapter number we are in, 047, so app_047 is what we are looking for.

***

# Understand the app

![what](https://media1.tenor.com/images/5c36b5497629f905d0c011d16f01c0ff/tenor.gif?itemid=10312546)

This part will be a warm-up covering everything we have taken so far, using the example of the voting app. First, let's take a look at the diagram. The vote and result apps are front-ends, so we need to access them from outside the cluster; that means we need a NodePort service for each of them. For redis and the postgres DB we need a ClusterIP service, because all traffic to them stays inside the cluster. If we look at the source code of the worker, we can see that it connects to each database using their services and moves the votes from redis to postgres. So technically no one connects to the worker, and we don't need a service for it. Let's take a look at the configuration files first; my objective in reviewing them is to recall everything we have talked about.
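Since I can't embed the original diagram here, this is my own rough sketch of the topology just described (ports taken from the manifests below):

```
browser --> vote   (NodePort 31000) --> redis (ClusterIP)
                                          |
                                        worker (no service)
                                          |
browser --> result (NodePort 31001) --> db    (ClusterIP)
```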
# db-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - image: postgres:9.4
          name: postgres
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: db-data
      volumes:
        - name: db-data
          emptyDir: {}
```
This is a Deployment (kind: Deployment), so it can be scaled up or down. metadata holds descriptive data about the deployment itself (namespace is the topic of the next part :) ). spec holds the specifications of the deployment: we want 1 replica of this pod, and the selector is how the deployment finds the pods it manages (it matches the labels in the pod template). template holds the configuration of the pod itself, just without apiVersion and kind. The same structure applies to the other deployments. Now let's take a look at the services.
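For example, since this is a Deployment, scaling later is just a matter of changing replicas and re-applying the file (a sketch; in this lab we keep a single replica):

```yaml
# db-deployment.yaml (fragment); afterwards run: kubectl apply -f db-deployment.yaml
spec:
  replicas: 3
```

The same effect can also be had imperatively with `kubectl scale deployment db --replicas=3`.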
# db-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  type: ClusterIP
  ports:
    - name: "db-service"
      port: 5432
      targetPort: 5432
  selector:
    app: db
```
It takes kind Service and type ClusterIP (reachable only from inside the cluster), plus its own port and the targetPort, which in this case is the port the postgres DB listens on (5432). Other pods can then reach the database through the service name db.
# result-service.yml
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: result
  name: result
spec:
  type: NodePort
  ports:
    - name: "result-service"
      port: 5001
      targetPort: 80
      nodePort: 31001
  selector:
    app: result
```
This is a NodePort service, so in addition it takes a nodePort, which is the port opened on every node of the cluster (by default it must be in the 30000-32767 range).
# Lab
Now let's create the resources from all of those YAML files. Since they are all located in a single folder, we can pass the folder name and kubectl will create them all:
```shell
kubectl create -f app_047
kubectl get pods
```
My last container has been stuck in ContainerCreating for 52 minutes, because I have the best connection in the world. If we go to the app now we can interact with it, but it is not going to update the results page because the worker is not ready yet. Let's take a look:
```shell
minikube ip
```
My IP is 192.168.99.100, and if we take a look at the configuration files, the NodePorts are 31000 and 31001 (I can also get them from kubectl), so we can access the app at 192.168.99.100:31000 and 192.168.99.100:31001.
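A NodePort URL is just the node IP plus the nodePort; as a tiny sketch (values from my run, yours will differ):

```shell
# Hypothetical values from this lab: the minikube IP and the two NodePorts.
NODE_IP=192.168.99.100
VOTE_PORT=31000
RESULT_PORT=31001

echo "vote:   http://$NODE_IP:$VOTE_PORT"
echo "result: http://$NODE_IP:$RESULT_PORT"
```

Minikube can also print such a URL directly with `minikube service result --url` (assuming the service name from the manifest).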
When I vote on the left, the results are not updated yet because the worker is not ready. But even after the worker finishes starting, the results are still not updating, so let's investigate.
```shell
kubectl logs db-6789fcc76c-hvrs6 | tail
```
(I deleted the deployments and re-created them during the investigation, so the names have changed.) db-6789fcc76c-hvrs6 is the name of the db pod, and tail is the Linux command that prints the last few lines of its input.
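A quick illustration of what tail does, independent of kubectl:

```shell
# tail keeps only the last lines of its input; -n sets how many (default is 10).
printf 'line1\nline2\nline3\nline4\nline5\n' | tail -n 2
# prints:
# line4
# line5
```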
We can see it reports a password error, and that's because the image is outdated. We could fix it properly by building a new image and pushing it to Docker Hub, but that would take time and a lot of edits, so I am going to hot-fix it inside the running container instead. Let's start:
```shell
kubectl exec -it db-6789fcc76c-hvrs6 -- bash
```
We now have a root bash inside the container; let's install vim so we can edit pg_hba.conf:
```shell
apt update && apt install vim
```
Now that we have vim installed, let's edit our file:
```shell
vim /var/lib/postgresql/data/pg_hba.conf
```
Go to the last line and change md5 to trust (press i to enter insert mode in vim; after editing, press ESC then :wq to save and quit). Now let's switch to the postgres user:
```shell
su - postgres
```
Then let's reload the database configuration from inside psql:
```shell
psql
SELECT pg_reload_conf();
\q
```
\q is used to quit psql. Then:
```shell
exit
exit
```
The first exit leaves the postgres user's shell, and the second exits the container's bash XD XD
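By the way, the pg_hba.conf edit itself could have been scripted with sed instead of vim. A sketch on a mock file (the real file lives at /var/lib/postgresql/data/pg_hba.conf inside the container):

```shell
# Build a tiny pg_hba.conf-like mock file.
cat > /tmp/pg_hba.conf <<'EOF'
local all all trust
host all all all md5
EOF

# Replace md5 with trust on the last line only, editing in place.
sed -i '$ s/md5/trust/' /tmp/pg_hba.conf

tail -n 1 /tmp/pg_hba.conf
# prints: host all all all trust
```

Combined with kubectl exec, the whole fix could in principle be done without ever opening an interactive shell in the pod.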
```shell
kubectl logs db-6789fcc76c-hvrs6
```
If we scroll down, we can see that it is now updated and working. Let's go to the browser: the results are updating now. Congrats, we have the full app deployed :D We also saw how to troubleshoot our app. Amazing, isn't it?