Working with Services in Kubernetes

Welcome back, DevOps readers!

In my previous blog post, we explored the essential concepts of "Working with Namespaces and Services in Kubernetes." If you haven't had a chance to read it yet, I highly recommend you check it out before diving into this one.

In today's discussion, we're going to expand our Kubernetes knowledge further by focusing on another critical aspect: Working with Services in Kubernetes.

In the world of container orchestration, Kubernetes has become the go-to platform for managing and scaling containerized applications. Understanding how to work with services in Kubernetes is vital for any DevOps engineer, as it enables seamless communication between different parts of your application, making it resilient and scalable.

What are Services in Kubernetes?

In the intricate ecosystem of Kubernetes, effective communication among its components is essential. Services play a crucial role by acting as a middleman, ensuring seamless connections between different pods, no matter where they are located within the cluster. Picture services as bridges that link various microservices together, allowing for smooth communication and efficient distribution of workloads.

Kubernetes offers different types of services tailored for specific tasks:

  • ClusterIP Service: This service type makes the service accessible only within the cluster, allowing communication between pods.

  • NodePort Service: NodePort services grant external access to the service via a fixed port on each node in the cluster, making it reachable from outside sources.

  • LoadBalancer Service: LoadBalancer services collaborate with external load balancers, distributing traffic evenly across the pods within the service.

  • ExternalName Service: This service type connects the service to a DNS name, enabling pods to access external services without exposing their IP addresses.
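To make the last type concrete, here is a minimal sketch of an ExternalName Service. The name and DNS target are hypothetical; substitute your own values.

```yaml
# Hypothetical example: Pods that look up "external-db" receive a
# CNAME record for the external hostname instead of a cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: django-app
spec:
  type: ExternalName
  externalName: db.example.com
```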

Working with services in Kubernetes involves creating, managing, and optimizing these resources. This ensures that your applications operate seamlessly, offering high availability and reliability to users.

Now, let's explore three tasks related to Services in Kubernetes.

Task 01

Create a Service for your todo-app Deployment.

Create a Service definition for your todo-app Deployment in a YAML file.

Feel free to replicate the code provided below.

apiVersion: v1
kind: Service
metadata:
  name: django-todo-services
  namespace: django-app
spec:
  type: NodePort
  selector:
    app: todo
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
    nodePort: 30080

In this example, the Service type is set to NodePort with a specific nodePort, 30080. The Service listens on port 8000 and forwards traffic to targetPort 8000 on the matching pods, while the nodePort exposes the Service externally: it can be reached at the IP address of any node in the cluster on port 30080.
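Putting the pieces together, the external URL for a NodePort Service is simply the node's IP plus the nodePort. A small sketch (the IP shown is only an example value; on minikube you would substitute the output of minikube ip):

```shell
# Build the external URL for the NodePort Service defined above.
NODE_IP="192.168.49.2"   # example only; on minikube use: NODE_IP=$(minikube ip)
NODE_PORT=30080          # matches "nodePort" in the Service manifest
echo "http://${NODE_IP}:${NODE_PORT}"
```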

Apply the Service definition to your Kubernetes (minikube) cluster by executing the command provided below.

kubectl apply -f service.yml -n django-app

The command kubectl get svc lists the Services in a Kubernetes cluster; the -n flag specifies the namespace to look in.

kubectl get svc -n django-app

Confirm the functionality of the Service by accessing the todo-app within your Namespace using the Service's IP and Port.

The minikube service command is utilized to manage services within a Minikube cluster. When using the --url option, it provides the URL that can be used to access the Service in your web browser.

minikube service <service_name> -n django-app --url

This prints the Service URL, which you can open in a web browser to reach the todo-app. With some minikube drivers, the command keeps a tunnel open while you use the URL, so leave it running.

Finally, send a curl request to the todo-app using the Service's IP address and port number:

curl -L <service-ip>:<service-port>

Done!

Task 02

Create a ClusterIP service for the deployment.

Create a YAML file containing the ClusterIP Service definition for your todo-app Deployment.

Feel free to replicate the code provided below.

apiVersion: v1
kind: Service
metadata:
  name: django-todo-service
  namespace: python-django-app
spec:
  type: ClusterIP
  selector:
    app: django-app
  ports:
  - protocol: TCP
    port: 8001
    targetPort: 8001

In the example above, the Service is named "django-todo-service" and is of type ClusterIP. It uses the selector app: django-app to identify the Pods behind the Service, and it exposes a single port, 8001, mapped to target port 8001 on the Pods.
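For that selector to find anything, the Pods created by the Deployment must carry a matching label. A hypothetical excerpt of the Deployment manifest (your actual labels may differ) would look like:

```yaml
# Deployment excerpt: the pod template's labels are what the
# Service selector "app: django-app" matches against.
spec:
  selector:
    matchLabels:
      app: django-app
  template:
    metadata:
      labels:
        app: django-app
```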

Execute the provided command to apply the ClusterIP Service definition to your Kubernetes (minikube) cluster.

kubectl apply -f service.yml -n python-django-app

Confirm the functionality of the ClusterIP Service by accessing the todo-app from another Pod within the cluster, specifically within your designated Namespace.

kubectl get services -n python-django-app

Take note of the IP address associated with the ClusterIP Service that you intend to confirm.

Get the name of a different Pod within the identical Namespace.

kubectl get pods -n python-django-app

Take note of the name of another Pod within the same Namespace.

Open a shell inside that Pod:

kubectl exec -it <pod-name> -n python-django-app -- bash

Initiate a curl request to the ClusterIP Service by utilizing its IP address.

curl -L <cluster-ip>:<service-port>
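Instead of the ClusterIP address, you can also use the stable DNS name Kubernetes assigns to every Service, in the form <service>.<namespace>.svc.cluster.local. A small sketch that builds the name for this example:

```shell
# Cluster DNS gives the Service a stable name, so Pods don't need
# to look up the ClusterIP (which changes if the Service is re-created).
SERVICE="django-todo-service"
NAMESPACE="python-django-app"
DNS_NAME="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "${DNS_NAME}"
# From inside a Pod in the cluster, you could then run:
#   curl -L "${DNS_NAME}:8001"
```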

That completes the second task.

Task 03

Create a LoadBalancer service for the deployment.

Create a YAML file that contains the LoadBalancer Service definition for your todo-app Deployment.

Feel free to replicate the code provided below.

apiVersion: v1
kind: Service
metadata:
  name: django-todo-lb-service
  namespace: python-django-app
spec:
  type: LoadBalancer
  selector:
    app: django-app
  ports:
  - protocol: TCP
    port: 8001
    targetPort: 8001

In this example, the Service is named django-todo-lb-service and is of type LoadBalancer. The selector app: django-app determines which Pods receive the traffic.

Apply the LoadBalancer Service definition to your Kubernetes (minikube) cluster by using the command.

kubectl apply -f service.yml -n python-django-app

To verify the LoadBalancer Service, access the todo-app from outside the cluster, within your specified Namespace.

Get the load balancer service details.

kubectl get services -n python-django-app

You can get the LoadBalancer URL using the following command as well.

minikube service list

Take note of the external IP address assigned to the LoadBalancer Service. On minikube, the EXTERNAL-IP column may show <pending> until you run minikube tunnel in a separate terminal.

Use the external IP of the load balancer and the designated port to reach the todo-app from outside the cluster:

curl -L <load-balancer-ip>:<service-port>

If the LoadBalancer Service is functioning properly, you will receive a response from the todo-app.

We have completed the third task.

Conclusion

In conclusion, understanding Kubernetes Services is essential for any DevOps enthusiast seeking to optimize their containerized applications. As we navigate the intricacies of service discovery, load balancing, and network policies within Kubernetes, we empower ourselves to build robust, scalable, and reliable applications.

If you found this blog helpful and are as passionate about Kubernetes and DevOps as I am, let's connect on LinkedIn. I'm eager to build a network of like-minded professionals, share insights, and foster collaborations that drive innovation in the world of technology.

Together, we can continue to learn, grow, and shape the future of DevOps. Thank you for joining me on this journey, and I look forward to connecting with you soon.