Fix the Error “readiness probe failed http probe failed with statuscode 503”

Kubernetes, also known as Kube or K8s, is open-source software for automatically deploying, scaling, and managing multi-container workloads. With it, you can run containerized applications across multiple hosts. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It provides built-in commands to deploy applications, scale them, monitor them, and roll out changes, which makes day-to-day application management much easier for developers. While working with Kubernetes, you may run into the error “readiness probe failed http probe failed with statuscode 503”.

A Kubernetes cluster can span hosts across public, private, hybrid, and on-premises environments, which makes it a great platform for hosting cloud-native applications. If you are running into this error, you are in the right place. Let’s look at how the error occurs.

How the error pops up

The error typically shows up when you write a Helm chart for Varnish and deploy it on a Kubernetes cluster: after installing the Helm package that contains the Varnish image, the pod keeps failing its probes and Kubernetes reports the following warnings

Readiness probe failed: HTTP probe failed with statuscode: 503

Liveness probe failed: HTTP probe failed with statuscode: 503
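
These messages appear in the pod’s events after the chart is installed. A sequence like the following reproduces the situation (Helm 3 syntax; the chart directory and release name are only examples):

# Install the chart (chart directory and release name are examples)
helm install my-varnish ./varnish

# The pod never reaches READY 1/1 ...
kubectl get pods

# ... and the probe failures appear in the pod's events
kubectl describe pod <varnish-pod-name>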

Check out the code that was used. The chart follows the usual Helm layout: values.yaml sits at the chart root, deployment.yaml and varnish-config.yaml live under templates/, and the VCL file is read from config/varnish.vcl by the ConfigMap (via .Files.Get).

Values.yaml

    # Default values for tt.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.

    replicaCount: 1


    #vcl 4.0;

    #import std;

    #backend default {
     # .host = "www.varnish-cache.org";
     # .port = "80";
     # .first_byte_timeout = 60s;
     # .connect_timeout = 300s;
    #}



    varnishBackendService: "www.varnish-cache.org"
    varnishBackendServicePort: "80"

    image:
      repository: varnish
      tag: 6.0.6
      pullPolicy: IfNotPresent

    nameOverride: ""
    fullnameOverride: ""

    service:
      type: ClusterIP
      port: 80



    #probes:
     # enabled: true

    ingress:
      enabled: false
      annotations: {}
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      path: /
      hosts:
        - chart-example.local
      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local

    resources:
      limits:
        memory: 128Mi
      requests:
        memory: 64Mi

    #resources: {}
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      # limits:
      #  cpu: 100m
      #  memory: 128Mi
      # requests:
      #  cpu: 100m
      #  memory: 128Mi

    nodeSelector: {}

    tolerations: []

    affinity: {}

Deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "varnish.fullname" . }}
      labels:
        app: {{ include "varnish.name" . }}
        chart: {{ include "varnish.chart" . }}
        release: {{ .Release.Name }}
        heritage: {{ .Release.Service }}
    spec:
      replicas: {{ .Values.replicaCount }}
      selector:
        matchLabels:
          app: {{ include "varnish.name" . }}
          release: {{ .Release.Name }}
      template:
        metadata:
          labels:
            app: {{ include "varnish.name" . }}
            release: {{ .Release.Name }}
    #      annotations:
     #       sidecar.istio.io/rewriteAppHTTPProbers: "true"
        spec:
          volumes: 
            - name: varnish-config
              configMap:
                 name: {{ include "varnish.fullname" . }}-varnish-config
                 items:
                   - key: default.vcl
                     path: default.vcl
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}    
              env:
              - name: VARNISH_VCL
                value: /etc/varnish/default.vcl
              volumeMounts: 
                - name: varnish-config
                  mountPath: /etc/varnish/
              ports:
                - name: http
                  containerPort: 80
                  protocol: TCP
              livenessProbe:
                httpGet:
                  path: /healthcheck
                  port: http
                failureThreshold: 3
                initialDelaySeconds: 45
                timeoutSeconds: 10
                periodSeconds: 20
              readinessProbe:
                httpGet:
                  path: /healthcheck
                  port: http
                initialDelaySeconds: 10
                timeoutSeconds: 15
                periodSeconds: 5
              resources:
    {{ toYaml .Values.resources | indent 12 }}
        {{- with .Values.nodeSelector }}
          nodeSelector:
    {{ toYaml . | indent 8 }}
        {{- end }}
        {{- with .Values.affinity }}
          affinity:
    {{ toYaml . | indent 8 }}
        {{- end }}
        {{- with .Values.tolerations }}
          tolerations:
    {{ toYaml . | indent 8 }}
        {{- end }}

Varnish-config.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ template "varnish.fullname" . }}-varnish-config
      labels:
        app: {{ template "varnish.fullname" . }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        release: "{{ .Release.Name }}"
        heritage: "{{ .Release.Service }}"
    data:
      default.vcl: |-
    {{ $file := (.Files.Get "config/varnish.vcl") }}
    {{ tpl $file . | indent 4 }}

Varnish.vcl

    # There is no VCL version 5.0; the declared VCL version must be 4.0 or 4.1 even though the Varnish version actually used is 6.x
    vcl 4.1;

    import std;
    # The minimal Varnish version is 5.0
    # For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'

    backend default {
      #.host = "{{ default "google.com" .Values.varnishBackendService }}";
      .host = "{{  .Values.varnishBackendService }}";
      .port = "{{  .Values.varnishBackendServicePort }}";
      #.port = "{{ default "80" .Values.varnishBackendServicePort }}";
      .first_byte_timeout = 60s;
      .connect_timeout = 300s ;
      .probe = {
            .url = "/";
            .timeout = 1s;
            .interval = 5s;
            .window = 5;
            .threshold = 3;
        }
    }



    backend server2 {
        .host = "74.125.24.105:80";
        .probe = {
            .url = "/";
            .timeout = 1s;
            .interval = 5s;
            .window = 5;
            .threshold = 3;
        }
    }

    import directors;

    sub vcl_init {
        new vdir = directors.round_robin();
        vdir.add_backend(default);
        vdir.add_backend(server2);
    }

    #sub vcl_recv {
     #   if (req.url ~ "/healthcheck"){
      #       error 200 "imok";
       #      set req.http.Connection = "close";
        # }
    #}

This is the code that results in the error.

How To Fix the Error “readiness probe failed http probe failed with statuscode 503”

The error means there is a problem with the backend connection: Varnish cannot get a healthy response from its configured backend, so the requests that the readiness and liveness probes send to /healthcheck come back with a 503. The probes should not be exercising the whole HTTP flow end to end; they are only meant to verify that the Varnish service itself is up and responding.
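
Before changing anything, you can confirm that the backend is the problem by checking backend health from inside the running Varnish pod. The label selector and pod name below are only examples, and this assumes varnishadm is available in the image:

# Find the Varnish pod (the app=varnish label is an example; adjust to your chart's labels)
kubectl get pods -l app=varnish

# List the health of the configured backends from inside the pod
kubectl exec <varnish-pod-name> -- varnishadm backend.list

If the backends are reported as sick, every request Varnish handles, including the probe requests, will keep returning 503.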

Instead, you can intercept requests to /healthcheck in vcl_recv and return a synthetic HTTP response, so the probes never depend on the backend. Add the following to config/varnish.vcl

sub vcl_recv {
  # Answer the Kubernetes health check directly from Varnish, without touching the backend
  if (req.url == "/healthcheck") {
    return(synth(200, "OK"));
  }
}

With this change, Varnish answers the probe requests itself with a 200 and both probes start passing. Note that the commented-out vcl_recv block at the end of varnish.vcl attempts the same thing with the old Varnish 3 syntax (error 200 "imok"), which no longer exists in VCL 4; return(synth(...)) is its replacement.
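
After updating config/varnish.vcl, redeploy the chart and check that the pod becomes ready. The release name, label selector, and ports below are only examples:

# Upgrade (or install) the release with the updated VCL
helm upgrade --install my-varnish ./varnish

# The pod should now report READY 1/1
kubectl get pods -l app=varnish

# Optionally, hit the health endpoint yourself through a port-forward
kubectl port-forward <varnish-pod-name> 8080:80
curl -i http://localhost:8080/healthcheck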

Conclusion

In this post, we looked at why the error “readiness probe failed http probe failed with statuscode 503” occurs and how to fix it by serving the health check directly from Varnish with a synthetic response.

I hope you find it helpful! Happy Error Solving!
