
I've deployed a few services and found that one of them behaves differently from the others. I configured it to listen on port 8090 (which maps to port 8080 inside the container), but requests only work if I send them to port 8080. Here's my YAML file for the service (stripped down to essentials); there is also a deployment that encapsulates the service and container.

apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
  labels:
    helm.sh/chart: foo-1
    app.kubernetes.io/name: foo
    app.kubernetes.io/instance: rb-foo
spec:
  clusterIP: None
  ports:
    - name: http
      port: 8090
      targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc

After installing the Helm chart, when I run `kubectl get svc`, I get the following output:

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
fooaccess       ClusterIP   None         <none>        8888/TCP   119m
fooset          ClusterIP   None         <none>        8080/TCP   119m
foobus          ClusterIP   None         <none>        6379/TCP   119m
uisvc           ClusterIP   None         <none>        8090/TCP   119m

However, when I ssh into one of the other running containers and issue a curl request on port 8090, I get "Connection refused". If I curl "http://uisvc:8080", I get the right response. The container runs a Spring Boot application, which listens on 8080 by default. The only explanation I can come up with is that the port/targetPort is somehow being ignored in this config and other pods are reaching the Spring service inside directly.
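A quick way to double-check what the service is actually doing is to inspect its endpoints and what DNS resolves to (diagnostic sketch; the pod name is a placeholder, and the commands assume `kubectl` access to the cluster):

```shell
# Show the endpoints Kubernetes tracks for the service.
# For a headless service these are raw pod-IP:targetPort pairs.
kubectl get endpoints uisvc
# NAME    ENDPOINTS         AGE
# uisvc   172.17.0.8:8080   119m

# From another pod, check what the service name resolves to.
# <some-other-pod> is a placeholder for any pod with nslookup available.
# A headless service returns the pod IP (172.17.0.8 here), not a cluster IP.
kubectl exec -it <some-other-pod> -- nslookup uisvc
```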

Is this behaviour correct? Why is the service not listening on 8090? How can I make it work this way?

Edit: output of `kubectl describe svc uisvc`:

Name:              uisvc
Namespace:         default
Labels:            app.kubernetes.io/instance=foo-rba
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=rba
                   helm.sh/chart=rba-1
Annotations:       meta.helm.sh/release-name: foo
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/component=uisvc
Type:              ClusterIP
IP:                None
Port:              http  8090/TCP
TargetPort:        8080/TCP
Endpoints:         172.17.0.8:8080
Session Affinity:  None
Events:            <none>
Wander3r
  • Can you use `kubectl describe svc uisvc` to show the details of this service? – Rui Dec 16 '20 at 06:31
  • @Rui Updated the question with the output from that – Wander3r Dec 16 '20 at 06:55
  • How does your container reach the uisvc service? Does it make a request to a load balancer that routes to the service, or do you make the request via localhost port 8090? – Ethan Vu Dec 16 '20 at 07:15
  • @EthanVu using the service name.. http://uisvc:8090 – Wander3r Dec 16 '20 at 07:21
  • "However, when I ssh into one of the other running containers and issue a curl request on 8090, I get "Connection refused". If I curl to "http://uisvc:8080", then I am getting the right response." I'm confused by this: if you can curl from the container, then, as you said, a request from the container using the service name should work fine, right? – Ethan Vu Dec 16 '20 at 07:26
  • @EthanVu the port I requested in the service YAML is 8090, with a targetPort of 8080. So my intention is to connect on 8090 from outside the service's container and have Kubernetes forward to the targetPort. But here the port (8090) makes no difference, since I am reaching 8080 directly. – Wander3r Dec 16 '20 at 07:28
  • I see, so you mean you want to reach the service from outside of the k8s cluster, is that right? – Ethan Vu Dec 16 '20 at 07:32
  • @EthanVu Does that mean 8090 is the port to use only when a different service outside the cluster is trying to reach uisvc? – Wander3r Dec 16 '20 at 08:01

1 Answer


This is expected behavior, since you are using a headless service.

Headless Services are used as a service discovery mechanism: instead of returning a single DNS A record, the DNS server returns multiple A records for your service, each pointing to the IP of an individual pod backing the service. A simple DNS A-record lookup therefore gives you the IPs of all the pods that are part of the service.

Since a headless service doesn't create iptables rules but creates DNS records instead, you interact directly with the pod rather than through a proxy. So if you resolve <servicename>:<port>, you get <podN_IP>:<port> and your connection goes to the pod directly — the service's port-to-targetPort translation is never applied. As long as all of this is in the same namespace, you don't have to resolve it by its full DNS name.
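If the goal is for clients to connect on 8090 and be forwarded to 8080, the usual fix is to give the Service a cluster IP so that kube-proxy performs the port translation — a sketch, assuming nothing else in your chart depends on the service being headless:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
spec:
  # Omit "clusterIP: None": a virtual IP is then allocated, and
  # kube-proxy maps service port 8090 to targetPort 8080 on the pods.
  ports:
    - name: http
      port: 8090
      targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc
```

With this in place, `curl http://uisvc:8090` from another pod goes through the service's virtual IP and is translated to pod port 8080.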

With several pods, DNS gives you all of them, in random (or round-robin) order; the exact order depends on the DNS server implementation and settings.
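You can see this by scaling the backing workload and repeating the lookup (illustrative sketch; the deployment name `uisvc` is an assumption, and the commands require cluster access):

```shell
# Scale the backing deployment (name assumed) to three replicas.
kubectl scale deployment uisvc --replicas=3

# From any pod, resolve the headless service again:
# three A records are returned, one per pod, in an order
# that depends on the DNS server implementation.
kubectl exec -it <some-pod> -- nslookup uisvc
```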


acid_fuji