
I have a Docker image I have created that works like this with local Docker...

docker run -p 4000:8080 jrg/hello-kerb

Now I am trying to run it as a Kubernetes pod. To do this I create the deployment...

kubectl create deployment hello-kerb --image=jrg/hello-kerb

Then I run kubectl get deployments, but the new deployment shows as unavailable...

NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-kerb   1         1         1            0           17s

I was using this site for the instructions. It shows that the status should be available...

NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           1m

What am I missing? Why is the deployment unavailable?

UPDATE

$ kubectl get pod
NAME                          READY   STATUS             RESTARTS   AGE
hello-kerb-6f8f84b7d6-r7wk7   0/1     ImagePullBackOff   0          12s
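
For completeness, the failed pull behind the ImagePullBackOff can be inspected via the pod's events (pod name taken from the listing above):

kubectl describe pod hello-kerb-6f8f84b7d6-r7wk7
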
Jackie
  • Can you show the output of "kubectl get pod"? – Vasili Angapov May 11 '19 at 14:41
  • added @VasilyAngapov – Jackie May 11 '19 at 14:46
  • The image cannot be pulled; this is already explained in my answer. If you are using minikube, you can use its Docker daemon to build and provide your image to your development cluster. – Thomas May 11 '19 at 14:48
  • This is pretty close to a duplicate of (https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube) but I will leave it up for a while to see if it helps anyone else since it is slightly different. – Jackie May 11 '19 at 14:58

2 Answers


If you are running a local image (from docker build), it is directly available to the Docker daemon and can be executed. If you are using a remote daemon, e.g. in a Kubernetes cluster, it will try to pull the image from the default registry, since the image is not available locally. That registry is usually Docker Hub. I checked https://hub.docker.com/u/jrg/ and there seems to be no repository there, and therefore no jrg/hello-kerb.

So how can you solve this? When using minikube, you can build (and provide) the image using the Docker daemon that minikube ships with:

eval $(minikube docker-env)
docker build -t jrg/hello-kerb .

You could also push the image to a registry that is reachable from the container runtime in your Kubernetes cluster, e.g. Docker Hub.
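
For example, a sketch of the registry route (assuming a Docker Hub account; replace <dockerhub-user> with your own, and note that hello-kerb is the container name kubectl create deployment derives from the image name):

docker tag jrg/hello-kerb <dockerhub-user>/hello-kerb:1.0
docker push <dockerhub-user>/hello-kerb:1.0
kubectl set image deployment/hello-kerb hello-kerb=<dockerhub-user>/hello-kerb:1.0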

Thomas
  • Actually this wasn't the issue; the issue was setting "imagePullPolicy". It was apparently already using the right daemon. – Jackie May 11 '19 at 14:49
  • The default image pull policy is Always for images that do not have a version tag, i.e. that use the default `latest` tag. Therefore you can either change the image pull policy when working with the remote Docker daemon in the cluster, or you can provide the image through a registry. In that case the pull will always work and you don't have to inject the image through the remote Docker daemon (see the sketch after these comments). – Thomas May 11 '19 at 14:55
  • So the moral of that comment is: if I had used explicit versions instead of latest, I wouldn't need to change the pull policy? – Jackie May 11 '19 at 14:57
  • Right, as long as you used the minikube Docker daemon to build the image. That is a rather special case by itself, but for development it is fine. – Thomas May 11 '19 at 14:59
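
A minimal sketch of that tagged-image workflow, assuming the minikube Docker daemon is active in the current shell and that the container is named hello-kerb (the name kubectl create deployment derives from the image):

eval $(minikube docker-env)
docker build -t jrg/hello-kerb:0.1 .
kubectl set image deployment/hello-kerb hello-kerb=jrg/hello-kerb:0.1

With an explicit tag the pull policy defaults to IfNotPresent, so the locally built image is used without any pull.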

I solved this by running kubectl edit deployment hello-kerb, then finding "imagePullPolicy" (:/PullPolicy in the editor) and changing the value from "Always" to "Never". After saving, when I run kubectl get pod it shows...

NAME                          READY   STATUS    RESTARTS   AGE
hello-kerb-6f744b6cc5-x6dw6   1/1     Running   0          6m 

And I can access it.
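
The same change can also be applied non-interactively; a sketch using kubectl patch, assuming the container is named hello-kerb (the name kubectl create deployment derives from the image):

kubectl patch deployment hello-kerb \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"hello-kerb","imagePullPolicy":"Never"}]}}}}'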

Jackie
  • A better way to solve it, imho, is to provide an explicit version for the image. This then works when using the minikube Docker daemon (it does not try to pull the image, as it is already there, and the default policy for tagged images is to pull only if the image is not present). This is better since newer versions would be pulled (if available in the registry) and you don't get used to this special scenario. – Thomas May 11 '19 at 14:57
  • I understand but in our current development environment we will be updating a lot and I don't want to keep track of which version we are on. I could see this absolutely being the case as we mature as a project. Thanks! – Jackie May 11 '19 at 14:59
  • I suggest setting up CI for automatic build and deployment. If you have no explicit version, you must kill the pod to have it recreated in order to pick up a newer image (see the sketch below), which is clumsy, and there will be a day when you debug a 'fixed' problem for an hour and then remember that you forgot to update the image/pod... – Thomas May 11 '19 at 15:01
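
For reference, a sketch of that clumsy route when sticking with the latest tag: rebuild against the minikube daemon, then delete the pod so the deployment recreates it with the fresh image (app=hello-kerb is the label kubectl create deployment sets by default):

eval $(minikube docker-env)
docker build -t jrg/hello-kerb .
kubectl delete pod -l app=hello-kerb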