Description
To capture some of the discussion.
Gripe: Service NodePorts can only be in a "special" port range (apiserver flag). This is annoying for people who want to expose a service on port 80 (for example) but instead have to use something like 30986.
Rationale for this behavior:
1. We don't want service node ports to tromp on real ports used by the node (e.g. 22).
2. We don't want service node ports to tromp on pod host ports.
3. We don't want to randomly allocate someone port 80 or 443 or 22.
Proposed compromise: Allow NodePorts on low ports with a bunch of caveats.
To address rationale points (1) and (2): Make kube-proxy open (and hold open) ports that it is using. This will prevent it from using port 22 (for example) for a service node port.
To address rationale point (3): Use the flag-configured range for random allocations, but allow users to request any port they want.
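The dual-range policy described above can be sketched as follows. This is a minimal illustration, not the actual apiserver allocator; the `PortRange` type and `AllocateNodePort` function are hypothetical names invented for the sketch:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

// PortRange is a hypothetical stand-in for the apiserver's flag-configured
// node-port range (--service-node-port-range).
type PortRange struct {
	Base, Size int
}

func (r PortRange) Contains(p int) bool {
	return p >= r.Base && p < r.Base+r.Size
}

// AllocateNodePort sketches the proposed compromise: random allocations stay
// inside the flag-configured range, but an explicit user request may claim
// any valid port. `used` tracks ports already handed out.
func AllocateNodePort(requested int, flagRange PortRange, used map[int]bool) (int, error) {
	if requested != 0 {
		// Explicit request: allow any free port in 1-65535.
		if requested < 1 || requested > 65535 {
			return 0, fmt.Errorf("port %d out of range", requested)
		}
		if used[requested] {
			return 0, fmt.Errorf("port %d already allocated", requested)
		}
		used[requested] = true
		return requested, nil
	}
	// Random allocation: only from the configured range, so nobody is
	// handed port 22 (or 80, or 443) by accident.
	for attempt := 0; attempt < flagRange.Size; attempt++ {
		p := flagRange.Base + rand.Intn(flagRange.Size)
		if !used[p] {
			used[p] = true
			return p, nil
		}
	}
	return 0, errors.New("node port range exhausted")
}

func main() {
	used := map[int]bool{}
	r := PortRange{Base: 30000, Size: 2768} // 30000-32767, the default range
	p, _ := AllocateNodePort(0, r, used)    // random: always lands in the flag range
	fmt.Println(r.Contains(p))              // true
	p2, _ := AllocateNodePort(80, r, used)  // explicit: allowed outside the flag range
	fmt.Println(p2)                         // 80
}
```

The caveat below about plumbing dual ranges through the code is about exactly this split: the real allocator would need both the flag range and the full range threaded through layers that today only see one range.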
Caveats and analysis:
1. Error reporting is not good. We do not know what non-container stuff is using ports on the host, and we do not have an easy way for the API server to coordinate port allocation between pods and services. This is not easy to resolve. The implication is that port-in-use errors will only be detected by kube-proxy when it tries to open the port on the node. Sending events for this is possible, but not great (one event from every node), and no other node-level error currently gets a kube-proxy event. Net result: a user asks for a service on node port 22, it simply doesn't work, and they have no idea why.
2. Doing dual ranges (use the flag range for random allocations but allow the whole range for explicit requests) is non-trivial and has to be plumbed through more code than I am comfortable with at this point. The implication of not doing this is that sometimes people will get randomly allocated a port that happens to be 22 and can never work. Combined with caveat (1) this is really unpleasant. We could do some REALLY hacky things, like retrying the random allocation if it is not in the flagged range. This avoids plumbing the dual-range logic down, but is embarrassingly ugly. I am ashamed to have suggested it.
3. Holding ports open is pretty easy, and we should do this anyway. The error reporting properties are still not good.
Summary: I am unconvinced that this is worthwhile to rush into v1. We still have the option of baking a container that receives traffic on a HostPort and redirects it to a service. I have a demo of using socat to do this (simulating a pod manually):
```console
root@kubernetes-minion-nnx6:/home/thockin# docker run -d --dns=10.0.0.10 --dns-search=default.svc.cluster.local -p 8080:93 thockin/hostport-to-service 93 hostnames2:80
1dcc1e94c30834290ae243ac298c6699b2a3348fc014b4b77ae34c13ead44854
root@kubernetes-minion-nnx6:/home/thockin# curl localhost:8080
hostnames2-gxt4f
```
Activity
thockin commentedon Jun 18, 2015
Priority to be decided - some argue that it is P0.
thockin commentedon Jun 18, 2015
I hope my summary wasn't TOO slanted. There are legitimate issues that this sort of change would fix.
thockin commentedon Jun 18, 2015
That is one of the legitimate use cases, though no user is claiming this one is a hard requirement. The biggest issue is simply friction and "kick the tires" operation.
eparis commentedon Jun 18, 2015
A great example would be the L7 ingress router used by openshift. It needs to run on 80 and 443. Quite reasonable and not at all a 'kick-the-tires' kind of thing.
While hostPort might be possible, it means I can't use RCs any more, as I have to start pinning the ingress router container to specific nodes. With publicIPs this was solved: externally, people could be told to use publicIP[0]:443 and everything would be OK, as the ingress router could still be managed by the cluster.
(actually for some cert checking reasons we've had to bastardize the ingress router, but it worked really well on 80)
thockin commentedon Jun 19, 2015
@eparis I don't think I understand your model. You run L7-as-a-service and you tell people to connect to $node_ip on :80 and :443, rather than a load balancer? And you're confident that no user of the cluster will ever ask for a Service with port 80 or 443 and ask for NodePort of the same port?
If so, you could change the apiserver's --service-node-port-range to "1-32767" and install your L7 before anyone "randomly" claims :80 or :443.
To reiterate, we WANT to enable this, but there's not time to do it properly before 1.0 - if you're comfortable with the caveats above, you can fix this today. My argument is that it should not default to "ridiculously brittle" mode and that the changes to make it not-brittle are too much churn for this point in the release cycle.
eparis commentedon Jun 19, 2015
--service-node-port-range=1-32767 suffers extremely from the accidental-stomping problem: a random allocation can land on port 22. That's not someone screwing up, that's the SYSTEM screwing up....