Remove need for the apiserver to contact kubelet for current container state #156
I think we'd rather not have the Kubelet calling back into the master, since it can result in massive fan-in storms of messages. However, I can definitely see the value in caching the information inside the apiserver. So what do you think about having the apiserver periodically poll all Kubelets for information and cache that information locally? I think that would satisfy all of the needs you enumerated, while still enabling the master to control the flow of information. What do you think?
In the past I've suggested a strategy where the master (or some fellow-traveller server process) scrapes the nodes regularly and writes back to the master/etcd. We'd then return how stale results are in our API. Further, there'd be an API option to ask for "up to the second" results, which would trigger a synchronous call out to the node.
I think that any apiserver->kubelet polling will have problems in some mixed topologies.
I hear ya on the fan-in problem. That problem already exists to some extent. Perhaps only push updates on state changes?
I hear you about the problems with mixed topologies. Pushing updates on state changes actually does have the same fan-in problems: suppose all of your tasks fail at the same time with a packet of death; you'll still see a storm of messages. Hacking in the polling from the apiserver is going to be easier in the short term, so I think we'll at least do that first.
The main reasons for the apiserver to contact kubelets rather than the other way are:
Regardless of which component initiates the connection, we may want to implement a number of optimizations, especially once we start to collect resource stats:
My concern has mainly been figuring out the blockers. Perhaps a hybrid approach is going to be the best:
This adds caching to the baseline config, and gives the option to reverse the direction later.
SGTM. I will send a PR for the poll and cache support today, and then work on the push. Thanks for bearing with us as we sort through this stuff ;) --brendan
No problem! Thanks for even doing the work -- I was happy to do that. I was thinking of tackling #134 as well, but I can wait if you want to avoid conflicts.
We're happy to take the work! Feel free to take on #134. Best
Polling from the master was added in #171. I'll work on optional push next.
David is going to open an issue on the security of the kubelet in general, with a proposal to restrict the kubelet via TLS client certs. We may also need to let the kubelet ask the master whether certain things are allowed, which would be covered under his SubjectAccessReview proposal.
cc @erictune
@roberthbailey @cjcullen Robby and CJ have also been looking into securing the kubelet <-> master communication with TLS certs. We definitely need to harden that path, but I think that's a separate issue from this one?
This proposal was to change to unidirectional communication from the kubelet to the apiserver. If we replicated the apiserver (#473), I think that could work reasonably well, and it would be in line with the approach used by other components, such as the controller-manager and scheduler.
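In the unidirectional model above, each kubelet would periodically POST its status to one of the replicated apiservers. The sketch below only shows how such a report request might be constructed; the payload shape, endpoint path, and names are hypothetical, not the project's actual API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// NodeStatus is a simplified, hypothetical payload; the real shape
// would be whatever the apiserver's status endpoint accepts.
type NodeStatus struct {
	Node       string            `json:"node"`
	Containers map[string]string `json:"containers"`
}

// buildStatusRequest builds the POST a kubelet would send to one of
// the replicated apiservers. The URL path is illustrative only.
func buildStatusRequest(apiserver string, s NodeStatus) (*http.Request, error) {
	body, err := json.Marshal(s)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST",
		apiserver+"/api/nodes/"+s.Node+"/status", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildStatusRequest("https://apiserver.example",
		NodeStatus{Node: "node-1", Containers: map[string]string{"web": "running"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
}
```

Because the kubelet always initiates the connection, the master never needs inbound reachability to every node, which is what makes this compatible with the mixed topologies mentioned earlier in the thread.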
Change provisioning to pass all variables to both master and node. Run Salt in a masterless setup on all nodes ala http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html, which involves ensuring the Salt daemon is NOT running after install. Kill the Salt master install. And fix push to actually work in this new flow.

As part of this, the GCE Salt config no longer has access to the Salt mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed by static config.
- The master can't see the list of all the minions: this is fixed temporarily by static config in util.sh, but later, by other means (see kubernetes#156, which should eventually remove this direction).

As part of it, flatten all of cluster/gce/templates/* into configure-vm.sh, using a single, separate piece of YAML to drive the environment variables, rather than constantly rewriting the startup script.
Should we close this one?
Yes
While the kubelet certainly is the source of truth for what is running on a particular host, it might be nice to have it push that info to the apiserver on a regular basis (and on state changes) rather than force the apiserver to ask.
Reasons:
The kubelet could publish this to etcd, but I think it'd be good to aim for fewer dependencies on etcd rather than more. Thoughts on that?