What are the blockers for running 1000 pods per node? #12082
Comments
cc @dchen1107 @vishh
The last time I tried it:
If you have a 64 GB machine and you assume every pod needs a paltry 100 MB, memory alone caps you at roughly 640 pods.
During my initial tests I was able to start about 250 pods on a single node with 32 GB of RAM, but I am interested in the feasibility of running something like 128-256 GB, so memory wouldn't be a problem. Right now the first issue I can see is simply starting that many pods: from what I can see, the Docker daemon is not handling communication with the kubelet very well. ( #12099 )
@thockin gives a good list of potential issues. Besides those, there are scalability / performance issues in today's kubelet: to manage all those pods on a node, the kubelet performs a lot of heavy operations against Docker during its sync loop. We are working on improving this for the 1.1 release. The first thing we are experimenting with is using the Docker event stream. In measurements I did before the 1.0 release, on a 32-core VM with 28.2 GB of RAM, the kubelet could reliably manage 150 pods (each pod with 2 containers).
@dchen1107 do you have any numbers on the expected pod count after switching to event streams? Should we expect a 2x boost, or 10x?
cc/ @yujuhong, who is experimenting with the Docker event stream. YuJu, is there any data we could share?
I am still working on the design doc and don't have a working prototype yet. I do expect CPU usage and the number of Docker requests to drop significantly if we switch to the Docker event stream (or something similar). I will update once I get some numbers.
@yujuhong is there a public place where we can track progress on that?
@lleszczu, I just created an issue for this, and will update later. |
Hi,
I just started playing with Kubernetes on bare-metal servers and wonder how many pods I can start per server. According to https://github.com/GoogleCloudPlatform/kubernetes/blob/master/test/e2e/density.go#L175, k8s should support 30 pods/node, which is very low for bare-metal deployments. Do you have a list of blockers to scaling it up to 1000 pods/node (or at least a few hundred)?
I am mostly interested in kubelet/docker limitations.