Some thoughts about the pod/container abstraction in kubelet #12619
Interesting, I was also thinking about removing the …
FWIW, I am working on some other projects this week, but I should be able to get back to the rkt patches and kubelet refactoring quickly.
Good to know that we're on the same page!
Sounds good. I will work on converting my proposal to a more concise github PR with sub-issues, and see if you are interested in taking on any of those. We should touch base sometime next week. I think we need to clean this up before adding a generic runtime pod cache from my proposal. It'd certainly make things a lot easier :)
@yujuhong Thank you! More than happy to help, as I am back working on rkt fixes now, and would like to take the chance to refactor and clean up those historical problems :)
Going to start refactoring this if you haven't already started :)
After thinking about it more, I propose that we add a new type, `RawPodStatus`. At kubelet startup, it can ask the container runtime for the `RawPodStatus` of all pods to populate the cache.

```go
type RawPodStatus struct {
	ID                types.UID
	Name              string
	Namespace         string
	IP                string
	ContainerStatuses []RawContainerStatus
}

type RawContainerStatus struct {
	ID           ContainerID
	Name         string
	Status       ContainerStatus
	CreatedAt    time.Time
	StartedAt    time.Time
	FinishedAt   time.Time
	ExitCode     int
	Image        string
	ImageID      string
	RestartCount int
	Reason       string
	Message      string
}
```
The plan is to populate the cache with these statuses. @yifan-gu, thoughts?
Also cc @kubernetes/goog-node. The specific problem is that, to populate the cache for `SyncPod` to consume, PLEG would need to generate the `RawPodStatus` for each pod at relisting.
Sounds reasonable. Another thing I take from this: it seems that relisting the pod statuses requires one inspect call per pod.
Hmm... you are right. For rkt, listing can actually provide more information, but I am not sure we can optimize that in a generic, runtime-agnostic implementation when relisting.
This is a compromise between docker and rkt. For docker, listing returns far less detail than inspecting each container. Unless we are going to implement caches in individual container runtimes, compromises are unavoidable :-(
@yujuhong Sorry for the confusion, but actually I meant the `InspectPod()` and `ListPods()` from rkt's own API. So for kubelet's `ListPods()`, it can invoke `rkt.ListPods(simple)`; for kubelet's `GetRawPodStatus()`, it can invoke `rkt.InspectPod()`; and if we want all the raw pod statuses, it can do `rkt.ListPods(detailed)`.
Understood, thanks for the explanation. Adding the raw pod status would involve some significant refactoring. I have the first batch of WIP commits to add this in the runtime interface, along with the implementation in dockertools: https://github.com/yujuhong/kubernetes/tree/raw_status. If we decide to do it, the TODOs will be:
Hmm... My understanding is that currently each pod worker queries the runtime for the pod status itself. So in fact there should be two kinds of "PodStatus": one that shows the up-to-date runtime status (needed in `SyncPod`), and one that kubelet translates into `api.PodStatus` for reporting. My benchmark results show that it is really inefficient to let each pod worker access the runtime independently. So if we have a cache populated by relisting, the pod workers can read from it instead. Do I understand the story right? :)
@Random-Liu, your summary is correct. |
@yujuhong Ha, thanks for your explanation. :) |
We've finished all the TODO items on this issue. Closing it. |
This is purely food for thought, as others may not agree and this probably wouldn't be prioritized anytime soon :-)

Kubelet defines a `Pod` type in the container runtime, which has a list of containers and an `api.PodStatus` object that includes all the container statuses. This seems counter-intuitive to me, because `Pod` is supposed to be a representation of the running pod in the system, i.e., it is essentially the pod status/state (so we have status nested in the status?). It is also unnecessary for the container runtime to speak `api.PodStatus`, because kubelet has to fill in extra information later on to complete it.

The abstraction is a bit hard to digest on its own, and the types seem to center around implementation details (retrofitting?). Moreover, we often pass `Pod`s without any valid pod/container status, so you have no way to tell after the fact whether a container in the pod is running or dead. I'd prefer a flatter struct reflecting the status of the containers/pods, and let kubelet interpret and translate that status into `api.PodStatus`.

A simple example would be:
/cc @yifan-gu