
Container Runtime Interface/API Planning #28789

Closed
yujuhong opened this issue Jul 11, 2016 · 13 comments
Labels: area/extensibility, area/kubelet-api, lifecycle/rotten, sig/node

@yujuhong (Contributor) commented Jul 11, 2016

This issue is created to help track the progress of the container runtime interface/API. The original issue is #22964

Summary of current status

  • We reached an agreement on the general concept of a pod sandbox and imperative container-level operations. The proposal, together with a placeholder Go interface file, was merged in Add a new container runtime interface (#25899).
  • There is a PR to implement the API using gRPC + protobuf. The goal is to add a path in kubelet that uses this new runtime API, so that we can start implementing POCs against the API and iterate on it based on feedback.
  • A slew of issues has also been filed to discuss specific parts of the API further. This is necessary because the API spans too many aspects to discuss in one central place. We also hope to tackle different issues separately while collecting feedback from early POCs.

To reiterate, the goals of the API are:

  • Improve extensibility
    • Prevent container runtime lock-in
    • Build a better ecosystem
    • Encourage user adoption
  • Higher velocity & lower code maintenance cost

The timeline below is only an estimate and is subject to change.

Milestone 0: v1alpha1 of container runtime API (~1 month)
The v1alpha1 API is experimental and supports the core functionalities required by kubelet/kubernetes. Once this milestone is met, developers can start implementing POCs against the API and provide feedback.
Requirements:

Milestone 1: v1alpha2, feature-complete version of the API (~1 month)
Some advanced or controversial features are not addressed in the v1alpha1 API because they require extended discussions and potential changes to the Kubernetes API. These features should be addressed in the v1alphaN versions of the API.
These features include:

Milestone 3: Hyper and Docker integrated using runtime API/interface (~2 months)
This milestone demonstrates that the API meets the minimal requirements to support a runtime in the kubelet. Note that Docker will remain in the kubelet codebase, but will be refactored to use the internal runtime interface matching the new API.
Requirements:
The integrated runtime should pass all node conformance tests

Milestone 4: Promote the API version to beta (~N months)
Release the beta version of the API.
Requirements:

  • Rkt integration with Kubernetes via the CRI API. OWNER: @tmrts
  • A complete upgrade story.

Other related issues: oci/runc integration #26788

/cc @kubernetes/sig-node @kubernetes/sig-rktnetes @thockin

@MikeSpreitzer (Member)

At the SIG Node meeting of July 12, 2016 there was discussion about how this issue includes adding lifecycle hooks that the Kubernetes operator can exploit for various purposes. I do not see that in the initial comment here. What am I missing?

@vishh (Contributor) commented Jul 15, 2016

There will be no hooks. Instead, if one wants to customize the Docker runtime behavior, for example: fork the Docker runtime implementation in Kubernetes core, add the necessary features, build it as a separate binary, and have kubelet use that binary as the runtime. Essentially, bring your own kubelet runtime for customizations that cannot be handled as part of Kubernetes core.


@MikeSpreitzer (Member)

That seems pretty heavy.
Carrying a permanent fork is pretty ugly practice.
Could I instead create an additional repo that builds a binary from code that wraps the upstream code?

@thockin (Member) commented Jul 18, 2016

I think wrappers would be possible, but as this is almost certainly a daemon interface rather than exec, wrapping is a bit trickier.

There is a 1:1 relationship between kubelet and the runtime. Maybe, instead of a UNIX socket or TCP, we could make kubelet fork and exec the runtime binary and use a pipe on stdin/stdout. This would make chaining very easy. Bad for isolation, perhaps. Maybe we can standardize a CLI for chaining, something like "look in this dir for your UNIX socket", so each can add layers to the onion if needed.


@MikeSpreitzer (Member)

@thockin are you talking about chaining where my added functionality is in a daemon of mine that, in turn, invokes the regular daemon? If so, is there a big difference between chaining through a UNIX socket vs. chaining through a TCP socket?

@thockin (Member) commented Jul 19, 2016

Not really. UNIX seems saner overall, with respect to simple auth.


@euank (Contributor) commented Jul 25, 2016

To clarify the "Rkt integrates with Kubernetes via the new API (?). OWNER: CoreOS." item at the end, @yujuhong

We do intend to integrate as soon as we reasonably can, both to gain better feature parity (e.g., exponential backoff, init containers) and to be as well-supported a k8s runtime as we can be.

Due to the design of rkt, this might end up taking a bit longer, but we're hopeful we'll be able to have an alpha integration shortly after Hyper and Docker (so Milestone 3.5 :).

We would like to be able to have this integration soon enough that we can ensure the interface makes sense for us too.

@tmrts (Contributor) commented Jul 27, 2016

I'm adding myself as the owner of "Rkt integrates with Kubernetes via the new API".

I'll try to help out with the other items as well and own some if need be.

@dchen1107 (Member)

cc/ @matchstick @thockin Here are the work items and the plan we talked about.

k8s-github-robot pushed a commit that referenced this issue Aug 3, 2016
Automatic merge from submit-queue

Kubelet: add kubeGenericRuntimeManager for new runtime API

Part of #28789. Add `kubeGenericRuntimeManager` for kubelet new runtime API #17048. 

Note that:

- To facilitate code review, #28396 was split into a few small PRs. This is the first part.
- This PR also fixes some syntax errors in `api.proto`.
- This PR is depending on #29811 (already merged).

CC @yujuhong @Random-Liu @kubernetes/sig-node
k8s-github-robot pushed a commit that referenced this issue Aug 9, 2016
Automatic merge from submit-queue

Kubelet: implement labels for new runtime API

Implement labels for new runtime API. Part of #28789 . 


CC @yujuhong @Random-Liu @kubernetes/sig-node

k8s-github-robot pushed a commit that referenced this issue Aug 12, 2016
Automatic merge from submit-queue

Kubelet: generate sandbox/container config for new runtime API

Generate sandbox/container config for new runtime API. Part of #28789 .

CC @yujuhong @Random-Liu @dchen1107
k8s-github-robot pushed a commit that referenced this issue Aug 21, 2016
Automatic merge from submit-queue

Kubelet: add --container-runtime-endpoint and --image-service-endpoint

Flag `--container-runtime-endpoint` (overrides `--container-runtime`) is introduced to identify the unix socket file of the remote runtime service. And flag `--image-service-endpoint` is introduced to identify the unix socket file of the image service.

This PR is part of #28789 Milestone 0. 

CC @yujuhong @Random-Liu
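Per the commit message above, the two flags point kubelet at the UNIX socket files of a remote runtime and image service. A hedged example invocation (socket paths are illustrative and depend on the runtime shim in use):

```shell
# Illustrative only: actual socket paths vary by runtime implementation.
kubelet \
  --container-runtime-endpoint=/var/run/my-runtime.sock \
  --image-service-endpoint=/var/run/my-image-service.sock
```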
k8s-github-robot pushed a commit that referenced this issue Aug 22, 2016
Automatic merge from submit-queue

Kubelet: implement GetPods for new runtime API

Implement GetPods for kuberuntime. Part of #28789 .

CC @yujuhong @Random-Liu
nhlfr pushed a commit to nhlfr/kubernetes that referenced this issue Aug 24, 2016
Provide support for --container-runtime-endpoint and
--image-service-endpoint in kubelet.

Ref kubernetes#28789
k8s-github-robot pushed a commit that referenced this issue Sep 7, 2016
Automatic merge from submit-queue

Kubelet: implement GetPodStatus for new runtime API

Implement `GetPodStatus()` for new runtime API.  Part of #28789 .

CC @yujuhong @Random-Liu @dchen1107
k8s-github-robot pushed a commit that referenced this issue Sep 10, 2016
Automatic merge from submit-queue

Add client-server runtime support to local-up-cluster.sh


**What this PR does / why we need it**: It provides support for using `--container-runtime-endpoint` and `--image-service-endpoint` arguments for kubelet in `local-up-cluster.sh` script.

**Which issue this PR fixes**: ref #28789

**Special notes for your reviewer**:

**Release note**:
```release-note
```

Provide support for --container-runtime-endpoint and
--image-service-endpoint in kubelet.

Ref #28789
@yujuhong yujuhong self-assigned this Nov 21, 2016
@warmchang (Contributor)

nice job! 👍

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 31, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 30, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
