
Add Windows Containers Support #22623

Closed
yllierop opened this issue Mar 7, 2016 · 35 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/windows Categorizes an issue or PR as relevant to SIG Windows.

Comments

@yllierop
Contributor

yllierop commented Mar 7, 2016

Add Windows Containers Support at least at the node level.

@mikedanese mikedanese added the sig/node Categorizes an issue or PR as relevant to SIG Node. label Mar 10, 2016
@yllierop
Contributor Author

I'd like to plan a kickoff meeting with @BenjaminArmstrong and @sschuller sometime in the next couple of weeks. I'd also like to ask @sarahnovotny to create the Windows SIG as well.

@timothysc
Member

/cc @kubernetes/sig-node

@asultan001

And here we go!

@ghost

ghost commented Mar 14, 2016

Here are a few reasonable intros, for those following along:

The Register: Hands On
Mark Russinovich, Microsoft Azure CTO
Microsoft Docs

@yllierop
Contributor Author

Thanks @quinton-hoole, greatly appreciate the references. Exciting times, @asultan001 @timothysc

@thecloudtaylor

This is great to see! Just for a quick intro, I am the lead program manager for all server container technologies in Windows; my team is responsible for Windows Server Containers and Hyper-V Containers.

@asultan001

Thanks @taylorb-microsoft looking forward to connecting once we have something a bit more concrete.

@yllierop
Contributor Author

I've created a shared document available at: https://goo.gl/NE0ABx to track our planning discussions. Thanks for helping us @taylorb-microsoft.

@rakeshm

rakeshm commented Mar 14, 2016

A few of us at Apprenda, @taylorb-microsoft, and @johngossman are going to do a quick sync up this week.

We'll add to the shared doc @preillyme

@ghost

ghost commented Mar 14, 2016

Could someone please grant me comment permission on the shared doc? I'm quinton@google.com

Thanks

Q


@davidopp
Member

cc/ @colemickens

@rakeshm

rakeshm commented Mar 18, 2016

I added a summary of the Apprenda/Microsoft meeting to the doc with some upcoming key action items.

@smarterclayton
Contributor

> How receptive will the project be to modifying allowable Kubelet options to include Windows-specific flags and options?

We have an "ok" story for container-runtime-specific flags, but it needs to be upgraded to a "great" story. We'd probably follow the rkt pattern for now.

> Is the PodSpec easy to modify in case any idiomatic Windows specs need to be added?

Great question. We should try to figure out what those would be and then discuss a set of them together. ContainerSpecs were designed to be more generic than the default Docker container spec originally was, but I suspect we'll simply have options that don't work on all runtimes, and a way to document that. For instance, SELinux and AppArmor are already two items that don't work on all Linux distros, but they're encapsulated within higher-level security groupings. Path specs are likely to be painful on volumes. Dealing with persistent volumes across Windows systems may not change that much, although certain options simply won't be available. We've started the "how do we deal with images across multiple runtimes" discussion, but since Windows Docker would probably use the same image format as Linux, I don't expect that to be an issue.

@srounce

srounce commented Mar 24, 2016

@smarterclayton I know it isn't perfect, but would it be possible to take an approach similar to cygwin, where we leverage a small utility to transform paths to and from Windows format?

Also Docker on Windows (2016 TP3) appears to use the same image format.
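To make the cygwin-style idea above concrete, here is a minimal, purely hypothetical sketch of such a path-translation helper. The function name `toPosix` and the `/c/...` output convention are assumptions for illustration, not anything from the kubelet or from cygwin itself:

```go
package main

import (
	"fmt"
	"strings"
)

// toPosix is a hypothetical helper in the spirit of cygwin's cygpath:
// it rewrites a Windows drive path like C:\Users\app into a /c/Users/app
// form, so path-based fields could be translated at the node boundary.
func toPosix(winPath string) string {
	if len(winPath) >= 2 && winPath[1] == ':' {
		// Lowercase the drive letter and mount it at the root.
		drive := strings.ToLower(winPath[:1])
		rest := strings.ReplaceAll(winPath[2:], `\`, "/")
		return "/" + drive + rest
	}
	// No drive prefix: just flip the separators.
	return strings.ReplaceAll(winPath, `\`, "/")
}

func main() {
	fmt.Println(toPosix(`C:\Users\app`)) // /c/Users/app
}
```

A real helper would also need to handle UNC paths, volume GUIDs, and quoting, which this sketch deliberately ignores.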

@dchen1107
Member

Can someone grant me, dawnchen@google.com the comment permission on that shared doc?

@yllierop
Contributor Author

Hey @dchen1107 I've added you as an editor to the shared document.

@luxas
Member

luxas commented Mar 30, 2016

Given the OS differences and eventual differences in Pods as a container grouping mechanism, how should we approach any asymmetry that will likely occur when handling Windows hosts when compared to non-Windows hosts?

I've been thinking about whether the kubelet should annotate or label itself with the platform it's running on. That might make arm, arm64, ppc64le, amd64 and windows handling easier, if there ever are cross-platform clusters. WDYT?

@esotericengineer

I think self-labeling based on host platform makes a ton of sense. It will help with idiomatic configuration per platform, but also lends itself to helping with cluster segregation. Is there anything else in k8s that currently self-labels?

@davidopp
Member

#9044 says cloud provider but can also cover platform stuff.

@esotericengineer

OK, after reading through #9044, it would seem we could capture 'Platform' as a standard label in pkg/api/unversioned/well_known_labels.go. My guess is that in a multi-platform k8s, that would make the most sense, since multiple platforms could become a pretty standard use case. That label could be used for OS & architecture combos, or just OS.

@luxas
Member

luxas commented Mar 30, 2016

I guess sample values for kubernetes.io/(generic/)platform could be linux/arm, linux/amd64, windows/amd64. This would be very nice to have when we're heading for cross-platform (amd64, arm, arm64 and ppc64le): #17981

@davidopp When talking code changes, what should be added more than this, or is this fine?

```go
// pkg/api/unversioned/well_known_labels.go:22
const LabelPlatform = "beta.kubernetes.io/platform"

// pkg/kubelet/kubelet.go:1042
node.ObjectMeta.Labels[unversioned.LabelPlatform] = runtime.GOOS + "/" + runtime.GOARCH
```

I could send a PR for this if you like.
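As a standalone sanity check of the snippet above, a runnable sketch that computes the same label value from `runtime.GOOS` and `runtime.GOARCH`. The key `beta.kubernetes.io/platform` is the one proposed in this comment, not necessarily what ultimately shipped:

```go
package main

import (
	"fmt"
	"runtime"
)

// platformLabel builds the proposed label value, e.g. "linux/amd64"
// or "windows/amd64".
func platformLabel() string {
	return runtime.GOOS + "/" + runtime.GOARCH
}

func main() {
	// The kubelet snippet above would store this under
	// node.ObjectMeta.Labels["beta.kubernetes.io/platform"].
	fmt.Println(platformLabel())
}
```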

@csrwng
Contributor

csrwng commented Apr 14, 2016

Early prototype: csrwng@e755508
And corresponding demo:
https://goo.gl/2XxOtY

More than anything it helps to identify gaps in function (just look at the chunks of code that are stubbed or commented out).

Main issue I ran into was that Windows containers don't lend themselves well to the Linux model where 1 container = (mostly) 1 process. Windows containers tend to include more of the OS, including the service manager, and at least as of right now don't allow namespace sharing. Therefore it doesn't make sense to start a separate infra container to hold on to the IP as on the Linux side. More importantly, a pod cannot be represented as a set of containers that share certain things. On Windows, it may make more sense to have 1 pod = 1 Windows container, with each container in the pod simply representing a separate process in that container. If modeled that way, it means that containers in a Windows pod cannot each use a different image; it also means that resource requirements, security constraints, etc. would apply to the entire pod, and not to each container.

@smarterclayton
Contributor

1 pod = 1 container is a dramatic shift, so it's worth diving into what sharing is important. Network and volumes are critical for sharing. Everything else is just nice to have. Can Windows handle 1 IP for multiple containers? Can it handle volume sharing?


@csrwng
Contributor

csrwng commented Apr 15, 2016

> Can Windows handle 1 IP for multiple containers?

No (at least as of today). Actually the container IP is not yet surfaced through the Docker API, but I suspect that's just a bug in the implementation.

> Can it handle volume sharing?

Yes.

We're going to find out more next week as we talk to Microsoft and understand what will eventually be possible vs never possible.

@smarterclayton
Contributor

I would even say volume sharing is the key differentiator for multiple containers. I'd be extremely hesitant to consider a 1 pod = 1 container approach for Windows unless we're saying containers on Windows fundamentally can never approach that.


@vishh
Contributor

vishh commented Apr 15, 2016

Another thing to consider is the level of configurability we will get with Windows containers. If we can emulate pods by dynamically configuring containers, that might work as well.

@rakeshm

rakeshm commented Apr 17, 2016

I guess we'll learn more next week on what MSFT would recommend, but IMO, having a short-term (if indeed it is short-term) limitation on Windows that 1 pod = 1 container is better than having a host of materially important caveats on Windows when you have 1 pod = n containers, if that's what it ends up coming down to.

Clear statements of limitations - and we know there will be limitations in a variety of areas - are important because if you're constantly reading the fine print, it just creates lots of friction.

I'm with you, Clayton, that we should be very hesitant to limit to 1 pod = 1 container, but if the alternatives are going to be messy, we shouldn't force a model into place that isn't ready.

@smarterclayton
Contributor

smarterclayton commented Apr 17, 2016 via email

@rakeshm

rakeshm commented Apr 17, 2016

Agreed.

Just for clarification though - we'd be talking about deciding on the composition of a pod, not the fact that a pod is the unit of scheduling, right? I'm certainly not suggesting we even consider the latter.

@smarterclayton
Contributor

smarterclayton commented Apr 17, 2016 via email

@rakeshm

rakeshm commented Apr 17, 2016

Got it. Looks like it will come down to what can actually be shared across containers within a pod, and which lifecycle operations can be mutually guaranteed, before this becomes a tougher call.

@michmike
Contributor

Hi everyone, members from Apprenda and Red Hat have created the first version of a technical investigations document on how to bring the kubelet to Windows.
Comments and feedback are welcome from everyone (everyone has the ability to add comments); if you want edit/write access, let me know through the Slack messaging system.

https://docs.google.com/document/d/1qhbxqkKBF8ycbXQgXlwMJs7QBReiSxp_PdsNNNUPRHs/edit?usp=sharing

Our goal is to share some of these findings with Microsoft during our Wednesday meeting. The focal point of that meeting is to go over some of the questions for Microsoft that we started accumulating in this document. If you have additional questions to bring during that discussion, please add them to the document.

k8s-github-robot pushed a commit that referenced this issue May 13, 2016
Automatic merge from submit-queue

Automatically add node labels beta.kubernetes.io/{os,arch}

Proposal: #17981
As discussed in #22623:
> @davidopp: #9044 says cloud provider but can also cover platform stuff.

Adds a label `beta.kubernetes.io/platform` to `kubelet` that informs about the os/arch it's running on.
Makes it easy to specify `nodeSelectors` for different arches in multi-arch clusters.

```console
$ kubectl get no --show-labels
NAME        STATUS    AGE       LABELS
127.0.0.1   Ready     1m        beta.kubernetes.io/platform=linux-amd64,kubernetes.io/hostname=127.0.0.1
$ kubectl describe no
Name:			127.0.0.1
Labels:			beta.kubernetes.io/platform=linux-amd64,kubernetes.io/hostname=127.0.0.1
CreationTimestamp:	Thu, 31 Mar 2016 20:39:15 +0300
```
@davidopp @vishh @fgrzadkowski @thockin @wojtek-t @ixdy @bgrant0607 @dchen1107 @preillyme
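For completeness, a node label like the one shown in the output above is consumed from a PodSpec via `nodeSelector`. A minimal, hypothetical pod that would only schedule onto the labeled node from the example output (the pod name and `nginx` image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-amd64-only
spec:
  # Matches the label shown in the `kubectl get no --show-labels` output above.
  nodeSelector:
    beta.kubernetes.io/platform: linux-amd64
  containers:
  - name: app
    image: nginx
```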
@michmike
Contributor

michmike commented Oct 6, 2016

Today in the Kubernetes community meeting, we demoed the alpha version of Windows Server Container support in Kubernetes, with Kubernetes running on Microsoft Azure.

Feature: kubernetes/enhancements#116 will track bringing the work of SIG-Windows to beta with release 1.5 of Kubernetes.

If you want to help, join SIG-Windows at https://kubernetes.slack.com/messages/sig-windows

cc: @sarahnovotny , @brendandburns

@bgrant0607 bgrant0607 added the sig/windows Categorizes an issue or PR as relevant to SIG Windows. label Dec 7, 2016
@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 19, 2017
@luxas
Member

luxas commented Dec 22, 2017

I think this issue can be closed in favor of kubernetes/enhancements#116, where the feature state is tracked. Also, this is already implemented to a large extent (beta); woohoo!

Thanks all for the great work 👍 and reopen if you disagree with this assessment

@luxas luxas closed this as completed Dec 22, 2017