We should make it possible to extend IAM permissions #14226

Closed
justinsb opened this issue Sep 19, 2015 · 11 comments
Labels: lifecycle/rotten, priority/backlog

Comments

@justinsb (Member)

If we are to take ownership of the IAM permissions for k8s (so that we can update them as we add/remove permissions), we also need to give users a way to add their own permissions specific to their installation.

My current plan is to create two IAM permission sets, one that k8s owns and one for the user that k8s will not alter.

@emmanuel

@justinsb - I'm guessing you're already aware of this, but for the record, EC2 instances can only be associated with a single IAM instance profile (and therefore a single IAM role).

Perhaps you're suggesting maintaining two sets of policies within a single IAM role (I'm not quite sure how to parse your comment), but be aware that using two roles is going to be a non-starter, assuming you intend to apply both sets of permissions (k8s-owned and user-owned) to a single EC2 instance.
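
Concretely, that plan fits within the single-instance-profile constraint: one role in one instance profile, with two managed policies attached to that role. A minimal sketch with the AWS SDK for Go, using hypothetical role and policy names:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := iam.New(sess)

	// One role (and therefore one instance profile, per the EC2 limit),
	// with two managed policies attached: one that k8s tooling owns and
	// may rewrite, and one that belongs to the user and is never touched.
	role := aws.String("k8s-node") // hypothetical role name
	for _, policyArn := range []string{
		"arn:aws:iam::123456789012:policy/k8s-owned-node-policy",
		"arn:aws:iam::123456789012:policy/user-owned-node-policy",
	} {
		_, err := svc.AttachRolePolicy(&iam.AttachRolePolicyInput{
			RoleName:  role,
			PolicyArn: aws.String(policyArn),
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("attached", policyArn)
	}
}
```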

@davidopp added the team/redhat and priority/backlog labels on Nov 3, 2015
@rafaljanicki

Actually this is a case I'd like to discuss a bit further. Currently, if we want to give a container access to another AWS service (e.g. Redshift), we need to use AWS credentials (access/secret key). This raises security concerns and forces us to put many secrets inside YAML manifests.

A better solution, suggested by AWS, is to use IAM roles to connect apps. However, with containers that's unwise, because it would grant the whole cluster access to the service (an option if you think of the cluster as a single application, but definitely not the option in general). On the other hand, we can't attach a role to a container, which leaves no other option.

Do you have any solution to that by any chance?
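
For reference, the secret-based pattern described above looks roughly like this with the AWS SDK for Go; the region and the env-var sourcing (from a Kubernetes Secret) are hypothetical:

```go
package main

import (
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/redshift"
)

func main() {
	// Long-lived keys injected into the pod's environment from a
	// Kubernetes Secret -- the pattern the comment wants to avoid.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"), // hypothetical region
		Credentials: credentials.NewStaticCredentials(
			os.Getenv("AWS_ACCESS_KEY_ID"),
			os.Getenv("AWS_SECRET_ACCESS_KEY"),
			"", // no session token for long-lived keys
		),
	}))
	_ = redshift.New(sess) // every call now rides on those static keys
}
```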

@emmanuel

@rafaljanicki I'm not sure what is planned for Kubernetes, but the best approach I've seen is here: https://github.com/dump247/docker-ec2-metadata

That runs as a local proxy on each node that inspects each container's metadata and assumes different IAM roles according to that metadata. There would be some work to iron out the details, but I believe this approach could even provide better security (per-container IAM roles) than AWS's own ECS does at the moment (based on my brief read of the ECS docs).
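
A minimal sketch of that proxy idea in Go, with the IP-to-role table and the listen port as stand-ins for what ec2metaproxy actually does:

```go
package main

import (
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// roleForIP stands in for the real lookup: ec2metaproxy matches the
// caller's IP against Docker's container list and reads its IAM_ROLE.
func roleForIP(ip string) (string, bool) {
	table := map[string]string{ // hypothetical static table
		"172.17.0.2": "arn:aws:iam::123456789012:role/app-role",
	}
	role, ok := table[ip]
	return role, ok
}

func main() {
	metadataURL, _ := url.Parse("http://169.254.169.254")
	upstream := httputil.NewSingleHostReverseProxy(metadataURL)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		if strings.HasPrefix(r.URL.Path, "/latest/meta-data/iam/") {
			role, ok := roleForIP(ip)
			if !ok {
				http.Error(w, "no role for "+ip, http.StatusNotFound)
				return
			}
			// A real proxy would call sts:AssumeRole for this role and
			// render the temporary credentials in the metadata-service
			// format (see the AssumeRole sketch further down the thread).
			w.Write([]byte(role))
			return
		}
		upstream.ServeHTTP(w, r) // everything else passes through untouched
	})
	// iptables on the node would DNAT 169.254.169.254:80 to this port.
	http.ListenAndServe("127.0.0.1:18000", nil)
}
```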

@rafaljanicki

@emmanuel thanks for the info. That solution is indeed the one I'd like to have in Kubernetes, as it's the closest to the ideal (I guess). However, in its current state it looks a bit too hacky to use outside a dev environment.

And I agree, ECS doesn't provide a good solution either.

@therc (Member) commented Feb 15, 2016

The artist formerly known as docker-ec2-metadata (now ec2metaproxy) could be turned into a library and linked into kube-proxy (or the kubelet?). The main limitation is that it looks up the other end's IP address and then matches it against Docker's list of containers, but for network=host containers that's not going to work. Maybe in those cases it could look up the connection in /proc/net/tcp* instead and match that against the /proc/<pid>/fd/ symlinks, like lsof does. That's expensive, unless there's a way in iptables to munge the source IP/port to reduce the search space... loopback has this whole 127.0.0.0/8 range, after all. I'm not going to propose LD_PRELOAD or similar hacks. :-)
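
A minimal sketch of that lsof-style lookup in Go, assuming the peer's source port is already known (hard-coded in hex here): find the socket inode in /proc/net/tcp, then scan /proc/<pid>/fd for the matching socket symlink.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// inodeForLocalPort scans /proc/net/tcp for a connection whose local
// (source) port matches and returns the socket inode. Addresses there
// are hex: "0100007F:C350" is 127.0.0.1:50000.
func inodeForLocalPort(portHex string) (string, bool) {
	f, err := os.Open("/proc/net/tcp")
	if err != nil {
		return "", false
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	s.Scan() // skip the header line
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) > 9 && strings.HasSuffix(fields[1], ":"+portHex) {
			return fields[9], true // field 9 holds the inode
		}
	}
	return "", false
}

// pidForInode walks /proc/<pid>/fd the way lsof does, looking for a
// symlink whose target is "socket:[<inode>]". This full scan is the
// expense mentioned above.
func pidForInode(inode string) (string, bool) {
	target := fmt.Sprintf("socket:[%s]", inode)
	fds, _ := filepath.Glob("/proc/[0-9]*/fd/*")
	for _, fd := range fds {
		if link, err := os.Readlink(fd); err == nil && link == target {
			return strings.Split(fd, "/")[2], true // the <pid> path element
		}
	}
	return "", false
}

func main() {
	if inode, ok := inodeForLocalPort("C350"); ok { // hypothetical port 50000
		if pid, ok := pidForInode(inode); ok {
			fmt.Println("connection belongs to pid", pid)
		}
	}
}
```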

There would also need to be cooperation from the kubelet, to validate the contents of IAM_ROLE/IAM_POLICY, if not rewrite them altogether based on, e.g., PodSecurityPolicy. Such a proxy setup might help with the converse problem of locking down the existing policies, #11936. And if generic enough, it could help with the equivalent GCE issue, #8867.

@jtblin (Contributor) commented Jul 6, 2016

If you're interested, I created https://github.com/jtblin/kube2iam, inspired by the original kube2sky and using similar APIs, which sets temporary AWS credentials using sts:assumeRole based on pod annotations. We've been using it in our system for quite some time without any issues.
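
At its core that approach is a single sts:AssumeRole call producing short-lived credentials for the proxy to serve back. A minimal sketch with the AWS SDK for Go, with a hypothetical role ARN standing in for whatever a pod's annotation names:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := sts.New(sess)

	// The role ARN would come from the pod's annotation; hypothetical here.
	out, err := svc.AssumeRole(&sts.AssumeRoleInput{
		RoleArn:         aws.String("arn:aws:iam::123456789012:role/pod-role"),
		RoleSessionName: aws.String("kube2iam-style-session"),
		DurationSeconds: aws.Int64(900), // short-lived; the proxy refreshes
	})
	if err != nil {
		log.Fatal(err)
	}
	// These values are what the metadata endpoint serves back to the pod.
	c := out.Credentials
	fmt.Println(*c.AccessKeyId, *c.SecretAccessKey, *c.SessionToken, c.Expiration)
}
```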

@evie404 (Contributor) commented Sep 16, 2016

There's a caveat with https://github.com/lyft/metadataproxy and similarly https://github.com/dump247/ec2metaproxy. Those proxies use the Docker networking model, where each container has its own IP, which is different from Kubernetes pods.

Suppose I create a pod with an nginx container that has the environment variable IAM_ROLE=fooRole. Kubernetes will create two containers: nginx, with IAM_ROLE=fooRole and no IP address of its own, and a kubernetes/pause:go container, which holds the pod IP address.

With metadataproxy, which I tried (and I presume the same holds for ec2metaproxy, since it works on the same principle), the proxy is able to resolve the pod IP to a container, but the container it resolves to is the pause container that holds the network namespace of the pod. Since the IAM_ROLE environment variable is not present on the pause container, the proxy fails to assign the correct IAM role.

Alternatively, kube2iam uses the Kubernetes API to map IPs to Pods, and Pod annotations to map Pods to IAM roles, so it does not suffer from the problem above. Additionally, it has no dependency on the Docker API, so it also works with rkt and other container engines.
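
A minimal sketch of that API-based mapping with client-go; the iam.amazonaws.com/role annotation key is the one kube2iam uses, and the example pod IP is hypothetical. A production version would use an informer rather than listing every pod per request:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// roleForPodIP resolves a caller IP to a pod via the API server, then
// reads the IAM role from a pod annotation. No Docker API is involved,
// so the pause-container indirection never comes into play.
func roleForPodIP(clientset kubernetes.Interface, ip string) (string, error) {
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, p := range pods.Items {
		if p.Status.PodIP == ip {
			if role, ok := p.Annotations["iam.amazonaws.com/role"]; ok {
				return role, nil
			}
			return "", fmt.Errorf("pod %s has no role annotation", p.Name)
		}
	}
	return "", fmt.Errorf("no pod with IP %s", ip)
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	role, err := roleForPodIP(clientset, "10.244.1.7") // hypothetical pod IP
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("assume role:", role)
}
```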

@k8s-github-robot added the needs-sig label on May 31, 2017
@0xmichalis (Contributor)

/sig aws

@k8s-github-robot removed the needs-sig label on Jun 9, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 26, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 25, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
