Use cases for kubectl port-forward #25113

Closed
yujuhong opened this issue May 4, 2016 · 28 comments
Assignees
Labels
area/api Indicates an issue on api area. area/extensibility area/kubelet-api sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@yujuhong
Contributor

yujuhong commented May 4, 2016

As we are figuring out the next-version container runtime interface, I'd like to gather some feedback on existing features we support.

The description of kubectl port-forward is here.
What are the existing use cases? Is this feature mainly used for debugging?

/cc @kubernetes/sig-node
/cc @thockin @vishh

@yujuhong yujuhong added area/api Indicates an issue on api area. sig/node Categorizes an issue or PR as relevant to SIG Node. team/api labels May 4, 2016
@vishh
Contributor

vishh commented May 4, 2016

cc @kubernetes/rh-cluster-infra

@ncdc
Member

ncdc commented May 4, 2016

This is essentially a debugging tool. If you need to connect to a port in your pod and you don't ever want to expose that port via a service or via ingress, you'll need to use port forwarding to get temporary access to that port. One good example is connecting to the JVM debugging port for remote debugging of Java processes.

@yifan-gu
Contributor

yifan-gu commented May 4, 2016

We should be able to achieve the same by using kubectl exec to run socat inside the pod's namespace. That requires socat in the container image, but on the other hand, we need socat on the host today.

@euank @sjpotter actually found that port-forward is broken for rkt today: the pod PID returned by the rkt API service is not PID 1 of the pod (it has changed to the PID of nspawn/kvm), so it no longer holds the namespace.
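
For concreteness, a rough sketch of what the socat-based alternative could boil down to: kubectl exec streams stdin/stdout through the apiserver, and socat inside the pod bridges that stream to the target port. The helper below is hypothetical and illustrative only (function name and shape invented here, not actual kubelet code):

```python
def socat_forward_cmd(pod: str, remote_port: int) -> list[str]:
    """Build a (hypothetical) command line for forwarding one connection:
    kubectl exec bridges stdin/stdout, socat splices them to the pod port."""
    return [
        "kubectl", "exec", "-i", pod, "--",
        "socat", "-", f"TCP4:127.0.0.1:{remote_port}",
    ]
```

This handles a single connection per exec; the client side would still need to listen locally and spawn one such exec per accepted connection.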

@yujuhong
Contributor Author

yujuhong commented May 4, 2016

We should be able to achieve the same by using kubectl exec to run socat inside the pod's namespace

Yes, supporting this for normal Linux containers should not be a problem. This, however, might be a problem for hypervisors and windows? AFAIK, hyper doesn't support this (@feiskyer, correct me if I am wrong).

This is a "pod-level" feature that is not tied to a specific container, so it'd be good to figure out how and whether we are going to support it as part of the sandbox operation proposed in the doc.

@ncdc
Member

ncdc commented May 4, 2016

We should be able to achieve the same by using kubectl exec to run socat inside the pod's namespace

This would make for a much less friendly UX. Also, kubectl port-forward and the associated code in the kubelet already have all the plumbing necessary to listen on local ports on the user's system and to copy data back and forth between the user's local ports and the ports in the pod.

I know our users would be unhappy if we suggested that port forwarding was going away.

cc @smarterclayton @pweil- @bparees @eparis

@smarterclayton
Contributor

This is a very common way to debug and diagnose both development and production features.

I would love to have a more generalized framework for executing in-container helpers (injected static binaries that can evolve differently from the container runtime), especially to decouple them from the kubelet.

@smarterclayton
Contributor

s/features/applications/

@feiskyer
Member

feiskyer commented May 4, 2016

Yes, supporting this for normal Linux containers should not be a problem. This, however, might be a problem for hypervisors and windows? AFAIK, hyper doesn't support this (@feiskyer, correct me if I am wrong).

@yujuhong Right, Hyper doesn't support port-forward.

This is a very common way to debug and diagnose both development and production features.

@smarterclayton Could we achieve this by some wrapping on ExecInContainer?

@ncdc
Member

ncdc commented May 4, 2016

Could we achieve this by some wrapping on ExecInContainer?

It's not that simple. You have to bind to local ports so you can accept connections from local clients, and then you need to get the data copied between the client and the pod. Right now we do client -> kubectl port-forward ports -> apiserver -> kubelet -> pod's network namespace via socat. There is a lot more going on here than just "exec"ing something 😄
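
Stripped of the apiserver/kubelet hops, the client-side plumbing described here is essentially a TCP splice: bind a local port, accept a client, and copy bytes in both directions to the remote end. A minimal illustrative sketch in Python (the real implementation multiplexes these streams over the connection to the apiserver rather than opening raw sockets):

```python
import socket
import threading

def _pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one direction until EOF, then half-close the destination.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def serve_forward(listener: socket.socket, target: tuple[str, int]) -> None:
    # Accept one local client and splice it to the target, both directions.
    client, _ = listener.accept()
    upstream = socket.create_connection(target)
    t1 = threading.Thread(target=_pipe, args=(client, upstream))
    t2 = threading.Thread(target=_pipe, args=(upstream, client))
    t1.start(); t2.start()
    t1.join(); t2.join()
    client.close(); upstream.close()
```

Each accepted connection needs two copy loops because data flows independently in both directions; the half-close (`shutdown`) lets one direction finish while the other drains.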

@smarterclayton
Contributor

The ideal would be "client -> api server -> kubelet -> launch process in container context -> redirect from kubelet to listening process in container context -> serve back to client", allowing the kubelet to offload any communication traffic.

The "standard process" could be a process inside the VM or out, just as long as there was a way to execute it. I don't think we should go lowest common denominator, though, and say "we'll only support things that only work inside of VMs" (which I don't necessarily think you are saying, but it is important to keep in mind). The container runtime being able to say "I don't support port-forward" gracefully is a good first step.
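
The "gracefully decline" idea could be as small as a capability set checked at the interface boundary. A hypothetical sketch (all names invented here, not a proposal for the actual interface):

```python
class UnsupportedFeature(Exception):
    """Raised when a runtime does not implement an optional feature."""

class ContainerRuntime:
    # Optional features this runtime implements; subclasses override.
    features: frozenset = frozenset()

    def port_forward(self, pod_id: str, port: int) -> None:
        if "port-forward" not in self.features:
            raise UnsupportedFeature(
                f"{type(self).__name__} does not support port-forward"
            )
        # ... real stream setup would go here ...

class LinuxRuntime(ContainerRuntime):
    features = frozenset({"exec", "port-forward"})

class HyperRuntime(ContainerRuntime):
    features = frozenset({"exec"})  # no port-forward, per the thread above
```

The kubelet could then surface a clear "unsupported by this runtime" error instead of failing obscurely partway through stream setup.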


@euank
Contributor

euank commented May 4, 2016

It can also be used as a means to run one-off deployment/migration/analytics jobs against a service that should not generally be exposed (e.g. for security reasons).

You might argue that it's better for these jobs to run on Kubernetes, not on a developer's machine, but I don't think that level of overhead is always warranted/desired, and it certainly can't replace interactive sessions (e.g. poking around in SQL Workbench).

Those sorts of use cases might also fall under "debugging" in your book, and could usually also be solved with ssh port forwarding, but I don't think that means Kubernetes shouldn't know how to do this.

@yujuhong
Contributor Author

yujuhong commented May 9, 2016

I think there are a few questions mixed in this discussion:

  1. Should we allow container runtimes to selectively support Kubernetes features? If so, do we expose the supported features for each runtime, and how? We should set a general rule on this.
  2. Whether we want to keep the port forwarding feature or not. The general feedback seems to be that the feature is good for debugging and is valuable to keep.
  3. How Hyper is going to support port-forwarding if it wants to. @feiskyer, do you have a more concrete idea of how/whether to proceed for hyper?
  4. How we can define port forwarding in the new container runtime interface. Does the interface need to speak network namespace?

@vishh
Contributor

vishh commented May 9, 2016

How we can define port forwarding in the new container runtime interface. Does the interface need to speak network namespace?

FYI: @smarterclayton is suggesting a runtime agnostic solution in #25113 (comment)

@dchen1107
Member

+1 on @smarterclayton's solution at #25113 (comment)

@smarterclayton
Contributor

In practice we have all the tools to do that today. Maybe something we queue up for 1.4 to try and do port-forward that way to start, then exec, then potentially attach.


@yujuhong
Contributor Author

yujuhong commented May 9, 2016

Alright, so the plan is to drop this from the container runtime interface/API and implement the functionality in the kubelet.

Thanks for the inputs!

@derekwaynecarr
Member

I assume we can only drop it once we validate that we can support the alternate route? For example, we can't move to the container runtime interface until we have port forwarding working via the alternate proposal?

@ncdc
Member

ncdc commented Jun 7, 2016

@smarterclayton

The ideal would be "client -> api server -> kubelet -> launch process in container context -> redirect from kubelet to listening process in container context -> serve back to client", allowing the kubelet to offload any communication traffic.

How do we handle traffic from "client -> listening process in container context" - does it still go through the apiserver? Or using Ingress somehow? Otherwise I don't see how we go from an external client to a pod.

@smarterclayton
Contributor

client -> apiserver -> listening process in container context


@philips
Contributor

philips commented Jun 8, 2016

@ncdc I don't think you can or should avoid the api server because that is the access control for this one time debugging task thingie. Did you have a different design in mind?

@ncdc
Member

ncdc commented Jun 8, 2016

No, I just wanted to make sure I understood the suggested design completely.


@smarterclayton
Contributor

Yeah, the redirect would be from kubelet -> listening process, which would be handled by the internal tunnel established from the apiserver. Offloading the apiserver has been discussed, but there would still have to be something like an apiserver acting as a secure proxy. This is probably different from the "temporarily expose a port all the way to the edge" use case (like a very temporary host port or node port), which is useful for testing or verifying network connectivity but could not enforce security.


@yujuhong
Contributor Author

@ncdc @dcbw, would either of you be willing to pick this up and try exploring the suggested solution in #25113 (comment)? Thanks in advance!

@ncdc
Member

ncdc commented Jul 11, 2016

I can pick it up


@yujuhong
Contributor Author

Thanks!

@pskrzyns

cc @lukasredynk

@yujuhong
Contributor Author

This is subsumed by #29579
