
Strategy to deal with transitive dependencies on client-go <1.18 #88472

Closed
ibuildthecloud opened this issue Feb 24, 2020 · 12 comments
Assignees
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@ibuildthecloud
Contributor

v1.18.0-beta.0 introduces an API change in the generated client-go code: ctx is now required as the first parameter. As a mitigation strategy, a deprecated client has been created for people unable or unwilling to move to the new API. This works fine for code that one controls, but it does not work for transitive dependencies. Is there a strategy for handling transitive dependencies that expect the old style of API? I'm not sure how to deal with this short of forking all dependencies. Has this scenario been considered?
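
For illustration, a minimal sketch of the signature change in question (getPod and the pod/namespace names are illustrative, not from client-go itself):

package example

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// getPod shows the new call style. Before v1.18 (client-go v0.17.x and
// earlier) the generated clients were called without a context:
//     clientset.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
// From v1.18.0-beta.0 on, a context is the required first argument:
func getPod(ctx context.Context, clientset kubernetes.Interface, ns, name string) error {
    _, err := clientset.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    return err
}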

@ibuildthecloud ibuildthecloud added the kind/bug Categorizes issue or PR as related to a bug. label Feb 24, 2020
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Feb 24, 2020
@ibuildthecloud
Contributor Author

/sig api-machinery

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Feb 24, 2020
@liggitt liggitt added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Feb 25, 2020
@fedebongio
Contributor

KEP is here: kubernetes/enhancements#1503

/assign @mikedanese

@mikedanese
Member

Transitively depending on mixed versions of client-go is not supported in general. We should help major transitive dependencies update to the 1.18 client-go level as soon as possible after release. Which dependencies are important to you?

@kardianos

@mikedanese (I'm the author of the govendor tool and now use Go modules.) Honest question: why doesn't client-go issue a v2, v3, and so on for obviously breaking changes? I understand you want to keep it in line with Kubernetes releases, but there must be a way to do both at some level. Clients could then adopt new versions incrementally.

@mikedanese
Member

Last time I checked, we bumped the major version of client-go every minor release of Kubernetes:

$ git ls-remote git@github.com:kubernetes/client-go.git refs/tags/v*.0.0
0efe10d8bd8b85763b44a7761915102849a851bb        refs/tags/v10.0.0
bc5e1eb39098107b921343118e77e62cac839ca8        refs/tags/v11.0.0
94a70e8826ac56e2fa99b5035ddc9f3b73fc4b61        refs/tags/v12.0.0
e121606b0d09b2e1c467183ee46217fa85a6b672        refs/tags/v2.0.0
21300e3e11c918b8e6a70fb7293b310683d6c046        refs/tags/v3.0.0
d92e8497f71b7b4e0494e5bd204b48d34bd6f254        refs/tags/v4.0.0
28de3f306665477fd02a0cc5184d006e5f602ed4        refs/tags/v5.0.0
b6fc51ff010994ee00e7810d3066489fae8e3f43        refs/tags/v6.0.0
e2a5cd56d9f6204896868b4839711cc014aef99b        refs/tags/v7.0.0
68594d387fe163ef5299feb552c46db70873b511        refs/tags/v8.0.0
4a80625b792a4fa01843170802f03f39a64c6082        refs/tags/v9.0.0

However, it looks like that might no longer be the case after kubernetes/enhancements#1350 and kubernetes/publishing-bot#210, where we moved to v0.x.y. Semver does allow breaking changes on these upgrades:

Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

From https://semver.org/

It seems like the old tags would break go get, though? cc @nikhita and @liggitt, who know more about the specifics of the proposal.

@kardianos

@mikedanese Go modules do use semver as a guide, but the toolchain gives each major version its own import path, so only major-version bumps can be adopted gracefully. Because of that, breaking changes should bump the major version so that dependents can upgrade on their own timeline.

You can see what tags go modules recognize: https://pkg.go.dev/k8s.io/client-go/kubernetes?tab=versions

Note that v12.0.0 isn't listed because the tag's major version ("v12") doesn't match the module path, which lacks the /v12 suffix (it would need to be "module k8s.io/client-go/v12"). Versions prior to v12 show up because they predate the go.mod file and are grandfathered in as "+incompatible".
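
Purely as an illustration of what that would look like (client-go is not actually published this way), the major version would live in both the module path and the import path:

// go.mod of a hypothetical k8s.io/client-go v12 module
module k8s.io/client-go/v12

go 1.13

A consumer would then write import "k8s.io/client-go/v12/kubernetes", and v12 and a later v13 could coexist in the same build while dependents migrate at their own pace.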

Go has a good story for packages that break their API from release to release, and it lets consumers upgrade gradually. It would be awesome to be able to take advantage of that.

@ibuildthecloud
Contributor Author

If the stated answer is that there is no API guarantee, I understand. That has been expressed in the past, so I have no complaint; I accepted that risk when I chose to use the Kubernetes Go API. If there is no other guidance on how to mitigate the burden on consumers, then you can close this issue.

@Helcaraxan

Helcaraxan commented May 15, 2020

Another outsider jumping in here (sorry 😅). I have no contribution history within the k8s project, but I do have significant experience with Go modules and their workflows for both small and large Go projects.

The problem of transitive dependencies on client-go (and other k8s components) comes up very frequently in the #modules channel (and others) on the Gophers Slack. From the reported issues, as well as my own workplace experience with k8s dependencies, the core of the problem comes down to a combination of two technical points:

  1. Many k8s components introduce breaking API changes within the same major version (and thus the same Go module).
  2. The k8s project's own infrastructure and build make extremely heavy use of replace statements.

Normally the pain of 1. would be caught by k8s itself, since the builds of other components depending on the one with a breaking change would also start failing on upgrade. However, 2. means this pain is hidden from the k8s builds and instead lands directly on downstream users.

The filepath-relative replace statements between the staging directories give the k8s project a specific, local dependency graph that is extremely hard for downstream users to reproduce. Most often the only solution is for developers to add replace statements to their own modules to "pin" specific k8s versions (see the sketch below). This in turn brings a whole slew of follow-up issues and unwanted side effects.

Developers may need to do this even if they don't depend on k8s themselves but one of their transitive dependencies does: a replace pollutes a module's entire downstream ecosystem. This is why the recommended use of replace statements is limited to edge cases: temporary local development workflows and short-lived fixes of upstream breakages via a fork, until the upstream fixes the issue.
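
To make that pinning concrete, here is a minimal sketch of the kind of go.mod a downstream module often ends up with (the module name and the versions are purely illustrative):

module example.com/my-controller

go 1.13

require (
    k8s.io/api v0.17.4
    k8s.io/apimachinery v0.17.4
    k8s.io/client-go v0.17.4
)

// Pin every k8s.io module to one release so a transitive dependency
// cannot drag in a mismatched (e.g. pre- or post-1.18 API) version.
replace (
    k8s.io/api => k8s.io/api v0.17.4
    k8s.io/apimachinery => k8s.io/apimachinery v0.17.4
    k8s.io/client-go => k8s.io/client-go v0.17.4
)

Note that replace directives only apply in the main module, so any consumer of this module has to copy the block to get the same graph, which is exactly the downstream pollution described above.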

From my understanding of the k8s project's setup, there are actually some pretty decent solutions going forward if the team is interested in addressing these pain points. I could potentially even provide some engineering effort in this direction, as I already have a general workflow tool that could help; it is currently being tested with the Prometheus project to address similar issues.

NB: @liggitt I was told by @bcmills that you'd be the right person to contact in this regard, given your close involvement in the initial migration of k8s to Go modules.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 13, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 12, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
