
Rework & Simplify Kubeflow Auth #2864

Merged: 3 commits into kubeflow:master on Oct 1, 2024

Conversation

@thesuperzapper (Member) commented Aug 30, 2024

Resolves #2850

Background

Goals of Kubeflow Auth

See the 20240606-jwt-handling.md doc for more context.

But in short, the goals of Kubeflow Auth are:

  1. Kubeflow apps expect the user's "email" in the kubeflow-userid header.
  2. Kubeflow apps expect the user's "groups" in the kubeflow-groups header (future).
  3. Kubeflow can trust the user's "email" and "groups" headers implicitly, as they are only set by Istio.
  4. Users should be able to log in interactively with Dex.
  5. From outside the cluster, machines should be able to access Kubeflow APIs (e.g. KFP) with a JWT issued either by Dex (via token exchange) or by Kubernetes (ServiceAccount tokens).
  6. From inside the cluster, machines should be able to access some special Kubeflow APIs (e.g. KFP) with a Kubernetes ServiceAccount token via their Kubernetes Service (not via the ingress gateway).

Issues with Auth in 1.9.0

In Kubeflow 1.9.0 there were a significant number of changes to the way auth was implemented.
The main change was the migration from oidc-authservice to oauth2-proxy for handling authentication.

The solution we implemented in 1.9.0 had a number of issues:

  1. It did not work properly on nearly all major Kubernetes distributions (e.g. EKS, GKE, AKS, K3s, etc.):

  2. A CronJob was used to populate the JWKS into the Istio RequestAuthentication:

  3. We were needlessly verifying JWTs in multiple places:

    • The JWTs were verified in the following places:
      • When the request hit the istio-gateway Pods (via RequestAuthentication).
      • When the request was checked by oauth2-proxy (via the CUSTOM envoyExtAuthzHttp provider).
      • When the request hit any Pod (via RequestAuthentication).
    • This was a waste of resources.
    • This required that we connect both Istio AND oauth2-proxy to the Kubernetes OIDC.
    • It introduced an impossible situation because oauth2-proxy cannot retrieve the JWKS from Kubernetes on clusters that don't allow anonymous access to the API server (anonymous access is disabled on common distributions like K3s).
  4. The use of cluster-wide RequestAuthentication resources:

    • There were two RequestAuthentication resources that applied to all Pods in the cluster rather than being scoped to the istio-ingressgateway:
      • RequestAuthentication/m2m-token-issuer (for Kubernetes JWTs)
      • RequestAuthentication/dex-jwt (for Dex JWTs)
    • This broke a lot of things, including the "in-cluster" access to the KFP API.
  5. Somehow, the VirtualService for oauth2-proxy was omitted:

    • I have no idea how this was working before; the /oauth2/ HTTP path was answered by central-dashboard, not oauth2-proxy.
    • This raises a lot of questions, but either way, this is now fixed in this PR.

What does this PR change?

High-Level Overview

Here is a high-level overview of how the new auth flows work:

  • User Authentication:

    1. User attempts to access Kubeflow UI in their browser:
      • They are accessing the istio-ingressgateway pods via something like kubectl port-forward.
    2. The request hits AuthorizationPolicy/istio-ingressgateway-oauth2-proxy:
      • This policy uses a CUSTOM envoyExtAuthzHttp to verify the request with oauth2-proxy.
      • If the request has an authentication cookie, oauth2-proxy will verify it and set the Authorization header with a Dex JWT.
      • Otherwise, oauth2-proxy will redirect the user to the Dex login page.
    3. The request hits the RequestAuthentication/dex-jwt:
      • This resource only applies to traffic to the istio-ingressgateway Pods.
      • This validates that the JWT was signed by Dex and sets the kubeflow-userid and kubeflow-groups headers.
    4. The request hits the AuthorizationPolicy/istio-ingressgateway-require-jwt:
      • Because the RequestAuthentication populated the requestPrincipals metadata, the request is allowed to pass (see the resource sketch after this overview).
    5. Now in the mesh, the request is routed to the correct VirtualService:
      • The Kubeflow Pods ONLY allow connections from the Istio Gateway (with their own AuthorizationPolicies)
      • Thus, they can trust the kubeflow-userid and kubeflow-groups headers implicitly.
  • Off cluster, Machine Authentication:

    1. Machine attempts to access Kubeflow API from outside the cluster:
      • They are accessing the istio-ingressgateway pods via something like kubectl port-forward.
      • The machine sends a JWT in the Authorization header (either a Dex JWT or a Kubernetes JWT).
    2. The request skips AuthorizationPolicy/istio-ingressgateway-oauth2-proxy:
      • This policy ONLY checks requests which don't have an Authorization header.
      • So, oauth2-proxy never sees the request.
    3. The request hits the RequestAuthentication/dex-jwt or RequestAuthentication/m2m-token-issuer:
      • These resources only apply to traffic to the istio-ingressgateway Pods.
      • This unpacks the JWT and sets the kubeflow-userid and kubeflow-groups headers.
    4. The request hits the AuthorizationPolicy/istio-ingressgateway-require-jwt:
      • Because the RequestAuthentication populated the requestPrincipals metadata, the request is allowed to pass.
    5. Now in the mesh, the request is routed to the correct VirtualService:
      • The Kubeflow Pods ONLY allow connections from the Istio Gateway (with their own AuthorizationPolicies)
      • Thus, they can trust the kubeflow-userid and kubeflow-groups headers implicitly.
  • On-cluster, Machine Authentication:

    1. A Pod in the cluster attempts to access the KFP API:
      • The internal service is http://ml-pipeline-ui.kubeflow.svc.cluster.local.
      • The Pod sends a Kubernetes ServiceAccount JWT in the Authorization header.
    2. This request is unrelated to the Istio Gateway:
      • So, it skips the istio-ingressgateway and related RequestAuthentication/AuthorizationPolicy resources.
    3. The request hits the AuthorizationPolicy/ml-pipeline-ui (in the kubeflow namespace):
      • Because the request sets an Authorization header (but not a kubeflow-userid header), it is allowed to pass directly to the ml-pipeline-ui Pods.
      • The KFP backend will then verify that the JWT is valid using a TokenReview call to the Kubernetes API server.
      • Whatever access the ServiceAccount has to the KFP API is then granted.
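To make steps 3 and 4 of the flows above more concrete, here is a minimal sketch of the two key gateway-scoped resources. This is not the exact manifest from this PR; the Dex issuer URL and claim mappings are placeholders and may differ from what the manifests actually ship:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: dex-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway          # scoped to the gateway Pods, not the whole mesh
  jwtRules:
    - issuer: http://dex.auth.svc.cluster.local:5556/dex   # placeholder Dex issuer
      forwardOriginalToken: true
      outputClaimToHeaders:
        - header: kubeflow-userid        # Kubeflow apps read the user's email from this header
          claim: email
        - header: kubeflow-groups        # and the user's groups from this one
          claim: groups
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: istio-ingressgateway-require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
    - from:
        - source:
            requestPrincipals: ["*"]     # only requests whose JWT was validated above

Because both resources select only the istio-ingressgateway Pods, they no longer apply to in-cluster traffic, which is what previously broke "in-cluster" access to the KFP API.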

New Kustomize Overlays

To enable the above flows, there are three new overlays (only one of which should be applied at a time; see the example snippet after this list):

  • ./common/oauth2-proxy/overlays/m2m-dex-and-kind:
    • This overlay allows machines to use Kubernetes JWTs OR Dex JWTs to access the Istio Gateway from outside the cluster.
    • This overlay is only compatible with Kubernetes distributions that serve valid JWKS keys at /openid/v1/jwks (e.g. not EKS).
    • This overlay is the default overlay in example/kustomization.yaml.
  • ./common/oauth2-proxy/overlays/m2m-dex-only:
    • This overlay only allows machines to use Dex JWTs to access the Istio Gateway from outside the cluster.
    • This overlay is compatible with all Kubernetes distributions.
  • ./common/oauth2-proxy/overlays/m2m-dex-and-eks:
    • This overlay allows machines to use Kubernetes JWTs OR Dex JWTs to access the Istio Gateway from outside the cluster.
    • The user must manually set the correct JWT issuer for their EKS cluster in the kustomization.yaml.
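As a rough illustration of how one of these overlays is selected (the paths follow this PR's description, but the exact contents of example/kustomization.yaml may differ), the relevant snippet looks something like this:

resources:
  # ... other Kubeflow components ...
  - ../common/oauth2-proxy/overlays/m2m-dex-and-kind
  # Alternatives (use exactly one of the three overlays):
  # - ../common/oauth2-proxy/overlays/m2m-dex-only      # works on all distributions
  # - ../common/oauth2-proxy/overlays/m2m-dex-and-eks   # set your EKS cluster's JWT issuer in that overlay's kustomization.yaml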

About the m2m-dex-and-kind Overlay

As discussed in #2850, rather than using a CronJob/kubeflow-m2m-oidc-configurator to populate the JWKS into the Istio RequestAuthentication, we now allow Istio to directly access the /openid/v1/jwks endpoint on the Kubernetes API server.

Because this endpoint may not be accessible anonymously on all Kubernetes distributions, we have created a Deployment/cluster-jwks-proxy, which uses kubectl proxy to make the JWKS keys available without authentication at http://cluster-jwks-proxy.istio-system.svc.cluster.local/openid/v1/jwks.

This is only necessary for Kubernetes distributions that do not allow anonymous access to the https://kubernetes.default.svc.cluster.local/openid/v1/jwks endpoint. However, we include it by default to maximise the number of clusters the default manifests work on.
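For illustration only, here is a minimal sketch of what such a kubectl proxy Deployment could look like; the image, ServiceAccount name, and flags are assumptions rather than the exact manifest shipped in this PR:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-jwks-proxy
  namespace: istio-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-jwks-proxy
  template:
    metadata:
      labels:
        app: cluster-jwks-proxy
    spec:
      serviceAccountName: cluster-jwks-proxy   # assumed SA allowed to read the JWKS endpoint
      containers:
        - name: kubectl-proxy
          image: bitnami/kubectl:1.30          # placeholder image that provides the kubectl binary
          command:
            - kubectl
            - proxy
            - --address=0.0.0.0
            - --port=8080
            - --accept-hosts=^.*$              # allow requests arriving via the Service hostname
            - --accept-paths=^/openid/v1/jwks$ # only expose the JWKS path
          ports:
            - containerPort: 8080

A Service named cluster-jwks-proxy in istio-system, mapping port 80 to the container's port 8080, would then serve the JWKS at the URL above without requiring the caller to authenticate.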

Removal of unneeded resources

The following resources have been removed because they were no longer needed:

  • EnvoyFilter/x-forwarded-host:
    • This never did anything, as it was not correctly applied to the ingress gateway.
  • CronJob/kubeflow-m2m-oidc-configurator:
    • We no longer need this because Deployment/cluster-jwks-proxy now provides anonymous access to the cluster JWKS (for kind clusters).
    • We also removed the associated RBAC resources.
  • There were a bunch of unused params.yaml files in the Kustomize bases; these have been removed.

Test Fixes

The following changes to the tests have been made:

Other Notes

Update Notes

Users updating from 1.9.0 to 1.9.1 will need to manually remove the resources that are no longer needed:

  • CronJob/kubeflow-m2m-oidc-configurator (namespace: istio-system):
    • ServiceAccount/kubeflow-m2m-oidc-configurator
    • Role/kubeflow-m2m-oidc-configurator
    • RoleBinding/kubeflow-m2m-oidc-configurator
  • EnvoyFilter/x-forwarded-host

Custom Patches

We have made a few custom patches to the upstream folder, under apps/pipeline/upstream/base/installs/multi-user/istio-authorization-config.yaml. We should try to upstream these changes to the Kubeflow Pipelines repo, so we don't need to maintain them in the future.

Off-Cluster Access with Kubernetes JWTs

I am not a fan of allowing Kubernetes JWTs to be used from outside the cluster, as this requires exfiltrating the cluster JWTs. However, I understand that some users may want to do this, so the default m2m-dex-and-kind overlay allows this.

The m2m-dex-and-kind overlay will only work on K8S distributions whose cluster API serves valid JWKS keys at /openid/v1/jwks (e.g. not EKS). However, if someone applies the m2m-dex-and-kind overlay on an incompatible cluster, everything should still work, except for the ability to use K8s JWTs from outside the cluster.
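For context, this is roughly the shape of the RequestAuthentication that the m2m overlays add for Kubernetes JWTs. This is a hedged sketch, not the exact manifest: the issuer shown is the typical value on kind clusters, while on EKS it would instead be the cluster's OIDC issuer URL:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: m2m-token-issuer
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  jwtRules:
    - issuer: https://kubernetes.default.svc.cluster.local   # placeholder; the cluster issuer differs per distribution
      jwksUri: http://cluster-jwks-proxy.istio-system.svc.cluster.local/openid/v1/jwks
      forwardOriginalToken: true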

@thesuperzapper (Member, Author):

@juliusvonkohout @kimwnasptd here is my proposed solution for the JWT issue.

google-oss-prow bot added the size/XXL label and removed the size/M label on Sep 2, 2024
thesuperzapper changed the title from "Disconnect Istio from Kubernetes JWTs" to "Rework & Simplify Kubeflow Auth" on Sep 2, 2024
@thesuperzapper (Member, Author):

/assign @juliusvonkohout @kimwnasptd

I want to get these changes in for 1.9.1, so would greatly appreciate your review as soon as possible.

Otherwise, we will keep getting reports of things not working on very common kubernetes distributions like EKS.

@b4sus commented Sep 3, 2024

Hey @thesuperzapper,
this is a great PR. I am testing it on the fly on our Rancher cluster and it seems to work (the dex-only overlay): dex login, running pipelines, notebooks. I have a minor issue though (not sure if it is our setup or related to the manifests) with logout in the dashboard.
When login is done, the oauth2_proxy_kubeflow cookie gets created and everything is fine. When I log out, /oauth2/sign_out is called, removing the cookie. So far so good. Every subsequent request (let's say clicking on pipelines) redirects to /dex/auth?... This, however, doesn't offer me the chance to log in (and get the oauth2_proxy_kubeflow cookie), but returns 200 with an oauth2_proxy_kubeflow_csrf cookie. Every subsequent request does basically the same, so I have to clear all cookies to get to the dex login page.
Do you think this could be related to this PR, or is it likely a problem on my side?

@juliusvonkohout (Member) commented Sep 3, 2024

"From inside the cluster, machines should be able to access some Kubeflow APIs (e.g. KFP) with a Kubernetes ServiceAccount token."

Should be
"From inside the cluster, machines should be able to access ALL Kubeflow APIs (e.g. KFP) with a Kubernetes ServiceAccount token"

Because from inside the cluster you can also talk to the ingressgateway and do everything that you can do via the UI/API.

I adjusted your text slightly there.

@juliusvonkohout (Member) commented Sep 3, 2024

Upstreaming apps/pipeline/upstream/base/installs/multi-user/istio-authorization-config.yaml is tracked in #2804, and we welcome PRs.

@juliusvonkohout (Member) commented Sep 3, 2024

I think we should move away from this special KFP way and always go through the ingressgateway in the future, no matter whether you are inside or outside of the cluster.

Here is a verbose draft.

# Shell (e.g. in a CI step): create a ServiceAccount token and write it to a file.
export KF_PIPELINES_TOKEN_PATH=$(pwd)/kf_pipelines_token.yml
echo "${{ secrets.KUBECONFIG }}" > $(pwd)/kubeconfig.yaml
export KUBECONFIG=$(pwd)/kubeconfig.yaml
TOKEN=$(kubectl create token default-editor -n $KF_PROJECT_NAMESPACE --duration $KF_K8S_API_SERVER_TOKEN_EXPIRATION --audience istio-ingressgateway.istio-system.svc.cluster.local)
echo -n $TOKEN > $(pwd)/kf_pipelines_token.yml
python submit_run.py

# Python (submit_run.py): read the token and pass it to the KFP client.
import os
import pathlib
import kfp

auth_token = pathlib.Path(os.environ["KF_PIPELINES_TOKEN_PATH"]).read_text()
KF_ISTIO_INGRESSGATEWAY_URL = os.environ.get("KF_ISTIO_INGRESSGATEWAY_URL")
kfp_client = kfp.Client(host=KF_ISTIO_INGRESSGATEWAY_URL, existing_token=auth_token)

This was tested on 1.8.1, and for 1.9 we can probably even drop the audience when creating the token. So we could just use the default token from, e.g., a JupyterLab, and we would not need the PodDefaults for this special KFP token anymore, as described here: https://www.kubeflow.org/docs/components/pipelines/user-guides/core-functions/connect-api/#serviceaccount-token-volume.

@thesuperzapper (Member, Author):

@b4sus thanks for reminding me; I found a similar issue when I implemented oauth2-proxy in deployKF.

I have added 939010b, which enables the "sign in" screen for oauth2-proxy, meaning that users have to explicitly "start" the auth flow, rather than being redirected in the background (and accumulating a CSRF cookie for each request if the page is already open).

[Screenshot: the new oauth2-proxy "sign in" screen, 2024-09-03]

@juliusvonkohout (Member) commented Sep 3, 2024

Can we document or even test in a GitHub Actions workflow that "From outside the cluster, machines should be able to access Kubeflow APIs with a JWT issued by Dex (via token exchange)" somehow?

@thesuperzapper (Member, Author):

I think we should move away from this special KFP way and always go through the ingressgateway in the future, no matter whether you are inside or outside of the cluster.

@juliusvonkohout this would be a breaking change, and so cannot be included in 1.9.1. Also, this would need to be a decision for the KFP maintainers, because it's a long-standing feature and most/all KFP users currently depend on it.

Personally, I see no benefit to removing this functionality; it only makes things more complex for users who don't want to allow Kubernetes JWTs to be used from outside the cluster, while also making it harder for people to migrate from the 'Standalone' KFP to the 'Kubeflow Platform' one.

This was tested on 1.8.1, and for 1.9 we can probably even drop the audience when creating the token. So we could just use the default token from, e.g., a JupyterLab, and we would not need the PodDefaults for this special KFP token anymore, as described here: https://www.kubeflow.org/docs/components/pipelines/user-guides/core-functions/connect-api/#serviceaccount-token-volume.

This feature is part of KFP for a reason; removing this check would be a security risk, as it reduces the "isolation" of the JWTs, which both keeps these tokens from being useful outside the cluster and prevents arbitrary SA tokens from being used.

Also, there is already a TOKEN_REVIEW_AUDIENCE env-var on the ml-pipeline Pods if the user wants to remove this check (and set it to their cluster issuer, which, as we have found, is not consistent across distributions).
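For reference, a hypothetical sketch of what overriding that env-var on the ml-pipeline Deployment might look like (the value is purely illustrative; the audience carried by default ServiceAccount tokens differs per distribution):

        env:
          - name: TOKEN_REVIEW_AUDIENCE
            value: https://kubernetes.default.svc.cluster.local   # example only: your cluster's default token audience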

@juliusvonkohout (Member) commented Sep 3, 2024

I went through the 50 files on my phone, so I might have missed things, but in general it looks like a good compromise and it significantly improves the documentation. Maybe you should extend Kimonas' proposal with some details from your initial post.

# Dex default cookie expiration is 24h.
# If set to 168h (default oauth2-proxy), Istio will not be able to use the JWT after 24h,
# but oauth2-proxy will still consider the cookie valid.
# It's possible to configure the JWT Refresh Token to enable longer login session.
@juliusvonkohout (Member) commented Sep 3, 2024:

More information about where to change the JWT refresh token would be useful. Setting the expiration to 7 days or so for all the components involved (Istio, Dex, oauth2-proxy) is often requested by users.

@thesuperzapper (Member, Author) replied:

Please do this in a separate PR; it's not related to this change.

@kimwnasptd (Member):

Switching for the rest of the week to also help with the review of the PR. I also took a look at the issue.

@thesuperzapper this looks amazing! With a quick look, I really like the approach of distinguishing the scenarios and, for the cloud/vendor-specific ones, only giving some basic guidance.

Will provide some more comments in the following hours/days.

@thesuperzapper (Member, Author) commented Sep 3, 2024

"From inside the cluster, machines should be able to access some Kubeflow APIs (e.g. KFP) with a Kubernetes ServiceAccount token."

Should be "From inside the cluster, machines should be able to access ALL Kubeflow APIs (e.g. KFP) with a Kubernetes ServiceAccount token"

@juliusvonkohout I have reverted your change because, as I was saying in #2864 (comment), we need to support in-cluster access to some of these special API endpoints via their Kubernetes Service, not the ingress-gateway.

@juliusvonkohout (Member) commented Sep 4, 2024

@thesuperzapper regarding #2864 (comment)

It seems that committing your suggestion by clicking on the button there breaks the DCO because it does not properly sign the commit. Feel free to just overwrite it and commit yourself.

@thesuperzapper (Member, Author):

@thesuperzapper regarding #2864 (comment)

It seems that committing your suggestion by clicking on the button there breaks the DCO because it does not properly sign the commit. Feel free to just overwrite it and commit yourself.

@juliusvonkohout in general, please don't commit to others' PRs unless they are unresponsive; it's akin to "stealing" the PR, as GitHub will attribute the commit to you as well. I have removed your commit and removed your access to push to my branch.

@juliusvonkohout (Member) commented Sep 4, 2024

@thesuperzapper regarding #2864 (comment)

It seems that committing your suggestion by clicking on the button there breaks the DCO because it does not properly sign the commit. Feel free to just overwrite it and commit yourself.

@juliusvonkohout in general, please don't commit to others' PRs unless they are unresponsive; it's akin to "stealing" the PR, as GitHub will attribute the commit to you as well. I have removed your commit and removed your access to push to my branch.

The missing sign-off looks more like a GitHub bug. If you explicitly create a button (based on my input to just remove the word "to") for others to click on and commit, and explicitly enable "edit from maintainers" in your PR, then this is expected behaviour and a formal invitation. These settings, buttons, and suggestions are all in your control in the end.

The same would apply if I create a PR in Kubeflow/kubeflow and do the same things. Then it would be my responsibility if I enable it and make an explicit suggestion for you to commit.

So for me it really does not matter and I welcome commits from others in general to my branches as allowed by my own PR settings. I even invite some people to my forks sometimes to collaborate.

But I also have no emotional ego investment here and just want to get this reviewed and merged. Since it is your PR it is just your decision and I am totally fine with it.

But back to the important things: I think with a few more hours of addressing the remaining comments and maybe some additional tests and documentation additions we can get this over the finish line within September.

@thesuperzapper (Member, Author):

@juliusvonkohout can you give a specific list of things you need me to change, or do you just need time to review?

I am waiting on @kimwnasptd to give a review so we can get this merged.

I am pretty confident (especially given the passing tests) that this PR works, so I am happy to merge it (so that master works for people) and then update docs in a follow-up PR, if there are any remaining things to document.

@juliusvonkohout (Member) commented Sep 5, 2024

@juliusvonkohout can you give a specific list of things you need me to change, or do you just need time to review?

Yes, let me provide a list.

Some comments/conversations have been answered/resolved, but some are still open:

  • "Can we document or even test in a GitHub action workflow "From outside the cluster, machines should be able to access Kubeflow APIswith a JWT issued by Dex (via token exchange)" somehow? Core functionality without tests can easily regress.
  • Rework & Simplify Kubeflow Auth #2864 (comment) is still open. I know its trivial, but we should fix it here and close all conversations before merging.
  • Rework & Simplify Kubeflow Auth #2864 (comment) "Here we should probably add a sentence to explain that there are also other m2m issuers available and link to other files/documentation as mentioned in Rework & Simplify Kubeflow Auth #2864 (comment)" is still open and should be low effort
  • Rework & Simplify Kubeflow Auth #2864 (comment) "more information about where to change the jwt refresh token would be useful. Setting the expiration to 7 days or so for all components involved (Istio, Dex, oauth2-proxy) is often requested by users." is still open and might be very useful

Probably all of this can be addressed in an hour or so. Nobody is expecting perfection there, just rough guidance.

I am waiting on @kimwnasptd to give a review so we can get this merged.

I need to do a second run through all the files, focusing more on the architecture than on the 50+ individual files. I plan to get this done in the next week. By then you might also get feedback from Kimonas. But if the stuff above is addressed and Kimonas is fine with the architectural details, that might also be enough.

I am pretty confident (especially given the passing tests) that this PR works, so I am happy to merge it (so that master works for people) and then update docs in a follow-up PR, if there are any remaining things to document.

Yes, we now have so many (and more extensive) tests that they are a strong indicator. Given the 1-2 hour effort mentioned above, it probably makes sense to address those small things in this PR. That would increase the probability of getting this merged next week.

@kromanow94 (Contributor):

Somehow, the VirtualService for oauth2-proxy was omitted:

  • I have no idea how this was working before; the /oauth2/ HTTP path was answered by central-dashboard, not oauth2-proxy.
  • This raises a lot of questions, but either way, this is now fixed in this PR.

Istio should do the magic here, managing this route out-of-the-box with envoyExtAuthzHttp. It's interesting that adding this VirtualService was required here. In my case it works without this VirtualService for clusters deployed with kind, vcluster, and EKS. We also don't define any specific /oauth2 rule in our AWS ALB. Maybe the networking configuration changes some Istio mechanism which now requires setting this route explicitly (but that's just my guess).

A CronJob was used to populate the JWKS into the Istio RequestAuthentication:

As mentioned, please mind that its intention was only for self-signed issuers on kind, vcluster, and so on. Including this mechanism for deployments like EKS, Azure, and others with an OIDC issuer served behind publicly trusted certificates was not only unneeded but also the real source of issues.

I had a look at this PR and from my perspective it looks fine; I just have this one doubt about the VirtualService.

I asked @MaxKavun from my team to verify this PR in EKS Cluster.

@MaxKavun, can you also check if the setup works without the VirtualService/oauth2-proxy?

@tarekabouzeid (Member):

Hi,

Quick update regarding testing: I have tested this PR by connecting to KFP from a notebook in a Rancher cluster "RKE - k8s v1.28.12".

Also tested in my local cluster "Kind v1.31.0"; both worked fine.

from kfp import dsl
from kfp.client import Client
client = Client()
print(client.list_experiments(namespace="kubeflow-user-xxx"))

Result:

{'experiments': None, 'next_page_token': None, 'total_size': None}

In my other cluster, which contains more experiments, that also worked and returned experiment metadata; earlier I was getting a 403 error.

Thanks

@juliusvonkohout (Member) commented Sep 9, 2024

Hi,

Quick update regarding testing: I have tested this PR by connecting to KFP from a notebook in a Rancher cluster "RKE - k8s v1.28.12".

Also tested in my local cluster "Kind v1.31.0"; both worked fine.

from kfp import dsl
from kfp.client import Client
client = Client()
print(client.list_experiments(namespace="kubeflow-user-xxx"))

Result:

{'experiments': None, 'next_page_token': None, 'total_size': None}

In my other cluster, which contains more experiments, that also worked and returned experiment metadata; earlier I was getting a 403 error.

Thanks

Are m2m token tests such as https://github.com/kubeflow/manifests/blob/master/.github/workflows/notebook_controller_m2m_test.yaml, https://github.com/kubeflow/manifests/blob/master/.github/workflows/kserve_m2m_test.yaml and https://github.com/kubeflow/manifests/blob/master/.github/workflows/pipeline_test.yaml also working on rancher?

@thesuperzapper (Member, Author):

@juliusvonkohout @kromanow94 don't bother testing without the oauth2-proxy VirtualService, because even if Istio was doing something strange with regard to the /oauth2/callback URL, it would not have been proxying the sign-in page which I discussed in #2864 (comment).

That page is needed to ensure that background requests don't all initiate a login flow and flood the browser with CSRF cookies when the current auth cookie becomes invalid (while still having a window open to the dashboard).

@kimwnasptd (Member):

So I've done a first pass on the structural side, and things look good, but I still would like to take a bit of a deeper look before approving.

@juliusvonkohout (Member) commented Sep 11, 2024

So I've done a first pass on the structural side, and things look good, but I still would like to take a bit of a deeper look before approving.

Can you get this done by Saturday? The plan is that I cut 1.9.1rc1 on Saturday/Sunday.

If not, I will have to add this PR in 1.9.1rc.2.

@MaxKavun commented Sep 12, 2024

I've done some testing and it all looks good; I can confirm that the oauth2-proxy VirtualService is indeed needed.
A couple of things I noticed:

  1. The logout button doesn't work. I added a small update:

        - name: LOGOUT_URL
          value: /oauth2/sign_out

    Even after this update logout works, but it doesn't redirect to the login page (you have to refresh the page manually).
  2. The README doesn't mention any other oauth2-proxy variations, only the dex and kind one. It is worth mentioning them.

@thesuperzapper (Member, Author):

@juliusvonkohout @kimwnasptd I have applied most of the requested changes in 547d2c4 and rebased because other things were merged to master in the meantime (e.g. a KFP update).

I added notes in example/kustomization.yaml and the README.md that there are multiple oauth2-proxy overlays available. I have also made common/oauth2-proxy/overlays/m2m-dex-only the default in example/kustomization.yaml, so that people who blindly run the install command will not have a broken deployment on non-Kind clusters.

We should be ready to merge now, as long as we do an RC.2 for 1.9.1 (RC.1 was cut without this PR).

This is really the most critical fix for 1.9.1, so it must be merged.

@juliusvonkohout (Member) commented Sep 14, 2024

@thesuperzapper I will do the 1.9.1 RC2 in two weeks after my vacation. Yes, we need to include this PR for 1.9.1. So I will merge this if nothing serious comes up in the next week.

@juliusvonkohout (Member) commented Sep 27, 2024

@thesuperzapper can you solve the merge conflict? Afterwards I can merge it.

@juliusvonkohout (Member):

Alright as discussed on slack as well

/lgtm
/approve

And we can continue in follow up PRs.

google-oss-prow bot added the lgtm label on Oct 1, 2024
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: juliusvonkohout


google-oss-prow bot merged commit a7c646e into kubeflow:master on Oct 1, 2024
27 checks passed
thesuperzapper deleted the remove-m2m-cronjob branch on October 1, 2024 at 20:16