[New Hub] [Aug 9] binder.nasa-veda.2i2c.cloud #4576

Closed
4 tasks done
yuvipanda opened this issue Aug 6, 2024 · 10 comments · Fixed by #4612

@yuvipanda commented Aug 6, 2024

A part of https://github.com/2i2c-org/meta/issues/1368

A part of NASA-IMPACT/veda-jupyterhub#47

Phase 1: Account

Done

Phase 2: Cluster

Use existing nasa-veda cluster

Phase 3: Hub Setup

Hub 1: binder

Phase 3.1: Initial setup (READY)

| Question | Answer | Notes |
| --- | --- | --- |
| Name of the hub | binder | |
| Dask gateway? | no | |
| URL | binder.nasa-veda.2i2c.cloud | may change |
| BinderHub UI? | yes | |
| Authentication Mechanism | None | |
| Admin Users | N/A | Because the BinderHub UI will be used |
| Persistent Home Directory | no | |
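As a rough illustration, the answers above could translate into z2jh-style helm values along these lines; the authenticator choice and exact key placement here are assumptions for illustration, not the repo's actual config:

```yaml
# Illustrative z2jh-style values for a hub serving the BinderHub UI,
# with no authentication and no persistent storage. Keys marked as
# assumptions are not taken from the actual 2i2c config.
jupyterhub:
  hub:
    config:
      JupyterHub:
        authenticator_class: "null"  # assumption: NullAuthenticator for "None"
  singleuser:
    storage:
      type: none  # no persistent home directory
```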

Phase 3.2: Object storage access (READY)

| Question | Answer | Notes |
| --- | --- | --- |
| Scratch bucket enabled? | yes | retention set to 1 day, instead of 7 |
| Persistent bucket enabled? | no | |
| Requestor pays requests to external buckets allowed? | no | |

Phase 3.3: Profile List (READY)

N/A, as the BinderHub UI will be used

Phase 3.3.1: Resource configuration (READY)

No profile list, properties set under singleuser

| Question | Answer |
| --- | --- |
| memory limit | 2G |
| memory request | 1G |
| CPU limit | 1 |
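As a minimal sketch, these map onto the z2jh `singleuser` block like so (illustrative, not the repo's actual file):

```yaml
# Resource settings matching the table above, in z2jh form.
jupyterhub:
  singleuser:
    memory:
      limit: 2G
      guarantee: 1G  # z2jh's name for the memory request
    cpu:
      limit: 1
```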

Phase 3.4: Authentication tuning (READY)

| Question | Answer |
| --- | --- |
| Authentication Mechanism | None |

Phase 3.5: Profile List finetuning (NA)

N/A, as the BinderHub UI will be used

Phase 3.6: BinderHub config (READY)

| Question | Answer |
| --- | --- |
| Allowed Repositories? | all |
| Banner Message | default |
| BinderHub About Message | default |
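With everything left at its defaults, no explicit BinderHub config should be needed here. For reference, a hedged sketch of where these knobs live (trait names from upstream BinderHub; values and chart nesting are illustrative):

```yaml
# Only needed if the defaults above ever change.
binderhub:
  config:
    BinderHub:
      banner_message: ""  # empty = no banner (the default)
      about_message: ""   # empty = default about text
```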

Phase 4: Customizations

  • Use binder.openveda.cloud as the community managed domain. There's already a CNAME pointing from binder.openveda.cloud to binder.nasa-veda.2i2c.cloud, so all the community managed steps are taken. (Deadline: Aug 9)
  • Apply the same extra_iam_policy as the staging hub (Deadline: Aug 9)
  • Create a new nodepool dedicated to this hub, with a max size of 4 nodes and xlarge size; this is a pathway to limiting cost. This should also use the tags from Move each hub to its own nodegroup on the openscapes cluster #4482, as shown in the sketch after this list. (Deadline: Aug 14)
  • Set the max lifetime of users to 12 hr. This is set as a property on the idle culler. (Deadline: Aug 14)
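A hedged sketch of the dedicated nodegroup in eksctl-style YAML; the nodegroup name, instance family, label, and tag names are assumptions for illustration (the real definition lives in the cluster's eksctl config):

```yaml
# Illustrative eksctl-style nodegroup dedicated to the binder hub.
nodeGroups:
  - name: nb-binder           # hypothetical name
    instanceType: r5.xlarge   # "xlarge size"; the instance family is a guess
    minSize: 0
    maxSize: 4                # hard cap, as a pathway to limiting cost
    labels:
      2i2c/hub-name: binder   # hypothetical per-hub label
    tags:
      "2i2c:hub-name": binder # cost-allocation tag in the spirit of #4482
```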
@yuvipanda yuvipanda changed the title [New Hub] binder.nasa-veda.2i2c.cloud [New Hub] [Aug 9] binder.nasa-veda.2i2c.cloud Aug 6, 2024
@GeorgianaElena GeorgianaElena self-assigned this Aug 8, 2024
@GeorgianaElena commented Aug 8, 2024

I will sign off for the rest of the week. This issue is almost done, with the following pieces still missing:

cc @sgibson91

@sgibson91 commented Aug 8, 2024

Figure out why "[veda-binder] Attempt to fix image push veda binder" #4597 still doesn't work:

  • Confirmed that the credentials work by pushing an image to the org locally
  • I have compared the config with a working binder and cannot find any differences that aren't superficial (e.g., URLs)
  • I tried deleting the deployment and recreating it, just in case
  • I've asked in JupyterHub's Gitter channel whether there is a way I can exec into a build pod before the r2d process kicks off, so I can poke around

@GeorgianaElena commented:

@sgibson91, the issue was that I had forgotten to add the robot account to the owners team on Quay 🤦🏼‍♀️ It should work now. Sorry about that!

@sgibson91 commented:

Thanks @GeorgianaElena!

@sgibson91 commented:

I may have broken things further: #4595 (comment)

@sgibson91 commented:

New builds are currently not working: #4595 (comment)

@sgibson91 sgibson91 self-assigned this Aug 9, 2024
@sgibson91 commented Aug 9, 2024

I ended up using a nodegroup-per-hub (as per #4482) and removing the taint/toleration on the binder nodegroup. I couldn't get the build pods to schedule properly into the binder nodegroup and then they couldn't find the docker socket. Using hub-specific nodegroups at least works.
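For reference, a hedged sketch of how build pods can be steered onto a hub-specific nodegroup through BinderHub's `KubernetesBuildExecutor`; the label key and value are assumptions, matching the hypothetical nodegroup sketch above:

```yaml
# Illustrative: pin build pods to the hub's dedicated nodegroup.
binderhub:
  config:
    KubernetesBuildExecutor:
      node_selector:
        2i2c/hub-name: binder  # hypothetical per-hub node label
```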

@sgibson91 commented:

Outstanding Actions

  1. need to confirm that the user culling configuration is correct, as we didn't do this before
  2. need to decide if it's ok to enable logging of launch events to the 2i2c GCP account (I wanted to be more conservative with this one to make sure it's ok)

For (1), I've spun up some binders so we can see if they still exist in 12 hours

@yuvipanda can you provide a yes/no answer for (2)?

@yuvipanda commented:

@sgibson91, the answer for (2) is yes! It's ok to do that for now.

@yuvipanda commented:

I also see that the idle culler is set to cull after 4h of inactivity. Let's leave it unset so it uses the default (1h, I think) instead.
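As a sketch in z2jh terms (values illustrative): leaving `cull.timeout` unset falls back to the chart default, while `cull.maxAge` enforces the 12-hour maximum lifetime from Phase 4:

```yaml
# z2jh culler settings: no explicit `timeout`, so the chart default applies.
jupyterhub:
  cull:
    enabled: true
    maxAge: 43200  # 12 hours in seconds: hard cap on server lifetime
```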

yuvipanda added a commit to yuvipanda/nbgitpuller that referenced this issue Aug 13, 2024
Otherwise we end up with targetpath being `.`, which always exists (given it is the current
working directory), and we end up with weird failures

Discovered while working on 2i2c-org/infrastructure#4576