Multiple Hubs on a Page Clobber Cookies #4512
Comments
Sorry for the delay, I was on vacation in July and didn't get through the backlog when I came back. Since cookies are generally per-hostname, not per-port, collisions are likely if you have many Hubs on one hostname differentiated only by port. I'm not sure that's avoidable. JupyterHub should be setting its cookies on the correct paths, so if your Hubs were differentiated by
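To illustrate the per-hostname point: the snippet below, using only the Python standard library, shows that a cookie stored for a host is sent back to any port on that same host. The hostname hub.example.com, the ports, and the cookie value are placeholders, not taken from the reporter's real setup.

```python
# Demonstrate that cookie matching is keyed by hostname and path, never
# by port (RFC 6265 behavior, as implemented by http.cookiejar).
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()

# Simulate a session cookie as if set by a hub served on port 8001.
jar.set_cookie(http.cookiejar.Cookie(
    version=0, name="jupyterhub-session-id", value="abc123",
    port=None, port_specified=False,
    domain="hub.example.com", domain_specified=False, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={}, rfc2109=False,
))

# A request to a *different* port on the same host still receives the cookie.
req = urllib.request.Request("http://hub.example.com:8002/")
jar.add_cookie_header(req)
print(req.get_header("Cookie"))  # jupyterhub-session-id=abc123
```

This is why two hubs on ports 8001 and 8002 of the same hostname, both setting cookies on path /, end up reading and overwriting each other's cookies.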
I'll look into this. Unfortunately, we don't have a choice: we run one JupyterHub per student assignment, and for most courses we will have to run about 50 assignments. We simply can't afford 50 VMs to run this on. Our current workaround is to create a unique username per assignment (hub) for each user. That appears to work, but we don't have enough data yet to be confident it's a good final solution.
Neither suggestion requires more than one VM. You can use multiple base_url prefixes without any other changes (the hostname is unchanged; URLs only change from
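For reference, a minimal sketch of what the base_url approach could look like in each hub's jupyterhub_config.py. The port and prefix values here are illustrative; in the reporter's setup they would be generated per assignment.

```python
# jupyterhub_config.py for one per-assignment hub (illustrative values).
c.JupyterHub.port = 8042                    # this hub's unique port
c.JupyterHub.base_url = "/assignment-42/"   # this hub's unique URL prefix

# With a distinct base_url, the hub scopes its cookies under
# /assignment-42/ instead of /, so hubs sharing a hostname stop
# overwriting each other's session cookies.
```

The iframe src for this hub would then be http://host:8042/assignment-42/ rather than http://host:8042/.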
I'm going to try the base_url prefix approach. All hubs are dynamically generated by our system when an instructor creates a new assignment, so modifying DNS would be tricky unless we have a lot of them pre-assigned. We simply get too many requests to make that practical.
That makes sense. Lots of requests is where the wildcard DNS becomes useful, since you'd only need to do that once, and add a single route on the server when a new Hub is added. But if you don't have wildcard DNS and easy SSL certificates, it would definitely be a big pain.
Thanks for your help by the way! :)
Bug description
We are building an LTI service where instructors can add Jupyter Notebooks as assignments. They can either add one as a standalone assignment or embed one or more in a page. We're primarily testing with Canvas. All our infrastructure is complete and the single-assignment setup works. However, we're seeing an issue when more than one JupyterHub is added to a single HTML page.
Expected behaviour
We expect all JupyterHubs embedded in a page to appear.
Actual behaviour
Depending on how quickly our server allows the hub to create the single-user JupyterLab containers, the embedded JupyterLab instances either appear or they don't. In the Chrome dev tools we noticed that, with just one hub on a page, there are two cookies: one named jupyterhub-session-id for the bare FQDN, and an additional one for the FQDN with the port number. The hub URL we generate, which is the only one we use as the iframe src, always has a unique port number for that hub. When we embed more than one hub, we always see cookies for the bare FQDN plus a separate set for each hub's FQDN-with-port. That is what made us suspect cookie pollution, with multiple hubs using the same cookie content.
How to reproduce
Create an HTML page that embeds two or more JupyterHubs in iframes, then load that page.
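A minimal reproduction page could look like the following; the hostname and ports are placeholders standing in for the generated hub URLs.

```html
<!-- repro.html: two hubs on the same hostname, differing only by port -->
<!DOCTYPE html>
<html>
  <body>
    <iframe src="http://hub.example.com:8001/" width="800" height="600"></iframe>
    <iframe src="http://hub.example.com:8002/" width="800" height="600"></iframe>
  </body>
</html>
```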
Your personal set up
We're using the latest JupyterHub Docker image.
The Notebook configuration can be any of:
jupyter/base-notebook
jupyter/minimal-notebook
jupyter/r-notebook
jupyter/scipy-notebook
jupyter/tensorflow-notebook
jupyter/datascience-notebook
jupyter/pyspark-notebook
jupyter/all-spark-notebook
OS:
Debian 11, 64 GB RAM
Version(s):
JupyterHub: 4.0.1 (Docker image)
Full environment
Not sure how to provide this, since we work in an exclusively containerized environment.
Configuration
Spawner: DockerSpawner
Config: see below
Logs
In this particular case it's difficult to provide logs, since the failure is mostly silent. Anything we do see manifests as HTTP 431 or HTTP 500 errors (without noticeable errors in the Docker logs). HTTP 431 (Request Header Fields Too Large) would be consistent with request headers bloated by accumulated cookies.