support url prefix instead of just top-level #425
Hmm, let's find all hardcoded urls first and document them here, then think about the different needs of each. I'm distracted rn, but here's a start: …

Did a very quick scan and didn't find anything else in the …

Presumably the html templates could be handled with more aggressive use of …

Not sure so much about the js. It doesn't seem like a great plan to force the js files to be templated if they don't need to be otherwise.

I'm not opposed to the js files being templated, especially since there's already this weird thing that happens where psiturk.js is loaded as a string from here and here, and then served as a "static" route here. So it's already templated, basically, even if someone has already run …

Also, I think those …
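For the html templates, the usual Flask/Jinja approach is to generate urls with `url_for` instead of hardcoding them, so links respect whatever root the app is mounted at — a sketch (the exact template and filename are illustrative):

```html
<!-- Instead of a hardcoded top-level path: -->
<script src="/static/js/psiturk.js"></script>

<!-- url_for builds the path from the app's routing, so it picks up
     an application root or blueprint url_prefix automatically: -->
<script src="{{ url_for('static', filename='js/psiturk.js') }}"></script>
```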
We have the use case where we have one public domain name as the entry point to our lab resources, but we'd like to be able to run multiple studies. We have nginx set up to reverse-proxy to different instances of psiturk to enable different, unrelated experiments, but it's not clear how to make studies differently addressable from outside. The easiest way to map to different locations would be to use an alternate path, i.e. …
Unfortunately, the links within the files refer to top-level locations (e.g., "/sync"), so this fails. (Note that because of our setup, it's not easy to add additional subdomains or open additional ports.)
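For context, the reverse-proxy mapping described above might look like this (server name, ports, and study paths are all illustrative, not from our actual config):

```nginx
# One public entry point, multiple psiturk instances behind path prefixes.
server {
    listen 443 ssl;
    server_name lab.example.org;

    location /study1/ {
        # trailing slash on proxy_pass strips the /study1/ prefix
        # before forwarding to the backend
        proxy_pass http://127.0.0.1:22362/;
    }
    location /study2/ {
        proxy_pass http://127.0.0.1:22363/;
    }
}
```

Note that because the prefix is stripped before forwarding, the backend works fine for the initial page load — it's the absolute links like "/sync" that the app then emits which escape the prefix and break.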
Suggested resolutions:

- Make the urls relative by dropping the leading `/` (e.g., `/sync/` -> `sync/`); then the links will be interpreted relative to any top-level directories
- Add a `prefix` option to the config file that gets supplied to each url, which gets resolved in the templates and python init, similar to the `url_prefix` option in Flask's `Blueprint`

I am happy to do this + add a PR if there is resolution on what the right approach is, but no promises that I'll be able to track down all the urls in the first pass :)
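The `prefix`-option approach could be as small as one resolver that every hardcoded url passes through — a minimal sketch (the function name `prefixed` and the config plumbing are hypothetical, not psiTurk API):

```python
def prefixed(path, prefix=""):
    """Join an app-level url prefix with a route path.

    `prefix` would come from a new config option; an empty value
    keeps today's top-level behavior.
    """
    prefix = "/" + prefix.strip("/") if prefix.strip("/") else ""
    return prefix + "/" + path.lstrip("/")

# With no prefix, routes stay top-level:
print(prefixed("/sync"))               # /sync
# With prefix="study1", the same route moves under the prefix:
print(prefixed("/sync", "study1"))     # /study1/sync
print(prefixed("sync", "/study1/"))    # /study1/sync
```

Mounting the whole app as a Flask `Blueprint` with `url_prefix` would give the same effect for the python routes for free, but the urls baked into templates and js would still need to go through something like the resolver above.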