
Regenerate all test fixtures #294

Open
juliasilge opened this issue Nov 15, 2022 · 8 comments

Comments

@juliasilge
Collaborator

In ropensci/vcr#255 @sckott suggested it is good practice to regenerate all the test fixtures on occasion. We definitely have not done that, mostly because I (the maintainer) don't have access to many of the features we have tested. These fixtures were created by the individuals who opened the original PRs (so they involve those contributors' own surveys).

What is the best way forward from here?

@sckott

sckott commented Nov 16, 2022

vcr does have the `re_record_interval` config option (https://docs.ropensci.org/vcr/reference/vcr_configure.html#cassette-options). It is tested and all, but I haven't used it myself and a GitHub search doesn't turn up anyone else using it. Other than that, you can just re-record whenever works for you: e.g., whenever you change a function whose tests use vcr, or whenever there's a Qualtrics API update, in which case it's a good idea to re-record the cassette to make sure it reflects what you're getting back from the current version of the API.
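
For reference, using that option would look roughly like this in a test setup (the directory and interval here are just example values):

```r
library(vcr)

# A minimal sketch: cassettes older than re_record_interval
# (in seconds) get re-recorded the next time the tests run
# with real credentials available.
vcr_configure(
  dir = "tests/fixtures",
  re_record_interval = 30 * 24 * 60 * 60  # roughly every 30 days
)
```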

@juliasilge
Collaborator Author

Yes, but unfortunately, as the maintainer I can't update many (maybe even most?) of the cassettes because I don't have access to those API features.

@sckott

sckott commented Nov 16, 2022

Got it, okay

@jmobrien
Collaborator

I agree that the cassettes for existing tests should probably be re-generated regularly, but I have the same issue with feature availability.

How does Qualtrics itself view open-source support tools like ours? Assuming they view our existence at least somewhat positively, perhaps someone there would be open to giving us a full-featured account just for testing?

@jmobrien
Collaborator

jmobrien commented Dec 7, 2022

Referencing #297, which brought to mind an additional challenge here:

We currently have the situation where all the recorded tests use "www." in the credentials, so as not to be bound to a particular datacenter (and also so they don't expose anything about the accounts individual developers rely on for their work). This actually happens manually after test generation: the vcr fixtures for individual tests are first generated using a specific developer's datacenter ID, which is then replaced with "www." in both the test R code and the fixture YAML.

Obviously this won't easily work with any kind of regular test regeneration scheme, automated or otherwise. Realistically we'd probably need something like a package-linked Qualtrics account that contains all the surveys used for tests, or at least someone willing and able to do that with their existing account.
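
For concreteness, the post-generation scrub amounts to something like the sketch below (the helper name, fixture path, and datacenter value are all made up for illustration; this is not part of the package):

```r
# Replace a developer-specific datacenter ID with "www." across the
# recorded fixture YAML files, as currently done by hand.
scrub_datacenter <- function(datacenter, fixture_dir = "tests/fixtures") {
  files <- list.files(fixture_dir, pattern = "\\.yml$", full.names = TRUE)
  for (f in files) {
    txt <- readLines(f)
    txt <- gsub(paste0(datacenter, ".qualtrics.com"),
                "www.qualtrics.com", txt, fixed = TRUE)
    writeLines(txt, f)
  }
}

# e.g. scrub_datacenter("ca1")
```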

@jmobrien
Collaborator

Looking at the docs, I'm seeing a "mock server" for each endpoint (example ) located at their API dev platform, stoplight.io. Could that be useful here somehow?

@juliasilge
Collaborator Author

@jmobrien Oh maybe! That would be a really high-impact thing to check out.

If that is something it's OK to ping during CI, then we could feasibly rip out all the vcr machinery from the package and run tests against that mock server on CI, skipping everything that uses the mock server on CRAN.
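
A minimal sketch of what such a test could look like, assuming a hypothetical `QUALTRICS_MOCK_URL` environment variable pointing at the Stoplight mock server (the env var and endpoint path are assumptions, not anything the package currently defines):

```r
library(testthat)

# Hypothetical: base URL of the Stoplight mock server, set on CI only.
mock_base_url <- Sys.getenv("QUALTRICS_MOCK_URL")

test_that("surveys endpoint responds on the mock server", {
  skip_on_cran()  # keep the live network call off CRAN
  skip_if(mock_base_url == "", "mock server URL not configured")
  res <- httr::GET(paste0(mock_base_url, "/surveys"))
  expect_equal(httr::status_code(res), 200)
})
```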

@jmobrien
Collaborator

jmobrien commented Dec 12, 2022

Right. I wasn't even thinking ahead quite that far, but it does make sense that we could really simplify tests that way.

After a few tests, it looks like the mock server doesn't care about API keys, and will accept basically anything that looks like a proper survey ID in the request (or, presumably, any other form of ID). I'm not sure whether/how additional parameters are respected (e.g., not sure whether requesting a QSF from get survey does anything). So, maybe some caveats, but I think this has real potential.

If we use this, the biggest challenge might be to rework the internals around URL building somehow, since the mock URLs aren't even in the ballpark of what the endpoint URLs look like currently. But it might still be worth it.
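
One way that rework could go is routing every request through a single base-URL helper that can be redirected to the mock server; a sketch, with names that are assumptions rather than the package's actual internals:

```r
# Hypothetical helper: return the mock server base URL when the
# (assumed) QUALTRICS_MOCK_URL env var is set, otherwise build the
# normal Qualtrics API base URL from the datacenter.
base_url <- function(datacenter = "www") {
  mock <- Sys.getenv("QUALTRICS_MOCK_URL")
  if (nzchar(mock)) {
    return(mock)  # e.g. the Stoplight mock server
  }
  paste0("https://", datacenter, ".qualtrics.com/API/v3")
}

# All endpoint URLs would then be built relative to base_url():
# paste0(base_url(), "/surveys/", survey_id)
```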
