Extend benchmark with mujoco v4 envs #1140

Open · MischaPanch opened this issue May 6, 2024 · 0 comments
Labels: documentation, experiment-eval (Issues about evaluation: plots, stats, multiprocessing etc.)

Comments

@MischaPanch (Collaborator)

Contact forces have been removed from the observations in the transition from v3 to v4, making the environments more challenging.
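
As a concrete illustration (using Ant as the example; the affected dimensions differ per environment), the change shows up directly in the observation space. The snippet below assumes Gymnasium's MuJoCo v4 bindings are installed:

```python
import gymnasium as gym

# Ant-v4 drops the external contact forces (cfrc_ext) from the observation
# by default, shrinking it from the 111 dimensions of Ant-v3 to 27.
env = gym.make("Ant-v4")
print(env.observation_space.shape)  # (27,)

# The v3-style observation can be restored explicitly if needed:
env = gym.make("Ant-v4", use_contact_forces=True)
print(env.observation_space.shape)  # (111,)
```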

V4 is now the default in all our scripts, so we should include benchmark results for it.

Generally, we should have a single script per example subdirectory that runs all experiments à la rliable and creates the benchmark files; a sketch follows below.
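
A minimal sketch of what such a per-subdirectory script could look like. `run_experiment`, the env list, the seed count, and the algorithm name are placeholders for this sketch rather than existing code; only the aggregation calls reflect rliable's actual API:

```python
import numpy as np
from rliable import library as rly
from rliable import metrics

# Placeholders: the env list and seed count are assumptions for this sketch.
MUJOCO_V4_ENVS = ["Ant-v4", "HalfCheetah-v4", "Hopper-v4", "Walker2d-v4"]
NUM_SEEDS = 5

def run_experiment(env_id: str, seed: int) -> float:
    """Hypothetical hook: train one agent and return its final score."""
    raise NotImplementedError

# rliable expects a dict mapping algorithm name -> (num_runs, num_tasks) scores.
scores = {
    "sac": np.array(
        [[run_experiment(env, seed) for env in MUJOCO_V4_ENVS]
         for seed in range(NUM_SEEDS)]
    )
}

# Aggregate with stratified bootstrap confidence intervals, as in the rliable paper.
aggregate = lambda x: np.array([metrics.aggregate_iqm(x), metrics.aggregate_mean(x)])
point_estimates, interval_estimates = rly.get_interval_estimates(scores, aggregate, reps=2000)
```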

@opcode81 this is one argument in favor of including num_experiments=1 in all high-level example scripts. It should be completely clear how the reported benchmark results were created, and in fact we should rerun their creation at least at every release.
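
For illustration only, such a parameter could be exposed as follows in a high-level example script; the flag name and loop are hypothetical, not existing code:

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical flag: defaults to a single run for interactive use, while the
# benchmark runner raises it to produce rliable-style statistics.
parser.add_argument("--num-experiments", type=int, default=1)
args = parser.parse_args()

for seed in range(args.num_experiments):
    ...  # run one full training run with this seed
```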

I can look for compute sponsors :)
