
[WIP] Flatting evaluation #486

Closed

Conversation

bruAristimunha
Collaborator

No description provided.

@bruAristimunha linked an issue Sep 20, 2023 that may be closed by this pull request
Collaborator

@PierreGtch left a comment


Nice work @bruAristimunha! Maybe it would make sense to rewrite the evaluations from scratch and keep the old ones only as deprecated, since their structure will change substantially. I think we can reuse even more code between the different evaluations.
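
A minimal sketch of that idea, assuming the old classes stay importable but emit a DeprecationWarning on construction; the class name below only illustrates the pattern and is not a commitment to MOABB's final API:

import warnings


class LegacyWithinSessionEvaluation:  # hypothetical name for an old evaluation class
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "LegacyWithinSessionEvaluation is deprecated and will be removed; "
            "please use the rewritten evaluation classes instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        # ... original initialisation kept unchanged ...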

@@ -77,7 +80,7 @@ def __init__(
if not isinstance(paradigm, BaseParadigm):
    raise ValueError("paradigm must be a Paradigm instance")
self.paradigm = paradigm

self.n_splits = n_splits
Collaborator


I think we should protect this new attribute, or at least raise a warning if the user changes it. One of the purposes of MOABB is to standardize the evaluation of algorithms across BCI research, so it is best if everyone uses 5 folds.
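
A minimal sketch of such a guard, assuming a property-based setter that warns whenever n_splits deviates from the default 5 folds; the class name and warning text are illustrative only:

import warnings


class Evaluation:  # illustrative stand-in for the evaluation class
    def __init__(self, n_splits=5, **kwargs):
        self.n_splits = n_splits  # routed through the guarded setter below

    @property
    def n_splits(self):
        return self._n_splits

    @n_splits.setter
    def n_splits(self, value):
        if value != 5:
            warnings.warn(
                "MOABB aims at comparable evaluations across BCI studies; "
                f"using n_splits={value} instead of the default 5 folds "
                "makes results harder to compare.",
                UserWarning,
                stacklevel=2,
            )
        self._n_splits = value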

moabb/evaluations/evaluations.py: review comment resolved

grid_clf = clone(clf)
# TODO: find a way to expose this for-loop.
# Here, we will have n_splits = n_sessions*n_splits (default 5)
Collaborator


I think we should even have n_splits = n_sessions*n_splits*n_pipelines.
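
As a worked example of that counting (the numbers below are purely hypothetical): with 2 sessions, the default 5 folds, and 3 pipelines, a single flat Parallel would have 2 * 5 * 3 = 30 independent tasks to distribute:

n_sessions = 2    # e.g. a dataset recorded in two sessions
n_splits = 5      # MOABB's default number of cross-validation folds
n_pipelines = 3   # number of pipelines being benchmarked

total_tasks = n_sessions * n_splits * n_pipelines
print(total_tasks)  # 30 independent fit/score tasks for the Parallel pool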


grid_clf = clone(clf)
# TODO: find a way to expose this for-loop.
Collaborator


Yes, I think that if we want only one Parallel (to avoid nesting them), we should put it here rather than across the datasets and subjects, for two reasons (see the sketch after this list):

  • parallel calls across subjects and datasets mean loading a lot of data at the same time, which is not very efficient;
  • and if the user also wants parallelism across datasets or subjects, they can simply launch multiple scripts, each with a different subject.
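
A minimal sketch of that layout, assuming one joblib Parallel over every (pipeline, session, fold) combination for a single subject whose data has already been loaded; the helper and variable names are illustrative, not MOABB's actual API:

from itertools import product

from joblib import Parallel, delayed
from sklearn.base import clone
from sklearn.model_selection import StratifiedKFold


def _fit_and_score(name, session, clf, X, y, train, test):
    # Hypothetical helper: fit one clone on one fold and return its score.
    model = clone(clf).fit(X[train], y[train])
    return name, session, model.score(X[test], y[test])


def evaluate_subject(pipelines, sessions, n_splits=5, n_jobs=-1):
    # `sessions` maps a session name to its (X, y) arrays; the data for one
    # subject is loaded once, sequentially, before this single Parallel call.
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    tasks = (
        delayed(_fit_and_score)(name, session, clf, X, y, train, test)
        for (name, clf), (session, (X, y)) in product(
            pipelines.items(), sessions.items()
        )
        for train, test in cv.split(X, y)
    )
    return Parallel(n_jobs=n_jobs, verbose=1)(tasks)

This way, the expensive data loading stays sequential per subject, while the many cheap, independent fit/score tasks are what the workers share.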

@@ -168,7 +172,7 @@ def _evaluate(
results = Parallel(n_jobs=self.n_jobs_evaluation, verbose=1)(
Collaborator


See the comment below about this Parallel.

moabb/evaluations/evaluations.py: three further review comments resolved (two marked outdated)
Co-authored-by: PierreGtch <25532709+PierreGtch@users.noreply.github.com>
@bruAristimunha
Collaborator Author

I will restart this PR. We changed a lot of stuff in the evaluation file.

Successfully merging this pull request may close these issues.

Improve parallelisation of evaluations