
[ENH] Support missing regressors in FirstlevelGLM for t contrasts #4076

Draft · wants to merge 7 commits into base: main

Conversation

@Remi-Gau (Collaborator) commented Oct 23, 2023

Changes proposed in this pull request:

  • rename _compute_fixed_effect_contrast to compute_fixed_effect_contrast, as it is used outside of the module where it is defined
  • this only applies to expression contrasts (e.g. "c0 - c1")
  • if the expression cannot be evaluated in:

    df = pd.DataFrame(np.eye(len(design_columns)), columns=design_columns)
    contrast_vector = df.eval(expression, engine="python").values

and pandas throws an UndefinedVariableError, then expression_to_contrast_vector returns None and the contrast for this run is skipped with a warning.
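The skip-with-warning behavior described above can be sketched as follows. This is only a minimal illustration of the mechanism, not the actual nilearn implementation; it assumes pandas ≥ 1.5, where UndefinedVariableError is exposed under pandas.errors.

```python
import warnings

import numpy as np
import pandas as pd
from pandas.errors import UndefinedVariableError


def expression_to_contrast_vector(expression, design_columns):
    """Map an expression such as "c0 - c1" onto the design columns.

    Returns None (after warning) when the expression references a
    regressor that is missing from this run's design matrix.
    """
    # Identity matrix: row i is 1 for column i, so evaluating the
    # expression row-wise yields the contrast weight of each column.
    df = pd.DataFrame(np.eye(len(design_columns)), columns=design_columns)
    try:
        return df.eval(expression, engine="python").values
    except UndefinedVariableError:
        warnings.warn(
            f"Contrast '{expression}' references a regressor missing "
            "from this run; skipping this contrast for this run."
        )
        return None
```

For example, with design columns ["c0", "c1", "c2"], the expression "c0 - c1" yields the vector [1, -1, 0], while "c0 - missing" returns None with a warning instead of raising.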

TODO

  • update changelog

@github-actions (Contributor)

👋 @Remi-Gau Thanks for creating a PR!

Until this PR is ready for review, you can include the [WIP] tag in its title, or leave it as a GitHub draft.

Please make sure it is compliant with our contributing guidelines. In particular, be sure it checks the boxes listed below.

  • PR has an interpretable title.
  • PR links to a GitHub issue with the mention Closes #XXXX (see our documentation on PR structure)
  • Code is PEP8-compliant (see our documentation on coding style)
  • Changelog or what's new entry in doc/changes/latest.rst (see our documentation on PR structure)

For new features:

  • There is at least one unit test per new function / class (see our documentation on testing)
  • The new feature is demoed in at least one relevant example.

For bug fixes:

  • There is at least one test that would fail under the original bug conditions.

We will review it as quickly as possible; feel free to ping us with questions if needed.

@codecov bot commented Oct 23, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (12a417a) 91.60% vs. head (39a98c0) 91.51%.
Report is 103 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4076      +/-   ##
==========================================
- Coverage   91.60%   91.51%   -0.09%     
==========================================
  Files         143      143              
  Lines       16086    16094       +8     
  Branches     3340     3343       +3     
==========================================
- Hits        14736    14729       -7     
- Misses        804      820      +16     
+ Partials      546      545       -1     
| Flag | Coverage | Δ |
| --- | --- | --- |
| macos-latest_3.10 | 91.50% <100.00%> | (+<0.01%) ⬆️ |
| macos-latest_3.11 | 91.50% <100.00%> | (+<0.01%) ⬆️ |
| macos-latest_3.12 | 91.50% <100.00%> | (+<0.01%) ⬆️ |
| macos-latest_3.8 | 91.46% <100.00%> | (+<0.01%) ⬆️ |
| macos-latest_3.9 | 91.47% <100.00%> | (?) |
| ubuntu-latest_3.10 | 91.50% <100.00%> | (+<0.01%) ⬆️ |
| ubuntu-latest_3.11 | 91.50% <100.00%> | (+<0.01%) ⬆️ |
| ubuntu-latest_3.12 | 91.50% <100.00%> | (+<0.01%) ⬆️ |
| ubuntu-latest_3.8 | 91.46% <100.00%> | (+<0.01%) ⬆️ |
| ubuntu-latest_3.9 | 91.47% <100.00%> | (+<0.01%) ⬆️ |
| windows-latest_3.10 | 91.46% <100.00%> | (+<0.01%) ⬆️ |
| windows-latest_3.11 | 91.46% <100.00%> | (+<0.01%) ⬆️ |
| windows-latest_3.12 | 91.46% <100.00%> | (+<0.01%) ⬆️ |
| windows-latest_3.8 | 91.42% <100.00%> | (+<0.01%) ⬆️ |
| windows-latest_3.9 | 91.42% <100.00%> | (+<0.01%) ⬆️ |

Flags with carried forward coverage won't be shown.


Resolved review threads (outdated):
  • nilearn/glm/tests/test_first_level.py
  • nilearn/glm/tests/test_first_level.py
  • nilearn/glm/contrasts.py
@Remi-Gau Remi-Gau changed the title [ENH] Support missing regressors in FirstlevelGLM [ENH] Support missing regressors in FirstlevelGLM for t contrasts Oct 24, 2023
@Remi-Gau (Collaborator, Author)

@bthirion

The whole implementation relies on catching an error thrown by pandas when evaluating the expression.

Does this seem ok to you?

@bthirion (Member) left a comment


I'm worried we may arrive at weird situations where the user expects a contrast to have been computed and won't find it. To me, contrasts and fixed effects should be defined consistently across all runs of a given fit.

@Remi-Gau (Collaborator, Author)

> To me contrasts and fixed effects should be defined consistently in all runs of a given fit.

Can you explain a bit more?

If by "consistently" you mean that all runs should have all conditions, then shouldn't we just close #2401?

@bthirion (Member)

Yes, that's the idea.
But we can revise this strategy if we think it's too conservative. Maybe something to discuss in an upcoming coredev meeting.

@Remi-Gau (Collaborator, Author)

That may add quite a bit of friction for users whose designs are "response" driven (regressors built from trials where the subject gave a certain response); I am not sure how frequent those are, though.

In any case, the error that is currently thrown does not help the user know what is wrong (it just says something is wrong with the expression), so I can at least recycle this PR to improve that.
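One way to make that error more informative is to catch pandas' opaque UndefinedVariableError and re-raise it with a message naming the expression and the available design columns. This is only a hypothetical sketch of the suggestion above, not nilearn code; check_expression is an invented name, and pandas ≥ 1.5 is assumed for pandas.errors.UndefinedVariableError.

```python
import numpy as np
import pandas as pd
from pandas.errors import UndefinedVariableError


def check_expression(expression, design_columns):
    """Evaluate a contrast expression, re-raising pandas' error
    with a message that points at the missing regressor problem."""
    df = pd.DataFrame(np.eye(len(design_columns)), columns=design_columns)
    try:
        return df.eval(expression, engine="python").values
    except UndefinedVariableError as exc:
        # Replace the terse pandas message with an actionable one.
        raise ValueError(
            f"Contrast expression '{expression}' refers to a regressor "
            f"that is not present in the design matrix "
            f"(available columns: {list(design_columns)})."
        ) from exc
```

With this wrapper, a user evaluating "c0 - c9" against columns ["c0", "c1"] sees which columns actually exist instead of a bare "name 'c9' is not defined".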

@bthirion (Member)

We probably need to document current behavior better.

@Remi-Gau Remi-Gau added the GLM Issues/PRs related to the nilearn.glm module. label Jan 23, 2024
Successfully merging this pull request may close these issues.

Support missing regressors in FirstlevelGLM