
Firing inhibition only works for flattened layers #68

Open
jeshraghian opened this issue Oct 6, 2021 · 1 comment
Labels
enhancement New feature or request

Comments

@jeshraghian
Owner

Description

The inhibition argument of neurons is only set up for single-dimensional channels, i.e., the outputs of nn.Linear. It does not work for convolutional layers.

The way I see it, there are two options:

  1. Only allow 1 neuron to fire from an entire layer; this is how it is currently set up for Linear layers.
  2. Allow 1 neuron to fire per channel. Trickier, as the channel dimension must be known.
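As a rough illustration of option 2, a per-channel winner-take-all pass could mask spikes so that only the neuron with the largest membrane potential in each channel fires. This is only a sketch, not snntorch's implementation: the function name `inhibit_per_channel`, the assumed `(batch, channels, H, W)` layout, and the use of membrane potential as the tie-breaker are all assumptions for illustration.

```python
import torch

def inhibit_per_channel(spk, mem):
    """Keep at most one spike per channel (hypothetical sketch).

    spk: binary spike tensor, shape (batch, channels, H, W)
    mem: membrane potentials, same shape. The channel dimension must be
    known so the spatial dims can be flattened and searched per channel.
    """
    b, c, h, w = mem.shape
    mem_flat = mem.view(b, c, -1)                    # (b, c, h*w)
    spk_flat = spk.view(b, c, -1)
    # Index of the neuron with the largest potential in each channel.
    winner = mem_flat.argmax(dim=-1, keepdim=True)
    # One-hot mask at the winning neuron; zeros elsewhere.
    mask = torch.zeros_like(spk_flat)
    mask.scatter_(-1, winner, 1.0)
    # Suppress every spike except the winner's, then restore the shape.
    return (spk_flat * mask).view(b, c, h, w)
```

Option 1 would be the same idea with the argmax taken over the whole flattened layer instead of per channel, which is why it does not need to know where the channel dimension sits.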
@pengzhouzp
Collaborator

The second option makes more sense to me, as features are extracted by different filters/channels. What does "the channel dimension must be known" mean? Maybe I need to see the code to understand, haha.

@ahenkes1 ahenkes1 added the enhancement New feature or request label Aug 15, 2023