
BEP for audio/video capture of behaving subjects #1771

Open · bendichter opened this issue Apr 11, 2024 · 18 comments
Labels: BEP, opinions wanted (Please read and offer your opinion on this matter)

@bendichter (Contributor)

I would like to create a BEP to store audio and/or video recordings of behaving subjects.

While sharing such data would obviously be problematic for human subjects, it would be useful for internal human data, and for both internal and shared data of non-human subjects.

Following the structure of Task Events, we will define types of files that can be placed in various <data_type> directories.

sub-<label>/[ses-<label>]
    <data_type>/
        <matches>_behcapture.mp3|.wav|.mp4|.mkv|.avi
        <matches>_behcapture.json

This schema will follow the standard principles of BIDS, listed here for clarity:

  • If no relevant <data_type> exists, use beh/.
  • Video or audio files that are continuous recordings split into files will use the _split- entity.
  • Video or audio files that are recorded simultaneously but from different angles or at different locations would use the _recording- entity to differentiate them. We will need to generalize the definition of this entity a bit to accommodate this usage. The entity would also differentiate a video and an audio recording captured simultaneously on different devices. Note that using the file extension alone to differentiate would not work, because it would not be clear which file the .json sidecar maps to.
  • The start time of each audio or video recording should be noted in the scans.tsv file.
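For illustration only, a session with two simultaneous camera angles plus a separate audio recorder, where the second camera's recording was split across two files, might look like this (task and entity labels are hypothetical):

sub-01/ses-01/beh/
    sub-01_ses-01_task-openfield_recording-cam1_behcapture.mp4
    sub-01_ses-01_task-openfield_recording-cam1_behcapture.json
    sub-01_ses-01_task-openfield_recording-cam2_split-01_behcapture.mp4
    sub-01_ses-01_task-openfield_recording-cam2_split-02_behcapture.mp4
    sub-01_ses-01_task-openfield_recording-cam2_behcapture.json
    sub-01_ses-01_task-openfield_recording-mic_behcapture.wav
    sub-01_ses-01_task-openfield_recording-mic_behcapture.json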

The JSON sidecar would define "streams", describing each stream in the file.

The *_behcapture.json would look like this:

{
  "device": "Field Recorder X200",
  "streams": [
    {
      "type": "audio",
      "sampling_rate": 44100.0,
      "description": "High-quality stereo audio stream."
    },
    {
      "type": "video",
      "sampling_rate": 30.0,
      "description": "Standard 1080p video stream."
    }
  ]
}

To be specific, it would follow this JSON Schema structure:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "device": {
      "type": "string"
    },
    "streams": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string",
            "enum": ["audio", "video"]
          },
          "sampling_rate": {
            "type": "number",
            "format": "float"
          },
          "description": {
            "type": "string"
          }
        },
        "required": ["type", "sampling_rate"],
        "additionalProperties": false
      }
    }
  },
  "required": ["device", "streams"],
  "additionalProperties": false
}
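For illustration, the checks that this schema encodes could be sketched in plain Python without a JSON Schema library. This is a hypothetical helper, not part of any BIDS tooling; the field names come from the draft schema above.

```python
def validate_behcapture(sidecar: dict) -> list[str]:
    """Return a list of problems with a *_behcapture.json dict; empty means valid."""
    errors = []
    # "device" is required and must be a string.
    if not isinstance(sidecar.get("device"), str):
        errors.append("'device' must be a string")
    # "streams" is required and must be an array of stream objects.
    streams = sidecar.get("streams")
    if not isinstance(streams, list):
        errors.append("'streams' must be an array")
        return errors
    for i, stream in enumerate(streams):
        # "type" is restricted to the enum ["audio", "video"].
        if stream.get("type") not in ("audio", "video"):
            errors.append(f"streams[{i}].type must be 'audio' or 'video'")
        # "sampling_rate" is required and must be numeric.
        if not isinstance(stream.get("sampling_rate"), (int, float)):
            errors.append(f"streams[{i}].sampling_rate must be a number")
        # additionalProperties is false for stream objects.
        extra = set(stream) - {"type", "sampling_rate", "description"}
        if extra:
            errors.append(f"streams[{i}] has unexpected keys: {sorted(extra)}")
    return errors

sidecar = {
    "device": "Field Recorder X200",
    "streams": [
        {"type": "audio", "sampling_rate": 44100.0},
        {"type": "video", "sampling_rate": 30.0},
    ],
}
print(validate_behcapture(sidecar))  # [] (no errors)
```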

This BEP would be specifically for audio and/or video, and would not include related data like eye tracking, point tracking, pose estimation, or behavioral segmentation. All of these would be considered derived and are reserved for another BEP.

@bendichter (Contributor, Author)

cc @yarikoptic who is providing guidance on this concept.

@bendichter (Contributor, Author)

An alternative idea is to name the files "_video.mp4|avi|mkv|..." and "_audio.mp3|wav|...". The advantage is that it may be clearer what these files are. The disadvantages are that this does not make it clear that the recording is of the subject rather than a stimulus, and that it is not obvious what to do with a combined audio/video recording.

@bendichter (Contributor, Author)

bendichter commented Apr 11, 2024

Another alternative is to name the files "_beh.mp3|.wav|.mp4|.mkv|.avi|...", though this conflicts with the current beh modality: if there is a beh.tsv file in the beh/ directory, it will have an accompanying beh.json file, which would conflict with the JSON sidecar that corresponds to the data file (e.g. beh.mp3).

@Remi-Gau (Collaborator)

> This BEP would be specifically for audio and/or video, and would not include related data like eye tracking, point tracking, pose estimation, or behavioral segmentation. All of these would be considered derived and are reserved for another BEP.

Some of this may already be covered by the existing BIDS support for motion data; also look at the eye-tracking BEP (PR and HTML).

@Remi-Gau (Collaborator)

Tagging @gdevenyi, who I think mentioned wanting to work on something like this the last time I saw him.

@VisLab (Member)

VisLab commented Apr 12, 2024

The ideas for allowing annotations of movies and audio expressed in issue #153 could be expanded to allow annotation of participant video/audio, but within the imaging directories themselves, with an appropriate file structure to distinguish them.
@neuromechanist @Remi-Gau @yarikoptic @adelavega @dungscout96 @dorahermes @arnodelorme

@Remi-Gau (Collaborator)

I like how these different initiatives are syncing up.

Wouldn't those annotations of videos using HED, where experimenters "code" their video, be more appropriate as a derivative, though?

@VisLab (Member)

VisLab commented Apr 12, 2024

> Wouldn't those annotations of videos using HED, where experimenters "code" their video, be more appropriate as a derivative, though?

Not necessarily. In one group I worked with, on experiments on stuttering, the speech pathologist's annotations were definitely considered part of the original data. Most markers that you see in typical event files didn't come from the imaging equipment, but were extracted from the control software or external devices. Eye trackers have algorithms to mark saccades and blinks, and these are written as original data.

In my mind, if the annotations pertain to data that has been "calculated" from the original experimental data, they should go into the derivatives folder. Annotations pertaining to data acquired during the experiment itself should probably go in the main folder.

@Remi-Gau (Collaborator)

I see. I was thinking more of cases where videos of animal behavior have to be annotated to code when a certain behavior happened. Given that this is not automated and can happen a long time after data acquisition, I would have seen it as derivatives. But your examples show that the answer, as in many cases, will be "it depends".

@gdevenyi

We have potential animal applications in both domains:

  1. Video with annotation timestreams coming from automated touchscreen-based animal behaviour systems.
  2. Videos of animals in classic "open field test" and similar setups, where postprocessing analysis collects a variety of annotations of the video determined by behaviour.
  3. Also, I guess: manual human annotation of videos of animals in naturalistic environments, like maternal care events.

@Remi-Gau Remi-Gau added the opinions wanted Please read and offer your opinion on this matter label Apr 16, 2024
@Remi-Gau Remi-Gau changed the title RFC: BEP for audio/video capture of behaving subjects BEP for audio/video capture of behaving subjects Apr 16, 2024
@Remi-Gau Remi-Gau added the BEP label Apr 16, 2024
@DimitriPapadopoulos (Collaborator)

Would non-contiguous recordings (using the same setup) end up in the same file or in distinct files?

As an example, there could be cases where video recording was stopped while taking care of a crying baby and resumed later on. Should BIDS try to enforce anything here, or leave it to end users (and data providers)?

What about other types of "time-series" data? I am not sure about MEG; for EEG, I know the EDF+ format allows discontinuous recordings:

> EDF+ allows storage of several NON-CONTIGUOUS recordings into one file. This is the only incompatibility with EDF. All other features are EDF compatible. In fact, old EDF viewers still work and display EDF+ recordings as if they were continuous. Therefore, we recommend EDF+ files of EEG or PSG studies to be continuous if there are no good reasons for the opposite.

@bendichter (Contributor, Author)

@DimitriPapadopoulos I believe these would be different runs. You would specify the start time of each run in the scans file.
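For example, the corresponding _scans.tsv entries might look like this (the filenames and timestamps are hypothetical; filename and acq_time are the standard scans.tsv columns):

filename	acq_time
beh/sub-01_task-play_run-01_behcapture.mp4	2024-04-11T10:00:00
beh/sub-01_task-play_run-02_behcapture.mp4	2024-04-11T10:35:12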

@yarikoptic (Collaborator)

I think there might be multiple scenarios (and entities) for how this could be handled:

  • runs - if, e.g., the recordings also correspond to separate runs of any neural data acquired alongside, so primarily "this is how we intended it all to be".
  • But I wonder if we should look into adopting/extending any of the other entities whose meaning relates somehow to "pieces of" something (currently they are too narrowly focused): split, part, chunk.

@neuromechanist

> We have potential animal applications in both domains

From the annotation perspective in #153, an annot- entity enables multiple annotations per _media file. It might be useful here as well.

> But I wonder if we should look into adopting/extending any of the other entities whose meaning relates somehow to "pieces of" something (currently they are too narrowly focused): split, part, chunk.

Any of them would work; I currently suggested part- as the entity to use, but I can see any of the three working.

> Video or audio files recorded simultaneously but from different angles or at different locations would use the _recording- entity to differentiate.

This is similar to having a stimulus with multiple tracks (left or right video streams, multiple audio channels, or separate video and audio), but these are not recording-s per se. So we might look for a common entity that potentially covers both. We have two suggestions in #153 for now: (1) stream- and (2) track-. I would be happy to hear any additional suggestions.

@neuromechanist

Also, as @bendichter mentioned, this proposal will very soon find its audience in human neuroscience, especially as DeepLabCut adds subject-masking capabilities and newer modalities such as LiDAR and Wi-Fi motion capture come into play.

It might be useful to have Motion-BIDS maintainers' (@sjeung and @JuliusWelzel) opinions as well.

@bendichter (Contributor, Author)

How do we feel about this naming convention?

sub-<label>/[ses-<label>]
    <data_type>/
        <matches>_behcapture.mp3|.wav|.mp4|.mkv|.avi
        <matches>_behcapture.json

I'm not 100% sold on it myself, but I can't think of anything better. Other options:

  • "_video.mp4|avi|mkv|..." and "_audio.mp3|wav|...".
  • "_behvideo.mp4|avi|mkv|..." and "_behaudio.mp3|wav|...".
  • "_behmedia.mp4|avi|mkv|mp3|wav|..."

Is there any precedent from other standards we could use here?

@gdevenyi

> Is there any precedent from other standards we could use here?

Technically, MKV is a container format; it could hold different kinds of video/audio streams.

Should we specify non-patent-encumbered video compression formats?


7 participants