Motivation
Right now, the PlanktoScope GUI provides no quantitative feedback to the operator about how "good" the camera settings are - e.g. whether the image brightness is suitable for processing with the segmenter's binary threshold value of 127, and whether the white balance is correct or the image is still tinted by some color. Instead, the operator has to guess by looking at the camera preview.

This lack of feedback caused very real problems for me (@ethanjli) on the R/V Sikuliaq in summer 2023, where we were using a PlanktoScope to image an average of 18 samples per day. Over the course of the cruise, the PlanktoScope suffered a very gradual hardware failure (probably with the LED) in which images became progressively dimmer. The day-to-day difference in image brightness was so small that we never noticed the drift, so we ended up with extremely dim images by the end of the cruise: by that point I no longer remembered exactly how bright the images had been at the start of the cruise (though I vaguely recalled them being brighter), and I hadn't imagined that the hardware could fail in such a gradual way. @fabienlombard also mentioned experiencing a similar problem in the past.

We could have compensated for the hardware failure (by increasing the camera's ISO setting) and prevented the excessively dim images if we could have easily seen a quantitative measurement of image brightness before starting each image acquisition: two weeks into the cruise, we would have noticed that the image brightness had gradually degraded from the desired value, and we could have increased the ISO to compensate.
Also, until #204 is implemented, showing users the image's actual red-green and red-blue ratios would help them iteratively adjust white balance values by hand when intercalibrating between PlanktoScopes.
Proposal
There should be some easy way for the operator to check the brightness and white balance of the preview frames from the camera before starting image acquisition. This could be as simple as showing a live readout of the mean (or median?) R/G/B values of the most recent camera frame, or a more complicated solution (e.g. a histogram) could be used if necessary. Ideally, the specific solution - and the specific design changes to make to the GUI - should be guided by talking to PlanktoScope operators to better understand their needs, and by running easy/cheap experiments to get user feedback, e.g. with (initially low-fidelity) prototypes of the designs under consideration. This is one area where we'll need help from anyone joining FairScope to work on UI/UX; ideally someone matching that description could take over this proposal and gradually build up a design document recording the designs considered and any relevant inputs for decision-making.
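As a rough sketch of the simplest version of this idea, the per-frame metrics could be computed with NumPy in just a few lines. Note that `frame_metrics` is a hypothetical function name, and the assumed frame layout (an RGB `uint8` array) is an assumption for illustration, not necessarily how the PlanktoScope's camera pipeline represents frames:

```python
import numpy as np

def frame_metrics(frame: np.ndarray) -> dict:
    """Compute simple exposure/white-balance metrics for one preview frame.

    Assumes `frame` is an (H, W, 3) uint8 array in R, G, B channel order.
    """
    means = frame.reshape(-1, 3).mean(axis=0)  # mean R, G, B over all pixels
    r, g, b = means
    return {
        "mean_r": float(r),
        "mean_g": float(g),
        "mean_b": float(b),
        # Overall brightness, directly comparable to the segmenter's
        # binary threshold value of 127:
        "mean_brightness": float(means.mean()),
        # Channel ratios for manual white-balance intercalibration:
        "rg_ratio": float(r / g) if g else float("inf"),
        "rb_ratio": float(r / b) if b else float("inf"),
    }
```

A histogram-based design would replace the means with e.g. `np.bincount` per channel, but the GUI payload would be larger; that tradeoff is part of what user feedback should decide.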
Architecturally, this could be done by computing metrics on the live preview image from the camera (e.g. before it's sent to the browser in the MJPEG stream) and then sending those metrics to the browser (e.g. via MQTT, websockets, or some other channel). In terms of sequencing, any software changes should probably only be done after we finish #79.
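If MQTT were chosen as the transport, the backend side could be as small as the sketch below. The topic name `imager/preview_metrics` is purely hypothetical (the real topic would need to fit the PlanktoScope's existing MQTT hierarchy), and `client` is assumed to be any connected MQTT client exposing a `publish(topic, payload, retain=...)` method, such as a paho-mqtt `Client`:

```python
import json

# Hypothetical topic name - the real one would need to fit the
# PlanktoScope's existing MQTT topic hierarchy:
METRICS_TOPIC = "imager/preview_metrics"

def publish_preview_metrics(client, metrics: dict) -> None:
    """Publish per-frame preview metrics as a JSON payload over MQTT.

    `client` is any MQTT client with a publish(topic, payload, retain=...)
    method, e.g. a connected paho-mqtt Client.
    """
    # retain=True so a freshly loaded GUI page immediately receives the
    # most recent metrics instead of waiting for the next frame:
    client.publish(METRICS_TOPIC, json.dumps(metrics), retain=True)
```

The browser would subscribe over MQTT-over-websockets (as the GUI already does for other state) and render the latest values next to the camera preview.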