
Create new 3D bioimaging example. #7309

Status: Open. Wants to merge 13 commits into base `main`.
Conversation

@mkcor (Member) commented on Jan 27, 2024

Description

This tutorial is humbly inspired by this monumental paper [1]; check out the video abstract!

This is a WIP: for now, I've just uploaded the data to another repo, and I haven't even used permalinks. I haven't written the narrative parts and section titles yet. I just wanted to share the data analysis workflow without further ado.

@ana42742 @decorouz @lagru feedback welcome!

[1] Katie McDole, Léo Guignard, Fernando Amat, Andrew Berger, Grégoire Malandain, Loïc A. Royer, Srinivas C. Turaga, Kristin Branson, Philipp J. Keller (2018) "In Toto Imaging and Reconstruction of Post-Implantation Mouse Development at the Single-Cell Level" Cell, 175(3):859-876.e33. ISSN: 0092-8674 https://doi.org/10.1016/j.cell.2018.09.031

Checklist

Release note

Summarize the introduced changes in the code block below in one or a few sentences. The
summary will be included in the next release notes automatically:

...

@mkcor mkcor added the 📄 type: Documentation Updates, fixes and additions to documentation label Jan 27, 2024
@mkcor (Member Author) commented on Jan 28, 2024

I'm not sure why loading the second .npy file raises `ValueError: Cannot load file containing pickled data`. I tried

gt = np.load(io.BytesIO(resp.content), allow_pickle=True)

but, as expected: UnpicklingError: Failed to interpret file <_io.BytesIO object at 0x7ff45f0aea20> as a pickle.
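For what it's worth, a plain array saved with `np.save` loads from a `BytesIO` without `allow_pickle`; the "pickled data" error can also occur when the download returns something other than the raw .npy bytes (e.g. an HTML page or a Git LFS pointer instead of the file). A minimal, self-contained sketch of the in-memory round trip (the URL in the comment is a placeholder, not the actual data location):

```python
import io

import numpy as np

# In the real workflow the bytes would come from an HTTP download, e.g.
# (hypothetical URL):
#   import requests
#   resp = requests.get("https://example.com/ground_truth.npy")
#   buf = io.BytesIO(resp.content)
# Here we simulate the round trip in memory so the sketch is self-contained.
arr = np.arange(12, dtype=np.uint16).reshape(3, 4)
buf = io.BytesIO()
np.save(buf, arr)  # writes a plain (non-pickled) .npy
buf.seek(0)

gt = np.load(buf)  # allow_pickle defaults to False; plain arrays load fine
print(gt.shape)    # (3, 4)
```

If this in-memory round trip works but the downloaded bytes don't, the problem is likely in what the server returned rather than in `np.load` itself.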

Anyway, here's a screenshot of the missing figure:
[Image: comparison]

# Ensure TGMM result is an image of type "labeled"
assert gt.dtype in [np.uint16, np.uint32, np.uint64]
assert gt.min() == 0
assert gt.max() == np.unique(gt).shape[0] - 1
mkcor (Member Author):

@lagru maybe, once we type the API, the above could turn into a one-liner looking like assert isinstance(gt, Labels)? 💪

Member:

I'm not sure, isinstance() works with class inheritance, typing is not necessarily about that but might be about structural typing and such. And I find it difficult to imagine that running a type checker would check the actual content like it's done here.

mkcor (Member Author):

What is structural typing? 👀

Member:

Have a look at protocols and PEP 544 -- Protocols: Structural subtyping (static duck typing) if you are interested. Basically it's trying to address the problem that typing doesn't support a lot of the more dynamic duck typing capabilities of Python. It's also referred to as "static duck typing".
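To illustrate the structural-subtyping idea: with `typing.Protocol` plus `runtime_checkable`, `isinstance()` only checks that the required members exist; it says nothing about the array's contents (dtype range, consecutive labels), which is exactly the limitation raised above. `LabelsLike` here is hypothetical, not an actual scikit-image type:

```python
from typing import Protocol, runtime_checkable

import numpy as np


@runtime_checkable
class LabelsLike(Protocol):
    """Hypothetical protocol; any object exposing these members matches,
    regardless of inheritance (static duck typing)."""

    dtype: np.dtype

    def min(self): ...
    def max(self): ...


gt = np.zeros((4, 4), dtype=np.uint16)
print(isinstance(gt, LabelsLike))      # True: ndarray matches structurally
print(isinstance([0, 1], LabelsLike))  # False: a list has no .dtype
```

Note that the check passes for any ndarray, including a float image, so it could not replace the content assertions in the diff above.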

@mkcor mkcor marked this pull request as ready for review May 11, 2024 21:56
Comment on lines +9 to +14
.. [1] McDole K, Guignard L, Amat F, Berger A, Malandain G, Royer LA,
Turaga SC, Branson K, Keller PJ (2018) "In Toto Imaging and
Reconstruction of Post-Implantation Mouse Development at the
Single-Cell Level" Cell, 175(3):859-876.e33.
ISSN: 0092-8674
:DOI:`10.1016/j.cell.2018.09.031`
Member:

Not sure if that's supported but what do you think about moving this out of the way to the end of the example?

#
# data = klb.readfull('Mmu_E1_CAGTAG1.TM000184_timeFused_blending/SPM00_TM000184_CM00_CM01_CHN00.fusedStack.corrected.shifted.klb')
# sample = data[400:450, 1000:1750, 400:900]
# np.savez_compressed('sample_3D_frame_184.npz', sample)
Member:

I was originally looking at the rendered example and was confused about why this code was passing even though pyklb isn't available when building the docs.

This might confuse readers in a similar manner; is this really relevant here? Or how can we make it clearer that this is not part of the example's workflow?

mkcor (Member Author):

The rendering of such code blocks used to make it clear that they weren't part of the current workflow... They would appear as 'commented out,' sort of. I had used this formatting in a previous example ("Estimate anisotropy in a 3D microscopy image"). I can see that it's not the case anymore (with the new theme, I guess).

Yes, it would be best to find some alternative formatting...

@lagru (Member) commented on May 13, 2024:

Maybe a code block within a citation block? 🤔

doc/examples/applications/plot_3d_segmentation_embryo.py (outdated; resolved)
# The sample dataset is a 3D image with 50 `xy` sections stacked along `z`. Let us
# visualize it by picking every fifth section.

data_montage = ski.util.montage(im3d[::5], grid_shape=(2, 5), padding_width=5)
Member:

Always surprised when discovering another small convenience function. 😄
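For readers who haven't met it before, a small self-contained sketch of what `ski.util.montage` does, on toy data rather than the embryo dataset: it tiles a stack of equally-shaped 2D frames into a single 2D image.

```python
import numpy as np
import skimage as ski

# Toy stack of ten 10x10 frames with increasing intensity.
stack = np.stack([np.full((10, 10), i, dtype=np.uint8) for i in range(10)])

# Tile every fifth frame into one 2D image, with 5 pixels of padding
# between tiles, mirroring the call in the example.
m = ski.util.montage(stack[::5], grid_shape=(1, 2), padding_width=5)
print(m.ndim)  # 2
```

The `grid_shape` argument fixes the tiling layout; by default, gaps are filled with the mean intensity of the input.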

doc/examples/applications/plot_3d_segmentation_embryo.py (outdated; resolved)
Member:

Taking a step back, I'm curious about your (teaching) goals with this example.

At first glance, the segmentation approach taken here seems very similar to the one taken in Segment human cells (in mitosis). Is the goal to compare the approach to a machine-learning-based one?

mkcor (Member Author):

At first glance, the segmentation approach taken here seems very similar to the one taken in Segment human cells (in mitosis).

Yes, but in 3D. I also had in mind comparing this 'native' 3D segmentation with a 2D version along z ('stitching' the xy sections back together afterwards): I tried quickly, and the direct 3D segmentation works much better. Also, I'd like to extend this example one day (or write a new one that refers to this one) with the tracking challenge, since the original full dataset is not only 3D in space but also has a time dimension.

Is the goal to compare the approach to a machine-learning-based one?

Yes!
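The 3D-versus-2D point can be illustrated with a toy volume: a single object spanning several z sections gets one label from native 3D connected-component labeling, while per-slice 2D labeling produces independent label images that would need stitching across z afterwards. A minimal sketch using `scipy.ndimage` (chosen here for brevity; the example's actual pipeline differs):

```python
import numpy as np
from scipy import ndimage as ndi

# Toy binary volume: one solid blob spanning z = 1..3.
vol = np.zeros((5, 8, 8), dtype=bool)
vol[1:4, 2:6, 2:6] = True

# Native 3D labeling sees a single connected component...
labels_3d, n_3d = ndi.label(vol)
print(n_3d)  # 1

# ...whereas per-slice 2D labeling restarts the label count in every
# section, so the same physical object gets unrelated labels that would
# have to be matched and stitched across z afterwards.
per_slice = [ndi.label(section)[0] for section in vol]
```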

mkcor and others added 2 commits May 13, 2024 09:59