Feature Request: Perform Edge Enhancement and Histogram Adjustment for Matching only #1301
I'm noticing that pre-processing the data seems to reduce the reported error in ODM by orders of magnitude, which is quite interesting. Please find Report.pdf attached for the original data, and Report.pdf attached for the pre-processed data.
I'd love to contribute. I'd need to know a few things first:
As for the approach, I think pre-processing the images on the fly and keeping them in memory will slow things down. Images should be pre-processed and kept in a separate folder for reference during feature matching.
I'm not super well-versed in this, but my thoughts are these:
If I may suggest, any image enhancement for feature detection should be done on the fly (without storing the images to disk). It will be faster (and much less complicated).
I'll defer to your suggestion as you guys have built it 😄
Increasing contrast using OpenCV:
This line of thought was recently corroborated by a user's test dataset consisting mostly of open snow, on which they performed automatic contrast and adaptive histogram equalization. Their approach created severe artifacting and "tiling" in the output, because the pipeline did not restrict the pre-processed images to matching only, but it did greatly improve their reconstruction. I've been curious about, and researching, using a first-order PCA on the images to match on the most significant features. This should be sensor- and dataset-agnostic. @smathermather has suggested doing an IHS (Intensity/Hue/Saturation) transform and then using the intensity channel. Cruising around the literature, it seems like both approaches have some prior art.
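The first-order PCA idea can be sketched with plain NumPy: project each RGB pixel onto the leading principal component of the channel covariance, yielding a single "most significant" band to run feature extraction on. The function name `first_pc_band` is hypothetical, not anything from the ODM codebase.

```python
import numpy as np

def first_pc_band(img):
    """Project RGB pixels onto the first principal component of the
    channel covariance, producing one band for feature matching."""
    h, w, c = img.shape
    flat = img.reshape(-1, c).astype(np.float64)
    flat -= flat.mean(axis=0)                  # center each channel
    cov = np.cov(flat, rowvar=False)           # c x c channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues ascending
    pc1 = eigvecs[:, -1]                       # leading eigenvector
    band = flat @ pc1                          # project pixels onto PC1
    # Rescale to 0..255 so it can be used as an 8-bit matching image
    band = (band - band.min()) / (np.ptp(band) + 1e-9) * 255.0
    return band.reshape(h, w).astype(np.uint8)
```

The IHS alternative would simply take the intensity channel (e.g. the mean of R, G, B) instead of the PCA projection; both collapse the image to one geometry-preserving band.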
Not sure if you've had the bandwidth to investigate any of the above. What are your thoughts?
@Saijin-Naib Hey! I won't be able to research, however, I can definitely help in dev 😄
Now I have to go back and find the papers, I've un-pinned the tabs haha |
In the past, I've done a bit of a hacky pre-processing of images by sharpening and auto-contrasting in a tool like XNConvert. I was reminded of this today working with a particularly reticent dataset.
I think it'd be great if we (optionally? I don't think always would hurt, though) pre-processed the input images for feature matching/extraction only, to help stitch datasets with contrast/exposure/sharpness issues, and maybe even net some better matching/extraction on good data, as well!
Ideally, these would be done non-destructively (in memory on the fly? cached on disk?) for the matching, and once matching is completed, the originally submitted images would be passed to the pipeline to colorize the point cloud and generate the orthophoto and textured models.