I'm wondering if there are algorithms that perform some sort of "adaptive"
segmentation on a grayscale image. The image is a 16-bit distance map from a
range-finding camera (not that it really matters).
What I currently have is a simple algorithm that splits the 0-65,535 range
into X equally sized bins, then scans all pixels in the source image and
"turns on" the pixel at the same position in the binary image for the bin
that pixel's value falls into, thereby producing X binary images.
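To make the question concrete, here is a minimal sketch of that fixed binning step in Python/NumPy (the function name and shapes are just for illustration, not from any particular library):

```python
import numpy as np

def bin_images(depth, num_bins):
    """Split 0-65535 into num_bins equal-width bins and return one
    boolean image per bin (True where the pixel falls in that bin)."""
    edges = np.linspace(0, 65536, num_bins + 1)
    idx = np.digitize(depth, edges[1:-1])  # per-pixel bin index 0..num_bins-1
    return [(idx == b) for b in range(num_bins)]
```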
What I want are the same kind of binary images, but instead of blindly
assigning pixels to bins, I would like some correlation between adjacent
pixels: if two neighboring pixels fall on opposite sides of a bin boundary
by only a small margin, they should still end up in the same bin, since they
are probably part of the same "object".
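One family of techniques that behaves this way is region growing (connected components under a local similarity tolerance): group neighboring pixels whose depths differ by less than some tolerance into one region, then assign the whole region to a single bin. A rough sketch, with the tolerance `tol` and the mean-depth assignment being my own assumptions about how one might do it:

```python
from collections import deque
import numpy as np

def adaptive_bin(depth, num_bins, tol):
    """Flood-fill 4-connected regions of pixels whose neighbors differ
    by at most `tol`, then place each whole region into the bin of its
    mean depth, so pixels straddling a bin edge stay together."""
    h, w = depth.shape
    labels = -np.ones((h, w), dtype=int)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            rid = len(regions)
            labels[sy, sx] = rid
            pixels = [(sy, sx)]
            q = deque(pixels)
            while q:  # breadth-first flood fill from the seed pixel
                y, x = q.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] == -1
                            and abs(int(depth[ny, nx]) - int(depth[y, x])) <= tol):
                        labels[ny, nx] = rid
                        pixels.append((ny, nx))
                        q.append((ny, nx))
            regions.append(pixels)
    bin_width = 65536 / num_bins
    out = [np.zeros((h, w), dtype=bool) for _ in range(num_bins)]
    for pixels in regions:
        mean = sum(int(depth[p]) for p in pixels) / len(pixels)
        b = min(int(mean // bin_width), num_bins - 1)
        for p in pixels:
            out[b][p] = True
    return out
```

With four bins the first boundary sits at 16,384, so two adjacent pixels at, say, 16,380 and 16,390 would be split by the fixed scheme but land in the same bin here.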
Does anybody have any pointers to algorithms that do this? I would think
something like this exists.
Best regards, Stefan Freyr.