I would like a method to analyze retail shelf photographs, detect the shelf rail/edge angle relative to horizontal, and return "Not Detectable" if no legitimate shelf line is found. If the detected deviation is greater than 15 degrees from horizontal, the shelf should be flagged as misaligned.
The reference photographs show shelf rails, price strips, and product rows that could potentially be used as horizontal cues.
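For reference, the angle/threshold part of this is straightforward; here is a minimal sketch of what I mean by "deviation from horizontal" (pure Python, the 15-degree cutoff is my requirement, the function name is just for illustration):

```python
import math

def classify_shelf_angle(x1, y1, x2, y2, max_tilt_deg=15.0):
    """Classify a detected line segment by its deviation from horizontal.

    Coordinates are image coordinates (y grows downward, which does not
    affect the magnitude of the deviation). Returns a (label, deviation)
    tuple, where label is "aligned" or "misaligned".
    """
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    # Fold the angle into [-90, 90] so segment direction doesn't matter.
    if angle > 90:
        angle -= 180
    elif angle < -90:
        angle += 180
    deviation = abs(angle)
    label = "aligned" if deviation <= max_tilt_deg else "misaligned"
    return label, deviation
```

The hard part is not this calculation but deciding which detected segments are actually shelf rails.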
What I would like advice on:

- The most reliable way to detect shelf horizontal lines in a mobile app.
- How to calculate the angle of detected lines from horizontal.
- How to decide when the shelf is not detectable.
- Best practices for handling blurry, occluded, or low-contrast shelf images.
I tried using OpenCV with the Probabilistic Hough Line Transform and Apple's VNDetectHorizonRequest. However, the results have not been accurate or reliable. The main issues I am facing:

- Hough picks up too many non-shelf lines (product edges, label text, reflections)
- Results vary significantly with small changes in lighting or product density
- I cannot confidently determine when there are truly no detectable shelf lines vs. just noisy detection
- When shelves really are angled, it gives incorrect results
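To illustrate the kind of post-filtering I have been experimenting with on the `cv2.HoughLinesP` output (this sketch operates on the returned `(N, 1, 4)` array directly; the length and angle thresholds are values I made up, not anything standard):

```python
import numpy as np

def filter_shelf_lines(lines, max_tilt_deg=25.0, min_length=80.0):
    """Keep only long, near-horizontal segments from HoughLinesP output.

    `lines` has shape (N, 1, 4); each row is (x1, y1, x2, y2). Product
    edges and label text tend to produce short or steep segments, so
    those are rejected first. Returns (x1, y1, x2, y2, angle, length)
    tuples for the surviving segments.
    """
    kept = []
    for x1, y1, x2, y2 in lines.reshape(-1, 4).astype(float):
        length = np.hypot(x2 - x1, y2 - y1)
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        # Fold into [-90, 90] so segment direction doesn't matter.
        if angle > 90:
            angle -= 180
        elif angle < -90:
            angle += 180
        if length >= min_length and abs(angle) <= max_tilt_deg:
            kept.append((x1, y1, x2, y2, angle, length))
    return kept
```

Even with this, reflections and rows of identical product edges still survive the filter, which is why I suspect the pre-processing stage is where I need help.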
What I am looking for:

- Is there a better pre-processing pipeline (morphological ops, masking, etc.) to isolate shelf rails specifically before running Hough?
- Is there a scoring/confidence-threshold strategy to decide when to return "Not Detectable" vs. a low-confidence angle?
- Are there other approaches on iOS (CoreML, ARKit planes, CoreMotion) that might give more consistent results for this specific use case?
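On the second point, this is the shape of decision logic I have in mind, in case it clarifies the question (one possible heuristic I sketched, not an established method; `min_support` and `max_spread_deg` are arbitrary values):

```python
import numpy as np

def shelf_angle_or_not_detectable(angles, lengths,
                                  min_support=3, max_spread_deg=5.0):
    """Return a length-weighted shelf angle, or None for "Not Detectable".

    `angles` and `lengths` come from already-filtered line segments.
    The heuristic: require several segments that roughly agree; if the
    weighted angles disagree too much, treat the detection as noise.
    """
    angles = np.asarray(angles, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    if len(angles) < min_support:
        return None  # too few supporting segments
    mean = np.average(angles, weights=lengths)
    spread = np.sqrt(np.average((angles - mean) ** 2, weights=lengths))
    if spread > max_spread_deg:
        return None  # segments disagree: likely noisy detection
    return float(mean)
```

What I cannot figure out is how to pick thresholds like these so they hold up across lighting and product-density changes, or whether a learned confidence score would be more robust.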
I want users to avoid uploading pictures like these (products cannot be seen clearly from the angle, shelves not detectable):

![Example 1]
