Applied Vision

Fourier contrast optimization for occluded motion estimation

Authors
  • Ido Akov (Austrian Institute of Technology)
  • Roman Pflugfelder (Austrian Institute of Technology)
  • Daniel Cremers (Technical University of Munich)

Abstract

Fragmented occlusion, as encountered in through-foliage observation, makes monocular motion estimation difficult because the target is visible only through sparse, discontinuous image fragments. We estimate motion by warping frames under a parametric model and maximizing the contrast of their integrated image. Although effective for 2DoF translation, this objective becomes ill-conditioned for 4DoF similarity motion. To analyze this, we derive a Fourier-domain reformulation that exposes the optimization structure and shows that static occlusion biases the objective toward zero motion. This motivates a decoupled 4DoF pipeline in which rotation and scale are estimated separately from translation. On synthetic videos with controlled fragmented occlusion, the Fourier formulation matches the spatial baseline at low-to-mid occlusion while converging faster, and the decoupled pipeline restores reliable translation recovery where joint 4DoF optimization fails.
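The core idea of the abstract can be illustrated with a minimal sketch: warp frames under a candidate translation, average them into an integrated image, and score the warp by the integrated image's contrast (here taken as pixel variance, one common choice; the paper's exact contrast measure and optimizer are not specified in this abstract). The function names, the brute-force integer search, and the two-frame setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def integrated_contrast(frames, shifts):
    """Warp each frame by its integer 2DoF translation, average them,
    and return the variance of the integrated image as a contrast score.
    (Variance-as-contrast is an illustrative assumption.)"""
    warped = [np.roll(f, s, axis=(0, 1)) for f, s in zip(frames, shifts)]
    return np.var(np.mean(warped, axis=0))

def estimate_translation(frames, max_shift=5):
    """Brute-force search for the shift of frame 1 relative to frame 0
    that maximizes integrated-image contrast (hypothetical 2-frame case;
    the paper optimizes a parametric model rather than exhaustively)."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = integrated_contrast(frames, [(0, 0), (dy, dx)])
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Synthetic example: a bright square translated by (2, 3) between frames.
rng = np.random.default_rng(0)
f0 = rng.normal(0.0, 0.05, (32, 32))
f0[10:16, 10:16] += 1.0
f1 = np.roll(f0, (2, 3), axis=(0, 1))

# Shifting f1 by (-2, -3) re-aligns the square with f0, so their mean is
# sharp (high variance); any other shift blurs the square and lowers it.
print(estimate_translation([f0, f1]))  # → (-2, -3)
```

When the target is seen only through sparse fragments, masking out occluded pixels before averaging leaves the same objective well defined, which is what makes contrast maximization attractive in the through-foliage setting the abstract describes.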

How to Cite:

Akov, I., Pflugfelder, R. & Cremers, D. (2026) “Fourier contrast optimization for occluded motion estimation”, Proceedings of the Austrian Symposium on AI, Robotics, and Vision 3(1), 107-111.


Published on
2026-04-10

Peer Reviewed