Info for RLQ 2021 invited speakers will be updated soon. Check our YouTube Recordings for RLQ 2020 @ ECCV.
How robust are current state-of-the-art recognition and detection algorithms in non-ideal visual environments? While visual recognition research has made tremendous progress in recent years, most models are trained, applied, and evaluated on high-quality (HQ) visual data. However, in many emerging applications such as robotics and autonomous driving, the performance of visual sensing and analytics is largely jeopardized by low-quality (LQ) visual data acquired from unconstrained environments, which suffer from various types of degradation such as low resolution, noise, occlusion, motion blur, poor contrast, brightness, low sharpness, and out-of-focus capture. We are organizing the 3rd RLQ workshop in conjunction with ICCV 2021 to provide an integrated forum for both low-level and high-level vision researchers to review recent progress in robust recognition models for LQ visual data and novel image restoration algorithms. You can contribute to our workshop in three ways:
For inquiry, please send emails to one of the following addresses:
| Paper Submission Deadline | 8 Aug 2021 (23:59 PDT) |
| Notification to Authors | 15 Aug 2021 (23:59 PDT) |
| Camera-Ready Deadline | 17 Aug 2021 (23:59 PDT) |
| Workshop Presentation | 11-17 Oct 2021 (TBD) |
We embrace the most advanced deep learning systems, while remaining open to classical physically grounded models and feature engineering, as well as any well-motivated combination of the two streams. We solicit papers on topics including, but not limited to, the following:
Submitted papers will go through a double-blind peer-review process, and accepted papers will appear in the ICCV Workshop Proceedings. All submissions should follow the requirements of the ICCV main conference in terms of format and length.
We will select Best Papers from this year's accepted papers, with potential monetary prizes for the winners. In addition, we will support authors in need with ICCV registration and related costs.