Bottom-up visual attention model based on FPGA

Abstract

We present a model and a hardware architecture for the computation of bottom-up inherent visual attention on FPGA. The bottom-up attention (saliency) map is generated from local energy, local orientation maps, and red-green and blue-yellow color opponencies. We describe the simplifications required to parallelize and embed the model without significant loss of accuracy. We also include feedback loops that adapt the feature weights to the target application.
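The combination stage described above can be illustrated with a minimal software sketch. This is not the paper's FPGA implementation; the feature maps below (gradient-based energy, a simple orientation contrast, and standard RG/BY opponency formulas) are hypothetical stand-ins for the model's filter banks, and the `weights` parameter plays the role of the adaptive feature weights set by the feedback loops.

```python
import numpy as np

def saliency(rgb, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of four feature maps into a saliency map.

    Illustrative stand-ins for the model's features:
      - local energy: gradient magnitude of intensity
      - local orientation: vertical-vs-horizontal gradient contrast
      - red-green and blue-yellow color opponencies
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0

    # Local energy: gradient magnitude (stand-in for an oriented filter bank).
    gy, gx = np.gradient(intensity)
    energy = np.hypot(gx, gy)

    # Local orientation contrast between the two gradient components.
    orientation = np.abs(np.abs(gx) - np.abs(gy))

    # Color opponency channels as commonly defined in saliency models.
    rg = np.abs(r - g)
    by = np.abs(b - (r + g) / 2.0)

    # Normalize each map to [0, 1] before the weighted sum, mirroring the
    # fixed-range normalization a hardware pipeline would need.
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    maps = [norm(m) for m in (energy, orientation, rg, by)]
    return sum(w * m for w, m in zip(weights, maps))
```

Changing `weights` per application (e.g., boosting the color channels for a colored-target search) corresponds to the role of the feedback loops in the model.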

Publication
In Electronics, Circuits and Systems (ICECS), 2012 19th IEEE International Conference on


Francisco Barranco
Associate Professor of Computer Engineering

Neuromorphic engineering, Hardware, CPS; Granada, Telluride, UMD & DC.
