Drones

BIO-PERCEPTION - Next generation of smart vision systems for real-time processing with bio-inspired sensors

Our goal in this project is to lay the foundations for a new generation of smart autonomous agents able to perform 3D perception in real time using biologically inspired vision sensors. These sensors process each pixel independently and trigger changes in the scene (events) only when the luminance at that location changes substantially over time, which happens mainly at object contours and textures. This suppresses the transmission of redundant information and hence reduces the data bandwidth.
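The event-generation principle described above can be sketched as follows. This is a minimal simulation, not the sensor's actual circuitry: the function name, the log-intensity input, and the threshold value are illustrative assumptions, chosen because event cameras are commonly modeled as thresholding log-intensity changes per pixel.

```python
import numpy as np

def generate_events(prev_log_i, curr_log_i, threshold=0.2):
    """Emit events where the log-intensity changed by at least `threshold`.

    Returns (row, col, polarity) tuples: polarity +1 for a brightness
    increase, -1 for a decrease. Unchanged pixels emit nothing, which is
    how event sensors suppress redundant data.
    """
    diff = curr_log_i - prev_log_i
    rows, cols = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

# A flat scene produces events only where an edge appears:
prev = np.zeros((4, 4))
curr = prev.copy()
curr[:, 2] = 1.0  # a bright edge appears in column 2
events = generate_events(np.log1p(prev), np.log1p(curr))
```

Only the four pixels along the new edge fire, all with positive polarity; the rest of the frame is silent, illustrating the bandwidth saving.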

BRAINAV - BRAIn-inspired visual processing for real-time energy-efficient autonomous NAVigation

BRAINAV builds on the integration of visual processing pipelines for energy-efficient edge processing using neuromorphic strategies, applied to autonomous navigation. First, visual processing is a core component of perception systems, and in particular of scene awareness in navigation: robotic agents must understand their context to plan and make decisions accordingly. However, computer vision is very demanding in terms of resources, and effective navigation requires low latencies to close perception-action loops in real time. Second, regarding 3D perception and scene understanding, state-of-the-art solutions focus on accuracy, but other qualities such as energy consumption must also be taken into account.
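The real-time constraint on perception-action loops mentioned above can be made concrete with a small sketch. The function names and the 50 ms budget are illustrative assumptions (a 50 ms budget corresponds to a 20 Hz control loop), not part of the BRAINAV design.

```python
import time

def control_step(perceive, act, budget_s=0.05):
    """Run one perception-action iteration and check the latency budget.

    `perceive` produces an observation, `act` maps it to a command;
    both are placeholders for the real pipeline stages. Returns the
    command, the measured latency, and whether the budget was met.
    """
    start = time.perf_counter()
    observation = perceive()
    command = act(observation)
    latency = time.perf_counter() - start
    return command, latency, latency <= budget_s

# Trivial stand-ins for the perception and control stages:
cmd, lat, on_time = control_step(lambda: 1.0, lambda obs: obs * 2.0,
                                 budget_s=1.0)
```

If a loop iteration exceeds its budget, commands are computed on stale observations, which is why both latency and energy, not accuracy alone, matter for edge deployment.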

Drone control based on vision and gestural programming

This project integrates gesture recognition as a drone control mechanism, giving the drone a certain degree of autonomy with respect to its operator, which is especially useful in military operations. The application is implemented in Python, mainly using the OpenCV and OpenPose libraries.
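A gesture-to-command mapping of this kind can be sketched from 2D body keypoints such as those OpenPose estimates. This is a hypothetical rule, not the project's actual classifier: the joint names, the single-rule logic, and the command labels are illustrative assumptions.

```python
def classify_gesture(keypoints):
    """Map 2D body keypoints to a drone command.

    `keypoints` maps joint names to (x, y) image coordinates, with y
    growing downward as in image space. A wrist raised above the
    shoulder is read as a takeoff gesture; anything else as hover.
    Joint names and commands are illustrative placeholders.
    """
    shoulder_y = keypoints["right_shoulder"][1]
    wrist_y = keypoints["right_wrist"][1]
    if wrist_y < shoulder_y:  # smaller y means higher in the image
        return "takeoff"
    return "hover"

# Arm raised: wrist (y=80) above shoulder (y=200) in image coordinates.
raised = {"right_shoulder": (320, 200), "right_wrist": (340, 80)}
lowered = {"right_shoulder": (320, 200), "right_wrist": (340, 360)}
```

A real system would debounce such decisions over several frames so that a single noisy pose estimate cannot trigger a flight command.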

Drones for video surveillance applications

The final objective of this project is to give a drone enough autonomy to carry out video surveillance tasks. The main tasks are people detection and people tracking on the video stream from the drone's onboard camera, using computer vision and machine learning algorithms.
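The tracking half of such a pipeline typically associates new detections with existing tracks frame by frame. The sketch below shows one common scheme, greedy intersection-over-union matching; it is an assumed baseline, not necessarily the method this project uses.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def associate(tracks, detections, iou_min=0.3):
    """Greedily match each track to its best-overlapping detection.

    `tracks` maps track id -> last known box; `detections` is a list of
    boxes from the current frame. Returns {track_id: detection_index}
    for matches whose overlap is at least `iou_min`.
    """
    matches, used = {}, set()
    for tid, box in tracks.items():
        best, best_iou = None, iou_min
        for i, det in enumerate(detections):
            if i in used:
                continue
            overlap = iou(box, det)
            if overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# One person tracked; two detections arrive, only the nearby one matches.
tracks = {0: (0, 0, 10, 10)}
detections = [(100, 100, 110, 110), (1, 1, 11, 11)]
matches = associate(tracks, detections)
```

Unmatched detections would spawn new tracks and unmatched tracks would eventually be dropped, but those bookkeeping steps are omitted here.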

Egomotion estimation methods for drones

This project integrates the bebop_autonomy package with ORB-SLAM2 on the ROS platform to perform localization and mapping with the Parrot Bebop 2 drone. The main applications are indoor navigation and mapping.
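Ego-motion falls out of the per-frame poses a SLAM system estimates: the motion between two frames is the relative transform between their camera poses. The sketch below assumes 4x4 homogeneous world-from-camera matrices; the function name and the convention are illustrative, not ORB-SLAM2's API.

```python
import numpy as np

def relative_motion(t_w_prev, t_w_curr):
    """Ego-motion between consecutive camera poses.

    Each argument is a 4x4 homogeneous world-from-camera transform,
    as a SLAM system estimates per frame. The relative motion in the
    previous camera's frame is inv(T_prev) @ T_curr.
    """
    return np.linalg.inv(t_w_prev) @ t_w_curr

# The camera translates 2 m along x between frames, with no rotation:
t_prev = np.eye(4)
t_prev[:3, 3] = [1.0, 0.0, 0.0]
t_curr = np.eye(4)
t_curr[:3, 3] = [3.0, 0.0, 0.0]
rel = relative_motion(t_prev, t_curr)
```

Chaining these relative transforms recovers the trajectory, which is what makes the same machinery usable for both indoor navigation and mapping.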