Projects

BIO-PERCEPTION - Next generation of smart vision systems for real-time processing with bio-inspired sensors

Our goal for this project is to lay the groundwork for a new generation of smart autonomous agents able to perform 3D perception in real time using biologically inspired vision sensors. These sensors process all pixels independently and only signal changes in the scene (events) when a substantial difference in luminance occurs over time at a specific location (which happens only at object contours and textures). This reduces the transmission of redundant information and hence the data bandwidth.
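
The event-generation principle can be sketched in a few lines of Python. This is an illustrative simulation of the sensor's behaviour, not actual device code; the threshold value and function name are assumptions.

```python
import numpy as np

THRESHOLD = 15  # assumed contrast threshold (8-bit intensity units)

def generate_events(reference, frame, t):
    """Sketch of per-pixel event generation: each pixel keeps the last
    luminance it reported and fires an event only when the new value
    differs from that reference by more than the threshold."""
    diff = frame.astype(np.int16) - reference.astype(np.int16)
    ys, xs = np.where(np.abs(diff) > THRESHOLD)
    # Polarity is +1 for a brightness increase, -1 for a decrease.
    events = [(t, x, y, 1 if diff[y, x] > 0 else -1) for y, x in zip(ys, xs)]
    # Only the triggered pixels update their stored reference.
    new_reference = reference.copy()
    new_reference[ys, xs] = frame[ys, xs]
    return new_reference, events
```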

Drone control based on vision and gestural programming

This project integrates gesture recognition as a drone control mechanism to provide the drone with a certain degree of autonomy with respect to its operator, which is especially useful in military operations. The application has been implemented in Python, mainly using the OpenCV and OpenPose libraries.
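
A minimal sketch of the gesture-mapping step, assuming OpenPose has already produced 2D body keypoints (indices follow the BODY_25 layout: 1 = neck, 4 = right wrist, 7 = left wrist); the command names and confidence threshold are illustrative, not the project's actual vocabulary.

```python
def gesture_to_command(keypoints):
    """Map one person's keypoints (a sequence of (x, y, confidence)
    triples, e.g. from OpenPose) to a drone command, or None."""
    neck, r_wrist, l_wrist = keypoints[1], keypoints[4], keypoints[7]
    if min(neck[2], r_wrist[2], l_wrist[2]) < 0.3:
        return None  # ignore low-confidence detections
    # Image y grows downwards, so "raised" means a smaller y than the neck.
    right_up = r_wrist[1] < neck[1]
    left_up = l_wrist[1] < neck[1]
    if right_up and left_up:
        return "land"
    if right_up:
        return "takeoff"
    if left_up:
        return "hover"
    return None
```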

Drones for video surveillance applications

The final objective of this project is to give a drone enough autonomy to carry out video surveillance tasks. The main tasks are people detection and people tracking using the video stream provided by the drone's onboard camera, based on computer vision and machine learning algorithms.
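
As a rough sketch of the detection step, OpenCV's built-in HOG pedestrian detector can be run directly on a video stream; the project's actual detector may differ, and the capture source below is a stand-in for the drone feed.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # stand-in for the drone's onboard camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people; returns bounding boxes and confidence weights.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("people", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```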

Real-world anomaly detection in surveillance through semi-supervised federated active learning

This project researches and deploys semi-supervised deep learning models for anomaly detection in surveillance videos on a synchronous federated learning architecture, in which training is distributed across many nodes.
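
The synchronous aggregation step underlying such an architecture can be sketched as federated averaging (FedAvg); the project's actual aggregation strategy and model framework are not detailed here, so this is illustrative only.

```python
import numpy as np

def federated_average(node_weights, node_sizes):
    """Aggregate per-node model weights into a new global model.

    node_weights: one list of numpy arrays (layer weights) per node.
    node_sizes:   number of local training samples on each node.
    """
    total = float(sum(node_sizes))
    n_layers = len(node_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Weighted sum of each node's layer, proportional to its data size.
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(node_weights, node_sizes))
        global_weights.append(acc)
    return global_weights
```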

SLAM for unknown scenarios with Cartographer

The project integrates Cartographer, a SLAM-based map-building algorithm, on a robot via ROS. Images from the robot are fed into a machine learning library to detect objects.
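
A hypothetical sketch of the glue node implied above: while Cartographer builds the map from the robot's sensors, a ROS node subscribes to the camera topic and hands each frame to a detector. The topic name and the detect() stub are placeholders.

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def detect(frame):
    # Stand-in for the real machine learning detector.
    return []

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    detections = detect(frame)
    rospy.loginfo("detected %d objects", len(detections))

rospy.init_node("object_detector")
rospy.Subscriber("/camera/image_raw", Image, on_image)  # placeholder topic
rospy.spin()
```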

Egomotion estimation methods for drones

This project integrates the bebop_autonomy package with ORB-SLAM2 on the ROS platform to perform localization and mapping with the Parrot Bebop 2 drone. The main applications are indoor navigation and mapping.
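
Commanding the drone through bebop_autonomy's standard topics can be sketched as follows (topic names assume the default "bebop" namespace; ORB-SLAM2 would run alongside, consuming the camera stream).

```python
import rospy
from std_msgs.msg import Empty
from geometry_msgs.msg import Twist

rospy.init_node("bebop_demo")
takeoff = rospy.Publisher("/bebop/takeoff", Empty, queue_size=1)
land = rospy.Publisher("/bebop/land", Empty, queue_size=1)
cmd_vel = rospy.Publisher("/bebop/cmd_vel", Twist, queue_size=1)

rospy.sleep(1.0)          # let the publishers connect
takeoff.publish(Empty())  # take off
rospy.sleep(5.0)

fwd = Twist()
fwd.linear.x = 0.2        # gentle forward motion (normalised units)
cmd_vel.publish(fwd)
rospy.sleep(2.0)

cmd_vel.publish(Twist())  # zero velocity: stop
land.publish(Empty())     # land
```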

Dataset for visual navigation with event-based sensors

This work presents a dataset that provides both frame-free event data and classic image, motion, and depth data, so that different event-based methods can be assessed and compared to conventional frame-based methods. We hope that this will help researchers understand the potential of the new technology of event-based vision.
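
For illustration, a loader for the plain-text event format common to several event-camera datasets (one "timestamp x y polarity" line per event) might look like this; the actual file layout of this dataset may differ.

```python
import numpy as np

def load_events(path):
    """Return events as a structured array of (t, x, y, p)."""
    raw = np.atleast_2d(np.loadtxt(path))
    events = np.zeros(len(raw), dtype=[("t", "f8"), ("x", "i4"),
                                       ("y", "i4"), ("p", "i1")])
    events["t"] = raw[:, 0]
    events["x"] = raw[:, 1]
    events["y"] = raw[:, 2]
    events["p"] = np.where(raw[:, 3] > 0, 1, -1)  # polarity as +/-1
    return events
```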

Real-time clustering and tracking for event-based sensors

This work presents a real-time clustering technique that takes advantage of the unique properties of event-based vision sensors. Our approach redefines the well-known mean-shift clustering method using asynchronous events instead of conventional frames.
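
A simplified sketch of the idea (not the paper's exact formulation): instead of running mean shift over whole frames, each incoming event nudges the nearest cluster centre towards it, so clusters follow moving objects continuously. The RADIUS and ALPHA parameters are assumed.

```python
import math

RADIUS = 20.0   # max distance for an event to join a cluster (pixels)
ALPHA = 0.05    # per-event shift rate of the cluster centre

clusters = []   # list of [cx, cy] cluster centres

def process_event(x, y):
    """Update the clusters with a single (x, y) event."""
    best, best_d = None, RADIUS
    for c in clusters:
        d = math.hypot(x - c[0], y - c[1])
        if d < best_d:
            best, best_d = c, d
    if best is None:
        clusters.append([float(x), float(y)])  # start a new cluster
    else:
        # Shift the centre a small step towards the event, mean-shift style.
        best[0] += ALPHA * (x - best[0])
        best[1] += ALPHA * (y - best[1])
```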

Visuoimitation - Baxter imitates an operator's movements by watching them

The objective of the system is to control a Baxter robot using the Kinect v2 and its body-structure extraction modules (body tracking) under the ROS (Robot Operating System) framework. Modules will be developed to interface between the robot and the Kinect body pose estimation so that the robot can imitate the operator's movements in real time.
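
A hypothetical sketch of one piece of that interface: given 3D positions of the operator's shoulder, elbow, and wrist from the Kinect body tracker, derive a rough elbow-bend angle and command the matching Baxter joint. The angle mapping is purely illustrative; the real system needs a full retargeting step.

```python
import math
import rospy
import baxter_interface

def angle_between(a, b, c):
    """Angle at point b formed by segments b->a and b->c (radians)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(q * q for q in v2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

rospy.init_node("visuoimitation")
left_arm = baxter_interface.Limb("left")

def imitate(shoulder, elbow, wrist):
    # Map the operator's elbow bend onto Baxter's left elbow joint (e1).
    bend = angle_between(shoulder, elbow, wrist)
    left_arm.set_joint_positions({"left_e1": math.pi - bend})
```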

Object manipulation with the Baxter robot

The project presents several demonstrations of pick-and-place tasks with the Baxter research robot, including recognition and manipulation of objects of various shapes, sizes, and colours. It also covers trajectory and motion planning, the use of different grippers, and simulation in Gazebo.
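
A minimal pick-and-place sketch using the Baxter SDK's limb and gripper interfaces; the joint-angle targets below are placeholders for values that would come from object recognition and motion planning.

```python
import rospy
import baxter_interface

rospy.init_node("pick_and_place_demo")
limb = baxter_interface.Limb("left")
gripper = baxter_interface.Gripper("left")
gripper.calibrate()

# Placeholder joint-angle targets (radians) for illustration only.
pick_pose = {"left_s0": -0.3, "left_s1": -0.5, "left_e0": 0.0,
             "left_e1": 1.2, "left_w0": 0.0, "left_w1": 0.8, "left_w2": 0.0}
place_pose = dict(pick_pose, left_s0=0.3)  # same posture, shifted base joint

limb.move_to_joint_positions(pick_pose)   # reach the object
gripper.close()                           # grasp
limb.move_to_joint_positions(place_pose)  # carry to the target location
gripper.open()                            # release
```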