
Machine Learning & Vision

  • We investigate different nuances of visual perception in artificial intelligence systems, where computer vision and machine learning are combined to obtain robust data-driven methods addressing a variety of problems.

    In this multidisciplinary world, we are interested in mathematically sound algorithms, computational models, and a wide range of applications.

  • In particular, we study and develop methods for scene understanding, motion analysis, and action recognition, with applications to assisted living, human-machine interaction, and robotics.

  • Human Pose and Motion Understanding

    Computational models and cognitive science

    We consider motion understanding tasks in the general domain of Cognitive Computer Vision (or bio-inspired Computer Vision). To this end, we design strategies characterized by a strong interplay between visual computation and cognitive science, addressing tasks that include motion detection and recognition, action categorization, and action anticipation. Furthermore, we are interested in understanding motion qualities that may not be directly visible, such as style, emotional load, and goals or intentions. We apply our research in particular to human-robot interaction (HRI).
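
    The paper referenced below builds on kinematic regularities that distinguish biological from non-biological motion; a classical such regularity is the two-thirds power law linking speed and curvature. A minimal illustration (NumPy only; the function name and the synthetic trajectory are ours, not the paper's implementation):

    ```python
    import numpy as np

    def speed_curvature_exponent(x, y, dt=1.0):
        """Fit log(speed) = a * log(curvature) + b over a 2D trajectory.

        For biological motion the exponent `a` is expected to be close to
        -1/3 (the "two-thirds power law" in its speed-curvature form).
        """
        vx, vy = np.gradient(x, dt), np.gradient(y, dt)    # velocity
        ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)  # acceleration
        speed = np.hypot(vx, vy)
        curvature = np.abs(vx * ay - vy * ax) / np.maximum(speed**3, 1e-9)
        keep = (speed > 1e-6) & (curvature > 1e-6)         # avoid log(0)
        a, _ = np.polyfit(np.log(curvature[keep]), np.log(speed[keep]), 1)
        return a

    # an ellipse traced at constant angular speed satisfies the law exactly
    t = np.linspace(0, 2 * np.pi, 500)
    print(speed_curvature_exponent(2.0 * np.cos(t), np.sin(t)))  # ~ -1/3
    ```

    Erratic or mechanically generated trajectories deviate from the -1/3 exponent, which is what makes this kind of cue usable as a biological-motion detector.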

    Collaboration with: Alessandra Sciutti, Francesco Rea and Giulio Sandini (IIT), Paul Hemeren (University of Skövde)

    References: A Vignolo, N Noceti, F Rea, A Sciutti, F Odone, G Sandini "Detecting biological motion for human–robot interaction: A link between perception and action" Frontiers in Robotics and AI 4, 14, 2017

    Marker-less motion analysis

    We tackle marker-less motion analysis to describe how motion evolves over time and to provide a quantitative analysis of human behavior, in a supervised or unsupervised way. Our goal is to understand the quality of motion and derive information on functional impairments, possibly also assessing the benefits of a rehabilitation procedure. The long-term objective of our research is ecological, non-invasive, unbiased motion analysis methods to be adopted in clinical practice. We consider both full-body movements and gestures, and data provided by cameras, RGB-D sensors, and graphical tablets. We address different applications: motion analysis in Multiple Sclerosis patients (with FISM) and stroke survivors, General Movements analysis in premature infants (with Gaslini Hospital, Genova), and analysis of motor learning in instrument players (with Conservatorio Niccolò Paganini, Marquette University, Music Institute of Chicago).
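
    To give a concrete flavor of such quantitative descriptors, movement smoothness is a widely used motion-quality index; one common variant is the dimensionless squared jerk of a tracked joint trajectory. A minimal sketch (NumPy; the synthetic trajectories and the exact normalization are illustrative assumptions, not the pipeline of the clinical studies above):

    ```python
    import numpy as np

    def dimensionless_jerk(position, dt):
        """Smoothness index of a 1D point trajectory: lower = smoother.

        One common normalization: the integral of squared jerk, scaled by
        duration^5 / amplitude^2 to remove physical units.
        """
        velocity = np.gradient(position, dt)
        jerk = np.gradient(np.gradient(velocity, dt), dt)
        duration = dt * (len(position) - 1)
        amplitude = np.ptp(position)               # movement extent
        return np.sum(jerk**2) * dt * duration**5 / amplitude**2

    # smooth (minimum-jerk) reach vs. the same reach with superimposed tremor
    t = np.linspace(0.0, 1.0, 200)
    smooth = 10 * t**3 - 15 * t**4 + 6 * t**5
    shaky = smooth + 0.02 * np.sin(40 * np.pi * t)
    dt = t[1] - t[0]
    print(dimensionless_jerk(smooth, dt), dimensionless_jerk(shaky, dt))
    ```

    The tremulous trajectory scores orders of magnitude higher than the smooth one, which is the kind of change one can track along a rehabilitation program.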

    Collaboration with: Maura Casadio (UniGe), Alessandra Sciutti (IIT), Andrea Tacchino (FISM)

    Gaze estimation

    We study methods for apparent gaze (or heading) estimation in video sequences containing multiple individuals. Our goal is to exploit multiple cues gathered from the scene under analysis, starting from the outputs of 2D pose estimation methods. We address the challenges of occlusions and partial information with a methodology able to provide a gaze direction estimate associated with an uncertainty prediction.
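
    As a toy illustration of the idea, apparent gaze can be coarsely approximated from head keypoints returned by a 2D pose estimator, with keypoint confidences driving the uncertainty. The geometry and the confidence-to-uncertainty mapping below are our own simplifications, not the method of the WACV paper cited below:

    ```python
    import numpy as np

    def head_gaze_2d(nose, l_ear, r_ear, confidences):
        """Coarse apparent gaze direction in the image plane.

        nose, l_ear, r_ear: (x, y) keypoints from a 2D pose estimator.
        confidences: per-keypoint scores in [0, 1].
        Returns a unit direction and a scalar uncertainty in [0, 1].
        """
        nose, l_ear, r_ear = map(np.asarray, (nose, l_ear, r_ear))
        ear_mid = 0.5 * (l_ear + r_ear)
        direction = nose - ear_mid         # the head points roughly nose-ward
        norm = np.linalg.norm(direction)
        if norm < 1e-6:
            return None, 1.0               # degenerate: no usable estimate
        # low keypoint confidence (e.g., an occluded ear) -> high uncertainty
        uncertainty = 1.0 - float(np.min(confidences))
        return direction / norm, uncertainty

    gaze, unc = head_gaze_2d((120, 80), (100, 85), (140, 85), [0.9, 0.8, 0.3])
    print(gaze, unc)   # direction ~ (0, -1) in image coords, uncertainty 0.7
    ```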

    Collaboration with: Henry Medeiros

    References: P A Dias, D Malafronte, H Medeiros and F Odone “Gaze Estimation for Assisted Living Environments” WACV 2020

    Cross-view action recognition

    Cross-view action recognition is a natural task for humans, while view-point changes are a well-known major challenge for computer vision algorithms, which have to deal with signal variations in geometry and overall appearance. To address this, we explore the appropriateness of deep learning approaches to implicitly learn view-invariant features, as well as other dynamic and appearance information. We also study the general transferability of a learnt model: how well an extensively learnt spatio-temporal representation fares when used on different types of action datasets, varying from full-body actions with large movements and variations to focused upper-body movements with subtle, fine-grained differences between actions.
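
    Transferability reduces to a simple evaluation protocol: train on some views (or datasets), test on a held-out one. A minimal sketch with scikit-learn, where the random features stand in for learnt spatio-temporal representations:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def cross_view_accuracy(features, labels, views, train_views, test_view):
        """Train an action classifier on some views, test on an unseen one.

        High accuracy on `test_view` suggests view-invariant features;
        a large drop signals view-specific overfitting.
        """
        train = np.isin(views, train_views)
        test = views == test_view
        clf = LogisticRegression(max_iter=1000).fit(features[train], labels[train])
        return clf.score(features[test], labels[test])

    # random stand-ins for learnt spatio-temporal descriptors
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(300, 64))
    labels = rng.integers(0, 5, size=300)            # 5 action classes
    views = rng.choice(["frontal", "lateral", "top"], size=300)
    print(cross_view_accuracy(feats, labels, views, ["frontal", "lateral"], "top"))
    ```

    With truly view-invariant features, accuracy on the unseen view stays close to the within-view accuracy; with the random stand-ins above it sits at chance level.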

    The MoCa project

    The goal of the project is to acquire and maintain a multi-modal, multi-view dataset collecting MoCap data and video sequences of upper-body actions in a cooking scenario, seen from multiple views. The acquisition has the specific purpose of investigating view-invariant action properties in both biological and artificial systems. Besides addressing classical action recognition tasks, the dataset enables research on different nuances of action understanding, from the segmentation of action primitives robust across different sensors and viewpoints, to the detection of action categories depending on their dynamic evolution or their goal.

    Collaboration with: Alessandra Sciutti

    References: The cooking dataset (on GitHub)

  • Well-being estimation

    Various recently funded projects have fuelled multi-disciplinary applied research in this direction.

    Physical well-being

    We assess physical well-being by analysing patients over medium to long time periods, evaluating in particular their motility and the quality of their Activities of Daily Living (ADL): how much they move, how active they are. To achieve this goal we employ RGB and RGB-D sensors and address the following main tasks: joint detection and tracking of people, apparent velocity estimation, pose transition estimation (e.g., sit-to-stand), simple action recognition (sitting, standing, walking, bending, lying, ...), and human-object interaction and action recognition for ADL. For a more comprehensive analysis, the observations acquired with environmental visual sensors may be coupled with measures collected by wearable sensors. This allows us to build richer models able to capture interconnections between heterogeneous information, enabling the design of personalized healthcare.
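
    As a flavor of the pose-transition task, a sit-to-stand event can be read off the vertical trajectory of a tracked hip keypoint. A minimal sketch (the thresholds and the synthetic track are illustrative assumptions, not our deployed system):

    ```python
    import numpy as np

    def detect_sit_to_stand(hip_y, fps, rise=0.25, window_s=2.0):
        """Flag frames where the hip keypoint rises by more than `rise`
        (normalized image units, y grows downward) within `window_s` seconds.
        Consecutive flagged frames belong to one event; grouping is
        omitted for brevity.
        """
        w = int(window_s * fps)
        events = []
        for t in range(w, len(hip_y)):
            if hip_y[t - w] - hip_y[t] > rise:   # hip moved up in the image
                events.append(t)
        return events

    # synthetic track: seated (y~0.7), standing up around frame 100, then y~0.4
    hip_y = np.concatenate([np.full(100, 0.7),
                            np.linspace(0.7, 0.4, 30),
                            np.full(100, 0.4)])
    print(detect_sit_to_stand(hip_y, fps=30)[:3])   # first detections
    ```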

    The research is carried out in collaboration with Ospedale Galliera (Genova, It) within the MoDiPro facility (Modello di Dimissione Protetta, Protected Discharge Facility), a sensorized apartment within the hospital, an ideal test bed for research in Ambient Assisted Living.
    Also in collaboration with: Henry Medeiros (Marquette University)

    Funded by "Liguria 4P Health - Predictive, Personalized, Preventive, Participatory Healthcare" (POR-FESR Liguria 2014-2021). In collaboration with MaLGa-MLDS

    References: "Data-driven Continuous Assessment of Frailty in Older People" Frontiers in Digital Humanities, 2018

    Social interaction assessment and emotional well-being

    Emotional well-being is related to the sense of fulfilment; it includes satisfaction, optimism, and having a purpose in life, as well as being able to make the most of one's abilities to cope with the normal challenges of life. An increasing body of research suggests that initiatives promoting physical well-being while disregarding mental and social well-being may lead to failure. In this general framework we address the following main topics: human-human interaction, for social signal assessment and the evaluation of independence; and emotion analysis, including emotion recognition and the comparison of valence-arousal and cognitive-model approaches.
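
    To make the valence-arousal versus categorical comparison concrete, discrete emotion labels can be placed (approximately) on the valence-arousal plane, so that the two output spaces become directly comparable. The coordinates below are rough illustrative placements, not values from the cited work:

    ```python
    import math

    # rough illustrative placements of basic emotions on the
    # valence-arousal plane, both axes in [-1, 1]
    VA_MAP = {
        "happiness": ( 0.8,  0.5),
        "sadness":   (-0.7, -0.4),
        "anger":     (-0.6,  0.7),
        "fear":      (-0.6,  0.8),
        "surprise":  ( 0.2,  0.8),
        "neutral":   ( 0.0,  0.0),
    }

    def va_to_categorical(valence, arousal):
        """Assign a continuous (valence, arousal) estimate to the nearest
        discrete emotion, enabling a common evaluation of both approaches."""
        return min(VA_MAP, key=lambda k: math.dist(VA_MAP[k], (valence, arousal)))

    print(va_to_categorical(-0.5, 0.65))   # -> "anger"
    ```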

    Collaboration with: Raffaella Lanzarotti, Giuliano Grossi (UNIMI), Claudio de'Sperati (San Raffaele), Andrea Gaggioli (Uni Cattolica)

    Funded by CARIPLO "Stairway to elders: bridging space, time and emotions in their social environment for wellbeing"

    References: G Grossi, R Lanzarotti, P Napoletano, N Noceti, F Odone “Positive technology for elderly well-being: a review” Pattern Recognition Letters 2019

  • Object recognition, object detection and tracking

    A core topic of our research activity in the past years, now fuelled primarily by active collaborative projects.

    We address the general task of scene understanding, with special reference to video analysis. Here we focus on object detection and object tracking, with the goal of designing efficient algorithms applicable to different scenarios: video surveillance, robotics (see the collaborative projects section), and automotive. Among the specific challenges we are currently addressing, we mention few-shot learning, where we consider in particular different approaches to transfer learning, domain adaptation, and data augmentation, as well as efficient joint detection and tracking.
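
    A common transfer-learning baseline in the few-shot setting: start from a detector pre-trained on a large dataset, freeze the backbone, and fine-tune only the prediction heads on the few available examples. A minimal torchvision sketch (class count, hyper-parameters, and the data loader are placeholders):

    ```python
    import torch
    import torchvision

    # pre-trained detector, adapted to a handful of novel classes
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

    # replace the box predictor for `num_classes` (background included)
    num_classes = 3  # placeholder: background + 2 novel classes
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = (
        torchvision.models.detection.faster_rcnn.FastRCNNPredictor(in_feats, num_classes)
    )

    # freeze the backbone: with few shots, only the new heads are trained
    for p in model.backbone.parameters():
        p.requires_grad = False

    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=5e-3, momentum=0.9
    )

    model.train()
    # `few_shot_loader` (not defined here) yields (images, targets) in the
    # torchvision detection format: targets with "boxes" and "labels"
    # for images, targets in few_shot_loader:
    #     losses = model(images, targets)        # dict of loss terms
    #     loss = sum(losses.values())
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()
    ```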

    Collaboration with: Imavis srl

  • Collaborative Projects

    Multi-resolution signal processing in computer vision

    Shearlets form a multi-resolution analysis framework with many properties that make it well suited to the analysis of images and videos. Among these, we mention its ability to characterize anisotropic structures and to enhance signal singularities. For these reasons, we have adopted it for the detection and description of keypoints in images and image sequences. Its robustness to noise (including motion blur and compression artifacts) makes it suitable for a variety of applications in the signal processing domain.
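
    As a rough intuition for why anisotropic analysis helps, elongated directional filters respond strongly to edge-like singularities at their orientation; a bank of them at several orientations gives a crude, shearlet-flavoured response map. The sketch below uses OpenCV Gabor kernels as a stand-in; a real shearlet transform (with parabolic scaling and shearing, e.g., via the ShearLab toolbox) is substantially richer:

    ```python
    import numpy as np
    import cv2

    def directional_response(image, n_orientations=8):
        """Crude stand-in for directional multi-scale analysis: filter the
        image with elongated (anisotropic) kernels at several orientations
        and keep the per-pixel maximum response.
        """
        img = image.astype(np.float32)
        responses = []
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            # gamma < 1 elongates the kernel along one direction, which is
            # what makes it respond to anisotropic structures
            k = cv2.getGaborKernel((21, 21), sigma=3.0, theta=theta,
                                   lambd=8.0, gamma=0.3, psi=0)
            responses.append(np.abs(cv2.filter2D(img, cv2.CV_32F, k)))
        return np.max(responses, axis=0)  # strong where singularities are

    # "frame.png" is a placeholder path
    edges = directional_response(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
    ```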

    Collaboration with: MaLGa-CHARML

    Marine robotics

    Unmanned Surface Vehicles (USVs) are autonomous boats with applications ranging from patrolling and monitoring waterways for security purposes to scientific uses such as sampling water for pollution or biological investigations. One of the major research themes to be tackled for the widespread adoption of USVs is the development of reliable obstacle detection and avoidance systems. In this research we investigate the fusion of data from LIDAR sensors and video cameras for the efficient detection of potential obstacles in port areas and the open sea.
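
    The core geometric step in such a fusion pipeline is projecting LIDAR returns into the image, so that 3D points can be associated with detected obstacles and provide their range. A minimal sketch with a pinhole model (the intrinsics and extrinsics below are placeholders, not our calibration):

    ```python
    import numpy as np

    def project_lidar_to_image(points_lidar, K, R, t):
        """Project Nx3 LIDAR points into pixel coordinates.

        K: 3x3 camera intrinsics; R, t: LIDAR-to-camera extrinsics.
        Returns pixel coords and a mask of points in front of the camera.
        """
        pts_cam = points_lidar @ R.T + t       # into the camera frame
        in_front = pts_cam[:, 2] > 0.1         # drop points behind / too close
        uvw = pts_cam @ K.T
        pixels = uvw[:, :2] / uvw[:, 2:3]      # perspective division
        return pixels, in_front

    def points_in_box(pixels, mask, box):
        """Which projected points fall inside a detection box (x1, y1, x2, y2)?
        Their depths give the obstacle's range."""
        x1, y1, x2, y2 = box
        inside = (pixels[:, 0] >= x1) & (pixels[:, 0] <= x2) \
               & (pixels[:, 1] >= y1) & (pixels[:, 1] <= y2)
        return inside & mask

    # placeholder calibration: identity extrinsics, simple intrinsics
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R, t = np.eye(3), np.zeros(3)
    pts = np.array([[0.0, 0.0, 10.0], [5.0, 0.0, 2.0]])  # two LIDAR returns
    pix, ok = project_lidar_to_image(pts, K, R, t)
    print(pix[ok])
    ```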

    References: M Sorial, I Mouawad, E Simetti, F Odone, G Casalino "Towards a Real Time Obstacle Detection System for Unmanned Surface Vehicles" OCEANS 2019

    Collaboration with: UniGe-Graal and ISME

    Robot Vision

    Visual perception is an important cue for understanding and interacting with the surrounding environment, and robotics is one of the main application fields where this ability can be appreciated. Here we mention two of the most relevant collaborations we have in this field.

    • Indoor exploration (with IIT): we specialize object detection and object recognition tasks to the robotics scenario. In this context we also participate in the acquisition of the iCubWorld dataset, a growing dataset built through human-robot interaction.
      G Pasquale, C Ciliberto, F Odone, L Rosasco, L Natale "Are we done with object recognition? The iCub robot's perspective" Robotics and Autonomous Systems 112, 260-281, 2019
    • 6D object pose estimation (with Marquette): we explore the use of deep architectures in robotic scenarios to solve the 6D pose estimation task; we also participate in the design of a robotic platform for the semi-automatic acquisition of annotated data, to ease benchmarking in this specific application domain (a minimal pose-recovery sketch follows below).
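
    A standard building block in such pipelines, once a network localizes known 3D model keypoints in the image, is recovering the 6D pose via PnP. A minimal OpenCV sketch (model keypoints, detections, and intrinsics are synthetic placeholders, consistent with an object sitting 1 m in front of the camera):

    ```python
    import numpy as np
    import cv2

    # 3D keypoints on the object model (object frame, metres) -- placeholders
    object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                           [0, 0, 0.1], [0.1, 0.1, 0], [0.1, 0, 0.1]], np.float32)
    # their 2D locations as a network might predict them -- placeholders
    image_pts = np.array([[320, 240], [400, 240], [320, 320],
                          [320, 240], [400, 320], [392.73, 240]], np.float32)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)

    # PnP recovers rotation (as a Rodrigues vector) and translation
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)       # 3x3 rotation matrix
    print(ok, tvec.ravel())          # object position in the camera frame, ~(0, 0, 1)
    ```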

    Collaboration with: Marquette University and IIT