Georgios Kapidis Multi-Task Learning paper at ICCV-EPIC workshop

Georgios Kapidis will present his recent work on multi-task learning for the recognition of actions in egocentric videos at the EPIC workshop at ICCV. By predicting multiple outputs such as gaze targets and hand locations, the network learns shared representations that benefit related tasks such as action recognition. We demonstrate state-of-the-art results on the EGTEA Gaze+ dataset and show clear improvements over action recognition alone on EPIC Kitchens.


Alex Stergiou has three papers accepted!

Alex Stergiou had a good run: three papers were recently accepted, at ICIP (paper), ICMLA (paper) and the ICCV workshop on Interpreting and Explaining Visual Artificial Intelligence Models (paper). The papers deal with visualizing what 3D CNNs learn for video recognition. Congrats on the acceptances!

Moreover, the survey on vision-based analysis of human-human interactions has appeared in Computer Vision and Image Understanding (CVIU). It can be downloaded for free.

PLOS ONE paper on using mocap for detecting deception

Our paper “To freeze or not to freeze: A culture-sensitive motion capture approach to detecting deceit” (Sophie van der Zee, Ronald Poppe, Paul J. Taylor and Ross J. Anderson) was accepted at PLOS ONE. It describes our ground-breaking research on using motion capture technology to accurately, objectively and automatically detect whether people are lying. Our approach distinguishes between lies and truthful statements in approximately 82% of cases.

Tracking as exertion measure in IJHCS

Our paper on tracking players to estimate their exertion has been published in IJHCS. This is work by Alejandro Moreno, together with Dirk Heylen and Jenny L. Gibson. We demonstrate that unobtrusive tracking, in the context of interactive play, can be readily used as a group-level measure of exertion. This shows promise for real-time adaptation of game mechanics to keep exertion within a desirable range.