8:30 – 9:00

Chairs’ Welcome

9:00 – 9:50

Oral Session 3

Head2Head: Video-based Neural Head Synthesis

Mohammad Rami Koujan (University of Exeter)*; Michail C Doukas (Imperial College London); Anastasios Roussos (Institute of Computer Science, Foundation for Research and Technology Hellas); Stefanos Zafeiriou (Imperial College London)

A recurrent cycle consistency loss for progressive face-to-face synthesis

Enrique Sanchez (Samsung AI Centre, Cambridge)*; Michel Valstar (University of Nottingham)

Face Denoising and 3D Reconstruction from a Single Depth Image

Yicheng Zhong (Peking University); Yuru Pei (Peking University)*; Peixin Li (Peking University); Yuke Guo (Luoyang Institute of Science and Technology); Gengyu Ma (Usens Inc); Meng Liu (Huawei Technologies); Wei Bai (Huawei Technologies); WenHai Wu (Huawei Technologies); Hongbin Zha (Peking University, China)

Synthesising 3D Facial Motion from “In-the-Wild” Speech

Panagiotis Tzirakis (Imperial College London)*; Athanasios Papaioannou (Imperial College London); Alexandros Lattas (Imperial College London); Michael Tarasiou (Imperial College London); Bjoern W. Schuller (Imperial College London); Stefanos Zafeiriou (Imperial College London)

10:00 – 10:50

Special Session 2: Advances and challenges in face and gesture based security systems (ACFGSS 2020)

Attention Fusion for Audio-Visual Person Verification Using Multi-scale Features

Stefan Hörmann (Technical University of Munich)*; Abdul Moiz (Technical University of Munich); Martin Knoche (Technical University of Munich); Gerhard Rigoll (Institute for Human-Machine Communication, TU Munich, Germany)

Learning privacy-enhancing face representations through feature disentanglement

Blaz Bortolato (University of Ljubljana); Marija Ivanovska (University of Ljubljana, Faculty of Electrical Engineering); Peter Rot (University of Ljubljana, Faculty of Electrical Engineering); Janez Krizaj (University of Ljubljana); Philipp Terhörst (Fraunhofer Institute for Computer Graphics Research IGD); Naser Damer (Fraunhofer IGD); Peter Peer (University of Ljubljana); Vitomir Struc (University of Ljubljana)*

A video is worth more than 1000 lies. Comparing 3DCNN approaches for detecting deepfakes

Yaohui Wang (INRIA); Antitza Dantcheva (INRIA)*

3D Face Mask Anti-spoofing via Deep Fusion of Dynamic Texture and Shape Clues

Song Chen (Beihang University, China)*; Weixin Li (Beihang University); Hongyu Yang (Beihang University); Di Huang (Beihang University, China); Yunhong Wang (State Key Laboratory of Virtual Reality Technology and System, Beihang University, Beijing 100191, China)

11:00 – 11:50

Oral Session 4

Toward fast and accurate human pose estimation via soft-gated skip connections

Adrian Bulat (Samsung AI Center, Cambridge)*; Jean Kossaifi (Imperial College London); Georgios Tzimiropoulos (Samsung AI Centre, Cambridge & University of Nottingham); Maja Pantic (Imperial College London / Samsung)

FT-RCNN: Real-time Visual Face Tracking with Region-based Convolutional Neural Networks

Yiming Lin (Imperial College London); Jie Shen (Imperial College London)*; Shiyang Cheng (Samsung); Maja Pantic (Imperial College London / Samsung)

Deep Entwined Learning Head Pose and Face Alignment Inside an Attentional Cascade with Doubly-Conditional Fusion

Arnaud Dapogny (Pierre and Marie Curie University (UPMC))*; Kevin Bailly (UPMC); Matthieu Cord (Sorbonne University)

Learning Monocular Face Reconstruction using Multi-View Supervision

Zhixin Shu (Stony Brook University)*; Duygu Ceylan (Adobe Research); Kalyan Sunkavalli (Adobe Research); Sunil Hadap (Adobe); Eli Shechtman (Adobe Research, US); Dimitris Samaras (Stony Brook University)

12:00 – 12:50

Poster Session 2

Spatio-Temporal Attention and Magnification for Classification of Parkinson’s Disease from Videos Collected via the Internet

Mohammad Rafayet Ali (University of Rochester)*; Daniel McDuff (Microsoft Research); Javier Hernandez (Microsoft Research); Ray Dorsey (University of Rochester Medical Center); Ehsan Hoque (University of Rochester)

PAS-Net: Pose-based and Appearance-based Spatiotemporal Networks Fusion for Action Recognition

Changzhen Li (ICT-VIPL)*; Jie Zhang (ICT, CAS); Shiguang Shan (Chinese Academy of Sciences); Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)

Neural Sign Language Translation by Learning Tokenization

Alptekin Orbay (Bogazici University)*; Lale Akarun (Bogazici University)

Set Operation Aided Network for Action Units Detection

Huiyuan Yang (Binghamton University-SUNY)*; Lijun Yin (State University of New York at Binghamton); Taoyue Wang (State University of New York at Binghamton)

Recognizing Perceived Emotions from Facial Expressions

Saurabh Hinduja (University of South Florida)*; Shaun Canavan (University of South Florida); Lijun Yin (State University of New York at Binghamton)

Dual-Attention GAN for Large-Pose Face Frontalization

Yu Yin (Northeastern University)*; Songyao Jiang (Northeastern University); Joseph P Robinson (Northeastern University); Yun Fu (Northeastern University)

Taking Control of Intra-class Variation in Conditional GANs Under Weak Supervision

Richard T Marriott (Ecole Centrale de Lyon)*; Sami Romdhani (IDEMIA); Liming Chen (Université de Lyon)

Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading

Mingshuang Luo (Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences)*; Shuang Yang (ICT, CAS); Shiguang Shan (Chinese Academy of Sciences); Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)

Semi-supervised Emotion Recognition Using Inconsistently Annotated Data

S L Happy (INRIA Sophia Antipolis – Méditerranée research center)*; Antitza Dantcheva (INRIA); Francois Bremond (Inria Sophia Antipolis, France)

Self-supervised Deformation Modeling for Facial Expression Editing

ShahRukh Athar (Stony Brook University)*; Zhixin Shu (Stony Brook University); Dimitris Samaras (Stony Brook University)

Landmarks-assisted Collaborative Deep Framework for Automatic 4D Facial Expression Recognition

Muzammil Behzad (University of Oulu)*; Nhat Vo (University of Oulu); Xiaobai Li (University of Oulu); Guoying Zhao (University of Oulu)

CLIFER: Continual Learning with Imagination for Facial Expression Recognition

Nikhil Churamani (University of Cambridge)*; Hatice Gunes (University of Cambridge)

13:00 – 13:50

Prof. Rama Chellappa

Shallow and Deep Representations for Video-based Face Recognition

Abstract

Although deep learning approaches have achieved performance surpassing humans for still image-based face recognition, unconstrained video-based face recognition remains a challenging task due to the large volume of data to be processed and intra/inter-video variations in pose, illumination, occlusion, scene, blur, video quality, etc. In this talk, I will briefly review traditional tracking and recognition approaches for video-based face recognition using Gabor jets, appearance, albedo, and dictionaries. While elegant, these methods did not scale to large galleries and did not perform well on multiple-shot videos or on surveillance videos with low-quality frames. Since 2014, we have been developing deep representations for still and video-based face recognition. We will present robust and efficient systems for unconstrained video-based face recognition, composed of modules for face and fiducial detection, face association, and face recognition. First, we discuss multi-scale single-shot face detectors that efficiently localize faces in videos. The detected faces are then grouped through carefully designed face association methods, especially for multi-shot videos. Finally, the faces are recognized using deep features as well as an unsupervised subspace learning approach. Results on video datasets, such as the Multiple Biometric Grand Challenge (MBGC), the Face and Ocular Challenge Series (FOCS), and the IARPA Janus Surveillance Video Benchmarks IJB-S and IJB-B, are presented.

Bio

Prof. Rama Chellappa is a Bloomberg Distinguished Professor in the Departments of Electrical and Computer Engineering and Biomedical Engineering, with a secondary appointment in the Department of Computer Science, at Johns Hopkins University (JHU). At JHU, he is also affiliated with CIS, CLSP, the Malone Center, and MINDS. Before coming to JHU in August 2020, he was a Distinguished University Professor, a Minta Martin Professor of Engineering, and a Professor in the ECE department and the University of Maryland Institute for Advanced Computer Studies at the University of Maryland (UMD). He holds a non-tenured position as a College Park Professor in the ECE department at UMD. His current research interests are computer vision, pattern recognition, machine intelligence, and artificial intelligence. He received the K. S. Fu Prize from the International Association for Pattern Recognition (IAPR). He is a recipient of the Society, Technical Achievement, and Meritorious Service Awards from the IEEE Signal Processing Society and of four IBM Faculty Development Awards. He also received the Technical Achievement and Meritorious Service Awards from the IEEE Computer Society. He received the Inaugural Leadership Award from the IEEE Biometrics Council and, most recently, the 2020 IEEE Jack S. Kilby Signal Processing Medal. At UMD, he received college- and university-level recognitions for research, teaching, innovation, and mentoring of undergraduate students. He has been recognized as an Outstanding Electrical and Computer Engineer by Purdue University and as a Distinguished Alumnus by the Indian Institute of Science, India. He served as the Editor-in-Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). He is a Golden Core Member of the IEEE Computer Society and has served as a Distinguished Lecturer of the IEEE Signal Processing Society and as President of the IEEE Biometrics Council. He is a Fellow of AAAI, AAAS, ACM, IAPR, IEEE, and OSA and holds six patents.

14:00 – 15:20

Oral Session 5

ATFaceGAN: Single Face Image Restoration and Recognition from Atmospheric Turbulence

Chun Pong Lau (University of Maryland, College Park)*; Hossein Souri (University of Maryland); Rama Chellappa (University of Maryland)

How are attributes expressed in face DCNNs?

Prithviraj Dhar (University of Maryland, College Park)*; Ankan Bansal (University of Maryland); Carlos Castillo (University of Maryland); Joshua Gleason (University of Maryland); Jonathon Phillips (NIST); Rama Chellappa (University of Maryland)

Robustness Analysis of Face Obscuration

Hanxiang Hao (Purdue University)*; David Güera (Purdue University); János Horváth (Purdue University); Amy R. Reibman (Purdue University); Edward Delp (Purdue University)

Face Attributes as Cues for Deep Face Recognition Understanding

Matheus A Diniz (Federal University of Minas Gerais)*; William R Schwartz (Federal University of Minas Gerais)

Post-Conference Activity