8:30 – 9:00
9:00 – 9:50
Scene Monitoring using Active Cameras
Computer vision and machine learning techniques applied to video surveillance and biometrics have been investigated for several years, with the aim of finding accurate and efficient solutions that enable smart surveillance systems to operate in real environments. When it comes to analyzing a scene to recognize and understand suspicious human activities, video surveillance and biometrics still face several challenges. A particular challenge is person identification, due to poor data acquisition conditions and the large distance between the cameras and the people in the scene. In this talk, I will discuss approaches designed to overcome the difficulties of person identification in surveillance scenarios covered by a camera network, especially when PTZ cameras are available to gather higher-quality information from the monitored scene.
William Robson Schwartz is an Associate Professor in the Department of Computer Science at the Federal University of Minas Gerais, Brazil. He has held a CNPq Productivity Fellowship since 2013 and has been a Minas State Researcher since 2015. He received his BSc and MSc degrees in Computer Science from the Federal University of Parana, Curitiba, Brazil, in 2003 and 2005, respectively, and his PhD degree in Computer Science from the University of Maryland, College Park, USA, in 2010, with a CAPES/Fulbright scholarship. He then spent one year as a postdoctoral researcher at the Institute of Computing at the University of Campinas. His research interests include computer vision and machine learning applied to video surveillance, computer forensics, and biometrics. He heads the Smart Sense Laboratory, which focuses mainly on large-scale surveillance based on visual and sensor data. In addition, he advises several MSc and PhD students and has served as principal investigator on projects sponsored by public agencies such as CAPES, CNPq, and FAPEMIG, and by companies such as Petrobras, Samsung, and Hewlett-Packard.
10:00 – 10:50
S L Happy (INRIA Sophia Antipolis – Méditerranée research center)*; Antitza Dantcheva (INRIA); Abhijit Das (ISI); Francois Bremond (Inria Sophia Antipolis, France); Radia Zeghari (CobTek); Philippe Robert (CobTek)
João Firmino (Instituto Superior Técnico – Universidade de Lisboa); Paulo L Correia (Instituto de Telecomunicações / Instituto Superior Técnico – Universidade de Lisboa)*
Laura Schiphorst (Utrecht University); Metehan Doyran (Utrecht University); Sabine Molenaar (Utrecht University); Albert Ali Salah (Utrecht University); Sjaak Brinkkemper (Utrecht University)
Siqi Liu (Department of Computer Science, Hong Kong Baptist University)*; PongChi Yuen (Department of Computer Science, Hong Kong Baptist University)
11:00 – 11:50
Okan Köpüklü (Technical University of Munich)*; Thomas Ledwon (Ludwig-Maximilians-Universitaet Muenchen); Yao Rong (University of Tübingen); Neslihan Kose Cihangir (Intel Deutschland GmbH); Gerhard Rigoll (Institute for Human-Machine Communication, TU Munich, Germany)
Jiayi Wang (Max Planck Institut Informatik)*; Franziska Mueller (MPI Informatics); Florian Bernard (MPI); Christian Theobalt (MPI Informatik)
Weizhe Lin (University of Cambridge)*; Indigo Orton (University of Cambridge); Mingyu Liu (University of Oxford); Marwa Mahmoud (University of Cambridge)
István Sárándi (RWTH Aachen University)*; Timm Linder (Robert Bosch GmbH); Kai O Arras (Robert Bosch GmbH); Bastian Leibe (RWTH Aachen University)
12:00 – 12:50
Ankit Sharma (University of Central Florida)*; Hassan Foroosh (University of Central Florida)
Jingyun Xiao (University of Chinese Academy of Sciences)*; Shuang Yang (ICT, CAS); Yuan-Hang Zhang (University of Chinese Academy of Sciences); Shiguang Shan (Chinese Academy of Sciences); Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)
Dario Dotti (Maastricht University)*; Esam A. H. Ghaleb (Maastricht University); Stylianos Asteriadis (Maastricht University)
Francisca Pessanha (University of Porto)*; Krista McLennan (University of Chester); Marwa Mahmoud (Cambridge University)
Peter J Thompson (University of Manchester)*; Aphrodite Galata (The University of Manchester)
Eimear M O’ Sullivan (Imperial College London)*; Stefanos Zafeiriou (Imperial College London)
Md Sirajus Salekin (University of South Florida)*; Ghada Zamzmi (USF); Dmitry Goldgof (USF); Rangachar Kasturi (USF); Thao Ho (USF Health); Yu Sun (University of South Florida)
Xing Zhao (Zhejiang University of Technology; Institute of Computing Technology, Chinese Academy of Sciences)*; Shuang Yang (ICT, CAS); Shiguang Shan (Chinese Academy of Sciences); Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)
Gazi Naven (University of Rochester)*; Taylan K Sen (University of Rochester); Luke Gerstner (University of Rochester); Kurtis Glenn Haut (University of Rochester); Melissa Wen (University of Rochester); Ehsan Hoque (University of Rochester)
Torsten Wörtwein (Carnegie Mellon University)*; Louis-Philippe Morency (Carnegie Mellon University)
Gnana Praveen Rajasekar (Ecole Technologie Superieure)*; Eric Granger (ETS Montreal); Patrick Cardinal (Canada)
13:00 – 13:50
Is FG Enabling a Surveillance Dystopia?
Face and gesture recognition technologies hold great promise to improve people’s lives, yet they also raise serious issues with respect to privacy, bias, and misuse by individuals, companies, and governments. Some civil liberties and advocacy groups have increasingly raised warnings and supported legislation banning such technologies. Many in law enforcement push back, arguing that these technologies save lives and help make us safer. Legislative bodies are trying to decide if and how to address these issues, sometimes with limited information. As technologists, what is our role in this public debate? Are we helping to create a surveillance dystopia? What should we do about it? Let’s discuss.
Matthew Turk is president of the Toyota Technological Institute at Chicago (TTIC) and a professor emeritus at UC Santa Barbara. He received a B.S. from Virginia Tech, an M.S. from Carnegie Mellon University, and a Ph.D. from the Massachusetts Institute of Technology. His research focuses on computer vision and multimodal interaction, including early work in autonomous vehicles and face recognition. He has received several best paper awards and has served as general or program chair of several top conferences in computer vision and multimodal interaction. In 2014 he co-founded an augmented reality startup company that was acquired by PTC Vuforia in 2016. He is an IEEE Fellow, an IAPR Fellow, and the recipient of the 2011–2012 Fulbright-Nokia Distinguished Chair in Information and Communications Technologies.
14:00 – 15:20
Ronald J Cotton (Shirley Ryan AbilityLab)*
Diego Guarín (Toronto Rehabilitation Institute)*; Aidan Dempster (Univ. of Toronto); Andrea Bandini (KITE – Toronto Rehab – University Health Network); Yana Yunusova (University of Toronto); Babak Taati (University Health Network)
Ziheng Zhang (University of Cambridge)*; Weizhe Lin (University of Cambridge); Mingyu Liu (University of Oxford); Marwa Mahmoud (University of Cambridge)
Pietro Pala (University of Florence)*; Stefano Berretti (University of Florence, Italy); Luca Cultrera (University of Florence); Ettore Celozzi (University of Florence); Luca Ciabini (University of Florence); Mohamed Daoudi (IMT Lille Douai); Alberto Del Bimbo (University of Florence)