The paper “Pointly-Supervised Action Localization” by Pascal Mettes and Cees Snoek has been published in the International Journal of Computer Vision. The paper strives for spatio-temporal localization of human actions in videos. In the literature, the consensus is to achieve localization by training on bounding box annotations provided for each frame of each training video. As annotating boxes in video is expensive, cumbersome and error-prone, we propose to bypass box-supervision. Instead, we introduce action localization based on point-supervision. We start from unsupervised spatio-temporal proposals, which provide a set of candidate regions in videos. While such proposals are normally used exclusively for inference, we show they can also be leveraged during training when guided by a sparse set of point annotations. We introduce an overlap measure between points and spatio-temporal proposals and incorporate them into the objective of a new Multiple Instance Learning optimization. During inference, we introduce pseudo-points: visual cues from videos that automatically guide the selection of spatio-temporal proposals. We outline five spatial pseudo-points and one temporal pseudo-point, as well as a measure to best leverage pseudo-points at test time. Experimental evaluation on three action localization datasets shows our pointly-supervised approach (i) is as effective as traditional box-supervision at a fraction of the annotation cost, (ii) is robust to sparse and noisy point annotations, (iii) benefits from pseudo-points during inference, and (iv) outperforms recent weakly-supervised alternatives. This leads us to conclude that points provide a viable alternative to boxes for action localization.
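The core idea of matching sparse point annotations to spatio-temporal proposals can be illustrated with a minimal sketch. The scoring below (the fraction of annotated frames whose point falls inside the proposal's box on that frame) is an illustrative simplification, not the paper's exact overlap measure; the function name and data layout are hypothetical.

```python
def point_overlap(proposal, points):
    """Fraction of annotated frames whose point lies inside the proposal's box.

    proposal: dict mapping frame index -> (x0, y0, x1, y1) box of the proposal
    points:   dict mapping frame index -> (x, y) point annotation,
              given only for a sparse subset of frames
    """
    hits, total = 0, 0
    for frame, (px, py) in points.items():
        if frame not in proposal:
            continue  # proposal tube does not cover this annotated frame
        x0, y0, x1, y1 = proposal[frame]
        total += 1
        if x0 <= px <= x1 and y0 <= py <= y1:
            hits += 1
    return hits / total if total else 0.0
```

For example, a proposal covering two annotated frames but containing the point in only one of them would score 0.5; such scores can then steer which proposal is selected as a positive training example.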
Time-aware encoding of frame sequences in a video is a fundamental problem in video understanding. While many works have attempted to model time in videos, an explicit study quantifying video time is missing. To fill this lacuna, the BMVC 2018 paper by Amir Ghodrati, Efstratios Gavves and Cees Snoek aims to evaluate video time explicitly. We describe three properties of video time, namely a) temporal asymmetry, b) temporal continuity and c) temporal causality. Based on each property we formulate a task able to quantify it. This allows assessing the effectiveness of modern video encoders, like C3D and LSTM, in their ability to model time. Our analysis provides insights about existing encoders while also leading us to propose a new video time encoder, which is better suited for the video time recognition tasks than C3D and LSTM. We believe the proposed meta-analysis can provide a reasonable baseline to assess video time encoders on equal grounds on a set of time-aware tasks.
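One way to make a property such as temporal asymmetry measurable is to cast it as a binary classification task: can an encoder tell a clip played forward from the same clip played in reverse? The sketch below constructs such a task from feature sequences; it is an illustrative instantiation under assumed data shapes, not the paper's exact protocol, and `make_asymmetry_task` is a hypothetical name.

```python
import numpy as np

def make_asymmetry_task(clips):
    """Build a forward-vs-reversed classification task.

    clips: list of clips, each an array of shape [T, D]
           (T frames, D-dimensional per-frame features).
    Returns (xs, ys): each clip appears twice, once in its original
    temporal order (label 1) and once time-reversed (label 0).
    """
    xs, ys = [], []
    for clip in clips:
        xs.append(clip)
        ys.append(1)
        xs.append(clip[::-1])  # reverse along the time axis
        ys.append(0)
    return np.stack(xs), np.array(ys)
```

An encoder's accuracy on this task then serves as a direct, quantitative probe of how much temporal direction it actually captures.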
Science and business in the Netherlands are joining forces in the field of artificial intelligence. On Thursday April 26, the Innovation Center for Artificial Intelligence (ICAI) was officially launched at Amsterdam Science Park. The first lab within ICAI is a partnership with Ahold Delhaize.
ICAI is focused on the joint development of AI technology through industry labs with the business sector, government and knowledge institutes. Maarten de Rijke, director of ICAI and professor of Information Retrieval at the University of Amsterdam: ‘The Netherlands has all the resources to take up a prominent position in the international AI landscape – top talent, innovation strength and research at world-class level. ICAI combines these strengths in a unique national initiative.’
ICAI is an open collaborative initiative between knowledge institutes that is aimed at AI innovation through public-private partnerships. The Center is located at Amsterdam Science Park and was initiated by the University of Amsterdam and the VU University Amsterdam together with the business sector and government. The organisation is built around industry labs – multi-year partnerships between academic and industrial partners aimed at technological and talent development. ICAI will be housed in a new co-development building where teaching, research and collaboration with the business sector and societal partners will come together.
We are witnessing a revolution in machine learning with the reinvigorated usage of neural networks in deep learning, which promises a solution to cognitive tasks that are easy for humans to perform but hard to describe formally. Deep learning allows computers to acquire knowledge directly from data, without the need for humans to specify it, by modeling the inherent problem as a layered composition of simpler concepts, making it possible to express complex problems through elementary operators. By not relying on handcrafted features or hard-coded knowledge, and by showing the ability to regress intricate objective functions, deep learning methods are now employed in a broad spectrum of applications from image classification to speech recognition. Deep learning achieves exceptional power and flexibility by learning to represent the task as a nested hierarchy of layers, with more abstract representations computed in terms of less abstract ones. The current resurgence is a result of breakthroughs in efficient layer-wise training, the availability of big datasets, and faster computers. Thanks to the simplified training of very deep architectures, today we can provide these algorithms with the resources they need to succeed.
A number of challenges are being raised and pursued. For instance, many deep learning algorithms have been designed to tackle supervised learning problems for a wide variety of tasks, and how to reliably solve unsupervised learning problems with a similar degree of success is an important issue to address. Another key research area is to work successfully with smaller datasets, focusing on how we can take advantage of large quantities of unlabeled examples together with a few labeled samples. Deep agents may play a more significant role in hybrid decision systems where other machine learning techniques are used to address the reasoning, bridging the gap between data and application decisions. We expect deep learning to be applied to increasingly multi-modal problems with more structure in the data, opening application domains in robotics and data mining.
This special issue in the high-impact IEEE Signal Processing Magazine seeks to provide a venue accessible to a wide and diverse audience to survey the recent R&D advances in learning, including deep learning and beyond. Interested authors are asked to prepare a white paper first, based on the instructions and schedule outlined below.
Topics of Interest include (but are not limited to):
- Advanced deep learning techniques for supervised learning
- Deep learning for unsupervised & semi-supervised learning
- Online, reinforcement, incremental learning by deep models
- Domain adaptation and transfer learning with deep networks
- Deep learning for spatiotemporal data and dynamic systems
- Visualization of deep features
- New zero- and one-shot learning techniques
- Advanced hashing and retrieval methods
- Software and specialized hardware for deep learning
- Novel applications and experimental activities
White papers are required, and full articles are invited based on the review of the white papers. The white paper format is up to 4 pages in length, including the proposed article title, the motivation and significance of the topic, an outline of the proposed paper, and representative references; an author list, contact information and short bios should also be included. Articles submitted to this issue must be of a tutorial and overview/survey nature, written in a style accessible to a broad audience, and have significant relevance to the scope of the special issue. Submissions should not have been published or be under review elsewhere, and should be made online at http://mc.manuscriptcentral.com/sps-ieee. For submission guidelines, visit http://signalprocessingsociety.org/publicationsresources/
- Prof. Fatih Porikli, Australian National University, firstname.lastname@example.org
- Dr. Shiguang Shan, Chinese Academy of Sciences, email@example.com
- Prof. Cees Snoek, University of Amsterdam, firstname.lastname@example.org
- Dr. Rahul Sukthankar, Google, email@example.com
- Prof. Xiaogang Wang, Chinese University of Hong Kong, firstname.lastname@example.org
The ECCV 2016 paper Spot On: Action Localization from Pointly-Supervised Proposals by Pascal Mettes, Jan van Gemert and Cees Snoek is now available. We strive for spatio-temporal localization of actions in videos. The state-of-the-art relies on action proposals at test time and selects the best one with a classifier, demanding carefully annotated boxes at train time. Annotating action boxes in video is cumbersome, tedious, and error-prone. Rather than annotating boxes, we propose to annotate actions in video with points on a sparse subset of frames only. We introduce an overlap measure between action proposals and points and incorporate them into the objective of a non-convex Multiple Instance Learning optimization. Experimental evaluation on the UCF Sports and UCF 101 datasets shows that (i) spatio-temporal proposals can be used to train classifiers while retaining the localization performance, (ii) point annotations yield results comparable to box annotations while being significantly faster to annotate, (iii) with a minimum amount of supervision our approach is competitive with the state-of-the-art. Finally, we introduce spatio-temporal action annotations on the train and test videos of Hollywood2, resulting in Hollywood2Tubes, available at tinyurl.com/hollywood2tubes.
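The flavor of such a non-convex Multiple Instance Learning optimization can be sketched as an alternating procedure: select the best-scoring proposal per video (biased toward proposals that overlap the point annotations), then update a linear classifier on the selected instances. The code below is a toy illustration under assumed inputs, not the paper's actual objective or solver; `mil_train` and its parameters are hypothetical.

```python
import numpy as np

def mil_train(bags, labels, point_scores, iters=10, lr=0.1, lam=0.5):
    """Toy alternating optimization for point-guided MIL.

    bags:         list of arrays [n_i, D], proposal features per video
    labels:       list of +1 / -1 video labels
    point_scores: list of arrays [n_i], overlap of each proposal with
                  the video's point annotations (zeros for negatives)
    Alternates between (1) selecting, per video, the proposal that
    maximizes classifier score plus lam * point overlap, and
    (2) a gradient step on a logistic loss over the selections.
    """
    w = np.zeros(bags[0].shape[1])
    for _ in range(iters):
        X, y = [], []
        for feats, lab, ps in zip(bags, labels, point_scores):
            s = feats @ w + (lam * ps if lab > 0 else 0.0)
            X.append(feats[int(np.argmax(s))])  # instance selection step
            y.append(lab)
        X, y = np.stack(X), np.array(y)
        margin = y * (X @ w)
        grad = -(y / (1.0 + np.exp(margin))) @ X / len(y)  # logistic gradient
        w -= lr * grad
    return w
```

Because the instance selection depends on the current classifier (and vice versa), the joint problem is non-convex; the point overlap acts as a prior that keeps the selection anchored to the annotated action.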
The ECCV 2016 paper Online Action Detection by Roeland De Geest, Efstratios Gavves, Amir Ghodrati, Zhenyang Li, Cees G. M. Snoek and Tinne Tuytelaars is now available. In online action detection, the goal is to detect the start of an action in a video stream as soon as it happens. For instance, if a child is chasing a ball, an autonomous car should recognize what is going on and respond immediately. This is a very challenging problem for four reasons. First, only partial actions are observed. Second, there is a large variability in negative data. Third, the start of the action is unknown, so it is unclear over what time window the information should be integrated. Finally, in real world data, large within-class variability exists. This problem has been addressed before, but only to some extent. Our contributions to online action detection are threefold. First, we introduce a realistic dataset composed of 27 episodes from 6 popular TV series. The dataset spans over 16 hours of footage annotated with 30 action classes, totaling 6,231 action instances. Second, we analyze and compare various baseline methods, showing this is a challenging problem for which none of the methods provides a good solution. Third, we analyze the change in performance when there is a variation in viewpoint, occlusion, truncation, etc. We introduce an evaluation protocol for fair comparison. The dataset, the baselines and the models will all be made publicly available to encourage (much needed) further research on online action detection on realistic data.
The BMVC2016 paper Video Stream Retrieval of Unseen Queries using Semantic Memory by Spencer Cappallo, Thomas Mensink and Cees Snoek is now available. Retrieval of live, user-broadcast video streams is an under-addressed and increasingly relevant challenge. The online nature of the problem requires temporal evaluation, and the unforeseeable scope of potential queries motivates an approach which can accommodate arbitrary search queries. To account for the breadth of possible queries, we adopt a no-example approach to query retrieval, which uses a query’s semantic relatedness to pre-trained concept classifiers. To adapt to shifting video content, we propose memory pooling and memory welling methods that favor recent information over long past content. We identify two stream retrieval tasks, instantaneous retrieval at any particular time and continuous retrieval over a prolonged duration, and propose means for evaluating them. Three large-scale video datasets are adapted to the challenge of stream retrieval. We report results for our search methods on the new stream retrieval tasks, as well as demonstrate their efficacy in a traditional, non-streaming video task.
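The notion of pooling that favors recent information over long past content can be sketched with a simple recency-weighted running average. The exponential moving average below is an illustrative stand-in; the paper's memory pooling and memory welling variants differ in their details, and the class name is hypothetical.

```python
import numpy as np

class MemoryPool:
    """Recency-weighted running pool over per-frame concept scores.

    Maintains an exponential moving average with decay `alpha`:
    higher alpha retains more of the past, lower alpha favors
    the most recent frames.
    """
    def __init__(self, alpha=0.9):
        self.alpha = alpha
        self.state = None

    def update(self, frame_scores):
        """Fold one frame's concept-score vector into the memory."""
        if self.state is None:
            self.state = np.asarray(frame_scores, dtype=float)
        else:
            self.state = self.alpha * self.state + (1.0 - self.alpha) * frame_scores
        return self.state
```

At any instant, the pooled state can be matched against a query's semantic representation, so retrieval naturally tracks what the stream is showing now rather than what it showed an hour ago.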
The best paper of ICMR2016, entitled “Pooling Objects for Recognizing Scenes without Examples” by Svetlana Kordumova, Thomas Mensink and Cees Snoek, is now available. In this paper we aim to recognize scenes in images without using any scene images as training data. Different from attribute-based approaches, we do not carefully select the training classes to match the unseen scene classes. Instead, we propose pooling over ten thousand off-the-shelf object classifiers. To steer the knowledge transfer between objects and scenes we learn a semantic embedding with the aid of a large social multimedia corpus. Our key contributions are: we are the first to investigate pooling over ten thousand object classifiers to recognize scenes without examples; we explore the ontological hierarchy of objects and analyze the influence of object classifiers from different hierarchy levels; we exploit object positions in scene images; and we demonstrate a new scene retrieval scenario with complex queries. Finally, we outperform attribute representations on two challenging scene datasets, SUNAttributes and Places2.
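The mechanism of scoring an unseen scene class by pooling semantically related object classifiers can be sketched as follows. This is a hypothetical simplification of the paper's pooling: it weights the top related objects' classifier outputs by cosine similarity in an assumed semantic embedding space; the function name and signature are illustrative.

```python
import numpy as np

def scene_score(object_scores, object_vecs, scene_vec, top_k=100):
    """Score a scene class for an image without scene training examples.

    object_scores: [N] classifier outputs of N object classifiers on the image
    object_vecs:   [N, D] semantic embeddings of the object names
    scene_vec:     [D] semantic embedding of the scene label
    Pools the classifier scores of the top_k objects most semantically
    related to the scene, weighted by that relatedness.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    relatedness = np.array([cos(v, scene_vec) for v in object_vecs])
    top = np.argsort(relatedness)[::-1][:top_k]  # most related objects first
    return float(np.sum(relatedness[top] * object_scores[top]))
```

Intuitively, an image full of high-scoring "stove", "pan" and "sink" detectors will score well for a "kitchen" query, even though no kitchen image was ever used for training.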