The ICCV paper “Counting with Focus for Free” by Zenglin Shi, Pascal Mettes and Cees Snoek is now available. This paper aims to count arbitrary objects in images. The leading counting approaches start from point annotations per object from which they construct density maps. Then, their training objective transforms input images to density maps through deep convolutional networks. We posit that the point annotations serve more supervision purposes than just constructing density maps. We introduce ways to repurpose the points for free. First, we propose supervised focus from segmentation, where points are converted into binary maps. The binary maps are combined with a network branch and accompanying loss function to focus on areas of interest. Second, we propose supervised focus from global density, where the ratio of point annotations to image pixels is used in another branch to regularize the overall density estimation. To assist both the density estimation and the focus from segmentation, we also introduce an improved kernel size estimator for the point annotations. Experiments on six datasets show that all our contributions reduce the counting error, regardless of the base network, resulting in state-of-the-art accuracy using only a single network. Finally, we are the first to count on WIDER FACE, allowing us to show the benefits of our approach in handling varying object scales and crowding levels. Code is available at https://github.com/shizenglin/Counting-with-Focus-for-Free.
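The density-map construction the summary starts from can be illustrated with a minimal NumPy sketch. This is not the paper's implementation — the paper estimates the kernel size per point annotation, whereas here the Gaussian width is a fixed assumption, and all function names are our own:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 2D Gaussian kernel of side 2*radius + 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def density_map(points, shape, sigma=4.0):
    """Turn (row, col) point annotations into a density map whose
    integral equals the object count (for points away from borders)."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    d = np.zeros(shape)
    for r, c in points:
        r, c = int(r), int(c)
        # paste the kernel around the point, clipping at image borders
        r0, r1 = max(r - radius, 0), min(r + radius + 1, shape[0])
        c0, c1 = max(c - radius, 0), min(c + radius + 1, shape[1])
        d[r0:r1, c0:c1] += k[r0 - r + radius : r1 - r + radius,
                             c0 - c + radius : c1 - c + radius]
    return d
```

A binary focus map in the spirit of the paper's segmentation supervision could then be obtained by thresholding such a map, though the paper derives it from the points directly.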


The paper “Repetition Estimation” by Tom Runia, Cees Snoek and Arnold Smeulders has been accepted by the International Journal of Computer Vision. The paper studies visual repetition. Visual repetition is ubiquitous in our world. It appears in human activity (sports, cooking), animal behavior (a bee’s waggle dance), natural phenomena (leaves in the wind) and in urban environments (flashing lights). Estimating visual repetition from realistic video is challenging as periodic motion is rarely perfectly static and stationary. To better deal with realistic video, we relax the static and stationary assumptions often made by existing work. Our spatiotemporal filtering approach, grounded in the theory of periodic motion, effectively handles a wide variety of appearances and requires no learning. Starting from motion in 3D, we derive three periodic motion types by decomposing the motion field into its fundamental components. In addition, three temporal motion continuities emerge from the field’s temporal dynamics. For the 2D perception of 3D motion we consider the viewpoint relative to the motion; what follows are 18 cases of recurrent motion perception. To estimate repetition under all circumstances, our theory implies constructing a mixture of differential motion maps. We temporally convolve the motion maps with wavelet filters to estimate repetitive dynamics. Our method can spatially segment repetitive motion directly from the temporal filter responses densely computed over the motion maps. For experimental verification of our claims, we use our novel dataset for repetition estimation, which better reflects reality with non-static and non-stationary repetitive motion. On the task of repetition counting, we obtain favorable results compared to a deep learning alternative.
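The core idea of temporally filtering a motion signal to count its repetitions can be illustrated with a much-simplified Fourier sketch. The paper itself convolves differential motion maps with wavelet filters to handle non-stationary motion; this stand-in only handles a single, roughly stationary 1D trace, and all names are our own:

```python
import numpy as np

def count_repetitions(signal, fps):
    """Estimate the number of cycles in a roughly periodic 1D motion trace
    by locating the dominant frequency of its spectrum."""
    x = signal - signal.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    f0 = freqs[1:][np.argmax(spectrum[1:])]    # dominant frequency, skipping bin 0
    duration = len(x) / fps
    return f0 * duration                       # cycles contained in the clip

# synthetic motion trace: 5 cycles over 100 frames at 25 fps
t = np.arange(100) / 25.0
trace = np.sin(2 * np.pi * 1.25 * t)
```

Wavelets replace the global Fourier transform precisely because real repetitions drift in frequency over time, which a single spectrum cannot capture.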

The CVPR 2019 paper Spherical Regression: Learning Viewpoints, Surface Normals and 3D Rotations on n-Spheres by Shuai Liao, Stratis Gavves and Cees Snoek is now available. Many computer vision challenges require continuous outputs, yet they tend to be solved by discrete classification. The reason is classification’s natural containment within a probability n-simplex, as defined by the popular softmax activation function. Regular regression lacks such a closed geometry, leading to unstable training and convergence to suboptimal local minima. Starting from this insight we revisit regression in convolutional neural networks. We observe that many continuous output problems in computer vision are naturally contained in closed geometrical manifolds, like the Euler angles in viewpoint estimation or the normals in surface normal estimation. A natural framework for posing such continuous output problems is the n-sphere, a closed geometric manifold defined in the R^{(n+1)} space. By introducing a spherical exponential mapping on n-spheres at the regression output, we obtain well-behaved gradients, leading to stable training. We show how our spherical regression can be utilized for several computer vision challenges, specifically viewpoint estimation, surface normal estimation and 3D rotation estimation. For all these problems our experiments demonstrate the benefit of spherical regression. All paper resources are available at https://github.com/leoshine/Spherical_Regression.
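The spherical exponential mapping can be sketched as exponentiating the raw activations and L2-normalizing the result onto the unit n-sphere. The NumPy version below is our own simplification — it yields only the magnitudes (all positive); the paper handles output signs with a separate classification branch:

```python
import numpy as np

def spherical_exp(x):
    """Map raw activations onto the positive orthant of the unit n-sphere:
    element-wise exponential followed by L2 normalization."""
    e = np.exp(x - x.max())        # subtract max for numerical stability
    return e / np.linalg.norm(e)
```

Because the output is constrained to the sphere by construction, the gradient of the normalization stays bounded, which is the stability argument the abstract refers to.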

The CVPR 2019 paper Dance with Flow: Two-in-One Stream Action Detection by Jiaojiao Zhao and Cees Snoek is now available. The goal of this paper is to detect the spatio-temporal extent of an action. The two-stream detection network based on RGB and flow provides state-of-the-art accuracy at the expense of a large model-size and heavy computation. We propose to embed RGB and optical-flow into a single two-in-one stream network with new layers. A motion condition layer extracts motion information from flow images, which is leveraged by the motion modulation layer to generate transformation parameters for modulating the low-level RGB features. The method is easily embedded in existing appearance- or two-stream action detection networks, and trained end-to-end. Experiments demonstrate that leveraging the motion condition to modulate RGB features improves detection accuracy. With only half the computation and parameters of the state-of-the-art two-stream methods, our two-in-one stream still achieves impressive results on UCF101-24, UCFSports and J-HMDB.
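The motion condition and modulation layers resemble feature-wise modulation: features extracted from flow predict a per-channel scale and shift that transform the low-level RGB features. A toy NumPy sketch under that assumption — the actual layers are convolutional and trained end-to-end, and all weights and names here are hypothetical:

```python
import numpy as np

def modulate(rgb_feat, flow_feat, W_gamma, b_gamma, W_beta, b_beta):
    """Flow features (B, D) predict per-channel scale/shift for
    RGB feature maps (B, C, H, W), FiLM-style."""
    gamma = flow_feat @ W_gamma + b_gamma      # (B, C) scale residual
    beta = flow_feat @ W_beta + b_beta         # (B, C) shift
    return (1.0 + gamma)[:, :, None, None] * rgb_feat + beta[:, :, None, None]
```

With zero-initialized modulation weights the layer starts as the identity on the RGB stream, a common choice so that conditioning is learned gradually.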

The paper “Pointly-Supervised Action Localization” by Pascal Mettes and Cees Snoek has been published in the International Journal of Computer Vision. The paper strives for spatio-temporal localization of human actions in videos. In the literature, the consensus is to achieve localization by training on bounding box annotations provided for each frame of each training video. As annotating boxes in video is expensive, cumbersome and error-prone, we propose to bypass box-supervision. Instead, we introduce action localization based on point-supervision. We start from unsupervised spatio-temporal proposals, which provide a set of candidate regions in videos. While normally used exclusively for inference, we show spatio-temporal proposals can also be leveraged during training when guided by a sparse set of point annotations. We introduce an overlap measure between points and spatio-temporal proposals and incorporate it into a new Multiple Instance Learning objective. During inference, we introduce pseudo-points, visual cues from the videos themselves that automatically guide the selection of spatio-temporal proposals. We outline five spatial and one temporal pseudo-point, as well as a measure to best leverage pseudo-points at test time. Experimental evaluation on three action localization datasets shows our pointly-supervised approach (i) is as effective as traditional box-supervision at a fraction of the annotation cost, (ii) is robust to sparse and noisy point annotations, (iii) benefits from pseudo-points during inference, and (iv) outperforms recent weakly-supervised alternatives. This leads us to conclude that points provide a viable alternative to boxes for action localization.
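A much-simplified version of an overlap measure between points and a spatio-temporal proposal is the fraction of annotated points that fall inside the proposal's box in the corresponding frame. This is our own sketch, not the paper's exact measure:

```python
def point_overlap(points, boxes):
    """Fraction of annotated points covered by a proposal.
    points: dict frame -> (x, y) annotation
    boxes:  dict frame -> (x1, y1, x2, y2) proposal box in that frame
    """
    hits = 0
    for frame, (x, y) in points.items():
        if frame in boxes:
            x1, y1, x2, y2 = boxes[frame]
            if x1 <= x <= x2 and y1 <= y <= y2:
                hits += 1
    return hits / len(points)
```

Such a score can then rank proposals inside a Multiple Instance Learning objective, rewarding proposals that cover many of the sparse point annotations.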

Time-aware encoding of frame sequences in a video is a fundamental problem in video understanding. While many have attempted to model time in videos, an explicit study on quantifying video time is missing. To fill this lacuna, the BMVC 2018 paper by Amir Ghodrati, Efstratios Gavves and Cees Snoek aims to evaluate video time explicitly. We describe three properties of video time, namely a) temporal asymmetry, b) temporal continuity and c) temporal causality. Based on each we formulate a task able to quantify the associated property. This allows assessing the effectiveness of modern video encoders, like C3D and LSTM, in their ability to model time. Our analysis provides insights about existing encoders while also leading us to propose a new video time encoder, which is better suited for the video time recognition tasks than C3D and LSTM. We believe the proposed meta-analysis can provide a reasonable baseline to assess video time encoders on equal grounds on a set of time-aware tasks.
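For the temporal asymmetry property, one way to build a probe task is to pair every clip with its frame-reversed copy and ask an encoder to tell the two apart; an encoder blind to time cannot beat chance. A minimal sketch of such data construction — our own illustration, not the paper's exact protocol:

```python
def asymmetry_probe(clips):
    """Build a binary arrow-of-time probe set: each clip appears
    in its forward order (label 1) and frame-reversed (label 0)."""
    data = []
    for clip in clips:
        data.append((list(clip), 1))
        data.append((list(reversed(clip)), 0))
    return data
```

Analogous constructions can probe continuity (detect a shuffled or dropped segment) and causality (order two sub-clips), yielding the set of tasks the analysis compares encoders on.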


Science and business in the Netherlands are joining forces in the field of artificial intelligence. On Thursday April 26, the Innovation Center for Artificial Intelligence (ICAI) was officially launched at Amsterdam Science Park. The first lab within ICAI is a partnership with Ahold Delhaize.

ICAI is focused on the joint development of AI technology through industry labs with the business sector, government and knowledge institutes. Maarten de Rijke, director of ICAI and professor of Information Retrieval at the University of Amsterdam: ‘The Netherlands has all the resources to take up a prominent position in the international AI landscape – top talent, innovation strength and research at world-class level. ICAI combines these strengths in a unique national initiative.’

ICAI is an open collaborative initiative between knowledge institutes that is aimed at AI innovation through public-private partnerships. The Center is located at Amsterdam Science Park and is initiated by the University of Amsterdam and the VU University Amsterdam together with the business sector and government. The organisation is built around industry labs – multi-year partnerships between academic and industrial partners aimed at technological and talent development. ICAI will be housed in a new co-development building where teaching, research and collaboration with the business sector and societal partners will come together.


We are witnessing a revolution in machine learning with the reinvigorated usage of neural networks in deep learning, which promises a solution to cognitive tasks that are easy for humans to perform but hard to describe formally. Deep learning is intended to allow computers to acquire knowledge directly from data, without the need for humans to specify it, and to model the problem at hand as a layered composition of simpler concepts, making it possible to express complex functions through elementary operators. By not relying on handcrafted features or hard-coded knowledge, and by showing the ability to regress intricate objective functions, deep learning methods are now employed in a broad spectrum of applications, from image classification to speech recognition. Deep learning achieves exceptional power and flexibility by learning to represent the task as a nested hierarchy of layers, with more abstract representations computed in terms of less abstract ones. The current resurgence is a result of breakthroughs in efficient layer-wise training, the availability of big datasets, and faster computers. Thanks to the simplified training of very deep architectures, today we can provide these algorithms with the resources they need to succeed.
A number of challenges are being raised and pursued. For instance, many deep learning algorithms have been designed to tackle supervised learning problems for a wide variety of tasks; how to solve unsupervised learning problems reliably and with a similar degree of success is an important issue to address. Another key research area is working successfully with smaller datasets, focusing on how to take advantage of large quantities of unlabeled examples alongside a few labeled samples. Deep agents may play a more significant role in hybrid decision systems where other machine learning techniques address the reasoning, bridging the gap between data and application decisions. We expect deep learning to be applied to increasingly multi-modal problems with more structure in the data, opening application domains in robotics and data mining.
This special issue of the high-impact IEEE Signal Processing Magazine seeks to provide a venue accessible to a wide and diverse audience to survey recent research and development advances in learning, including deep learning and beyond. Interested authors are asked to first prepare a white paper, following the instructions and schedule outlined below.

Topics of Interest include (but are not limited to):

  • Advanced deep learning techniques for supervised learning
  • Deep learning for unsupervised & semi-supervised learning
  • Online, reinforcement, incremental learning by deep models
  • Domain adaptation and transfer learning with deep networks
  • Deep learning for spatiotemporal data and dynamic systems
  • Visualization of deep features
  • New zero- and one-shot learning techniques
  • Advanced hashing and retrieval methods
  • Software and specialized hardware for deep learning
  • Novel applications and experimental activities

White papers are required, and full articles will be invited based on the review of the white papers. White papers are limited to 4 pages and should include the proposed article title, the motivation and significance of the topic, an outline of the proposed paper, and representative references; an author list, contact information and short bios should also be included. Articles submitted to this issue must be of a tutorial and overview/survey nature, written in a style accessible to a broad audience, and of significant relevance to the scope of the special issue. Submissions must not have been published or be under review elsewhere, and should be made online at http://mc.manuscriptcentral.com/sps-ieee. For submission guidelines, visit http://signalprocessingsociety.org/publicationsresources/ieee-signal-processing-magazine/information-authors-spm

Guest Editors

  • Prof. Fatih Porikli, Australian National University, fatih.porikli@anu.edu.au
  • Dr. Shiguang Shan, Chinese Academy of Sciences, sgshan@ict.ac.cn
  • Prof. Cees Snoek, University of Amsterdam, cgmsnoek@uva.nl
  • Dr. Rahul Sukthankar, Google, rahulsukthankar@gmail.com
  • Prof. Xiaogang Wang, Chinese University of Hong Kong, xgwang@ee.cuhk.edu.hk