The NeurIPS 2019 paper ‘Hyperspherical Prototype Networks’ by Pascal Mettes, Elise van der Pol and Cees Snoek is available now. This paper introduces hyperspherical prototype networks, which unify classification and regression with prototypes on hyperspherical output spaces. For classification, a common approach is to define prototypes as the mean output vector over training examples per class. Here, we propose to use hyperspheres as output spaces, with class prototypes defined a priori with large margin separation. We position prototypes through data-independent optimization, with an extension to incorporate priors from class semantics. By doing so, we do not require any prototype updating, we can handle any training size, and the output dimensionality is no longer constrained to the number of classes. Furthermore, we generalize to regression, by optimizing outputs as an interpolation between two prototypes on the hypersphere. Since both tasks are now defined by the same loss function, they can be jointly trained for multi-task problems. Experimentally, we show the benefit of hyperspherical prototype networks for classification, regression, and their combination over other prototype methods, softmax cross-entropy, and mean squared error approaches.
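For intuition, the sketch below positions class prototypes on the unit hypersphere through data-independent optimization, pushing the most similar pair of prototypes apart in cosine similarity. It is a minimal illustration of the idea, not the paper's implementation: the function name, loss form and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def position_prototypes(num_classes, dims, steps=1000, lr=0.1):
    """Place class prototypes on the unit hypersphere with large margin
    separation, independent of any training data (illustrative sketch)."""
    protos = F.normalize(torch.randn(num_classes, dims), dim=1)
    protos.requires_grad_(True)
    optimizer = torch.optim.SGD([protos], lr=lr)
    for _ in range(steps):
        # Pairwise cosine similarities (rows are unit norm after projection).
        sims = protos @ protos.t()
        # Mask out self-similarity so it never dominates the maximum.
        sims = sims - 3.0 * torch.eye(num_classes)
        # Push apart the most similar pair per prototype.
        loss = sims.max(dim=1)[0].mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Re-project onto the hypersphere after the gradient step.
        with torch.no_grad():
            protos.div_(protos.norm(dim=1, keepdim=True))
    return protos.detach()

# Ten classes embedded on a 5-dimensional hypersphere; no class data needed,
# and the output dimensionality is not tied to the number of classes.
prototypes = position_prototypes(num_classes=10, dims=5)
```

With prototypes fixed in this way, classification outputs can be scored by cosine similarity to their class prototype and regression targets by interpolation between two opposing prototypes, which is how the two tasks end up sharing one loss.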


The ICCV paper “SILCO: Show a Few Images, Localize the Common Object” by Tao Hu, Pascal Mettes, Jia-Hong Huang and Cees Snoek is now available. Few-shot learning is a nascent research topic, motivated by the fact that traditional deep learning requires tremendous amounts of data. In this work, we propose a new task along this research direction, which we call few-shot common-localization. Given a few weakly-supervised support images, we aim to localize the common object in the query image without any box annotation. This task differs from standard few-shot settings, since we aim to address the localization problem, rather than the global classification problem. To tackle this new problem, we propose a network that aims to get the most out of the support and query images. To that end, we introduce a spatial similarity module that searches for the spatial commonality among the given images. We furthermore introduce a feature reweighting module to balance the influence of different support images through graph convolutional networks. To evaluate few-shot common-localization, we repurpose and reorganize the well-known Pascal VOC and MS-COCO datasets, as well as a video dataset from ImageNet VID. Experiments on the new settings for few-shot common-localization show the importance of searching for spatial similarity and feature reweighting, outperforming baselines from related tasks.
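To make the spatial similarity idea concrete, here is a small, hypothetical sketch that scores every query location by its best cosine match against a support feature map. It is not the paper's module, which additionally uses graph convolutions for feature reweighting; shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def spatial_similarity(support_feat, query_feat):
    """Toy search for spatial commonality between a support feature map
    [C, Hs, Ws] and a query feature map [C, Hq, Wq]. Returns a [Hq, Wq]
    map scoring how well each query location matches the support image."""
    C, Hs, Ws = support_feat.shape
    _, Hq, Wq = query_feat.shape
    s = F.normalize(support_feat.reshape(C, -1), dim=0)   # [C, Hs*Ws]
    q = F.normalize(query_feat.reshape(C, -1), dim=0)     # [C, Hq*Wq]
    # Cosine similarity between every (query, support) location pair.
    sims = q.t() @ s                                       # [Hq*Wq, Hs*Ws]
    # Each query location is scored by its best-matching support location.
    return sims.max(dim=1)[0].reshape(Hq, Wq)
```

With several support images, the resulting maps could simply be averaged; the paper's feature reweighting module instead learns how much influence each support image should have.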

The ICCV paper “Counting with Focus for Free” by Zenglin Shi, Pascal Mettes and Cees Snoek is now available. This paper aims to count arbitrary objects in images. The leading counting approaches start from point annotations per object from which they construct density maps. Then, their training objective transforms input images to density maps through deep convolutional networks. We posit that the point annotations serve more supervision purposes than just constructing density maps. We introduce ways to repurpose the points for free. First, we propose supervised focus from segmentation, where points are converted into binary maps. The binary maps are combined with a network branch and accompanying loss function to focus on areas of interest. Second, we propose supervised focus from global density, where the ratio of point annotations to image pixels is used in another branch to regularize the overall density estimation. To assist both the density estimation and the focus from segmentation, we also introduce an improved kernel size estimator for the point annotations. Experiments on six datasets show that all our contributions reduce the counting error, regardless of the base network, resulting in state-of-the-art accuracy using only a single network. Finally, we are the first to count on WIDER FACE, allowing us to show the benefits of our approach in handling varying object scales and crowding levels. Code is available at https://github.com/shizenglin/Counting-with-Focus-for-Free.
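As an illustration of how the point annotations can be repurposed, the sketch below derives the three supervision signals named above from a list of points. The fixed Gaussian sigma is a simplification, since the paper introduces an improved kernel size estimator; all names here are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def points_to_supervision(points, height, width, sigma=4.0):
    """Turn point annotations (one (y, x) per object) into three targets:
    a density map whose integral equals the object count, a binary
    segmentation-style focus map, and a global density ratio."""
    dots = np.zeros((height, width), dtype=np.float32)
    for y, x in points:
        dots[int(y), int(x)] += 1.0
    # Density map: Gaussian-blurred dot map; its sum stays the object count.
    density = gaussian_filter(dots, sigma=sigma)
    # Focus map: binary mask marking the regions around annotated points.
    focus = (density > 1e-4).astype(np.float32)
    # Global density: ratio of point annotations to image pixels,
    # usable to regularize the overall density estimate.
    global_density = len(points) / float(height * width)
    return density, focus, global_density
```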


The paper “Repetition Estimation” by Tom Runia, Cees Snoek and Arnold Smeulders has been accepted by the International Journal of Computer Vision. The paper studies visual repetition. Visual repetition is ubiquitous in our world. It appears in human activity (sports, cooking), animal behavior (a bee’s waggle dance), natural phenomena (leaves in the wind) and in urban environments (flashing lights). Estimating visual repetition from realistic video is challenging as periodic motion is rarely perfectly static and stationary. To better deal with realistic video, we relax the static and stationary assumptions often made by existing work. Our spatiotemporal filtering approach, established on the theory of periodic motion, effectively handles a wide variety of appearances and requires no learning. Starting from motion in 3D, we derive three periodic motion types by decomposition of the motion field into its fundamental components. In addition, three temporal motion continuities emerge from the field’s temporal dynamics. For the 2D perception of 3D motion we consider the viewpoint relative to the motion; what follows are 18 cases of recurrent motion perception. To estimate repetition under all circumstances, our theory implies constructing a mixture of differential motion maps. We temporally convolve the motion maps with wavelet filters to estimate repetitive dynamics. Our method is able to spatially segment repetitive motion directly from the temporal filter responses densely computed over the motion maps. For experimental verification of our claims, we use our novel dataset for repetition estimation, which better reflects reality with its non-static and non-stationary repetitive motion. On the task of repetition counting, we obtain favorable results compared to a deep learning alternative.
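Purely as an illustration of temporal wavelet filtering, the sketch below counts repetitions in a single 1D motion signal (for instance the mean magnitude of one differential motion map over time). The real method filters densely and spatially over a mixture of motion maps; the filter shape, period range and function name here are assumptions.

```python
import numpy as np

def count_repetitions(signal, fps, periods=np.arange(0.3, 3.0, 0.1)):
    """Estimate the repetition count of a 1D motion signal by convolving it
    with Morlet-like wavelet filters of different periods and keeping the
    period with the strongest response (illustrative sketch)."""
    signal = np.asarray(signal, dtype=np.float64)
    signal = signal - signal.mean()
    best_period, best_energy = periods[0], -np.inf
    for period in periods:
        n = max(int(4 * period * fps), 3)            # filter support of ~4 periods
        t = (np.arange(n) - n / 2) / fps
        wavelet = np.exp(-0.5 * (t / period) ** 2) * np.cos(2 * np.pi * t / period)
        wavelet /= np.linalg.norm(wavelet)           # comparable energy across scales
        response = np.convolve(signal, wavelet, mode='valid')
        energy = np.mean(response ** 2)
        if energy > best_energy:
            best_period, best_energy = period, energy
    return len(signal) / fps / best_period           # duration / period = count
```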

Time-aware encoding of frame sequences in a video is a fundamental problem in video understanding. While many have attempted to model time in videos, an explicit study on quantifying video time is missing. To fill this lacuna, the BMVC 2018 paper by Amir Ghodrati, Efstratios Gavves and Cees Snoek aims to evaluate video time explicitly. We describe three properties of video time, namely a) temporal asymmetry, b) temporal continuity and c) temporal causality. Based on each we formulate a task able to quantify the associated property. This allows assessing the effectiveness of modern video encoders, like C3D and LSTM, in their ability to model time. Our analysis provides insights about existing encoders while also leading us to propose a new video time encoder, which is better suited for the video time recognition tasks than C3D and LSTM. We believe the proposed meta-analysis can provide a reasonable baseline to assess video time encoders on equal grounds on a set of temporal-aware tasks.
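As one concrete example, the temporal asymmetry property can be turned into a task by asking an encoder to tell forward clips from time-reversed ones. The sketch below builds such a batch; the exact protocol used in the paper may differ, and the tensor layout is an assumption.

```python
import torch

def arrow_of_time_batch(clips):
    """Build a batch for a temporal-asymmetry task from clips of shape
    [B, T, C, H, W]: half are reversed in time, and the label encodes the
    playback direction (1 = forward, 0 = reversed)."""
    reversed_clips = torch.flip(clips, dims=[1])                 # reverse the time axis
    inputs = torch.cat([clips, reversed_clips], dim=0)
    labels = torch.cat([torch.ones(len(clips)), torch.zeros(len(clips))]).long()
    perm = torch.randperm(len(inputs))                           # shuffle the batch
    return inputs[perm], labels[perm]
```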


Science and business in the Netherlands are joining forces in the field of artificial intelligence. On Thursday April 26, the Innovation Center for Artificial Intelligence (ICAI) was officially launched at Amsterdam Science Park. The first lab within ICAI is a partnership with Ahold Delhaize.

ICAI is focused on the joint development of AI technology through industry labs with the business sector, government and knowledge institutes. Maarten de Rijke, director of ICAI and professor of Information Retrieval at the University of Amsterdam: ‘The Netherlands has all the resources to take up a prominent position in the international AI landscape – top talent, innovation strength and research at world-class level. ICAI combines these strengths in a unique national initiative.’

ICAI is an open collaborative initiative between knowledge institutes that is aimed at AI innovation through public-private partnerships. The Center is located at Amsterdam Science Park and is initiated by the University of Amsterdam and the VU University Amsterdam together with the business sector and government. The organisation is built around industry labs – multi-year partnerships between academic and industrial partners aimed at technological and talent development. ICAI will be housed in a new co-development building where teaching, research and collaboration with the business sector and societal partners will come together.


We are witnessing a revolution in machine learning with the reinvigorated usage of neural networks in deep learning, which promises a solution to cognitive tasks that are easy for humans to perform but hard to describe formally. It is intended to allow computers to acquire knowledge directly from data, without the need for humans to specify it, and to model the problem as a layered composition of simpler concepts, making it possible to express complex problems through elementary operations. By not relying on handcrafted features or hard-coded knowledge, and by showing the ability to regress intricate objective functions, deep learning methods are now employed in a broad spectrum of applications, from image classification to speech recognition. Deep learning achieves exceptional power and flexibility by learning to represent the task as a nested hierarchy of layers, with more abstract representations computed in terms of less abstract ones. The current resurgence is a result of breakthroughs in efficient layer-wise training, the availability of big datasets, and faster computers. Thanks to the simplified training of very deep architectures, today we can provide these algorithms with the resources they need to succeed.
A number of challenges are being raised and pursued. For instance, many deep learning algorithms have been designed to tackle supervised learning problems for a wide variety of tasks, and how to reliably solve unsupervised learning problems with a similar degree of success is an important issue to address. Another key research area is working successfully with smaller datasets, focusing on how we can take advantage of large quantities of unlabeled examples together with a few labeled samples. Deep agents may play a more significant role in hybrid decision systems where other machine learning techniques are used to address the reasoning, bridging the gap between data and application decisions. We expect deep learning to be applied to increasingly multi-modal problems with more structure in the data, opening application domains in robotics and data mining.
This special issue in the high-impact IEEE Signal Processing Magazine seeks to provide a venue, accessible to a wide and diverse audience, to survey recent research and development advances in learning, including deep learning and beyond. Interested authors are asked to prepare a white paper first, based on the instructions and schedule outlined below.

Topics of Interest include (but are not limited to):

  • Advanced deep learning techniques for supervised learning
  • Deep learning for unsupervised & semi-supervised learning
  • Online, reinforcement, incremental learning by deep models
  • Domain adaptation and transfer learning with deep networks
  • Deep learning for spatiotemporal data and dynamic systems
  • Visualization of deep features
  • New zero- and one-shot learning techniques
  • Advanced hashing and retrieval methods
  • Software and specialized hardware for deep learning
  • Novel applications and experimental activities

White papers are required, and full articles are invited based on the review of white papers. White papers are up to 4 pages in length and should include the proposed article title, the motivation and significance of the topic, an outline of the proposed paper, and representative references; an author list, contact information and short bios should also be included. Articles submitted to this issue must be of a tutorial and overview/survey nature, written in a style accessible to a broad audience, and have significant relevance to the scope of the special issue. Submissions should not have been published or be under review elsewhere, and should be made online at http://mc.manuscriptcentral.com/sps-ieee. For submission guidelines, visit http://signalprocessingsociety.org/publicationsresources/ieee-signal-processing-magazine/information-authors-spm

Guest Editors

  • Prof. Fatih Porikli, Australian National University, fatih.porikli@anu.edu.au
  • Dr. Shiguang Shan, Chinese Academy of Sciences, sgshan@ict.ac.cn
  • Prof. Cees Snoek, University of Amsterdam, cgmsnoek@uva.nl
  • Dr. Rahul Sukthankar, Google, rahulsukthankar@gmail.com
  • Prof. Xiaogang Wang, Chinese University of Hong Kong, xgwang@ee.cuhk.edu.hk

The ECCV 2016 paper Spot On: Action Localization from Pointly-Supervised Proposals by Pascal Mettes, Jan van Gemert and Cees Snoek is now available. We strive for spatio-temporal localization of actions in videos. The state-of-the-art relies on action proposals at test time and selects the best one with a classifier demanding carefully annotated boxes at train time. Annotating action boxes in video is cumbersome, tedious, and error prone. Rather than annotating boxes, we propose to annotate actions in video with points on a sparse subset of frames only. We introduce an overlap measure between action proposals and points and incorporate them all into the objective of a non-convex Multiple Instance Learning optimization. Experimental evaluation on the UCF Sports and UCF 101 datasets shows that (i) spatio-temporal proposals can be used to train classifiers while retaining the localization performance, (ii) point annotations yield results comparable to box annotations while being significantly faster to annotate, (iii) with a minimum amount of supervision our approach is competitive to the state-of-the-art. Finally, we introduce spatio-temporal action annotations on the train and test videos of Hollywood2, resulting in Hollywood2Tubes, available at tinyurl.com/hollywood2tubes.
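For intuition, a pared-down version of scoring a spatio-temporal proposal against point annotations might look as follows. The paper's actual overlap measure also discounts overly large boxes before entering the Multiple Instance Learning objective; the data layout and names here are assumptions.

```python
def point_overlap(proposal_boxes, points):
    """Score how well a spatio-temporal proposal matches sparse point
    annotations. `proposal_boxes` maps a frame index to (x1, y1, x2, y2);
    `points` maps a frame index to an annotated (x, y) point. The score is
    the fraction of annotated frames whose point falls inside the proposal
    box on that frame (a simplification of the paper's measure)."""
    hits, total = 0, 0
    for frame, (px, py) in points.items():
        if frame not in proposal_boxes:
            continue
        x1, y1, x2, y2 = proposal_boxes[frame]
        total += 1
        if x1 <= px <= x2 and y1 <= py <= y2:
            hits += 1
    return hits / total if total else 0.0
```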

