Better or More Noisy Concepts

The ICMR2013 paper ‘Recommendations for Video Event Recognition Using Concept Vocabularies’ by Amirhossein Habibian, Koen van de Sande and Cees Snoek is now available. Representing videos using vocabularies composed of concept detectors appears promising for event recognition. While many have recently shown the benefits of concept vocabularies for recognition, the important question of what concepts to include in the vocabulary has been ignored. In this paper, we study how to create an effective vocabulary for arbitrary event recognition in web video. We consider four research questions related to the number, the type, the specificity and the quality of the detectors in concept vocabularies. A rigorous experimental protocol using a pool of 1,346 concept detectors trained on publicly available annotations, a dataset containing 13,274 web videos from the Multimedia Event Detection benchmark, 25 event ground-truth definitions, and a state-of-the-art event recognition pipeline allows us to analyze the performance of various concept vocabulary definitions. From the analysis we arrive at the recommendation that for effective event recognition the concept vocabulary should i) contain more than 200 concepts, ii) be diverse by covering object, action, scene, people, animal and attribute concepts, iii) include both general and specific concepts, and iv) increase the number of concepts rather than improve the quality of the individual detectors. We consider the recommendations for video event recognition using concept vocabularies the most important contribution of the paper, as they provide guidelines for future work.
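The vocabulary-based pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's exact method: it assumes average pooling of per-frame detector scores and a linear event model, both of which are common choices rather than details taken from the publication.

```python
# Sketch: represent a video as the average of per-frame concept detector
# scores, then score an event with a linear model over that vocabulary.

def pool_scores(frame_scores):
    """Average-pool per-frame detector confidences into one video-level vector."""
    n_frames = len(frame_scores)
    n_concepts = len(frame_scores[0])
    return [sum(f[c] for f in frame_scores) / n_frames for c in range(n_concepts)]

def event_score(video_vec, weights, bias=0.0):
    """Linear event model over the concept vocabulary; weights would be learned."""
    return sum(v * w for v, w in zip(video_vec, weights)) + bias

frames = [[0.9, 0.1, 0.2],      # toy detector scores: 2 frames, 3 concepts
          [0.7, 0.3, 0.0]]
vec = pool_scores(frames)        # ≈ [0.8, 0.2, 0.1]
```

In a real system the weights would come from a classifier trained on event examples; with more than 200 diverse concepts, as the paper recommends, `vec` simply grows in dimensionality while the pipeline stays the same.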

Informative Concept Bank
The ICMR2013 paper ‘Searching Informative Concept Banks for Video Event Detection’ by Masoud Mazloom, Efstratios Gavves, Koen van de Sande and Cees Snoek is now available. An emerging trend in video event detection is to learn an event from a bank of concept detector scores. Different from existing work, which simply relies on a bank containing all available detectors, we propose in this paper an algorithm that learns from examples what concepts in a bank are most informative per event. We model finding this bank of informative concepts out of a large set of concept detectors as a rare event search. Our proposed approximate solution finds the optimal concept bank using a cross-entropy optimization. We study the behavior of video event detection based on a bank of informative concepts by performing three experiments on more than 1,000 hours of arbitrary internet video from the TRECVID multimedia event detection task. Starting from a concept bank of 1,346 detectors we show that 1) some concept banks are more informative than others for specific events, 2) event detection using an automatically obtained informative concept bank is more robust than using all available concepts, 3) even for small amounts of training examples an informative concept bank outperforms a full bank and a bag-of-words event representation, and 4) qualitatively, the informative concept banks make sense for the events of interest, without being programmed to do so. We conclude that for concept banks it pays to be informative.
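The cross-entropy optimization above can be illustrated with a generic sketch. Everything here is an assumption for illustration, not the authors' formulation: the fitness function, sample counts, elite fraction, and smoothing factor are all hypothetical placeholders. The idea is the standard cross-entropy method: maintain per-concept inclusion probabilities, sample candidate banks, and re-estimate the probabilities from the best-scoring samples.

```python
import random

def cross_entropy_select(n_concepts, fitness, n_samples=50,
                         elite_frac=0.2, iters=20, alpha=0.7, seed=0):
    """Cross-entropy search for an informative concept subset (a sketch).

    fitness(bank) -> higher is better, e.g. cross-validated event detection
    performance with that bank. alpha smooths the probability update so a
    useful concept is not locked out by one unlucky round.
    """
    rng = random.Random(seed)
    p = [0.5] * n_concepts                      # inclusion probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = []
        for _ in range(n_samples):
            bank = [i for i in range(n_concepts) if rng.random() < p[i]]
            samples.append((fitness(bank), bank))
        samples.sort(key=lambda s: s[0], reverse=True)
        elite = [bank for _, bank in samples[:n_elite]]
        for i in range(n_concepts):             # move p toward elite frequency
            freq = sum(i in bank for bank in elite) / len(elite)
            p[i] = alpha * freq + (1 - alpha) * p[i]
    return [i for i in range(n_concepts) if p[i] > 0.5]

# toy fitness: concepts 0-2 are informative, extra concepts cost a little
target = {0, 1, 2}
best = cross_entropy_select(10, lambda bank: len(target & set(bank)) - 0.1 * len(bank))
```

On this toy problem the probabilities concentrate on the informative concepts; in the paper's setting the search would run over the 1,346-detector bank with event detection accuracy as the fitness.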

Two positions of POSTDOCTORAL RESEARCH FELLOW in Video Search are open in the Informatics Institute of the University of Amsterdam, starting Spring 2013.

The positions are part of a 5-year Personal VIDI Grant funded by the Dutch Organization for Scientific Research and headed by Dr. Cees Snoek. The successful candidates will participate in a frontier research project on video recognition and explanation, and will work in a stimulating environment of a leading and highly-active research team including 1 faculty member and 6 Ph.D. students. The team has repeatedly won the major visual search competitions, including NIST TRECVID, PASCAL Visual Object Challenge, ImageCLEF, and the ImageNet Large Scale Visual Recognition Challenge.

Details on requirements, appointment and application are now available: http://www.uva.nl/en/about-the-uva/working-at-the-uva/vacancies/item/13-007.html

As part of the Dutch Prize for ICT research, NWO and Smidswater have created a beautiful poster, which will be distributed to high schools in the Netherlands. You may also download it here.

Photo: Hilde de Wolf Fotografie.

Dr. Cees Snoek of the University of Amsterdam (UvA) has won the Netherlands Prize for ICT research 2012, which comes with €50,000. Computer scientist Snoek leads a research team working on the development of a smart search engine for digital video: the MediaMill Semantic Video Search Engine.

The Netherlands Prize for ICT research was established for scientists under 40 who are conducting innovative research or are responsible for a scientific breakthrough in the field of ICT. The award is an initiative of the ICT Research Platform Netherlands (IPN) and the Netherlands Organisation for Scientific Research’s (NWO) Physical Sciences division in cooperation with the Royal Holland Society of Sciences (KHMW).

The paper “Content-Based Analysis Improves Audiovisual Archive Retrieval” by Bouke Huurnink, Cees Snoek, Maarten de Rijke, and Arnold Smeulders, which appears in the August issue of IEEE Transactions on Multimedia, is now available. Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs and retrieval data already present in the audiovisual archive, and demonstrate that retrieval performance can be significantly improved when content-based methods are applied to search. To the best of our knowledge, this is the first time that the practice of an audiovisual archive has been taken into account for quantitative retrieval evaluation. To arrive at our main result, we propose an evaluation methodology tailored to the specific needs and circumstances of the audiovisual archive, which are typically missed by existing evaluation initiatives. We utilize logged searches, content purchases, session information, and simulators to create realistic query sets and relevance judgments. To reflect the retrieval practice of both the archive and the video retrieval community as closely as possible, our experiments with three video search engines incorporate archive-created catalog entries as well as state-of-the-art multimedia content analysis results. A detailed query-level analysis indicates that individual content-based retrieval methods such as transcript-based retrieval and concept-based retrieval yield approximately equal performance gains. When combined, we find that content-based video retrieval incorporated into the archive’s practice results in significant performance increases for shot retrieval and for retrieving entire television programs. The time has come for audiovisual archives to start accommodating content-based video retrieval methods into their daily practice.

The paper “Harvesting Social Images for Bi-Concept Search” by Xirong Li, Cees Snoek, Marcel Worring and Arnold Smeulders, which appears in the August issue of IEEE Transactions on Multimedia, is now available. Searching for the co-occurrence of two visual concepts in unlabeled images is an important step towards answering complex user queries. Traditional visual search methods use combinations of the confidence scores of individual concept detectors to tackle such queries. In this paper we introduce the notion of bi-concepts, a new concept-based retrieval method that is directly learned from social-tagged images. As the number of potential bi-concepts is gigantic, manually collecting training examples is infeasible. Instead, we propose a multimedia framework to collect de-noised positive as well as informative negative training examples from the social web, to learn bi-concept detectors from these examples, and to apply them in a search engine for retrieving bi-concepts in unlabeled images. We study the behavior of our bi-concept search engine using 1.2M social-tagged images as a data source. Our experiments indicate that harvesting examples for bi-concepts differs from traditional single-concept methods, yet the examples can be collected with high accuracy using a multi-modal approach. We find that directly learning bi-concepts is better than oracle linear fusion of single-concept detectors, with a relative improvement of 100%. This study reveals the potential of learning high-order semantics from social images, for free, suggesting promising new lines of research.
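The harvesting idea can be illustrated with its simplest first step: candidate positives for a bi-concept such as “horse riders on the beach” are social images carrying both tags. This sketch covers only that tag-filtering step with invented toy data; the paper's full multi-modal framework additionally de-noises these candidates and mines informative negatives.

```python
# Sketch of bi-concept candidate harvesting: an image qualifies as a
# candidate positive when its social tag set contains both concepts.

def harvest_biconcept(images, concept_a, concept_b):
    """Return ids of images tagged with both concepts."""
    return [img_id for img_id, tags in images
            if concept_a in tags and concept_b in tags]

social = [                       # toy social-tagged collection
    ("img1", {"horse", "beach", "sunset"}),
    ("img2", {"horse", "stable"}),
    ("img3", {"beach", "sea"}),
    ("img4", {"horse", "beach"}),
]
positives = harvest_biconcept(social, "horse", "beach")  # ["img1", "img4"]
```

Because social tags are noisy, such raw candidates would still need the de-noising step the paper proposes before a reliable bi-concept detector can be trained on them.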


Congratulations to Xirong Li for being awarded the 2012 IEEE Transactions on Multimedia Prize Paper Award. Li received the prize for the publication “Learning Social Tag Relevance by Neighbor Voting”. The Multimedia Prize Paper Award is an annual award for an original paper in the field of multimedia published in the IEEE Transactions on Multimedia in the previous three calendar years. The paper of Xirong Li was selected out of 14 nominations. The basis for judging is a composite of originality, utility, timeliness, and clarity of presentation.

About the research

In a world where the amount of digital images is ever-growing, it is important to be able to search based on the visual content. Xirong Li was inspired by social media and investigated the value of images with social tags for visual search. He developed an algorithm that automatically determines whether the tag people assign to a photo matches what is actually visible in the image. Moreover, the paper provides a formal analysis of the proposed algorithm, theoretically showing its effectiveness for both image ranking and tag ranking.
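The core of neighbor voting can be sketched compactly. This is a simplified illustration with invented toy data, not the paper's exact estimator: a tag is scored as relevant to an image when the image's visual nearest neighbors use that tag more often than expected by chance in the whole collection.

```python
def tag_relevance(tag, neighbor_tags, collection_tags):
    """Neighbor-voting sketch: votes from visual neighbors minus a prior.

    neighbor_tags: tag sets of the image's visual nearest neighbors.
    collection_tags: tag sets of the whole collection, used to estimate
    how often the tag would be voted for by chance.
    """
    votes = sum(tag in tags for tags in neighbor_tags)
    prior = len(neighbor_tags) * (
        sum(tag in tags for tags in collection_tags) / len(collection_tags))
    return votes - prior

collection = [{"dog"}, {"dog", "park"}, {"cat"}, {"car"}]   # toy collection
neighbors = [{"dog"}, {"dog", "park"}, {"cat"}]             # visual neighbors
rel = tag_relevance("dog", neighbors, collection)           # 2 - 1.5 = 0.5
```

Subtracting the prior is what separates genuinely visual tags from merely frequent ones: a tag that appears everywhere gains many votes but also a large prior, so its relevance score stays low.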

Publication information

Xirong Li, Cees G. M. Snoek, and Marcel Worring, “Learning Social Tag Relevance by Neighbor Voting,” IEEE Transactions on Multimedia, vol. 11, iss. 7, pp. 1310-1322, 2009.

The Chinese Government Award for Outstanding Self-financed Students Abroad was awarded to Xirong Li.

The PhD thesis of Xirong, entitled ‘Content-Based Visual Search Learned from Social Media’, reveals the value of socially tagged images for content-based visual search. To learn from social media, Xirong proposed algorithms which automatically determine whether a tag spontaneously assigned to a picture is factually relevant with respect to the visual content. By identifying relevant tags, he has found a way to transform noisy social data into numerous well-labelled examples. This leads to an intelligent search engine which can find unlabelled images on the Internet, a smart phone, or a laptop. The increasing availability of labelled examples also enables the search engine to answer more complex queries, e.g., finding images of horse riders on the beach. Xirong’s work opens up promising avenues for search engines that provide access to the semantics of unlabelled images, without the need for expert labelling. Xirong successfully defended his thesis on 9 March 2012 and is currently an Assistant Professor at Renmin University of China.

The Chinese Government Award for Outstanding Self-financed Students Abroad was founded by the Chinese government in 2003 with the purpose of rewarding academic excellence among self-financed Chinese students studying overseas. Only those with outstanding performance in their PhD studies are considered by the award selection committee. Each year, approximately 500 young Chinese talents worldwide are granted the award.