Thursday, Stratis and I had the first project meeting of our IM-pact project in Deurne, Belgium. The IM-pact project strives for international collaboration between the Netherlands and Flanders on the topic of Dutch visual, speech, and language culture. On the sub-theme of semi-automatic extraction, storage, and retrieval of visual data, we have teamed up with Sebastian Zimmer, Tinne Tuytelaars, and Luc Van Gool from KU Leuven in the BeeldCanon project. The purpose of the meeting was to introduce the different projects, four in total, and to socialize with the different team members. The project has an interesting set of sub-projects, which, in addition to the BeeldCanon project, cover (Dutch) speech analysis, user aspects of retrieval in cultural heritage archives, and legal aspects of intellectual property rights related to multimedia. The meeting was organized by IBBT and ictRegie, who took good care of the social aspects, including a wine tasting session and a cooking workshop; see the attached action shot of Stratis. It was a successful day, and I am looking forward to the next meeting.

A preprint of the paper Comparing Compact Codebooks for Visual Categorization by Jan van Gemert, Cor Veenman, Arnold Smeulders, Jan-Mark Geusebroek, and myself is available online now. In the face of current large-scale video libraries, the practical applicability of content-based indexing algorithms is constrained by their efficiency. To this end, this paper compares various visual-based concept categorization techniques for efficient large-scale video indexing. In visual categorization, the popular codebook model has shown excellent categorization performance. The codebook model represents continuous visual features by discrete prototypes predefined in a vocabulary. The vocabulary size has a major impact on categorization efficiency, where a more compact vocabulary is more efficient. However, smaller vocabularies typically score lower on classification performance than larger vocabularies. This paper compares four approaches to achieve a compact codebook vocabulary while retaining categorization performance. For these four methods, we investigate the trade-off between codebook compactness and categorization performance. We evaluate the methods on more than 200 hours of challenging video data with as many as 101 semantic concepts. The results allow us to create a taxonomy of the four methods based on their efficiency and categorization performance. The paper will appear in the forthcoming special issue on Image and Video Retrieval Evaluation of Computer Vision and Image Understanding.
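To make the codebook idea concrete, here is a minimal bag-of-visual-words sketch in Python: local descriptors are clustered into a vocabulary of discrete prototypes, and an image is then represented by a histogram of nearest-prototype assignments. This is an illustrative toy, not the code used in the paper; the function names, the 128-dimensional random features, and the vocabulary size of 64 are assumptions made for the example.

```python
# Minimal bag-of-visual-words sketch (illustrative only, not the paper's implementation).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, vocab_size):
    """Cluster local descriptors into `vocab_size` discrete prototypes (visual words)."""
    return KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(descriptors)

def codebook_histogram(descriptors, vocabulary):
    """Represent an image by a normalized histogram of nearest-prototype assignments."""
    words = vocabulary.predict(descriptors)           # hard assignment to nearest prototype
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_descriptors = rng.normal(size=(5000, 128))            # SIFT-like 128-d features (toy data)
    vocab = build_vocabulary(train_descriptors, vocab_size=64)  # a compact vocabulary
    image_descriptors = rng.normal(size=(300, 128))
    print(codebook_histogram(image_descriptors, vocab).shape)   # (64,)
```

A smaller `vocab_size` gives a more compact, more efficient representation, which is exactly the trade-off against categorization performance that the paper studies.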
A preprint of the paper Evaluating Color Descriptors for Object and Scene Recognition by Koen van de Sande, Theo Gevers, and myself is available online now. In this paper we study visual descriptors, which are an important prerequisite to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a dataset with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results reveal further that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the dataset and object and scene categories is available, OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8% on the PASCAL VOC 2007 and by 7% on the Mediamill Challenge. The paper appears in IEEE Transactions on Pattern Analysis and Machine Intelligence; the software is available at www.colordescriptors.com.
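As a concrete illustration of the photometric reasoning behind descriptors such as OpponentSIFT, the sketch below converts an RGB image to the opponent color space, whose first two channels cancel equal shifts in light intensity. This is a simplified illustration and not the released colordescriptors software; the assumption of a float RGB image in [0, 1] and the function name are mine.

```python
# Opponent color transform underlying descriptors such as OpponentSIFT
# (hedged sketch; assumes an H x W x 3 float RGB image in [0, 1]).
import numpy as np

def rgb_to_opponent(rgb):
    """Convert an RGB image to the opponent color space (O1, O2, O3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)              # equal intensity offsets in R, G, B cancel
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)    # equal intensity offsets cancel here as well
    o3 = (r + g + b) / np.sqrt(3.0)          # intensity channel, no photometric invariance
    return np.stack([o1, o2, o3], axis=-1)

if __name__ == "__main__":
    image = np.random.default_rng(0).random((4, 4, 3))
    print(rgb_to_opponent(image).shape)      # (4, 4, 3)
```

Computing SIFT on these channels, rather than on intensity alone, is what adds the color information and partial illumination invariance the paper evaluates.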
An interesting ‘Viewpoint’ appeared in this month’s issue of Communications of the ACM. In Time for computer science to grow up, Lance Fortnow argues that because the field of computer science at large is maturing, we need to switch our attention from publishing deadline-driven conference papers, often conforming to the “least-publishable unit”, to well worked-out archival journal papers. I could not agree more. Lance provides several arguments in favor of publishing journal papers rather than conference papers, but ignores an important one: when computer science researchers are evaluated alongside scientists from more traditional disciplines, reviewers typically compare their published journal papers rather than their conference papers, as conference papers are often considered unimportant in other disciplines. The paper is online at: http://doi.acm.org/10.1145/1536616.1536631
The paper on Concept-Based Video Retrieval by myself and Marcel Worring has appeared in Foundations and Trends® in Information Retrieval. In this paper, we review 300 references on video retrieval, indicating when text-only solutions are unsatisfactory and showing the promising alternatives, which are for the most part concept-based. Therefore, central to our discussion is the notion of a semantic concept: an objective linguistic description of an observable entity. Specifically, we present our view on how its automated detection, selection under uncertainty, and interactive usage might solve the major scientific problem for video retrieval: the semantic gap. To bridge the gap, we lay down the anatomy of a concept-based video search engine. We present a component-wise decomposition of such an interdisciplinary multimedia system, covering influences from information retrieval, computer vision, machine learning, and human-computer interaction. For each of the components we review state-of-the-art solutions in the literature, each having different characteristics and merits. Because of these differences, we cannot understand the progress in video retrieval without serious evaluation efforts such as those carried out in the NIST TRECVID benchmark. We discuss its data, tasks, results, and the many derived community initiatives in creating annotations and baselines for repeatable experiments. We conclude with our perspective on future challenges and opportunities. The paper is available for download now.

Our tutorial on coloring visual search was accepted for the forthcoming IEEE International Conference on Computer Vision, Kyoto, Japan. In this half-day course, Theo Gevers, Arnold Smeulders, and myself will focus on the challenges in visual search using color, present methods for achieving state-of-the-art performance, and indicate how to obtain improvements in the near future. Moreover, we give an overview of the latest developments and future trends in the field of visual search based on the Pascal VOC and TRECVID benchmarks, the leading benchmarks for image and video retrieval. A tutorial website with a detailed tutorial description, overview of lecture topics, and related material is available online now.
Recently more and more researchers are realizing both the challenges and the opportunities for multimedia research brought by the Internet. In order to bring together high-quality and novel research on Internet Multimedia Mining, Xian-Sheng Hua, Zhi-Hua Zhou, and myself are organizing a workshop on the topic at the forthcoming IEEE International Conference on Data Mining. One of the major obstacles in Internet Multimedia Mining research is the difficulty of forming a good dataset for algorithm development, system prototyping, and performance evaluation. Together with this workshop, we release a benchmark dataset, which is based on real Internet multimedia data and real Internet multimedia search engines (check the website for details). Submissions to this workshop are encouraged to use this dataset, but papers and demos working on other Internet-based datasets are also welcome. The deadline for submitting a maximum of 10 pages in the IEEE 2-column format is August 8.
Although the Mexican flu can still influence the final conference dates, the forthcoming paper for ICME 2009 in Cancun by Arjan Setz and myself, entitled “Can Social Tagged Images Aid Concept-Based Video Search?”, is available online now. This paper seeks to unravel whether commonly available social tagged images can be exploited as a training resource for concept-based video search. Since social tags are known to be ambiguous, overly personalized, and often error prone, we place special emphasis on the role of disambiguation. We present a systematic experimental study that evaluates concept detectors based on social tagged images, and their disambiguated versions, in three application scenarios: within-domain, cross-domain, and together with an interacting user. The results indicate that social tagged images can indeed aid concept-based video search, especially after disambiguation and when used in an interactive video retrieval setting. These results open up interesting avenues for future research.
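For readers who want a feel for the general idea, the hedged sketch below trains a toy concept detector from tagged images: images carrying the concept tag, optionally filtered by a crude co-occurring-tag disambiguation step, serve as positives for a linear classifier whose scores could then rank video keyframes. This is not the pipeline from the paper; the feature dimensionality, the "jaguar"/"zoo" example tags, and the helper functions are all illustrative assumptions.

```python
# Toy sketch: a concept detector trained from social-tagged images (illustrative only).
import numpy as np
from sklearn.svm import LinearSVC

def select_training_images(features, tag_lists, concept, context_tags):
    """Label an image positive when it carries the concept tag plus a supporting context tag
    (a very crude stand-in for disambiguation)."""
    labels = [1 if concept in tags and any(c in tags for c in context_tags) else 0
              for tags in tag_lists]
    return features, np.array(labels)

def train_concept_detector(features, labels):
    """Fit a linear classifier on image features; its scores could rank video keyframes."""
    return LinearSVC(C=1.0).fit(features, labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 64))                                   # toy image features
    tags = [["jaguar", "car"] if i % 2 else ["jaguar", "zoo"] for i in range(200)]
    X, y = select_training_images(feats, tags, "jaguar", context_tags=["zoo"])
    detector = train_concept_detector(X, y)
    print(detector.decision_function(feats[:3]))                         # keyframe-style scoring
```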

The forthcoming IEEE Transactions on Multimedia paper by Xirong Li, Cees Snoek, and Marcel Worring, entitled “Learning Social Tag Relevance by Neighbor Voting” is available online now. Social image analysis and retrieval is important for helping people organize and access the increasing amount of user-tagged multimedia. Since user tagging is known to be uncontrolled, ambiguous, and overly personalized, a fundamental problem is how to interpret the relevance of a user-contributed tag with respect to the visual content the tag is describing. Intuitively, if different persons label visually similar images using the same tags, these tags are likely to reflect objective aspects of the visual content. Starting from this intuition, we propose in this paper a neighbor voting algorithm which accurately and efficiently learns tag relevance by accumulating votes from visual neighbors. Under a set of well defined and realistic assumptions, we prove that our algorithm is a good tag relevance measurement for both image ranking and tag ranking. Three experiments on 3.5 million Flickr photos demonstrate the general applicability of our algorithm in both social image retrieval and image tag suggestion. Our tag relevance learning algorithm substantially improves upon baselines for all the experiments. The results suggest that the proposed algorithm is promising for real-world applications.
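Below is a simplified sketch of the neighbor-voting idea, under the assumption that each image is represented by a single global feature vector and its tags by a plain list of strings: every tag of a query image receives one vote per visual neighbor that also carries the tag, and the vote count is corrected by the tag's prior frequency in the collection. The function name, the Euclidean distance, and the exact prior correction shown here are illustrative choices, not the paper's precise formulation.

```python
# Hedged sketch of neighbor-voting tag relevance: tags that recur among visually
# similar images from the collection are considered more relevant.
import numpy as np

def tag_relevance(query_feature, query_tags, features, tag_lists, k=100):
    """Score each tag of a query image by votes from its k visual neighbors,
    corrected by the tag's prior frequency in the collection."""
    # k nearest neighbors by Euclidean distance in visual feature space
    # (in practice the query image itself, and images from the same user, would be excluded)
    dists = np.linalg.norm(features - query_feature, axis=1)
    neighbors = np.argsort(dists)[:k]

    n_images = len(tag_lists)
    scores = {}
    for tag in query_tags:
        votes = sum(1 for i in neighbors if tag in tag_lists[i])
        prior = k * sum(1 for tags in tag_lists if tag in tags) / n_images
        scores[tag] = votes - prior   # more neighbor votes than expected => relevant tag
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 32))                                  # toy global features
    vocab = ["beach", "sunset", "me", "2007"]
    tags = [list(rng.choice(vocab, size=2, replace=False)) for _ in range(1000)]
    print(tag_relevance(feats[0], tags[0], feats, tags, k=50))
```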

This week I learned that my PhD thesis, entitled “The Authoring Metaphor to Machine Understanding of Multimedia”, is still actively used for research purposes, but certainly not in the way it was intended ;) My office-mate Sander found a useful application of the research; see the picture on the right.
For those of you interested in a hard copy version of the booklet, I still have a few left; just drop me an email. More visual evidence showcasing useful applications that build upon the research is welcome.



