Nikola Tesla, best known for his contributions to electromagnetism, might also be considered one of the founding fathers of mobile multimedia. This can be concluded from an interview with the New York Times (pictured below), published in Popular Mechanics in 1909. It took 100 years before his vision became reality. Amazing.

[Image: Tesla's 1909 interview in the New York Times]

Source: http://recombu.com/news/nikola-tesla-predicted-mobile-phones-in-1909_M11683.html


Another CIVR 2010 paper, titled Today’s and Tomorrow’s Retrieval Practice in the Audiovisual Archive, by Bouke Huurnink, Cees Snoek, Maarten de Rijke, and Arnold Smeulders, is also available online.

Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. We investigate to what extent content-based video retrieval methods can improve search in the audiovisual archive. In particular, we propose an evaluation methodology tailored to the specific needs and circumstances of the audiovisual archive, which are typically missed by existing evaluation initiatives. We utilize logged searches and content purchases from an existing audiovisual archive to create realistic query sets and relevance judgments. To reflect the retrieval practice of both the archive and the video retrieval community as closely as possible, our experiments with three video search engines incorporate archive-created catalog entries as well as state-of-the-art multimedia content analysis results. We find that incorporating content-based video retrieval into the archive’s practice results in significant performance increases for shot retrieval and for retrieving entire television programs. Our experiments also indicate that individual content-based retrieval methods yield approximately equal performance gains. We conclude that the time has come for audiovisual archives to start accommodating content-based video retrieval methods into their daily practice.

The resources developed as part of the system evaluation methodology for users of the audiovisual archive are available from http://ilps.science.uva.nl/resources/avarchive
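To give a flavor of how such a log-based evaluation can work, here is a minimal Python sketch that treats purchased items as relevant and scores a ranked result list with average precision. The query strings, item identifiers, and data structures are hypothetical illustrations, not the actual archive logs or the paper’s evaluation tooling.

```python
# Minimal sketch: score ranked result lists against relevance judgments
# derived from logged purchases. Queries, item ids, and data are hypothetical.

def average_precision(ranking, relevant):
    """Average precision of a ranked list given a set of relevant item ids."""
    hits, precision_sum = 0, 0.0
    for rank, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

# Query sets come from logged searches; relevance judgments from purchases.
purchase_log = {
    "soccer final": {"prog_017", "prog_233"},   # items bought after this query
    "queen beatrix": {"prog_104"},
}
search_results = {
    "soccer final": ["prog_233", "prog_998", "prog_017", "prog_555"],
    "queen beatrix": ["prog_771", "prog_104"],
}

ap_scores = [average_precision(search_results[q], purchase_log[q])
             for q in purchase_log]
print("MAP over logged queries:", sum(ap_scores) / len(ap_scores))
```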


Unsupervised Multi-Feature Tag Relevance Learning for Social Image Retrieval

The CIVR 2010 paper entitled Unsupervised Multi-Feature Tag Relevance Learning for Social Image Retrieval by Xirong Li, Cees Snoek, and Marcel Worring is available online. The work extends our tag-relevance approach. Interpreting the relevance of a user-contributed tag with respect to the visual content of an image is an emerging problem in social image retrieval. In the literature this problem is tackled by analyzing the correlation between tags and images represented by specific visual features. Unfortunately, no single feature represents the visual content completely, e.g., global features are suitable for capturing the gist of scenes, while local features are better for depicting objects. To solve the problem of learning tag relevance given multiple features, we introduce in this paper two simple and effective methods: one is based on the classical Borda Count and the other is a method we name UniformTagger. Both methods combine the output of many tag relevance learners driven by diverse features in an unsupervised, rather than supervised, manner. Experiments on 3.5 million socially tagged images and two test sets verify our proposal. Using learned tag relevance as updated tag frequency for social image retrieval, both Borda Count and UniformTagger outperform retrieval without tag relevance learning and retrieval with single-feature tag relevance learning. Moreover, the two unsupervised methods are comparable to a state-of-the-art supervised alternative, but without the need for any training data.
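As a rough illustration of the fusion idea, the sketch below combines the tag rankings produced by several feature-specific tag relevance learners with a classical Borda Count. The feature names, tags, and rankings are made up for the example and do not reproduce the paper’s implementation or its UniformTagger variant.

```python
# Sketch of Borda Count fusion over multiple feature-specific tag rankings.
# Feature names, tags, and rankings below are illustrative only.

from collections import defaultdict

def borda_count(rankings):
    """Fuse several ranked tag lists: a tag ranked r-th (1-based) in a list
    of n tags receives n - r points; points are summed over all lists."""
    points = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for rank, tag in enumerate(ranking, start=1):
            points[tag] += n - rank
    return sorted(points, key=points.get, reverse=True)

# Each learner ranks an image's tags by estimated relevance for one feature.
ranking_global = ["beach", "sunset", "holiday", "friend"]   # scene-like feature
ranking_local  = ["friend", "beach", "holiday", "sunset"]   # object-like feature
ranking_color  = ["sunset", "beach", "friend", "holiday"]

print(borda_count([ranking_global, ranking_local, ranking_color]))
# ['beach', 'sunset', 'friend', 'holiday'] -- tags ranked high across
# diverse features end up with the highest fused relevance.
```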


What problems do Google, Yahoo!, HP, Radvision, CeWe, Nokia and other companies see driving the future of multimedia? The Multimedia Grand Challenge is a set of problems and issues from these industry leaders, geared to engage the multimedia research community in solving relevant, interesting, and challenging questions on the multimedia industry’s 2-5 year horizon. The Grand Challenge was first presented as part of ACM Multimedia 2009, and it will be presented again in slightly modified form at ACM Multimedia 2010. Researchers are encouraged to submit working systems in response to these challenges to win the Grand Challenge competition!

Check it out at: http://www.multimediagrandchallenge.com/

Our Pinkpop video search engine is generating some media attention, including coverage in national newspapers and on television. It is likely that more concert footage will be added in the coming weeks, so stay tuned at: http://www.hollandsglorieoppinkpop.nl/.

I am quite excited that the technology is finally finding its way to a broad audience on very interesting video assets; see, for example, the concert by Moke at Pinkpop 2008. This is my best Sinterklaas present in years.

A video says more than 25,000 words per second, check it out at: http://www.hollandsglorieoppinkpop.nl/


The program for the first International Workshop on Internet Multimedia Mining is now available. With the explosion of video and image data available on the Internet, online multimedia applications are becoming more and more important. Moreover, mining semantics and other useful information from large-scale Internet multimedia data to facilitate online and local multimedia content analysis, search, and other related applications has also gained increasing attention from both academia and industry. The program covers the breadth of internet multimedia mining, with papers focusing on auto-annotation and new retrieval models. We are proud to have Zhongfei Zhang, who will deliver a keynote on Multimedia Data Mining Theory and Its Applications. The workshop is co-located with the IEEE International Conference on Data Mining in Miami, Florida, and will be held on Sunday, December 6th.



Last Friday I was given the opportunity to speak at the Society of the Query Conference, organized by the Amsterdam-based Institute of Network Cultures. The conference critically reflected on the information society and the dominant role of the (Google) search engine in our culture. Most speakers and audience members had a background in media studies, as Wikipedia puts it “an academic discipline that deals with the content, history and effects of various media; in particular, the ‘mass media’.” I was invited to speak on alternative search methods, especially concept-based video search of course. The conference was quite an interesting experience, both in terms of speaker presentation conventions (some presenters used only one slide, for decoration) and the very different type of audience (heavily debating via Twitter during the conference). I even learned some new words, most notably scookies and clickworkers. Although the conventions are different, there are still many similarities between media studies and the field of multimedia, which could result in interesting cross-fertilizations in the near future. All in all this was a rewarding experience, and I would like to thank the Institute of Network Cultures again for having me as a speaker.


The draft notebook paper for TRECVID 2009 by the MediaMill team, with members from the University of Amsterdam, INESC-ID, and the University of Surrey, is now available. In the paper we describe our TRECVID 2009 video retrieval experiments. The MediaMill team participated in three tasks: concept detection, automatic search, and interactive search. Starting point for the MediaMill concept detection approach is our top-performing bag-of-words system of last year, which uses multiple color descriptors, codebooks with soft-assignment, and kernel-based supervised learning. We improve upon this baseline system by exploring two novel research directions. Firstly, we study a multi-modal extension by including 20 audio concepts and fusing them using two novel multi-kernel supervised learning methods. Secondly, with the help of recently proposed algorithmic refinements of bag-of-words, a bag-of-words GPU implementation, and compute clusters, we scale up the amount of visual information analyzed by an order of magnitude, to a total of 1,000,000 i-frames. Our experiments evaluate the merit of these new components, ultimately leading to 64 robust concept detectors for video retrieval. For retrieval, a robust but limited set of concept detectors necessitates relying on as many auxiliary information channels as possible. For automatic search we therefore explore how we can learn to rank various information channels simultaneously to maximize video search results for a given topic. To improve the video retrieval results further, our interactive search experiments investigate the roles of visualizing preview results for a certain browse dimension and of relevance feedback mechanisms that learn to solve complex search topics by analyzing user browsing behavior. The 2009 edition of the TRECVID benchmark has again been a fruitful participation for the MediaMill team, resulting in the top ranking for both concept detection and interactive search.
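For readers unfamiliar with the bag-of-words ingredients mentioned above, here is a minimal sketch of codebook soft-assignment. The codebook size, descriptor dimensionality, Gaussian kernel, and sigma value are assumptions made purely for illustration; they do not reflect the actual MediaMill pipeline, which relies on multiple color descriptors, a GPU implementation, and kernel-based SVMs.

```python
# Minimal sketch of soft codebook assignment for a bag-of-words frame
# representation. Codebook size, sigma, and descriptors are illustrative only.

import numpy as np

def soft_assign_bow(descriptors, codebook, sigma=4.0):
    """Build a bag-of-words histogram where each local descriptor contributes
    to all codewords, weighted by a Gaussian kernel on its distance."""
    # Pairwise squared distances between descriptors and codewords.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    weights = np.exp(-d2 / (2.0 * sigma ** 2))
    weights /= weights.sum(axis=1, keepdims=True)   # each descriptor sums to 1
    histogram = weights.sum(axis=0)
    return histogram / histogram.sum()              # L1-normalized BoW vector

rng = np.random.default_rng(0)
codebook = rng.normal(size=(32, 64))       # 32 codewords, 64-D descriptors
frame_descriptors = rng.normal(size=(200, 64))
bow = soft_assign_bow(frame_descriptors, codebook)
print(bow.shape, bow.sum())                # (32,) 1.0
```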