ICMR2012: Special Session on Socio-Video Semantics

We are organizing a special session on Socio-Video Semantics at the forthcoming ACM International Conference on Multimedia Retrieval in Hong Kong.

Aims and Scope
All of a sudden, video became social. In just five years, individual and mostly passive consumers transformed into active and connected prosumers, revolutionaries even, who create, share, and comment on massive amounts of video all over the Web 2.0. Pronounced manifestations of social video on the Internet include industry initiatives like YouTube, Vimeo, Wikipedia, and Flickr, which manage to attract millions of users daily. It has been predicted that soon 91 percent of Internet traffic will be video, and smartphones will only accelerate this unstoppable momentum.

To make sense of these massive amounts of video content, online social platforms rely on what other people say is in the video, which is known to be ambiguous, overly personalized, and limited. Hence, the lack of semantics currently associated with online video seriously hampers retrieval, repurposing, and usage. In contrast to social video platforms, academic video sensemaking approaches rely on an analysis of the multimedia content. Such content-driven video search is important, if only to verify whether what people have said is factually in the video, or for (professional) archives that cannot be shared for crowdsourcing. Despite good progress, automated multimedia analysis of video content is still seriously hampered by the semantic gap: the lack of correspondence between the low-level audiovisual features that machines extract from video and the high-level conceptual interpretations a human gives to multimedia data.

Exploiting the social multimedia context of video for sensemaking has largely been ignored by the multimedia community. This special session provides a unique opportunity for high-quality papers connecting the social context of online video to video sensemaking.

Topics of Interest
Topics of interest include (but are not limited to):

Socio-video content analysis

  • Cross-modal (social / visual / audio) socio-video content analysis
  • Contextual models for socio-video analysis
  • Novel features for socio-video analysis
  • Complex event recognition in socio-videos
  • Socio-video copy detection
  • Content-aware ad optimization on socio-video sharing sites
  • Efficient learning and mining algorithms for scalable socio-video content analysis


Socio-video browsing and retrieval

  • Socio-video retrieval systems
  • Socio-video summarization
  • Recommender techniques for socio-video browsing
  • Mobile socio-video browsing and retrieval
  • User-centered interface and system design for socio-video browsing and retrieval


Socio-video benchmark construction and open-source software

  • Benchmark database construction for socio-video semantic analysis
  • Ontology construction for socio-video semantic analysis
  • Open-source software libraries for socio-video analysis


Paper Submission
All papers must be formatted according to the ACM conference style, may not exceed 8 pages in 9-point font, and must be submitted as PDF files.

ACM ICMR 2012 follows a double-blind review process. Please make sure that the authors' names and affiliations are excluded from the document, and avoid any other information that may identify the authors.

Either Microsoft Word or LaTeX may be used to prepare manuscripts (the final submission must be a PDF file). The paper templates can be downloaded directly from the ACM ICMR 2012 website:
http://www.icmr2012.org/submission.html

Selected manuscripts will also be invited for a special issue in IEEE Transactions on Multimedia on the same topic.

Important Dates
— Paper submission deadline: January 15, 2012
— Notification of acceptance: March 15, 2012
— Camera-ready manuscript: April 5, 2012

Organizers
Cees G. M. Snoek, University of Amsterdam (Netherlands)
Yu-Gang Jiang, Fudan University (China)
