1950 University Ave, Suite 200
Berkeley, California 94704

Free and open to the public, followed at 4pm by a Social Jam with refreshments (and beer). It would be helpful if you mark yourself as attending/watching above.

This week: two talks by Y!RB researchers, in preparation for the upcoming ACM MM conference.

* David A. Shamma (Y!RB) - Watch What I Watch: Using Community Activity to Understand Content

* Mor Naaman (Y!RB) - How Flickr Helps us Make Sense of the World: Context and Content in Community-Contributed Media Collections

ABSTRACTS:
Watch What I Watch: Using Community Activity to Understand Content
We present a high-level overview of Yahoo Research Berkeley’s approach to multimedia research and the ideas motivating it. This approach is characterized primarily by a shift away from building subsystems that attempt to discover or understand the “meaning” of media content toward systems and algorithms that make use of information about how media content is being used in specific contexts; a shift from semantics to pragmatics. We believe that, at least for the domain of consumer and web videos, the latter provides a more promising basis for indexing media content in ways that satisfy user needs. To illustrate our approach, we present ongoing work on several applications that generate and use contextual usage metadata to provide novel and useful media experiences.

How Flickr Helps us Make Sense of the World: Context and Content in Community-Contributed Media Collections
The advent of media-sharing sites like Flickr and YouTube has drastically increased the volume of community-contributed multimedia resources available on the web. These collections have a previously unimagined depth and breadth, and have generated new opportunities -- and new challenges -- for multimedia research. How do we analyze, understand and extract patterns from these new collections? How can we use these unstructured, unrestricted community contributions of media (and annotation) to generate "knowledge"?

As a test case, we study Flickr -- a popular photo-sharing website. Flickr supports photo, time and location metadata, as well as a lightweight annotation model. We extract information from this Flickr dataset using two different approaches. First, we employ a location-driven approach to generate aggregate knowledge in the form of "representative tags" for arbitrary areas in the world. Second, we use a tag-driven approach to automatically extract place and event semantics for Flickr tags, based on each tag's metadata patterns.
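For readers curious what a location-driven approach might look like in practice, here is a minimal sketch, not the algorithm from the talk, that scores "representative tags" for fixed-size geographic cells using a simple TF-IDF-style weighting. The cell size, the (lat, lon, tags) data layout, and the scoring are all assumptions made for illustration.

```python
# Hypothetical sketch (not the actual Y!RB method): find "representative tags"
# per geographic cell by weighting tags that are frequent in a cell but rare
# across cells, assuming photos arrive as (latitude, longitude, tags) records.
import math
from collections import Counter, defaultdict

def representative_tags(photos, cell_size=0.1, top_k=5):
    """photos: iterable of (lat, lon, tags); returns {cell: [top-k tags]}."""
    cell_tag_counts = defaultdict(Counter)  # tag frequency within each cell
    tag_cell_presence = Counter()           # number of cells each tag appears in

    for lat, lon, tags in photos:
        cell = (round(lat / cell_size), round(lon / cell_size))
        cell_tag_counts[cell].update(set(tags))  # count each tag once per photo

    for counts in cell_tag_counts.values():
        for tag in counts:
            tag_cell_presence[tag] += 1

    n_cells = len(cell_tag_counts)
    results = {}
    for cell, counts in cell_tag_counts.items():
        scored = {
            tag: tf * math.log(n_cells / tag_cell_presence[tag])
            for tag, tf in counts.items()
        }
        results[cell] = sorted(scored, key=scored.get, reverse=True)[:top_k]
    return results

# Example: a landmark-specific tag outranks a generic tag like "vacation"
# that appears in every cell.
photos = [
    (37.8199, -122.4783, ["goldengatebridge", "vacation"]),
    (37.8197, -122.4786, ["goldengatebridge", "fog"]),
    (40.7580, -73.9855, ["timessquare", "vacation"]),
]
print(representative_tags(photos))
```

The design choice here is the usual TF-IDF trade-off: tags that appear everywhere carry little locational information, so they are discounted regardless of how often they are used within any single area.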

With the patterns we extract from tags and metadata, vision algorithms can be employed with greater precision. In particular, we demonstrate a location-tag-vision-based approach to retrieving images of geography-related landmarks and features from the Flickr dataset. The results suggest that community-contributed media and annotation can improve our access to multimedia resources -- and our understanding of the world.

BIOS:
Ayman Shamma and Mor Naaman hang out and mostly try to stay out of trouble (sometimes successfully) at Yahoo! Research Berkeley.

Official Website: http://yahooresearchberkeley.com

Added by mor on August 30, 2007

Comments

ronin691

Will this event eventually be available to watch on Yahoo Video, for those of us who cannot attend in person?

mor

We're hoping that it will be, one day...