Louis-Philippe Morency, “Towards Multimodal Sentiment Analysis: Integrating Linguistic, Auditory and Visual Cues”

December 16, 2011 | Granada, Spain

Speaker: Louis-Philippe Morency
Host: Neural Information Processing Systems Foundation

With more than 10,000 new videos posted online every day on social websites such as YouTube and Facebook, the internet is becoming an almost infinite source of information. One crucial challenge for the coming decade is to be able to harvest relevant information from this constant flow of multimodal data.

Subjectivity and sentiment analysis focuses on the automatic identification of private states, such as opinions, emotions, sentiments, evaluations, beliefs, and speculations in natural language. While subjectivity classification labels data as either subjective or objective, sentiment classification adds an additional level of granularity, by further classifying subjective data as either positive, negative or neutral. Much of the work to date on subjectivity and sentiment analysis has focused on textual data, and a number of resources have been created, including lexicons and large annotated datasets. Given the accelerated growth of other media on the Web and elsewhere, which includes massive collections of videos (e.g., YouTube, Vimeo, VideoLectures), images (e.g., Flickr, Picasa, Facebook), and audio (e.g., podcasts), the ability to address the identification of opinions and sentiment for diverse modalities is becoming increasingly important.
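The two-level scheme described above can be sketched as a toy pipeline: a first stage labels an utterance subjective or objective, and a second stage refines subjective utterances into positive, negative, or neutral. This is a minimal lexicon-based illustration only; the tiny word lists are hypothetical placeholders, not resources from the talk.

```python
# Toy two-stage subjectivity -> sentiment pipeline (illustrative sketch).
# The lexicons below are made-up placeholders, not real resources.

POSITIVE = {"great", "love", "excellent", "amazing"}
NEGATIVE = {"terrible", "hate", "awful", "boring"}
SUBJECTIVE_CUES = POSITIVE | NEGATIVE | {"think", "feel", "believe"}


def classify_subjectivity(text: str) -> str:
    """Stage 1: label text as subjective or objective."""
    words = set(text.lower().split())
    return "subjective" if words & SUBJECTIVE_CUES else "objective"


def classify_sentiment(text: str) -> str:
    """Stage 2: refine subjective text into positive/negative/neutral."""
    if classify_subjectivity(text) == "objective":
        return "objective"
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"


print(classify_sentiment("I think this movie is amazing"))  # positive
print(classify_sentiment("The movie runs two hours"))       # objective
```

A real system would replace the hand-written lexicons with learned classifiers and, as the talk argues, draw on auditory and visual cues in addition to the words themselves.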