Connotative Feature Extraction For Movie Recommendation
N. G. Meshram, A. P. Bhagat
Pages - 343 - 354     |    Revised - 10-08-2014     |    Published - 15-09-2014
Volume - 8   Issue - 5    |    Publication Date - September / October 2014
KEYWORDS
Audio Features, Connotative Features, Emotion Recognition, Movie Recommendation, Video Features.
ABSTRACT
Assessing the emotions that film content elicits in viewers is difficult; the connotative properties of a film offer an indirect way to do so. Connotation here denotes the emotions conveyed by audiovisual descriptors, from which a user's emotional reaction can be predicted, and these connotative features can in turn be used for movie recommendation. This paper presents a comparative analysis of several movie recommendation methods and introduces audio features useful for analyzing the emotions represented in movie scenes; video features can likewise be mapped to emotions. A methodology is provided for mapping audio features to emotional states such as happiness, sleepiness, excitement, sadness, relaxation, anger, distress, fear, tension, boredom, comedy and fight. The movie's audio is used for connotative feature extraction, which is then extended to emotion recognition. The paper also provides a comparative analysis of methods for recommending movies based on the user's emotions.
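As an illustration of the kind of audio-to-emotion mapping the abstract describes, the sketch below computes two simple audio descriptors (short-term energy as a rough arousal proxy and zero-crossing rate as a rough brightness proxy) and maps them onto four of the emotional states listed above. The descriptors, thresholds, and mapping rules here are illustrative assumptions, not the method proposed in the paper.

```python
import math

def extract_audio_features(signal):
    """Compute two simple descriptors used in affect analysis:
    short-term energy (arousal proxy) and zero-crossing rate
    (rough spectral-brightness proxy)."""
    n = len(signal)
    energy = sum(s * s for s in signal) / n
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return {"energy": energy, "zcr": crossings / n}

def map_to_emotion(features, energy_threshold=0.25, zcr_threshold=0.05):
    """Map the two features onto four coarse emotional states.
    Thresholds are illustrative placeholders, not values from the paper."""
    high_arousal = features["energy"] > energy_threshold
    bright = features["zcr"] > zcr_threshold
    if high_arousal and bright:
        return "excitement"
    if high_arousal:
        return "anger"
    if bright:
        return "happiness"
    return "sadness"

# Example: a loud, high-pitched synthetic tone (880 Hz at 8 kHz sampling).
sr = 8000
signal = [0.9 * math.sin(2 * math.pi * 880 * i / sr) for i in range(sr)]
print(map_to_emotion(extract_audio_features(signal)))  # high energy + high ZCR
```

A real system would of course replace these hand-set thresholds with a trained classifier (e.g., support vector regression over richer spectral features), but the structure, descriptor extraction followed by a mapping into an emotion space, is the same.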
CITED BY (1)  
1. Meshram, N. G., & Bhagat, A. P. (2014). Connotative Feature Extraction For Movie Recommendation. International Journal of Image Processing (IJIP), 8(5), 343.
Mr. N. G. Meshram
PG Department of Computer Science and Engineering, Prof Ram Meghe College of Engineering and Management, Badnera 444701, India
Dr. A. P. Bhagat
PG Department of Computer Science and Engineering, Prof Ram Meghe College of Engineering and Management, Badnera 444701, India
amol.bhagat84@gmail.com