AI, film and emotion
Music can form a soundtrack for actions. In film, for example, it is an essential tool for creating suspense and different moods. We subconsciously associate the sound of a solo cello with an emotionally moving moment, while a deep, rapid, irregular rhythm heightens the feeling of tension. The aesthetic appeal of such music differs widely from one audience member to the next, depending on their musical education, taste and the situation in which they find themselves. AI can help here: sound analysis provides a factual, measurable, physical basis for evaluating the impact of different types of music. The AI can then find just the right musical expression for a particular film sequence.
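To give a sense of what such a "measurable, physical basis" can look like, here is a minimal sketch of one common audio descriptor, the spectral centroid, which roughly tracks how "bright" a sound is. This is an illustrative example, not the exhibit's actual analysis pipeline; the function name and parameters are our own.

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Spectral centroid: the magnitude-weighted mean frequency (Hz).
    Higher values roughly correspond to a 'brighter' sound."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# A pure 440 Hz tone: its centroid sits at the tone's frequency.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(round(spectral_centroid(tone, sr)))  # 440
```

Descriptors like this (alongside tempo, loudness, roughness and others) turn a recording into numbers that can be compared across pieces, which is the raw material an AI needs to relate music to perceived mood.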
AI Sorts Film Music
On the Kohonen map on the left, you can see how AI sorts different film scores. The dots on the chart represent excerpts of famous film music; the different colours indicate which piece each excerpt belongs to.
Touch the dots with your finger to listen to the music!
Compare the AI's sorting with the experts' evaluation of valence and arousal* in the graph on the right (*see glossary text).
Can you follow the AI's sorting? Why, for example, is "Winnetou" often so close to "Three Hazelnuts for Cinderella"? And why does "Star Wars" spread across the whole map, while TV music from the 1960s is only found in the bottom right corner? The buttons at the top left might help: they show how the individual musical parameters are distributed across the map (where yellow indicates a strong influence).
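The Kohonen map behind this display is a self-organizing map: a grid of nodes that learns to place similar feature vectors close together, so excerpts with similar musical parameters end up as neighbours. The sketch below shows the core training loop under simplifying assumptions (random toy data, a small grid, illustrative hyperparameters); the real exhibit system is more elaborate.

```python
import numpy as np

def train_som(data, grid_h=8, grid_w=8, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing (Kohonen) map on feature vectors."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]  # grid coordinates of the nodes
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # learning rate decays over time
        sigma = sigma0 * np.exp(-t / epochs)    # neighbourhood shrinks over time
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: the node whose weights are closest to x
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood pulls the winner and its neighbours towards x
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[:, :, None] * (x - weights)
    return weights

def map_position(weights, x):
    """Return the (row, col) of the best-matching unit for feature vector x."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)
```

After training, `map_position` places each excerpt's feature vector on the grid; excerpts that sound alike land in the same region, which is why related pieces cluster while a varied score like "Star Wars" can spread across the whole map.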