
Lyrics segmentation via bimodal text–audio representation

Published online by Cambridge University Press: 05 May 2021

Michael Fell* (Université Côte d’Azur, CNRS, Inria, I3S, France)
Yaroslav Nechaev (Amazon, Cambridge, MA, USA)
Gabriel Meseguer-Brocal (Ircam Lab, CNRS, Sorbonne Université, Paris, France)
Elena Cabrio (Université Côte d’Azur, CNRS, Inria, I3S, France)
Fabien Gandon (Université Côte d’Azur, CNRS, Inria, I3S, France)
Geoffroy Peeters (LTCI, Télécom Paris, Institut Polytechnique de Paris, France)

*Corresponding author. E-mail: [email protected]

Abstract

Song lyrics contain repeated patterns that have been shown to facilitate automated lyrics segmentation, with the final goal of detecting the building blocks (e.g., chorus, verse) of a song text. Our contribution in this article is twofold. First, we introduce a convolutional neural network (CNN)-based model that learns to segment the lyrics based on their repetitive text structure. We experiment with novel features that reveal different kinds of repetition in the lyrics, for instance based on phonetic and syntactic properties. Second, using a novel corpus in which the song text is synchronized to the audio of the song, we show that the text and audio modalities capture complementary aspects of the lyrics structure and that combining both is beneficial for lyrics segmentation performance. For purely text-based lyrics segmentation on a dataset of 103k lyrics, we achieve an F-score of 67.4%, improving on the state of the art (59.2% F-score). On the synchronized text–audio dataset of 4.8k songs, we show that the additional audio features improve segmentation performance to 75.3% F-score, significantly outperforming the purely text-based approaches.
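To make the repetition-based representation concrete, the following minimal sketch computes a line-level self-similarity matrix (SSM) over a short lyrics fragment. It uses Python's standard-library difflib string similarity as a stand-in for the string, phonetic, and syntactic similarity measures the abstract alludes to; the example lines and the choice of similarity function are illustrative assumptions, not the exact features used in the article.

```python
# Sketch: line-level self-similarity matrix (SSM) for lyrics.
# difflib's SequenceMatcher is an assumed stand-in for the paper's
# similarity measures; it scores pairwise line similarity in [0, 1].
from difflib import SequenceMatcher

def lyrics_ssm(lines):
    """Return an n x n matrix of pairwise similarities between lyric lines."""
    n = len(lines)
    ssm = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            sim = SequenceMatcher(None, lines[i].lower(), lines[j].lower()).ratio()
            ssm[i][j] = ssm[j][i] = sim
    return ssm

lyrics = [
    "we will we will rock you",
    "buddy you're a boy make a big noise",
    "playing in the street gonna be a big man some day",
    "we will we will rock you",  # repeated chorus line
]
for row in lyrics_ssm(lyrics):
    print(" ".join(f"{v:.2f}" for v in row))
```

Repeated lines such as a chorus appear as high-similarity off-diagonal entries in the SSM; it is this visual pattern that a convolutional model can learn to associate with segment borders.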

Type: Article
Copyright: © The Author(s), 2021. Published by Cambridge University Press

