International Symposium on Computational and Cognitive Musicology

Athens, 22–23 June 2023

📺 Live streaming

The Laboratory of Music, Cognitive Sciences and Community (MCCC) and the Laboratory of Music Acoustics and Technology (LABMAT) of the Department of Music Studies, National and Kapodistrian University of Athens (NKUA), are organising the International Symposium on Cognitive and Computational Musicology on 22 and 23 June 2023 in Athens. The symposium also serves as a working meeting of the Digital Musicology study group of the International Musicological Society.

The symposium sessions, held mainly in the "Ioannis Drakopoulos" Amphitheatre in the Main Building of NKUA at the Propylaea, will bring together leading international researchers in the field, from ten European universities, for talks, open discussions and workshops addressing questions at the centre of current research. The topics of the talks range from very general positions on musical harmony to specialised presentations on current musicological issues. Of particular interest are the talks that attempt to unify the fields of computational and cognitive musicology and to situate this research within a broader systematic framework.

Attendance is free of charge. A certificate of attendance will be issued at the end for those who wish.

ORGANISED BY:
  • National and Kapodistrian University of Athens
  • School of Philosophy
  • Department of Music Studies
  • Laboratory of Music, Cognitive Sciences and Community
  • Laboratory of Music Acoustics and Technology
VENUE

Main Building of the University of Athens, "Ioannis Drakopoulos" Amphitheatre, 30 Panepistimiou Street

Programme

Thursday, 22 June

All times are EEST.

9:00 – 9:30    Welcome (Main University Building)
9:30 – 11:00   Opening talks (Main University Building):
  • Frans Wiering (Utrecht University): Are we ready for a big data history of music?
  • Martin Rohrmeier (EPFL Lausanne): Bridging theory and computation: On structural models in computational musicology and music cognition
11:00 – 11:30  Coffee break (Main University Building)
11:30 – 13:00  Talks (Main University Building):
  • Olivier Lartillot (University of Oslo, RITMO): Towards a comprehensive model for computational music transcription and analysis: a necessary dialog between machine learning and rule-based design?
  • Emilios Cambouropoulos, Konstantinos Giannos, Kostas Tsougras (Aristotle University of Thessaloniki): Harmony, Chords, Roots, Types: the General Chord Type (GCT) representation
  • Anja Volk (Utrecht University): Musical patterns: towards a dialogue between computational musicology and music therapy
  • Mark Steedman (University of Edinburgh): Possible World Harmonics
13:30 – 15:30  Lunch (Student place: Solonos and Lykavitou 14)
16:00 – 18:00  Working groups, list of topics TBC (Dept of Music Studies, School of Philosophy)
21:00          Traditional dinner at SALEROS (modern Greek tavern)

Friday, 23 June

All times are EEST.

9:30 – 11:00   Talks (Main University Building):
  • Peter Nelson (University of Edinburgh): Habits and algorithms: reconsidering the narrative frames of music
  • David Lewis, Tim Crawford, Golnaz Badkobeh (University of Oxford): Corpus-building and corpus-based musicology for the Early Modern Period: Towards a complete Electronic Corpus of Lute Music… and beyond
  • Geraint Wiggins (Vrije Universiteit Brussel): Spectral knowledge representation and the Information Dynamics of Thinking: implications for music cognition
11:00 – 11:30  Coffee break (Main University Building)
11:30 – 13:00  Poster session, list of presentations TBC (Main University Building)
13:30 – 15:00  Lunch (Student place: Solonos and Lykavitou 14)
15:00 – 16:30  Discussion (Costis Palamas Building)
17:00          Visit to the Acropolis Museum
19:00          Concert at the Main University Building
21:00          Late dinner

Optional Day 3: 24 June

One-day trip to the sea and lunch

ORGANISING COMMITTEE
  • Christiana Adamopoulou
  • Christina Anagnostopoulou
  • Areti Andreopoulou
  • Maximos Kaliakatsos-Papakostas
  • Tasos Kolydas
  • Peter Nelson
  • Frans Wiering

Poster

Abstracts

Are we ready for a big data history of music?

Frans Wiering

Music Information Computing, Department of Information and Computing Sciences,

Utrecht University

The phrase “big data history of music” was coined by Stephen Rose and Sandra Tuppen in 2014. Their aim was to use the ever-growing musical datasets to study music history from a perspective that is less dominated by the Great Composers and that pays more attention to large-scale patterns in musical culture. They used catalogues such as RISM A/ii, which contain millions of metadata items, as their main resource. A logical follow-up step is to connect this approach to ideas such as Franco Moretti’s distant reading and Nicholas Cook’s distant listening, and to research large-scale historical patterns that may emerge from compositions. Even though there is a long tradition of corpus-based study of musical objects, turning this into the writing of history is not so easy. Hence the main question of this talk: “are we ready for a big data history of music?” Specifically, I will discuss five subquestions related to readiness:

1. community: who are ‘we’?

2. data: what data are available?

3. processing: what methods and tools are available?

4. study: what research can be done?

5. persuasion: how convincing are the outcomes?

While the outcome is that we aren’t ready yet, there may be some ground for cautious optimism. One obvious problem, despite the long history of music encoding, is the shortage of suitable data; another is the lack of persuasive results that can be integrated into the musicological discourse. Both issues seem difficult but not insolvable, and some ideas for addressing them will be proposed.

Frans Wiering is an associate professor in the Music Information Computing group of the Department of Information and Computing Sciences of Utrecht University. His research interests include Music Information Retrieval, Computational Musicology and Interactive Systems design. Currently, he researches the use and acceptance of digital technology in musicological research in ‘What Do Musicologists Do All Day’ (with Charlie Inskip). The CANTOSTREAM project (with Mirjam Visscher and Peter van Kranenburg) aims to study tonal structures in early music from a big data perspective. He co-chairs the International Musicological Society’s Study Group on Digital Musicology.

Towards a comprehensive model for computational music transcription and analysis: a necessary dialog between machine learning and rule-based design?

Olivier Lartillot

University of Oslo, RITMO

Computational music analysis, still in its infancy and lacking overarching, reliable tools, can at the same time be seen as a promising approach to fulfilling core epistemological needs. Analysis in the audio domain, although it approaches music in its entirety, is doomed to superficiality if it does not fully embrace the underlying symbolic system; this requires complete automated transcription, scaffolding metrical, modal/harmonic, voicing and formal structures on top of the layers of elementary events (such as notes). Automated transcription makes it possible to overcome the polarity between sound and music notation, providing an interfacing semiotic system that combines the advantages of both domains. For the low-level aspects of music transcription, related to the detection of pitch curves and individual note events, traditional signal processing methods have been supplanted by deep learning methods. Machine learning demands rich training sets: our attempt to automatically transcribe Norwegian Hardanger fiddle music required manually annotating a corpus of music with a high level of precision and detail, and we developed a new interface to facilitate that task. For the higher-level aspects, I advocate the necessity of a multi-dimensional music transcription and analysis framework (where the two tasks are in fact deeply intertwined), taking into account the far-reaching interdependencies between dimensions, for instance between motivic and metrical analyses, and the importance of core mechanisms such as sequential pattern mining. For those aspects, rule-based design seems more tractable. This is illustrated in particular with the task of detecting beats in Hardanger fiddle music.

Transcending strict hierarchical representations, which continue to play a core role in cognitive and computational modelling of music understanding (for instance through the notion of segmentation), computational modelling allows multi-level, non-hierarchical and polysemic grouping.

Olivier Lartillot is a researcher at the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo. He works in the transdisciplinary field of computational music analysis, spanning audio, music scores and intermediary representations, and articulating musicology, computer science, cognitive science and signal processing. He obtained funding from the Research Council of Norway under the FRIPRO-IKTPLUSS programme for a project called MIRAGE – A Comprehensive AI-Based System for Advanced Music Analysis (2020–2023, grant number 287152), and was previously an Academy of Finland Research Fellow.

Musical patterns: towards a dialogue between computational musicology and music therapy

Anja Volk

Utrecht University

I present research by the Music Information Computing group at Utrecht University on investigating musical patterns with computational means, both to enhance our understanding of music and to employ these patterns within different interaction contexts. Starting with computational music analysis dedicated to pattern discovery, I will then discuss the bridge to employing patterns for training in health and wellbeing, such as music-based collaborative games for visually impaired children, applied games for musical attention control training, and games for rhythmic training of children with autism. I will discuss how to bridge between musicological and cognitive insights and applications in music therapy.

Anja Volk is Professor of Music Information Computing at Utrecht University, and has a dual background in mathematics and musicology which she applies to cross-disciplinary approaches to music. Her work has helped bridge the gap between scientific and humanistic approaches while working in interdisciplinary research teams in Germany, the USA and the Netherlands. Anja has co-founded several international initiatives, most notably the International Society for Mathematics and Computation in Music, the flagship journal of the International Society for Music Information Retrieval (TISMIR), and the Women in MIR mentoring program, which organizes yearly mentoring rounds in order to foster greater diversity in MIR.

Possible World Harmonics

Mark Steedman

University of Edinburgh

Musicians in the tonal tradition, which covers not only Western tonal music but also various modal traditions including the Eastern, have always thought of harmony in spatial terms: they talk of “close” harmonic intervals, such as the perfect fifth and the diatonic semitone, and “distant” ones, like the tritone and the imperfect fourth. The space that they have in mind can be thought of as a three-dimensional discrete lattice of frequency ratios whose primary generators are the intervals of the octave, the perfect fifth, and the major third, or more generally the prime factors 2, 3, and 5, an insight that can be traced back to Euler (1739: 147) and Riemann (1893, 1895). Harmonic distance can then be defined via various metrics over such a lattice, and the nodes of the lattice form a model or “possible world”. If the harmonic space can be defined as such a lattice, then other musics can be imagined in which the parallel relation is based on other prime factors, and even on other numbers of dimensions. And indeed, such harmonic systems have arisen naturally, and have been proposed mathematically. So the question is: what, if anything, is special about the 2-3-5 space, and why is it so widespread?

The present paper shows that the space based on prime factors 2, 3, and 5 is unique among systems based on small integer ratios in ensuring that distinct harmonic relations which are confusible, in the sense of being close in frequency, such as the perfect and imperfect fourths, are widely separated in the harmonic space. They can therefore be interpreted correctly in context, in terms of proximity to the interpretation of other notes, even when not exactly pitched due to errors in performance or perception, or when performed using intentionally tempered tunings that drastically compress the unbounded space of distinct frequency ratios of just intonation onto the twelve semitone degrees of the modern keyboard, each related to its predecessor by the fixed ratio $\sqrt[12]{2}$.
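To make the lattice construction concrete, here is a minimal sketch, not taken from the paper. It assumes that an interval is encoded by its exponent triple (a, b, c) in the ratio 2^a · 3^b · 5^c, that harmonic distance is taxicab distance on the fifths/thirds plane under octave equivalence, and that 27/20 can stand in for the “imperfect fourth”; all three are illustrative assumptions, and Steedman’s actual metrics and interval inventory may differ.

```python
# Illustrative sketch of a 2-3-5 harmonic lattice (assumptions noted above).
import math

def cents(a, b, c):
    """Size in cents of the just interval with frequency ratio 2^a * 3^b * 5^c."""
    return 1200 * math.log2(2**a * 3**b * 5**c)

def lattice_distance(u, v):
    """Taxicab distance on the (fifths, thirds) plane, ignoring the octave axis."""
    return abs(u[1] - v[1]) + abs(u[2] - v[2])

perfect_fourth = (2, -1, 0)      # ratio 4/3, ~498 cents
imperfect_fourth = (-2, 3, -1)   # ratio 27/20, ~520 cents (assumed example)

print(round(cents(*perfect_fourth), 2))     # 498.04
print(round(cents(*imperfect_fourth), 2))   # 519.55 -> only ~21.5 cents away
print(lattice_distance(perfect_fourth, imperfect_fourth))  # 5 -> far apart on the lattice

# Equal temperament collapses all such ratios onto twelve fixed degrees,
# each related to its predecessor by 2**(1/12).
print(round(2 ** (1 / 12), 6))              # 1.059463
```

On this toy metric the two fourths differ by only about 21 cents in frequency yet sit five steps apart on the lattice, which is the kind of separation the abstract argues makes correct contextual interpretation possible.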

Mark Steedman is Professor of Cognitive Science in the School of Informatics at the University of Edinburgh, working in computational linguistics, artificial intelligence, and cognitive science, on the generation of meaningful intonation for speech by artificial agents, the communicative use of gesture, tense and aspect, and wide-coverage parsing and robust semantics for Combinatory Categorial Grammar (CCG). He is also interested in computational musical analysis and combinatory logic. Previously, he taught as Professor in the Department of Computer and Information Science at the University of Pennsylvania, which he joined as Associate Professor in 1988, after teaching at the Universities of Warwick and Edinburgh. His PhD is in Artificial Intelligence from the University of Edinburgh.

Habits and algorithms: reconsidering the narrative frames of music

Peter Nelson

University of Edinburgh

As Anna Tsing remarks, ‘I find myself without the handrail of stories that tell where everyone is going, and, also, why.’ (Tsing 2015, 2). In thinking about music as a set of narrative frames, and in the context of recent advances in AI that allow the analysis and generation of musical corpora, the inadequacy of traditional representations of music becomes apparent. Taking the Pragmatist view, in which living selves present as bundles of habit (James), and acknowledging both the contributions of habit to 4E cognition and recent anthropological accounts of situated practice, this paper considers how a reassessment of the concepts and aims of music cognition might proceed.

Peter Nelson is Emeritus Professor of Music and Technology at the University of Edinburgh, where he initiated and directed the Electronic and Computer Music Studios, co-ordinated the Music Informatics Research Group, and was a founder of the Institute for Music in Human and Social Development. He has written on topics in music and music informatics, including the UPIC system of Iannis Xenakis, and social theories of rhythm. His compositional output includes chamber, choral, orchestral, and electronic music, and he currently works with In the Making dance collective. He was Editor of the journal Contemporary Music Review 1994-2022.

Corpus-building and corpus-based musicology for the Early Modern Period: Towards a complete Electronic Corpus of Lute Music… and beyond

David Lewis, Tim Crawford, Golnaz Badkobeh

University of Oxford

Sustainable musicology must draw on as wide a set of contributors, and hear as many voices, as possible. As the online Electronic Corpus of Lute Music approaches its 20th year, a change of approach – embracing enthusiast and scholarly collections alike – is increasing the size of the encoded corpus tenfold and could allow us to provide metadata on almost all of the over 60,000 items in the known lute repertory. This approach brings challenges and limitations, as well as opportunities for scholarship beyond what has previously been possible. The new sub-corpora have diverse editorial strategies and metadata quality, sometimes lacking basic information such as instrumental tuning. On the other hand, a combination of resources covering even 15–20% of the known repertory, together with metadata for evaluating biases in that sample, could prove invaluable for corpus studies, and could also help discover hitherto unrecognised connections and quotations between works. As many vocal pieces of the period are now available online in digital facsimile, the lute corpus also presents a tantalising key for exploring the wider repertory of the period. Through Optical Music Recognition, we are gathering an expanding corpus of more than 500,000 pages transcribed from early modern sources. Again, the nature of the material and how it has been gathered places limitations on the uses that can be made of it. Nonetheless, appropriate pattern discovery methods can support search and certain kinds of analysis. Large-scale, cross-corpus analysis between vocal and instrumental works presents a particularly exciting opportunity, but requires adaptations to existing approaches: lute tablature makes no distinction between the voices of a composition, making many conventional melodic features unavailable without further processing. Building and using these corpora requires new approaches to computational musicology – not just algorithmic, but also social and organisational – to ensure a strong future for corpus-based research.

David Lewis is a researcher at the University of Oxford e-Research Centre and Lecturer in Computer Science at Goldsmiths, University of London. He studied historical musicology at King’s College London and has since worked on a wide range of digital musicology and digital humanities projects, including lute tablatures, medieval and early modern music treatises, exploring musical memory for pop tunes, and, currently, arrangements of Beethoven’s works for domestic settings, and the sustainability of DH software infrastructure.

Spectral knowledge representation and the Information Dynamics of Thinking: implications for music cognition

Steven T. Homer, Nicholas Harley and Geraint A. Wiggins

Vrije Universiteit Brussel

We present progress in the development of a cognitive architecture, Information Dynamics of Thinking, mostly inspired by music cognition, that aims to unify recent advances in cognitive modelling using the mathematics of quantum theory. In various strands of work, this mathematical approach has modelled human behaviour more closely than previous logic-based approaches (in abstract reasoning and in language, for example), leading researchers to conclude that cognition is a quantum phenomenon. However, as noted by some of the successful experimenters, the fact that their framework happens to describe quantum physics may be a coincidence: the same mathematics can equally well describe the behaviour of other physical phenomena. Our overall hypothesis is that the same effects can be modelled in the brain, viewed as a collection of interconnected oscillatory networks at non-quantum scale. This said, it is important to understand that ours is not a typical neural approach: our modelling is at a level of functional abstraction higher than the wetware itself.

In this talk, we sketch the outline of our architecture to give context. The main content of the talk, however, is our first empirical evidence for a cognitive model of musical harmony, related to those of Large and Milne. We explain our general model, which we call Resonance Space, showing how it accounts for the specific empirical results of Krumhansl and Kessler at least as well as other models in the literature. An important property of working in the spectral domain, as we do, is that composition of functions in the time domain is reduced to multiplication. This means that composition of functions and application of functions to objects may be addressed grammatically without recourse to recursion, and with the capacity to create semantic representations for any subsequence of syntactic structures of the correct grammatical types. Thus the need for state memory in parsing is removed, and long-term dependencies are either encoded in ambiguous representations or become a property of memory and expectation (depending on one’s perspective). Such sequential ambiguity may also be explicitly captured via a probability density manifold over the Resonance Space, and we are currently implementing this to capture, first, structural segmentation by boundary entropy, and then harmonic movement.
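The reduction of time-domain composition to spectral multiplication is stated abstractly above. One familiar special case is the convolution theorem, and the sketch below verifies it numerically with NumPy; treating convolution as the relevant sense of “composition” is my assumption here, and the authors’ Resonance Space formulation may define it differently.

```python
# Numerical check of the convolution theorem: circular convolution in the
# time domain equals pointwise multiplication of the signals' spectra.
import numpy as np

N = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
y = rng.standard_normal(N)

# Circular convolution computed directly in the time domain.
direct = np.array([sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)])

# The same result obtained by multiplying spectra and transforming back.
spectral = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

print(np.allclose(direct, spectral))  # True
```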

Geraint A. Wiggins studied Mathematics and Computer Science at Corpus Christi College, Cambridge, and holds PhDs from the University of Edinburgh’s Artificial Intelligence and Music departments. His main research area is computational creativity, which he views as an intersection of artificial intelligence and cognitive science. He is interested in understanding how humans can be creative by building computational models of mental behaviour and comparing them with the behaviour of humans. He has worked at the University of Edinburgh and three colleges of the University of London: City (where he served as Head of Computing, and Senior Academic Advisor on quality), Goldsmiths, and Queen Mary (where he served as Head of the School of Electronic Engineering and Computer Science). In 2018, he moved his Computational Creativity Lab to the Vrije Universiteit Brussel, in Belgium. He is a former chair of SSAISB, the UK learned society for AI and Cognitive Science, and of the international Association for Computational Creativity, whose General Assembly he currently chairs. He is associate editor (English) of Musicae Scientiae (the journal of the European Society for the Cognitive Sciences of Music), a consulting editor of Music Perception (the journal of the Society for Music Perception), and an editorial board member of the Journal of New Music Research and the Journal of Creative Music Systems.

Poster presentations

Digital Tools for Rediscovering Late-19th Century African-American Music

Nico Schüler (Texas State University)

Auditory but not audiovisual cues lead to higher neural sensitivity to the statistical regularities of an unfamiliar musical style

Joanna Zioga (Donders Centre for Cognitive Neuroimaging, Nijmegen)

Anna Maria Christodoulou (University of Oslo, RITMO)

George Konstandelakis (NKUA)

George Velissaridis (NKUA and Athens University of Economics and Business)