Figure 3 - uploaded by Vincenzo Lombardo
A chord in input and the domains related to each note of the chord. Each triple in the note domains indicates <string,fret,finger>.

Source publication
Article
Full-text available
Abstract
Fingering is a cognitive process that maps each note on a music score to a fingered position on some instrument. This paper presents a computational model for the fingering process with string instruments, based on a constraint satisfaction approach. The model is implemented in a computer program, which has been tested in an experiment...

Contexts in source publication

Context 1
... graph in Figure 3 represents a chord fingering problem. For example, the note F2, corresponding to the variable x in the graph, could be played on <6,1,1>, i.e. on the 6th string, 1st fret, by the index finger; on <6,1,2>, by the middle finger; and so forth. ...
Context 2
... us consider the chord presented in Figure 3. The order of variables is {x, y, z}. ...
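To make the representation concrete, here is a minimal Python sketch of how a chord fingering problem like the one in Figure 3 could be encoded as a constraint satisfaction problem. The F2 domain mirrors the excerpt above; the domains for y and z, the toy `consistent` check, and the exhaustive enumeration are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import product

# Each note of the chord is a CSP variable whose domain is a set of
# (string, fret, finger) triples at which that note can be fingered.
# The F2 triples mirror the excerpt above; the domains for y and z are
# placeholders, not values taken from the figure.
domains = {
    "x": [(6, 1, 1), (6, 1, 2), (6, 1, 3), (6, 1, 4)],  # F2 on the 6th string, 1st fret
    "y": [(5, 3, 2), (5, 3, 3), (5, 3, 4)],             # hypothetical second note
    "z": [(4, 3, 3), (4, 3, 4)],                        # hypothetical third note
}
order = ["x", "y", "z"]  # the variable ordering used by the search

def consistent(triples):
    """Toy binary constraints: one string and one finger per note."""
    strings = [s for (s, _, _) in triples]
    fingers = [f for (_, _, f) in triples]
    return len(set(strings)) == len(strings) and len(set(fingers)) == len(fingers)

# Naive enumeration of consistent chord fingerings; a backtracking search
# over `order` would be used for larger problems.
solutions = [dict(zip(order, combo))
             for combo in product(*(domains[v] for v in order))
             if consistent(combo)]
print(len(solutions), "consistent fingerings, e.g.", solutions[0])
```

In the actual model, the constraints would encode bio-mechanical playability of the hand rather than the toy checks used here, and search would proceed over the stated variable ordering {x, y, z}.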

Citations

... Cabral et al. model [1] also includes frequency in the repertoire. In previous papers we have addressed two separate cases of translating the notes in the score into the actual executor's gestures, namely the case of melodies of individual notes [6], and the case of isolated chords [7]. In this paper we improve the fingering model in accounting for both cases, so that it assigns fingerings to melodies that also include chords and polyphonic passages (e.g., when a note is held, and new notes are to be played). ...
... Figure 4 (case index-middle) shows that we have a comfortable span between close fingers working on close frets, while the comfortable span between index and ring is reached when the ring finger presses a position two frets higher than the index, the delta fret thus being 2. Instead, if the comfortable span is exceeded, a larger effort is required to perform the transition between p and q: this is the case of a delta fret wider than 1 for index-middle, and wider than 2 for index-ring; in both cases the fret stretch measure is 2. The two diagrams also show the higher difficulty for the negative directions. Finally, the two diagrams are simplified in the sense that the fret stretch also takes into account the direction over the strings: when the two fingered positions are on the same fret, it expresses the preference for higher fingers to press lower strings, which is the relaxed formulation of a binary constraint enforced during the chord fingering [7]. This results in different measures depending on the strings involved. ...
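The comfort thresholds quoted in this excerpt can be phrased as a small scoring function. The sketch below is an assumption-laden illustration of such a fret stretch measure, not the authors' actual difficulty function: the comfortable spans (1 fret for index-middle, 2 for index-ring) and the penalty value of 2 are taken from the excerpt, while the extra cost for negative directions and the same-fret string-direction term are only rough approximations.

```python
# Illustrative fret-stretch measure between two fingered positions
# p = (string, fret, finger) and q = (string, fret, finger).
# Comfortable spans follow the excerpt: 1 fret for index-middle, 2 for index-ring;
# exceeding the span costs 2; the +1 for negative directions and the same-fret
# handling are rough assumptions.
COMFORT_SPAN = {(1, 2): 1, (1, 3): 2}  # (finger pair) -> comfortable delta fret

def fret_stretch(p, q):
    (string_p, fret_p, finger_p) = p
    (string_q, fret_q, finger_q) = q
    pair = tuple(sorted((finger_p, finger_q)))
    comfort = COMFORT_SPAN.get(pair, 1)  # default for pairs not in the excerpt
    delta_fret = fret_q - fret_p

    if delta_fret == 0:
        # Same fret: prefer the higher finger on the lower-pitched string
        # (assumed here to be the higher-numbered string).
        preferred = (finger_q > finger_p) == (string_q > string_p)
        return 0 if preferred else 1

    stretch = 0 if abs(delta_fret) <= comfort else 2
    if delta_fret < 0:
        stretch += 1  # negative direction is assumed to add effort
    return stretch

# Index on the 6th string, 1st fret, then ring on the 4th string, 3rd fret:
# delta fret 2 is within the index-ring comfortable span, so the measure is 0.
print(fret_stretch((6, 1, 1), (4, 3, 3)))
```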
Article
Full-text available
This paper presents a computational model of fingering for string instruments, based on a graph search approach. The implemented fingering model, which accounts for the bio-mechanical constraints of the performer's hand, is interfaced with a physical model of the classical guitar, which exploits the fingering to compute some sound synthesis parameters. The output of the system is validated against the performance of a human expert.
Article
Music transcription involves the transformation of an audio recording to common music notation, colloquially referred to as sheet music. Manually transcribing audio recordings is a difficult and time-consuming process, even for experienced musicians. In response, several algorithms have been proposed to automatically analyze and transcribe the notes sounding in an audio recording; however, these algorithms are often general-purpose, attempting to process any number of instruments producing any number of notes sounding simultaneously. This paper presents a polyphonic transcription algorithm that is constrained to processing the audio output of a single instrument, specifically an acoustic guitar. The transcription system consists of a novel note pitch estimation algorithm that uses a deep belief network and multi-label learning techniques to generate multiple pitch estimates for each analysis frame of the input audio signal. Using a compiled dataset of synthesized guitar recordings for evaluation, the algorithm described in this work results in an 11% increase in the f-measure of note transcriptions relative to Zhou et al.’s (2009) transcription algorithm in the literature. This paper demonstrates the effectiveness of deep, multi-label learning for the task of polyphonic transcription.
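As a rough illustration of the multi-label framing described in this abstract (and not of the paper's deep belief network architecture), the sketch below thresholds per-pitch probabilities for each analysis frame so that several pitches can be active at once. The 51-pitch range, the lowest MIDI note, the threshold value, and the random stand-in for the classifier output are all assumptions made for the example.

```python
import numpy as np

N_PITCHES = 51   # assumed playable pitch range of a standard-tuned guitar
MIDI_LOW = 40    # assumed lowest MIDI note (E2, the open 6th string)
THRESHOLD = 0.5  # assumed per-pitch decision threshold

def frame_to_pitches(frame_probabilities, threshold=THRESHOLD):
    """Multi-label decision: every pitch whose estimated probability clears
    the threshold is reported, so a chord yields several pitches per frame."""
    active = np.flatnonzero(frame_probabilities >= threshold)
    return [MIDI_LOW + int(i) for i in active]

# Stand-in for the per-frame output of a trained pitch estimator; random
# numbers are used here only to make the example runnable end to end.
rng = np.random.default_rng(0)
frames = rng.random((4, N_PITCHES))

for t, frame in enumerate(frames):
    print(f"frame {t}: MIDI pitches {frame_to_pitches(frame)}")
```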