Figure 3 - uploaded by Karl Kristoffer Jensen
Histogram of chunk shapes, according to perception-based categorization for three songs.

Source publication
Conference Paper
The ‘chunk’ phenomenon has attracted some attention lately. In the auditory domain a chunk is seen as a segment of sound. This work first investigates theories of chunks from cognitive science and musicology. We define the chunk as a closed entity of a certain size and with a dynamic structure containing a peak. The perceptual analysis of three son...

Contexts in source publication

Context 1
... The histograms for the curve shapes of the chunks, according to the classification in figure 2, are shown in figure 3. For all three songs, a falling linear slope (f) is most common, followed by a rising linear slope (d) and a falling peaked slope (c) for Allaturca, by a stable peak (b) and a rising linear slope (d) for Hold On, and by a rising peaked slope (a) and a rising linear slope (d) for First Song. ...
Context 2
... feature seems to have the same distribution for the slopes. These values correspond well with the histograms of the perceptually obtained chunk shapes in figure 3, in which negative slopes were more common. The peakedness values are shown in figure 4 (bottom). ...
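The shape vocabulary used in these excerpts (slope sign plus peakedness) is simple enough to sketch as a classifier. The sketch below is not the paper's method: the least-squares fits, the thresholds, and the 'e' (stable linear) label are assumptions for illustration only.

```python
import numpy as np

def classify_chunk_shape(envelope, slope_eps=0.01, peak_eps=0.05):
    """Classify a chunk envelope by slope sign and peakedness.

    Labels follow the categories the excerpts name from figure 2:
    'a' rising peaked, 'b' stable peak, 'c' falling peaked,
    'd' rising linear, 'f' falling linear ('e', stable linear, is assumed).
    Thresholds and the fitting method are illustrative assumptions.
    """
    y = np.asarray(envelope, dtype=float)
    x = np.linspace(0.0, 1.0, len(y))
    y = (y - y.min()) / (np.ptp(y) + 1e-12)   # normalize to [0, 1]
    slope = np.polyfit(x, y, 1)[0]            # overall linear trend
    quad = np.polyfit(x, y, 2)[0]             # curvature; negative => arched peak
    peaked = -quad > peak_eps
    if abs(slope) <= slope_eps:
        return 'b' if peaked else 'e'
    if slope > 0:
        return 'a' if peaked else 'd'
    return 'c' if peaked else 'f'
```

For example, a linearly rising envelope classifies as 'd', a falling one as 'f', and a symmetric arch as 'b'.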

Similar publications

Conference Paper
Music is a language of emotions, and hence music emotion could be useful in music understanding, recommendation, retrieval and some other music-related applications. Many issues for music emotion recognition have been addressed by different disciplines such as physiology, psychology, cognitive science and musicology. In this paper, we focus on the...

Citations

... The subchunk is related to individual notes, the chunk to meter and gesture, and the superchunk to form. The superchunk was analyzed and used in a generative model in Kühl and Jensen (2008), and the chunks were analyzed in Jensen and Kühl (2009). Further analysis of how temporal perception relates to the durations and timing of existing music, together with anatomic and perceptual findings from the literature, is given in section 2, along with an overview of previous work on rhythm. ...
... In this work, the gestures in the pitch contour are investigated. Jensen and Kühl (2009) investigated the gestures of music through a simple model, with positive or negative slope, and with positive or negative arches, as shown in figure 1. For the songs analyzed, Jensen and Kühl found more negative than positive slopes and slightly more positive than negative arches. ...
Conference Paper
Generative models of music are in need of performance and gesture additions, i.e. inclusions of subtle temporal and dynamic alterations, and gestures so as to render the music musical. While much of the research regarding music generation is based on music theory, the work presented here is based on the temporal perception, which is divided into three parts, the immediate (subchunk), the short-term memory (chunk), and the superchunk. By review of the relevant temporal perception literature, the necessary performance elements to add in the metrical generative model, related to the chunk memory, are obtained. In particular, the pitch gestures are modeled as rising, falling, or as arches with positive or negative peaks. Keywords: gesture, human cognition, perception, chunking, music generation
... These time-scales are named (Kühl and Jensen 2008) subchunk, chunk and superchunk: subchunks extend from 30 ms to 300 ms; the conscious mesolevel of chunks from 300 ms to 3 sec; and the reflective macrolevel of superchunks from 3 sec to roughly 30–40 sec. The superchunk was analyzed and used in a generative model in Kühl and Jensen (2008), and the chunks were analyzed in Jensen and Kühl (2009). How temporal perception relates to the analysis of durations and timing of existing music, together with anatomic and perceptual findings from the literature, is given in the next section. Section 3 presents current work on the analysis of differences between accented and unaccented notes in 7/8 music, and section 4 discusses the integration of the metrical rhythm in the superchunk generative music model. ...
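The three time-scales quoted above have explicit boundaries, so mapping a duration to a scale is a one-liner. The boundary values below come from the excerpt; treating the bounds as half-open and the out-of-range label are assumptions.

```python
def perceptual_timescale(duration_s):
    """Map a duration in seconds onto the three time-scales named in
    Kühl and Jensen (2008): subchunk 30-300 ms, chunk 300 ms - 3 s,
    superchunk 3 s to roughly 30-40 s. Half-open bounds and the
    fallback label are assumptions, not from the paper."""
    if 0.03 <= duration_s < 0.3:
        return 'subchunk'
    if 0.3 <= duration_s < 3.0:
        return 'chunk'
    if 3.0 <= duration_s <= 40.0:
        return 'superchunk'
    return 'outside the named ranges'
```

So a 100 ms note falls in the subchunk range, a one-second gesture in the chunk range, and a ten-second phrase in the superchunk range.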
Article
Generative models of music rhythm are in severe need of performance additions, i.e. inclusions of subtle temporal and dynamic alterations so as to render the music musical. While much of the research is based on music theory, the work presented here is based on the temporal perception, which is divided into three parts, the immediate (subchunk), the short-term memory (chunk), and the superchunk. By review of the relevant temporal perception literature, and analysis of performances of metrical music, the necessary performance elements to add in the metrical generative model, related to the chunk memory, are obtained.
Article
We present AutoLeadGuitar, a system for automatically generating guitar solo tablatures from an input chord and key sequence. Our system generates solos in distinct musical phrases, and is trained using existing digital tablatures sourced from the web. When generating solos AutoLeadGuitar assigns phrase boundaries, rhythms and fretboard positions within a probabilistic framework, guided towards chord tones by two user-specified parameters (chord tone preference during and at the end of phrases). Furthermore, guitar-specific ornaments such as hammer-ons, pull-offs, slides and string bends are built directly into our model. Listening tests with our model output confirm that the inclusion of chord tone preferences, phrasing, and guitar ornaments corresponds to an increase in user satisfaction.
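The two user parameters the abstract mentions (chord-tone preference during and at the end of phrases) can be illustrated with a toy sampler. This is not AutoLeadGuitar's actual probabilistic model; the function name, parameter names, and defaults are assumptions.

```python
import random

def sample_phrase(scale, chord_tones, length=8, p_during=0.5, p_end=0.9, seed=None):
    """Toy chord-tone steering in the spirit of AutoLeadGuitar's two
    user-specified preferences: at each step, pick a chord tone with
    probability p (p_during mid-phrase, p_end on the final note),
    otherwise a non-chord scale tone."""
    rng = random.Random(seed)
    non_chord = [p for p in scale if p not in chord_tones]
    phrase = []
    for i in range(length):
        p = p_end if i == length - 1 else p_during
        pool = chord_tones if (not non_chord or rng.random() < p) else non_chord
        phrase.append(rng.choice(pool))
    return phrase
```

With `p_end=1.0` the phrase is guaranteed to resolve onto a chord tone, which mirrors the "chord tone preference at the end of phrases" knob described above.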
Article
We present AutoGuitarTab, a system for generating realistic guitar tablature given an input symbolic chord and key sequence. Our system consists of two modules: AutoRhythmGuitar and AutoLeadGuitar. The first of these generates rhythm guitar tablatures which outline the input chord sequence in a particular style (using Markov chains to ensure playability) and performs a structural analysis to produce a structurally consistent composition. AutoLeadGuitar generates lead guitar parts in distinct musical phrases, guiding the pitch classes towards chord tones and steering the evolution of the rhythmic and melodic intensity according to user preference. Experimentally, we uncover musician-specific trends in guitar playing style, and demonstrate our system’s ability to produce playable, realistic and style-specific tablature using a combination of algorithmic, user-surveyed and expert evaluation techniques.
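The playability constraint that AutoRhythmGuitar enforces through its Markov chains can be illustrated with a minimal stand-in: restrict transitions between fretboard positions to those within hand reach. The representation, the reach rule, and the `max_stretch` value are assumptions, not the paper's model.

```python
def playable_transitions(positions, max_stretch=4):
    """Toy playability filter for a Markov-style model: from each
    (string, fret) position, allow only transitions whose fret distance
    is within max_stretch frets. A stand-in for the playability
    constraint described in the abstract, not the actual system."""
    return {a: [b for b in positions
                if b != a and abs(b[1] - a[1]) <= max_stretch]
            for a in positions}
```

A transition table like this would then supply the allowed successor states over which transition probabilities are defined.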