Monday, February 23, 2009

Lecture 6: Music and Digital Culture, Andy P

i forgot to take a picture... alas. so instead i offer a video of prof hugill from 2003




the lecture ran at a nice relaxed pace: not too slow, and not so complex as to discombobulate, even though i am no musician or even a critic myself. so where to begin? the beginning? seems too easy. while the fundamentals are important, the lecture's contents didn't strike me in chronological order of importance (personal taste only)... so we'll start with the issues that mattered to me the most. note that for each section i suspend belief in the other sections written, since, as you will see, i can talk about something that doesn't exist;

music is organized sound?:
"music is organized sound" was the definition prof hugill preferred, a phrase coined by anton webern (known to us for his use of the twelve-tone technique, also explained in the lecture). the problem i find with this definition is that if music is simply organized sound, then the very act of writing anything at all down makes it music notation, through the sheer intent of communicating something "as it was intended". this of course means that anything ever written is music (presumably, if spoken/played out loud).
a point that goes even further than this is the work of john cage, whose sense of music transcended 'organization' and was composed by chance happenings, in a very post modern statement opposing organized structures of sound (though it was arguably organized the moment he wrote it down). so what we have in the end is that music is both organized and unorganized sound. but then if everything is music, surely nothing is music, so why study music at all?
upon first thinking about it, i prefer music defined as organized emotive sound. to organize is not enough; to have feelings about it (any feeling at all) is what places value on music for me. it was argued however that to say it is emotive is to say it is subjective (as emotions are), and therefore to say music is subjective is to say music is undefinable, which in itself is not a definition. to this i have a few thoughts:

1, if music is art, then to this effect i have not found any satisfactory answer to 'what is art?' or 'what is music?' for that matter. therefore both are, to me, undefined at this moment, other than to say each is an emotive expression.

2, emotion is definable. subjective, yes, but definable. emotions simply vary in intensity: mild amusement, being overjoyed etc are states of happiness of varying intensity. if emotion were undefinable, there would be no difference between happiness and depression, and all emotion could be viewed as chemical imbalances in the brain, yet to artists (though i'm not one myself) i'm sure emotion is more than this.

3, the examples given for 'music is organized sound' are two extremes: music as organized sound (12-tone) and music as unorganized sound (cage) (which i assume is music, since it was brought up and we were told as much).

4, break down everything in the universe into atoms, break all the atoms down to the subatomic and anywhere in between, and i challenge anyone to show me one gram of music. sound may exist as compressions of particles, but nowhere does music exist except in the realm of emotion as we perceive it. so i don't think it's unfair to say that music is emotive.

so to restate: if anything is music, then nothing is music (and therefore everything is sound/noise), so why study music at all, and if so, what is there to study?



total cerealism - the crunchy nut flakes of music:
'total serialism' was the next logical step after the twelve-tone technique. the twelve-tone technique used the 12 notes of the chromatic scale, each played only once, to avoid putting any importance on any one note. however, it was noted that one does not just play notes: pitch, duration, timbre, volume and articulation also play their parts in the sound of each note played (i suspect there are more things that could alter how a note is played, but these 5 were the ones given in the lecture). total serialism, then, proposed that each of these be treated in a manner similar to the twelve-tone technique (so it's really the twelve-tone technique to the 6th power, ergo a 6-dimensional matrix of note variations played one after another).
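the twelve-tone idea above can be sketched concretely. a sketch only: the row below is an arbitrary example of mine, not one from the lecture, and the transformations (retrograde, inversion, transposition) are the standard ones associated with the technique:

```python
# a tone row is a permutation of the 12 pitch classes (0 = C, 1 = C#, ...),
# each used exactly once so that no note is given more importance than another.
# this particular row is an arbitrary illustration, not one from the lecture.
row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]

def retrograde(row):
    # the row played backwards
    return row[::-1]

def inversion(row):
    # flip each interval from the first note, mod 12
    first = row[0]
    return [(first - (p - first)) % 12 for p in row]

def transpose(row, n):
    # shift every pitch class up by n semitones, mod 12
    return [(p + n) % 12 for p in row]

# every transformation is still a valid row: all 12 pitch classes, once each
for variant in (retrograde(row), inversion(row), transpose(row, 5)):
    assert sorted(variant) == list(range(12))
```

the point of the assertions at the end is the defining property: however the row is transformed, no pitch class repeats before all twelve have sounded.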
since total serialism is a very logical, if not mathematical, process, why do we need humans to make this 'music' at all? a computer could easily generate every combination of the twelve notes with twelve (for the sake of argument) variations for each parameter:

eg (in pseudo code for ease of understanding):

x = 12^6                        // one entry per combination of the 6 parameters
create array 'music' of 1 to x  // each entry holds one 6-tuple
y = 1

for note (1 to 12) {
  for pitch (1 to 12) {
    for duration (1 to 12) {
      for timbre (1 to 12) {
        for volume (1 to 12) {
          for articulation (1 to 12) {
            music[y] = (note, pitch, duration, timbre, volume, articulation)
            y = y + 1
}}}}}}

within the output of that program would lie every possible 'total serialism' piece of music that can be created with those parameters. unfortunately, since it can be generated by a computer with no inspiration needed at all and (at this time, to me) sounds terrible, how can we say it is any different from what we would accept as patterned 'noise' from any other aesthetically unpleasing machine? this being said... is it really music?
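for what it's worth, the pseudo code really is trivial to realise. a minimal python sketch, assuming the lecture's 6 parameters with 12 values each (names and ranges are illustrative, not anything prescribed):

```python
from itertools import product

# six serialized parameters, each with 12 possible values (1..12),
# matching the pseudo code above
PARAMS = ('note', 'pitch', 'duration', 'timbre', 'volume', 'articulation')

def all_serial_events():
    # lazily yield every 6-tuple rather than storing all 12^6 in memory
    return product(range(1, 13), repeat=len(PARAMS))

# the very first 'event' the machine produces
first = next(iter(all_serial_events()))
assert first == (1, 1, 1, 1, 1, 1)

# 12 ** 6 combinations in total: every possible 'event' with these parameters
assert 12 ** 6 == 2985984
```

the generator version matters in practice: materialising all ~3 million 6-tuples in a list works, but there is no need to, since a 'composer' program would just consume them one at a time.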



the (anti)modernism of music:

john cage has already been discussed above (the man who wrote 'music' through chance). personally, i really like this guy, not for his music but for the fact that he appears to have a fantastic sense of humor while managing to present his so-called work in a sensible way. but then i guess that's post modernism for you. if total serialism is structured and ordered, then cage is chaotic and almost anarchistic. true musical modernism versus post modernism, but to the naked ear, resulting in exactly the same thing.


can you hear me?.....110001101:
microsound music ('glitch' and other like phenomena) is the culmination of inaudible noise, either so short or so quiet that psycho-acoustically we as humans cannot pick it up. i say psycho-acoustically, as the ear itself will in fact pick up the sound, and in turn our brain will dismiss it as unimportant. apparently it has been shown that the playback of these sounds can have an effect on people's moods despite being inaudible. this leads me to an interesting thought. we know that subliminal messaging is not allowed in advertising, but to define subliminal messaging we need an actual audio/visual message (be it words or images). microsound affects mood (i am led to believe), but has no message per se. could we therefore use microsound as an almost empathic form of subliminal messaging, to make consumers 'feel good' when viewing a product?... unfortunately, one could never regulate it even if it were possible, since it is essentially just noise.


how did you make/hear that?:

since noise is the topic at hand, other things of note include the additive and subtractive methods of sound synthesis.
think of 'sound' as a block of wood... the sound you want to make is a particular shape made of wood. subtractive sound synthesis would strip away the wood until the shape is revealed (like whittling), while additive sound synthesis would use tiny bits of wood (sawdust, perhaps) to build up that shape. now replace the word 'wood' with 'frequencies' (frequencies being sine waves of the correct wavelength and amplitude) and you have a decent explanation of the additive and subtractive methods of sound synthesis.



come out, come out, wherever you are:

steve reich, "come out". lots of pieces of work were discussed, and while it was all quite interesting, this is the one that made me want to seek it out to hear it. the work consists of a single line (taken from the harlem riots, if i remember correctly: "I had to, like, open the bruise up and let some of the bruise blood come out to show them"... specifically "come out to show them") looped and replayed in parallel with versions of the same phrase at varying speeds, each version of the phrase introduced at specific intervals. the net effect is a single voice that builds into 'out of sync' voices to (as prof hugill puts it) 'create a wall of sound'. while i love the idea (especially with the social context of the phrase), is it not remarkably similar to the additive methods of sound synthesis described above? each speed of voice can be seen as a different frequency that is added until we get the final 'sound'. this is perhaps not completely accurate, as each voice is (or becomes) out of phase with its predecessor. in this respect it is very much like OFDM (another topic entirely that could take hours (http://en.wikipedia.org/wiki/OFDM)). in any case, i can't bring myself to think of it as new information or even a new style, though i do love its simplicity.
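the phasing process behind this kind of piece is simple enough to sketch with arithmetic: two copies of the same loop, one played fractionally faster, drift apart and eventually realign. the loop length and speed ratio below are illustrative assumptions of mine, not measurements of 'come out':

```python
# two copies of the same looped phrase, one fractionally faster: the
# basic reich-style phasing process. numbers are illustrative only.
LOOP_LEN = 2.0        # length of the looped phrase, in seconds
SPEED_RATIO = 1.01    # the second copy runs 1% faster

def phase_offset(t):
    """how far ahead (seconds, within the loop) copy 2 is at time t."""
    return (t * SPEED_RATIO - t) % LOOP_LEN

def time_to_realign():
    """playback time until the two copies line up again."""
    # copy 2 gains (SPEED_RATIO - 1) seconds per second of playback,
    # and realigns once it has gained one full loop length
    return LOOP_LEN / (SPEED_RATIO - 1)

assert phase_offset(0) == 0.0
# after 100 seconds the faster copy is a full second (half a loop) ahead
assert abs(phase_offset(100) - 1.0) < 1e-9
# with these numbers the voices come back into sync after ~200 seconds
assert abs(time_to_realign() - 200.0) < 1e-6
```

this is also why the 'wall of sound' builds gradually: the offset sweeps continuously through every possible echo delay between zero and the full loop length before returning to unison.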

disclaimer:
all of this was based on a single 2-hour lecture and may therefore be incomplete in places. i am not an artist, musician, formal critic or superhero. and please remember: flame responsibly.

2 comments:

Andrew Hugill said...

Please note - it was Edgard Varèse, not Anton Webern, who coined the definition 'organised sound' in his text 'The Liberation of Sound'

Jess said...

Andy, thinking about yesterday's lecture on critical thinking and valid and invalid arguments I'm re-reading your post and see this comment:

"if music is simply organized sound, then the very act of writing anything at all down makes it music notation though the shear intent of communicating something "as it was intended". this of course means that anything ever written is music"


Perhaps you'd like to go over this statement in terms of logical argumentation?

How does "if music is organised sound" lead to your "therefore" statement concerning writing = music?

Can you elaborate on this?