Rlutz Survey of Music Technology



Project A

Assets Produced

  • Listen to this piece on soundcloud.com.
  • Download MP3 file
  • ProjectA zip file (without licensed "Inner Demons" recording). Built with Cockos Reaper.

Warning: I'm an engineer, developer, researcher, and teacher, not a musician!

Description

In this project, I assembled a number of audio clips, sounds, and MIDI tracks to create a piece titled "Sounds in the Ether and in Space". I thought it would be a challenge to assemble sounds and music that recall what has happened with sound and music technology over the past 100-150 years. Where appropriate, I performed noise reduction and excerpted smaller pieces of the original works. In cases where I would risk an intellectual property violation, I recorded short passages in my own voice. Clearly the original artist or studio can't own a short phrase like "Number 9" or "I repeat myself when under stress", right? I've listed the assets used below, with comments about where they came from and what manipulation I applied to them.

The most interesting part for me was the short clip where I recite, "Number 9, Number 9, Number 9, Number 9". This recalls the flap about playing the Beatles record backwards to extract a hidden and eerie message. (I provide a link below if you'd like to read deeper. Beware!) At first I thought I'd repeat the handiwork of Beatles engineer Alan Parsons, who found that reversing John Lennon's recital of "Number 9" yields something that sounds a bit like "Turn me on, dead man." I took the opposite approach in my piece: I recorded "Turn me on, dead man" and turned it around (reversing the order of the samples in the waveform) to arrive at something that sounds a bit like "Number 9" in my own voice! Maybe not that creepy ....
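
If you'd like to try this yourself, reversing a recording amounts to reversing the order of its samples. Here is a minimal Python sketch using scipy; the filenames are placeholders, and this is just a sketch of the idea rather than my actual Reaper workflow:

from scipy.io import wavfile

# Hypothetical filenames for illustration.
rate, samples = wavfile.read('turn_me_on_dead_man.wav')

# Reversing the sample order plays the waveform backwards. For stereo
# audio, samples has shape (frames, channels), so flipping the first
# (time) axis is all that's needed.
wavfile.write('number_9_maybe.wav', rate, samples[::-1])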

All assets used were either in the public domain, used under a Creative Commons license, distributed with the course material (and subsequently used here), explicitly purchased by me for use in my project, or recorded by me. An accounting of the resources is cataloged below.

This project was created while taking Coursera's 'Survey of Music Technology', offered by Dr. Jason Freeman of Georgia Tech, in August-September of 2013. Well done, Dr. Freeman!

The SoundCloud group for this course is here.

Assets Used

The assets I included are:

Track Details

Track Number: 1
Track Name: RealSynth
Track Type: MIDI
Audio Effects: VST:ReaVerb, JS:Guitar/distortion
MIDI Effects: midi_arp
Effect Envelopes: volume, distortion_gain

Track Number: 2
Track Name: 024655273-inner-demons
Track Type: Audio
Audio Effects: None
MIDI Effects: None
Effect Envelopes: Fade In, Fade Out

Track Number: 3
Track Name: Hear-my-voice-DCFIR_01-filtered
Track Type: Audio
Audio Effects: VST:ReaVerb, VST:ReaComp
MIDI Effects: None
Effect Envelopes: Volume

Track Number: 4
Track Name: unnamed
Track Type: MIDI
Audio Effects: VST:ReaVerb
MIDI Effects: JS:MIDI/sequencer_baby, VSTi:ReaSamplOmatic5000
Effect Envelopes: Volume

Track Number: 5
Track Name: unnamed
Track Type: Audio
Audio Effects: VST:ReaVerb, VST:ReaComp
MIDI Effects: None
Effect Envelopes: Volume

Track Number: 6
Track Name: Y04 Horns
Track Type: Audio
Audio Effects: VST:ReaVerb
MIDI Effects: None
Effect Envelopes: Volume

Track Number: 7
Track Name: eight-bit Video Game Loop
Track Type: Audio
Audio Effects: None
MIDI Effects: None
Effect Envelopes: None

Project Requirements

In this project, use Reaper to compose a short piece of music with both audio and MIDI tracks. The submission requirements below are intended to place minimal restrictions on musical creativity.

Submission Requirements:

  • Your MP3 should be 60 - 120 seconds in duration.
  • Your project should include at least 6 tracks.
  • Of those tracks at least 2 should be audio tracks.
  • Of those tracks at least 2 should be MIDI tracks.
  • Your project should include at least 10 different audio clips (either from the EarSketch loop library, used with permission of the copyright holder, or used under a creative commons or public domain license). Each of these 10 clips should come from a different audio source file.
  • You must record at least one of those clips yourself with a microphone or direct audio input from an instrument. (You can even just use your laptop's built-in mic.)
  • Your project must include at least 1 MIDI clip that you record yourself using a MIDI controller. (You can even just use Reaper's virtual keyboard.)
  • Your project must include at least 1 MIDI track generated using a pattern sequencer.
  • Your project must include at least 3 different audio effects.
  • Your project must include at least 1 MIDI effect.
  • Your project must include at least 3 envelopes (e.g. volume, panning, effect parameter).
  • Your music must be rhythmically precise. Each track must remain aligned to the project's meter and tempo. Use quantization on MIDI tracks as needed and make sure that audio files are matched to the project tempo and aligned to the beat. (See the short quantization sketch after this list.)
  • Your music must be balanced. No track should dominate the others and no track should be lost in the mix. Set track volumes, use volume envelopes, and use compression as needed.
  • Your music should have a clear form, and everything should sound like it belongs together rhythmically, harmonically, and timbrally. Use your musical ear to guide you.
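
Quantization, mentioned above, is just snapping event times to the nearest grid point. As a rough sketch of the idea in Python (not Reaper's actual implementation, which also offers strength and swing settings), assuming note start times measured in beats:

def quantize(time_in_beats, grid=0.25):
    # Snap to the nearest multiple of grid (0.25 beat = a 16th note in 4/4).
    return round(time_in_beats / grid) * grid

# A hypothetical, slightly sloppy performance:
played = [0.02, 0.27, 0.49, 0.77, 1.03]
print([quantize(t) for t in played])  # [0.0, 0.25, 0.5, 0.75, 1.0]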

What to Submit:

Zip file containing:

  • Rendered MP3
  • Reaper Project Folder

When saving, check both the "create subdirectory for project" and "copy all media into project directory" options in the save dialog box. Also, in Reaper, make sure to go to File -> Clean Current Project Directory to remove unused audio files and keep the project folder size smaller.

Answer the questions about your project below. [on Coursera's page]

Project B

Assets Produced

  • Listen to this piece on soundcloud.com.
  • Download MP3 file
  • ProjectB zip file (Built with Cockos Reaper.)

Warning: I'm an engineer, developer, researcher, and teacher, not a musician!

Description

I created my effect by combining two UGen-based effects that were discussed in the Module 4 lecture. I started with a track extracted from a MIDI rendition of Also Sprach Zarathustra, applied a frequency modulation effect to it, and then fed the result into a grain delay effect. The combined effect is called lutzFmGrainDelay.
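
The full EarSketch source appears below. As a rough numpy sketch of the signal flow I believe the UGen graph implements (all constants here are illustrative, and this approximates rather than reproduces the UGen semantics):

import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.1 * np.random.randn(fs)            # stand-in for the INPUT track

# FM stage: SINE -> TIMES(depth) -> ADD(base), multiplied into the input.
mod_freq, depth, base = 440.0, 0.5, 1.0
y = x * (base + depth * np.sin(2 * np.pi * mod_freq * t))

# Grain delay stage: sample-and-hold noise picks a new random delay
# time at the grain rate, and the delayed copy is mixed back in.
grain_rate, max_delay_ms = 40, 100       # grain rate (Hz), delay range (ms)
hold = fs // grain_rate
out = np.zeros_like(y)
for start in range(0, len(y), hold):
    d = int(np.random.uniform(0, max_delay_ms) * fs / 1000)
    stop = min(start + hold, len(y))
    idx = np.maximum(np.arange(start, stop) - d, 0)
    out[start:stop] = y[start:stop] + 0.5 * y[idx]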

For my algorithmic component, I combined the stochastic example provided in lecture, a variation of the same stochastic generation logic, and several drum tracks generated by makeBeat() in a nested loop.

This project was created while taking Coursera's 'Survey of Music Technology', offered by Dr. Jason Freeman of Georgia Tech, in August-September of 2013. Well done, Dr. Freeman!

The SoundCloud group for this course is here.

Assets Used

The assets I included are:

  • 2001.mid, http://library.thinkquest.org
  • RICHARDDEVINE__DUBSTEP_140_BPM__DUBDRUM/*, EarSketch Library
  • YOUNGGURU__Y63_80_BPM_B_MINOR/*, EarSketch Library
  • HOUSE_BREAKBEAT_020, EarSketch Library
  • HIPHOP_TRAPHOP_BEAT_007, EarSketch Library
  • OS_COWBELL01, EarSketch Library
  • OS_COWBELL02, EarSketch Library
  • ELECTRO_DRUM_MAIN_BEAT_004, EarSketch Library
  • ELECTRO_DRUM_MAIN_BEAT_007, EarSketch Library
  • OS_OPENHAT02, EarSketch Library
  • OS_OPENHAT03, EarSketch Library
  • OS_CLAP01, EarSketch Library
  • OS_CLAP02, EarSketch Library
  • OS_SNARE01, EarSketch Library
  • OS_OPENHAT06, EarSketch Library

Project Details

Method occurrences (with line numbers)

init() - line 79
setTempo() - line 80
finish() - line 130
fitMedia() - line 91
makeBeat() - line 123

Python usages

list - line 101
variable - line 128
for loop - line 86
random function - line 123

Effect usages

initEffect() - line 15
createUGen() - line 18
connect() - line 34
setParam() - line 47
setParamMin() - line 49
setParamMax() - line 50
createControl() - line 52
finishEffect() - line 75

UGen usages

INPUT - line 25
OUTPUT - line 29
TIMES - line 27
SAMPLEHOLD - line 30
NOISE - line 26

setEffect() usages

setEffect() - line 126

Full source

#
#
#       script_name: projectB.py
#
#       author: R Lutz
#
#       description: for Project B of 'Survey in Music Technology' Coursera MOOC Sept 22 2013
#

from earsketch import *
from random import *
from math import *

#Initialize new effect:
fmGrainDelay = initEffect('lutzFmGrainDelay')

# create unit generators used by the FM stage
modulator = createUGen(fmGrainDelay, SINE)
depth = createUGen(fmGrainDelay, TIMES)
base = createUGen(fmGrainDelay, ADD)
timesfm = createUGen(fmGrainDelay, TIMES)

# we will use the INPUT track as the carrier
# create unit generators used by grainDelay:
track = createUGen(fmGrainDelay, INPUT)
noise = createUGen(fmGrainDelay, NOISE)
times = createUGen(fmGrainDelay, TIMES)
delay = createUGen(fmGrainDelay, ECHO)
output = createUGen(fmGrainDelay, OUTPUT)
samp = createUGen(fmGrainDelay, SAMPLEHOLD)
add = createUGen(fmGrainDelay, ADD)

#Connect unit generators (fm):
connect(fmGrainDelay, modulator, depth)
connect(fmGrainDelay, depth, base)
connect(fmGrainDelay, track, timesfm, 0, 0)
connect(fmGrainDelay, base, timesfm, 0, 1)

#Connect unit generators (grain):
connect(fmGrainDelay, timesfm, delay)
connect(fmGrainDelay, delay, output)
connect(fmGrainDelay, noise, add)
connect(fmGrainDelay, add, samp)
connect(fmGrainDelay, samp, times)
connect(fmGrainDelay, times, delay, 0, 1)

setParam(add, VALUE, 1)  # need to offset noise output

setParamMin(samp, FREQUENCY, 1)
setParamMax(samp, FREQUENCY, 100)
setParam(samp, FREQUENCY, 40)
createControl(fmGrainDelay, samp, FREQUENCY, 'grainrate (hz)')

setParamMin(times, VALUE, 1)
setParamMax(times, VALUE, 500)
setParam(times, VALUE, 100)
createControl(fmGrainDelay, times, VALUE, 'delay range (ms)')

setParamMin(modulator, FREQUENCY, 1)
setParamMax(modulator, FREQUENCY, 2000)
setParam(modulator, FREQUENCY, 440)
createControl(fmGrainDelay, modulator, FREQUENCY, 'modulator frequency')

setParamMin(depth, VALUE, 1)
setParamMax(depth, VALUE, 1000)
setParam(depth, VALUE, 50)
createControl(fmGrainDelay, depth, VALUE, 'modulation depth')

setParamMin(base, VALUE, 1)
setParamMax(base, VALUE, 1000)
setParam(base, VALUE, 440)
createControl(fmGrainDelay, base, VALUE, 'base frequency')

#Must finish effect before using:
finishEffect(fmGrainDelay)

  
# algorithmic music
init()
setTempo(120)

soundFolder = RICHARDDEVINE__DUBSTEP_140_BPM__DUBDRUM
folder2 = YOUNGGURU__Y63_80_BPM_B_MINOR
stopMeasure = 30

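# The two loops below scatter randomly chosen clips across tracks,
# with gauss() clustering the start times around the piece's midpoint
# so the texture is densest in the middle.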
for i in range(randint(50,150)):
    sound = selectRandomFile(soundFolder)
    track = randint(4, 21)
    start = floor(gauss(stopMeasure / 2, 3))
    end = start + randint(1, 8) / 4.0
    fitMedia(sound, track, start, end)
    
for i in range(randint(100,200)):
    sound = selectRandomFile(folder2)
    track = randint(21, 37)
    start = floor(gauss(stopMeasure / 2, 3))
    end = start + randint(1, 8) / 4.0
    fitMedia(sound, track, start, end)
    
#Some drums 
fractionTubes = [HOUSE_BREAKBEAT_020, HIPHOP_TRAPHOP_BEAT_007]
tinCanDrum = [OS_COWBELL01, OS_COWBELL02]
guiro = [ELECTRO_DRUM_MAIN_BEAT_004, ELECTRO_DRUM_MAIN_BEAT_007]
shaker = [OS_OPENHAT02, OS_OPENHAT03]
tubeDrums = [OS_CLAP01, OS_CLAP02]
waterBottles = [OS_SNARE01, OS_OPENHAT06]
 
uniList = [fractionTubes, tinCanDrum, guiro, shaker, tubeDrums, waterBottles]
 
# From Ghana
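# In makeBeat strings (as I understand the EarSketch format), each
# character is one sixteenth note: a digit plays the list element at
# that index, "+" sustains the previous sound, and "-" is a rest.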
ftBeat = "0001-10-0001-10-"
tcBeat = "0--1-10-0--1-10-"
guiroBeat = "0--10-1--1-10-1-"
shakerBeat = "1001-101-1011001"
tubeBeat = "0--1--0--1--1--0"
bottleBeat = "0-0-----00-1---0"

ghanaList = [ftBeat, tcBeat, guiroBeat, shakerBeat, tubeBeat, bottleBeat]

for measure in range(1, stopMeasure):
    for i in range(1, 3):
        track = i + 37
        # randint is inclusive, so index from 0 or the first pattern in
        # each list would never be chosen
        makeBeat(uniList[randint(0, len(uniList) - 1)], track, measure,
                 ghanaList[randint(0, len(ghanaList) - 1)])

setEffect(4, 'lutzFmGrainDelay')  # apply the custom effect defined above

drumMinVol = -6
drumMaxVol = 8
setEffect(38, VOLUME, GAIN, drumMinVol, 0, drumMaxVol, 17)
setEffect(38, VOLUME, GAIN, drumMaxVol, 17, drumMinVol, stopMeasure)

drumMinVol = -10
setEffect(39, VOLUME, GAIN, drumMinVol, 0, drumMaxVol, 13)
setEffect(39, VOLUME, GAIN, drumMaxVol, 13, drumMinVol, stopMeasure)

#finish section
finish()

Project Requirements

In this project, use the EarSketch API to create your own audio effect and to algorithmically generate music. The submission requirements below are intended to place minimal restrictions on musical creativity.

Requirements

  • Write your own audio effect using the EarSketch API. (in Module 4)
    • Use the functions initEffect(), createUGen(), connect(), setParam(), setParamMin(), setParamMax(), createControl(), and finishEffect().
    • Your effect should perform digital signal processing on an input signal rather than synthesize sound from scratch, so it should include both an INPUT and OUTPUT unit generator.
    • Include at least 3 additional unit generators. (You may use the same type of unit generator, such as TIMES, more than once to meet this requirement.)
    • Do not simply copy an example from lecture verbatim. (A variation of an example that adds at least one new unit generator is acceptable.)
  • Add your effect to a track and create an automation envelope for it with setEffect(). (in Module 5)
  • Algorithmically generate music using, at minimum, the following additional EarSketch API functions: (in Module 5)
    • init()
    • setTempo()
    • finish()
    • fitMedia()
    • makeBeat()
  • Use the following Python computing constructs in your script at least once each: (in Module 5)
    • Declare a variable.
    • Create and use a list.
    • Create a for loop.
    • Make a random decision using random(), randint(), and/or gauss().
  • Adhere to the following musical guidelines
    • Your project should be 30 - 120 seconds in duration.
    • Your project should have at least 3 tracks.
    • Your project should include at least 4 different audio clips (either from the EarSketch loop library, used with permission of the copyright holder, or used under a creative commons or public domain license).
    • You should not edit the output of your Python script in Reaper's graphical interface. Everything you want in the music should be created through Python code.

Submit:

  • Zip file containing:
    • Python Script
    • Rendered MP3
    • Reaper Project Folder
      • When saving, check the "create subdirectory for project", "copy all media into project directory", and "Convert media" (make the format MP3) options in the save dialog box.
      • Also, in Reaper make sure to go to File -> Clean Current Project Directory to remove unused audio files and keep the project folder size smaller.