EMUSIC-L Digest Volume 55, Issue 09
This issue's topics:
Orchestral Synthesis (9 messages)
Your EMUSIC-L Digest moderator is Joe McMahon.
You may subscribe to EMUSIC-L by sending mail to listserv@american.edu with
the line "SUB EMUSIC-L your name" as the text.
The EMUSIC-L archive is a service of SunSite (sunsite.unc.edu) at the
University of North Carolina.
------------------------------------------------------------------------
Date: Mon, 9 Aug 1993 15:21:12 EDT
From: Mark Simon
Subject: Re: Orchestral Synthesis
To me, what really makes synthesis of orchestral textures sound cheesy is
the lack of articulation, rather than lack of fatness in the sound. I think
I'm going to have to enroll in an institution for the synthetically-
challenged, because all of my experiments with routing such-and-such a
parameter to velocity or such-and-such a controller still sound like
absolute sh*t compared to real musicians. Real musicians produce infinite
shadings of articulations without having to think about it a whole lot,
because they've practiced the physical motions which produce these shadings
to the point where it becomes automatic. How long before I reach that
level of sophistication with my synth programming? Even if you're not trying
to imitate acoustic instruments, your sounds still need to have articulation.
May I be so bold as to ask the expert minds out there for their preferred
methods of producing articulation effects via real-time controllers? Or any
other methods?
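[One way to make the velocity-and-controller routing concrete is a small mapping function. The following is a purely illustrative sketch in Python; the parameter names, ranges, and the linear mapping are assumptions for the example, not any particular synth's architecture. -Ed.]

```python
# Illustrative sketch: derive articulation parameters for one note
# from MIDI note velocity and the mod wheel (CC 1). The specific
# ranges and the linear mapping are assumptions, not a real synth's.

def articulation(velocity, mod_wheel):
    """Return (attack_ms, brightness) for one note.

    velocity, mod_wheel: MIDI values 0-127.
    Harder hits get a faster attack and a brighter tone;
    the mod wheel adds extra brightness on top.
    """
    v = velocity / 127.0
    m = mod_wheel / 127.0
    attack_ms = 80.0 - 70.0 * v            # 80 ms soft, 10 ms hard
    brightness = min(1.0, 0.3 + 0.5 * v + 0.4 * m)
    return attack_ms, brightness
```

[The point of the sketch is only that a few continuous inputs can fan out to several synthesis parameters at once, which is roughly what routing velocity or a controller to "such-and-such a parameter" buys you. -Ed.]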
Part of my problem, of course, is that my extremely modest outlay of
gear (a Proteus/1 and a Roland S-50, I mean really!) may simply not be
capable of producing what I want it to. But suppose I drop a few thousand
for a K2000 and find that my sounds still suck because I don't know what
the heck I'm doing?
Someone out there is going to tell me not to try to emulate the sound
of an orchestra, but rather just let electronic sounds be electronic sounds,
but I think that's avoiding the issue. All sounds need articulation.
These days I use my electronic gear more for inputting notes into Finale, and
for making dummy versions of pieces to be eventually performed by live
musicians.
I know there's a certain sensitivity out there to people asking naive
questions. If this is such a question, then please send replies directly to:
--Mark Simon
tip@cornellc.cit.cornell.edu
------------------------------
Date: Tue, 10 Aug 1993 08:57:12 -0400
From: idealord
Subject: Re: Orchestral Synthesis
>
> To me, what really makes synthesis of orchestral textures sound cheesy is
> the lack of articulation, rather than lack of fatness in the sound.
Well, I started this discussion ;-) and I never intended to talk about fatness
- I'm talking about beauty, lushness, etc... in electronic sounds not in the
re-synthesis of orchestral instruments - which I find hopeless.
One of the main problems with duping orchestral instruments is that you really
do have to know how to perform the music with that instrument. No instrument
is going to perform itself - and playing a keyboard into a sequencer just
doesn't cut it. Ultimately, this is why electronic music is so lame these
days - no one performs the piece - and if they do - maybe one or two tracks
are musical and the rest suck... (typical lame percussion track or
pathetically repetitious bass line).
I don't have
the time to learn how to perform each instrument - which is ultimately what
we're talking about - how is this attack approached from this note, etc. (not
real instruments - synthesized instruments) much less try and dupe the
infinitely large space of timbral possibilities. Shit, I don't have the time
to learn to perform my totally synthetic instruments.
I think
> I'm going to have to enroll in an institution for the synthetically-
> challenged, because all of my experiments with routing such-and-such a
> parameter to velocity or such-and-such a controller still sound like
> absolute sh*t compared to real musicians.
Real musicians _want_ to sound good. What does a synthesizer want? Until we
have intelligent instruments - really intelligent - which have been trained
in all kinds of traditions - it's hopeless...
> Real musicians produce infinite
> shadings of articulations without having to think about it a whole lot,
> because they've practiced the physical motions which produce these shadings
> to the point where it becomes automatic. How long before I reach that
> level of sophistication with my synth programming? Even if you're not trying
> to imitate acoustic instruments, your sounds still need to have articulation.
> May I be so bold as to inquire from the expert minds out there their preferred
> methods for producing articulation effects via real-time controllers? or any
> other methods?
My biggest success has been with exotic MIDI controllers in an improvisational
mode. Then I edit the improvisation... The controller I used let me create
certain types of effects with quite a broad range of articulations - but I've
since played out this type of thing - bored with it... back to real music!
> Part of my problem, of course, is that my extremely modest outlay of
> gear (a Proteus/1 and a Roland S-50, I mean really!) may simply not be
> capable of producing what I want it to. But suppose I drop a few thousand
> for a K2000 and find that my sounds still suck because I don't know what
> the heck I'm doing?
Again - you're absolutely right - a new synth and new sounds are still going
to require a "performance" of your music. You'll have to practice each note
just as a musician would to get the articulation right. I don't know about
you but I just don't have the time to write a piece and then learn how to
perform it... maybe if I was rich ;-)
> Someone out there is going to tell me not to try to emulate the sound
> of an orchestra, but rather just let electronic sounds be electronic sounds,
> but I think that's avoiding the issue. All sounds need articulation.
> These days I use my electronic gear more for inputting notes into Finale, and
> for making dummy versions of pieces to be eventually performed by live
> musicians.
Me too! I just got through spending about a week working with the
Bohlen-Pierce tuning - trying to get some good patches for it and developing a
piece. Finally, I just hit my head against the same door too many times and
went back to a suite of piano pieces I've been writing... I think in the end
we're going to end up writing for synthesizers - with a patch for each piece -
and then expect musicians - to perform the piece. Write the accents, slurs,
etc.. just like you would for any instrument...
> I know there's a certain sensitivity out there to people asking naive
> questions. If this is such a question, then please send replies directly to:
>
> --Mark Simon
> tip@cornellc.cit.cornell.edu
>
Jeff Harrington
idealord@dorsai.dorsai.org
------------------------------
Date: Tue, 10 Aug 1993 09:40:03 PDT
From: metlay
Subject: Re: Orchestral Synthesis
>To find out if this is true, maybe we could organize a test where we get
>15 or 20 people playing whatever synths and samplers through individual
>monitor systems all playing a "violin" patch. I wonder how that would
>sound. Try it not only with imitative sounds but also with purely
>electronic sounds, too.
I've tried this with eight people at once. You get mush, unless you are
very very VERY careful with your timbral blend. On the other hand, Amin
Bhatia specializes in multitracking Minimoogs to get thick orchestral
timbres; I only wish his compositional strengths equalled his timbral skill.
But that was a primary argument in this thread to start with, wasn't it?
>. Where Metlay is getting
>lean and mean, these techniques speak to the bigger is better syndrome.
I've tried it your way, and the results are not always satisfactory. Lean
and mean gets you results faster because you have fewer devices to master;
I'm good, but I'm not Nick Rothwell, and if I want to produce music on my
own I have to limit the number of things I need to be able to do well.
>imagine an "orchestra" of 110 Xpanders....
No thanks. "Killing Chrome" from BANDWIDTH was my one and only experiment
with multiple Xpanders; we had five going at once. Everything except the
drums and one wine-glass K5 sound was Xpander. The mix was the one we
were least happy with-- it has all the subtlety of a brick to the head.
Fun, yes, but hardly subtle.
>I'm babbling, now. Please forgive the bandwidth
>usage. ^^^^^^^^^
You'll be hearing from our lawyers.
--
mike metlay * atomic city * box 81175 pgh pa 15217-0675 * metlay@netcom.com
---------------------------------------------------------------------------
"Wow, now my hand's all sticky! Yum." (metlay's wife)
------------------------------
Date: Tue, 10 Aug 1993 14:53:31 -0400
From: Joe McMahon
Subject: Re: Orchestral Synthesis
>Well, I started this discussion ;-) and I never intended to talk about fatness
>- I'm talking about beauty, lushness, etc... in electronic sounds not in the
>re-synthesis of orchestral instruments - which I find hopeless.
I think this is a problem with expectations on one hand (I'll talk about the
other hand later). Of *course* a synthesized whatever doesn't sound like a
real whatever; it isn't one. If you want a "real piano" or "real strings",
you'd better use that instead. There is no question that a synthesizer is
only going to be an imitation of a complex physical system. That doesn't
mean that it is therefore useless.
>One of the main problems with duping orchestral instruments is that you really
>do have to know how to perform the music with that instrument. No instrument
>is going to perform itself - and playing a keyboard into a sequencer just
>doesn't cut it.
You are correct in saying that no instrument performs itself. One *must*
think like a flautist, or saxophonist, or drummer to sound like one. If
performed properly, a sequencer track will sound like an analog or
digitally recorded track, and by implication, it will sound the way you
want it if you put enough work into it. Admittedly, the nuances of the
performance are not going to be the same if Wynton Marsalis plays his
trumpet, and I play it on a trumpet patch. The key here is working on the
synth until what it does works for you, or realizing as quickly as possible
that you are going to have to have a real trumpet player play it.
>I don't have
>the time to learn how to perform each instrument - which is ultimately what
>we're talking about - how is this attack approached from this note, etc. (not
>real instruments - synthesized instruments) much less try and dupe the
>infinitely large space of timbral possibilities. Shit, I don't have the time
>to learn to perform my totally synthetic instruments.
I think this is where your problem lies, frankly. One must learn to perform
on one's instrument (this is the other hand). If one chooses to play a
large number of instruments and one is uncompromising about quality, then
one has to spend a lot of time learning, or be dissatisfied with the
results. There cannot be a compromise. You work hard to learn many
instrumental idioms, or you choose a subset of timbres and learn those to
the best of your ability, or you get the proper instrumentalist(s) to play
the piece for you.
>Real musicians _want_ to sound good. What does a synthesizer want? Until we
>have intelligent instruments - really intelligent - which have been trained
>in all kinds of traditions - it's hopeless...
I'm not sure I'm following this. Yes, a musician wants to sound good. But I
don't think a synthesizer can be called a musician, or can be said to want
anything, any more than a tape deck or CD player can. A synthesizer
produces outputs in response to inputs. It has no passion for excellence or
interest in quality. A performer has to manipulate an instrument to get
results.
As far as intelligent performers (other than humans): no thanks, I'd rather
play it myself. An intelligent instrument which adapts to one's playing
style is one thing. A synthesizer that "wants" something or has its own
agenda as to how something "ought" to be played is sooner or later going to
butt heads with me. If I didn't write the software and can't change it, I
think we can all see who will lose the argument. It's not going to be the
instrument. :-) If the intelligent player insists that the piece be played
according to a specific vision of How The Flute Must Be Played, the results
will still not be satisfying, no matter how technically correct, if your
vision does not match (e.g., Claude Bolling vs. Ian Anderson).
>> Real musicians produce infinite
>> shadings of articulations without having to think about it a whole lot,
>> because they've practiced the physical motions which produce these shadings
>> to the point where it becomes automatic. How long before I reach that
>> level of sophistication with my synth programming?
Forever. :-)
Programming is not a substitute for practice and performance. Wendy Carlos,
on "Secrets of Synthesis", admits that she was trying to remove the
performer from the music and let it stand on its own by playing it on
synthesizers. After a time, she realized that it doesn't work that way. The
performer is always an integral part of a musical performance.
Programming will produce a sound. Whether or not it is or can be part of a
musical experience is completely in the hands of the performer, much more so
than the composer. Jan Hammer plays stuff on the Minimoog that I can't even
approach. Technically speaking, I could set up the exact same equipment.
But I wouldn't sound the same because the performance and experience is
what counts. I might be able to learn to do it, but equipment does not
guarantee results.
>Me too! I just got through spending about a week working with the
>Bohlen-Pierce tuning - trying to get some good patches for it and developing a
>piece. Finally, I just hit my head against the same door too many times and
>went back to a suite of piano pieces I've been writing... I think in the end
>we're going to end up writing for synthesizers - with a patch for each piece -
>and then expect musicians - to perform the piece. Write the accents, slurs,
>etc.. just like you would for any instrument...
Sure. It *is* an instrument. Why shouldn't you treat it as one, with its own
limitations and strengths?
As I said, I think it's a conceptual problem, coupled with a certain amount
of frustration brought on by the fact that many synthesizers nowadays
provide many sampled "instruments" with very few intuitive real-time
controls. It's easy for a real trumpet player, for instance, to control
subtle nuances of the sound because so many parts of the body are used in
playing it. Engaging the lips and tongue, two of the most highly enervated
areas of the body, in controlling a physical process makes using just
fingers and feet seem clumsy.
The other thing to remember is that a player has to practice for many years
to learn the physical processes necessary to make it sound easy. A keyboard
player who punches up a brass patch and expects to sound just like Miles
Davis without lots and lots of practice is fooling himself; he doesn't have
the experience in playing like a trumpet player to draw on, let alone the
same physical tools.
This is not to say that one cannot develop the skills to use a synthesizer
patch in a subtle and idiomatic way. It *is* saying that it will require a
lot of work and practice.
No matter how close and amazing a simulation of X gets, it's still a
simulation. If you really and truly want X, you're doing yourself a
disservice by trying to convince yourself that a substitute will do.
However, trying to play like a specific instrument may lead you in
unexpected directions to find music you might not have looked for on your
own, and will improve your technique and ability.
If you consider putting a lot of work into performance a waste of time,
then performing your music yourself, whether on synthesizers or not, is an
even bigger waste of your time.
--- Joe M.
------------------------------
Date: Tue, 10 Aug 1993 16:17:17 -0400
From: idealord
Subject: Re: Orchestral Synthesis
>
> >Well, I started this discussion ;-) and I never intended to talk about fatness
> >- I'm talking about beauty, lushness, etc... in electronic sounds not in the
> >re-synthesis of orchestral instruments - which I find hopeless.
> I think this is a problem with expectations on one hand (I'll talk about the
> other hand later). Of *course* a synthesized whatever doesn't sound like a
> real whatever; it isn't one. If you want a "real piano" or "real strings",
> you'd better use that instead. There is no question that a synthesizer is
> only going to be an imitation of a complex physical system. That doesn't
> mean that it is therefore useless.
>
Hmmm.. I think I said that. I said that the re-synthesis of orchestral
instruments was hopeless - mainly because of the necessity of performance
intelligence - I didn't say synthesizers were hopeless ;-) puhleeze...
As someone who has written music for orchestra and conducted orchestras (not
very well, mind you) I understand something of this...
>
> >I don't have
> >the time to learn how to perform each instrument - which is ultimately what
> >we're talking about - how is this attack approached from this note, etc. (not
> >real instruments - synthesized instruments) much less try and dupe the
> >infinitely large space of timbral possibilities. Shit, I don't have the time
> >to learn to perform my totally synthetic instruments.
> I think this is where your problem lies, frankly. One must learn to perform
> on one's instrument (this is the other hand). If one chooses to play a
> large number of instruments and one is uncompromising about quality, then
> one has to spend a lot of time learning, or be dissatisfied with the
> results. There cannot be a compromise. You work hard to learn many
> instrumental idioms, or you choose a subset of timbres and learn those to
> the best of your ability, or you get the proper instrumentalist(s) to play
> the piece for you.
Really... I've got about 2-3 hours a night for my music. I'm a composer not a
performer - I'm totally uninterested in performing... know any good string
quartets?
>
> >Real musicians _want_ to sound good. What does a synthesizer want? Until we
> >have intelligent instruments - really intelligent - which have been trained
> >in all kinds of traditions - it's hopeless...
> I'm not sure I'm following this. Yes, a musician wants to sound good. But I
> don't think a synthesizer can be called a musician, or can be said to want
> anything, any more than a tape deck or CD player can. A synthesizer
> produces outputs in response to inputs. It has no passion for excellence or
> interest in quality. A performer has to manipulate an instrument to get
> results.
>
Someday, there should be pre-trained neural nets trained on each of the
instruments so that the appropriate tone or articulation is produced under
the right musical circumstances...there would still be the problem of
"feedback" in context with the other layers of music being produced.
> As far as intelligent performers (other than humans): no thanks, I'd rather
> play it myself. An intelligent instrument which adapts to one's playing
> style is one thing. A synthesizer that "wants" something or has its own
> agenda as to how something "ought" to be played is sooner or later going to
> butt heads with me. If I didn't write the software and can't change it, I
> think we can all see who will lose the argument. It's not going to be the
> instrument. :-) If the intelligent player insists that the piece be played
> according to a specific vision of How The Flute Must Be Played, the results
> will still not be satisfying, no matter how technically correct, if your
> vision does not match (e.g., Claude Bolling vs. Ian Anderson).
>
> >> Real musicians produce infinite
> >> shadings of articulations without having to think about it a whole lot,
> >> because they've practiced the physical motions which produce these shadings
> >> to the point where it becomes automatic. How long before I reach that
> >> level of sophistication with my synth programming?
> Forever. :-)
>
> Programming is not a substitute for practice and performance. Wendy Carlos,
> on "Secrets of Synthesis", admits that she was trying to remove the
> performer from the music and let it stand on its own by playing it on
> synthesizers. After a time, she realized that it doesn't work that way. The
> performer is always an integral part of a musical performance.
>
I'm sorry - but neither Wendy nor Jan have produced satisfactory results to my
ear... Sorry to get real snobby here, but I'm more interested in the
performance paradigms of Horowitz or Casals... ;-) This is not to
discriminate against genres - just to say I want my music to be deep... and in
the classical musical tradition... your mileage...hmm.... may vary...I'm a good
classical pianist - not some newcomer to performing traditions (of any genre).
And I'm a real good blues pianist ;-) and my funk bass rocks the...
nevermind...
I know what a good performance is, Joe...
> Programming will produce a sound. Whether or not it is or can be part of a
> musical experience is completely in the hands of the performer, much more so
> than the composer. Jan Hammer plays stuff on the Minimoog that I can't even
> approach. Technically speaking, I could set up the exact same equipment.
> But I wouldn't sound the same because the performance and experience is
> what counts. I might be able to learn to do it, but equipment does not
> guarantee results.
Didn't I read this same post 6 months ago? (Sorry...)
>
> >I just got through spending about a week working with the
> >Bohlen-Pierce tuning - trying to get some good patches for it and developing a
> >piece. Finally, I just hit my head against the same door too many times and
> >went back to a suite of piano pieces I've been writing... I think in the end
> >we're going to end up writing for synthesizers - with a patch for each piece -
> >and then expect musicians - to perform the piece. Write the accents, slurs,
> >etc.. just like you would for any instrument...
> Sure. It *is* an instrument. Why shouldn't you treat it as one, with its own
> limitations and strengths?
I guess my perspective is lost here - but I've been experimenting for years
with the simulation of performances. That is, pieces that are either too fast
or too rhythmically complex to be "performed." This is an area which is still
wide open for computer instruments...
>
> As I said, I think it's a conceptual problem, coupled with a certain amount
> of frustration brought on by the fact that many synthesizers nowadays
> provide many sampled "instruments" with very few intuitive real-time
> controls.
NOT!!! Joe - man - you've totally lost the perspective of this discussion -
It's not about people trying to dupe Miles -
I was commenting on the fact that as a composer - I was extremely excited
about the possibilities of computer music in the '70s and '80s. I assumed
that it would be possible to create _musical_ recordings of my compositions
without developing a performance paradigm for each "instrument." As I said in
my post - which you cut out - I did develop a performance paradigm for a MIDI
controller...and I created a tape with it - it's in the Emusic archive...
Now - I feel that given the time it would take to create a
"performance paradigm" for each patch - I'm just going to give up on the
creation of stand-alone recordings as compositions and go back to music on
paper...
>
> No matter how close and amazing a simulation of X gets, it's still a
> simulation. If you really and truly want X, you're doing yourself a
> disservice by trying to convince yourself that a substitute will do.
> However, trying to play like a specific instrument may lead you in
> unexpected directions to find music you might not have looked for on your
> own, and will improve your technique and ability.
>
As I said earlier - the first sentence - we weren't talking about instrumental
simulation - but about the creation of "orchestral" type sounds - not
necessarily "fat" sounds but sounds which have a resonance and a complexity
known primarily from large ensemble playing. Then the discussion drifted to
the development of synthetic performances.
> If you consider putting a lot of work into performance a waste of time,
> then performing your music yourself, whether on synthesizers or not, is an
> even bigger waste of your time.
>
Duhhh... thanks Joe - ;-)
> --- Joe M.
>
Mr. Jeff Harrington
idealord@dorsai.dorsai.org
------------------------------
Date: Tue, 10 Aug 1993 15:23:04 -0500
From: Stephen David Beck
Subject: Re: Orchestral Synthesis
Jeff Harrington writes:
> One of the main problems with duping orchestral instruments is that you really
> do have to know how to perform the music with that instrument. No instrument
> is going to perform itself - and playing a keyboard into a sequencer just
> doesn't cut it. Ultimately, this is why electronic music is so lame these
> days - no one performs the piece - and if they do - maybe one or two tracks
> are musical and the rest suck... (typical lame percussion track or
> pathetically repetitious bass line).
> I don't have
> the time to learn how to perform each instrument - which is ultimately what
> we're talking about - how is this attack approached from this note, etc. (not
> real instruments - synthesized instruments) much less try and dupe the
> infinitely large space of timbral possibilities. Shit, I don't have the time
> to learn to perform my totally synthetic instruments.
Maybe this should tell you something. Maybe you don't have what it takes to
really make music. I'm not trying to be "holier than thou". On the contrary,
why should you be able to do in 10 minutes what most musicians spend a
lifetime learning how to do? As the joke goes, "How do you get to Carnegie
Hall? Practice, practice, practice."
Jeff continues:
> Real musicians _want_ to sound good. What does a synthesizer want? Until we
> have intelligent instruments - really intelligent - which have been trained
> in all kinds of traditions - it's hopeless...
What you don't realize is that a musical instrument (synthesizer or
otherwise) does not inherently make music. Instruments are not intelligent,
they have no wants (except for some 110V AC); they are nothing but mechanical
(or electro-mechanical) systems which remain silent until energy is put into
the system (i.e. you play it).
The only thing that can make music is a musician. A synthesizer is but a
means to a musical end. Until you realize that, you'll remain puzzled,
bewildered and frustrated.
By the way, if you spent less time ranting about how hard it is to make
music, you'd have more time to practice.
In closing, I just have to say:
M U S I C I S N O T E A S Y .
NEVER WAS, NEVER WILL BE.
If you can't take the heat, get out of the pit.
=Stephen David Beck=
------------------------------
Date: Tue, 10 Aug 1993 16:45:08 -0400
From: Joe McMahon
Subject: Re: Orchestral Synthesis
>
>I guess my perspective is lost here - but I've been experimenting for years
>with the simulation of performances. That is, pieces that are either too fast
>or too rhythmically complex to be "performed." This is an area which is still
>wide open for computer instruments...
>
>I was commenting on the fact that as a composer - I was extremely excited
>about the possibilites of computer music in the '70's and '80's. I assumed
>that it would be possible to create _musical_ recordings of my compositions
>without developing a performance paradigm for each "instrument." As I said in
>my post - which you cut out - I did develop a performance paradigm for a MIDI
>controller...and I created a tape with it - it's in the Emusic archive...
>
Okay, now I understand a little better. You don't want to re-create a
performance on a synthesized anything, you want to create a whole new genre
of musical instrument and musical performance from scratch. Correct?
>Now - I feel that given the time it would take to create a
>"performance paradigm" for each patch - I'm just going to give up on the
>creation of stand-alone recordings as compositions and go back to music on
>paper...
And I agree with you. Essentially constructing both an instrument and all
of the concomitant performance paradigm from scratch is an incredibly
challenging and time-consuming project. My point still applies, though.
Even if a computer-controlled, smart instrument existed, *someone* would
have had to spend the time to construct the paradigm of performance. The
machine cannot figure out from first principles what a "good" or "bad" or
"proper" or "classical" or "orchestral" performance is. So the time to
construct the paradigm is still needed, along with time to debug the code
to implement the paradigm.
In sum, no time is saved, and most likely, more time is spent, if you do
it yourself. If you don't, then you have to live with So-and-so's paradigm
of performance, which may not have been the one you wanted. You *know* what
you want. That's why actually playing it yourself, if you can, is better.
Second best is training someone else to do it.
>> If you consider putting a lot of work into performance a waste of time,
>> then performing your music yourself, whether on synthesizers or not, is an
>> even bigger waste of your time.
>>
>
>Duhhh... thanks Joe - ;-)
I apologize if that seemed an oversimplification, but there really are
people who believe that having a sequencer with note-entry and a 32-voice
sound board to reproduce the notes makes them a musician. I certainly did
not mean it as a remark to you, but to the audience at large, most if not
all of whom no doubt know this as well.
Am I closer now?
--- Joe M.
------------------------------
Date: Wed, 11 Aug 1993 09:41:00 GMT+0100
From: Chris Gray
Subject: Re: Orchestral Synthesis
Joe M writes:
> [...] Engaging the lips and tongue, two of the most highly enervated
^^^^^^^^^
> areas of the body, in controlling a physical process makes using just
> fingers and feet seem clumsy.
Speak for yourself Joe. Mine are still fit as a fiddle, as I will happily
demonstrate to the right person. 8^P
Chris
------------------------------
Date: Wed, 11 Aug 1993 08:50:09 -0400
From: idealord
Subject: Re: Orchestral Synthesis
>
> >
> >I guess my perspective is lost here - but I've been experimenting for years
> >with the simulation of performances. That is, pieces that are either too fast
> >or too rhythmically complex to be "performed." This is an area which is still
> >wide open for computer instruments...
> >
> >I was commenting on the fact that as a composer - I was extremely excited
> >about the possibilities of computer music in the '70s and '80s. I assumed
> >that it would be possible to create _musical_ recordings of my compositions
> >without developing a performance paradigm for each "instrument." As I said in
> >my post - which you cut out - I did develop a performance paradigm for a MIDI
> >controller...and I created a tape with it - it's in the Emusic archive...
> >
> Okay, now I understand a little better. You don't want to re-create a
> performance on a synthesized anything, you want to create a whole new genre
> of musical instrument and musical performance from scratch. Correct?
>
Exactly! And now not only do I have to compose the music and prepare the score
(sometimes ;-), I have to generate a performance paradigm for each patch. One
thing that I'm quickly becoming wary of is the regularity of envelopes - I
want envelopes to fade or rise when I say so - and I end up riding a volume
control wheel - eating up tons of memory and losing the velocity/timbral
relationship I set up in my patch...
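[One standard remedy for exactly this memory problem is controller-data thinning: keep the shape of the volume ride while dropping redundant events. The sketch below is illustrative only; it assumes events arrive as (tick, value) pairs, and the function name and threshold are hypothetical, not any sequencer's actual feature. -Ed.]

```python
# Illustrative sketch of controller-data thinning: preserve a
# volume-ride gesture (e.g. CC 7) while dropping events whose value
# barely changed, which is what eats sequencer memory.

def thin_cc(events, min_change=3):
    """Keep the first and last events, plus any event whose value
    differs from the last kept value by at least min_change.

    events: list of (tick, value) pairs, value 0-127.
    """
    if not events:
        return []
    kept = [events[0]]
    for tick, value in events[1:-1]:
        if abs(value - kept[-1][1]) >= min_change:
            kept.append((tick, value))
    if len(events) > 1:
        kept.append(events[-1])        # always keep the endpoint
    return kept
```

[Run over a slow fade, this keeps the contour of the gesture with a fraction of the events, though it does nothing for the lost velocity/timbre relationship Jeff describes. -Ed.]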
> >Now - I feel that given the time it would take to create a
> >"performance paradigm" for each patch - I'm just going to give up on the
> >creation of stand-alone recordings as compositions and go back to music on
> >paper...
> And I agree with you. Essentially constructing both an instrument and all
> of the concomitant performance paradigm from scratch is an incredibly
> challenging and time-consuming project. My point still applies, though.
> Even if a computer-controlled, smart instrument existed, *someone* would
> have had to spend the time to construct the paradigm of performance. The
> machine cannot figure out from first principles what a "good" or "bad" or
> "proper" or "classical" or "orchestral" performance is. So the time to
> construct the paradigm is still needed, along with time to debug the code
> to implement the paradigm.
>
In a way - we're where visual artists are - each painting might require a new
painting technique - one appropriate for each piece. For me, though, working
in a 9 to 5 (I need to live in NYC for a lot of reasons and there just ain't
no teaching jobs here ;-( ) I have about 2-3 hours a night for my music. The
process is getting overly time consuming.
For the Bohlen-Pierce piece I was going to write for the SEAMUS convention - I
developed about 20 patches - they fit the paradigm of the scale - all based on
odd-harmonic spectra - but then I had to set about learning the different
characteristics of each patch - and then write music appropriate for each
patch. It was just getting way too complicated for my deadline ;-(
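[For readers unfamiliar with it: the equal-tempered Bohlen-Pierce scale divides the tritave (a 3:1 frequency ratio) into 13 equal steps rather than the octave into 12, which is why odd-harmonic, clarinet-like spectra suit it. A minimal sketch; the 440 Hz base and the function names are illustrative choices, not part of the scale's definition. -Ed.]

```python
# Sketch of the equal-tempered Bohlen-Pierce scale: 13 equal steps
# per tritave (ratio 3:1), so each step is a factor of 3**(1/13).

def bp_scale(base_hz=440.0, steps=14):
    """Frequencies of successive BP steps starting from base_hz;
    step 13 lands exactly a tritave (3x) above the base."""
    return [base_hz * 3 ** (k / 13) for k in range(steps)]

def odd_partials(f0, n=5):
    """First n odd-harmonic partials (1f, 3f, 5f, ...) of f0 -
    the kind of spectrum BP patches are typically built on."""
    return [f0 * (2 * k + 1) for k in range(n)]
```

[Since every just interval BP approximates is built from odd ratios (3:1, 5:3, 7:5, ...), partials of the form (2k+1)f line up with scale steps the way even harmonics line up with the octave-based scale. -Ed.]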
> In sum, no time is saved, and most likely, more time is spent, if you do
> it yourself. If you don't, then you have to live with So-and-so's paradigm
> of performance, which may not have been the one you wanted. You *know* what
> you want. That's why actually playing it yourself, if you can, is better.
> Second best is training someone else to do it.
>
Absolutely! I have complete faith in my musical ideas ;-) but their
implementation is time-consuming...
> >> If you consider putting a lot of work into performance a waste of time,
> >> then performing your music yourself, whether on synthesizers or not, is an
> >> even bigger waste of your time.
> >>
> >
> >Duhhh... thanks Joe - ;-)
>
> I apologize if that seemed an oversimplification, but there really are
> people who believe that having a sequencer with note-entry and a 32-voice
> sound board to reproduce the notes makes them a musician. I certainly did
> not mean it as a remark to you, but to the audience at large, most if not
> all of whom no doubt know this as well.
>
> Am I closer now?
>
Yup...
> --- Joe M.
>
Jeff Harrington
idealord@dorsai.dorsai.org
------------------------------
End of the EMUSIC-L Digest
******************************