Comment by nuclearnicer
19 hours ago
Wonderful question. I suspect it's partly the culture issue you point to, but also a practical issue of composition. If we decompose sound into basic waveforms, as on page 18 of the PDF in question, we then have parts we can reassemble. We can take the defense-funded DSP math of the likes of a James Cooley or a John Tukey and build an engine for assembling the parts of sound.
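To make that concrete, here's a minimal sketch of the decompose-and-reassemble idea in Python with NumPy, whose FFT routines descend from Cooley and Tukey's algorithm. The toy signal and the number of components kept are just placeholders:

    import numpy as np

    # Build a toy "sound": a 440 Hz tone plus a quieter 880 Hz overtone.
    sample_rate = 8000
    t = np.arange(sample_rate) / sample_rate  # one second of samples
    signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

    # Decompose into basic waveforms (sinusoids) with the FFT.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

    # Keep only the strongest components: these are the reusable "parts".
    strongest = np.argsort(np.abs(spectrum))[-2:]
    for i in strongest:
        amp = 2 * np.abs(spectrum[i]) / len(signal)
        print(f"{freqs[i]:.0f} Hz, amplitude {amp:.2f}")

    # Reassemble the sound from just those parts.
    kept = np.zeros_like(spectrum)
    kept[strongest] = spectrum[strongest]
    rebuilt = np.fft.irfft(kept, n=len(signal))
    print("max reconstruction error:", np.max(np.abs(signal - rebuilt)))

Because the toy signal contains exactly those two sinusoids, the reassembly is essentially lossless; real sounds need many more parts, but the engine is the same.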
All that being said, I think that's a process of convenience and a historical path, not an absolute constraint. We have more flexible means of communicating with machines today, and I strongly encourage someone to work on a new UI for computer music. "Jazz trio: piano, upright bass, and drums. Start the drummer laid-back, piano blowing over the changes, then piano on top."
This is the kind of UI people should be building.
https://youtu.be/3poN6FDyB28?is=QjDzlmRQCMMbP_lS
What you wrote described an output, not a UI.
Indeed it described an output, and it was also a UI. I meant that describing the output could itself be the UI. I pictured a text box, or a Whisper-style speech-to-text session. Basically an AI chatbot specialized for generating music.
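As a rough illustration of how small that interface could be (everything here is hypothetical: the keyword-to-pitch mapping merely stands in for a real generative model sitting behind the same text box):

    import numpy as np

    # Sketch of "describing the output is the UI": a text prompt goes in,
    # audio samples come out. The keyword mapping is purely illustrative;
    # a real system would put a generative model behind this interface.
    SAMPLE_RATE = 8000

    def generate_music(description: str) -> np.ndarray:
        """Map a few hypothetical keywords to tones; stand-in for a model."""
        notes = {"piano": 261.63, "bass": 65.41, "drums": 110.0}
        t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of audio
        parts = [np.sin(2 * np.pi * f * t)
                 for word, f in notes.items() if word in description.lower()]
        return sum(parts) if parts else np.zeros(SAMPLE_RATE)

    if __name__ == "__main__":
        prompt = input("describe the music> ")  # or a speech-to-text front end
        audio = generate_music(prompt)
        print(f"generated {len(audio)} samples at {SAMPLE_RATE} Hz")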
I couldn't figure out precisely what that video showed, but it was fascinating. Somehow it reminded me of the Orca music programming environment.
https://www.youtube.com/watch?v=r28Xy-1_F8I
https://www.youtube.com/watch?v=-tAAsolMG-M