What I really want to see is a music programming language that does not require elementary knowledge of trig. In fact, I don't want it to use any numbers at all.
They all do: Faust, Impromptu, ChuCK, csound, SuperCollider, etc., etc. I suppose that by itself should convince me that there's no other way (there are things like Orca, but I'm thinking of things that look more like conventional programming languages).
I seldom think about numbers when I'm programming a synthesizer -- I just turn this knob more this way or that slider down a bit. Why can't a music programming language be more . . . gestural? Not sure what the right word is. I want the benefits of a textual programming language, but I don't really want to have to start thinking about sound waves in terms of literal numeric frequencies, particularly since I don't really do that while doing sound design.
I don't mean to pick on your project in particular; it looks really cool. But since you're writing one of these, perhaps you can answer my question: why do these languages ask synth programmers to think in terms of precise numbers when programming an "actual" synth isn't really like that at all?
> I want the benefits of a textual programming language, but I don't really want to have to start thinking about sound waves in terms of literal numeric frequencies
That's a bit of a contradiction. Having that level of control over signal processing is exactly the reason to use a programming language. Otherwise you're better off with a tool like Reaktor or VCV Rack.
Try evaluating `d1 $ s "bd sn"` to get a bass drum-snare drum rhythm going. Then `d1 $ s "bd*2 sn"` to kick the bass drum twice each loop instead of once. It can be extremely intuitive.
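(Those snippets are TidalCycles mini-notation. A slightly longer sketch in the same spirit, assuming SuperDirt is running and that you evaluate one line at a time, each `d1` replacing the running pattern:)

```
d1 $ s "bd sn"                  -- bass drum then snare, once per cycle
d1 $ s "bd*2 sn"                -- *2 squeezes two bass drums into the same slot
d1 $ s "bd*2 <sn cp>"           -- <> alternates snare and clap between cycles
d1 $ every 4 (fast 2) $ s "bd*2 <sn cp>"  -- every 4th cycle runs at double speed
```

Note that no frequencies appear anywhere; the numbers that do show up are musical counts, not Hz.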
Because it's a text interface, not a gestural interface.
If you really want a gestural interface, you can add one to any language that supports MIDI control or some other hardware connection option (UDP, etc.).
The gestural part is trivial, just as putting a knob on a synth is trivial. The hard part is getting the knob to do something, which is not trivial at all.
In fact it's impossible without using numbers somewhere in the make-it-work chain. They may be a few levels down, but they're always going to be there.
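To make that concrete, here's a minimal sketch of the make-it-work chain under a single knob, using the `mido` MIDI library; `set_oscillator_freq` is a hypothetical stand-in for whatever a real synth engine exposes:

```python
import mido  # pip install mido python-rtmidi

def set_oscillator_freq(freq_hz):
    # Hypothetical placeholder for a real synth-engine call.
    print(f"oscillator -> {freq_hz:.1f} Hz")

with mido.open_input() as port:  # default MIDI input
    for msg in port:
        if msg.type == "control_change" and msg.control == 74:
            # The gesture arrives as an integer 0-127. Turning it into pitch
            # is where the numbers live: an exponential map across ~10 octaves.
            freq_hz = 20.0 * 2 ** (msg.value / 127 * 10)  # ~20 Hz .. ~20 kHz
            set_oscillator_freq(freq_hz)
```

The person turning the knob never sees the 0-127 or the exponent, but they're there, a few levels down.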
I'm not suggesting that we banish numbers from the entire stack. I'm asking why they are absolutely necessary for a textual interface in this domain -- to the point that, as someone says below, it is a "contradiction" to suggest otherwise. As if there's some sort of impassable gulf between csound and VCV Rack.
Let's talk about LaTeX this way. Suppose LaTeX required everything to be specified in explicitly numeric terms. I find this sort of thing tedious, and I start asking why we can't just express the idea that things be "centered," or "justified," or "kerned," or "Large(r)," or "ragged right."
Would you really say that I am speaking in terms of contradictions? That what I really need is Word, that I must not understand the difference between a textual language and a gestural one, and that the only reason to use a textual language is precisely so that I can have precise control over these matters?
What I'm really asking is whether we're up against a hard limit of programming languages, or just a lack of imagination among programming language designers in this space.
I think what you are asking for is an abstraction layer on top of what the current systems do. LaTeX is a very good analogy, as it is a set of macros that hides away the numericity of the underlying TeX typesetting system. Once upon a time people wrote papers in raw TeX because LaTeX did not exist yet.
Such a higher-level macro system has to trade off control against conciseness (though it's typically possible to insert low-level code in between the macros, since every macro must be converted to the lower level before it can be executed or rendered anyway).
Developing such a system is probably a task for a tech-savvy musician rather than a music-savvy techie. Its value proposition would be precisely to crystallize compositional "invariants" that are expressive and versatile enough to let people compose genuinely new things.
But you should keep in mind that all LaTeX papers look a bit alike :-) (though much better than Word papers).
You are looking for what we call an "algorithmic music language", where the focus is on manipulating symbolic music representations rather than digital audio.
OpusMondi is one of these. Others include my project (Scheme for Max, mentioned elsewhere at top level of the thread), Common Music, OpenMusic, etc.
Visual options include Max and Pd, which have an entire message/event layer that does not have to be used for digital audio at all.
All of these can output to standard synthesizers if the digital sound part is not your interest.
Sonic Pi binds note names to symbols like `:a4`, which map to the corresponding MIDI note numbers, if that helps. It's not comprehensive -- there are no entries for double flats/sharps, for instance -- but you can write basic chords and melodic lines with a pretty limited need for numbers.
You'll still need some for `sleep` (or `play_pattern_timed`), but if we're being pedantic about it, note lengths are already numbers as it is.
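For example (standard Sonic Pi built-ins; the only literal numbers here are durations):

```ruby
# Sonic Pi: pitches by name, numbers only for time
play_pattern_timed [:c4, :e4, :g4, :a4], [0.5]  # arpeggio in half-beat steps
sleep 0.5
play chord(:c4, :major)                         # a C major triad, by name
```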
I agree. When working with sound, I'd take a tactile experience (right-brain/feel?) over a mathematical/analytical one (left-brain/logic?) whenever possible. I also don't know music theory and haven't dedicated time to learning math, so I lean on my intuition more than anything.
As math fields go, trig is fairly easy. The part you need to understand for music is a nicely-delimited chunk that you can learn quickly. I find it very rewarding as a way to think about music; you might too.
(I'm not talking about Fourier transforms; just basic trig.)
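(For what it's worth, the whole chunk more or less fits in a sentence: a pure tone at frequency `f` is `y(t) = A sin(2π f t)`; an octave doubles `f`, each equal-tempered semitone multiplies it by `2^(1/12)`, and A4 is conventionally pinned at 440 Hz.)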
Music is numbers. Musical instructions that don't use numbers seem as feasible as architectural plans without measurements -- which is to say, somewhat, but impossible to do precisely.
(Music notation might not have explicit numbers but it's got stand-ins for them.)