Comment by ksr
3 days ago
Thanks for your answer. To me, the thrill of modeling the rich and vast domain of music practices across the world is a never-ending source of joy and discovery. To answer your specific points:
1. In my own similar library (https://github.com/infojunkie/scalextric), I decided to pragmatically throw an exception if the transpose method is called on an untransposable scale (a small sketch of that pattern follows at the end of this comment). At the moment I'm not trying very hard to handle all the cases of transposability - I'm sure there's a mathematical object, like a group or ring, that describes such cases.
2. I'll read your code to understand better how you handle sub-scale sequences. I would like to revisit my own approach to start from dyads (a sequence of two tones plucked from a given tuning) and build longer sequences up from those. I think this would make it possible to distinguish between ascending and descending sequences in a natural way.
3. I've spent quite a bit of time with MusicXML, and some time with MEI as well. Yes, neither format has a model for arbitrary tunings, which is a glaring gap. MusicXML does represent the full set of visual accidentals (including custom SMuFL glyphs) as well as per-note decimal pitch alterations that can accommodate any cent value (example at the end of this comment). MEI also represents a reasonably complete set of accidentals, although it's a closed set, as opposed to MusicXML's open set. MEI supports a few pre-defined tunings, which also falls short of generality. In addition, MEI does not support per-note pitch alterations, which makes it even harder to represent anything outside the tunings it pre-defines.
There's unfortunately little activity to update these formats to include general tunings. I'm having discussions in both communities; maybe you'd like to add your voice: https://github.com/music-encoding/music-encoding/issues/1167, https://github.com/w3c/musicxml/discussions/586.
4. I've also spent quite a bit of time on MIDI microtonality... There are 3 main approaches that I'm aware of:
a. The MIDI Tuning Standard (MTS), which maps the 128 MIDI notes to arbitrary frequencies.
b. Pitch bending, which has the limitation you mentioned above. MPE is essentially an organized methodology for reusing empty MIDI channels in a round-robin fashion for pitch bends and other controller effects (a cents-to-bend sketch is at the end of this comment).
c. MIDI 2 supports per-note controller settings out of the box, thereby superseding MPE.
My current focus is to create a MusicXML => MIDI conversion pipeline that supports arbitrary tunings, using Verovio as the MIDI conversion engine. I am of course "inventing" new MusicXML elements to represent these tunings and their mappings (and MEI elements too, because Verovio represents its own internal state based on MEI). My aim is to produce a MIDI version of the canonical Sagittal Example (https://www.sagittal.org/exmp/index.htm) from a MusicXML file.
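To illustrate point 1: this is not actual scalextric code, just a minimal Python sketch of the check-and-raise pattern; the class and method names are made up for the example.

    class UntransposableScaleError(Exception):
        """Raised when a scale has no meaningful transposition within its tuning."""

    class Scale:
        def __init__(self, tuning_size, degrees, transposable=True):
            self.tuning_size = tuning_size      # e.g. 12 for 12-EDO
            self.degrees = degrees              # indices into the tuning
            self.transposable = transposable

        def transpose(self, steps):
            if not self.transposable:
                raise UntransposableScaleError(
                    "this scale cannot be transposed within its tuning")
            return Scale(self.tuning_size,
                         [(d + steps) % self.tuning_size for d in self.degrees],
                         self.transposable)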
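To illustrate point 3, here is a rough Python sketch (standard library only, not taken from any existing exporter) of the per-note decimal alteration that MusicXML already allows - a D raised by a quarter tone:

    import xml.etree.ElementTree as ET

    # Build a single MusicXML <note> with a +50 cent (quarter-tone) alteration.
    note = ET.Element("note")
    pitch = ET.SubElement(note, "pitch")
    ET.SubElement(pitch, "step").text = "D"
    ET.SubElement(pitch, "alter").text = "0.5"   # decimal semitones, i.e. +50 cents
    ET.SubElement(pitch, "octave").text = "4"
    ET.SubElement(note, "duration").text = "4"
    ET.SubElement(note, "accidental").text = "quarter-sharp"  # visual accidental

    print(ET.tostring(note, encoding="unicode"))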
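And to illustrate point 4b, the arithmetic for mapping a detuning in cents to a pitch-bend message is simple; a sketch assuming the common default bend range of +/-2 semitones (the 14-bit bend value is centred on 8192):

    def cents_to_pitch_bend(cents, bend_range_cents=200.0):
        """Map a detuning in cents to a 14-bit MIDI pitch-bend value (0..16383)."""
        value = round(8192 + (cents / bend_range_cents) * 8192)
        return max(0, min(16383, value))

    print(cents_to_pitch_bend(0))      # 8192  (no bend)
    print(cents_to_pitch_bend(50))     # 10240 (+50 cents)
    print(cents_to_pitch_bend(-33.3))  # 6828  (about a sixth-tone flat)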
1. Thanks for the link. I am reading a bit of your code right now :) Nice to see that it is GPL-licensed too. Regarding intervals: what still confuses me in the design process is that in certain uneven tunings (e.g. Pythagorean) there seem to be two approaches to naming intervals. One would be an "absolute" approach, where intervals that do not form a 5/4 ratio would not be considered a major third, and the other a "functional" approach, where the exact frequency ratio does not matter and the interval name is simply deduced from the note names (D to F# is a major third even though the ratio is 81/64 - see the cent comparison below this list).
2. I'm not sure what exactly you mean by sub-scales. Imho the best approach in xenharmlib to define maqams from ajnas would be to define the ajnas as interval sequences in 24-EDO and then concatenate them with + into a maqam interval sequence (sketched below this list). Then you can use the result to define maqams on any note.
3. Thanks for the links. I will add something soon ;) I also tried my luck on the MEI Slack channel but did not receive any response. I think MusicXML might be a good format for exporting xenharmonic data, but importing from it is tricky. In certain temperaments the cent values have so many digits after the decimal point that an importer would be forced to apply heuristics to cope with rounding errors :/ (a snapping example follows below this list)
4. Option a) is currently the one I use, but I am somewhat unhappy about it, because it requires a "tacit" understanding between two programs of how exactly the 128 notes are redefined (also, 128 is not a lot - just think of Turkish makam theory with its 53 notes per octave; that is barely two octaves in this system). My hopes are currently on MIDI 2.
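To make the two "major thirds" from point 1 concrete, here are the cent values, computed in plain Python (nothing xenharmlib-specific):

    from math import log2

    def cents(ratio):
        return 1200 * log2(ratio)

    print(cents(5 / 4))    # ~386.31 cents: just major third
    print(cents(81 / 64))  # ~407.82 cents: Pythagorean ditone (D to F# in Pythagorean tuning)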
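For point 2, this is roughly what I mean by concatenation, sketched with plain Python lists rather than the actual xenharmlib API; the quarter-tone step counts for the ajnas are my own illustration:

    # Ajnas as 24-EDO interval sequences (quarter-tone steps between successive notes).
    jins_rast   = [4, 3, 3]   # whole tone, 3/4 tone, 3/4 tone
    jins_bayati = [3, 3, 4]   # 3/4 tone, 3/4 tone, whole tone

    # Maqam Rast: lower jins Rast + whole-tone disjunction + upper jins Rast.
    maqam_rast = jins_rast + [4] + jins_rast
    print(maqam_rast)         # [4, 3, 3, 4, 4, 3, 3]
    print(sum(maqam_rast))    # 24 -> one full octave in 24-EDO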
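And for point 3, the kind of import heuristic I have in mind is something like snapping an incoming cent value to the nearest step of a candidate temperament (purely illustrative):

    def snap_to_edo(cents_value, edo, tolerance=0.5):
        """Snap a cent value to the nearest step of an EDO, if it is close enough."""
        step_size = 1200 / edo
        nearest_step = round(cents_value / step_size)
        snapped = nearest_step * step_size
        if abs(snapped - cents_value) <= tolerance:
            return snapped
        return cents_value  # leave it alone; probably not from this temperament

    print(snap_to_edo(701.9549, 53))   # ~701.887 (31 steps of 53-EDO)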
1. Regarding the naming of intervals: this confused me for months, until I understood (or convinced myself) that there are in fact two different things that are called "intervals": intervals between tuning pitches (which is your first example) and intervals between scale notes (your second example). Thinking of them as different things (and naming them differently) has helped me a lot with modeling musical objects in scalextric. For example, tuning intervals in different tunings (12-TET vs. Pythagorean) can end up with the same logical interval name because they approximate the same logical relationship between notes, even though the physical frequencies differ.
2. Sub-scales = what you call an interval sequence.
If you don't mind, we can continue this conversation by email. Mine is in my profile on HN.