Linux DAW: Help Linux musicians to quickly and easily find the tools they need

1 day ago (linuxdaw.org)

I’d like to see some company come out with a wrapper for Logic, Ableton, ProTools, etc with the following:

- portable, reproducible environments: I suppose you could achieve this with a docker setup. If I jump to a different workstation, I want to be able to load my current project without playing setup wrangler.

- license management like a dotfile database: all my licenses are scattered to the wind across two or three email addresses, and every time my PC crashes (twice in the past 5y) or something breaks I have to go recollect them. It’s a quest. Quests suck.

- remote or cloud processing: connect to your workstation if you’re on the road, or to a cloud cluster running DAWs. Sure, this introduces some lag, which can force you into drag-to-piano-roll workflows, but other times you’re just messing with mixes. And gaming giants like Sony figured it out for PS4/5 titles.

- shareable projects with some innovative business solution to the license barrier. I want to be able to access somebody else’s project and load it in its entirety—whether I have Neural DSP’s latest Archetype or not. Whether I have Serum2 or not. Apple managed to get bands onto iTunes. We seriously can’t get Waves or Neural DSP to go the same route? Some royalty-based approach?

I know these are outlandish requirements in some scenarios but I feel like this would fix the misery of music making in the era of Windows and macOS being goblinware operating systems.

  • I just don't think the economics of this work. Software synths are just such cutthroat competition that there is no profit margin.

    I have a Pittsburgh Modular TAIGA that cost $800. How much am I willing to pay for a software synth doing analog modeling? Basically nothing.

    To give everyone a big enough piece of the pie to make it worth their while, the whole pie would cost the end user so much that no one would bother.

    My experience with music collaboration is that the technical challenges have been minimal for a long time now. The challenges of collaboration are social and being on the same page musically. Even in the 90s, it wasn't hard to find people to make rock music with locally. The problem was always what was meant by "rock music" to begin with.

    It is exactly the opposite problem of video games. It would be like the economics of video games if the goal was to team up to play video games no one else was playing.

    I think this would only work if every musician's goal was to be Taylor Swift.

  • The site linked in the title is mostly about FLOSS projects for people doing audio/music creation work on Linux. The problems you're describing, in particular the licensing ones, are endemic to proprietary software. While this still affects e.g. Bitwig on Linux, it isn't really a feature of Linux audio software in general.

    Cloud work so far just hasn't had much appeal. People are walking around with Apple M{1..5} laptops with enough compute power to do things you couldn't do on a studio system 10 years ago. Sure, if you're doing sample-based playback with really high-end sample libraries, or physically modelled synthesis, you can always max out any system with a large orchestral piece, but the stuff you can do on a laptop on a bus or train or in the back of a car really does encompass most of what most people want to be able to do.

    iTunes was useless for people on Linux, just as any system that convinced Waves to participate in some sort of "implicit licensing" scheme would be (even though they run Linux inside the hardware units they sell). Again, this link was to a site specifically concerned with the situation for audio work on Linux; until plugin developers en masse recognize it as a valid platform that they should support (improvements every day, but very slowly), this will be as useless for Linux as iTunes was (and remains).

  • Despite open plugins, and mostly open end-result file formats, the entirety of the media software world is built around proprietary software, primarily for several reasons: ephemeral fads have you making money in one big upfront push, integration of some new technique doesn't lend itself to open standards, and the actual bulk of paying customers are presumably working musicians with budgets, a market that is ultimately small. The side effect of this, much like radios, RC planes and drones, and many other hobbies, is that a collection of smaller product-producing organizations share an even smaller, more even playing field full of smaller professionals and some amateurs. Some of the modular hardware producers have the right idea and provide free versions of their hardware as plugins as a marketing gimmick for their actual hardware. However, outside of the Linux world, the mystique of a proprietary salve that will dissolve your creative block drives people towards short-sighted sales pushes instead of trying to lock in a give-and-take interaction with the broader community.

    But I would love everything that you list. I think things like PipeWire are, for better or worse, pushing things towards sanity, or at least towards better ideas for managing the mess in the open source world, a mess which is decades in the making.

  • I'm a former film/game composer turned programmer, and you basically just outlined what I hope to be my life's work :p Each and every one of these is a white whale for me, and is something I'm working on in one way or another.

    Get in touch if you'd like to chat more about this stuff (my email is in my profile).

It took quite a bit of scrolling until I found my old faves of dexed and zynaddsubfx, and I didn't see Helm (https://tytel.org/helm/) at all.

  • Helm has been replaced in practice by Vital (same author), I think.

    • They are completely different synths.

      Vital is a wavetable synth; Helm is a subtractive synth.

      Helm was the first synthesizer that I really excelled with. I would recommend that anyone who wants to actually learn the fundamentals of synthesis start on it. Once you get good at it, it's faster to dial in the exact sound you want than to reach for a preset.

      It's far more straightforward and less complicated than additive (ZynAddSubFX), FM, or wavetable synths.

      That being said, if you just want a very capable synth with a lot of great presets, Vital is far more advanced.

i can recommend renoise, not a sequencer but a tracker, used for creating demo tracks (the music in cracks) or genres like breakcore, jungle and edm. venetian snares uses it, after having used cubase in the 90s/00s.

it's rather customisable, reasonably priced and just works great as a daw for electronic music.

https://renoise.com/download it even comes with a demo and its own vst for use in other daws.

This is fantastic! There are plenty of us out there that don't mind paying for software if it's high quality. This is an excellent resource for people who are less militant about open source and just want to make music.

Thanks for making the list, but I hate endless scroll. Who doesn't? It makes no sense to me, GUI torture. Why make it impossible to reach the footer?!

Side note: looking at the screenshot gallery on the linked site, it is interesting to see how often audio software GUIs mimic real, physical devices in remarkable detail. Carefully crafted graphics for volume dials, sliders etc.

This. Is. Awesome.

Really. It amazes me that I still find out about new Linux plugins after years of producing music on the platform. It could not have been easy to compile this; the information is all over the place online.

The ability to filter (!) for compression, saturation, etc. is so great.

Just an fyi to anyone making or thinking of making one of these:

Turning a knob with a mouse is the worst interface I can think of. I don't know why audio apps/DAWs lean so hard on skeuomorphism here when the interface just doesn't make sense in the context.

  • I use knobs every day in my audio tools (with my track pad) and they're perfectly fine as long as they have three features:

    1. Drag up/down to change the value.
    2. A modifier key to slow the drag for finer resolution changes while dragging.
    3. The ability to double-click the knob and type in a precise value when I know exactly what I want.

    (A rough sketch of this interaction is at the end of this comment.)

    The problem with knobs in a GUI is when designers stick with them even though there is a faster option, like an opportunity to combine three knobs.

    For example, the EQ on any SSL channel strip is a nightmare because they slavishly stick with the skeuomorphic design of the original hardware. The hardware required mixers to use two hands to adjust gain and frequency at the same time, and then dial in Q on a third knob. Very tedious when you have a mouse.

    When this is done right, you get something like FabFilter's Pro-Q parametric EQ. The gain and frequency controls are instead an X/Y slider that you can easily drag across a representation of the frequency spectrum. In addition, you can use a modifier key to narrow/widen your Q. All with a single click and drag of your band.
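
    A minimal sketch of that knob interaction, assuming a plain DOM widget; the names (KnobState, attachKnob) and the sensitivity constants are made up for illustration, not taken from any real plugin framework:

    ```typescript
    // Hypothetical knob handler: vertical drag changes the value, Shift gives fine
    // control, double-click lets you type an exact value. The resolution comes from
    // the sensitivity constants, not from the widget's size in pixels.
    interface KnobState {
      value: number; // normalized 0..1
      min: number;
      max: number;
    }

    const COARSE_SENSITIVITY = 0.005; // value change per pixel dragged
    const FINE_SENSITIVITY = 0.0005;  // with Shift held

    function attachKnob(el: HTMLElement, state: KnobState, onChange: (v: number) => void) {
      let dragging = false;
      let lastY = 0;

      el.addEventListener("mousedown", (e) => {
        dragging = true;
        lastY = e.clientY;
        e.preventDefault();
      });

      window.addEventListener("mousemove", (e) => {
        if (!dragging) return;
        const dy = lastY - e.clientY; // dragging up increases the value
        lastY = e.clientY;
        const sens = e.shiftKey ? FINE_SENSITIVITY : COARSE_SENSITIVITY;
        state.value = Math.min(1, Math.max(0, state.value + dy * sens));
        onChange(state.min + state.value * (state.max - state.min));
      });

      window.addEventListener("mouseup", () => { dragging = false; });

      el.addEventListener("dblclick", () => {
        const typed = window.prompt("Value:", String(state.min + state.value * (state.max - state.min)));
        if (typed === null) return;
        const v = Number(typed);
        if (Number.isNaN(v)) return;
        state.value = Math.min(1, Math.max(0, (v - state.min) / (state.max - state.min)));
        onChange(state.min + state.value * (state.max - state.min));
      });
    }
    ```

    Note that the resolution here comes entirely from the sensitivity constants, not from how many pixels the knob occupies on screen.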

    • > For example, the EQ on any SSL channel strip is a nightmare because they slavishly stick with the skeuomorphic design of the original hardware.

      True, though I would put this very much in the "feature, not a bug" bucket. These tools are for people who have worked with the original hardware and want a very faithful emulation, including the look and feel. In the digital world with a modern PC there's not much need for a channel strip plugin in the first place, so the only people using one are doing so with intention.

      It's a bit like saying that manual transmission cars could be controlled more easily if they were automatic transmission; it's completely true, but if you're buying a manual you want that experience.

      Pro-Q is a great example of a digital-first tool (the automatic transmission equivalent), with lots of great visual feedback and a lot of thought put into a mouse+kb workflow. All of Fabfilter's stuff is like this actually, though sometimes to its detriment; the Fabfilter automation and LFO system feels very different from basically every other plugin. It's actually a more efficient workflow when you get used to it, but due to how different it is from everything else most people I talk to dislike it unless they've really bought into the Fabfilter suite.

      Which kind of goes back to the original point: VSTs use knobs because it's what people are used to, and using something different might be a negative even if it's better!

  • Good morning. An expanding plethora of buttons, tabs, and menus requires geometrical memory that may have nothing directly to do with the function in question. The first GUIs were designed so that functions would be "discoverable," but the size of the haystacks in which these discoverable functions hide has grown exponentially, adding cognitive overhead and increasing the length of apprenticeship needed to master the application.

    A slick-looking GUI is a kind of ad for the app. As author of an accessible, terminal-based DAW app, I contrast remembering an incantation like 'add-track' or 'list-buses' with hunting around. These incantations can have shorter abbreviations, such as 'lb' for list buses, with 'help bus' or 'h bus' to be sufficiently discoverable, easier for both implementer and user. And then to have hotkeys to bump plugin parameters +/- 1/10/100 etc. Probably I'm pissing into the wind to think the majority of users will ever choose this -- and GUIs do provide amazing facilities for many purposes -- but we do have a huge array of choices on Linux, including this plethora of music creation and production apps. That is a big success, IMO.
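
    For what it's worth, the abbreviation idea is tiny to implement. A toy sketch (command names borrowed from the examples above, everything else hypothetical):

    ```typescript
    // Toy command resolver for a terminal DAW: exact names, registered aliases,
    // and unambiguous prefixes all resolve to the same command.
    const commands: Record<string, string[]> = {
      "add-track": ["at"],
      "list-buses": ["lb"],
      "help": ["h"],
    };

    function resolve(input: string): string | undefined {
      for (const [name, aliases] of Object.entries(commands)) {
        if (name === input || aliases.includes(input)) return name;
      }
      const matches = Object.keys(commands).filter((n) => n.startsWith(input));
      return matches.length === 1 ? matches[0] : undefined; // ambiguous or unknown
    }

    console.log(resolve("lb"));  // "list-buses" (registered abbreviation)
    console.log(resolve("add")); // "add-track"  (unambiguous prefix)
    console.log(resolve("h"));   // "help"       (alias)
    ```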

  • A 20-pixel knob has considerably greater resolution than a 20-pixel slider with its max resolution of 20. I don't think I have come across a digital knob that you have to turn with the mouse since the previous century; you just drag up or down or left or right.

    • A slider is just a UI element, as is a knob; the underlying resolution does not have to correlate 1:1 with the exact number of pixels something takes up on screen. The resolution could be effectively infinite depending on the implementation of the controls.

  • It allows for dense controls and everyone's used to them. I don't find them to be a problem, they aren't intuitive in that you might think you're supposed to grab the knob and "turn" it with a circular cursor motion or something, but once you learn to drag linearly, they're an easy to use and consistent interface. And as giancarlostoro mentioned, you can map them to a MIDI device if you want to twiddle knobs while playing/recording live.

    • I'll add that the skeuomorphism here is generally pretty functional; you touched on this when you said "everyone is used to them".

      But the layout of these buttons, while certainly not standard, is generally familiar across various filters, etc. So if you are dealing with a complex interface, the skeuomorphism absolutely helps to make the input more familiar and easily accessible.

      This is what skeuomorphism is for, and this is a great place to use it.

      Imagine if the symbols for "play" "pause" and "stop" were changed simply because it no longer made sense to follow the conventions of a VCR, then multiply that by an order of magnitude.

  • Unless the implementation is really bad, you actually have more control over these knobs than you would have over sliders. You could technically remove the knob completely and replace it with just a textual number that you click on and drag, but the knob is easier to read.

  • It works great, though; what's the alternative? It's visually small, so you can fit a lot of controls in a small space. You can glance at it and know the current setting and where it falls within the range of possible values. By making the mouse control modal when you click on a knob (so you start dragging and can drag over a much larger area than you could for, say, a slider, which isn't modal), you have immensely precise control over the value in real time, while still being able to quickly make big changes. This is essential for performance. Combining this with some gentle mouse acceleration for the rate of change of the control when dragging gives you even more precise control. This isn't possible with a slider either.

    I would say the opposite, it's basically the perfect interface for a very specific scenario with requirements that don't really occur in much other computer software.

    • The alternative is the mouse wheel and keybinds. Flight simulators got this right. Roll up on the wheel to increase the value, roll back on the wheel to decrease the value. Left click to push, right click to pop (or context menu; left click to push it again to turn off).

      In fact, if it were all MIDI-controlled, it would just be a matter of mapping the mouse scroll wheel to a MIDI channel.
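
      A rough browser sketch of that idea using the Web MIDI API; the element id and the CC number (74) are arbitrary choices for illustration, and a desktop DAW would do the same thing through its own MIDI-learn layer:

      ```typescript
      // Hypothetical: turn wheel events over a knob element into MIDI CC messages.
      const CC_NUMBER = 74; // arbitrary controller number for this sketch
      let ccValue = 64;     // current value, 0..127

      navigator.requestMIDIAccess().then((midi) => {
        const output = [...midi.outputs.values()][0];
        const knob = document.getElementById("cutoff-knob");
        if (!output || !knob) return;

        knob.addEventListener("wheel", (e: WheelEvent) => {
          e.preventDefault();
          // Roll up to increase, roll down to decrease; Shift for finer steps.
          const step = e.shiftKey ? 1 : 4;
          const delta = e.deltaY < 0 ? step : -step;
          ccValue = Math.min(127, Math.max(0, ccValue + delta));
          output.send([0xb0, CC_NUMBER, ccValue]); // Control Change on channel 1
        }, { passive: false });
      });
      ```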

  • > Turning a knob with a mouse is the worst interface I can think of.

    I'm racking my brain trying to think of a better interface for selecting a number within a range of values, where the number is a point on a continuum and not any specific value, and I can't think of one. The equivalent "traditional" UX for webapps would be a slider control, but that's functionally the same, and you'd be going against many years of domain-specific common understanding for not much benefit.

    • I personally prefer the good old number box, but they have their problems: you actually have to read each and every box to see what the state is, whereas with sliders and knobs we can see the value of a great many controls at a glance.

  • If not using hardware, you just click and move horizontally or vertically; I'm not sure what a better interface would be. Though I do like it when the numeric value is shown while changing. I really don't know what other UI would work well here. Usually there are so many knobs that it makes sense to be compact. Though really it also makes sense to match the visualization of the knobs on my MIDI controller anyway.

  • Also, they are horrifically broken if you use an OS-level magnifier (ctrl+scroll etc). I don't know if this is the application devs' fault or not; I haven't investigated OS mouse-warping APIs. Warping the mouse back to the center of the knob goes into a feedback loop with the magnifier and spams crazy mouse events such that every knob will immediately go to min or max. Really shameful accessibility fail that no one cares about.

  • most daws allow you to map hardware to the dials so you don't need to tweak by mouse. that being said, good automations are a fair replacement depending on your style of music: lfos, adsrs and pattern tools for automation lanes, as well as the ability to record automations (to keep them consistent, modify them manually etc), and of course humanization algorithms that you can apply to automation lanes.

    i never use 'hardware', totally happy doing what i do (that's music i think. enjoying your craft). most people i know using similar tools do have midi controllers to have more of an instrumental interface. there's tons of options. no need to discourage anyone...

    • and most interfaces watch for CTRL or SHIFT to increment/decrement values slower or faster depending on the modifier held... that allows one to turn a knob with much greater precision than a physical interface!

      double-clicking usually lets one type the value... really good interfaces let one scroll seamlessly, independent of screen borders; the perfect pair with a trackball or a long surface/desk for sliding the mouse

  • It makes a lot of sense when you're holding a chord on a MIDI keyboard with one hand and dragging various knobs with a mouse in the other. Once you know the params you want to tune, you can obviously automate or map them to a MIDI controller, but doing that upfront slows things down considerably.

  • I can't think of the last time I used a knob with a mouse; you usually map it to a knob on a MIDI device and the GUI just gives you visual feedback

    • Really depends on your workflow. Many, many successful musicians are entirely or almost-entirely "in the box" and use mouse+kb for everything. Doubly true when you're talking about mixing and mastering workflows where you're not usually going to be using a MIDI controller at all (but doing plenty of knob-tweaking).

  • Isn't the entire idea that you hook it up to physical hardware?

    • No. MIDI controllers have their place, but many people work without one, or only use one for live performances. There are often also way more knobs in the various FX chains in a DAW than you would reasonably want to map to a controller, but still want to touch at least a few times while making a song.

      Knobs are confusing when converted to a mouse paradigm because there can be a few strategies to control them (click+drag up/down, click+drag right/left, weird rotational things, etc), and you have to guess, since each plugin and DAW may implement it just a little differently.

  • Are you experienced with DAWs as a composer or producer?

    Many if not most professional producers use MIDI controllers with knobs/sliders/buttons MIDI-mapped to DAW controls. As such, the skeuomorphism actually plays a valuable role in ensuring that the physical-instrument experience maps to their workflows. Secondarily, during production/mastering, producers are generally using automation lanes and envelopes to program parameters into the timeline, and the piano roll to polish the actual notes.

    When I've historically done working sessions, the composition phase of what I'm doing tends to involve very little interaction with the keyboard, and is almost entirely driven by my interaction with the MIDI controller.

    Conversely, when I'm at the production phase, I am generally not futzing around with either knobs or the controller, and I am entirely interacting with the DAW through automation lanes or drawing in notes through the piano roll. So I don't really ever use a knob through a mouse, and I've never really encountered any professional or even hobbyist musicians who do, except for throwaway experimentation purposes.

  • The amount of time it takes to have 1 debate about the choice is more time than I'll spend in my entire life figuring out how all the specific "knobs" I'll ever touch work. It's just not a real problem.

    Reaper has a standard UI for controlling plugins you can use instead of the VST UIs, other DAWs probably do too. It's an awful, lifeless sea of sliders and check boxes that hurts to look at, and instantly drains one of all creativity.

    • I've heard this POV before. Personally, I'm glad there's a DAW option with a no-frills approach to UI. I don't want a flashy or "inspiring" UI. Everything should be within arm's reach and do what it says on the tin. All the creativity happens in the audio domain. I prefer to use my ears.

      Some people like Reason for instance, but I find that its UI innovations just get in my way.

For some reason "Linux musicians" made me think of someone making art out of 'cat /dev/random > /dev/dsp', and made me wonder what Windows musicians are like (lots of anger and frustration to express I'd imagine)

  • "linux musician" once meant trying to get this audio program to compile that would never work because I am not a unix system admin.

    I didn't realize Dave Phillips had passed away. I remember he had an incredible page of audio software links, but it was all stuff I almost never got to make any sound. Sometimes I would even blow up my whole system trying to get something to work and have to reinstall the whole operating system.

    Seeing how far we have come with this site is just incredible.

  • I currently run a PC based Ableton setup albeit one that is exclusively ITB and uses no external gear outside of a sound card and an Ableton Push gen 1.

    I've got no issues with it.

    deadmau5 is famously a PC guy as well; he seems to have no issues with Windows (none that I know of, or that are not extremely specific to a setup that involves millions of dollars' worth of hardware and multiple computers). His setup is like an amusement park for nerds.

  • Back in the pre-ALSA days, when Linux used OSS, you could pipe /dev/random into /dev/dsp and get noise; you could pipe anything into /dev/dsp and generally get some sort of noise. You can possibly still do this on the BSDs, since they still use OSS.

Is there a way I can see which would run on a raspberry pi?

  • Zynthian (a RPi-based synth collection & groovebox) lists some of the most prominent ones on its website: https://zynthian.org/engines

    Its install recipes directory may yield a less fancy, but probably more comprehensive list: https://github.com/zynthian/zynthian-sys/tree/oram/scripts/r...

    With Zynthian OS up and running, the full list of plugins shows in its webconf page; it's so long that they have to hide most of the plugins from the main on-device UI.

    Roughly speaking, if it's open source, it will most likely work. If it's proprietary, assume that only Pianoteq and a small number of u-he plugins will work. Most commercial products with binary-only distribution don't feel that RPi devices are a large enough market to build binaries for, even if they otherwise offer ARM builds for Apple Silicon and Linux builds for x86.

  • KXStudio supports the RPi; it comes with a few DAWs and a great deal more. It is probably your best bet for this stuff on the Pi unless you want to compile stuff yourself.

    https://kx.studio/

I used this site as a reference earlier this year quite often, as I attempted to establish some baseline for a Linux DAW. I use Mac for serious audio stuff, but 'what if' (since I use Linux for everything else anyway).

I came back pleasantly surprised with the current state of things, minus the underlying Linux sound system, which is still a mess of things that barely work together. (I have a lot of expensive/pro plugins and all the DAWs on the Mac, so this was mostly a filtering exercise: what can I use on Linux that can still mix/master a whole project.)

- I'm not a FOSS purist in audio, so that wasn't a requirement. But I am a 'Linux purist', so no VST wrappers of Windows DLLs etc.

- Watershed moment for me: Toneboosters and Kazrog coming to Linux. Along with u-he, these make for a very, very high quality offering. You can easily mix a commercial release just with these. Kazrog isn't even 'Linux beta' like the rest; it's a proper full release on Linux. I was briefly involved in the Linux beta testing; Shane & co are incredible people.

- I have most/all DAWs for the Mac. Reaper and Bitwig on Linux are enough for me and feel like good citizens on Linux. (ProTools is never coming, and neither is Logic, but the addition of Studio One makes for a really good trio.)

- Any USB class-compliant audio interface will work (modulo control applications which generally aren't available on Linux, so ymmv).

- iLok is missing, which removes a whole host of possible options (I have 500+ licences on my iLok dongle; none of that stuff is accessible). I can't say I miss iLok, but I do miss Softube (not that it's available on Linux, iLok or not).

I made a few 100+ track mixes on my ThinkPad with Reaper and the above combo of plugins, and it worked just fine.

But Linux is still Linux, and 30 years later it still annoys me with typical 'Linux problems', which generally boil down to 'lack of care'. UI is still laggy, compositors be damned. While Reaper is butter-smooth on a Mac, where the audio thread never interferes with the UI (and vice versa), it can get quite choppy on Linux. If you allow your laptop to go to sleep with a DAW open, chances are good that upon resuming you'll have to restart it, as it will lose sound. And there are a lot of smaller annoyances that are just lack of polish and/or persistent bugs that I'm sadly used to on Linux (want to switch users on Linux Mint? The lock screen can get hella confused and require a lot of tinkering to get the desktop back). But overall, it's a million miles away from the hobbyist endeavour that Linux audio used to be until recently. I could get actual work done with Linux this time around.

  • PipeWire seems to solve all the audio stuff for me; zero problems since I made the switch. I had audio fail on resume once when I first installed PipeWire; if memory serves, the default setting for PipeWire was to restart the audio server on resume, which screwed things up because JACK kept running. Something like that. The fix was simple, just comment out a line and uncomment another. Everything audio has just worked ever since.

    I have not had any UI issues in at least a decade on Slackware. The few times I tried Mint over the years, it was filled with random annoyances like you mention.

    Edit: This is not advocating using Slackware for audio work; it works great, but it is Slackware, and most don't get along with the Slackware way. But there is a DAW module for AlienBob's Slackware Live Edition[0]. It worked alright when I tried it, as well as any other live distro.

    [0] https://docs.slackware.com/slackware:liveslak

    • I don't/didn't use JACK at all, straight into PipeWire, which makes for a super unintuitive way to select the 'audio device' in Reaper (iirc, something like select ALSA and 'default' for input/output, and somehow that's all routed via PipeWire). I'm not unhappy about PipeWire; I finally have a low-ish latency audio system (enough for mixing, if not recording) that I don't have to spend hours on to get it to work. A la macOS.

      But generally that's my point: 'it works if you go and edit this obscure line in this obscure config file'. The Mac has had a stable CoreAudio backend for a quarter of a century now (counterpoint - Windows is also a mess). I wish Linux would stabilise its userland a bit more and stop rewriting stuff every few years.

      Sometimes I wish there were a commercial company behind 'Linux for audio' that would give me a finely tuned Linux distro on a finely tuned desktop machine, based on whatever distro, I don't really care. But have it all released/patched at their own pace, as long as everything 'just works'. I'd be happy to pay for that. The whole 'the OS is due an upgrade, is anything going to work tomorrow, I have a session' problem is still unsolved on _every_ OS/platform. Most busy studio heads go years without installing/upgrading _anything_ for fear of having a lemon after said upgrade, with clients waiting at the door.

This is great. It makes these tools much more discoverable. I can't help but notice the drop in plugin UI quality once you click the FOSS filter checkbox. Something in me wants a FOSS plugin to come with a cool skin like the freeware ones do, but I know that's silly.

  • It really says something about designers that so few of them contribute to FOSS projects. It also says something about FOSS devs that they don't/can't find better UIs for their projects. Especially for web-based UIs, where CSS isn't that hard: you can look at sites you want to emulate and get much, much closer to a respectable UI.

    • A not-so-insignificant number of FOSS developers are well able to make quality UIs, but decide to charge for their more polished creations.

      Between having to make a living somehow, and not reaping a whole lot of other personal benefits from open source audio development, it takes a very special kind of person to publish these contributions in the first place. Once they're published, generally with their UI defined in code by a developer, they're not necessarily easy for a designer to edit.

      Nor is there much of a steady community around most of the plugins. So many are "publish, feature-complete enough, move on" kind of projects.

      As always, be the change you want to see in the world.

it's an ad for apps that cost as much as a box of decent used pedals and rack-mount gear. though "linux musicians" does appear to be a thing, and the bot used to check if you are human is amusing and fully automated.

https://linuxmusicians.com/

  • I actually assumed the link was linuxmusicians.com and I bet I am not the only one who assumed that. It is not an ad, but a store that also lists free software.

The real-time, low-latency, multi-channel audio streaming needed for musicians is awfully similar to the real-time, low-latency, multi-channel audio streaming required for telephony.

Yet somehow the two industries have pretty much entirely different tech stacks and don't seem to talk to one another.

  • This is very much not true.

    Telephony is significantly less latency-sensitive than real-time audio processing; it’s also significantly less taxing, since you’re dealing with a single channel.

    The level of compression and audio resolution required are significantly different too. You can tune codecs for voice specifically, but you don’t want compression when recording audio and can’t bias towards specific inputs.

    They’re only similar in that they handle audio. But that’s like saying the needs of a unicycle and the needs of an F1 car are inherently the same because they have wheels.

  • Most telephony I've experienced has latency measured in seconds (if you ever call your friend or spouse sitting next to you it becomes very obvious :) vs audio recording and processing, where it is measured in milliseconds.

    Additionally, from what little I'm aware of, telephony is heavily optimized for particular frequencies of the human voice and then heavily compressed within that. As well, any single telephony stream is basically a single channel. A song may have dozens of channels, at high resolution, full spectrum, with all sorts of computationally demanding effects and processing, and still needs latency and sync measured in milliseconds.

    So... kind of the opposite of each other, while both being about processing sound :-).

  • I feel like equating telephony and music production is like saying writing firmware and writing an HTTP/JSON backend for a website are the same. True, both are programming I suppose, but with vastly different requirements, assumptions and environments.

  • This is a very interesting thought. I'm not super experienced with low level audio and basically completely ignorant of telephony.

    I feel like most people doing audio in music are not working at the low level. Even if they are creating their own plugins, they are probably not integrating with the audio interface. The point of JACK or Pipewire is to basically abstract all of that away so people can focus on the instrument.

    The latency in music is a much, much bigger issue than in voice, so any latency spike would render network audio completely unusable. I know Zoom has a "real time audio for musicians" feature, but outside of a few Zoom demos during lockdown, I'm not sure anybody uses this.

    Pipewire supports audio channels over network, but again I'm not entirely sure what this is for. Certainly it's useful for streaming music from device A to device B, but I'm not sure anybody uses it in a production setting.

    I could see something like a "live coding symphony", where people have their own livecoding setups and the audio is generated on a central server. This is not too different from what, say, Animal Collective did. But while live coding is a beautiful medium on its own, it does lack the muscle memory and tactile feedback you get from playing an instrument.

    I would love to see, as you said, these fields collaborate, but these, to me, are the immediate blockers which make it less practical.

    • "Even if they are creating their own plugins, they are probably not integrating with the audio interface".

      The audio interface is abstracted away in exchange for some metadata about the buffer's properties and the buffer itself, and that is true for basically everything related to audio: the buffer is the lowest level the OS offers you, and you are free to implement lower-level stuff in your DSP/instrument, like using assembly, or maybe functions for SSE-, AVX- or NEON-based acceleration.

      You get chunks of samples in a buffer, you read them, do something with them and write the result out into another buffer.
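
      A minimal sketch of that model (not tied to any particular host API; the block size and gain are just placeholders):

      ```typescript
      // The host hands you a block of samples; you read it, do something, and write
      // the result into the output buffer. That's the whole contract.
      function processBlock(input: Float32Array, output: Float32Array, gain: number): void {
        for (let i = 0; i < input.length; i++) {
          output[i] = input[i] * gain; // per-sample DSP goes here
        }
      }

      // The host would call this once per block, e.g. 256 samples at a time:
      const inBuf = new Float32Array(256).map(() => Math.random() * 2 - 1); // fake input
      const outBuf = new Float32Array(256);
      processBlock(inBuf, outBuf, 0.5);
      ```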

      "Pipewire supports audio channels over network" thanks for reminding me: I'm planning to stream the audio out of my Windows machine to a raspi zero to which I will then connect my bluetooth headphones. First tests worked, but the latency is really bad with shairport-sync [0] at around 400 ms. This is what I would use Pipewire for, if my workstation were Linux and not Windows.

      Maybe Snapcast [1] could be interesting for you: "Snapcast is a multiroom client-server audio player, where all clients are time synchronized with the server to play perfectly synced audio. It's not a standalone player, but an extension that turns your existing audio player into a Sonos-like multiroom solution."

      "I could see something like a "live coding symphony", where people have their own livecoding setups and the audio is generated on a central server." Tidal Cycles [2] might interest you, or the JavaScript port named Strudel [3]. Tidal can synchronize multiple instances via Link Synchronization. Then there's Troop [4], which "is a real-time collaborative tool that enables group live coding within the same document across multiple computers. Hypothetically Troop can talk to any interpreter that can take input as a string from the command line but it is already configured to work with live coding languages FoxDot, TidalCycles, and SuperCollider."

      [0] https://github.com/mikebrady/shairport-sync

      [1] https://github.com/snapcast/snapcast

      [2] https://tidalcycles.org

      [3] https://strudel.cc

      [4] https://github.com/Qirky/Troop

  • irony amplified by the nature of the tech stacks xD surely they can figure out some channel to communicate over clearly haha

  • Not really! AES67 is essentially RTP with a PTP-derived media clock. Connection description uses SDP, and unicast signaling uses SIP. Just like VoIP.

    Also I imagine TDM was first used in telephony.