Streaming audio between two Macs

Macs can share their screens, so that you can use one Mac from another, as if you were sitting in front of it. The Screen Sharing feature makes remote access to another computer incredibly easy. You can see the remote Mac’s display and operate it with your own keyboard, mouse or trackpad. You can even access all the peripherals connected to that other Mac.

But you can’t hear any sound from it: audio is not shared. A commercial solution to this missing feature is Rogue Amoeba’s Airfoil. A free one, which you may already have on your computers, is Skype, together with Soundflower or Rogue Amoeba’s Loopback.

First of all, install Soundflower on the controlled Mac. You can download it for free from the site of its developer, Matt Ingalls. Once it is installed, you get a virtual audio stream between applications: audio sent to Soundflower by one application can be recorded by another application that has Soundflower as its audio input. It is like a virtual cable running between applications.

In the controlled Mac’s System Preferences > Sound pane, choose Soundflower (2ch) as the audio output. You can do the same from the volume icon in the menu bar, if you have enabled the relevant option. At this point, the Mac is outputting stereo audio via Soundflower.

Then, go to Skype’s Audio and Video settings, and choose Soundflower (2ch) as the audio input device. This makes Skype listen to audio coming from Soundflower, instead of the microphone. Any application sending audio to the Mac’s audio output (which is now Soundflower) will be sending audio to Skype.

On the controlling Mac, log into Skype with a different account, and call the account used on the controlled Mac. When you answer your own call, you can hear on the controlling Mac the audio generated by the controlled Mac.

Please keep in mind that, depending on the speed of the connection, audio may be delayed. Skype should choose to connect via the LAN, rather than via the internet, but its audio stream will still not be in real time. Lip-sync, for example, may not be perfect. However, this should be enough to preview a score or an edited sound.

Praise for the open meter and key

One of the most common criticisms leveled at Dorico is that new documents are not in 4/4 meter and C Major: they are in an open meter and key.

In my view, this is one of the strongest and most innovative points of Dorico. I liked it immensely in Igor Engraver, and I’m very happy to find it again.

From the point of view of a student or beginner, having a blank staff is highly educational, since it forces one to learn how to choose the right meter and key. Neither is a given; both have to be carefully considered. Is having them preset easier? It depends on the target user. If someone only needs a way to transcribe a simple tune, maybe a program as complex as Dorico Pro is not the right one.

For someone writing music in the 'academic' styles of the 20th century onward (including much film music), having a blank staff is liberating. The idea of 'flow' is at the basis of Dorico, and flow is not only the name of one of its structural elements: it is the deep philosophy of this program. Music flows. You are free to give it a meter, to design patterns. But at its basis it is a free flow in time.

Incommunicable microtuning

Not many DAWs, notation programs, players, virtual synths and sound libraries allow for alternative tunings and microtonal accidentals. Some allow them, but are then unable to communicate the altered notes to other software.

For example, a DAW or music program may allow for microtonal accidentals, but then send out a message that most sound players can’t understand (say, VST Note Expression instead of the MIDI Tuning Standard).
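To make the MIDI Tuning Standard side concrete, here is a sketch of how a single retuned note can be encoded as an MTS real-time single-note tuning change SysEx message. The function names are mine, and the device and program IDs are placeholders; the encoding itself (base note plus a 14-bit fraction of a semitone, in units of 100/2^14 cents) follows the MTS layout.

```python
import math

def freq_to_mts(freq_hz):
    """Encode a frequency as an MTS triple: (base MIDI note, MSB, LSB).
    The fraction of a semitone above the base note is expressed in
    units of 100/2^14 cents, split into two 7-bit bytes."""
    semitones = 69 + 12 * math.log2(freq_hz / 440.0)  # distance from A4 = note 69
    note = int(semitones)
    frac = round((semitones - note) * (1 << 14))  # 14-bit fraction of a semitone
    if frac == 1 << 14:  # rounding spilled over into the next semitone
        note, frac = note + 1, 0
    return note, frac >> 7, frac & 0x7F

def single_note_tuning_sysex(key, freq_hz, device=0x7F, program=0):
    """Build a real-time single-note tuning change SysEx message:
    F0 7F <device> 08 02 <program> <count> <key> <note> <msb> <lsb> F7."""
    note, msb, lsb = freq_to_mts(freq_hz)
    return bytes([0xF0, 0x7F, device, 0x08, 0x02, program,
                  1, key, note, msb, lsb, 0xF7])
```

For instance, retuning key 69 to 441 Hz encodes a pitch about 3.9 cents above A440 — a nuance that a plain note-on message simply cannot carry.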

Alternative tunings are useful for music based on non-Western standards (Post-Minimalism meets Ancient India), for early music (for example, tuning a harpsichord to match a lute), or for experimental music (Harry Partch-inspired microtonal scales, or the mystical Pythagorean tuning).
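The Pythagorean case is easy to show in code. The sketch below (function names are mine) builds a twelve-note Pythagorean scale by stacking pure 3:2 fifths and folding them back into one octave, then measures the famous comma by which twelve fifths overshoot seven octaves:

```python
import math
from fractions import Fraction

def pythagorean_scale():
    """Twelve pitches built from a stack of pure 3:2 fifths,
    folded into a single octave and sorted, as cents above the tonic."""
    cents = []
    for k in range(12):
        ratio = Fraction(3, 2) ** k
        while ratio >= 2:  # fold the ratio into the octave [1, 2)
            ratio /= 2
        cents.append(1200 * math.log2(float(ratio)))
    return sorted(cents)

def pythagorean_comma():
    """Twelve pure fifths minus seven octaves, in cents."""
    return 12 * 1200 * math.log2(3 / 2) - 7 * 1200
```

The major third (four fifths up) comes out at about 407.8 cents, noticeably sharper than both the pure third (386.3 cents) and the equal-tempered one (400 cents), and the comma is about 23.5 cents — exactly the kind of nuance that gets lost when software cannot exchange tuning data.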

Microtonal accidentals are used in much contemporary and hybrid music, but also to transcribe folk music as finely as possible. Some experimental rock/EDM uses clusters, going beyond the walls of the equal-tempered system.

Yet, not all DAWs, notation programs, and sound generators can communicate their alternative tunings and exotic accidentals. A world open in theory, but less so in practice.

A missing standard for keyswitching

The lack of a universal, advanced standard for keyswitching drives me crazy. I’m one of those who prefer not to insert keyswitches in the score, nor to use separate tracks for playing techniques. I want a meta-code to drive my technique changes.

What I did, in making my articulation sets for Logic, was first to create my own personal articulation/technique map, starting from a Spitfire Audio UACC map repeated twice (UACC IDs 1–128, Logic IDs 1–256). This means that all my maps will have the same articulation types at the same IDs. Selection messages will start from those fixed positions.
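Spitfire’s UACC uses a single controller (CC#32) whose data value selects the articulation, so one CC can only address 128 slots. A doubled 256-slot map therefore needs a second “bank” of some kind; in the sketch below (my own assumption, not necessarily the scheme described above) the second half of the map is simply addressed on the next MIDI channel, and slot n maps to CC value n−1. Function names are mine.

```python
UACC_CC = 32  # Spitfire's UACC articulation-select controller

def articulation_select(art_id, base_channel=0):
    """Map an articulation ID in 1-256 to a raw 3-byte MIDI CC message.
    IDs 1-128 go out on base_channel; IDs 129-256 on the next channel
    (a hypothetical 'second bank' for the doubled map)."""
    if not 1 <= art_id <= 256:
        raise ValueError("articulation ID must be in 1-256")
    bank, value = divmod(art_id - 1, 128)
    status = 0xB0 | (base_channel + bank)  # Control Change on the chosen channel
    return bytes([status, UACC_CC, value])
```

Because every map shares the same IDs, the same three-byte message selects, say, pizzicato in any library — which is precisely the appeal of a fixed meta-code.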

Unfortunately, not all libraries are consistent in how they map their articulations/techniques, so I’m still using too many different articulation sets and expression maps. With the VSL VI libraries I built my own presets, all organized in the same way. But this is not possible with every library.

Composing contemporary music in the age of sound libraries

I’ve been trained to write music on paper. My main teacher, one of the best composers of his generation, was also a copyist, and insisted on good and accurate calligraphy. Writing on paper seems the most obvious way when dealing with a very rational type of music, based on proportions and semi-automatic processes. It’s also the fastest way to notate tightly integrated musical gestures, made of a bundle of pitches, articulations and expressions, that would be impossible to notate quickly with notation programs (an example: a violin playing a starting pitch, fading into a jeté and gliding up to an uncertain pitch).

At the same time, I’ve always felt the need to feel the music under my fingers. Neither my mind, nor the calculated music coming out of a computer playing back what I was writing, has given me a satisfactory connection with my music. My mind, a powerful generator of music, is only a part of my body; and my ears, receiving the pressure waves from the loudspeakers, are only a part of the sensory experience. I need a tactile experience of my music, together with the auditory one.

When very young, I composed at the piano. Sometimes I sat there experimenting with Stravinskian overlapping chords, Bartókian hammering rhythms, Schoenbergian piercing intervals and misty outbursts of notes; at other times, I just checked at the piano what I was writing on paper. I had a physical connection with my music.

Later, when I could afford one, I switched to computers. I tried to simulate “real” music and sounds. However, notation programs were unable to make my notes sound as I wrote them: I wrote them as music, and the computer insisted on playing them back as arithmetic expressions. The sounds I could feel when playing on the keyboard were not what they were named after: the piano lacked hammers and resonance, violins lacked wood and body, brass did not explode, woodwinds lacked breath and key clicks. The computer was great for electronic music, not for simulating music made with real instruments.

But acoustic sound libraries improved over the years. VSL was a revolution. Other libraries appeared for specialized types of sound. For what I was looking for, VSL and XSample offered great support to my music. At first, I had something resembling realism in the libraries that came with the Native Instruments package (a taste of VSL, then the realistic chamber strings of Session Strings Pro). Then, I could finally afford the solo instruments of the XSample Library with their extended techniques, and the accurate orchestral sounds of the VSL Special Edition. I had a powerful, realistic sonic arsenal under my fingers. I had all the sound tools I could need.

So, I could compose at the computer again. But how? Notation programs continued to be cold as a grave. Wallander’s NotePerformer added life to my Sibelius scores, but it was still more the life of a lemur than that of a living body. And composing by patiently writing and sculpting pitches on the staff looked like underusing the tools I had. What I had was basically a glorified piano – the same keyboard at which the greatest composers of the past loved to improvise and compose, the same black-and-white technological interface at which they loved to spend time imagining and feeling their music – but one capable of really *playing*, and not only suggesting, a full orchestra.

Is composing at the keyboard legitimate? With Logic, I can keep the score, piano roll and controller pages open at the same time, in a mix of traditional notation, evolving texture and cluster graphic notation, oscillator and modulator diagrams. I can move the input cursor to where I have to insert a segment, and start recording from there. I can step-input pitches as I would in a notation program. I can have rather accurate notation and realistic sonic rendering at the same time, write notes on staves, and later export a MusicXML file to refine the notation in a dedicated program. Logic can also assist me with some serial-based elaboration, unless I prefer to cut and paste pitches generated by OpenMusic.

What I feel is that, by composing in a DAW that gives me easy, accurate control over the piece’s micro- and macrostructure, I can really go further, and maintain better control over the piece’s shape and evolution. I can create a general structure and a time signature map in advance, insert motives and focal points as placeholders, use the track arrangement space as a blank wall on which to attach post-it notes, and create my piece by going from the general image to the finer details, gradually filling that wall. And I can always keep contact with the actual sound, not simply a mental image of it.

Isn’t this a lot like the old Maestros sitting at their klaviers, one hand on the keyboard and the other hand writing on a music sheet?

(The above is an old reflection, made in February 2017.)