Added auto-synchronization to my rephasor algorithm, a process that resynthesizes an input phasor signal (a periodic ramp), using a scaling value to change the frequency in relative terms (2 = twice as fast, 0.5 = half as fast, etc.).
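A minimal sketch of the basic idea in Python (my own illustration, not the actual implementation; the increment-tracking approach and all names are assumptions):

```python
def phasor(freq, sr, n):
    """Generate n samples of a periodic ramp in [0, 1)."""
    out, phs = [], 0.0
    for _ in range(n):
        out.append(phs)
        phs += freq / sr
        phs -= int(phs)  # wrap back into [0, 1)
    return out

def rephasor(inp, scale):
    """Resynthesize a phasor at `scale` times the input's rate.

    Measures the input's per-sample increment (undoing wraps),
    then accumulates that increment multiplied by `scale`.
    """
    out, phs, prev = [], 0.0, 0.0
    for x in inp:
        inc = x - prev
        if inc < 0:        # the input phasor wrapped around
            inc += 1.0
        phs += scale * inc
        phs -= int(phs)
        out.append(phs)
        prev = x
    return out
```

Fed a 1 Hz phasor, `rephasor(..., 2.0)` wraps roughly twice as often as its source.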

This sound demo consists of 3 oscillators whose pitches are modulated by phasor signals. Two of these phasors are rephasors with synchronization, sourced from the remaining phasor.

In the patch, these rephasors speed up and then slow down back to the original rate.

Notice how, as the voices settle back to the original rate, they gradually line up again.

And here is the same patch using the old version of the algorithm.

Without autosync, rephasors suffer from time drift, even when they are playing at the same rate.

Right where the voices settle down, you can hear some voices consistently playing at the wrong time. They never line up. This is the "drift".
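The effect can be reproduced with a toy model (again my own illustration, not the real code): modulate the scale up and back down, and the rephasor returns at the correct rate but with a permanent phase offset relative to its source.

```python
def phasor(freq, sr, n):
    """Periodic ramp in [0, 1)."""
    out, phs = [], 0.0
    for _ in range(n):
        out.append(phs)
        phs = (phs + freq / sr) % 1.0
    return out

def rephasor(inp, scales):
    """Rephasor with a per-sample scale and no synchronization."""
    out, phs, prev = [], 0.0, 0.0
    for x, s in zip(inp, scales):
        inc = x - prev
        if inc < 0:            # the input phasor wrapped around
            inc += 1.0
        phs = (phs + s * inc) % 1.0
        out.append(phs)
        prev = x
    return out

# Speed up to 1.25x for a while, then return to the original rate.
src = phasor(1.0, 100, 400)
scales = [1.0] * 100 + [1.25] * 100 + [1.0] * 200
out = rephasor(src, scales)

# Wrapped phase error at the end: the rates match again, but the
# phases are stuck about a quarter-cycle apart -- the "drift".
err = src[-1] - out[-1]
err -= round(err)
```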

The rephasor is at the core of my gesture sequencer, Gest. Most of the constructs and complexities in Gest have something to do with mitigating this clock drift.

But now, here it is: clock-drift prevention baked into the rephasor as part of the DSP algorithm itself. This has a lot of implications for what is now possible.
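One way drift prevention like this could work (a toy sketch in the same spirit, not the actual algorithm): whenever the scale is back at 1, feed a small fraction of the wrapped phase error between source and output back into the output phase, so any accumulated offset decays instead of persisting. The `strength` parameter and the correction rule here are my own assumptions.

```python
def phasor(freq, sr, n):
    """Periodic ramp in [0, 1)."""
    out, phs = [], 0.0
    for _ in range(n):
        out.append(phs)
        phs = (phs + freq / sr) % 1.0
    return out

def rephasor_autosync(inp, scales, strength=0.02):
    """Rephasor that re-locks to its source when the scale is 1.

    `strength` sets how much of the phase error is corrected per
    sample; the offset then decays by (1 - strength) each sample.
    """
    out, phs, prev = [], 0.0, 0.0
    for x, s in zip(inp, scales):
        inc = x - prev
        if inc < 0:                  # the input phasor wrapped around
            inc += 1.0
        phs += s * inc
        if s == 1.0:
            err = x - phs            # phase error vs. the source
            err -= round(err)        # wrap into (-0.5, 0.5]
            phs += strength * err    # gentle nudge toward lock
        phs %= 1.0
        out.append(phs)
        prev = x
    return out

# Same excursion as before: 1.25x for a while, then back to 1x.
src = phasor(1.0, 100, 400)
scales = [1.0] * 100 + [1.25] * 100 + [1.0] * 200
out = rephasor_autosync(src, scales)

err = src[-1] - out[-1]
err -= round(err)
```

With the correction in place, the quarter-cycle offset shrinks geometrically once the scale returns to 1, and the voices line up again.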

A new model for a gesture sequencer can be reimagined as a DSP block diagram.

Instead of reading from a fairly linear score the way Gest does, this new system could be controlled in a more nonlinear way using a state machine and/or a VM.

It would be a much more elegant system. A VM would allow for more generative musical structures to emerge. Also, multiple gestures could share the same VM, allowing for concurrent cross-communication between gestures.


@paul Where can I read about Gest, or gestural sequencers? Not sure I understand the term, but I’m interested in composable/combinable sequencers, which it sounds like you’re talking about?
