
Audioeffects

Tim Sharii edited this page Apr 19, 2019 · 6 revisions

Audio effects are special filters used extensively in speech and music processing:

  • WahWah
  • AutoWah
  • Phaser
  • Flanger
  • Vibrato
  • Tremolo
  • Delay
  • Echo
  • Overdrive
  • Distortion
  • Tube distortion
  • Pitch shift
  • Robotize
  • Whisperize
  • Morph sounds

Each effect inherits from the abstract class AudioEffect. This class implements the IFilter and IOnlineFilter interfaces (leaving methods Process() and Reset() abstract) and adds two properties: Wet and Dry. These are conventional mixing parameters. Usually their sum equals 1.0f; however, NWaves imposes no such strict contract. By default Wet=1, Dry=0.
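The mixing convention behind Wet and Dry can be sketched in one line (a minimal illustration, not the library's actual code; it matches the formula used in the NoisyEffect example at the end of this page):

```csharp
// Conventional wet/dry mixing applied to each output sample:
// Wet scales the processed (effected) sample, Dry scales the original input.
static float Mix(float processed, float input, float wet, float dry)
{
    return processed * wet + input * dry;
}

// With the defaults Wet=1, Dry=0 the output is fully "wet" (effect only).
var sample = Mix(0.7f, 0.3f, 1f, 0f);
```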

The general audioeffect workflow is similar to any other filtering:

AudioEffect effect = new <Some>Effect(signal.SamplingRate, <effect parameters>);

effect.Wet = 0.8f;
effect.Dry = 0.2f;

// offline
var outputSignal = effect.ApplyTo(signal);

// online
effect.Reset();

//while (new sample is available)
{
    var outputSample = effect.Process(sample);
    //...
}

effect.Reset();

Parameters can usually be tweaked at any time during online processing.

Audioeffects are customizable, and some of their building blocks can be reused. Many of them are based on LFOs (low-frequency oscillators), and you can set your own LFO at any time (or pass it as a constructor parameter).

All audioeffects support online filtering, except PitchShiftEffect (for online pitch-shifting the class PitchShiftVocoderEffect can be used).

WahWah effect

Technically, the wahwah effect is an LFO + bandpass filter. There are two ways to construct a WahwahEffect object:

  1. Specify all parameters in constructor:
var wahwah = new WahwahEffect(signal.SamplingRate, 1, 50, 1800, 0.7);

wahwah.LfoFrequency = 1.5;  // change LFO frequency from 1 to 1.5 Hz
wahwah.MinFrequency = 200;  // change min frequency from 50 to 200 Hz
wahwah.MaxFrequency = 1500; // change max frequency from 1800 to 1500 Hz
wahwah.Q = 0.4;             // change Q from 0.7 to 0.4

Note. LFO will be triangular by default.

  2. Specify your own LFO (perhaps sinusoidal, or any other SignalBuilder-derived class):
var fs = signal.SamplingRate;

var sawtooth = new SawtoothBuilder()
                       .SetParameter("freq", 1.2)
                       .SampledAt(fs);

var wahwah = new WahwahEffect(fs, sawtooth, q: 0.6);

// ...
// change LFO anytime

var squareWave = new SquareWaveBuilder()
                         .SetParameter("freq", 1.5)
                         .SampledAt(fs);
wahwah.Lfo = squareWave;

// ...or only LFO frequency
wahwah.LfoFrequency = 0.5;

AutoWah effect

AutoWah effect is envelope follower + bandpass filter:

var autowah = new AutowahEffect(signal.SamplingRate, 100, 4000, 0.7, 0.01f, 0.05f);

autowah.MinFrequency = 200;  // change min frequency from 100 to 200 Hz
autowah.MaxFrequency = 3500; // change max frequency from 4000 to 3500 Hz
autowah.Q = 0.4;             // change Q from 0.7 to 0.4

// envelope follower settings

autowah.AttackTime = 0.02f;  // change attack time from 0.01 to 0.02 sec
autowah.ReleaseTime = 0.09f; // change release time from 0.05 to 0.09 sec
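The envelope follower driving AutoWah can be sketched as a conventional one-pole attack/release smoother (an assumed design for illustration; the NWaves implementation may differ in details):

```csharp
using System;

// One-pole envelope follower. The attack coefficient is applied while the
// signal rises and the release coefficient while it falls, so onsets are
// tracked quickly and decays are smoothed out.
static float[] FollowEnvelope(float[] x, int fs, float attackTime, float releaseTime)
{
    // with these coefficients the envelope covers ~63% of a step input
    // after attackTime (or releaseTime) seconds
    var attack  = (float)Math.Exp(-1.0 / (fs * attackTime));
    var release = (float)Math.Exp(-1.0 / (fs * releaseTime));

    var env = 0f;
    var envelope = new float[x.Length];
    for (var n = 0; n < x.Length; n++)
    {
        var abs = Math.Abs(x[n]);
        var coeff = abs > env ? attack : release;
        env = coeff * env + (1 - coeff) * abs;
        envelope[n] = env;
    }
    return envelope;
}
```

The envelope value is then mapped onto the center frequency of the bandpass filter, so louder playing sweeps the filter upward (hence "auto"-wah).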

Phaser effect

Technically, the phaser effect is an LFO + bandreject filter. The class is PhaserEffect, whose construction and usage are analogous to WahwahEffect.

Tremolo effect

Technically, the tremolo effect is an LFO + ring modulator. As with WahWah, there are two ways to construct a TremoloEffect object:

// 1) all parameters in constructor
var tremolo = new TremoloEffect(signal.SamplingRate, 5, 0.5);

tremolo.Frequency = 3.4f;     // change tremolo frequency from 5 to 3.4 Hz
tremolo.TremoloIndex = 0.9f;  // change tremolo index from 0.5 to 0.9


// 2) prepare LFO separately
var modulator = new SineBuilder()
                        .SetParameter("freq", 3)
                        .SetParameter("min", 0)
                        .SetParameter("max", 2.5)
                        .SampledAt(signal.SamplingRate);

tremolo = new TremoloEffect(signal.SamplingRate, modulator);

// change LFO
tremolo.Lfo = new TriangleWaveBuilder()/*set parameters*/.SampledAt(signal.SamplingRate);

Note. LFO will be CosineBuilder by default.
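The modulation underlying tremolo can be sketched as multiplication by a slowly oscillating gain (a conceptual sketch with a hand-rolled cosine LFO, not the library code; the exact LFO range in NWaves may differ):

```csharp
using System;

// Tremolo as amplitude modulation: each sample is multiplied by a gain
// that oscillates around unity at the (low) tremolo frequency.
// The index controls the modulation depth: gain stays in [1-index, 1+index].
static float[] Tremolo(float[] x, int fs, float lfoFrequency, float index)
{
    var y = new float[x.Length];
    for (var n = 0; n < x.Length; n++)
    {
        var lfo = 1 + index * (float)Math.Cos(2 * Math.PI * lfoFrequency * n / fs);
        y[n] = x[n] * lfo;
    }
    return y;
}
```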

Vibrato effect

Technically, the vibrato effect is an LFO + variable-delay comb filter + interpolation of the fractional delay. It is also the only audioeffect in NWaves that always sets the parameters Wet=1 and Dry=0. As in previous cases, all parameters of a VibratoEffect object can be tweaked at any time:

var vibrato = new VibratoEffect(signal.SamplingRate, 0.003f/*sec*/, 1/*Hz*/);

vibrato.MaxDelay = 0.001f;    // change max delay from 3ms to 1ms
vibrato.LfoFrequency = 1.5f;  // change LFO frequency from 1 to 1.5 Hz

vibrato.Lfo = new SawtoothBuilder().SampledAt(signal.SamplingRate);
// parameters min and max will be always set as: min = 0, max = 1

Note. LFO will be sinusoidal by default.
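The variable delay with interpolation of the fractional part can be sketched as follows (a conceptual sketch using linear interpolation; NWaves may use a different interpolation scheme):

```csharp
using System;

// Vibrato core: read each output sample from a position behind the current
// one, where the delay (in samples) is modulated by a sinusoidal LFO.
// The read position is generally fractional, so the value is linearly
// interpolated between the two nearest input samples.
static float[] Vibrato(float[] x, int fs, float maxDelaySec, float lfoFrequency)
{
    var maxDelay = maxDelaySec * fs;   // maximum delay in samples
    var y = new float[x.Length];
    for (var n = 0; n < x.Length; n++)
    {
        // LFO mapped to [0..1] modulates the delay between 0 and maxDelay
        var lfo = 0.5f * (1 + (float)Math.Sin(2 * Math.PI * lfoFrequency * n / fs));
        var pos = n - lfo * maxDelay;
        if (pos < 0) { y[n] = x[n]; continue; }

        var i = (int)pos;
        var frac = pos - i;
        var next = i + 1 < x.Length ? x[i + 1] : x[i];
        y[n] = x[i] * (1 - frac) + next * frac;   // linear interpolation
    }
    return y;
}
```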

Flanger effect

It's almost identical to the vibrato effect, except that the LFO is always sinusoidal and the wet/dry coefficients are not ignored. Usually, Wet=Dry=0.5f.

Delay effect

Technically, it's just a feedforward (FIR) comb filter.

var delay = new DelayEffect(signal.SamplingRate, 0.024/*sec*/, 0.4);

delay.Length = 0.018;  // change length of delay from 24ms to 18ms
delay.Decay = 0.6;     // change decay coefficient from 0.4 to 0.6
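The underlying difference equation is y[n] = x[n] + decay · x[n - M], where M is the delay length in samples; a minimal sketch:

```csharp
// Feedforward (FIR) comb filter: the output is the input plus a single
// delayed, attenuated copy of the input. M = delay length in samples.
static float[] FirComb(float[] x, int m, float decay)
{
    var y = new float[x.Length];
    for (var n = 0; n < x.Length; n++)
    {
        y[n] = x[n] + (n >= m ? decay * x[n - m] : 0);
    }
    return y;
}
```

Since the delayed copy comes from the input, this produces exactly one repeat.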

Echo effect

Echo effects, in general, may be implemented in different ways, but in NWaves it's just a feedback (IIR) comb filter. Construction and usage are similar to DelayEffect.
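The feedback comb filter follows the difference equation y[n] = x[n] + decay · y[n - M]; a minimal sketch:

```csharp
// Feedback (IIR) comb filter: the delayed copy is taken from the OUTPUT,
// so each repeat feeds back into the delay line, producing a train of
// echoes that decay geometrically (|decay| < 1 keeps the filter stable).
static float[] IirComb(float[] x, int m, float decay)
{
    var y = new float[x.Length];
    for (var n = 0; n < x.Length; n++)
    {
        y[n] = x[n] + (n >= m ? decay * y[n - m] : 0);
    }
    return y;
}
```

Compare with the feedforward comb of DelayEffect, which produces only a single repeat.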

Overdrive effect

[Udo Zoelzer] DAFX book, p.118.

var overdrive = new OverdriveEffect(12, 0.8);

overdrive.InputGain = 8;    // change input gain from 12 to 8
overdrive.OutputGain = 0.6; // change output gain from 0.8 to 0.6
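The symmetrical soft-clipping curve from that page of the DAFX book maps the input through three zones; a sketch of the static nonlinearity only (gain staging omitted; NWaves' exact implementation may differ):

```csharp
using System;

// Symmetrical soft clipping [Zoelzer, DAFX, p.118]:
//   |x| <  1/3 : y = 2x                              (linear zone)
//   |x| <  2/3 : y = sign(x) * (3 - (2 - 3|x|)^2) / 3 (soft knee)
//   otherwise  : y = sign(x)                          (hard clip)
static float SoftClip(float x)
{
    var abs = Math.Abs(x);
    var sign = Math.Sign(x);
    if (abs < 1f / 3) return 2 * x;
    if (abs < 2f / 3) return sign * (3 - (2 - 3 * abs) * (2 - 3 * abs)) / 3;
    return sign;
}
```

The input gain pushes more of the signal into the soft-knee and clipping zones, which is what makes the effect "drive" harder.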

Distortion effect

[Udo Zoelzer] DAFX book, p.124-125.

var dist = new DistortionEffect(15);

dist.InputGain = 20;  // change input gain from 15 to 20

Tube distortion effect

[Udo Zoelzer] DAFX book, p.123-124.

var dist = new TubeDistortionEffect(15, 0.2, q: -0.5, dist: 5);

dist.InputGain = 20;   // change input gain from 15 to 20
dist.OutputGain = 0.5; // change output gain from 0.2 to 0.5

Note. Overdrive, distortion and tube distortion are non-linear effects. They are implemented using very simple formulae and algorithms, so the resulting sound will most likely not be very good. But as a starting point these effects are OK.

Pitch shift effect

There are two classes for pitch shifting:

  • PitchShiftEffect
  • PitchShiftVocoderEffect

PitchShiftEffect provides only an offline method based on a 2-stage algorithm: 1) time stretch; 2) linear interpolation. In order to construct a PitchShiftEffect object, one needs to specify the pitch shift ratio and the parameters of the time stretching algorithm: FFT size, hop size and TSM algorithm. In general, the quality of the pitch-shifted signal is satisfactory; however, in some cases you may want to apply an anti-aliasing LP filter to the resulting signal.

var pitchShift = new PitchShiftEffect(1.12, 1024, 200);
var shifted = pitchShift.ApplyTo(signal);
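The second stage (linear interpolation) amounts to resampling the time-stretched signal back to the original duration, which is what actually shifts the pitch; a conceptual sketch of that stage alone:

```csharp
// Resample by linear interpolation: after time-stretching by `shift`
// (e.g. 1.12), reading the stretched signal back at `shift` times the
// rate restores the original length while raising the pitch.
static float[] Resample(float[] x, double shift)
{
    var y = new float[(int)(x.Length / shift)];
    for (var n = 0; n < y.Length; n++)
    {
        var pos = n * shift;            // generally a fractional position
        var i = (int)pos;
        var frac = (float)(pos - i);
        var next = i + 1 < x.Length ? x[i + 1] : x[i];
        y[n] = x[i] * (1 - frac) + next * frac;
    }
    return y;
}
```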

PitchShiftVocoderEffect provides both offline and online methods based on phase vocoder technique with "spectral stretching" in frequency domain.

var pitchShift = new PitchShiftVocoderEffect(samplingRate, 1.12, 1024, 64);
var shifted = pitchShift.ApplyTo(signal);

// online:
// while input sample is available
{
    var outputSample = pitchShift.Process(inputSample);
    //...
}

Robotize

A phase vocoder-based effect. At each step the phases are simply set to 0.

var robot = new RobotEffect(hopSize: 120, fftSize: 512);
var robotized = robot.ApplyTo(signal);

// online:
// while input sample is available
{
    var outputSample = robot.Process(inputSample);
    //...
}

Whisperize

A phase vocoder-based effect. At each step the phases are simply randomized. Best results are achieved when the window size and hop size are relatively small:

var whisper = new WhisperEffect(hopSize: 60, fftSize: 256);
var whispered = whisper.ApplyTo(signal);

// online:
// while input sample is available
{
    var outputSample = whisper.Process(inputSample);
    //...
}

MorphEffect

A phase vocoder-based effect. At each processing step the output spectrum combines the magnitude of the first signal with the phase of the second signal. It's also the only audioeffect that processes two signals instead of one, so the ApplyTo() method is overloaded:

var morpher = new MorphEffect(hopSize: 200, fftSize: 1024);
var morphed = morpher.ApplyTo(signal1, signal2);

// online:
// while input sample is available
{
    // get inputSample and morphSample

    var outputSample = morpher.Process(inputSample, morphSample);
    //...
}

Create your own audioeffect

Just as an example, let's see how to create a very simple effect that adds some red "waterfall" noise to each signal sample:

public class NoisyEffect : AudioEffect
{ 
    private SignalBuilder noise;

    public NoisyEffect(int samplingRate, double noiseLevel = 0.5)
    {
        noise = new RedNoiseBuilder()
                        .SetParameter("min", -noiseLevel)
                        .SetParameter("max",  noiseLevel)
                        .SampledAt(samplingRate);
    }
 
    public override float Process(float sample)
    {
        var output = sample + noise.NextSample();
        return output * Wet + sample * Dry;
    }

    public override void Reset()
    {
        noise.Reset();
    }
}