fader/mixer snapshots

Hi, I have a question.
I’m doing a piece that’s 100% VSTi’s.
I want to prepare to mix down the outputs from the various audio tracks, i.e., do an offline mixdown.

So I want to make small changes to the various track levels, etc. When I turn on the record automation button, it causes a dramatic increase in CPU usage when it plays back the fader movements. Also, because I’m doing fader movements on virtual audio tracks from the VSTi’s, I can’t seem to find the actual automation data in order to edit it.

Is there a way to just have N-track take snapshots of various fader positions and then play them back, without needing to
have continuous automation data be generated?


There’s a much better way than fader automation. I tried fader automation, but it gets too confusing.

Instead, use “Volume evolutions”. It’s great! Click on the black wedge-shaped tool in the top toolbar (“Draw volume envelopes”). This puts a bunch of green lines on the screen – one for every track.

Use the mouse to adjust the green lines: up for louder, down for quieter.
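In case it helps to picture what those green lines actually do, here’s a rough Python sketch (my own illustration, not n-Track’s actual code – the function names and numbers are made up): a volume envelope is just a piecewise-linear gain curve that gets multiplied into the audio, sample by sample.

```python
# Illustrative only -- a volume envelope as a piecewise-linear gain curve.

def envelope_gain(points, t):
    """Linearly interpolate the gain at time t from (time, gain) breakpoints."""
    if t <= points[0][0]:
        return points[0][1]
    for (t0, g0), (t1, g1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return g0 + (g1 - g0) * (t - t0) / (t1 - t0)
    return points[-1][1]  # past the last breakpoint: hold the final gain

def apply_envelope(samples, points, sample_rate):
    """Scale each sample by the envelope gain at its moment in time."""
    return [s * envelope_gain(points, i / sample_rate) for i, s in enumerate(samples)]

# Fade a track from full volume down to half volume over one second
# (a toy 8 Hz "sample rate" so the list stays short):
points = [(0.0, 1.0), (1.0, 0.5)]
print(apply_envelope([1.0] * 8, points, 8))  # each sample scaled by the gain at its time
```

Dragging a green line up or down is just moving those breakpoints.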

It’s the bee’s knees. This is one of the things that makes working on a computer-based DAW so much better than things were back in the bad ol’ days of tape recording!

Hey Learjeff, how can I use volume evolutions on the virtual audio returns, though?

Like in the current song, I have some midi tracks but all the audio is coming live from the plug-ins, there are no concrete wave files!

Should I apply the volume envelope to the corresponding midi track?


There are three answers. The first one is applying the volume envelope to the MIDI tracks, just as you suggest. I find I’ve had issues with that, though I don’t quite remember what they were. Worth a try.

The second is to click on the little black down-triangle to the right of the toolbar button. You should see the options to do the master fader and to do the plugin instruments. Note that when you pick the plugin instruments, you’ll get lines of some other color (I haven’t actually done this) and they’ll correspond to the plugin channels in order. That is, the top one will be for the first plugin in the mixer, etc.

I use a third method, and I find this to be the clearest and most repeatable, although it takes a few extra steps. When I have a MIDI part I think is done, I render it to a wave file. For a plugin instrument track, I solo the track and use Offline Mixdown, using 32 bits and no master channel effects. Then I mute the MIDI track, import the audio file, and use that for the rest of the processing, including volume envelopes, FX, and EQ. (If an effect or EQ is important to play the part well, I’ll use that on the plugin channel, but only if it really helps me play the part better.)


Thanx a lot for the information.
Much appreciated.

I’ve now located the “instrument ch vols” just as you said.
That method was how I’d presumed I’d best proceed; however, now that you’ve mentioned doing mixdowns to convert virtual tracks into concrete wave files…I’m torn in two directions.
I’m not sure keeping everything “live” is justified. I need to add vocals, and I was kind of thinking I’d probably run into issues if I tried keeping everything live, and yet it seems so funky to do so.

If I import the “mixed” waves into a project at 32 bit, will this cause problems if I want to do vocals at 16 bit?

Anyway thanx again for the information.


No problems mixing formats, as long as the sample rate is the same for all (e.g., 44100 Hz).
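If you’re curious why bit depth doesn’t matter, here’s a toy Python sketch (illustrative numbers and function names of my own, not n-Track internals): a 16-bit track just gets converted to a common float format before the tracks are summed, which is trivial – whereas a sample-rate mismatch would require actual resampling, which is why the rates must agree up front.

```python
# Illustrative only -- why a 16-bit vocal and a 32-bit float mixdown coexist fine.

def int16_to_float32(samples):
    """Map 16-bit PCM values (-32768..32767) to floats in roughly -1.0..1.0."""
    return [s / 32768.0 for s in samples]

def mix(track_a, track_b):
    """Sum two equal-length float tracks, sample by sample."""
    return [a + b for a, b in zip(track_a, track_b)]

vocals_16bit = [16384, -32768, 0]    # hypothetical 16-bit vocal take
backing_32bit = [0.25, 0.5, -0.1]    # hypothetical 32-bit float mixdown
print(mix(int16_to_float32(vocals_16bit), backing_32bit))
```

Once both are floats, the engine neither knows nor cares what bit depth the files were stored at.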

Also, keep in mind that you can render MIDI to audio as I said, and at any time toss the audio file and go back to MIDI. It’s not a one-way street. I will do that if I find something to change in the MIDI. Often, I’ll delete the wave file but keep the track with any volume envelopes, FX, and EQ. (“Delete part” rather than “Delete track”.) Then I can always re-render the MIDI and drag the new imported audio file to the old track.

I usually render MIDI to audio as soon as I’m happy (not “satisfied”, just happy) with the MIDI track. This eliminates a lot of variables and makes things simpler. And it keeps CPU usage a lot lower.

If I were using lots of VSTi tracks, I’d probably keep two or three live at a time and render the earlier ones once I found I wasn’t changing them much.

Normally, I start out with a click track. I build the click track using MIDI, and then render it to audio. If it’s constant tempo & signature, I’ll usually just make an audio click track that’s about 16 bars long, and copy/paste it in as many times as needed. I often adjust it so that the click track part boundaries match the verse/chorus/bridge part boundaries – just because it makes it easier to see where I am in the song. (N-Track has “marks”, but I find them too annoying to use.)

That’s easier than it sounds: After rendering 16 bars of MIDI click to audio and importing it, I start a new track for my click track. I turn the grid on. If I want a 2-bar count in, I copy 2 measures from the click wave track and paste them into the new track. Say the intro is 8 bars, so I copy 8 bars and paste them in where I left off. And so on, for the rest of the song.
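The arithmetic behind that copy/paste trick, as a quick Python sketch (hypothetical tempo and numbers, not from any particular song): at a constant tempo and time signature, every bar is exactly the same number of samples, so grid-aligned pasting of a rendered click loop lines up perfectly.

```python
# Illustrative only -- bar lengths at a constant tempo.

def samples_per_bar(bpm, beats_per_bar, sample_rate=44100):
    """Length of one bar in samples: seconds per beat times beats per bar."""
    return round(sample_rate * 60.0 / bpm * beats_per_bar)

bar = samples_per_bar(bpm=120, beats_per_bar=4)  # 4/4 at 120 BPM
print(bar)                                       # 88200 samples = exactly 2.0 s

# A 2-bar count-in plus an 8-bar intro, cut from the 16-bar rendered loop:
count_in = 2 * bar
intro = 8 * bar
print(count_in + intro)                          # 882000 samples before the first verse
```

Because the division comes out exact at sensible tempos, the pasted copies never drift against the grid.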

With meter changes, it’s more complicated, of course.

In either case, though, I always render the click to wave first. Then I start recording tracks. In the past, most of my first few tracks were audio (my first DAW-recorded CD is mostly acoustic music), and then I’d add the MIDI. My next projects will be more the other way around, starting with keyboards.

One reason I record the click to wave is that it acts as a good visual guide. Another reason is that it’s more repeatable. With MIDI, the precise timing can vary a bit from run to run, due to the way PC clocks and MIDI timing works. With an audio file, the click never moves once it’s down.

Thanx again for your detailed reply, it’s really helpful indeed!

I set out this first album track of mine to test n-track and my workflow technique, just to make sure I knew how to get through a song without blowing it totally.

Because I’ve come from working with Creator/Atari and MIDI modules locked to tape via SMPTE, I’ve kept all the backing tracks (bass, keyboard, etc.) live. The idea of mixing/rendering individual tracks to audio as I go is one I understand, but since the VSTi’s seem so happy “live” (thanx n-track) I’ve not committed them to wav format so far.
For my click track, I’ve just been running Groove Agent totally live, and I’ve not done any hardcore FX or EQ on any parts/ch’s or any buss.

My idea is to lay out all the parts dry first; then, when the arrangement is done, I was going to do the EQ/FX…My way, however, will leave me with multiple VSTi’s running live just as I need to approach doing vocals…

I guess I’m scared, though, that when I’m there I’ll be low on CPU to focus on vocals, so…I may commit stuff to audio wav format now and get it done.

Your comment about it not being a one way street is useful to read.

I’ve got one last question regarding this stuff.

Say I was going to render the VSTi bass track to wav…
Would you render the track with its EQ settings etc. applied and then re-import the wav file, or would you keep it all neutral?

again thanx for the feedback.

Anything you might want to undo, don’t do. In other words, if you omit the EQ when rendering the bass track and redo it on the rendered wave track, you can always change your mind.

EQ and effects should always be adjusted while listening to the whole mix. Of course, many of us (including myself) apply some FX and EQ as we add each track, but only “rough” adjustments, to make listening to the tune pleasant, and so that as we add tracks we’re listening to a quick mix that’s reasonably close to the final.

But if you think you nailed the EQ enough, and can always adjust it a bit later for touch up, no harm done if you render it with EQ. Just remember you don’t want to overprocess the material & send it through the EQ too many times. Don’t forget that you’ll do some EQ as part of mastering, after you’re done with the mixdown. Still, this is a fine point and not one to get too worked up over.

If everything’s running fine live, then that’s a good thing! One option you can consider is to render a submix of all the instrumentals before recording vocals, if you’re worried about losing a good take due to overtaxing your gear. That would be a throw-away track, but useful and quick to create.

No harm in postponing rendering your MIDI tracks – if it works, it works! And you have more flexibility that way. But I do strongly suggest you render each MIDI track separately to wave before doing the final serious mixing. It offloads the CPU, and it provides an archive that you can always play back even when that plugin no longer works. And it always sounds exactly the same way, which isn’t necessarily true with VSTi’s.