64 bits
Maybe this has been discussed before but I couldn’t bring up anything about it after several searches of the forum.
S@n’ar has the 64 bit mixing option and so does N-track. I think S’aw’Studi@ has 48 bit integer mixing.
I’ve tested them all with simple 2 and 3 track projects and couldn’t hear any difference between any of the three programs regardless of the 64 bit setting being on or off.
Forgetting about other programs, are there any N-track users who can say with certainty they hear a difference when the 64 bit option is on in N-track? If so are there any examples you can post?
It seemed as if the 64 bit mixing option increased CPU usage, but I noticed that whether it's on or off, the CPU usage tends to jump around a lot, so it's hard for me to determine whether there is a greater load on the CPU or not.
I think it's good N-track has this option, but it's not a major concern of mine. I record lots of percussion instruments and find that proper mic placement makes the difference between a clear, well-defined recording and one that's grainy and unclear.
If the 64 bit mixing option can make even a small improvement in clarity without overloading the CPU, then I'll leave it on at all times. That's as far as my interest in it goes.
I spent about a week getting eyestrain reading a seemingly endless number of opinions on the subject. They all seem to forget that these programs shouldn't have a "sound." They should all be sonically transparent and we should hear exactly what we put into them - nothing more or less. Hopefully the 64 bit mixing option can offer that extra little bit of transparency while only requiring a small percentage of CPU usage.
no cents
Don't expect to be able to hear any difference. The worst case difference between 32-bit and 64-bit mixing is below -144 dBFS, and that's only at the very top of the peaks (when the resulting signal is near 0 dBFS). When the signal level is -6 dBFS, the difference is below -150 dB; at -12 dBFS it's below -156 dB, and so on.
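For anyone curious where those figures come from, here is a back-of-envelope sketch of the arithmetic in Python (my own illustrative numbers, not anything measured in n-Track):

```python
# A 32-bit float carries 24 significant mantissa bits, so each rounding
# step is at worst roughly one part in 2^24 of the signal itself.
import math

mantissa_bits = 24                 # IEEE 754 single precision
rel_error = 2.0 ** -mantissa_bits  # worst-case relative rounding error
print(20 * math.log10(rel_error))  # about -144.5 dB below the signal

# Because float rounding error scales with the signal, lowering the peak
# level pushes the error floor down by the same amount.
for peak_dbfs in (0, -6, -12):
    floor = peak_dbfs + 20 * math.log10(rel_error)
    print(f"{peak_dbfs} dBFS signal -> error floor near {floor:.1f} dBFS")
```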
This is SO not an issue, although I’m sure we’ll hear folks talking about the “clear openness” of 64 bit mixes, or some rot like that. And if there is an objectively definable difference (meaning, we could show it in a double-blind study), it would only appear for the very best mixes and would be very subtle – something for the golden-ears guys.
On the other hand, it won’t do any harm, other than increasing CPU (by how much I don’t know, maybe not much).
With one possible exception: it might cause interoperability issues with some plugins. That depends on whether the host (e.g., n-Track) tries to use the plugin in 64-bit mode, passing it 64-bit data. (I don't know whether "64-bit mixing" applies just to the final summation stage, or 64-bit throughout. It could be either way.) There are methods for using plugins in different formats, but 32-bit float is the current default and going outside of that could be messy.
I just read a discussion about the value of increased bit depth with Terry Howard, who recorded a lot of Ray Charles’ music. What I gathered from the discussion (well actually they were talking about going from 16 bits to 32 bits) was that increased amplitude resolution makes it easier to record music with a lot of dynamic range without encountering volume-stepping problems at low volumes and digital clipping at the high end. Basically, increased bit depth increases headroom. But as LearJeff said this resolution would have to be maintained throughout your digital signal chain to gain the benefit.
T
Right, T, but that doesn’t apply to 32-bit float versus 64-bit float in the mix path – it applies to 16 bit vs. 24 bit when recording.
There are also differences between fixed point formats (like 16 and 24 bit) and floating point formats (like 32 [usually] and 64 bit) in the mix path. Note that floating point formats are not used for recording and playback, just mixing and processing. One big difference is that floating point formats don't clip (level off the signal) when it goes over 0 dB. Nor do they distort the signal in any way, unless it reaches astoundingly high levels that are nearly impossible to hit even if you try really hard (thanks to the use of exponents).
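To make the no-clipping point concrete, here is a small NumPy illustration (the sample values are made up): a peak above full scale gets flattened when forced into 16-bit integers, but survives intact in float and can simply be turned down afterwards.

```python
import numpy as np

signal = np.array([0.5, 1.7, -2.3], dtype=np.float32)  # peaks well above 0 dBFS

# Fixed point has a hard ceiling: converting to 16-bit integers clips at full scale.
as_int16 = (np.clip(signal, -1.0, 1.0) * 32767).astype(np.int16)
print(as_int16)        # the 1.7 and -2.3 samples are flattened to full scale

# The float samples still hold their true values, so pulling the fader down
# afterwards recovers an undistorted signal.
print(signal * 0.4)    # every peak now safely below 1.0, nothing lost
```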
There are no soundcards that record at 32 bits, so I bet the discussion you read was about 16 vs. 24. There probably never will be a soundcard that records at 32 bits, because our "24-bit" soundcards don't even have full 24-bit accuracy. To do better than that, we'd have to use something other than electronics (or else really high line levels), because quantum fluctuations would cause higher signal levels than what we'd be trying to record, or so I read (I won't pretend to understand the physics/electronics involved!).
You’re right - it was 24 bits not 32 for recording.
Also, thanks for clarifying the difference in usage between fixed and floating point formats. I guess the implication for mixing in a floating point format is that you really NEVER have to worry about clipping the stored signal while mixing. And because clipping at playback is always preventable, you really reduce the chances of screwing up a mix.
T
thanks for this thread, guys, especially LearJeff. A good explanation. I've been wondering the same thing…
Quote:
Forgetting about other programs, are there any N-track users who can say with certainty they hear a difference when the 64 bit option is on in N-track? If so are there any examples you can post?
Thanks for the input, especially from Jeff, though my question was just limited to the practical side of the 64/32 bit mixing question - as I stated it in the original post - and again above.
Unfortunately I’m all too familiar with the technical details surrounding all this (and definitely do not recommend anyone waste their time trying to understand all the technicalities - it takes your mind off the music way too much!!).
All that really matters is whether it makes a difference in the overall sound of the mix. There aren't many examples available - and none using N-track, which is all I use. Flavio gave us the 64 bit option, so I'm still wondering if anyone has noticed anything - good or bad - and can post examples. That's all that counts, and it may turn out that I won't like the 64 bit sound.

In my case I couldn’t hear a difference in several very clear and detailed mixes run through at unity gain and many other combinations of levels.
So I put the call out again - has anyone noticed anything different in the sound of their mixes at 64 bits?? And do you have an example we can listen to?
Thanks!
no cents
Quote:
Originally posted by Frits Nilsen - Technical explanation of the 64 bit floating point mixing:

If your music production mainly concentrates on using softsynths and encoding the final master to MP3 files, then don't bother with 64 bit mixing, as you won't be able to hear any improvements in audio quality. If however you are producing music recorded in pristine studio conditions and aimed at reproduction on high-end audio equipment, 64 bit mixing will offer better precision and a larger dynamic range.

The 64 bit mixing will only be utilized when the mixer engine is processing and routing audio internally. The VST plugin specification currently only supports 32 bit floating point audio, so the mixer engine will convert the audio down to 32 bit floats when routing audio through VST plugins.

When describing floating point numbers, the 32 and 64 bits refer to the amount of memory required to store the floats. The bits are split into two parts that store the precision and the exponent for the 'floating point'. 32 bit floats offer 25 bit precision and 64 bit floats offer 54 bit precision.

The advantage of 64 bit mixing is evident when an audio signal is gain scaled or when two or more audio signals are 'summed' in the mixer engine. Gain scaling occurs when the track gain or pan settings affect the audio signal. The scaling involves multiplying the floating point audio signal by a floating point scale value. These multiplications result in values that use more precision bits than the original audio signal, causing the least significant bits of the result to be discarded.

To illustrate the effect of summing, let's assume that you have two 25 bit precision audio sources that you want to mix together. These sources could be wave files or the 32 bit floating point output of VST plugins. If you mix these sources together without changing the gain of the tracks, the output will fit within 32 bit floats, provided that the summed output does not clip. If you change the gain of one of the tracks, then the 25 bit precision of that track is displaced up or down in relation to the other track. A gain change of 6 dB will result in approximately one bit of displacement. When summing the two tracks, the combined precision interval has thus been extended beyond the 25 bits that can be stored in 32 bit floats, so the least significant bits are truncated and you lose some of the precision in the audio sources. This is where 64 bit mixing will offer an advantage over 32 bit mixing, as it can use 54 bit precision to store the results of gain scaling and summing, and thereby reduce the artifacts of the floating point truncations.

So what's the point of the higher precision if the master output is being bounced to a 24 bit or 16 bit wave file? For every audio track that you add to a mix, you add noise as a result of the truncation of the lower bits of the signal. The accumulation of these rounding errors can result in a slightly degraded output that can be present even when rendering to 16 bit wave files. Despite these apparent mathematical benefits, 64 bit mixing will only yield a minimal quality improvement. 32 bit mixing is still fully sufficient for professional grade productions.

An example of how to test the difference between 32 bit and 64 bit mixing:
* Set the mixer engine to 32 bit mode in the preferences dialog.
* Create an arrangement with a bounce-enabled master track.
* Import a 16 bit wave file onto a new track.
* Create a track with level automation and create a curve sequence containing a few curve events with slow fades.
* Bounce record the arrangement.
* Drag the bounced master sound to a new track and mute the track. The sound properties should show that it is using 32 bit float.
* Switch the mixer engine to 64 bit mode.
* Create a new sound on the master output track.
* Bounce record again.
* Drag the bounced master sound to a new track. The sound properties should show that it is using 64 bit float.
* Enter the sound editor for one of the two bounced recordings, select the entire wave data and use the 'Invert phase' edit menu.
* Mute the 16 bit wave file track and the level automation track, and unmute the two tracks containing the bounced audio.
* Create a new sound on the master output track.
* Bounce record again.
* The new bounced sound appears to be silent, because the inverted-phase sound will cancel out the non-inverted-phase sound.
* Enter the sound editor for the newly bounced sound, select the entire wave data and use the 'Normalize' edit menu.
* Normalizing will boost the extremely low level noise differences between the 32 bit and 64 bit bounced recordings.
* Before hitting play to listen to the noise, make sure the volume dial on your stereo is turned down. The noise you hear is the result of the truncation that occurs during 32 bit processing.
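For anyone who wants a feel for what that null test measures without firing up a DAW, here is a rough offline approximation in Python/NumPy. The track data, the gain curve and the mix() helper are all invented for illustration; the sketch just repeats the gain-scale-and-sum stage in 32-bit and then 64-bit precision and reports the peak of the difference:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 48000
track_a = rng.uniform(-0.5, 0.5, n)   # stand-ins for two audio tracks
track_b = rng.uniform(-0.5, 0.5, n)
gain = np.linspace(0.1, 1.0, n)       # a slow fade, like the level automation

def mix(dtype):
    # Gain-scale one track, leave the other alone, then sum - all in one precision.
    a = track_a.astype(dtype)
    b = track_b.astype(dtype)
    g = gain.astype(dtype)
    return a * g + b

mix32 = mix(np.float32).astype(np.float64)
mix64 = mix(np.float64)

# "Invert phase and sum" is just a subtraction; its peak is the residual noise.
residual = mix64 - mix32
peak = np.max(np.abs(residual))
print("residual peak:", 20 * np.log10(peak), "dBFS")  # typically somewhere around -140 dB
```

The printed peak is the same quantity LearJeff suggests checking further down the thread: the level of the difference signal before any normalization is applied.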
Interesting quote and finally someone has a test that we can all try to see the difference between 32 and 64 bit mixing. I’m gonna try this in N-track.
no cents
There you have it, nocents. I can't hear the difference between 32 bit and 64 bit, not even 24 bit.
But when a track is processed, even if it’s only normalization, then it does matter.
Every process involves math, and with a digital system, the result of almost every calculation has to be rounded, and that gives rounding errors. The more bits, the less significant is the rounding error.
So if a track is processed a lot then there are many rounding errors and 64 bit minimizes those errors.
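One way to see that accumulation is to run the same (nominally reversible) gain chain many times over in 32-bit and in 64-bit and watch how far each drifts from the original. This is only an illustrative NumPy sketch with made-up numbers, not a claim about how any particular DAW or plugin is coded:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.uniform(-1.0, 1.0, 10000)

def process(x, dtype, passes=200):
    y = x.astype(dtype)
    for _ in range(passes):
        y = y * dtype(0.731)   # an arbitrary gain change...
        y = y / dtype(0.731)   # ...and its exact undo, so ideally y is unchanged
    return y.astype(np.float64)

err32 = np.max(np.abs(process(samples, np.float32) - samples))
err64 = np.max(np.abs(process(samples, np.float64) - samples))
print("float32 drift:", 20 * np.log10(err32), "dB")  # many orders of magnitude larger...
print("float64 drift:", 20 * np.log10(err64), "dB")  # ...than the 64-bit version
```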
I can highly recommend Bob Katz's The Art and Science of Mastering for an in-depth explanation of this. I won't quote him here, but he states that the degradation from mixing and mastering at less than 48 bits is very noticeable indeed.
Quote (nocents @ June 01 2006,14:11):
[Frits Nilsen's explanation and the 32 vs 64 bit null test, quoted in full above.]
There are a number of factual errors in this post.
There is a 64-bit option for VST. (This may simply be out of date and not a factual error.) However, it's pretty uncharted waters and no telling whether any DAWs try to run plugins in 64-bit mode. So, practically speaking, there's no 64-bit operation. But the spec does define it.
Regardless of what you're recording, if you're delivering as MP3, there's no point using 64 bits.
There's no reason softsynths would benefit less than audio recorded in a pristine studio; that implication is pure crap.
32-bit floating point has 24-bit precision, not 25: there are 23 stored mantissa bits plus the implied leading '1', and the sign bit doesn't add precision. Likewise, 64-bit floating point gives 53 bits of precision, not 54.
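For anyone who wants to check the mantissa widths themselves, NumPy reports the stored bits directly (23 and 52; the implied leading 1 brings the significant bits to 24 and 53):

```python
import numpy as np

print(np.finfo(np.float32).nmant)  # 23 stored mantissa bits -> 24 significant bits
print(np.finfo(np.float64).nmant)  # 52 stored mantissa bits -> 53 significant bits
```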
The paragraph starting "To illustrate the effect ..." has a number of technical flaws, but the point he's making is essentially correct. But to say that the difference is "evident" is misleading. It's only evident if you can really detect noise that's 144 dB below the signal level (or thereabouts). Note that this is not at -144 dBFS; if the signal is at -20 dBFS, then the resulting noise is at -164dB.
But his bottom line is correct: the quality improvement is minimal. Actually, it can be shown to be negligible unless you have lots of tracks (assuming FX processing is 32-bits anyway, which is going to be the case for the time being).
I don't think that volume fades (in his example) are significant in any way, btw.
Gizmo, something to keep in mind is that while any process creates noise (as you say), often that noise will be below the level of signal that is truncated (dithered) when converting to 24-bit fixed point to play on your soundcard. This is true for both 32-bit and 64-bit, but is of course far more true for 64-bit. However, I doubt the difference between 64 and 32 bits would be distinguishable by the best golden ears in most cases.
Katz's book is great, but be careful that it was written in the days of ADAT and some of the things he says apply to ADAT but not to recording on a computer. For example, leaving 3dB of headroom.
Also, note that he's talking about 48-bit fixed point versus 24-bit fixed point, not 48-bit fixed point versus 32-bit floating point with 64-bit internal calculations (as many plugins are coded).
It's a great book, but it's just a bit out of date. 98% of the info applies regardless.
For the difference test, mix down to 24 bits without dithering. (If you dither, most of what you hear will be the difference in dithering on the two passes.) Mix down to 24 bits simply to imitate sending the data to a 24-bit soundcard.
For a second test, mix down the two trials to 32-bit floating point instead of 24 bits. This will give you a better idea of what the differences are, but note that they won't be very significant since the extra bits of precision would be lost when truncating to 24 bits. However, if you're planning to send 32-bit tracks to be mastered, then the 32-bit test is more applicable.
Also, when normalizing the difference signal (the one subtracted from the other), pay close attention to the peak level before normalizing (result of “Scan”). This is the level of the “noise” (the difference between the two tracks). And it matters a lot more than what the noise happens to sound like.
The more tracks in the song, the bigger the difference will be, unless n-Track uses a 64-bit accumulator (which IIRC, it does).
The edition of the book I have was written in 2002. In it he was talking about DAWs, Protools, Waves plugins, etc., so it’s not totally out of date.
But the facts remain:
Processing introduces rounding errors.
The more processing you do, the more the rounding errors accumulate.
The greater the bit depth, the less the rounding errors are in proportion to the wanted data.
And I think we agree on those facts.
But the question remains; is there an audible difference?
I suppose it depends on how much processing is carried out. And let's be honest, with today's software it's all too easy to stick in one more plug-in to "improve" the sound.
But personally speaking I agree with you. I don’t have golden ears. I can hear the difference between 16 bit and 24 bit, but anything above that is wasted on me.
But Bob Katz states that it is important for him and that those rounding errors (which I suppose are a species of quantization noise) do degrade the quality.
Remember when we changed from 8 bit sound cards to 16 bit? We thought we’d reached audio heaven and that no improvement was ever possible.
Nowadays if the raw audio is not 24 bit, and the mixing environment is not at least 32 bit then we feel cheated, and we certainly can hear the difference.
So I wonder, in a couple of years will our ears and experience demand 64 bit?
Right, Gizmo: we agree.
Actually, I realize I was talking about “Mastering Audio”, not the book you mentioned, a newer one perhaps. Also, stuff on his web pages that’s clearly a bit moldy. Not that I want to take anything from the man, he’s a true guru. But a statement like the one you made doesn’t make complete sense out of context due to differences between floating & fixed point. I think there are some aspects of computer math that he doesn’t understand the way a pro who deals with this kind of stuff does, and I think he sometimes overgeneralizes the lessons from valid experiences he’s had. Never mind about that anyway.
Sure, rounding errors matter. But where do they occur?
In n-Track, very little math actually happens. For a moment, ignore auxes and groups. Also ignore plugins, because as we already discussed, 64-bit processing probably doesn’t affect plugins yet.
In n-Track, the data is converted from the source track format to 32-bit float. After that it’s not touched (other than being passed to plugins) until summing. Now, while there are multiple faders involved in summing (e.g., the track fader and the master fader), there is only one multiply (because N is clever). Then all the tracks are summed to an accumulator. And, IIRC, n-Track uses a 64-bit accumulator here – but perhaps I imagined that so let’s assume it’s a 32-bit accumulator.
So, all this processing we’re talking about is a single multiply instruction (for each track) followed by summing the results (for each track). If you have 2 tracks, that’s 1 extra bit of result that could be lost. For 4 tracks, 2 bits. For 8 tracks, 3 bits. Note that these are bits BELOW the significant 24 bits. Also note that after the multiply, all 24 bits are significant (the low-order 24 bits are lost, but who the heck cares?)
So, if auxes and groups are not used, I predict we’d see only a tiny difference between 32 bits and 64 bits. In other words, the only difference would be due to bits below the 24 significant mantissa bits that would have added up and rolled into the significant ones (due to adding multiple tracks). This is extremely small in significance.
Of course, if we take auxes and groups into account, this same process happens at each summing point, so we have the same small loss of significance at 2 or 3 stages (or more, if you have groups feeding groups, or groups feeding auxes). So, there is some cascading of errors here, causing an accumulation of errors. But remember we’re accumulating tiny errors.
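Here is a toy version of that summing stage (an illustrative NumPy sketch with invented track data and gains, not n-Track's actual code): every track gets its single gain multiply in 32-bit float either way, and the only thing that changes between the two runs is the width of the accumulator.

```python
import numpy as np

rng = np.random.default_rng(2)
num_tracks = 64
tracks = rng.uniform(-0.1, 0.1, (num_tracks, 48000)).astype(np.float32)
gains = rng.uniform(0.2, 1.0, num_tracks).astype(np.float32)

def sum_tracks(acc_dtype):
    acc = np.zeros(48000, dtype=acc_dtype)
    for track, gain in zip(tracks, gains):
        acc += (track * gain).astype(acc_dtype)  # one multiply per track, then accumulate
    return acc

mix32 = sum_tracks(np.float32).astype(np.float64)
mix64 = sum_tracks(np.float64)   # 64-bit accumulator, same 32-bit track data and gains

diff = np.max(np.abs(mix64 - mix32))
print("accumulator difference peak:", 20 * np.log10(diff), "dB")
# With a handful of tracks the difference is tiny; it grows only slowly as the
# track count goes up, which matches the point above about accumulating errors.
```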
For the kind of processing plugins do, the difference is far, far more significant. Note that I’m not talking about the bus between plugins, but internal math in complex plugins, for example convolutions & such. In this case, small errors can accumulate dramatically and the difference between 32-bit internal computations and 64 bits can be dramatic. (Perhaps this is what Katz must be talking about when he says 48 bits are minimum – but even then it totally depends on how the plugin is coded and what it does and all sorts of stuff, too much to make a gross generalization justifiable.)
Also, note that 24-bit soundcards do NOT record 24 bits of accurate data – it's closer to 20 bits in most cases. The low-order 4 bits are generally pure noise. The electrical requirements for getting a signal that perfect are incredible; you run into stuff like Brownian motion and quantum effects causing problems. And BTW, those 4 bits of noise are much better than 4 bits of zeros would be. This is a well-understood phenomenon in medical imaging (an area where I've done some work), but not one I ever hear folks discuss much in audio.
Personally, I’m pretty darn sure that 64 bits is a waste of time (except for accumulators and for internal math in plugins) and that it won’t lead to better sounding mixes. But it’ll become standard for the same reason 24x192 is in demand in pro studios: more must be better and it’s feasible. It’s easier to say “Yes, we do that” than explain why it doesn’t matter.
No harm in it anyway.
Cheers,
Jeff
Quote:
So I wonder, in a couple of years will our ears and experience demand 64 bit?
I actually think that the trend (outside of those directly involved in recording) is entirely in the other direction. A rapidly growing number of people are becoming accustomed to the various low-bit-rate codecs (MP3, WMA, AAC, etc.) as the default playback format. It is almost impossible to find a high-end audio store when there used to be dozens. While many people can hear the degradation due to (data) compression, they find convenience an overriding consideration and appear to be more interested in listening to a lot of music than in "intense" listening to high-quality recordings. I would also suggest that they are not "wrong" to do so. It really is about the music, not just the sound. Eventually storage and bandwidth will become so cheap and common that these compromises will not be required, but there is presently relatively little consumer demand for improved audio quality. We will get the quality eventually, but not because consumers demand it. Instead it will arrive (as it already has in the pro market) because it can be done and it will initially provide an opportunity for product differentiation (they will have run out of other things to promote).
Most people just listen to the melody and the lyrics, and if they can understand them, that is good enough. "Hi-Fi" enthusiasts have always been a minority, and the emergence of mass-market Hi-Fi in the late sixties was partly a reflection of the improved technology of the day and partly a function of the widespread use of marijuana and psychedelics, which provided the increased focus on the details of musical reproduction. As the culture moved on, focus shifted to other things. While I am very concerned about quality in music, only a fraction of that is audio quality per se; good songs and good performances trump audio quality every time. That said, I try to maintain as high a quality as is appropriate for the quality of the music. I will not go to great lengths unless there is a reasonable expectation that the song and performance warrant it.
My advice to beginners is not to get too worked up about gear and technology issues and just get on with performing and recording. I do use 24 bits for the relaxed tracking requirements, and if I cared I could probably discern a difference between it and 16 bits, but it is a moot point. Distribution will be at 16 bits or worse for the foreseeable future. Improvements in raw audio performance beyond the distribution medium's limits will only be audible in your studio, but improvements in musical performance, arrangement, balance and songwriting will be obvious at almost any playback resolution. People still listen to Django, Caruso and Bix Beiderbecke even though the recordings are far below contemporary standards, because the music is so good.
Modern technology is more than adequate; just make sure you don't have gross problems (hum, buzz, distortion, etc.) and work on the musical aspects. It is tempting to think that you can buy your way to a good recording, and you can - if you don't care about the content of that recording. I find these issues interesting since I design audio gear for a living, but ultimately I subscribe to Limey's advice: "If you can't make a good recording with a 'stick-mic' and a Soundblaster, you can't make a good recording with anything."
I don't actually mean to discourage this kind of discussion, since this is how people learn, but I do want to keep reminding people to focus most intently on the most significant issues. In my youth I was an art student; we were constantly taught to rough in the whole drawing and not get too absorbed in portions of it. A detailed and lovely drawing of a hand is all well and good, but if it is in the wrong place it won't work no matter how well rendered it is. It is similar in audio: great guitar tone and a noise-free recording of a poorly conceived and executed solo is a waste of effort, unless it is purely to practice recording technique. Ask yourself the question: is this music good enough that people will say "This would be really good if only it didn't have that quantization noise"?
Jim
I think that Jimbob’s posting sounds so right that I almost hate to extend the thread, but I just noticed something interesting about n-track’s drum machine. Specs include the following:
"Superior sound quality, thanks to the 64 bit internal engine.
DK+ can import wave files with 8, 16, 20, 24 and 32 bit integer resolution, and 32 or 64 bit floating point resolution, mono or stereo."
So, Flavio and the programmers at Nusofting are definitely paying attention to using 64 bit floating point.
T
Yessir, I think Jimbob’s post is very interesting.
And Jimbob is probably right in all he says, although I don’t think my becoming a “Hi-Fi” enthusiast was due to marijuana.
And it’s certainly true that there were many times I found myself listening to the equipment and not the music.
But isn't that the role of an engineer? And I come from an engineering background. I am not now and never was much of a musician.
I take Jimbob's point, but is that why most music these days is horribly compressed?
"Screw the quality of the sound! Just make sure everybody hears it on the jukebox!"
Well, perhaps that's what the public wants, but it's not what I want.
And I hope to #### that the mastering engineer who charges a fortune to master my mixes thinks like me too.
Sure, “good enuff” is usually good enough, but isn’t “best we can” better?
Anyway, I’ll leave music to the musicians, and I’ll try not to screw up my part of the process more than I can avoid.
Interesting discussion, but totally off the topic of this thread, which is:
Has anyone heard a difference using the 64 bit mixing option and if so, can you post examples of the 32 bit vs 64 bit sound?
That's about it!
I’ve been through these discussions before including with people like Katz (amazing he has the time to answer so many questions!).
At this point I’m just interested in practical results, so all I can add is another part to the original topic:
When using the 64 bit mixing option, is your CPU usage noticeably higher?
no cents
Quote:
Has anyone heard a difference using the 64 bit mixing option and if so, can you post examples of the 32 bit vs 64 bit sound?
I have been designing telephone headsets for 20 years with some work on Hi-Fi headsets. I guarantee you that any anecdotal comment about the difference in sound will not be informative. In order to hear the differences we are describing, a very artificial test condition will have to be created and even then it is likely that the persons doing the evaluation (including myself) will be unable to avoid bias.
The only way to do these evaluations is with double-blind testing, where neither the person listening nor the person running the test knows which file is which. We have repeatedly done subjective testing in my line of work and have learned the hard way that even gross differences are not as obvious as you might suppose when evaluated subjectively, and that with proper experimental technique, strongly held opinions about "obvious" differences can regularly be proven wrong. This is not to say that these differences do not exist, just that only careful experiment can prove it (to a rational person). If we were talking about 8 bits versus 16 bits this would probably not be true, but even there you might be surprised if we were talking about non-expert listeners.
I would argue that the mere fact that we have to go to these lengths to evaluate the difference is an indication that it is not worth worrying about when other issues are much more significant and immediate. There is also an underlying hubris associated with the obsession with perfection in amateur recording. Give that up.
Quote:
When using the 64 bit mixing option, is your CPU usage noticeably higher?
This is actually the most relevant question. There is no question that 64-bit mixing will have less error than 32-bit so CPU usage is the only reason to avoid using it. I have no faith that anyone will be able to reliably pick out the difference between a real song mixed with 64-bits and the same song mixed at 32-bits (with all the same control settings) but if there are no downsides for using the higher resolution, go ahead and use it.
As to whether my last response was "totally off the topic":
In general readers of this forum tend to be more rational than some of the others. We use N-track because it is cost-effective, not because it is the ultimate in quality (although I believe that it is as good as any of the others in terms of audio quality). The pro audio business is trying to promote “features” to encourage you to buy more and “better” gear and they are faced with the problem of differentiating their products from others. They will gladly make “improvements” that are purely academic, knowing that there will always be someone who will claim to hear “big differences”. Other manufacturers will then follow because they want feature parity. I will continue to chime in with reminders because not everyone who follows a given thread will be familiar enough with previous threads to understand when we are talking about esoteric concerns. We all need to be reminded now and then what it is we are really trying to do and also be reminded that there is indeed a forest, not just trees.
Jim
Jimbob, no need for a lengthy rehash of all the issues. I've been well aware of them for years, probably like a lot of other people who've done lots of investigating on their own.
I ask a simple question - there isn’t any “hubris” involved. If you don’t hear any difference between 64 and 32 bit mixing just say so and move on - no need for a digital audio catechism.
I’ve known people who design audiophile speakers who do things by computer mostly, yet always ask their uninformed friends and neighbors - just because their lack of exposure to the technicalities makes them a “clean slate.” So I wouldn’t be surprised if you a) hear no difference, b) go back and forth between 32 & 64 bit and c) get frustrated with the whole thing.
Sometimes people surprise me and really do hear things that audiophile people miss. People will surprise you too. So if you don’t mind, I’d like to hear from some of them - in a non-technical manner.
At this point I could care less about all the technical crud. It's much more interesting to know about real opinions based on what people have actually heard. And if they hear a difference, all it takes is two files to see if anyone else can hear it.
no cents