help with dB math?

who’s wrong: Flavio or me?

Me, most likely. But how come?

I have a 24-bit wave file. The highest sample value in the file is 0x60ef89, or 6352777.

By my understanding, the dB value for this should be:

dB = 20 log10(6352777 / 0x800000)
dB = 20 log10(0.75731)
dB = -2.414

But n-Track reports the peak value as -1.85 dB. (Either during playback, or using the “scan” button in the “Normalize” popup.)

n-Track and my math agree on the value for a half-height wave, or peak values of 0x400000 (4194304, or 0.5 of full scale), which is -6.02 dB.
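For the record, the arithmetic checks out in a couple of lines of Python (`peak_db` is just a throwaway helper name):

```python
import math

def peak_db(sample, full_scale):
    """Peak level in dBFS: 20 * log10(|sample| / full_scale)."""
    return 20 * math.log10(abs(sample) / full_scale)

print(peak_db(6352777, 0x800000))   # the 24-bit peak above: about -2.41 dB
print(peak_db(0x400000, 0x800000))  # half scale: about -6.02 dB
```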

Geez, are you working on that wav file splitter or something? :) I am no math or dB expert, but just doing some quick (hopefully correct) math on a 16-bit mono wav file I have, I find I am in agreement with N-track. The peak sample in the file was 0x7e12, and using a reference of 0x8000, I get:

20 log10(0x7e12 / 0x8000) ≈ -0.1319…

…N-track says “-.13” dB.

Could it be some error in obtaining your peak sample (endian-ness, block alignment, etc.)? Could my example be irrelevant? Like I said, I’m no expert…

(Edit: Interesting to see that to get the -1.85 dB that N-track shows for your file, the peak sample should be somewhere around 0x6771F2, or 6779378. Quite a difference. Hmmm…)


A little more info: I converted the file I mentioned above to 24-bit using Audacity. My peak sample became 0x7E11FF (was 0x7E12 when it was 16-bit). Changing my reference in the formula to 0x800000 yielded the same dB value as before by my calculations (about -.13 dB), which N-track still agreed with.


Thanks, Tony.

Yes, I’m starting to build a simple track splitter. (By simple, I mean no GUI.)

Perhaps my sample value is wrong, but I can’t quite see how. My program reports the sample number of the peak, and it matches the sample number where n-Track reports its peak.

However, a funny thing happened. I made some unrelated changes to my program, and ran it and it reported -1.85! So I bet I just had a bug of some kind with the peak sample value. I wonder if it will still agree today – you know, the phase of the moon has changed a bit. :wink:

Now I’m having trouble getting my RMS values to make sense. Oh, well … the struggle continues.



As it turns out, both programs were buggy (mine and n-Track).

The “bug” with n-Track is that the “Selection” choice in the “Normalize” popup isn’t precise to the sample level. That is, with very small selections, it scans outside the selection.

The bug with my earlier version was that I was just finding the maximum value and then printing its dB conversion. Well (doh!) the true peak was a negative sample. (Did I say “DOH” yet?) :p
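For anyone else who trips on this: the peak is the largest absolute value of the signed samples, not the largest signed value. A quick sketch (`find_peak` is just a throwaway name):

```python
def find_peak(samples):
    """Peak magnitude of signed samples: the largest absolute value,
    so a negative trough counts just as much as a positive crest."""
    return max(abs(s) for s in samples)

wave = [1000, -6352777, 40000]
print(max(wave))        # 40000 -- the buggy "peak"
print(find_peak(wave))  # 6352777 -- the real one
```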


When you are done with the splitter how about working on a “Reverb Remover” for all of us heavy-handed Irishmen? I can’t go back and re-dub my vox and it would sure be nice to remove some of the verb from them. :D

Seriously, I wonder what the physics would be for such a plug-in. I’m sure there would be a big market for it in Ireland.

In lieu of a plug-in, what do you do to get rid of excess reverb in your studio when re-dubbing is not an option?


Excess reverb in the studio? I assume you are getting too much room sound in the mic? I mean, turn down the reverb level if it is a plugin. But with room sound, a healthy dose of quilts and comforters hanging behind the singer, assuming you are using a cardioid mic, can help a lot.

Thanks Bubba,

But my problem is in trying to reduce reverb that is already recorded and mixed down. The room reflections were not that bad, but stupid me had to go and crank the reverb up on my vox. Now what do I do? Re-dubbing the vox is not an option.

Don (Hopelessly Irish) Gaynor

Well, there is very little you can do other than maybe some EQ and learn for next time. Don’t ever do destructive things like that. Always record vocals dry and then add reverb as an aux effect so you can turn it down later if you find it to be too much.

Quote (learjeff @ Oct. 11 2004,09:29)

As it turns out, both programs were buggy (mine and n-Track).


Glad you could squash at least one of the bugs :) . In my limited audio file experience, I've found that my biggest "Doh!"s have been related to signed integers *cringes at hours wasted*. I don't know what your exact needs are, but have you looked at using a sound file library like libsndfile to avoid having to reinvent the wheel? libsndfile is popular in the free/open-source world, and I believe the LGPL license means commercial products can link to it (edit: dynamically only) and still stay closed-source (in case you're worried about anything like that).


Once I get the algorithms right I might consider translating from Python to C or C++. But I don’t even have a C or C++ development environment on my PC. And Python is way more fun. Too bad it’s not nearly as fast as C, nor even as fast as Java. (Who knows, I might just recode in Java – that way it’s more portable!)

I also got the RMS problem fixed. Silly me, I was doing two steps in the wrong order, like converting to dB before the square root or something. Another palm-print on forehead.
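For the record, the right order is square, mean, root, and only then convert to dB. A sketch (`rms_db` is a throwaway name):

```python
import math

def rms_db(samples, full_scale):
    """RMS level in dBFS. Order matters: square, mean, root, THEN dB."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 20 * math.log10(math.sqrt(mean_square) / full_scale)

# Sanity check: a full-scale sine should land at about -3.01 dB
# (sine RMS = peak / sqrt(2)).
n, full = 1000, 0x800000
sine = [full * math.sin(2 * math.pi * i / n) for i in range(n)]
print(rms_db(sine, full))
```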

I’m pretty familiar with the numeric format issues and have those under control. My code would have problems with 32-bit fixed-point files, if there is such a thing, and it also won’t work for 32-bit float. Right now I’m only debugging it for 24-bit files, and when I’m done I’ll probably fix it for 16 bits too.

Pretty soon I’ll have it able to detect the sounds and trim out the noise. It’ll probably clip nice and close at the start but leave in a bit of noise at the end – with configurable adjustments. If an idea works out, it’ll be able to automatically figure out the background noise level and adjust if it changes.

The next thing I want to figure out is how to determine what the note is. The straightforward way is to do an FFT and pick the strongest low component. If I can do that, it’ll name the wave file according to the MIDI note number and a runtime parameter (which I’d use to indicate the velocity layer). Unfortunately, I don’t know how to do an FFT. Hopefully I can find a freebie.
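A sketch of that straightforward way, assuming numpy (whose `np.fft.rfft` is one of those freebies); `detect_midi_note` is a made-up name, and this naive version will happily pick a harmonic instead of the fundamental:

```python
import numpy as np

def detect_midi_note(samples, sample_rate):
    """Rough pitch guess: FFT magnitude peak mapped to the nearest MIDI
    note (69 + 12 * log2(f / 440)). Naive: the strongest bin may be a
    harmonic, not the fundamental."""
    spectrum = np.abs(np.fft.rfft(samples))
    spectrum[0] = 0.0                             # ignore the DC bin
    freq = np.argmax(spectrum) * sample_rate / len(samples)
    return int(round(69 + 12 * np.log2(freq / 440.0)))

# An A440 sine at 44.1 kHz should come out as MIDI note 69:
t = np.arange(44100) / 44100.0
print(detect_midi_note(np.sin(2 * np.pi * 440.0 * t), 44100))  # 69
```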


I hit one at work a while back, in some code used to determine how long a wave file should play in milliseconds. The length of the raw data in a PCM wave is held in a DWORD (unsigned 32-bit). Doing the math in place in a DWORD variable will overflow past MAX_DWORD if the channel count and sample rate are high enough and the wave is long enough.

The code in question was barfing for a wave that was only three minutes long. It was showing up as 20 seconds. GREAT BIG DOH!! when I found it. It was in code that’s been around for at least 6 years and no one caught it because most of the files being used were only a few seconds long.
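That wraparound is easy to reproduce. A Python sketch that simulates the 32-bit DWORD math (the `& 0xFFFFFFFF` mask stands in for the unsigned overflow; `duration_ms_dword` is a made-up name):

```python
def duration_ms_dword(data_bytes, byte_rate):
    """Play length in ms computed the buggy way: the intermediate
    product is truncated to 32 bits, as it would be in a DWORD."""
    return ((data_bytes * 1000) & 0xFFFFFFFF) // byte_rate

# 3 minutes of 16-bit stereo at 44.1 kHz:
byte_rate = 44100 * 2 * 2            # 176,400 bytes/sec
data_bytes = 180 * byte_rate         # 31,752,000 bytes of PCM data
print(duration_ms_dword(data_bytes, byte_rate))   # wraps to under 10 seconds
print((data_bytes * 1000) // byte_rate)           # 180000 -- the real answer
```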


Just a heads up about the difficulties of using the FFT. There are some better ways to determine the frequency, which you can find all over the internet (e.g. music-dsp).

The method that you are planning on has the problem that it is very sensitive to transients (which raise the level of many frequencies at once), and that the lowest frequency you can possibly measure with an FFT is sample_rate / number_of_samples. So, the lower the frequency you want to measure, the longer you have to sample the wave, and the more likely that a transient will fall in the sample range.

So, measuring a 20 Hz frequency at a sample rate of 44.1 kHz would require at least 44100 / 20 = 2205 samples. The next power of 2 above that is 4096 samples ≈ 93 milliseconds. So, if you are within 1/10 of a second of a transient…

Plus, the FFT does not do well if the frequency is between the fixed frequencies that the FFT uses. There are other algorithms that you can use to track down the “exact” frequency if you know where to look.
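For example, one standard trick (a sketch assuming numpy; `interp_peak_freq` is a made-up name): fit a parabola through the log magnitudes of the peak bin and its two neighbours to estimate the frequency between bins.

```python
import numpy as np

def interp_peak_freq(samples, sample_rate):
    """Refine an FFT peak estimate: fit a parabola through the log
    magnitudes of the peak bin and its two neighbours."""
    mag = np.abs(np.fft.rfft(samples))
    k = int(np.argmax(mag[1:-1])) + 1             # skip the DC and Nyquist bins
    a, b, c = np.log(mag[k - 1 : k + 2])
    offset = 0.5 * (a - c) / (a - 2 * b + c)      # fractional-bin correction
    return (k + offset) * sample_rate / len(samples)

# A 441.7 Hz sine in a 4096-sample window at 44.1 kHz: the bins are
# ~10.8 Hz apart, but interpolation gets far closer than one bin.
sr, n = 44100, 4096
t = np.arange(n) / sr
est = interp_peak_freq(np.sin(2 * np.pi * 441.7 * t), sr)
print(est)
```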

Good luck,

OK, thanks Ben. I’ll follow that link and see what I find.

I don’t have a problem with sample length, since a sample will generally be the full duration of a note (note played and held until it fades out). Furthermore, I plan to skip the initial attack in the note to avoid transient confusion; start the “note detector” at the first positive zero crossing after (say) 10% of the sample duration.
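That start-point logic is simple enough to sketch (`analysis_start` is a hypothetical name):

```python
def analysis_start(samples, skip_fraction=0.1):
    """Index of the first positive-going zero crossing after skipping
    the initial attack (the first skip_fraction of the samples)."""
    start = int(len(samples) * skip_fraction)
    for i in range(start, len(samples) - 1):
        if samples[i] <= 0 < samples[i + 1]:
            return i + 1
    return start  # no crossing found; fall back to the skip point

print(analysis_start([0, 5, -3, -1, 2, 4, -2, -4, 1, 3]))  # 4
```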

Note that my case is far simpler than an on-the-fly real-time note recognizer (like, for a real-time guitar-to-MIDI converter). This is all post-processing.

But the point about the fixed frequencies is important. I’ve always been a bit confused about the discrepancy between my foggy notion of an FFT as being a discrete set of values, versus the “FFT display” we see in programs like n-Track, where it looks continuous. What’s the deal with that?