Samplerates: the higher the better, right?


Hi, and welcome back. Today we’re going to talk about samplerates and oversampling: why do we need oversampling options in dynamics plugins like compressors and limiters, or in saturators like Saturn? And would it be better to just run a higher project samplerate instead?

But let’s start with a general chat about resolution. I grew up watching standard definition TV, which here in the UK meant an image size of 720 by 576 pixels. Honestly, it seemed fine to me at the time, but compared to modern HD video at 1920 by 1080, the differences are obvious and easy to see. Audio standards have also evolved since my youth: CDs use a samplerate of 44.1KHz and store each sample with 16 bits of resolution. Modern high resolution audio at 96KHz and 24 bits might seem like it should yield as big an improvement over CD audio as HD video provides over standard definition. But it’s not quite as simple as that.

We’re looking here at a 100Hz sine wave
in Sound Forge, and what we’re seeing is a graph of how the speaker will move when
I play back this test tone. Let’s zoom in so we can see the individual samples… each of these is connected with a straight line, but there are enough samples in each cycle that when I zoom out it looks like a perfectly smooth sine wave… and when I press play, sure enough we hear a clean, pure sine tone.

Now let’s look at another sine wave, this one right up at 10KHz. When we zoom in on this it looks awful: nothing at all like a sine wave. As our samplerate is 48KHz we barely have two sample points for each half of the cycle, and the smooth curves of the sine wave have become harsh, jagged spikes. It seems logical to conclude that this isn’t going to sound like a pure sine wave. And it also seems logical to conclude that if we increased the samplerate, so that we had more sample points in between the current ones, we would get closer to a pure sine wave. Now, a 10K test tone is never going to sound pleasant, but when I play this back at a level that doesn’t hurt my ears… actually, that sounds like a pure sine wave to me.

That’s because most audio editors and DAWs don’t really tell you the truth when you’re zoomed in this tight: joining the dots with straight lines is fast and efficient, and works fine in most cases, but it’s not what
will happen when the signal is converted to analogue. In fact these dots will be connected with a smooth curve. And not just any smooth curve: there’s actually only one mathematically correct curve that can be drawn through these points without breaking the golden rule, which I’ll come to in a moment. I can show you this by switching to a different audio editor: Acoustica from Acon Digital is unusual in that it does display the wave as it will look after reconstruction. As we can see, even with not much more than two sample points for each half of the wave, this is enough information to recreate a perfectly smooth sine wave. We can go further: here’s a 20KHz sine wave, where we have barely more than a single sample for each half of the wave. But even this is enough information to recreate a perfect sine wave.
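For the curious, that “one mathematically correct curve” is Whittaker–Shannon (sinc) interpolation. Here’s a small NumPy sketch of the underlying maths (this is not how Acoustica is actually implemented, just the idea): it reconstructs a 10KHz sine sampled at 48KHz halfway between two samples, far more accurately than joining the dots with straight lines would.

```python
import numpy as np

fs = 48_000   # samplerate, Hz
f = 10_000    # sine frequency, safely below Nyquist (24KHz)
n = np.arange(64)                          # sample indices
samples = np.sin(2 * np.pi * f * n / fs)   # the stored sample values

def reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation: the one bandlimited curve
    that passes through every sample point."""
    k = np.arange(len(samples))
    return float(np.sum(samples * np.sinc(fs * t - k)))

# Halfway between samples 31 and 32, well inside the block:
t_mid = 31.5 / fs
true_value = np.sin(2 * np.pi * f * t_mid)
sinc_value = reconstruct(t_mid, samples, fs)     # close to the true value
linear_value = (samples[31] + samples[32]) / 2   # "join the dots" estimate
```

With an infinite sum the reconstruction would be exact; even truncated to 64 samples it lands far closer to the true waveform than the straight-line estimate does.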
However, we’re close to the limit now: if the frequency of this sine wave increases so that we have less than one sample for each half of the wave, it all suddenly falls apart. If the signal oscillates from positive to negative and back again between two sample points, then that entire negative half of the waveform is lost: the decoder sees two adjacent positive samples, and reconstructs them incorrectly as a much lower frequency signal. This is the golden rule I referred to earlier: a digital system can represent all frequencies perfectly, as long as they’re less than half the samplerate. As 20KHz is less than half the CD standard
samplerate of 44.1KHz, CD resolution audio is quite capable of recreating a 20KHz sine wave. If we increase the samplerate, let’s say to 96KHz, I can now represent frequencies right up to 48KHz… but this 20KHz sine wave isn’t reproduced with any greater accuracy.

20KHz is usually considered to be the upper limit of human hearing, but in fact most people lose the ability to hear that high before reaching adulthood. I’m not ashamed to admit that I can’t hear this 20KHz tone at all. So if a CD standard samplerate of 44.1KHz is already enough to reproduce the entire range of frequencies we can hear, why would we ever want to use a higher samplerate?

For consumer formats there’s actually not much benefit. Some people argue that those higher frequencies can make an audible difference even though we don’t hear them in the conventional sense. But it’s kind of academic anyway: most consumer speaker systems can’t reproduce frequencies above 20KHz, and most consumers don’t care. And there is another issue, which I’ll demonstrate
with some test tones again. Here’s a 7KHz sine wave shown on an analyser, where it shows up as a single pure frequency. Now here’s another sine wave at 25KHz… I’ve set my system samplerate to 96KHz, so this sine wave can be represented perfectly even though we can’t hear it. Let’s add both sine waves together. In the perfectly linear and distortion-free world of a DAW this happens perfectly: we get a 7KHz sine wave that we can hear, plus a 25KHz sine wave that we can’t, and the two don’t affect one another at all.

Now let’s add some subtle saturation: here’s an instance of Saturn running the Gentle Saturation algorithm, which adds just odd harmonics. Adding this to just the 7KHz sine, we see a 3rd harmonic appear at 21KHz. Now let’s add in the 25KHz sine wave as well… and notice the extra partials that appear due to intermodulation between the two sine waves. These are well below 20KHz and could be audible if they’re loud enough.
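You can reproduce this intermodulation in a few lines of NumPy. The cubic curve below is just a stand-in for any gentle odd-harmonic saturation (it is not Saturn’s actual transfer function): mixing 7KHz and 25KHz through it produces, among other products, a difference tone at 2 × 7 − 25 = −11, i.e. 11KHz, squarely in the audible range.

```python
import numpy as np

fs = 96_000                       # high samplerate: 25KHz is representable
t = np.arange(fs) / fs            # one second of audio
f1, f2 = 7_000, 25_000
x = 0.4 * np.sin(2 * np.pi * f1 * t) + 0.4 * np.sin(2 * np.pi * f2 * t)

# A crude odd-symmetric saturator (hypothetical stand-in for gentle saturation):
y = x - x ** 3 / 3

def level(signal, f):
    """Amplitude of the spectral component nearest f Hz."""
    spec = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

clean_11k = level(x, 11_000)      # essentially zero before saturation
dirty_11k = level(y, 11_000)      # intermodulation product at |2*f1 - f2|
```

Neither input tone is anywhere near 11KHz, yet the saturated mix has clearly measurable energy there: that partial exists only because the inaudible 25KHz tone was present.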
Of course, a real-world signal will be much more complex than these two sine waves, so let’s swap the higher sine wave for some white noise, brickwall filtered to remove everything below 24KHz. When we add saturation the noise starts to intermodulate with itself, and spread down into the audible range. But now notice what happens when I add the lower 7KHz sine wave… that high frequency content is now spread out over the whole frequency range.

What this means is that, unless your playback system is 100% linear and free of any distortion whatsoever, having extra inaudible content above 20KHz simply means you’ll have more unwanted intermodulation below 20KHz. If you bandlimit the signal by removing the inaudible content above 20KHz, you might actually end up with a cleaner, better quality result. But the situation is a bit different when
you’re producing, specifically if you’re using non-linear plugins like saturators or compressors. I’ve generated a sine wave sweep for the next demonstration… in this spectral display it shows as a diagonal line starting at 20Hz and sweeping up to 20KHz. Now let’s distort this with the Warm Tape saturation style in Saturn… I’m going to keep this really subtle, with the Drive knob all the way down… this adds a third harmonic, which rises faster than the original sine wave, as you would expect. When it reaches the Nyquist limit at half the samplerate, in this case 24KHz, it can’t continue to increase in frequency. But it doesn’t just disappear: instead it reflects back down, as a uniquely digital form of distortion called aliasing. This aliased signal is no longer harmonically related to the original sine wave, so it can sound unmusical and harsh… and it keeps sweeping down until it reaches zero Hz, at which point it reflects back up again… so it can easily end up at a lower frequency than the original signal.

OK, now let’s try the same thing, but with our samplerate set to 96KHz instead of 48KHz. Here’s the sine wave sweep: the vertical scale now goes all the way up to 48KHz, so when the sweep reaches 20KHz it’s less than half way up. Let’s saturate it again… now our third harmonic has a whole extra octave of headroom before it reaches the Nyquist limit and reflects back down… and we actually get another octave above that where the resulting aliasing is above 20KHz and can be considered inaudible: when the original sweep reaches 20KHz, the aliased third harmonic is still up at 36KHz. So doubling the samplerate has given us two extra octaves of headroom for our saturation to add harmonics without audible aliasing. When we’re only adding a third harmonic, this is enough to completely fix the problem.
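If you want to check this folding behaviour yourself, it fits in a couple of lines of Python (my own toy helper, nothing from Saturn): a frequency wraps around the samplerate, then folds into the 0-to-Nyquist range.

```python
def alias_freq(f, fs):
    """Where a component at f Hz lands after sampling at fs Hz:
    it folds down at Nyquist (fs/2), back up at 0 Hz, and so on."""
    f = f % fs              # the spectrum repeats every fs
    return min(f, fs - f)   # fold into the 0..fs/2 range

# Third harmonic of the sweep as it reaches 20KHz (i.e. 60KHz):
at_48k = alias_freq(3 * 20_000, 48_000)   # folds to 12KHz: audible
at_96k = alias_freq(3 * 20_000, 96_000)   # folds to 36KHz: inaudible
```

That matches what we saw on screen: at 48KHz the reflected third harmonic ends up down in the audible range, while at 96KHz it is still up at 36KHz when the fundamental reaches 20KHz.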
However, this was a very subtle saturation setting. Let’s try switching to Heavy Saturation instead… and add a little bit of Drive… and try the same test… and that’s a bit of a mess. This time we’re adding a whole harmonic series, whose components increase in frequency rapidly, hit the Nyquist limit, and then spend the rest of the sweep merrily bouncing from top to bottom and back again. This might look pretty, but it doesn’t sound much like analogue distortion: notice those strange chirping sounds as the harmonics sweep rapidly in the wrong directions…

If we try this at 96KHz… it’s certainly better: that chirping is much quieter… but it’s still there, and the aliasing is still clearly visible. So simply doubling the samplerate isn’t good enough in this case. We could go to 192KHz… again, it’s a lot better: remember that only the bottom quarter of this graph is actually audible now… but while many of you will have interfaces capable of running at 192KHz, it’s not a very practical solution to the issue: all your plugins will require four times the CPU compared to a 48KHz project, even the ones that don’t benefit at all from a higher samplerate, and all your recordings will be four times the size. And you still haven’t completely solved the problem.

So let’s revert to a 48KHz samplerate, but this time turn on HQ mode in Saturn. This enables internal oversampling: the audio is upsampled to eight times the original rate, then saturated at that higher samplerate. Everything above 24KHz then gets filtered out before the samplerate is dropped back down to the original 48KHz. The result is harmonics that extend up to Nyquist and then stop, more or less: the tiny remaining bits of reflection get filtered out well before they reach down to 20KHz. So now we have a distortion effect that behaves much more like analogue distortion.
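The whole round trip can be sketched in NumPy: upsample, apply the non-linearity at the higher rate, then brickwall-filter and downsample. The FFT-based resampler and tanh waveshaper below are hypothetical stand-ins for Saturn’s internals, but they show why the trick works: the third harmonic of a 9KHz tone sits at 27KHz, which aliases down to 21KHz if we saturate directly at 48KHz, and simply gets filtered away if we saturate oversampled.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 9_000 * t)   # 3rd harmonic would sit at 27KHz

def fft_resample(x, n_out):
    """Ideal (periodic) resampling: copy the spectrum bins that fit,
    discard the rest. Growing n_out upsamples; shrinking it downsamples
    through a perfect brickwall at the new Nyquist."""
    spectrum = np.fft.rfft(x)
    out = np.zeros(n_out // 2 + 1, dtype=complex)
    keep = min(len(spectrum), len(out))
    out[:keep] = spectrum[:keep]
    return np.fft.irfft(out, n_out) * (n_out / len(x))

saturate = np.tanh                    # a generic odd-harmonic waveshaper

direct = saturate(x)                  # saturated straight at 48KHz
over = fft_resample(saturate(fft_resample(x, 8 * len(x))), len(x))  # 8x HQ-style

def level(signal, f):
    spec = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    return spec[int(round(f))]        # 1Hz per bin for a 1s signal

alias_direct = level(direct, 21_000)  # aliased 3rd harmonic: clearly present
alias_over = level(over, 21_000)      # removed by the oversampled path
```

Both paths keep the 9KHz fundamental; only the direct path leaves an inharmonic partial at 21KHz.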
Here’s an easy way to create audible aliasing: I’ve got a clean sine wave patch in Twin2, which I’m playing up in the higher registers… and I’m going to distort this with the Heavy Saturation style in Saturn… I’m not going to be shy: let’s crank the Drive all the way up. If we watch the result on Pro-Q3’s analyser, it’s easy to see that the harmonic series we’ve added extends up to the Nyquist limit, and then reflects back down again as aliasing. This sounds brittle and metallic, as if we’ve added subtle ring modulation… the effect becomes especially noticeable if I use the pitch wheel to bend notes… now those reflected harmonics sweep in the wrong direction, creating distinctive chirpy sounds… and likewise if I add vibrato.

Let’s try doubling the samplerate to 96KHz, which is really the highest that’s practical in most cases… if I play an E it’s easy to see that the aliased harmonics now reflect down from a point an octave higher than before… but in this admittedly contrived and extreme example, we can still hear chirping artifacts when I bend notes… or add vibrato. If I switch back to a 48KHz samplerate… and enable HQ mode in Saturn instead… we get much purer sounding harmonics, with vastly reduced aliasing and no chirpy artifacts. So we can see that when applying a lot of hard distortion to a high frequency signal, the eight times oversampling provided by Saturn’s HQ mode is much more effective than running a higher project samplerate.

So what about compression? Compression also generates extra harmonics: the faster the attack and release times, the closer compression comes to being a distortion effect. But modulating the signal at audio rates also creates sidebands, which I can demonstrate with a sine wave again. Here’s a static sine wave that I’m running through a compressor… but I’m driving the sidechain of the compressor with a drum loop… so the sine wave is modulated in level in the same way the drums would be if I were compressing those. Let’s look at the result on Pro-Q3’s analyser: when the sine wave is static we see a single pure frequency, as you’d expect… but when it’s being modulated by the drums, extra partials start to appear above and below the sine wave. With very fast and aggressive compressor settings… these partials extend higher and lower… but even with fairly normal settings there’s still significant content either side of the original sine wave.
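The sideband arithmetic is easy to verify: amplitude-modulating a carrier at f by a gain wobble at g Hz creates partials at f − g and f + g. Here’s a toy version with a 5KHz sine and a 200Hz gain modulation (a stand-in for fast gain riding, not a model of any actual compressor):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 5_000 * t)

gain = 1 + 0.3 * np.sin(2 * np.pi * 200 * t)   # gain wobbling at 200Hz
modulated = gain * carrier

def level(signal, f):
    spec = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    return spec[int(round(f))]      # 1Hz per bin for a 1s signal

lower = level(modulated, 4_800)     # sideband at 5KHz - 200Hz
upper = level(modulated, 5_200)     # sideband at 5KHz + 200Hz
static = level(carrier, 4_800)      # no sideband without modulation
```

The faster and deeper the gain changes, the further these sidebands spread either side of the original partial.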
When this compression is applied to a drum track, with cymbals extending right up into the highest octave, this extra content can end up too high and cause aliasing, which is why Pro-C2 also provides oversampling options. This time, rather than a single HQ option, we have a choice of two times or four times oversampling. Doubling the samplerate should be enough in most cases, but aggressive settings might benefit from four times oversampling.

However, this isn’t completely free. Pro-C2 uses linear phase oversampling filters, which add extra latency and an extra CPU hit on top of the cost of running the processing at two or four times the rate. Saturn uses minimum phase filters, which result in much less latency, but add some phase shift instead. So what about a scenario where multiple more
subtle non-linearities are running in series… like, for example, these drums. This is a multitrack drum loop… and I’ve processed each channel in the usual kind of way, with noise gates, compressors, EQs, and saturators. Then the whole lot goes into a subgroup for more saturation and compression. Each of these individual processes is relatively gentle, and would probably only require two times oversampling to avoid aliasing. So in this case, is it perhaps better to run the project at 96KHz, so that each plugin in the chain doesn’t have to continually upsample and downsample again? To examine that idea I’m going
to switch to guitar. This is a Les Paul that I DI’ed through a clean preamp… starting at the end of my plugin chain, here’s a Pro-Q3 with a setting that I EQ-matched from a guitar cabinet impulse, which represents my speaker cabinet. Before that in the chain I have a Saturn with a warm tube setting and just a little touch of compression: this represents the power amp stage. Before that I have another instance of Saturn to represent the preamp stage provided on most modern guitar amps… and before that I have yet another instance of Saturn, running in Heavy Saturation mode, to emulate a distortion pedal. If that seems like overkill, it’s not really: many guitarists will chain two or even three overdrive or distortion pedals before an amp that’s itself not entirely clean, with each pedal adding a touch more dirt and boosting the level into the next overdrive stage. Actually, I also have a pair of Pro-Q3 instances either side of this first instance of Saturn, to shape the signal before and after the distortion… with these bypassed it sounds too much like a fuzz pedal… turning them back on sounds much more like the Boss distortion pedal I had when I was 14. I’m running the project samplerate at 96KHz,
with no oversampling for any of the saturation stages. And let’s make it easier to analyse what’s happening by swapping the guitar for a sine wave test tone… let’s have this somewhere around 4KHz, to represent the most prominent parts of the upper midrange… and now let’s add the first saturation stage. This is a hard distortion type, close to clipping in response, so we get a lot of harmonics, and it’s easy to push this into aliasing even at 96KHz… but the 4KHz components of the original guitar DI signal are not very high in level, so the aliased components below 20KHz are likely to be very low in level and probably not significant. So running at 96KHz instead of 44.1 or 48 is probably enough to prevent aliasing being a problem at this first stage.

However, that extra octave of headroom we gained from the higher samplerate is now full of content: we’ve got harmonics up there now, plus aliased harmonics that are too high to be audible. When this signal hits the next non-linearity, in this case my virtual preamp stage, there’s no headroom left: those higher partials will grow more harmonics, which will immediately alias, and perhaps worse, we’ll get intermodulation between those higher partials and the audible content lower down. I can make it easier to distinguish the wanted harmonics from the aliasing by adding a very subtle, slow vibrato to the original sine wave: this is too subtle to be noticeable on the original sine wave and its harmonics, but the reflected aliasing, and the sidebands from intermodulation with that aliasing, now sweep around instead of remaining static. Of course, when we add the third non-linear stage, the aliasing and intermodulation from the first two stages interact with everything else again, and we get yet more aliasing and yet more intermodulation.

If we go up to 192KHz the aliasing is much improved for the first saturation stage… and the second saturation stage is probably OK now too… but we’re still filling up those extra octaves of headroom that we gained from increasing the samplerate, and sooner or later there will be aliasing and intermodulation down in the audible range. What we actually need to make this setup work better is some filtering: if we remove the extra high frequency content after each non-linear stage with a steep brickwall lowpass filter… then the next non-linear stage will be processing a band-limited signal with no inaudible high frequency content, and will benefit from all the extra bandwidth that the higher samplerate provides. When you’re running multiple non-linear stages in series, you’ll actually get better, cleaner results by band-limiting the signal between each stage.
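Here’s a sketch of that claim, using the same kind of toy cubic saturator and an ideal FFT brickwall (both hypothetical stand-ins, not models of any plugin): two stages in series at 96KHz, with and without band-limiting in between. Without the interstage filter, the 45KHz third harmonic created by stage one intermodulates in stage two, and products such as 2 × 45 + 15 = 105KHz fold back to 9KHz, right into the audible range.

```python
import numpy as np

fs = 96_000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 15_000 * t)   # bright source at 15KHz

def saturate(x):
    return x + x ** 3         # crude odd-harmonic stage (3rd harmonic at 45KHz)

def brickwall_20k(x):
    """Ideal lowpass at 20KHz (1Hz per bin for this 1s signal)."""
    spec = np.fft.rfft(x)
    spec[20_000:] = 0
    return np.fft.irfft(spec, len(x))

def level(signal, f):
    spec = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    return spec[int(round(f))]

unfiltered = saturate(saturate(x))                 # two stages back to back
filtered = saturate(brickwall_20k(saturate(x)))    # band-limited in between

junk_a = level(unfiltered, 9_000)   # folded intermod product: present
junk_b = level(filtered, 9_000)     # gone once the 45KHz partial is removed
```

Both versions keep the wanted content; only the unfiltered cascade grows an inharmonic partial at 9KHz out of nothing audible.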
So we’ve fixed the aliasing issue. But consider how inefficient this setup is: the static EQ settings either side of the first non-linear stage are using four times the CPU, as are all the other EQs in your project, and we need two extra steep lowpass filters in the chain.

Now let’s try dropping the samplerate back to 48KHz and running each instance of Saturn in HQ mode instead. Now each instance of Saturn upsamples the input signal eight times, applies its processing at an internal samplerate of 384KHz, then filters out the harmonics higher than 20KHz when downsampling. This means we don’t need to run extra brickwall filters: they’re included in the oversampling anyway. And the EQs either side of the first instance, and indeed all the other EQs in your project, no longer have to pointlessly process four times as many samples as needed. This is not just more efficient than increasing
the project samplerate, it’s also more effective at reducing aliasing.

Of course, it’s important to keep all of this in perspective, as in most cases aliasing remains subtle and difficult to hear. So let’s create a listening test. Here’s the multitrack drum loop I used earlier, with compression and saturation both on the individual channels and the subgroup. Let’s add some bass from Twin2, with some saturation… compression… and more saturation… and a Twin2 chord part, with a couple more saturation stages… and let’s have a few guitars as well, all DI’ed through multiple instances of Saturn. All of these then go through a subgroup with more saturation and compression, and then join the drums on the master bus with yet more saturation and another compressor.

Let’s start with the project samplerate at 44.1KHz, and no oversampling for any of the plugins… Now let’s switch the samplerate to 96KHz… I’ve samplerate-converted all the drums and guitar tracks using a high quality offline converter, but the synths are now generating 96KHz audio: notice how this Twin2 part now fills up that extra octave of bandwidth at the top, so it will start to alias as soon as it hits a non-linearity. Of course, you’ll still be listening to this at 44.1KHz, as that’s the rate YouTube runs at, but I’ll convert this audio using the same high quality offline converter, so you’re hearing it as accurately as possible. OK, now let’s switch back to 44.1KHz, but with HQ mode turned on for all instances of Saturn, and four times oversampling in every instance of Pro-C2… Can you hear the difference?

OK, before I leave you, let’s quickly look
at the difference between 44.1KHz and 48KHz. It might seem like the difference is negligible, so you might as well run at 44.1KHz for music productions and avoid any need to convert the samplerate at the mastering stage. But let’s take another look at those anti-aliasing filters: at 44.1KHz they have to be extremely steep in order to remove everything above 22.05KHz without affecting a signal at 20KHz. If I switch to a 48KHz samplerate, however… these filters can be somewhat less severe, as they now only have to remove content above 24KHz, and they’ve got nearly twice as much room to work in. In this respect you could say that 48KHz is twice as good as 44.1KHz.
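The arithmetic behind that claim is simple, and we can bolt on a common textbook rule of thumb (fred harris’ approximation) for how many FIR taps a given transition width costs. The numbers are purely illustrative, not FabFilter’s actual filter designs:

```python
def transition_width(fs, passband=20_000.0):
    """Room between the top of the audible band and Nyquist, in Hz."""
    return fs / 2 - passband

def estimated_taps(fs, atten_db=100.0, passband=20_000.0):
    """fred harris rule of thumb: taps ~ atten / (22 * normalized width)."""
    return atten_db / (22 * transition_width(fs, passband) / fs)

w_441 = transition_width(44_100)   # 2050Hz of room
w_48 = transition_width(48_000)    # 4000Hz of room: nearly twice as much

taps_441 = estimated_taps(44_100)  # roughly 98 taps for 100dB of attenuation
taps_48 = estimated_taps(48_000)   # roughly 55 taps for the same spec
```

Same passband, same attenuation, but at 48KHz the filter can be close to half the length, or equivalently much gentler for the same length.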
This might be why many pros choose to track and mix at 48KHz instead of 44.1KHz: a negligible increase in CPU load and file size means every plugin that uses internal oversampling can relax its filters slightly.

Of course, the FabFilter folks know how to make good quality, transparent oversampling filters, and you’re unlikely to hear any side effects when you turn oversampling on. But if the signal is upsampled and downsampled half a dozen times by a chain of plugins, perhaps the differences accumulate? So here’s my little test mix running at 44.1KHz again, with oversampling turned on for all plugins… and now here it is rendered at 48KHz, then converted offline afterwards.

So, let’s try to form a conclusion. The original proposition: “samplerates,
the higher the better, right?” is clearly wrong. Higher samplerates do nothing to improve the quality of audible content below 20KHz, and can actually result in lower quality due to increased intermodulation. Higher samplerates only provide a benefit when you’re using non-linear processes that can add extra harmonics above Nyquist, and even then it’s usually better to oversample each non-linear stage individually than to raise the project samplerate. However, aliasing is likely to be inaudible in most cases, even without oversampling. If you have plugins that technically should oversample but don’t, that doesn’t mean they’re useless. And if your processor can’t keep up with the scale of your ambition when you oversample every plugin, turn off oversampling before you scale back your ambition. Because at the end of the day, if it sounds good, it is good.

That’s all for now. Thanks for watching.

100 comments

We use a DAW live. A higher sample rate gives us lower latency (at the same block size) but eats up double the CPU. How can we fight this issue? There is no other reason we’d want to run at 96khz.

Thanks for the video. I didn't realize that intermod' distortion would generate so many extra harmonics.
The best way to totally avoid all this sample rate malarkey is to use a tape recorder 🙂

7:17 All that's wrong with modern production standards in one picture. (Viva tubes and analog equipment. – for music purposes ofc.)

Connecting the dots is wrong in the first place. Computers don't sample signals in that way; they just retrieve coordinates on a sine function.

Brilliant video! Thanks so much for this one! I had suspected that I might have been experiencing some issues in some of my projects due to using too high a sample rate, but after watching this, I know I am. Now I can go back and fix things with a much better grasp of what's going on!

Excellent explanation, I've learned a ton in just 30 minutes. But I believe it's much better to get a Boss pedal if you want that particular sound :). But regardless, I totally love FabFilter, best VST investment I ever made in my studio.

Dan, your presentations convey not only technical knowledge but also, and more importantly, much wisdom about how the skills fit into the whole project and to this work in general. Much thanks for an EXTREMELY well-planned and produced presentation — the density of useful knowledge per unit time is off-the-charts! If there were awards given for this sort of thing, you would sweep the whole lot, every time!

Really interesting lesson. I would say the audio that was done "correctly" had a little bit more presence. Worth knowing about.

I was listening to the tests and wondering, "Wtf is he on about?" Lol. I would have continued to be baffled were it not for the conclusion.

I struggled with that for long years… and none of the so-called pros came up with an answer that convinced me.
Now I feel I have the "knowledge"!

Thanks Sir… this is one of the best courses I’ve ever had in any domain!

Really, really great video! Best visuals & explanation I have seen regarding this common debate. Thanks for your input and hard work!

Thanks for the explanation! I think I learned a bit more to explain the problems better to some of my 'highres is best' friends

This was as good as it gets when it comes to presenting real facts about audio processing. Thanks a lot, love it!

I knew it ;D The hi-resolution hysteria does not make it better… I stick to 24bit 44.1khz, used it for over 20 years… there are other factors that make a good sound. Thank you very much for this video <3

I wonder if it is okay to stay at 44khz (2nbit), like I have recorded for decades. Do the extra 4khz really matter?

I feel kind of stupid, but the differences are really small, and sometimes I like the weird digital artifacts when I'm looking for creative production. Keep your ears open; all the sample rates & up/downsampling have their own good and bad.

The explanation at 2:20 about having "not enough samples" to represent an accurate sine wave is totally wrong. A sine wave is the only thing that is properly represented by at least 2 samples, that's why the nyquist frequency works. The software does linear interpolation, which is not a proper representation of what the DAC will actually output, nor how the speaker will actually move. With an ideal DAC (we are not that far off ideal DACs these days) your speaker will actually produce a 10 KHz sine wave. Use a good microphone and an oscilloscope, not a simulated display to prove (or disprove) your point.
That example would work better with a square wave, a triangle wave, or anything more complex, and even then, what you will see would still be an accurate representation of that wave up to the nyquist frequency if you have a proper DAC.

After reading and arguing about this stuff on the forums for so long, these videos (especially your latest ones) feel like a big therapeutic release, because all the stuff I've learned by doing a bunch of tests in my home studio are finally put into a form that is tangible and easily shareable.

Thank you for this demo. I've recorded in many studios and it concurs with the knowledge I have acquired, and I do my own mixes at 48khz. However, after hearing your mixes at 44.1, 44.1 with oversampling, and 96khz, I hear a difference in the very highs at 96khz, especially on the cymbals and hi-hat: a bit more "air" than in the other examples.

Nice one. I have an Avid I/O and I always track and mix at 48k. Sounds good and saves my SSD drive space. Why sample at higher rates if you don't need to!

Well, I switched to 96KHz because I tried an 808 patch I had made at 48KHz and then tried the same patch at 96KHz (just for shits and giggles), and suddenly it sounded amazing. I didn't really understand aliasing then. I knew about cutting bass below 20 but somehow everything in my 48KHz mixes sounded horrible, or at least not as professional as I wanted it to.

The conclusion to this video seems to be that I just need to be mindful about cutting content above ~21k before it hits some non-linear plugin. Huh… that would be great news, because 96KHz tends to kill my computer pretty quickly, so I have to bounce out stuff long before I want to. I'll try that. We'll see. Incredibly good explanation and tests. Thank you very, very much!

They all sound like rubbish. Very poor stereo image. Everything sampled at 192 khz is going to produce the most accurate stereo image especially for acoustic guitar and vocal.

I am a producer and artist, been doing it for 5 years or so. This information is great, but man, all this information is overwhelming me lol. Like, how do you guys remember all this stuff?

Excellent concise explanation!
Just dropping some "whataboutism" here: What about transient definition? Independently of a continuous pitched sound, there is more transient definition in a higher sampling rate. I can tell the difference between drums recorded (and being reproduced) in 96khz from drums recorded on 44.1. Would have been interesting to see some of that in the video, but I understand the point was to show the overall effect on the audio file.
Also what about mastering for vinyl?
For those interested in exploring the subject of oversampling and its effects on aliasing I recommend this:
https://www.juansaudio.com/post/about-oversampling-and-aliasing-in-digital-compression

Great info, thanks. I had a long time ago settled on 48KHz to record. I just split the difference at that time, when everything I was reading said 48KHz. Only thing missing is: are you using 16 or 24 bit?

Isn't there a problem with YouTube videos being MP4 format, which is the same audio as MP3, so it just goes to 15khz anyways?

I wish I could like this video 10 times, impressive knowledge gained just from watching this singular video!

There are differences between 44khz oversampled and 48khz, especially in the high frequencies; you can hear it on the hi-hats, for example, in your video. Using a high sample rate will give more extra content on VST instruments, more audible than on recorded audio files. I would suggest doing another video about VSTi parts exported at 44 and 96, then let us know. Thanks for the wonderful video.

BTW. In the future it will be software that will be able to increase the quality from any bit rate (if someone believes they need it). No stress 😎 In the meantime just say it's recorded in 192/30 what so ever and everyone will be happy. This bit rate hype is all about imagination anyway.

Sorry, but to my ears the mix at 48khz is much more open, defined and percussive. It has more depth in the fundamental frequencies and below, and it grooves nicer.
Listen closely and repeat it.

Excellent video as usual. Anyone interested in learning more about the basics of digital audio would do well to check out the also-excellent video by Monty Montgomery (from Xiph.org) https://www.youtube.com/watch?v=cIQ9IXSUzuM&t=5s (it's probably the best primer out there; no offence to Dan Worrall) and the accompanying article: https://people.xiph.org/~xiphmont/demo/neil-young.html.

Thanks for this video, I might need to adjust my Sylenth oversampling setting for bounce, I left it at 2x times because I didn't think it would change much, but I might be wrong

WOW. Extremely well produced. Thanks for the effort you put in to explain this in an understandable way.

Beautiful my man! I actually love that you demonstrate points rather than just state them. Your voice is also as noted by many others pure hypnosis.

7:46 "If you remove inaudible frequencies you might end up with a better result". Correct. But ALL audio equipment that converts digital to analog has to do that. Even the first CD player had low pass filters for that, after the digital to analog conversion, as otherwise, you ALWAYS end up with mirror spectra – as the analog chain is NEVER purely linear. These low pass filters must have a cutoff frequency of 22 kHz at most for the 44 kHz sample rate of a CD.
The real reason why even consumer audio equipment sounds better, i.e. removes mirror spectra and intermodulation due to nonlinearities better, when run at a higher sample rate is: the low pass filters have finite steepness, say 24 dB/octave if they are a 4-pole filter. If the sample frequency is one octave higher, the damping therefore increases by 24 dB, equivalent to a reduction in amplitude by a factor of 16 (6 dB is a doubling; 24 dB is 4 doublings). So the same low pass filter design can remove much more of any high frequency energy beyond half the sample rate.

TL;DR: The higher the sample rate, the better job can the low pass filters do: There is a larger spectral gap to the unwanted mirror spectra above half the sample rate.

Basically, Philips found this with CD audio: 1-bit with 256× oversampling was better than 16-bit with 4× oversampling.

Exceptionally well done Dan.

7:15 One point here: I don't think the noise came down throughout the spectrum because of intermodulation. I think it's because the 7khz sine wave lifted the level of the signal to the clipping/distortion point in Saturn, so the noise started to produce overtones beyond Nyquist that reflected back into the audible range. I could be wrong of course.

Thank you very much for your effort
