So, either subjective accounts of improved audio performance correlate with the improved jitter performance over time, or we are still measuring the wrong thing and the data/impressions relationship is just coincidental.
Good question.. the answer will depend on who you ask.
Is it jitter ?
What kind of jitter ?
Is it the jitter frequency or frequency spectrum ?
Is it caused by it being R2R or another type of conversion ?
Is it caused by filtering ?
Is it a combination of all of the above ? And in what 'mix' ?
If it's only the jitter that changes during the warm-up period then obviously, assuming we can trust the ears on here, the threshold of 'magic' is below 3 ps, and with an unknown spectrum.
Obviously 18 bits of resolution is enough for 'magic', and as many report that Red Book also shows magic, the 'simple conclusion' could be:
18 bits/44.1 kHz, upsampling while retaining bit-perfect reproduction, and jitter below 3 ps should do it.
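For a sense of scale on those two numbers, here is a back-of-envelope sketch (my own, not from this thread) using the standard worst-case formulas: jitter-limited SNR for a full-scale sine is -20·log10(2π·f·t_j), and ideal N-bit quantization SNR is 6.02·N + 1.76 dB. The 20 kHz test frequency is an assumption (worst case for the audio band):

```python
import math

def jitter_snr_db(f_hz: float, t_j_rms_s: float) -> float:
    """Worst-case SNR limit imposed by sampling-clock jitter for a
    full-scale sine at f_hz, with RMS jitter t_j_rms_s (white jitter)."""
    return -20.0 * math.log10(2.0 * math.pi * f_hz * t_j_rms_s)

def quantization_snr_db(bits: int) -> float:
    """Ideal SNR of an N-bit quantizer for a full-scale sine."""
    return 6.02 * bits + 1.76

# 3 ps RMS jitter on a 20 kHz full-scale tone:
print(round(jitter_snr_db(20_000, 3e-12), 1))   # ≈ 128.5 dB
print(round(quantization_snr_db(18), 1))        # ≈ 110.1 dB
```

On these (idealized) numbers, 3 ps RMS jitter sits well below the 18-bit noise floor even at 20 kHz, so if warm-up audibility is real, a flat-spectrum jitter model alone doesn't obviously explain it.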
Regardless of transfer type ? USB vs other connection methods.
What was the proclaimed level at which jitter became inaudible and everything sounded the same again?
That too will depend on who you ask.
I am curious about those numbers as well.
Maybe someone should spend time researching this using audiophile/trained ears and capable equipment/recordings (which obviously need to have low jitter in the ADC stage as well).
It would also be interesting to 'test' people plucked off the street and determine the gap between trained and untrained listeners.
Who is going to fund the research ?
I can't participate in the listening tests though, as I am (fortunately for me) DAC-deaf.