This explains why the recording sounds very different. If you were in the room, your hearing system could apply many tools to clarify the speech. Primarily, there is binaural auto-correlation. Then there is sound shadowing (from your head), which helps localize the person speaking. Monaural auto-correlation can be employed as well, since your brain “knows” that any identical sound coming from a different direction must be an echo. You have a visual reference as well, which certainly has the potential to assist in localization. Lastly, the sonic characteristics of the room itself can provide usable cues.
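The binaural cue can be sketched numerically: in effect, the two ear signals are cross-correlated, and the interaural delay is read off the lag of the correlation peak. A minimal sketch follows; the sample rate, the noise "source," and the ~0.3 ms delay are illustrative assumptions, not values from this article.

```python
import numpy as np

# Minimal sketch of the binaural cue (all numbers are illustrative):
# cross-correlate the two ear signals and read the interaural time
# difference off the lag of the correlation peak.
fs = 48_000                                  # assumed sample rate, Hz
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 50)       # 20 ms of noise-like "speech"

true_delay = 14                              # samples (~0.3 ms, a plausible ITD)
left = source
right = np.concatenate([np.zeros(true_delay), source])[: source.size]

# Cross-correlate; the peak lag is the estimated interaural delay
corr = np.correlate(right, left, mode="full")
lag = int(corr.argmax()) - (left.size - 1)
print(lag)                                   # estimated delay in samples
```

The ear does this continuously and far more robustly, of course; the sketch only shows the principle of reading direction from an inter-ear time offset.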
Auto-correlation cleans up the voices and makes them much more intelligible. It also explains why auto-correlation stops at around 40 ms, after which echoes are heard. This may be because the shortest speech sound is also about 40 ms long. If the auto-correlation window were longer than that, short speech sounds could be degraded, because speech contains many repeated sounds. Obviously, the speech and hearing mechanisms must be compatible in order to function properly.
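The monaural case can be illustrated the same way: a signal plus a single echo produces a secondary peak in its auto-correlation at the echo delay, which is how a system could flag the later copy as "the same sound again." The 25 ms echo (inside the ~40 ms window) and the other numbers below are assumptions for the sketch.

```python
import numpy as np

# Illustrative sketch of monaural auto-correlation: a signal plus one
# echo shows a secondary auto-correlation peak at the echo delay.
fs = 8_000
rng = np.random.default_rng(1)
dry = rng.standard_normal(fs // 10)        # 100 ms of noise-like "speech"

echo_delay = int(0.025 * fs)               # 25 ms echo, inside the ~40 ms window
wet = dry.copy()
wet[echo_delay:] += 0.5 * dry[:-echo_delay]

# Auto-correlation at positive lags only
ac = np.correlate(wet, wet, mode="full")[wet.size - 1:]
ac[0] = 0                                  # ignore the zero-lag peak
detected = int(ac.argmax())
print(detected)                            # lag of the echo, in samples
```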
This finally provides a clear explanation of why phase is so important.
The auto-correlation function fails with reflected, multiple-driver speaker systems because the wave shape is significantly changed. Photos 1 and 2 show a distinct difference in wave shape. The ear perceives this as a different sound coming from a different direction. This is why ordinary speaker systems do not, and cannot, sound like the real thing.
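The wave-shape change can be shown numerically: a reflection adds a delayed copy of the signal, and because a fixed delay is a different fraction of a cycle at each frequency, the summed waveform no longer has the same shape as the direct sound. All values below (the two tones, the 0.6 ms extra path, the reflection level) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: adding a delayed reflection shifts each frequency
# component by a different fraction of a cycle, changing the wave shape
# even though the same two tones are present.
fs = 48_000
t = np.arange(int(0.01 * fs)) / fs           # 10 ms of signal
direct = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)

delay_s = 0.0006                             # 0.6 ms longer reflected path (~0.2 m)
reflected = (np.sin(2 * np.pi * 500 * (t - delay_s))
             + 0.5 * np.sin(2 * np.pi * 1500 * (t - delay_s)))
combined = direct + 0.7 * reflected          # direct sound plus attenuated reflection

# 0.6 ms is 0.3 cycle at 500 Hz but 0.9 cycle at 1500 Hz, so the two
# components shift by different amounts and the waveform changes shape.
shape_change = float(np.max(np.abs(combined / np.max(np.abs(combined))
                                   - direct / np.max(np.abs(direct)))))
print(shape_change)                          # peak difference between normalized shapes
```

Plotting `direct` and `combined` makes the point visually: the two traces contain the same tones, yet their shapes differ, which is what Photos 1 and 2 in the article capture with real speakers.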
This has been a very cursory examination of the human hearing system with a focus on phase. We saw that hearing is very complicated and employs subtle and sophisticated functions associated with phase. We also saw that the ear’s sensitivity to phase is extremely high. Finally, it was shown that common speaker systems with more than one driver ruin the phase relationships. This is because sound reflections from the ceiling (or floor, etc.) do not maintain the same phase relationships as the direct-line sound.

In the next part, two different speaker systems will be detailed that provide proper phase relationships regardless of path-length differences. A high-fidelity system will also be described that produces a qualitative difference in sonic reality. Ordinary stereo recordings and/or stereo FM broadcasts will be used as the source material. Recording considerations will also be discussed. NV
May 2007