Digital Audio
Lesson Introduction
In the previous lesson, you learned about digital video production and editing.
During this week you will learn about digital audio technologies related to
multimedia product development. The learner will learn the fundamental concepts
and technologies behind digital audio and will obtain the necessary skills for
audio editing and publishing.
Learning Outcomes:
After completing this lesson, the learner will be able to create digital
audio clips and integrate them into videos and animations.
▪ Describe theoretical aspects of digital audio and sound
▪ Apply audio effects
▪ Edit and compress audio files
▪ Integrate audio with other multimedia assets
Lesson Outline:
▪ Introduction to Analog and Digital Audio Basics
▪ Audio data capacity and attributes
▪ Capture and Reproduction Techniques
▪ Audio Filtering and Encoding Techniques
▪ Audio Editing
▪ Sound Effects
▪ Mixing audio and video
13.1 Introduction to Analog and Digital Audio Basics
Audio media refers to the use of sound in multimedia communication. Integrated
audio can be an efficient medium for transferring information to the audience.
Besides conveying content or vocal information, it can also help to attract the
audience and set a mood suited to the event. Audio content can take the form of
symbolic sound, vocal sound and music.
Sound is naturally generated as a continuous signal, that is, an analog signal.
To work with audio data in a computer, this signal needs to be digitized. Digital
sound is obtained through a process called digitizing, in which the analog wave
is converted into a digital signal. The two main steps involved in the
digitization process are sampling and quantization.
Sampling: Sampling is the slicing of the signal at discrete time intervals. The
sampling frequency, that is, the number of times per second the signal is sampled,
is governed by the Nyquist theorem, which states that perfect reproduction of the
signal is possible when the sampling rate is more than twice the highest frequency
in the signal being sampled. The output is a discrete set of samples. Quantization
then maps each sample onto a finite set of amplitude levels, determined by the
sound resolution.
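Both steps can be illustrated with a short Python sketch (a minimal illustration using NumPy; the 440 Hz test tone, 8 kHz sampling rate and 16-bit resolution are assumed example values, not requirements):

import numpy as np

fs = 8000                  # sampling rate in Hz (must exceed twice the highest frequency)
tone = 440                 # frequency of the analog tone being sampled, in Hz
duration = 1.0             # seconds

# Sampling: slice the continuous signal at discrete time intervals (1/fs apart).
t = np.arange(0, duration, 1 / fs)
analog_like = np.sin(2 * np.pi * tone * t)   # amplitudes are still continuous values

# Quantization: map each sample onto a finite set of levels (16-bit here).
bits = 16
levels = 2 ** (bits - 1) - 1                 # 32767 for signed 16-bit samples
digital = np.round(analog_like * levels).astype(np.int16)

print(digital[:10])        # the first few discrete, quantized samples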
Sound waves can be characterized by several attributes: frequency, sound
resolution, number of channels, period, amplitude, bandwidth, pitch, loudness
and dynamics.
Frequency, in this context, is the rate at which sound is stored, in samples per
second; it tells how much sound information is stored for each second of audio.
This is also known as the sampling rate, and the higher the sampling rate, the
clearer and sharper the sound, so it is a measure of the overall sound quality.
The unit is Hertz (Hz) or kilohertz (kHz).
The frequency ranges of sound are commonly classified as follows:
Infra-sound: 0 Hz – 20 Hz
Human hearing range: 20 Hz – 20 kHz
Ultrasound: 20 kHz – 1 GHz
Hypersound: 1 GHz – 10 THz
Sound resolution refers to the number of bits used to represent each sample. The
sound resolution determines the quality of each sample, with more bits giving
better precision in the sound. Sound resolution is normally 8-bit, 16-bit or 32-bit.
The sound channels may be mono (one channel) or stereo (two channels). Stereo
recordings sound more lifelike and realistic (3D localization) because humans have
two ears. Mono recordings are fine, but tend to sound a bit “flat” and uninteresting.
Dynamic range is the difference between the loudest and the softest sound levels.
For example, a large orchestra can reach 130 dB at its climax and drop to as low
as 30 dB at its softest, giving a range of 100 dB.
Using the first three characteristics described above (number of channels,
sampling rate and sound resolution), together with the duration, you can calculate
the audio file size. The formula is shown below:
File size = C * S * T * B
where C is the number of channels, S is the sampling rate in Hz, T is the duration
in seconds and B is the number of bytes per sample (resolution / 8).
For example, when calculating a file size for 1 minute, 44.1 kHz, 16-bit, stereo
sound:
C = 2, S = 44100Hz, T = 60s, B = 2 Bytes
File size = 2 * 44100 * 60 * 2 = 10584000 bytes or 10.6 MB
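The same calculation can be written as a small helper function (a sketch only; the function name and argument names are illustrative):

def audio_file_size(channels, sample_rate_hz, duration_s, bits_per_sample):
    # Uncompressed PCM file size in bytes: C * S * T * B.
    bytes_per_sample = bits_per_sample // 8
    return channels * sample_rate_hz * duration_s * bytes_per_sample

# 1 minute of 44.1 kHz, 16-bit, stereo sound:
size = audio_file_size(channels=2, sample_rate_hz=44100, duration_s=60, bits_per_sample=16)
print(size, "bytes")       # 10584000 bytes, roughly 10.6 MB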
Common audio sampling rates include 8 kHz (telephone quality), 22.05 kHz, 44.1 kHz
(CD quality) and 48 kHz (digital video).
A higher sampling rate means more samples and therefore better quality, but more
samples also require more storage space:
Higher frequency -> higher quality -> higher storage space
Sound cards are able to record sound at different sampling rates.
Activity 13.1
Calculate the file size of a CD-quality, 2-minute mono audio file with 16-bit
resolution.
Audio capturing is the process of obtaining a signal from outside the computer.
A common method of audio capture is recording, such as recording the microphone
input to a sound file as shown in Figure 13.1. However, capturing is not the same
as recording: recording implies that the application always saves the incoming
sound data, whereas a capturing application does not necessarily store the audio;
it may instead process the data or perform other operations on it.
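The difference can be sketched in Python, assuming the third-party sounddevice package and a working microphone are available (the sampling rate and duration are example values):

import wave
import numpy as np
import sounddevice as sd

fs = 44100            # sampling rate in Hz
duration = 3          # seconds to capture

# Capturing: obtain samples from the microphone into memory.
samples = sd.rec(int(duration * fs), samplerate=fs, channels=1, dtype='int16')
sd.wait()             # block until the capture finishes

# A capturing application need not save the data; here it is only analysed.
print("Captured", len(samples), "samples, peak amplitude", int(np.abs(samples).max()))

# Recording additionally saves the captured data, e.g. to a WAV file.
with wave.open("capture.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)       # 16-bit = 2 bytes per sample
    wav.setframerate(fs)
    wav.writeframes(samples.tobytes())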
Depending on the type of sound production, the capturing process is carried out
with different devices, steps and roles. Capturing devices include various types
of microphones, such as omnidirectional mics and clip-on mics. The method used may
be live capturing, single-track capturing or multi-track capturing. Based on the
type of production, various roles may be involved, such as editors, effect
generators, instrument players and vocalists.
Activity 13.2
Conduct a literature survey and describe the steps involved in multitrack music
production.
Audio filters are used to clean audio signals and pass only the wanted part of the
signal; in this process unwanted noise is removed. Filters are used to clean up a
signal rather than to shape the sound creatively: they only provide attenuation of
unwanted frequencies, and there is no scope to boost any part of the frequency
range. Filters may operate in the time domain or the frequency domain. There are
two common types. The high-pass filter is probably the most useful, as it helps to
remove unwanted rumble and other sub-sonic noise that microphones tend to capture.
A low-pass filter is one which does not affect low frequencies and rejects high
frequencies. The function giving the gain of a filter at every frequency is called
the amplitude response.
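As a rough illustration of how a filter only attenuates unwanted frequencies, the sketch below implements a single-pole (first-order) low-pass filter and derives a high-pass filter from it; the cutoff frequency and the test signal are assumed values for demonstration:

import numpy as np

def low_pass(signal, cutoff_hz, fs):
    # Single-pole low-pass: attenuates frequencies above cutoff_hz.
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = (1.0 / fs) / (rc + 1.0 / fs)
    out = np.zeros_like(signal, dtype=float)
    out[0] = alpha * signal[0]
    for n in range(1, len(signal)):
        out[n] = out[n - 1] + alpha * (signal[n] - out[n - 1])
    return out

def high_pass(signal, cutoff_hz, fs):
    # High-pass by subtracting the low-pass output: attenuates frequencies below cutoff_hz.
    return signal - low_pass(signal, cutoff_hz, fs)

fs = 44100
t = np.arange(0, 0.1, 1 / fs)
# A 50 Hz rumble mixed with a 1 kHz tone:
noisy = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
cleaned = high_pass(noisy, cutoff_hz=200, fs=fs)   # removes most of the 50 Hz rumble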
Audio encoding is the process by which audio data is stored and transmitted in a
new data format. Depending on the nature of the application, there are guidelines
for choosing the best encoding for a multimedia application; the choice of encoding
depends heavily on the distribution medium and the application domain. Audio data,
like all data, is often compressed to make it easier to store and to transport.
Compression within audio encoding may be either lossless or lossy. Lossless
compression can be unpacked to restore the digital data to its original form.
Lossy compression necessarily removes some information during compression and
decompression, and is parameterized to indicate how much tolerance the compression
technique is given to remove data.
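The lossless case can be demonstrated with a general-purpose compressor (a sketch using zlib rather than a dedicated audio codec such as FLAC; lossy codecs such as MP3 or AAC instead discard detail and cannot be reversed exactly):

import zlib
import numpy as np

# One second of a 440 Hz tone as raw 16-bit PCM bytes.
fs = 44100
t = np.arange(0, 1.0, 1 / fs)
pcm = (np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16).tobytes()

# Lossless compression: the original samples are restored bit for bit.
packed = zlib.compress(pcm)
restored = zlib.decompress(packed)
assert restored == pcm
print(len(pcm), "bytes raw ->", len(packed), "bytes compressed (lossless)")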
Activity 13.3
Trimming: Removing “dead air” or silent space from the front of a recording to
reduce the file size.
Splicing and assembly: Cutting and pasting different recordings into one.
Volume adjustment: When combining several recordings into one, there is a good
chance that the volume levels will not be consistent. It is best to use a sound
editor to normalize the combined audio to about 80% – 90% of the maximum level.
If the volume is raised too high, it will create distortion.
Resampling and downsampling: Sounds are often recorded at a higher sampling rate
and then downsampled to a lower rate to reduce the file size.
Fade-ins and fade-outs: Smoothing the beginning and the end of a sound file by
gradually increasing or decreasing the volume (see the sketch after this list).
Time stretching: Alter the length (in seconds) of a sound file without changing
its pitch.
Reversing sound: Spoken dialog can produce a surreal effect when played
backward.
Digital signal processing (special effects): Applying pitch shifting, robot voices,
echo, reverb and other special effects.
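Two of the simpler operations above, fades and reversal, can be sketched directly on a sample array (the fade durations and the 440 Hz test tone are assumed example values):

import numpy as np

def fade(samples, fs, fade_in_s=0.5, fade_out_s=0.5):
    # Apply a linear fade-in and fade-out to a mono sample array.
    out = samples.astype(float).copy()
    n_in = int(fade_in_s * fs)
    n_out = int(fade_out_s * fs)
    out[:n_in] *= np.linspace(0.0, 1.0, n_in)      # fade-in ramp
    out[-n_out:] *= np.linspace(1.0, 0.0, n_out)   # fade-out ramp
    return out

def reverse(samples):
    # Play the sound backwards by reversing the sample order.
    return samples[::-1]

fs = 44100
t = np.arange(0, 2.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)
smoothed = fade(tone, fs)        # fade-in and fade-out applied
backwards = reverse(smoothed)    # reversed playback order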
To develop audio editing skills and audio/video mixing capabilities, follow the
Adobe Audition tutorials available at the following link:
https://helpx.adobe.com/audition/tutorials.html
Activity 13.4
Take an existing audio file and add an echo effect using Adobe Audition or similar
software.