
“A STUDY OF SOUND, ITS EFFECTS AND

APPLICATIONS”
JAYAWANTRAO SAWANT SHIKSHAN PRASARAK MANDAL’S

JAYAWANTRAO SAWANT COLLEGE OF


COMMERCE AND SCIENCE, PUNE

A
PROJECT REPORT
ON

“A STUDY OF SOUND, ITS EFFECTS AND APPLICATIONS”

Guided by

Dr. H.R.KULKARNI
Submitted to

Savitribai Phule Pune University


FOR THE DEGREE OF BACHELOR OF SCIENCE IN PHYSICS BY

MR. AKUS USAMA IRFAN


DEPARTMENT OF PHYSICS

JAYAWANTRAO SAWANT COLLEGE OF

COMMERCE AND SCIENCE


JAYAWANTRAO SAWANT COLLEGE OF
COMMERCE AND SCIENCE, PUNE

DEPARTMENT OF PHYSICS

CERTIFICATE
This is to certify that MR. AKUS USAMA IRFAN, a student
of B.Sc. (Physics) Semester VI, has satisfactorily
completed the project work towards the partial
fulfillment of his Bachelor's Degree of Savitribai Phule
Pune University, Pune, for the academic year 2020-21.
Project Title

“A STUDY OF SOUND, ITS EFFECTS AND APPLICATIONS”

Guide: Dr. H. R. Kulkarni                    Head: Dr. H. R. Kulkarni

Internal Examiner                            External Examiner


ACKNOWLEDGEMENT

I would like to acknowledge the following for providing guidance and fresh
perspectives in the completion of this project.

I take this opportunity to thank Savitribai Phule Pune University for giving me
the chance to do this project.

It has been a wonderful learning experience for me to work on this project under
the guidance of Dr. H. R. Kulkarni.

I express my sincere gratitude to Mr. A. I. Fakir and Mr. Gawade for their full
co-operation and timely help, as well as my Head of Department, Dr.
H. R. Kulkarni, for his active encouragement and interest in my work that
helped me accomplish my goal.

Lastly, I am obliged to the laboratory assistants whose contributions were
valuable towards the completion of my project.

Last but not least, I would like to thank all my friends and classmates for their
sincere suggestions.

Mr. Usama Irfan Akus


TABLE OF CONTENTS

→ INTRODUCTION
→ CHARACTERISTICS
− Longitudinal waves
− Amplitude
− Frequency
− Timbre
− Speed of Sound
− Human hearing and speech
→ STUDY OF SOUND
− Sound Production
− Sound Propagation
− Sound Perception
→ EFFECTS OF SOUND
− Physiological and Psychological effects
− NIHL
→ APPLICATIONS OF SOUND
− SONAR
− Echolocation
− Ultrasonic
− Infrasonic
→ CONCLUSION
→ BIBLIOGRAPHY
CHAPTER - I
INTRODUCTION

Sound is a mechanical disturbance from a state of equilibrium that propagates through
an elastic material medium. A purely subjective definition of sound is also possible,
as that which is perceived by the ear, but such a definition is not particularly
illuminating and is unduly restrictive, for it is useful to speak of sounds that cannot
be heard by the human ear, such as those that are produced by dog whistles or by
sonar equipment.

The study of sound should begin with the properties of sound waves. There are two
basic types of waves, transverse and longitudinal, differentiated by the way in which
the wave is propagated. In a transverse wave, such as the wave generated in a
stretched rope when one end is wiggled back and forth, the motion that constitutes
the wave is perpendicular, or transverse, to the direction (along the rope) in which
the wave is moving. An important family of transverse waves is generated by
electromagnetic sources such as light or radio, in which the electric and magnetic
fields constituting the wave oscillate perpendicular to the direction of propagation.

Sound propagates through air or other mediums as a longitudinal wave, in which the
mechanical vibration constituting the wave occurs along the direction of propagation
of the wave. A longitudinal wave can be created in a coiled spring by squeezing
several of the turns together to form a compression and then releasing them, allowing
the compression to travel the length of the spring. Air can be viewed as being
composed of layers analogous to such coils, with a sound wave propagating as layers
of air “push” and “pull” at one another much like the compression moving down the
spring.
A sound wave thus consists of alternating compressions and rarefactions, or regions
of high pressure and low pressure, moving at a certain speed. Put another way, it
consists of a periodic (that is, oscillating or vibrating) variation of pressure occurring
around the equilibrium pressure prevailing at a particular time and place.
Equilibrium pressure and the sinusoidal variations caused by passage of a pure sound
wave (that is, a wave of a single frequency) are represented in Figure.

Graphic representations of a sound wave. (A) Air at equilibrium, in the absence of a
sound wave; (B) compressions and rarefactions that constitute a sound wave; (C)
transverse representation of the wave, showing amplitude (A) and wavelength (λ).
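As a minimal illustrative sketch, the sinusoidal pressure variation of a pure tone described above can be written as p(x, t) = p0 + A sin(2π(ft - x/λ)) and evaluated numerically; the amplitude and equilibrium pressure below are arbitrary example values, and 343 m/s is assumed for the speed of sound in air at 20 degrees C.

import math

def pressure(x, t, p0=101325.0, A=0.1, f=440.0, c=343.0):
    """Instantaneous pressure (Pa) of a pure tone at position x (m) and time t (s).

    p0 : equilibrium (atmospheric) pressure
    A  : pressure amplitude (example value)
    f  : frequency in Hz
    c  : speed of sound in air (about 343 m/s at 20 degrees C)
    """
    wavelength = c / f                      # lambda = c / f
    return p0 + A * math.sin(2 * math.pi * (f * t - x / wavelength))

# Sample one period of a 440 Hz tone at x = 0 and print the pressure deviation from p0
T = 1 / 440.0
print([round(pressure(0.0, n * T / 8) - 101325.0, 4) for n in range(9)])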
CHAPTER - II
SOUND WAVE CHARACTERISTICS

1. LONGITUDINAL WAVES
When sound waves are represented as a waveform, we instantly notice some basic
characteristics. The waveform is a pictorial representation of the pressure variation
in the air which travels as sound. These waves are alternating regions of high
pressure and low pressure. Viewed as a waveform, sound waves look very similar to
light and other electromagnetic radiation, even though sound is a longitudinal rather
than a transverse wave.
2. AMPLITUDE
Amplitude in light refers to the amount of energy in an electromagnetic wave, and its
meaning is the same here. Amplitude refers to the maximum displacement of the wave
from its mean position. The larger the amplitude, the higher the energy. In sound,
amplitude refers to the magnitude of compression and expansion experienced by the
medium the sound wave is travelling through. This amplitude is perceived by our ears
as loudness: high amplitude corresponds to loud sounds.

Two graphs showing the difference between sound waves with high and low amplitude.
3. FREQUENCY
Frequency in a sound wave refers to the rate of vibration of the sound travelling
through the air. This parameter decides whether a sound is perceived as high pitched
or low pitched; in sound, the perceived frequency is known as pitch. The frequency of
the vibrating source of sound is measured in cycles per second.
The SI unit of frequency is the hertz (Hz), defined as f = 1/T, where T is the time
period of the wave, that is, the time required for the wave to complete one cycle.
Wavelength and frequency of a sound wave are related mathematically as:
velocity of sound = frequency × wavelength (v = f × λ)
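As a minimal illustrative sketch of these relations, assuming the round value of 343 m/s for the speed of sound in air at 20 degrees C:

def frequency_from_period(T):
    """f = 1/T, with T in seconds and f in hertz."""
    return 1.0 / T

def wavelength(frequency, speed=343.0):
    """wavelength = v / f; default speed is about 343 m/s (air at 20 degrees C)."""
    return speed / frequency

print(frequency_from_period(0.0025))   # 400.0 Hz
print(round(wavelength(440.0), 2))     # about 0.78 m for a 440 Hz tone in air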

4. TIMBRE

Imagine a bell and a piano in an orchestra. The same musical note can be obtained
from both instruments, but their sounds are very different. The piano produces a
distinct note, whereas a bell struck at the same pitch and amplitude produces a
sound that continues to ring after it has been struck. This difference in the sound is
referred to as timbre: if two sounds with the same frequency and amplitude are still
perceived as different, the quality that distinguishes them is, by definition, their timbre.
5. SPEED OF SOUND

The speed of a sound wave is affected by the type of medium through which it
travels. Sound waves travel fastest in solids due to the proximity of the molecules.
Likewise, sound waves travel slowest in gases because gas molecules are spread far
apart from one another. The state of the medium through which sound travels is not
the only factor that affects a sound's speed; it is also affected by the density,
temperature, and elasticity of the medium through which the sound waves travel. For
example, sound travels at about 343 m/s in air at 20 °C, about 1,480 m/s in water,
and about 5,900 m/s in steel.
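As a minimal illustrative sketch of the temperature dependence, a common linear approximation for the speed of sound in dry air is v ≈ 331.3 + 0.606·T (m/s, with T in degrees Celsius):

def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air (m/s): v = 331.3 + 0.606 * T."""
    return 331.3 + 0.606 * temp_celsius

for t in (0, 20, 35):
    print(t, "degrees C ->", round(speed_of_sound_air(t), 1), "m/s")
# 0 -> 331.3, 20 -> 343.4, 35 -> 352.5 m/s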

6. HUMAN HEARING AND SPEECH

Humans can hear sounds ranging from 20 Hz to 20 kHz. Sounds with frequencies
above the range of human hearing are called ultrasound, and sounds with frequencies
below the range of human hearing are called infrasound. The typical sounds produced
by human speech have frequencies on the order of 100 to 1,000 Hz.
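As a small illustrative sketch, a frequency can be classified against these nominal limits as follows:

def classify_frequency(f_hz):
    """Classify a frequency against the nominal 20 Hz - 20 kHz hearing range."""
    if f_hz < 20:
        return "infrasound"
    if f_hz > 20_000:
        return "ultrasound"
    return "audible"

for f in (5, 440, 40_000):
    print(f, "Hz ->", classify_frequency(f))   # infrasound, audible, ultrasound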
CHAPTER - III
STUDY OF SOUND

Sound is a wave that is created through pressure transmitted by a medium such as
air or water and is composed of frequencies within the range of human hearing
(20 Hz to 20 kHz); 20 Hz is the lowest frequency a human can hear and 20 kHz is the
highest (although, throughout life and with aging, the human ear starts to notice less
of the more subtle sounds). Sound is created through vibrations in the air which
cause the auditory sensation in your ear, making you able to hear the sounds that
you do.
For example, if you clap the sound that you hear goes from the movement of your
hands, through the waves in the air, causing the hairs inside the ear to vibrate.

The science of acoustics studies both sound itself and all the phenomena associated
with it (its production, its propagation, and its perception).
Acoustics can adopt two different perspectives:

• The acoustics that studies sound as a physical phenomenon, that is, as the
mechanical vibration that acts as a stimulus to produce an auditory sensation.

• The acoustics that studies sound as a physiological phenomenon, that is, as
the auditory sensation caused by the mechanical vibration.

1. Sound Production: How it’s created – through vibrations.

Basically, three elements come into play in the phenomenon of sound: the
transmitter, the transmission medium and the receiver. Sound is generated
when the vibration caused by the transmitter propagates through a medium (air, water,
etc.) and ends up reaching the receiver.
It should be noted that sound cannot propagate in a vacuum as light does; sound
waves need a physical medium in which they can travel from the transmitter to the
receiver.
A piano, a guitar, a speaker or a person's vocal cords are examples of a sound source
(the element that generates sound when vibrating). The vibration is transmitted to
nearby air particles, which in turn transmit it to the adjacent particles in an
oscillating movement.

2. Sound Propagation: How it travels from one place to another - waves.

Sound travels in mechanical waves. A mechanical wave is a disturbance that moves
and transports energy from one place to another through a medium. In sound, the
disturbance is a vibrating object, and the medium can be any series of
interconnected and interactive particles. This means that sound can travel through
gases, liquids and solids.
Let's take a look at an example. Imagine a church bell. When a bell rings, it vibrates,
which means the bell itself flexes inward and outward very rapidly. As the bell
moves outward, it pushes against particles of air. Those air particles then push
against other adjacent air particles, and so on. As the bell flexes inward, it pulls
against the adjacent air particles, and they, in turn, pull against other air particles.
This push-and-pull pattern is a sound wave. The vibrating bell is the original
disturbance, and the air particles are the medium.
The bell's vibrations push and pull against adjacent air molecules, creating a
sound wave.

3. Sound Perception: How it affects the senses and emotions of the audience/listener.

To hear sound, your ear has to do three basic things:
− Direct the sound waves into the hearing part of the ear
− Sense the fluctuations in air pressure
− Translate these fluctuations into an electrical signal that your brain can
understand.

The pinna, the outer part of the ear, serves to "catch" the sound waves. Your outer
ear is pointed forward and it has a number of curves. This structure helps you
determine the direction of a sound. If a sound is coming from behind you or above
you, it will bounce off the pinna in a different way than if it is coming from in front
of you or below you. This sound reflection alters the pattern of the sound wave. Your
brain recognizes distinctive patterns and determines whether the sound is in front of
you, behind you, above you or below you.

Your brain determines the horizontal position of a sound by comparing the
information coming from your two ears. If the sound is to your left, it will arrive at
your left ear a little bit sooner than it arrives at your right ear. It will also be a little
bit louder in your left ear than in your right ear.
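As a rough illustrative sketch of this idea, assuming the simple path-difference approximation ITD ≈ d·sin(θ)/c for a head of width d (here 0.18 m), the arrival-time difference between the two ears can be estimated from the direction of the source:

import math

def interaural_time_difference(azimuth_deg, head_width_m=0.18, speed=343.0):
    """Approximate arrival-time difference (s) between the two ears.

    Uses the simple path-difference model d * sin(theta) / c, where theta is the
    source azimuth (0 = straight ahead, 90 = directly to one side).
    """
    return head_width_m * math.sin(math.radians(azimuth_deg)) / speed

for angle in (0, 30, 90):
    print(angle, "deg ->", round(interaural_time_difference(angle) * 1e6), "microseconds")
# 0 -> 0, 30 -> 262, 90 -> 525 microseconds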
CHAPTER - IV
EFFECTS OF SOUND

Sound has very powerful effects, which can be physiological and psychological.
In human physiology and psychology, sound is the reception of such waves
and their perception by the brain. Only acoustic waves that have frequencies lying
between about 20 Hz and 20 kHz, the audio frequency range, elicit an auditory
percept in humans. In air at atmospheric pressure, these represent sound waves with
wavelengths of 17 meters (56 ft) to 1.7 centimetres (0.67 in). Sound waves above 20
kHz are known as ultrasound and are not audible to humans. Sound waves below 20
Hz are known as infrasound. Different animal species have varying hearing ranges.

PHYSIOLOGICAL AND PSYCHOLOGICAL

Most physiological effects of sound exposure are also psychologically based, as they
result from subjective interpretations of the sounds, which, in turn, result in stress or
pleasure. Sound affects our bodies: the human body is about 70% water, and sound
travels well in water, so we are very good conductors of sound. It is not surprising,
then, that sound has a powerful effect on us. Because hearing is your primary warning
sense, a sudden sound starts a process: it releases cortisol, it increases your heart
rate, it changes your breathing. This is because we have been programmed over
hundreds of thousands of years to assume that any sudden or unexplained sound is a
threat, and your body gets ready to fight or flee.

Acute effects caused by noise depend upon the sound pressure and frequency.

▫ At about 150 dB: immediate, permanent hearing impairment.

▫ At about 120 dB, effects include:
− Impact on the respiratory system
− Dizziness
− Disorientation
− Loss of physical control
− Stress
− Nausea
− Vomiting

▫ At about 70 dB, measurable physiological effects appear: loud sound triggers the
secretion of several hormones from the pituitary gland, notably adrenocorticotropic
hormone (ACTH), which in turn stimulates the adrenal gland and triggers:
− Heightened sensitivity
− Raised blood sugar
− Suppression of the immune system
− Decreased efficiency of the liver in detoxifying the blood
NOISE INDUCED HEARING LOSS [NIHL]

Noise-induced hearing loss (NIHL) is hearing impairment resulting from exposure
to loud sound. People may have a loss of perception of a narrow range of frequencies
or impaired perception of sound, including sensitivity to sound or ringing in the ears.
When exposure to hazards such as noise occurs at work and is associated with hearing
loss, it is referred to as occupational hearing loss.
Hearing may deteriorate gradually from chronic and repeated noise exposure (such
as to loud music or background noise) or suddenly from exposure to impulse noise,
which is a short high intensity noise (such as a gunshot or airhorn). In both types,
loud sound overstimulates delicate hearing cells, leading to the permanent injury or
death of the cells. Once lost this way, hearing cannot be restored in humans.
There are a variety of prevention strategies available to avoid or reduce hearing loss.
Lowering the volume of sound at its source, limiting the time of exposure and
physical protection can reduce the impact of excessive noise. If not prevented,
hearing loss can be managed through assistive devices and communication
strategies.
The largest burden of NIHL has been through occupational exposures; however,
noise-induced hearing loss can also be due to unsafe recreational, residential, social
and military service-related noise exposures. It is estimated that 15% of young
people are exposed to sufficient leisure noises (i.e., concerts, sporting events, daily
activities, personal listening devices, etc.) to cause NIHL. There is not a limited list
of noise sources that can cause hearing loss; rather, exposure to excessively high
levels from any sound source over time can cause hearing loss.
The first symptom of NIHL may be difficulty hearing a conversation against a noisy
background. The effect of hearing loss on speech perception has two components.
The first component is the loss of audibility, which may be perceived as an overall
decrease in volume. Modern hearing aids compensate for this loss with amplification.
The second component is known as "distortion" or "clarity loss" due to selective
frequency loss. Consonants, due to their higher frequency, are typically affected
first. For example, the sounds "s" and "t" are often difficult to hear for those with
hearing loss, affecting clarity of speech. NIHL can affect either one or both ears.
Unilateral hearing loss causes problems with directional hearing, affecting the ability
to localize sound.
The ear can be exposed to short periods of sound in excess of 120 dB without
permanent harm — albeit with discomfort and possibly pain; but long-term exposure
to sound levels over 85 dB(A) can cause permanent hearing loss.
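As an illustrative sketch of how such limits are commonly applied, assuming a NIOSH-style rule of an 8-hour allowance at 85 dB(A) halved for every 3 dB increase (a widely used guideline, not a rule stated in this report), the permissible daily exposure time can be estimated as follows:

def permissible_hours(level_dba, criterion_db=85.0, criterion_hours=8.0, exchange_db=3.0):
    """Allowed daily exposure (hours) under a 3 dB exchange-rate rule."""
    return criterion_hours / (2 ** ((level_dba - criterion_db) / exchange_db))

for level in (85, 94, 100):
    print(level, "dB(A) ->", round(permissible_hours(level), 2), "hours")
# 85 -> 8.0, 94 -> 1.0, 100 -> 0.25 hours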
NIHL occurs when too much sound intensity is transmitted into and through the
auditory system. An acoustic signal from a sound source, such as a radio, enters into
the external auditory canal (ear canal), and is funnelled through to the tympanic
membrane (eardrum), causing it to vibrate. The vibration of the tympanic membrane
drives the middle ear ossicles, the malleus, incus, and stapes to vibrate in sync with
the eardrum. The middle ear ossicles transfer mechanical energy to the cochlea by
way of the stapes footplate hammering against the oval window of the cochlea,
effectively amplifying the sound signal. This hammering causes the fluid within the
cochlea (perilymph and endolymph) to be displaced. Displacement of the fluid
causes movement of the hair cells (sensory cells in the cochlea) and an
electrochemical signal to be sent from the auditory nerve (CN VIII) to the central
auditory system within the brain. This is where sound is perceived. Different groups
of hair cells are responsive to different frequencies. Hair cells at or near the base of
the cochlea are most sensitive to higher frequency sounds while those at the apex are
most sensitive to lower frequency sounds. There are two known biological
mechanisms of NIHL from excessive sound intensity: damage to the structures
called stereocilia that sit atop hair cells and respond to sound, and damage to the
synapses that the auditory nerve makes with hair cells, also termed "hidden hearing
loss".
CHAPTER - V
APPLICATIONS OF SOUND

1. SONAR
Sonar stands for SOund NAvigation and Ranging. Sonar is used in navigation,
forecasting weather, and for tracking aircraft, ships, submarines, and missiles. Sonar
devices work by bouncing sound waves off objects to determine their location. A
sonar unit consists of an ultrasonic transmitter and a receiver. On boats, the receiver
is mounted on the bottom of the ship. To measure water depth, for instance, the
transmitter sends out a short pulse of sound, and later, the receiver picks up the
reflected sound. The water depth is determined from the time elapsed between the
emission of the ultrasonic sound and the reception of its reflection off the sea-floor.
In the diagram below, a ship sends out ultrasonic waves (green) in order to detect
schools of fish swimming beneath. The waves reflect off the fish (white), and return
to the ship where they are detected and the depth of the fish is determined.
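As a minimal illustrative sketch of the depth calculation described above, assuming a typical speed of sound in seawater of about 1,500 m/s:

def depth_from_echo(round_trip_seconds, speed_in_water=1500.0):
    """Depth (m) from the two-way travel time of a sonar pulse: d = v * t / 2."""
    return speed_in_water * round_trip_seconds / 2.0

print(depth_from_echo(0.4))   # 300.0 m for a 0.4 s round trip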
2. ECHOLOCATION
Echolocation, also called bio sonar, is a biological sonar used by several animal
species. Echolocating animals emit calls out to the environment and listen to the
echoes of those calls that return from various objects near them. They use these
echoes to locate and identify the objects. Echolocation is used for navigation,
foraging, and hunting in various environments.
Echolocating animals include some mammals (most notably among the Laurasiatheria)
and a few birds, especially some bat species and odontocetes (toothed whales and
dolphins); echolocation also occurs in simpler forms in other groups such as shrews
and two cave-dwelling bird groups, the so-called cave swiftlets in the genus
Aerodramus (formerly Collocalia) and the unrelated oilbird Steatornis caripensis.
Echolocation is the same as active sonar, using sounds made by the animal itself.
Ranging is done by measuring the time delay between the animal's own sound
emission and any echoes that return from the environment. The relative intensity of
sound received at each ear as well as the time delay between arrival at the two ears
provide information about the horizontal angle (azimuth) from which the reflected
sound waves arrive.

The depiction of the ultrasound signals emitted by a bat, and the echo from a nearby
object.
3. ULTRASONIC
Ultrasound is sound waves with frequencies higher than the upper audible limit of
human hearing. Ultrasound is not different from "normal" (audible) sound in its
physical properties, except that humans cannot hear it. This limit varies from person
to person and is approximately 20 kilohertz (20,000 hertz) in healthy young adults.
Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.
Ultrasound is used in many different fields. Ultrasonic devices are used to detect
objects and measure distances. Ultrasound imaging or sonography is often used in
medicine. In the nondestructive testing of products and structures, ultrasound is used
to detect invisible flaws. Industrially, ultrasound is used for cleaning, mixing, and
accelerating chemical processes. Animals such as bats and porpoises use ultrasound
for locating prey and obstacles.

Galton whistle, one of the first devices to produce ultrasound.


Applications:
a. Cleaning:
In objects with parts that are difficult to reach, for example spiral tubes and
electronic components, the process of ultrasonic cleaning is used. Here, the object is
dipped in a solution of a suitable cleaning material and ultrasonic waves are passed
into it. The resulting high-frequency vibrations cause the dirt and grease to detach
from the surface.
b. Detection of cracks:
Ultrasound is used to detect cracks in the metallic components that are used in the
construction of high-rise structures such as buildings and bridges. Ultrasonic flaw
detectors generate and display an ultrasonic waveform that is interpreted by a trained
operator, often with the aid of analysis software, to locate and categorize flaws in test pieces. High-
frequency sound waves reflect from flaws in predictable ways, producing distinctive
echo patterns that can be displayed and recorded by portable instruments. A trained
operator identifies specific echo patterns corresponding to the echo response from
good parts and from representative flaws. The echo pattern from a test piece may
then be compared to the patterns from these calibration standards to determine its
condition.
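As a rough illustrative sketch of this pulse-echo principle, assuming a longitudinal sound speed of about 5,900 m/s in steel, the depth of a reflector can be estimated from the echo's time of flight:

def flaw_depth_mm(echo_time_us, speed_m_per_s=5900.0):
    """Depth (mm) of a reflector in steel from the two-way echo time in microseconds."""
    return speed_m_per_s * (echo_time_us * 1e-6) / 2.0 * 1000.0

print(round(flaw_depth_mm(10.0), 1))   # about 29.5 mm for a 10 microsecond echo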
c. Echocardiography:
In the process of echocardiography, ultrasonic waves are used to form an image
of the heart using the reflection and detection of these waves from its various parts.
d. Ultrasonography:
Medical ultrasound is a diagnostic imaging technique based on ultrasound. It is used
for the imaging of internal body structures such as muscles, joints and internal organs.
Ultrasonic images are known as sonograms. In this process, pulses of ultrasound are
sent into the tissue using a probe. The sound echoes off the tissue, with different
tissues reflecting the sound to varying degrees. These echoes are recorded and
displayed as an image.

e. Lithotripsy:
Ultrasonic waves are used to break stones in the kidney. High energy sound waves
are passed through the body without injuring it and break the stone into small pieces.
These small pieces move through the urinary tract and out of the body more easily
than a large stone.
4. INFRASONIC
Infrasound, sometimes referred to as low-frequency sound, describes sound waves
with a frequency below the lower limit of audibility (generally 20 Hz). Hearing
becomes gradually less sensitive as frequency decreases, so for humans to perceive
infrasound, the sound pressure must be sufficiently high. The ear is the primary
organ for sensing low sound, but at higher intensities it is possible to feel infrasound
vibrations in various parts of the body.
The study of such sound waves is sometimes referred to as Infrasonics, covering
sounds beneath 20 Hz down to 0.1 Hz (and rarely to 0.001 Hz). People use this
frequency range for monitoring earthquakes and volcanoes, charting rock and
petroleum formations below the earth, and also in ballistocardiography and
seismocardiography to study the mechanics of the heart.
Infrasound is characterized by an ability to get around obstacles with little
dissipation. In music, acoustic waveguide methods, such as a large pipe organ or, for
reproduction, exotic loudspeaker designs such as transmission line, rotary woofer,
or traditional subwoofer designs can produce low-frequency sounds, including near-
infrasound. Subwoofers designed to produce infrasound are capable of sound
reproduction an octave or more below that of most commercially available
subwoofers, and are often about 10 times the size.
CHAPTER - VI
CONCLUSION
After doing this project, I learned a lot more about sound and its overall impact. I
learnt about the advantages and disadvantages of sound waves and the devices that
make use of them. The science of acoustics studies both sound itself and all the
phenomena associated with it (its production, its propagation, and its perception).
Acoustics can adopt two different perspectives:

• The acoustics that studies sound as a physical phenomenon, that is, as the
mechanical vibration that acts as a stimulus to produce an auditory sensation.

• The acoustics that studies sound as a physiological phenomenon, that is, as
the auditory sensation caused by the mechanical vibration.
CHAPTER - VII
BIBLIOGRAPHY
→ www.google.com
→ www.wikipedia.com
→ www.byjus.com
→ www.slideshare.net
→ www.schoolnet.org
