FOP 21 September
Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values,
attitudes and preferences. The ability to learn is possessed by humans, animals and some
machines. There is also evidence for some kind of learning in certain plants. Some learning is
immediate, induced by a single event (e.g. being burned by a hot stove), but much skill and
knowledge accumulate from repeated experiences. The changes induced by learning often last a
lifetime, and it is hard to distinguish learned material that seems to be "lost" from that which
cannot be retrieved.
Human learning starts at birth and continues until death as a consequence of ongoing
interactions between people and their environment. Research has led to the identification of
various sorts of learning. For example, learning may occur as a result of habituation, classical
conditioning, or operant conditioning, or as a result of more complex activities such as play, seen
only in relatively intelligent animals. Learning may occur consciously or without conscious
awareness. Learning that an aversive event cannot be avoided or escaped may result in a
condition called learned helplessness.
Play has been approached by several theorists as a form of learning. Children experiment with
the world, learn the rules, and learn to interact through play. Lev Vygotsky agrees that play is
pivotal for children's development since they make meaning of their environment through playing
educational games. For Vygotsky, however, play is the first form of learning language and
communication and the stage where a child begins to understand rules and symbols. This has
led to a view that learning in organisms is always related to semiosis.
Learned helplessness is learning that an aversive event cannot be avoided or escaped.
Lev Vygotsky suggested that play is the first form of learning language and communication and
the stage where a child begins to understand rules and symbols.
Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values,
attitudes and preferences. Learning can be defined as any relatively permanent change in
behavior that occurs as a result of practice or experience.
If we could understand the learning process and apply our general understanding of it to a
particular person's life, we would go a long way towards explaining many of the things that person
does. If we could understand some of the principles of learning, we would have a better idea of
how to change behavior when we want to change it.
https://www.youtube.com/watch?v=qSqWiTG-o2Y
Classical conditioning gets its name from the fact that it is the kind of learning situation that existed in
the early ‘classical’ experiments of Ivan P. Pavlov (1849-1936). In the late 1890s, this famous Russian
physiologist began to establish many of the basic principles of this form of conditioning. Classical
conditioning is also sometimes called respondent conditioning or Pavlovian conditioning.
Student volunteers were the subjects in this experiment. Each student sat in a booth in which a brief
jet of air could be puffed at his or her right eye. The response to this puff was a sharp blink of the eyes.
Half a second before each puff, a dim spot of light came on. Tests showed that at the beginning of the
experiment, the students did not blink in response to the light. The light and the puff were paired - light
followed by puff - a number of times. Soon the students began to blink when the light came on before
the puff. The number of blinks to the light increased steadily as more and more pairings were given.
The figure shows how blinking to the light increased during the experiment. Each point on the graph
shows the percentage of blinks to the light during ten pairings of light and puff. The students had
learned to blink when the light came on; this is the conditioned response in this experiment.
What do these examples of classical conditioning have in common? In other words what are the
general characteristics of situations in which conditioned responses are acquired, or learned? In
classical conditioning two stimuli are presented to the learner. (The term stimulus comes from the
Latin word for ‘goad’, or ‘prod’; and thus, in psychology the term is sometimes used to refer to
events which evoke, or call forth, a response - the individual is ‘goaded into action’. A more general
meaning of the term stimulus is anything in the environment that can be detected by the senses).
One of the stimuli in classical conditioning is called the ‘conditioned stimulus’ (CS). It is also known
as a ‘neutral stimulus’ because except for an alerting, or attentional, response the first few times it is
presented, it does not evoke a specific response. Almost any stimulus which is detectable can serve
as a CS. The bell and the light were the CSs in the two examples. The other stimulus is known as the
‘unconditioned stimulus’ (US). This stimulus consistently evokes a response or is reliably followed by
one. The response that reliably follows the unconditioned stimulus is known as the ‘unconditioned
response’ (UR).
What were the USs and the URs in the example? The two stimuli - the CS and the US – are paired in
classical conditioning so that the conditioned stimulus comes a short time - say, from half a second to
several seconds - before the unconditioned stimulus is presented. After the stimuli have been paired
a number of times (each pairing is called a trial), presentation of the originally neutral conditioned
stimulus evokes a response. This response is what is learned in classical conditioning; it is termed
the ‘conditioned response’ (CR).
The acquisition of a conditioned response is usually gradual; as more and more trials (CS-US
pairings) are given, conditioned responses grow stronger and stronger or are more and more likely
to occur. For example, in the figure, the left-hand curve shows the course of acquisition, or learning,
typical in an experiment on salivary conditioning. (It is drawn without specifying the amount of saliva
or the number of trials, but these values would be plotted on a graph in an actual experiment.) Another figure
illustrates the course of acquisition of a conditioned eye-blink response. These are both examples of
acquisition, or learning, curves in which the course of learning is followed over trials or time. Learning
curves typically have the shape shown in these two examples; the rate of learning is rapid at first but
gradually decreases, as shown by the flattening of the curve. Such curves are said to be negatively
accelerated. In other words, the increase in learning on later trials is less than was the increase on
earlier trials. This is probably because there is a limit on the strength, or magnitude, of a conditioned
response in a given experiment; after all, in the saliva conditioning experiment, the dog could only
drool so much. Increases can be great on the early trials, but there is less and less to be added to the
magnitude of the response as conditioning proceeds.
Theories of classical conditioning try to describe and give order to the results of the many, many
conditioning experiments that have been done; they are often mathematical in form (Rescorla and
Wagner, 1972). These theories are also concerned with the processes occurring when a conditioned
response is acquired. In other words, they speculate about the nature of the learning that takes place
in classical conditioning. One older theory about the nature of classical conditioning is the theory of
stimulus substitution; more recent and current ideas are the information-expectation theories.
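To give a feel for the mathematical flavour of such theories, here is a minimal Python sketch of a Rescorla-Wagner-style update rule. The parameter values and function name are illustrative assumptions, not figures from these notes; the point is only that when each CS-US pairing adds strength in proportion to how much learning remains, the result is the negatively accelerated acquisition curve described above.

# A minimal sketch of a Rescorla-Wagner-style acquisition rule.
# The learning-rate and asymptote values are illustrative assumptions.

def rescorla_wagner_acquisition(n_trials=20, learning_rate=0.3, asymptote=1.0):
    """Associative strength of the CS after each CS-US pairing (trial)."""
    strength = 0.0          # the CS starts out neutral
    curve = []
    for _ in range(n_trials):
        # The change on each trial is proportional to how much learning is left.
        strength += learning_rate * (asymptote - strength)
        curve.append(strength)
    return curve

if __name__ == "__main__":
    for trial, v in enumerate(rescorla_wagner_acquisition(), start=1):
        print(f"trial {trial:2d}: associative strength = {v:.3f}")

Because each pairing closes only a fraction of the gap that remains, gains are large on early trials and shrink later, which is exactly the flattening, negatively accelerated shape of the learning curves discussed above.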
Stimulus Substitution
This theory, which originated with Pavlov and was influential for many years, relies on the idea that the
CS, simply as a result of pairing with the US, acquires the capacity to substitute for the US in evoking a
response. In other words, an association - a link or a bond - is formed between the CS and the US so
that the CS becomes the equivalent of the US in eliciting a response. Pavlov thought this linkup, or
association, took place in the brain.
He thought that two areas of the brain, one for the CS and one for the US, became activated during the
conditioning procedure and that activation of the US area resulted in a reflex, or automatic, response.
As a result of the CS-US pairings during the conditioning procedure, he theorized, the CS acquired the
ability to excite the US area, thus leading to the reflex response.
While the idea of stimulus substitution is appealingly simple, it is not currently accepted by most
learning theorists. A major difficulty with the theory is that it says the conditioned response (CR)
should be the same as, or at least very similar to, the unconditioned response (UR). According to this
theory, all that has happened is that the CS has acquired the ability to evoke the response after
conditioning. The response has not changed; the change is in the stimulus that elicits it. However, it is
clear that the CR may not be at all like the UR. For instance, when using a mild foot shock as the US
and a tone as the CS, the unconditioned response of rats to the shock is an increase in running and
activity, but the conditioned response to the tone is a decrease in activity - a response known as
‘freezing’.
In classical conditioning, extinction occurs when the CS is presented alone, without the US, for a
number of trials. When this is done, the strength, or magnitude, of the CR gradually decreases, as
shown in the figure. For example, the number of drops of saliva decreases over unpaired trials, or blinks
to a light CS gradually become less frequent. Note that the process of extinction is not ‘forgetting’.
A response is said to be forgotten over time when there is no explicit procedure involved. The
process of extinction, however, involves a specific procedure - presentation of the CS by itself.
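The declining extinction curve can be sketched in the same way as the acquisition curve above. Under the same Rescorla-Wagner-style rule (again with illustrative, assumed parameter values), presenting the CS alone means no US follows, so the target strength drops to zero and the conditioned response weakens trial by trial:

# A minimal sketch of extinction: on CS-alone trials the US is absent,
# so associative strength decays back toward zero.
# Parameter values are illustrative assumptions, not figures from these notes.

def extinction(start_strength=1.0, n_trials=15, learning_rate=0.3):
    """Associative strength of the CS after each CS-alone (extinction) trial."""
    strength = start_strength
    curve = []
    for _ in range(n_trials):
        # With no US, the asymptote is 0, so strength moves back toward zero.
        strength += learning_rate * (0.0 - strength)
        curve.append(strength)
    return curve

if __name__ == "__main__":
    for trial, v in enumerate(extinction(), start=1):
        print(f"extinction trial {trial:2d}: associative strength = {v:.3f}")

In this sketch the strength approaches zero but never quite reaches it, which is at least consistent with the point made later that extinction does not completely erase conditioning.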
Just as with acquisition, there are several views concerning why the extinction process works.
Pavlov thought of conditioning in terms of two opposing tendencies: excitation and inhibition. During
acquisition the excitatory tendency has the upper hand, but during extinction, inhibition builds up to
suppress conditioned responding. Another view of the extinction process stems from the information-
expectation theory of conditioning described before. Because, during extinction, the CS is no longer
paired with the US, the CS ceases to be a signal for the US; the CS becomes a neutral stimulus, as
it was before conditioning occurred, and little attention is paid to it.
The decrease in conditioned-response magnitude resulting from extinction need not be permanent.
Suppose, the day after extinction of a salivary conditioned response, a dog is brought back into the
laboratory and the tone CS is presented. The magnitude of the dog's conditioned response will
probably be much greater than it was at the end of extinction the day before. Such an increase in
the magnitude of a conditioned response after a period of time with no explicit training is known as
‘spontaneous recovery’. The phenomenon of spontaneous recovery shows that the extinction
procedure, while decreasing the magnitude of a conditioned response, does not entirely remove the
tendency to respond to the CS. That extinction does not completely erase conditioning is also shown
by the fact that reconditioning is usually more rapid than was the original conditioning. To recondition
after extinction, the experimenter again pairs the CS and the US from the original conditioning. When
Pavlov did this, he got the general result shown by the right-hand curve of the figure: reconditioning
following extinction occurred more rapidly than did the first conditioning. Thus, some learning was
left, after extinction, from the original conditioning. Indeed, an experimenter can condition and
extinguish again and again; up to a point, relearning will be a little faster each time.
Pavlov used the phenomenon of spontaneous recovery and faster reconditioning to support his
inhibition idea of extinction. He assumed that inhibition from the extinction process gradually decays
with time and thus the excitatory tendency is less suppressed after an interval of a few hours or a
day. On the other hand, the information-expectation view of spontaneous recovery stresses the
attention paid to the CS by the learner. Remember that this theory says that the CS loses its
excitatory value in extinction because the learner no longer pays attention to it; it is as if the CS
becomes part of the background as it is presented over and over in the extinction situation. But the
passage of time changes the situation: when the CS now comes on, it is novel again, the
learner pays attention to it, and the CS is once more excitatory. Reconditioning is said to occur
rapidly because the signal value of the CS has already been acquired during the original
conditioning, and the learner simply carries this over to the reconditioning trials.
Pavlov discovered very early in his work that if he conditioned an animal to salivate at the sound of a
bell, it would also salivate, though not quite so much, at the sound of a buzzer or the beat of a
metronome. In other words, the animal tended to generalize the conditioned response to other
stimuli that were somewhat similar to the original conditioned stimulus. Subsequent conditioning
experiments have demonstrated this phenomenon of ‘stimulus generalization’ over and over. The
amount of generalization follows a rough rule of thumb: the greater the similarity among
conditioned stimuli, the greater the generalization.
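This rule of thumb can be pictured as a generalization gradient. The short sketch below is purely illustrative: the Gaussian similarity function, its width, and the tone frequencies are assumptions invented for the example, not values from these notes.

# An illustrative generalization gradient: the conditioned response to a test
# stimulus weakens with its distance from the trained CS.
import math

def generalized_cr(test_value, trained_value, cr_strength=1.0, width=300.0):
    """Strength of the CR evoked by a test stimulus, given the trained CS."""
    similarity = math.exp(-((test_value - trained_value) ** 2) / (2 * width ** 2))
    return cr_strength * similarity

if __name__ == "__main__":
    trained_tone_hz = 1000  # hypothetical tone that was paired with the US
    for test_tone_hz in (1000, 1100, 1300, 1600, 2000):
        cr = generalized_cr(test_tone_hz, trained_tone_hz)
        print(f"{test_tone_hz} Hz tone -> CR strength {cr:.2f}")

The closer the test tone is to the trained tone, the stronger the conditioned response; very dissimilar stimuli evoke almost nothing.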
‘Generalization’ means that conditioned responses occur to stimuli that have never been paired with
a specific unconditioned stimulus. It broadens the scope of classical conditioning. Consider the
development of irrational fears, or phobias, by children. Insofar as conditioning and generalization
play a role, the process might go something like this: a child is conditioned, accidentally perhaps,
to fear something by its being paired with a fear-producing unconditioned stimulus. The fear
becomes irrational when it generalizes, or spreads, to similar but harmless objects.
For example, the original conditioning might have involved a conditioned fear response to a white,
fluffy dog that bit the child. If this fear generalized to many white, fluffy things - other white animals,
white blankets, white birds, and so forth - we would have an example of an irrational fear of white
fluffy things, or a phobia. This child might be afraid of Santa Claus or Uncle Mike, who happens to
have a white beard; because of generalization, the fear has spread a long way from the original
conditioning.
The generalization of fear may make tracing it back to its conditioned origin difficult. But even
though the specific conditioning that has led to some phobias cannot be discovered, these irrational
fears can sometimes be eliminated by conditioning procedures that involve extinction and the
learning of conditioned responses, such as relaxation, that are incompatible with being afraid.
‘Discrimination’ is the process of learning to make one response to one stimulus and a different
response, or no response, to another stimulus. Although many kinds of discriminations are possible,
a typical discrimination experiment in classical conditioning involves learning to respond to one
stimulus and not to respond to another. When we learn to respond to one stimulus and not to
another, the range of stimuli that are capable of calling forth a conditioned response is narrowed. In
a sense, then, this kind of discrimination is the opposite of generalization, or the tendency for a
number of stimuli to call forth the same conditioned response.
SIGNIFICANCE OF CLASSICAL CONDITIONING
Many of our subjective feelings - from our violent emotions to the subtle nuances of our moods - are
probably conditioned responses. A face, a scene, or a voice may be the conditioned stimulus for an
emotional response. Generalization and the fact that we learned many of these responses before
we could talk and thus label them make it difficult to trace such feelings back to their conditioned
beginnings. No wonder we're not always able to identify the origins of our emotional responses.
Since some emotional responses to stimuli are learned, perhaps they can be unlearned. Or perhaps
other, less disturbing responses can be associated with the stimuli that produce unpleasant
emotional responses. The extinction and alteration of disturbing emotional responses by classical
conditioning is one form of ‘behavior therapy’ or, as it is also called, ‘behavior modification’.
Suppose your friend, who knows you are taking a psychology course, asks for advice on how to
teach her young children to behave politely at dinner, watch less television, and do their household
chores without constant prodding. What can you tell her that will be helpful? You know about
classical conditioning, but that is not going to help much. For instance, how could the children be
classically conditioned to say ‘thank you’ when appropriate? What is the unconditioned stimulus for
this or for other social behaviors? You need another form of learning to shape, or mold, such
behavior. To make the desired response more likely and the undesired one less likely, you need a
set of techniques that can be applied to the ongoing behavior of the children. The techniques of
instrumental conditioning will help do just that.
Instrumental conditioning is called instrumental because, as we will describe in detail, the key
feature of this form of learning is that some action (some behavior) of the learner is instrumental in
bringing about a change in the environment that makes the action more or less likely to occur again
in the future. For example, if the environmental change is a reward, the instrumental behavior that
brings about the reward will be more likely to occur in the future. In other words, if the behavior pays
off, it is likely to be repeated.
INSTRUMENTAL CONDITIONING
An environmental event that is the consequence of an instrumental response and that makes that
response more likely to occur again is known as a ‘reinforcer’, or ‘reinforcement’. A positive reinforcer is
a stimulus or event which, when it follows a response, increases the likelihood that the response will
be made again. It is important to note that the reinforcer is contingent upon the instrumental
response. In other words, the response results in the occurrence of the reinforcer. This should not be
a strange idea to you; responses which pay off in some way are likely to be repeated. For instance,
food for a hungry animal, water for a thirsty one, praise from a parent, a prize, and many, many other
stimuli or events will serve as positive reinforcers when they are contingent on some behavior.
Negative reinforcement is another tool in the kit of the instrumental conditioner. A ‘negative reinforcer’
is a stimulus or event which, when its cessation or termination is contingent on a response, increases
the likelihood that the response will occur again. The word negative refers to the fact that the
response causes the termination of an event. Again, this concept should not be strange to you.
Negative reinforcers are often but not necessarily painful, or noxious, events - an electric shock, for
example, or a scolding from the boss. The payoff is that the response, when made, stops the noxious
event (the shock or the boss’s abuse). We tend to repeat responses that pay off in this way.
REINFORCERS AND PUNISHERS
So instrumental, or operant, conditioning has quite a few ways of changing behavior. When you have
mastered these ideas, you should have something useful to say to the hypothetical friend in the
opening paragraph of this section. We will not go into the omission of reinforcement, but we will use
the concept of positive reinforcement, negative reinforcement, and punishment to organize the rest of
this section on instrumental conditioning.
A positive reinforcer is a stimulus or event which, when it is contingent on a response, increases the
likelihood that the response will be made again. A negative reinforcer is a stimulus or event which, when its
cessation or termination is contingent on a response, increases the likelihood that the response will
occur again. A punisher is a stimulus or event which, when its onset is contingent on a response,
decreases the likelihood that the response will occur again. In omission of reinforcement, or
omission training, positive reinforcement is withdrawn following a response; this has the effect of
decreasing the likelihood of the behavior leading to the removal of positive reinforcement.
In instrumental, or operant, conditioning, the term shaping refers to the process of learning a
complex response by first learning a number of simple responses leading up to the complex one.
Each step is learned by the application of contingent positive reinforcement, and each step builds
on the one before it until the complex response occurs and is reinforced. Shaping is also called
the method of successive approximations. Classical conditioning also seems to contribute to the
shaping of responses through the process known as auto-shaping.
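As a rough illustration of successive approximations, the idea can be written as a loop that reinforces any response meeting the current criterion and then tightens the criterion. Everything in the sketch below - the criteria, the way behaviour varies, and the numbers - is invented for illustration and is not a procedure given in these notes.

# A toy sketch of shaping by successive approximations.
# All criteria and numbers here are made-up illustrations.
import random

def shape(criteria, attempts_per_step=50):
    """Reinforce responses that meet the current criterion, then raise the bar."""
    baseline = 0.0  # crude stand-in for how close typical behaviour is to the target
    for criterion in criteria:  # e.g. orient -> approach -> touch -> press the lever
        for _ in range(attempts_per_step):
            response = baseline + random.uniform(0.0, 0.3)  # behaviour varies around baseline
            if response >= criterion:
                # Contingent reinforcement: the reinforced response becomes more typical.
                baseline = max(baseline, response)
    return baseline

if __name__ == "__main__":
    print(f"behaviour after shaping: {shape([0.2, 0.5, 0.8, 1.0]):.2f}")

Each step reinforces a closer approximation to the final complex response, so behaviour is gradually pulled toward the target rather than being demanded all at once.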
In instrumental, or operant, conditioning, the procedure of not reinforcing a response is called
extinction. If, after learning, reinforcement is no longer contingent on a response, the response will
become less likely to occur.
POSITIVE REINFORCER
Primary reinforcers in instrumental conditioning are reinforcers that are effective without any
previous special training; they work ‘naturally’ to increase the likelihood of a response when they
are made contingent on it. On the other hand, the ability of conditioned, or secondary, reinforcers to
influence the likelihood of a response depends upon learning; stimuli become conditioned
reinforcers by being paired with primary reinforcers.
https://www.youtube.com/watch?v=PRdCowYEtAg
THE GIFT OF FEAR by Gavin de Becker
‘Like every creature, you can know when you are in the presence of danger. You have the gift of
a brilliant internal guardian that stands ready to warn you of hazards and guides you through
risky situations.’
‘Though we want to believe that violence is a matter of cause and effect, it is actually a process,
a chain in which the violent outcome is only one link.’
‘For men like this, rejection is a threat to the identity, the persona, to the entire self, and in this
sense their crimes could be called murder in defense of the self.’
In a nutshell: Trust your intuitions, rather than technology, to protect you from violence.
We normally think of fear as something bad, but de Becker tries to show how it is a gift that may
protect us from harm. ‘The Gift of Fear: Survival Signals that Protect Us from Violence’ is about
getting into other people's minds so that their actions do not come as a terrible surprise.
‘The Gift of Fear’ may not just change your life - it could actually save it.
In the modern world, de Becker observes, we have forgotten to rely on our instincts to look after
ourselves. Most of us leave the issues of violence up to the police and criminal justice system,
believing that they will protect us, but often by the time we involve the authorities it is too late.
Alternatively, we believe that better technology will protect us from danger; the more alarms and
high fences we have, the safer we feel.
But there is a more reliable source of protection - our intuition or gut feeling. Usually, we have all
the information we need to warn us of certain people or situations; like other animals, we have an
inbuilt warning system for danger. Dogs’ intuition is much exaggerated, but de Becker argues
that in fact human beings have better intuition; the problem is that we are less prepared to trust it.
De Becker suggests that there is a universal code of violence that most of us can automatically
sense, yet modern life often has the effect of deadening our sensitivity. We either don't see the
signals at all or we won't admit to them. ‘Trusting intuition is the exact opposite of living in fear.’ Real
fear does not paralyze you, it energizes you, enabling you to do things you normally could not.
De Becker debunks the idea that there is a criminal mind separating certain people from the rest
of us. Most of us would say that we can never kill another person, but then you usually hear the
caveat: ‘unless I was having to protect a loved one.’ We are all capable of criminal thoughts and
even actions. Many murderers are described as ‘inhuman’, but surely, de Becker observes, they
can't be anything but human. If one person is capable of a particular act under certain
circumstances, another may also be capable of it. In his work, de Becker does not have the
luxury of making distinctions like ‘human’ and ‘monster’. Instead, he looks for whether a person
may have the intent or ability to harm. He concludes, ‘the resource of violence is in everyone; all
that changes is our view of the justification.’
A person is more likely to turn to violence, de Becker suggests, when the following elements are in place:
Justification - the person makes a judgment that they have been intentionally wronged.
Alternatives - violence seems like the only way forward to seek redress or justice.
Consequences - they decide they can live with the probable outcome of their violent act. For
instance, a stalker may not mind going to a jail as long as he gets his victim.
Ability - they have confidence in their ability to use their body or bullets or a bomb to achieve
their ends.
De Becker's team checks through these ‘pre-incident indicators’ when they have to predict the
likelihood of violence from someone threatening a client. If we pay attention, he says, ‘violence
never comes from nowhere.’ It is actually not very common for people to ‘snap’ before they
commit murder. Generally, de Becker remarks, violence is as predictable as water coming to a
boil. What also helps in predicting violence is to understand it as a process, ‘in which the violent
outcome is only one link.’ Three quarters of spousal murders happen after the woman leaves the
marriage.
What is the best predictor of violent criminality? De Becker's experience is that a troubled or
abusive childhood is an important factor. In a study into serial killers, 100% were found to have
suffered violence themselves, been humiliated, or simply neglected as children.
Violent people can be very good at hiding the signals that they are psychopaths. They may
studiously model normality so that they can at first appear to be ‘regular guys.’
We don't have to lead paranoid lives - most of the things we worry about never happen - yet it is
foolish to trust our home or office security system or the police absolutely. As it is people who
harm, de Becker notes, it is people we must understand.
Among stalkers and would-be assassins, ‘The Gift of Fear’ remarks, the common factor is a desperate hunger for recognition.
All of us want recognition, glory, significance to some extent, and in killing someone famous,
stalkers themselves become famous. To such people assassination makes perfect sense; it is a
shortcut to fame, and psychotic people do not really care whether the attention they gain is
positive or negative.
Final comments
In writing ‘The Gift of Fear’, de Becker was influenced by three books in particular: FBI behavioral
scientist Robert Ressler's ‘Whoever Fights Monsters’; psychologist John Monahan's ‘Predicting
Violent Behavior’; and Robert D. Hare's ‘Without Conscience’, which takes the reader into the
minds of psychopaths. There is now a large literature on the psychology of violence, but de
Becker's book is still a great place to start.
Gavin de Becker is considered a pioneer in the field of threat assessment and the prediction and
management of violence. His other books include ‘Protecting the Gift’, on the safety of children,
and ‘Fear Less: Real Truth About Risk, Safety and Security in a Time of Terrorism’.
About Learning
Classical Conditioning
Instrumental Conditioning
Reinforcers
Punishment