Terahertz Technology

First Edition, 2011

ISBN 978-93-81157-35-0

© All rights reserved.

Published by:
The English Press
4735/22 Prakashdeep Bldg,
Ansari Road, Darya Ganj,
Delhi - 110002
Email: info@wtbooks.com
Table of Contents
Chapter 1 - Introduction to Terahertz Technology

Chapter 2 - Quantum Cascade Laser

Chapter 3 - Free-Electron Laser

Chapter 4 - Synchrotron Light Source

Chapter 5 - Sources of Terahertz Technology

Chapter 6 - Terahertz Time-Domain Spectroscopy

Chapter 7 - Terahertz Metamaterials

Chapter 8 - Important Examples of Terahertz Technology


Chapter- 1

Introduction to Terahertz Technology

Terahertz waves lie at the far end of the infrared band, just before the start of the
microwave band.

In electronics, terahertz technology refers to electromagnetic waves propagating at
frequencies in the terahertz range. It is synonymously termed submillimeter radiation,
terahertz waves, terahertz light, T-rays, T-light, T-lux and THz. The term typically
applies to electromagnetic radiation with frequencies between the high-frequency edge of
the microwave band, 300 gigahertz (3×10¹¹ Hz), and the long-wavelength edge of far-
infrared light, 3000 GHz (3×10¹² Hz, or 3 THz). In wavelengths, this range runs from
1.0 mm at the microwave edge down to 0.1 mm (100 μm) at the infrared edge. The THz
band straddles the region where electromagnetic physics can best be described by its
wave-like characteristics (microwave) and its particle-like characteristics (infrared).
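These band edges can be cross-checked numerically. The short sketch below (an illustration added here, not part of the original text) converts the quoted frequencies to wavelength and photon energy using standard physical constants:

```python
# Conversions for the THz band edges quoted above (300 GHz and 3 THz).
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck constant, J*s
eV = 1.602e-19     # joules per electronvolt

for f in (300e9, 3e12):                    # band edges in Hz
    wavelength_mm = c / f * 1e3            # wavelength in millimetres
    photon_meV = h * f / eV * 1e3          # photon energy in milli-electronvolts
    print(f"{f/1e12:.1f} THz -> {wavelength_mm:.2f} mm, {photon_meV:.1f} meV")
```

The recovered wavelengths (1.0 mm and 0.1 mm) match the text, and photon energies of only a few meV are consistent with the radiation being non-ionizing.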
Introduction
Like infrared radiation or microwaves, these waves usually travel in line of sight.
Terahertz radiation is non-ionizing and, like microwaves, can penetrate a wide variety of
non-conducting materials: it can pass through clothing, paper, cardboard, wood, masonry,
plastic and ceramics. It can also penetrate fog and clouds, but cannot penetrate metal or
water.

Plot of the zenith atmospheric transmission on the summit of Mauna Kea throughout the
range of 1 to 3 THz of the electromagnetic spectrum at a precipitable water vapor level of
0.001 mm.

The Earth's atmosphere is a strong absorber of terahertz radiation, so the range of
terahertz radiation in air is quite short, limiting its usefulness for communications. In
addition, producing and detecting coherent terahertz radiation remained technically
challenging until the 1990s.

Sources
Terahertz radiation is emitted as part of the black body radiation from anything with
temperatures greater than about 10 kelvin. While this thermal emission is very weak,
observations at these frequencies are important for characterizing the cold (10–20 K) dust in
the interstellar medium in the Milky Way galaxy, and in distant starburst galaxies.
Telescopes operating in this band include the James Clerk Maxwell Telescope, the
Caltech Submillimeter Observatory and the Submillimeter Array at the Mauna Kea
Observatory in Hawaii, the BLAST balloon borne telescope, the Herschel Space
Observatory, and the Heinrich Hertz Submillimeter Telescope at the Mount Graham
International Observatory in Arizona. The Atacama Large Millimeter Array, under
construction, will operate in the submillimeter range. The opacity of the Earth's
atmosphere to submillimeter radiation restricts these observatories to very high altitude
sites, or to space.
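As a rough check on why such cold dust is observed in this band, Wien's displacement law in its frequency form, ν_peak = 2.821 k_B T / h, places the peak thermal emission of 10–20 K dust near 1 THz. The constants are standard; the temperatures are those quoted above:

```python
# Peak blackbody emission frequency via Wien's law (frequency form):
# nu_peak = 2.821 * k_B * T / h, i.e. about 58.8 GHz per kelvin.
k_B = 1.381e-23    # Boltzmann constant, J/K
h = 6.626e-34      # Planck constant, J*s

for T in (10, 20):                         # interstellar dust temperatures, K
    nu_peak_THz = 2.821 * k_B * T / h / 1e12
    print(f"T = {T} K -> peak near {nu_peak_THz:.2f} THz")
```

Both peaks (about 0.6 and 1.2 THz) fall squarely in the submillimeter/terahertz window served by the telescopes listed above.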

As of 2004 the only viable sources of terahertz radiation were:

• the gyrotron,
• the backward wave oscillator ("BWO"),
• the far infrared laser ("FIR laser"),
• quantum cascade laser,
• the free electron laser (FEL),
• synchrotron light sources,
• photomixing sources, and
• single-cycle sources used in terahertz time domain spectroscopy such as
photoconductive, surface field, photo-Dember and optical rectification emitters.

The first images generated using terahertz radiation date from the 1960s; however, in
1995, images generated using terahertz time-domain spectroscopy generated a great deal
of interest, and sparked a rapid growth in the field of terahertz science and technology.
This excitement, along with the associated coining of the term "T-rays", even showed up
in a contemporary novel by Tom Clancy.

There have also been solid-state sources of millimeter and submillimeter waves for many
years. AB Millimeter in Paris, for instance, produces a system that covers the entire range
from 8 GHz to 1000 GHz with solid state sources and detectors. Nowadays, most time-
domain work is done via ultrafast lasers.

In mid-2007, scientists at the U.S. Department of Energy's Argonne National Laboratory,
along with collaborators in Turkey and Japan, announced the creation of a compact
device that could lead to portable, battery-operated sources of T-rays, or terahertz radiation.
The group was led by Ulrich Welp of Argonne's Materials Science Division. This new T-
ray source uses high-temperature superconducting crystals grown at the University of
Tsukuba, Japan. These crystals comprise stacks of Josephson junctions that exhibit a
unique electrical property: when an external voltage is applied, an alternating current will
flow back and forth across the junctions at a frequency proportional to the strength of the
voltage; this phenomenon is known as the Josephson effect. These alternating currents
then produce electromagnetic fields whose frequency is tuned by the applied voltage.
Even a small voltage – around two millivolts per junction – can induce frequencies in the
terahertz range, according to Welp.
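The voltage-to-frequency conversion described above is the standard Josephson relation, f = 2eV/h (about 483.6 GHz per millivolt). A quick check with the roughly 2 mV per-junction figure quoted by Welp:

```python
# Josephson relation: a DC voltage V across a junction drives an AC current
# at frequency f = 2 * e * V / h.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

V = 2e-3              # ~2 mV per junction, as quoted in the text
f = 2 * e * V / h
print(f"{f/1e12:.2f} THz")
```

This lands at just under 1 THz, consistent with the claim that millivolt-scale voltages reach the terahertz range.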

In 2008, engineers at Harvard University announced they had built a room-temperature
semiconductor source of coherent terahertz radiation. Until then, such sources had
required cryogenic cooling, which greatly limited their use in everyday applications.

In 2009 it was shown that T-waves are produced when adhesive tape is unpeeled. The
observed spectrum of this terahertz radiation exhibits a peak at 2 THz and a broader peak
at 18 THz, and the radiation is not polarized. The generation mechanism is tribocharging
of the adhesive tape and subsequent discharge.

Research
• Medical imaging:
o Terahertz radiation is non-ionizing, and thus is not expected to damage
tissues and DNA, unlike X-rays. Some frequencies of terahertz radiation
can penetrate several millimeters of tissue with low water content (e.g.
fatty tissue) and reflect back. Terahertz radiation can also detect
differences in water content and density of a tissue. Such methods could
allow effective detection of epithelial cancer with a safer and less invasive
or painful system using imaging.
o Some frequencies of terahertz radiation can be used for 3D imaging of
teeth and may be more accurate and safer than conventional X-ray
imaging in dentistry.
• Security:
o Terahertz radiation can penetrate fabrics and plastics, so it can be used in
surveillance, such as security screening, to uncover concealed weapons on
a person, remotely. This is of particular interest because many materials of
interest have unique spectral "fingerprints" in the terahertz range. This
offers the possibility to combine spectral identification with imaging.
Passive detection of terahertz signatures avoids the bodily privacy
concerns of other detection methods by being targeted to a very specific
range of materials and objects.
• Scientific use and imaging:
o Spectroscopy in terahertz radiation could provide novel information in
chemistry and biochemistry.
o Recently developed methods of THz time-domain spectroscopy (THz
TDS) and THz tomography have been shown to be able to perform
measurements on, and obtain images of, samples which are opaque in the
visible and near-infrared regions of the spectrum. The utility of THz-TDS
is limited when the sample is very thin, or has a low absorbance, since it is
very difficult to distinguish changes in the THz pulse caused by the
sample from those caused by long term fluctuations in the driving laser
source or experiment. However, THz-TDS produces radiation that is both
coherent and spectrally broad, so such images can contain far more
information than a conventional image formed with a single-frequency
source.
o A primary use of submillimeter waves in physics is the study of condensed
matter in high magnetic fields, since at high fields (over about 15 teslas),
the Larmor frequencies are in the submillimeter band. Many high-
magnetic field laboratories perform this work, such as the National High
Magnetic Field Laboratory (NHMFL) in Florida.
o Submillimetre astronomy.
o Terahertz radiation could let art historians see murals hidden beneath coats
of plaster or paint in centuries-old buildings, without harming the artwork.
• Communication:
o Potential uses exist in high-altitude telecommunications, above altitudes
where water vapor causes signal absorption: aircraft to satellite, or satellite
to satellite.
• Manufacturing:
o Many possible uses of terahertz sensing and imaging are proposed in
manufacturing, quality control, and process monitoring. These generally
exploit the traits of plastics and cardboard being transparent to terahertz
radiation, making it possible to inspect packaged goods.
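One quantitative point from the scientific-use item above can be checked directly: the electron Larmor (spin-resonance) frequency scales as roughly 28 GHz per tesla, so fields above about 15 T do push it into the submillimeter band. The g-factor and constants below are standard values; this is an added illustration, not from the original text:

```python
# Electron Larmor frequency f = g * mu_B * B / h (~28 GHz per tesla),
# showing why fields above ~15 T put spin resonance in the submillimeter band.
g = 2.0023              # free-electron g-factor
mu_B = 9.274e-24        # Bohr magneton, J/T
h = 6.626e-34           # Planck constant, J*s
c = 2.998e8             # speed of light, m/s

B = 15.0                # field strength in tesla, from the text
f = g * mu_B * B / h
print(f"{f/1e9:.0f} GHz, wavelength {c/f*1e3:.2f} mm")
```

At 15 T the resonance sits near 420 GHz, i.e. a wavelength of about 0.7 mm, squarely in the submillimeter band.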

Terahertz versus submillimeter waves


The terahertz band, covering the wavelength range between 0.1 and 1 mm, is identical to
the submillimeter wavelength band. However, typically, the term "terahertz" is used more
often in marketing in relation to generation and detection with pulsed lasers, as in
terahertz time domain spectroscopy, while the term "submillimeter" is used for
generation and detection with microwave technology, such as harmonic multiplication.

Safety
The terahertz region is between the radio frequency region and the optical region
generally associated with lasers. Both the IEEE RF safety standard and the ANSI Laser
safety standard have limits into the terahertz region, but both safety limits are based on
extrapolation. It is expected that effects on tissues are thermal in nature and, therefore,
predictable by conventional thermal models. Research is underway to collect data to
populate this region of the spectrum and validate safety limits.
Chapter- 2

Quantum Cascade Laser

Quantum cascade lasers (QCLs) are semiconductor lasers that emit in the mid- to far-
infrared portion of the electromagnetic spectrum and were first demonstrated by Jerome
Faist, Federico Capasso, Deborah Sivco, Carlo Sirtori, Albert Hutchinson, and Alfred
Cho at Bell Laboratories in 1994.

Unlike typical interband semiconductor lasers that emit electromagnetic radiation through
the recombination of electron–hole pairs across the material band gap, QCLs are unipolar
and laser emission is achieved through the use of intersubband transitions in a repeated
stack of semiconductor multiple quantum well heterostructures, an idea first proposed in
the paper "Possibility of amplification of electromagnetic waves in a semiconductor with
a superlattice" by R.F. Kazarinov and R.A. Suris in 1971.

Intersubband vs. interband transitions


Interband transitions in conventional semiconductor lasers emit a single photon.

Within a bulk semiconductor crystal, electrons may occupy states in one of two
continuous energy bands: the valence band, which is heavily populated with low-energy
electrons, and the conduction band, which is sparsely populated with high-energy
electrons. The two energy bands are separated by an energy band gap in which there are
no permitted states available for electrons to occupy. Conventional semiconductor laser
diodes generate light by a single photon being emitted when a high energy electron in the
conduction band recombines with a hole in the valence band. The energy of the photon
and hence the emission wavelength of laser diodes is therefore determined by the band
gap of the material system used.
A QCL however does not use bulk semiconductor materials in its optically active region.
Instead it comprises a periodic series of thin layers of varying material composition
forming a superlattice. The superlattice introduces a varying electric potential across the
length of the device, meaning that there is a varying probability of electrons occupying
different positions over the length of the device. This is referred to as one-dimensional
multiple quantum well confinement and leads to the splitting of the band of permitted
energies into a number of discrete electronic subbands. By suitable design of the layer
thicknesses it is possible to engineer a population inversion between two subbands in the
system which is required in order to achieve laser emission. Since the position of the
energy levels in the system is primarily determined by the layer thicknesses and not the
material, it is possible to tune the emission wavelength of QCLs over a wide range in the
same material system.
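A back-of-the-envelope illustration of this thickness-controlled tuning uses the infinite-square-well subband energies, E_n = n²ℏ²π²/(2m*L²). The 10 nm GaAs well and effective mass below are illustrative assumptions; a real QCL design solves the coupled finite wells:

```python
# Infinite-square-well estimate: E_n = n^2 * hbar^2 * pi^2 / (2 m* L^2),
# showing how the layer thickness L sets the intersubband transition energy
# and hence the emission wavelength. Illustrative only: a real QCL solves
# coupled finite wells under bias.
import math

hbar = 1.055e-34          # reduced Planck constant, J*s
m_e = 9.109e-31           # electron mass, kg
h = 6.626e-34             # Planck constant, J*s
c = 2.998e8               # speed of light, m/s
eV = 1.602e-19            # joules per electronvolt

m_star = 0.067 * m_e      # GaAs conduction-band effective mass
L = 10e-9                 # well width, m (assumed)

E = lambda n: n**2 * hbar**2 * math.pi**2 / (2 * m_star * L**2)
dE = E(2) - E(1)          # spacing of the two lowest subbands
print(f"E2 - E1 = {dE/eV*1e3:.0f} meV -> lambda = {h*c/dE*1e6:.1f} um")
```

A 10 nm well gives a spacing near 170 meV, i.e. emission around 7 μm; thickening the well shrinks the spacing and lengthens the wavelength, which is the tuning mechanism described above.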

In quantum cascade structures, electrons undergo intersubband transitions and photons
are emitted. The electrons tunnel to the next period of the structure and the process
repeats.

Additionally, in semiconductor laser diodes, electrons and holes are annihilated after
recombining across the band gap and can play no further part in photon generation.
However in a unipolar QCL, once an electron has undergone an intersubband transition
and emitted a photon in one period of the superlattice, it can tunnel into the next period of
the structure where another photon can be emitted. This process of a single electron
causing the emission of multiple photons as it traverses through the QCL structure gives
rise to the name cascade and makes a quantum efficiency of greater than unity possible
which leads to higher output powers than semiconductor laser diodes.
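The cascade effect sets an idealized photon budget: at most one photon per electron per period. The sketch below computes that upper bound; the period count, wavelength, and drive current are assumed for illustration and do not describe a specific device:

```python
# Idealized photon budget of the cascade: each electron can emit up to one
# photon per period, so the maximum optical power is N_p * (h c / lambda) * I / e.
# All device numbers below are illustrative assumptions.
e = 1.602e-19         # elementary charge, C
h = 6.626e-34         # Planck constant, J*s
c = 2.998e8           # speed of light, m/s

N_p = 30              # number of cascade periods (assumed)
lam = 9e-6            # emission wavelength, m (assumed mid-infrared)
I = 0.5               # drive current, A (assumed)

photon_energy = h * c / lam
P_max = N_p * photon_energy * I / e      # neglects all losses and efficiencies
print(f"up to {P_max:.1f} W from {I} A")
```

Even this loss-free bound makes the point: because every electron can emit N_p photons, watt-scale outputs become possible from modest currents, unlike interband diodes where each electron emits at most once.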

Operating principles
Rate equations

Subband populations are determined by the intersubband scattering rates and the
injection/extraction current.

QCLs are typically based upon a three-level system. Assuming the formation of the
wavefunctions is a fast process compared to the scattering between states, the time
independent solutions to the Schrödinger equation may be applied and the system can be
modelled using rate equations. Each subband contains a number of electrons ni (where i is
the subband index) which scatter between levels with a lifetime τif (reciprocal of the
average intersubband scattering rate Wif), where i and f are the initial and final subband
indices. Assuming that no other subbands are populated, the rate equations for the three-level
laser are given by:

$$\frac{dn_3}{dt} = I_{in} + n_2 W_{23} - n_3 (W_{31} + W_{32})$$
$$\frac{dn_2}{dt} = n_3 W_{32} - n_2 (W_{21} + W_{23})$$
$$\frac{dn_1}{dt} = n_3 W_{31} + n_2 W_{21} - I_{out}$$

In the steady state, the time derivatives are equal to zero and $I_{in} = I_{out} = I$. The general rate
equation for electrons in subband $i$ of an $N$-level system is therefore:

$$\frac{dn_i}{dt} = I_{in}\,\delta_{iN} + \sum_{j \neq i} n_j W_{ji} - n_i \sum_{j \neq i} W_{ij} - I_{out}\,\delta_{i1}$$

Under the assumption that absorption processes can be ignored (which is valid at low
temperatures), the middle rate equation gives

$$n_3 W_{32} = n_2 W_{21}$$

Therefore, if $\tau_{32} > \tau_{21}$ (i.e. $W_{21} > W_{32}$) then $n_3 > n_2$ and a population inversion will exist.
The population ratio is defined as

$$\frac{n_3}{n_2} = \frac{W_{21}}{W_{32}} = \frac{\tau_{32}}{\tau_{21}}$$

If all $N$ steady-state rate equations are summed, the right-hand side becomes zero,
meaning that the system is underdetermined, and it is possible only to find the relative
population of each subband. An additional equation is required to set the total number of
carriers equal to the total number of dopant ions:

$$\sum_{i=1}^{N} n_i = N_{2D},$$

where $N_{2D}$ is the sheet doping density per period.
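The steady-state analysis above can be sketched numerically. The lifetimes and doping density below are illustrative assumptions, chosen only so that τ32 > τ21 as required for inversion:

```python
# Steady-state solution of the three-level rate equations, neglecting
# absorption (W23 = 0). All numerical values are illustrative assumptions.
tau_32 = 2.0e-12     # upper -> lower laser level lifetime, s
tau_31 = 3.0e-12     # upper -> ground level lifetime, s
tau_21 = 0.3e-12     # lower -> ground lifetime, s (fast LO-phonon depopulation)
N_2D = 1.0e15        # total sheet carrier density per period, m^-2
I = 1.0e26           # injection rate into level 3, electrons m^-2 s^-1

W32, W31, W21 = 1/tau_32, 1/tau_31, 1/tau_21

# dn3/dt = 0  ->  n3 = I / (W31 + W32)
n3 = I / (W31 + W32)
# dn2/dt = 0  ->  n2 = n3 * W32 / W21
n2 = n3 * W32 / W21
# carrier conservation fixes the ground-level population
n1 = N_2D - n3 - n2

assert n3 > n2       # population inversion, since tau_32 > tau_21
print(f"n3/n2 = {n3/n2:.2f} (= tau_32/tau_21 = {tau_32/tau_21:.2f})")
```

Note that the inversion ratio n3/n2 depends only on the lifetime ratio, exactly as the population-ratio equation above states; the injection rate sets the absolute populations.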
Electron wave functions are repeated in each period of a three quantum well QCL active
region. The upper laser level is shown in bold.

Active region designs

The scattering rates are tailored by suitable design of the layer thicknesses in the
superlattice which determine the electron wave functions of the subbands. The scattering
rate between two subbands is heavily dependent upon the overlap of the wave functions
and energy spacing between the subbands. The figure shows the wave functions in a three
quantum well (3QW) QCL active region and injector.

In order to decrease W32, the overlap of the upper and lower laser levels is reduced. This
is often achieved through designing the layer thicknesses such that the upper laser level is
mostly localised in the left-hand well of the 3QW active region, while the lower laser
level wave function is made to mostly reside in the central and right-hand wells. This is
known as a diagonal transition. A vertical transition is one in which the upper laser level
is localised in mainly the central and right-hand wells. This increases the overlap and
hence W32 which reduces the population inversion, but it increases the strength of the
radiative transition and therefore the gain.

In order to increase W21, the lower laser level and the ground level wave functions are
designed such that they have a good overlap and to increase W21 further, the energy
spacing between the subbands is designed such that it is equal to the longitudinal optical
(LO) phonon energy (~36 meV in GaAs) so that resonant LO phonon-electron scattering
can quickly depopulate the lower laser level.

Material systems
The first QCL was fabricated in the InGaAs/InAlAs material system lattice-matched to an
InP substrate. This particular material system has a conduction band offset (quantum well
depth) of 520 meV. These InP-based devices have reached very high levels of
performance across the mid-infrared spectral range, achieving high power, above room-
temperature, continuous wave emission.

In 1998 GaAs/AlGaAs QCLs were demonstrated by Sirtori et al. proving that the QC
concept is not restricted to one material system. This material system has a varying
quantum well depth depending on the aluminium fraction in the barriers. Although GaAs-
based QCLs have not matched the performance levels of InP-based QCLs in the mid-
infrared, they have proven to be very successful in the terahertz region of the spectrum.

The short wavelength limit of QCLs is determined by the depth of the quantum well and
recently QCLs have been developed in material systems with very deep quantum wells in
order to achieve short wavelength emission. The InGaAs/AlAsSb material system has
quantum wells 1.6 eV deep and has been used to fabricate QCLs emitting at 3 μm.
InAs/AlSb QCLs have quantum wells 2.1 eV deep and electroluminescence at
wavelengths as short as 2.5 μm has been observed.

QCLs may also allow laser operation in materials traditionally considered to have poor
optical properties. Indirect bandgap materials such as silicon have minimum electron and
hole energies at different momentum values. For interband optical transitions, carriers
change momentum through a slow, intermediate scattering process, dramatically reducing
the optical emission intensity. Intersubband optical transitions however, are independent
of the relative momentum of conduction band and valence band minima and theoretical
proposals for Si/SiGe quantum cascade emitters have been made.

Emission wavelengths
QCLs currently cover the wavelength range from 2.75 μm to 250 μm (extending to 355 μm
with the application of a magnetic field).

Optical waveguides
End view of QC facet with ridge waveguide. Darker gray: InP, lighter gray: QC layers,
black: dielectric, gold: Au coating. Ridge ~10 μm wide.

End view of QC facet with buried heterostructure waveguide. Darker gray: InP, lighter
gray: QC layers, black: dielectric. Heterostructure ~10 μm wide.

The first step in processing quantum cascade gain material to make a useful light-
emitting device is to confine the gain medium in an optical waveguide. This makes it
possible to direct the emitted light into a collimated beam, and allows a laser resonator to
be built such that light can be coupled back into the gain medium.

Two types of optical waveguides are in common use. A ridge waveguide is created by
etching parallel trenches in the quantum cascade gain material to create an isolated stripe
of QC material, typically ~10 μm wide and several mm long. A dielectric material is
typically deposited in the trenches to guide injected current into the ridge, then the entire
ridge is typically coated with gold to provide electrical contact and to help remove heat
from the ridge when it is producing light. Light is emitted from the cleaved ends of the
waveguide, with an active area that is typically only a few micrometers in dimension.

The second waveguide type is a buried heterostructure. Here, the QC material is also
etched to produce an isolated ridge. Now, however, new semiconductor material is grown
over the ridge. The change in index of refraction between the QC material and the
overgrown material is sufficient to create a waveguide. Dielectric material is also
deposited on the overgrown material around QC ridge to guide the injected current into
the QC gain medium. Buried heterostructure waveguides are efficient at removing heat
from the QC active area when light is being produced.

Laser types
Although the quantum cascade gain medium can be used to produce incoherent light in a
superluminescent configuration, it is most commonly used in combination with an optical
cavity to form a laser.

Fabry–Pérot lasers

This is the simplest of the quantum cascade lasers. An optical waveguide is first
fabricated out of the quantum cascade material to form the gain medium. The ends of the
crystalline semiconductor device are then cleaved to form two parallel mirrors on either
end of the waveguide, thus forming a Fabry–Pérot resonator. The residual reflectivity on
the cleaved facets from the semiconductor-to-air interface is sufficient to create a
resonator. Fabry–Pérot quantum cascade lasers are capable of producing high powers, but
are typically multi-mode at higher operating currents. The wavelength can be changed
chiefly by changing the temperature of the QC device.

Distributed feedback lasers

A distributed feedback (DFB) quantum cascade laser is similar to a Fabry–Pérot laser,
except for a distributed Bragg reflector (DBR) built on top of the waveguide to prevent it
from emitting at other than the desired wavelength. This forces single-mode operation of
the laser, even at higher operating currents. DFB lasers can be tuned chiefly by changing
the temperature, although an interesting variant on tuning can be obtained by pulsing a
DFB laser. In this mode, the wavelength of the laser is rapidly “chirped” during the
course of the pulse, allowing rapid scanning of a spectral region.

External cavity lasers

Schematic of QC device in external cavity with frequency-selective optical feedback
provided by a diffraction grating in Littrow configuration.

In an external cavity (EC) quantum cascade laser, the quantum cascade device serves as
the laser gain medium. One, or both, of the waveguide facets have an anti-reflection
coating that defeats the optical cavity action of the cleaved facets. Mirrors are then
arranged in a configuration external to the QC device to create the optical cavity.

If a frequency-selective element is included in the external cavity, it is possible to reduce
the laser emission to a single wavelength, and even to tune the radiation. For example,
diffraction gratings have been used to create a tunable laser that can tune over 15% of its
center wavelength.

Growth
The alternating layers of the two different semiconductors which form the quantum
heterostructure may be grown onto a substrate using methods such as molecular beam
epitaxy (MBE) or metalorganic vapour phase epitaxy (MOVPE), also known as
metalorganic chemical vapor deposition (MOCVD).

Applications
Distributed feedback (DFB) quantum cascade lasers were first commercialized in 2004,
and broadly-tunable external cavity quantum cascade lasers first commercialized in 2006.
The high optical power output, tuning range and room temperature operation make QCLs
useful for spectroscopic applications such as remote sensing of environmental gases and
pollutants in the atmosphere and homeland security. They may eventually be used for
vehicular cruise control in conditions of poor visibility, collision avoidance radar,
industrial process control, and medical diagnostics such as breath analyzers. QCLs are
also used to study plasma chemistry.

Their large dynamic range, excellent sensitivity, and failsafe operation combined with the
solid-state reliability should easily overcome many of the technological hurdles that
impede existing technology in these markets. When used in multiple-laser systems,
intrapulse QCL spectroscopy offers broadband spectral coverage that can potentially be
used to identify and quantify complex heavy molecules such as those in toxic chemicals,
explosives, and drugs.

Unguided QCL emission in the 3–5 μm atmospheric window could be used as a cheaper
alternative to optical fibres for high-speed Internet access in built-up areas.
Chapter- 3

Free-Electron Laser

Free-electron laser FELIX at FOM (Nieuwegein)

A free-electron laser, or FEL, is a laser that shares the optical properties of conventional
lasers, such as emitting a coherent beam that can reach high power, but uses very
different operating principles to form that beam. Unlike gas, liquid, or solid-state lasers such as diode lasers,
in which electrons are excited in bound atomic or molecular states, FELs use a relativistic
electron beam as the lasing medium which moves freely through a magnetic structure,
hence the term free electron. The free-electron laser has the widest frequency range of
any laser type, and can be widely tunable, currently ranging in wavelength from
microwaves, through terahertz radiation and infrared, to the visible spectrum, to
ultraviolet, to X-rays.

Free-electron lasers were invented by John Madey in 1976 at Stanford University,
building on research by Hans Motz, who proposed the wiggler magnetic configuration at
the heart of a free-electron laser. Madey used a 24 MeV electron beam and a 5 m long
wiggler to amplify a signal. Soon afterward, other laboratories with accelerators started
developing such lasers.

Beam creation

Free electron laser schematic of operation


Undulator of FELIX

To create an FEL, a beam of electrons is accelerated to almost light speed. The beam
passes through an FEL oscillator in the form of a periodic, transverse magnetic field,
produced by arranging magnets with alternating poles within a laser cavity along the
beam path. This array of magnets is sometimes called an undulator, or a "wiggler",
because it forces the electrons in the beam to follow a sinusoidal path. The acceleration of
the electrons along this path results in the release of photons (synchrotron radiation).
Since the electron motion is in phase with the field of the light already emitted, the fields
add together coherently. Whereas a conventional undulator would cause the electrons to
radiate independently, instabilities in the electron beam, resulting from the interaction
between the electrons' oscillations in the undulator and the radiation they emit, lead to a
bunching of the electrons, which then continue to radiate in phase with each other. The
wavelength of the light emitted can be readily tuned by adjusting the energy of the
electron beam or the magnetic field strength of the undulators.
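This tuning behavior can be illustrated with the standard planar-undulator resonance equation, λ = (λ_u / 2γ²)(1 + K²/2). The beam energy is Madey's 24 MeV from above, while the undulator period and deflection parameter K are assumed for illustration:

```python
# Planar-undulator resonance: lambda = (lambda_u / (2 gamma^2)) * (1 + K^2 / 2).
# The undulator period and K below are assumed, illustrative values chosen
# to be roughly the scale of Madey's original experiment.
E_MeV = 24.0            # electron kinetic energy, MeV (from the text)
lambda_u = 0.032        # undulator period, m (assumed)
K = 0.7                 # undulator deflection parameter (assumed)

gamma = 1 + E_MeV / 0.511          # Lorentz factor (0.511 MeV = electron rest energy)
lam = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)
print(f"lambda = {lam*1e6:.1f} um")
```

With these numbers the output lands in the infrared near 9 μm; because λ scales as 1/γ², raising the beam energy shortens the wavelength, which is why FELs can span microwaves to X-rays.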

Accelerators
Today, a free-electron laser requires the use of an electron accelerator with its associated
shielding, as accelerated electrons are a radiation hazard. These accelerators are typically
powered by klystrons, which require a high voltage supply. The electron beam must be
maintained in a vacuum which requires the use of numerous vacuum pumps along the
beam path. While this equipment is bulky and expensive, free-electron lasers can achieve
very high peak powers, and the tunability of FELs makes them highly desirable in several
disciplines, including medical diagnosis and non-destructive testing.

X-ray uses
The lack of suitable mirrors in the extreme ultraviolet and x-ray regimes prevents the
operation of an FEL oscillator; consequently, there must be sufficient amplification over a
single pass of the electron beam through the undulator to make the FEL worthwhile. X-
ray free-electron lasers use long undulators. The underlying principle of the intense pulses
from the X-ray laser is self-amplified spontaneous emission (SASE),
which leads to the microbunching of the electrons. Initially all electrons are distributed
evenly and they emit incoherent spontaneous radiation only. Through the interaction of
this radiation and the electrons' oscillations, they drift into microbunches separated by a
distance equal to one radiation wavelength. Through this interaction, all electrons begin
emitting coherent radiation in phase. In other words, all emitted radiation can reinforce
itself perfectly whereby wave crests and wave troughs are always superimposed on one
another in the best possible way. This results in an exponential increase of emitted
radiation power, leading to high beam intensities and laser-like properties. Examples of
facilities operating on the SASE FEL principle include the Free electron LASer in
Hamburg (FLASH), the Linac Coherent Light Source (LCLS) at the SLAC National
Accelerator Laboratory, the European X-ray free-electron laser, the SPring-8 Compact
SASE Source (SCSS) and the PSI SwissFEL.

One problem with SASE FELs is the lack of temporal coherence due to a noisy startup
process. To avoid this, one can "seed" an FEL with a laser tuned to the resonance of the
FEL. Such a temporally coherent seed can be produced by more conventional means,
such as by high-harmonic generation (HHG) using an optical laser pulse. This results in
coherent amplification of the input signal; in effect, the output laser quality is
characterized by the seed. While HHG seeds are available at wavelengths down to the
extreme ultraviolet, seeding is not feasible at x-ray wavelengths due to the lack of
conventional x-ray lasers.

Medical uses
Research by Dr. Glenn Edwards and colleagues at Vanderbilt University's FEL Center in
1994 found that soft tissues like skin, cornea, and brain tissue could be cut, or ablated,
using infrared FEL wavelengths around 6.45 micrometres with minimal collateral
damage to adjacent tissue. This led to further research and eventually surgeries on
humans, the first ever using a free-electron laser. Starting in 1999, and using the Keck
foundation funded FEL operating rooms at the Vanderbilt FEL Center, Dr. Michael
Copeland and Dr. Pete Konrad of Vanderbilt performed three surgeries in which they
resected meningioma brain tumors. Beginning in 2000, Dr. Karen Joos and Dr. Louise
Mawn performed five surgeries involving the cutting of a window in the sheath of the
optic nerve, to test the efficacy for optic nerve sheath fenestration. These eight surgeries
went as expected with results consistent with the routine standard of care and with the
added benefit of laser surgery and minimal collateral damage. A review of FELs for
medical uses is given in the 1st edition of Tunable Laser Applications.

Since these successful results, there have been several efforts to build small, clinical
lasers tunable in the 6 to 7 micrometre range with pulse structure and energy to give
minimal collateral damage in soft tissue. At Vanderbilt, there exists a Raman shifted
system pumped by an Alexandrite laser.

At the 2006 annual meeting of the American Society for Laser Medicine and Surgery
(ASLMS), Dr. Rox Anderson of the Wellman Laboratory of Photomedicine of Harvard
Medical School and Massachusetts General Hospital reported on the possible medical
application of the free-electron laser in melting fats without harming the overlying skin.
It was reported that at infrared wavelengths, water in tissue was heated by the laser, but at
wavelengths corresponding to 915, 1210 and 1720 nm, subsurface lipids were
differentially heated more strongly than water. The possible applications of this selective
photothermolysis (heating tissues using light) include the selective destruction of sebum
lipids to treat acne, as well as targeting other lipids associated with cellulite and body fat
as well as fatty plaques that form in arteries which can help treat atherosclerosis and heart
disease.

Military uses
FEL technology is being evaluated by the US Navy as a promising candidate for an
anti-aircraft and anti-missile directed-energy weapon. Significant progress is being made in raising
FEL power levels (the Thomas Jefferson National Accelerator Facility's FEL has
demonstrated over 14 kW) and it should be possible to build compact multi-megawatt
class FEL weapons. On June 9, 2009 the Office of Naval Research announced it had
awarded Raytheon a contract to develop a 100 kW experimental FEL. On March 18,
2010 Boeing Directed Energy Systems announced the completion of an initial design for
U.S. Naval use.
Chapter- 4

Synchrotron Light Source

Synchrotron radiation emerging from a beam port. The blue colour comes from oxygen
and nitrogen atoms in the air, ionised by the X-rays

A synchrotron light source is a source of electromagnetic radiation produced by a
synchrotron, which is artificially produced for scientific and technical purposes by
specialized particle accelerators, typically accelerating electrons. Once the high-energy
electron beam has been generated, it is directed into auxiliary components such as
bending magnets and insertion devices (undulators or wigglers) in storage rings and free-electron
lasers. These supply the strong magnetic fields perpendicular to the beam which
are needed to convert the energy of the high-energy electrons into light or some other
form of EM radiation.

The major applications of synchrotron light are in condensed matter physics, materials
science, biology and medicine. A large fraction of experiments using synchrotron light
involve probing the structure of matter from the sub-nanometer level of electronic
structure to the micrometer and millimeter level important in medical imaging. An
example of a practical industrial application is the manufacturing of microstructures by
the LIGA process.

Properties of sources
Especially when artificially produced, synchrotron radiation is notable for its:

• High brightness and high intensity, many orders of magnitude more than with X-
rays produced in conventional X-ray tubes
• High level of polarization (linear or elliptical)
• High collimation, i.e. small angular divergence of the beam
• Low emittance, i.e. the product of source cross section and solid angle of emission
is small
• Wide tunability in energy/wavelength by monochromatization (sub-electronvolt
up to the megaelectronvolt range)
• High brilliance, exceeding other natural and artificial light sources by many
orders of magnitude: third-generation sources typically have a brilliance larger than
10¹⁸ photons/s/mm²/mrad²/0.1%BW, where 0.1%BW denotes a bandwidth of 10⁻³ω
centered on the frequency ω.
• Pulsed light emission (pulse durations at or below one nanosecond, or a billionth
of a second).

Synchrotron radiation from accelerators


Synchrotron radiation may occur in accelerators either as a nuisance, causing undesired
energy loss in particle physics contexts, or as a deliberately produced radiation source for
numerous laboratory applications. Electrons are accelerated to high speeds in several
stages to achieve a final energy that is typically in the gigaelectronvolt range. The
electrons are forced to travel in a closed path by strong magnetic fields. This is similar to
a radio antenna, but with the difference that the relativistic speed changes the observed
frequency due to the Doppler effect by a factor γ. Relativistic Lorentz contraction bumps
the frequency by another factor of γ, thus multiplying the gigahertz frequency of the
resonant cavity that accelerates the electrons into the X-ray range. Another dramatic
effect of relativity is that the radiation pattern is distorted from the isotropic dipole
pattern expected from non-relativistic theory into an extremely forward-pointing cone of
radiation. This makes synchrotron radiation sources the brightest known sources of X-
rays. The planar acceleration geometry makes the radiation linearly polarized when
observed in the orbital plane, and circularly polarized when observed at a small angle to
that plane.
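The γ² frequency multiplication described above can be illustrated numerically. The ring energy and cavity frequency below are illustrative assumptions, not the parameters of any particular facility:

```python
import math

# Illustrative (assumed) parameters, not those of any specific facility:
E_GeV = 3.0             # electron beam energy, GeV
m_e_GeV = 0.000511      # electron rest energy, ~0.511 MeV
f_rf_Hz = 500e6         # accelerating-cavity frequency, gigahertz range

gamma = E_GeV / m_e_GeV            # Lorentz factor of the electrons
f_observed = f_rf_Hz * gamma**2    # one factor of gamma from the Doppler
                                   # shift, one from Lorentz contraction

print(f"gamma = {gamma:.0f}, observed frequency ~ {f_observed:.2e} Hz")
```

For a few-GeV beam, γ is several thousand, so the γ² factor lifts a gigahertz-range cavity frequency by roughly seven orders of magnitude.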

The advantages of using synchrotron radiation for spectroscopy and diffraction have been
realized by an ever-growing scientific community, beginning in the 1960s and 1970s. In
the beginning, accelerators were built for particle physics, and synchrotron radiation was
used in "parasitic mode" when bending magnet radiation had to be extracted by drilling
extra holes in the beam pipes. The first storage ring commissioned as a synchrotron light
source was Tantalus, at the Synchrotron Radiation Center, first operational in 1968. As
accelerator synchrotron radiation became more intense and its applications more
promising, devices that enhanced the intensity of synchrotron radiation were built into
existing rings. Third-generation synchrotron radiation sources were conceived and
optimized from the outset to produce bright X-rays. Fourth-generation sources that will
include different concepts for producing ultrabright, pulsed time-structured X-rays for
extremely demanding and also probably yet-to-be-conceived experiments are under
consideration.

Bending electromagnets in the accelerators were first used to generate the radiation; but
to generate stronger radiation, other specialized devices, called insertion devices, are
sometimes employed. Current third-generation synchrotron radiation sources are
typically heavily based upon these insertion devices, when straight sections in the storage
ring are used for inserting periodic magnetic structures (composed of many magnets that
have a special repeating row of N and S poles) that force the electrons into a sinusoidal
path or helical path. Thus, instead of a single bend, many tens or hundreds of "wiggles" at
precisely calculated positions add up or multiply the total intensity that is seen at the end
of the straight section. These devices are called wigglers or undulators. The main
difference between an undulator and a wiggler is the intensity of their magnetic field and
the amplitude of the deviation from the straight line path of the electrons.

There are openings in the storage ring to let the radiation exit and follow a beam line into
the experimenters' vacuum chamber. A great number of such beamlines can emerge from
modern third-generation synchrotron radiation sources.

Storage rings

The electrons may be extracted from the accelerator proper and stored in an ultrahigh
vacuum auxiliary magnetic storage ring where they may circle a large number of times.
The magnets in the ring also need to repeatedly recompress the beam against Coulomb
(space charge) forces tending to disrupt the electron bunches. The change of direction is a
form of acceleration and thus the electrons emit radiation at GeV frequencies.

Applications of synchrotron radiation


• Synchrotron radiation of an electron beam circulating at high energy in a
magnetic field leads to radiative self-polarization of electrons in the beam
(Sokolov-Ternov effect). This effect is used for producing highly polarised
electron beams for use in various experiments.

• Synchrotron radiation sets the beam sizes (determined by the beam emittance) in
electron storage rings via the effects of radiation damping and quantum excitation.

Beamlines
Beamlines of Soleil

At a synchrotron facility, electrons are usually accelerated by a synchrotron, and then
injected into a storage ring, in which they circulate, producing synchrotron radiation, but
without gaining further energy. The radiation is projected at a tangent to the electron
without gaining further energy. The radiation is projected at a tangent to the electron
storage ring and captured by beamlines. These beamlines may originate at bending
magnets, which mark the corners of the storage ring; or insertion devices, which are
located in the straight sections of the storage ring. The spectrum and energy of X-rays
differ between the two types. The beamline includes X-ray optical devices which control
the bandwidth, photon flux, beam dimensions, focus, and collimation of the rays. The
optical devices include slits, attenuators, crystal monochromators, and mirrors. The
mirrors may be bent into curves or toroidal shapes to focus the beam. A high photon flux
in a small area is the most common requirement of a beamline. The design of the
beamline will vary with the application. At the end of the beamline is the experimental
end station, where samples are placed in the line of the radiation, and detectors are
positioned to measure the resulting diffraction, scattering or secondary radiation.

Experimental techniques and usage


Synchrotron light is an ideal tool for many types of research and also has industrial
applications. Some of the experimental techniques in synchrotron beamlines are:
Structural analysis
Structural analysis comprises the set of physical laws and mathematics required to study
and predict the behavior of structures. The subjects of structural analysis are engineering
artifacts whose integrity is judged largely based upon their ability to withstand loads;
they commonly include buildings, bridges, aircraft, ships and cars. Structural analysis
incorporates the fields of mechanics and dynamics as well as the many failure theories.
From a theoretical perspective the primary goal of structural analysis is the computation
of deformations, internal forces, and stresses. In practice, structural analysis can be
viewed more abstractly as a method to drive the engineering design process or prove the
soundness of a design without a dependence on directly testing it.

Structures and Loads


A structure refers to a system of connected parts used to support a load. Important
examples related to Civil Engineering include buildings, bridges, and towers; and in other
branches of engineering, ship and aircraft frames, tanks, pressure vessels, mechanical
systems, and electrical supporting structures are important. In order to design a structure,
one that will serve a specified function for public use, the engineer must account for its
safety, aesthetics, and serviceability, while taking into consideration economic and
environmental constraints.

Classification of Structures

It is important for a structural engineer to recognize the various types of elements
composing a structure and to be able to classify structures as to their form and function.
Some common structural elements are tie rods, bars, angles, channels, beams, and
columns. A combination of structural elements and the materials from which they are
composed is referred to as a structural system. Each system is constructed of one or more
basic types of structures such as Trusses, Cables and Arches, Frames, and Surface
Structures.

Loads

Once the dimensional requirements for a structure have been defined, it becomes
necessary to determine the loads the structure must support. In order to design a structure,
it is therefore necessary to first specify the loads that act on it. The design loading for a
structure is often specified in codes. There are two types of codes: general building codes
and design codes; the engineer must satisfy all of the code requirements to produce a
reliable structure. There are two types of loads that structural engineers must consider in
design. The first type, dead loads, consists of the weights of the various structural
members and the weights of any objects that are permanently attached to the structure:
for example, columns, beams, girders, the floor slab, roofing, walls, windows, plumbing,
electrical fixtures, and other miscellaneous attachments. The second type, live loads,
varies in magnitude and location. There are many different kinds of live loads, such as
building loads, highway bridge loads, railroad bridge loads, impact loads, wind loads,
snow loads, earthquake loads, and other natural loads.

Analytical methods
To perform an accurate analysis a structural engineer must determine such information as
structural loads, geometry, support conditions, and materials properties. The results of
such an analysis typically include support reactions, stresses and displacements. This
information is then compared to criteria that indicate the conditions of failure. Advanced
structural analysis may examine dynamic response, stability and non-linear behavior.

There are three approaches to the analysis: the mechanics of materials approach (also
known as strength of materials), the elasticity theory approach (which is actually a special
case of the more general field of continuum mechanics), and the finite element approach.
The first two make use of analytical formulations which apply mostly to simple linear
elastic models, lead to closed-form solutions, and can often be solved by hand. The finite
element approach is actually a numerical method for solving differential equations
generated by theories of mechanics such as elasticity theory and strength of materials.
However, the finite-element method depends heavily on the processing power of
computers and is more applicable to structures of arbitrary size and complexity.

Regardless of approach, the formulation is based on the same three fundamental
relations: equilibrium, constitutive, and compatibility. The solutions are approximate
when any of these relations are only approximately satisfied, or only an approximation of
reality.

Limitations
Each method has noteworthy limitations. The method of mechanics of materials is
limited to very simple structural elements under relatively simple loading conditions. The
structural elements and loading conditions allowed, however, are sufficient to solve many
useful engineering problems. The theory of elasticity allows the solution of structural
elements of general geometry under general loading conditions, in principle. Analytical
solution, however, is limited to relatively simple cases. The solution of elasticity
problems also requires the solution of a system of partial differential equations, which is
considerably more mathematically demanding than the solution of mechanics of materials
problems, which require at most the solution of an ordinary differential equation. The
finite element method is perhaps the most restrictive and most useful at the same time.
This method itself relies upon other structural theories (such as the other two discussed
here) for equations to solve. It does, however, make it generally possible to solve these
equations, even with highly complex geometry and loading conditions, with the
restriction that there is always some numerical error. Effective and reliable use of this
method requires a solid understanding of its limitations.

Strength of materials methods (classical methods)


The simplest of the three methods discussed here, the mechanics of materials method is
available for simple structural members subject to specific loadings such as axially
loaded bars, prismatic beams in a state of pure bending, and circular shafts subject to
torsion. The solutions can under certain conditions be superimposed using the
superposition principle to analyze a member undergoing combined loading. Solutions for
special cases exist for common structures such as thin-walled pressure vessels.

For the analysis of entire systems, this approach can be used in conjunction with statics,
giving rise to the method of sections and method of joints for truss analysis, moment
distribution method for small rigid frames, and portal frame and cantilever method for
large rigid frames. Except for moment distribution, which came into use in the 1930s,
these methods were developed in their current forms in the second half of the nineteenth
century. They are still used for small structures and for preliminary design of large
structures.

The solutions are based on linear isotropic infinitesimal elasticity and Euler-Bernoulli
beam theory. In other words, they contain the assumptions (among others) that the
materials in question are elastic, that stress is related linearly to strain, that the material
(but not the structure) behaves identically regardless of direction of the applied load, that
all deformations are small, and that beams are long relative to their depth. As with any
simplifying assumption in engineering, the more the model strays from reality, the less
useful (and more dangerous) the result.
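These assumptions can be made concrete with a minimal numerical sketch: for a linear-elastic, axially loaded prismatic bar, stress is proportional to strain (σ = Eε), so the elongation δ = FL/(AE) is linear in the load and solutions for separate loads superimpose. The material and section values below are illustrative assumptions:

```python
# Linear-elastic, axially loaded prismatic bar: sigma = E*epsilon, so the
# elongation delta = F*L/(A*E) is linear in the load F and solutions for
# separate loads superimpose. Material and section values are assumptions.
E = 200e9        # Young's modulus (steel), Pa
A = 1e-4         # cross-sectional area, m^2 (1 cm^2)
L = 2.0          # bar length, m

def elongation(F):
    """Axial elongation of the bar under load F (newtons), in metres."""
    return F * L / (A * E)

F1, F2 = 10e3, 5e3                        # two axial loads, N
combined = elongation(F1 + F2)            # response to the combined loading
summed = elongation(F1) + elongation(F2)  # superposition of responses
print(combined, summed)                   # equal for a linear model
```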

Elasticity methods
Elasticity methods are available generally for an elastic solid of any shape. Individual
members such as beams, columns, shafts, plates and shells may be modeled. The
solutions are derived from the equations of linear elasticity. The equations of elasticity
are a system of 15 partial differential equations. Due to the nature of the mathematics
involved, analytical solutions may only be produced for relatively simple geometries. For
complex geometries, a numerical solution method such as the finite element method is
necessary.

Many of the developments in the mechanics of materials and elasticity approaches have
been expounded or initiated by Stephen Timoshenko.

Methods Using Numerical Approximation


It is common practice to use approximate solutions of differential equations as the basis
for structural analysis. This is usually done using numerical approximation techniques.
The most commonly used numerical approximation in structural analysis is the Finite
Element Method.

The finite element method approximates a structure as an assembly of elements or
components with various forms of connection between them. Thus, a continuous system
such as a plate or shell is modeled as a discrete system with a finite number of elements
interconnected at finite number of nodes. The behaviour of individual elements is
characterised by the element's stiffness or flexibility relation, which altogether leads to
the system's stiffness or flexibility relation. To establish the element's stiffness or
flexibility relation, we can use the mechanics of materials approach for simple one-
dimensional bar elements, and the elasticity approach for more complex two- and three-
dimensional elements. The analytical and computational development are best effected
throughout by means of matrix algebra.

Early applications of matrix methods were for articulated frameworks with truss, beam
and column elements; later and more advanced matrix methods, referred to as "finite
element analysis," model an entire structure with one-, two-, and three-dimensional
elements and can be used for articulated systems together with continuous systems such
as a pressure vessel, plates, shells, and three-dimensional solids. Commercial computer
software for structural analysis typically uses matrix finite-element analysis, which can
be further classified into two main approaches: the displacement or stiffness method and
the force or flexibility method. The stiffness method is the most popular by far thanks to
its ease of implementation as well as of formulation for advanced applications. The finite-
element technology is now sophisticated enough to handle just about any system as long
as sufficient computing power is available. Its applicability includes, but is not limited to,
linear and non-linear analysis, solid and fluid interactions, materials that are isotropic,
orthotropic, or anisotropic, and external effects that are static, dynamic, and
environmental factors. This, however, does not imply that the computed solution will
automatically be reliable because much depends on the model and the reliability of the
data input.
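The displacement (stiffness) method described above can be sketched for the simplest possible case: a one-dimensional bar split into two axial elements, whose element stiffness matrices are assembled into a global matrix and solved for nodal displacements. All numerical values are illustrative assumptions:

```python
import numpy as np

# Two-element discretisation of an axial bar fixed at node 0 and loaded
# at node 2; material, section, and load values are assumptions.
E, A, Le = 200e9, 1e-4, 1.0      # modulus (Pa), area (m^2), element length (m)

k = (E * A / Le) * np.array([[1.0, -1.0],
                             [-1.0, 1.0]])   # element stiffness matrix

K = np.zeros((3, 3))             # global stiffness matrix for 3 nodes
for e in range(2):               # element e connects nodes e and e+1
    K[e:e + 2, e:e + 2] += k

F = np.array([0.0, 0.0, 10e3])   # 10 kN applied at the free end (node 2)

u = np.zeros(3)                  # support condition: u[0] = 0
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])    # solve the reduced system
print(u)                         # tip displacement = F * 2*Le / (E*A)
```

The same assembly pattern scales to two- and three-dimensional elements; only the element matrices change.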

Powder diffraction
Electron powder pattern (red) of an Al film with an fcc spiral overlay (green) and a line
of intersections (blue) that determines lattice parameter.

Powder diffraction is a scientific technique using X-ray, neutron, or electron diffraction
on powder or microcrystalline samples for structural characterization of materials.

Explanation
Ideally, every possible crystalline orientation is represented equally in a powdered
sample. The resulting orientational averaging causes the three dimensional reciprocal
space that is studied in single crystal diffraction to be projected onto a single dimension.
The three dimensional space can be described with (reciprocal) axes x*, y* and z* or
alternatively in spherical coordinates q, φ*, χ*. In powder diffraction intensity is
homogeneous over φ* and χ* and only q remains as an important measurable quantity. In
practice, it is sometimes necessary to rotate the sample orientation to eliminate the effects
of texturing and achieve true randomness.
Two-dimensional powder diffraction setup with flat plate detector

When the scattered radiation is collected on a flat plate detector the rotational averaging
leads to smooth diffraction rings around the beam axis rather than the discrete Laue spots
as observed for single crystal diffraction. The angle between the beam axis and the ring is
called the scattering angle and in X-ray crystallography always denoted as 2θ. (In
scattering of visible light the convention is usually to call it θ). In accordance with
Bragg's law, each ring corresponds to a particular reciprocal lattice vector G in the
sample crystal. This leads to the definition of the scattering vector as:

q = |G| = (4π/λ) sin θ

Powder diffraction data are usually presented as a diffractogram in which the diffracted
intensity I is shown as function either of the scattering angle 2θ or as a function of the
scattering vector q. The latter variable has the advantage that the diffractogram no longer
depends on the value of the wavelength λ. The advent of synchrotron sources has
widened the choice of wavelength considerably. To facilitate comparability of data
obtained with different wavelengths the use of q is therefore recommended and gaining
acceptability.
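The advantage of plotting against q can be sketched as follows: with q = 4π sin θ/λ, the same lattice spacing measured at two different wavelengths lands at the same q even though the ring angles 2θ differ. The wavelengths below are the familiar Cu and Mo Kα lines; the lattice spacing is an illustrative assumption:

```python
import math

def two_theta_to_q(two_theta_deg, wavelength):
    """Scattering vector magnitude q = 4*pi*sin(theta)/lambda."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength

# The same d = 2.0 angstrom spacing measured at Cu and Mo K-alpha
# wavelengths gives different ring angles but the same q = 2*pi/d:
d = 2.0
for lam in (1.5406, 0.7093):
    two_theta = 2.0 * math.degrees(math.asin(lam / (2.0 * d)))  # Bragg's law
    print(lam, round(two_theta, 2), round(two_theta_to_q(two_theta, lam), 4))
```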

An instrument dedicated to performing powder measurements is called a powder
diffractometer.

Uses
Relative to other methods of analysis, powder diffraction allows for rapid, non-
destructive analysis of multi-component mixtures without the need for extensive sample
preparation. This gives laboratories around the world the ability to quickly analyse
unknown materials and perform materials characterization in such fields as metallurgy,
mineralogy, forensic science, archeology, condensed matter physics, and the biological
and pharmaceutical sciences. Identification is performed by comparison of the diffraction
pattern to a known standard or to a database such as the International Centre for
Diffraction Data's Powder Diffraction File (PDF) or the Cambridge Structural Database
(CSD). Advances in hardware and software, particularly improved optics and fast
detectors, have dramatically improved the analytical capability of the technique,
especially relative to the speed of the analysis. The fundamental physics upon which the
technique is based provides high precision and accuracy in the measurement of
interplanar spacings, sometimes to fractions of an Ångström, resulting in authoritative
identification frequently used in patents, criminal cases and other areas of law
enforcement. The ability to analyze multiphase materials also allows analysis of how
materials interact in a particular matrix such as a pharmaceutical tablet, a circuit board, a
mechanical weld, a geologic core sampling, cement and concrete, or a pigment found in
an historic painting. The method has been historically used for the identification and
classification of minerals, but it can be used for any materials, even amorphous ones, so
long as a suitable reference pattern is known or can be constructed.

Phase identification

The most widespread use of powder diffraction is in the identification and
characterization of crystalline solids, each of which produces a distinctive diffraction
pattern. Both the positions (corresponding to lattice spacings) and the relative intensity of
the lines are indicative of a particular phase and material, providing a "fingerprint" for
comparison. A multi-phase mixture, e.g. a soil sample, will show more than one pattern
superposed, allowing for determination of relative concentration.

J.D. Hanawalt, an analytical chemist who worked for Dow Chemical in the 1930s, was
the first to realize the analytical potential of creating a database. Today it is represented
by the Powder Diffraction File (PDF) of the International Centre for Diffraction Data
(formerly Joint Committee for Powder Diffraction Studies). This has been made
searchable by computer through the work of global software developers and equipment
manufacturers. There are now over 550,000 reference materials in the 2006 Powder
Diffraction File Databases, and these databases are interfaced to a wide variety of
diffraction analysis software and distributed globally. The Powder Diffraction File
contains many subfiles, such as minerals, metals and alloys, pharmaceuticals, forensics,
excipients, superconductors, semiconductors etc., with large collections of organic,
organometallic and inorganic reference materials.

Crystallinity

In contrast to a crystalline pattern consisting of a series of sharp peaks, amorphous
materials (liquids, glasses etc.) produce a broad background signal. Many polymers show
semicrystalline behavior, i.e. part of the material forms an ordered crystallite by folding
of the molecule. One and the same molecule may well be folded into two different
crystallites and thus form a tie between the two. The tie part is prevented from
crystallizing. The result is that the crystallinity will never reach 100%. Powder XRD can
be used to determine the crystallinity by comparing the integrated intensity of the
background pattern to that of the sharp peaks. Values obtained from powder XRD are
typically comparable but not quite identical to those obtained from other methods such as
DSC.
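The crystallinity estimate described above can be sketched on a synthetic pattern (a broad amorphous hump plus one sharp Bragg peak). In a real measurement the amorphous background must be fitted and subtracted; here it is known exactly because the data are constructed:

```python
import numpy as np

# Synthetic powder pattern: a broad amorphous hump plus one sharp Bragg peak.
# All peak shapes and amplitudes below are illustrative assumptions.
q = np.linspace(0.5, 5.0, 1000)
amorphous = 50.0 * np.exp(-((q - 2.0) ** 2) / 1.0)       # broad background
crystalline = 200.0 * np.exp(-((q - 2.2) ** 2) / 0.001)  # sharp peak
pattern = amorphous + crystalline

dq = q[1] - q[0]                       # uniform grid spacing
I_total = pattern.sum() * dq           # integrated total intensity
I_cryst = crystalline.sum() * dq       # integrated sharp-peak intensity
crystallinity = I_cryst / I_total
print(f"crystallinity ~ {crystallinity:.2f}")
```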

Lattice parameters

The position of a diffraction peak is independent of the atomic positions within the cell
and entirely determined by the size and shape of the unit cell of the crystalline phase.
Each peak represents a certain lattice plane and can therefore be characterized by a Miller
index. If the symmetry is high, e.g. cubic or hexagonal it is usually not too hard to
identify the index of each peak, even for an unknown phase. This is particularly
important in solid-state chemistry, where one is interested in finding and identifying new
materials. Once a pattern has been indexed, this characterizes the reaction product and
identifies it as a new solid phase. Indexing programs exist to deal with the harder cases,
but if the unit cell is very large and the symmetry low (triclinic) success is not always
guaranteed.
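For a cubic cell, indexing is straightforward because d_hkl = a/√(h² + k² + l²). The sketch below predicts peak positions for a primitive cubic cell with an illustrative lattice parameter and a Cu Kα wavelength (systematic absences for centred lattices are ignored):

```python
import math

# Predicted peak positions for a primitive cubic cell (systematic absences
# for centred lattices are ignored). Lattice parameter is an illustrative
# assumption; the wavelength is the Cu K-alpha line.
a = 4.05          # lattice parameter, angstroms
lam = 1.5406      # Cu K-alpha wavelength, angstroms

for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0)]:
    d = a / math.sqrt(sum(i * i for i in hkl))     # d-spacing of the plane
    two_theta = 2.0 * math.degrees(math.asin(lam / (2.0 * d)))  # Bragg's law
    print(hkl, round(d, 3), round(two_theta, 2))
```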

Expansion tensors, bulk modulus

Thermal expansion of a sulfur powder


Cell parameters are somewhat temperature and pressure dependent. Powder diffraction
can be combined with in situ temperature and pressure control. As these thermodynamic
variables are changed, the observed diffraction peaks will migrate continuously to
indicate higher or lower lattice spacings as the unit cell distorts. This allows for
measurement of such quantities as the thermal expansion tensor and the isothermal bulk
modulus, as well as determination of the full equation of state of the material.
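Extracting a linear thermal expansion coefficient from lattice parameters refined at several temperatures can be sketched as a straight-line fit; the data below are synthetic and purely illustrative:

```python
import numpy as np

# Synthetic lattice parameters a(T) refined at several temperatures
# (illustrative numbers only).
T = np.array([100.0, 200.0, 300.0, 400.0])    # temperature, K
a = np.array([4.030, 4.034, 4.038, 4.042])    # lattice parameter, angstroms

slope, intercept = np.polyfit(T, a, 1)        # da/dT from a linear fit
alpha = slope / a[0]                          # linear expansion coefficient
print(f"alpha ~ {alpha:.2e} per K")
```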

Phase transitions

At some critical set of conditions, for example 0 °C for water at 1 atm, a new
arrangement of atoms or molecules may become stable, leading to a phase transition. At
this point new diffraction peaks will appear or old ones disappear according to the
symmetry of the new phase. If the material melts to an isotropic liquid, all sharp lines will
disappear and be replaced by a broad amorphous pattern. If the transition produces
another crystalline phase, one set of lines will suddenly be replaced by another set. In
some cases however lines will split or coalesce, e.g. if the material undergoes a
continuous, second order phase transition. In such cases the symmetry may change
because the existing structure is distorted rather than replaced by a completely different
one. E.g. the diffraction peaks for the lattice planes (100) and (001) can be found at two
different values of q for a tetragonal phase, but if the symmetry becomes cubic the two
peaks will come to coincide.
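The (100)/(001) example can be made concrete: for a tetragonal cell q = 2π√((h² + k²)/a² + l²/c²), so the two peaks are distinct while a ≠ c and coincide when the cell becomes cubic. The cell parameters below are illustrative assumptions:

```python
import math

# |G| for a tetragonal cell: q = 2*pi*sqrt((h^2 + k^2)/a^2 + l^2/c^2).
def q_hkl(h, k, l, a, c):
    return 2.0 * math.pi * math.sqrt((h * h + k * k) / a**2 + l * l / c**2)

# Tetragonal phase (a != c): (100) and (001) appear at different q values.
q100_t = q_hkl(1, 0, 0, a=4.0, c=4.2)
q001_t = q_hkl(0, 0, 1, a=4.0, c=4.2)

# Cubic phase (a == c): the two peaks coincide.
q100_c = q_hkl(1, 0, 0, a=4.0, c=4.0)
q001_c = q_hkl(0, 0, 1, a=4.0, c=4.0)
print(q100_t, q001_t, q100_c, q001_c)
```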

Crystal structure refinement and determination

Crystal structure determination from powder diffraction data is extremely challenging
due to the overlap of reflections in a powder experiment. The crystal structures of known
materials can be refined, i.e. as a function of temperature or pressure, using the Rietveld
method. The Rietveld method is a so-called full pattern analysis technique. A crystal
structure, together with instrumental and microstructural information is used to generate a
theoretical diffraction pattern that can be compared to the observed data. A least squares
procedure is then used to minimise the difference between the calculated pattern and each
point of the observed pattern by adjusting model parameters. Techniques to determine
unknown structures from powder data do exist, but are somewhat specialized. A number
of programs that can be used in structure determination are TOPAS, GSAS, Fox,
EXPO2004, and a few others.
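The least-squares idea behind the Rietveld method can be sketched in toy form: a calculated pattern is compared point by point with the observed one, and a model parameter is adjusted to minimise the summed squared difference. A real refinement adjusts cell, atomic, profile, and instrumental parameters simultaneously; here a single peak position is fitted to synthetic data:

```python
import numpy as np

# Toy "refinement": the only model parameter is a peak position q0.
q = np.linspace(1.0, 3.0, 400)

def calc_pattern(q, q0):
    return 100.0 * np.exp(-((q - q0) ** 2) / 0.002)

observed = calc_pattern(q, 2.07)          # synthetic "observed" data

candidates = np.linspace(2.0, 2.2, 201)   # trial peak positions
residuals = [np.sum((observed - calc_pattern(q, c)) ** 2) for c in candidates]
best = candidates[int(np.argmin(residuals))]
print(best)
```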

Size and strain broadening

There are many factors that determine the width B of a diffraction peak. These include:

1. instrumental factors
2. the presence of defects to the perfect lattice
3. differences in strain in different grains
4. the size of the crystallites
It is often possible to separate the effects of size and strain: size broadening is
independent of q, whereas strain broadening increases with increasing q-values. In most
cases there will be both size and strain broadening. It is possible to separate these by
combining the two equations in what is known as the Hall-Williamson method:

B cos θ = (kλ/D) + η sin θ

Thus, when we plot B cos θ vs. sin θ we get a straight line with slope η and intercept kλ/D.

The expression is a combination of the Scherrer Equation for size broadening and the
Stokes and Wilson expression for strain broadening. The value of η is the strain in the
crystallites, the value of D represents the size of the crystallites. The constant k is
typically close to unity and ranges from 0.8-1.39.
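A numerical sketch of the Hall-Williamson analysis: synthetic peak widths are generated from an assumed crystallite size D and strain η, and both are then recovered from the slope and intercept of a straight-line fit of B cos θ against sin θ. All input values are illustrative assumptions:

```python
import numpy as np

# Generate synthetic integral peak widths B (in radians) from an assumed
# crystallite size D and strain eta, then recover both from the fit.
lam, k = 1.5406, 0.9              # wavelength (angstrom), Scherrer constant
D_true, eta_true = 500.0, 0.002   # crystallite size (angstrom) and strain

theta = np.radians(np.array([10.0, 15.0, 20.0, 25.0, 30.0]))
B = (k * lam / D_true + eta_true * np.sin(theta)) / np.cos(theta)

# Straight-line fit of B*cos(theta) against sin(theta):
slope, intercept = np.polyfit(np.sin(theta), B * np.cos(theta), 1)
eta = slope                       # strain from the slope
D = k * lam / intercept           # size from the intercept
print(f"eta ~ {eta:.4f}, D ~ {D:.0f} angstrom")
```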

Comparison of X-ray and Neutron Scattering

X-ray photons scatter by interaction with the electron cloud of the material, whereas
neutrons are scattered by the nuclei. This means that, in the presence of heavy atoms with many
electrons, it may be difficult to detect light atoms by X-ray diffraction. In contrast, the
neutron scattering lengths of most atoms are approximately equal in magnitude. Neutron
diffraction techniques may therefore be used to detect light elements such as oxygen or
hydrogen in combination with heavy atoms. The neutron diffraction technique therefore
has obvious applications to problems such as determining oxygen displacements in
materials like high temperature superconductors and ferroelectrics, or to hydrogen
bonding in biological systems.

A further complication in the case of neutron scattering from hydrogenous materials is
the strong incoherent scattering of hydrogen (80.27(6) barn). This leads to a very high
background in neutron diffraction experiments, and may make structural investigations
impossible. A common solution is deuteration, i.e. replacing the 1-H atoms in the sample
with deuterium (2-H). The incoherent scattering length of deuterium is much smaller
(2.05(3) barn) making structural investigations significantly easier. However, in some
systems, replacing hydrogen with deuterium may alter the structural and dynamic
properties of interest.

As neutrons also have a magnetic moment, they are additionally scattered by any
magnetic moments in a sample. In the case of long range magnetic order, this leads to the
appearance of new Bragg reflections. In most simple cases, powder diffraction may be
used to determine the size of the moments and their spatial orientation.
Aperiodically-arranged clusters

Predicting the scattered intensity in powder diffraction patterns from gases, liquids, and
randomly-distributed nano-clusters in the solid state is (to first order) done rather
elegantly with the Debye scattering equation:

I(q) = Σi Σj fi(q) fj(q) · sin(q·rij) / (q·rij)

where the magnitude of the scattering vector q is in reciprocal lattice distance units, N is
the number of atoms, fi(q) is the atomic scattering factor for atom i and scattering vector
q, while rij is the distance between atom i and atom j. One can also use this to predict the
effect of nano-crystallite shape on detected diffraction peaks, even if in some directions
the cluster is only one atom thick.
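The double sum of the Debye equation, I(q) = Σi Σj fi(q) fj(q) sin(q·rij)/(q·rij), can be coded directly for a small cluster. In this sketch the atomic scattering factor is taken as a constant, a simplifying assumption corresponding to a single-element cluster (in reality f depends on q):

```python
import math

def debye_intensity(q, positions, f=1.0):
    """Scattered intensity of an aperiodic cluster via the Debye equation.

    q         : magnitude of the scattering vector
    positions : list of (x, y, z) atomic coordinates
    f         : atomic scattering factor, here a constant (simplification)
    """
    total = 0.0
    for ri in positions:
        for rj in positions:
            x = q * math.dist(ri, rj)
            # sin(x)/x tends to 1 for the i == j (zero-distance) terms
            total += f * f * (math.sin(x) / x if x > 0 else 1.0)
    return total

# Four atoms: a corner plus the three unit-axis positions
atoms = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(debye_intensity(1e-9, atoms))  # as q -> 0 the sum tends to N^2 = 16
```

The O(N²) double loop is fine for nano-clusters; for large N one would bin the interatomic distances into a histogram first.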

Devices
Cameras

The simplest cameras for X-ray powder diffraction consist of a small capillary and either
a flat plate detector (originally a piece of X-ray film, now more and more a flat-plate
detector or a CCD-camera) or a cylindrical one (originally a piece of film in a cookie-jar,
now more and more a bent position sensitive detector). The two types of cameras are
known as the Laue and the Debye-Scherrer camera.

In order to ensure complete powder averaging, the capillary is usually spun around its
axis.

For neutron diffraction vanadium cylinders are used as sample holders. Vanadium has a
negligible absorption and coherent scattering cross section for neutrons and is hence
nearly invisible in a powder diffraction experiment. Vanadium does however have a
considerable incoherent scattering cross section which may cause problems for more
sensitive techniques such as neutron inelastic scattering.

A later development in X-ray cameras is the Guinier camera. It is built around a focusing
bent crystal monochromator. The sample is usually placed in the focusing beam, e.g. as a
dusting on a piece of sticky tape. A cylindrical piece of film (or electronic multichannel
detector) is put on the focusing circle, but the incident beam is blocked from reaching the
detector, to prevent damage from its high intensity.

Diffractometers

Diffractometers can be operated both in transmission and in reflection configurations.

The reflection configuration is more common. The powder sample is filled into a small disc-like
container and its surface carefully flattened. The disc is put on one axis of the
diffractometer and tilted by an angle θ while a detector (scintillation counter) rotates
around it on an arm at twice this angle. This configuration is known as the
Bragg-Brentano geometry.

Another configuration is the theta-theta configuration, in which the sample is stationary
while the X-ray tube and the detector are rotated around it. The angle formed between the
tube and the detector is 2θ. This configuration is most convenient for loose powders.

The availability of position sensitive detectors and CCD-cameras is making this type of
equipment more and more obsolete.

Neutron diffraction

Sources that produce a neutron beam of suitable intensity and speed for diffraction are
only available at a small number of research reactors and spallation sources in the world.
Angle dispersive (fixed wavelength) instruments typically have a battery of individual
detectors arranged in a cylindrical fashion around the sample holder, and can therefore
collect scattered intensity simultaneously on a large 2θ range. Time of flight instruments
normally have a small range of banks at different scattering angles which collect data at
varying resolutions.

X-ray tubes

Laboratory X-ray diffraction equipment relies on the use of an X-ray tube, which is used
to produce the X-rays.

The most commonly used laboratory X-ray tube has a copper anode, but cobalt and
molybdenum anodes are also popular. The wavelength in nm varies with each anode
material. The table below shows these wavelengths, determined by Bearden (1967) and
quoted in the International Tables for X-ray Crystallography:


Element   Kα (weight average)   Kα2 (strong)   Kα1 (very strong)   Kβ (weak)
Cr        0.229100              0.229361       0.228970            0.208487
Fe        0.193736              0.193998       0.193604            0.175661
Co        0.179026              0.179285       0.178897            0.162079
Cu        0.154184              0.154439       0.154056            0.139222
Mo        0.071073              0.071359       0.070930            0.063229

According to the last re-examination of Holzer et al. (1997), these values are
respectively:
Element Kα2 Kα1 Kβ
Cr 0.2293663 0.2289760 0.2084920
Co 0.1792900 0.1789010 0.1620830
Cu 0.1544426 0.1540598 0.1392250
Mo 0.0713609 0.0709319 0.0632305
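These wavelengths enter diffraction calculations through Bragg's law, nλ = 2d·sinθ. A minimal sketch using the Holzer Kα1 values above and, as an illustration, the Si (111) lattice spacing:

```python
import math

# K-alpha1 wavelengths in nm, from the Holzer et al. (1997) table
KALPHA1 = {"Cr": 0.2289760, "Co": 0.1789010, "Cu": 0.1540598, "Mo": 0.0709319}

def bragg_two_theta(d_nm, anode="Cu", n=1):
    """Diffraction angle 2-theta (degrees) for a lattice spacing d,
    from Bragg's law n*lambda = 2*d*sin(theta)."""
    s = n * KALPHA1[anode] / (2 * d_nm)
    if s > 1:
        raise ValueError("reflection not reachable at this wavelength")
    return 2 * math.degrees(math.asin(s))

# Illustrative spacing: the Si (111) planes, d = 0.313560 nm
print(round(bragg_two_theta(0.313560, "Cu"), 2))  # -> 28.44 (deg)
```

The same reflection moves to lower angle with the shorter Mo Kα1 wavelength, which is why Mo tubes are preferred when many reflections must fit into a limited angular range.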

Other sources
In-house applications of X-ray diffraction have always been limited to the relatively few
wavelengths shown in the table above. The available choice was much needed, because
the combination of certain wavelengths and certain elements present in a sample can lead
to strong fluorescence, which increases the background in the diffraction pattern. A
notorious example is the presence of iron in a sample when using copper radiation. In
general, elements just below the anode element in the periodic table need to be avoided.

Another limitation is that the intensity of traditional generators is relatively low, requiring
lengthy exposure times and precluding any time dependent measurement. The advent of
synchrotron sources has drastically changed this picture and caused powder diffraction
methods to enter a whole new phase of development. Not only is there a much wider
choice of wavelengths available, the high brilliance of the synchrotron radiation makes it
possible to observe changes in the pattern during chemical reactions, temperature ramps,
changes in pressure and the like.

The tunability of the wavelength also makes it possible to observe anomalous scattering
effects when the wavelength is chosen close to the absorption edge of one of the elements
of the sample.

Neutron diffraction has never been an in-house technique, because it requires an intense
neutron beam, available only at a nuclear reactor or spallation source. Typically the
available neutron flux, and the weak interaction between neutrons and matter, require
relatively large samples.

Advantages and disadvantages


Although it is possible to solve crystal structures from powder X-ray data alone, its
single-crystal analogue is a far more powerful technique for structure determination. This
is directly related to the fact that much information is lost by the collapse of the 3D space
onto a 1D axis. Nevertheless, powder X-ray diffraction is a powerful and useful technique
in its own right. It is mostly used to characterize and identify phases rather than solving
structures.

The great advantages of the technique are:

• simplicity of sample preparation
• rapidity of measurement
• the ability to analyse mixed phases, e.g. soil samples

By contrast, the growth and mounting of large single crystals are notoriously difficult. In
fact, there are many materials for which, despite many attempts, it has not proven possible
to obtain single crystals. Many materials are readily available with sufficient
microcrystallinity for powder diffraction, or samples may be easily ground from larger
crystals. In the field of solid-state chemistry, which often aims at synthesizing new
materials, single crystals thereof are typically not immediately available. Powder
diffraction is therefore one of the most powerful methods to identify and characterize
new materials in this field.

Particularly for neutron diffraction, which requires larger samples than X-ray diffraction
due to a relatively weak scattering cross section, the ability to use large samples can be
critical, although new more brilliant neutron sources are being built that may change this
picture.

Since all possible crystal orientations are measured simultaneously, collection times can
be quite short even for small and weakly scattering samples. This is not merely
convenient, but can be essential for samples which are unstable either inherently or under
X-ray or neutron bombardment, or for time-resolved studies. For the latter it is desirable
to have a strong radiation source. The advent of synchrotron radiation and modern
neutron sources has therefore done much to revitalize the powder diffraction field
because it is now possible to study temperature dependent changes, reaction kinetics and
so forth by means of time dependent powder diffraction.

Small-angle X-ray scattering


Small-angle X-ray scattering (SAXS) is a small-angle scattering (SAS) technique where
the elastic scattering of X-rays (wavelength 0.1 ... 0.2 nm) by a sample which has
inhomogeneities in the nm-range, is recorded at very low angles (typically 0.1 - 10°).
This angular range contains information about the shape and size of macromolecules,
characteristic distances of partially ordered materials, pore sizes, and other data. SAXS is
capable of delivering structural information of macromolecules between 5 and 25 nm, of
repeat distances in partially ordered systems of up to 150 nm. USAXS (ultra-small angle
X-ray scattering) can resolve even larger dimensions.

SAXS and USAXS belong to a family of X-ray scattering techniques that are used in the
characterization of materials. In the case of biologic macromolecules such as proteins, the
advantage of SAXS over crystallography is that a crystalline sample is not needed. NMR
methods encounter problems with macromolecules of higher molecular mass
(> 30,000-40,000). However, owing to the random orientation of dissolved or partially ordered
molecules, the spatial averaging leads to a loss of information in SAXS compared to
crystallography.

Applications
SAXS is used for the determination of the microscale or nanoscale structure of particle
systems in terms of such parameters as averaged particle sizes, shapes, distribution, and
surface-to-volume ratio. The materials can be solid or liquid and they can contain solid,
liquid or gaseous domains (so-called particles) of the same or another material in any
combination. Not only particles, but also the structure of ordered systems like lamellae,
and fractal-like materials can be studied. The method is accurate, non-destructive and
usually requires only a minimum of sample preparation. Applications are very broad and
include colloids of all types, metals, cement, oil, polymers, plastics, proteins, foods and
pharmaceuticals and can be found in research as well as in quality control. The X-ray
source can be a laboratory source or synchrotron light which provides a higher X-ray
flux.

SAXS instruments
In a SAXS instrument a monochromatic beam of X-rays is brought to a sample from
which some of the X-rays scatter, while most simply go through the sample without
interacting with it. The scattered X-rays form a scattering pattern which is then detected
at a detector which is typically a 2-dimensional flat X-ray detector situated behind the
sample perpendicular to the direction of the primary beam that initially hit the sample.
The scattering pattern contains the information on the structure of the sample.

The major problem that must be overcome in SAXS instrumentation is the separation of
the weak scattered intensity from the strong main beam. The smaller the desired angle,
the more difficult this becomes. The problem is comparable to one encountered when
trying to observe a weakly radiant object close to the sun, like the sun's corona. Only if
the moon blocks out the main light source does the corona become visible. Likewise, in
SAXS the non-scattered beam that merely travels through the sample must be blocked,
without blocking the closely adjacent scattered radiation. Most available X-ray sources
produce divergent beams and this compounds the problem. In principle the problem
could be overcome by focusing the beam, but this is not easy when dealing with X-rays
and was previously not done except on synchrotrons where large bent mirrors can be
used. This is why most laboratory small angle devices rely on collimation instead.

Laboratory SAXS instruments can be divided into two main groups: point-collimation
and line-collimation instruments:

1. Point-collimation instruments have pinholes that shape the X-ray beam to a
small circular or elliptical spot that illuminates the sample. Thus the scattering is
centro-symmetrically distributed around the primary X-ray beam and the
scattering pattern in the detection plane consists of circles around the primary
beam. Owing to the small illuminated sample volume and the wastefulness of the
collimation process — only those photons are allowed to pass that happen to fly
in the right direction — the scattered intensity is small and therefore the
measurement time is in the order of hours or days in case of very weak scatterers.
If focusing optics like bent mirrors or bent monochromator crystals or collimating
and monochromating optics like multilayers are used, measurement time can be
greatly reduced. Point-collimation allows the orientation of non-isotropic systems
(fibres, sheared liquids) to be determined.
2. Line-collimation instruments confine the beam only in one dimension so that
the beam profile is a long but narrow line. The illuminated sample volume is
much larger compared to point-collimation and the scattered intensity at the same
flux density is proportionally larger. Thus measuring times with line-collimation
SAXS instruments are much shorter compared to point-collimation and are in the
range of minutes. A disadvantage is that the recorded pattern is essentially an
integrated superposition (a self-convolution) of many adjacent pinhole
patterns. The resulting smearing can be easily removed using model-free
algorithms or deconvolution methods based on Fourier transformation, but only if
the system is isotropic. Line collimation is of great benefit for any isotropic
nanostructured materials, e.g. proteins, surfactants, particle dispersion and
emulsions.

Porod's law
SAXS patterns are typically represented as scattered intensity as a function of the
magnitude of the scattering vector q = 4πsin(θ) / λ. Here 2θ is the angle between the
incident X-ray beam and the detector measuring the scattered intensity, and λ is the
wavelength of the X-rays. One interpretation of the scattering vector is that it is the
resolution or yardstick with which the sample is observed. In the case of a two-phase
sample, e.g. small particles in liquid suspension, the only contrast leading to scattering in
the typical range of resolution of the SAXS is simply Δρ, the difference in average
electron density between the particle and the surrounding liquid, because variations in ρ
due to the atomic structure only become visible at higher angles in the WAXS regime.
This means that the total integrated intensity of the SAXS pattern (in 3D) is an invariant
quantity proportional to the square Δρ². In 1-dimensional projection, as usually recorded
for an isotropic pattern, this invariant quantity becomes ∫ I(q) q² dq, where the integral
runs from q = 0 to wherever the SAXS pattern is assumed to end and the WAXS pattern
starts. It is also assumed that the density does not vary in the liquid or inside the particles,
i.e. there is binary contrast.

In the transitional range at the high resolution end of the SAXS pattern the only
contribution to the scattering come from the interface between the two phases and the
intensity should drop with the fourth power of q if this interface is smooth. This is a
consequence of the fact that in this regime any other structural features, e.g. interference
between one surface of a particle and the one on the opposite side, are so random that
they do not contribute. This is known as Porod's law:

I(q) ∝ S q⁻⁴

This allows the surface area S of the particles to be determined with SAXS. However,
since the advent of fractal mathematics it has become clear that this law requires
adaptation because the value of the surface S may itself be a function of the yardstick by
which it is measured. In the case of a fractally rough surface area with a dimensionality d
between 2 and 3, Porod's law becomes:

I(q) ∝ S q^−(6−d)

Thus if plotted logarithmically, the slope of ln(I) versus ln(q) would vary between −4 and
−3 for such a surface fractal. Slopes less negative than −3 are also possible in fractal
theory; they are described using a volume fractal model, in which the whole system can be
described as mathematically self-similar, although not usually exactly so in nature.
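In practice these relations are applied by fitting the slope of ln(I) versus ln(q) in the high-q tail. The sketch below, on synthetic data, recovers the −4 exponent of a smooth interface; for a surface fractal the dimensionality would follow as d = 6 + slope:

```python
import math

def porod_slope(qs, Is):
    """Least-squares slope of ln(I) versus ln(q) in the Porod regime.
    A smooth interface gives -4; a surface fractal of dimension d gives -(6-d)."""
    xs = [math.log(q) for q in qs]
    ys = [math.log(i) for i in Is]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic Porod tail from smooth particles: I(q) = C / q^4
qs = [0.5 + 0.05 * k for k in range(40)]
Is = [3.0 / q ** 4 for q in qs]
print(round(porod_slope(qs, Is), 6))  # -> -4.0
```

On real data the fit range must be chosen carefully: too low a q mixes in form-factor oscillations, too high a q mixes in the flat incoherent background.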

Scattering from particles


Small-angle scattering from particles can be used to determine the particle shape or their
size distribution. A small-angle scattering pattern can be fitted with intensities calculated
from different model shapes when the size distribution is known. If the shape is known, a
size distribution may be fitted to the intensity. Typically one assumes the particles to be
spherical in the latter case.

If the particles are dispersed in a solution and they are known to be monodisperse, all of
the same size, then a typical strategy is to measure different concentrations of particles in
the solution. From the obtained SAXS patterns one can extrapolate to the intensity pattern
one would get for a single particle. This is a necessary procedure that eliminates the
concentration effect, which is a small shoulder that appears in the intensity patterns due
to the proximity of neighbouring particles. The average distance between particles is then
roughly the distance 2π/q*, where q* is the position of the shoulder on the scattering
vector range q. The shoulder thus comes from the structure of the solution and this
contribution is called the structure factor. One can write for the small-angle X-ray
scattering intensity:

I(q) = P(q)S(q),

where

• I(q) is the intensity as a function of the magnitude q of the scattering vector


• P(q) is the form factor
• and S(q) is the structure factor.

When the intensities from low concentrations of particles are extrapolated to infinite
dilution, the structure factor is equal to 1 and no longer disturbs the determination of the
particle shape from the form factor P(q). One can then easily apply the Guinier
approximation (also called Guinier law), which applies only at the very beginning of the
scattering curve, at small q-values. According to the Guinier approximation the intensity
at small q depends on the radius of gyration of the particle.
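The Guinier analysis can be sketched as a straight-line fit of ln(I) against q², assuming the Guinier law I(q) ≈ I(0)·exp(−q²Rg²/3) for a dilute, monodisperse sample (the synthetic data and Rg value below are illustrative):

```python
import math

def guinier_rg(qs, Is):
    """Radius of gyration from the Guinier law I(q) = I0 * exp(-q^2 Rg^2 / 3),
    fitted as a straight line of ln(I) against q^2 (valid only for q*Rg < ~1.3)."""
    xs = [q * q for q in qs]
    ys = [math.log(i) for i in Is]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return math.sqrt(-3.0 * slope)  # slope = -Rg^2 / 3

# Synthetic particle with Rg = 2.5 (same length units as 1/q):
rg = 2.5
qs = [0.02 * k for k in range(1, 20)]            # keeps q*Rg below ~1
Is = [10.0 * math.exp(-(q * rg) ** 2 / 3) for q in qs]
print(round(guinier_rg(qs, Is), 3))  # -> 2.5
```

The restriction to small q·Rg matters: beyond the Guinier region the fit mixes in shape-dependent terms and the recovered Rg is biased.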
An important part of the particle shape determination is usually the distance distribution
function p(r), which may be calculated from the intensity using a Fourier transform:

p(r) = (r² / 2π²) ∫ I(q) q² · sin(qr)/(qr) dq

The distance distribution function p(r) is related to the frequency of certain distances r
within the particle. Therefore it goes to zero at the largest diameter of the particle. It starts
from zero at r = 0 due to the multiplication by r². The shape of the p(r)-function already
tells something about the shape of the particle. If the function is very symmetric, the
particle is also highly symmetric, like a sphere. The distance distribution function should
not be confused with the size distribution.

The particle shape analysis is especially popular in biological small-angle X-ray
scattering, where one determines the shapes of proteins and other natural colloidal
polymers.

Extended X-ray absorption fine structure


X-ray Absorption Spectroscopy (XAS) includes both Extended X-Ray Absorption
Fine Structure (EXAFS) and X-ray Absorption Near Edge Structure (XANES). XAS
is the measurement of the x-ray absorption coefficient (μ(E) in the equations below) of a
material as a function of energy. X-rays of a narrow energy resolution are shone on the
sample and the incident and transmitted x-ray intensity is recorded as the incident x-ray
energy is incremented. The number of x-ray photons that are transmitted through a
sample (It) is equal to the number of x-ray photons shone on the sample (I0) multiplied by
a decreasing exponential that depends of the type of atoms in the sample, the absorption
coefficient μ, and the thickness of the sample x.

It = I0 e^(−μx)

The absorption coefficient is obtained by taking the natural logarithm of the ratio of the
incident x-ray intensity to the transmitted x-ray intensity: μx = ln(I0/It).
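The exponential attenuation law and its inversion can be sketched as a round trip; the intensities and μ, x values below are illustrative:

```python
import math

def transmitted(I0, mu, x):
    """Transmitted intensity for absorption coefficient mu and thickness x,
    It = I0 * exp(-mu * x)."""
    return I0 * math.exp(-mu * x)

def mu_x(I0, It):
    """Recover the product mu*x from measured intensities: mu*x = ln(I0/It)."""
    return math.log(I0 / It)

# Round trip: a sample with mu*x = 1.5 transmits exp(-1.5) of the beam
I0 = 1000.0
It = transmitted(I0, mu=15.0, x=0.1)   # only the product mu*x matters
print(round(mu_x(I0, It), 6))  # -> 1.5
```

Note that only the product μx is recoverable from a single transmission measurement; separating μ from x requires knowing the sample thickness.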

When the incident x-ray energy matches the binding energy of an electron of an atom
within the sample, the number of x-rays absorbed by the sample increases dramatically,
causing a drop in the transmitted x-ray intensity. This results in an absorption edge. Each
element on the periodic table has a set of unique absorption edges corresponding to
different binding energies of its electrons. This gives XAS element selectivity. XAS
spectra are most often collected at synchrotrons. Because X-rays are highly penetrating,
XAS samples can be gases, solids or liquids. And because of the brilliance of
Synchrotron X-ray sources the concentration of the absorbing element can be as low as a
few ppm.

EXAFS spectra are displayed as graphs of the absorption coefficient of a given material
versus energy, typically in a 500 – 1000 eV range beginning before an absorption edge of
an element in the sample. The x-ray absorption coefficient is usually normalized to unit
step height. This is done by regressing a line to the region before and after the absorption
edge, subtracting the pre-edge line from the entire data set and dividing by the absorption
step height, which is determined by the difference between the pre-edge and post-edge
lines at the value of E0 (on the absorption edge).
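The normalization recipe just described can be sketched as follows. The region boundaries and the edge energy E0 are inputs the analyst chooses; the synthetic step data are purely illustrative:

```python
def normalize_edge(E, mu, pre_range, post_range, E0):
    """Normalize mu(E) to unit edge step: fit straight lines to the pre-edge
    and post-edge regions, subtract the pre-edge line from all data, and
    divide by the step height (difference of the two lines at E0)."""
    def line_fit(lo, hi):
        # simple least-squares line over the points with lo <= E <= hi
        pts = [(e, m) for e, m in zip(E, mu) if lo <= e <= hi]
        n = len(pts)
        mx = sum(e for e, _ in pts) / n
        my = sum(m for _, m in pts) / n
        b = sum((e - mx) * (m - my) for e, m in pts) \
            / sum((e - mx) ** 2 for e, _ in pts)
        return my - b * mx, b                 # intercept, slope
    a1, b1 = line_fit(*pre_range)             # pre-edge line
    a2, b2 = line_fit(*post_range)            # post-edge line
    step = (a2 + b2 * E0) - (a1 + b1 * E0)    # edge-step height at E0
    return [(m - (a1 + b1 * e)) / step for e, m in zip(E, mu)]

# Synthetic step: mu = 0.1 below the edge at E0 = 100 eV, 1.1 above it
E = list(range(0, 201))
mu = [0.1 if e < 100 else 1.1 for e in E]
norm = normalize_edge(E, mu, pre_range=(0, 90), post_range=(110, 200), E0=100)
# pre-edge points normalize to ~0, post-edge points to ~1
```

Real spectra have sloping backgrounds and EXAFS oscillations on the post-edge side, which is why both regions are fitted as lines rather than simply averaged.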

The normalized absorption spectra are often called XANES spectra. These spectra can be
used to determine the average oxidation state of the element in the sample. The XANES
spectra are also sensitive to the coordination environment of the absorbing atom in the
sample. Finger printing methods have been used to match the XANES spectra of an
unknown sample to those of known "standards". Linear combination fitting of several
different standard spectra can give an estimate to the amount of each of the known
standard spectra within an unknown sample.

X-ray absorption spectra are produced over the range of 200 – 35,000 eV. The dominant
physical process is one where the absorbed photon ejects a core photoelectron from the
absorbing atom, leaving behind a core hole. The atom with the core hole is now excited.
The ejected photoelectron’s energy will be equal to that of the absorbed photon minus the
binding energy of the initial core state. The ejected photoelectron interacts with electrons
in the surrounding non-excited atoms.

If the ejected photoelectron is taken to have a wave-like nature and the surrounding atoms
are described as point scatterers, it is possible to imagine the backscattered electron
waves interfering with the forward-propagating waves. The resulting interference pattern
shows up as a modulation of the measured absorption coefficient, thereby causing the
oscillation in the EXAFS spectra. A simplified plane-wave single-scattering theory has
been used for interpretation of EXAFS spectra for many years, although modern methods
(like FEFF, GNXAS) have shown that curved-wave corrections and multiple-scattering
effects cannot be neglected. The photoelectron scattering amplitude in the low energy
range (5-200 eV) of the photoelectron kinetic energy becomes much larger, so that
multiple-scattering events become dominant in the NEXAFS (or XANES) spectra.

The wavelength of the photoelectron is dependent on the energy and phase of the
backscattered wave which exists at the central atom. The wavelength changes as a
function of the energy of the incoming photon. The phase and amplitude of the
backscattered wave are dependent on the type of atom doing the backscattering and the
distance of the backscattering atom from the central atom. The dependence of the
scattering on atomic species makes it possible to obtain information pertaining to the
chemical coordination environment of the original absorbing (centrally excited) atom by
analyzing these EXAFS data.
Experimental considerations
Since EXAFS requires a tunable x-ray source, data are always collected at synchrotrons,
often at beamlines which are especially optimized for the purpose. The utility of a
particular synchrotron to study a particular solid depends on the brightness of the x-ray
flux at the absorption edges of the relevant elements.

Applications
XAS is an interdisciplinary technique and its unique properties, as compared to x-ray
diffraction, have been exploited for understanding the details of local structure in:

• glass, amorphous and liquid systems


• solid solutions
• Doping and ionic implantation materials for electronics
• local distortions of crystal lattices
• organometallic compounds
• metalloproteins
• metal clusters
• vibrational dynamics
• ions in solutions
• speciation of elements

High pressure
High pressure in science and engineering is the study of the effects of high pressure on
materials, and the design and construction of devices, such as the diamond anvil cell,
that can create high pressure. High pressure usually means pressures of thousands
(kilobars) or millions (megabars) of times atmospheric pressure (about 1 bar).

It was by applying high pressure as well as high temperature to carbon that man-made
diamonds were first produced, along with many other interesting discoveries. Almost any
material, when subjected to high pressure, will compact itself into a denser form; for
example, quartz (also called silica or silicon dioxide) will first adopt a denser form known
as coesite, then, upon application of higher temperature, form stishovite. These two forms
of silica were first discovered by high-pressure experimenters, and then found in nature at
the site of a meteor impact.

Chemical bonding is likely to change under high pressure, when the P*V term in the free
energy becomes comparable to the energies of typical chemical bonds - i.e. at around 100
GPa. Among the most striking changes are metallization of oxygen at 96 GPa (rendering
oxygen a superconductor), and transition of sodium from a nearly-free-electron metal to a
transparent insulator at ~200 GPa. At ultimately high compression, however, all materials
will metallize.
High pressure experimentation has led to the discovery of the types of minerals which are
believed to exist in the deep mantle of the Earth, such as perovskite which is thought to
make up half of the Earth's bulk, and post-perovskite, which occurs at the core-mantle
boundary and explains many anomalies inferred for that region.

Pressure "landmarks": the pressure exerted by a fingernail scratching is ~0.6 GPa, typical
pressures reached by large-volume presses are up to 30-40 GPa, pressures that can be
generated inside diamond anvil cells are ~320 GPa, the pressure at the center of the Earth
is 364 GPa, and the highest pressures ever achieved in shock waves are over 100,000 GPa.

Photoemission spectroscopy

Principle of angle resolved photoelectron spectroscopy

Photoemission spectroscopy (PES), also known as photoelectron spectroscopy, refers
to energy measurement of electrons emitted from solids, gases or liquids by the
photoelectric effect, in order to determine the binding energies of electrons in a
substance. The term refers to various techniques, depending on whether the ionization
energy is provided by an X-ray photon, an EUV photon, or an ultraviolet photon.

X-ray photoelectron spectroscopy (XPS) was developed by Kai Siegbahn starting in 1957
and is used to study the energy levels of atomic core electrons, primarily in solids.
Siegbahn referred to the technique as Electron Spectroscopy for Chemical Analysis
(ESCA), since the core levels have small chemical shifts depending on the chemical
environment of the atom which is ionized, allowing chemical structure to be determined.
Siegbahn was awarded the Nobel Prize in 1981 for this work.

In the ultraviolet region, the method is usually referred to as photoelectron spectroscopy
for the study of gases, and photoemission spectroscopy for solid surfaces.

Ultra-violet photoelectron spectroscopy (UPS) is used to study valence energy levels and
chemical bonding; especially the bonding character of molecular orbitals. The method
was developed originally for gas-phase molecules in 1962 by David W. Turner, and other
early workers included David C. Frost, J. H. D. Eland and K. Kimura. Later, Richard
Smalley modified the technique and used a UV laser to excite the sample, in order to
measure the binding energy of electrons in gaseous molecular clusters.

Extreme ultraviolet photoelectron spectroscopy (EUPS) lies in between XPS and UPS. It
is typically used to assess the valence band structure. Compared to XPS it gives better
energy resolution, and compared to UPS the ejected electrons are faster, resulting in a
better spectrum signal.

Physical principle
The physics behind the PES technique is an application of the photoelectric effect. The
sample is exposed to a beam of UV or XUV light inducing photoelectric ionization. The
energies of the emitted photoelectrons are characteristic of their original electronic states,
and depend also on vibrational state and rotational level. For solids, photoelectrons can
escape only from a depth on the order of nanometers, so that it is the surface layer which
is analyzed.

Because of the high frequency of the light, and the substantial charge and energy of
emitted electrons, photoemission is one of the most sensitive and accurate techniques for
measuring the energies and shapes of electronic states and molecular and atomic orbitals.
Photoemission is also among the most sensitive methods of detecting substances in trace
concentrations, provided the sample is compatible with ultra-high vacuum and the analyte
can be distinguished from background.

Typical PES (UPS) instruments use helium gas sources of UV light, with photon energy
up to 52 eV (corresponding to a wavelength of 23.7 nm). The photoelectrons that actually
escape into the vacuum are collected, energy resolved, slightly retarded and counted,
which results in a spectrum of electron intensity as a function of the measured kinetic
energy. Because binding energy values are more readily applied and understood, the
kinetic energy values, which are source dependent, are converted into binding energy
values, which are source independent. This is achieved by applying Einstein's relation Ek
= hν − EB. The hν term of this equation is due to the energy (frequency) of the UV light
that bombards the sample. Photoemission spectra are also measured using synchrotron
radiation sources.
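The kinetic-to-binding-energy conversion via Einstein's relation Ek = hν − EB can be sketched directly; the He I photon energy and the sample kinetic energies below are illustrative, and the sample work function is ignored for simplicity:

```python
def to_binding_energies(photon_ev, kinetic_evs):
    """Convert source-dependent kinetic energies to source-independent
    binding energies via EB = h*nu - Ek (work function ignored)."""
    return [photon_ev - ek for ek in kinetic_evs]

# He I photons (21.22 eV) ejecting photoelectrons at a few kinetic energies:
print(to_binding_energies(21.22, [15.0, 10.0, 5.0]))  # binding energies in eV
```

Plotting the measured counts against these binding energies, rather than the raw kinetic energies, is what makes spectra taken with different photon sources directly comparable.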
The binding energies of the measured electrons are characteristic of the chemical
structure and molecular bonding of the material. By adding a source monochromator and
increasing the energy resolution of the electron analyzer, peaks appear with full width at
half maximum (FWHM) less than 5–8 meV.

Crystallography

A crystalline solid: atomic resolution image of strontium titanate. Brighter atoms are Sr
and darker ones are Ti.

Crystallography is the experimental science of the arrangement of atoms in solids. The
word "crystallography" derives from the Greek words crystallon = cold drop / frozen
drop, with its meaning extending to all solids with some degree of transparency, and
grapho = write.

Before the development of X-ray diffraction crystallography (see below), the study of
crystals was based on their geometry. This involves measuring the angles of crystal faces
relative to theoretical reference axes (crystallographic axes), and establishing the
symmetry of the crystal in question. The former is carried out using a goniometer. The
position in 3D space of each crystal face is plotted on a stereographic net, e.g. Wulff net
or Lambert net. In fact, the pole to each face is plotted on the net. Each point is labelled
with its Miller index. The final plot allows the symmetry of the crystal to be established.

Crystallographic methods now depend on the analysis of the diffraction patterns of a
sample targeted by a beam of some type. Although X-rays are most commonly used, the
beam is not always electromagnetic radiation. For some purposes electrons or neutrons
are used. This is facilitated by the wave properties of the particles. Crystallographers
often explicitly state the type of illumination used when referring to a method, as with the
terms X-ray diffraction, neutron diffraction and electron diffraction.

These three types of radiation interact with the specimen in different ways. X-rays
interact with the spatial distribution of the valence electrons, while electrons are charged
particles and therefore feel the total charge distribution of both the atomic nuclei and the
surrounding electrons. Neutrons are scattered by the atomic nuclei through the strong
nuclear forces, but in addition, the magnetic moment of neutrons is non-zero. They are
therefore also scattered by magnetic fields. When neutrons are scattered from hydrogen-
containing materials, they produce diffraction patterns with high noise levels. However,
the material can sometimes be treated to substitute deuterium for hydrogen. Because of
these different forms of interaction, the three types of radiation are suitable for different
crystallographic studies.

Theory
Generally, an image of a small object is made using a lens to focus the illuminating
radiation, as is done with the rays of the visible spectrum in light microscopy. However,
the wavelength of visible light (about 4000 to 7000 angstroms) is three orders of
magnitude longer than the length of typical atomic bonds and atoms themselves (about 1
to 2 angstroms). Therefore, obtaining information about the spatial arrangement of atoms
requires the use of radiation with shorter wavelengths, such as X-rays. Employing shorter
wavelengths implied abandoning microscopy and true imaging, however, because there
exists no material from which a lens capable of focusing this type of radiation can be
created. (That said, scientists have had some success focusing X-rays with microscopic
Fresnel zone plates made from gold, and by critical-angle reflection inside long tapered
capillaries.) Diffracted x-ray beams cannot be focused to produce images, so the sample
structure must be reconstructed from the diffraction pattern. Sharp features in the
diffraction pattern arise from periodic, repeating structure in the sample, which are often
very strong due to coherent reflection of many photons from many regularly spaced
instances of similar structure, while non-periodic components of the structure result in
diffuse (and usually weak) diffraction features.

Because of their highly ordered and repetitive structure, crystals give diffraction patterns
of sharp Bragg reflection spots, and are ideal for analyzing the structure of solids.
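The sharp Bragg reflection spots occur at angles given by Bragg's law, nλ = 2d sin θ, the standard diffraction condition (the numbers below are illustrative values, not taken from the text):

```python
import math

def bragg_angle_deg(wavelength_angstrom, d_spacing_angstrom, order=1):
    """Bragg angle theta (degrees) satisfying n*lambda = 2*d*sin(theta)."""
    s = order * wavelength_angstrom / (2.0 * d_spacing_angstrom)
    if s > 1.0:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Illustrative values: Cu K-alpha X-rays (1.54 angstroms) on planes with d = 2.0 angstroms
theta = bragg_angle_deg(1.54, 2.0)
print(f"Bragg angle: {theta:.1f} degrees")
```

Note that the angstrom-scale wavelength is comparable to the plane spacing, which is exactly why X-rays (rather than visible light) produce measurable diffraction angles.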

Notation
• Coordinates in square brackets such as [100] denote a direction vector (in real
space).

• Coordinates in angle brackets or chevrons such as <100> denote a family of
directions which are related by symmetry operations. In the cubic crystal system
for example, <100> would mean [100], [010], [001] or the negative of any of
those directions.

• Miller indices in parentheses such as (100) denote a plane of the crystal structure,
and regular repetitions of that plane with a particular spacing. In the cubic system,
the normal to the (hkl) plane is the direction [hkl], but in lower-symmetry cases,
the normal to (hkl) is not parallel to [hkl].
• Indices in curly brackets or braces such as {100} denote a family of planes and
their normals which are equivalent in cubic materials due to symmetry operations,
much the way angle brackets denote a family of directions. In non-cubic
materials, <hkl> is not necessarily perpendicular to {hkl}.
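For cubic symmetry, a <uvw> family of directions can be enumerated by taking all permutations of the indices together with all sign changes, as a short sketch:

```python
from itertools import permutations, product

def direction_family(u, v, w):
    """Enumerate the cubic-symmetry family <uvw>: all distinct
    permutations of the indices with all sign combinations."""
    dirs = set()
    for perm in permutations((u, v, w)):
        for signs in product((1, -1), repeat=3):
            dirs.add(tuple(s * c for s, c in zip(signs, perm)))
    return sorted(dirs)

print(len(direction_family(1, 0, 0)))  # 6 members: [100], [010], [001] and their negatives
print(len(direction_family(1, 1, 0)))  # 12 members
print(len(direction_family(1, 1, 1)))  # 8 members
```

The <100> family comes out with six members, matching the example above; this permutation-plus-sign construction is valid only for the full cubic symmetry group, as the text notes.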

Technique
Some materials studied using crystallography, proteins for example, do not occur
naturally as crystals. Typically, such molecules are placed in solution and allowed to
crystallize over days, weeks, or months through vapor diffusion. A drop of solution
containing the molecule, buffer, and precipitants is sealed in a container with a reservoir
containing a hygroscopic solution. Water in the drop diffuses to the reservoir, slowly
increasing the concentration and allowing a crystal to form. If the concentration were to
rise more quickly, the molecule would simply precipitate out of solution, resulting in
disorderly granules rather than an orderly and hence usable crystal.

Once a crystal is obtained, data can be collected using a beam of radiation. Although
many universities that engage in crystallographic research have their own X-ray
producing equipment, synchrotrons are often used as X-ray sources, because of the purer
and more complete patterns such sources can generate. Synchrotron sources also have a
much higher intensity of X-ray beams, so data collection takes a fraction of the time
normally necessary at weaker sources.

Producing an image from a diffraction pattern requires sophisticated mathematics and
often an iterative process of modelling and refinement. In this process, the
mathematically predicted diffraction patterns of a hypothesized or "model" structure are
compared to the actual pattern generated by the crystalline sample. Ideally, researchers
make several initial guesses, which through refinement all converge on the same answer.
Models are refined until their predicted patterns match to as great a degree as can be
achieved without radical revision of the model. This is a painstaking process, made much
easier today by computers.

The mathematical methods for the analysis of diffraction data only apply to patterns,
which in turn result only when waves diffract from orderly arrays. Hence crystallography
applies for the most part only to crystals, or to molecules which can be coaxed to
crystallize for the sake of measurement. In spite of this, a certain amount of molecular
information can be deduced from the patterns that are generated by fibers and powders,
which while not as perfect as a solid crystal, may exhibit a degree of order. This level of
order can be sufficient to deduce the structure of simple molecules, or to determine the
coarse features of more complicated molecules. For example, the double-helical structure
of DNA was deduced from an X-ray diffraction pattern that had been generated by a
fibrous sample.

Crystallography in materials engineering


An example of a cubic lattice

Crystallography is a tool that is often employed by materials scientists. In single crystals,
the effects of the crystalline arrangement of atoms are often easy to see macroscopically,
because the natural shapes of crystals reflect the atomic structure. In addition, physical
properties are often controlled by crystalline defects. The understanding of crystal
structures is an important prerequisite for understanding crystallographic defects. Most
materials do not occur in single-crystalline but in polycrystalline form, so the powder
diffraction method plays an important role in structural determination.

A number of other physical properties are linked to crystallography. For example, the
minerals in clay form small, flat, platelike structures. Clay can be easily deformed
because the platelike particles can slip along each other in the plane of the plates, yet
remain strongly connected in the direction perpendicular to the plates. Such mechanisms
can be studied by crystallographic texture measurements.

In another example, iron transforms from a body-centered cubic (bcc) structure to a face-
centered cubic (fcc) structure called austenite when it is heated. The fcc structure is a
close-packed structure, and the bcc structure is not, which explains why the volume of the
iron decreases when this transformation occurs.
Crystallography is useful in phase identification. When performing any process on a
material, it may be desired to find out what compounds and what phases are present in
the material. Each phase has a characteristic arrangement of atoms. Techniques like X-
ray diffraction can be used to identify which patterns are present in the material, and thus
which compounds are present.

Crystallography covers the enumeration of the symmetry patterns which can be formed
by atoms in a crystal and for this reason has a relation to group theory and geometry.

Biology
X-ray crystallography is the primary method for determining the molecular
conformations of biological macromolecules, particularly protein and nucleic acids such
as DNA and RNA. In fact, the double-helical structure of DNA was deduced from
crystallographic data. The first crystal structure of a macromolecule was solved in 1958:
a three-dimensional model of the myoglobin molecule obtained by X-ray analysis. The
Protein Data Bank (PDB) is a freely accessible repository for the structures of proteins
and other biological macromolecules. Computer programs like RasMol or Pymol can be
used to visualize biological molecular structures.

Electron crystallography has been used to determine some protein structures, most
notably membrane proteins and viral capsids.
Chapter- 5

Sources of Terahertz Technology

Far-infrared laser
Far infrared laser (FIR laser, terahertz laser) is a laser with output wavelength in the far
infrared part of the electromagnetic spectrum, between 30 and 1000 µm (300 GHz to
10 THz). It is one of the possible sources of terahertz radiation.

FIR lasers have applications in terahertz time-domain spectroscopy and terahertz
imaging, as well as in fusion plasma physics diagnostics. They can be used to detect
explosives and chemical warfare agents by means of infrared spectroscopy, or to evaluate
plasma densities by means of interferometry techniques.

FIR lasers typically consist of a long (1-3 meter) waveguide filled with gaseous organic
molecules, optically pumped or excited via a high-voltage discharge. They are highly
inefficient, often require helium cooling and high magnetic fields, and/or are only
line-tunable. Efforts to develop smaller solid-state alternatives are under way.

The p-Ge (p-type germanium) laser is a tunable, solid-state, far-infrared laser which has
existed for over 25 years. It operates in crossed electric and magnetic fields at liquid
helium temperatures. Wavelength selection can be achieved by changing the applied
electric/magnetic fields or through the introduction of intracavity elements.

The quantum cascade laser (QCL) is one such alternative. It is a solid-state
semiconductor laser that can operate continuously with an output power of over 100 mW
at a wavelength of 9.5 µm. A prototype has already been demonstrated and potential uses
have been shown.

Free electron lasers can also operate on far infrared wavelengths.

Femtosecond Ti:sapphire mode-locked lasers are also being used.


Photomixing
Photomixing is the generation of continuous wave terahertz radiation from two lasers.
The beams are mixed together and focussed onto a photomixer device which generates
the terahertz radiation. It is technologically significant because there are few sources
capable of providing radiation in this waveband, others include frequency multiplied
electronic/microwave sources, quantum cascade laser and ultrashort pulsed lasers with
photoconductive switches as used in terahertz time-domain spectroscopy. The advantages
of this technique are that it is continuously tunable over the frequency range from 300
GHz to 3 THz (10 cm⁻¹ to 100 cm⁻¹, or 1 mm to 0.1 mm), and spectral resolutions on the
order of 1 MHz can be achieved. However, the achievable power is only on the order of
10⁻⁸ W.

Principle
Two continuous-wave lasers with identical polarisation are required; the lasers, with
frequencies ω₁ and ω₂, are spatially overlapped to generate a terahertz beatnote. The
co-linear lasers are then used to illuminate an ultrafast semiconductor material such as
GaAs. The photonic absorption and the short charge-carrier lifetime result in the
modulation of the conductivity at the desired terahertz frequency ω_THz = ω₁ − ω₂. An
applied electric field allows the conductivity variation to be converted into a current
which is radiated by a pair of antennas. A typical photoconductive device or 'photomixer'
is made from low-temperature GaAs with a patterned metalised layer which is used to
form an electrode array and radiating antenna.
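As a numerical check, the difference frequency of two near-infrared lasers only a couple of nanometres apart already falls in the terahertz range (the wavelengths below are illustrative, not from the text):

```python
C = 299_792_458.0  # speed of light, m/s

def beat_frequency_thz(lambda1_nm, lambda2_nm):
    """Terahertz difference frequency |f1 - f2| produced by photomixing
    two CW lasers of the given vacuum wavelengths (in nm)."""
    f1 = C / (lambda1_nm * 1e-9)
    f2 = C / (lambda2_nm * 1e-9)
    return abs(f1 - f2) / 1e12

# Two near-infrared lasers ~2 nm apart (illustrative values)
print(f"{beat_frequency_thz(780.0, 782.0):.2f} THz")
```

Tuning either laser wavelength by a small amount therefore tunes the emitted frequency continuously, which is the origin of the wide tuning range quoted above.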

High resolution spectrometer


The photomixing source can then form the basis of a laser spectrometer which can be
used to examine the THz signature of various subjects such as gases, liquids or solid
materials.

The instrument can be divided into the following functional units:

• Laser sources which provide a THz beatnote in the optical domain. These are
usually two near-infrared lasers, possibly followed by an optical amplifier.

• The photomixer device converts the beatnote into THz radiation, often emitted
into free space by an integrated antenna.

• A THz propagation path; depending on the application, suitable focussing
elements are used to collimate the THz beam and allow it to pass through the
sample under study.
• Detector: with the relatively low levels of available power, on the order of 1 µW, a
sensitive detector is required to ensure a reasonable signal-to-noise ratio. Si
bolometers provide a solution for incoherent instruments. Alternatively, a second
photomixer device can be used as a detector, which has the advantage of allowing
coherent detection.

Gyrotron
Gyrotrons are high powered vacuum tubes which emit millimeter-wave beams by
bunching electrons with cyclotron motion in a strong magnetic field. Output frequencies
range from about 20 to 250 GHz, covering wavelengths from microwave to the edge of
the terahertz gap. Typical output powers range from tens of kilowatts to 1-2 megawatts.
Gyrotrons can be designed for pulsed or continuous operation.

Principle of operation
The gyrotron is a type of free electron maser (microwave amplification by stimulated
emission of radiation). It has high power at millimeter wavelengths because its
dimensions can be much larger than the wavelength, unlike conventional vacuum tubes,
and it is not dependent on material properties, as are conventional masers. The bunching
depends on a relativistic effect called the cyclotron resonance maser instability. The
electrons in a gyrotron are mildly relativistic (their speed is a significant fraction of, but
not close to, the speed of light). This contrasts with the free-electron laser (and xaser),
which works on different principles and in which the electrons are highly relativistic.
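The output frequency is set by the electron cyclotron frequency in the applied magnetic field. A short sketch using the standard relativistic cyclotron relation f = eB/(2πγmₑ) (this formula is standard physics, not stated in the text, and the 140 GHz target is an illustrative value):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron rest mass, kg

def cyclotron_freq_ghz(b_tesla, gamma=1.0):
    """Relativistic electron cyclotron frequency f = e*B / (2*pi*gamma*m_e), in GHz."""
    return E_CHARGE * b_tesla / (2 * math.pi * gamma * M_E) / 1e9

def field_for_freq(f_ghz, gamma=1.0, harmonic=1):
    """Magnetic field (T) needed so that f_ghz = harmonic * f_cyclotron."""
    return 2 * math.pi * gamma * M_E * (f_ghz * 1e9) / (harmonic * E_CHARGE)

# Field required for 140 GHz fundamental-harmonic emission (illustrative)
print(f"{field_for_freq(140.0):.1f} T")
```

The result, a field of several tesla for frequencies around 100-200 GHz, is why gyrotrons typically rely on strong (often superconducting) magnets; operating at a harmonic of the cyclotron frequency reduces the required field proportionally.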

Applications
Gyrotrons are used for many industrial and high technology heating applications. For
example, gyrotrons are used in nuclear fusion research experiments to heat plasmas, and
also in manufacturing industry as a rapid heating tool in processing glass, composites,
and ceramics, as well as for annealing (solar and semiconductors). Additionally, years of
testing by the U.S. military has led to the development of a weapon system intended for
non-lethal crowd control called the Active Denial System, which heats the skin of the
crowd it is directed at to unpleasant levels.

Manufacturers
Gyrotron makers include Communications & Power Industries (USA), Gycom (Russia),
Thales Group (EU), and Toshiba (Japan). System developers include Gyrotron
Technology, Inc.
Backward wave oscillator
A backward wave oscillator (BWO), also called carcinotron (a trade name for tubes
manufactured by CSF, now Thales) or backward wave tube, is a vacuum tube that is
used to generate microwaves up to the terahertz range. It belongs to the traveling-wave
tube family. It is an oscillator with a wide electronic tuning range.

An electron gun generates an electron beam that interacts with a slow-wave
structure. It sustains the oscillations by propagating a traveling wave backwards against
the beam. The generated electromagnetic wave power has its group velocity directed
oppositely to the direction of motion of the electrons. The output power is coupled out
near the electron gun.

It has two main subtypes, the M-type, the most powerful, (M-BWO) and the O-type (O-
BWO). The O-type delivers typically power in the range of 1 mW at 1000 GHz to
50 mW at 200 GHz. Carcinotrons are used as powerful and stable microwave sources.
Due to the good quality wavefront they produce, they find use as illuminators in terahertz
imaging.

The backward wave oscillators were demonstrated in 1951, M-type by Bernard Epsztein,
(French patent 1,035,379; British patent 699,893; US patent 2,880,355) and O-type by
Rudolf Kompfner. The M-type BWO is a voltage-controlled non-resonant extrapolation
of magnetron interaction; both types are tunable over a wide range of frequencies by
varying the accelerating voltage. They can be swept through the band fast enough to
appear to radiate over the whole band at once, which makes them suitable for effective
radar jamming, quickly tuning into the radar frequency. Carcinotrons allowed airborne
radar jammers to be highly effective. However, frequency-agile radars can hop
frequencies fast enough to force the jammer to use barrage jamming, diluting its output
power over a wide band and significantly impairing its efficiency.

Carcinotrons are used in research, civilian and military applications. For example, the
Kopac passive sensor and Ramona passive sensor employed carcinotrons in their receiver
systems.

The Slow-wave structure


(a) Forward fundamental space harmonic (n=0), (b) Backward fundamental

The needed slow-wave structures must support a Radio Frequency (RF) electric field
with a longitudinal component; the structures are periodic in the direction of the beam
and behave like microwave filters with passbands and stopbands. Due to the periodicity
of the geometry, the fields are identical from cell to cell except for a constant phase shift
Φ. This phase shift, a purely real number in a passband of a lossless structure, varies with
frequency. According to Floquet's theorem, the RF electric field E(z,t) can be described at
an angular frequency ω by a sum of an infinity of "spatial or space harmonics" E_n:

E(z,t) = Σ_n E_n exp[j(ωt − k_n z)]

where the wave number or propagation constant k_n of each harmonic is expressed as:

k_n = (Φ + 2nπ)/p    (−π < Φ < +π)

z being the direction of propagation, p the pitch of the circuit and n an integer.
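The propagation constants and phase velocities of the space harmonics follow directly from the relation k_n = (Φ + 2nπ)/p; a short numerical sketch (the phase shift, pitch, and frequency below are illustrative values):

```python
import math

def k_n(phi, pitch, n):
    """Propagation constant of the n-th space harmonic, k_n = (phi + 2*n*pi)/p."""
    return (phi + 2 * n * math.pi) / pitch

def phase_velocity(omega, phi, pitch, n):
    """Phase velocity v_n = omega / k_n of the n-th space harmonic.
    A negative value means the harmonic travels backwards."""
    return omega / k_n(phi, pitch, n)

# Illustrative numbers: phi = pi/2 per cell, pitch p = 0.5 mm, 100 GHz operation
omega = 2 * math.pi * 100e9
p = 0.5e-3
for n in (-1, 0, 1):
    print(n, f"{phase_velocity(omega, math.pi / 2, p, n):.3e} m/s")
```

With these values the n = -1 harmonic has a negative phase velocity, which is the backward wave an electron beam of matching velocity can synchronize with.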

Two examples of slow-wave circuit characteristics are shown, in the ω-k or Brillouin
diagram:

• on figure (a), the fundamental (n=0) is a forward space harmonic (the phase
velocity v_n=ω/k_n has the same sign as the group velocity v_g=dω/dk_n); the
synchronism condition for backward interaction is at point B, the intersection of
the line of slope v_e - the beam velocity - with the first backward (n = -1) space
harmonic,

• on figure (b) the fundamental (n=0) is backward

A periodic structure can support both forward and backward space harmonics, which are
not modes of the field, and cannot exist independently, even if a beam can be coupled to
only one of them.

As the magnitude of the space harmonics decreases rapidly when the value of n is large,
the interaction can be significant only with the fundamental or the first space harmonic.

M-type BWO

Schematic of an M-BWO

The M-type carcinotron, or M-type backward wave oscillator, uses crossed static
electric field E and magnetic field B, similar to the magnetron, for focussing an electron
sheet beam drifting perpendicularly to E and B, along a slow-wave circuit, with a velocity
E/B. Strong interaction occurs when the phase velocity of one space harmonic of the
wave is equal to the electron velocity. Both the E_z and E_y components of the RF field
are involved in the interaction (E_y parallel to the static E field). Electrons which are in a
decelerating E_z electric field of the slow wave lose the potential energy they have in the
static electric field E and reach the circuit. The sole electrode is more negative than the
cathode, in order to avoid collecting those electrons having gained energy while
interacting with the slow-wave space harmonic.

O-type BWO
The O-type carcinotron, or O-type backward wave oscillator, uses an electron beam
longitudinally focused by a magnetic field, and a slow-wave circuit interacting with the
beam. A collector collects the beam at the end of the tube.
Chapter- 6

Terahertz Time-Domain Spectroscopy

Typical pulse as measured with THz-TDS.


Fourier transform of the above pulse.

In physics, terahertz time-domain spectroscopy (THz-TDS) is a spectroscopic


technique in which the properties of a material are probed with short pulses of terahertz
radiation. The generation and detection scheme is sensitive to the sample material's effect
on both the amplitude and the phase of the terahertz radiation. In this respect, the
technique can provide more information than conventional Fourier-transform
spectroscopy, which is only sensitive to the amplitude.

THz radiation has several distinct advantages over other forms of spectroscopy: many
materials are transparent to THz, THz radiation is safe for biological tissues because it is
non-ionizing (unlike for example X-rays), and images formed with terahertz radiation can
have relatively good resolution (less than 1 mm). Also, many interesting materials have
unique spectral fingerprints in the terahertz range, which means that terahertz radiation
can be used to identify them. Examples which have been demonstrated include several
different types of explosives, polymorphic forms of many compounds used as Active
Pharmaceutical Ingredients (API) in commercial medications as well as several illegal
narcotic substances. Since many materials are transparent to THz radiation, these items of
interest can be observed through visually opaque intervening layers, such as packaging
and clothing. Though not strictly a spectroscopic technique, the ultrashort width of the
THz radiation pulses allows for measurements (e.g., thickness, density, defect location)
on difficult to probe materials (e.g., foam). The measurement capability shares many
similarities to that observed with pulsed ultrasonic systems. Reflections from buried
interfaces and defects can be found and precisely imaged. THz measurements are,
however, non-contact.
Typically, the terahertz pulses are generated by an ultrashort pulsed laser and last only a
few picoseconds. A single pulse can contain frequency components covering the whole
terahertz range from 0.05 to 4 THz. For detection, the electrical field of the terahertz
pulse is sampled and digitized, conceptually similar to the way an audio card transforms
electrical voltage levels in an audio signal into numbers that describe the audio
waveform. In THz-TDS, the electrical field of the THz pulse interacts in the detector with
a much-shorter laser pulse (e.g. 0.1 picoseconds) in a way that produces an electrical
signal that is proportional to the electric field of the THz pulse at the time the laser pulse
gates the detector on. By repeating this procedure and varying the timing of the gating
laser pulse, it is possible to scan the THz pulse and construct its electric field as a
function of time. Subsequently, a Fourier transform is used to extract the frequency
spectrum from the time-domain data.
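The scan-then-Fourier-transform procedure can be sketched with a toy pulse model (a Gaussian-derivative single-cycle pulse; all parameter values below are illustrative, not from the text):

```python
import cmath
import math

def thz_pulse(t, t0=2e-12, tau=0.3e-12):
    """Toy single-cycle THz field: derivative of a Gaussian (illustrative model)."""
    x = (t - t0) / tau
    return -x * math.exp(-x * x)

# Sample the field every 50 fs over a 20 ps window, mimicking the delay-line scan
dt = 50e-15
samples = [thz_pulse(i * dt) for i in range(400)]

def dft_magnitude(signal, dt):
    """Discrete Fourier transform magnitudes and the frequency axis (Hz)."""
    n = len(signal)
    freqs = [k / (n * dt) for k in range(n // 2)]
    mags = [abs(sum(s * cmath.exp(-2j * math.pi * k * m / n)
                    for m, s in enumerate(signal))) for k in range(n // 2)]
    return freqs, mags

freqs, mags = dft_magnitude(samples, dt)
peak_thz = freqs[mags.index(max(mags))] / 1e12
print(f"Spectrum peaks near {peak_thz:.2f} THz")
```

The 50 fs sampling step sets the highest resolvable frequency (10 THz here), while the 20 ps scan length sets the frequency resolution (50 GHz), mirroring the trade-offs of a real delay-line measurement.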

Generation
There are three widely used techniques for generating terahertz pulses, all based on
ultrashort pulses from titanium-sapphire lasers.

Surface emitters

When an ultra-short (100 femtoseconds or shorter) optical pulse illuminates a
semiconductor and its photon energy is above the energy band gap of the material,
it photogenerates mobile carriers. Given that absorption of the pulse is an exponential
process, most of the carriers are generated near the surface (typically within 1
micrometre). The presence of the surface has two main effects. Firstly, it generates a band
bending which has the effect of accelerating carriers of different signs in opposite
directions (normal to the surface), creating a dipole; this effect is known as surface field
emission. Secondly, the presence of the surface itself creates a break of symmetry which
results in carriers being able to move (on average) only into the bulk of the
semiconductor. This phenomenon, combined with the difference in mobilities of
electrons and holes, also produces a dipole; this is known as the photo-Dember effect and
is particularly strong in high-mobility semiconductors such as InAs.
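The claim that most carriers appear within about a micrometre of the surface follows from Beer-Lambert absorption, I(z) = I₀ exp(−αz). A short sketch, assuming an absorption coefficient of order 10⁴ cm⁻¹ (a typical order of magnitude for GaAs near 800 nm; this value is an assumption, not from the text):

```python
import math

def absorption_depth_um(alpha_per_cm, fraction=1 - 1 / math.e):
    """Depth (in micrometres) at which the given fraction of the pulse has been
    absorbed, for Beer-Lambert decay I(z) = I0 * exp(-alpha * z)."""
    z_cm = -math.log(1 - fraction) / alpha_per_cm
    return z_cm * 1e4  # cm -> um

# alpha ~ 1e4 cm^-1 is an assumed, typical value for GaAs at 800 nm
print(f"1/e depth: {absorption_depth_um(1e4):.2f} um")
print(f"90% depth: {absorption_depth_um(1e4, 0.90):.2f} um")
```

The 1/e absorption depth comes out at 1/α, i.e. about a micrometre, consistent with the generation depth quoted above.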

Photoconductive emitters

In a photoconductive emitter, the optical laser pulse (100 femtoseconds or shorter)
creates carriers (electron-hole pairs) in a semiconductor material. Effectively, the
semiconductor changes abruptly from being an insulator into being a conductor. This
conduction leads to a sudden electrical current across a biased antenna patterned on the
semiconductor. This changing current emits terahertz radiation, similar to what happens
in the antenna of a radio transmitter. Typically the two antenna electrodes are patterned
on a low temperature gallium arsenide (LT-GaAs), semi-insulating gallium arsenide (SI-
GaAs), or other semiconductor (such as InP) substrate. In a commonly used scheme, the
electrodes are formed into the shape of a simple dipole antenna with a gap of a few
micrometers and have a bias voltage up to 40 V between them. The ultrafast (100 fs)
laser pulse, must have a wavelength that is short enough to excite electrons across the
bandgap of the semiconductor substrate. This scheme is suitable for illumination with a
Ti:sapphire oscillator laser with pulse energies of about 10 nJ. For use with amplified
Ti:sapphire lasers with pulse energies of about 1 mJ, the electrode gap can be increased to
several centimeters with a bias voltage of up to 10 kV.

The short duration of the generated THz pulses (typically ~2 ps) is primarily due to the
rapid rise of the photo-induced current in the semiconductor and the short carrier lifetime
of semiconductor materials such as LT-GaAs. This current may persist for only a few
hundred femtoseconds, up to several nanoseconds, depending on the material of which
the substrate is composed. This is not the only means of generation, but is currently (as of
2008) the most common.

Pulses produced by this method have average power levels on the order of nanowatts,
although the peak power during the pulses can be many orders of magnitude higher. The
bandwidth of the resulting THz pulse is primarily limited by how quickly the charge
carriers can accelerate in the semiconductor material, rather than the duration of the laser
pulse.

Optical rectification

In optical rectification, a high-intensity ultrashort laser pulse passes through a transparent
crystal material that emits a terahertz pulse without any applied voltages. It is a
nonlinear-optical process, where an appropriate crystal material is quickly electrically
polarized at high optical intensities. This changing electrical polarization emits terahertz
radiation.

Because of the high laser intensities that are necessary, this technique is mostly used with
amplified Ti:sapphire lasers. Typical crystal materials are zinc telluride, gallium
phosphide, and gallium selenide.

The bandwidth of pulses generated by optical rectification is limited by the laser pulse
duration, terahertz absorption in the crystal material, the thickness of the crystal, and a
mismatch between the propagation speed of the laser pulse and the terahertz pulse inside
the crystal. Typically, a thicker crystal will generate higher intensities, but lower THz
frequencies. With this technique, it is possible to boost the generated frequencies to 40
THz (7.5 µm) or higher, although 2 THz (150 µm) is more commonly used since it
requires less complex optical setups.
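The frequency-wavelength pairs quoted above can be checked with the free-space relation λ = c/f:

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_um(f_thz):
    """Free-space wavelength (micrometres) for a given frequency in THz."""
    return C / (f_thz * 1e12) * 1e6

for f in (0.1, 2.0, 40.0):
    print(f"{f:5.1f} THz -> {wavelength_um(f):7.1f} um")
```

This reproduces the 2 THz ≈ 150 µm and 40 THz ≈ 7.5 µm conversions used in the text.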

Electro-optic rectification (EOR) (also referred to as optical rectification) is a non-linear
optical process which consists of the generation of a quasi-DC polarization in a
non-linear medium at the passage of an intense optical beam. For typical intensities,
optical
rectification is a second-order phenomenon (difference frequency mixing) which is based
on the inverse process of the electro-optic effect. It was reported for the first time in
1962, when radiation from a ruby laser was transmitted through potassium dihydrogen
phosphate (KDP) and potassium dideuterium phosphate (KDdP) crystals.
Optical rectification can be intuitively explained in terms of the symmetry properties of
the non-linear medium: in the presence of a preferred internal direction, the polarization
will not reverse its sign at the same time as the driving field. If the latter is represented by
a sinusoidal wave, then an average DC polarization will be generated. This is the
analogue of the electric rectification effect, where an AC signal is converted ("rectified")
to DC.

When the applied electric field is delivered by a femtosecond laser, the spectral
bandwidth associated with such short pulses is very large. The mixing of different
frequency components produces a beating polarization, which results in the emission of
electromagnetic waves in the terahertz region. The EOR effect is somewhat similar to a
classical electrodynamic emission of radiation by an accelerating/decelerating charge,
except that here the charges are in a bound dipole form and the THz generation depends
on the second order susceptibility of the nonlinear optical medium. A popular material for
generating radiation in the 0.5–3 THz range (0.6–0.1 mm wavelength) is zinc telluride.

Together with carrier acceleration in semiconductors and polymers, optical rectification
is one of the main mechanisms for the generation of terahertz radiation using lasers. This
is different from other processes of terahertz generation such as polaritonics where a
polar lattice vibration is thought to generate the terahertz radiation.

Detection
The electrical field of the terahertz pulses is measured in a detector that is simultaneously
illuminated with an ultrashort laser pulse. Two common detection schemes are used in
THz-TDS: photoconductive sampling and electro-optical sampling. THz pulses can also
be detected by bolometers, heat detectors cooled to liquid-helium temperatures. Since
bolometers can only measure the total energy of a terahertz pulse, rather than its electrical
field over time, they are not suitable for use in THz-TDS.

In both detection methods, a part (called the detection pulse) of the same ultrashort laser
pulse that was used to generate the terahertz pulse is fed to the detector, where it arrives
simultaneously with the terahertz pulse. The detector will produce a different electrical
signal depending on whether the detection pulse arrives when the electric field of the THz
pulse is low or high. An optical delay line is used to vary the timing of the detection
pulse.

Because the measurement technique is coherent, it naturally rejects incoherent radiation.


Additionally, because the time slice of the measurement is extremely narrow, the noise
contribution to the measurement is extremely low.

The signal-to-noise ratio (S/N) of the resulting time-domain waveform naturally depends
on experimental conditions (e.g., averaging time); however, thanks to the coherent
sampling techniques described, high S/N values (>70 dB) are routinely seen with 1-minute
averaging times.
Photoconductive Detection

Photoconductive detection is similar to photoconductive generation. Here, the bias
electrical field across the antenna leads is generated by the electric field of the THz pulse
focused onto the antenna, rather than being applied externally. The presence of the THz
electric field generates current across the antenna leads, which is usually amplified using
a low-bandwidth amplifier. This amplified current is the measured parameter which
corresponds to the THz field strength. Again, the carriers in the semiconductor substrate
have an extremely short lifetime. Thus, the THz electric field strength is only sampled for
an extremely narrow slice (a few femtoseconds) of the entire electric field waveform.

Electro-optical sampling

The materials used for generation by optical rectification can also be used for detection
by using the Pockels effect, where certain crystalline materials become birefringent in the
presence of an electric field. The birefringence caused by the electric field of a terahertz
pulse leads to a change in the optical polarization of the detection pulse, proportional to
the electric-field strength. With the help of polarizers and photodiodes, this polarization
change is measured.

As with the generation, the bandwidth of the detection is dependent on the laser pulse
duration, material properties, and crystal thickness.
Chapter- 7

Terahertz Metamaterials

Terahertz waves lie at the far end of the infrared band, just before the start of the
microwave band. This image shows an array of gold structures on top of a semiconductor
base. The metamaterial and the semiconductor together form a device that can modulate
the intensity of terahertz radiation by up to 50 percent when a voltage is applied to the
gold structures. The experimental demonstration of the device exceeds the performance
of existing electrical terahertz modulators.

Terahertz metamaterials are a new class of composite, artificial materials which
interact at terahertz (THz) frequencies. The terahertz frequency range used in materials
research is usually defined as 0.1 to 10 THz. This bandwidth is also known as the
terahertz gap.

Terahertz waves are electromagnetic waves with frequencies higher than microwaves but
lower than infrared radiation and visible light. They possess many advantages for
applications in radio astronomy spectroscopy, non-destructive testing of spacecraft, non-
ionizing medical imaging and tumor detection, high resolution close range radar, and
security detection of chemicals, biological agents, and weapons. However this frequency
region is largely under-utilized and referred to as the “Terahertz Gap” of the
electromagnetic spectrum.

Applications of frequencies in the terahertz radiation range hold the promise of efficient
advancement in notably important technologies. Currently, a fundamental lack in
naturally occurring materials that allow for the desired electromagnetic response has led
to constructing new artificial composite materials, termed metamaterials. The
metamaterials are based on a lattice structure which mimics crystal structures. However,
the lattice of this new material consists of rudimentary elements much larger than atoms
or single molecules, and is an artificial, rather than a naturally occurring, structure.
Still, the interaction achieved occurs below the dimensions of the terahertz radiation
wave. In addition, the desired results are based on the resonant frequency of the
fabricated fundamental elements. The appeal and usefulness derive from a resonant
response that can be tailored for specific applications and controlled electrically or
optically. Alternatively, the response can be passive.

Terahertz technology
More broadly, the submillimeter-wave energy can be defined as 1000–100 μm (300 GHz–3
THz). Beyond 3 THz, out to 30-micrometer (10 THz) wavelengths, lies what has been
metaphorically termed unclaimed territory, where few devices, and perhaps none, exist.
The submillimeter, or terahertz band, exists between technologies in traditional
microwave and optical domains. Because atmospheric propagation is limited, the
commercial sector has passed over this frequency band. However, terahertz technology
has been instrumental for high-resolution spectroscopy. Moreover, a rich vein of
knowledge has been amassed via submillimeter remote sensing techniques. In particular,
interdisciplinary researchers in astrophysics and the earth sciences have mapped thermal
emission lines for a wide variety of lightweight molecules. The amount of information
obtained is specifically amenable to this particular band of electromagnetic radiation. In
fact, the universe is bathed in terahertz energy; most of it going unnoticed and
undetected.
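The band limits quoted above follow directly from λ = c/f; a quick check:

```python
# Quick check of the band limits quoted above: lambda = c / f
C = 299_792_458.0   # speed of light in vacuum, m/s

def wavelength_um(freq_hz):
    """Free-space wavelength in micrometers for a given frequency."""
    return C / freq_hz * 1e6

print(wavelength_um(300e9))   # ~1000 um (edge of the microwave band)
print(wavelength_um(3e12))    # ~100 um (edge of the far infrared)
print(wavelength_um(10e12))   # ~30 um (the 10 THz limit mentioned above)
```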

Terahertz devices

The development of electromagnetic, artificial-lattice structured materials, termed
metamaterials, has led to the realization of phenomena that cannot be obtained with
natural materials. This is observed, for example, with a natural glass lens, which interacts
with light (the electromagnetic wave) in a way that appears to be one-handed, while light
is delivered in a two-handed manner. In other words, light consists of an electric field and
magnetic field. The interaction of a conventional lens, or other natural materials, with
light is heavily dominated by the interaction with the electric field (one-handed). The
magnetic interaction in lens material is essentially nil. This results in common optical
limitations such as a diffraction barrier. Moreover, there is a fundamental lack of natural
materials that strongly interact with light's magnetic field. Metamaterials, a synthetic
composite structure, overcomes this limitation. In addition, the choice of interactions can
be invented and re-invented during fabrication, within the laws of physics. Hence, the
capabilities of interaction with the electromagnetic spectrum, which is light, are
broadened (two-handed).

Development of metamaterials has traversed the electromagnetic spectrum up to terahertz
and infrared frequencies, but does not yet include the visible light spectrum. This is
because, for example, it is easier to build a structure with larger fundamental elements
that can control microwaves. The fundamental elements for terahertz and infrared
frequencies have been progressively scaled to smaller sizes. In the future, visible light
will require elements to be scaled even smaller, for capable control by metamaterials.

Along with the ability to now interact at terahertz frequencies is the desire to build,
deploy, and integrate THz metamaterial applications universally into society. This is
because, as explained above, components and systems with terahertz capabilities will fill
a technologically relevant void. Because no known natural materials are available that
can accomplish this, artificially constructed materials must now take their place.

Research has begun with the first demonstrations of practical terahertz metamaterials.
Moreover, since many materials do not respond to THz radiation naturally, it is
necessary to build the electromagnetic devices which enable the construction of
useful applied technologies operating within this range. These are devices such as
directed light sources, lenses, switches, modulators and sensors. This void also includes
phase-shifting and beam-steering devices. Real-world applications in the THz band are
still in their infancy.

Moderate progress has been achieved. Terahertz metamaterial devices have been
demonstrated in the laboratory as tunable far-infrared filters, optical switching
modulators, and absorbers. A recently developed source of terahertz radiation is the THz
quantum cascade laser. However, technologies to control and manipulate THz waves are
lagging behind other frequency domains of the spectrum of light.

Furthermore, research into technologies which utilize THz frequencies show the
capabilities for advanced sensing techniques. In areas where other wavelengths are
limited, THz frequencies appear to fill the near future gap for advancements in security,
public health, biomedicine, defense, communication, and quality control in
manufacturing. This terahertz band has the distinction of being non-invasive and will
therefore not disrupt or perturb the structure of the object being radiated. At the same
time, this frequency band demonstrates capabilities such as passing through and imaging
the contents of a plastic container, penetrating a few millimeters of human skin tissue
without ill effects, and passing through clothing to detect hidden objects on personnel,
as well as detecting chemical and biological agents as novel approaches to
counter-terrorism. Terahertz
metamaterials, because they interact at the appropriate THz frequencies, seem to be one
answer in developing materials which use THz radiation.

Researchers believe that artificial magnetic (paramagnetic) structures, or hybrid
structures that combine natural and artificial magnetic materials, can play a key role in
terahertz devices. Some THz metamaterial devices are compact cavities, adaptive optics
and lenses, tunable mirrors, isolators, and converters.

Challenges in this field


Generating THz electromagnetic radiation

Without available terahertz sources, other applications are held back.

Semiconductor devices have become integrated into everyday living. Commercial and
scientific applications for generating the appropriate frequency bands of light, or the
electromagnetic spectrum, commensurate with the semiconductor application or device
are in wide use. Visible and infrared lasers are at the core of information technology, and
at the other end of the spectrum, microwave and radio-frequency emitters enable wireless
communications.

However, the terahertz regime, previously defined as the terahertz gap of
0.1 to 10 THz, is impoverished by comparison. Sources for generating the
required THz frequencies (or wavelengths) exist, but other challenges hinder their
usefulness. These laser devices are not compact, and therefore lack portability and are not
easily integrated into systems. In addition, low-consumption, solid-state terahertz sources
are lacking. The current devices also have one or more shortcomings of low power
output, poor tuning abilities, and a possible requirement for cryogenic liquids (liquid
helium) for operation.

This lack of appropriate sources hinders opportunities in spectroscopy, remote sensing,
free-space communications, and medical imaging.

Potential terahertz frequency applications are being researched globally. Two recently
developed technologies, Terahertz time-domain spectroscopy and quantum cascade lasers
could possibly be part of a multitude of development platforms worldwide. However, the
devices and components necessary to effectively manipulate terahertz radiation require
much more development beyond what has been accomplished to date (December 2009).

Magnetic field interaction

As briefly mentioned above, naturally occurring materials such as conventional lenses
and glass prisms are unable to significantly interact with the magnetic field of light. The
significant interaction (permittivity) occurs with the electric field. In natural materials any
useful magnetic interaction will taper off in the gigahertz range of frequencies. Compared
to interaction with the electric field, the magnetic component is imperceptible when in
terahertz, infrared, and visible light. So, a notable step occurred with the invention of a
practical metamaterial at microwave frequencies. This is because the rudimentary
elements of metamaterials have demonstrated a coupling and inductive response to the
magnetic component commensurate to the electric coupling and response. This
demonstrated the occurrence of an artificial magnetism, which was later applied to
terahertz and infrared electromagnetic waves (or light). In the terahertz and infrared
domains, this is a response that has not been discovered in nature.

Moreover, because the metamaterial is artificially fabricated during each step and phase
of construction, this gives the ability to choose how light, or the terahertz electromagnetic
wave, will travel through the material and be transmitted. This degree of choice is not
possible with conventional materials. The control is also derived from electrical-magnetic
coupling and response of rudimentary elements that are smaller than the length of the
electromagnetic wave travelling through the assembled metamaterial.

Electromagnetic radiation, which includes light, carries energy and momentum that may
be imparted to matter with which it interacts. The radiation and matter have a symbiotic
relationship. Radiation does not simply act on a material, nor is it simply acted upon
by a given material. Radiation interacts with matter. The magnetic interaction, or
induced coupling, of any material can be translated into permeability. The permeability of
natural occurring materials is a positive value. A unique ability of metamaterials is to
achieve permeability values less than 0 (negative), values not accessible in nature.
Negative permeability was first observed at microwave frequencies with the first
metamaterials. A few years later negative permeability was demonstrated in the terahertz
regime.

There have been reports of some natural magnetic materials that respond at
microwave frequencies. However, the magnetic effects in these materials are typically
weak and often exhibit narrow bands, which limits the scope of possible THz devices. It
has been noted that the realization of magnetism at THz and higher frequencies will
substantially affect THz optics and their applications.

This has to do with magnetic coupling at the atomic level. This drawback can be
overcome by using metamaterials that mirror atomic magnetic coupling on a scale
orders of magnitude larger than the atom.

Materials which can couple magnetically are particularly rare at terahertz or optical
frequencies.

The first THz metamaterials


The first terahertz metamaterials able to achieve a desired magnetic response, which
included negative values for permeability, were passive materials. Because of this,
"tuning" was achieved by fabricating a new material, with slightly altered dimensions to
create a new response. However, the notable advance, or practical achievement, is
actually demonstrating the manipulation of terahertz radiation with metamaterials.

For the first demonstration, more than one metamaterial structure was fabricated. The
demonstration showed a range of 0.6 to 1.8 terahertz. The results were believed to also
show that the effect can be tuned throughout the terahertz frequency regime by scaling
the dimensions of the structure. This was followed by demonstrations at 6 THz and
100 THz.

With the first demonstration, scaling of the elements and their spacing allowed for
success in the terahertz range of frequencies. As with metamaterials in lower frequency
ranges, these elements were non-magnetic materials, but were conducting elements. The
design allows a resonance that occurs with the electric and magnetic components
simultaneously. Notable is the strong magnetic response of these artificially constructed
materials.

The elements are made to respond at resonance, at specified frequencies, through their
specific design. The elements are then placed in a repeating pattern, as is common for
metamaterials. In this case, the combined and arrayed elements, along with attention to
spacing, comprise a flat, rectangular (planar) structured metamaterial. Since it was
designed to operate at terahertz frequencies, photolithography was used to etch the
elements onto a substrate.

Magnetic response from metamaterials at 1.8 THz

Schematic setup of an ellipsometry experiment.

The split-ring resonator (SRR) is a common metamaterial in use for a variety of
experiments. Magnetic responses (permeability) at terahertz frequencies can be achieved
with a structure composed of non-magnetic elements, such as SRRs, which demonstrate
different responses at resonant frequencies and near-resonant frequencies. The desired,
artificially fabricated, magnetic response is realized over a relatively large bandwidth,
and can be tuned throughout the terahertz frequency spectrum. The periodic array allows
the material to behave as a medium with an effective magnetic permeability µeff(ω),
where ω is frequency. In other words, at resonance an enhanced µeff is achieved.

The effective permeability µeff is boosted by the inductance of the rings, while the
capacitance occurs at the gaps of the split rings. In prior microwave-frequency
experiments, bulk metamaterial was used, with structures such as waveguides to transmit
the source radiation. In this terahertz experiment, ellipsometry is applied. In other words,
a light source in free space emits a polarized beam of radiation which is then reflected off
the sample. The emitted polarization is intentional, and the angle of polarization is
known. The change in polarization of the radiation reflected off the sample material is
then measured. This is used to obtain phase information and the polarization state of the
emitted and reflected radiation, which in turn is used to demonstrate the boost in effective
magnetic permeability at terahertz frequencies.
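The resonance behavior of the effective permeability can be sketched with the standard Lorentz-type (Pendry) model for SRR arrays. The filling factor F, the damping γ, and the choice of a resonance near 1.25 THz are illustrative assumptions, not values fitted to the experiment:

```python
import numpy as np

def mu_eff(omega, omega0, F=0.5, gamma=3e11):
    """Pendry-type SRR effective permeability:
    mu(w) = 1 - F*w**2 / (w**2 - w0**2 + 1j*gamma*w).
    Re(mu) rises above 1 below resonance, then dips below 1 (and below
    zero, for a large enough filling factor F) just above resonance."""
    return 1.0 - F * omega**2 / (omega**2 - omega0**2 + 1j * gamma * omega)

w0 = 2 * np.pi * 1.25e12                          # resonance near 1.25 THz
w = 2 * np.pi * np.linspace(0.6e12, 1.8e12, 601)  # measured window
mu = mu_eff(w, w0)                                # complex permeability curve
```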

Light (THz radiation) strikes the split-ring resonator (SRR) array in the direction of K, oriented 30° from
the surface normal. For each SRR, the split gap is 2 micrometers. The inset shows an SRR image from
focused-ion-beam microscopy. The lattice constant, the gap between the inner and outer ring (G), the
width of the metal lines (W), and the length of the outer ring (L) are varied throughout the experiment.

An external magnetic field is applied with the THz radiation. The radiation induces a
current in the looped wire of the SRR cell. This current then induces a local magnetic
field (a vector quantity). The local magnetic field can be understood as a magnetic
response. Well below the resonance frequency ω0, the local magnetic field increases with
increasing frequency. This magnetic response stays in phase with the electric field.
Because the SRR cell is actually a non-magnetic material, this local magnetic response is
temporary and will retain magnetic characteristics only so long as there is an externally
applied magnetic field. Thus the total magnetization will drop to zero when the applied
field is removed. In addition, the local magnetic response is actually a fraction of the
total magnetic field. This fraction is proportional to the field strength, which explains the
linear dependency. All this has to do with alignments and spins at the atomic level.

As the frequency continues to increase, approaching resonance, the induced currents in
the looped wire can no longer keep up with the applied field, and the local response
begins to lag. Then, as the frequency increases above ω0, the induced local-field response
lags further until it is completely out of phase with the excitation field. This results in a
magnetic permeability that falls below unity, including values less than zero. The linear
coupling between the induced local field and the fluctuating applied field is in contrast to
the non-linear characteristics of ferromagnetism; hence no permanent magnetic effect is
achieved.

Three different SRR samples were compared. The wavelength of the resonant excited
field is λ, and the elements scale to about λ/7. These are the necessary conditions for the
metamaterial to become a medium with µeff. The sample was placed inside an evacuated
compartment. A mercury arc lamp was used as the electromagnetic source, shone onto
the sample at an angle of 30°. The SRRs are expected to respond magnetically when the
magnetic field penetrates the rings (S-polarization) and to exhibit no magnetic response
when the magnetic field is parallel to the plane of the SRR (P-polarization). The
frequency range of 0.6 THz to 1.8 THz was used for the measurements. The reflectance
ratio of S- and P-polarizations was matched with strong magnetic responses of the SRRs
when the magnetic field penetrates the rings (S-polarization). Three different artificial
magnetic structures are designated D1, D2, and D3. D1 shows a strong magnetic
response at 1.25 THz, with a ratio of just below 1.5. To show that it is the material
geometry that varies the effective permeability, the two other samples are used to show
that this resonance scales with dimensions in accordance with Maxwell's equations. Thus
D2 has a strong magnetic response peaking at 0.95 THz, and the D3 sample peaks at
0.8 THz. This demonstrates the scalability of these magnetic metamaterials throughout
the THz range and potentially into optical frequencies. To further verify the results, a
mathematical simulation was performed which repeated the demonstration. The results
of the simulation were in good agreement with the actual results for materials D1, D2,
and D3.
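The Maxwell-scaling argument can be checked with a one-line ratio: since the resonance frequency varies inversely with the linear size of the unit cell, the reported peaks imply the relative dimensions of the three samples. The peak values are those quoted in the text; the inferred ratios are only an order-of-scale illustration:

```python
def implied_size_ratio(f_ref_thz, f_thz):
    """Maxwell's equations are scale-invariant, so f ~ 1/L: the measured peak
    frequencies imply a linear size ratio relative to a reference sample."""
    return f_ref_thz / f_thz

# Peaks reported for D1, D2, D3: 1.25, 0.95, and 0.8 THz
ratio_d2 = implied_size_ratio(1.25, 0.95)   # D2 ~1.32x larger than D1
ratio_d3 = implied_size_ratio(1.25, 0.80)   # D3 ~1.56x larger than D1
```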

Magnetic response of metamaterials at 100 terahertz

Illustration of the analogy between a conventional LC circuit (A), consisting of an
inductance L and a capacitance C, and the single SRRs used here (B). l, length; w, width;
d, gap width; t, thickness. (C) An electron micrograph of a typical SRR fabricated by
electron-beam lithography.

From this analysis and demonstration, the electrical susceptibility and magnetic
permeability - the parameters of normal materials - are artificially expanded. In normal
materials, resonances fade away above gigahertz frequencies. Instead, resonances at
terahertz frequencies have been effectively demonstrated for metamaterials. This now
allows for interesting new effects in linear optics as well as in nonlinear optics.
Furthermore, a negative magnetic permeability would allow for negative-index materials
at optical frequencies, which seemed totally out of reach just a few years ago.

To fulfill a need to achieve localized magnetic resonant responses for terahertz optical
frequencies, an array of single nonmagnetic metallic split rings can be used to implement
a magnetic resonance at 100 THz. The split-ring resonator mimicked an LC oscillator,
which generates waves at the resonant frequency ωLC = 1/√(LC).

An LC circuit is a resonant circuit or tuned circuit that consists of an inductor and a
capacitor. When connected together, an electric current can alternate between them at the
circuit's resonant frequency. LC circuits are used either for generating signals at a
circuit's resonant frequency. LC circuits are used either for generating signals at a
particular frequency, or picking out a signal at a particular frequency from a more
complex signal. They are key components in many applications such as oscillators,
filters, tuners and frequency mixers.
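The resonant frequency f = ωLC/2π = 1/(2π√(LC)) can be evaluated directly. The femtohenry and attofarad values below are illustrative guesses for a nanoscale SRR, chosen only to show that such a circuit lands near 100 THz:

```python
import math

def lc_resonance_hz(L, C):
    """Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative nanoscale values: tens of femtohenries and attofarads
f = lc_resonance_hz(50e-15, 50e-18)
print(f / 1e12)   # ~100 THz
```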
The radiated transmission is red; the response is graphed in blue. An etched picture of
the sample is shown on the right-hand side. Polarization configurations are shown on top of
the two columns. Resonances of the three lattice constants are shown in the gray area of
the graph at about 3 µm. In the first row (A and B), the lattice constant of the SRRs is a = 450
nm; in the second row (C and D), it is a = 600 nm; and in the third row (E and F), it is a =
900 nm. In the last row (G and H), results for closed-ring resonators with a = 600 nm are
shown.

To couple an incident light beam to the LC resonance one of two conditions must be met.
The first condition is that electric field vector E of the incident light source has a
component that is normal to the plates of the capacitor. The second condition is the
magnetic field vector H of the incident light has a component normal to the plane of the
coil. When the second condition is met, a localized magnetic field is created which
counteracts the magnetic field of the light source and can result in a negative
permeability. Such metamaterials were first realized at frequencies around 10 GHz (3-cm
wavelengths) - and could be fabricated on stacked electronic circuit boards. In this case
another two orders of magnitude, to 100 THz, had been achieved. This brings a negative
refractive index at visible frequencies much closer.

The first responses are shown with a lattice constant of a = 450 nm. Additionally, this
corresponds to a total number of 56 × 56 = 3136 SRR microstructures. Coupling is
controlled through the polarization of the incident light - the interaction of the electric
field components with the capacitor and the interaction of the magnetic field components
with the inductor. Other lattice constants shown will have a different total number of
SRR microstructures.

The LC resonance occurs at 3 µm. Resonant responses occur at lattice constants of
450 nm, 600 nm, and 900 nm. Two distinct resonant responses occur for all three of these
lattice constants. Additionally, all three lattice constants are notably smaller than the LC
resonance wavelength. Coupling to the LC resonance can only occur if the polarized
electric field has a component normal to the plates of the capacitor. If the electric field is
rotated 90°, then the resonance around the 3-µm wavelength completely disappears.

Next, closed rings rather than split rings are radiated to compare results. For the closed
rings, neither linear polarization produces a resonance; hence, unlike the split-ring
resonators, no resonance occurs at 3 µm. Finally, measurements are performed at an
angle of up to 40° with respect to the surface normal, such that the magnetic field
vector of the incident light acquires a component normal to the coils. As expected, the
3-µm resonance persists and does not shift.

Later, in 2005, resonant magnetic nanostructures were fabricated that experimentally
exhibited a negative permeability in the mid-infrared range. This was the first practical
demonstration to do so, and was seen as an important step toward achieving a negative
refractive index in the IR range.

Negative index of refraction at 200 THz

The two previous sections discussed a magnetic response at terahertz frequencies, but not
a negative index of refraction. These two studies are nevertheless important because a
negative magnetic permeability is necessary to achieve negative refraction. In addition,
these experiments demonstrated that optical negative index metamaterials are possible
because of the acquired magnetic response (permeability). In 2005 experimental
observation of a negative refractive index for the optical range, specifically, for the
wavelengths close to 1.5 μm (200 THz frequency) was accomplished.

This accomplishment was in agreement with prior theoretical predictions that a layer of
pairs of parallel metal nanorods can produce a negative refractive index.

Reconfigurable terahertz metamaterials


Electromagnetic metamaterials show promise to fill the terahertz gap (0.1–10 THz).
The terahertz gap is caused by two general shortfalls. First, almost no naturally occurring
materials are available for applications which would utilize terahertz frequency sources.
Second is the inability to translate the successes with EM metamaterials in the microwave
and optical domain, to the terahertz domain.

Moreover, the majority of research has focused on the passive properties of artificial
periodic THz transmission, as determined by the patterning of the metamaterial elements
e.g., the effects of the size and shape of inclusions, metal film thickness, hole geometry,
periodicity, etc. It has been shown that the resonance can also be affected by depositing a
dielectric layer on the metal hole arrays and by doping a semiconductor substrate, both of
which result in significant shifting of the resonance frequency. However, little work has
focused on the "active" manipulation of the extraordinary optical transmission, though it
is essential for realizing many applications.

Answering this need, there are proposals for "active metamaterials" which can
proactively control the proportion of transmission and reflection components of the
source (EM) radiation. Strategies include illuminating the structure with laser light,
varying an external static magnetic field where the current does not vary, and by using an
external bias voltage supply (semiconductor controlled). These methods open up the
possibilities of high-sensitivity spectroscopy, higher-power terahertz generation, short-
range secure THz communication, and even more sensitive terahertz detection, along
with more effective control and manipulation of terahertz waves.

Surface-plasmon-enhanced terahertz transmission

In August 2003, measurements of the transmission of terahertz radiation through periodic
arrays of holes made in highly doped silicon wafers were reported. The unusual
transmission was attributed to the resonant tunneling of surface-plasmon polaritons that
can be excited on doped semiconductors at terahertz frequencies.

Electronic control of THz transmission properties

Electronic switching of the extraordinary THz transmission was demonstrated with
subwavelength metal hole arrays fabricated on doped semiconductor substrates. The
passive resonance properties are mainly determined by the geometry and dimensions of
the metal holes as well as the array periodicity. By electronically altering the substrate
conductivity via an external voltage bias, switching of the extraordinary THz
transmission is accomplished in real time.

Hybrid metamaterial modulation of terahertz radiation

Terahertz modulators based on semiconductor structures often require cryogenic
temperatures. This particular modulator is electrically modulated at room temperature.
The bandwidth of the hybrid structure is proactively controlled by semiconductor
conduction.

Semiconductor-SRR metamaterial-based terahertz electrical modulators will be useful for
real-time terahertz imaging, fast sensing and identification, and even short-range secure
terahertz communications.

High-frequency modulation of terahertz radiation

In 2008, a metamaterial-based modulator for THz radiation was designed, fabricated, and
experimentally demonstrated. It was electrically tunable. The metamaterial is constructed
with symmetric unit cell structures to ensure the material is not affected by the arbitrary
polarizations of a radiated source.

The metamaterial was composed of an array of gold crosses fabricated on top of an
n-doped semiconductor (GaAs) layer.

The crossbars were effectively electric dipoles. In the vicinity of the resonance frequency
the crossbars create a negative effective permittivity for this metamaterial. Upon reaching
negative permittivity, a major fraction of the electromagnetic wave is reflected from the
metamaterial surface. The other part is of course transmitted, hence a stop band occurs
around the dipole resonance frequency. Here is where the n-doped GaAs layer comes into
play. The conductivity of the semiconductor layer is the tuning device for the transmitted
part of the EM wave. And the semiconductor layer can be purposely tuned.

Adaptive metamaterials (THz)


With adaptive metamaterials, the unit cell responds by reorienting. Adaptive
metamaterials offer significant potential to realize novel electromagnetic functionality,
ranging from thermal detection to reconfigurable electromagnetic radiation absorbers.

Reconfigurable terahertz metamaterials

The first demonstrations of negative refractive index with metamaterials were anisotropic
metamaterials. Reconfigurable metamaterials at terahertz frequencies are anisotropic
materials where the artificial dipole, which comprises the unit cell, is reoriented when
responding to the external EM source field. The split ring resonators are designed in a
cantilever configuration, which allows bending out of plane in response to a stimulus.
This provides a distinctive capability to tune the electric and magnetic response as the
split-ring resonators reorient within their unit cells.

Employing MEM technology

Combining metamaterial elements - specifically, split-ring resonators - with
microelectromechanical systems (MEMS) technology has enabled the creation of
non-planar flexible composites and micromechanically active structures, in which the
orientation of the electromagnetically resonant elements can be precisely controlled with
respect to the incident field.

Dynamic electric and magnetic metamaterial response at THz frequencies

The theory, simulation, and demonstration of a dynamic response of metamaterial
parameters were shown for the first time with a planar array of split-ring resonators
(SRRs).

Survey of terahertz metamaterial devices


The current trend of metamaterial research aims at the design of nanostructures that are
capable of manipulating electromagnetic waves in the visible frequency regime. A
metamaterial mimicking the Drude-Lorentz model can be straightforwardly achieved by
an array of wire elements into which cuts are periodically introduced. At frequencies
above the resonant frequency and below plasma frequency, the permittivity is negative
and, because the resonant frequency can be set to virtually any value in a metamaterial,
phenomena usually associated with optical frequencies including negative ε can be
reproduced at low frequencies.
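The cut-wire behavior just described follows the Drude-Lorentz form; a sketch with arbitrary illustrative frequencies (not taken from any cited experiment) shows the negative-ε window between the resonance and the effective plasma frequency:

```python
import numpy as np

def eps_cut_wire(omega, omega0, omega_p, gamma=1e10):
    """Drude-Lorentz permittivity of a periodically cut wire array:
    eps(w) = 1 - wp**2 / (w**2 - w0**2 + 1j*gamma*w).
    Re(eps) is negative between the resonance w0 and roughly the
    effective plasma frequency sqrt(w0**2 + wp**2), and positive outside."""
    return 1.0 - omega_p**2 / (omega**2 - omega0**2 + 1j * gamma * omega)

w0 = 2 * np.pi * 0.5e12   # illustrative resonance: 0.5 THz
wp = 2 * np.pi * 2.0e12   # illustrative plasma frequency: 2 THz
```

Because the resonance frequency of the cuts can be placed almost anywhere by design, the same negative-ε window can be relocated across the spectrum, which is the point made above.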

Novel amplifier designs

Section of a terahertz, folded-waveguide traveling-wave tube circuit with hole arrays on
walls.
Terahertz planar traveling-wave tube circuit with metamaterial embedded in substrate.

In the terahertz region, compact moderate-power amplifiers are not available. This leaves
the region underutilized, and the lack of such amplifiers is directly cited as one of the
causes.

Research work has involved investigating, creating, and designing light-weight slow-
wave vacuum electronics devices based on traveling wave tube amplifiers. These are
designs that involve folded waveguide, slow-wave circuits, in which the terahertz wave
meanders through a serpentine path while interacting with a linear electron beam.
Designs of folded-waveguide traveling-wave tubes are at frequencies of 670, 850, and
1030 GHz. In order to ameliorate the power limitations due to small dimensions and high
attenuation, novel planar circuit designs are also being investigated.

In-house work at the NASA Glenn Research Center has investigated the use of
metamaterials—engineered materials with unique electromagnetic properties to increase
the power and efficiency of terahertz amplification in two types of vacuum electronics
slow wave circuits. The first type of circuit has a folded-waveguide geometry in which
anisotropic dielectrics and holey metamaterials, which consist of arrays of
subwavelength holes, are embedded in the walls.

The second type of circuit has a planar geometry with a meander transmission line to
carry the electromagnetic wave and a metamaterial structure embedded in the substrate.
Computational results are more promising with this circuit. Preliminary results suggest
that the metamaterial structure is effective in decreasing the electric field magnitude in
the substrate and increasing the magnitude in the region above the meander line, where it
can interact with an electron sheet beam. In addition, the planar circuit is less difficult to
fabricate and can enable a higher current. More work is needed to investigate other planar
geometries, optimize the electric-field/electron-beam interaction, and design focusing
magnet geometries for the sheet beam.

Novel terahertz sensors

Device design is quickly becoming a large part of metamaterial research. In the half
decade since the field's conception, understanding of the physics behind tailored
electromagnetic responses in metamaterials has progressed far enough that application
demonstrations are surfacing.

A process is demonstrated for tuning the magnetic resonance frequency of a fixed split-
ring resonator array by adding material near the split-ring elements. The sensitivity of
this fine tuning, and in particular the response of the resonant frequency to silicon
nanospheres, suggests possible applications as a sensor device.

Applying drops of a silicon-nanospheres/ethanol solution to the surface of the sample
decreases the magnetic resonance frequency of the split-ring array in incremental steps of
0.03 THz. This fine tuning is done post fabrication and is demonstrated to be reversible.
The exhibited sensitivity of the split-ring resonance frequency to the presence of silicon
nanospheres also suggests further application possibilities as a sensor device.
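The tuning mechanism can be sketched under the common approximation that a split-ring resonator behaves as an LC circuit with f = 1/(2π√(LC)): depositing dielectric near the gap adds capacitance, which lowers the resonance. The inductance and capacitance values below are hypothetical, chosen only to land near 1 THz:

```python
import math

# LC-circuit sketch of split-ring tuning.  L and C are hypothetical
# effective values, not taken from the text or any measurement.
L = 1.0e-10          # effective ring inductance, H (100 pH, assumed)
C = 2.5e-16          # effective gap capacitance, F (0.25 fF, assumed)

def resonance(L, C):
    """Resonant frequency of a series LC circuit, in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f0 = resonance(L, C)         # bare resonance, ~1 THz with these values
f1 = resonance(L, C * 1.05)  # +5% capacitance from added dielectric
print((f0 - f1) / 1e12)      # downward shift in THz; f1 < f0
```

With these assumed values a 5% capacitance increase shifts the resonance down by a few hundredths of a THz, the same order as the 0.03 THz steps reported above.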

A metamaterial solid-state terahertz phase modulator

The terahertz phase modulator uses a voltage-controlled metamaterial of a single unit cell
layer. This new device achieves a voltage-controlled linear phase shift of π /6 radians at
16 V. Moreover, the causal relation between amplitude switching and phase shifting
enables broadband modulation.

THz metamaterial IR sensor

One of the most critical applications of such a filter is to block unwanted radiation
from a nearby military high-power laser, while still allowing the sensor to carry out its
necessary battlefield function.

Biomolecular sensing at THz frequencies

Recently, it has been proposed in a numerical study to use THz-FSS based on asymmetric
split ring resonators as a sensor for detecting biomolecular sample films with a thickness
of only 10 nm. Because large biomolecules, e.g. DNA, exhibit a multitude of inherent
vibrational modes, terahertz radiation is ideal to excite and probe these modes and to
detect DNA by its terahertz properties at a specific binding state. This is a proposal for a
rapid processing and reading of up to 100 arrayed gene sensors for diagnostic
applications.
Chapter- 8

Important examples of Terahertz Technology

Comb generator
A comb generator is a signal generator that produces multiple harmonics of its input
signal. The appearance of the output at the spectrum analyzer screen, resembling teeth of
a comb, gave the device its name.
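As an illustrative sketch (all values hypothetical), the device can be modeled as turning a sine input into a sharp periodic pulse train; the spectrum of that train is exactly the "comb" of harmonics described above:

```python
import numpy as np

# Model a comb generator's output as one narrow pulse per input cycle.
# Sample rate and input frequency are illustrative, not from the text.
fs = 1_000_000        # sample rate, Hz
f_in = 10_000         # input frequency, Hz
n = fs // f_in        # samples per input period
signal = np.zeros(fs) # one second of output
signal[::n] = 1.0     # one narrow pulse per input cycle

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# The spectral lines ("teeth") sit at integer multiples of f_in.
peaks = freqs[spectrum > 0.5 * spectrum.max()]
print(peaks[:5])
```

The printed frequencies fall at 0, f_in, 2·f_in, and so on: the evenly spaced teeth that give the device its name.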

Comb generators find a wide range of uses in microwave technology. For example,
synchronous signals across a wide frequency bandwidth can be produced by a comb
generator. The most common use is in broadband frequency synthesizers, where the high
common use is in broadband frequency synthesizers, where the high frequency signals
act as stable references correlated to the lower energy references; the outputs can be used
directly, or to synchronize phase-locked loop oscillators. It may be also used to generate a
complete set of substitution channels for testing, each of which carries the same baseband
audio and video signal.

Comb generators are also used in RFI testing of consumer electronics, where their output
serves as a simulated RF emission, since it is a stable broadband noise source with
repeatable output.

An optical comb generator can be used as a generator of terahertz radiation. Internally,
it is a resonant electro-optic modulator, with the capability of generating hundreds of
sidebands with total span of at least 3 terahertz (limited by the optical dispersion of the
lithium niobate crystal) and frequency spacing of 17 GHz. Other construction can be
based on erbium-doped fiber laser or Ti-sapphire laser often in combination with carrier
envelope offset control.
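A quick arithmetic check of the span quoted above, assuming nothing beyond the stated 3 THz total span and 17 GHz line spacing:

```python
# Number of comb lines needed to cover the quoted span at the quoted
# spacing.  Only the two figures given in the text are used.
span_hz = 3.0e12      # total span: 3 THz
spacing_hz = 17.0e9   # line spacing: 17 GHz

n_sidebands = span_hz / spacing_hz
print(round(n_sidebands))
```

The result is on the order of a couple of hundred lines, consistent with the "hundreds of sidebands" mentioned above.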
Gunn diode

A rough approximation of the VI curve for a Gunn diode, showing the negative
differential resistance region

A Gunn diode, also known as a transferred electron device (TED), is a form of diode
used in high-frequency electronics. It is somewhat unusual in that it consists only of N-
doped semiconductor material, whereas most diodes consist of both P and N-doped
regions. In the Gunn diode, three regions exist: two of them are heavily N-doped on each
terminal, with a thin layer of lightly doped material in between. When a voltage is
applied to the device, the electrical gradient will be largest across the thin middle layer.
Conduction will take place as in any conductive material with current being proportional
to the applied voltage. Eventually, at higher field values, the conductive properties of the
middle layer will be altered, increasing its resistivity and reducing the gradient across it,
preventing further conduction, and the current actually begins to fall. In practice, this
means a Gunn diode has a region of negative differential resistance.

The negative differential resistance, combined with the timing properties of the
intermediate layer, allows construction of an RF relaxation oscillator simply by applying
a suitable direct current through the device. In effect, the negative differential resistance
created by the diode will negate the real and positive resistance of an actual load and thus
create a "zero" resistance circuit which will sustain oscillations indefinitely. The
oscillation frequency is determined partly by the properties of the thin middle layer, but
can be tuned by external factors. Gunn diodes are therefore used to build oscillators in the
10 GHz and higher (THz) frequency range, where a resonator is usually added to control
frequency. This resonator can take the form of a waveguide, microwave cavity or YIG
sphere. Tuning is done mechanically, by adjusting the parameters of the resonator, or in
case of YIG spheres by changing the magnetic field.
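The role of the middle layer in setting the frequency can be made concrete with a rough transit-time estimate: in the simplest mode, the oscillation frequency is set by how long a high-field domain takes to cross the layer, f ≈ v_drift / L. The values below are typical textbook figures for GaAs, used purely as a sketch, not taken from this text:

```python
# Transit-time estimate for a Gunn oscillator (illustrative values).
v_drift = 1.0e5     # saturated drift velocity in GaAs, m/s (~1e7 cm/s)
layer = 10.0e-6     # middle-layer thickness, m (10 micrometres, assumed)

f_osc = v_drift / layer
print(f_osc / 1e9)  # oscillation frequency in GHz
```

With these assumptions the estimate lands around 10 GHz, matching the band quoted above; thinner layers push the frequency higher.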

Gallium arsenide Gunn diodes are made for frequencies up to 200 GHz, while gallium
nitride devices can reach up to 3 terahertz.

The Gunn diode is based on the Gunn effect, and both are named for the physicist J.B.
Gunn who, at IBM in 1962, discovered the effect because he refused to accept
inconsistent experimental results in Gallium arsenide as "noise", and tracked down the
cause. Alan Chynoweth, of Bell Telephone Laboratories, showed in June 1965 that only a
transferred-electron mechanism could explain the experimental results.

Microscopic view
GaAs has another energy minimum in the conduction band above the direct-gap
minimum at Γ-point. This minimum is indirect, so a phonon is needed or created to
deliver the impulse for the transition. The energy stems from the kinetic energy of
ballistic electrons. They either start out in a high-energy Fermi-Dirac region and are
ensured a sufficiently long mean free path by applying a strong electric field, or they are
injected by a cathode with the right energy. For the latter, the cathode material has to be
chosen carefully; chemical reactions at the interface need to be controlled during
fabrication and additional monoatomic layers of other materials inserted. In either case,
with forward voltage applied, the Fermi level in the cathode is the same as the third band,
and reflections of ballistic electrons starting around the Fermi level are minimized by
matching the density of states and using the additional interface layers to let the reflected
waves interfere destructively. In GaAs the drift velocity in the third band is lower than in
the usual conduction band, so with a small increase in the forward voltage, more and
more electrons can reach the third band and current decreases. This creates a region of
negative incremental resistance in the voltage/current relationship.
Multiple Gunn diodes in a series circuit are unstable, because if one diode has a slightly
higher voltage drop across it, it will conduct less current, and the voltage drop will rise
further. In fact, even a single diode is internally unstable, and will develop small slices of
low conductivity and high field strength which move from the cathode to the anode. It is
not possible to balance the population in both bands, so there will always be thin slices of
high field strength in a general background of low field strength. So in practice, with a
small increase in forward voltage, a slice is created at the cathode, resistance increases,
the slice takes off, and when it reaches the anode a new slice is created at the cathode to
keep the total voltage constant. If the voltage is lowered, any existing slice is quenched
and resistance decreases again.

Applications
• Negative resistance behaviour can be used for amplification
• A common use is as a high-frequency, high-power signal source

A bias tee is needed to isolate the bias current from the high frequency oscillations. Since
this is a single-port device, there is no isolation between input and output.

Radio Amateur Use

By virtue of their low voltage operation, Gunn diodes can serve as microwave frequency
generators for very low powered (few-milliwatt) microwave transmitters. In the late
1970s they were being used by some radio amateurs in Britain. Designs for transmitters
were published in journals. They typically consisted simply of an approximately 3 inch
waveguide into which the diode was mounted. A low voltage (less than 12 volt) direct
current power supply that could be modulated appropriately was used to drive the diode.
The waveguide was blocked at one end to form a resonant cavity and the other end
ideally fed a parabolic dish.

Terahertz spectrometry
Terahertz spectrometry is spectrometry using terahertz radiation. In general,
spectrometry is the spectroscopic technique used to measure the properties of a system,
e.g. to assess the concentration or amount of a given species. However, terahertz
spectrometry goes beyond this traditional definition. In addition to measuring
concentration and/or other analytical properties, it can also identify the species via what
is known as a molecular signature spectrum. This signature spectrum is usually not
available from other kinds of spectroscopy because some molecular events happen only
in the terahertz range that can be probed only by terahertz spectrometry.

High electron mobility transistor


Cross section of a GaAs/AlGaAs/InGaAs pHEMT

Band structure in GaAs/AlGaAs heterojunction based HEMT

High electron mobility transistor (HEMT), also known as heterostructure FET
(HFET) or modulation-doped FET (MODFET), is a field effect transistor
incorporating a junction between two materials with different band gaps (i.e., a
heterojunction) as the channel instead of a doped region, as is generally the case for
MOSFET. A commonly used material combination is GaAs with AlGaAs, though there is
wide variation, dependent on the application of the device. Devices incorporating more
indium generally show better high-frequency performance, while in recent years, gallium
nitride HEMTs have attracted attention due to their high-power performance.

To allow conduction, semiconductors are doped with impurities which donate mobile
electrons (or holes). However, these electrons are slowed down through collisions with
the impurities (dopants) used to generate them in the first place. HEMTs avoid this
through the use of high mobility electrons generated using the heterojunction of a highly-
doped wide-bandgap n-type donor-supply layer (AlGaAs in our example) and a non-
doped narrow-bandgap channel layer with no dopant impurities (GaAs in this case).

The electrons generated in the thin n-type AlGaAs layer drop completely into the GaAs
layer, leaving the AlGaAs depleted, because the heterojunction created by the different
band-gap materials forms a quantum well (a steep canyon) in the conduction band on the
GaAs side. There the electrons can move quickly without colliding with any impurities,
because the GaAs layer is undoped, and they cannot escape the well. The effect of
this is to create a very thin layer of highly mobile conducting electrons with very high
concentration, giving the channel very low resistivity (or to put it another way, "high
electron mobility"). This layer is called a two-dimensional electron gas. As with all the
other types of FETs, a voltage applied to the gate alters the conductivity of this layer.
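A hedged numerical illustration of why high mobility means low channel resistivity: the sheet resistance of a two-dimensional electron gas is R_s = 1/(q·n_s·μ). The carrier density and mobilities below are representative order-of-magnitude values for GaAs/AlGaAs structures, not figures from this text:

```python
# Sheet resistance of a 2DEG channel versus a doped bulk channel.
# All numerical values are representative assumptions.
q = 1.602e-19        # electron charge, C
n_s = 1.0e16         # sheet density, electrons per m^2 (1e12 cm^-2)
mu_2deg = 1.0        # 2DEG mobility, m^2/(V*s)  (10,000 cm^2/(V*s))
mu_bulk = 0.1        # doped bulk mobility, m^2/(V*s), for contrast

def sheet_resistance(mu):
    """Sheet resistance in ohms per square: R_s = 1 / (q * n_s * mu)."""
    return 1.0 / (q * n_s * mu)

print(sheet_resistance(mu_2deg))   # undoped 2DEG channel
print(sheet_resistance(mu_bulk))   # ~10x higher with impurity scattering
```

Separating the dopants from the channel raises μ by roughly an order of magnitude at the same sheet density, which is exactly the "very low resistivity" advantage described above.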

Ordinarily, the two different materials used for a heterojunction must have the same
lattice constant (spacing between the atoms). As an analogy, imagine pushing together
two plastic combs with a slightly different spacing. At regular intervals, you'll see two
teeth clump together. In semiconductors, these discontinuities form deep-level traps, and
greatly reduce device performance.

A HEMT where this rule is violated is called a pHEMT or pseudomorphic HEMT. This
is achieved by using an extremely thin layer of one of the materials – so thin that the
crystal lattice simply stretches to fit the other material. This technique allows the
construction of transistors with larger bandgap differences than otherwise possible,
giving them better performance.

Another way to use materials of different lattice constants is to place a buffer layer
between them. This is done in the mHEMT or metamorphic HEMT, an advancement of
the pHEMT. The buffer layer is made of AlInAs, with the indium concentration graded
so that it can match the lattice constant of both the GaAs substrate and the GaInAs
channel. This brings the advantage that practically any Indium concentration in the
channel can be realized, so the devices can be optimized for different applications (low
indium concentration provides low noise; high indium concentration gives high gain).

Applications are similar to those of MESFETs – microwave and millimeter wave
communications, imaging, radar, and radio astronomy – any application where high gain
and low noise at high frequencies are required. HEMTs have shown current gain to
frequencies greater than 600 GHz and power gain to frequencies greater than 1 THz.
(Heterojunction bipolar transistors were demonstrated at current gain frequencies over
600 GHz in April 2005.) Numerous companies worldwide develop and manufacture
HEMT-based devices. These can be discrete transistors but are more usually in the form
of a 'monolithic microwave integrated circuit' (MMIC). HEMTs are found in many types
of equipment ranging from cellphones and DBS receivers to electronic warfare systems
such as radar and for radio astronomy.

The invention of the HEMT is usually attributed to Takashi Mimura (三村 高志)
(Fujitsu, Japan). In America, Ray Dingle and his co-workers in Bell Laboratories also
played an important role in the invention of the HEMT. In Europe, Daniel
Delagebeaudeuf and Trong Linh Nuyen from Thomson-CSF (France) filed a patent on
this device on 28 March 1979.

Induced high electron mobility transistor


In contrast to a modulation-doped HEMT, an induced high electron mobility transistor
provides the flexibility to tune different electron densities with a top gate, since the
charge carriers are "induced" to the 2DEG plane rather than created by dopants. The
absence of a doped layer enhances the electron mobility significantly when compared to
their modulation-doped counterparts.

This level of cleanliness provides opportunities to perform research into the field of
Quantum Billiard for quantum chaos studies, or applications in ultra stable and ultra
sensitive electronic devices.

Researchers from the Quantum Electronic Devices Group (QED) at the Condensed
Matter Physics Department, School of Physics at the University of New South Wales
have created both n-type and p-type HEMT for studying fundamental quantum physics of
electronic devices.
