Energy Lecture PDF Combined PDF
The CDM, defined in Article 12 of the Protocol, was intended to meet two objectives:
∙ (1) to assist parties not included in Annex I in achieving sustainable development and in contributing
to the ultimate objective of the United Nations Framework Convention on Climate Change
(UNFCCC), which is to prevent dangerous climate change; and
∙ (2) to assist parties included in Annex I in achieving compliance with their quantified emission
limitation and reduction commitments (greenhouse gas (GHG) emission caps). [3]
"Annex I" parties are the countries listed in Annex I of the treaty, the industrialized countries.
Non-Annex I parties are developing countries.
The CDM addresses the second objective by allowing the Annex I countries to meet part of their
emission reduction commitments under the Kyoto Protocol by buying Certified Emission
Reduction units from CDM emission reduction projects in developing countries (Carbon Trust,
2009, p. 14). Both the projects and the issuance of CER units are subject to approval to ensure
that these emission reductions are real and "additional".[4] The CDM is supervised by the CDM
Executive Board (CDM EB) under the guidance of the Conference of the Parties (COP/MOP) of
the United Nations Framework Convention on Climate Change (UNFCCC).
The CDM allows industrialized countries to buy CERs and to invest in emission reductions
where it is cheapest globally (Grubb, 2003, p. 159). Between 2001, the first year in which CDM
projects could be registered, and 7 September 2012, the CDM issued 1 billion Certified
Emission Reduction units.[5] As of 1 June 2013, 57% of all CERs had been issued for projects
based on destroying either HFC-23 (38%) or N2O (19%).[6] Carbon capture and storage (CCS)
was included in the CDM carbon offsetting scheme in December 2011.[7][8]
However, a number of weaknesses of the CDM have been identified (World Bank, 2010, pp.
265-267). Several of these issues were addressed by the new Program of Activities (PoA),
which moves to approving 'bundles' of projects instead of accrediting each project individually.
In 2012, the report Climate Change, Carbon Markets and the CDM: A Call to Action said
governments urgently needed to address the future of the CDM. It suggested the CDM was in
danger of collapse because of the low price of carbon and the failure of governments to
guarantee its existence into the future. Writing on the website of the Climate & Development
Knowledge Network, Yolanda Kakabadse, a member of the investigating panel for the report
and founder of Fundación Futuro Latinoamericano, said a strong CDM is needed to support the
political consensus essential for future climate progress. "Therefore we must do everything in
our hands to keep it working," she said. [9]
History
The clean development mechanism is one of the "flexibility mechanisms" defined in the Kyoto
Protocol. The flexibility mechanisms were designed to allow Annex B countries to meet their
emission reduction commitments with reduced impact on their economies (IPCC, 2007).[1] The
flexibility mechanisms were introduced into the Kyoto Protocol by the US government.
Developing countries were highly skeptical of and fiercely opposed to the flexibility mechanisms
(Carbon Trust, 2009, p. 6).[4]
Purpose
The purpose of the CDM is to promote clean development in developing countries, i.e., the
"non-Annex I" countries (countries that aren't listed in Annex I of the Framework Convention).
The CDM is one of the Protocol's "project-based" mechanisms, in that the CDM is designed to
promote projects that reduce emissions. The CDM is based on the idea of emission reduction
"production" (Toth et al., 2001, p. 660). These reductions are "produced" and then measured
against a hypothetical "baseline" of emissions.[10] The baseline emissions are the emissions that
are predicted to occur in the absence of a particular CDM project. CDM projects are "credited"
against this baseline, in the sense that developing countries gain credit for producing these
emission cuts.
The economic basis for including developing countries in efforts to reduce emissions is that
emission cuts are thought to be less expensive in developing countries than in developed
countries (Goldemberg et al., 1996, p. 30; Grubb, 2003, p. 159).[11][5] Developing countries could
therefore have a very large influence on future efforts to limit total global emissions (Fisher et al.,
2007). The CDM is designed to start developing countries off on a path towards less pollution.[13]
The Netherlands, for example, counted Dutch companies' purchases of European Union Emission
Trading Scheme allowances from companies in other countries as part of its domestic actions.
The CDM gained momentum in 2005 after the Kyoto Protocol took effect. Before the Protocol
entered into force, investors had considered this a key risk factor. The initial years of
operation yielded fewer CDM credits than supporters had hoped for, as parties did not provide
sufficient funding to the EB, which left it understaffed.
The Adaptation Fund was established to finance concrete adaptation projects and programmes
in developing countries that are parties to the Kyoto Protocol. The Fund is to be financed with a
share of proceeds from clean development mechanism (CDM) project activities and to receive
funds from other sources.[14]
CDM project process
Outline
An industrialised country that wishes to get credits from a CDM project must obtain the consent
of the developing country hosting the project and their agreement that the project will contribute
to sustainable development. Then, using methodologies approved by the CDM Executive Board
(EB), the applicant industrialised country must make the case that the carbon project would not
have happened anyway (establishing additionality), and must establish a baseline estimating the
future emissions in absence of the registered project. The case is then validated by a third party
agency, called a Designated Operational Entity (DOE), to ensure the project results in real,
measurable, and long-term emission reductions. The EB then decides whether or not to register
(approve) the project. If a project is registered and implemented, the EB issues credits, called
Certified Emission Reductions (CERs, commonly known as carbon credits), where each unit is
equivalent to the reduction of one metric tonne of CO2e (CO2 or its equivalent), to project
participants based on the monitored difference between the baseline and the actual emissions,
verified by the DOE.
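The arithmetic behind issuance is simple: one credit per tonne of CO2e reduced relative to the baseline. A minimal sketch in Python (the function name and figures are illustrative, not part of the CDM rules):

```python
def cers_issued(baseline_tonnes_co2e: float, monitored_tonnes_co2e: float) -> int:
    """CERs = verified baseline emissions minus actual project emissions,
    floored at zero (no credits are issued for an emissions increase)."""
    reduction = baseline_tonnes_co2e - monitored_tonnes_co2e
    return max(0, int(reduction))  # issued in whole-tonne units

# A project with a 120,000 tCO2e/yr baseline that actually emitted
# 45,000 tCO2e would be issued 75,000 CERs for that period:
print(cers_issued(120_000, 45_000))  # 75000
```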
Additionality
To avoid giving credits to projects that would have happened anyway ("freeriders"), specified
rules ensure the additionality of the proposed project, that is, ensure the project reduces
emissions more than would have occurred in the absence of the intervention created by the
CDM. At present, the CDM Executive Board deems a project additional if its proponents can
document that realistic alternative scenarios to the proposed project would be more
economically attractive, or that the project faces barriers that the CDM helps it overcome.[15]
Current guidance from the EB is available at the UNFCCC website.[16]
The determination of additionality and the calculation of emission reductions depend on the
emissions that would have occurred without the project minus the emissions of the project.
Accordingly, the CDM process requires an established baseline or comparative emission
estimate. The construction of a project baseline often depends on hypothetical scenario
modeling, and may be estimated through reference to emissions from similar activities and
technologies in the same country or other countries, or to actual emissions prior to project
implementation. The partners involved in the project could have an interest in establishing a
baseline with high emissions, which would yield a risk of awarding spurious credits.
Independent third party verification is meant to avoid this potential problem.
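The investment-comparison test described above can be sketched as a net-present-value calculation: a project is plausibly additional if it is less attractive than the baseline alternative without CER revenue, but becomes viable once CER sales are counted. All cashflows and the discount rate below are invented for illustration:

```python
def npv(cashflows, rate):
    """Net present value of yearly cashflows, first entry at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

RATE = 0.10
baseline_alt = [0, 30, 30, 30, 30]        # continue with the existing plant
project_plain = [-200, 40, 40, 40, 40]    # proposed project, no CER revenue
project_cdm = [-200, 100, 100, 100, 100]  # same project with CER sales added

# Additional: unattractive on its own, but viable once CERs are counted.
additional = (npv(project_plain, RATE) < npv(baseline_alt, RATE)
              and npv(project_cdm, RATE) > npv(baseline_alt, RATE))
print(additional)  # True
```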
Methodologies
Any proposed CDM project has to use an approved baseline and monitoring methodology to be
validated, approved and registered. A baseline methodology sets out the steps to determine the
baseline within certain applicability conditions, whilst a monitoring methodology sets out the
steps to determine the monitoring parameters, quality assurance and equipment to be used, in
order to obtain the data needed to calculate the emission reductions. The approved
methodologies are all coded:[17]
AM - Approved Methodology
ACM - Approved Consolidated Methodology
AMS - Approved Methodology for Small Scale Projects
ARAM - Afforestation and Reforestation Approved Methodologies
All baseline methodologies approved by the Executive Board are publicly available, along with
relevant guidance, on the UNFCCC CDM website.[18] If a DOE determines that a proposed project
activity intends to use a new baseline methodology, it shall, prior to the submission for
registration of this project activity, forward the proposed methodology to the EB for review, i.e.
consideration and approval, if appropriate.[19]
Economics
According to Burniaux et al. (2009, p. 37), crediting mechanisms like the CDM could play three
important roles in climate change mitigation:[20]
∙ Improve the cost-effectiveness of GHG mitigation policies in developed countries.
∙ Help to reduce "leakage" (carbon leakage) of emissions from developed to developing countries.
Leakage is where mitigation actions in one country or economic sector result in another country's
or sector's emissions increasing, e.g., through relocation of polluting industries from Annex I to
non-Annex I countries (Barker et al., 2007).[21]
∙ Transfer income to developing countries: the purchase of CERs amounts to an income transfer
to non-Annex I countries (Burniaux et al., 2009, p. 40).
Additionality is, however, difficult to prove and remains the subject of vigorous debate.[15]
Burniaux et al. (2009) commented on the large transaction costs of establishing additionality.
Assessing additionality has created delays (bottlenecks) in approving CDM projects. According
to the World Bank (2010), there are significant constraints to the continued growth of the CDM
to support mitigation in developing countries.
Incentives
The CDM rewards emissions reductions, but does not penalize emission increases (Burniaux
et al., 2009, p. 41). It therefore comes close to being an emissions reduction subsidy. This can
create a perverse incentive for firms to raise their emissions in the short-term, with the aim of
getting credits for reducing emissions in the long-term.
Another difficulty is that the CDM might reduce the incentive for non-Annex I countries to cap
their emissions. This is because most developing countries benefit more from a
well-functioning crediting mechanism than from a world emissions trading scheme (ETS),
where their emissions are capped. This is true except in cases where the allocation of
emissions rights (i.e., the amount of emissions that each country is allowed to emit) in the ETS
is particularly favourable to developing countries.
Local resistance
While the "C" in CDM stands for "clean", most projects might better be described as big: large
hydropower, HFC destruction, waste-to-energy and clean coal projects together account for the
majority of credits generated through the CDM. The argument in favor of the CDM is that it brings
development to the South. However, on every continent the large-scale development it stands for
is resisted by local people. A global coalition of researchers published a large report on African
civil society resistance to CDM projects across the continent.[24] In New Delhi, India, a grassroots
movement of wastepickers is resisting a CDM project in what its authors call 'the waste war' in
Delhi.[25] In Panama, a CDM project is blocking peace talks between the Panamanian
government and the indigenous Ngöbe-Buglé people.[26]
Civil society groups and researchers in both North and South have complained for years that
most CDM projects benefit big industries while doing harm to excluded people. As local protests
against CDM projects arise on every continent, the notion that the CDM 'brings development to
the South' is contested.
Market deflation
Most of the demand for CERs from the CDM comes from the European Union Emissions
Trading Scheme, which is the largest carbon market. In July 2012, the market price for CERs
fell to a new record low of €2.67 a tonne, a drop of about 70% in a year. Analysts attributed the
low CER price to lower prices for European Union emission allowances, an oversupply of EU
emission allowances, and the slowing European economy.[27]
In September 2012, The Economist described the CDM as a "complete disaster in the
making" and "in need of a radical overhaul". Carbon prices, including prices for CERs, had
collapsed from $20 a tonne in August 2008 to below $5 in response to
the Eurozone debt crisis reducing industrial activity and the over-allocation of emission
allowances under the European Union Emissions Trading Scheme.[28] The Guardian reported that
the CDM had "essentially collapsed" due to the prolonged downward trend in the price of CERs,
which had traded for as much as $20 (£12.50) a tonne before the global financial crisis but had
fallen to less than $3.[29] With such low CER prices, potential projects were not commercially
viable. In October 2012, CER prices fell to a new low of €1.36 a metric tonne on the London ICE
Futures Europe exchange.[30] In October 2012 Thomson Reuters Point Carbon calculated that the
oversupply of units from the Clean Development Mechanism and Joint Implementation would be
1,400 million units for the period up to 2020, and predicted that Certified Emission Reduction
(CER) prices would drop from €2 to 50 cents.[31] On 12 December 2012 CER prices reached
another record low of 31 cents.[32] Bloomberg reported that Certified Emission Reduction prices
had declined by 92 percent to 39 cents each over 2012.[33]
Financial issues
With costs of emission reduction typically much lower in developing countries than in
industrialised countries, industrialised countries can comply with their emission reduction targets
at much lower cost by receiving credits for emissions reduced in developing countries as long as
administration costs are low.
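As a sketch of that comparison, the hypothetical per-tonne figures below (domestic abatement cost, CER price, administration cost) show when buying CERs is the cheaper compliance route:

```python
def compliance_cost(tonnes, domestic_cost_per_t, cer_price, admin_cost_per_t):
    """Cheapest way to cover `tonnes` of required reductions: abate at home,
    or buy CERs at the market price plus a per-tonne administration cost."""
    via_cdm = cer_price + admin_cost_per_t
    route = "CDM" if via_cdm < domestic_cost_per_t else "domestic"
    return tonnes * min(domestic_cost_per_t, via_cdm), route

# Illustrative numbers: domestic abatement at EUR 45/t vs CERs at
# EUR 15/t plus EUR 5/t administration, for 1 Mt of required reductions.
cost, route = compliance_cost(1_000_000, domestic_cost_per_t=45.0,
                              cer_price=15.0, admin_cost_per_t=5.0)
print(route, cost)  # CDM 20000000.0
```

If administration costs rose above the gap between the two prices, the domestic route would win, which is the point the paragraph makes.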
The IPCC has projected GDP losses for OECD Europe with full use of the CDM and Joint
Implementation at between 0.13% and 0.81% of GDP, versus 0.31% to 1.50% with only
domestic action.[34]
While there would always be some cheap domestic emission reductions available in Europe, the
cost of switching from coal to gas could be in the order of €40-50 per tonne of CO2 equivalent.
Certified Emission Reductions from CDM projects were in 2006 traded on a forward basis for
between €5 and €20 per tonne of CO2 equivalent. The price depends on the distribution of risk
between seller and buyer. The seller could get a very good price if it agrees to bear the risk that
the project's baseline and monitoring methodology is rejected; that the host country rejects the
project; that the CDM Executive Board rejects the project; that the project for some reason
produces fewer credits than planned; or that the buyer doesn't get CERs at the agreed time if
the international transaction log (the technical infrastructure ensuring international transfer of
carbon credits) is not in place by then. The seller can usually only take these risks if the
counterparty is deemed very reliable, as rated by international rating agencies.
Mitigation finance
The revenues of the CDM constitute the largest source of mitigation finance to developing
countries to date (World Bank, 2010, pp. 261-262).[23] Over the 2001 to 2012 period, CDM
projects could raise $18 billion ($15 billion to $24 billion) in direct carbon revenues for developing
countries; actual revenues will depend on the price of carbon. It is estimated that some $95
billion in clean energy investment benefited from the CDM over the 2002-08 period.
Adaptation finance
The CDM is the main source of income for the UNFCCC Adaptation Fund, which was
established in 2007 to finance concrete adaptation projects and programmes in developing
countries that are parties to the Kyoto Protocol (World Bank, 2010, pp. 262-263).[23] The CDM is
subject to a 2% levy, which could raise between $300 million and $600 million over the 2008-12
period. The actual amount raised will depend on the carbon price.
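The quoted range can be reproduced with simple arithmetic: a 2% share of a hypothetical 1.5 billion CERs, monetised at $10 or $20 per tonne, yields roughly $300 million to $600 million (the CER volume here is invented for illustration):

```python
LEVY_RATE = 0.02  # the 2% share of proceeds mentioned above

def adaptation_levy_value(cers: float, cer_price_usd: float) -> float:
    """Dollar value of the Adaptation Fund levy on a volume of CERs."""
    return cers * LEVY_RATE * cer_price_usd

# 1.5 billion CERs monetised at $10/t gives ~$300m; at $20/t, ~$600m:
for price in (10.0, 20.0):
    print(adaptation_levy_value(1.5e9, price))
```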
CDM projects
Early projects submitted under the CDM amounted to less than 100 MtCO2e of projected savings
by 2012 (Carbon Trust, 2009, pp. 18-19).[4] The EU ETS started in January 2005, and the
following month saw the Kyoto Protocol enter into force. The EU ETS allowed firms to comply
with their commitments by buying offset credits, and thus created a perceived value for projects.
The Kyoto Protocol set the CDM on a firm legal footing.
By the end of 2008, over 4,000 CDM projects had been submitted for validation, and of those,
over 1,000 were registered by the CDM Executive Board, and were therefore entitled to be
issued CERs (Carbon Trust, 2009, p. 19). In 2010, the World Bank estimated that in 2012, the
largest potential for production of CERs would be from China (52% of total CERs) and India
(16%) (World Bank, 2010, p. 262).[23] CERs produced in Latin America and the Caribbean would
make up 15% of the potential total, with Brazil as the largest producer in the region (7%).
By 14 September 2012, 4,626 projects had been registered by the CDM Executive Board as
CDM projects. These projects were expected to result in the issue of 648,232,798 certified
emission reductions.[36] By 14 September 2012, the CDM Board had issued 1 billion CERs, 60%
of which originated from projects in China.[37] India, the Republic of Korea and Brazil were
issued with 15%, 9% and 7% of the total CERs respectively.[38]
The Himachal Pradesh Reforestation Project is claimed to be the world's largest CDM project.[39]
Transportation
There are currently 29 transportation projects registered; the most recent was registered on
February 25, 2013 and is hosted in China.[40]
Destruction of HFC-23
Some CDM projects remove or destroy industrial gases, such as hydrofluorocarbon 23
(HFC-23) and nitrous oxide (N2O). HFC-23 is a potent greenhouse gas (GHG) and is a
byproduct from the production of the refrigerant
gas chlorodifluoromethane (HCFC-22). HFC-23 is estimated to have a global warming effect
11,000 times that of carbon dioxide, so destroying a tonne of HFC-23 earns the refrigerant
manufacturer 11,000 certified emission reduction units.[4][41]
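The crediting arithmetic follows directly from the global-warming-potential figure cited above; the helper function below is purely illustrative:

```python
GWP_HFC23 = 11_000  # CO2-equivalence factor quoted in the text

def cers_for_destruction(tonnes_of_gas: float, gwp: float) -> float:
    """CERs credited for destroying a quantity of a high-GWP gas."""
    return tonnes_of_gas * gwp

print(cers_for_destruction(1.0, GWP_HFC23))   # 11000.0
# A plant destroying 25 t of HFC-23 a year would be credited 275,000 CERs:
print(cers_for_destruction(25.0, GWP_HFC23))  # 275000.0
```

This leverage is exactly why the projects were so profitable, as the following paragraphs describe.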
In 2009, the Carbon Trust estimated that industrial gas projects, such as those limiting HFC-23
emissions, would contribute about 20% of the CERs issued by the CDM in 2012. The Carbon
Trust expressed the concern that projects for destroying HFC-23 were so profitable that coolant
manufacturers could be building new factories to produce the coolant gas (Carbon Trust, 2009,
p. 60).[4] In September 2010, Sandbag estimated that in 2009 59% of the CERs used as offsets
in the European Union Emissions Trading Scheme originated from HFC-23 projects.[42]
An example is the Plascon plasma arc plant installed by Quimobásicos S.A. de C.V. in
Monterrey, Mexico, to eliminate HFC-23, a byproduct of the production of R-22 refrigerant gas.
From 2005 to June 2012, 19 manufacturers of refrigerants (11 in China, 5 in India, and one
each in Argentina, Mexico and South Korea) were issued with 46% of all the certified emission
reduction units from the CDM.[43] David Hanrahan, the technical director of IDEAcarbon,
believes each plant would probably have earned an average of $20 million to $40 million a year
from the CDM. The payments also incentivise the increased production of the ozone-depleting
refrigerant HCFC-22, and discourage substitution of HCFC-22 with less harmful refrigerants.[41]
In 2007 the CDM stopped accepting new refrigerant manufacturers into the scheme. In 2011, the
CDM renewed contracts with the nineteen manufacturers on the condition that claims for
HFC-23 destruction would be limited to 1 percent of their coolant production. Even so, in 2012,
18 percent of all CERs issued were expected to go to the 19 coolant plants, compared with 12
percent to 2,372 wind power plants and 0.2 percent to 312 solar projects.[41]
In January 2011, the European Union Climate Change Committee banned the use of HFC-23
CERs in the European Union Emissions Trading Scheme from 1 May 2013. The ban includes
nitrous oxide (N2O) from adipic acid production. The reasons given
were the perverse incentives, the lack of additionality, the lack of environmental integrity, the
undermining of the Montreal Protocol, costs and ineffectiveness, and the distorting effect of a
few projects in advanced developing countries receiving too many CERs.[44] From 23 December
2011, CERs from HFC-23 and N2O destruction projects were banned from use in the New
Zealand Emissions Trading Scheme, unless they had been purchased under future delivery
contracts entered into prior to 23 December 2011. The use of the future delivery contracts
ended in June 2013.[45]
As of 1 June 2013, the CDM had issued 505,125 CERs, or 38% of all CERs issued, to 23
HFC-23 destruction projects. A further 19% (or 255,666 CERs) had been issued to 108 N2O
destruction projects.[46]
Barriers
The World Bank (n.d., p. 12) described a number of barriers to the use of the CDM in least
developed countries (LDCs), which have experienced lower participation in the CDM to date.[47]
Four CDM decisions were highlighted as having a disproportionately negative impact on LDCs:
∙ Suppressed demand: Baseline calculations for LDCs are low, meaning that projects cannot
generate sufficient carbon finance to have an impact.
∙ Treatment of projects that replace non-renewable biomass: A decision effectively halved the
emission reduction potential of these projects. This has particularly affected Sub-Saharan Africa
and projects in poor communities, where firewood, often from non-renewable sources, is
frequently used as a fuel for cooking and heating.
∙ Treatment of forestry projects and exclusion of agriculture under the CDM: These sectors are
more important for LDCs than for middle-income countries. Credits from forestry projects are penalized
under the CDM, leading to depressed demand and price.
∙ Transaction costs and CDM process requirements: These are geared more towards the most
advanced developing countries, and do not work well for the projects most often found in LDCs.
Environmental hazard
An environmental hazard is a substance, a state or an event which has the potential to threaten the surrounding
natural environment or adversely affect people's health, including pollution and natural disasters such as storms and
earthquakes.
It can be any single or combination of toxic chemical, biological, or physical agents in the environment, resulting from
human activities or natural processes, that may impact the health of exposed subjects, including pollutants such as
heavy metals, pesticides, biological contaminants, toxic waste, and industrial and home chemicals.[1]
Human-made hazards, while not immediately health-threatening, may eventually prove detrimental to human
well-being, because deterioration in the environment can produce secondary, unwanted negative effects on the
human ecosphere. The effects of water pollution may not be immediately visible because of a sewage system that
helps drain off toxic substances. If those substances turn out to be persistent (e.g. persistent organic pollutants),
however, they will literally be fed back to their producers via the food chain: plankton -> edible fish -> humans. In that
respect, a considerable number of the environmental hazards listed below are man-made (anthropogenic) hazards.
Hazards can be categorized in four types:
1. Chemical
2. Physical (mechanical, etc.)
3. Biological
4. Psychosocial.
Chemical
Chemical hazards are defined in the Globally Harmonized System and in the European Union chemical regulations.
They arise from chemical substances that cause significant damage to the environment. The label is particularly
applicable to substances with aquatic toxicity. An example is zinc oxide, a common paint pigment, which is extremely
toxic to aquatic life.
Toxicity or other hazards do not in themselves imply an environmental hazard, because elimination by sunlight
(photolysis), water (hydrolysis), or organisms (biological elimination) neutralizes many reactive or poisonous
substances. Persistence towards these elimination mechanisms, combined with toxicity, gives a substance the ability
to do damage in the long term. Also, the lack of immediate human toxicity does not mean the substance is environmentally
non-hazardous. For example, tanker truck-sized spills of substances such as milk can cause a lot of damage in the
local aquatic ecosystems: the added biological oxygen demand causes rapid eutrophication, leading to anoxic
conditions in the water body.
Eutrophication (from Greek eutrophos, "well-nourished"),[1] also called dystrophication or hypertrophication, occurs
when a body of water becomes overly enriched with minerals and nutrients which induce excessive growth of
algae.[2] This process may result in oxygen depletion of the water body.[3] One example is an "algal bloom", a great
increase of phytoplankton in a water body in response to increased levels of nutrients. Eutrophication is often induced
by the discharge of nitrate- or phosphate-containing detergents, fertilizers, or sewage into an aquatic system. Lake
eutrophication has become a global water pollution problem. Chlorophyll-a, total nitrogen, total phosphorus, chemical
oxygen demand and Secchi depth are the main indicators used to evaluate lake eutrophication level.[4]
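These indicators are often combined into a single score; one widely used example (not described in the text above, so treat it as an aside) is Carlson's Trophic State Index, whose standard forms for Secchi depth, chlorophyll-a and total phosphorus are:

```python
import math

def tsi_secchi(depth_m: float) -> float:
    """Carlson TSI from Secchi transparency in metres."""
    return 60.0 - 14.41 * math.log(depth_m)

def tsi_chlorophyll(chl_ug_per_l: float) -> float:
    """Carlson TSI from chlorophyll-a concentration in micrograms/L."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

def tsi_total_p(tp_ug_per_l: float) -> float:
    """Carlson TSI from total phosphorus in micrograms/L."""
    return 14.42 * math.log(tp_ug_per_l) + 4.15

# A lake with 2 m Secchi transparency scores about 50, near the
# conventional mesotrophic/eutrophic boundary:
print(round(tsi_secchi(2.0), 1))  # 50.0
```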
Biochemical oxygen demand (BOD) is the amount of dissolved oxygen needed (i.e. demanded) by aerobic
biological organisms to break down the organic material present in a given water sample at a certain temperature
over a specific time period. The BOD value is most commonly expressed in milligrams of oxygen consumed per litre
of sample during 5 days of incubation at 20 °C, and is often used as a surrogate for the degree of organic pollution of
water.[1]
BOD reduction is used as a gauge of the effectiveness of wastewater treatment plants. BOD of wastewater effluents
is used to indicate the short-term impact on the oxygen levels of the receiving water.
BOD analysis is similar in function to chemical oxygen demand (COD) analysis, in that both measure the amount
of organic compounds in water. However, COD analysis is less specific, since it measures everything that can be
chemically oxidized, rather than just levels of biologically oxidized organic matter.
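The standard five-day dilution method can be sketched as a one-line calculation; the dissolved-oxygen readings below are invented:

```python
def bod5(do_initial_mg_l: float, do_final_mg_l: float, dilution_fraction: float) -> float:
    """Five-day BOD of the undiluted sample in mg/L.
    `dilution_fraction` = sample volume / total bottle volume."""
    return (do_initial_mg_l - do_final_mg_l) / dilution_fraction

# A 1:50 dilution whose dissolved oxygen falls from 8.8 to 4.3 mg/L
# over 5 days at 20 degC implies a heavily polluted sample:
print(round(bod5(8.8, 4.3, 1 / 50), 1))  # 225.0
```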
All hazards in this category are mainly anthropogenic,(1) although a number of natural carcinogens exist, and
chemical elements like radon and lead may occur in health-critical concentrations in the natural environment:
● Anthrax
● Antibiotic agents in animals destined for human consumption
● Arsenic - a contaminant of fresh water sources (water wells)
● Asbestos - carcinogenic
● DDT
● Carcinogens
● Dioxins
● Endocrine disruptors
● Explosive material
● Fungicides
● Furans
● Haloalkanes
● Heavy metals
● Herbicides
● Hormones in animals destined for human consumption
● Lead in paint
● Marine debris
● Mercury
● Mutagens
● Pesticides
● Polychlorinated biphenyls
● Radon and other natural sources of radioactivity
● Soil pollution
● Tobacco smoking
● Toxic waste
(1) Human impact on the environment or anthropogenic impact on the environment includes changes
to biophysical environments[1] and ecosystems, biodiversity, and natural resources[2][3] caused directly or indirectly by
humans, including global warming,[1][4] environmental degradation[1] (such as ocean acidification[1][5]), mass
extinction and biodiversity loss,[6][7][8][9] ecological crisis, and ecological collapse. Modifying the environment to fit
the needs of society is causing severe effects, which become worse as the problem of human overpopulation
continues.[10][11] Some human activities that cause damage (either directly or indirectly) to the environment on a
global scale include population growth,[12][13] overconsumption, overexploitation, pollution, and deforestation, to
name but a few. Some of these problems, including global warming and biodiversity loss, pose an existential risk to
the human race,[14][15] and human overpopulation exacerbates them.[16][17]
The term anthropogenic designates an effect or object resulting from human activity. The term was first used
in the technical sense by Russian geologist Alexey Pavlov, and it was first used in English by British
ecologist Arthur Tansley in reference to human influences on climax plant communities.[18]
The atmospheric
scientist Paul Crutzen introduced the term "Anthropocene" in the mid-1970s.[19] The term is sometimes used
in the context of pollution emissions that are produced from human activity since the start of the Agricultural
Revolution but also applies broadly to all major human impacts on the environment.[20] Many of the actions
taken by humans that contribute to a heated environment stem from the burning of fossil fuel from a variety
of sources, such as: electricity, cars, planes, space heating, manufacturing, or the destruction of forests.[21]
Physical
A physical hazard is a type of occupational hazard involving environmental agents that can cause harm with or
without contact. There are many types of physical hazards, some of which are as follows:
● Cosmic rays
● Drought
● Earthquake
● Electromagnetic fields
● E-waste
● Floods
● Fog
● Light pollution
● Lighting
● Lightning
● Noise pollution
● Quicksand
● Ultraviolet light
● Vibration
● X-rays
Biological
Biological hazards, also known as biohazards, refer to biological substances that pose a threat to the health of living
organisms, primarily that of humans. This can include medical waste or samples of a microorganism, virus or toxin
(from a biological source) that can affect human health.
● Allergies
● Arbovirus
● Avian influenza
● Bovine spongiform encephalopathy (BSE)
● Cholera
● Ebola
● Epidemics
● Food poisoning
● Malaria
● Molds
● Onchocerciasis (river blindness)
● Pandemics
● Pathogens
● Pollen for allergic people
● Rabies
● Severe acute respiratory syndrome (SARS)
● Sick building syndrome
See also: Toxicology and List of allergies
Psychosocial hazards
Psychosocial hazards include but aren't limited to stress, violence, and other workplace stressors. Work is generally
beneficial to mental health and personal wellbeing. It provides people with structure and purpose and a sense of
identity.
❖ Introduction to Technology
A steam turbine with the case opened; such turbines produce most of the electricity used today. Electricity
consumption and living standards are highly correlated.[1] Electrification is believed to be the most important
engineering achievement of the 20th century.
Technology ("science of craft", from Greek τέχνη, techne, "art, skill, cunning of hand"; and -λογία, -logia[2] ) is the sum
of techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of
objectives, such as scientific investigation. Technology can be the knowledge of techniques, processes, and the like,
or it can be embedded in machines to allow for operation without detailed knowledge of their workings. Systems (e.g.
machines) applying technology by taking an input, changing it according to the system's use, and then producing
an outcome are referred to as technology systems or technological systems.
The simplest form of technology is the development and use of basic tools. The prehistoric discovery of how to
control fire and the later Neolithic Revolution increased the available sources of food, and the invention of
the wheel helped humans to travel in and control their environment. Developments in historic times, including
the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed
humans to interact freely on a global scale.
Technology has many effects. It has helped develop more advanced economies (including today's global economy)
and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products known
as pollution and deplete natural resources to the detriment of Earth's environment. Innovations have always
influenced the values of a society and raised new questions in the ethics of technology. Examples include the rise of
the notion of efficiency in terms of human productivity, and the challenges of bioethics.
Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves
the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticize
the pervasiveness of technology, arguing that it harms the environment and alienates people; proponents of
ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to
society and the human condition.
The distinction between science, engineering, and technology is not always clear. Science is systematic knowledge
of the physical or material world gained through observation and experimentation.[16] Technologies are not usually
exclusively products of science, because they have to satisfy requirements such as utility, usability, and safety.[17]
Engineering is the goal-oriented process of designing and making tools and systems to exploit natural phenomena for
practical human means, often (but not always) using results and techniques from science. The development of
technology may draw upon many fields of knowledge, including scientific, engineering, mathematical, linguistic,
and historical knowledge, to achieve some practical result.
Technology is often a consequence of science and engineering, although technology as a human activity precedes
the two fields. For example, science might study the flow of electrons in electrical conductors by using
already-existing tools and knowledge. This new-found knowledge may then be used by engineers to create new tools
and machines such as semiconductors, computers, and other forms of advanced technology. In this sense, scientists
and engineers may both be considered technologists; the three fields are often considered as one for the purposes of
research and reference.[18]
The exact relations between science and technology, in particular, have been debated by scientists, historians, and
policymakers in the late 20th century, in part because the debate can inform the funding of basic and applied science.
In the immediate wake of World War II, for example, it was widely considered in the United States that technology
was simply "applied science" and that to fund basic science was to reap technological results in due time. An
articulation of this philosophy could be found explicitly in Vannevar Bush's treatise on postwar science policy, Science
– The Endless Frontier: "New products, new industries, and more jobs require continuous additions to knowledge of
the laws of nature ... This essential new knowledge can be obtained only through basic scientific research."[19] In the
late-1960s, however, this view came under direct attack, leading towards initiatives to fund science for specific tasks
(initiatives resisted by the scientific community). The issue remains contentious, though most analysts resist the
model that technology is a result of scientific research.[20][21]
❖ Appropriate technology
Appropriate technology is a movement (and its manifestations) encompassing technological choice and application
that is small-scale, affordable by locals, decentralized, labor-intensive, energy-efficient, environmentally sound,
and locally autonomous.[1]
It was originally articulated as intermediate technology by the economist Ernst Friedrich
"Fritz" Schumacher in his work Small Is Beautiful. Both Schumacher and many modern-day proponents of
appropriate technology also emphasize the technology as people-centered.[2]
Appropriate technology has been used to address issues in a wide range of fields. Well-known examples of
appropriate technology applications include: bike- and hand-powered water pumps (and other self-powered
equipment), the universal nut sheller, self-contained solar lamps and streetlights, and passive solar building designs.
Today appropriate technology is often developed using open source principles, which have led to open-source
appropriate technology (OSAT), and thus many of the plans for the technology can be freely found on
the Internet.[3][4] OSAT has been proposed as a new model of enabling innovation for sustainable development.[5][6]
Appropriate technology is most commonly discussed in its relationship to economic development and as an
alternative to technology transfer of more capital-intensive technology from industrialized nations to developing
countries.[2][7] However, appropriate technology movements can be found in both developing and developed countries.
In developed countries, the appropriate technology movement grew out of the energy crisis of the 1970s and focuses
mainly on environmental and sustainability issues.[8] Today the idea is multifaceted; in some contexts, appropriate
technology can be described as the simplest level of technology that can achieve the intended purpose, whereas in
others, it can refer to engineering that takes adequate consideration of social and environmental ramifications. The
facets are connected through robustness and sustainable living.
Indian ideological leader Mahatma Gandhi is often cited as the "father" of the appropriate technology movement.
Though the concept had not been given a name, Gandhi advocated for small, local and predominantly village-based
technology to help India's villages become self-reliant. He disagreed with the idea of technology that benefited a
minority of people at the expense of the majority or that put people out of work to increase profit.[2] In 1925 Gandhi
founded the All-India Spinners Association and in 1935 he retired from politics to form the All-India Village Industries
Association. Both organizations focused on village-based technology similar to the future appropriate technology
movement.[9]
China also implemented policies similar to appropriate technology during the reign of Mao Zedong and the
following Cultural Revolution. During the Cultural Revolution, development policies based on the idea of "walking on
two legs" advocated the development of both large-scale factories and small-scale village industries.[2]
E. F. Schumacher
Despite these early examples, Dr. Ernst Friedrich "Fritz" Schumacher is credited as the founder of the appropriate
technology movement. A well-known economist, Schumacher worked for the British National Coal Board for more
than 20 years, where he blamed the size of the industry's operations for its uncaring response to the harm black-lung
disease inflicted on the miners.[2] However it was his work with developing countries, such as India and Burma, which
helped Schumacher form the underlying principles of appropriate technology.
Schumacher first articulated the idea of "intermediate technology," now known as appropriate technology, in a 1962
report to the Indian Planning Commission in which he described India as long in labor and short in capital, calling for
an "intermediate industrial technology"[10] that harnessed India's labor surplus. Schumacher had been developing the
idea of intermediate technology for several years prior to the Planning Commission report. In 1955, following a stint
as an economic advisor to the government of Burma, he published the short paper "Economics in a Buddhist
Country," his first known critique of the effects of Western economics on developing countries.[10] In addition to
Buddhism, Schumacher also credited his ideas to Gandhi.
Initially, Schumacher's ideas were rejected by both the Indian government and leading development economists.
Spurred to action over concern the idea of intermediate technology would languish, Schumacher, George
McRobie, Mansur Hoda[11] and Julia Porter brought together a group of approximately 20 people to form
the Intermediate Technology Development Group (ITDG) in May 1965. Later that year, a Schumacher article
published in the Observer garnered significant attention and support for the group. In 1967, the group published
Tools for Progress: A Guide to Small-scale Equipment for Rural Development and sold 7,000 copies. ITDG also
formed panels of experts and practitioners around specific technological needs (such as building construction, energy
and water) to develop intermediate technologies to address those needs.[10] At a conference hosted by the ITDG in
1968 the term "intermediate technology" was discarded in favor of the term "appropriate technology" used today.
Intermediate technology had been criticized as suggesting the technology was inferior to advanced (or high)
technology and not including the social and political factors included in the concept put forth by the proponents.[2] In
1973, Schumacher described the concept of appropriate technology to a mass audience in his influential work Small
Is Beautiful: A Study of Economics As If People Mattered.
Growing trend
Between 1966 and 1975 the number of new appropriate technology organizations founded each year was three times
greater than the previous nine years. There was also an increase in organizations focusing on applying appropriate
technology to the problems of industrialized nations, particularly issues related to energy and the environment.[12] In
1977, the OECD identified in its Appropriate Technology Directory 680 organizations involved in the development and
promotion of appropriate technology. By 1980, this number had grown to more than 1,000. International agencies and
government departments were also emerging as major innovators in appropriate technology, indicating its
progression from a small movement fighting against the established norms to a legitimate technological choice
supported by the establishment. For example, the Inter-American Development Bank created a Committee for the
Application of Intermediate Technology in 1976 and the World Health Organization established the Appropriate
Technology for Health Program in 1977.[12]
Appropriate technology was also increasingly applied in developed countries. For example, the energy crisis of the
mid-1970s led to the creation of the National Center for Appropriate Technology (NCAT) in 1977 with an initial
appropriation of 3 million dollars from the U.S. Congress. The Center sponsored appropriate technology
demonstrations to "help low-income communities find better ways to do things that will improve the quality of life, and
that will be doable with the skills and resources at hand." However, by 1981 NCAT's funding agency, the Community
Services Administration, had been abolished. For several decades NCAT worked with the US departments of Energy
and Agriculture on contract to develop appropriate technology programs. Since 2005, NCAT's informational website
has no longer been funded by the US government.[13]
Appropriate technologies for electricity generation include:
● Photovoltaic (PV) solar panels and (large) concentrating solar power (CSP) plants. PV panels made
from low-cost photovoltaic cells, onto which light has first been concentrated by a luminescent solar
concentrator panel, are also a good option. Companies such as SolFocus make appropriate-technology CSP
plants that can be built partly from waste plastics polluting the surroundings.
● Solar thermal collector
● Wind power (home do-it-yourself turbines as well as larger-scale ones)
● Micro hydro and pico hydro[44]
● Human-powered handwheel generators[45]
● Other zero-emission generation methods
Some intermediate technologies include:
● Bioalcohols such as bioethanol, biomethanol and biobutanol. The first two require minor modifications to allow
them to be used in conventional gasoline engines; the third requires no modifications at all.
● Vegetable oils, which can be used only in internal combustion (diesel) engines. Biofuels are locally available in
many developing countries and can be cheaper than fossil fuels.
● Anaerobic digestion power plants
● Biogas is another potential source of energy, particularly where there is an abundant supply of waste organic
matter. A generator (running on biofuels) can be run more efficiently if combined with batteries and an inverter;
this adds significantly to capital cost but reduces running cost, and can potentially make this a much cheaper
option than the solar, wind and micro-hydro options.
● Dry animal dung fuel can also be used.
● Biochar is another similar energy source which can be obtained through charring of certain types of organic
material (e.g. hazelnut shells, bamboo, chicken manure, ...) in a pyrolysis unit.[46] A similar energy source is terra
preta nova.
● Chemurgy
Finally, urine can also be used as a basis to generate hydrogen (which is an energy carrier). Using urine, hydrogen
production is 332% more energy efficient than using water.[47]
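The 332% figure can be sanity-checked with a back-of-envelope calculation. The electrolysis voltages below (~1.23 V to split water, ~0.37 V to electrolyze the urea in urine) are commonly cited theoretical values and an assumption of this sketch, not figures from the text.

```python
# Energy per unit charge scales with voltage, so the ratio of the
# two electrolysis voltages approximates the efficiency advantage.
V_WATER = 1.23  # assumed thermodynamic voltage for water electrolysis (V)
V_UREA = 0.37   # assumed voltage for urea electrolysis (V)

ratio = V_WATER / V_UREA
print(f"urea electrolysis needs ~{ratio * 100:.0f}% less energy-intensive input")  # ~332%
```

The ratio 1.23 / 0.37 ≈ 3.32 is consistent with the quoted "332% more energy efficient" claim.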
Electricity distribution could be improved so as to make use of a more structured electricity line arrangement and
universal AC power plugs and sockets (e.g. the CEE 7/7 plug). In addition, a universal system of electricity
provisioning (e.g. a standard voltage and frequency, such as 230 V at 50 Hz) could be adopted, as well as perhaps a
better mains power system, e.g. through the use of special systems such as perfected single-wire earth returns
(e.g. Tunisia's MALT system, which features low costs and easy placement).[48][49]
Electricity storage (which is required for autonomous energy systems) can be provided through appropriate
technology solutions such as deep-cycle and car batteries (intermediate technology), long-duration flywheels,
electrochemical capacitors, compressed air energy storage (CAES), liquid nitrogen and pumped hydro.[50] Many
solutions for the developing world are sold as a single package, containing a (micro) electricity generation power
plant and energy storage. Such packages are called remote-area power supply (RAPS) systems.
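As a rough illustration of how the storage half of such a package might be sized, the sketch below estimates a deep-cycle battery bank for a small autonomous load. All load, autonomy, voltage and battery figures are illustrative assumptions, not values from the text.

```python
import math

# Assumed daily household load: lights, phone charging, a small appliance.
daily_load_wh = 1500        # watt-hours per day
autonomy_days = 2           # days of storage with no generation
depth_of_discharge = 0.5    # deep-cycle batteries shouldn't be fully drained
system_voltage = 12         # volts
battery_ah = 100            # amp-hours per battery

required_wh = daily_load_wh * autonomy_days / depth_of_discharge  # 6000 Wh
required_ah = required_wh / system_voltage                        # 500 Ah
n_batteries = math.ceil(required_ah / battery_ah)
print(n_batteries)  # 5
```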
LED Lamp with GU10 twist lock fitting, intended to replace halogen reflector lamps.
● White LEDs and a source of renewable energy (such as solar cells) are used by the Light Up the World
Foundation to provide lighting to poor people in remote areas, and provide significant benefits compared to
the kerosene lamps they replace. Certain other companies, such as Powerplus, also offer LED flashlights with
embedded solar cells.[51]
● Organic LEDs made by roll-to-roll production are another source of cheap light that was expected to become
commercially available at low cost by 2015.
● Compact fluorescent lamps (as well as regular fluorescent lamps and LED-lightbulbs) can also be used as
appropriate technology. Although they are less environmentally friendly than LED-lights, they are cheaper and
still feature relative high efficiency (compared to incandescent lamps).
● The Safe bottle lamp is a safer kerosene lamp designed in Sri Lanka. Lamps such as these provide relatively
long-lasting, portable lighting. The safety comes from a secure screw-on metal lid, and two flat sides which prevent
it from rolling if knocked over. An alternative to fuel or oil-based lanterns is the Uday lantern, developed by Philips
as part of its Lighting Africa project (sponsored by the World Bank Group).[52]
● The Faraday flashlight is an LED flashlight which operates on a capacitor. Recharging can be done by manual
winching or by shaking, hereby avoiding the need of any supplementary electrical system.
● Finally, HID lamps can be used for lighting operations where regular LED lighting or other lamps will not suffice,
for example in car headlights. Due to their high efficiency they are quite environmentally friendly, yet costly, and
their production process still requires polluting materials.
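The efficiency trade-off among the lamp types above can be made concrete with assumed luminous efficacies (lumens per watt). The figures below are typical ballpark values, not numbers from the text, and real products vary widely.

```python
# Assumed typical luminous efficacies in lumens per watt.
lamps_lm_per_w = {
    "incandescent": 14,
    "CFL": 60,
    "LED": 90,
}

target_lumens = 800       # roughly the output of a 60 W incandescent bulb
hours_per_year = 4 * 365  # assume 4 hours of light per evening

for lamp, efficacy in lamps_lm_per_w.items():
    watts = target_lumens / efficacy
    kwh = watts * hours_per_year / 1000
    print(f"{lamp}: {watts:.0f} W, {kwh:.0f} kWh/year")
```

Under these assumptions an LED needs roughly 9 W where an incandescent lamp needs about 57 W for the same light output, which is why LEDs and CFLs are attractive where electricity is scarce or expensive.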
In his article, Jared Bernstein, a Senior Fellow at the Center on Budget and Policy Priorities,[69] questions the
widespread idea that automation, and more broadly technological advances, have mainly contributed to this
growing labor market problem. His thesis appears to be a third way between optimism and skepticism. Essentially, he
stands for a neutral approach to the linkage between technology and American issues concerning unemployment and
declining wages.
He uses two main arguments to defend his point. First, because of recent technological advances, an increasing
number of workers are losing their jobs. Yet, scientific evidence fails to clearly demonstrate that technology has
displaced so many workers that it has created more problems than it has solved. Indeed, automation threatens
repetitive jobs, but higher-end jobs are still necessary because they complement technology, and manual jobs that
"require flexibility, judgment and common sense"[70] remain hard to replace with machines. Second, studies have not
shown clear links between recent technological advances and the wage trends of the last decades.
Therefore, according to Bernstein, instead of focusing on technology and its hypothetical influences on current
American increasing unemployment and declining wages, one needs to worry more about "bad policy that fails to
offset the imbalances in demand, trade, income, and opportunity."[70]
People who use both the Internet and mobile devices in excessive quantities are likely to
experience fatigue and exhaustion as a result of disruptions in their sleeping patterns. Continuing studies have
shown that increased BMI and weight gain are associated with people who spend long hours online and do not
exercise frequently.[71] Heavy Internet use is also reflected in the lower school grades of those who use it in
excessive amounts.[72] It has also been noted that the use of mobile phones whilst driving has increased the
occurrence of road accidents, particularly amongst teen drivers. Statistically, teens reportedly have four times the
number of road traffic incidents of those who are 20 years or older, and a very high percentage of adolescents write
(81%) and read (92%) texts while driving.[73] In this context, mass media and technology have a negative impact on
people, on both their mental and physical health.
The universal nut sheller (UNS; formerly called the Malian peanut sheller) is a simple hand-operated machine
capable of shelling up to 57 kilograms (126 lb) of raw, sun-dried peanuts per hour.[1]
It requires less than US$10 in materials to make, and is made of concrete poured into two
simple fibreglass moulds, some metal parts, one wrench, and any piece of rock or wood that can serve as a hammer.
It accepts a wide range of nut sizes without adjustment. Operators can make necessary adjustments quickly and
easily. It is estimated that one Universal Nut Sheller will serve the needs of a village of 2,000 people. The life
expectancy of the machine is around 25 years.[2]
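The throughput, cost and lifetime figures quoted above lend themselves to a quick back-of-envelope estimate. The one-tonne harvest below is an assumed example quantity, not a figure from the text.

```python
# Figures from the text: 57 kg/h throughput, <$10 in materials, ~25-year life.
throughput_kg_per_h = 57
harvest_kg = 1000  # assumed: one tonne of dried peanuts to shell

hours_needed = harvest_kg / throughput_kg_per_h
print(f"{hours_needed:.1f} hours to shell {harvest_kg} kg")  # 17.5 hours

# Amortized materials cost over the machine's life:
cost_usd, life_years = 10, 25
print(f"~${cost_usd / life_years:.2f} per year in materials")  # ~$0.40 per year
```

Even shelling a full tonne of peanuts takes only a couple of working days, which helps explain the estimate that one machine can serve a village of about 2,000 people.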
The Full Belly Project is working to establish local, sustainable businesses that manufacture and
distribute appropriate technologies such as the Universal Nut Sheller.
This topic provides a comprehensive understanding of the concept of Appropriate Technology (AT) and brings out its
relevance today from the standpoints of both developing and developed countries. The topic also focuses
on the evolution of the AT movement in India and the ideological contributions to this movement by various thinkers,
such as Gandhi, E. F. Schumacher, and others. It also stresses that the AT movement, as a discursive one, is
not about mobilizing activities and people but about academic discourses on AT.
Operation
Diagram of the shelling machine
The user loads the desired crop in the space at the top. The user turns the handle, which rotates the rotor
continuously. This movement facilitates the nuts falling down the gradually narrowing gap. The shell of each nut is
broken at the point where the gap is sufficiently narrow and the rotor motion causes sufficient friction to crack open
the shell. The adjustable minimum width of the gap allows a range of nut sizes to be shelled. The kernels and shell
fragments fall into a basket and are later separated by winnowing. The device works best for Jatropha curcas, shea,
dried coffee, and peanuts (ground nuts).[citation needed]
The Full Belly Project has developed a pedal powered agricultural processor,[3] which places the universal nut sheller
onto a pedaling chassis. In addition to the shelling method described, the pedaling apparatus is connected to a fan.
The fan automatically winnows the harvest (separates the shells from the nuts). The pedal powered versions are
capable of shelling the same variety of crops as the hand crank powered versions. The processor also provides
access for the winnowing section to be used independently from the sheller. This allows winnowing of crops that are
not shelled, including rice, maize, and sorghum.
Today, many of our basic needs are handled by huge, complex systems. These systems are
managed centrally by large private corporations or the government. For example, our electricity
typically comes from utility companies that operate across many states. Similarly, many of the
fruits and vegetables we consume come from large-scale agricultural corporations in California.
In contrast, with appropriate technology the person who produces a service or a product also
becomes the consumer - the person who uses it. This has several advantages: For one,
consumer-producers are more likely to care more about their work. As a result, services and
goods are more reliable and of higher quality. Secondly, centralized systems must invest a lot
of money to purchase large, complex machinery and to employ thousands of workers. Often
these systems are disrupted due to breakdowns in the technology, problems getting needed
supplies, or labor strikes. When this happens a great many people are affected. Breakdowns such
as a power outage may also occur in communities that use small-scale, appropriate technology.
But these local breakdowns are not nearly so difficult and time consuming to track down and
repair as those that cover a broad geographic area. Thus, a simpler technology tends to be more
reliable, and the effects of breakdowns do not disrupt so many lives.
Technologically sophisticated, though simple in design.
It is important to realize that use of appropriate technology does not mean turning the clock back
to the 18th or 19th century. Although the technology involves simple, easy-to-use and easy-to-repair
designs, it is based on sophisticated, 20th-century technologies. One example is the invention of
photovoltaic, or solar, cells that convert solar energy, a renewable energy source, into electricity
for homes and businesses.
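A rough sense of what such a panel delivers can be computed from its area, efficiency and local sunlight. All numbers below are illustrative assumptions, not figures from the text.

```python
# Assumed figures for a small rooftop PV panel.
panel_area_m2 = 1.0
panel_efficiency = 0.18      # typical for modern silicon cells
peak_sun_hours = 5           # equivalent full-sun hours per day
irradiance_w_per_m2 = 1000   # standard "one sun" irradiance

daily_wh = panel_area_m2 * irradiance_w_per_m2 * panel_efficiency * peak_sun_hours
print(f"~{daily_wh:.0f} Wh/day")  # ~900 Wh/day
```

Under these assumptions, a single square meter of panel yields on the order of 900 Wh per day, enough for lighting and phone charging in a small household.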
Environmentally friendly.
Appropriate technology emphasizes the use of renewable resources, like the energy from the sun,
wind, or water. These energy sources are available almost everywhere and need only the right
technology to capture them. Unlike burning coal and oil, these local energy sources do not
contribute to air and water pollution and they do not need to be transported over long distances.
Food, energy, water, and waste disposal are also handled locally by ecological systems. These
are systems that conserve resources by recycling organic nutrients back into the soil and re-using
manufactured goods in innovative ways. Thus, appropriate technology makes it possible to
satisfy our basic human needs while minimizing our impact on the environment.
Social problems.
Many people are coming to realize that neither our economy nor our population can continue to
grow forever. We are running out of the natural resources necessary to sustain ourselves. In
addition we are limited in our ability to deal with the social and environmental problems that
result from continuous growth. There seems to be a growing dissatisfaction with the complexity
and hectic lifestyle of 20th-century society. Many people would prefer to return to a simpler way
of life. Appropriate technology is attractive because it makes households and industries more
self-sufficient, and most things can be managed at a local level. We may have to do more hand
labor instead of depending on automation to satisfy our basic needs. However, there are many
advantages to simplifying our lives. By growing more of our own food and producing and
buying goods in our own communities, we spend less time and money on transportation,
produce less waste and consume fewer environmental resources.
SOME CASE STUDIES:
(1) CASE 1
March 2014
When introducing the EHC-8 Electrohydraulic Hitch Control to the Indian market,
engineers initially made preparations without taking account of the adverse
environmental conditions. Project managers Raman Sheshadri and Uwe Falkenhain
report on how adaptation to local circumstances resulted in a genuine success.
Bosch Rexroth has had excellent results with the Electrohydraulic Hitch Control in both
the European and North American markets. Farmers fully appreciate the advantages it
offers. Precise regulation of power and position makes for exact lifting and lowering the
hitch and, as a result, highly accurate tillage.
Soils are turned over gently and uniformly. That improves yields. And the work is more
convenient, as well. That is why, in 2009, we decided to introduce Electrohydraulic Hitch
Control (EHC) on the Indian subcontinent – an equipment market with enormous
potential. But the feedback from initial test runs using tractors built by local
manufacturers was not entirely satisfactory. We did see a need for this concept, but it
had to be more rugged and considerably less expensive.
Adapting and new development
When reworking the system for the Indian market, our first step was to conduct a closer
analysis of the operating conditions. The tractors have to survive the most forbidding
conditions: monsoons, high relative humidity on the one hand, and dryness, dust and
heat on the other hand. All this is aggravated by the demands made in rice cultivation.
Here the parts are sometimes submerged in centimeters of packed-down mud. What’s
more, the tractors usually have no sprung suspension and, as a result, generate greater
vibration. In order to cope with these operating parameters and, at the same time, to
drastically cut the costs for the system, we decided simply to modify certain components
and to develop others from scratch.
Brand new: Control panel and angular sensor
Two components in particular are exposed to the extreme weather conditions: the
angular sensor and the control panel. Our standard control panel, designed for
installation inside the operator’s cab, was not suitable for use in Indian tractors without a
protective cab. That is why we needed something entirely new. Not only did it have to
be more rugged; the illumination also needed to be brighter since – given the intense
sunlight – it was otherwise impossible to see the operating functions.
Placement was yet another question. Following extensive deliberation, the customer
and we agreed to engineer an armrest ready to accept the control panel. This steadies
the farmer’s hand when he presses the various buttons. We replaced the large actuator
lever with a generously dimensioned switch with three positions: lift, lower, off.
The components of the EHC-8 system. The entirely redesigned control unit is at the
upper right.
Another very important item was to offer farmers a service concept. Local workshops
are normally unable to read out the electronic fault diagnosis. That is why we integrated
that function into the control panel, with a display to output the results. Now the farmer
can himself determine whether an electronic component is malfunctioning. To protect
the entire control panel – and above all the display – we have fitted the unit with a
sturdy film to keep out moisture and dust and to reduce mechanical influences.
In questions of resistance to leaks, we had to depart from European thinking. The IP 67
protection class used for the angular sensor in Europe is not sufficient to withstand the
extremely dusty conditions. As a consequence, we had to entirely re-engineer the
sensor. The electrical components are now completely separate from the mechanical
space.
Indian software relationships
We also developed the controller from the ground up. It is installed in the housing Bosch
had designed for the Tata “Nano” city car. That housing is already laid out to handle the
Indian climate. The software had to be adapted to the Indian market as well – shifting
from the European lower linkage control to upper linkage control. This requires only one
power regulation sensor instead of two and helps us reduce costs.
Drawing upon local resources
To achieve a more attractive price for the Indian design, we turned to local suppliers
who manufacture in India. Thus we established local production capacities for the EHR5
valve. This valve regulates the flow of hydraulic fluid to the cylinders, lending precision
to raising and lowering the lifting unit.
On the other hand, we have done away with many complex, automated procedures.
One good example is the rear cover for the angular sensor. In our standard designs,
ultrasonic welding is used. In principle, this would be possible in India, but we decided
to use an appropriate adhesive, which is far less costly.
Engineering for cost savings
Design options offered a further way to reduce costs. The selection of the materials is
one example. The plastics found on the Indian market are far different from those in
Europe. Their strength is aligned exactly with the conditions and they are less
expensive, as well. Glass fiber reinforcement is seldom used, for example.
In addition, we eliminated a large number of parts. In the European version of the
angular sensor, there is a component that lets us program the sensor for various ranges
of angles, depending on the geometry of the lifting system. We have done away with
this in the Indian version and regulate the opening angle using a resistor, which is
soldered in place.
Our research into the potentials for cost reductions will not end when our Indian
customer launches mass production in the first quarter of 2014. At present, we are
working intensively on replacing the existing power measurement sensor with a newly
developed system tailored exactly to the power ranges used in India. We will be able to
achieve savings on the one hand by converting from a magnetic-elastic measurement
principle to a Hall effect sensor. In addition, we can have this draft pin sensor
manufactured locally, in India.
Larger than India
Customizing the system has already paid off – beyond the Indian market. This readies
us for taking the next step in other BRIC nations, where similar conditions prevail. In
addition, individual components will also be of interest to European customers who can
then integrate them into their overall concepts.
Authors
Raman Sheshadri,
Sales and Industry Sector Management, Agricultural and Forestry Machinery,
Bangalore, India
Uwe Falkenhain,
Sales and Product Management Mobile Electronics,
Schwieberdingen, Germany
CASE 2 :
Welcome to Appropriate Technology India
Appropriate Technology India is a non-government organization that works with mountain
communities of Uttarakhand, offering them innovative alternatives to subsistence agriculture. Its
broad mission is to assist village communities in the Western Himalayan eco-region to conserve
their natural resources while utilizing these resources and non timber forest products (NTFPs) in a
socially equitable, economically efficient and ecologically sustainable manner. The organization
operates under the premise that attaining economic and managerial control over their natural
resources will instinctively provide local communities the impetus to support long-term biodiversity
conservation goals.
The organization has adopted a 'community-centric' approach and operates on the principle of
strengthening local community-based organizations (CBOs) and working collaboratively to build further
on the indigenous knowledge of the community. The thrust is to promote climate-resilient
livelihoods. Inclusion of marginalized persons and women is a criterion for engaging with
community stakeholders.
SERICULTURE
The oak (temperate) tasar silk programme has been A T India's 'flagship' programme. It best reflects our
synergistic approach of forest conservation through
BEEKEEPING
The rationale for identifying bee-keeping as one of A T India's programmes lies in the predominantly
agrarian economy and land use pattern
DAIRY
Dairy has been one of the sub-sectors identified by A T India for intervention due to its potential to
significantly impact the incomes of a large number of rural poor
SPICES
Work in this sector was started after a careful value chain assessment which indicated that local
production of certain organic spices could compete in the end
EO018 L-4 Unit 1 Technology Transfer
Technology transfer offices play a crucial role in the process by identifying developments ripe for translation to real world
solutions, obtaining patents and copyrights that protect them, and licensing products and processes to existing companies (or
forming new businesses) to produce and market the products.
"Researchers come up with the best ideas, but unless those ideas are transformed into products and services, they won’t impact
the lives of those around us."
https://autmfoundation.com/about/technology-transfer-impact/
https://www.youtube.com/watch?v=PmCcMJ7PC_A
Technology transfer, also called transfer of technology (TOT), is the process of transferring
(disseminating) technology from the person or organization that owns or holds it to another person or organization.
These transfers may occur between universities, businesses (of any size, ranging from small, medium,
to large), and governments, across geopolitical borders, both formally and informally, and both
openly and secretly. Often it occurs by concerted effort to
share skills, knowledge, technologies, manufacturing methods, samples, and facilities among the participants, to
ensure that scientific and technological developments are accessible to a wider range of users who can then further
develop and exploit the technology into new products, processes, applications, materials, or services. It is closely
related to (and may arguably be considered a subset of) knowledge transfer. Horizontal transfer is the movement of
technologies from one area to another. At present transfer of technology is primarily horizontal. Vertical transfer
occurs when technologies are moved from applied research centers to research and development departments.[1]
Technology transfer is promoted at conferences organized by such groups as the Ewing Marion Kauffman
Foundation [The Ewing Marion Kauffman Foundation (Kauffman Foundation) is a
registered 501(c)(3) non-profit, private foundation based in Kansas City, Missouri.[4] The foundation was founded in
1966 by Ewing Marion Kauffman, who had previously founded the drug company Marion Laboratories. The Kauffman
Foundation works with communities to build and support programs that boost entrepreneurship, improve education,
and contribute to the vibrancy of Kansas City.]and the Association of University Technology Managers, and at
"challenge" competitions by organizations such as the Center for Advancing Innovation in Maryland. Local venture
capital organizations such as the Mid-Atlantic Venture Association (MAVA) also sponsor conferences at which
investors assess the potential for commercialization of technology.
Technology brokers are people who discovered how to bridge the emergent worlds and apply scientific concepts or
processes to new situations or circumstances. A related term, used almost synonymously, especially in Europe, is
"technology valorisation". While conceptually the practice has been utilized for many years (in ancient
times, Archimedes was notable for applying science to practical problems), the present-day volume of research,
combined with high-profile failures at Xerox PARC and elsewhere, has led to a focus on the process itself.
Whereas technology transfer can involve the dissemination of highly complex technology from capital-intensive
origins to low-capital recipients (and can involve aspects of dependency and fragility of systems), it also can
involve appropriate technology, not necessarily high-tech or expensive, that is better disseminated,
yielding robustness and independence of systems.
Technology in India is growing rapidly and has played an important role in the all-round development and growth of
the country's economy. India has opted for a judicious mix of indigenous and imported technology, so "technology
transfer" plays a very important role and is generally covered by a technology transfer agreement.
Developing countries like India do not always follow the usual development path with regard to technology; instead,
they take advantage of the cutting-edge options now available and put these modern tools directly to use.
Technology transfer is expected to spread the benefits of R&D to developing and underdeveloped
countries. With this in mind, the Indian government has set up national research laboratories for R&D in
areas that the private sector has yet to take up.
India's economy largely comprises small and medium enterprises, a sector that has grown steadily since
liberalization. This growth has produced multinational enterprises that now compete with international companies,
which has enhanced India's confidence. Such transfer is not confined to pharmaceuticals but extends to other areas
such as agriculture, dairy, and other technologies.
The Government of India is on the verge of opening technology transfer offices at universities and institutions,
funded by the central government, which will act as mechanisms for transferring the research conducted, and its
outcomes, to where they are needed.
Some Indian institutes have already commercialized their research and succeeded in technology transfer by
licensing technologies to industry, and numerous cases of technology transfer by various well-known Indian
institutions can be seen.
Vertical transfer refers to the transmission of new technology from the point of its generation, during the
research and development programmes of science and technology organizations, to its application in the
industrial and agricultural sectors. In other words, vertical transfer moves technology from basic research
to applied research, from applied research to development, and from development to production.
Horizontal technology transfer, by contrast, is the movement of a well-established technology from one
operational environment to another, for instance from one company, place, or organization to another.
Developing countries often show the reverse trend, because the path they follow depends on the
transfer, absorption, and adaptation of existing technology.
Habibie (1990), often referred to as the architect of the Indonesian aircraft industry, states that "technology receivers
must be prepared to implement manufacturing plans on a step-by-step basis, with the ultimate objective of eventually
matching the added-value percentage obtained by the technology transferring firm." He refers to such an approach
as "progressive manufacturing" and popularized the slogan "begin at the end and end at the beginning", implying that
a transferee firm should start with production and move backwards to research.
Today, in an era of rapid technological advance, either route of technology transfer may be chosen,
depending on how the technology-advancement chains of the transferor and transferee are linked.
CONCLUSION
Technology transfer and licensing have played a crucial role in the all-round development and advancement of
technology, which in turn helps develop a country's economy and create national wealth.
As a developing country, India needs to work on technology development and technology transfer. It needs a
strategy that includes establishing new technology transfer offices and dedicated universities, and making
young people aware of the benefits of technology transfer, thereby increasing the pace of technology transfer
and of technical research and development.
Finally, as discussed, technology transfer has both possible advantages and disadvantages. It should be
viewed in a broader perspective so that the country as well as its citizens benefit.
Many developed countries have adopted measures that directly or indirectly facilitate technology transfer. These measures
include financing support, training, matching services, partnerships and alliances and support for equipment purchase or
licensing. UNCTAD has surveyed 41 agencies and programmes in 23 developed countries that offer home-country measures
(HCMs) that, in one way or another, facilitate technology transfer. HCMs are often provided as part of international cooperation
programmes and/or strategic trade and investment initiatives.
Nineteen of the agencies surveyed provide support for training programmes. Of these, four provide support to
enable affiliates of home-country firms in developing countries to train their workers, three provide training as
part of matching services, and five run independent skills development programmes. Fifteen of the agencies
surveyed provide FDI (foreign direct investment)-related technology transfer incentives to their enterprises. Of
these 15, five require their firms to seek partnership with local firms, four include training of local partners or
workers as a requirement and three require a demonstration that transfer of technology does take place.
The New Delhi Conference, held in February and March 1968, was a forum that allowed developing countries to reach
agreement on basic principles of their development policies. The conference in New Delhi was an opportunity for schemes to be
finally approved. The conference provided a major impetus in persuading the North to follow up UNCTAD I resolutions, in
establishing generalised preferences. The target for private and official flows to LDCs was raised to 1% of the North's GNP, but
the developed countries failed to commit themselves to achieving the target by a specific date. This has proven a continuing point
of debate at UNCTAD conferences.
The conference led to the International Sugar Agreement, which seeks to stabilize world sugar prices.
The International Sugar Agreements and similarly named agreements were a series of international treaties that attempted to
establish an "orderly relationship between the supply and demand for sugar in the world market." They eventually established
the International Sugar Organization.
Foreign investment was introduced in 1991 under the Foreign Exchange Management Act (FEMA), driven by then finance
minister Manmohan Singh. As Singh subsequently became prime minister, this has remained one of his top political issues,
even in current times. India disallowed overseas corporate bodies (OCBs) from investing in India, and it imposes caps on
equity holdings by foreign investors in various sectors; current FDI in the aviation and insurance sectors is limited to a maximum of 49%.
Starting from a baseline of less than $1 billion in 1990, a 2012 UNCTAD survey projected India as the second most important
FDI destination (after China) for transnational corporations during 2010–2012. As per the data, the sectors that attracted higher
inflows were services, telecommunication, construction activities and computer software and hardware. Mauritius, Singapore, US
and UK were among the leading sources of FDI. Based on UNCTAD data, FDI flows were $10.4 billion, a drop of 43% from the
first half of the previous year.[40]
Nine of the 10 largest foreign companies investing in India (from April 2000 to January 2011) were based in Mauritius.[41]
Antrix Corporation Limited (ACL)
Antrix Complex,
New BEL Road, Bengaluru - 560 231
Chairman-cum-Managing Director: Mr. S. Rakesh
Antrix Corporation Limited (ACL), Bengaluru is a wholly owned Government of India Company under the
administrative control of the Department of Space. Antrix Corporation Limited was incorporated as a
private limited company owned by the Government of India in September 1992, as the marketing arm of ISRO
for promotion and commercial exploitation of space products, technical consultancy services, and
transfer of technologies developed by ISRO. Another major objective is to facilitate development of
space related industrial capabilities in India.
As the commercial and marketing arm of ISRO, Antrix is engaged in providing Space products and
services to international customers worldwide. With fully equipped state-of-the-art facilities, Antrix
provides end-to-end solution for many of the space products, ranging from supply of hardware and
software including simple subsystems to a complex spacecraft, for varied applications covering
communications, earth observation and scientific missions; space related services including remote
sensing data service, Transponder lease service; Launch services through the operational launch
vehicles (PSLV and GSLV); Mission support services; and a host of consultancy and training services.
History
Antrix Corporation was incorporated as a private limited company owned by the Indian government on 28 September
1992. Its objective is to promote the ISRO's products, services and technologies. The company is a Public Sector
Undertaking (PSU), wholly owned by the Government of India, and is administered by the Department of Space (DoS).
It has had dealings with EADS Astrium, Intelsat, Avanti Group, WorldSpace, Inmarsat, and other space institutions in
Europe, the Middle East, and South East Asia.
It was awarded 'Miniratna' status by the government in 2008 and achieved a turnover of Rs. 18 billion in 2014–15.
Achievements
● Successful launch of W2M satellite for Eutelsat.
● Successful supply of reliable satellite systems and sub-systems. Some of Antrix's better known customers
are Hughes, Matra Marconi, World Space etc.
● Successful Commercial Satellite Launches of SPOT 687 (France), Pathfinder & Dove (U.S), Tecsar
(Israel), Kitsat (Korea), Tubsat (DLR – Germany), BIRD (DLR – Germany), PROBA (Verhaert, Belgium) aboard
the ISRO's Polar Satellite Launch Vehicle (PSLV).
● Execution of many IOT / TTC support services to International Space Agencies. Some of the customers
using Antrix services are WorldSpace, PANAMSAT, GE Americom, AFRISTAT etc.
● LEOP support, IOT, TTC.
● Successful launch of TecSar (Israel).
● Two satellites; one from France and another from Japan were launched in September 2012.
● Successful launch of five satellites, including French SPOT 7 satellite on 30 June 2014
● UK based satellite launch UK-DMC 3 on 10 July 2015
Business agreement
On 29 January 2014, Antrix Corporation Limited (Antrix), the commercial arm of Indian Space Research Organization
(ISRO), signed Launch Services Agreement with DMC International Imaging (DMCii), the wholly owned subsidiary of
Surrey Satellite Technology Limited (SSTL), United Kingdom (UK), for launch of three DMC-3 Earth Observation
Satellites being built by SSTL, on-board ISRO’s Polar Satellite Launch Vehicle (PSLV). On 5 February 2014, Antrix
signed another Launch Services Agreement with ST Electronics (Satcom & Sensor Systems) Pte Ltd, Singapore, for
launch of TeLEOS-1 Earth Observation Satellite, on-board PSLV. These launches are planned during end 2014 to
end 2015. On 29 September 2014, Canada announced that it has decided to give the contract of the July 2015
launch of its M3M (Maritime Monitoring and Messaging Micro-Satellite) communications satellite to Antrix during the
inauguration of the International Astronautical Congress at Toronto.
Satellite launches
As of January 2018, ISRO had launched 209 foreign satellites for 23 different countries. All satellites were launched
using the ISRO's Polar Satellite Launch Vehicle (PSLV) expendable launch system. Between 2013 and 2015, India
launched 28 foreign satellites for 13 different countries earning a total revenue of US$101 million.
Antrix launched 239 satellites between 2016 and 2019 earning a total revenue of ₹6,289 crore (US$880 million).
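The two revenue figures quoted above are easier to compare on a per-satellite basis. The short sketch below does only that arithmetic, using the numbers from the text; the function name is invented for illustration.

```python
# Quick arithmetic on the launch-revenue figures quoted in the text,
# putting both periods on a comparable per-satellite basis.

def revenue_per_satellite(total_usd_millions: float, satellites: int) -> float:
    """Average revenue per satellite, in millions of US dollars."""
    return total_usd_millions / satellites

print(round(revenue_per_satellite(101, 28), 2))   # 2013-2015: ~3.61
print(round(revenue_per_satellite(880, 239), 2))  # 2016-2019: ~3.68
```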
Controversies
S-band spectrum scam
In January 2005, Antrix Corporation signed an agreement with Devas Multimedia (a private company formed by
former ISRO employees and Venture Capitalists from USA) for lease of S band transponders on two ISRO satellites
(GSAT 6 and GSAT 6A) for a price of ₹14 billion (US$200 million), far below the market price, to be paid
over a period of 12 years. Devas shares were sold at a premium of ₹1,226,000 (US$17,000), taking the accumulated
share premium to ₹5.78 billion (US$81 million), yielding a substantial profit. In July 2008, Devas offloaded 17% of its
stake to German company Deutsche Telekom for US$75 million, and by 2010 had 17 investors, including former
ISRO scientists.
In late 2009, some ISRO insiders exposed information about the Devas-Antrix deal, and the ensuing investigations
resulted in the deal being annulled. G. Madhavan Nair (ISRO Chairperson when the agreement was signed) was
barred from holding any post under the Department of Space. Some former scientists were found guilty of "acts of
commission" or "acts of omission". Devas and Deutsche Telekom demanded US$2 billion and US$1 billion,
respectively, in damages.[21] The CBI concluded investigations into the Antrix-Devas scam and registered a case
against the accused in the Antrix-Devas deal under Section 120-B, besides Section 420 of IPC and Section 13(2)
read with 13(1)(d) of PC Act, 1988 on 18 March 2015 against the then Executive Director, Antrix Corporation Limited,
Bengaluru; two officials of USA-based company; Bengaluru based private multi media company and other unknown
officials of Antrix Corporation Limited /ISRO/Department of Space.
DIITM plays a vital role in filling the gap between defence R&D and production by facilitating Transfer of
Technology (ToT) to industries. DIITM facilitates Limited Series Production (LSP) to cater to the need for
production of limited quantities of DRDO-developed products. It also coordinates production with industries to
produce DRDO-developed products as per users’ requirements. DIITM is the nodal directorate for technology
acquisition by DRDO under defence offsets. DIITM also provides the necessary policy framework and hand-holding
support to industries for the export of DRDO-developed products. The major achievements of DIITM are given below:
● Framed the “DRDO Policy for Transfer of Technology” and promulgated it after approval of the Hon’ble Raksha Mantri.
● Formulated the “DRDO Procedures for Transfer of Technology” with the approval of Secretary DD R&D and
Chairman DRDO to implement the new ToT policy.
● More than 900 licensing agreements for ToT have been signed with industry, including for major weapon
systems like the fibre optic gyro, Varunastra, and the Medium Power Radar.
● DRDO-developed technologies have been commercialised for the benefit of society at large, with more
than 100 licensing agreements signed to date.
● Interaction with more than 900 industries at 21 locations all over India.
● DRDO has successfully carried out LSP of NBC recce vehicle, NBC water purification system, etc.
● DRDO has processed more than 300 NoC requests for exports and 150 requests for industrial licences.
● An industry compendium containing details of industry partners who have partnered with DRDO at various stages has
been compiled by DIITM.
● Prepared a compendium of products which have huge export potential.
● Centre for Scientific and Industrial Consultancy (CSIC) of Indian Institute of Science.
● Industrial Research & Consultancy Centre (IRCC) of Indian Institute of Technology, Bombay.
(1).Bengaluru: What must it have felt like to be a cotton spinner or an iron maker in England in the
1820s in the midst of an industrial revolution? Exactly 200 years later, we may be on the verge of
another era of momentous change: the internet revolution. With internet access expanding
dramatically post the early 1990s, a slew of new technologies have now matured to a point where
fundamental change constantly seems to be right around the corner.
On the doorstep of a brand new decade—the 2020s—what new frontiers may Artificial Intelligence
(AI) or gene editing open up? Will we soon have robot bosses? Will mixed reality change the way
we consume entertainment and sports? Will we be able to cure 90% of all genetic diseases by the end
of the decade? We take a look at five technologies that could alter India and the world. This may not
be a definitive or even exhaustive list, but it is a list of things that could change the way we live,
work, and play sooner than we think.
Imagine watching a football match, not on your TV but on a virtual reality (VR) headset that streams
the match live and projects interesting stats on the fly with the help of augmented reality (AR).
Mumbai-based VR startup Tesseract, now owned by Mukesh Ambani’s Reliance Jio, is promising a
future like that with its Quark camera, Holoboard headset, and the high internet speeds of Jio Fiber.
Similarly, a Hyderabad-based mixed reality startup called Imaginate enables cross-device
communication over VR and AR wearables for better enterprise collaboration in the industrial sector.
Despite the much-hyped yet unmet expectations from the likes of Google Glass, Microsoft HoloLens
and Facebook’s Oculus, Tesseract and Imaginate simply underscore how the fusion of AR and VR
technologies — the combination of which is popularly known as Mixed Reality or MR — is coming
of age and is no longer in the realm of just sci-fi movies like Blade Runner 2049, where Officer K
played by Ryan Gosling develops a relationship with his artificial intelligence (AI) hologram
companion Joi.
For instance, AI-powered chatbots today can not only conduct a conversation in natural language via
audio or text but they can be made more powerful with a dose of mixed reality. Last May, Fidelity
Investments created a prototype VR financial advisor named Cora to answer client queries using a
suite of tools from Amazon Web Services. Researchers in Southampton have built a device that
displays 3D animated objects that can talk and interact with onlookers.
The Chinese government-run Xinhua News Agency has the world’s first AI-powered news anchor,
whose voice has been modelled to resemble a real human anchor working for the agency. Going a
step further, Japan-headquartered DataGrid Inc. uses generative adversarial networks (GANs) to
develop its so-called “whole body model automatic generation AI" that automatically generates
full-length images of non-existent people at high resolution.
Nevertheless, challenges abound when dealing with MR- and AI-powered robots, humanoids, and human
avatars. For one, whenever a company generates human bodies and faces, concerns over deep fakes
and cheap fakes will always rear their heads. Second, data collection will continually raise concerns
over security and privacy. Third, there’s always the concern regarding the fairness of an AI algorithm
when it is deployed to do human tasks, like giving financial advice. Last but not least, there’s
also the question of whether AI bots should be allowed to pose as humans. This will continually pose
a challenge and an opportunity for technologists and policy makers.
Future of solar
Heliogen, a company that has billionaire philanthropist Bill Gates as one of its investors, says it has
created the world’s first technology that can commercially replace fuels with carbon-free, ultra-high
temperature heat from the sun. With its patented technology, Heliogen’s field of mirrors acts as a
multi-acre magnifying glass to concentrate and capture sunlight.
This is just a case in point that solar technologies have evolved a lot since they first made their debut
in the 1960s. For instance, solar roadways—panels lining the surface of highways—have already
popped up in the Netherlands. Floating solar, for its part, is providing a credible option to address
land use concerns associated with wide-scale solar implementations. A French firm called Ciel et
Terre, for instance, has projects set up in France, Japan, and England. Other parts of the world,
including India and California in the US, are piloting similar floating solar initiatives.
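A concentrating mirror field of the kind Heliogen operates can be sized with simple back-of-envelope arithmetic: mirror area times direct normal irradiance times optical losses. The irradiance and efficiency figures below are generic assumptions for illustration, not Heliogen's published numbers, and the function name is invented.

```python
# Rough, illustrative estimate of the thermal power a multi-acre
# concentrating mirror field can deliver. The irradiance (900 W/m^2)
# and optical efficiency (60%) are generic assumed values.

ACRE_M2 = 4046.86  # square metres per acre

def field_thermal_power_mw(acres: float, dni_w_m2: float = 900.0,
                           optical_efficiency: float = 0.6) -> float:
    """Mirror area x direct normal irradiance x optical losses, in MW."""
    return acres * ACRE_M2 * dni_w_m2 * optical_efficiency / 1e6

print(round(field_thermal_power_mw(2.0), 2))  # a two-acre field: ~4.37 MW
```

The point of the estimate is scale: even a modest field concentrates megawatts of heat, which is why such systems can target industrial process temperatures.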
Space-based solar technology is another exciting arena. India, China and Japan are investing heavily
in these technologies right now. The Japan Aerospace Exploration Agency’s (JAXA) Space Solar
Power Systems (SSPS) aims to transmit energy from orbiting solar panels by 2030. Further,
researchers at the VTT Technical Research Centre in Finland have used solar and 3D printing
technologies to develop prototypes of what they have christened as “energy harvesting trees".
With solar power cheaper than coal in most countries in the world, it’s worth scaling up these
technologies.
Indians and robot bosses
Between 400 and 800 million individuals around the world could be displaced by automation and
would need to find new jobs by 2030, predicted a December 2017 survey by consultancy firm
McKinsey. The Future of Jobs 2018 report by the World Economic Forum (WEF) suggests that 75
million jobs may be lost to automation by 2022, but adds that another 133 million additional new
roles will be created.
Given that many of the automated jobs are being taken away by AI-powered chatbots and intelligent
robots, would humans eventually have to work for a robo boss? This, however, may not be as big a
concern as it is made out to be. According to the second annual AI at Work study conducted by
Oracle and Future Workplace, people trust robots more than their managers. The study, released this
October, notes that workers in China (77%) and India (78%) have adopted AI over 2X more than
those in France (32%) and Japan (29%). Further, workers in India (60%) and China (56%) are the
most excited about AI, while men have a more positive view of AI at work than women.
Oracle and Future Workplace also found that 82% of the workers believe robot managers are better at
certain tasks, such as maintaining work schedules and providing unbiased information, than their
human counterparts. And almost two-thirds (64%) of workers worldwide say they would trust a robot
more than their human manager. In China and India, that figure rises to almost 90%.
On the other hand, the respondents felt managers can outdo robots when it comes to understanding
their feelings, coaching them, and creating a healthy work culture. Whether humans eventually serve
a robo boss or not remains to be seen. However, we can be certain of one thing: in the near future, we
will increasingly see humans collaborating with smart robots.
Future of payments
"Everyone can be a merchant, and every device can be an acceptance device," Accenture noted in its
2017 Driving the Future of Payments report. This trend has only accelerated over the last two years,
especially with banks coming to terms with the fact that young customers, especially those living in
urban areas, prefer net banking and mobile banking and would seldom, or never, want to visit a bank
branch if offered that choice.
Bitcoin and cryptocurrency investors, for instance, have not lost faith in this disruptive currency
despite the run with volatility, and despite the industry being viewed with a lot of suspicion by most
governments around the world, including India. Fintechs too, with their innovative technology
solutions like AI-powered bots and contactless payments to name a few, have only made the
payments ecosystem more inclusive, disruptive, and challenging. In India, especially, the
government’s Aadhaar-enabled payments system and the Unified Payments Interface (UPI) have
revolutionized the payments ecosystem. The total volume of UPI transactions in the third quarter of
calendar 2019 touched 2.7 billion—a 183% rise over the same July-September quarter a year ago. In
terms of value, UPI clocked ₹4.6 trillion—up 189% over the same period a year ago, according to
the Worldline’s India Digital Payments Report-Q3 2019.
However, the number of transactions done on mobile wallets was 1.04 billion—only a 5% rise over
the previous year period.
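The growth percentages quoted from the Worldline report imply the year-ago figures. A rise of 183% means the current figure is 2.83 times the year-ago figure, so the earlier values can be backed out; the helper below is illustrative, using only the numbers in the text.

```python
# Back-of-envelope check of the growth figures quoted from Worldline's
# India Digital Payments Report Q3 2019: infer the year-ago values
# implied by the stated year-on-year growth rates.

def year_ago(current: float, growth_pct: float) -> float:
    """Year-ago figure implied by a year-on-year growth percentage."""
    return current / (1 + growth_pct / 100)

upi_volume_bn = year_ago(2.7, 183)    # ~0.95 billion UPI transactions
upi_value_tn = year_ago(4.6, 189)     # ~1.59 trillion rupees
wallet_volume_bn = year_ago(1.04, 5)  # ~0.99 billion wallet transactions
```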
QR codes, according to the report, will continue to be used for payments, and the internet of things
(IoT) is set to dominate micro payments by transforming connected devices into payment channels,
though the pace of adoption of 5G by countries like India will be the key.
Nevertheless, cash that has been in existence for over 3000 years in different forms is not going to
disappear in a hurry. Trust and security will continue to remain the operative words in digital
payments.
Making sense of gene editing
When Dolly the sheep made news for becoming the first mammal ever to be cloned from another
individual’s body cell, many expected human cloning to follow soon. Dolly died over 16 years ago,
and since then animals, including monkeys and dogs, have continued to be cloned successfully. Yet no
human being has been cloned in real life.
While human cloning, which may or may not eventually happen, is bound to raise alarm given the
moral implications surrounding the issue, the fact is that human genes are being routinely edited in
a bid to find solutions for what are today considered incurable genetically inherited diseases.
Researchers are using a gene editing tool known as CRISPR-Cas9. CRISPR, which stands for
Clustered Regularly Interspaced Short Palindromic Repeats, is a tool that allows researchers to
easily alter DNA sequences and modify gene function. The protein Cas9 (CRISPR-associated protein 9)
is an enzyme that acts like a pair of molecular scissors capable of cutting strands of DNA.
CRISPR-Cas9 is best known for its potential in treating diseases like AIDS, amyotrophic lateral
sclerosis (ALS), and Huntington's disease. Two patients, one with beta thalassemia and one with
sickle cell disease, have potentially been cured of their diseases, according to results from clinical
trials jointly conducted by Vertex Pharmaceuticals and CRISPR Therapeutics. The results, released
this November, involved using CRISPR to edit the genes of these patients.
Researchers are now looking to extend its use to tackle famine, lend a hand in creating antibiotics,
and even wipe out an entire species such as malaria-spreading mosquitoes. Further, by genetically
engineering a person’s bone marrow cells, researchers can reprogram their immune and circulatory
systems. Some new cancer treatments are based on this. Moreover, looking at the DNA of the
collection of microbes in your gut can help with digestive disorders, weight loss, and even help
understand mood changes.
Closer home, scientists at the Institute of Genomics and Integrative Biology (IGIB) and the Indian
Institute of Chemical Biology (CSIR-IICB) are trying to correct genetic mutations in their
laboratories using CRISPR-Cas9, with encouraging preliminary results. But due to regulatory and
ethical concerns, it may take a while before they can use this on humans.
IGIB also sells CRISPR products such as Cas9 proteins and its variants to educational institutes at
reduced prices in a bid to encourage use of the technology.
The US Food and Drug Administration (FDA), for its part, considers any use of CRISPR-Cas9 gene
editing in humans to be gene therapy and has ruled that the sale of DIY kits to produce gene therapies
for self-administration is illegal. India, too, has banned the commercial use of stem cell therapy
following concerns over “rampant malpractice”.
CRISPR-Cas9, thus, remains a work in progress, and countries should have policies to govern its use.
Meanwhile, one can watch out for an upgrade to CRISPR called Prime, which in theory could correct
the mutations behind more than 90% of all genetic diseases.
Moreover, technology transfer leads to commercialization of the research and discoveries that were the object of
development investment and are protected by patents. The investments made in developing intellectual property are
thus returned to the public through products made for the public, more employment opportunities, and revenue in
the form of taxes.
Technology transfer strengthens industry by identifying new business opportunities, which enhances the know-how
and competitiveness of technology providers and ultimately broadens their business area, allowing their technologies
and systems to serve several different fields. In addition, technology transfer promotes wider use and awareness of
technology and systems.
Technology transfer brings economic benefits by increasing revenue for technology donors, while receivers benefit
from new and better products, processes, and services that lead to increased efficiency and effectiveness, greater
market share, and increased profits.
Moreover, technology transfer lets researchers earn rewards above and beyond their regular salary through patents,
licenses, and other technology transfer awards, and to benefit intellectually and professionally by working
collaboratively with their peers in the industrial sector.
Even when a technology transfer programme is successful, and particularly after a transfer, institutional tensions
may arise within the organization between the recipients of licensing income and those who know they will never
make licensable inventions. As a remedy, institutions can adopt policies that partially redistribute licensing income
among all research groups. This strategy, however, may not eradicate the problem: in most cases the discoverers will
be frustrated or disappointed that income they earned is given to other groups.
Technology transfer activities may also put researchers in conflict-of-interest situations, especially when the transfer
involves the creation of a spin-off company, so institutions should be aware of these possible dangers. Further
problems can arise from non-performance by the licensee, from the licensee having limited opportunities beyond the
license scope unless future enhancements to the patent are included in the initial agreement, and from unrealistic
expectations and demands on the part of the licensor.
L-5 UNIT-1
Appropriate technology is decentralized.
Advantages
Today, many of our basic needs are handled by huge, complex systems. These systems are managed centrally
by large private corporations or the government. For example, our electricity typically comes from utility
companies that operate across many states. Similarly, many of the fruits and vegetables we consume come
from large-scale agricultural corporations. In contrast, with appropriate technology the person who produces
a service or a product also becomes the consumer - the person who uses it. This has several advantages: For
one, consumer-producers are more likely to care about their work. As a result, services and goods are
more reliable and of higher quality. Secondly, centralized systems must invest a lot of money to purchase
large, complex machinery and to employ thousands of workers. Often these systems are disrupted due to
breakdowns in the technology, problems getting needed supplies, or labor strikes. When this happens a great
many people are affected. Breakdowns such as a power outage may also occur in communities that use
small-scale, appropriate technology. But these local breakdowns are not nearly so difficult and
time-consuming to track down and repair as those that cover a broad geographic area. Thus, a simpler technology
tends to be more reliable, and the effects of breakdowns do not disrupt so many lives.
It is important to realize that use of appropriate technology does not mean turning the clock back to the 18th
or 19th century. Although the technology involves simple, easy-to use and repair designs, it is based on
sophisticated, 20th-century technologies. One example is the invention of photovoltaic, or solar, cells that
convert solar energy, a renewable energy source, into electricity for homes and businesses.
Environmentally friendly.
Appropriate technology emphasizes the use of renewable resources, like the energy from the sun, wind, or
water. These energy sources are available almost everywhere and need only the right technology to capture
them. Unlike burning coal and oil, these local energy sources do not contribute to air and water pollution and
they do not need to be transported over long distances. Food, energy, water, and waste disposal are also
handled locally by ecological systems. These are systems that conserve resources by recycling organic
nutrients back into the soil and re-using manufactured goods in innovative ways. Thus, appropriate
technology makes it possible to satisfy our basic human needs while minimizing our impact on the
environment.
Social problems.
Many people are coming to realize that neither our economy nor our population can continue to grow
forever. We are running out of the natural resources necessary to sustain ourselves. In addition we are limited
in our ability to deal with the social and environmental problems that result from continuous growth. There
seems to be a growing dissatisfaction with the complexity and hectic lifestyle of 20th-century society. Many
people would prefer to return to a simpler way of life. Appropriate technology is attractive because it makes
households and industries more self-sufficient, and most things can be managed at a local level. We may
have to do more hand labor instead of depending on automation to satisfy our basic needs. However, there
are many advantages to simplifying our lives. By growing more of our own food and producing and buying
goods in our own communities, we spend less time and money on transportation, produce less waste and
consume fewer environmental resources.
Artificial intelligence is probably the most important and ground-breaking trend in technology today. ... The advent
of smart homes, smart cities, and the Internet of Things means that AI will be integrated more and more into our
everyday lives.
Disadvantages
Appropriate technology encompasses such a wide field that it is hard to describe the exact points of
weakness. One disadvantage of appropriate technology is that sometimes a solution simply does not
work as planned. Some solutions have failed because some factor was not considered or the design
proved flawed. Since it is a relatively new field of study, there is still much work that
needs to be done on the most effective way to apply the resources available in the area of need. What
might be very practical and cheap in one area of the world would be ridiculously expensive or not work
at all in another region. There is also the problem of different cultures within various countries.
One solution used for a village in Africa was found to be culturally repulsive by another village in the
same country. Therefore, appropriate technology takes tremendous study of the region’s climate,
resources, location, and people. There are also many cases in which the long-term effects are unknown.
A sustainable solution can also create other problems. For example, a
micro-hydroelectric plant might be built in a remote village to provide it with power. Each hut
would own its own battery that could be charged at the micro-hydro plant and then taken home to provide
power. Batteries, however, do not last very long and are filled with toxic materials.
What will happen with the batteries when they go bad? How will they be disposed of properly? These
are issues that designers have to deal with. [3]
Most appropriate technology applications are built for small-scale use and work well for small, remote
villages. But appropriate technological solutions pose more problems for large-scale applications. Some
forms of sustainable resources are very expensive and impractical for extensive use. The cost therefore
becomes much greater than that of current methods, making them less economically feasible. This is
especially true for countries that are already technologically advanced.
[Photo caption: Capt. Jacques Viadoy, a National Guardsman with the Arizona Medical Detachment, applies fluoride
treatment to children during a medical clinic for the residents of Maria Moseoso Espino in support of Beyond the
Horizon 2014 - Guatemala, a U.S. Army South-led exercise providing medical, dental, engineering, and humanitarian
civic assistance to the region. U.S. Army National Guard photo by 1st Sgt. David A. Smith.]
Technological innovation over the past century has revolutionized our society’s ability to solve problems. A byproduct
of this movement is the advent of appropriate technology (AT), an approach to addressing challenges in the developing
world through creative, people-focused product development. Appropriate technology recognizes that social,
environmental, cultural, political, and economic concerns are just as important as technical requirements in the design
of innovative products and services. For example, Husk Power Systems converts rice husks into electric
power in rural areas of India’s poorest, most remote state. The success of Husk Power lies as much in their
consideration of socio-cultural realities in the design of their revenue model as in their technological solution. Treadle
pumps, like those produced by KickStart, help farmers increase their cultivable land, extend their growing seasons,
improve their crop quality, and thus, augment their income. The driving force behind these technologies is a desire to
employ human-centered approaches to empower communities in addressing their own
economic, sociocultural, political, and environmental needs. Such technologies can improve the lives and livelihoods
of individuals living in resource-constrained environments in many ways, from improved access to food, water, and
healthcare to long-lasting shelter and employment opportunities.
There are many competing theories about what constitutes “appropriate” technology and how to define and balance
“people-centered” goals against other dimensions of sustainability. The challenge of defining “appropriate” technology
has been discussed at length in AT literature for decades. Despite some differences, this discussion has come to
some agreement on a core group of design tenets that span from the cultural (e.g., compliance with societal norms),
to the consumer (e.g., community ownership model), to the technological (e.g., environmental friendliness). However,
the relevance and intended implications of the tenets have evolved with the gradual globalization of challenges,
resources, and economic systems. One of the most significant outcomes of globalization has been the rapid
proliferation of Information and Communication Technologies (ICTs), which have democratized the creation, access,
and utilization of knowledge. This knowledge, especially when melded with indigenous knowledge, enables
individuals and communities to pursue appropriate technology in more ways, co-creating solutions that can improve
their collective quality of life. This article suggests that instead of considering AT design tenets as rules for technology
development, they must be considered as a series of tradeoffs and systemic design decisions that are informed and
co-created by the specific communities and their context. Along with relevant real-world examples, this article
presents a series of thought-provoking questions that must be answered when engaging in the design of technology
solutions for resource-constrained environments.
Over the last decade, the Humanitarian Engineering and Social Entrepreneurship (HESE) Program at Penn State has
led technology-based social ventures in Kenya, Tanzania, Rwanda, India, Cameroon, and other countries. Through
approximately thirty different projects, we have found that AT solutions are too nuanced to be generalized across
contexts, cultures, and specific desired outcomes. Though all aspiring AT projects have the same overall goal of
improving the lives of resource-constrained communities, they operate in different environments to address dissimilar
problems. For instance, a company attempting to provide electricity to rural Indian villages need not adhere to the
same tenets as a group helping a community reconstruct a water reservoir in Kenya, or a venture commercializing
affordable food dryers in Nicaragua. We argue against the application of rigid tenets and design principles and
encourage innovators to adopt a systems approach when developing new technologies. We ask the entire community
engaged in appropriate technology – innovators, educators, students, entrepreneurs – to consider how we all should
really be designing such technologies. To what end, and by what means, should this movement progress?
There is considerable discourse in the development community over the usefulness of foreign aid. Over the past 50
years, more than two trillion dollars in foreign relief have been transferred to Africa. Paradoxically, Africa has a lower
real per capita income today than it did before this aid began. The “Marshall Plan for Africa” has not worked, but
why? And what does this tell us about the appropriate circumstances for foreign aid? Despite best efforts, aid
distribution within the current infrastructure of developing nations is often more about the politics of the deliverers than
the economic and social needs of the recipients. Arguably, such aid-based models of development lead to
inefficiencies and waste in the entire system. For instance, foreign aid agencies donate millions of dollars to
developing countries to combat malaria by distributing free insecticide-treated mosquito nets, but these programs
have efficiencies comparable to programs that espouse cost-sharing with customers.
Should we be donating products when cost-sharing with recipients is just as effective and has the added advantage
of fostering a sense of ownership? What else can we do to lower wastage? One way of combating foreign aid waste
is to invest the funds in local programs that catalyze more opportunities for employment and self-empowerment. For
instance, foreign donations could enable micro-lending for small businesses or financing for public goods like
infrastructure development projects. Instead of donating mosquito nets, foreign aid might invest in social ventures like
NetMark, a company that builds facilities and trains local residents to manufacture low-cost mosquito nets for the local
market. The “aid versus trade” question is important because of the customer-consumer relationship. When foreign
entities donate to non-profits or developing-nation governments, they separate the customers (NGOs and
governments) from the consumers (people: end beneficiaries). When customers don’t understand, or don’t articulate,
the needs of the population correctly, the resulting solutions are likely to fail. This phenomenon is less likely in
market-based ventures like NetMark, where the customer and consumer are one and the same and the feedback
systems are fast and effective.
Alongside the aid vs. trade debate, we must remember that a major application of foreign aid is in short term
humanitarian relief. For instance, disaster relief funding can be necessary for countries and communities to address
immediate, short-term challenges and avoid further danger. In the wake of catastrophic natural disasters, devastated
communities cannot rely solely on market-based or locally developed improvements. However, the consequences of
tragedies (and potentially the causes, in anthropogenic cases) can be mitigated by building resilient systems through
effective long-term planning. Disaster aid is certainly necessary in some instances, but it should not last so long that it
weakens the society’s economy and perpetuates its dependence on foreign donors.
Technology carries with it certain knowledge, perspectives, and lifestyle concepts. Traditionally, AT theorists
differentiated these concepts into “indigenous” (local traditions and understandings often passed down through
generations) and “Western” (positivist and scientifically-derived information, often from the developed world). Often,
external technologies will challenge local traditions and champion a Western perspective. New ideas can help
catalyze change and generate appropriate solutions that meld Western and indigenous knowledge. However,
excessive deviations from indigenous perspectives often lead to the failure of AT projects. In principle, ATs should
leverage both Western and indigenous knowledge – but how can they be balanced?
One example of such a balance is KickStart’s manual treadle pump, which allows communities to access clean water
quickly and easily. The initial design of the treadle pump caused women to move their hips in a provocative manner,
leading many communities to reject it. Subsequently, indigenous perspectives and knowledge informed the redesign
of the pump’s pedal geometry to satisfy the communities’ cultural norms and expectations. Sustainable Health
Enterprises (SHE) in Uganda took on the problem of young women missing school due to lack of sanitary pads during
menstruation. SHE adopted a more Western perspective, advocating against this status quo by making sanitary pads
affordable and accessible to schoolgirls. They leveraged indigenous knowledge to make pads from eco-friendly
natural materials like banana bark and employed traditional cooperative business structures to integrate this product
into the local marketplace. They successfully improved the girls’ school attendance while augmenting livelihoods and
stimulating the local economy.
New technologies often clash with local cultures. This could be unavoidable, as with ubiquitous technologies such as
the Internet, or unintentional, like the treadle pumps that did not consider local cultural sensitivities. Even when
ventures try to mitigate both these possibilities, achieving harmony with local cultures can be difficult. For example,
cell phones inherently compromise cultural traditions and face-to-face conversations in rural areas. On one hand, we
can blame cellphones for the destruction of traditional culture. At the same time, cellphones have enhanced the lives
and livelihoods of billions of
people, who have readily accepted the technology and adapted their cultures accordingly. Culture is dynamic and
should not be museumified either.
Instead of outsiders dictating a specific definition of cultural preservation, local residents should be empowered to
choose the life they want. On one hand, the evolution of culture may be secondary to basic survival and an improved
quality of life. Conversely, technology should not force a community to lose an identity it wishes to preserve. When
foreign breakfast cereals were introduced in Kenya, their popularity among expatriates threatened the traditional
Kenyan breakfast industry. This was true even though the foreign cereals had a higher cost-per-nutrient ratio than that
of the traditional diet. The loss of traditional food habits did not improve community nutrition but instead worsened it.
Culture is a dynamic entity and technology-driven social development is a valid basis for cultural evolution. Outside
innovators can introduce game-changers and culture-changers, as long as the users maintain their right to determine
which technologies and cultural artifacts they want to adopt and which ones they want to discard.
At the height of the appropriate technology movement in the 1980s, ATI developed a sunflower oil press designed
specifically for small communities. The press was efficient and fit perfectly in the community. However, at a cost of
nearly $200, it was too expensive for the target users. Nearly a decade later, KickStart developed a cooking oil press
that costs less than $30 and has helped over a million people [15]. Their success can be attributed to standardizing
and producing one specific product instead of several locally-attuned versions. Even if a venture accepts the benefits
of standardization at the manufacturing level, implementation may not be possible without localization. For example,
while the design of a basic mudbrick press might be standardized, the entire mudbrick building system must adapt to
dissimilar climates and soils in different parts of the world. Localization in high rainfall areas might involve stabilizing
the bricks with a mud-cement mixture, despite the additional cost in terms of press maintenance and design. In this
case, the product (press) might be standardized while the educational regimen is customized to the specific context.
Clearly, there are many shades of grey within the standardization versus localization decision for technology
ventures, but an ideal medium is often possible.
Local production comes with many social, economic, and environmental trade-offs for appropriate technology
ventures. How important are profit, people, and planet to each AT venture? Profitable ventures are more likely to
scale and deliver their technologies to more people. However, if such a venture employs destructive manufacturing
practices, is the benefit of reaching more people worth the collateral cost? Local manufacturing for local markets with
locally available raw materials can lead to resilient businesses that can quickly respond to the evolving needs of
communities. At the same time, other social ventures insist that using local manufacturing and resources
compromises their business
models and cost effectiveness. For example, KickStart treadle pumps and irrigation systems are manufactured in
China due to cost restrictions [16]. Similarly, biomedical ventures like ClickMedix face cost and quality control barriers
when trying to manufacture locally [17]. Despite the additional transportation and logistics fees, outsourcing is
sometimes necessary to maintain economic sustainability. In situations like these, the venture must accept that all
development goals cannot be achieved simultaneously. Foreign manufacturing usually means fewer jobs and
relatively less economic empowerment for local residents. It may also lead to negative environmental outcomes due
to industrial manufacturing and international shipping. On the other hand, developers may be able to make foreign
production “greener” than local alternatives, for instance, by utilizing more expansive material options, recycling
facilities, and energy infrastructure.
What happens when imported products malfunction? Are tools, replacement parts, and skills available to easily repair
the product? Lack of technicians to maintain and repair expensive biomedical equipment aggravates healthcare
challenges in Africa [18]. On the other hand, although cellphones are not designed or manufactured on the African
continent, ecosystems have emerged to support them and accelerate their adoption. Thousands of cellphone repair
technicians, most with little formal education, eke out a living repairing commonly-used cellphones. The key question
is whether the technology is sustainable in the long term despite being manufactured elsewhere.
This need for trust is evident in the business model of Husk Power Systems (HPS), a rural electrification company in
India. The company uses renewable energy sources to produce and supply electricity on a per diem basis at a low
cost. User accountability is sourced through community monitoring: people’s homes are open and everyone can see
what appliances are being run. Neighbors are billed together, so everyone watches one another to ensure they are
each paying for what they use [2].
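The joint-billing arrangement described above can be sketched as a simple computation: each home's metered usage is priced individually, but the group receives one combined bill with a visible per-home breakdown, which is what makes neighbor-to-neighbor monitoring possible. This is an illustrative sketch only; the tariff and household names are assumptions, not HPS's actual figures.

```python
# Hypothetical sketch of joint billing for one neighbor group.
# The flat tariff below is an assumed illustrative rate, not HPS's real tariff.
TARIFF_PER_KWH = 25.0  # rupees per kWh (assumed)

def group_bill(usage_kwh_by_home):
    """Return (group_total, per-home breakdown) for one neighbor group.

    The whole group gets a single bill for group_total, while the
    breakdown stays visible so each household can verify what every
    neighbor owes against the appliances they can see being run.
    """
    breakdown = {home: kwh * TARIFF_PER_KWH
                 for home, kwh in usage_kwh_by_home.items()}
    return sum(breakdown.values()), breakdown

total, shares = group_bill({"home_a": 4.0, "home_b": 6.0, "home_c": 2.0})
print(total)   # 300.0 rupees billed to the group as a whole
print(shares)  # per-home shares, open for neighbors to inspect
```

The design point is that accountability comes from transparency of the breakdown, not from individual bills: under-reporting by one home raises what the rest of the group must cover, so neighbors have an incentive to watch one another.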
In short, there are many issues to consider when deciding what level of community involvement a venture should
pursue. More expensive technologies may require the pooled resources of multiple households. Highly
location-dependent ventures, such as infrastructure projects and healthcare services, might benefit from a
participatory approach that directly involves the community in venture development. Tight-knit, open-home
communities accustomed to central management or cooperatives will likely be a better fit for community ownership
than a more individualist culture with single-family homes and self-contained technologies. At the same time, some of
the most successful technologies like cellphones, solar lanterns, and radios are designed for individual or family use.
Though AT often tries to maximize cost efficiency, the least expensive solution cannot always be assumed to be the
most desirable in resource-constrained environments. AT theory must accept that additional expenses may be
required to meet the emotional and societal needs of the end-users. Basic
designs for greenhouses in developing countries are often based exclusively on efficient function; however, consumer
buying is not always predicated this way. We have initiated greenhouse ventures in Kenya and Cameroon that
manufacture and install affordable greenhouses for local smallholders and agro-businesses. While developing
low-cost substitutes for greenhouse glazing (plastic covering), we discovered that several farmers preferred taking
larger loans to buy glazing that looked “pretty” rather than equally functional but less attractive glazing made
from used rice bags. Though not based on function or direct economic returns, this superficial distinction is important
to the customers and must be respected by technology developers. Similar circumstances arise when customers
prefer expensive “brand-name” products that do not differ in quality from generic versions, a phenomenon found
among all socioeconomic classes [19]. Poor people expect good-quality products and are often willing to pay more
for aspirational products that boost their social status.
Technology solutions may inadvertently, or deliberately, help some entities while hurting others. Is a technology
inappropriate if the livelihoods of certain groups are compromised? For example, a venture that provides people with
safe drinking water at low costs can benefit many people. However, it might reduce the profits of bottled water and
soda companies, or compromise the livelihoods of water vendors or racketeers. Similar challenges arise in food value
chains and supply chains for all kinds of products. Customers may prefer to purchase a solar oven and make their
own food instead of frequenting a street vendor. Information and Communication Technologies (ICTs), especially
cellphones, can make supply chains more equitable and efficient, but do so by eliminating middlemen.
Ultimately, technologies will affect different people in different ways, and some may view the consequences as
negative. However, developers must avoid engaging in cultural imperialism and applying their own definition of
negative (or positive) impact to the situation. A technology solution is merely a tool: the customers and communities
should be able to decide whether they want to adopt the technology or not. Societies can address the needs of those
who are negatively impacted in many ways –
by teaching them how to leverage the same technology, through re-education or re-skilling programs, or by
innovating to find new opportunities for value creation. A technology that negatively affects a certain subset of the
population could actually serve as an impetus to increase human capital and systemic efficiency by encouraging the
displaced workers to thrive in another field.
Should Labor-Intensive Tasks be Replaced with Automated Systems?
Technological advancements in manufacturing and automation have historically led to periods of lower employment,
as evidenced by the Western industrial revolution [20]. When implementing technology solutions, developers must
consider any effects their ventures may have on the workforce. Some technologies increase workers’ efficiency and
productivity, while others might eliminate job functions and displace workers. Are laborsaving technologies
appropriate for populations already riddled with unemployment and underemployment?
Mass automation is clearly the logical choice in some situations. For instance, when a community suffers from an
inadequate food supply, mass automation of food production may be essential to its very survival. Such is the case with injera,
the traditional bread of Ethiopia, whose traditional recipe is energy and labor-intensive. Fuel costs have increased
with desertification, directly leading to the high cost of injera in rural areas. At the same time, the rapidly increasing
urban populations living in small quarters do not have the necessary space to make injera. Due to these and other
reasons, the consumption of wheat (bread) and rice has increased in rural and urban populations alike. In this case,
mass-manufacturing injera in factories is much more efficient and provides the people a way to preserve the most
important part of their diet and culture. Although certain technologies can reduce employment and hurt livelihoods,
their integration into modern economies is potentially desirable and often inevitable.
Should Technologies be Deskilled to Allow More People to use Them?
A primary characteristic of modern technology is the attempt to deskill operation: allowing anyone to operate devices with
little outside instruction. Deskilling increases the potential customer base of the product while decreasing
complications that arise due to misuse. At the same time, it can foster dependency and not actually address all the
systemic issues faced by users. For example, in India, cellphone companies are devoting significant resources to
services like mobile money transfers that allow rural populations to easily transfer money for goods and services.
However, many less educated users lack the trust and self-confidence to use the service by themselves. Instead, they
go to a local agent for the transaction. In this case, the extra effort to simplify the application for end-users is not needed.
In healthcare, direct-to-consumer technologies have tried to deskill and promote self-medication. Biomedical devices
like glucose monitors, scales, and thermometers are marketed to individual consumers. However, the usefulness of
certain tests and the implications of their results are often difficult to convey to less-educated users. An example of
this would be over-the-counter HIV tests that can be completed in the privacy of one’s home. The device is a
technological improvement, but the educational and medical information needed after an HIV diagnosis, whether
positive or negative, is not readily accessible to individuals. Therefore, many patients still go to testing centers for
assistance. A more appropriate approach to this system might involve increasing Community Health Workers’ access
to these devices and the necessary post-diagnosis educational material. This paradigm would refocus efforts away
from deskilling, more towards the entire pre- and post-diagnosis user experience. Technology developers must look
beyond developing the specific technology to incorporate systems-level issues into the design process. They need to
consider who will be using the device, what their educational level will be, and what situations the technology could
precipitate.
High-tech solutions can lead to simple low-tech products that are more likely to be sustainable in
low-resource contexts. At the same time, leapfrogging technologies like cell phones and solar power systems might
present viable and highly-scalable solutions. Rather than building and maintaining roads across the African continent,
low-cost airlines might be a more practical and cost-effective solution. Further, an extremely high-tech endeavor has
the potential to transform into a ubiquitous
technology. GPS navigation systems, developed for military and aeronautic operations at the cost of billions of
dollars, are easily affordable and find applications in a variety of poverty alleviation endeavors. GPS devices are an
excellent example of a high technology that has become so ubiquitous that people don’t regard it as high-tech
anymore.
One way of balancing repair needs versus lifecycle is to incorporate maintenance requirements directly into the core
of the social venture. SELCO, a social enterprise in India, provides personalized solar power systems for customers
with routine maintenance integrated into product costs [24]. While the initial costs may be higher, this approach
ensures that the solar systems are maintained by trained technicians and continue to meet the needs of customers.
Other social enterprises develop their technology under a “do-it-yourself” methodology to encourage end-users to
understand the product and take responsibility for maintenance.
The challenge is that required tools are not available in many areas and do-it-yourself culture is not as common in
developing communities, especially for more expensive products. Another approach is to implement consumable
solutions, such as the disposable, point-of-use water filters being used in several developing countries [25].
Pay-for-use business models, where customers only pay for product usage (e.g., paying for power rather than a solar
panel) alleviate the challenge of access to capital and essentially side-step the affordability/durability debate.
Although eco-friendly technologies and manufacturing processes are preferred, they can be too expensive and hence
unaffordable to people in developing countries. For example, in Kenya, entrepreneurs use car batteries to operate
small businesses that recharge cell phones, power street telephone businesses, or entertainment centers offering TV
viewing services. Car batteries are environmentally toxic but are essential for these small businesses to survive.
Without the batteries, these individuals would likely be relegated to subsistence farming or the unreliable ad hoc labor
market. Instead, they are using environmentally-toxic technology to improve their livelihoods. Ideally, car batteries
could be replaced by solar or other renewable energy sources, but these technologies are often too expensive to be
viable. Also, improved livelihoods engender a respect for the natural world and thoughtful use of resources.
Another option for environmentally-conscious ventures is reusing detrimental materials in benign ways. For instance,
some entrepreneurs embrace the inability of plastics to biodegrade by incorporating them into longer-lasting
structures. Entrepreneurs in Lesotho are using plastic bottles to make mini-green houses for individuals that cannot
afford traditional greenhouses [26]. The bottles have already been used and discarded from their original purpose, so
reusing them in the greenhouses (which need some sort of clear plastic-like material to function) is actually a
relatively benign approach for creating social good. The key point in these situations is that technology products may
benefit from resources that are not environmentally benign. While it is best if the toxic materials are recycled for these
applications, ventures must decide for themselves if they can accept environmentally toxic resources as unfortunate
byproducts to the social value created. In any instance, developers should comply with local policies and endeavor to
find cradle-to-cradle solutions for their products.
As the pursuit of appropriate technology continues, the theory and tenets for its appropriateness will no doubt
continue to develop. However, innovators must realize that all of these tenets are in fact tradeoffs – questions that
each technology venture and set of stakeholders must answer for themselves. These engineering design and
implementation questions span the spectrum from cultural to financial to manufacturing and capital issues. The
answers must be tailored to the context of the problem, the desired
solution, the appropriate business strategy, and the preferences of the stakeholders. To ensure that a technology
achieves economic, social, environmental, and technological sustainability, developers must engage in open
discussions with local partners. Communities should have a voice in these decisions to ensure that the designs meet
their needs and result in a self-determined improvement of livelihoods and agency. However, engaging the
community in every single aspect of the venture can lead to expectations and ownership, which although desirable,
have the potential to negatively impact the success of the venture and limit its scalability [27].
Beyond all the systemic design and implementation tradeoffs is the fundamental question upon which all the others
rest – should outsiders create solutions for the developing world? Why is the appropriate technology movement trying
to develop these solutions? What if it hurts the cultures and countries instead of helping them? One answer is to
consider Humanitarian Engineering a new wave of cultural imperialism: the West is trying a new mechanism of
imposing its ideal worldview on poor countries. This is a valid viewpoint and perhaps true for some. An alternate
perspective, and one that we prefer, is to think of AT as an exercise in co-creation. If we espouse the principles of
empathy, equity, and ecosystems when we engage with people across the world, the distance between “us” and
“them” vanishes. As illustrated in this article, we live in an interconnected world with complicated problems, dwindling
resources, and shared solutions. It is imperative to break down the barriers between our disciplines, cultures, and
epistemologies to find practical, innovative and sustainable solutions. A few ventures will be successful while many
will fail. Cultures are robust enough to survive our spectacular failures while the world is waiting to celebrate and
adopt the successful game-changers that improve the human condition.
Authors
Samir Patel
Siri Maley
Khanjan Mehta
Full article: Technologies, Developing World, Humanitarian Engineering, March 2014
Human well-being is a difficult concept to quantify. Many attempts have been made
in that direction, the most obvious of them being the use of gross domestic product (GDP)
per capita as an indicator. The shortcomings of such an approach are well known, and for this
reason the HDI (Human Development Index) has been conceived as a composite of life
expectancy, educational level, and per capita income.
A rough idea of the relevance of energy to well-being can be gained by plotting HDI
as a function of per capita (commercial + non-commercial) energy consumption per year
for a large number of countries, as shown in Figure 1.
It is apparent from this figure that, for an energy consumption above 1 ton of oil
equivalent (toe)/capita per year, the value of HDI is higher than 0.8 and essentially constant
for all countries. One toe/capita/year∗ seems, therefore, the minimum energy needed to
guarantee an acceptable level of living as measured by the HDI, despite many variations of
consumption patterns and lifestyles across countries.
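As a check on the footnote’s conversion: one tonne of oil equivalent is defined as 41.868 GJ, so spreading one toe over a year corresponds to roughly 1.3 kW of continuous power. A minimal sketch:

```python
# Verify that a continuous consumption of 1 toe/year corresponds to ~1.3 kW.
TOE_IN_JOULES = 41.868e9           # 1 tonne of oil equivalent (IEA definition)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

average_power_w = TOE_IN_JOULES / SECONDS_PER_YEAR
print(f"1 toe/year = {average_power_w / 1000:.2f} kW")  # ~1.33 kW
```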
∗ 1 toe/year = 1.3 kW

The statistical analysis presented above shows clearly that energy has a determinant
influence on the HDI, particularly in the early stages of development, in which the vast
majority of the world’s people, particularly women and children, presently are. It also shows that
the influence of per capita energy consumption on the HDI begins to decline somewhere
between 1 and 3 toe per inhabitant. Thereafter, even with a tripling in energy consumption,
the HDI does not increase. Thus, from approximately 1 toe per capita, the strong positive
covariance of energy consumption with HDI starts to diminish. Additional increases in HDI
are more closely correlated to the other variables chosen to define it (life expectancy,
educational level, and per capita income).
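The diminishing returns described above can be illustrated with a saturating curve. The functional form and parameters below are hypothetical, chosen only to reproduce the shape of the relationship, not the actual fitted data:

```python
import math

def hdi_model(e_toe, a=0.95, b=0.4):
    """Illustrative saturating fit of HDI vs energy use; the functional
    form and parameters are hypothetical, chosen only to show the shape."""
    return a * (1.0 - math.exp(-e_toe / b))

for e in (0.25, 0.5, 1.0, 3.0):
    print(f"{e:4.2f} toe/capita/year -> HDI ~ {hdi_model(e):.2f}")

# The increment from 0.25 to 0.5 toe dwarfs the one from 1 to 3 toe.
gain_low = hdi_model(0.5) - hdi_model(0.25)
gain_high = hdi_model(3.0) - hdi_model(1.0)
print(f"gain at low use: {gain_low:.2f}, gain at high use: {gain_high:.2f}")
```

In this toy model, a quarter-toe increment at low consumption raises HDI several times more than a full two-toe increment past the 1 toe saturation point.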
A serious problem with such an analysis resides in the fact that commercial and non-
commercial energy consumption are related in a complex way to the energy services they
provide, which in households include illumination, cooked food, comfortable indoor
temperatures, refrigeration and transportation. Energy services are also required for
virtually every commercial and industrial activity. For instance, heating and cooling are
needed for many industrial processes, motive power is needed for agriculture and electricity
is needed for telecommunications and electronics.
The energy chain that delivers these services begins with the collection or extraction
of primary energy, which, in one or several steps, may be converted into energy carriers, such
as electricity or diesel oil, that are suitable for end uses. Energy end-use equipment –
stoves, light bulbs, vehicles, machinery – converts final energy into useful energy, which
provides the desired benefits: the energy services. An example of an energy chain –
beginning with coal extraction from a mine (primary energy) and ending with produced
steel as an energy service – is shown in Figure 2.
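The chain from primary to useful energy amounts to a product of stage efficiencies. The stages and efficiency figures below are invented round numbers for illustration, not measured values from the figure:

```python
# Illustrative energy chain: primary energy -> energy carrier -> useful energy.
# Stage efficiencies are hypothetical round numbers, not measured values.
chain = [
    ("mining & transport of coal", 0.95),       # losses before conversion
    ("power plant (coal -> electricity)", 0.35),
    ("transmission & distribution", 0.92),
    ("electric motor (end-use device)", 0.85),
]

primary_mj = 100.0
energy = primary_mj
for stage, eff in chain:
    energy *= eff
    print(f"after {stage:36s}: {energy:6.1f} MJ")

# Only ~26% of the primary energy arrives as useful (motive) energy.
```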
The target levels assumed in the Latin American World Model are:
• 3000 kcal and 100 grams of protein per person per day;
• one house (50 square meters of living area) per family; and
• 12 years of basic education (i.e., school enrolment of all children between 6 and
17 years).
It is well known, however, that a large number of people in rural areas in developing
countries do not have access to commercial energy due to lack of purchasing power or
other reasons. These people depend for survival on non-commercial energy sources,
principally firewood, dung and agricultural wastes, which they gather at a negligible
monetary cost. In many developing countries, non-commercial energy accounts for a
significant proportion of total primary energy consumption, and 7.5 × 10³ kcal/day per
capita is considered to be a representative figure.
Adding this number to the cost of commercial energy to meet basic needs yields the
total energy cost of satisfying basic human needs which, as shown in Table 3.2, ranges
between 27.8 × 10³ and 36.4 × 10³ kcal/day per capita, i.e., between 1.0 and 1.3 toe/capita.
Source: Krugman, H. and Goldemberg, J., “The Energy Cost of Satisfying Basic Human Needs,”
Technological Forecasting and Social Change, 24, 45-60 (1983).
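The daily-to-annual conversion quoted above can be checked directly, using the standard equivalence 1 toe = 10⁷ kcal:

```python
# Convert the table's daily per-capita energy costs into toe/capita/year,
# using the standard equivalence 1 toe = 1.0e7 kcal.
KCAL_PER_TOE = 1.0e7
DAYS_PER_YEAR = 365

def kcal_day_to_toe_year(kcal_per_day):
    """Daily per-capita energy cost (kcal/day) -> annual cost in toe."""
    return kcal_per_day * DAYS_PER_YEAR / KCAL_PER_TOE

low, high = kcal_day_to_toe_year(27.8e3), kcal_day_to_toe_year(36.4e3)
print(f"{low:.2f} to {high:.2f} toe/capita/year")  # 1.01 to 1.33
```

The result matches the 1.0–1.3 toe/capita range quoted in the text.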
One very interesting study has tried to approach the problem starting from the
assumption that the standard of living of Western Europe, Japan, Australia and New
Zealand in the mid 1970s could be considered satisfactory, and that the immense population
living in developing countries would be very well off if it had access to the services
available to the people of the above-mentioned countries.
The activity levels in these countries in the mid 1970s are given in Appendix I and
are basically the following:
• a reasonably solid house with 25 m2 per capita;
• water supplies and sanitation;
• clean easy-to-use cooking fuel (gas, for example);
• electrical lighting.
In other words, all families in the model above, on average, live in reasonably solid
houses with about 25 m2 per capita, with water supplies and sanitation. Further, all homes
would have a clean, easy-to-use cooking fuel (for example, gas), would be illuminated with
electric lights, and would have all the basic electric appliances – a refrigerator/freezer, a water heater, a
clothes washer and a television set.
There is also one automobile for every 1.2 households on average, and the average
person travels by air to the extent of 350 km per year. All this cannot be sustained without
well-developed industries for the processing of basic materials and a large services sector –
hence, it is visualized that this infrastructure has been established and is in operation.
It is clear that these activity levels are more than sufficient to meet the basic needs
of the population; in fact, they go very much farther to provide for major improvements in
the quality of life.
Let’s suppose now that most of the energy-utilizing technologies envisaged for
the above activities are examples of the “best available” technologies in terms of
their energy performance - for example, the most energy-efficient stoves, water-heaters,
refrigerators/freezers, light bulbs, commercial buildings, cement plants, paper mills,
nitrogen fertilizer plants. Because these technologies are available on the market they can
be considered to be economically viable at present energy prices. A few of the indicated
technologies are “advanced technologies” that could be commercialized over the next
decade – hence, they are not contingent on the achievement of technological breakthroughs.
Indications are that these technologies would be cost-effective at present energy prices.
One can then multiply each activity level by the corresponding specific energy
demand, that is, the energy demand for unit level of the activity, and then sum up all the
activities.
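The bookkeeping described here is a weighted sum over activities. A minimal sketch, with invented activity levels and specific energy demands (the categories and numbers are hypothetical, not taken from Appendix I):

```python
# Total final energy demand = sum over activities of
# (activity level per capita) x (specific energy demand per unit of activity).
# All categories and numbers below are invented for illustration (GJ basis).
activities = {
    # activity: (annual level per capita, GJ per unit of activity)
    "cooking (person-years of meals)": (1.0, 3.0),
    "lighting (household-years)":      (1.0, 1.0),
    "space heating/cooling (m2)":      (25.0, 0.2),
    "transport (1000 passenger-km)":   (8.0, 1.5),
}

total_gj = sum(level * demand for level, demand in activities.values())
print(f"total final energy demand: {total_gj:.0f} GJ/capita/year")
# Since 1 toe ~ 41.87 GJ, this is about 0.5 toe/capita for four services.
```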
It turns out that, roughly speaking, the total final energy demand for the countries
mentioned above, with the assumed activity levels and the menu of energy-efficient technologies, is
only about 1 toe per capita. This is both a surprising and remarkable result, because this
level of final per capita energy use is only about 20 percent more than the actual per capita
energy use rate in developing countries in 1980. The interesting implication of this result is
that with 1 toe per capita of energy, developing countries can provide any standard of life
ranging from the present low level (in which even basic human needs are not satisfied), to a
level as high as in the Western Europe region in the mid and late 1970s for the majority of
the population.
The importance of the efficient use of primary energy and the effect of
modernizing energy supplies can be gauged by comparing direct energy use in rural and
urban areas. An example is shown in Figure 3, which gives per capita energy consumption
as a function of income in rural and urban areas.
The reason for this result is simple: cooking is a major end-use of domestic energy
in developing countries; the use of biomass, particularly fuelwood as a cooking fuel is far
more common in rural areas; and this non-commercial energy is used at low efficiencies in
fuelwood stoves. The tendency in cities is to shift to more efficient cooking fuels, often in
this sequence: fuelwood to charcoal to kerosene to LPG. And the fuel efficiencies achievable with
current technologies increase in the same sequence. Basically, the same type of effect takes
place in the case of lighting too, because the percentage of kerosene-illuminated houses is
higher in rural areas, and the tendency in cities is to shift to more efficient electric
illumination. Thus, the lower urban energy consumption for a given income level
corresponds to greater efficiencies and a better quality of life for urban households.
More generally speaking, the problem is evidenced by the way different energy
sources are used as income increases in Brazil. As shown in Figure 4, households with low
income rely almost entirely on fuelwood, which is used mainly for cooking in very
inefficient cooking stoves. As income increases, “modern” fuels such as electricity and
liquid fuels become dominant and higher income people not only have access to greater
amounts of primary energy but also use them in more efficient ways. Typically,
commercial energy is used with an efficiency of 25%, i.e., one quarter of the energy content
of commercial energy is converted into electricity or mechanical power used by people.
Non-commercial energy is commonly used for cooking with dismally low efficiencies
around 10%.
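Using the two efficiency figures quoted above (25% for commercial energy, 10% for non-commercial cooking fuel), the primary-energy penalty of the less efficient fuel follows directly:

```python
# Primary energy needed to deliver a given amount of useful energy,
# at the efficiencies quoted in the text: ~25% commercial vs ~10% fuelwood.
def primary_needed(useful_mj, efficiency):
    """Primary energy required to deliver `useful_mj` of useful energy."""
    return useful_mj / efficiency

useful = 10.0  # MJ of heat actually delivered to the pot (example amount)
wood = primary_needed(useful, 0.10)
commercial = primary_needed(useful, 0.25)
print(f"fuelwood: {wood:.0f} MJ primary; commercial fuel: {commercial:.0f} MJ")
# Fuelwood cooking needs 2.5x the primary energy for the same useful heat.
```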
The basic problem of the use of fuelwood for cooking is its dismally low efficiency,
which converts only about 10 per cent of the energy contained in the fuelwood into useful
energy in the pot. Simple fireplaces are often dirty and dangerous: dirty because smoke and
soot settle on utensils, walls, ceiling and people; dangerous because the fire is open and
the pots can easily tip over. The smoke is irritating and a well-known danger to health.
With increasing affluence, people move from simple, primitive stoves using dung or
crop residues, to wood or charcoal used in metal or insulated stoves, and finally to propane,
liquefied petroleum gas and electrical appliances, climbing an “energy ladder” which characterizes
cooking (Figure 6).
Experience has shown that very simple improvements to primitive cooking stoves
cost little and can improve their efficiency considerably. This is particularly the case for the
Kenya Ceramic Jiko (KCJ) stove, 700,000 of which are in use today in East Africa, as well
as some of its variants. Over 13,000 KCJ stoves are sold in Kenya each month.
The prospect for women’s education improves as the drudgery of their household
chores is reduced with the availability of efficient energy sources and devices for cooking
and of energy-utilizing technologies for the supply of water for domestic uses. The
deployment of energy for industries, which generate employment and income for women,
can also help delay the marriage age, another important determinant of fertility. If the use
of energy results in child-labour becoming unnecessary for crucial household tasks, an
important rationale for large families is eliminated. Thus, energy can contribute to a
reduction in the rate of population growth if it is directed preferentially towards the needs
of women, households and a healthy environment.
The greatest advance was the steam engine developed by Watt, which opened the
way for an extraordinary increase in the efficiency of converting the energy contained in coal (or other
fuels) into mechanical power through a steam engine cycle. Figure 7 shows typical
improvements in efficiency since Watt’s initial device.
Figure 7 Efficiencies of steam engines
One can obtain an idea of the typical progress achieved in this area from Figure 8,
which gives the evolution in electricity consumption of a typical 200-liter refrigerator
with no freezer compartment. A reduction by a factor of 5 was obtained between 1973 and
1988, and further progress has been achieved since then.
D. Improvements in lighting
More spectacular have been the advances in obtaining lighting from electrical lamps.
Since the early days of Edison, some 100 years ago, with incandescent filaments (which
produced more heat than light), enormous progress has been achieved, and gains of a factor of
100 in lumens/watt have been obtained, as shown in Figure 9.
Even if energy is a poor indicator of human well-being and other factors can be of
considerable importance, there are some relevant correlations between the use of energy
and the HDI rank. Thus, considering the HDI rank and comparing the highest 10 HDI
countries to the lowest 10 HDI countries, some important features become apparent in the
use of energy by each group of countries:
Highest HDI countries: Canada, France, Norway, United States, Finland, Iceland, Japan,
New Zealand, Sweden, Spain, Austria, Belgium
Lowest HDI countries: Uganda, Malawi, Djibouti, Guinea-Bissau, Gambia, Guinea, Burundi,
Mali, Burkina Faso
The use of commercial or traditional fuels is a distinguishing feature of a country’s place in the
HDI ranking. The highest HDI countries use commercial energy, while the lowest HDI countries
consume traditional fuels. As shown in Figure 10, the share of commercial energy is in the
range of 97-100% in the 10 highest HDI countries and in the range of 10-20% for most
of the 10 lowest HDI countries.
Figure 10 HDI and energy use
The evolution in energy intensity in the period 1970-1995 shows the 10 highest HDI
countries following a decreasing path and the 10 lowest HDI countries in an increasing
path. Moreover, while the 10 highest HDI countries were successful in decoupling energy
consumption from development, the 10 lowest HDI countries use more energy per GDP-PPP
unit, relying on traditional fuels. Energy intensities for the 10 lowest HDI countries were
considered for the period 1973-1985 due to lack of consistency in data for the year 1995.
Figure 11 shows the energy intensity paths followed by the two groups of countries.
One major feature of the 10 lowest HDI countries is the use of traditional fuels as shown in
Table II.
Table II – Share of traditional fuels in lowest HDI countries
HDI value Country 1973 1985
0.340 Uganda 83% 92%
0.334 Malawi 87% 94%
0.295 Guinea-Bissau 72% 67%
0.291 Gambia 89% 78%
0.277 Guinea 69% 72%
0.241 Burundi 97% 95%
0.236 Mali 90% 88%
0.219 Burkina Faso 96% 92%
Sources: World Resources Institute (for traditional fuels); Human Development Report 1998.
The 10 highest HDI rank countries each have an efficient energy system. Such a system
was built through large investments in infrastructure and system components aiming at
reducing the energy use costs and improving the overall performance. Each of these
countries adopted energy efficiency measures through policies and programs, mainly since
the first oil shock (1973-1974). The evolution of energy use in some of the highest HDI
rank countries is shown in Figure 12, stressing the decoupling between energy consumption
and economic development.
The evolution of the energy intensity is a useful reference to set up the path of
improvements or losses in the efficient use of energy. Moreover, for each country, it can
indicate changes in the economic structure and in the fuel mix. Energy intensity is the ratio
of total primary energy supply to GDP.
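Energy intensity as defined here is a simple ratio. An illustrative computation, with invented TPES and GDP figures (not data for any particular country):

```python
# Energy intensity = total primary energy supply (TPES) / GDP.
# All values below are invented for illustration only.
def energy_intensity(tpes_toe, gdp_ppp_usd):
    """Energy intensity in toe per 1000 USD of PPP-converted GDP."""
    return tpes_toe / (gdp_ppp_usd / 1000.0)

# A falling intensity over time indicates decoupling of energy use
# from economic growth, as described for the highest HDI countries.
intensity_1970 = energy_intensity(tpes_toe=250e6, gdp_ppp_usd=1.0e12)
intensity_1995 = energy_intensity(tpes_toe=300e6, gdp_ppp_usd=2.0e12)
print(f"1970: {intensity_1970:.2f} toe/k$, 1995: {intensity_1995:.2f} toe/k$")
```

Here energy use grows in absolute terms, yet intensity falls from 0.25 to 0.15 toe per thousand dollars because GDP grows faster.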
Important commonalities exist among the energy systems of rather different countries.
While energy use (E) and GDP per capita vary by more than an order of magnitude when
comparing developing to industrialized countries, energy intensity does not change
by more than a factor of 2. In addition, as far as developing countries are concerned, this
probably reflects the fact that the “modern sector” of the economy dominates both E and GDP,
while the “traditional sector” contributes little to both.
A recent study indicates that the energy intensity in the period 1971-1992 of developing
and industrialized countries is converging to a common pattern of energy use. For each
country, energy intensity was obtained as the ratio of commercial energy use to GDP
converted in terms of purchasing power parity (PPP). The path of energy intensity of a
country was given by the yearly sequence of energy intensity data over the period 1971-
1994. The same procedure was followed to obtain the energy intensity paths for a set of 18
industrialized countries and for a set of 23 developing countries. The energy intensity data
for each of these subsets were given by the ratio of total commercial energy use to total
PPP-converted GDP for each group of countries at each year of the period 1971-1994
(Figure 10).
Energy use data for the 41 countries were gathered from the World Bank’s World
Development Indicators tables (commercial energy use series) over the period 1971-
1992 and are given in 1000 t of oil equivalent. The PPP-converted GDP data for the 41
countries over the period 1971-1992 were obtained from the World Resources Institute,
based on the Penn World Tables (PWT) and the World Bank’s World Development
Indicators. PPP-converted GDP data were initially obtained in current international
currency. Current data were then converted into constant 1992 US dollars by applying the
GDP implicit price deflator published by the US Department of Commerce, Bureau of
Economic Analysis (Survey of Current Business, July 1998).
UNIT-II L 4
Human Development Index and Energy Consumption
Reducing the risk of climate change requires large and rapid reductions in greenhouse gas
emissions, most of which are caused by the burning of fossil energy sources (1). The debate around
emission reductions has been dominated by concern for economic growth, and its reliance on cheap
and plentiful energy. Instead, I believe energy’s role in achieving human development should be our
core priority.
The right to development effectively comes with a right to use some minimum level of energy, as
recognised in the United Nations Sustainable Energy for All initiative and Sustainable Development
Goal 7. Because of the dependence of human
development on energy, it is crucially important to understand how to maximise human development
benefits at lower levels of energy use. This change in focus follows growing calls for a new research
and policy agenda focused on achieving well-being within environmental boundaries (2).
The graph below is based on my own analysis comparing the Human Development Index with per
capita energy use (3). Here we see a high correlation between lower energy and lower HDI: a small
increment of energy use corresponds to a relatively large increase in HDI (Figure 1). As energy use
increases, we witness what economists would call “diminishing returns” in human development
outcomes. And at higher energy use, there is no statistically significant dependency: the relationship
shows evidence of saturation (4). The best-fit curve shows high human development (HDI above
0.7) was attainable at 50 GJ of primary energy per person in 2012. However, some countries with
that level of energy use had already achieved very high human development (HDI above 0.8), while
energy use above 100 GJ is well into the saturation area: many countries with lower energy achieve
very high human development (5). For context, in 2012, the EU28 used 145 GJ/person, the USA
305, China 96, Brazil 63 and India 29. In fact, the energy associated with human development
decreases significantly over time: in 1975 high human development required on average 100
GJ/person, and this almost halved by 2005 to 60 GJ/person (6). To explore this data, visit William
Lamb’s interactive graphics website.
Figure 1: Human Development Index and primary energy use per capita in 2012.
It’s clear from the steepness of the fit curve in Figure 1 that it is far more “efficient” in human
development terms to use energy in least developed countries, and far less efficient in highly
developed countries (7). This effect is so extreme that if we redistributed all the energy in our 135
country sample to an average of 85 GJ/person, the average HDI would increase from 0.68 to 0.78: a
leap of 0.1, to right below the “very high development” level. We can also measure the efficiency of
countries at given energy levels, using “residuals” (8): the distance a country is located above or
below the fit curve (see attached spreadsheet). Sri Lanka, Bangladesh, Switzerland and Ireland are
countries with much higher HDI than would be expected given their level of energy use, whereas
Nigeria, Ivory Coast, Oman and Trinidad & Tobago are countries with much lower HDI than
expected given their level of energy use (9).
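A residual, as used above, is simply a country’s actual HDI minus the HDI the fit curve predicts at its energy level. The sketch below uses a hypothetical stand-in for the fit curve, not the author’s actual regression, with invented example points:

```python
import math

def hdi_fit(gj_per_capita):
    """Hypothetical stand-in for the article's best-fit curve (NOT the
    author's actual regression): saturates toward HDI ~0.95."""
    return 0.95 * (1.0 - math.exp(-gj_per_capita / 30.0))

def residual(country_hdi, gj_per_capita):
    """Positive residual: higher HDI than expected for this energy use."""
    return country_hdi - hdi_fit(gj_per_capita)

# Invented example points, illustrating the two groups described above:
print(f"efficient country:   {residual(0.77, 30):+.2f}")   # above the curve
print(f"inefficient country: {residual(0.50, 100):+.2f}")  # below the curve
```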
This analysis raises some fundamental questions. Why do countries with similar energy requirements
arrive at very different levels of human development? How can we learn from countries with low energy use,
but high human development? How can we understand the human development consequences of
specific types of energy use? Recent research from the social sciences, which seeks to connect
specific energy uses and human well-being (10), may point the way to well-being for all within
planetary boundaries.
The HDialogue blog is a platform for debate and discussion. Posts reflect the views of respective
authors in their individual capacities and not the views of UNDP/HDRO.
Demand for renewable energy will continue to grow, driven by declining costs of technology, the
need to reduce CO2 emissions, and growing energy demand in developing and underdeveloped
nations.
The International Renewable Energy Agency (IRENA) estimates that to meet the goals of
the Paris Agreement, the share of renewables in annual global electricity generation needs
to increase from 25% today to 86% by 2050. To do that, the world needs to invest
USD 110 trillion in the sector by 2050, compared to the USD 95 trillion currently planned.
“In the next decade, renewable energy will take the position as the cheapest bulk energy almost
everywhere globally. Several outlooks have been published that show scenarios in which renewable
energy capacity will reach 50-60% penetration during the next 10 years, and during the next few
decades, there is the possibility for renewable energy capacity to reach 60-70% and beyond. At the
same time, we also see inflexible capacities exiting the market.”
This transformation will lead to a visible shift away from fossil fuels in favour of renewables.
Bloomberg BNEF estimates that USD 13.3 trillion will be invested in new power generation assets to
fund 15,145 GW of new plants between 2019 and 2050, of which 80% is expected to be carbon-free.
BNEF estimates that by 2050, wind and solar will make up 50% of the world’s electricity generation.
Europe is expected to decarbonise the fastest and furthest, while China and the US will play catch-up.
Despite the optimistic outlook for renewables, parts of the world will continue burning fossil fuels
including oil, gas, and coal for energy production. The use of these energy sources will
continue in some places due to a lack of political will along with the availability of cheap coal.
Additionally, the shift to renewables is not happening fast enough to meet rising electricity demand.
The International Energy Agency (IEA)’s recently released World Energy Outlook 2019 points out that
unless major policy changes are made, society is and will continue to be heavily dependent on fossil
fuels.
The report says that, given today’s policy intentions and targets, “A three-way race is underway among
coal, natural gas and renewables to provide power and heat to Asia’s fast-growing economies. Coal is
the incumbent in most developing Asian countries: new investment decisions in coal-using
infrastructure have slowed sharply, but the large stock of existing coal-using power plants and
factories (and the 170 GW of capacity under construction worldwide) provides coal with
considerable staying power.”
According to BNEF, coal will continue to grow in Asia, but collapse everywhere else and peak globally
in 2026. Gas capacity, according to BNEF, will instead play a vital role in supporting increasing
flexibility needs in the longer term.
As the world moves towards 100% renewable energy, reliability will emerge as a key area of concern.
This is where flexibility and innovation in supply will play a key role.
“Power is needed at all times, even when wind is not blowing or the sun is not shining.
System-level flexibility with energy storage solutions, flexible thermal power generation,
and interconnectors are essential to enable the penetration of cheap renewables and balance their
intermittent nature,” says Pitsinki.
Worldwide, there will be a need to look more closely at controlling demand by reducing, increasing
or shifting it to a specific period of time, according to IRENA.
IRENA’s recent report states: “The potential for demand-side flexibility, expressed as the sum of
flexible load at each hour of the year, is high and, according to IEA (2018), is equal to 4 000 TWh (457
GW average) today and is expected to grow to 7 000 TWh (800 GW average) by 2040 due to the
electrification of transport and buildings (mostly electrification of heat). While there are already
parts of the world in which demand-side flexibility is being leveraged, there is still a long way to
reach the full potential of this flexibility source.”
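The quoted figures relate an annual energy total (TWh per year) to an average power (GW), and the conversion is worth making explicit: divide the annual energy by the number of hours in a year. A minimal sketch of this arithmetic (the function name is illustrative, not from IRENA or the IEA):

```python
HOURS_PER_YEAR = 8760  # 365 days x 24 hours

def avg_gw(annual_twh):
    """Convert an annual energy total in TWh to an average power in GW.

    1 TWh = 1000 GWh, so average GW = (TWh * 1000) / hours per year.
    """
    return annual_twh * 1000 / HOURS_PER_YEAR

# 4 000 TWh/year -> roughly 457 GW average, and 7 000 TWh/year -> roughly
# 800 GW average, matching the figures quoted in the IRENA report above.
```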
Energy storage is also likely to shape decentralised grids driven by consumer energy decisions
such as rooftop solar and behind-the-meter batteries. BNEF estimates that batteries, power plants
that run on gas, and dynamic demand could help wind and solar reach more than 80%
penetration in some markets.
Apart from batteries, there are several new technologies that are being researched and developed
for energy storage. These enabling technologies help stabilise energy systems’ demand and supply
issues by converting extra power into hydrogen, heat or some other form of energy carrier while at
the same time decreasing the curtailment in renewable energy generation.
“The big driver for these technologies is the availability of ‘free’ excess electricity from wind and
solar power. Hydrogen is relatively easy and cheap to produce, but it has low energy density and
thus is expensive to transport as such.”
“Combustion technologies also have limited ability to use hydrogen. That is why
the big push towards Power-to-X type of technologies, where hydrogen is further
synthesised into methane or methanol that are easier to handle and use in energy production.”
UNIT-II L 8
Energy is one of the major parts of the economic infrastructure, being the basic input needed to sustain economic growth.
There exists a strong relationship between economic development and energy consumption.
The more developed a country is, the higher its per capita energy consumption, and vice versa. Human civilization
relies on different sources of energy.
The two major sources of energy can be classified under:
∙ Conventional Sources
∙ Non-Conventional Sources
Below you can see the difference between conventional and non-conventional sources of energy.
Renewable energy
Electricity generation by source (chart legend): Coal (38%), Natural gas (23%), Hydro (16%), Nuclear (10%), Wind (4%), Oil (3%), Solar (2%), Biofuels (2%), Other (2%).
Wind, solar, and hydroelectricity are three renewable sources of energy. [2]
Renewable energy is energy that is collected from renewable resources, which are naturally replenished
on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat.[3] Renewable energy
often provides energy in four important areas: electricity generation, air and water heating/cooling,
transportation, and rural (off-grid) energy services.[4]
Based on REN21's 2017 report, renewables contributed 19.3% to humans' global energy consumption and
24.5% to their generation of electricity in 2015 and 2016, respectively. This energy consumption is divided
as 8.9% coming from traditional biomass, 4.2% as heat energy (modern biomass, geothermal and solar
heat), 3.9% from hydroelectricity, and the remaining 2.2% as electricity from wind, solar, geothermal, and
other forms of biomass. Worldwide investments in renewable technologies amounted to more than US$286
billion in 2015. In 2017, worldwide investments in renewable energy amounted to US$279.8 billion,[5] with
China accounting for US$126.6 billion or 45% of the global investments, the United States for US$40.5
billion and Europe for US$40.9 billion. Globally there are an estimated 7.7 million jobs associated with the
renewable energy industries,[6] with solar photovoltaics being the largest renewable employer.[7] Renewable
energy systems are rapidly becoming more efficient and cheaper, and their share of total energy
consumption is increasing. As of 2019, more than two-thirds of worldwide newly installed electricity
capacity was renewable.[8][9] Growth in consumption of coal and oil could end by 2020 due to increased
uptake of renewables and natural gas.[10][11]
At the national level, at least 30 nations around the world already have renewable energy contributing more
than 20 percent of energy supply. National renewable energy markets are projected to continue to grow
strongly in the coming decade and beyond. Some places, and at least two countries, Iceland and Norway,
already generate all their electricity using renewable energy,[12] and many other countries have set a goal
to reach 100% renewable energy in the future. At least 47 nations around the world already have over 50
percent of electricity from renewable resources.[13][14][15][16] Renewable energy resources exist over wide
geographical areas, in contrast to fossil fuels, which are concentrated in a limited number of countries.
Rapid deployment of renewable energy and energy efficiency technologies is resulting in significant energy
security, climate change mitigation, and economic benefits.[17] In
international public opinion surveys there is strong support for promoting renewable sources such as solar
power and wind power.[18][19]
While many renewable energy projects are large-scale, renewable technologies are also suited to rural and
remote areas and developing countries, where energy is often crucial in human development.[20] As most
renewable energy technologies provide electricity, renewable energy deployment is often applied in
conjunction with further electrification, which has several benefits: electricity can be converted to heat, can
be converted into mechanical energy with high efficiency, and is clean at the point of consumption.[21][22] In
addition, electrification with renewable energy is more efficient and therefore leads to significant reductions
in primary energy requirements.
A non-renewable resource (also called a finite resource) is a natural resource that cannot be readily replaced by natural
means at a quick enough pace to keep up with consumption. An example is carbon-based fossil fuel.[1] The original organic
matter, with the aid of heat and pressure, becomes a fuel such as oil or gas. Earth minerals and metal ores, fossil fuels
(coal, petroleum, natural gas) and groundwater in certain aquifers are all considered non-renewable resources, though
individual elements are always conserved (except in nuclear reactions).
Conversely, resources such as timber (when harvested sustainably) and wind (used to power energy conversion
systems) are considered renewable resources, largely because their localized replenishment can occur within time
frames meaningful to humans as well.
a) Coal
Coal is the most important source of energy. There are more than 148,790 coal deposits in India. In 2005-06, the
annual production reached 343 million tonnes. India is the fourth-largest coal-producing country, and the deposits are
mostly found in Bihar, Orissa, Madhya Pradesh, and Bengal.
c) Electricity:
Electricity is a common source of energy used for domestic and commercial purposes. It is mainly
utilized in electrical appliances like refrigerators, televisions, washing machines and air conditioners.
The major sources of power generation are mentioned below:
∙ Nuclear Power
∙ Thermal Power
∙ Hydro-electric power
1. Thermal Power:
Thermal power is generated at various power stations by means of oil and coal. It is a vital source of electric current
and its share in the total capacity of the nation in 2004-05 was 70 percent.
2. Hydroelectric Power:
The hydroelectric power is produced by constructing dams above flowing rivers like Damodar Valley Project and Bhakra
Nangal Project. The installed capacity of hydroelectric power was 587.4 MW in 1950-51 and went up to 19,600 MW in
2004-05.
3. Nuclear Power:
The fuel used in nuclear power plants is uranium, which costs less than coal. Nuclear power plants can be found in
Kaiga (Karnataka), Kota (Rajasthan), Narora (UP), and Kalpakkam (near Chennai).
1. Solar Energy
This is the energy produced by sunlight. Photovoltaic cells exposed to sunlight convert it directly into electricity. Solar
energy is also utilized for cooking and distillation of water.
2. Wind Energy
This kind of energy is generated by harnessing the power of wind and mostly used in operating water pumps for
irrigation purposes. India stands as the second-largest country in the generation of wind power.
3. Tidal Energy
The energy that is generated by exploiting the tidal waves of the sea is known as tidal energy. This source is yet to be
tapped due to the lack of cost-effective technology.
Energy can be defined as the capacity or ability to do work. It plays an important role in our day to day life as it is
required in every field like industry, transport, communication, sports, defence, household, agriculture and more. There
are plenty of energy sources to get energy. These energy resources can be classified as Conventional and Non
conventional sources of energy. Let us see how they differ from each other!
Conventional sources of energy have been depleted to a great extent due to their continuous exploitation. It is believed that the deposits
of petroleum in our country will be exhausted within few decades and the coal reserves can last for a hundred more
years. Some common examples of conventional sources of energy include coal, petroleum, natural gas and electricity.
Non-conventional sources of energy are the energy sources which are continuously replenished by natural processes.
These cannot be exhausted easily, can be generated constantly so can be used again and again, e.g. solar energy, wind
energy, tidal energy, biomass energy and geothermal energy etc. The energy obtained from non-conventional sources is
known as non-conventional energy. These sources do not pollute the environment and do not require heavy expenditure.
They are called renewable resources as they can be replaced through natural processes at a rate equal to or greater than
the rate at which they are consumed.
Based on the above information, some of the key differences between conventional and non-conventional sources of
energy are as follows:
∙ Conventional sources have been in use for a long time; non-conventional sources are still in the development phase.
∙ Conventional sources are called non-renewable sources of energy; non-conventional sources are called renewable sources of energy.
∙ Conventional sources can be exhausted completely due to over-consumption, except for hydel power; non-conventional sources cannot be exhausted completely.
UNIT III
L1 - SOLAR ENERGY
Solar energy is an important, clean, cheap and abundantly available renewable energy. It is
received on Earth in cyclic, intermittent and dilute form with very low power density, 0 to 1
kW/m2. Solar energy received at ground level is affected by atmospheric clarity, degree of
latitude, etc. For design purposes, the variation of available solar power, the optimum tilt angle
of solar flat plate collectors, and the location and orientation of the heliostats should be calculated.
The energy of the sun can be used in many ways. When plants grow, they store the energy of
the sun. Then, when we burn those plants, the energy is released in the form of heat. This is an
example of indirect use of solar energy. The form we are interested in is directly converting
the sun’s rays into a usable energy source: electricity. This is accomplished through the use of
“solar collectors”, or, as they are more commonly known, “solar panels”. There are two
ways in which solar power can be converted to energy. The first, known as “solar thermal
applications”, involves using the energy of the sun to directly heat air or a liquid. The second,
known as “photoelectric applications”, involves the use of photovoltaic cells to convert solar
energy directly to electricity. There are two types of solar thermal collectors. The first, known
as flat plate collectors, contain absorber plates that use solar radiation to heat a carrier fluid,
either a liquid like oil or water, or air. Because these collectors can heat carrier fluids to
around 80°C, they are suited for residential applications. The second type of solar collectors
is known as concentrating collectors. These panels are intended for larger-scale applications
such as air conditioning, where more heating potential is required. The rays of the sun from a
relatively wide area are focused into a small area by means of reflective mirrors, and thus the
heat energy is concentrated. This method has the potential to heat liquids to a much higher
temperature than
flat plate collectors can alone. The heat from the concentrating collectors can be used to boil
water. The steam can then be used to power turbines attached to generators and produce
electricity, as in wind and hydroelectric power systems. Photovoltaic cells depend on
semiconductors such as silicon to directly convert solar energy to electricity. Because these
types of cells are low-maintenance, they are best suited for remote applications. Solar power
has an exciting future ahead of it. Because solar power utilizes the sun's light, a ubiquitous
resource (a resource that is everywhere), solar panels can be attached to moving objects, such
as automobiles, and can even be used to power those objects. Solar powered cars are being
experimented with more and more frequently now.
Solar power is actually one of the cleanest methods of energy production known. Because
solar panels simply convert the energy of the sun into energy that mankind can use, there are
no harmful by products or threats to the environment. One major concern is the cost of solar
power. Solar panels (accumulators) are not cheap; and because they are constructed from
fragile materials (semiconductors, glass, etc.), they must constantly be maintained and often
replaced. Further, since each photovoltaic panel has only about 40% efficiency, single solar
panels are not sufficient power producers. However, this problem has been offset by the
gathering together of many large panels acting in accord to produce energy. Although this
setup takes up much more space, it does generate much more power.
Advantages and Disadvantages:
Advantages
∙ No pollution.
∙ Versatile is used for powering items as diverse as solar cars and satellites.
Disadvantages
∙ Very diffuse source means low energy production – large numbers of solar panels (and thus
large land areas) are required to produce useful amounts of heat or electricity.
∙ Only areas of the world with lots of sunlight are suitable for solar power generation.
In SI units, energy is expressed in joules. Other units are the langley and the calorie, where
1 langley = 1 cal/cm2 and 1 cal = 4.186 J.
For solar energy calculations, the energy is measured as an hourly, monthly or yearly
average and is expressed in terms of kJ/m2/day or kJ/m2/hour. Solar power is expressed in
terms of W/m2 or kW/m2.
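The conversions above follow directly from the unit definitions: 1 langley = 1 cal/cm2 = 4.186 J/cm2 = 41.86 kJ/m2, and a daily energy total divided by 86,400 seconds gives the average power. A minimal sketch (function names are illustrative):

```python
CAL_TO_J = 4.186       # 1 calorie = 4.186 joules
CM2_PER_M2 = 1.0e4     # 10,000 cm^2 in a square metre
SECONDS_PER_DAY = 86400.0

def langley_to_kj_per_m2(langleys):
    """Convert langleys (cal/cm^2) to kJ/m^2: 1 langley = 41.86 kJ/m^2."""
    return langleys * CAL_TO_J * CM2_PER_M2 / 1000.0

def daily_langleys_to_avg_w_per_m2(langleys_per_day):
    """Convert a daily insolation in langleys/day to average power in W/m^2."""
    joules_per_m2 = langleys_per_day * CAL_TO_J * CM2_PER_M2
    return joules_per_m2 / SECONDS_PER_DAY

# Example: a site receiving 500 langleys/day gets 20,930 kJ/m^2/day,
# an average of roughly 242 W/m^2 over the full 24 hours.
```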
1. Solar collector or concentrator: It receives solar rays and collects the energy. It may be of the
following types: a) Flat plate type without focusing; b) Parabolic trough type with line
focusing; c) Paraboloid dish with central focusing; d) Fresnel lens with centre focusing; e)
Heliostats with central receiver focusing.
2. Energy transport medium: Substances such as water/ steam, liquid metal or gas are used
to transport the thermal energy from the collector to the heat exchanger or thermal storage. In
solar PV systems energy transport occurs in electrical form.
3. Energy storage: Solar energy is not available continuously. So we need an energy storage
medium for maintaining power supply during nights or cloudy periods. There are three major
types of energy storage: a) Thermal energy storage; b) Battery storage; c) Pumped storage
hydro-electric plant.
4. Energy conversion plant: Thermal energy collected by solar collectors is used for
producing steam, hot water, etc. Solar energy converted to thermal energy is fed to steam
thermal or gas-thermal power plant.
5. Power conditioning, control and protection system: Load requirements of electrical
energy vary with time. The energy supply has certain specifications like voltage, current,
frequency, power etc.
6. Alternative or standby power supply: The backup may be obtained as power from
electrical network or standby diesel generator.
Energy from the sun: The sun radiates about 3.8 x 10^26 W of power in all directions.
Out of this, about 1.7 x 10^17 W is received by the earth. The average solar radiation outside the
earth’s atmosphere is 1.35 kW/m2, varying from 1.43 kW/m2 (in January) to 1.33 kW/m2 (in
July).
A number of semiconductor materials are suitable for the manufacture of solar cells. The most
common types using silicon semiconductor material (Si) are:
• Monocrystalline Si cells
• Polycrystalline Si cells
• Amorphous Si cells
UNIT III
Renewable Energy Sources
L2 -HYDROPOWER
Hydropower is energy that comes from the force of moving water. Hydropower is a renewable
energy source because it is replenished constantly by the fall and flow of snow and rainfall in
the water cycle. As water flows through devices such as a water wheel or turbine, the kinetic
(motion) energy of the water is converted to mechanical energy, which can be used to grind
grain, drive a sawmill, pump water, or produce electricity. The primary way hydropower is
used today in the United States is to produce electricity. In 1991, hydropower provided 10
percent of the nation’s electricity. Although a hydroelectric power plant is initially expensive
to build, in the long run, it is the cheapest way to produce electricity, primarily because the
energy source, moving water, is free. Recently, many people have built smaller hydroelectric
systems that produce only enough electricity to power a few homes.
Two lesser known forms of hydropower are ocean thermal energy conversion (OTEC), which
uses the temperature difference between surface and deep ocean waters to boil and then
recondense fluids, and tidal power, which uses the enormous power of ocean tides. Presently,
these forms of hydropower are not very feasible, but they hold promise for the future.
Advantages of Hydropower
Hydroelectric power comes from flowing water … winter and spring runoff from mountain
streams and clear lakes. Water, when it is falling by the force of gravity, can be used to turn
turbines and generators that produce electricity.
Hydroelectric power is important to our Nation. Growing populations and modern technologies
require vast amounts of electricity for creating, building, and expanding. In the 1920's,
hydroelectric plants supplied as much as 40 percent of the electric energy produced. Although
the amount of energy produced by this means has steadily increased, the amount produced by
other types of powerplants has increased at a faster rate and hydroelectric power presently
supplies about 10 percent of the electrical generating capacity of the United States.
Hydropower is an essential contributor in the national power grid because of its ability to
respond quickly to rapidly varying loads or system disturbances, which base load plants with
steam systems powered by combustion or nuclear processes cannot accommodate.
Hydroelectric power comes from water at work, water in motion. It can be seen as a form of
solar energy, as the sun powers the hydrologic cycle which gives the earth its water. In the
hydrologic cycle, atmospheric water reaches the earth's surface as precipitation. Some of this
water evaporates, but much of it either percolates into the soil or becomes surface runoff. Water
from rain and melting snow eventually reaches ponds, lakes, reservoirs, or oceans where
evaporation is constantly occurring.
Moisture percolating into the soil may become ground water (subsurface water), some of which
also enters water bodies through springs or underground streams. Ground water may move
upward through soil during dry periods and may return to the atmosphere by evaporation.
Water vapor passes into the atmosphere by evaporation then circulates, condenses into clouds,
and some returns to earth as precipitation. Thus, the water cycle is complete. Nature ensures
that water is a renewable resource.
Generating Power
In nature, energy cannot be created or destroyed, but its form can change. In generating
electricity, no new energy is created. Actually one form of energy is converted to another form.
To generate electricity, water must be in motion. This is kinetic (moving) energy. When
flowing water turns blades in a turbine, the form is changed to mechanical (machine) energy.
The turbine turns the generator rotor which then converts this mechanical energy into another
energy form -- electricity. Since water is the initial source of energy, we call this hydroelectric
power or hydropower for short.
This concept was discovered by Michael Faraday in 1831 when he found that electricity could be
generated by rotating magnets within copper coils.
When the water has completed its task, it flows on unchanged to serve other needs.
Transmitting Power
Once the electricity is produced, it must be delivered to where it is needed -- our homes, schools,
offices, factories, etc. Dams are often in remote locations and power must be transmitted over
some distance to its users.
Vast networks of transmission lines and facilities are used to bring electricity to us in a form we
can use. All the electricity made at a powerplant comes first through transformers which raise
the voltage so it can travel long distances through powerlines. (Voltage is the pressure that
forces an electric current through a wire.) At local substations, transformers reduce the voltage
so electricity can be divided up and directed throughout an area.
Transformers on poles (or buried underground, in some neighborhoods) further reduce the
electric power to the right voltage for appliances and use in the home. When electricity gets to
our homes, we buy it by the kilowatt-hour, and a meter measures how much we use.
While hydroelectric powerplants are one source of electricity, other sources include powerplants
that burn fossil fuels or split atoms to create steam which in turn is used to generate power. Gas-
turbine, solar, geothermal, and wind-powered systems are other sources. All these powerplants
may use the same system of transmission lines and stations in an area to bring power to you. By
use of this “power grid,” electricity can be interchanged among several utility systems to meet
varying demands. So the electricity lighting your reading lamp now may be from a hydroelectric
powerplant, a wind generator, a nuclear facility, or a coal, gas, or oil-fired powerplant … or a
combination of these.
The area where you live and its energy resources are prime factors in determining what kind of
power you use. For example, in Washington State hydroelectric powerplants provided
approximately 80 percent of the electrical power during 2002. In contrast, in Ohio during the
same year, almost 87 percent of the electrical power came from coal-fired powerplants due to the
area's ample supply of coal.
Electrical utilities range from large systems serving broad regional areas to small power
companies serving individual communities. Most electric utilities are investor-owned (private)
power companies. Others are owned by towns, cities, and rural electric associations. Surplus
power produced at facilities owned by the Federal Government is marketed to preference power
customers (A customer given preference by law in the purchase of federally generated electrical
energy which is generally an entity which is nonprofit and publicly financed.) by the Department
of Energy through its power marketing administrations.
How Power is Computed
Before a hydroelectric power site is developed, engineers compute how much power can be
produced when the facility is complete. The actual output of energy at a dam is determined by
the volume of water released (discharge) and the vertical distance the water falls (head). So, a
given amount of water falling a given distance will produce a certain amount of energy. The
head and the discharge at the power site and the desired rotational speed of the generator
determine the type of turbine to be used.
The head produces a pressure (water pressure), and the greater the head, the greater the pressure
to drive turbines. This pressure is measured in pounds of force (pounds per square inch). More
head or faster flowing water means more power.
To find the theoretical horsepower (the measure of mechanical energy) from a specific site, this
formula is used:
THP = (Q x H)/8.8
where Q is the discharge in cubic feet per second and H is the head in feet.
A more complicated formula is used to refine the calculations of this available power. The
formula takes into account losses in the amount of head due to friction in the penstock and other
variations due to the efficiency levels of mechanical devices used to harness the power.
To find how much electrical power we can expect, we must convert the mechanical measure
(horsepower) into electrical terms (watts). One horsepower is equal to 746 watts (U.S. measure).
Turbines
Hydropower does not discharge pollutants into the environment; however, it is not free from
adverse environmental effects. Considerable efforts have been made to reduce environmental
problems associated with hydropower operations, such as providing safe fish passage and
improved water quality in the past decade at both Federal facilities and non-Federal facilities
licensed by the Federal Energy Regulatory Commission.
Efforts to ensure the safety of dams and the use of newly available computer technologies to
optimize operations have provided additional opportunities to improve the environment. Yet,
many unanswered questions remain about how best to maintain the economic viability of
hydropower in the face of increased demands to protect fish and other environmental resources.
Reclamation actively pursues research and development (R&D) programs to improve the
operating efficiency and the environmental performance of hydropower facilities.
Hydropower research and development today is primarily being conducted in the following
areas:
Uprating
The uprating of existing hydroelectric generator and turbine units at powerplants is one of the
most immediate, cost-effective, and environmentally acceptable means of developing additional
electric power. Since 1978, Reclamation has pursued an aggressive uprating program which has
added more than 1,600,000 kW to Reclamation's capacity at an average cost of $69 per kilowatt.
This compares to an average cost for providing new peaking capacity through oil-fired
generators of more than $400 per kilowatt. Reclamation's uprating program has essentially
provided the equivalent of another major hydroelectric facility of the approximate magnitude of
Hoover Dam and Powerplant at a fraction of the cost and impact on the environment when
compared to any other means of providing new generation capacity.
Low-head Hydropower
A low-head dam is one with a water drop of less than 65 feet and a generating capacity less than
15,000 kW. Large, high-head dams can produce more power at lower costs than low-head dams,
but construction of large dams may be limited by lack of suitable sites, by environmental
considerations, or by economic conditions. In contrast, there are many existing small dams and
drops in elevation along canals where small generating plants could be installed. New low-head
dams could be built to increase output as well. The key to the usefulness of such units is their
ability to generate power near where it is needed, reducing the power inevitably lost during
transmission.
Peaking with Hydropower
Demands for power vary greatly during the day and night. These demands vary considerably
from season to season, as well. For example, the highest peaks are usually found during summer
daylight hours when air conditioners are running.
Nuclear and fossil fuel plants are not efficient for producing power for the short periods of
increased demand during peak periods. Their operational requirements and their long startup
times make them more efficient for meeting baseload needs.
Since hydroelectric generators can be started or stopped almost instantly, hydropower is more
responsive than most other energy sources for meeting peak demands. Water can be stored
overnight in a reservoir until needed during the day, and then released through turbines to
generate power to help supply the peakload demand. This mixing of power sources offers a
utility company the flexibility to operate steam plants most efficiently as base plants while
meeting peak needs with the help of hydropower. This technique can help ensure reliable
supplies and may help eliminate brownouts and blackouts caused by partial or total power
failures.
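The baseload-plus-peaking mix described above can be sketched as a simple hourly dispatch; the 24-hour demand profile below is hypothetical, chosen only to illustrate the idea.

```python
# The steam plant runs flat at its efficient baseload level; hydro,
# which can start and stop almost instantly, follows the peaks.
# The hourly demand values (MW) are hypothetical.

hourly_demand_mw = [600, 580, 570, 590, 650, 800, 950, 1000,
                    980, 900, 850, 820, 860, 940, 1000, 970,
                    920, 880, 840, 780, 720, 680, 640, 610]

baseload_mw = min(hourly_demand_mw)                      # steam plant's constant output
hydro_mw = [d - baseload_mw for d in hourly_demand_mw]   # hydro fills the remainder

print("steam baseload:", baseload_mw, "MW")
print("hydro at peak hour:", max(hydro_mw), "MW")
```

Overnight, when hydro output drops to zero, water accumulates in the reservoir for release during the daytime peak.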
Today, many of Reclamation's 58 powerplants are used to meet peak electrical energy
demands, rather than operating around the clock to meet the total daily demand. Increasing
use of other energy-producing powerplants in the future will not make hydroelectric
powerplants obsolete or unnecessary. On the contrary, hydropower can be even more
important. While nuclear or fossil-fuel powerplants can provide baseloads, hydroelectric
powerplants can deal more economically with varying peakload demands. This is a job they
are well suited for.
Pumped Storage
When we hear the term "solar energy," we usually think of heat from the sun's rays which can be
put to work. But there are other forms of solar energy. Just as hydropower is a form of solar
energy, so too is windpower. In effect, the sun causes the wind to blow by heating air masses
that rise, cool, and sink to earth again. Solar energy in some form is always at work -- in rays of
sunlight, in air currents, and in the water cycle.
Solar energy, in its various forms, has the potential of adding significant amounts of power for
our use. The solar energy that reaches our planet in a single week is greater than that contained
in all of the earth's remaining coal, oil, and gas resources. However, the best sites for collecting
solar energy in various forms are often far removed from people, their homes, and work places.
Building thousands of miles of new transmission lines would make development of the power
too costly.
Because of the seasonal, daily, and even hourly changes in the weather, energy flow from the
wind and sun is neither constant nor reliable. Peak production times do not always coincide with
high power demand times. To depend on the variable wind and sun as main power sources
would not be acceptable to most American lifestyles. Imagine having to wait for the wind to
blow to cook a meal or for the sun to come out from behind a cloud to watch television!
As intermittent energy sources, solar power and wind power must be tied to major hydroelectric
power systems to be both economical and feasible. Hydropower can serve as an instant backup
and to meet peak demands.
Linking windpower and hydropower can add to the Nation's supply of electrical energy. Large
wind machines can be tied to existing hydroelectric powerplants. Wind power can be used,
when the wind is blowing, to reduce demands on hydropower. That would allow dams to save
their water for later release to generate power in peak periods.
The benefits of solar power and wind power are many. The most valuable feature of all is the
replenishing supply of these types of energy. As long as the sun shines and the wind blows,
these resources are truly renewable.
Future Potential
What is the full potential of hydropower to help meet the Nation's energy needs? The
hydropower resource assessment by the Department of Energy's Hydropower Program has
identified 5,677 sites in the United States with acceptable undeveloped hydropower potential.
These sites have a modeled undeveloped capacity of about 30,000 MW. This represents about
40 percent of the existing conventional hydropower capacity.
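The figures above also imply the size of the existing conventional base, which a quick check recovers:

```python
# If ~30,000 MW of undeveloped capacity is about 40 percent of the
# existing conventional hydropower capacity, the existing base is:
undeveloped_mw = 30_000
existing_mw = undeveloped_mw / 0.40
print(round(existing_mw))  # ~75,000 MW
```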
A variety of restraints exist on this development, some natural and some imposed by our society.
The natural restraints include such things as occasional unfavorable terrain for dams. Other
restraints include disagreements about who should develop a resource or the resulting changes in
environmental conditions. Often, other developments already exist where a hydroelectric power
facility would require a dam and reservoir to be built.
Finding solutions to the problems imposed by natural restraints demands extensive engineering
efforts. Sometimes a solution is impossible, or so expensive that the entire project becomes
impractical. The solution to societal issues is frequently much more difficult, and the costs are far
greater than those imposed by nature. Developing the full potential of hydropower will require
consideration and coordination of many varied needs.
It is important to remember that people, and all their actions, are part of the natural world. The
materials used for building, energy, clothing, food, and all the familiar parts of our day-to-day
world come from natural resources.
Our surroundings are composed largely of the "built environment" -- structures and facilities
built by humans for comfort, security, and well-being. As our built environment grows, we grow
more reliant on its offerings.
To meet our needs and support our built environment, we need electricity which can be
generated by using the resources of natural fuels. Most resources are not renewable; there is a
limited supply. In obtaining resources, it is often necessary to drill oil wells, tap natural gas
supplies, or mine coal and uranium. To put water to work on a large scale, storage dams are
needed.
We know that any innovation introduced by people has an impact on the natural environment.
That impact may be desirable to some, and at the same time, unacceptable to others. Using any
source of energy has some environmental cost. It is the degree of impact on the environment
that is crucial.
Some human activities have more profound and lasting impacts than others. Techniques to mine
resources from below the earth may leave long-lasting scars on the landscape. Oil wells may
detract from the beauty of open, grassy fields. Reservoirs behind dams may cover picturesque
valleys. Once available, use of energy sources can further impact the air, land, and water in
varying degrees.
People want clean air and water and a pleasing environment. We also want energy to heat and
light our homes and run our machines. What is the solution?
The situation seems straightforward: The demand for electrical power must be curbed or more
power must be produced in environmentally acceptable ways. The solution, however, is not so
simple.
Conservation can save electricity, but at the same time our population is growing steadily.
Growth is inevitable, and with it the increased demand for electric power.
Since natural resources will continue to be used, the wisest solution is a careful, planned
approach to their future use. All alternatives must be examined, and the most efficient,
acceptable methods must be pursued.
Hydroelectric facilities have many characteristics that favor developing new projects and
upgrading existing powerplants.
As an added benefit, reservoirs have scenic and recreation value for campers, fishermen, and
water sports enthusiasts. The water is a home for fish and wildlife as well. Dams add to
domestic water supplies, control water quality, provide irrigation for agriculture, and avert
flooding. Dams can actually improve downstream conditions by allowing mud and other debris
to settle out.
Existing powerplants can be uprated or new powerplants added at current dam sites without a
significant effect on the environment. New facilities can be constructed with consideration of
the environment. For instance, dams can be built at remote locations, powerplants can be placed
underground, and selective withdrawal systems can be used to control the water temperature
released from the dam. Facilities can incorporate features that aid fish and wildlife, such as
salmon runs or resting places for migratory birds.
In reconciling our natural and our built environments there will be tradeoffs and compromises.
As we learn to live in harmony as part of the environment, we must seek the best alternatives
among all ecologic, economic, technological, and social perspectives.
The value of water must be considered by all energy planners. Some water is now dammed and
can be put to work to make hydroelectric power. Other water is presently going to waste. The
fuel burned to replace this wasted energy is gone forever, and so is a loss to our Nation.
The longer we delay the balanced development of our potential for hydropower, the more we
unnecessarily use up other vital resources.
HYDROPOWER -- FROM PAST TO PRESENT
By using water for power generation, people have worked with nature to achieve a better
lifestyle. The mechanical power of falling water is an age-old tool. As early as the 1700's,
Americans recognized the advantages of mechanical hydropower and used it extensively for
milling and pumping. By the early 1900's, hydroelectric power accounted for more than 40
percent of the Nation's supply of electricity. In the West and Pacific Northwest, hydropower
provided about 75 percent of all the electricity consumed in the 1940's. With the increase in
development of other forms of electric power generation, hydropower's percentage has slowly
declined to about 10 percent. However, many activities today still depend on hydropower.
Niagara Falls was the first of the American hydroelectric power sites developed for major
generation and is still a source of electric power today. Power from such early plants was used
initially for lighting, and when the electric motor came into being the demand for new electrical
energy started its upward spiral.
The Federal Government became involved in hydropower production because of its commitment
to water resource management in the arid West. The waterfalls of the Reclamation dams make
them significant producers of electricity. Hydroelectric power generation has long been an
integral part of Reclamation's operations, even though it is actually a byproduct of water development.
In the early days, newly created projects lacked many of the modern conveniences, one of these
being electrical power. This made it desirable to take advantage of the potential power source in
water.
Powerplants were installed at the dam sites to carry on construction camp activities.
Hydropower was put to work lifting, moving and processing materials to build the dams and dig
canals. Powerplants ran sawmills, concrete plants, cableways, giant shovels, and draglines.
Night operations were possible because of the lights fed by hydroelectric power. When
construction was complete, hydropower drove pumps that provided drainage or conveyed water
to lands at higher elevations than could be served by gravity-flow canals.
Surplus power was sold to existing power distribution systems in the area. Local industries,
towns, and farm consumers benefited from the low-cost electricity. Much of the construction
and operating costs of dams and related facilities were paid for by this sale of surplus power,
rather than by the water users alone. This proved to be a great savings to irrigators struggling to
survive in the West.
Reclamation's first hydroelectric powerplant was built to aid construction of the Theodore
Roosevelt Dam on the Salt River about 75 miles northeast of Phoenix, Arizona. Small
hydroelectric generators, installed prior to construction, provided energy for construction and for
equipment to lift stone blocks into place. Surplus power was sold to the community, and citizens
were quick to support expansion of the dam's hydroelectric capacity. A 4,500-kW powerplant
was constructed and, in 1909, five generators were in operation, providing power to pump
irrigation water and furnishing electricity to the Phoenix area.
Power development, a byproduct of water development, had a tremendous impact on the area's
economy and living conditions. Power was sold to farms, cities, and industries. Wells pumped
by electricity meant more irrigated land for agriculture, and pumping also lowered water tables
in those areas with waterlogging and alkaline soil problems. By 1916, nine pumping plants were
in operation irrigating more than 10,000 acres. In addition, Reclamation supplied all of the
residential and commercial power needs of Phoenix. Cheap hydropower, in abundant supply,
attracted industrial development as well. A private company was able to build a large smelter
and mill nearby to process low-grade copper ore, using hydroelectric power.
The Theodore Roosevelt Powerplant was one of the first large power facilities constructed by the
Federal Government. Its capacity has since been increased from 4,500 kW to more than 36,000
kW.
Power, first developed for building Theodore Roosevelt Dam and for pumping irrigation water,
also helped pay for construction, enhanced the lives of farmers and city dwellers, and attracted
new industry to the Phoenix area.
During World War I, Reclamation projects continued to provide water and hydroelectric power
to Western farms and ranches. This helped feed and clothe the Nation, and the power revenues
were a welcome source of income to the Federal Government.
The depression of the 1930's, coupled with widespread floods and drought in the West, spurred
building of great multipurpose Reclamation projects such as Grand Coulee Dam on the
Columbia River, Hoover Dam on the lower Colorado River, and the Central Valley Project in
California. This was the "big dam" period, and the low-cost hydropower produced by those
dams had a profound effect on urban and industrial growth.
Then came World War II, and the Nation's need for hydroelectric power soared. At the outbreak of the
war, the Axis Nations had three times more available power than the United States. The demand
for power was identified in this 1942 statement on "The War Program of the Department of the
Interior":
"The war budget of $56 billion will require 154 billion kWh of electric energy annually
for the manufacture of airplanes, tanks, guns, warships, and fighting material, and to
equip and serve the men of the Army, Navy, and Marine Corps."
Each dollar spent for wartime industry required about 2-3/4 kWh of electric power. The demand
exceeded the total production capacity of all existing electric utilities in the United States. In
1942, 8.5 billion kWh of electric power was required to produce enough aluminum to meet the
President's goal of 60,000 new planes.
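The two wartime figures quoted above are mutually consistent, as a quick check shows:

```python
# $56 billion war budget at about 2-3/4 kWh of electricity per dollar:
budget_dollars = 56e9
kwh_per_dollar = 2.75
required_kwh = budget_dollars * kwh_per_dollar
print(required_kwh / 1e9)  # 154.0 -- the billion-kWh figure in the 1942 statement
```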
Hydropower provided one of the best ways for rapidly expanding the country's energy output.
Addition of more powerplant units at dams throughout the West made it possible to expand
energy production, and construction pushed ahead to speed up the availability of power. In
1941, Reclamation produced more than five billion kWh, resulting in a 25 percent increase in
aluminum production. By 1944, Reclamation quadrupled its hydroelectric power output.
From 1940 through 1945, Reclamation powerplants produced 47 billion kWh of electricity,
enough to make:
69,000 airplanes
79,000 machine guns
5,000 ships
5,000 tanks
7,000,000 aircraft bombs, and
31,000,000 shells
During the war, Reclamation was the major producer of power in areas where needed resources
were located -- the West. The supply of low-cost electricity attracted large defense industries to
the area. Shipyards, steel mills, chemical companies, oil refineries, and automotive and aircraft
factories . . . all needed vast amounts of electrical power. Atomic energy installations were
located at Hanford, Washington, to make use of hydropower from Grand Coulee.
While power output of Reclamation projects energized the war industry, it was also used to
process food, light military posts, and meet needs of the civilian population in many areas.
With the end of the war, powerplants were put to use in rapidly developing peacetime industries.
Hydropower has been vital for the West's industries which use mineral resources or farm
products as raw materials. Many industries have depended wholly on Federal hydropower. In
fact, periodic low flows on the Columbia River have disrupted manufacturing in that region.
Farming was tremendously important to America during the war and continues to be today.
Hydropower directly benefits rural areas in three ways:
-- It makes power available for use on the farm for domestic purposes.
Reclamation delivers 10 trillion gallons of water to more than 31 million people each year. This
includes providing one out of five Western farmers (140,000) with irrigation water for 10 million
farmland acres that produce 60% of the nation's vegetables and 25% of its fruits and nuts.
Some of the major hydroelectric powerplants built by Reclamation are located at:
-- Grand Coulee Dam on the Columbia River in Washington (the largest single
electrical generating complex in the United States)
-- Hoover Dam on the Colorado River in Arizona-Nevada
-- Glen Canyon Dam on the Colorado River in Arizona
-- Shasta Dam on the Sacramento River in California
-- Yellowtail Dam on the Bighorn River in Montana
Grand Coulee has a capacity of more than 6.8 million kW of power. Hydropower generated at
Grand Coulee furnishes a large share of the power requirements in the Pacific Northwest.
Reclamation is one of the largest operators of Federal power-generating stations. The agency
uses some of the power it produces to run its facilities, such as pumping plants. Excess
Reclamation hydropower is marketed by either the Bonneville Power Administration or the
Western Area Power Administration and is sold first to preferred customers, such as rural
electric power cooperatives, public utility districts, municipalities, and state and Federal
agencies. Any remaining power may be sold to private electric utilities. Reclamation generates
enough hydropower to meet the needs of millions of people and power revenues exceed $900
million a year. Power revenues are returned to the Federal Treasury to repay the cost of
constructing, operating, and maintaining projects.
CONCLUSION
Reclamation is helping to meet the needs of our country, and one of the most pressing needs is
the growing demand for electric power. Reclamation powerplants annually generate more than
42 billion kWh of hydroelectric energy, which is enough to meet the annual residential needs of
14 million people or the energy equivalent of more than 80 million barrels of crude oil.
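The conclusion's figures can be cross-checked; the per-barrel thermal content (~1,700 kWh) and ~30% fossil-plant efficiency used below are standard reference values, not taken from this document.

```python
annual_kwh = 42e9   # Reclamation's annual hydroelectric generation
people = 14e6       # residential customers served
barrels = 80e6      # crude-oil energy equivalent

print(round(annual_kwh / people))   # kWh per person per year
print(round(annual_kwh / barrels))  # kWh of electricity per barrel displaced

# Rough cross-check: electricity a fossil plant would get from one barrel
# (assumed: ~1,700 kWh thermal per barrel, ~30% plant efficiency).
print(round(1700 * 0.30))
```

The ~525 kWh per barrel implied by the text sits close to the ~510 kWh a 30%-efficient plant would extract, so the crude-oil equivalence is plausible.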
The deregulation of wholesale electricity sales and the imposition of requirements for open
transmission access are resulting in dramatic changes in the business of electric power
production in the United States. This restructuring increases the importance of clean, reliable
energy sources such as hydropower.
Hydropower's ability to provide peaking power, load following, and frequency control helps
protect against system failures that could lead to the damage of equipment and even brownouts
or blackouts. Hydropower, besides being emissions-free and renewable, has the above operating
benefits that provide enhanced value to the electric system in the form of efficiency, security,
and most important, reliability. The electric benefits provided by hydroelectric resources are of
vital importance to the success of our National experiment to deregulate the electric industry.
Water is one of our most valuable resources, and hydropower makes use of this renewable
treasure. As a National leader in managing hydropower, Reclamation is helping the Nation meet
its present and future energy needs in a manner that protects the environment by improving
hydropower projects and operating them more effectively.
GLOSSARY
Alternating Current An electric current changing regularly from one direction to the
opposite.
Dam A massive wall or structure built across a valley or river for storing
water.
Demand The rate at which electric energy is delivered to or by a system,
part of a system, or a piece of equipment. It is expressed in
kilowatts, kilovolt amperes, or other suitable units at a given
instant or averaged over any designated period of time. The
primary source of "demand" is the power-consuming equipment of
the customers.
Horsepower A unit of rate of doing work equal to 33,000 foot pounds per
minute or 745.8 watts (Brit.), 746 watts (USA), or 736 watts
(Europe).
Kilowatt-Hour (kWh) The unit of electrical energy commonly used in marketing electric
power; the energy produced by 1 kilowatt acting for one hour. Ten
100-watt light bulbs burning for one hour would consume one
kilowatt hour of electricity.
Kinetic Energy Energy which a moving body has because of its motion, dependent
on its mass and the rate at which it is moving.
Load (Electric) The amount of electric power delivered or required at any specific
point or points on a system. The requirement originates at the
energy-consuming equipment of the consumers.
Megawatt A unit of power equal to one million watts. For example, it's the
amount of electric power required to light 10,000 100-watt bulbs.
Rated Capacity That capacity which a hydro generator can deliver without
exceeding mechanical safety factors or a nominal temperature rise.
In general this is also the nameplate rating except where turbine
power under maximum head is insufficient to deliver the
nameplate rating of the generator.
Reservoir An artificial lake into which water flows and is stored for future
use.
Volt (V) The unit of electromotive force or potential difference that will
cause a current of one ampere to flow through a conductor with a
resistance of one ohm.
Watt (W) The unit used to measure production/usage rate of all types of
energy; the unit for power. The rate of energy transfer equivalent
to one ampere flowing under a pressure of one volt at unity power
factor.
Watthour (Wh) The unit of energy equal to the work done by one watt in one hour.
Unit III
• Wood and wood processing wastes—firewood, wood pellets, and wood chips, lumber and furniture
mill sawdust and waste, and black liquor from pulp and paper mills
• Agricultural crops and waste materials—corn, soybeans, sugar cane, woody plants, and algae, and
crop and food processing residues
• Biogenic materials in municipal solid waste—paper, cotton, and wool products, and food, yard, and
wood wastes
Direct combustion is the most common method for converting biomass to useful energy. All biomass
can be burned directly for heating buildings and water, for industrial process heat, and for generating
electricity in steam turbines.
Thermochemical conversion of biomass includes pyrolysis and gasification. Both are thermal
decomposition processes in which biomass feedstock materials are heated in closed, pressurized vessels
called gasifiers at high temperatures. They mainly differ in the process temperatures and amount of
oxygen present during the conversion process.
• Pyrolysis entails heating organic materials to 800–900°F (400–500°C) in the near complete absence
of free oxygen. Biomass pyrolysis produces fuels such as charcoal, bio-oil, renewable diesel, methane,
and hydrogen.
• Hydrotreating is used to process bio-oil (produced by fast pyrolysis) with hydrogen under elevated
temperatures and pressures in the presence of a catalyst to produce renewable diesel, renewable
gasoline, and renewable jet fuel.
A chemical conversion process known as transesterification is used for converting vegetable oils,
animal fats, and greases into fatty acid methyl esters (FAME), which are used to produce biodiesel.
Biological conversion includes fermentation to convert biomass into ethanol and anaerobic digestion
to produce renewable natural gas. Ethanol is used as a vehicle fuel. Renewable natural gas—also called
biogas or biomethane—is produced in anaerobic digesters at sewage treatment plants and at dairy and
livestock operations. It also forms in and may be captured from solid waste landfills. Properly treated
renewable natural gas has the same uses as fossil fuel natural gas. Researchers are working on ways to
improve these methods and to develop other ways to convert and use more biomass for energy.
Of total annual U.S. biomass consumption, the commercial sector accounts for about 146 TBtu, or 3%.
The industrial and transportation sectors account for the largest amounts, in terms of energy content,
and largest percentage shares of total annual U.S. biomass consumption. The wood products and paper
industries use biomass in combined heat and power plants for process heat and to generate electricity
for their own use. Liquid biofuels (ethanol and biomass-based diesel) account for most of the
transportation sector's biomass consumption.
The residential and commercial sectors use firewood and wood pellets for heating. The commercial
sector also consumes, and in some cases, sells renewable natural gas produced at municipal sewage
treatment facilities and at waste landfills.
The electric power sector uses wood and biomass-derived wastes to generate electricity for sale to the
other sectors.
BIOENERGY
Bioenergy refers to electricity and gas that is generated from organic matter, known as biomass. This
can be anything from plants and timber to agricultural and food waste – and even sewage. The term
bioenergy also covers transport fuels produced from organic matter, but here we focus on how it is
used to generate electricity and carbon-neutral gas.
How does biomass generate energy? When biomass is used as an energy source, it is referred to as
'feedstock'. Feedstocks can be grown specifically for their energy content (an energy crop), or they
can be made up of waste products from industries such as agriculture, food processing or timber
production. Dry, combustible feedstocks such as wood pellets are burnt in boilers or furnaces. This
in turn boils water and creates steam, which drives a turbine to generate electricity.
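The feedstock-to-electricity chain just described can be sketched as a simple energy balance; the pellet heating value (~17 MJ/kg) and overall steam-cycle efficiency (~25%) are assumed typical values, not figures from this page.

```python
def biomass_electricity_kwh(feedstock_kg,
                            heating_value_mj_per_kg=17.0,  # assumed for wood pellets
                            plant_efficiency=0.25):        # assumed boiler + turbine chain
    """Electricity (kWh) from burning a mass of dry feedstock."""
    thermal_mj = feedstock_kg * heating_value_mj_per_kg
    return thermal_mj * plant_efficiency / 3.6   # 3.6 MJ per kWh

# One tonne of wood pellets:
print(round(biomass_electricity_kwh(1000)))
```

Under these assumptions a tonne of pellets yields on the order of a megawatt-hour of electricity, with the remaining energy lost as heat in the boiler and turbine.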
By comparison with biomass, whose carbon was absorbed from today's atmosphere as the feedstock
grew, burning fossil fuels releases carbon dioxide that has been locked away for millions of years,
from a time when the earth's atmosphere was very different. This adds more carbon dioxide into
our current atmosphere, breaking the carbon balance.
The overall sustainability and environmental benefits of bioenergy can depend on whether waste
feedstocks or energy crops are being used.
Waste feedstocks
Waste biomass gives off gases naturally when it rots. If this happens in a place where there’s no oxygen,
such as food waste buried deep within landfill, it can generate methane which is a much stronger
greenhouse gas than carbon dioxide. Instead of allowing methane to vent into the atmosphere, breaking
it down in a sealed tank allows it to be captured and burnt. Burning methane leaves carbon
dioxide and water, which have a far smaller warming effect than the methane itself.
Energy crops
Energy crops are grown specifically for generating energy. So, unlike capturing methane from waste,
there isn’t an argument that burning them reduces greenhouse gases which would have been given off
anyway. However, energy crops can still be low carbon if they are managed sustainably. For example,
when energy crops are burnt, equivalent crops should be planted that will absorb the same amount of
carbon that was released by burning.
• Biofuel generators should be highly efficient and able to put waste heat to good use
• Green Gas must be certified under the Green Gas Certification Scheme
Synthetic Fuels: A generic name given to hydrocarbon fuels produced from natural gas, coal or
biomass.
• Coal
• Biomass
• Natural gas
Some definitions of synthetic fuel also include fuels produced from biomass and from industrial and
municipal waste. Broader definitions additionally count oil sands and oil shale as synthetic fuel
sources, and cover gaseous fuels as well as liquid fuels.
• Animal wastes
BIOMASS USAGE
• Leading source of renewable energy in U.S. since 1999.
• Agricultural and forestry residues most common resource for generating electricity and process steam.
SUSTAINABILITY
One concern commonly raised about the development of synthetic fuels plants is sustainability.
Fundamentally, transitioning from oil to coal or natural gas for transportation fuel production is a
transition from one inherently depletable, geologically limited resource to another. One of the positive
defining characteristics of synthetic fuels production is the ability to use multiple feedstocks (coal, gas,
or biomass) to produce the same product from the same plant. This provides a path toward a
renewable, and possibly more sustainable, fuel source: even if a plant originally produced fuels solely
from coal, its infrastructure remains forward-compatible should the original fossil feedstock run out.
Geothermal Energy
--- a renewable energy source for electricity generation ---
Outlines
Introduction
Geothermal Reservoirs
Extraction & Uses of Geothermal Energy
Electricity Generation
Cost
Geothermal Energy in India
Pros and Cons
Conclusion
Introduction
What is geothermal energy? It is heat stored within the earth. The deeper you go, the hotter it is.
Geothermal Reservoirs
Reservoirs can be suspected in areas where we find:
Geyser
Boiling mud pot
Volcano
Hot springs
Geothermal Reservoirs (cont.)
The rising hot water and steam is trapped in permeable and porous rocks to form a geothermal
reservoir. Reservoirs can be discovered by testing the soil and analyzing underground
temperatures.
Extraction & uses
The heat energy can be brought to the earth's surface in the following ways:
directly from hot springs / geysers
via a geothermal heat pump
Jul 2012 © 2012 UPES
Wind pumps and generators have been used in remote areas
of Australia and in other countries around the world for many
years.
Tower mill
The tower mill appeared later than the post mill; it consists of a
usually circular, stationary body and a roof that rotates with the help
of a fantail.
Post Mill
The mill body pivots on a vertical axis when a tail pole is activated by
the miller.
Vertical-axis Wind Turbine: a wind turbine whose axis is perpendicular to the wind.
Horizontal-axis Wind Turbine: the most common type of wind turbine; its axis positions itself in
the direction of the wind.
Production Of Electricity From Wind Energy
Wind farms contain a group of wind turbines, which are driven by the wind;
they produce electricity and carry it along the transmission and distribution
networks to which they are connected.
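What each turbine in such a farm contributes follows the standard wind-power relation P = ½·ρ·A·v³·Cp, which these slides do not state; the rotor size, wind speeds, and power coefficient below are hypothetical illustrative values.

```python
import math

def wind_power_kw(rotor_diameter_m, wind_speed_ms, cp=0.40, rho=1.225):
    """Electrical output (kW) of one turbine: 0.5 * rho * A * v^3 * Cp."""
    area = math.pi * (rotor_diameter_m / 2) ** 2   # swept rotor area, m^2
    return 0.5 * rho * area * wind_speed_ms ** 3 * cp / 1000.0

# Power grows with the cube of wind speed: doubling the wind gives 8x
# the power, which is why high wind speed is needed to run generators
# effectively.
print(round(wind_power_kw(80, 6)))    # moderate wind
print(round(wind_power_kw(80, 12)))   # doubled wind speed
```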
Advantages
It is a renewable source of energy and causes no pollution
It can generate power in remote areas where other sources are not
available.
It is easily manageable and cheap
While wind-generated electricity does not cause air pollution, it does
cost more to produce than electricity generated from coal.
It can power water pumps, flour mills, and electric turbines
Although a large area of land is required for setting up wind farms, less than 1% of the
total area is covered by the turbine bases, the foundations, and the access roads. The rest
of the area can be used for farming or grazing.
Siting windmills offshore reduces their demand for land and their visual
impact
Limitations and Environmental impact
Wind does not always blow with the required intensity, or in the desired
direction, all year round
Wind energy is not available in all the regions
While wind generators don't produce any greenhouse gas emissions,
they may cause vibration, noise, and visual pollution.
They can also cause bird kills, interference with TV reception, and aesthetic
impacts
Wind is an intermittent source of energy and requires some other
backup or standby electric source.
High wind speed is needed to power wind generators effectively.
L-6 Unit 3
Polymer electrolyte membrane (PEM) fuel cells
Polymer electrolyte membrane (PEM) fuel cells, which convert the chemical energy stored in
hydrogen fuel directly and efficiently to electrical energy, with water as the only by-product,
have the potential to reduce our energy use, pollutant emissions, and dependence on fossil fuels.
Proton-exchange membrane fuel cells (PEMFC), also known as polymer electrolyte membrane
(PEM) fuel cells, are a type of fuel cell being developed mainly for transport applications, as
well as for stationary fuel-cell applications and portable fuel-cell applications. Their
distinguishing features include lower temperature/pressure ranges (50 to 100 °C) and a special
proton-conducting polymer electrolyte membrane. PEMFCs generate electricity and operate
on the opposite principle to PEM electrolysis, which consumes electricity. They are a leading
candidate to replace the aging alkaline fuel-cell technology, which was used in the Space
Shuttle.
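The operating principle described above, the reverse of PEM electrolysis, is summarised by the standard electrode reactions:

```latex
% PEMFC half-reactions and overall cell reaction
\begin{align*}
\text{Anode:}   &\quad \mathrm{H_2 \rightarrow 2\,H^+ + 2\,e^-} \\
\text{Cathode:} &\quad \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^-
                       \rightarrow \mathrm{H_2O} \\
\text{Overall:} &\quad \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2}
                       \rightarrow \mathrm{H_2O}
\end{align*}
```

Protons cross the polymer membrane from anode to cathode while electrons travel through the external circuit, which is where the electrical power is drawn.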
Applications:
The major application of PEM fuel cells is in transportation, primarily because of their
potential environmental impact, e.g. reducing greenhouse-gas
(GHG) emissions. Other applications include distributed/stationary and portable power generation. Most
major motor companies work solely on PEM fuel cells due to their high power density and
excellent dynamic characteristics as compared with other types of fuel cells. Due to their light
weight, PEMFCs are most suited for transportation applications. PEMFCs for buses, which use
compressed hydrogen for fuel, can operate at up to 40% efficiency. Generally PEMFCs are
implemented on buses over smaller cars because of the available volume to house the system
and store the fuel. Technical issues for transportation involve incorporation of PEMs into
current vehicle technology and updating energy systems. Full fuel cell vehicles are not
advantageous if hydrogen is sourced from fossil fuels; however, they become beneficial when
implemented as hybrids. There is potential for PEMFCs to be used for stationary power
generation, where they provide 5 kW at 30% efficiency; however, they run into competition
with other types of fuel cells, mainly SOFCs and MCFCs. Whereas PEMFCs generally require
high-purity hydrogen for operation, other fuel cell types can run on methane and are thus more
flexible systems. Therefore, PEMFCs are best for small-scale systems until economically
scalable pure hydrogen is available. Furthermore, PEMFCs have the possibility of replacing
batteries for portable electronics, though integration of the hydrogen supply is a technical
challenge particularly without a convenient location to store it within the device.
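The 40% bus efficiency quoted above can be turned into a rough fuel estimate (a sketch: the hydrogen lower heating value of ~120 MJ/kg is a standard physical figure, while the 100 kWh daily demand is an assumed example, not from the lecture):

```python
# Hydrogen consumed by a hypothetical fuel-cell bus at 40% efficiency.
H2_LHV_MJ_PER_KG = 120.0      # lower heating value of hydrogen (standard value)
MJ_PER_KWH = 3.6
efficiency = 0.40             # efficiency quoted for PEMFC buses

def h2_mass_kg(electric_kwh: float) -> float:
    """Mass of hydrogen needed to deliver the given electrical energy."""
    fuel_energy_mj = electric_kwh * MJ_PER_KWH / efficiency
    return fuel_energy_mj / H2_LHV_MJ_PER_KG

# Example: a bus drawing 100 kWh of electrical energy in a day
print(f"{h2_mass_kg(100):.1f} kg of H2")   # 100 * 3.6 / 0.4 / 120 = 7.5 kg
```

So a bus delivering 100 kWh of traction energy at 40% efficiency would consume about 7.5 kg of compressed hydrogen.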
L-7 Unit 3
The electrolyte in SOFCs is unique; it’s a solid, ceramic material. The anode and cathode electrodes in
Bloom’s fuel cells are special proprietary inks that coat the electrolyte. Unlike other types of fuel cells,
no precious metals, corrosive acids, or molten materials are required to create Bloom’s SOFCs.
Inside the Energy Server (‘Bloom Box’), which operates at high temperature, ambient air enters the
cathode side of the fuel cell.
Meanwhile, steam mixes with fuel (natural gas or biogas) entering from the anode side to produce
reformed fuel. As the reformed fuel crosses the anode, it attracts oxygen ions from the cathode. The
oxygen ions combine with the reformed fuel to produce electricity, steam, and carbon dioxide.
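The steps just described can be written as simplified reactions (a sketch for methane fuel with internal steam reforming; real Bloom cells may use additional reaction pathways):

```latex
% Simplified SOFC reactions with internal steam reforming of methane
\begin{align*}
\text{Reforming:} &\quad \mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \\
\text{Cathode:}   &\quad \tfrac{1}{2}\,\mathrm{O_2} + 2\,e^-
                         \rightarrow \mathrm{O^{2-}} \\
\text{Anode:}     &\quad \mathrm{H_2 + O^{2-} \rightarrow H_2O + 2\,e^-} \\
                  &\quad \mathrm{CO + O^{2-} \rightarrow CO_2 + 2\,e^-}
\end{align*}
```

The anode reactions account for the steam (recycled to the reformer) and the carbon dioxide mentioned above, while the electron flow through the external circuit is the electricity produced.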
The steam that is produced in the reaction is recycled to reform the fuel. Because of this recycling
process, Bloom’s fuel cells do not require water during normal operation. By contrast, thermal power
plants require significant amounts of water for cooling. In fact, the number one use of water in the U.S.
is for cooling power plants. To supply one megawatt of power for a year, thermoelectric power
generation for the U.S. grid withdraws approximately 156 million gallons more water than Bloom’s
platform.
The electrochemical process also generates the heat required to keep the fuel cell warm and drive the
reforming reaction process. As long as fuel and air are available, the fuel cells continue converting
chemical energy into electrical energy, providing an electric current directly at the fuel cell site.
SOFCs are the first (and smallest) component manufactured for the Bloom Energy Server. The
SOFCs are then combined to form a fuel cell stack and multiple stacks create a Server module (or
‘Bloom
Box’). Four to six modules combine to form one 200-300kW Energy Server that produces power in a
footprint roughly equivalent to that of half a standard 30-foot shipping container.
Because the Servers come together like building blocks, the modular design allows any number of
systems to be clustered together in various configurations to form solutions from hundreds of
kilowatts to many tens of megawatts.
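The building-block scaling described above can be sketched numerically (assumed figures: a 250 kW Server, the midpoint of the 200-300 kW range quoted, and a hypothetical 2 MW site load):

```python
import math

# Sizing a hypothetical installation from modular Energy Servers.
SERVER_KW = 250.0        # midpoint of the 200-300 kW range quoted above
site_load_kw = 2000.0    # assumed 2 MW site demand (illustrative)

servers_needed = math.ceil(site_load_kw / SERVER_KW)
installed_kw = servers_needed * SERVER_KW

print(f"{servers_needed} Servers -> {installed_kw:.0f} kW installed")
```

Because capacity is added Server by Server, an installation can track demand in ~250 kW steps rather than committing to one large plant up front.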
SOFC Applications
SOFCs are being considered for a wide range of applications, such as power systems for
trains, ships and vehicles, and electrical power supply for residential or industrial utilities.
SOFC Advantages and Disadvantages
SOFCs have a number of advantages due to their solid materials and high operating temperature.
1. Since all the components are solid, there is no need for electrolyte-loss maintenance, and
electrode corrosion is eliminated.
2. Since SOFCs operate at high temperature, expensive catalysts such as platinum or ruthenium
are avoided entirely.
3. Also because of the high-temperature operation, the SOFC can better tolerate the presence
of fuel impurities, which increases its lifetime.
4. Costs are reduced for internal reforming of natural gas.
5. Due to the high-quality waste heat available for cogeneration applications and the low
activation losses, the efficiency for electricity production is greater than 50% and can even
reach 65%.
6. Releasing negligible pollution is also a commendable reason why SOFCs are popular today.
However, there are also some disadvantages that limit the performance of SOFCs.
1. SOFCs operate at high temperature, so the materials used as components are thermally challenged.
2. Their relatively high cost and complex fabrication are also significant problems that need to be solved.