SUPERCOMPUTER
INTRODUCTION
A supercomputer is a computer with a high level of performance compared to
a general-purpose computer. The performance of a supercomputer is commonly measured
in floating-point operations per second (FLOPS) instead of million instructions per
second (MIPS). Since 2017, there have been supercomputers that can perform over a
hundred quadrillion FLOPS (100 petaFLOPS).[3] Since November 2017, all of the world's 500 fastest
supercomputers have run Linux-based operating systems. Additional research is being conducted
in China, the United States, the European Union, Taiwan and Japan to build even faster, more
powerful and more technologically advanced exascale supercomputers.
Supercomputers play an important role in the field of computational science, and are
used for a wide range of computationally intensive tasks in various fields, including quantum
mechanics, weather forecasting, climate research, oil and gas exploration, molecular
modeling (computing the structures and properties of chemical compounds,
biological macromolecules, polymers, and crystals), and physical simulations (such as
simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the
detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been
essential in the field of cryptanalysis.[6]
HISTORY
The first supercomputers were introduced at Control Data Corporation (CDC), where Seymour
Cray designed them in the 1960s. Cray left CDC in the 1970s to form his own company, Cray
Research, which led the supercomputer market with his new designs; he held the top spot in
supercomputing from 1985 until 1990. In the 1980s a number of competitors entered the
supercomputer market, much as companies had entered the minicomputer market a decade
earlier, but many of these newcomers disappeared in the mid-1990s in what has been called the
"supercomputer market crash".
A supercomputer is a computer defined above all by its capacity and, in particular, its speed
of calculation. Its purpose is to carry out extremely calculation-intensive work such as weather
forecasting, climate research and other scientific computation, which it can do far more easily
for scientists than ordinary machines can. The label is, as such, rather time-dependent: today's
supercomputers tend to become tomorrow's common computers.
Control Data Corporation's (CDC) early machines were simply very fast scalar
processors, roughly ten times the speed of the fastest machines offered by other
companies. In the 1970s most supercomputers were dedicated to running a vector
processor, and many of the newer entrants developed their own such processors at a
lower price in order to get into the market.
By the late 1980s, machines using a modest number of vector processors working in parallel
had become the standard, with typical processor counts ranging from four to sixteen.
Attention then turned from vector processors to massively parallel processing systems
with thousands of "ordinary" CPUs. Today, similar designs are based on "off-the-shelf"
server-class microprocessors, such as the Xeon and PowerPC, together with coprocessors
like the IBM Cell, and modern supercomputers generally combine these commodity
processors with custom interconnects.
EVOLUTION OF SUPERCOMPUTER
Engineers measured early computing
devices in kilo-girls, a unit roughly equal to the
calculating ability of a thousand women. By the
time the first supercomputer arrived in 1965, we
needed a larger unit. Thus, FLOPS, or floating
point operations (a type of calculation) per
second. In 1946, ENIAC, the first (nonsuper)
computer, processed about 500 FLOPS.
Today’s supers crunch petaFLOPS—or 1,000
trillion. Shrinking transistor size lets more
electronics fit in the same space, but processing
so much data requires a complex design,
intricate cooling systems, and openings for
humans to access hardware. That’s why
supercomputers stay supersize. A few special call-outs:
1. CDC 6600: rapidly sifted through 3 million of CERN's experimental research images per year
2. ASCI Red: modeled the U.S.'s nuclear weapons' capabilities, avoiding underground testing
3. IBM Sequoia: used more than 1 million cores to help Stanford engineers study jet engines
4. Sunway TaihuLight: reached a record 93 petaFLOPS by trading slower memory for high energy efficiency
CDC 6600 – the first supercomputer
a. Released: 1964
b. Speed: 3 megaflops
c. 10x faster than any other computer at the time
d. First successful supercomputer
e. Built by Control Data Corporation
f. Designed by Seymour Cray, who would go on to found Cray Research
g. A single CPU
h. 40 MHz
i. Cost $8 million, sold for $60 million
Cray-2
a. Released: 1985
b. Speed: 1.9 gigaflops
c. Twice as fast as the previous Cray X-MP
d. Fastest computer in the world, and remained so for over 5 years
e. Had 8 CPUs
f. Was as fast as an iPad 2, which was released in 2011
Operating systems
Since the end of the 20th century, supercomputer operating systems have undergone
major transformations, based on the changes in supercomputer architecture.[77] While early
operating systems were custom tailored to each supercomputer to gain speed, the trend has
been to move away from in-house operating systems to the adaptation of generic software
such as Linux.[78]
Although most modern supercomputers use a Linux-based operating system, each
manufacturer has its own specific Linux-derivative, and no industry standard exists, partly
due to the fact that the differences in hardware architectures require changes to optimize the
operating system to each hardware design.
Today’s Supercomputer Environment
1) The world of supercomputers is constantly changing
2) The TOP500 project provides a list of the most powerful supercomputers
a) Computers are ranked by the TOP500 project according to their ability to solve a set of linear equations (the LINPACK benchmark).
b) Countries with the most supercomputers
i) United States: 233
ii) China: 76
iii) Japan: 30
Coming Soon: World’s Most Powerful Supercomputer
a. Japanese Fujitsu exascale machine
b. Project called FLAGSHIP 2020
c. Meant to be the top supercomputer in the world
d. Planned to be operational by April 2021
e. Characteristics:
f. Multi-core architecture
g. General-purpose CPUs
h. Network interfaces built into the CPU chips
i. Multidimensional torus network topology, i.e. a way of connecting the units to create a powerful and fast computing link between them (a small illustrative sketch follows this list)
j. 1,000 petaflops (1 exaflop), i.e. 1 quintillion calculations every second (1 followed by 18 zeros)
k. 30x faster than today's top supercomputers
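The torus item above can be made concrete with a tiny Python sketch. It is purely illustrative (it is not Fujitsu's interconnect software, and the 8 x 8 x 8 grid size is an invented example): it computes which units are directly linked to a given unit when they are arranged in a 3-D torus, that is, a grid whose edges wrap around.

    # Illustrative only: neighbor indexing on a 3-D torus (grid size invented).
    DIMS = (8, 8, 8)  # hypothetical 8 x 8 x 8 arrangement of compute units

    def torus_neighbors(coord, dims=DIMS):
        """Return the 2 * len(dims) units directly linked to `coord`."""
        neighbors = []
        for axis, size in enumerate(dims):
            for step in (-1, +1):
                n = list(coord)
                n[axis] = (n[axis] + step) % size  # wrap around: that is the torus
                neighbors.append(tuple(n))
        return neighbors

    print(torus_neighbors((0, 0, 0)))
    # Each unit has six short links, which is what provides the "powerful and
    # fast computing link between units" described above.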
World’s Fastest Supercomputer:
Summit
Summit or OLCF-4 is a supercomputer developed by IBM for use at Oak Ridge
National Laboratory, which as of November 2018 is the fastest supercomputer in the
world, capable of 200 petaflops.[3] Its current LINPACK benchmark is clocked at 148.6
petaflops.[4] As of November 2018, the supercomputer is also the 3rd most energy
efficient in the world with a measured power efficiency of 14.668 GFlops/watt.[5]
Summit is the first supercomputer to reach exaop (exa operations per second) speed,
achieving 1.88 exaops during a genomic analysis, and it is expected to reach 3.3 exaops
using mixed-precision calculations. Its key specifications are:
1. Application performance: 200 PF
2. Number of nodes: 4,608
3. Node performance: 42 TF
4. Memory per node: 512 GB DDR4 + 96 GB HBM2
5. NV memory per node: 1,600 GB
6. Total system memory: >10 PB (DDR4 + HBM2 + non-volatile)
7. Processors: 2 IBM POWER9 CPUs and 6 NVIDIA Volta GPUs per node (9,216 CPUs and 27,648 GPUs in total)
8. File system: 250 PB, 2.5 TB/s, GPFS
9. Power consumption: 13 MW
10. Operating system: Red Hat Enterprise Linux (RHEL) version 7.4
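A quick arithmetic check, sketched in Python using only the figures listed above, shows how these specifications fit together: per-node performance times the node count gives the aggregate figure, and the per-node processor counts give the totals.

    # Consistency check of the Summit figures listed above.
    nodes = 4608
    node_perf_tf = 42                    # teraFLOPS per node
    cpus_per_node, gpus_per_node = 2, 6

    print(nodes * node_perf_tf / 1000)   # ~193.5 PF, in line with the ~200 PF figure
    print(nodes * cpus_per_node)         # 9216 POWER9 CPUs
    print(nodes * gpus_per_node)         # 27648 Volta GPUs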
INDIAN SUPERCOMPUTER
Pratyush and Mihir are supercomputers established at the Indian Institute of Tropical
Meteorology (IITM), Pune and the National Center for Medium Range Weather Forecast
(NCMRWF), Noida respectively. As of January 2018, Pratyush and Mihir are the fastest
supercomputers in India, with a maximum combined speed of 6.8 petaflops, built at a total
cost of INR 438.9 crore. The system was inaugurated by Dr. Harsh Vardhan, Union Minister
for Science and Technology, on 8 January 2018. The word 'Pratyush'
(Hindi: प्रत्युष) means the rising sun.
Location: Indian Institute of Tropical Meteorology (IITM), Pune
Speed: 6.8 petaflops
Purpose: Weather forecasting, climate research
Construction of Supercomputer
The construction of supercomputers is something that has to be planned very carefully
because, once underway, there is no going back to make major revisions. If that happens,
the company loses millions of dollars, and that can result either in cancelling the project and
trying to make some money from the technology developed, or in the company going broke
or almost broke. An example is Cray, since 2000 an independent company again, but it has
had a few difficult years.
Mismanagement is another factor that causes supercomputer projects to go bust. One
example is the Fifth Generation project in Japan. A lot of spin-off came from that project, it
is true, but imagine the possibilities if the Japanese had succeeded.
Third is, of course, periods of economic malaise, during which projects get stalled. A fine example of that
is Intel: in 2002, the company scrapped its supercomputer project and took the losses.
All this does not tell us much about how supercomputers are built, but it gives a picture that
not only science dictates what is built or successful. Surprisingly enough, supers are often
built from existing CPUs, but there all likeness with ordinary hardware ends.
Superscalar design, vector-oriented computing, and parallel computing are just
some of the terms used in this arena.
Since 1995, supers have been built up from a GRID, meaning an array or cluster of CPUs
(even ordinary PCs) connected by a special version of, for example, Linux, thus acting like
one big machine. The cost of this type of super is dramatically lower than the
millions needed to build "conventional" supercomputers, if we can say
"conventional" without raising an eyebrow or two.
A fact is that supers are the fastest machines of their time. Yes, we smile looking back at
ENIAC, but back in 1946 it was a miracle machine and the fastest around.
There are three primary limits to performance at the supercomputer level: (1) individual
processor speed, (2) the overhead involved in making large numbers of processors work
together on a single task, and (3) the input/output speed between processors and between
processors and memory.
Input/output speed between the data-storage medium and memory is also a problem, but no
more so than in any other type of computer and, since supercomputers all have amazingly
high RAM capacities, this problem can be largely solved with the liberal application of large
amounts of money.(1)
The speed of individual processors is increasing all the time, but at great cost in research and
development. The reality is that we are beginning to reach the limits of silicon-based
processors. Seymour Cray showed that gallium arsenide technology could be made to work,
but it is very difficult to work with and very few companies are now able to make usable
processors based on GaAs. It was such a problem back in those years that Cray Computer
was forced to acquire its own GaAs foundry so that it could do the work itself. (1)
The solution the industry has been turning to, of course, is to add ever-larger numbers of
processors to their systems, giving them speed through parallel processing. This approach
allows them to use relatively inexpensive third-party processors, or processors that were
developed for other, higher-volume applications such as personal- or workstation-level
computing. Thus, the development costs for the processor are spread out over a far larger
number of processors than the supercomputing industry could account for on its own. (1)
However, parallelism brings problems of high overhead and the difficulty of writing
programs that can utilize multiple processors at once in an efficient manner. Both problems
had existed before as most supercomputers had from two to sixteen processors, but they were
much easier to deal with on that level than on the level of complexity arising from the use of
hundreds or even thousands of processors. If these machines were to be used the way
mainframes had been used in the past, then relatively little work was needed as a machine
with hundreds of processors could handle hundreds of jobs at a time fairly efficiently.
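One standard way to quantify the overhead problem described above is Amdahl's law (not named in the original text, but the usual rule of thumb): if a fraction s of a job cannot be parallelized, the speedup on p processors is at most 1 / (s + (1 - s) / p). The short Python sketch below shows why going from sixteen to thousands of processors pays off only when almost nothing is serial.

    # Amdahl's law sketch: speedup limit as processor counts grow.
    def speedup(serial_fraction, processors):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

    for p in (16, 1024, 65536):
        # With even 1% of the work serial, speedup saturates near 100x.
        print(p, round(speedup(0.01, p), 1))
    # Output: 16 -> 13.9, 1024 -> 91.2, 65536 -> 99.8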
Distributed computing systems, however, are (or are becoming, depending on who you ask)
more efficient solutions to the problem of many users with many small tasks.
Supercomputers, on the other hand, were designed, built, and bought to work on extremely
large jobs that could not be handled by another type of computing system. Ways had to be
found to make many processors work together as efficiently as possible. Part of the job is
handled by the manufacturer: extremely high-end I/O subsystems arranged in topologies that
minimize the effective distances between processors while also minimizing the amount of
intercommunication required for the processors to get their jobs done.(1) For example, the
ESS project has connected its clusters with fiber-optic cables and uses special vector
processors.
Applications of Supercomputers
Supercomputers are used to perform the most compute-intensive tasks of modern times:
Computational science
Weather forecasting and climate research
Nuclear explosion dynamics
Oil and gas exploration
Credit card transaction processing
Design and testing of modern aircraft
Cryptology
Manufacturers of Supercomputers
IBM
Hewlett-Packard
Aspen Systems
Thinking Machines
SGI
Cray Computer Corporation
Cray Research
Control Data Corporation
Compaq
Performance Measurement
1) Capability versus capacity
Supercomputers generally aim for the maximum in capability computing rather
than capacity computing. Capability computing is typically thought of as using the maximum
computing power to solve a single large problem in the shortest amount of time. Often a
capability system is able to solve a problem of a size or complexity that no other computer
can, e.g., a very complex weather simulation application.[95]
Capacity computing, in contrast, is typically thought of as using efficient cost-effective
computing power to solve a few somewhat large problems or many small
problems.[95] Architectures that lend themselves to supporting many users for routine
everyday tasks may have a lot of capacity but are not typically considered supercomputers,
given that they do not solve a single very complex problem.
2) Performance metrics
In general, the speed of supercomputers is measured and benchmarked in "FLOPS"
(FLoating point Operations Per Second), and not in terms of "MIPS" (Million Instructions
Per Second), as is the case with general-purpose computers.[96] These measurements are
commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS"
(10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS"
(10^15 FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion
(10^15, or 1,000 trillion) FLOPS. Exascale is computing performance in the exaFLOPS
(EFLOPS) range; an EFLOPS is one quintillion (10^18) FLOPS (one million TFLOPS).
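As a concrete illustration of these units, the short Python sketch below converts a raw FLOPS figure into the prefixed shorthand used above. The machine parameters in the peak-performance estimate are invented purely for the example; a common rough formula for theoretical peak is cores x clock rate x floating-point operations per cycle.

    # Illustration of the FLOPS prefixes discussed above.
    PREFIXES = [("EFLOPS", 10**18), ("PFLOPS", 10**15),
                ("TFLOPS", 10**12), ("GFLOPS", 10**9)]

    def pretty_flops(flops):
        for name, scale in PREFIXES:
            if flops >= scale:
                return f"{flops / scale:.2f} {name}"
        return f"{flops:.0f} FLOPS"

    # Rough theoretical peak = cores x clock x FLOPs per cycle (invented numbers).
    cores, clock_hz, flops_per_cycle = 10_000, 2.0e9, 16
    print(pretty_flops(cores * clock_hz * flops_per_cycle))   # "320.00 TFLOPS"
    print(pretty_flops(148.6e15))                             # "148.60 PFLOPS"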
The Advantages of a Supercomputer
Supercomputers are specialized devices built to perform extremely difficult calculations
extremely quickly. They can be used to play chess, render high-quality computer graphics or
accurately simulate weather systems. Supercomputers require special maintenance intended
to keep them cool, and they consume prodigious amounts of electricity, but the advantages of
a supercomputer are so great that they continue to be developed with ever-increasing
capabilities.
1) Decreasing Processing Time
The primary advantage that supercomputers offer is decreased processing time. Computer
speed is commonly measured in floating-point operations per second, or "FLOPS." Average home
computers can perform up to a hundred billion of these operations per second, or 100
"gigaflops." Supercomputers, however, are tens of thousands of times faster, meaning that
calculations that would take your home computer hours or days can be solved by a
supercomputer in a matter of seconds.
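To make that comparison concrete, here is a small worked sketch in Python using the round numbers from the paragraph above (a 100-gigaflop home machine versus a supercomputer ten thousand times faster); the size of the job is invented for illustration.

    # Worked example of the "hours versus seconds" claim above.
    home_flops = 100e9                   # ~100 gigaflops, as stated for a home PC
    super_flops = 10_000 * home_flops    # "tens of thousands of times faster"

    work = 3.6e15                        # hypothetical job: 3.6 x 10^15 operations
    print("home computer:", work / home_flops / 3600, "hours")   # 10.0 hours
    print("supercomputer:", work / super_flops, "seconds")       # 3.6 seconds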
2) Solving New Problems
The sheer processing power of supercomputers means that they can be used to do things
that ordinary computers simply couldn't handle. For example, weather forecasting is highly
complex and requires extremely sophisticated algorithms. Only supercomputers have the
ability to perform these calculations in a timely fashion. Supercomputers have also permitted
great strides in filmmaking and special effects. Pixar uses a supercomputer with more than
1,000 individual CPUs; even using this computer, each frame of their movies can take up to
90 hours to render.
3) Lowering Costs
By decreasing the amount of time needed to complete processing tasks, supercomputers
can lower costs, saving money in the long run through increased efficiency. For this reason,
some companies specialize in renting supercomputers to clients who don't need a full-time
computer but do need occasional bursts of processing power. Supercomputers can also lower
costs by allowing engineers to create computer simulations that remove the need for
expensive, high-precision physical models or testing environments.
4) Improving Safety
Beyond CGI and scientific applications, supercomputers can also help to make the world
a safer place. Simulations or tests that would be difficult or extremely dangerous in the real
world can be performed on a supercomputer instead. For example, nuclear weapons must be
tested to make sure that they function. Without supercomputers, the testing process would
have to involve detonating a nuclear bomb; computers allow engineers to obtain the same
results without running the risks of an actual nuclear explosion.
Disadvantages of a Supercomputer
Even if your organization has researched the benefits and advantages of using a
supercomputer to tackle tough and complicated problems, you will find that supercomputers
also present some disadvantages. The larger and more powerful the supercomputer is, the
more infrastructure and maintenance it requires to perform the calculations you intend to
achieve. A supercomputer can perform as fast as one million ordinary PCs.
1) Storage and Bandwidth
Researchers use supercomputers to do work with enormous sets of data which they
process at a high rate while generating increasingly large amounts of additional data, such as
when scientists work on weather forecasting or simulate nuclear bomb detonations. A
disadvantage is that supercomputers require massive external storage drives whose bandwidth
is fast enough to accommodate the data being analyzed and produced. If storage and
bandwidth can't keep up with the data flow, the supercomputer will not be able to work at its
full capacity.
2) Maintenance and Support
Supercomputer systems are built by connecting multiple processing units and can require
large rooms to store them. The large number of processors gives off greater heat than
standard computers, which is a disadvantage because they require a cooling infrastructure.
The supercomputer also needs software to monitor how it is used and to detect failures, and a
larger than typical support staff to administer and support the computer, its external storage
and high-speed network.
3) Cost
A supercomputer that can simulate the location of potential oil deposits or the progress of
various permutations of a hurricane system can cost a lot of money, which could be a
disadvantage if your organization must work with a limited budget. For example, an IBM
Watson supercomputer costs about $3 million, according to Computerworld magazine.
Fujitsu's K Computer for the Riken Advanced Institute of Computational Science in Kobe,
Japan, cost $1.2 billion to build and requires $10 million per year in operational costs,
according to The Atlantic.
4) Processing Time
Unlike ordinary desktop computers that may finish calculating a problem in a few
minutes or overnight, supercomputers work on tasks that require intensive calculations which
can take extremely long periods to complete. For example, a supercomputer could spend
months performing calculations to support research on climate change or to help cure a
disease, presenting a disadvantage to people who are in a hurry for quick results.
Conclusion
Supercomputing has always been a specialized form at the cutting edge of computing.
Its share of overall computing has decreased as computing has become ubiquitous.
Supercomputing has played, and continues to play, an essential role in national security and
in scientific discovery. The ability to address important scientific and engineering challenges
depends on continued investments in supercomputing. Moreover, the increasing size and
complexity of new applications will require the continued evolution of supercomputing for
the foreseeable future. Commodity clusters satisfy the needs of many supercomputer users.
However, some important applications need the better main memory bandwidth and latency
hiding that are available only in custom supercomputers; many need the better global
bandwidth and latency interconnects that are available only in custom or hybrid
supercomputers; and most would benefit from the simpler programming model that can be
supported well on custom systems.
The increasing gap between processor speed and communication latencies is likely to
increase the fraction of supercomputing applications that achieve acceptable performance
only on custom and hybrid supercomputers. Advances in algorithms and in software
technology at all levels are essential to further progress in solving applications problems
using supercomputing. All aspects of a particular supercomputing ecosystem, be they
hardware, software, algorithms, or people, must be strong if the ecosystem is to function
effectively. The supercomputing needs of the government will not be satisfied by systems
developed to meet the demands of the broader commercial market. The government has the
primary responsibility for creating and maintaining the supercomputing technology and
suppliers that will meet its specialized needs.
Bibliography
https://en.wikipedia.org/wiki/Supercomputer
https://www.atlasobscura.com/
https://www.networkworld.com/.../embargo-10-of-the-worlds-fastest-supercomputers.
https://whatis.techtarget.com/definition/supercomputer
https://www.techopedia.com/definition/4599/supercomputer
https://www.webopedia.com/TERM/S/supercomputer.html
https://www.cray.com/
Abstract
Supercomputer, any of a class of extremely powerful computers. The term is commonly applied to
the fastest high-performance systems available at any given time. Such computers have been used
primarily for scientific and engineering work requiring exceedingly high-speed computations.
Common applications for supercomputers include testing mathematical models for complex physical
phenomena or designs, such as climate and weather, evolution of the cosmos, nuclear weapons and
reactors, new chemical compounds (especially for pharmaceutical purposes), and cryptology. As the
cost of supercomputing declined in the 1990s, more businesses began to use supercomputers for
market research and other business-related models.
Supercomputers have certain distinguishing features. Unlike conventional computers,
they usually have more than one CPU (central processing unit), which contains circuits for
interpreting program instructions and executing arithmetic and logic operations in proper
sequence. The use of several CPUs to achieve high computational rates is necessitated by the
physical limits of circuit technology. Electronic signals cannot travel faster than the speed of
light, which thus constitutes a fundamental speed limit for signal transmission and circuit
switching. This limit has almost been reached, owing to miniaturization of circuit
components, dramatic reduction in the length of wires connecting circuit boards, and
innovation in cooling techniques (e.g., in various supercomputer systems, processor and
memory circuits are immersed in a cryogenic fluid to achieve the low temperatures at which
they operate fastest). Rapid retrieval of stored data and instructions is required to support the
extremely high computational speed of CPUs. Therefore, most supercomputers have a very
large storage capacity, as well as a very fast input/output capability.
Still another distinguishing characteristic of supercomputers is their use of vector
arithmetic—i.e., they are able to operate on pairs of lists of numbers rather than on mere pairs
of numbers. For example, a typical supercomputer can multiply a list of hourly wage rates for
a group of factory workers by a list of hours worked by members of that group to produce a
list of dollars earned by each worker in roughly the same time that it takes a regular computer
to calculate the amount earned by just one worker.
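The wage example above translates directly into a vector operation. The NumPy sketch below stands in for the hardware vector arithmetic being described (the rates and hours are invented figures): the two lists are multiplied element by element in a single operation rather than one worker at a time.

    # The wage-list example expressed as one vector operation (illustrative).
    import numpy as np

    hourly_rate = np.array([18.50, 22.00, 19.75, 25.00])   # one entry per worker
    hours_worked = np.array([40, 38, 42, 36])

    earnings = hourly_rate * hours_worked    # whole lists multiplied at once
    print(earnings)                          # [740.  836.  829.5 900. ]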