supercomputer

What is a supercomputer?

A supercomputer is a computer that performs at or near the highest operational rate for computers.

Traditionally, supercomputers have been used for scientific and engineering applications that must handle massive databases, perform enormous amounts of computation or both. Advances like multicore processors and general-purpose graphics processing units have enabled powerful machines that could be called desktop supercomputers or GPU supercomputers.

By definition, a supercomputer is exceptional in terms of performance. At any time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term supercomputer is sometimes applied to far slower -- but still impressively fast -- computers.

How do supercomputers work?

Supercomputer architectures are made up of multiple central processing units (CPUs). These CPUs are grouped into compute nodes, each combining processors with a block of memory. A supercomputer can contain thousands of such nodes, which communicate with one another over a fast interconnect to solve problems through parallel processing.

The largest, most powerful supercomputers are effectively multiple parallel computers performing parallel processing. There are two main parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP). In some cases, supercomputers are distributed, drawing processing power from many individual computers in different locations instead of housing all the CPUs in one place.
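
To make the idea concrete, the sketch below splits one computation across local worker processes with Python's standard library. It is a minimal, single-machine analogue of the shared-memory approach; the worker count and problem size are illustrative, and a real supercomputer coordinates thousands of nodes rather than one machine's cores.

```python
# Minimal sketch of parallel processing: split one big job across workers.
# This uses local CPU cores via Python's multiprocessing, a single-machine
# analogue of symmetric multiprocessing on a compute node.
from multiprocessing import Pool

def partial_sum(bounds):
    """Each worker independently sums its own slice of the range."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    n = 10_000_000
    workers = 8  # illustrative; one compute node may expose dozens of cores
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"sum of squares below {n}: {total}")
```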

Supercomputer processing speed is measured in floating point operations per second (FLOPS); the fastest systems are rated in quadrillions of FLOPS, known as petaFLOPS or PFLOPS.
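
One rough way to see FLOPS in practice is to time a known quantity of floating point work. The sketch below times a dense matrix multiplication with NumPy, using the standard 2n^3 operation count; the result is an estimate for whatever machine runs it, not a formal benchmark.

```python
# Rough FLOPS estimate: time a dense matrix multiply, which costs about
# 2 * n**3 floating point operations for two n x n matrices.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} GFLOPS on this machine")
# For scale: a 100 PFLOPS supercomputer delivers 100e15 FLOPS.
```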

Differences between general-purpose computers and supercomputers

Supercomputers are general-purpose computers that function at the highest operational rate, or peak performance, available. Processing power is the main difference between supercomputers and ordinary computer systems: a top supercomputer can sustain 100 PFLOPS or more, while a typical general-purpose computer manages only hundreds of gigaFLOPS to tens of teraFLOPS.
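
A back-of-the-envelope comparison makes that gap concrete; the workload size below is an arbitrary illustration, not a reference figure.

```python
# Time to finish a fixed workload at different sustained speeds.
work = 1e21  # total floating point operations (illustrative)

for name, flops in [("typical PC (100 GFLOPS)", 100e9),
                    ("workstation (10 TFLOPS)", 10e12),
                    ("supercomputer (100 PFLOPS)", 100e15)]:
    days = work / flops / 86400
    print(f"{name}: {days:,.1f} days")
# The PC needs ~317 years; the supercomputer finishes in under 3 hours.
```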

Supercomputers consume lots of power. As a result, they generate so much heat that they must be housed in facilities with dedicated cooling systems.

Both supercomputers and general-purpose computers differ from quantum computers, which operate based on the principles of quantum physics.

What are supercomputers used for?

Supercomputers perform resource-intensive calculations that general-purpose computers can't handle. They often run engineering and computational sciences applications, such as the following:

  • weather forecasting to predict the impact of extreme storms and floods;
  • oil and gas exploration to collect huge quantities of geophysical seismic data to aid in finding and developing oil reserves;
  • molecular modeling for calculating and analyzing the structures and properties of chemical compounds and crystals;
  • physical simulations like modeling supernovas and the birth of the universe;
  • aerodynamics such as designing a car with the lowest air drag coefficient;
  • nuclear fusion research to build a nuclear fusion reactor that derives energy from plasma reactions;
  • medical research to develop new cancer drugs, understand the genetic factors that contribute to opioid addiction and find treatments for COVID-19;
  • next-gen materials identification to find new materials for manufacturing; and
  • cryptanalysis to analyze ciphertext, ciphers and cryptosystems to understand how they work and identify ways of defeating them.

Like other computers, supercomputers are used to simulate reality, but at a much larger scale. Some supercomputer functions can also be carried out with cloud computing, which, like a supercomputer, combines the power of many processors to achieve performance that is impossible on a single PC.
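
Most of the workloads above amount to stepping a numerical model of a physical system forward in time. The toy sketch below advances a one-dimensional heat equation with an explicit finite-difference scheme; the grid size and constants are illustrative, and production codes run three-dimensional versions of this idea across thousands of nodes.

```python
# Toy physical simulation: 1D heat diffusion by explicit finite differences.
# u[i] is the temperature at grid point i; each step applies the stencil
# u[i] += r * (u[i-1] - 2*u[i] + u[i+1]).
import numpy as np

nx, steps = 100, 500
r = 0.25  # diffusion number; must stay <= 0.5 for this scheme to be stable

u = np.zeros(nx)
u[nx // 2] = 100.0  # a hot spot in the middle of a cold rod

for _ in range(steps):
    u[1:-1] += r * (u[:-2] - 2 * u[1:-1] + u[2:])

print(f"peak temperature after {steps} steps: {u.max():.2f}")
```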

[Image: Scientists and engineers use supercomputers to simulate reality and make projections.]

Notable supercomputers throughout history

Seymour Cray designed the first commercially successful supercomputer, the Control Data Corporation (CDC) 6600, released in 1964. It had a single CPU and cost $8 million -- the equivalent of roughly $60 million today. The CDC 6600 could perform up to 3 million FLOPS, achieving its speed through multiple functional units operating in parallel.

Cray went on to found a supercomputer company named Cray Research in 1972. Although the company has had several owners over the years, the brand survives; Cray Inc. was acquired by Hewlett Packard Enterprise in 2019. In September 2008, Cray Inc. and Microsoft launched CX1, a $25,000 personal supercomputer aimed at the aerospace, automotive, academic, financial services and life sciences markets.

IBM has been a keen competitor. IBM Roadrunner was the top-ranked supercomputer when it launched in 2008, running twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at the time. IBM Watson famously used cognitive computing to beat champion Ken Jennings on the quiz show Jeopardy! in 2011.

Top supercomputers of recent years

Sunway's Oceanlite supercomputer is reported to have been completed in 2021. It is thought to be an exascale supercomputer, meaning one that can perform at least 10^18 FLOPS (1 exaFLOPS).

In the United States, some supercomputer centers are interconnected on an internet backbone known as the very high-speed Backbone Network Service, or vBNS, the National Science Foundation-sponsored successor to the NSFNET backbone. It serves as the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2, a university-led project, is part of this initiative.

At the lower end of supercomputing, data center administrators can use clustering for a build-it-yourself approach. The Beowulf Project offers guidance on how to put together off-the-shelf PC processors, using Linux operating systems, and interconnecting them with Fast Ethernet. Applications must be written to manage the parallel processing.
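
On such a cluster, managing the parallel processing usually means message passing. Below is a minimal sketch using mpi4py, assuming an MPI implementation such as Open MPI is installed: each rank sums its own slice of a range, and a reduction combines the partial results on rank 0.

```python
# Minimal message-passing sketch for a Beowulf-style cluster using mpi4py.
# Run with, for example: mpirun -n 4 python sum_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's ID within the job
size = comm.Get_size()  # total number of processes across the cluster

# Each rank independently sums a strided slice of the range.
n = 10_000_000
local = sum(i for i in range(rank, n, size))

# Combine the partial sums into one total on rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of 0..{n - 1} = {total}")
```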

[Photo: Berzelius, a Swedish supercomputer designed for AI research.]

Countries around the world use supercomputers for research purposes. One example is Sweden's Berzelius, which began operation in the summer of 2021 and is used primarily for AI research.

Some top supercomputers of the last two decades

Year | Supercomputer      | Speed (Rmax)               | Location
2021 | Sunway Oceanlite   | 1.05 exaFLOPS (unofficial) | Qingdao, China
2021 | Fujitsu Fugaku     | 442 PFLOPS                 | Kobe, Japan
2018 | IBM Summit         | 148.6 PFLOPS               | Oak Ridge, Tenn.
2018 | IBM Sierra         | 94.6 PFLOPS                | Livermore, Calif.
2016 | Sunway TaihuLight  | 93.01 PFLOPS               | Wuxi, China
2013 | NUDT Tianhe-2      | 33.86 PFLOPS               | Guangzhou, China
2012 | Cray Titan         | 17.59 PFLOPS               | Oak Ridge, Tenn.
2012 | IBM Sequoia        | 17.17 PFLOPS               | Livermore, Calif.
2011 | Fujitsu K computer | 10.51 PFLOPS               | Kobe, Japan
2010 | NUDT Tianhe-1A     | 2.566 PFLOPS               | Tianjin, China
2009 | Cray Jaguar        | 1.759 PFLOPS               | Oak Ridge, Tenn.
2008 | IBM Roadrunner     | 1.105 PFLOPS               | Los Alamos, N.M.

Supercomputers and artificial intelligence

Supercomputers often run artificial intelligence (AI) programs because training and running large models demands supercomputing-caliber performance and processing power. Supercomputers can also handle the vast amounts of data that AI and machine learning applications consume.
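
The usual pattern for those workloads is data parallelism: each node computes gradients on its own shard of the training data, and the gradients are averaged into a single update. Here is a framework-free sketch of the idea, with a stand-in linear model and synthetic data; real systems run the shards on separate nodes and average gradients over the interconnect.

```python
# Data-parallel training step, sketched without a framework: each "worker"
# computes a gradient on its own data shard, then the gradients are averaged.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8000, 10))              # synthetic training data
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=8000)
w = np.zeros(10)                             # linear model weights

def shard_gradient(Xs, ys, w):
    """Gradient of mean squared error on one worker's shard."""
    err = Xs @ w - ys
    return 2 * Xs.T @ err / len(ys)

workers = 4
for _ in range(100):
    grads = [shard_gradient(Xs, ys, w)
             for Xs, ys in zip(np.array_split(X, workers),
                               np.array_split(y, workers))]
    w -= 0.01 * np.mean(grads, axis=0)       # one averaged-gradient update

print("final training loss:", float(np.mean((X @ w - y) ** 2)))
```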

Some supercomputers are engineered specifically with AI in mind. For example, Microsoft custom-built a supercomputer to train large AI models for its Azure cloud platform, with the goal of providing developers, data scientists and business users with supercomputing resources through Azure's AI services. One such tool is Microsoft's Turing Natural Language Generation, a natural language processing model.

Another example of a supercomputer engineered for AI workloads is Perlmutter, a system built around Nvidia GPUs at the National Energy Research Scientific Computing Center (NERSC). It debuted at No. 5 on the TOP500 list of the world's fastest supercomputers. It contains 6,144 GPUs and is tasked with assembling the largest-ever 3D map of the visible universe, processing data from the Dark Energy Spectroscopic Instrument, a camera that captures dozens of photos containing thousands of galaxies each night.

[Photo: The Perlmutter supercomputer, launched in 2021, is used to tackle problems in astrophysics and climate science.]

The future of supercomputers

The supercomputer and high-performance computing (HPC) market is growing as more vendors, such as Amazon Web Services, Microsoft and Nvidia, develop their own supercomputers. HPC is becoming more important as AI capabilities gain traction in industries from predictive medicine to manufacturing. Hyperion Research predicted in 2020 that the supercomputer market would be worth $46 billion by 2024.

The current focus in the supercomputer market is the race toward exascale processing capabilities. Exascale computing could bring about new possibilities that transcend those of even the most modern supercomputers. Exascale supercomputers are expected to be able to generate an accurate model of the human brain, including neurons and synapses. This would have a huge impact on the field of neuromorphic computing.

As computing power continues to grow exponentially, supercomputers with hundreds of exaflops could become a reality.

Supercomputers are becoming more prevalent as AI plays a bigger role in enterprise computing.

This was last updated in March 2022
