What Are the Different Generations of Computers?

A Brief History of Computers

Describe the Evolution of Computer Hardware

In this plengdut.com article, we look at the evolution of computers over the past century, from the massive first generation machines of the 1930s and 1940s to modern fourth generation devices, and at how Moore’s Law has predicted the exponential growth of technology.

History of Computers

Computers have come a long way since Babbage and Turing. Between the mid 19th and mid 20th centuries, the Industrial Revolution gave way to the Information Age. Since then, the pace of technological change has grown faster than ever before. ENIAC (Electronic Numerical Integrator and Computer), built at the University of Pennsylvania between 1943 and 1946, is considered the first working, digital, general purpose computer (Figure 1).

FIGURE 1 ENIAC was the first working, digital, general purpose computer.

First Generation Computers

During the 1930s and 1940s, several electromechanical and electronic computers were built. These first generation computers were massive in size and used vacuum tubes and manual switches to process data. Vacuum tubes, which resemble incandescent lightbulbs, give off a lot of heat and are notoriously unreliable. ENIAC used about 18,000 vacuum tubes, weighed almost 30 tons, and occupied about 1,800 square feet. Originally created to calculate artillery firing tables, ENIAC wasn’t completed until after the war ended and was reprogrammed to solve a range of other problems, such as atomic energy calculations, weather prediction, and wind tunnel design. The programming was done by manipulating switches and took six programmers several days to complete.

The Harvard Mark I, also known as the IBM Automatic Sequence Controlled Calculator, was a general purpose digital calculator used by the U.S. Navy toward the end of World War II. It was 51 feet long and 8 feet tall. Grace Hopper worked on the Mark I and was one of the first computer programmers. Hopper is credited with coining the term bug to refer to a glitch in a computer program.

Important first generation computers include the Z1 and Z3, built in Germany; the Colossus machines, built in the United Kingdom; and the Atanasoff–Berry Computer (ABC), the Harvard Mark 1, ENIAC, and UNIVAC, built in the United States (Table 1).

Table 1 Important First Generation Computers


Date | Computer | Origin | Creator | Description
1936–1941 | Z1–Z3 | Germany | Konrad Zuse | The Z1 through Z3 were mechanical, programmable computers. Working in isolation in Germany, Konrad Zuse didn’t receive the support of the Nazi government, and his computers were destroyed during the war.
1942 | Atanasoff–Berry Computer (ABC) | United States | Professor John Atanasoff and graduate student Clifford Berry at Iowa State College | The ABC was never fully functional, but Atanasoff won a patent dispute against John Mauchly (ENIAC) and was declared the inventor of the electronic digital computer.
1944 | Colossus | United Kingdom | Tommy Flowers | Used by code-breakers to decipher encrypted German messages, these computers were destroyed after the war and kept secret until the 1970s.
1944 | Harvard Mark 1 | United States | Designed by Howard Aiken and programmed by Grace Hopper at Harvard University | The Mark 1 was used by the U.S. Navy for gunnery and ballistic calculations until 1959.
1946 | ENIAC | United States | Presper Eckert and John Mauchly at the University of Pennsylvania | ENIAC was the first working, digital, general purpose computer.
1951 | UNIVAC | United States | Eckert and Mauchly | The world’s first commercially available computer, UNIVAC was famous for predicting the outcome of the 1952 presidential election.

Second Generation Computers

Invented in 1947, transistors are tiny electronic switches. The use of transistors in place of vacuum tubes enabled second generation computers of the 1950s and 1960s to be more powerful, smaller, more reliable, and reprogrammable in far less time than first generation computers. Figure 2 illustrates the difference in size between a vacuum tube and a transistor.

FIGURE 2 The Vacuum Tube and the Transistor


Third Generation Computers

Developed in the 1960s, integrated circuits are chips that contain large numbers of tiny transistors fabricated on a semiconducting material called silicon (Figure 3). Third generation computers used multiple integrated circuits to process data and were even smaller, faster, and more reliable than their predecessors, although there was much overlap between second and third generation technologies in the 1960s. The Apollo Guidance Computer, used in the moon landing missions, was originally designed using transistors, but over time the design was modified to use integrated circuits instead. The 2000 Nobel Prize in Physics was awarded in part for the invention of the integrated circuit.

FIGURE 3 Integrated Circuits on a Circuit Board


Fourth Generation Computers

The integrated circuit made the development of the microprocessor possible in the 1970s. A microprocessor is a complex integrated circuit that contains the central processing unit (CPU) of a computer (Figure 4). The CPU functions as the brain of a computer. The first microprocessor, developed in 1971, was as powerful as ENIAC. Modern fourth generation personal computers use microprocessors, which are found in everything from smartphones to automobiles and refrigerators.

FIGURE 4 Fourth generation computers use microprocessors.

Moore’s Law

In 1965, Intel cofounder Gordon Moore observed that the number of transistors that could be placed on an integrated circuit had doubled roughly every two years. This observation, known as Moore’s Law, predicted this exponential growth would continue. The current trend is closer to doubling every 18 months. As a result of new technologies, such as building 3D silicon processors or using carbon nanotubes in place of silicon (Figure 5), this pace will likely continue for another 10 to 20 years. The increase in the capabilities of integrated circuits directly affects the processing speed and storage capacity of modern electronic devices.
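To make the arithmetic behind this concrete, here is a minimal Python sketch that projects transistor counts under a simple doubling rule. The 1971 starting point of roughly 2,300 transistors corresponds to the first microprocessor; the projected figures are illustrative, not actual industry data.

# Project transistor counts under Moore's Law: the count doubles once
# every `doubling_years` years. Set doubling_years=1.5 to model the
# 18-month trend mentioned above. Figures are illustrative only.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Estimated transistor count after (year - base_year) years of doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")

Under the two-year rule, fifty years of doubling multiplies the 1971 count by about 2 to the 25th power, or roughly 33 million.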

FIGURE 5 Carbon nanotubes may someday replace silicon in integrated circuits.


Moore stated in a 1996 article: "More than anything, once something like this gets established, it becomes more or less a self-fulfilling prophecy. The Semiconductor Industry Association puts out a technology road map, which continues this [generational improvement] every three years. Everyone in the industry recognizes that if you don’t stay on essentially that curve they will fall behind. So it sort of drives itself." (Moore, Gordon E. 1996. "Some Personal Perspectives on Research in the Semiconductor Industry," in Rosenbloom, Richard S., and William J. Spencer (Eds.), Engines of Innovation: U.S. Industrial Research at the End of an Era. Harvard College.) Thus, Moore’s Law became a technology plan that guides the industry. Over the past several decades, the end of Moore’s Law has been predicted many times; each time, new technological advances have kept it going. Moore himself admits that exponential growth can’t continue forever.

In less than a century, computers have gone from being massive, unreliable, and costly machines to being an integral part of almost everything we do. As technology has improved, size and cost have dropped while speed, power, and reliability have grown. Today, the chip inside your cell phone has more processing power than that first microprocessor developed in 1971. Technology that was science fiction just a few decades ago is now commonplace.

Green Computing: Smart Homes

The efficient and eco-friendly use of computers and other electronics is called green computing. Smart homes and smart appliances help save energy and, as a result, are good for both the environment and your pocketbook.

Smart homes use home automation to control lighting, heating and cooling, security, entertainment, and appliances. Such a system can be programmed to turn various components on and off at set times to maximize energy efficiency. If you’re away on vacation or have to work late, you can control a smart home remotely by phone or over the Internet. Some utility companies offer lower rates during off-peak hours, so programming your dishwasher and other appliances to run during those times can save you money and help utility companies manage the power grid, potentially reducing the need for new power plants.
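As a rough illustration of this scheduling idea, here is a minimal Python sketch; the appliance names and the 10 p.m. to 6 a.m. off-peak window are hypothetical examples, not taken from any real utility rate plan or home automation product.

from datetime import time

# Hypothetical off-peak rate window: 10:00 p.m. to 6:00 a.m.
OFF_PEAK_START = time(22, 0)
OFF_PEAK_END = time(6, 0)

def is_off_peak(now):
    """True if `now` falls inside the overnight off-peak window."""
    return now >= OFF_PEAK_START or now < OFF_PEAK_END

def should_run(appliance, now):
    """Deferrable loads wait for off-peak rates; others run on demand."""
    deferrable = {"dishwasher", "dryer"}  # example deferrable appliances
    return appliance not in deferrable or is_off_peak(now)

print(should_run("dishwasher", time(23, 30)))  # True: off-peak, run now
print(should_run("dishwasher", time(17, 0)))   # False: wait for off-peak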

Smart appliances plug into the smart grid, a network for delivering electricity to consumers that includes communication technology to manage electricity distribution efficiently. Smart appliances monitor signals from the power company and, when the electric grid is stressed, can react by cutting back on their power consumption.
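The demand-response behavior described here can be sketched in a few lines of Python as well; the wattage values are invented for illustration, and real smart grid signaling is considerably more involved.

# Hypothetical demand response: an appliance reads a grid stress signal
# and sheds load while the grid is stressed. Wattages are made up.
def target_power(grid_stressed, normal_watts=500, reduced_watts=200):
    """Return the power level the appliance should draw right now."""
    return reduced_watts if grid_stressed else normal_watts

print(target_power(grid_stressed=False))  # 500 watts under normal conditions
print(target_power(grid_stressed=True))   # 200 watts while the grid is stressed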

5 Things You Need To Know
·         First generation computers used vacuum tubes.
·         Second generation computers used transistors.
·         Third generation computers used integrated circuits (chips).
·         Fourth generation computers use microprocessors.
·         Moore’s Law states that the number of transistors that can be placed on an integrated circuit doubles roughly every two years, although today it is closer to every 18 months.

Key Terms
central processing unit (CPU)
ENIAC (Electronic Numerical Integrator and Computer)
integrated circuit
microprocessor
Moore’s Law
transistor
vacuum tube