The third generation of computers spans the period from 1964 to 1971. It marked a new era defined by the integrated circuit (IC), invented in 1958: a silicon chip that allows enormous quantities of electronic devices to be placed in a minimal space. This again reduced the size of the computer and made multiprogramming possible. The IC revolutionized the way computers were built, and it is still used today in the manufacture of devices such as cell phones and calculators.
Third-generation computers used integrated circuits: circuits obtained by etching hundreds, and later thousands, of microscopic transistors onto silicon chips. These devices were known as semiconductors. The memory capacity of these computers reached up to 2 megabytes, and processing speeds reached 5 million instructions per second. This generation also introduced programs that users without technical training could operate.
The third generation began with the invention of the integrated circuit, better known as the microchip, whose incorporation made possible a new generation of smaller, faster computers with the characteristics described below.
Two physicists, Jack St. Claire Kilby and Robert Noyce, created the integrated circuit in the late 1950s, and with it they revolutionized the electronics industry and opened an era of high technology. From that moment on, events unfolded that made history.
First, the Intel physicist Ted Hoff invented the microprocessor. Later, following the discovery of the structure of DNA, the Russian-born physicist George Gamow proposed that its sequence formed a code, suggesting a new way of thinking about programming information.
On April 7, 1964, IBM announced the S/360, designed by chief architect Gene Amdahl. It was one of the first commercial computers to use integrated circuits. The 360 is considered one of the most important computers in history: it influenced computer design for years afterward and marked the starting point of the third generation.
It was a mainframe computer system designed to cover applications of any size or environment, whether scientific or commercial. This first group of machines built with integrated circuits, the 360 series, could perform both numerical analysis and administrative or file processing. The models ranged in speed from 0.034 MIPS to 1.7 MIPS (a 50-fold range) and carried between 8 KB and 8 MB of main memory. The design drew a clear distinction between architecture and implementation, allowing customers to buy a smaller system knowing they could always migrate to a higher-capacity one, which made the line a resounding success in the marketplace.
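The 50-fold figure follows directly from the two endpoints of the quoted model range; a quick sanity check of that arithmetic, using the MIPS values stated above:

```python
# Speed range of the System/360 model line, as quoted in the text (in MIPS).
slowest_mips = 0.034   # entry-level model
fastest_mips = 1.7     # top-end model

# Ratio between the fastest and slowest models.
ratio = fastest_mips / slowest_mips
print(round(ratio))  # → 50
```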
Control Data Corporation introduced the CDC 6600, a supercomputer capable of executing millions of instructions per second, making it the most powerful machine of its time.
New storage units appeared, such as 9-track magnetic tapes; and although some systems still used punched cards for data entry, they now had fast card readers.
The company Digital Equipment Corporation (DEC), seeing that IBM had cornered a major sector of the market, decided to focus on smaller computers that were cheaper and easier to operate, and these gained popularity. At the end of this technological era, the minicomputer emerged.
With each invention, the space a computer needed for its operation shrank. First came the transistor, which replaced the vacuum tube for processing information and defined an era (the second generation), considerably reducing the size of computers by fitting some 200 transistors into the space a single tube had occupied. Then came the integrated circuit, better known as the microchip.
Among the most important advantages of integrated circuits is their small size relative to circuits built from discrete components: an integrated circuit can contain thousands to millions of transistors in just a few square millimeters. Thanks to this miniaturization, computers shrank in size, gained operating capacity (becoming faster), emitted less heat, and became more efficient machines.
Initially, computers handled a single kind of work, either scientific computation or business processing, but never both.
The ideas behind the integrated circuit predate this generation. The first small germanium device grouped its transistors to form an oscillator on a single semiconductor base. Integrated circuits were very economical because they were manufactured in one piece by photolithographic printing and could be mass-produced almost without defects. They were also very efficient, since their energy consumption was considerably lower.
Thanks to integrated circuits, computer manufacturers gained more flexibility in their software and were even able to standardize their models.
Today we find them in countless electronic devices such as cell phones, clocks, televisions, vehicles, and so on.
Here, two figures deserve special mention: Jack St. Clair Kilby, winner of the Nobel Prize in Physics in 2000, and Robert Norton Noyce, co-founder of Fairchild Semiconductor and Intel, also known as "the Mayor of Silicon Valley". Kilby was responsible for developing the first integrated circuit, demonstrated in 1958. For his part, Noyce developed his own version only about six months later, solving some of the problems present in Kilby's model.