Beyond Moore’s Law: Is Quantum Computing a Commercially Viable Replacement for Classical Computing?



Abstract

Moore’s law states that the number of transistors on a microprocessor doubles approximately every 18 months, and more transistors on a processor generally means more computing power. If this trend continues, transistors are projected to reach atomic dimensions between 2020 and 2030. The behavior of matter at the atomic scale, however, does not follow the classical principles on which today’s computers are built. With the aim of upholding the exponential increase in computation power associated with Moore’s law, scientists and researchers are experimenting with different technologies, with research into quantum computing spearheading this effort. The idea of quantum computing is to exploit quantum mechanical properties such as superposition and entanglement to create computers that are exponentially faster than classical computers for certain problems. But will quantum computers be a commercially viable replacement for classical computers?

  1. Introduction

The impact of the semiconductor integrated circuit on modern life is hard to overstate. Almost every aspect of our daily lives has been affected: from communication to transportation, entertainment to education, the advancements in electronics technology, driven by improvements in semiconductor chips, have been remarkable. This continuous development has become so commonplace that it is now often taken for granted. Today, consumers generally expect progressively more complex and faster electronics at ever lower prices.

The transistor, arguably the most important invention of the past century, underlies this electronics revolution. Invented by John Bardeen, Walter Brattain and William Shockley at Bell Labs in December 1947, the transistor began to replace cumbersome vacuum tubes and ultimately led to the invention of the integrated circuit and the microprocessor. Since the 1960s, the number of transistors in integrated circuits has doubled approximately every 18 months. Fairchild Semiconductor’s Director of R&D, Gordon Moore, described this exponential growth in 1965 and, by extrapolating the trend, predicted the continued exponential increase in transistor density and its accompanying exponential increase in computation power. (The Silicon Engine, 2006) The clock speed of computers also increased exponentially until the early 2000s, when the clock speeds of integrated circuits (ICs) became limited by cooling and thermal-management constraints. (Mattsson, 2014) To maintain the exponential increase in computation power, transistors were made smaller and multiple cores were introduced in ICs. But in the long run, the size of transistors will reach the atomic level, the ultimate size limit.


The idea of using atoms as bits is opening a new age in computing. With the promise of helping scientists develop new materials, encrypt data with far stronger security and model the Earth’s climate far more accurately, this idea is gaining significant research momentum from large companies such as Google, IBM, Microsoft and Intel. However, the behavior of matter on the atomic scale follows quantum mechanics (or quantum physics), a branch of modern physics that differs significantly from classical physics. Computing that exploits quantum mechanical properties such as superposition and entanglement is referred to as quantum computing, and computers built on these concepts are called quantum computers. The computers we use in our everyday lives operate on classical physics concepts and are now referred to in the scientific community as classical computers. Given this fundamental difference, it is clear that atoms cannot simply be manipulated and used like bits in transistors.

Contrary to popular predictions about the potential of quantum computers, which are often based on unproven or inaccurate information in the media, I propose that quantum computers are not a commercially viable replacement for classical computers, at least not for the next two decades. The aim of this paper is to put the difficulties of quantum computing into perspective, describe the current state of the field and argue that quantum computers are not a commercially viable replacement because of several obstacles: de-coherence, errors and their correction, sensitivity to interaction with the environment, the difficulty of reading the output of a quantum computer, the lack of practical solutions for the fundamental architecture of a quantum computer and, lastly, the fact that we do not yet know how to write useful software for quantum computers.

  2. Classical computing

In this section, the fundamentals of classical computing that are most relevant to quantum computing are defined.

A computer is a device that executes a sequence of instructions in order to perform processes, calculations and operations. (Neumann, 1945) A sequence of instructions used to perform a particular task is referred to as a program. How a computer solves problems, and what types of problems it can solve accurately, depends on its architecture. Consequently, the architecture selected for a computer directly determines how efficiently it can solve a given problem.

Almost all computer architectures used currently are based on John von Neumann’s architecture, which is depicted in Figure 1.

Figure 1: The von Neumann computer architecture

The basic idea behind the von Neumann architecture is to partition the computer into individual parts: the arithmetic logic unit (ALU), the control unit (CU), the bus system, the memory, and the input and output (IO). The ALU performs operations such as addition, subtraction, multiplication and division on its input registers. The control unit together with the ALU forms the central processing unit (CPU). The control unit directs the operation of the processor by opening and closing logic gates, which transfers data to and from registers and drives the operation of the ALU. The CPU reads new commands and data from the bus system and writes results back to it. A memory is connected to the bus so that information can be read or stored, and an input and output system is connected to the bus so that the computer can react to external inputs from a user. (Neumann, 1945, p. 2-4)

To execute a command in a von Neumann architecture, the control unit transfers the command from the memory into the CPU, where it is decoded and executed. This is referred to as the fetch-decode-execute cycle or the instruction cycle. The result of the execution, which is held in the ALU, can be accessed through the IO interface or stored in the memory. (Snyder, 2018, p. 46) For instance, if a user wants to add two numbers, both numbers are fetched from the memory into the CPU and stored in the registers of the ALU, which are normally called accumulators. The control unit then decodes the “ADD” instruction, finding the memory addresses of the instruction’s operands. Finally, the ALU executes the addition; the result is held in the ALU’s circuitry, written back into the same registers for further processing, or returned to the memory.
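To make the cycle concrete, the following is a minimal sketch, in Python, of a toy accumulator machine running the addition example above. The instruction set, addresses and values are hypothetical and greatly simplified; a real CPU implements the same cycle in hardware.

    # Toy accumulator machine (hypothetical instruction set) illustrating the
    # fetch-decode-execute cycle: fetch the word at the program counter, decode
    # its opcode, execute it, and finally expose the result.
    memory = {
        0: ("LOAD", 100),    # load the value at address 100 into the accumulator
        1: ("ADD", 101),     # add the value at address 101 to the accumulator
        2: ("HALT", None),
        100: 7,              # first operand
        101: 35,             # second operand
    }

    def run(memory):
        accumulator = 0
        pc = 0                              # program counter
        while True:
            opcode, operand = memory[pc]    # fetch
            pc += 1
            if opcode == "LOAD":            # decode and execute
                accumulator = memory[operand]
            elif opcode == "ADD":
                accumulator += memory[operand]
            elif opcode == "HALT":
                return accumulator          # expose the result

    print(run(memory))                      # prints 42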

The computer architecture of classical computers has undergone multiple changes in the past two decades, but one element that has not been altered is the von Neumann concept of computer design. This foundational concept is largely responsible for the wide-scale commercial adoption of classical computers. In the following sub-sections, important concepts that improved on the von Neumann architecture are discussed.

2.1. The abstraction layers of classical computers

A classical computer can only interpret a sequence of instructions that manipulate data to ultimately solve a problem. To simplify the use of classical computers, abstraction layers were introduced to make interfacing with them a more natural process. Essentially, each layer conceals the implementation details of the layer beneath it and provides simple functions to the layer above it. To execute a program written at a higher level, that program is translated into instructions that the level below can process, and this is repeated until the program is ultimately executed at the lowest level, the hardware. (Tanenbaum, 2005, p.1)

To expand on the idea of abstraction layers, imagine a computer design with five layers, and assume a user wants to develop a program that reads a number from a file and then displays on the screen whether or not the number is smaller than 100. First, the user develops the program in the top layer, referred to as the process layer. The layer beneath it, the operating system layer, provides the necessary functions such as opening files or comparing numbers. To execute the user’s program, a compiler translates the functions of the operating system layer into assembly code for the third layer, the assembler layer. These assembler commands are then translated into hardware commands that can be executed on the second layer, the firmware layer. Lastly, the hardware layer executes the hardware commands on the computer hardware.
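What the user actually writes lives entirely in the top layer. A minimal sketch of that process-layer program is shown below (the file name is hypothetical); every call in it is translated downward through the operating system, assembler, firmware and hardware layers without the programmer ever seeing those details.

    # Process-layer version of the example program: read a number from a file
    # and report whether it is smaller than 100. The file name is hypothetical.
    with open("number.txt") as f:          # "open a file" is provided by the OS layer
        value = int(f.read().strip())

    if value < 100:                        # the comparison eventually becomes a CPU instruction
        print(f"{value} is smaller than 100")
    else:
        print(f"{value} is not smaller than 100")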

The idea of abstraction layers is important in computer science. Without it, the use of modern computers would not be as simple, nor as widely adopted commercially, as it is today. Abstraction layers help speed up application development, since software developers are freed from writing instructions in machine code, which is a tedious exercise. On the flip side, they also increase the time needed to execute commands at the hardware level, because of the repeated steps involved in translating code from one layer to the next.

2.2. Parallel computing in classical computers

In order to speed up computation in a classical computer, one can either increase the clock speed of the CPU or, if this is not feasible due to hardware limitations, execute multiple instructions in parallel. (Tanenbaum, 2005, p.547)

Parallel computing combines multiple CPUs into one computer, as shown in Figure 2c. Such systems expose a problem of the von Neumann architecture: data processing in the CPU may be fast, but all data has to pass through the bus shown in Figure 1, and the clock of a bus is slower than the clock of a CPU. Consequently, even for single-processor systems with high-speed CPUs, the bus system restricts the performance of the computer. This is commonly called the “von Neumann bottleneck”.

Figure 2: (a) On chip parallelism (b) A co-processor (c) A multiprocessor (d) A multicomputer (e) A grid

In parallel computing, when two CPUs or processing elements are close together, have a high bandwidth and low delay between them, and are computationally intimate, they are said to be tightly coupled. On the other hand, when they are far apart, have a low bandwidth, high delay and are computationally remote, they are said to be loosely coupled. (Tanenbaum, 2005, p.548)

Multiprocessors and multicomputers are the systems most commonly used in parallel computing. In a multiprocessor with shared memory, all of the processors are tightly coupled through a high-speed bus on the same motherboard, which allows them to communicate with each other by reading from and writing to the shared memory. A multicomputer, on the other hand, is a system of several computers, each with its own processor and memory, coupled together to solve a problem by dividing the work between them. It is simpler and more cost-effective to build a multicomputer than a multiprocessor, but programming a multicomputer is more difficult. (Tanenbaum, 2005, p.548)
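As a rough illustration of dividing one task between several processors, the sketch below sums a list using four worker processes via Python’s multiprocessing module on a shared-memory machine. It is only meant to show the split-and-combine pattern; a real multicomputer would typically communicate by message passing (for example, MPI) rather than through a shared pool.

    # Split a summation across four worker processes and combine the results.
    from multiprocessing import Pool

    def partial_sum(chunk):
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]          # divide the work four ways
        with Pool(processes=4) as pool:
            total = sum(pool.map(partial_sum, chunks))   # one chunk per process
        print(total == sum(data))                        # True: same answer, computed in parallel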

2.3. Pipelining in CPUs

As discussed before, the execution of a single command in a CPU can be partitioned into several sub-operations. In summary, a command can be divided into four stages: Fetch/Load, Decode, Execute and Write. First, the command is fetched from the memory. Second, the command is decoded to find out what the CPU has to do. Next, the command is executed. Lastly, the results are written back, or returned, to the memory. To speed up execution, pipelining within the CPU itself, demonstrated in Figure 3, is introduced: the CPU works on the four stages of successive commands simultaneously. Since the separate steps of a single command still have to be processed sequentially, the execution of one command takes the same time as it would without pipelining.

Figure 3: Execution of instruction that can be split into four separate steps without and with pipelining.

But with CPU pipelining, the different stages of the pipeline can operate concurrently on different commands. Therefore, the rate at which instructions are completed increases by up to a factor equal to the number of stages in the pipeline; in the example above, that would be four. (Hamacher, 2011, p.194)
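A back-of-the-envelope sketch of this argument: with S pipeline stages and N instructions, an unpipelined CPU needs roughly S x N stage-times, while a pipelined one needs roughly S + (N - 1), so the speed-up approaches S as N grows. The numbers below are illustrative only and ignore stalls and hazards.

    # Idealised pipeline speed-up for a four-stage pipeline.
    stages = 4
    for n in (4, 100, 10_000):
        without_pipeline = stages * n            # each instruction runs all stages alone
        with_pipeline = stages + (n - 1)         # one instruction completes per stage-time
        print(n, round(without_pipeline / with_pipeline, 2))
    # prints speed-ups of about 2.29, 3.88 and 4.0 respectively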

2.4. Memory architecture of Classical Computers

Figure 4: Computer memory hierarchy

Classical computers use many different types of memory to store data and programs as shown in Figure 4. In this section, the way that main memory (RAM) is organized is discussed.

CPU registers and cache memory offer computers blistering speed for storing data at the expense of very limited capacity. Main memory is the next fastest memory in a computer and is much larger in size. Fundamentally, the RAM’s architecture resembles an arrangement of cells in a table (or, for simplicity, a spreadsheet) in which each cell can hold a 0 or a 1. Each cell has a distinct address that can be reached by counting across columns and then counting down rows. There is an address line, a thin electrical line etched into the chip, for each row and each column in the set of cells. When the CPU receives an instruction to be executed, the instruction may include a RAM address from which data is to be read. To execute this command, the CPU sends a request to the RAM. A RAM controller handles this request and sends it down the address lines so that the transistors along those lines open up the cells and each capacitor value can be read. (Haugen, 2000, p.2)
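The row/column idea can be sketched in a few lines. The address width and split below are hypothetical (a 16-bit address divided into an 8-bit row and an 8-bit column); real DRAM chips use different widths and multiplex the two halves over the same pins, but the decomposition is the same in spirit.

    # Split a flat memory address into the row and column indices used to
    # select a cell in the RAM grid. Widths are hypothetical.
    def decode_address(address, column_bits=8):
        row = address >> column_bits                 # upper bits pick the row line
        column = address & ((1 << column_bits) - 1)  # lower bits pick the column line
        return row, column

    print(decode_address(0xBEEF))                    # (190, 239)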

  3. Quantum computing

Research on quantum computing began in 1982, when Richard Feynman proposed using one quantum system to simulate another. The fundamental idea is to harness two underlying concepts of quantum mechanics, superposition and entanglement, to offer exponentially greater compute capacity and solve problems that are difficult to solve using classical computers.

In the classical model of a computer, a bit can only exist in two different states, a 0 or a 1. Bits in quantum computers follow different rules. A quantum bit, commonly called a qubit, can be in the classical 0 and 1 states, but it can also exist in a combination of both; a qubit in such a state is said to be in a superposition of 0 and 1. This behavior is contrary to our everyday understanding of the world: a qubit in this state can be imagined as existing in two spaces at the same time, as a 0 in one space and as a 1 in the other. Consequently, an operation on such a qubit potentially acts on both values at once, so a single operation effectively works on two values. Similarly, a two-qubit system lets the same operation act on four values, and a three-qubit system on eight. Increasing the number of qubits therefore exponentially increases the number of values a single operation can act on. With an efficiently designed algorithm, this parallelism can potentially be used to solve certain problems in a fraction of the time taken by a classical computer. (Bone et al, 2014)
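The exponential growth of the state is easy to see numerically. The sketch below simulates the arithmetic of a small register with NumPy: an n-qubit state is a vector of 2^n amplitudes, and applying a Hadamard gate to each qubit spreads the register into an equal superposition of all 2^n basis states. This is only a classical simulation of the mathematics, not a description of how physical quantum hardware is built or programmed.

    # Classical simulation of a 3-qubit register: 2**3 = 8 complex amplitudes.
    import numpy as np

    n = 3
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                                   # start in |000>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

    # Apply H to each qubit in turn; the full operation on the register is the
    # tensor (Kronecker) product of H on one qubit and identity on the others.
    for qubit in range(n):
        op = np.array([[1.0]])
        for q in range(n):
            op = np.kron(op, H if q == qubit else np.eye(2))
        state = op @ state

    print(np.round(np.abs(state)**2, 3))             # eight equal probabilities of 0.125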

Current state-of-the-art quantum processors boast 72 qubits (Google) and 50 qubits (IBM). But, unlike the von Neumann architecture used by classical computers, no complete solution with a scalable and versatile microarchitecture has yet been proposed for quantum computers.

3.1. Abstraction Layers of Quantum Computers

The idea of abstraction layers is essential in computer science and is relevant to quantum computers as well. With the technology in its infancy, only a small number of research papers have tried to develop, in a methodical way, the separate physical components and abstraction layers of a quantum computer. The work that does exist tends to offer a high-level view rather than an implementable architecture. Research scientists such as Cody Jones of HRL Laboratories have proposed a layered control stack for a quantum computer architecture, but it focuses more on gate abstractions than on architectural support. (Jones, 2012)

Realizing a general architecture for a quantum computer, like von Neumann’s architecture for a classical computer, is difficult as the interfaces for qubit manipulation differ immensely from technology to technology. Consequently, the field of quantum computer architecture remains in its infancy. The best architecture for achieving this goal is yet to be identified, but many are being experimented with.

3.2. Parallel Computing for Quantum Computers

As discussed earlier, parallelism in quantum computers is achieved by harnessing the superposition property. To increase the power of a quantum computer, one has to increase the number of qubits. However, increasing the number of qubits also substantially increases the rate of computational errors.

Physicists in China have successfully demonstrated 18-qubit entanglement in an experiment, by far the largest entangled state realized with individual control of each qubit. Eighteen qubits can represent a total of 2^18, or 262,144, combinations of output states, but this is nowhere close to the computational power of an average classical computer. (Zyga, 2018)

3.3. Quantum Error Correction

Computation always involves errors, and these errors may arise from internal or external sources. Qubits are very fragile and difficult to control, and almost any external disturbance can cause them to “de-cohere”, i.e. cause their state of superposition to collapse. Heat or microwaves from wireless devices can cause qubits to lose superposition. For a qubit to sustain coherence in the superposition state, quantum devices are cooled to near absolute zero and shielded from electromagnetic waves to prevent their atoms from vibrating. In addition, directly measuring the superposition state of a qubit is very challenging because of the qubit’s extreme sensitivity to any sort of external force. To address these issues, quantum error correction (QEC) mechanisms are needed to make quantum computing fault-tolerant.


The first quantum error correcting code was discovered by Peter Shor. Shor’s code showed that nine physical qubits could be used to protect a single logical qubit against general errors. Since then, numerous researchers have developed their own codes for quantum error correction. Error correction is much more difficult in quantum computing than in classical computing because measuring qubits may destroy the information stored in them, and because errors are continuous and can spread as qubits entangle with each other.
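Shor’s nine-qubit code itself needs a full quantum simulator to demonstrate, but the redundancy idea behind it can be sketched classically. The snippet below implements the three-bit repetition code with majority-vote decoding, the classical ancestor of the quantum bit-flip code; it is offered only as an analogy for how spreading one logical bit over several physical carriers suppresses errors, not as an implementation of Shor’s code.

    # Three-bit repetition code: encode one bit as three copies, send them
    # through a noisy channel, and recover the bit by majority vote.
    import random

    def encode(bit):
        return [bit, bit, bit]

    def noisy_channel(codeword, flip_probability=0.1):
        return [b ^ (random.random() < flip_probability) for b in codeword]

    def decode(codeword):
        return int(sum(codeword) >= 2)               # majority vote

    trials = 10_000
    failures = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
    print(failures / trials)    # roughly 0.028, well below the raw 0.1 error rate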

3.4. Memory architecture for quantum computers

In theory, quantum computers have the potential to compute with extraordinary speed, as demonstrated by the IBM Q team in their paper “Quantum advantage with shallow circuits”. (IBM, 2018) But to execute most of these computations efficiently, these machines will eventually need access to a form of storage device, analogous to the RAM in classical computing; otherwise a large-scale quantum computer cannot be realized.

De-coherence and disentanglement of qubits are a challenge for storing data in them. The longest time recorded so far for storing data on a qubit is only 1.3 seconds. (Galeon, 2017) By comparison, classical solid state drives (SSDs) such as the Samsung SSD 750 Evo are capable of holding data for more than 70 years without degradation. (Nuncic, 2018) Comparing these numbers, it is apparent that the de-coherence and disentanglement hurdles of quantum computing will need to be overcome before the superposition states of qubits can be held for longer, and this is key to the commercial adoption of quantum computers.

  4. An alternative approach to parallel computing

There are approaches to parallel computing other than quantum computing, such as DNA computing and microfluidics-based computing. But these approaches have not yet been realized at a large scale.

Researchers from Lund University in Sweden have proposed another approach, a bio-computing approach that harnesses the properties of myosin, a protein found in muscle tissue. Myosin can be thought of as a tiny molecular motor that converts chemical energy into mechanical energy. Heiner Linke, the research director, explains it in simple terms: “it involves building a network of nano-based channels that give specific traffic regulations for protein filaments. The solution in the network corresponds to the answer of a mathematical question and many molecules can find their way through the network at the same time.” (Nicolau, 2016) So instead of bulky classical supercomputers performing multiple simultaneous computations, a nano-scale molecular-motor computer could do the same thing, meaning a much smaller and much more powerful machine. It would also be simpler to build and easier to write software for than a quantum computer. Because the major components are found in nature and these computers can be programmed using existing programming languages, a nano-scale molecular-motor computer could be a better replacement for classical computers.

  5. Conclusion

In conclusion, several classical computing and quantum computing concepts have been compared, and the current state and problems of the implementation of quantum computers have been presented. Firstly, due to differences in the implementation of the two approaches, quantum computers will never be able to run the “if then” and “if else” type of logic used in classical computing, because of the superposition and entanglement properties of quantum computing. Secondly, because quantum computers have to be cooled to temperatures near absolute zero and shielded from almost any kind of external disturbance, including microwaves and heat, quantum computing is an impractical solution from a fabrication and operational perspective for general use outside a specialized lab. Thirdly, we currently lack technical solutions for the fundamental architecture of a quantum computer and for correcting errors in a large-scale system. Nor do we have a clear idea of how to write useful quantum software.

So, contrary to the public perception of quantum computing as portrayed in the media, quantum computing is not a commercially viable replacement for classical computing and will not be for at least two decades, at least not until the hurdles listed above are overcome.

References

 
