lecturenotes1 - CSE 599d - Quantum Computing Introduction...

CSE 599d - Quantum Computing: Introduction and Basics of Quantum Theory
Dave Bacon
Department of Computer Science & Engineering, University of Washington

I. COURSE ADMINISTRIVIA

Information about the structure of the course can be found in the course syllabus, handed out on the first day of class and available on the course website at http://www.cs.washington.edu/cse590d .

II. WHENCE QUANTUM COMPUTING?

Digital machines dominate our everyday lives to such a degree that it is hard to imagine a time when the idea of a programmable computer was but a twinkle in a few oddballs' eyes. But that's the way it was, for example, way back in 1936 when Alan Turing wrote his famous paper "On computable numbers, with an application to the Entscheidungsproblem," in which the notion of a universal Turing machine was first introduced. In the years that followed, the optimism of the early pioneers of computing must have seemed insane: machines which can execute billions of arithmetic operations per second? Crazy! Even after the invention of the transistor, few suspected that digital computers would become as ubiquitous and useful as they are today. The force (or whatever you want to call it, especially if you aren't Luke Skywalker) behind this, of course, is Moore's law (or Moore's self-fulfilling prophecy, if you want to be a cynic, but I prefer not to get so cynical, especially at the beginning of a class): the feature size on silicon chips is cut in half approximately every two years. Since Gordon Moore (whose hand I got to shake... twice!) wrote down his famous prediction forty years ago, we have been in an era of unprecedented growth. But Moore's law is not really a law; it is a statement about the rate of technological progress for computers. And if one thinks for a while, one begins to wonder when Moore's law will end (and perhaps, what will happen to your job when it does!?).
If we blindly extrapolate Moore's law into the future, we see that somewhere around 2050 the feature size of computers would need to be the size of an atom. The size of an atom! Which makes us wonder (1) whether Moore's law will be able to reach atomic sizes, and (2) what happens when we get to machines that are so small? Of course we don't know the answer to either of these questions, because foresight isn't 20/20, but we can, as engineers, physicists, chemists, and materials scientists, make some rough guesses about (1). What we know is that even today we can construct transistors which are molecular sized. Now, these aren't very reliable, and they are far from capable of being used in any sort of modern fab plant; on the other hand, they do indicate to us that there is a real possibility of designing computers whose individual components are a few atoms in size. As a side note, you might be wondering why computers should stop at the size of an atom. Well, certainly I don't know the answer to this: it is conceivable that one could engineer nuclear matter such that it functions as a computer. However, ...
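The "around 2050" figure can be checked with a back-of-the-envelope calculation. Here is a minimal sketch, under my own assumed numbers (not taken from these notes): transistor density doubles every two years, so linear feature size shrinks by a factor of sqrt(2) per period; features are roughly 65 nm in 2006; and an atom is roughly 0.1 nm across.

```python
# Back-of-the-envelope extrapolation of Moore's law down to atomic scale.
# Assumed inputs (illustrative, not from the lecture notes):
#   - transistor density doubles every 2 years, so the *linear* feature
#     size shrinks by a factor of sqrt(2) per doubling period
#   - feature size is ~65 nm in 2006
#   - an atom is ~0.1 nm across
import math

FEATURE_NM_2006 = 65.0    # assumed feature size in 2006 (nm)
ATOM_NM = 0.1             # rough atomic diameter (nm)
YEARS_PER_DOUBLING = 2.0  # one density doubling per two years

# Number of density doublings until features reach one atom across:
# each doubling shrinks the linear size by sqrt(2).
doublings = math.log(FEATURE_NM_2006 / ATOM_NM) / math.log(math.sqrt(2))
year_atomic = 2006 + doublings * YEARS_PER_DOUBLING

print(f"~{doublings:.1f} doublings; atomic scale around {year_atomic:.0f}")
```

With these assumed inputs the crossover lands in the 2040s; nudging the starting feature size or the atomic diameter shifts it by a decade or so, which is consistent with the rough "around 2050" estimate above.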

This note was uploaded on 11/06/2011 for the course CSE 599 taught by Professor Staff during the Fall '08 term at University of Washington.

