Feynman Lectures on Computation

That was "up"; now it's time to go down. How can anything be simpler than our dumb file clerk model and our simple list of instructions? What we have not considered is what our file clerk is made of; to be more realistic, we have not looked at how we would actually build electronic circuits to perfonn the various operations we have discussed.

This is where we are going to go next, but before we do, let me say what I mean by moving "sideways". Sideways means looking at something entirely different from our von Neumann architecture, which is distinguished by having a single Central Processing Unit (CPU) and everything coming in and going out through the "fetch and execute" cycle.

Many other more exotic computer architectures are now being experimented with, and some are being marketed as machines people can buy. Going "sideways" therefore means remaining at the same level of detail but examining how calculations would be performed by machines with differing core structures.

We have already invited you to think of such "parallel" computers with the problem of organizing several file clerks to work together on the same problem. We shall begin our trip downwards by looking at what we need to be able to perform our various simple operations (adds, transfers, control decisions, and so forth). We will see that we need very little to do all of these things! To get an idea of what's involved, let's start with the "add" operation.

Our first, important decision is to restrict ourselves to working in base 2, the binary system. In the meantime, we shall adopt a somewhat picturesque, and simpler, technique for depicting binary numbers: we lay out a strip of compartments (think of an ice tray), and put a pebble in a compartment to stand for a 1, leaving it empty for a 0. Now let us take two such strips, and pretend these are the numbers to be added - the "summands".

Underneath these two we have laid out one more strip, to hold the answer (see figure). This turns our abstract mathematical problem into a matter of real-world "mechanics".

All we need to do the addition is a simple set of rules for moving the pebbles. The basic problem is the same as in ordinary decimal addition: work column by column, carrying when a column overflows. For binary addition the basic rules are: 0 + 0 = 0; 0 + 1 = 1 + 0 = 1; and 1 + 1 = 0, carry 1 to the next column. So now you can imagine giving instructions on how to move the pebbles to someone who is a complete idiot. The marvellous thing is, with sufficiently detailed rules this "idiot" is able to add two numbers of any size!
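To make the rule book concrete, here is a minimal Python sketch (my own illustration, not part of the lectures) of the file clerk's procedure: each summand is a list of bits, lowest digit first, standing for pebbles in the compartments of a strip, and the rules above are applied column by column.

    def add_binary(a, b):
        """Add two binary numbers given as lists of bits, lowest digit first."""
        n = max(len(a), len(b))
        a = a + [0] * (n - len(a))       # pad the shorter strip with empty compartments
        b = b + [0] * (n - len(b))
        result, carry = [], 0
        for i in range(n):
            total = a[i] + b[i] + carry  # 0, 1, 2 or 3 pebbles land in this column
            result.append(total % 2)     # 0+0=0, 0+1=1, 1+1=0 ...
            carry = total // 2           # ... carrying 1 to the next column
        result.append(carry)             # a final carry may spill over
        return result

    # 5 (101) + 3 (011), both fed in lowest digit first:
    print(add_binary([1, 0, 1], [1, 1, 0]))   # [0, 0, 0, 1], i.e. 1000 = 8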

With a slightly more detailed set, he can graduate to multiplication. He can even, eventually, do very complicated things involving hypergeometric functions and what have you. What you tell an apparent idiot, who can do no more than shuffle pebbles around, is enough for him to tackle the evaluation of hypergeometric functions and the like.

If he shifts the pebbles quickly enough, he could even do this quicker than you (in that respect, he is justified in thinking himself smarter than you!). Of course, real machines do not calculate by fiddling with pebbles (although don't forget the abacus of old!). They manipulate electronic signals. So, if we are going to implement all of our notions about operations, we have to start thinking about electric circuits. Let us ditch our ice trays and stones and look at the problem of building a real, physical adder to add two binary digits A and B.

This process will result in a sum, S, and a carry, C; we set this out in a table as follows:

A B | S C
0 0 | 0 0
0 1 | 1 0
1 0 | 1 0
1 1 | 0 1

Let us represent our adder as a black box with two wires going in (A and B) and two coming out (S and C) (Fig.). We will detail the actual nature of this box shortly. For the moment, let us take it for granted that it works. As an aside, let us ask how many such adders we would need to add two r-bit numbers?

You should be able to convince yourself that 2r - 1 single-bit adders are required. This again illustrates our general principle of systematically building complicated things from simpler units. Let us go back to our black box, single-bit adder. Suppose we just look at the carry bit: it is 1 only when both A and B are 1, and 0 otherwise. This corresponds precisely to the behavior of the so-called AND gate from Boolean logic.

Such a gate is itself no more than a black box, with two inputs and one output, and a "truth table" which tells us how the output depends on the inputs. This truth table, and the usual pictorial symbol for the AND gate, are given below: the output is 1 only when both inputs are 1, and 0 in the other three cases. (The single-bit adder box above is sometimes known as a "half adder".) Simple enough. Although I have described the gate as a black box, we do in fact know exactly how to build one using real materials, with real electronic signals acting as values for A, B and C, so we are well on the way to implementing the adder.


The sum bit of the adder, S, is given by another kind of logic gate, the "exclusive or" or XOR gate. Like the AND, this has a defining truth table and a pretty symbol (Fig.): its output is 1 when exactly one of its inputs is 1, and 0 otherwise. XOR is to be distinguished from a similar type of gate, the conventional OR gate, which gives a 1 if either or both of its inputs is 1; its truth table and symbol are shown in Figure 2.
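As a hedged illustration of how these gates combine (a sketch of mine, not taken from the text), the following Python snippet builds the one-bit "half adder" box from an XOR and an AND, then chains such boxes to add two r-bit numbers; the counter confirms the 2r - 1 figure quoted above.

    HALF_ADDERS_USED = 0

    def half_adder(a, b):
        """One-bit adder box: sum = XOR, carry = AND."""
        global HALF_ADDERS_USED
        HALF_ADDERS_USED += 1
        return a ^ b, a & b                    # (S, C)

    def ripple_add(a, b):
        """Add two r-bit numbers (lists of bits, lowest digit first)."""
        assert len(a) == len(b)
        result = []
        s, carry = half_adder(a[0], b[0])      # the lowest column needs no carry-in
        result.append(s)
        for i in range(1, len(a)):
            s1, c1 = half_adder(a[i], b[i])
            s2, c2 = half_adder(s1, carry)
            result.append(s2)
            carry = c1 | c2                    # the two carries can never both be 1
        result.append(carry)
        return result

    print(ripple_add([1, 0, 1, 1], [1, 1, 0, 1]))   # 13 + 11 = 24 -> [0, 0, 0, 1, 1]
    print(HALF_ADDERS_USED)                          # 2*4 - 1 = 7 half adders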

All of these gates are examples of "switching functions", which take as input some binary-valued variables and compute some binary function of them. Shannon was the first to apply the rules of Boolean algebra to switching networks, in his MIT Master's thesis of 1937. Such switching functions can be implemented electronically with basic circuits called, appropriately enough, "gates".

The presence of an electronic signal on a wire is a "1" (or "true"), the absence a "0" (or "false"). Let us continue going down in level and look in more detail at these basic gates. The simplest is the identity: just a wire coming into a box and then out again, with the same signal on it (Fig.). In a real computer, this element would be considered a "delay", since it takes a signal a finite time to pass from one end to the other. But let us skip this operation and look at the next simplest, namely, a box which "negates" the incoming signal.

If the input is a 1, then the output will be 0, and vice versa. This is the NOT operation, with the obvious truth table (Fig.). Diagrammatically, the NOT is just the delay with a circle at its tip. One of the nice games you can play with logic gates is trying to find out which is the best set to use for a specific purpose, and how to express other operators in terms of this best set.

A question that naturally arises when thinking about this stuff is whether it is possible to assemble a basic set of gates with which you could, in principle, build all possible logic functions. We will not consider this matter of "completeness" of a set of operators in any detail here; the actual proof is pretty tough, and way beyond the level of this course.

We will content ourselves with a hand-waving proof in section 2. To tempt you to go further with all this cute stuff, I will note that there exist single operators that are complete. We now have pretty much all of the symbols used by engineers to depict the various gates; they are a useful shorthand for the physical gates and for illustrating how they are linked together.

Note that we have adopted the common convention of writing the NOTs as circles directly on the relevant wires; we don't need the triangles. Notice also the convention we are using for wires that meet: a dot indicates that crossing wires are connected; if the lines cross without connection, there is no dot.


Of course, you have to check that this combination works for the other two input sets of A and B; and indeed it does.

Note that this circuit is not unique. Another way of achieving an XOR switch is as follows (Fig.). Which way should we make the XOR circuit in practice? It just depends on the details of the particular circumstance: the hardware, the semiconductor technology, and so on. We might also be interested in other issues, such as which method requires the fewest elements.
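Since the figures for the two XOR circuits are not reproduced here, the following Python sketch shows two standard ways of composing an XOR from AND, OR and NOT gates (they may or may not be the exact circuits in Feynman's figures) and checks them against the truth table.

    def NOT(a):      return 1 - a
    def AND(a, b):   return a & b
    def OR(a, b):    return a | b

    def xor_v1(a, b):
        # (A OR B) AND NOT (A AND B)
        return AND(OR(a, b), NOT(AND(a, b)))

    def xor_v2(a, b):
        # (A AND NOT B) OR (NOT A AND B)
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    for a in (0, 1):
        for b in (0, 1):
            assert xor_v1(a, b) == xor_v2(a, b) == (a ^ b)
    print("both circuits reproduce the XOR truth table")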

As you can imagine, such stuff amounts to an interesting design problem, but we are not going to dwell on it here. Let us look at another example: an AND gate with four inputs. This still has just one output, and by extension from the two-input case, we declare that this gate only "goes off" (that is, gives an output of 1) when all four inputs are 1. Sometimes people like to write this problem symbolically, as A·B·C·D. Of course, when logicians write something like this they have no particular circuit in mind which can perform the operation.

We, however, can design such a circuit, built up from our primitive black box gates (Fig.). For now, I will give some quick pointers to gate construction that should be intelligible to those of you with some grasp of electronics. Central to the construction of all gates is the transistor. This is arguably the most important of all electronic components, and it played a critical role in the development and growth of the industry.

Few electronic devices contain no transistors, and an understanding of the basic properties of these elements is essential for understanding computers, in which they are used as switches. Let us see how a transistor can be used to construct a NOT gate. Consider the following circuit (Fig.). A transistor is a three-connection device, with a source, a drain, and a gate. The central property of the transistor is that if the gate is at a distinctly positive voltage, the transistor conducts between source and drain; if the gate voltage is zero, it does not.

Now look at the behavior of the output voltage as we input a voltage to the gate. If we input a positive voltage, which by convention we label a 1, the transistor conducts and pulls the output down to zero; if we input a 0, it does not conduct and the output sits at the positive supply voltage, a 1. The output is thus the opposite of the input, and we have our NOT gate. What about an AND gate? Due to the nature of the transistor, it actually turns out to be more convenient to use a NAND gate as our starting point for this. Such a gate is easier to make in a MOS environment than an AND gate, and if we can make the former, we can obtain the latter from it by using one of de Morgan's rules: inverting a NAND's output gives us back the AND. So consider the following simple circuit (Fig.).

In order for the output voltage to be zero here, we need to have current flow through both A and B, which we can clearly only achieve if both A and B are positive. The output is therefore zero only when both inputs are 1; follow this with a NOT, and the resultant output is our AND. What about an OR gate? Well, we have seen how to make an OR from ANDs and NOTs, and we could proceed this way if we wished, combining the transistor circuits above; however, an easier option, both conceptually and in practice, is to place the two transistors in parallel rather than in series. If either A or B is positive, or both are positive, current flows and the output is zero.

So again, we have the opposite of what we want: a NOR rather than an OR. All we do now is send our output through a NOT, and all is well. Hopefully this has convinced you that we can make electrical circuits which function as do the basic gates.
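Here is a small Python sketch of this chain of reasoning, with the transistor circuits just described treated as black-box functions (the function names are mine): NOT, NAND and NOR come "for free", and AND and OR are obtained by tacking a NOT onto the corresponding output.

    def t_not(a):        # single transistor with a pull-up load
        return 1 - a

    def t_nand(a, b):    # two transistors in series: output pulled low only if both conduct
        return 1 - (a & b)

    def t_nor(a, b):     # two transistors in parallel: output pulled low if either conducts
        return 1 - (a | b)

    def AND(a, b):       # NAND followed by a NOT, as in the text
        return t_not(t_nand(a, b))

    def OR(a, b):        # NOR followed by a NOT
        return t_not(t_nor(a, b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, AND(a, b), OR(a, b))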

We are now going to go back up a level and look at some more elaborate devices that we can build from our basic building blocks. The first device that we shall look at is called a "binary decoder". It works like this. Suppose we have four wires, A, B, C, D coming into the device.

These wires could bring in any input. However, if the signals on the wires form one specific pattern, we want to know about it. It is as if we have some demon scanning the four bits coming into the decoder and, if they turn out to be that special pattern, he sends us a signal!

This is easy to arrange using a modified AND gate (and much cheaper than hiring a demon). The following device (Fig.), an AND gate with NOTs placed on the wires that should carry 0s, does the job. This is a very special type of decoder. Suppose we want a more general one, with lots of demons, each looking for their own particular number amidst the many possible input combinations. Such a decoder is easy to make by connecting individual decoders in parallel. A full decoder is one that will decode every possible input number.

Let us see how this works with a three-to-eight binary decoder. With three input wires there are 2³ = 8 possible input patterns. We therefore have eight output wires, and we want to build a gate that will assign each input combination to a distinct output line, giving a 1 on just one of these eight wires, so that we can tell at a glance what input was fed into the decoder.


We can organize the decoder as follows (Fig.). We have introduced the pictorial convention that three dots on a horizontal line implies a triple AND gate (see the discussion surrounding Figure 2.). As we have arranged things, only the bottom four wires can go off if A is one, and the top four if A is zero.

Thus, we have explicitly constructed a three-to-eight binary decoder.
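The construction can be mimicked behaviorally in a few lines of Python (an illustrative sketch of mine, not the book's own notation): each output line is an AND gate, with NOTs on the inputs that its pattern expects to be 0.

    def NOT(x):
        return 1 - x

    def AND(*inputs):                               # the multi-input AND of the figure
        out = 1
        for x in inputs:
            out &= x
        return out

    def decoder_3_to_8(a, b, c):
        """Eight output lines; exactly one carries a 1 for any given input pattern."""
        lines = []
        for n in range(8):
            wanted = (n >> 2 & 1, n >> 1 & 1, n & 1)    # the pattern this line watches for
            terms = [x if w else NOT(x) for x, w in zip((a, b, c), wanted)]
            lines.append(AND(*terms))
        return lines

    print(decoder_3_to_8(1, 0, 1))                  # line 5 fires: [0, 0, 0, 0, 0, 1, 0, 0]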

Now, there is a profound use to which we can put the device in Fig. Suppose we feed 1s from the left into all of the horizontal input wires of the decoder. Now interpret each dot on an intersection as a two-way AND.

Similarly for B and C. So we still have a binary decoder; nothing has changed in this regard. However, we have also invented something else, which a little thought should show you is indispensable in a functioning computer: the original input lines of the decoder, A, B, C, now serve as "address" lines to select which output wire gives a signal (which may be 1 or 0).

This is very close to something called a "multiplexer". In our example, we can make our device into a true multiplexer by adding an eight-way OR gate to the eight output lines (Fig.). This rather neat composite device clearly selects which of the eight input lines on the left is transmitted, using the 3-bit address code.
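Behaviorally, the decoder-plus-OR combination amounts to the little Python sketch below (again mine, and purely illustrative): the address picks out one decoder line, each input is ANDed with its line, and an eight-way OR collects the result.

    def mux8(inputs, a, b, c):
        """Select one of eight input lines using a 3-bit address (a, b, c)."""
        assert len(inputs) == 8
        address = 4 * a + 2 * b + c
        selected = []
        for n in range(8):
            hit = 1 if n == address else 0        # this plays the role of the decoder line
            selected.append(inputs[n] & hit)      # AND each input line with its decoder line
        out = 0
        for s in selected:                        # the eight-way OR on the outputs
            out |= s
        return out

    data = [0, 1, 1, 0, 1, 0, 0, 1]
    print(mux8(data, 1, 0, 1))    # address 101 = 5, so this prints data[5], i.e. 0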

Multiplexers are used in computers to read and write into memory, and for a whole host of other tasks.

Problem 2. Design an 8-to-3 encoder; in other words, solve the reverse problem to that considered earlier.

Problem 2. Make an r-bit full adder using r 1-bit full adders.

We stated earlier, without proof, that the combinational circuits for AND and NOT are sufficient building blocks to realize any switching function.

These are the "fanout" and "exchange" operations Fig. If, on the other hand, the information were carried by pebbles, then a fanout into two means that one pebble has become two, so it is quite a special operation.

Similarly, if the information were stored in separate boxes in distinct locations, exchanging it would involve real physical motion. We are emphasizing the logical operations themselves, independently of how they happen to be realized physically. The other thing we will assume we have is an endless supply of 0s and 1s: a store somewhere into which we can stick wires and get signals for as long as we want. This can have unforeseen uses.

We want to discuss a rather different problem, which will enable us to look at some rather more exotic logic gates: the problem that the standard gates we have met so far are not reversible. By this I mean simply that from the output of a gate you cannot reconstruct the input; information is lost. If the output of an AND gate with four inputs is zero, it could have resulted from any one of fifteen input sets, and you have no idea which.

We would like to introduce the concept of a reversible operation as one with enough information in the output to enable you to deduce the input. It will make it possible for us to make calculations about the free energy - or, if you like, the physical efficiency - of computation. The problem of reversible computers has been studied independently by Bennett and Fredkin. Our basic constructs will be three gates: the NOT (N), the CONTROLLED NOT (CN), and the CONTROLLED CONTROLLED NOT (CCN). Let us explain what these are, starting with the CN gate. It works in the following way.

We have two wires, on one of which we write a circle, representing a control, and on the other a cross (Fig.). The "X" denotes a NOT operation, but one that is carried out only conditionally. Specifically, if the input to the O-wire is 1, then the signal on the X-wire is inverted; if the O-input is zero, the NOT does not act, and the signal on the X-wire passes through unchanged. The O-output, however, is always the same as the O-input: the upper line is the identity. The truth table for this gate is simple enough:

A B | A' B'
0 0 | 0 0
0 1 | 0 1
1 0 | 1 1
1 1 | 1 0

One of the most important properties of this CN gate is that it is reversible: from what comes out we can deduce what went in. Notice that we can actually reverse the operation of the gate by merely repeating it: applying the gate twice flips the X-wire twice (or not at all), so the original inputs are restored. We can use a CN gate to build a fanout circuit: if we put a 0 on the X-wire, both outputs carry a copy of the control signal.
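A quick Python sketch (mine) of the CN gate's behavior makes these properties easy to check: applying the gate twice gives back the original inputs, and putting a 0 on the X-wire copies the control signal onto both outputs.

    def cn(o, x):
        """CONTROLLED NOT: the o-wire passes straight through; the x-wire is flipped when o is 1."""
        return o, x ^ o

    for a in (0, 1):
        for b in (0, 1):
            assert cn(*cn(a, b)) == (a, b)     # repeating the gate undoes it: it is reversible
        assert cn(a, 0) == (a, a)              # fanout: a 0 on the x-wire yields two copies of a

    print("CN is reversible and can be used as a fanout")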

As an exercise, you might like to show how CN gates can be connected up to make an exchange operator (Hint: three will do it). Sadly, we cannot do everything with just N and CN gates.

The gate we need is the CONTROLLED CONTROLLED NOT (CCN), which has two control lines and one X-line (Fig.): the NOT is applied to the X-wire only when both controls carry a 1, and the control lines themselves always pass through unchanged. Notice that this single gate is very powerful: if we hold the X-input at 0, its output is just the AND of the two controls. But things are even better: fixing one control at 1 turns the CCN into a CN, and we have already seen that a CN with a 1 on its control acts as a NOT. So clearly, we can generate any gate we like with just a CCN gate. The next thing we must do is show that we can do something useful with only these reversible operations. This is not difficult, as we have just shown that we can do anything with them that we can do with a complete operator set!
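The same claims can be checked with a few lines of Python (an illustrative sketch under my own naming): the CCN flips its third wire only when both controls are 1, and AND, NOT and CN all drop out of it by pinning inputs.

    def ccn(a, b, x):
        """CONTROLLED CONTROLLED NOT: flip x only when both control wires carry a 1."""
        return a, b, x ^ (a & b)

    def AND(a, b):                 # hold the x-input at 0: its output is a AND b
        return ccn(a, b, 0)[2]

    def NOT(x):                    # hold both controls at 1
        return ccn(1, 1, x)[2]

    def cn(a, x):                  # hold one control at 1 to recover the CN gate
        _, _, out = ccn(1, a, x)
        return a, out

    for a in (0, 1):
        assert NOT(a) == 1 - a
        for b in (0, 1):
            assert AND(a, b) == (a & b)
            assert cn(a, b) == (a, a ^ b)
    print("AND, NOT and CN all recovered from the CCN gate alone")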

However, we would like whatever we build to be itself reversible. Consider the problem of making a full adder. We need to add A, B and C and obtain the sum and carry. Now as it stands, this operation is not reversible: one cannot, in general, reconstruct the three inputs from the sum and carry. We have decided that we want to have a reversible adder, so we need more information at the output than at present. It is a worthwhile exercise to work this out in detail.

Fredkin added an extra constraint on the outputs and inputs of the gates he considered. He demanded that not only must a gate be reversible, but the number of 1s and 0s should never change.

There is no good reason for this, but he did it anyway. The gate he came up with is a controlled exchange (Fig.); in his honor, we will call this a Fredkin gate.

You should be used to the notion of control lines by now; they just activate a more conventional operation on other inputs. In this case, the operation is exchange. Fredkin's gate works like this: if the control line carries a 1, the signals on the other two lines are exchanged; if it carries a 0, they pass through unchanged. You can check that the number of 1s and 0s is conserved. As a further, and more demanding, exercise, you can try to show how this Fredkin gate can be used (perhaps surprisingly) to perform all logical operations, instead of using the CCN gate.
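Here is a short Python sketch (mine, and deliberately stopping short of the exercise just posed) that defines the controlled exchange and verifies the two properties stated: the count of 1s and 0s is preserved, and the gate undoes itself.

    def fredkin(c, a, b):
        """Controlled exchange: if the control c is 1, swap a and b; otherwise pass them through."""
        return (c, b, a) if c == 1 else (c, a, b)

    for c in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                out = fredkin(c, a, b)
                assert sum(out) == c + a + b          # the number of 1s (and 0s) is conserved
                assert fredkin(*out) == (c, a, b)     # applying the gate twice restores the inputs
    print("the Fredkin gate conserves 1s and is reversible")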


I have introduced you to the notion of reversible gates so that you can see that there is more to the subject of computer logic than just the standard AND, NOT and OR gates. We will return to these gates in chapter five. Now, I've been very happy to say that with a so-called "complete set" of operators you can do anything, that is, build any logical function. The problem I would like to address is how we can know that this set is complete.

Suppose we have a set of n binary inputs, X1, X2, ..., Xn, from which we want to compute some set of binary outputs Y1, ..., Ym, where m is not necessarily equal to n. What we want to demonstrate is that for any such set of functions Fi we can build a circuit to perform them on the inputs using just our basic set of gates.

Let us look at a particular example, namely, computing the sum of the input wires, that is, the number of them carrying a 1. We can see how in principle we can do this as follows. In our binary decoder, we had n input wires and 2ⁿ output wires, and we arranged for a particular output wire to fire by using a bunch of AND gates.

This time we want to arrange for that output to give rise to a specific signal on another set of output wires. In particular, we can then arrange for the signals on the output wires to be the binary number corresponding to the value of the sum of the particular input pattern.

Let us suppose that for a particular input set of Xs we have selected one wire: one wire only is "hot", and all the others "cold". This is the opposite problem to the decoder. What we need now is an encoder. So you see, we have separated the problem into two parts. The first part, which we looked at before, was how to arrange for different input patterns to select different wires; the answer was our decoder. Our encoder must have a lot of input wires, but only one goes off at a time, and we want to be able to write the appropriate binary number on its output wires.

A three-bit encoder may be built from OR gates as follows (Fig.); you may find this useful if you attempted the problems above. Some of the logical functions we could construct in this way are so simple that, using Boolean algebra, we can simplify the design and use fewer gates.
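To tie the two halves together, here is a Python sketch (illustrative only; the wiring and names are my own) of the decoder-plus-encoder recipe applied to the earlier example, the sum of three input wires: the decoder picks out the one "hot" line, and OR gates assemble the two output bits of the count.

    def minterm(x1, x2, x3, pattern):
        """One decoder line: fires only when the inputs match its own pattern."""
        wanted = (pattern >> 2 & 1, pattern >> 1 & 1, pattern & 1)
        out = 1
        for x, w in zip((x1, x2, x3), wanted):
            out &= x if w else 1 - x
        return out

    def sum_of_inputs(x1, x2, x3):
        """Count the 1s on three input wires; return the count as two binary digits (hi, lo)."""
        lines = [minterm(x1, x2, x3, p) for p in range(8)]     # the decoder: one hot line
        hi = lo = 0
        for p, line in enumerate(lines):                       # the encoder: OR gates
            count = bin(p).count("1")                          # what this line's pattern adds up to
            if count & 2:
                hi |= line
            if count & 1:
                lo |= line
        return hi, lo

    print(sum_of_inputs(1, 1, 0))    # two 1s -> (1, 0), i.e. binary 10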

In the past, people used to invest much effort in finding the simplest or smallest system of gates for a particular logical function. However, the approach described here is so simple and general that it does not need an expert in logic to design it!

Moreover, it is also a standard type of layout that can easily be laid out in silicon. Such layouts are often used to produce custom-made chips for which relatively few copies are needed: the customer only has to specify which ANDs and which ORs are connected to get the desired functionality. For mass-produced chips it is worthwhile investing the extra effort to do the layout more efficiently.

Now I want to come onto something different, which is not only central to the functioning of useful computers, but should also be fun to look at. We start with a simple question: can we store information? That is, can we build a computer's memory from the gates and tidbits we've assembled so far? A useful memory store will allow us to alter what we store; to erase and rewrite the contents of a memory location. Let's look at the simplest possible memory store, one which holds just one bit (a 1 or 0), and see how we might tinker with it.

As a reasonable first guess at building a workable memory device, consider the following black box arrangement:.


We take the signal on line C to represent what is in our memory, and the input A to be a control line. As long as A is 0, C is supposed to stay as it is; however, if we switch A to 1, then we change C, flipping it to the opposite value. We can write a kind of "truth table" for this:

A C | new C
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

Will this work? Well, it all depends on the timing! We have to interrupt our abstract algebra and take note of the limitations on devices imposed by the physical world. Let's suppose that A is 0 and C is 1. Then everything is stable. Now change the input A to 1.

What happens? C changes to 0, by definition, which is what we want. But the new value of C is fed back into the gate, which now sees A = 1 and C = 0 and promptly flips the output back to 1; we then start all over again, and the output oscillates. However, if you think about it, you can see that we can salvage the gate somewhat by building in delays to the various stages of its operation; for example, we can make the XOR take a certain amount of time to produce its output. However, we cannot stop it oscillating. Clearly, the crucial troublesome feature in this device is the element of feedback. Can we not just dispense with it?

The answer is yes, but this would be at quite a cost. For reasons of economy and space, one thing we would like our computer to be able to do is repeated calculations with the same pieces of operating equipment. For example, if we used a certain adder to do part of a calculation, we would like to use the same adder to do another, subsequent part of the calculation, which might involve using its earlier output.

We would not want to keep piling more and more adders into our architecture. So we will want to crack this problem! What we want is a circuit that can hold a value, 0 or 1, until we decide to reset it with a signal on a wire.

The circuit that turns out to do the job for us is called a flip-flop, schematically drawn as shown in Figure 2. It has two inputs, S ("set") and R ("reset"), and two outputs, conventionally labeled Q and Q̄. This latter labeling reflects the fact that one is always the logical complement - the inverse - of the other.

They are sometimes misleadingly referred to as the 0 and 1 lines; misleading, because each can take either value, as long as the other is its inverse. We can actually use NOR gates, for example, to build a circuit that functions as a flip-flop:

Note that the device incorporates feedback!

Despite this, it is possible to arrange things so that the flip-flop does not oscillate, as happened with our naive XOR store.

It is important that S and R are never simultaneously 1, something which we can arrange the architecture of our machine to ensure. How does this help us with memory storage? The signal on the Q-line is interpreted as the contents of the flip-flop, and this stays the same whenever S and R are both 0. Let us first consider the case when the reset line, R, carries no signal. Putting a 1 on S then forces Q to 1, and it stays at 1 even when S drops back to 0. In other words, the S-line sets the contents of the flip-flop to 1, but subsequently manipulating S does nothing; if the flip-flop is already at 1, it will stay that way even if we switch S.

Now look at the effect of the reset line, R: putting a 1 on R forces Q back to 0, whatever it was before. So the R line clears the contents of the flip-flop. This is pretty confusing upon first exposure, and I would recommend that you study this set-up until you understand it fully. We will now examine how we can use this flip-flop to solve our timing problems. We have now designed a device - a flip-flop - which incorporates feedback, and doesn't suffer from the oscillations of naive structures.
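The behavior just described can be imitated by letting two cross-coupled NOR gates settle, as in this Python sketch (a simplified model of mine; real circuits settle in continuous time, and it assumes S and R are never both 1).

    def nor(a, b):
        return 1 - (a | b)

    def settle(s, r, q, qbar):
        """Let the two cross-coupled NOR gates settle into a stable state."""
        for _ in range(4):                     # a few passes suffice when S and R are not both 1
            q_new    = nor(r, qbar)
            qbar_new = nor(s, q_new)
            if (q_new, qbar_new) == (q, qbar):
                break
            q, qbar = q_new, qbar_new
        return q, qbar

    q, qbar = 0, 1                                        # flip-flop starts out holding a 0
    q, qbar = settle(1, 0, q, qbar);  print(q, qbar)      # set:   1 0
    q, qbar = settle(0, 0, q, qbar);  print(q, qbar)      # hold:  1 0
    q, qbar = settle(0, 1, q, qbar);  print(q, qbar)      # reset: 0 1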

However, there is a subtle and interesting problem concerning this gadget. As I pointed out in the last lecture, the signals traveling between the various components take differing times to arrive and be processed, and sometimes the physical volatility of the components you use to build your equipment will give you, in addition, freaky variations in these times which you would not be able to predict. This means that often you will find signals arriving at gates later than they are supposed to, and doing the wrong job!

We have to be aware of the possible effects of this. For the flip-flop, for example, what would happen if both the outputs turned out to be the same? We have assumed, as an idealization, that they would be complementary, but things can go wrong! You can see that if this happens, then the whole business of the set and reset would go out the window.

The way around this is to introduce into the system a clock, and have this send an "enable" signal to the flip-flop at regular intervals. We then contrive to have the flip-flop do nothing until it receives a clock signal. These signals are spaced far enough apart to allow everything to settle down before operations are executed.

We implement this idea by placing an AND gate on each input wire, and also feeding the clock signal into each of these gates (Fig.). This is sometimes called a transparent latch, since all the time the clock is asserted, any change of input is transmitted through the device.

But of course, we have created another problem! We have merely deferred the difficulty: we still have delays, and the whole machine must now be engineered around them. It can be done, and the resultant system is fast and efficient, but it's also very expensive and difficult to design.

Don't forget that in all this we are using the abstractions that (1) all levels are 0 or 1 (not true: they are never exactly one or zero, but they are near saturation), and (2) there is a definite, uniform delay time between pulses (again, not exactly true in practice). It is possible to design a variety of flip-flop devices, and learning how and why they work is a valuable exercise. One such device is the D-type flip-flop, which has the structure shown in Figure 2.

It is unclear why this device is labeled a "D-type" flip-flop. One plausible suggestion is that the "D" derives from the "delaying" property of the device: its output reproduces its input, delayed by one clock cycle. A very useful device that may be built from flip-flops, and one which we shall take the trouble to examine, is a shift register.

This is a device which can, amongst other things, store arbitrary binary numbers - sequences of bits - rather than just one bit. It comprises a number of flip-flops, connected sequentially, into which we feed our binary number one bit at a time. We will just use our basic S-R's, with a delay built in.

The basic structure of a shift register is as follows (Fig.). Each unit of this register is essentially a stable delay device of the kind I described earlier. We start with the assumption (not necessary, but a simplifying one) that all of the flip-flops are set to zero. Suppose we wish to feed the number 101 into the device.

What will happen? We feed the number in lowest digit first, so we stick a 1 into the left-hand S-R, which I've labeled A, and wait until the clock pulse arrives to get things moving. We now feed the next bit, 0, into A.

Nothing happens until the next clock pulse. After this arrives, the next S-R in the sequence, B, gets a 1 on its output (the original 0 has been reset).

However, the output of A switches to 0, reflecting its earlier input. Meanwhile, we have fed into A the next bit of our number which is 1. Again, we wait for the next clock pulse.

Now we find that A has an output of 1, B of 0 and C of 1 - in other words, reading from left to right, the very number we fed into it!
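The walk-through above can be compressed into a few lines of Python (a purely behavioral sketch; the flip-flop internals are abstracted away): on each clock pulse every stage takes over the bit held by its neighbour to the left, and a new bit enters at A.

    register = [0, 0, 0]                        # stages A, B, C, all starting at zero

    def clock_pulse(register, new_bit):
        """Shift by one place: A receives the new bit, each later stage copies its neighbour."""
        return [new_bit] + register[:-1]

    for bit in (1, 0, 1):                       # the number 101, fed in lowest digit first
        register = clock_pulse(register, bit)
        print(register)                         # [1, 0, 0] then [0, 1, 0] then [1, 0, 1]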

Generalizing to larger binary strings is straightforward (note that each flip-flop can hold just the one bit, so a register containing n flip-flops can only store numbers up to n bits long, 2ⁿ possible values in all). It is not necessary to go any further with them; the reader should be able to see that registers clearly have uses as memory stores for numbers and as shifting devices for binary arithmetical operations, and that they can therefore be built into adders and other circuits.

We now come on to address an issue that is far more fundamental: the question of what can, and cannot, be computed. It is easy to imagine that if we built a big enough computer, then it could compute anything we wanted it to.

Is this true? Or are there some questions that it could never answer for us, however beautifully made it might be?

Ironically, it turns out that all this was discussed long before computers were built! Computer science, in a sense, existed before the computer.

It was a very big topic for logicians and mathematicians in the thirties. There was a lot of ferment in those days about this very question - what can be computed in principle?

Mathematicians were in the habit of playing a particular game, involving setting up mathematical systems of axioms and elements - like those of Euclid, for example - and seeing what they could deduce from them.

An assumption that was routinely made was that any statement you might care to make in one of these mathematical languages could be proved or disproved, in principle. Mathematicians were used to struggling vainly with the proofs of apparently quite simple statements - like Fermat's Last Theorem (a proof long believed impossible to derive; mathematical societies even offered rewards for it!), or Goldbach's Conjecture - but always figured that, sooner or later, some smart guy would come along and figure them out. However, the question eventually arose as to whether such statements, or others, might be inherently unprovable.

The question became acute after the logician Kurt Gödel proved the astonishing result - in "Gödel's Theorem" - that arithmetic was incomplete. The struggle to define what could and could not be proved, and what numbers could be calculated, led to the concept of what I will call an effective procedure.

If you like, an effective procedure is a set of rules telling you, moment by moment, what to do to achieve a particular end; it is an algorithm. Let me give an example.

Suppose you wanted to calculate the exponential function of a number x, eˣ.

There is a very direct way of doing this: use the power series eˣ = 1 + x + x²/2! + x³/3! + ... Plug in the value of x, add up the individual terms, and you have eˣ. As the number of terms you include in your sum increases, the value you have for eˣ gets closer to the actual value. So if the task you have set yourself is to compute eˣ to a certain degree of accuracy, I can tell you how to do it - it might be slow and laborious, and there might be techniques which are more efficient, but we don't care: it is an example of what I call an effective procedure.
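As a small worked example of such an effective procedure (a sketch, with an arbitrary stopping tolerance of my choosing), here is the power-series recipe in Python.

    import math

    def exp_series(x, tolerance=1e-10):
        """Sum e^x = 1 + x + x^2/2! + x^3/3! + ... until the terms become negligible."""
        total, term, n = 0.0, 1.0, 0
        while abs(term) > tolerance:
            total += term
            n += 1
            term *= x / n                 # each term is the previous one times x/n
        return total

    print(exp_series(2.0))                # 7.3890560989...
    print(math.exp(2.0))                  # the library value, for comparison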

Another example of an effective procedure in mathematics is the process of differentiation. It doesn't matter what function of a variable x I choose to give you, if you have learned the basic rules of differential calculus you can differentiate it.

Things might get a little messy, but they are straightforward. This is in contrast to the inverse operation, integration. As you all know, integration is a much trickier business: do we have the derivative of a function divided by the function itself? Is integration by parts the way to go?

Feynman's primary interests are in exploring just how far we can push computers given the laws of physics: how fast can they go, what can or cannot be computed, and how much energy must we use?

Despite the quality of the lectures, this book's finest feature is the exercises. Feynman frequently preaches the "pleasure of discovery" and embeds his lectures with creative, fun, and instructive exercises. In fact, the most memorable lesson I drew from this book was that an hour of thinking and playing with an idea is often worth more than 24 hours of reading about it.

To those who don't see the point of solving problems that were solved decades or centuries before by others, Feynman offers the following wonderful characterization of science (paraphrased): the life of a young scientist is spent rederiving old results, gradually rediscovering more and more recent ideas, until one day, he discovers something that no one else has ever discovered before.

The key point is that without all that practice on "old" problems, it's insanely difficult to develop the skill and confidence to work on "new" ones.

The major weakness of this book is that parts of it are quite dated. Feynman gave his lectures in the early 80s, and even by the time this book was published in 1996, much of the hardware physics was already archaic. However, the parts of the book on theory (the bulk of it) are still quite relevant. Even the dated bits are quite useful for simply getting the flavor of how the laws of physics can be exploited to do useful computations.
