The Logic of Computer Arithmetic


If speed is paramount and cost is no obstacle, it may be achieved by two methods. Decreasing the response time of the building blocks by using more expensive components will fulfill the aim; there is a state-of-the-art limit to this procedure, beyond which no further expense produces a time reduction. Changing the organization of the arithmetic unit can also afford an improvement. It is then necessary to take several simultaneous actions upon the participants in the arithmetic process.

Hence, compound decisions are required which, in turn, impose a more complex organization upon the arithmetic unit. The first method is a matter of improving components and circuits and is a specialty unto itself. Given the building blocks, what alternatives are available for the organization of the arithmetic section, and what are the trade-offs?

Serial organization requires only one full adder (for binary; one digit adder for binary-coded decimal) and so entails less expense than the parallel organization. Detailed discussion of this kind of logic abounds. For given block speeds, the fastest arithmetic is done by parallel organization, not by serial or hybrid serial-parallel organization.
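To make the trade-off concrete, here is a minimal C sketch of the serial organization (the 8-bit width and all names are illustrative assumptions, not the book's): a single full adder is reused once per bit time, so an N-bit sum costs N clock steps, while a parallel organization replicates the same full adder N times, chaining the carries combinationally, and completes in one word time.

```c
#include <stdint.h>
#include <stdio.h>

#define N 8  /* word length in bits (illustrative) */

/* One-bit full adder: the single building block a serial adder reuses. */
static unsigned full_add(unsigned a, unsigned b, unsigned cin, unsigned *cout)
{
    unsigned sum = a ^ b ^ cin;         /* sum bit */
    *cout = (a & b) | (cin & (a ^ b));  /* carry bit */
    return sum;
}

/* Serial organization: one full adder, used once per bit time. */
static uint8_t add_serial(uint8_t x, uint8_t y)
{
    unsigned carry = 0;
    uint8_t sum = 0;
    for (int t = 0; t < N; t++) {       /* N clock steps */
        unsigned s = full_add((x >> t) & 1, (y >> t) & 1, carry, &carry);
        sum |= (uint8_t)(s << t);
    }
    return sum;                         /* carry out of the last bit dropped */
}

int main(void)
{
    printf("%u\n", add_serial(100, 27)); /* prints 127 */
    return 0;
}
```

A ripple-carry parallel adder is simply N copies of full_add with the carry chained through logic rather than through time; faster parallel organizations shorten that carry chain with more complex decisions, which is the subject taken up below.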

By adding more logic for more complex decision and action, speed can be increased. Here, too, there is a limit. We investigate how to improve speed by organization and what the limiting factors are. The most efficient representation of numerical information is binary. Most of the fastest scientific computers use a binary representation.

Therefore, the principles discussed here are in terms of a binary machine. The extension to decimal machines, and even to serial or hybrid machines, should be well within the capability of the reader when he finishes this volume.

Usually, logical design is continued further, to the point where each module is specified, assigned a number, and given a physical location on a physical chassis. This stage would include enumeration of all interconnections between the modules. The term module refers to one or more logical blocks in physical juxtaposition within a small detachable unit.

The contents of a given module depend upon the designer and manufacturer. Some modules contain only one or two logical blocks in a small unit, such as those used by Philco; others have ten, twenty, or more logical blocks, as in some Honeywell machines. Some designers use small modules, placing them on a large "mother" board which is also detachable from the main hardware.

The province of the logical design group differs from one organization to another; this difference occurs in several areas. The first is the extent to which logical design is performed, that is, how much system design is performed by the group and how much of the job of converting the logical description into physical hardware is performed by the group.

Another facet is the amount of detail evolved by the logical design group. Another point of difference among groups is the extent to which Boolean equations are depended upon and the phase of logical design during which they are incorporated. Despite the argument which may have existed between the Eastern and the Western schools of logical design, it is evident that both schools use both an equation approach and a block-diagram approach. It is only the extent and the time factor which distinguish these schools of thought.

Design Philosophy

We now discuss the essential steps in logical design. First these steps are enumerated; then each one in turn is discussed in greater detail.

1. A preliminary operational and functional configuration is constructed as a jumping-off point, with the understanding that it may change radically during future developments.
2. The sequence of events which occurs in the subsystem under inspection is carefully examined, and a preliminary timing plan is evolved.
3. The set of units which are required and the sequence in which they are activated are examined, with emphasis upon the dependency of one upon the other.
4. A set of logic-time equations is generated.
5. The requirements for auxiliary logical devices apart from the main functional units are listed.
6. Boolean equations are written for each and every functional unit not already specified.
7. Hardware equations are derived and the hardware configuration is planned.

The System Configuration

The layout of the subsystem depends most intimately on the system specification. The subsystem does not exist in isolation but, rather, in symbiosis with the system for which it was designed. The subsystem specification is usually generated during or just after the system specification is determined. Such a specification indicates the time and quantity of information to be processed by the subsystem, as well as other details of the process. How is information transmitted to and from this subsystem?

How is the processing of information controlled? If the subsystem control is autonomous, the means by which control is transferred from another subsystem to this subsystem must be specified, as well as the means by which main control is relinquished. Timing is always of importance when units are talking together; the conversation must not only be comprehensible to both units, but it must occur when the computers have time to "listen" to each other's message!

Before further design commences, the algorithms by which processing is performed must be investigated and the proper ones specified. This is obvious for the arithmetic unit. Comparable to algorithms, other subsystems have organizing principles which set forth the interrelation of the functional units. For instance, for the control unit we concern ourselves with the method by which sequential instructions are procured, how indexing is performed, how operands are fetched, how instructions are decoded and delegated, and so forth.

The algorithm or organizational principles must be set forth before the quantity of, and relation among, the functional units can be specified.

The Functional Units

The type and internal organization of the functional units depend upon many factors related to data and control:

1. What is the format of the data as it enters and leaves the subsystem and during processing by the subsystem?
2. Is serial, parallel, or a combination of the two the method by which data is handled?
3. What kind of processing is required at the various stages?
4. How is the timing of the functional unit initiated, maintained, and terminated?
5. What means is used to control the processing and the flow of information among the functional units?

The design experience of the logical design group undoubtedly affects to a large extent the choice of the functional units used by that group in their design. Once the functional units have been chosen, they must be interrelated. The interrelation is also affected by other considerations, such as:

1. Other system specifications.
2. The interfaces.
3. The ability and past history of the design group.

The Event Sequence

The sequence in which events take place is primarily determined by the algorithm or organizational principle.

At this stage of the game, we should be able to determine how data flows among the various functional units. Usually, data is held in registers (temporary storage devices); therefore these are the units with which we will be most concerned. However, we wish to consider when the other functional units will be occupied. Events will take place in two realms: time and space. Thus, we associate, with each event, a place or functional unit and a time in the subsystem history at which it is taking place.

Frequently, events will take place conditionally, in the sense that their nature is determined by other events which may have preceded them. Thus, in multiplication an addition is performed upon the sum-so-far if a 1 appears as the multiplier bit; another action, such as a shift, takes place if this bit is a 0 instead. A list of events and alternative actions is necessary. However, a visual aid, a flow diagram of the activities, usually clarifies the designer's understanding of the subsystem requirements. Such flow diagrams are used in explaining arithmetic and control in many introductory texts, including Computer Logic; this device will be resorted to in the future when the complexity of the system structure is such as to warrant it.
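The conditional event sequence just described can be sketched in a few lines of C (the 8-bit operand width and the names are illustrative assumptions; an arithmetic unit does this with registers and shift gating rather than a software loop):

```c
#include <stdint.h>
#include <stdio.h>

/* Shift-and-add multiplication: for each multiplier bit, an addition to
 * the sum-so-far is performed if the bit is a 1; only the shift takes
 * place if the bit is a 0. */
static uint16_t shift_add_multiply(uint8_t multiplicand, uint8_t multiplier)
{
    uint16_t sum_so_far = 0;
    uint16_t addend = multiplicand;   /* multiplicand, shifted each bit time */
    for (int i = 0; i < 8; i++) {
        if ((multiplier >> i) & 1)
            sum_so_far += addend;     /* conditional event: add */
        addend <<= 1;                 /* unconditional event: shift */
    }
    return sum_so_far;
}

int main(void)
{
    printf("%u\n", shift_add_multiply(13, 11)); /* prints 143 */
    return 0;
}
```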

Logic-Time Equations

One policy we might adopt is to express time with respect to the initiation of the subsystem. Usually the start pulse, ST, is the handy little reference. The inputs upon which a given functional unit depends are then of several kinds:

1. Data entering the given unit: whether a bit is 0 or 1.
2. Control equations: information in the form of control voltages from one or more control units.
3. Time, as indicated above.
4. Auxiliary settings: data or control information is frequently distilled into the setting of specific auxiliary devices which are referred to in controlling given functional units.
5. Algorithm in progress: this is necessary when a unit performs one or more procedures.

A given functional unit relates these inputs in a way characteristic to it and thus produces an output. This output can be expressed as a Boolean equation involving time and the specific inputs; a small illustration follows.
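For instance, a hypothetical logic-time equation (every name below is invented for illustration, not taken from the book) might read A_set = D · MUL · t3: set the register bit A when the data bit D is 1, the control level MUL from the control unit is up, and the timing chain, counted from the start pulse ST, is at step t3. In C:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical logic-time equation  A_set = D * MUL * t3. */
static bool a_set(bool D, bool MUL, bool t3)
{
    return D && MUL && t3;
}

int main(void)
{
    printf("%d\n", a_set(true, true, true));  /* 1: all conditions up */
    printf("%d\n", a_set(true, true, false)); /* 0: wrong time step   */
    return 0;
}
```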

Auxiliary Units

As touched on above, auxiliary storage units may distill a large amount of information into a single setting. A unit may refer to control information, intermediate results, and a time factor, and compress these into one or a few settings. Similar to the functional unit, the auxiliary unit has an equation which specifies its input in terms of data, control, and time. When such units are bistable devices, specifying the two inputs indicates when each of the two mutually exclusive states prevails. In some cases such auxiliary devices may be time-shared; that is, they may be used to store several auxiliary functions at different times within the operating sequence of the subsystem.

Boolean Equations

Each functional unit has a Boolean equation which describes its characteristic.

The logical designer working on a complicated subsystem deals most frequently with functional units which are already fixed and hence have specific Boolean equations. These units may be described either by their function or by their equation. In addition, he must make up some special-purpose functional units; in that case, a description in Boolean equation form is usually preferable. Such an equation, together with a Karnaugh map, provides simplification not immediately apparent in most other forms.

The logic-time equations are occasionally amenable to simplification after they have been put into the Boolean form.

Hardware Phase

Once the subsystem has been specified, as described in the previous subsections, we are ready to pass into what frequently constitutes a separate phase. This phase converts paper specifying a system into paper specifying a machine layout. This is done in several subphases. The Boolean equations and logic-time equations have to be converted into available logic-block form.

This conversion process must heed the fan-in and fan-out precautions relative to each logic block. The fan-in is the maximum number of circuits which may drive a given block; the fan-out is the maximum number of circuits which the given block may drive. The logic thus generated may require the addition of nonlogical elements. Thus, when a logic block is required to drive more than its allowable fan-out, this may be achieved by inserting an amplifier between this block and the blocks it drives. After this conversion, another look should be taken to see if simplifications can be made to reduce the design in its new form.
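A toy numeric illustration of the fan-out precaution (the limit of four and the single level of buffering are invented for the example; real limits come from the circuit family in use):

```c
#include <stdio.h>

#define FAN_OUT_MAX 4  /* assumed per-block drive limit (illustrative) */

/* How many amplifiers must be inserted so that a block driving `loads`
 * circuits never exceeds its fan-out? Each amplifier itself drives up to
 * FAN_OUT_MAX loads, and the original block then drives only the
 * amplifiers. */
static int amplifiers_needed(int loads)
{
    if (loads <= FAN_OUT_MAX)
        return 0;
    return (loads + FAN_OUT_MAX - 1) / FAN_OUT_MAX;
}

int main(void)
{
    printf("%d\n", amplifiers_needed(10)); /* 3 amplifiers cover 10 loads */
    return 0;
}
```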

Now we must convert from blocks to modules. A module comprises several similar or dissimilar blocks. The blocks must be assigned to specific modules, with an allotment for the spares which inevitably are required. Along with the module assignment goes the module arrangement. The geometric arrangement of the modules on the backboard is of crucial importance when we get into the area of very high-speed computers. The difference in delay incurred by transmitting some pulses over a few feet of wire while others travel only a few inches may be enough to jeopardize the proper functioning of the subsystem.

Some work has been done on this topological problem, but much more should be done in the future to develop an effective science of module placement. The final step in the hardware phase is the pin assignment for each module and the wiring layout which indicates where each wire of the computer begins and ends and how long it should be. From this information, cable layouts can be made and the wiring done.

However, the committee that suggested these symbols for adoption did not prove successful, and at present there are still no universally accepted standards. For consistency, we will continue to use the symbols presented in the first volume. The total set of symbols used here is displayed in Figure 1. [Figure 1: the symbol set, including the D-symbols for AB and A∨B, the astable multivibrator, the shaper S, and the multi (flip-flop, bistable multivibrator).] Inputs to the D-symbol are always on the flat side; the output is taken from the semicircular side. The multiple-input V-mixer is indicated as in Figure 1.

Inversion is indicated by a small circle at the termination of an input line or at the start of an output line. Examples are found in Figures 1. The inhibit function is indicated as shown in Figure 1. Notice that for the D-symbols no directional arrows are required since signals enter the D-symbol only at the diameter and leave only at the side opposite.

The Boolean connective or is symbolized by "vel" (the inclusive or), ∨. It is defined by a truth table where A ∨ B is true for all entries except when A is false and B is false. The overbar is used to deny a variable (the not function); thus, Ā is true only when A is false.
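A few lines of C reproduce the truth tables just defined (purely illustrative; the bitwise | and logical ! operators stand in for ∨ and the overbar):

```c
#include <stdio.h>

/* Truth tables for A v B (false only when both are false) and not-A. */
int main(void)
{
    for (int A = 0; A <= 1; A++)
        for (int B = 0; B <= 1; B++)
            printf("A=%d B=%d  AvB=%d  notA=%d\n", A, B, A | B, !A);
    return 0;
}
```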

In dealing with multiple logical devices, so common in parallel processing, we have adopted the pipeline notation. First consider a number of parallel signals, indicated by the use of bold type, as A. If all of these are gated by a single signal B, the logical symbol and its equivalent are indicated in Figure 1. When multiple signals gate multiple signals, this is indicated as in Figure 1. Finally, multiple signals can be mixed in a similar fashion, as shown in Figure 1.

Bit Storage

Devices which store single bits of information are called bit storage devices. They are also commonly referred to as flip-flops, bistable multivibrators, or simply multis. My symbol for the bit storage device is found in Figure 1.

Note several aspects of this symbol. It may seem confusing at first that no arrows are shown. Since the symbol is not used in a vacuum, however, signals entering the device will have their direction indicated by their source; those leaving the device will have their direction indicated by their termination. Only when the source or destination lies off the paper are arrowheads necessary. For instance, when the line entering the bit storage device at the 1-input contains a pulse signal, it will cause the device to change state or remain in its present state, depending on its present condition (and provided that the signal is of the proper amplitude and duration, this always being assumed here).

We attach the output wire corresponding to the 1-state to the terminal of the device which yields the signal polarity that we desire. This requires that the other terminal be of the opposite polarity. It also requires that a signal which resets the device to its 0-state will cause the 1-output to become reversed. The uni (single-shot or delay flip-flop) is shown in Figure 1. This device can be set to the 1-state by an incoming signal.

After a fixed period of time, it resets itself to the alternate state. Arbitrarily, we say that the signal sets it to the 1-state and then it resets itself to 0. Therefore, all inputs to the device enter the 1 half-box; the 0 half-box has an X at its input to indicate that no input is required here—that the uni resets without intervention. Where a time constant is explicitly required, it is placed close to the box to which it applies.

The astable multivibrator, or pulse generator, is a flip-flop which is self-triggering. It is indicated in Figure 1.

Shaper

The shaper is attached to a bit storage device to recover a pulse when the bit storage device switches to a given state (Figure 1).

When the bit storage device is set to 1 by a pulse, there is no output from the shaper, S; when RS is reset to 0, a pulse is emitted from S whose front corresponds to the time at which RS assumes the 0-state.
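Behaviorally, then, the shaper is a falling-edge detector on RS. A minimal C model (the sampled time steps and the names are illustrative assumptions):

```c
#include <stdbool.h>
#include <stdio.h>

/* Shaper S on bit storage RS: emit a pulse only on the 1 -> 0 transition,
 * i.e., at the moment RS assumes the 0-state. */
static bool shaper(bool rs_now, bool *rs_prev)
{
    bool pulse = (*rs_prev && !rs_now); /* pulse front at the 1 -> 0 edge */
    *rs_prev = rs_now;
    return pulse;
}

int main(void)
{
    bool prev = false;
    bool trace[] = {0, 1, 1, 0, 1, 0};  /* sampled RS states */
    for (int i = 0; i < 6; i++)
        printf("%d", shaper(trace[i], &prev));
    printf("\n");                       /* prints 000101 */
    return 0;
}
```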


Delay

The symbol for delay is shown in Figure 1. Any signal at the input (the left-hand side of the symbol as shown) appears at the output in exactly the same form, but delayed a fixed length of time according to the parameter associated with the delay symbol. In my approach to logical design, I have omitted all references to amplifiers and like circuits.

Thus it is assumed that the designer will compensate for any attenuation and distortion presented by the delay by incorporating the required circuitry, although the symbols for such additional circuits are omitted throughout. The directed pipelines indicate inputs and outputs, both series and parallel, whose direction is determined by the associated arrowheads. A single line which enters the box and has attached an arrowhead within the box indicates that this is the means by which the register is set to the incoming signal.

It is unnecessary to dwell on the register symbol since we will elaborate upon it to a large extent in Section 4.

Well, this chapter is devoted to arithmetic in various notations. Although the notations do not have to be learned by heart, the rules that apply to them have to be understood in depth. The chapters which follow develop the methods by which the arithmetic units of various extant computers perform arithmetic.

These methods, numerical manipulations, founded upon mathematical justification and realized in hardware, are called algorithms. In the first section of this chapter, the three main representations for signed numbers are explained. A design group must decide which of these representations is best suited to the computer under consideration. There is no immediate clear-cut choice; there are advantages and disadvantages to each representation; each is used as the basis for at least one machine.

Actually, Webster's present definition is even broader and, referring to a treatise by al-Khwarizmi, applies to the study of decimal arithmetic. On the contrary, often the choice is made solely on the basis of the compatibility of the machine being designed with the previous product of the manufacturer.

To contrast the representations of signed numbers requires an understanding of how arithmetic is done in each, and this is presented in this and the next chapter. Although three representations are presented here, for brevity the chapters on high-speed arithmetic are based entirely on 2's complement notation. It will best serve the reader's need to comprehend the workings of the representations presenting the most hardware problems; then he should certainly understand the others! The tools, in the form of comprehension and use of the different representations, are furnished here to enable the reader to complete a design using any of them.

Besides developing a comprehension of available representations, the reader may gain two things more from these chapters:

1. A reference for later chapters. For instance, in discussing high-speed multiplication of two negative 2's complement numbers using Method 2, we will refer back to Figure 3.
2. Practice in thinking in binary.

Following each example, although a chore, is amazingly helpful in that respect. The first few examples will demand intensive concentration but, near the close of the chapter, things fall into place, and soon the problems can be worked without referring to the book. Since Chapters 15 and 16 are devoted to floating-point arithmetic, the expansion of binary notation to floating-point numbers is deferred until then. Suffice it to say that the conventions established here are applicable and need only be extended.

In the sign and magnitude notation, numbers which have the same absolute value are represented identically except for the sign position; in the complement notations this is not so. Therefore, it is appropriate to study how arithmetic is performed using these other notations. Also, since an understanding of these representations is basic to the comprehension of how computers perform fast arithmetic, it must be acquired before we discuss the logic of high-speed arithmetic.

Binary Point

The binary point is implicit in the manipulation of numbers in most modern computers.


It is the same as the decimal point in everyday notation, except that it applies to binary numbers. That is, it separates a number into a whole part and a fraction; the whole part is to the left of the binary point and the fraction is to the right. The conventions in many modern computers, as we will see later, require that all numbers handled internally by the computer be less than unity; therefore, these numbers are proper binary fractions.

Hence, we might expect that the position to the left of the binary point would always contain 0. However, this position is used to indicate the sign of the number and therefore may contain 0 or 1, according to the sign of the number represented. Customarily, a 0 indicates a positive number and a 1 a negative number. Using these conventions, all numbers consist of a sign bit at the left, followed (proceeding to the right) by a binary point and then a number of bits which determine the magnitude of the number. One objection to the binary point notation is that, in reality, the computer can only manipulate truth values, or binary information.

Since this binary information is limited in extent, it can be said to correspond to a limited range of integers: a subset of the numbers of counting, with a sign associated. Consequently, the binary point representation might easily make fractions seem to be acceptable numbers when only a small range of fractions is acceptable to the computer. However, there are advantages which might outweigh the disadvantages mentioned above: this notation permits one to keep track of numbers easily during discussion; it facilitates scaling during programming; and it provides a left-hand-oriented system, which is of advantage in high-speed division, for instance.

Let us adopt the binary point notation; from here on we will consider that all numbers are less than one, that the sign indication is at the left-hand bit of the computer word, and that the binary point is just to the right of the sign bit. A negative number has a 1 in the sign position, and a positive number contains a 0 in the sign position.
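Under this convention a word reads as a sign bit followed by a binary fraction. A small C sketch of the sign and magnitude interpretation (the ten-bit magnitude matches the problems later in the chapter; packing the word into a uint16_t is my own framing, not the book's):

```c
#include <stdint.h>
#include <stdio.h>

#define NBITS 10  /* magnitude bits to the right of the binary point */

/* Sign and magnitude value of an (NBITS+1)-bit word: sign bit at the
 * left, binary point just to its right, value = +/- mag / 2^NBITS. */
static double sm_value(uint16_t word)
{
    unsigned sign = (word >> NBITS) & 1;        /* bit left of the point */
    unsigned mag  = word & ((1u << NBITS) - 1); /* fraction bits */
    double frac   = (double)mag / (1u << NBITS);
    return sign ? -frac : frac;
}

int main(void)
{
    printf("%g\n", sm_value(0x200)); /* 0.1000000000 ->  0.5 */
    printf("%g\n", sm_value(0x600)); /* 1.1000000000 -> -0.5 */
    return 0;
}
```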

In fact, in the three systems discussed, positive numbers are represented identically. To form the representation of a negative number in 1's complement notation, we subtract the magnitude of the number from W, where we define W as the largest numerical word that can be stored in a one-word register, ignoring the sign bit. In practice, this means that negative and positive numbers of equal magnitude in the 1's complement notation are exact complements of each other.

That is, wherever a 1 appears in the first number, including the sign position, a 0 appears in its negative counterpart, and vice versa. This means that two numbers of the same magnitude but opposite sign, when added together, will form a word consisting entirely of 1's.
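In C, forming the 1's complement negative is a single bitwise inversion masked to the word length (the eleven-bit word, a sign bit plus ten magnitude bits, is an illustrative width consistent with this chapter):

```c
#include <stdint.h>
#include <stdio.h>

#define WORD_MASK 0x7FFu  /* sign bit + ten magnitude bits, all ones = W */

/* 1's complement negation: wherever a 1 appears, place a 0, and vice
 * versa, sign bit included; equivalently, subtract the word from W. */
static uint16_t ones_comp_negate(uint16_t word)
{
    return (uint16_t)(~word & WORD_MASK);
}

int main(void)
{
    uint16_t plus_half  = 0x200;              /* 0.1000000000 = +1/2 */
    uint16_t minus_half = ones_comp_negate(plus_half);
    printf("%03X\n", minus_half);             /* 5FF = 1.0111111111 */
    /* a number plus its negative yields a word entirely of 1's: */
    printf("%03X\n", plus_half + minus_half); /* 7FF */
    return 0;
}
```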


In 2's complement notation, a negative number is formed by taking its positive counterpart and subtracting it from 2. (The number 2 itself exceeds the word size of the register and is stored there in the form of all 0's, which is desirable.) A simple rule for finding the 2's complement is: find the 1's complement and add e, a 1 in the least significant bit position. Truly it is quite simple, but it requires an addition. An alternate rule, especially applicable to multiplication and division, where serial right-to-left examination is done anyway, is: starting at the right, examine the bits of the positive number in turn.

For each 0 in the positive number, place a 0 in its negative counterpart. When the first 1 is reached, place a 1 in the negative counterpart. Thereafter, for each 0 place a 1 in the negative quantity, and for each 1 in the original word place a 0 in the negative quantity, for all bits including the sign bit. This method also works in converting negative numbers to their positive equivalents; a sketch of it follows.
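A C sketch of the right-to-left rule (eleven-bit words again, an illustrative width): copy bits up to and including the first 1, then invert every remaining bit, sign bit included.

```c
#include <stdint.h>
#include <stdio.h>

#define BITS 11  /* sign bit + ten magnitude bits (illustrative) */

/* 2's complement by serial right-to-left examination: copy each bit up
 * to and including the first 1; thereafter invert every bit. */
static uint16_t twos_comp_negate(uint16_t word)
{
    uint16_t result = 0;
    int seen_one = 0;
    for (int i = 0; i < BITS; i++) {              /* start at the right */
        unsigned bit = (word >> i) & 1;
        if (!seen_one) {
            result |= (uint16_t)(bit << i);       /* copy until the first 1 */
            if (bit) seen_one = 1;
        } else {
            result |= (uint16_t)((bit ^ 1u) << i); /* then invert */
        }
    }
    return result;
}

int main(void)
{
    printf("%03X\n", twos_comp_negate(0x200)); /* +1/2 -> 600 = 1.1000000000 */
    printf("%03X\n", twos_comp_negate(0x600)); /* and back again: 200 */
    return 0;
}
```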

Each representation has peculiarities pertaining to the largest and smallest number represented and to the representation of 0; these are summarized in Table 2. Subtraction of a number from itself, or the addition of a number to its complement, may produce a bit combination not otherwise used; therefore, this combination also may be used for 0. The resulting confusion is more than compensated for by the ease of handling this problem. In 2's complement, the 0 resulting from arithmetic is the same as the normal 0, namely 0.00...0; hence, only one 0 is required in this system. Subtracting e from this we have 1.11...1. The one otherwise unused combination, 1.00...0, we may alternatively use to represent -0. Unfortunately, it is conventional to use 1.00...0 for -1. Although the convention solves some problems, like normalizing -1/2, it creates others.

One problem is that the range of numbers for the 2's complement machine is then -1 <= x < 1, instead of -1 < x < 1 as for the other systems. Hence, provision must be made to handle or prevent addition, subtraction, multiplication, and division for -1. Notice, for instance, that multiplication is closed for the numbers represented in sign and magnitude or 1's complement; that is, the product of any two numbers is a representable number.

Addition is done in the conventional manner for binary numbers; see Figure 2. The one precaution to be observed is that the sum must not exceed 1; in that case, the sum of the numbers we are adding exceeds the word size of the register, W.

The computer must detect this and prevent further computation with this number. In some computers, overflow detection automatically causes a jump to a subroutine in the program to correct the error. Other computers, not so equipped, simply stop for operator intervention to remedy the condition. We preserve the sign and place it in the sign position of the sum. Again, we must beware of numbers whose sum exceeds the numerical word size of the register.

Oppositely Signed Numbers

When the signs of the numbers to be added are different, the result and the procedure depend upon which is larger, the augend or the addend.

We add the addend, in 1's complement form, to the augend, as in the examples given in Figure 2. This produces an end-around carry, which is added to the sum-so-far. The sign bit is that of the augend. How can we be sure a carry occurs? Well, the complement of b is larger than the complement of a; hence, when the former is added to a, the result must be greater than 1. This is described more succinctly, and less accurately, as "adding in the end-around carry."

That is, we add numbers into the numerical portion of the register and do not transmit carries, should they occur, into the sign bit. This manipulation is defined as addition modulo 1 and is recorded as in 2. For a negative augend with magnitude larger than the addend, we perform addition as above. We take the 1's complement of the addend and add it to the augend. The result should not produce a carry.

Why? Because the result is negative, and it is produced in the form of the 1's complement of the proper sum. In order to find the proper sum, we must take the 1's complement of the sum-so-far.



The sign of the sum is that of the addend. Let us see how this looks in equation form. The braces hold the sum of the augend and the complement of the addend magnitude; the brackets indicate the complementation of that sum. The result inside the brackets is not correct as it stands; it must be complemented, which is indicated by subtracting the number in the brackets from 1.

Finally, 1 is added to form a negative number. All the bits of the numbers are added, including the sign bit. The carry, which must be 1, is added around into the least significant position. The result should then be correct. The final result should be a negative number, indicated by a 1 in the sign position. An improper result here cannot be detected by observing the carryout from the sign position; rather, the carryout from the most significant bit of the sum must be monitored to be sure it occurs.
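The end-around carry rule is compact in C (eleven-bit words, an illustrative width; the wide accumulator stands in for the adder's carry-out wire):

```c
#include <stdint.h>
#include <stdio.h>

#define WORD_MASK 0x7FFu  /* sign bit + ten magnitude bits */

/* 1's complement addition: add all bits, sign included; a carry out of
 * the sign position is brought around and added into the least
 * significant bit (the end-around carry). */
static uint16_t ones_comp_add(uint16_t a, uint16_t b)
{
    uint32_t s = (uint32_t)a + b;
    if (s > WORD_MASK)           /* carry out of the sign bit */
        s = (s & WORD_MASK) + 1; /* add it in at the LSB */
    return (uint16_t)(s & WORD_MASK);
}

int main(void)
{
    /* +1/2 (0.1000000000) plus -1/4 (1.1011111111) = +1/4 (0.0100000000) */
    printf("%03X\n", ones_comp_add(0x200, 0x6FF)); /* prints 100 */
    return 0;
}
```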

Oppositely Signed Numbers

To add numbers of different signs, the procedure depends, to some extent, on the sign of the result. In any case, we add the numbers bit by bit, including the sign bit. The result, including the sign, should then be correct, as illustrated in Figure 2.

See Figure 2. In other words, the representations are added together, and the result is stored in the register. In the addition, a carry occurs from the sign positions; this carry is disregarded. The result register now records a number which is shown by 2. When the extra 2 is accounted for, the register is found to hold the proper representation of the result. Carryout of the sign bit is always ignored.
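In C, 2's complement addition is ordinary addition masked to the word length, with any carry out of the sign position discarded by the mask (eleven-bit words, an illustrative width):

```c
#include <stdint.h>
#include <stdio.h>

#define WORD_MASK 0x7FFu  /* sign bit + ten magnitude bits */

/* 2's complement addition: the representations are simply added, sign
 * bits included; carryout of the sign bit is always ignored. */
static uint16_t twos_comp_add(uint16_t a, uint16_t b)
{
    return (uint16_t)((a + b) & WORD_MASK);
}

int main(void)
{
    /* -1/2 (1.1000000000) plus +3/4 (0.1100000000) = +1/4 (0.0100000000) */
    printf("%03X\n", twos_comp_add(0x600, 0x300)); /* prints 100 */
    return 0;
}
```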

The addition cases are summarized in a table. Columns 1 and 2 indicate, respectively, the sign of the augend and the addend. The third column indicates which of these is larger. The fourth column indicates the proper form the sum should take. The fifth column indicates the equation for arriving at this sum; here e, a 1 in the least significant bit position, is added to the partial sum when the end-around carry is detected. The sixth column indicates the correction to the sum which may be required to put the result into proper form. The last column gives the representation of the final result.

Oppositely Signed Numbers

First consider two differently signed numbers. The old rule for subtraction says, "Change the sign of the subtrahend and add."

This calls for the addition of the magnitudes of the numbers, as shown at the top of Figure 2. As soon as we detect that the numbers have different signs, we add the numerical positions and give the result the sign of the minuend. The method we adopt is to first find the 1's complement of the magnitude of the subtrahend and add it to the minuend. If an overflow from the most significant bit occurs, e is added and the result is correct.

If there is no overflow from the most significant bit, then the sum is incorrect, because it is in 1's complement form. For an overflow, the sign of the difference is that of the minuend; for no overflow, the sign of the difference is that of the minuend reversed. An example of this appears in Figure 2. Whenever a carry occurs out of the sign bit, a 1 is added to the least significant bit of the result; when no carry occurs from the sign bit, the result is left intact.

The sign is then correct.
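A C sketch of this complement-and-add subtraction of magnitudes (ten-bit magnitudes, an illustrative width; the sign bookkeeping is reported through a flag instead of a sign bit):

```c
#include <stdint.h>
#include <stdio.h>

#define MAG_MASK 0x3FFu  /* ten magnitude bits */

/* minuend - subtrahend by adding the 1's complement of the subtrahend.
 * On a carry out of the most significant bit, e is added and the result
 * keeps the minuend's sign (*reversed = 0); with no carry, the sum is in
 * 1's complement form and must be complemented, and the sign of the
 * result is that of the minuend reversed (*reversed = 1). */
static uint16_t mag_subtract(uint16_t minuend, uint16_t subtrahend,
                             int *reversed)
{
    uint32_t s = minuend + (~subtrahend & MAG_MASK);
    if (s > MAG_MASK) {                        /* overflow from the MSB */
        *reversed = 0;
        return (uint16_t)((s + 1) & MAG_MASK); /* add e */
    }
    *reversed = 1;
    return (uint16_t)(~s & MAG_MASK);          /* complement the sum */
}

int main(void)
{
    int rev;
    uint16_t d = mag_subtract(0x300, 0x100, &rev);
    printf("%03X rev=%d\n", d, rev);           /* 200 rev=0 */
    d = mag_subtract(0x100, 0x300, &rev);
    printf("%03X rev=%d\n", d, rev);           /* 200 rev=1 */
    return 0;
}
```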


The equations are presented in Table 2. The 1's complement of the subtrahend, including the sign bit, is added to the minuend. At the same time, e (a 1) is added into the least significant bit, regardless of the signs of the numbers or their relative magnitude; this addition of e always occurs. Examples illustrating the procedure are found in Figure 2. The equations are found in Table 2., which is organized in the same fashion as Table 2.

Error or Overflow Detection

There is a problem which arises, and is treated differently, in the three notations: addition or subtraction may produce a result which exceeds the word size of the computer.

In using the sign and magnitude notation, the magnitudes of the numbers are added together regardless of whether we are performing addition or subtraction. Hence, an error is detected by examining whether a carry is produced from the most significant bit of the sum. If such an overflow appears, an error has taken place. The designer has several choices. He can have the computer stop until the operator intervenes.

If this kind of fault is less important, he can have the error recorded in memory for reference when the results are printed out.


He may also provide a long error procedure to check through the data for possible contamination. This is up to the user and programmer.

The rules for each notation are now summarized, starting with the simplest case.

For 2's complement, this is easy: all bits are added, including the sign bit, and any carry from the sign bit is ignored.

For 1's complement, for addition (subtraction) the addend (the 1's complement of the subtrahend) is added to the augend (minuend). All bits, including the sign bit, are added. An e is added to the least significant position only when a carry occurs from the sign position.

For sign and magnitude, change the sign of the subtrahend only for subtraction. When the operands have the same sign, add their magnitudes and give the result that sign.

When the signs differ, add the 1's complement of the addend (subtrahend), adding all except the sign bit. If a carry is produced at the most significant position, add 1 to the least significant bit and give the result the sign of the augend (minuend). If no carry is produced, complement the sum, appending the sign of the addend (opposite from that of the original subtrahend).
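As one illustration of how mechanically these rules check out, here is the simplest case, 2's complement, in C, with a standard overflow test that the summary above does not spell out: trouble can arise only when the operands have like signs and the sum's sign differs from theirs (eleven-bit words and all names are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

#define WORD_MASK 0x7FFu
#define SIGN_BIT  0x400u

/* 2's complement add/subtract in one cycle. For subtraction the sign of
 * the subtrahend is changed by 2's-complementing it. Overflow occurs
 * only when both operands have like signs and the result's sign differs
 * from theirs. */
static uint16_t add_sub_2c(uint16_t a, uint16_t b, int subtract, int *overflow)
{
    if (subtract)
        b = (uint16_t)((~b + 1) & WORD_MASK);
    uint16_t s = (uint16_t)((a + b) & WORD_MASK); /* carry out of sign ignored */
    *overflow = ((a ^ s) & (b ^ s) & SIGN_BIT) != 0;
    return s;
}

int main(void)
{
    int ovf;
    /* 3/4 + 1/2 exceeds the word: the sum appears negative, flagging error */
    uint16_t s = add_sub_2c(0x300, 0x200, 0, &ovf);
    printf("%03X ovf=%d\n", s, ovf); /* 500 ovf=1 */
    return 0;
}
```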

Comparison

Exactly one cycle is required for addition or subtraction of any two 2's complement numbers. Additional operations are sometimes required in the other representations. This would seem to give a definite advantage to the 2's complement representation. However, we will see that extra hardware is required for multiplication, and an extra quotient-correction cycle is always required for 2's complement division.

Hence the selection of machine representation is not cut and dried; it depends upon the trade-offs. The reader can draw his own conclusions after reading about multiplication and division. Assuming that the signs of the operands are random (a questionable assumption), oppositely signed numbers are added half the time. In half of these cases, for sign and magnitude notation, the wrong number is subtracted and the result must be complemented. Hence we can say that 25 per cent of the time an extra cycle is required for complementation. This must be contrasted with the straightforward manner in which multiplication and division are done in this notation.

This may require a partial or complete add cycle.

The following problems use ten-bit binary numbers with one additional bit at the left reserved for the sign. The binary point follows the sign bit. Thus the magnitude of all representable numbers is less than 1 or, in certain exceptions, exactly 1. All such numbers can be considered as fractions with integral decimal numerators; the numbers used in the problems are the decimal numerators of fractions with the decimal denominator 1024, worked first in sign-and-magnitude notation.
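For checking answers to such problems, a C sketch that prints all three representations of a signed numerator n over 1024 (the word packing and the negative test value are my own; the denominator 2^10 = 1024 follows from the ten-bit magnitude):

```c
#include <stdint.h>
#include <stdio.h>

#define MAG_BITS  10
#define WORD_MASK 0x7FFu  /* sign bit + ten magnitude bits */

int main(void)
{
    int n = -300;  /* the fraction represented is n/1024 */
    uint16_t mag = (uint16_t)(n < 0 ? -n : n);
    uint16_t sm  = (uint16_t)((n < 0 ? 1u << MAG_BITS : 0u) | mag);
    uint16_t oc  = (n < 0) ? (uint16_t)(~mag & WORD_MASK) : mag;
    uint16_t tc  = (n < 0) ? (uint16_t)((~mag + 1) & WORD_MASK) : mag;
    printf("s&m=%03X  1's=%03X  2's=%03X\n", sm, oc, tc);
    /* for -300/1024: s&m=52C  1's=6D3  2's=6D4 */
    return 0;
}
```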

[List of figures from the accompanying slides: subtraction; geometric depiction of two's complement integers; hardware for addition and subtraction; multiplication; hardware implementation of unsigned binary multiplication; flowchart for unsigned binary multiplication; two's complement multiplication; comparison; division; flowchart for unsigned binary division; example of restoring two's complement division; typical bit floating-point format.]

IEEE Standard 754: the most important floating-point representation is defined by this standard. The standard was developed to facilitate the portability of programs from one processor to another and to encourage the development of sophisticated, numerically oriented programs. It has been widely adopted and is used on virtually all contemporary processors and arithmetic coprocessors. IEEE 754 covers both binary and decimal floating-point representations. Related topics: IEEE formats; floating-point addition and subtraction; floating-point multiplication; floating-point division.


Csc lecture03 - computer arithmetic - arithmetic and logic unit (ALU)

Topics: ALU inputs and outputs; unsigned representation; sign magnitude; biased (not commonly known); range extension; addition; subtraction.

Range Extension
- The range of numbers that can be expressed is extended by increasing the bit length.
- In sign-magnitude notation this is accomplished by moving the sign bit to the new leftmost position and filling in with zeros. This procedure will not work for two's complement negative integers.
- For two's complement, the rule is to move the sign bit to the new leftmost position and fill in with copies of the sign bit: for positive numbers, fill in with zeros; for negative numbers, fill in with ones. This is called sign extension.
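A C sketch of both extension rules, from 8 to 16 bits (the widths and the example value are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Sign-magnitude: move the sign bit to the new leftmost position and
 * fill in with zeros. */
static uint16_t extend_sm(uint8_t w)
{
    return (uint16_t)(((w & 0x80u) << 8) | (w & 0x7Fu));
}

/* 2's complement: fill in with copies of the sign bit (sign extension). */
static uint16_t extend_2c(uint8_t w)
{
    return (w & 0x80u) ? (uint16_t)(0xFF00u | w) : w;
}

int main(void)
{
    uint8_t sm8 = 0x92; /* sign-magnitude -18 in 8 bits */
    uint8_t tc8 = 0xEE; /* 2's complement -18 in 8 bits */
    printf("%04X %04X\n", extend_sm(sm8), extend_2c(tc8)); /* 8012 FFEE */
    return 0;
}
```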