

ID: 3837570 • Letter: Q

Question

question 2 c. and d.


Note 2: Don't scribble answers on this sheet. Note 3: This page should be submitted along with your answers; don't take it away. Note 4: Don't forget to write in your name prior to submission.

1. Bring out the difference between the following pairs of terms by clearly defining what they refer to: (5 x 1 = 5 Marks)
a. Harvard versus Von Neumann Architecture
b. Computer Architecture versus Computer Organization
c. RISC versus CISC computers
d. Compiler versus Assembler
e. Desktops/Laptops versus Embedded systems

2. a. Write MIPS code to perform addition of two unsigned numbers in registers $t1 and $t2 and then verify whether the result has overflowed the 32-bit long destination register $t3. Explain the overflow logic with comments to elaborate upon each line of MIPS code. (Marks)
b. Indicate with a diagram how the same logic can be implemented in hardware. (1 Mark)
c. Explain with a flow chart and architecture diagram how a hardware-efficient division algorithm may be performed. (Marks)
d. Explain why a massively parallel implementation of the division algorithm is not possible. (1 Mark)

3. a. Describe the IEEE 754 standard for representing single precision and double precision formats, and illustrate it with the number (-0.875)10. (Marks)
b. Describe with flow chart and architecture diagram how the floating point addition algorithm works. (6 Marks)

Explanation / Answer

Difference between Harvard architecture and Von Neumann architecture

The Harvard architecture has separate data and instruction buses, allowing transfers to be performed simultaneously on both buses. A Von Neumann architecture has only one bus, which is used for both data transfers and instruction fetches, so data transfers and instruction fetches must be scheduled; they cannot be performed at the same time.

It is possible to have two separate memory systems for a Harvard architecture. As long as data and instructions can be fed in at the same time, it does not matter whether they come from a cache or from memory. However, there are problems with this approach. Compilers often embed data (literal pools) within the code, and it is often also necessary to be able to write to the instruction memory space, for instance in the case of self-modifying code or, if an ARM debugger is used, to set software breakpoints in memory. If there are two completely separate, isolated memory systems, this is not practical. There must be some kind of bridge between the memory systems to allow this.

Using a simple, unified memory system together with a Harvard architecture is highly inefficient. Unless it is possible to feed data into both buses at the same time, it would be better to use a Von Neumann architecture processor.
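The scheduling constraint described above can be illustrated with a small cycle-counting sketch. The bus model and instruction mix below are assumptions chosen for illustration, not a real processor timing model:

```python
# Toy model: count bus cycles needed to execute a stream of instructions.
# Each instruction needs one instruction fetch; loads/stores also need one
# data transfer. (Simplified illustration only.)

def von_neumann_cycles(instructions):
    """One shared bus: instruction fetches and data transfers serialize."""
    cycles = 0
    for needs_data in instructions:
        cycles += 1              # instruction fetch uses the single bus
        if needs_data:
            cycles += 1          # data transfer must wait for its own bus slot
    return cycles

def harvard_cycles(instructions):
    """Separate buses: a fetch and a data transfer can overlap each cycle."""
    cycles = 0
    for needs_data in instructions:
        cycles += 1              # fetch and data transfer proceed in parallel
    return cycles

# A stream of 16 instructions where half are loads/stores (True = needs data).
stream = [True, False] * 8
print(von_neumann_cycles(stream))  # 24 bus cycles
print(harvard_cycles(stream))      # 16 bus cycles
```

With half the instructions touching data memory, the single-bus machine needs 50% more bus cycles for the same work, which is exactly the serialization the paragraph describes.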

Use of caches

At higher clock speeds, caches are useful because the memory speed is proportionately slower. Harvard architectures tend to be targeted at higher-performance systems, and so caches are nearly always used in such systems.

Von Neumann architectures usually have a single unified cache, which stores both instructions and data. The proportion of each in the cache is variable, which may be a good thing. It would in theory be possible to have separate instruction and data caches, storing data and instructions separately. This would probably not be very useful, as it would still only be possible to access one cache at a time.

Caches for Harvard architectures are very useful. Such a system would have a separate cache for each bus. Trying to use a shared cache on a Harvard architecture would be very inefficient, since then only one bus can be fed at a time. Having two caches means it is possible to feed both buses simultaneously, which is exactly what is necessary for a Harvard architecture.

This also allows a very simple unified memory system, using the same address space for both instructions and data. This gets around the problem of literal pools and self-modifying code. What it does mean, however, is that when starting with empty caches, instructions and data must be fetched from the single memory system at the same time. Obviously, two memory accesses are then needed before the core has all the data required, and performance will be no better than a Von Neumann architecture. However, as the caches fill, it becomes much more likely that the instruction or the data value has already been cached, so only one of the two has to be fetched from memory; the other can be supplied directly from the cache with no additional delay. The best performance is achieved when both instructions and data are supplied by the caches, with no need to access external memory at all.

This is the most sensible compromise, and it is the approach used by ARM's Harvard processor cores. Two fully separate memory systems could perform better, but would be difficult to implement.

Difference between computer architecture and computer organization

This is a commonly asked question that unfortunately confuses many computer science students. The confusion comes from the fact that the literal meanings of the two terms are very close. Also, the historical context of the two terms does not help much, as different people use the terms differently.

In this post I am going to summarize the differences between computer architecture and computer organization as follows:

Intel and AMD build x86 CPUs, where x86 refers to the computer architecture used. x86 is an example of a CISC architecture; CISC stands for Complex Instruction Set Computer. CISC instructions are complex and may take multiple CPU cycles to execute. As you can see, there is one architecture (x86) but two different computer organizations, the Intel and AMD flavors. ARM is an example of a RISC (Reduced Instruction Set Computer) architecture. Instructions in the ARM architecture are relatively simple and usually execute in one clock cycle. Similarly, ARM here is the computer architecture, while nVidia and Qualcomm each develop their own flavor of computer organization.

Computer architecture deals with the operational attributes of the computer, or of the processor to be specific. It deals with details like physical memory, the ISA (Instruction Set Architecture) of the processor, the number of bits used to represent data types, input/output mechanisms, and techniques for addressing memory. Computer organization is the realization of what is specified by the computer architecture. It deals with how the operational attributes are linked together to meet the requirements specified by the computer architecture. Some organizational attributes are hardware details, control signals, and peripherals.

Difference between RISC and CISC

The main difference between RISC and CISC is in the number of computing cycles each of their instructions takes. The difference in the number of cycles is based on the complexity and the goal of their instructions. The term RISC stands for 'Reduced Instruction Set Computer'. It is a CPU design strategy based on simple instructions and fast performance.

RISC uses a small or reduced set of instructions. Here, each instruction is meant to achieve a very small task. In a RISC machine, the instructions are simple and basic, and they help in composing more complex operations. Each instruction is of the same length, and instructions are strung together to get complex tasks done. Most instructions are completed in one machine cycle. Pipelining is a key technique used to speed up RISC machines.

A RISC microprocessor is designed to carry out a few simple instructions at a time. Because the instructions are small, these chips require fewer transistors, which makes them cheaper to design and manufacture.

The term CISC stands for 'Complex Instruction Set Computer'. It is a CPU design strategy based on single instructions that are capable of performing multi-step operations.

CISC computers have shorter programs. A CISC machine has a large number of complex instructions, which take longer to execute. Here, a single instruction is carried out in multiple steps; an instruction set can have over 300 separate instructions. Most instructions are completed in two to ten machine cycles. In CISC, instruction pipelining is not easily implemented.

CISC machines can perform well because they simplify the work of program compilers, as a wide range of complex instructions is readily available in one instruction set. A single complex instruction can replace a sequence of low-level operations, such as an arithmetic operation combined with a load from memory and a store back to memory. CISC makes it easier to have large addressing modes and more data types in the machine hardware. However, CISC is considered less efficient than RISC, because its inability to eliminate redundant operations leads to wasted cycles. Also, CISC chips are more difficult to understand and program for, because of the complexity of the hardware.
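The contrast can be sketched with a toy example: the same operation "MEM[dst] = MEM[a] + MEM[b]" expressed as one complex multi-step instruction versus a sequence of simple load/operate/store instructions. The cycle counts and the register/memory model below are invented for illustration, not real machine costs:

```python
# Toy CISC vs RISC comparison. Memory and registers are plain dicts;
# returned cycle counts are illustrative assumptions only.

memory = {"a": 5, "b": 7, "dst": 0}
regs = {}

def cisc_add_mem(dst, a, b):
    """CISC style: one memory-to-memory instruction doing multiple steps."""
    memory[dst] = memory[a] + memory[b]   # one instruction, several internal steps
    return 4                              # assumed: ~4 machine cycles for one instruction

def risc_add_mem(dst, a, b):
    """RISC style: four simple instructions, one machine cycle each."""
    regs["r1"] = memory[a]                # LOAD  r1, a
    regs["r2"] = memory[b]                # LOAD  r2, b
    regs["r3"] = regs["r1"] + regs["r2"]  # ADD   r3, r1, r2
    memory[dst] = regs["r3"]              # STORE r3, dst
    return 4                              # four instructions x 1 cycle

print(cisc_add_mem("dst", "a", "b"), memory["dst"])  # 4 12
print(risc_add_mem("dst", "a", "b"), memory["dst"])  # 4 12
```

The total cycle counts come out similar in this sketch; the practical advantage of the RISC decomposition is that its uniform single-cycle instructions are easy to pipeline, whereas the multi-step CISC instruction is not.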

What is the difference between an Assembler and a Compiler?

A compiler is a computer program that reads a program written in one language and translates it into another language, whereas an assembler can be considered a special type of compiler that translates only assembly language into machine code. Compilers usually produce machine-executable code directly from a high-level language, but assemblers produce object code, which might have to be linked using linker programs in order to run on a machine. Because assembly language has a one-to-one mapping with machine code, an assembler may be used to produce code that runs very efficiently in situations where performance is very important.
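The one-to-one mapping mentioned above can be sketched with a minimal toy assembler. The three-instruction ISA, its opcodes, and the 16-bit word layout here are all invented for illustration; a real assembler additionally handles labels, directives, and relocation:

```python
# Minimal sketch of an assembler's one-to-one mapping: each assembly
# mnemonic translates directly into exactly one machine instruction word.

OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3}  # invented toy ISA

def assemble(line):
    """Translate one assembly line like 'ADD 11' into a 16-bit word:
    4-bit opcode in the high bits, 12-bit operand in the low bits."""
    mnemonic, operand = line.split()
    return (OPCODES[mnemonic] << 12) | (int(operand) & 0xFFF)

program = ["LOAD 10", "ADD 11", "STORE 12"]
machine_code = [assemble(line) for line in program]
print([hex(word) for word in machine_code])  # ['0x100a', '0x200b', '0x300c']
```

Note that three source lines produce exactly three machine words, in order: nothing is reordered, optimized, or expanded, which is precisely what distinguishes assembling from compiling a high-level language.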

Difference Between an Embedded System & a Computer

The difference between an embedded system and a general-purpose computer system is one of purpose, and to a much lesser extent, design. Whereas a general-purpose system can be used for many things, an embedded system is only meant for one purpose.