Question
Write, assemble, and run an assembly language program that simulates the following C++ code in Pep/8:
{
    const int m = 4;
    const int n = 3;
    int i, j;
    for (i = 0; i < m; i++)
        for (j = 0; j < n; j++)
            cout << i << j << endl;
    return;
}
Fill in the rest of the code, using the Pep/8 skeleton below:
br main
m: .equate 4
n: .equate 3
i: .block 2
j: .block 2
main: lda 0x0000,i
sta i,d ;initialize 'i'
ag1: sta j,d ;initialize 'j'
;INSERT CODE HERE TO IMPLEMENT C++ NESTED LOOP
;(one possible completion: the loop body cout << i << j << endl,
; using Pep/8's DECO and CHARO output traps)
ag2: deco i,d ;output i in decimal
deco j,d ;output j in decimal
charo '\n',i ;output newline for endl
;INNER LOOP: UPDATE AND CHECK 'j'
ldx j,d
addx 0x0001,i ;increment 'j'
stx j,d
cpx n,i
brlt ag2
;OUTER LOOP: UPDATE AND CHECK 'i'
ldx i,d
addx 0x0001,i ;increment i
stx i,d
cpx m,i
brlt ag1
stop
.end
Explanation / Answer
C++ better at matrix inversion; ASM better at CRCs and BigNums?
Hmm. I'd agree that for some processors and some problems, asm is as easy as C++ - it might even be easier. But it's a lot less portable; Win32 code is a good example.
Hand-optimizing assembler code is very difficult; compilers are usually much better at it than humans (exceptions include things like DSP code). If you don't believe me, you should attempt to write an NxN matrix inversion routine (in assembler) that beats the performance of the MatrixTemplateLibrary (actually, that's pretty much impossible in C, too).
Some processors are easier than others. If you use a typical purist RISC processor, you'll find that the pipeline is exposed to the assembler programmer. Once you start worrying about things like delay slots and manual resource scheduling, you'll find that hand-coded assembler isn't such a good idea.
-- DaveWhipp
Writing in assembler makes some things very much easier because you have access to the processor flags. Computing CRCs one bit at a time requires shifting a bit into the bottom of a register, and then doing an XOR with a magic word if the shift generated a carry. You can't do that in C.
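For comparison, here is a minimal C sketch of the usual workaround: with no carry flag visible, you test the bit before shifting it out instead of reacting to the carry afterwards. The constant 0xEDB88320 is the reflected CRC-32 polynomial; the function name is just illustrative.

#include <stdint.h>

/* Bit-at-a-time CRC-32 (reflected form). Assembly would shift and
   then XOR with the magic word if the shift set the carry flag;
   portable C has to test the low bit before shifting it away. */
uint32_t crc32_update(uint32_t crc, uint8_t byte)
{
    crc ^= byte;
    for (int k = 0; k < 8; k++) {
        uint32_t out = crc & 1u;    /* the bit about to be shifted out */
        crc >>= 1;
        if (out)
            crc ^= 0xEDB88320u;     /* the "magic word" */
    }
    return crc;
}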
If you really want to hand-tune assembly, then you can pass values in registers instead of pushing them onto the stack. This eliminates memory access times and greatly reduces the cost of subroutine calls. The downside is that your subroutines become very tightly coupled and almost non-reusable. I have used this optimization in the past, but would probably avoid it now and really insist on a faster processor. --WayneMack
Or a compiler that can put procedure arguments in registers, and that can do interprocedural register targeting. E.g. the Larceny Scheme compiler does this, and even better, you can read all about it at http://www.ccs.neu.edu/home/will/Twobit/ultimate.html. -- StephanHouben
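For a concrete taste of that on 32-bit x86, GCC's regparm attribute asks for exactly this convention (compiler- and target-specific; the function name is illustrative):

/* With regparm(3), GCC on 32-bit x86 passes the first three integer
   arguments in EAX, EDX and ECX instead of on the stack. */
int __attribute__((regparm(3))) add3(int a, int b, int c)
{
    return a + b + c;
}

Callers compiled against the same declaration use the registers too, which is why mixing conventions across modules breaks in exactly the way hand-tuned assembly does.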
x86 assembly language is better at math than C. In the x86, addition and subtraction update the carry flag (the carry flag is used for borrows in subtraction), and there are "add with carry" and "subtract with borrow" operations which you can use to achieve arbitrary precision. You can also multiply two 32-bit numbers to produce a 64-bit result, and you can divide a 64-bit number by a 32-bit number to get a 32-bit result (producing a divide overflow if the result doesn't fit in 32 bits) and a 32-bit remainder. These are the building blocks you need to produce arbitrary-precision math. In C, if you multiply two 32-bit numbers, you get only the 32 least significant bits of the result; the upper half is truncated. If you want a 64-bit result, you have to cast the factors to 64-bit, and then if your compiler is stupid about such things, it will generate code which paranoidly checks the upper 32 bits of both 64-bit factors, even though in this specific case they should be zero. It is also useful that in assembly language the quotient and the remainder are generated at the same time; in C and C++ you have to write two separate operations, and a stupid enough compiler might actually emit the divide opcode twice.
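A sketch of the same building blocks from the C side (function names are illustrative): the widening multiply needs an explicit cast, and the carry has to be reconstructed with comparisons because portable C cannot chain ADC.

#include <stdint.h>

/* Full 32x32 -> 64-bit product: cast a factor up first; a plain
   32-bit multiply would keep only the low half. */
uint64_t mul32x32(uint32_t x, uint32_t y)
{
    return (uint64_t)x * y;
}

/* Multi-word addition r = a + b, n words each, least significant
   word first. x86 would use one ADC per word; portable C recovers
   the carry by comparing a sum against one of its operands. */
void addn(uint32_t *r, const uint32_t *a, const uint32_t *b, int n)
{
    uint32_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint32_t t = a[i] + carry;
        uint32_t c = (t < carry);   /* carry out of a[i] + carry-in */
        r[i] = t + b[i];
        carry = c | (r[i] < t);     /* carry out of t + b[i] */
    }
}

(At most one of the two partial carries can be set, since a word plus a word plus one still fits in 33 bits.)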
In floating point, x86 assembly allows you to set the rounding modes. This is a feature that the IeeeSevenFiftyFour floating-point standard requires CPUs to make available, but most languages don't make it available. (Sometimes it is available as a library function.) It makes it possible to calculate upper and lower bounds for a value, and detect whether round-off error is significant in a specific calculation.
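In C99 this surfaces as the <fenv.h> interface, when the compiler and library honor it; a minimal sketch that brackets 1/3 between a downward-rounded and an upward-rounded quotient:

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON        /* we modify the FP environment */

int main(void)
{
    volatile double a = 1.0, b = 3.0;  /* volatile blocks constant folding */

    fesetround(FE_DOWNWARD);       /* round toward minus infinity */
    double lo = a / b;             /* lower bound on 1/3 */

    fesetround(FE_UPWARD);         /* round toward plus infinity */
    double hi = a / b;             /* upper bound on 1/3 */

    fesetround(FE_TONEAREST);      /* restore the default mode */
    printf("1/3 lies in [%.17g, %.17g]\n", lo, hi);
    return 0;
}

If the two bounds differ by more than you expect, round-off error matters in that calculation.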
But beware, the OS you're using may or may not save and restore your floating point mode on context switches (due to the expense of doing so). If it does not, then your settings may screw up other processes, and their changes to it may screw up your process.
asm is as easy as C++
Assembly language has its place, mainly to do things that higher-level languages cannot do. Obviously, everything a higher-level language can do, assembly can do; that does not mean it is easier in assembly.
Anyone got any short sample programs?
http://web.archive.org/web/20020203150017/http://www.webgurru.com/tutorials/assembly/chap6.htm#2
A short sample: RET
Example of reboot: jmp 0xFFFF:0x0000
See also ForthAssistedHandAssembly.
Assembly vs Assembler
My understanding was that the Assembler assembles binary code from AssemblyLanguage. I think the two phrases get interchanged a lot now that few people use AssemblyLanguage. -- BrianMcCallister
You are quite correct. An assembler is the "compiler" for AssemblyLanguage. -- GarryHamilton
[When Assembly is compiled, does the Assembler perform any notable optimizations, or is the process just something similar to macro expansion into MachineCode?]
Traditionally, no, but vendors often try to spice up their assembler with extensions to take some of the burden off of the programmer. Borland's Turbo Assembler, for instance, included extensions that automatically handle C-style subroutine declarations, local variables, and certain kinds of loops. These things are not all that commonly used.
Optimizing assemblers have been implemented from time to time, but this is relatively unusual, since the usual point of writing in assembly is to get 100% control over what is going on, and if a compilation from a higher level language is involved, the optimizer is usually a completely different pass, rather than being combined with the assembler. When RISC processors were new, it became more common for assemblers to do certain minor kinds of optimizations, such as automatically filling branch delay slots, but usually not full fledged peephole optimization.
The first step beyond machine code, where human-readable (relatively speaking) symbols are used to generate programs.
Relatively speaking, as in an assembly program is as easy to read as an old BASIC program, even when disassembled from machine code. You just need to know how the processor behaves when it encounters a particular opcode, and handle it.
Hand-coded assembly is easy to write and maintain if you're good about naming the call and jump labels, and put in decent comments. The operations might be lower-level and constrained to certain integer math, but that's no reason to write something unmaintainable or unreadable.
AssemblyLanguage is one of the few (readable) levels at which the actual processor behaviors are exposed. Consequently, the most complete control of the "currently defined" processor behaviors is available in AssemblyLanguage and those few others that grant direct access to the CPU.
Most high-level languages will not represent the low-level behaviors of the processor - ForthLanguage and derivatives being exceptions.
The "currently defined" behaviors of the processor are the subject of MicroCode, which establishes the behaviors of the processor. Some exotic processors have writable MicroCode, but for the vast majority of cases this will only be a curiosity: for all intents and purposes, the MicroCode can be considered part of the processor.
Programming at any level requires reasoning about the semantic structures at hand. High level languages claim to offer simpler or more appropriate structures, but often fall short of this goal when needs change or one is forced to reason about lower levels anyway. A regular dose of assembly programming will remind us what we should demand of our higher level languages.
Most of the discussion here seems to concern PC Assembly Language(s) - I am wondering if we should have a separate page for MainframeAssemblerLanguage? (esp. IBM S/360/370/390/z90 Assembler Language). I think there is quite a lot to be said about this - in particular it has been going strong for almost 40 years, which has got to be some kind of record for a computer language...
No new page required. The title says AssemblyLanguage; that's generic. Most of the discussion is about x86 assembly because that's what most people are familiar with. If you want to talk about 360 assembly, go right ahead. If the discussion becomes huge, then it would be interesting to calve a new page.
So...how 'bout them base registers? Can't beat 'em.
Hey, I think ARM assembly language is pretty cool.
humorous assembly language
fictional opcodes:
http://rdrop.com/users/jimka/assembly.html
BetterAssembly?
PreferredOrderOfSrcDstArguments and ForthAssistedHandAssembly and http://terse.com/ have some interesting ideas on how to make a "better" assembly language (without going all the way to C).
ObjectOrientedAssembler mentions "assembler doesn't give any support for non-procedural methods". Would it be crazy to write a "better" assembly language that provides some support for object-oriented methods?
Anyone got any short sample programs?
You now have SET9600.com which ... does exactly that
PreferredOrderOfSrcDstArguments DigitalSignalProcessing ObjectOrientedAssembler WriteAssembler