Love4Boobies wrote:I once amused myself by implementing the syntactic sugar useful for excepting handling standard C, in the form of try-catch-finally blocks, using setjmp/longjmp and macros. I am no stranger to the difference between design patterns and explicit language support for them. However, you will notice that I choose my words quite carefully. Being able to use something in principle is not the same as being able to use it in practice. With regards to design patterns, what I said was this: "very few design patterns can be realistically used." For instance, I would count procedural abstraction as more than realistic. However, consider interface abstraction, as an example. If you develop an OOP API for assembly, not only will there be more opportunities for errors at both ends (compilers automate a lot of the process so they can get it right everytime) but the resulting code will be much more difficult to maintain. In fact, a few assemblers have even tried to introduce explicit language support for classes and their common features (TASM comes to mind) and have all been unsuccessful for this exact reason.
I know that you have chosen your words carefully, but I don't agree. It has been shown that using design patterns in asm is not unrealistic. Actually, assemblers were not unsuccessful at all. Regarding OO ASM, here are a few links:
http://www.heiho.net/download/oo-asm.txt
http://x86asm.net/articles/oop-from-low ... rspective/
http://www.drdobbs.com/embedded-systems ... /184408319
And I also implemented a compiler a few years ago that supported OO and interfaces and made implementing design patterns easy, while still letting me write functions and methods in assembly. It had a very simple layout: it took a C-ish source, performed the checks that were difficult to express in ASM (or were compile-time checks), and output FASM source. It's not as hard as everybody thinks. There are no more opportunities for errors, just different ones. Defining an object in pure ASM is as easy as in C++; you just use a different syntax. Consider this:
Code: Select all
CLASS EQU STRUC
ENDC  EQU ENDS

CLASS ClassName
  ; data members
  member1 db ?
  member2 dw ?
  ; virtual table of member functions
  Constructor     dd ?
  Destructor      dd ?
  memberfunction1 dd ?
  memberfunction2 dd ?
ENDC
It's not that difficult, is it?
Love4Boobies wrote:This problem is, of course, language-agnostic. For instance, you can very well write functional code in C but do you know why no one does it? Because, among the usual suspects, C lacks language support for closures, lambda functions, currying, and so on. While there is no strict requirement for any of them, it does make things messy. You could also write a two-page C program that does what AWK can do in a single line. My point is that even if a set of tools is capable of the same things in principle, that doesn't make them equally suited, as they are meant to be used by humans who have all sorts of intellectual limitations. The whole field of engineering evolved as a way to overcome this. Pick the right tool for the right task.
Agreed. But I also think there's a considerable effort to hide the low-level details from future generations. I'm not sure what the reasons behind this are, but I don't think it's good if, for example, a web programmer doesn't know how HTTP works. They tend to write ineffective code when they don't know what's really going on beneath. I've seen it several times, especially among Java programmers. I'm also curious how many JavaScript programmers are aware of the memory and speed impact of using closures on DOM events. Undeniably, the same algorithm written in pure JavaScript outperforms the jQuery version (here again, to write effective code you need to know what's under the hood of jQuery; introducing another layer can be bad for performance).
Love4Boobies wrote:The point about mathematical optimization is nonsense because people can tune high-level code in just the same way they tune low-level code. Have you ever heard anyone say "I'm limited to bubble sort because I'm using Java" or something of the sort (pun intended)?
It's not nonsense; you only think that because you have never heard of the demo scene. I would really like to see anybody write, for example, this 64k demo in a high-level language using only compiler optimizations to get the executable down to 64k. I suggest you watch this documentary. It's mostly in Hungarian, but subtitled. They explain how they made the aforementioned demo at around 54:10. (Hint: they generate everything procedurally. I can't imagine a compiler being able to do that. For example, given a bitmap file, create an algorithm that generates it, so you won't have to store the entire bitmap as data.)
I also agree with you that people can tune high-level code the same way they tune low-level code; therefore it is possible to implement ANY algorithm and design pattern in assembly. You've just provided proof that you were wrong earlier.
Love4Boobies wrote:At any rate, your claim is empirically falsified: experienced assembly programmers are simply unable to optimize code as well as machines because the space of possibilities is way too large for them to navigate in their heads. Take something as trivial as register allocation. We have algorithms that can find optimal solutions to this problem (which is why the "register" keyword has lost its meaning as an optimization hint in C and C++ --- nowadays, it's sometimes used to avoid enquiring references). The best assembly programmers don't come close to their register allocation even for short programs. It's somewhat funny to me that your average Joe knows that Deep Blue beat Kasparov at chess, Watson beat Brad Rutter and Ken Jennings at Jeopardy!, and AlphaGo beat Lee Sedol at Go, yet people working in the industry quote 80s books about assembly's supposed advantages while being oblivious that compilers have been beating them since before all the accomplishments listed above.
I don't want to disappoint you, but it is you who has been empirically falsified; see the links above.
Love4Boobies wrote:Compilers for higher-level languages have a deep understanding of what the code they are compiling is trying to accomplish (lower-level languages don't just miss out on the syntactic sugar, you know, but also on the semantic interpretation that comes with it) so they perform all sorts of neat large-scale optimizations. On top of that, as I've already mentioned, they'll even go out of their way to not write efficient code at times, because they need to have a clear understanding of what is going on---not an issue for compilers.
You are still stuck on code optimization. I'm talking about optimizations at a higher level. Let me give you yet another example. Suppose you need to read a config file with XML in it. You know that it's written by a program (hence its format is very strict) and only contains 3 tags. Now, you could use a universal XML parser for that, which would produce large code (including an additional library) and a large memory footprint (a universal XML parser will convert the XML into a tree representation). No compiler would be able to optimize that away. On the other hand, a human can choose to use plain libc sscanf to parse that config file with minimal effort. That's perfectly viable, as the config file is strict and only contains a limited number of tags. That results in small code (no additional libraries) and a small memory footprint (no tree representation involved). You see what I mean?
Love4Boobies wrote:Don't even get me started on correctness. Have you ever tried writing something in Brainfuck? It's Turing-complete so it should be possible to write, say, a browser in it. Can you think of any reasons you might want to avoid it?
Yes. Compare the number of available elements in the Brainfuck language with the number of elements in assembly or in C. You'll see which language has more power to express an algorithm.
Love4Boobies wrote:Now, is there ever a time when writing general-purpose assembly in a new code base useful? Sure, small snippets for very tight loops. Despite how good today's optimizing compilers are, they still have a few weak spots (e.g., vectorization --- people generally use intrinsics).
Again, you are talking about micro-optimizations, and I'm talking about macro ones.
Love4Boobies wrote:
bzt wrote:After all, all high languages such as C/C++ will produce assembly source (or equivalent machine code) at the end.
Please don't say something like this at a job interview.
Which part do you question? That C/C++ compilers produce asm source, or that assembly and machine code have a one-to-one relation? I hope I never have to interview with you, as I'm afraid I'd have to decline your application...
Basically, there are two kinds of programmers: the one who copy'n'pastes and uses large libraries for everything, whom I call a coder; and the one who gives it deep thought and comes up with optimal algorithms, whom I call a real programmer. Clearly you are a coder (no shame in that; I didn't mean to offend you in any way, people are different, that's all).