
Posted: Wed May 21, 2008 4:49 pm
by iammisc
Let me respond to some of the arguments leveled against me.

Regarding the whole fiasco about pointer arithmetic:

Yes, you can say you don't like it, but the fact is that it is well understood. Therefore, statements such as "C can't do arithmetic" or "C doesn't know math" are simply stupid. This is also what I meant by personal attacks on C. C does know arithmetic, and its semantics are understood by almost everyone. If you don't like it, that's your own opinion, but that doesn't make it bad.

Secondly, I made a mistake when I said a pointer is an integer. I should have stated that it is stored as an integer but is different from one, and that its semantics are therefore different.
Since when did pointers get a concept of next?
In C, the semantics of how a pointer works are well understood, and in C a pointer does have a concept of "next". "Since when did pointers not have the concept of next?" would be the complementary and no less valid question.
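To make "next" concrete, here is a minimal sketch (my own illustration, not from the original post): adding 1 to a pointer advances it by the size of the pointed-to type, one whole element at a time.

Code:

#include <stdio.h>

int main(void)
{
    int arr[3] = { 10, 20, 30 };
    int *p = arr;

    /* p + 1 is the "next" int: the address advances by
       sizeof(int), not by one byte, so *(p + 1) is arr[1] */
    printf("%d %d\n", *p, *(p + 1));   /* prints: 10 20 */

    /* pointer subtraction is likewise measured in elements */
    printf("%td\n", (p + 2) - p);      /* prints: 2 */
    return 0;
}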

About the assignment operator as a conditional: I believe that in C, an assignment expression evaluates to the value that was assigned. I also know that any value other than 0 is considered true. Therefore, to anyone who knows C, the semantics of these operators are understood. The ideas of equality and assignment are distinct, and your brain thinks about them differently. Therefore, anyone who has been trained in C would only make this mistake because of a typo. Then again, anyone trained in C would try to eliminate all warnings...
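As a quick sketch of the typo in question (my example, not from the original post), here is the assignment-as-condition mistake next to the intended comparison; gcc and clang both warn about the first form with -Wall:

Code:

#include <stdio.h>

int main(void)
{
    int x = 5;

    /* typo: this assigns 0 to x and tests the assigned value,
       which is 0, i.e. false, so the branch never runs */
    if (x = 0)
        printf("never reached\n");

    /* the intended equality comparison */
    if (x == 0)
        printf("x is zero\n");
    return 0;
}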
I read somewhere that K&R never intended for C to be used without lint. I wonder if there's any truth to that.
Come on, if you're telling me that you can write a program of sufficient size in some language other than C and not make a single mistake, then I shouldn't even be having this conversation.

Posted: Wed May 21, 2008 5:40 pm
by Colonel Kernel
iammisc wrote:
I read somewhere that K&R never intended for C to be used without lint. I wonder if there's any truth to that.
Come on, if you're telling me that you can write a program of sufficient size in some language other than C and not make a single mistake, then I shouldn't even be having this conversation.
It was not a rhetorical question or a sarcastic comment. My point was that by default C's semantics are very type-unsafe, and lint can help compensate for that. In other words, the number of errors that a C compiler without lint can catch is in general much smaller than for a more strongly typed language. But if everyone used lint as well, then maybe C's reputation for safety would be very different.
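As an illustration (my own sketch, not from the original post), here is the classic kind of hole lint was made for: with no prototype in scope, a pre-ANSI compiler has no argument types to check at the call site, so a mismatched call goes through silently, while lint flags it.

Code:

#include <stdio.h>

/* no prototype: only the return type is declared,
   so the call below cannot be type-checked */
int square();

int main(void)
{
    /* passing a double where an int is expected: undefined
       behaviour that the compiler accepts without complaint,
       but that lint reports as a type mismatch */
    printf("%d\n", square(2.5));
    return 0;
}

/* old-style (K&R) definition */
int square(x)
int x;
{
    return x * x;
}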

Re: Who here doesn't use C and why?

Posted: Wed May 21, 2008 10:05 pm
by esa
@Combuster

I still can't bring myself to agree with you. :)

That is... on a practical level. In theory C might have homoiconic properties. I can agree with you on that. :)

However, no sane person would use C like that to write a real program. Yes, I know it's a cheap shot, but I'm resorting to the "not the right tool for the job" argument. What a Turing-complete language can do, what it was designed for, and what is practical are different things. I think you can agree with me on this. :)

I think your example also relies on the fact that in the end C compiles to machine code, which is homoiconic... after all, it's all just raw bytes, both code and data. In this sense, any language that allows you to define a symbol as a vector of bytes that gets compiled as-is (like in your example) could be seen as homoiconic.

If we leave the definitions aside, the best way to explain how I understand homoiconicity (in a practically useful form) is through code. In this case I'll use Common Lisp.

Code:

;; this is code
(lambda (x) (+ x x))

;; and this is data... because it is quoted by the ' in front of it
'(lambda (x) (+ x x))

;; I can also do this...
;; - assign a list describing an anonymous function to a variable
;; - turn it into a named function definition
;; - eval it i.e., make an actual function out of it
;; - and apply it to some argument
(setf foo '(lambda (x) (+ x x)))
=> (lambda (x) (+ x x))
(setf goo (append '(defun zoo) (cdr foo)))
=> (defun zoo (x) (+ x x))
(eval goo)
=> zoo
(zoo 3)
=> 6
The difference between this and your example is that I'm doing something that is not a hack or a kludge, but much closer to normal use of the language (normally you wouldn't use #'eval but macros, etc.).

So could we just agree on this: any Turing-complete language is homoiconic in the sense that it can emulate other, equally powerful languages that are homoiconic in a practical sense, but being able to do that doesn't make the language itself a practical tool for writing programs built on its homoiconic properties?

Posted: Fri May 23, 2008 3:32 am
by Steve the Pirate
C is not bad (and is also very readable, in my opinion), but I much prefer C++, as it just makes everything easier... Classes and inheritance etc. are all very handy, but I'd even use it just for std::string!

Re: Who here doesn't use C and why?

Posted: Fri May 23, 2008 8:04 pm
by kmcguire
esa wrote:@Combuster

I still can't bring myself to agree with you. :)

That is... on a practical level. In theory C might have homoiconic properties. I can agree with you on that. :)
After reading more about the matter, I think C is not homoiconic and does not possess homoiconic properties. In an external document, someone explains the term a little better:

Here's a RuleOfThumb regarding homoiconicity, as I understand it: if a value in a first-class variable type can be passed to the primary language evaluator directly and interpreted without any further modification (that is to say, the internal representation of the value is the same as the internal representation of code), then the language is homoiconic.

And this is not true for the language C. It also makes sense: as esa states, using C to modify itself in that way is not practical (which is a clue that it is not homoiconic) and very difficult because of the possibility of different architectures (which also points to it not being homoiconic), because the internal representation that is compiled and executed differs from the source code representation. That makes C not homoiconic but heteroiconic.

So is the blind still leading the blind? *laughing*

http://c2.com/cgi/wiki?HomoiconicLanguages
http://www.reference.com/browse/wiki/Li ... _language)

Re: Who here doesn't use C and why?

Posted: Tue Jun 03, 2008 9:38 pm
by Shark8
Combuster wrote:
esa wrote:You just proved my point. C is obviously not homoiconic if that is the best C has.
IMO I just disproved the first part of your claim - there's no real distinction between code and data. (In formal logic that means I disproved your entire claim, as you say both to be true :twisted:) I'll give you the second one, but only because of its ambiguous definition - you can use a function as a data structure (program code can be referred to as an array of bytes or anything) and vice versa (an array can represent code), but you are expected to use only one representation.
Actually, that's not correct. You see, a homoiconic language _must_ represent its control structures in the same form as its data. In C you _may_ use data (i.e. the array) as code. There is a big difference between the two, though you might not see it at first: the difference between "may" and "must".

Now, the reason this is possible is that, strictly speaking, the computer cannot distinguish between code and data in a stream of bytes. As such, all you need to do to load some code like that is to JMP to the address of the array. In fact, this is common in self-modifying code. (It is true that there are code and data segments... but those are merely conventions, like driving on the right or left side of the road.) I.e., is the hex string "A1EF2304" part of some code, some data, or some gibberish left over from something else? Who knows.
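As a rough sketch of that idea (mine, not Combuster's actual example): the array below happens to hold the x86 machine code for a function that returns 42, and casting its address to a function pointer is the "JMP to the array" described above. Note the caveats in the comments: ISO C leaves this conversion undefined, and on modern systems the bytes must sit in an executable page (e.g. one obtained with mmap or VirtualAlloc) or the call will fault.

Code:

#include <stdio.h>

/* x86/x86-64 machine code for:  mov eax, 42 ; ret */
unsigned char blob[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

int main(void)
{
    /* reinterpret the data array as code: whether these bytes
       are code or data is purely a matter of how we use them.
       (Undefined behaviour per ISO C, and it only runs if the
       bytes live in an executable page.) */
    int (*fn)(void) = (int (*)(void))(void *)blob;
    printf("%d\n", fn());   /* 42, if the jump succeeds */
    return 0;
}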