If you don't like a given language, that's all good; unless your job forces you to use it, you don't have to.
That having been said, it helps to understand what it is you are saying you don't like, and why these languages are the way they are.
Solar wrote: That's why I like sticking to the older languages.
Do I even need to say what I am thinking, here? You, of all people here, should have a better idea of why they are the way they are, and more to the point, how long this approach has been around.
Code: Select all
> (define a 1)
> (define b a)
> a
1
> b
1
> (set! a 100)
> a
100
> b
1
OK, so Scheme isn't the original dialect, but it is only three years newer than C (1975 vs 1972), and the same behavior was true in nearly every Lisp since the original m-expression language definition, before it was even implemented. The basic behavior is mainly due, as Rusky states, to the fact that a symbol (the equivalent of an identifier or variable name) is a name for a reference to the data's address rather than the data's address itself. Since everything is (or can be treated as) going through indirection, assignment becomes setting the reference.
Defining a doesn't set a to 1, it sets a to a pointer to a data cell holding the value 1; the same pointer is used any time the value 1 is needed. Defining b as a copies the pointer to the value cell; it does not set b to a reference to a (to put it another way, it automatically snaps the pointers; there is only one level of indirection in both variables). When a is set (assigned) to 100, it changes the reference a holds, but since b still has its own reference to 1 - not a reference to a - it doesn't change anything about b's value.
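To make that concrete, here is a quick REPL sketch of my own (assuming an R5RS-style Scheme with mutable pairs) contrasting rebinding a symbol with mutating the object it points to; set-car! changes the pair itself, while set! only changes which object the symbol references:
Code: Select all
> (define a (cons 1 2))   ; a -> a fresh pair
> (define b a)            ; b copies a's pointer; same pair
> (set-car! a 100)        ; mutate the pair object itself
> b
(100 . 2)
> (set! a (cons 5 6))     ; rebind a to a new pair
> b
(100 . 2)                 ; b still holds the pointer to the old pair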
This may sound inconsistent, because it looks as if the primitive values are being treated differently; but in fact, they aren't. The operations on numbers and other primitives are always indirected, or behave as if they were - in both (define a 1) and (define b a), it is just copying an address from one symbol to another, but in the first case the 'symbol' is the literal 1. In Lisp, a literal is a symbol, not a syntactic structure, though one which is treated differently in regards to how it gets its own value. Similarly, in Python and Ruby, a literal is just another object, just one whose value is constructed implicitly.
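One way to see the sharing at the REPL (hedging a bit: eqv? is guaranteed to return #t for numerically equal exact integers, while actual pointer sharing behind that is an implementation detail):
Code: Select all
> (define a 1)
> (define b a)
> (eqv? a b)   ; both symbols reference the same value 1
#t
> (eqv? a 1)   ; and so does the literal itself
#t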
I think that this is what is tripping people up: they assume that (define a 1) is assigning the value of 1 to a, while (define b a) is assigning a pointer to a. Both are, in fact, copying a pointer to an object, but to a C programmer, it looks like the second one is copying a pointer to a variable - something that doesn't quite exist in these languages (but see below).
Mind you, to Lisp, that's just an implementation detail, and most Lisp compilers* (and some Lisp interpreters) will short-circuit references to primitives and smaller data structures. I am not sure if the 'standard' implementations of either Python or Ruby do. I would expect they do, if the developers are anything close to being on the ball about this. Then again, I would have said the same thing about TCO, but Guido stated outright that he disliked it because it smashes the stack (OK, so there are decorators for tail recursion removal, but those are a bit kludgey).
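For anyone unfamiliar with TCO: the Scheme standard requires that calls in tail position not grow the stack, so a loop written as a tail-recursive function runs in constant space. A minimal sketch:
Code: Select all
; the recursive call is in tail position, so a conforming Scheme
; reuses the current frame instead of pushing a new one
(define (count-down n)
  (if (zero? n)
      'done
      (count-down (- n 1))))

> (count-down 1000000)   ; no stack overflow, unlike naive recursion in Python
done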
Obviously, Java and C# don't, because they have a separate set of explicitly primitive types that are treated as local values (the data itself) rather than references, and you cannot have references to them without a wrapper class. IME, this is the worst option of all - it is neither completely explicit nor completely implicit, and the differences in behavior can be confusing even with the explicit typing those languages use. It was originally selected in Java for efficiency reasons, but in retrospect, that makes no sense - efficient algorithms which could convert conceptual references in the heap to concrete values on the stack, and which play well with garbage collectors, were already well known (e.g., Cheney's algorithm), and the only thing it did was simplify the compiler (and by only a trivial degree at that).
There are ways to get multiple levels of indirection in these languages, but not with simple assignment - you need to use a list structure, which in Lisp at least, is just pairs of references chained together, with a null in the second value of the last pair. Pairs are shown like this:
Code: Select all
(2 . 3)
(a . b)
where the first is a pair with a reference to the constant 2 and a reference to the constant 3, and the second is a pair of references to the symbols a and b (these are sometimes called 'improper' lists because they fill both cells with non-null references).
A list, then, is actually a chain of pairs, each holding an element in its first cell and a reference to either another pair or a null in its second cell, so:
Code: Select all
; simple list
(a b c d) => (a . (b . (c . (d . null))))
; nested list, i.e., a tree
(a b (c d (e f) g)) => (a . (b . ((c . (d . ((e . (f . null)) . (g . null)))) . null)))
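You can see the chained-pair structure directly by taking a list apart with car and cdr (what the text above calls null prints as () at the REPL in most Schemes):
Code: Select all
> (define lst '(a b c d))
> (car lst)                    ; first cell of the first pair
a
> (cdr lst)                    ; second cell: the rest of the chain
(b c d)
> (cdr (cdr (cdr (cdr lst))))  ; the final pair's second cell is null
()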
The same can be done in Python, but Python also has more specialized data types which get used more often (most modern Lisps do too, actually).
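For instance (sticking with Scheme, since that's the running example), most Schemes have vectors as a flat, indexable type distinct from chained pairs:
Code: Select all
> (define v (vector 1 2 3))   ; contiguous storage, O(1) indexing
> (vector-ref v 0)
1
> (vector-set! v 0 100)
> v
#(100 2 3)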
--------------
*) Despite its reputation, Lisp is, and always has been, primarily a compiled language; while Slug Russell's big insight involved using the eval and apply functions described in McCarthy's paper as an interpreter**, that was pretty much an accident, and the original interpreter was used first and foremost for bootstrapping the first compiler. The misconception comes in part from the use of the REPL in development - which behaves like an interpreter, but is often actually a compile-and-go native compiler - and from the ease with which meta-circular interpreters are written in Lisp. The fact that any fool could write a Lisp interpreter - in any language, but especially in Lisp itself - led to a culture that focuses heavily on defining DSLs and extension languages for things which would be done in a more direct and less abstracted manner in most other languages.
**) The eval and apply functions were defined in a publication language which McCarthy called the 'm-expression' form, but operated on a symbolic list data structure McCarthy called an 's-expression'. They had the effect of treating an s-expression as an expression in a modified lambda calculus, with eval calling apply on the first element of the list with the rest of the list as function arguments, and apply taking the list of arguments and calling eval on each one until it got down to a final literal or value, at which point it would call the function being applied with those final return values as its arguments.
In the paper, they were only intended to demonstrate that the new language was Turing-equivalent (or rather, Church-equivalent, since he based them on the lambda calculus), something that wasn't really clear for a lot of languages at the time. Steve 'Slug' Russell, one of McCarthy's grad students, was tasked with hand-converting several of the base Lisp functions into assembly code to demonstrate that they were viable as executables, and to help plan out the m-expression compiler. However, around Dec 1958, he decided to try implementing the eval and apply functions to see if they could actually work; they did, and it struck him that they could use the s-expressions instead of the m-expressions and just use eval and apply in a loop - in other words, he realized he had written an interpreter.
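For the curious, here is roughly what that eval/apply loop looks like. This is a deliberately minimal sketch of my own - it handles only variable lookup, quote, lambda, and application, with no define or other special forms - not Russell's actual code:
Code: Select all
(define (my-eval expr env)
  (cond ((symbol? expr) (cdr (assq expr env)))     ; variable lookup
        ((not (pair? expr)) expr)                  ; self-evaluating literal
        ((eq? (car expr) 'quote) (cadr expr))
        ((eq? (car expr) 'lambda) (list 'closure expr env))
        (else (my-apply (my-eval (car expr) env)   ; evaluate the operator...
                        (map (lambda (x) (my-eval x env))
                             (cdr expr))))))       ; ...and each operand

(define (my-apply fn args)
  (if (and (pair? fn) (eq? (car fn) 'closure))
      (let ((params (cadr (cadr fn)))              ; the lambda's parameter list
            (body   (caddr (cadr fn)))             ; its (single) body expression
            (env    (caddr fn)))                   ; the captured environment
        (my-eval body (append (map cons params args) env)))
      (apply fn args)))                            ; fall back to a host primitive

> (my-eval '((lambda (x) (+ x 1)) 41) (list (cons '+ +)))
42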