...
If your environment doesn't have reflection, nothing prevents you from adding symbolic information or generating a serializer/deserializer at compile time.
Sure, but doing it with a pre-processor is a pain in the ***.
Plus, why have that code pre-generated if it doesn't need to be? Let's say you're writing class Thing and selling it as part of your spiffy 3rd party library. Perhaps you can't predict whether or not people will care about (de-)serializing Things, so you make it Serializable just in case. Your binary is now maybe a few bytes bigger. If you pre-compiled all that functionality in, it would be a lot more, all for a feature that people may or may not use. Generally speaking, doing this kind of stuff at compile-time takes away flexibility.
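To put a number on "a few bytes bigger": opting in with Java's built-in mechanism costs roughly this much source (Thing's fields here are made up for the sketch), and the field-walking only happens at runtime, via reflection, if someone actually serializes an instance:

    import java.io.Serializable;

    // A minimal sketch of the hypothetical Thing from the library example.
    // Implementing the Serializable marker interface adds essentially no
    // generated code; the runtime inspects the fields via reflection only
    // when someone actually serializes an instance.
    public class Thing implements Serializable {
        private static final long serialVersionUID = 1L; // version tag for the stream format

        private String name;   // made-up fields, serialized automatically
        private int count;

        public Thing(String name, int count) {
            this.name = name;
            this.count = count;
        }
    }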
Am I wrong in thinking that, from time to time, you still have to write code to support deserialization (e.g. when you need to *reconstruct* state rather than simply storing it)?
It depends on what kind of state is being reconstructed. If all the fields of your object can be automatically serialized, then they can be automatically deserialized as well. But if you have a transient field that doesn't get serialized, like say an open File or something, then a freshly-deserialized object would need to open the File for itself (although I would argue that objects that require these kinds of resources probably shouldn't be serialized in the first place).
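For illustration, here's a rough Java sketch of that situation using the standard serialization hooks (class and field names are made up): the path is serialized automatically, but the open reader is marked transient, so a freshly-deserialized instance has to reopen it itself.

    import java.io.FileReader;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.Serializable;

    public class LogSource implements Serializable {
        private static final long serialVersionUID = 1L;

        private String path;                  // serialized like any other field
        private transient FileReader reader;  // skipped by serialization

        public LogSource(String path) throws IOException {
            this.path = path;
            this.reader = new FileReader(path);
        }

        // Hook invoked by Java's deserialization machinery: restore the
        // non-serializable state after the ordinary fields are read back.
        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();             // restores 'path'
            this.reader = new FileReader(path); // reconstruct the transient resource
        }
    }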
When you build a Swing application, register an event handler for some widget, and then let the widget go out of scope without un-registering the event handler, you have a memory leak in your Java VM. The event handler is still registered and prevents the widget (and the containing frame) from being GC'ed. You also no longer have a handle on your widget, so you can't un-register the event handler.
The bit about being unable to unregister the event handler is expected, but if the widget itself isn't being GC'd after going out of scope, that sounds like a bug for sure.
The way it should work is: Thing registers a handler with Widget. Now Widget refers to Thing, and something else refers to Widget. If all references to Widget are dropped first, then it can be GC'd, even though it refers to Thing. If all (other) references to Thing are dropped first, it will stay alive as long as Widget is referred to, but as soon as Widget becomes a candidate for collection, then Thing is too. If this isn't what happens, then I hope they fix it. In my experience (2+ years developing with .NET), this kind of thing has never happened to me.
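For what it's worth, here's a toy sketch of the reference chain being described (the Widget/Thing names are just the placeholders from this post, not a real Swing API). The widget holds references to its listeners, not the other way around, so once the widget itself becomes unreachable, the listeners it holds become collectible with it:

    import java.util.ArrayList;
    import java.util.List;

    class Widget {
        private final List<Thing> listeners = new ArrayList<>();

        void addListener(Thing t) { listeners.add(t); }
        void removeListener(Thing t) { listeners.remove(t); }
    }

    class Thing {
        void onEvent() { /* react to the widget */ }
    }

    class Demo {
        public static void main(String[] args) {
            Widget widget = new Widget();
            widget.addListener(new Thing()); // Thing is kept alive only through widget

            widget = null; // drop the last reference to the widget;
                           // both Widget and its Thing are now GC candidates
        }
    }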
Bottom line: I agree that VMs will play an even bigger role in the future. But they aren't the cure-all, just like the internet, thin clients, and XML were just that: good solutions for a given problem, but not the solution to every problem.
We're approaching the rational happy medium.
I didn't say VMs were a cure-all or the solution to all problems, but I think the class of problems they solve is rapidly expanding.
Mainly, I was just tired of hearing that VMs are for the "lazy" or less capable developers. Then again, the people that say this are the same people who would probably write everything in assembler if they could.
As with everything proclaimed to be a silver bullet: silver bullets kill vampires (the good part), but they also kill people (the bad part).
;D