tjmonk15 wrote: Howdy
Hello!
Before I begin my response, please keep one thing in mind: This is ultimately just my opinion. I'm giving my thoughts so that others may learn from them. Or not. It's up to you. Your kernel is your kernel, and I wish each developer the best in attempting to prove their own opinions. So please take it for what it is: an opinion. And as we know, everyone has one.
I have worked (thoroughly) with various programming languages such as PHP, C#, Java, and JS, and technologies (read: VMs in this case) such as Apache, IIS, browsers, etc., and continue to do so. These languages are generally considered to range from not dynamic (C# and Java, which is flat-out wrong in the case of C#, if not Java as well) to very dynamic (JS is the most "dynamic" language I have encountered).
I have also worked with other languages such as C, C++, VB5-6, VB.Net, Basic, Fortran (vaguely), asm, Python, and Perl. (I'm sure I am missing some...)
For what it's worth, I would expect no less from members of this forum. Developing an OS is not an amateur's game.
Last I checked, I only need to replace the "interface" and the rest of *MY* code fails to compile until I replace said interface-defined methods/properties.
That's the classic argument against dynamic solutions. The problem is that there is no evidence that static compile-time checking reduces bug rates. In perhaps the most famous work on this problem, Les Hatton's "Software failures: follies and fallacies", the following conclusion was drawn:
We can conclude that programming language choice is at best weakly related to reliability.
A similar conclusion was reached in Steve McConnell's book "Code Complete", where code size was deemed the most important indicator of bugs. A very good discussion of the book and the specific issue of code size can be read at the following URL:
http://mayerdan.com/ruby/2012/11/11/bug ... ode-ratio/
More specifically on the question of dynamic vs. static, a paper published by Microsoft researchers stated the following:
Static typing provides a false sense of safety, since it can only prove the absence of certain errors statically [17]. Hence, even if a program does not contain any static type-errors, this does not imply that you will not get any unwanted runtime errors.
Interestingly, that paper argues that a mix of static and dynamic can produce the most reliable results, which is something I tend to agree with. The underlying problem is that the amount of external data a program must consume continues to grow. This data forces a program to handle typing dynamically, which makes static typing useless at that boundary; test cases must provide a stand-in solution there. However, anywhere your program is internally consistent, you can absolutely use typing to reduce errors.
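To make that concrete, here is a minimal Javascript sketch of the idea (the "order" shape and helper name are hypothetical, mine rather than anything from the cited papers): the external data is checked dynamically once at the boundary, and everything past that point can rely on a consistent structure.
Code:
// Hypothetical example: dynamic checking at the boundary, internal
// consistency afterwards. The "order" shape is purely illustrative.
function parseOrder(json) {
    var data = JSON.parse(json);    // external data: shape unknown
    // Dynamic validation stands in for what static typing cannot prove here.
    if (typeof data.id !== 'number' || !Array.isArray(data.items)) {
        throw new Error('malformed order');
    }
    return data;    // from here on, the shape is internally consistent
}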
It's pretty clear you care less about reliability than you do about cost of development. As a business owner, this makes sense; as a developer, a user, or an investor (that cares about maintainability, i.e. future profits/ROI)
Unfortunately, that is a provably untrue statement. I did quite a bit of study on the topic over the last few years. What I found was that by switching to more functional and/or dynamic techniques (note that I did not say it was always a dynamic language) I was able to reliably reduce code sizes by 90% in comparison to similar software being developed in parallel using traditional techniques. When the techniques were applied to the software developed in parallel (i.e. by replacing modules using the newer techniques), code sizes dropped by the expected 90%.
As one might expect from such a drop in code sizes, reliability went up considerably. While I'm afraid I don't have the exact numbers available to me anymore (it was proprietary information), they were similarly shocking. Bug rates were between 60% and 90% lower when compared between projects and/or previous versions. In fact, the conclusion reached was that the bug rates for the older techniques were so high that even more bugs were being masked by the existing problems.
Again, I will stress that dynamic languages were only part of the solution. Our change was from server-side Java code using traditional MVC and object mapping techniques to Java and Javascript with the Java "speaking" dynamic maps and lists instead of objects. We also utilized a large number of anonymous inner classes to serve as replacements for the natural closure and lambda function passing you get in a true functional language.
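For contrast, here is the sort of natural closure and function-passing that those anonymous inner classes were emulating, shown in Javascript (the discount example is purely illustrative, not our actual code):
Code:
// A closure: makeDiscounter "closes over" rate, no class required.
function makeDiscounter(rate) {
    return function(order) {
        return {total: order.total * (1 - rate)};
    };
}

var tenPercentOff = makeDiscounter(0.10);
var orders = [{total: 100}, {total: 250}];
var discounted = orders.map(tenPercentOff);   // pass the function itself around
// discounted is [{total: 90}, {total: 225}]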
In terms of a class being "static", that is either a bonus or a detriment. Most career developers would likely say that it is a bonus (as the static verification makes their life easier in the long run).
The following blog post has been making the rounds among the Java and wider OOP community for a few years now:
http://steve-yegge.blogspot.com/2006/03 ... nouns.html
He does a fairly good job of explaining a perspective on the exact limitations of the class hierarchy. Class typing by itself isn't necessarily a bad thing (I used it to my advantage in making our dynamic Java more reliable by making sure highly pluggable Spring components could only fit together one way), but when used in a strict object model you end up with an inflexible system that cannot easily evolve, change, or even represent the numerous exceptions that occur due to leaky abstraction.
Basically it boils down to this:
Translating a domain model to an object model is a really bad idea.
jbanes wrote:But why are they static? No real reason. It just happened to be the easiest implementation when OOP was originally designed. Thus static classes became codified into the concept.
Proof requested. (--Snip-- Please cite verifiable sources when you make such a wild claim...)
It's not that wild of a claim. If you trace through the history, it's easy enough to understand what happened. Simula 67 provided the first concept of classes, but it did not treat them as first-class citizens the way later OOP systems like Java do. Instead, it was a dynamic type that could be set up and utilized in a program without attempting to use it as a modular software solution. This made sense because the "class" was attempting to model a real-world object with particular attributes.
This same concept was implemented in Common LISP with dynamic objects using a "metaobject" protocol. This class-less design meant that objects could be constructed rather than pre-defined at compile time.
Smalltalk was the first language with truly first-class objects. In Smalltalk, everything was an object. Yet the type system was dynamic! This meant that Smalltalk shared a great deal with Simula, where the object system existed to provide a mechanism for structured objects and not a unit of compilation and/or typing.
In fact, Smalltalk and Javascript share a great deal in common. Both languages are dynamically typed. In both languages, everything is an object. (Though Javascript quietly switches back and forth between primitives and objects as needed.) Both languages support the concept of duck-typing natively. (As opposed to Java's overweight reflection mechanism.) And both languages allow for dynamic object definition, with the primary caveat being Smalltalk requiring the class definition first while Javascript allows you to create the object dynamically. In that respect, Javascript ends up being somewhere between Smalltalk and Common LISP. Otherwise, Javascript is a "better" OOP language according to the Smalltalk "way" than Java is!
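A quick Javascript illustration of the duck-typing and dynamic object definition described above (the objects here are my own toy example):
Code:
// No shared class or interface; both objects just happen to have speak().
var duck  = {speak: function() { return 'Quack'; }};
var robot = {speak: function() { return 'Beep'; }};

// Duck typing: anything with a speak() method will do.
function greet(thing) {
    return typeof thing.speak === 'function' ? thing.speak() : '...';
}

greet(duck);    // 'Quack'
greet(robot);   // 'Beep'

// Dynamic definition: members can be added after construction.
duck.fly = function() { return 'Flap'; };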
To actually get to this modern idea of statically defined "classes" as we think of them in Java and C#, we need to go to C++. When Stroustrup developed the language, he was trying to develop a way of bringing Simula-like classes to the C language. Thus rather than inlining definitions in a dynamic manner like Simula, he found a way to use C structures in combination with a pre-processor to define classes through a combination of a header file and a code file. These combined would utilize Classes as a modularization mechanism with extremely strong typing.
Java was influenced by this idea as Gosling started by trying to create a better C++ compiler. Eventually this morphed into Java. Source:
http://www.cs.dartmouth.edu/~mckeeman/c ... epaper.pdf
The rest, as they say, is history. The idea of a class as a static modularization solution took on a life of its own and gave us the modern world of domain models being translated into object hierarchies, which creates difficult-to-manage code that is often less reliable than non-OOP solutions.
As a final thought on the combined issue of reliability with dynamic code, I'd like to point out that some of the most reliable systems in the world run on Erlang. Those who are familiar with Erlang know that it's used heavily in the telecommunications sector, where system-wide failure simply isn't an option. If you're not familiar with the language, I highly recommend starting with Joe Armstrong's 2003 thesis on the topic:
http://www.sics.se/~joe/thesis/armstron ... s_2003.pdf
jbanes wrote:... Which means that we should make a computer do it.
When you can deterministically generate all possible optimizations/possibilities for a given set of code without any developer input... Go join the GCC team.
Of course we can't. My point was not that we can predict everything ahead of time. My point was that as soon as you find yourself doing something repetitively, you should refactor to optimize it away. Using object-mapping code as the example, the problem was that programmers were spending all their time changing the objects to match the latest iteration of the model. This made no sense, as these objects were adding no value over the raw data structures. We "made the computer do it" in two stages. The first was moving to Maps and Lists. This dynamic structure enabled the second change, which was the ability to configure operations like common transformations (e.g. we could say "add key x and key y, output result to key z") rather than repetitively coding these operations against each object structure. This worked much better than the rats' nest of meta-typing needed to, for example, "generically" identify two arbitrary numbers as being capable of being added.
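To sketch what that configuration-driven approach looks like (in Javascript rather than our actual Java, and with made-up operation and key names):
Code:
// Each operation is a small, reusable piece of functionality.
var transforms = {
    add: function(rec, step) {
        rec[step.z] = rec[step.x] + rec[step.y];
        return rec;
    }
};

// "Add key x and key y, output result to key z" expressed as configuration:
var config = [{op: 'add', x: 'subtotal', y: 'tax', z: 'total'}];

function applyTransforms(record, config) {
    return config.reduce(function(rec, step) {
        return transforms[step.op](rec, step);
    }, record);
}

applyTransforms({subtotal: 100, tax: 8}, config);
// => {subtotal: 100, tax: 8, total: 108}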
Defining these small pieces of highly reusable functionality means that the functionality can not only be driven through configuration, but also dynamically through runtime recognition. For example, in Javascript I've written filters that can recursively traverse a set and apply the filter logic anywhere in the hierarchy. Thus a simple definition like this:
exact match where [a, b, c] is 'Hi'
Can dynamically "match" the parent record for any of the following structures:
Code:
{
    a: {
        b: {c: 'Hi'}
    }
}

{
    a: {
        b: [{c: 'Bye'}, {c: 'Hi'}]
    }
}

{
    a: [
        {b: [{c: 'Bye'}]},
        {b: [{c: 'Hi'}]}
    ]
}
I can create such a filter in very few lines of a dynamic, functional language like Javascript. Trying to define a similar solution in Java has nearly given me an aneurysm; the type system keeps getting in the way, requiring very complicated code to reproduce the same behavior.
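For the curious, a minimal sketch of such a recursive filter follows. The function name and the way I encode the path are my own for this example; the real filters are more general, but the shape of the solution is the same.
Code:
// matches(value, path, target): true if following `path` through any
// combination of objects and arrays in `value` reaches `target` exactly.
function matches(value, path, target) {
    if (path.length === 0) {
        return value === target;              // leaf: compare directly
    }
    if (Array.isArray(value)) {
        // Descend into every element; a hit anywhere matches the parent.
        return value.some(function(item) {
            return matches(item, path, target);
        });
    }
    if (value !== null && typeof value === 'object') {
        return matches(value[path[0]], path.slice(1), target);
    }
    return false;                             // primitive mid-path: no match
}

// All three structures above "match" for [a, b, c] and 'Hi':
matches({a: {b: {c: 'Hi'}}}, ['a', 'b', 'c'], 'Hi');    // true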
Hopefully that addresses most of the concerns you raised and cites many of the specific arguments I'm attempting to make.