Functional Programming (was Pitfalls of lacking a MMU)

turdus
Member
Posts: 496
Joined: Tue Feb 08, 2011 1:58 pm

Re: Pitfalls of lacking a MMU

Post by turdus »

Brendan wrote:I don't think you understand the difference between 100% and 99.9999995%.
I agree with you Brendan, except on this one. There's a mathematical proof that 100% equals 99.999*% (where * means an infinite number of digits). Of course, if you put precisely a 5 at the end, they won't be equal.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Pitfalls of lacking a MMU

Post by Brendan »

Hi,
Combuster wrote:Haskell for example has a special "monad" object called IO that indicates system state.
It's not that pure functional languages don't allow state, it's that they don't allow side-effects (which includes state). A monad is a "non-pure" addition which allows side-effects and/or state, which therefore makes Haskell "functional programming" rather than "pure functional programming" (or at least, if/when monads are used).
turdus wrote:
Brendan wrote:I don't think you understand the difference between 100% and 99.9999995%.
I agree with you Brendan, except on this one. There's a mathematical proof that 100% equals 99.999*% (where * means an infinite number of digits). Of course, if you put precisely a 5 at the end, they won't be equal.
I deliberately put a 5 at the end for a reason.. ;)

My point was mostly about accurate terminology - "100%" (or "99.99999*%") rather than "99.9999995%", or "all" rather than "almost all", or "none" rather than "least needed (for some definitions of "needed")".


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: Pitfalls of lacking a MMU

Post by Combuster »

Brendan wrote:therefore makes Haskell "functional programming" rather than "pure functional programming"
wikipedia wrote:Haskell (pronounced /ˈhæskəl/)[3] is a standardized, general-purpose purely functional programming language
Combuster wrote:Every single functional language allows state, which works as follows
Never thought I would accuse you of not reading :shock:

Case in point: a Haskell IO function is an equation that describes the resulting IO state given the explicit previous state and input. There are no side effects, other than that you are dealing with a type whose internals you can't access.
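
Roughly, as a sketch (MyIO and World are made-up names here, not GHC's actual internals, which use an unboxed "State# RealWorld" token):

Code: Select all

-- Illustrative only: model an IO action as a pure function from an opaque
-- "world" token to a result plus a new world token.
newtype World  = World ()
newtype MyIO a = MyIO { runMyIO :: World -> (a, World) }

-- Sequencing threads the world through explicitly, so every effect depends
-- on the one before it; nothing happens "on the side".
bindIO :: MyIO a -> (a -> MyIO b) -> MyIO b
bindIO m k = MyIO (\w -> let (a, w') = runMyIO m w in runMyIO (k a) w')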
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Pitfalls of lacking a MMU

Post by Rusky »

Brendan wrote:You're avoiding the issue.

How about, to avoid run-time checks, internally the compiler converts "int foo(int value)" to "int_with_standard_range foo(int_that_accepts_values_1to3_45_23_1983_6to15_4321to4444_elephant_and_69)", in a similar way to the name mangling C++ does to ensure type correctness during linking?
What issue am I avoiding? The issue I'm not avoiding is that it is possible for the compiler to verify that all memory accesses will be in range, without significantly restricting the tools available to the programmer, thus obviating the need for hardware memory protection. Name mangling has absolutely nothing to do with that- it could do it with ordinary separate compilation, using header files or (as is more likely with managed code) by reading the metadata in other modules. By the time it gets to the linker everything's been verified.
Brendan wrote:In the same way, I should've said "pure functional car", so people don't get confused and think I'm talking about cars that were wrapped in plastic to work-around the idiocy of the primary design goal (preventing people from driving in the rain).

The third paper applies dependent types to some contrived language they had to invent specifically for the purpose - it's hardly "regular" or "old".
I still don't understand your aversion to research. It's got nothing to do with functional programming, and it's got everything to do with verifying memory access. It doesn't matter how "contrived" or "non-functioning" their languages are if they still have the same underlying features to work with - mutable state, control structures like branching and looping, etc. Dependent types are a language feature, so it's either "contrive" one or do something like this paper where they extend C instead.
Combuster wrote:As for array indices, it is possible as per the proof I posted earlier that you can generate the least amount of checks needed for any terminating algorithm. It is however completely infeasible to do so as it has not yet turned out to be achievable in polynomial time. Therefore compilers use proving techniques (with heuristics to find correctness proofs) to eliminate many access checks. If you have loops over arrays you can for instance check that the index is bounded by zero and the array size, and that the array size does not change during the course of the loop. This becomes more and more difficult as the amount of code in the loop changes...
It is not infeasible to use an algorithm rather than a heuristic. In the first paper I referenced, despite using a double-exponential time algorithm, they found in practice that there just aren't that many constraints to deal with - see, for example, section 10.3. Further, there are more efficient algorithms as well, including one with polynomial average-case complexity and exponential worst-case complexity (section 10.1).
Combuster wrote:...and becomes impossible if the effects of called code on any argument becomes unavailable. Whole-program optimizers can pull the boundary of knowledge over the file border and make completions of correctness proofs more feasible.
This is simply not true. Dependent type annotations localize the problem- there's no need to look at the effects of called code if you know its type, which can be checked separately.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Pitfalls of lacking a MMU

Post by Rusky »

Combuster wrote:Every single functional language allows state...
False. Learn about functional reactive programming.

Monads like IO that mimic imperative languages are not the only way to deal with what you view as mutable state- imagine defining a program's behavior over time declaratively, i.e. as a function of its inputs. In many cases, this is in fact a clearer way of doing things.
Brendan wrote:It's not that pure functional languages don't allow state, it's that they don't allow side-effects (which includes state). A monad is a "non-pure" addition which allows side-effects and/or state, which therefore makes Haskell "functional programming" rather than "pure functional programming" (or at least, if/when monads are used).
This is, in fact, not what monads are. Only the IO monad, because it is "special" and understood by the runtime (in the way Combuster describes), has anything at all to do with side effects- other monads are entirely defined within the pure-functional language. Further, as I described above, the IO monad style of adding side effects to the language runtime is not the only way of writing functional programs.

Most monads are things like optional types, lists, parsers, etc. Wikipedia has a good collection of examples. Another very good example of monads is Haskell's Parsec parser combinator library. The fact that monads are used for side effects and IO is a (sad, in my opinion) side effect of people stuffing Haskell into mainstream operating systems.
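
For instance, here is a purely functional monad with no effects anywhere in sight (a toy sketch; safeDiv and parseAndDivide are just illustrative names):

Code: Select all

import Text.Read (readMaybe)

-- The Maybe monad chains computations that may fail; there is no IO, no
-- hidden state, and no side effects - failure just short-circuits to Nothing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

parseAndDivide :: String -> String -> Maybe Int
parseAndDivide a b = do
  x <- readMaybe a      -- Nothing if 'a' is not a number
  y <- readMaybe b      -- Nothing if 'b' is not a number
  safeDiv x y           -- Nothing if 'y' is zero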
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Pitfalls of lacking a MMU

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:You're avoiding the issue.

How about, to avoid run-time checks, internally the compiler converts "int foo(int value)" to "int_with_standard_range foo(int_that_accepts_values_1to3_45_23_1983_6to15_4321to4444_elephant_and_69)", in a similar way to the name mangling C++ does to ensure type correctness during linking?
What issue am I avoiding? The issue I'm not avoiding is that it is possible for the compiler to verify that all memory accesses will be in range, without significantly restricting the tools available to the programmer, thus obviating the need for hardware memory protection. Name mangling has absolutely nothing to do with that- it could do it with ordinary separate compilation, using header files or (as is more likely with managed code) by reading the metadata in other modules. By the time it gets to the linker everything's been verified.
So, to verify that the callers are using valid values when calling functions in separate object files; for a function named "foo" where only some ranges of input values are valid, you are suggesting that either:
  • the programmer has to put something like "int_with_standard_range foo(int_that_accepts_values_1to3_45_23_1983_6to15_4321to4444_elephant_and_69)" in a header file and include that header file in the file that defines "foo()" (and not just in the files that use "foo()")
  • the compiler has to put the equivalent of something like "int_with_standard_range foo(int_that_accepts_values_1to3_45_23_1983_6to15_4321to4444_elephant_and_69)" in some metadata somewhere
The first alternative is a huge pain in the neck that no sane programmer is ever going to want to do (wouldn't be so bad if it was just the minimum and maximum values, but that's not what we're talking about).

The second alternative has other issues. If the first source file contains "foo()" and calls the function "bar()" in the second source file; and the second source file contains "bar()" and calls "foo()" in the first source file; then should the first source file be compiled first (to create the metadata needed to compile the second source file) and fail because the metadata for the second source file doesn't exist yet; or should the second source file be compiled first (to create the metadata needed to compile the first source file) and fail because the metadata for the first source file doesn't exist yet?

To get around that, maybe you could add a new stage to compiling: compile all source files to an intermediate representation and generate metadata while postponing any verification that relies on metadata that may not exist yet; then once all the metadata exists do the compiling and verification properly and create object files; then link the object files. In this case, why not combine the last 2 steps and have a special linker, and just put the metadata in the object files? Basically, convert "int foo(int value)" to "int_with_standard_range foo(int_that_accepts_values_1to3_45_23_1983_6to15_4321to4444_elephant_and_69)", store that in the object file, and let the linker verify that all the callers (from other object files) are passing valid input values?
Rusky wrote:
Brendan wrote:In the same way, I should've said "pure functional car", so people don't get confused and think I'm talking about cars that were wrapped in plastic to work-around the idiocy of the primary design goal (preventing people from driving in the rain).

The third paper applies dependent types to some contrived language they had to invent specifically for the purpose - it's hardly "regular" or "old".
I still don't understand your aversion to research. It's got nothing to do with functional programming, and it's got everything to do with verifying memory access. It doesn't matter how "contrived" or "non-functioning" their languages are if they still have the same underlying features to work with - mutable state, control structures like branching and looping, etc. Dependent types are a language feature, so it's either "contrive" one or do something like this paper where they extend C instead.
From that "extending C" paper (in the introduction):
  • Decidability: Dependent type checking involves reasoning about the run-time values of expressions. In most previous dependent type systems, dependencies are restricted to the point where all checking can be done statically. Instead, we propose the use of run-time checks where static checking is not sufficient. This hybrid type-checking strategy, which has also been used recently by Flanagan [7], is essential for handling real-world code.
Earlier in this discussion you were saying "everything verified with no run-time checks". That's what I disagreed with. After that, you revised it to "everything verified with no run-time checks, except things from untrusted sources (e.g. file IO) that a sane programmer would check at run-time anyway". I agreed that was possible in theory, but didn't agree that it's possible in practice (in the same way that a compiler's optimiser is able to generate "100% optimal code" in theory, but not in practice). Now you're referring me to papers that say a hybrid strategy (compile-time checks and run-time checks) is "essential for handling real-world code."?

I don't have an "aversion to all research". I do think that in some cases researchers try to write things in ways that make it unnecessarily difficult for people outside their special area of research to understand, rather than doing the opposite (the intended audience should be those who might be interested in the research in the real world, and not other researchers and/or the researcher's professor). For example, why are researchers too stupid to understand what a glossary is for? I don't like people using research in inappropriate ways (for example, a research paper on "the best way to skin a cat" is not an endorsement of skinning cats). I also don't like it when people fail to realise that most things that are useful in practice have been extensively researched already, and therefore a lot of new research is being done on things that are unlikely to be useful in practice.

I do have a problem with "functional programming". Ask your grandmother how to make a cake. I can almost guarantee your grandmother will give you a list of steps ("make_cake() {put flour in a bowl; add eggs to the bowl; mix the ingredients; bake in oven; let cool; make_icing(); put icing on cake;}") with subroutines/procedures/functions ("make_icing() {put icing sugar in a bowl; boil water; put butter in bowl; put hot water in bowl; mix the ingredients;}"). Imperative programming is the way humans naturally think. Functional programming requires strange and unnatural thought; and is therefore less suitable for languages intended to be used by humans (e.g. high level programming languages). Basically, imperative programming does have problems in certain areas (e.g. concurrency); but the "functional programming" solution is worse than the problems it tries to solve.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: Pitfalls of lacking a MMU

Post by Combuster »

False. Learn about functional reactive programming.
Show me any functional language that matches the criterion (there are none as far as Wikipedia goes) and a sample implementation of maths performed on 3D vectors, and I'll show you how state can be implemented in that language. After all, I have already demonstrated the general solution to the problem.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Pitfalls of lacking a MMU

Post by Rusky »

Brendan wrote:So, to verify that the callers are using valid values when calling functions in separate object files; for a function named "foo" where only some ranges of input values are valid, you are suggesting that either:
  • the programmer has to put something like "int_with_standard_range foo(int_that_accepts_values_1to3_45_23_1983_6to15_4321to4444_elephant_and_69)" in a header file and include that header file in the file that defines "foo()" (and not just in the files that use "foo()")
  • the compiler has to put the equivalent of something like "int_with_standard_range foo(int_that_accepts_values_1to3_45_23_1983_6to15_4321to4444_elephant_and_69)" in some metadata somewhere
The first alternative is a huge pain in the neck that no sane programmer is ever going to want to do (wouldn't be so bad if it was just the minimum and maximum values, but that's not what we're talking about).
You've fundamentally misunderstood dependent types, despite my efforts to provide plenty of explanations and example code... There's no "int_with_standard_range" or "int_made_of_trolls", there's (to use the syntax of DML, which you obviously still haven't paid any attention to) [i:int] int(i) and {i:nat | sqrt(i) & 251 < 1234} int(i), both of which can be inferred by the compiler (thus the suggestion of metadata rather than human-maintained header files).
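
For a rough flavour of the idea in something closer to home (this is a Haskell GADT sketch, not DML itself): once the length of a vector is part of its type, an out-of-range index is a compile-time type error and needs no run-time check.

Code: Select all

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- An index that is provably less than n.
data Fin (n :: Nat) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

-- Total indexing: no bounds check at run time, and no way to go out of range.
index :: Vec n a -> Fin n -> a
index (Cons x _)  FZ     = x
index (Cons _ xs) (FS i) = index xs i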
Brendan wrote:whining about circular dependencies
Go ask the C# guys- they've been dealing with that exact problem in .NET for years. It's got nothing to do with dependent types.
Brendan wrote:Earlier in this discussion you were saying "everything verified with no run-time checks". That's what I disagreed with. After that, you revised it to "everything verified with no run-time checks, except things from untrusted sources (e.g. file IO) that a sane programmer would check at run-time anyway". I agreed that was possible in theory, but didn't agree that it's possible in practice (in the same way that a compiler's optimiser is able to generate "100% optimal code" in theory, but not in practice). Now you're referring me to papers that say a hybrid strategy (compile-time checks and run-time checks) is "essential for handling real-world code."?
I've never changed my position. My exact words were, in fact, "When your compiler can verify, with no runtime overhead, that all memory accesses are in bounds, why would you not use that method?" As we've already established, the definition (at least the one I'm using) of "runtime overhead" does not include things that must be checked anyway. And as you should know, the fact that I link to a paper does not mean I agree with every word of it, especially considering the way it was cited.
Brendan wrote:I don't have an "aversion to all research"...
Could've fooled me, "wanky academic crap".
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Pitfalls of lacking a MMU

Post by Rusky »

Combuster wrote:Show me any functional language that matches the criterion (there are none as far as Wikipedia goes) and a sample implementation of maths performed on 3D vectors, and I'll show you how state can be implemented in that language. After all, I have already demonstrated the general solution to the problem.
I don't care if you can implement state. I know you can- Haskell has the State monad, for example. That kind of state is just fine, because it doesn't hurt referential transparency or cause race conditions or "spooky action at a distance". That kind of state is not what functional programming is concerned with.
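
For the record, the State monad itself is nothing more than pure function plumbing. A stripped-down sketch (the real thing lives in Control.Monad.State; the names here are simplified):

Code: Select all

-- "State" is just a function from the old state to (result, new state),
-- so it stays referentially transparent.
newtype State s a = State { runState :: s -> (a, s) }

tick :: State Int Int
tick = State (\n -> (n, n + 1))

-- Sequence two stateful steps by hand; the real State monad's (>>=) does this.
andThen :: State s a -> (a -> State s b) -> State s b
andThen m k = State (\s -> let (a, s') = runState m s in runState (k a) s')

twoTicks :: ((Int, Int), Int)
twoTicks = runState (tick `andThen` \a ->
                     tick `andThen` \b ->
                     State (\s -> ((a, b), s))) 0
-- twoTicks == ((0, 1), 2): the two results and the final counter value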

Neither the IO monad nor functional reactive programming is necessary for 3D vector math:

Code: Select all

-- fixed-size vectors
data Vec3 = Vec3 Int Int Int
dot (Vec3 x1 y1 z1) (Vec3 x2 y2 z2) = x1 * x2 + y1 * y2 + z1 * z2
cross (Vec3 x1 y1 z1) (Vec3 x2 y2 z2) = Vec3 (y1 * z2 - y2 * z1) (x2 * z1 - x1 * z2) (x1 * y2 - x2 * y1)

-- arbitrary-sized vectors
dot a b = sum $ zipWith (*) a b
cross = -- not going to implement gram determinant here... you get the point
However, functional reactive programming is useful to replace the IO monad. Instead of describing a sequence of actions, you describe the behavior of a program by composing other behaviors- the spaceship's position is the integral of its velocity, its velocity is 3 * (key_right - key_left); the window is two panes laid out horizontally, the first pane is a tree view with this or that backing data, etc. See Tangle, flapjax, Reactive Extensions, Fran, wxFruit, this example, and google.
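
A toy sketch of that style (no particular library; keyLeft and keyRight are stand-ins for real input signals):

Code: Select all

-- A Behavior is just a value that varies over time.
type Time       = Double
type Behavior a = Time -> a

-- Hypothetical input signals; a real FRP library would supply these.
keyLeft, keyRight :: Behavior Double
keyLeft  _ = 0
keyRight _ = 1

-- velocity = 3 * (key_right - key_left), as described above
velocity :: Behavior Double
velocity t = 3 * (keyRight t - keyLeft t)

-- position = integral of velocity (crude numerical integration, for illustration)
position :: Behavior Double
position t = sum [velocity s * dt | s <- [0, dt .. t]]
  where dt = 0.01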

Yes, yes, it's all stateful somewhere underneath blah blah blah. That's just a consequence of building computers as state machines- it would also be possible, if you wanted, to build a computer the same way as a reactive or dataflow program. In fact, that is the natural way to build electronic systems, until people start adding flip flops and such.
Brendan wrote:I do have a problem with "functional programming". Ask your grandmother how to make a cake. I can almost guarantee your grandmother will give you a list of steps ("make_cake() {put flour in a bowl; add eggs to the bowl; mix the ingredients; bake in oven; let cool; make_icing(); put icing on cake;}") with subroutines/procedures/functions ("make_icing() {put icing sugar in a bowl; boil water; put butter in bowl; put hot water in bowl; mix the ingredients;}"). Imperative programming is the way humans naturally think. Functional programming requires strange and unnatural thought; and is therefore less suitable for languages intended to be used by humans (e.g. high level programming languages). Basically, imperative programming does have problems in certain areas (e.g. concurrency); but the "functional programming" solution is worse than the problems it tries to solve.
What a load of BS. This is the standard whiny fallback complaint from people who are stuck in the imperative programming rut. Of course your grandmother describes something procedurally when it's done by a single actor doing one thing at a time! That doesn't mean everything is like that- for every example of something described well by imperative programming, I can come up with one that is described well by functional programming.

We've all written code that involves expressions. Expressions are just functional programming "leaking" into your imperative world. Take these two ways to align a pointer, for example: "p + (size - p % size)" and "mov ax, si; mov cx, $size; div cx; sub cx, dx; add si, cx". Which is more "natural"? Which is closer to what you mean, or how you think? Which is easier to understand at a glance? Which is easier for a compiler to optimize with e.g. strength reduction?

How about spreadsheets? You don't write imperative programs to calculate sums, sort tables, or graph data. Instead, you simply enter data and its relationships with other data, as what is essentially a functional program. Would you really claim that spreadsheets are an unnatural way of thinking about things?

Like my FRP GUI example above, functional programming is also "leaking" in through declarative GUI description tools like WPF. Which is easier- describing a GUI as a hierarchy of widgets whose contents are based on values that vary over time (position of scrollbar, color in picker, position of mouse, contents of textbox/bitmap etc.), or describing a GUI by making a procedural list of calls to an API and then switching in an event loop, manually updating things like the current paintbrush color or whatever?

The existence of tools like regular expressions or lex and yacc is further evidence that functional programming is not "unnatural". Just describe the patterns that make up what you want to parse and (using expressions) what they represent in your internal representation, rather than manually incrementing a pointer and writing data to a chunk of memory.
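
A hand-rolled toy version of that idea (deliberately tiny; Parsec and friends are the industrial-strength versions):

Code: Select all

import Data.Char (isDigit)

-- A parser is a function from input to (result, remaining input), or failure.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

-- Accept one character satisfying a predicate.
item :: (Char -> Bool) -> Parser Char
item p = Parser f
  where
    f (c:cs) | p c = Just (c, cs)
    f _            = Nothing

-- One or more occurrences of a parser.
many1 :: Parser a -> Parser [a]
many1 p = Parser f
  where
    f s = case runParser p s of
      Nothing      -> Nothing
      Just (x, s') -> case runParser (many1 p) s' of
        Just (xs, s'') -> Just (x : xs, s'')
        Nothing        -> Just ([x], s')

-- "A number is one or more digits" - reads like the grammar itself.
number :: Parser Int
number = Parser f
  where
    f s = case runParser (many1 (item isDigit)) s of
      Just (ds, s') -> Just (read ds, s')
      Nothing       -> Nothing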

For a non-programming example, think about how your grandma would describe her house. "It has a front porch with a rocking chair on the left, a front room with blue carpet and pictures of the grandkids on the walls, a kitchen through a door at the back with a stove, a sink, and a refrigerator..." A construction company could build a house with a description like that (albeit much more detailed)- blueprints aren't step-by-step instructions, they're functional programming. 8)
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: Pitfalls of lacking a MMU

Post by Combuster »

Rusky wrote:
Combuster wrote:Show me any functional language that matches the criterion (there are none as far as Wikipedia goes) and a sample implementation of maths performed on 3D vectors, and I'll show you how state can be implemented in that language. After all, I have already demonstrated the general solution to the problem.
I don't care if you can implement state. I know you can- Haskell has the State monad, for example. That kind of state is just fine, because it doesn't hurt referential transparency or cause race conditions or "spooky action at a distance". That kind of state is not what functional programming is concerned with.
I posted a C-style solution here because I wouldn't expect everybody to be familiar with Haskell (even though <troll>it happens to be the most beautiful example of the construct in question</troll>). In which case:

Code: Select all

data Object = Posvel Vec3 Vec3           -- position and velocity (Vec3 as defined earlier)
data State = ScoreObjects Int [Object]   -- a score plus the simulated objects

physicsloop :: State -> Int -> State
physicsloop (ScoreObjects n xs) t = ScoreObjects n (map f xs)
   where f (Posvel pos vel) = Posvel (pos `addV` scaleV t vel) vel
         addV (Vec3 x1 y1 z1) (Vec3 x2 y2 z2) = Vec3 (x1 + x2) (y1 + y2) (z1 + z2)
         scaleV s (Vec3 x y z) = Vec3 (s * x) (s * y) (s * z)
As such, we have state, without side effects.

Now imagine the following:

Code: Select all

data State = BigObject [Files] [Sockets] [Processes] [Events] Morejunk
println :: State -> String -> State
That's still purely functional, because we still do not have side effects. All the effects are there and accounted for. The compiler knows that any modification of State depends on any previous modifications of that state, and therefore does not need any language-specific construct for input and output.

The only thing Haskell does beyond that is hide that clumsy State object in an opaque IO Monad because it's too unwieldy in its explicit form. That does not otherwise change that all changes in state are explicit, and there are therefore no side effects. That absolute lack of side effects is also why Haskell is considered a pure functional language.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Pitfalls of lacking a MMU

Post by Rusky »

Oops, you got me. I guess technically every language "allows" state.

My point is that not every language (needs to) allow accessing the "outside world" in that way. It's perfectly possible to expose files, network, whatever in, for example, a functional-reactive way instead, with no state threading, while remaining purely functional.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Pitfalls of lacking a MMU

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:I do have a problem with "functional programming". Ask your grandmother how to make a cake. I can almost guarantee your grandmother will give you a list of steps ("make_cake() {put flour in a bowl; add eggs to the bowl; mix the ingredients; bake in oven; let cool; make_icing(); put icing on cake;}") with subroutines/procedures/functions ("make_icing() {put icing sugar in a bowl; boil water; put butter in bowl; put hot water in bowl; mix the ingredients;}"). Imperative programming is the way humans naturally think. Functional programming requires strange and unnatural thought; and is therefore less suitable for languages intended to be used by humans (e.g. high level programming languages). Basically, imperative programming does have problems in certain areas (e.g. concurrency); but the "functional programming" solution is worse than the problems it tries to solve.
What a load of BS. This is the standard whiny fallback complaint from people who are stuck in the imperative programming rut. Of course your grandmother describes something procedurally when it's done by a single actor doing one thing at a time! That doesn't mean everything is like that- for every example of something described well by imperative programming, I can come up with one that is described well by functional programming.
"Described well" isn't the same as "described in ways that people think naturally". Maybe you're right and functional programming is so natural that you've never needed to explain anything to anyone, and haven't become so tired of valid criticism that you resort to calling them "whiny fallback complaints". Maybe the problem is that everyone in the world is wrong except you.
Rusky wrote:We've all written code that involves expressions. Expressions are just functional programming "leaking" into your imperative world. Take these two ways to align a pointer, for example: "p + (size - p % size)" and "mov ax, si; mov cx, $size; div cx; sub cx, dx; add si, cx". Which is more "natural"? Which is closer to what you mean, or how you think? Which is easier to understand at a glance? Which is easier for a compiler to optimize with e.g. strength reduction?
Both ways to align a pointer are the same, except the assembly version is in assembly and has a bug that will probably cause a "divide error" because you don't know what DX contains before the division. For your larger point, almost everything allows expressions that can have state and side-effects (e.g. "foo = bar * myArray[x++]").
Rusky wrote:How about spreadsheets? You don't write imperative programs to calculate sums, sort tables, or graph data. Instead, you simply enter data and its relationships with other data, as what is essentially a functional program. Would you really claim that spreadsheets are an unnatural way of thinking about things?
A spreadsheet is a grid of procedures (that are executed in order). The procedures have no arguments, and everything is side-effects. Except for problems with type-checking, it'd be simple to convert a spreadsheet into C.
Rusky wrote:Like my FRP GUI example above, functional programming is also "leaking" in through declarative GUI description tools like WPF.
It's far more likely that you're "seeing" functional programming where it never was. If someone works with pancakes all day and you show them a picture of a full moon their first thought will be "it's a black and white picture of a pancake". If someone looks through a microscope all day their first thought might be "leather at 1000x zoom". Of course the incorrect illusion only lasts for a very small fraction of time - people realise it's not what they thought it was quickly and recognise it properly because it's easy to distinguish a moon from a pancake. When it's not easy to distinguish what something is, people can think it's something it's not for a very long time. Worst case is where you can't tell the difference at all. For example:
[Image: the classic "faces or vase" optical illusion]
What is this picture? Someone who works with vases a lot would think it's a vase, and someone who works with people a lot would think it's faces.
Rusky wrote: Which is easier- describing a GUI as a hierarchy of widgets whose contents are based on values that vary over time (position of scrollbar, color in picker, position of mouse, contents of textbox/bitmap etc.), or describing a GUI by making a procedural list of calls to an API and then switching in an event loop, manually updating things like the current paintbrush color or whatever?
I'd have a hierarchy of objects, where the code for each object is an event loop, and each event handler uses side-effects to change the object's state, location, appearance, etc. I'd assume most people would use something "OOP-like" for this too; with state and side-effects.
Rusky wrote:The existence of tools like regular expressions or lex and yacc is further evidence that functional programming is not "unnatural". Just describe the patterns that make up what you want to parse and (using expressions) what they represent in your internal representation, rather than manually incrementing a pointer and writing data to a chunk of memory.
So you're saying that imperative programming can use functional programming where it's beneficial (e.g. regular expressions) and ignore it when it's not beneficial; while functional programming can't use imperative programming when it's beneficial; and therefore imperative programming is superior because it's more flexible?

Note: I've never learnt/used lex or yacc (and I'm too lazy to find out), so I don't know if they actually are functional programming or just another one of your delusions.
Rusky wrote:For a non-programming example, think about how your grandma would describe her house. "It has a front porch with a rocking chair on the left, a front room with blue carpet and pictures of the grandkids on the walls, a kitchen through a door at the back with a stove, a sink, and a refrigerator..." A construction company could build a house with a description like that (albeit much more detailed)- blueprints aren't step-by-step instructions, they're functional programming. 8)
Blueprints aren't programming, and therefore can't be functional (or imperative) programming. Why don't you try to convince people that "emacs" is female?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Pitfalls of lacking a MMU

Post by Rusky »

Nothing to say about dependent types? Or did you just run out of excuses not to do your research?
Brendan wrote:"Described well" isn't the same as "described in ways that people think naturally". Maybe you're right and functional programming is so natural that you've never needed to explain anything to anyone, and haven't become so tired of valid criticism that you resort to calling them "whiny fallback complaints". Maybe the problem is that everyone in the world is wrong except you.
I almost don't know how to reply to that, it's so ridiculous. By any reasonable criteria (especially your own of being suitable for use by humans), "described well" is exactly the same as "described in ways that people think naturally." As someone who has actually written programs using functional programming, I'm going to have to assert (again) that functional programming is as natural a way to think about some things as imperative programming is about others, and that anyone who claims otherwise is full of BS.
Brendan wrote:It's far more likely that you're "seeing" functional programming where it never was. If someone works with pancakes all day and you show them a picture of a full moon their first thought will be "it's a black and white picture of a pancake". If someone looks through a microscope all day their first thought might be "leather at 1000x zoom". Of course the incorrect illusion only lasts for a very small fraction of time - people realise it's not what they thought it was quickly and recognise it properly because it's easy to distinguish a moon from a pancake. When it's not easy to distinguish what something is, people can think it's something it's not for a very long time. Worst case is where you can't tell the difference at all. For example:...What is this picture? Someone who works with vases a lot would think it's a vase, and someone who works with people a lot would think it's faces.
This argument applies far, far, more to you, who have obviously dismissed functional programming as "wanky academic crap", than to someone who reads and writes code in both paradigms regularly. You selectively choose an example (grandma baking a cake) that supports your position, and when confronted with anything slightly outside of your view (expressions, spreadsheets, grammars, blueprints...) you twist it to fit.
Brendan wrote:So you're saying that imperative programming can use functional programming where it's beneficial (e.g. regular expressions) and ignore it when it's not beneficial; while functional programming can't use imperative programming when it's beneficial; and therefore imperative programming is superior because it's more flexible?
To "imperative programming can use functional programming where it's beneficial": Absolutely! More generally, most programming paradigms can be improved by borrowing ideas from other paradigms- imperative programming is immensely improved using several ideas from functional programming. Expressions (pretty much the birth of functional programming), closures/nested functions (from Lisp), higher order functions (i.e. function pointers, in your terms), immutable values and non-destructive updates, etc.
Brendan wrote:Both ways to align a pointer are the same, except the assembly version is in assembly and has a bug that will probably cause a "divide error" because you don't know what DX contains before the division. For your larger point, almost everything allows expressions that can have state and side-effects (e.g. "foo = bar * myArray[x++]").
Isn't the assembly bug a reason for the C version? If they're otherwise the same, isn't it better to let the compiler perform register allocation, handle the details of the target ISA, perform strength reduction optimizations, etc? By the way, your "almost everything" includes many functional languages, including ML, Lisp, Erlang, etc... In fact, expressions were invented by Backus for, guess what, the same reasons that later led him to advocate functional programming.
Brendan wrote:A spreadsheet is a grid of procedures (that are executed in order). The procedures have no arguments, and everything is side-effects. Except for problems with type-checking, it'd be simple to convert a spreadsheet into C.
The user of a spreadsheet never specifies anything imperatively. How are "=sum(a1:a4)" or "=average(d3:f3)" anything but functional programming? They take input as arguments and give output as return values, with no other state or side effects. How is a graph a side effect? It just takes input and gives output, also with no side effects. Cells are transparently current, reflecting the rest of the spreadsheet- sounds an awful lot like functional reactive programming...
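
A trivial sketch of that reading of a spreadsheet (the cell names are just illustrative Haskell bindings, not a real spreadsheet engine):

Code: Select all

-- Each "cell" is a pure function of the other cells, not a procedure run in
-- order; recalculating the sheet is just re-evaluating the expressions.
a1, a2, a3, a4 :: Double
a1 = 10
a2 = 20
a3 = 30
a4 = 40

sumA1A4, avgA1A4 :: Double
sumA1A4 = sum [a1, a2, a3, a4]   -- like =SUM(A1:A4)
avgA1A4 = sumA1A4 / 4            -- like =AVERAGE(A1:A4)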
Brendan wrote:I'd have a hierarchy of objects, where the code for each object is an event loop, and each event handler uses side-effects to change the object's state, location, appearance, etc. I'd assume most people would use something "OOP-like" for this too; with state and side-effects.
That's half of what I described- a hierarchy of widgets can be much more easily described declaratively. Use imperative event handlers if you like, but know that it's possible and often natural to describe a GUI functionally.
Brendan wrote:Note: I've never learnt/used lex or yacc (and I'm too lazy to find out), so I don't know if they actual are functional programming or just another one of your delusions.
Surely you've seen BNF or similar descriptions of languages.
Brendan wrote:Blueprints aren't programming, and therefore can't be functional (or imperative) programming. Why don't you try to convince people that "emacs" is female?
Blueprints are at least as much programming as your grandma describing how to bake a cake. Or else you'd better stop seeing imperative programming where it never was... :roll:
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Pitfalls of lacking a MMU

Post by Brendan »

Hi,
Rusky wrote:Nothing to say about dependent types? Or did you just run out of excuses not to do your research?
I thought I made it reasonably clear that I (like most programmers in the real world) couldn't be bothered deciphering academic research papers. Your opinion doesn't matter enough to justify the lost time.
Rusky wrote:
Brendan wrote:"Described well" isn't the same as "described in ways that people think naturally". Maybe you're right and functional programming is so natural that you've never needed to explain anything to anyone, and haven't become so tired of valid criticism that you resort to calling them "whiny fallback complaints". Maybe the problem is that everyone in the world is wrong except you.
I almost don't know how to reply to that, it's so ridiculous. By any reasonable criteria (especially your own of being suitable for use by humans), "described well" is exactly the same as "described in ways that people think naturally."
I'd define "described well" as "described in a concise and unambiguous manner". If you're familiar with languages like English, you'll know that "described naturally" is often neither concise nor unambiguous.
Rusky wrote:As someone who has actually written programs using functional programming, I'm going to have to assert (again) that functional programming is as natural a way to think about some things as imperative programming is about others, and that anyone who claims otherwise is full of BS.
Oh, sorry - I didn't realise you're right - functional programming is so natural that I'm a competent "functional programming" programmer too, and didn't even know it!

Of course there might be a small amount of sarcasm in that last sentence... ;)
Rusky wrote:
Brendan wrote:It's far more likely that you're "seeing" functional programming where it never was. If someone works with pancakes all day and you show them a picture of a full moon their first thought will be "it's a black and white picture of a pancake". If someone looks through a microscope all day their first thought might be "leather at 1000x zoom". Of course the incorrect illusion only lasts for a very small fraction of time - people realise it's not what they thought it was quickly and recognise it properly because it's easy to distinguish a moon from a pancake. When it's not easy to distinguish what something is, people can think it's something it's not for a very long time. Worst case is where you can't tell the difference at all. For example:...What is this picture? Someone who works with vases a lot would think it's a vase, and someone who works with people a lot would think it's faces.
This argument applies far, far, more to you, who have obviously dismissed functional programming as "wanky academic crap", than to someone who reads and writes code in both paradigms regularly. You selectively choose an example (grandma baking a cake) that supports your position, and when confronted with anything slightly outside of your view (expressions, spreadsheets, grammars, blueprints...) you twist it to fit.
Of course. It also applies to the majority of programmers in the world, spread across many very different fields. Like I already said - the problem is that everyone in the world is wrong except for you.

Rusky wrote:
Brendan wrote:So you're saying that imperative programming can use functional programming where it's beneficial (e.g. regular expressions) and ignore it when it's not beneficial; while functional programming can't use imperative programming when it's beneficial; and therefore imperative programming is superior because it's more flexible?
To "imperative programming can use functional programming where it's beneficial": Absolutely! More generally, most programming paradigms can be improved by borrowing ideas from other paradigms- imperative programming is immensely improved using several ideas from functional programming. Expressions (pretty much the birth of functional programming), closures/nested functions (from Lisp), higher order functions (i.e. function pointers, in your terms), immutable values and non-destructive updates, etc.
Finally we agree on something - functional programming is good, because you can abandon the entire idea of "no side effects" and turn it into imperative programming.
Rusky wrote:
Brendan wrote:Both ways to align a pointer are the same, except the assembly version is in assembly and has a bug that will probably cause a "divide error" because you don't know what DX contains before the division. For your larger point, almost everything allows expressions that can have state and side-effects (e.g. "foo = bar * myArray[x++]").
Isn't the assembly bug a reason for the C version? If they're otherwise the same, isn't it better to let the compiler perform register allocation, handle the details of the target ISA, perform strength reduction optimizations, etc?
So you've given up on functional programming, and now you're a C advocate? Are you claiming C is a functional programming language and not an imperative language? Have you suddenly changed to a "some imperative languages are better than others" argument for no obvious reason?
Rusky wrote:By the way, your "almost everything" includes many functional languages, including ML, Lisp, erlang, etc... In fact, expression were invented by Backus for, guess what, the same reasons he took further to advocate functional programming.
The same "functional programming is good, because you can abandon the entire idea of "no side effects" and turn it into imperative programming" argument?
Rusky wrote:
Brendan wrote:A spreadsheet is a grid of procedures (that are executed in order). The procedures have no arguments, and everything is side-effects. Except for problems with type-checking, it'd be simple to convert a spreadsheet into C.
The user of a spreadsheet never specifies anything imperatively. How are "=sum(a1:a4)" or "=average(d3:f3)" anything but functional programming? They take input as arguments and give output as return values, with no other state or side effects. How is a graph a side effect? It just takes input and gives output, also with no side effects. Cells are transparently current, reflecting the rest of the spreadsheet- sounds an awful lot like functional reactive programming...
Where's the "take input as arguments" part?

Code: Select all

someCell(void) {
   return average(d3:f3);
}
Rusky wrote:
Brendan wrote:Blueprints aren't programming, and therefore can't be functional (or imperative) programming. Why don't you try to convince people that "emacs" is female?
Blueprints are at least as much programming as your grandma describing how to bake a cake. Or else you'd better stop seeing imperative programming where it never was... :roll:
There's a major difference between "describing how" and "describing what". If you think a blueprint is programming, then you probably think a photo of my hairy arse is programming too.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Re: Pitfalls of lacking a MMU

Post by Colonel Kernel »

Brendan wrote:Hi,
Rusky wrote:Nothing to say about dependent types? Or did you just run out of excuses not to do your research?
I thought I made it reasonably clear that I (like most programmers in the real world) couldn't be bothered deciphering academic research papers. Your opinion doesn't matter enough to justify the lost time.
Brendan, by ignoring matters of theory you are missing a huge opportunity.

Also, side effects are a rampant source of bugs and should always be avoided (think global variables). Explicit effects are always better, because they're explicit (so the programmer can't miss them). I think what you're arguing for is mutable state, not side effects. Sometimes mutable state is the most appropriate way to solve a problem (and sometimes not).

@FP guys: I think the biggest problem with encouraging adoption of FP is that Haskell is used as the poster-child, but it looks so alien to most programmers. The important points to make are about the benefits of individual FP features (purity, immutability, etc.) in the right context. Personally I've always had a hard time understanding Haskell just because the syntax is so different from what I'm used to, but I use FP techniques all the time in the languages I use in my day job (C++ and C#).
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!