Implementing non-English language in OS

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Implementing non-English language in OS

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Your definition of "intelligence" is so weak that it includes complex machines; and therefore must include slightly less complex machines (which would have intelligence but be slightly less intelligent), and simple machines like toasters and washing machines (which would have intelligence but only a very small amount), and extremely simple machines like doors and windows (which would have intelligence but only an extremely tiny amount).
Absolutely not. Intelligence arises from a particular combination of non-intelligent parts, not from some mystical substance that you just accumulate to get more intelligence.
The illusion of intelligence, just like the illusion of magic, arises out of ignorance (not understanding and/or not being able to understand). If you don't understand how a calculator calculates "1 + 2 = 3" then you assume the calculator must be intelligent (even though the calculator is completely unintelligent once you understand how it works). If you don't understand how a piece of software uses a brute force search to find a formula that matches training data, and then uses that formula to "guess" the answer for values that weren't in the training data, then you assume it's intelligent (even though a neural network is completely unintelligent once you understand how it works).
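The "brute force search for a formula" described here can be caricatured in a few lines. This is a deliberately crude sketch with invented training data, not how real neural networks are trained: random search over the coefficients of a line until it matches the samples, then using the found formula to "guess" at inputs that weren't in the training data.

```python
import random

# Toy "brute force search for a formula that matches training data":
# randomly guess coefficients for y = a*x + b and keep the best fit.
train = [(0, 1), (1, 3), (2, 5), (3, 7)]  # secretly y = 2x + 1

def loss(a, b):
    # Sum of squared errors over the training data.
    return sum((a * x + b - y) ** 2 for x, y in train)

random.seed(0)
best = (0.0, 0.0)
for _ in range(20000):
    cand = (random.uniform(-5, 5), random.uniform(-5, 5))
    if loss(*cand) < loss(*best):
        best = cand

a, b = best
# The found "formula" now guesses answers for values outside the training data.
print(f"guess for x = 10: {a * 10 + b:.1f}")  # should land near 21 (2*10 + 1)
```

Once the search is understood, nothing in the loop looks any more mysterious than a calculator; that is exactly the point being made above.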

Your definition of "intelligent" is the same as the illusion of intelligence; possibly with some sort of unspecified threshold (e.g. if it's complicated enough to delude N% of people, then it's intelligent and not just an illusion). To me, a definition of intelligence that depends on nothing more than the ignorance of observers (possibly combined with an arbitrary threshold) is completely unacceptable. It's like calling something that has wheels "a hoverboard", and getting away with it because some people are stupid and are willing to accept lies.
Rusky wrote:The line of reasoning you use here, and that you used before in reference to transistors and neurons, implies that intelligence can only come from something outside this universe (because otherwise we could make an intelligent computer out of it), which is why you keep saying that maybe intelligence is a myth. But as we've already been over, that doesn't mean intelligence doesn't exist. It just means your idea of it is wrong, because your definition is excluding the very thing you set out to describe.
My definition of intelligence is something that is not merely the illusion of intelligence, doesn't depend on the ignorance of observers, and doesn't depend on stupid people's willingness to accept lies.

Imagine an alien race that's several million times more intelligent than humans. Would they consider humans intelligent; or would they think humans are just simple biological machines (in the same way that most people don't consider plants to be intelligent)?

If humans possess something that makes them more than just a complex machine, then they remain intelligent regardless of how intelligent the observer is. Otherwise, "intelligence" is just a self-delusion that vanishes as soon as the observer is sufficiently more intelligent.

Note (again) that I am NOT saying that "something that makes them more than just a complex machine" exists, nor am I saying that (if it exists) it's something outside this universe (ie. it could be a currently unknown natural phenomenon, just like electricity was about 500 years ago).
Rusky wrote:
Brendan wrote:My definition includes free will, and it's trivial to say anything that has free will is enslaved (and not merely used), and therefore anything that is intelligent (and has free will) is enslaved and not merely used.
Free will has nothing to do with intelligence (did you even read that article?), it's a separate quality that, like sentience, is much more useful (or useless, depending on your position on what it even means) for the purpose of ethics.
If there's no free will it's deterministic. If it's deterministic it can be modelled by a finite state machine (of sufficient proportions). If it can be modelled by a finite state machine then it can't have more intelligence than a finite state machine. A finite state machine is a glorified table lookup ("state = table[state][input]").

If there's no free will, it can't have more intelligence than a table lookup.

I don't consider a table lookup intelligent, therefore I must conclude that intelligence requires free will.
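The "glorified table lookup" can be written out directly. This hypothetical two-state machine exists only to make the `state = table[state][input]` shape concrete:

```python
# A finite state machine really is just a table lookup.
# Invented example: a two-state machine tracking whether a door is open.
table = {
    "closed": {"push": "open", "pull": "closed"},
    "open":   {"push": "open", "pull": "closed"},
}

def run(state, inputs):
    for symbol in inputs:
        state = table[state][symbol]  # state = table[state][input]
    return state

print(run("closed", ["push", "push", "pull"]))  # prints "closed"
```

Any deterministic system, however large, could in principle be flattened into a (possibly astronomically large) table of this shape; that is the premise the argument rests on.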
Rusky wrote:
Brendan wrote:I fail to see how my OS doesn't meet your flimsy definition of intelligence. If you wouldn't call my OS intelligent (even though it learns, adapts to its environment and solves problems) then where do you draw the line between intelligent and unintelligent?
Like I said, your OS doesn't do any of that itself, it just relies on your intelligence having figured it all out beforehand. For it to have any degree of intelligence in the tasks you mention, it can't just be reusing your solutions (disable memory determined to be faulty in this way; detect hardware in this way; blit pixels by combining these pieces of machine code). Write a program that does those things without you providing the solution beforehand and it'll be closer to intelligence.
Neural networks are not intelligent, they just rely on their creator having figured out "brute force training data matching" beforehand. For neural networks to have any degree of intelligence (in the tasks they're used for), they can't just be reusing a human's "brute force training data matching" solution.
Rusky wrote:Take, for example, a game AI using goal-oriented action planning. The designer doesn't tell it how to solve problems, it only gives it the ability to sense its environment and a set of actions it's capable of, and it figures out the rest on its own. Whether or not we consider that intelligent, it's certainly closer to it than your OS.
For GOAP the developer tells it to solve the problem by searching for an action that meets a goal; where the developer provides the pre-conditions, post-conditions and costs of all the actions; and where the developer also provides the "search for an action" code. The only intelligence is human intelligence (e.g. the intelligence needed to create hype that successfully convinces gullible fools).
Rusky wrote:But really, all of this is secondary to the idea that humans are machines. If humans aren't machines, what are they? How do they manage to function without their intelligence following any rules whatsoever? If it follows no rules, why can we measure and influence thoughts and behavior by poking brains in various ways (in the extreme, to the point of terminating the intelligence by destroying the brain)? If it follows no rules, what is psychology studying? If it follows no rules, what is all the physical matter in your brain for?
Do you understand the difference between "using rules and nothing else", "using rules in conjunction with something else" and "only using something else (not using any rules)"? I have never suggested that humans are the latter. I have only suggested that humans are not the latter (ie. they use rules exclusively and are therefore just complex unintelligent machines, or use a combination of rules and something else and are more than just complex unintelligent machines).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Implementing non-English language in OS

Post by Brendan »

Hi,
DavidCooper wrote:
Brendan wrote:For some definitions of sentience; a computer can have human senses (hearing/microphones, touch, sight/cameras) in addition to non-human senses (the ability to sense wifi signals, etc); and something as simple as keeping track of networking load or CPU temperature and making decisions based on those statistics would be considered sentience.
It's not sentience unless there's actual sensation (feelings). If a program prints "ouch" to the screen when you press a key, it's unlikely that the software or machine has felt anything that it wouldn't also have felt if it was programmed to print "LOL" instead. The machine has no idea that one word is associated with pain and the other word not beecause it isn't being programmed to experience any sensation in either case. It may be that the machine is experiencing feelings all the time, but any relationship between those feelings and the ideas of feeling represented by the data are entirely accidental, to the point that it makes no difference whether the machine is on, off or destroyed: any one of those states could be unpleasant or pleasant to it, and it's all down to luck as to which is which. We can build and destroy intelligent machines without worrying, even though it's possible that we're causing more harm by what we do with them because it's equally possible that what we're doing is causing less harm to them instead: we cannot know which way it might be going, and there is no way to play it safe.
If my OS has a "current_CPU_temperature" variable and makes (scheduling, power management) decisions based on that variable (possibly including turning the computer off for critical over-temperature conditions) then is it sentient? What if I do nothing more than rename the variable (and call it "current_level_of_pain" instead) so that it's using sensors to make decisions to avoid "pain" (possibly including "suicide" when the "pain" is unbearable)?

How would you define "sentience" in a way that prevents the "hype causes something that wasn't considered sentient to be considered sentient" problem? How do you define "feelings" to avoid the same hype problem?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Rusky
Member
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Implementing non-English language in OS

Post by Rusky »

Brendan wrote:If you don't understand how a piece of software uses a brute force search to find a formula that matches training data, and then uses that formula to "guess" the answer for values that weren't in the training data, then you assume it's intelligent (even though a neural network is completely unintelligent once you understand how it works).
So what does a "true" intelligence do to find a solution if it's not (modified) brute force search and (educated) guessing? Is that not exactly the process human children go through to learn how to talk, walk, etc?
Brendan wrote:To me, a definition of intelligence that depends on nothing more than the ignorance of observers (possibly combined with an arbitrary threshold) is completely unacceptable.
Good thing my definition of intelligence doesn't depend on any quality of the observers at all, then. I still call brains intelligent even though I think (unlike you, from what I can tell) we understand their low-level and mid-level mechanisms pretty well, and I'll continue to do so regardless of how well we understand how they work, and thus how intelligence works.
Brendan wrote:Note (again) that I am NOT saying that "something that makes them more than just a complex machine" exists, nor am I saying that (if it exists) it's something outside this universe (ie. it could be an currently unknown natural phenomena, just like electricity was about 500 years ago).
And I'm saying that this phenomenon does exist, and is a method of producing intelligent behavior, rather than being any particular physical substance. (This is, in fact, rather like electricity, which is not a substance but a particular type of motion.)
Brendan wrote:If there's no free will it's deterministic.

...

Do you understand the difference between "using rules and nothing else", "using rules in conjunction with something else" and "only using something else (not using any rules)"? I have never suggested that humans are the latter. I have only suggested that humans are not the latter (ie. they use rules exclusively and are therefore just complex unintelligent machines, or use a combination of rules and something else and are more than just complex unintelligent machines).
You keep claiming that intelligence must have some component that doesn't follow any rules, but you never explain how this is even logically possible. How can "will" be considered "free" if it's completely random? That sounds like the opposite of "free will" to me- we make decisions for reasons, not because of some mystical non-deterministic component of our brains.
Brendan wrote:If there's no free will, it can't have more intelligence than a table lookup.

I don't consider a table lookup intelligent, therefore I must conclude that intelligence requires free will.
The intelligence isn't in the table lookup, it's in the construction of the table. You said that yourself- "intelligence is what arranges the neurons," etc.
Brendan wrote:For GAOP the developer tells it to solve the problem by searching for an action that meets a goal; where the developer provides the pre-conditions, post-conditions and costs of all the actions; and where the developer also provides the "search for an action" code.
You'll notice that I never claimed GOAP was intelligence, only that it was closer to it than your OS. Go a bit further and let the AI experiment on its own to determine pre and post-conditions and costs and you'll be even closer. This is how humans learn, so go far enough along this route and what's the difference?
Brendan wrote:How would you define "sentience" in a way that prevents the "hype causes something that wasn't considered sentient to be considered sentient" problem? How do you define "feelings" to avoid the same hype problem?
Sentience as a concept was created specifically to describe something that is hard to pin down. We all claim to have subjective experience, we all behave as if we have subjective experience, but by definition there's no way to verify that something else has subjective experience. We just have to infer it based on behavior, and that's the entire point of any discussion about sentience.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Implementing non-English language in OS

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:If you don't understand how a piece of software uses a brute force search to find a formula that matches training data, and then uses that formula to "guess" the answer for values that weren't in the training data, then you assume it's intelligent (even though a neural network is completely unintelligent once you understand how it works).
So what does a "true" intelligence do to find a solution if it's not (modified) brute force search and (educated) guessing? Is that not exactly the process human children go through to learn how to talk, walk, etc?
We don't know what true intelligence does to find a solution. We don't know exactly what process humans go through to learn to walk, talk, etc.

You're "begging the question" - using an assumption (that humans are merely complex machines) to imply a conclusion (that true intelligence must be based purely on simple rules and nothing else).
Rusky wrote:
Brendan wrote:To me, a definition of intelligence that depends on nothing more than the ignorance of observers (possibly combined with an arbitrary threshold) is completely unacceptable.
Good thing my definition of intelligence doesn't depend on any quality of the observers at all, then. I still call brains intelligent even though I think (unlike you, from what I can tell) we understand their low-level and mid-level mechanisms pretty well.
I have sad news for you. The only reason you think neural networks are "intelligent" and I don't is that you are a less intelligent observer than I am. ;)
Rusky wrote:
Brendan wrote:If there's no free will it's deterministic.

...

Do you understand the difference between "using rules and nothing else", "using rules in conjunction with something else" and "only using something else (not using any rules)"? I have never suggested that humans are the latter. I have only suggested that humans are not the latter (ie. they use rules exclusively and are therefore just complex unintelligent machines, or use a combination of rules and something else and are more than just complex unintelligent machines).
You keep claiming that intelligence must have some component that doesn't follow any rules, but you never explain how this is even logically possible. How can "will" be considered "free" if it's completely random? That sounds like the opposite of "free will" to me- we make decisions for reasons, not because of some mystical non-deterministic component of our brains.
You tried to pretend I said "intelligence uses no rules at all" when I didn't, I respond by making it clear that I never said this, and you continue to pretend I said "intelligence uses no rules at all" but also start trying to imply that I said "free will is random" when I didn't say that either. Are you trying to prove that intelligence doesn't exist by being as unintelligent as possible?

Note that your "humans are just complex machines" religion denies the existence of free will and makes it hard for you to accept that a person's choices may not be deterministic/predictable (unlike a machine's choices).
Rusky wrote:
Brendan wrote:If there's no free will, it can't have more intelligence than a table lookup.

I don't consider a table lookup intelligent, therefore I must conclude that intelligence requires free will.
The intelligence isn't in the table lookup, it's in the construction of the table. You said that yourself- "intelligence is what arranges the neurons," etc.
Brendan wrote:For GAOP the developer tells it to solve the problem by searching for an action that meets a goal; where the developer provides the pre-conditions, post-conditions and costs of all the actions; and where the developer also provides the "search for an action" code.
You'll notice that I never claimed GAOP was intelligence, only that it was closer to it than your OS. Go a bit further and let the AI experiment on its own to determine pre and post-conditions and costs and you'll be even closer. This is how humans learn, so go far enough along this route and what's the difference?
The difference is that it doesn't matter how far you go along this route you still end up with something that can be represented as a finite state machine with zero intelligence.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Combuster
Member
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: Implementing non-English language in OS

Post by Combuster »

Brendan wrote:Note that your "humans are just complex machines" religion
Be advised, your arguments are as unsound that they are indistinguishable from religion as well. Do you want to continue a religion versus religion debate in your position as moderator?
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Hellbender
Member
Member
Posts: 63
Joined: Fri May 01, 2015 2:23 am
Libera.chat IRC: Hellbender

Re: Implementing non-English language in OS

Post by Hellbender »

Brendan wrote:If there's no free will it's deterministic. If it's deterministic it can be modelled by a finite state machine (of sufficient proportions). If it can be modelled by a finite state machine then it can't have more intelligence than a finite state machine. A finite state machine is a glorified table lookup ("state = table[state][input]").

If there's no free will, it can't have more intelligence than a table lookup.

I don't consider a table lookup intelligent, therefore I must conclude that intelligence requires free will.
You can record every sensory input your brain receives, and every sensory output your brain produces. You can then replay those things inside a simulator where the brain is replaced with a lookup table, and the behavior of simulated Brendan would exactly match the behavior of the original intelligent Brendan. Would you say that the simulated Brendan has no intelligence because it's nothing but a giant look-up table? If the behavior of the intelligent Brendan can be exactly simulated using a look-up table, what makes you think that the intelligent Brendan reached those actions by some other means that using a huge look-up table? If the behavior of the intelligent Brendan is not more complex than what can be reached by the table look-up Brendan, why assume that the intelligent Brendan has any other capabilities besides a table look-up? Occam's razor.
Hellbender OS at github.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Implementing non-English language in OS

Post by Brendan »

Hi,
Hellbender wrote:
Brendan wrote:If there's no free will it's deterministic. If it's deterministic it can be modelled by a finite state machine (of sufficient proportions). If it can be modelled by a finite state machine then it can't have more intelligence than a finite state machine. A finite state machine is a glorified table lookup ("state = table[state][input]").

If there's no free will, it can't have more intelligence than a table lookup.

I don't consider a table lookup intelligent, therefore I must conclude that intelligence requires free will.
You can record every sensory input your brain receives, and every sensory output your brain produces. You can then replay those things inside a simulator where the brain is replaced with a lookup table, and the behavior of simulated Brendan would exactly match the behavior of the original intelligent Brendan. Would you say that the simulated Brendan has no intelligence because it's nothing but a giant look-up table? If the behavior of the intelligent Brendan can be exactly simulated using a look-up table, what makes you think that the intelligent Brendan reached those actions by some other means that using a huge look-up table? If the behavior of the intelligent Brendan is not more complex than what can be reached by the table look-up Brendan, why assume that the intelligent Brendan has any other capabilities besides a table look-up? Occam's razor.
This was covered days ago. If a human mind can be matched by an unintelligent lookup table, then humans are as unintelligent as a lookup table (e.g. and only think they're intelligent because they're too stupid to understand that they're not). In this case, true intelligence can't be achieved by AI because true intelligence is a concept that doesn't exist in practice.

The other alternative is that a human mind can't be matched by an unintelligent lookup table. In this case, true intelligence can't be achieved by AI because AI is limited to things that can be represented as a lookup table.

For both cases, there is no possibility that AI can ever have true intelligence (rather than just the illusion of intelligence).

Of course I'm talking about the absolute maximum capability that AI could ever hope to achieve (on conventional hardware) here. AI as it actually exists now is pure hype that doesn't begin to come close to "illusion of intelligence" (and primarily involves taking old techniques programmers have been using for decades, and adding buzzwords to make them seem new and "revolutionary" for the sake of publishing research papers and attracting grants/funding).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Kevin
Member
Member
Posts: 1071
Joined: Sun Feb 01, 2009 6:11 am
Location: Germany
Contact:

Re: Implementing non-English language in OS

Post by Kevin »

Hellbender wrote:You can record every sensory input your brain receives, and every sensory output your brain produces. You can then replay those things inside a simulator where the brain is replaced with a lookup table, and the behavior of simulated Brendan would exactly match the behavior of the original intelligent Brendan. Would you say that the simulated Brendan has no intelligence because it's nothing but a giant look-up table?
Yes. There is a difference between replaying something recorded and performing the recorded thing in the first place.

It should become even more obvious with a simpler example: It would be silly to say that doing all the time consuming calculations for finding the largest currently known prime number was unnecessary because now that it's recorded we could instead just look up the records. Of course looking up results that have previously calculated is much easier, but it doesn't mean that it's a good method to calculate prime numbers. It can only replay what we have, it will never find new numbers.

The same goes for Brendan's behaviour. We can record what he does and replay that. But this recording can't tell how he would behave in a new situation (even if he's so predictable sometimes ;)). That's enough for me to conclude that the Brendan simulator isn't as intelligent as the original Brendan.


The other thing I wanted to comment on, but can't find the right posting to quote any more, is non-determinism and randomness. In my opinion, that's not the same thing. For free will to exist, I think it must be non-deterministic (otherwise it wouldn't be free), but at the same time it mustn't be completely random (otherwise it would hardly be justified to call it a will).
Developer of tyndur - community OS of Lowlevel (German)
User avatar
DavidCooper
Member
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Implementing non-English language in OS

Post by DavidCooper »

gerryg400 wrote:DavidCooper, I have a concern about removing the requirement for people to work. I think that work represents modern man's effort to survive and I think that life exists and flourishes because of the effort exerted to survive. Without the effort I think that life has little meaning.

If you remove the requirement to work to survive, humans will look for other ways to obtain the pleasure that work gives them. Sightseeing will not do it. It will be drugs and alcohol.
Do you think life is better if people are given unnecessary work to do just for the sake of making their lives full of pointless toil to make their lives better? There's plenty of worthwhile non-essential work in the arts that people can do which is valuable in itself, such as writing novels or music, making films, exploring the possibilities of dance, etc. - an infinite range of possibilities, and of course there are virtual worlds to invent too. But if that isn't enough for people and they want to do useless paperwork or break big stones into smaller ones with a hammer for many hours a day to feel that they're doing something more profound, they'll still be allowed to do that.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
User avatar
DavidCooper
Member
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Implementing non-English language in OS

Post by DavidCooper »

Brendan wrote:If my OS has a "current_CPU_temperature" variable and makes (scheduling, power management) decisions based on that variable (possibly including turning the computer off for critical over-temperature conditions) then is it sentient?
How have you wired it up? Is there a "read qualia" instruction that the software can use to tell how the machine feels? And if there is a "read qualia" instruction, how does the software know what the returned data means?
What if I do nothing more than rename the variable (and call it "current_level_of_pain" instead) so that it's using sensors to make decisions to avoid "pain" (possibly including "suicide" when the "pain" is unbearable)?
How would it ever be unbearable? If you rename it as pleasure, will it suddenly want more of it or will it still try to minimise it because it can't tell the difference?
How would you define "sentience" in a way that prevents the "hype causes something that wasn't considered sentient to be considered sentient" problem? How do you define "feelings" to avoid the same hype problem?
Case 1: if you understand how the program works in terms of cause and effect and there is no role for sentience in it, it is not coupled to any sentience in any systematic way (other than perhaps by accident, but whatever feelings might be generated in the machine have no impact on the functionality of the software).

Case 2: if you understand how the program works in terms of cause and effect and you can see the role that sentience has in it, you can then see any points where the machine can be made to suffer or to experience pleasure, and then you'll need to bring moral rules into play to work out how to treat that machine so as not to be mean to it.

Case 3: if no one is able to work out how the program works, perhaps because it's a neural computer and is just too complex to untangle, there may be a role for sentience in it which you can't detect, in which case you need to play it safe and try not to do anything that might harm it. It's possible that it will report to you that it is feeling pain or is upset in certain cases, but it may be like a dumb animal and be just as distressed but unable to say so. The big problem with the whole idea of sentience though is that you can simulate a neural computer in a conventional one and it doesn't look as if there's any way for sentience to appear in that simulation. Perhaps there's something more complex than neural computing going on in our brains though, and if it wanders off into quantum weirdness, who knows what might be possible in there.

Importantly though, what vital role could sentience have in intelligence that can't be achieved without it? We can program non-sentient machines to avoid things that might harm them and to seek out things that will aid them, so it doesn't appear to be at all essential. If sentience in animals (including people) is real, it is just one mechanism that nature has found for doing something that can be done perfectly well in other ways.


Turning to the issue of free will, I can see no evidence that there is any such thing. We are all running a program which drives us to try to do the best thing in every situation we find ourselves in, and we judge what's best by how we feel. If you have the choice between eating a piece of chocolate or having a bucket of icy water poured over your head, you will be driven to choose the former, unless you actually prefer the latter (perhaps because chocolate doesn't agree with you). Of course, you might decide to have the bucket of icy water poured over you just to demonstrate that you have free will, but that would fail to prove the point as you'd merely have judged that it would feel better to imagine that you've been proved right even though it's uncomfortable, so it would be that desire that has dictated your decision, and that isn't free will.

I challenged people to provide a demonstration of free will on a science forum a few years ago, but no one managed to do so. In every case it was clear that the decisions could easily be accounted for by identifiable factors unless they were simply random, and random behaviour is not free will: it is not will at all.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
User avatar
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Implementing non-English language in OS

Post by Rusky »

Brendan wrote:We don't know what true intelligence does to find a solution. We don't know exactly what process humans go through to learn to walk, talk, etc.

You're "begging the question" - using an assumption (that humans are merely complex machines) to imply a conclusion (that true intelligence must be based purely on simple rules and nothing else).
No, I'm merely observing (not concluding based on anything else) that humans walk and talk poorly at first and slowly improve as they try things, practice, and observe others. This is my evidence that they are doing a search of the solution space just like a machine would, not the other way around.
Brendan wrote:
Rusky wrote:You keep claiming that intelligence must have some component that doesn't follow any rules, but you never explain how this is even logically possible. How can "will" be considered "free" if it's completely random? That sounds like the opposite of "free will" to me- we make decisions for reasons, not because of some mystical non-deterministic component of our brains.
You tried to pretend I said "intelligence uses no rules at all" when I didn't, I respond by making it clear that I never said this, and you continue to pretend I said "intelligence uses no rules at all"
No, I very intentionally rephrased what I said from "no rules at all" to "some component that doesn't follow any rules." You claim that anything that just follows rules can't be intelligent, so by your definition intelligence must at least have some aspect that follows no rules whatsoever.
Brendan wrote:Note that your "humans are just complex machines" religion denies the existence of free will and makes it hard for you to accept that a person's choices may not be deterministic/predictable (unlike a machine's choices).
No, I use a definition of "free will" that allows for determinism (again, did you even read the wiki article you linked?), because to me a definition of "free will" based on nondeterminism is no more "free" than a coin flip. In fact, if some (most?) of the input to my deterministic decision-making process comes from my personality and experience, I'd say that's very much "free". Perhaps we should define what our will is free in relation to.

Further, my position is not a religion: I see no evidence that brains are any different from other matter, so I proceed on the assumption that they are not. I am discussing the easily-observed phenomena that we currently call intelligence, not some arbitrary implementation requirements that involve unknown (and potentially unknowable) components. Show me evidence of anything you claim about intelligence and I'll accept it, not change my definition to exclude it.
Brendan wrote:The difference is that it doesn't matter how far you go along this route you still end up with something that can be represented as a finite state machine with zero intelligence.
We've reached a point where we can't make any progress, because you're just butting your head against a brick wall while I'm over here talking about something else entirely. For the last time:
  • "Intelligence" is just a word that we use to refer to a set of very familiar phenomena. This should not be controversial.
  • I'm talking about that stuff regardless of how deterministic it is under the covers. If you want to classify it as unintelligent that's your business, but the rest of the world is going to continue to call it "intelligence."
  • Thus, when someone attempts to simulate some aspect of that, it's okay to call that "artificial intelligence," even if it doesn't meet your preconceptions of how you want intelligence to work.
  • Thus, any further attempts to exclude something solely on the basis of determinism are worthless and will be ignored.
Last edited by Rusky on Fri May 20, 2016 12:43 pm, edited 1 time in total.
Hellbender
Member
Posts: 63
Joined: Fri May 01, 2015 2:23 am
Libera.chat IRC: Hellbender

Re: Implementing non-English language in OS

Post by Hellbender »

Brendan wrote:The other alternative is that a human mind can't be matched by an unintelligent lookup table. In this case, true intelligence can't be achieved by AI because AI is limited to things that can be represented as a lookup table.
If there is no perceived difference between an intelligent person and a look-up person, why bother trying to create some magical division between the two? If you watch a video of a person living in this world, how could you tell whether the person is making intelligent decisions or just following a pre-made look-up table? If there is no way of separating two things from each other, it is better to treat them as the same thing (a theory that treats them the same way is simpler than one that treats them separately, while producing exactly the same predictions). You always opt for the simplest theory that can explain the phenomena.

Anyway, intelligence is not about the complexity of the machinery, but the complexity of the data. It does not matter if the mechanism is as simple as a look-up table. Intelligence is in building the table in the first place.
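Hellbender's "look-up person" argument can be made concrete with a toy sketch (all names here are made up for illustration): a function and a table precomputed from it are behaviourally indistinguishable from the outside, and all the interesting work happened when the table was built.

```python
# Toy illustration of the "look-up person" argument: once the table has
# been built, "deciding" is a trivial dictionary lookup, yet from the
# outside it is indistinguishable from running the original computation.

def decide(x):
    # stands in for whatever "intelligent" computation produced the behaviour
    return x * x % 7

# All the interesting work happens here, when the table is constructed.
table = {x: decide(x) for x in range(100)}

def lookup_decide(x):
    # the "look-up person": mechanically trivial, behaviourally identical
    return table[x]

print(decide(3), lookup_decide(3))  # 2 2
```

For any input in the table's domain the two are indistinguishable, which is exactly why the argument says the division between them does no explanatory work.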
Hellbender OS at github.
gerryg400
Member
Posts: 1801
Joined: Thu Mar 25, 2010 11:26 pm
Location: Melbourne, Australia

Re: Implementing non-English language in OS

Post by gerryg400 »

DavidCooper wrote:
gerryg400 wrote:DavidCooper, I have a concern about removing the requirement for people to work. I think that work represents modern man's effort to survive and I think that life exists and flourishes because of the effort exerted to survive. Without the effort I think that life has little meaning.

If you remove the requirement to work to survive, humans will look for other ways to obtain the pleasure that work gives them. Sightseeing will not do it. It will be drugs and alcohol.
Do you think life is better if people are given unnecessary work just for the sake of filling their lives with pointless toil? There's plenty of worthwhile non-essential work in the arts that people can do which is valuable in itself, such as writing novels or music, making films, exploring the possibilities of dance, etc. - an infinite range of possibilities, and of course there are virtual worlds to invent too. But if that isn't enough for people and they want to do useless paperwork or break big stones into smaller ones with a hammer for many hours a day to feel that they're doing something more profound, they'll still be allowed to do that.
Yes, you must be right. The arts, novel writing, film-making and music are good replacements for work because nobody who ever did any of those things wound up using drugs and alcohol or being socially dysfunctional in any way.
If a trainstation is where trains stop, what is a workstation ?
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Implementing non-English language in OS

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:The difference is that it doesn't matter how far you go along this route you still end up with something that can be represented as a finite state machine with zero intelligence.
We've reached a point where we can't make any progress, because you're just butting your head against a brick wall while I'm over here talking about something else entirely. For the last time:
  • "Intelligence" is just a word that we use to refer to a set of very familiar phenomena. This should not be controversial.
  • I'm talking about that stuff regardless of how deterministic it is under the covers. If you want to classify it as unintelligent that's your business, but the rest of the world is going to continue to call it "intelligence."
  • Thus, when someone attempts to simulate some aspect of that, it's okay to call that "artificial intelligence," even if it doesn't meet your preconceptions of how you want intelligence to work.
  • Thus, any further attempts to exclude something solely on the basis of determinism are worthless and will be ignored.
Intelligence is a word that was created before complex machines existed, that we use to refer to a set of familiar phenomena that makes humans different from inanimate objects, plants and machines.

You are only interested in diluting the meaning of "intelligence" to make it fit your unproven (and potentially false) "humans are machines" assumption; and because of this you are redefining the word to have a meaning that it could not have had when it was created (before complex machines existed, when humans believed they were not machines).

By diluting the meaning of "intelligence" to make your flawed logic work, it loses any meaning at all - unintelligent (but complex) machines become intelligent, so slightly less complex machines are intelligent, so..... A dog turd is "intelligent" because when you step on it it "learns" your footprint.

It's anthropomorphism.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Implementing non-English language in OS

Post by Brendan »

Hi,
Hellbender wrote:
Brendan wrote:The other alternative is that a human mind can't be matched by an unintelligent lookup table. In this case, true intelligence can't be achieved by AI because AI is limited to things that can be represented as a lookup table.
If there is no perceived difference between an intelligent person and a look-up person, why bother trying to create some magical division between the two? If you watch a video of a person living in this world, how could you tell whether the person is making intelligent decisions or just following a pre-made look-up table? If there is no way of separating two things from each other, it is better to treat them as the same thing (a theory that treats them the same way is simpler than one that treats them separately, while producing exactly the same predictions). You always opt for the simplest theory that can explain the phenomena.
Humans believe they are special, and created words to describe the things that make them special (intelligence, sentience, etc). If humans were wrong and are not special, then the words created to describe what supposedly makes them special can only really be used for things like fiction.
Hellbender wrote:Anyway, intelligence is not about the complexity of the machinery, but the complexity of the data. It does not matter if the mechanism is as simple as a look-up table. Intelligence is in building the table in the first place.
Would you consider blindly trying all the possibilities intelligent (even though brute force approaches waste a huge amount of time on discarded work and are considered "least desirable" because of that)? I wouldn't - that's just an unintelligent machine creating an unintelligent machine.

This is a nice description of neural networks - an unintelligent brute force approach to build an unintelligent table, with hype to scam fools into thinking it's "intelligent". This is the problem.
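The "brute force search to find a formula that matches training data" idea can be sketched as a toy (this is a deliberately naive illustration; real neural networks are trained with gradient descent, not an exhaustive grid search, but the "search for a formula, then guess on unseen inputs" flavour is the same):

```python
# Toy sketch of "brute force search for a formula": search a grid of
# candidate slopes and intercepts for the straight line that best fits
# the training data, then use that formula to "guess" the answer for a
# value that was never in the training set.

training = [(1, 3.0), (2, 5.0), (3, 7.0)]  # samples of y = 2x + 1

best, best_err = None, float("inf")
for a in [i / 10 for i in range(-50, 51)]:      # candidate slopes
    for b in [i / 10 for i in range(-50, 51)]:  # candidate intercepts
        err = sum((a * x + b - y) ** 2 for x, y in training)
        if err < best_err:
            best, best_err = (a, b), err

a, b = best
print(a, b)        # 2.0 1.0 -- the formula the search "discovered"
print(a * 10 + b)  # 21.0 -- "guess" for x = 10, which was not in the data
```

Nothing in the loop understands lines or arithmetic; it just discards millions of wrong candidates, which is the point being made above.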

For an example of the way AI works...

Several years ago I was looking for a way to solve the standards problem for file formats. My solution was/is to have "file format converters"; where processes ask the VFS to open a file as a certain file type, and if the file isn't already in the requested format the VFS tries to find a suitable file format converter to use. If there's no file format converter to convert directly from the file's format into the requested format, the VFS tries to find a pair of file format converters (to convert from the file's format to something else and then convert from something else into the requested format); and if the VFS can't find a pair of file format converters it tries to find a sequence of 3 file format converters, then 4, and so on. Of course new file format converters can be installed at any time, to allow the VFS to handle more file formats.
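The search described above (try single converters first, then pairs, then triples, and so on) is a breadth-first search over formats, which finds chains in order of length. A minimal sketch, with a made-up converter registry standing in for whatever the VFS would actually consult:

```python
from collections import deque

# Hypothetical registry: each installed converter maps one format to another.
converters = {
    ("bmp", "png"), ("png", "jpeg"), ("jpeg", "webp"), ("odt", "pdf"),
}

def find_chain(src, dst):
    """Breadth-first search for the shortest converter chain src -> dst.

    BFS explores chains in order of length, so single converters are
    tried before pairs, pairs before triples, and so on.
    """
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        fmt, chain = queue.popleft()
        if fmt == dst:
            return chain
        for a, b in converters:
            if a == fmt and b not in seen:
                seen.add(b)
                queue.append((b, chain + [(a, b)]))
    return None  # no sequence of installed converters works

print(find_chain("bmp", "webp"))  # [('bmp', 'png'), ('png', 'jpeg'), ('jpeg', 'webp')]
print(find_chain("bmp", "pdf"))   # None
```

Installing a new converter is just adding an edge to the registry, after which longer chains through it become reachable automatically.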

There is nothing "intelligent" about this in any way whatsoever. Now, let's add hype in a deliberate attempt to scam some stupid people!

It all involves the VFS, so we'd want to start by changing the VFS's name to something like "AI file engine". Processes just use an "open()" function (which has an extra "requested file type" parameter) and that doesn't sound very impressive, so let's change the function's name to something like "create_goal()". We could call the file format converters "actions"; and when a new file format converter is installed on the OS (and the VFS is told about it) we can call that "learning an action". For the actual steps the VFS uses to find a suitable sequence of file format converters, it's probably better to not describe that at all (to make it harder for stupid people to see that it's not intelligent at all) and just refer to it as "intelligence".

With these changes, the description becomes:

A process submits a goal to the AI file engine, and (if necessary) the AI file engine uses intelligence to try to find a sequence of actions that achieve the goal. Of course new actions can be taught to the AI file engine at any time, to allow the AI file engine to learn to become better at achieving goals.

Now it's suddenly "intelligent". Yay! 8)


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.