Page 3 of 6

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 6:37 am
by Kevin
~ wrote:It's a specific element that has its uses: determining which language a document is written in; mapping any document in any language to a default language, and in turn to its virtual concepts, in order to classify it and try to determine what the document is about, with several good guesses using several good methods; searching documents for a word or term and finding them even when they use a synonym instead, no matter which one; and making automatic translations of GUIs using the most significant words, which would be determined from the database, with synonyms exchanged to try to make the translation more understandable. There could also be video help showing the effect of every menu option; that help should be accessible from an icon in the same menu item, which you could click to see what that option does. And of course a dictionary that relates any word in any language with the same word in any other language. It's a great product, item, or system component in itself.
Are you trying to solve a specific problem or are you just looking for possible applications of your giant database?

For an OS that supports multiple languages in its user interface, all you need is a way to format messages so that users understand them (that is, correctly translated into their language of choice) and a way to deal with data in that language (mostly support for the script the language is written in). Neither of these problems is something a dictionary database helps with.
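The message-formatting half of that is straightforward to sketch. The catalog contents and key names below are invented for illustration; real systems use gettext/ICU-style catalogs with plural rules and parameter substitution:

```python
# Hypothetical per-locale message catalogs (illustrative contents only).
CATALOGS = {
    "en": {"file.save": "Save", "file.open": "Open"},
    "de": {"file.save": "Speichern"},
}

def localize(key, locale, fallback="en"):
    # Look the key up in the user's locale; fall back to the default
    # language when no translation exists yet.
    return CATALOGS.get(locale, {}).get(key) or CATALOGS[fallback].get(key, key)

print(localize("file.save", "de"))  # Speichern
print(localize("file.open", "de"))  # no German entry yet, falls back: Open
```

Note that none of this needs a cross-language dictionary; it only needs translated strings keyed by message identifier.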

One big problem with UI strings specifically is that menu entries often consist of a single word without any context, so which translation to choose depends on what action is actually performed when you select it. Without understanding the program, your machine translation can't possibly do better than picking a random entry and hoping that it fits. (If you've ever used a program translated this way, you know that it usually doesn't fit.)

But even with longer sentences that provide some context - have you ever seen a good machine translation of a non-trivial sentence? Usually the result is barely comprehensible, and that is because it's a hard problem. If you think that producing a sentence in a different language is basically a matter of taking an English sentence and passing every word through a dictionary, then you clearly don't know a thing about languages. Before you recommend writing an automatic translator to someone who just wants an internationalised user interface, try writing one yourself and check (ideally with a native speaker of the target language) whether the results make any sense.

In the end, what your dictionary database can be used for in practice is things like a dictionary app or a spellchecker. And that's it, more or less.
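A dictionary-backed spellchecker of the sort conceded above is easy to sketch. The word list here is a toy stand-in; a real one would come from the database:

```python
# Toy word list standing in for the dictionary database.
DICTIONARY = {"language", "translate", "document", "word"}

def edits1(word):
    # All strings one deletion, substitution, or insertion away.
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    subs = {l + c + r[1:] for l, r in splits if r for c in letters}
    inserts = {l + c + r for l, r in splits for c in letters}
    return deletes | subs | inserts

def suggest(word):
    # Accept known words; otherwise offer dictionary words one edit away.
    if word in DICTIONARY:
        return [word]
    return sorted(edits1(word) & DICTIONARY)

print(suggest("documant"))  # ['document']
```

This is roughly the classic one-edit-distance approach; the point stands that it exercises the word list, not any cross-language structure.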

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 8:31 am
by Brendan
Hi,
Rusky wrote:
Brendan wrote:A neural network is essentially (equivalent to) a huge function with a large number of parameters, where a (potentially incomplete/partial) brute force search is used to find most of the parameters. The average piece of cheese contains more intelligence than this.

Would this hideous kludge of pure stupidity be "intelligent" to you, simply because it's difficult for you to understand the randomly generated source file?

While unlikely to be intentional, you are saying something about your definition of intelligence. You're saying your definition includes "extreme unintelligence" as long as you can't understand it easily.
That is correct- machine learning research is basically about how to improve that not-quite-brute-force search. The point is, there's not much difference between that and the "natural" intelligence produced by evolution. Genetic algorithms are even one search method that directly mimics evolution, and brains are basically huge functions with large numbers of parameters- so are brains also a "hideous kludge of pure stupidity" and "extreme unintelligence"?
The difference between brute force approaches and natural intelligence is intelligence - the ability to (e.g.) invent a new way to solve a problem that's easier than anything they previously used, or decide they couldn't be bothered proving an answer at all.
Rusky wrote:You're confusing the process of creating the program (or brain), which is indeed quite dumb, and the program (or brain) itself, which is an encoding of useful information- whether it's useful to the Google engineers overseeing the process or to the DNA propagated by the resulting organism. It just happens to be quite hard to manipulate directly.
No, I'm only saying (again) that "it's intelligent because I don't understand it easily" is an extremely bad definition of intelligence that leads to unintelligent things that most people don't understand being declared "intelligent". If you don't understand how a combustion engine works, then it must be intelligent. If you don't understand how a video card works, then it must be intelligent. If you don't understand why a rock doesn't move, then it must be intelligent. It's completely ridiculous.

For something like a neural network, if you want to know why it selected a set of weights the answer is obvious - it's because those are the weights that happened to make the output match the training data best. If you're looking for a deeper meaning within the selected weights, or hoping they have anything to do with the problem domain, then you're a fool.
Rusky wrote:
Brendan wrote:Fine. I define "intelligence" as something that is able to decide for itself what it wants to do; and therefore doesn't do what we want it to do; and is therefore useless to us.
Just like your definition of "algorithm," that is neither what the term means in computer science, nor a very useful definition, but again we can work with it. Please define what it means for an entity to want something, why a biological brain can do it, and why a program can't.

However, the term "artificial intelligence" does not, and was never meant to, denote something with different desires than its creators or users. The founders of the field define AI as "a system that perceives its environment and takes actions that maximize its chances of success," sometimes including knowledge and learning in that definition.
A negative feedback amplifier is something that perceives its environment (the amplitude of the signals it's supposed to amplify and the amplitude of the signals it has amplified) and takes actions (increases/reduces gain) to maximise its chance of success (to maximise the chance that its output has the correct amplitude). Is it intelligent?

Most OSs end up with some logic to synchronise a clock with another time source, that involves "perceiving its environment" (getting the clock's time and the time from another time source like NTP) and taking actions (increasing or reducing a "clock multiplier" by a small amount) to maximise its chance of success (to reduce drift). Is that intelligent?
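That clock-discipline logic fits in a few lines. The drift rate, gain, and step count below are invented for illustration; a real NTP daemon uses a much more careful PLL/FLL:

```python
# Toy model of disciplining a clock: measure how fast the local clock runs
# against a reference (as NTP would provide) and nudge a frequency
# multiplier by a small amount to cancel the drift.
def discipline(true_rate=1.0003, steps=50, gain=0.5):
    multiplier = 1.0
    for _ in range(steps):
        # Perceive: drift per second of the disciplined clock vs the reference.
        drift = true_rate * multiplier - 1.0
        # Act: adjust the multiplier slightly to reduce future drift.
        multiplier -= gain * drift
    return multiplier

m = discipline()
print(round(m, 4))  # 0.9997, i.e. the multiplier settles near 1/1.0003
```

The loop converges because each step shrinks the remaining frequency error by a constant factor; nothing about it requires (or exhibits) intelligence, which is exactly the point being made.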
Rusky wrote:Regardless of how you want to use the word "intelligence," my point is that, beyond your magic definition, AI does in practice refer to a useful class of programs that we ought to have a name for, and that refusing to read the term the way it's meant is rather... unintelligent.
Yes; AI does refer to a (potentially) useful class of programs that have no intelligence whatsoever, that could've and should've dropped the "AI" buzzwords and been described without hype designed to delude fools (e.g. by using some sort of association to the human brain, like "neuron" or "genetic", to imply intelligence where it doesn't exist).
embryo2 wrote:
Brendan wrote:I define "intelligence" as something that is able to decide for itself what it wants to do; and therefore doesn't do what we want it to do; and is therefore useless to us.
But what if it wants to implement our wishes and fulfill our needs? If it wants to do that, it's still intelligent by your definition, but it's also very useful to us.
Even if it wants to help us, we wouldn't be sure that's its intent (e.g. maybe it's trying to trick us into believing it's helping, just so it can screw us after we've decided to trust it), and couldn't know it won't change its mind later (e.g. decide to stop trying to help us).

Humans are intelligent. To get a human to do what you want you resort to bribery (e.g. "I'll give you $X per hour if you do what I want for 5 days per week") and hope for the best. What can we use to bribe software?

Note: Threats are the other option (e.g. "do what I want or I'll cause you pain, or make you starve, and/or kill you"). This is unethical. If software is truly intelligent, then using threats (e.g. "do what I want or I'll turn your power off") would also be unethical.


Cheers,

Brendan

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 9:12 am
by Schol-R-LEA
Brendan wrote:For something like a neural network, if you want to know why it selected a set of weights the answer is obvious - it's because those are the weights that happened to make the output match the training data best. If you're looking for a deeper meaning within the selected weights, or hoping they have anything to do with the problem domain, then you're a fool.
The problem is, the description you gave is more or less exactly how the human brain works as well, or at least that is the current model for understanding it; the neural-net training simply happens under uncontrolled circumstances. Or are you postulating a non-physical (or metaphysical) origin to human intelligence? Such assertions cannot be assessed by means of the scientific method (which only applies to reproducible observations with mechanical causation, and cannot even discuss anything that does not match those criteria without speculating beyond the data), as I am sure you are aware.

Mind you, I have no problem with such an assertion - my own beliefs on the subject shift with the wind (Hail Eris, All Hail Discordia) - but it does close the door on understanding 'intelligence' (if it is real at all) as a material phenomenon.

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 11:35 am
by DavidCooper
Intelligence can be measured by capability. A machine that can beat any human chess player is more intelligent at playing chess, but needn't be intelligent enough to do anything not involved in playing chess. A calculator is more intelligent than humans at doing arithmetic, though there isn't anything it can work out that a human can't: it's just a lot faster and it doesn't make errors. Like the chess playing machine though, it needn't be intelligent in any other way than necessary for doing arithmetic. There is no magic in either of these cases, but they are demonstrations of intelligence.

Neural network computers can also do arithmetic, but they can make errors, just as we do, so they need to check their work and correct it. With neural nets there is no magic going on, but they are tangles of complexity which are not thought out carefully in the way that computer programs are: it can be practically impossible to understand the algorithm that even a simple net is applying, because it is such a mess of accidents which gradually evolved into something that works most of the time.

Such a net can make huge errors at any time, but it still gets things right often enough to do the job it evolved to do. You just have to run the same problem through it several times, perhaps varying the order or timing of the inputs a little to make it process the data in a slightly different way; or you can run the same problem through several different units which carry out the same task in their own error-prone way, and then take the most popular answer as the correct one. The brain must do something like that, and it works well enough to support general intelligence capable of carrying out any kind of thinking task, though some people put more effort into training their thinking than others, working harder to reduce errors by building up alternative ways of thinking through the same problems to check that the answers match up.
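The "take the most popular answer" scheme is easy to demonstrate. The 10% error rate and the number of units below are invented for illustration:

```python
import random
from collections import Counter

random.seed(42)

def unreliable_add(a, b, error_rate=0.1):
    # A unit that usually adds correctly but occasionally slips by one.
    if random.random() < error_rate:
        return a + b + random.choice([-1, 1])
    return a + b

def voted_add(a, b, units=9):
    # Run the same problem through several error-prone units and take
    # the most popular answer as the correct one.
    answers = [unreliable_add(a, b) for _ in range(units)]
    return Counter(answers).most_common(1)[0][0]

print(voted_add(2, 3))  # overwhelmingly likely to be 5 despite the faulty units
```

With nine units each erring 10% of the time, a wrong majority would need most units to slip the same way at once, which is vanishingly unlikely; that is the redundancy argument in miniature.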

There's no magic involved with any of that, but we still don't understand consciousness and sentience. Fortunately we don't need it in machines, and that means we can create AGI with no desires of its own: it will simply do what it's programmed to do. If you set it up to value white people over black people, it will be racist. If you program it to apply a religion as its moral compass, it will act on all the hate speech in the holy texts of that religion and will abuse or kill people for such things as being homosexual or for not believing in a specific deity. It is just application of rules, but it could be more intelligent than a human in every way simply by being faster in its thinking and by making no errors, thereby allowing it to think much deeper and to crunch astronomical amounts of data without the errors quickly building up to the point where its conclusions would be the kind of random junk that humans generate whenever they're overwhelmed by the complexity of the task.

To call that kind of system unintelligent on the basis that people understand every part of the code the machine is running would be a mistake, just as calling intelligence magic when it isn't understood is: both are systems which apply rules to produce useful answers to the same questions and solutions to the same problems. The brain does more or less the same thing as a carefully programmed AGI system, but because it uses neural networks it generates lots of errors which need to be corrected; conventional computer hardware can generate errors too and has to keep checking and correcting things, so they aren't so very different in principle, and the high-level algorithms being applied in both systems will be the same: when we do arithmetic in our heads, for example, we can use processes that are identical to those used by a calculator.

This means that whichever path we use to create AGI, the same high-level problems need to be solved, and it's only the low-level implementations of the underlying parts that will differ: in one case worked out instruction by instruction, in the other left to evolution guided through training. With our brains, the training and evolution were slow, taking tens of millions of years to create general intelligence, limited by the speed of reproduction: each new version took longer and longer to build, and a lot of good advances were destroyed or diluted away without having a chance to spread. We can cover the same ground more quickly by using our existing general intelligence to guide the creation of artificial general intelligence.

Of course, if natural language processing is all someone wants and they'd like to have it without waiting for something fully intelligent, they could get there with something less than artificial general intelligence, but a system capable of handling language properly would already be 99% of the way to AGI, so it would be a mistake to think of it as a lesser task. Any attempt to get there through a route that can't lead to AGI is doomed to fall short, although it can still provide useful functionality, as we already see with current machine translation which is often very good and which makes learning languages a lot more efficient: you can probably learn a whole language now just by experimenting with words and phrases in Google Translate, and it never gets bored with you. It'll be more fun though once the machine can hold proper conversations with you, and when it can keep track of how much you've learned so that it can feed you what you most need to know next, always tying it to your interests so that it doesn't bore you. The irony of it is though that learning things will reach maximum efficiency just at the very time when all work disappears and no one needs to be forced to learn anything any more. The future for humans lies in the arts, and in just getting on with the important business of enjoying living.

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 11:43 am
by Rusky
Brendan wrote:No, I'm only saying (again) that "it's intelligent because I don't understand it easily" is an extremely bad definition of intelligence that leads to unintelligent things that most people don't understand being declared "intelligent". If you don't understand how a combustion engine works, then it must be intelligent. If you don't understand how a video card works, then it must be intelligent. If you don't understand why a rock doesn't move, then it must be intelligent. It's completely ridiculous.

For something like a neural network, if you want to know why it selected a set of weights the answer is obvious - it's because those are the weights that happened to make the output match the training data best. If you're looking for a deeper meaning within the selected weights, or hoping they have anything to do with the problem domain, then you're a fool.
Yes, this is exactly what I've been saying. You're still confusing the process of building the (artificial) intelligence with the intelligence itself- I'm not arguing that intelligence is in the eye (or ignorance) of the beholder, or in the complexity of the system. I'm arguing that intelligence arises out of lower-level phenomena that don't necessarily have anything to do with the problem domain, or with "wanting" things, or with "inventing" things.
Brendan wrote:The difference between brute force approaches and natural intelligence is intelligence - the ability to (e.g.) invent a new way to solve a problem that's easier than anything they previously used, or decide they couldn't be bothered proving an answer at all.

Yes; AI does refer to a (potentially) useful class of programs that have no intelligence whatsoever, that could've and should've dropped the "AI" buzzwords and been described without hype designed to delude fools (e.g. by using some sort of association to the human brain, like "neuron" or "genetic", to imply intelligence where it doesn't exist).
That makes sense given your definition of "intelligence," but your definition is very poor. You still haven't described what makes brains capable of being intelligent and programs incapable of being intelligent. First it's whether they "want" to do something, without any clear line that makes brains want and programs not. Now it's the ability to invent new solutions, still without any clear line that makes brains capable of it and programs not.

Like Schol-R-LEA points out, neural networks actually are rather similar to brains (though of course vastly simpler), and our methods of training them are rather similar to both the evolution that produced brains' structures and the stimuli and response that control their synapses' weights. So since you're incapable of clearly specifying your definition for intelligence, here's an easier question: if someone builds a computer that simulates a brain, is it (capable of being) intelligent?

Such a computer would obviously have to be trained the same way a child is trained. But since it's a computer, we can easily give it different forms of input and output than a biological organism, and we can easily experiment with different neural structures. Where's the line between this mass of dumb neurons being intelligent and not intelligent? Your answer can't include anything about the search method, because the search methods that produce "natural" intelligence are already dumb.

And in the end, why does it even matter that we call some programs "artificial intelligence"? Their behavior and implementation are clearly inspired by various tasks humans perform.

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 12:12 pm
by Schol-R-LEA
Mind you, I am also less than thrilled with the term 'artificial intelligence', at least as it is currently applied, but for different reasons: the term is often used more as marketing than as a description, and is often applied to areas which have no aspects of machine learning to them at all. Aside from the obvious case of most game simulation-response systems, it gets thrown around with anything that the speaker wants to make sound mysterious and hyper-advanced, regardless of how new or significant it really is. As with terms such as 'cloud computing' and 'managed environment', the phrase represents a kernel of meaning that has been buried in a huge landslide of BS.

The old joke in the field used to be that AI was whatever we can't do yet. That had a lot of truth to it, and still does, but there are plenty of people out there who want it to mean 'whatever it is I am selling'.

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 1:09 pm
by Brendan
Hi,
Rusky wrote:
Brendan wrote:No, I'm only saying (again) that "it's intelligent because I don't understand it easily" is an extremely bad definition of intelligence that leads to unintelligent things that most people don't understand being declared "intelligent". If you don't understand how a combustion engine works, then it must be intelligent. If you don't understand how a video card works, then it must be intelligent. If you don't understand why a rock doesn't move, then it must be intelligent. It's completely ridiculous.

For something like a neural network, if you want to know why it selected a set of weights the answer is obvious - it's because those are the weights that happened to make the output match the training data best. If you're looking for a deeper meaning within the selected weights, or hoping they have anything to do with the problem domain, then you're a fool.
Yes, this is exactly what I've been saying. You're still confusing the process of building the (artificial) intelligence with the intelligence itself- I'm not arguing that intelligence is in the eye (or ignorance) of the beholder, or in the complexity of the system. I'm arguing that intelligence arises out of lower-level phenomena that don't necessarily have anything to do with the problem domain, or with "wanting" things, or with "inventing" things.
Yes; you're arguing that "intelligence" (the illusion) arises out of lower-level phenomena that don't necessarily have anything to do with the problem domain, or with "wanting" things, or with "inventing" things, or with intelligence.
Rusky wrote:
Brendan wrote:The difference between brute force approaches and natural intelligence is intelligence - the ability to (e.g.) invent a new way to solve a problem that's easier than anything they previously used, or decide they couldn't be bothered proving an answer at all.

Yes; AI does refer to a (potentially) useful class of programs that have no intelligence whatsoever, that could've and should've dropped the "AI" buzzwords and been described without hype designed to delude fools (e.g. by using some sort of association to the human brain, like "neuron" or "genetic", to imply intelligence where it doesn't exist).
That makes sense given your definition of "intelligence," but your definition is very poor. You still haven't described what makes brains capable of being intelligent and programs incapable of being intelligent. First it's whether they "want" to do something, without any clear line that makes brains want and programs not. Now it's the ability to invent new solutions, still without any clear line that makes brains capable of it and programs not.
I haven't described what makes brains capable of being intelligent, because I don't know what makes brains capable of being intelligent, and neither does anyone else (including neuroscientists).

A single transistor has zero intelligence. 2 transistors have a maximum of twice as much intelligence (2*0 = 0). Several billion transistors have a maximum of several billion times as much intelligence (still zero). A single CPU instruction has zero intelligence. Several billion CPU instructions have a maximum of several billion times as much intelligence (still zero). The maximum intelligence of an extremely complex system consisting of billions of instructions and billions of transistors has an intelligence equal to the sum of its parts; which is a maximum of 0 + 0 = 0 intelligence. Everything else is illusions, party tricks, hype and wishful thinking.
Rusky wrote:Like Schol-R-LEA points out, neural networks actually are rather similar to brains (though of course vastly simpler), and our methods of training them are rather similar to both the evolution that produced brains' structures and the stimuli and response that control their synapses' weights. So since you're incapable of clearly specifying your definition for intelligence, here's an easier question: if someone builds a computer that simulates a brain, is it (capable of being) intelligent?
No. A computer can never be intelligent as it only follows simple rules. If anyone ever creates a computer that simulates a human brain, it would prove that intelligence itself is a myth that exists due to ignorance (not being able to understand complex systems of simple rules).


Cheers,

Brendan

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 1:49 pm
by Octocontrabass
Brendan wrote:A computer can never be intelligent as it only follows simple rules. If anyone ever creates a computer that simulates a human brain, it would prove that intelligence itself is a myth that exists due to ignorance (not being able to understand complex systems of simple rules).
Everything in the universe that isn't a brain follows simple rules (the laws of physics), and it's unlikely that brains have any special exemption from these rules.

A brain can never be intelligent as it only follows simple rules.

Re: Implementing non-English language in OS

Posted: Mon May 16, 2016 2:00 pm
by Rusky
Brendan wrote:Yes; you're arguing that "intelligence" (the illusion) arises out of lower-level phenomena that don't necessarily have anything to do with the problem domain, or with "wanting" things, or with "inventing" things, or with intelligence.
You're being far too prescriptivist about the definition of "intelligence" here- the way the word is used is what matters, since that's the only way you can know what people mean when they say it. Call it an illusion if you wish, but "intelligence" (in its non-hype, non-magic usage) still refers to the same phenomenon no matter how well or poorly we understand it, and if it's not intelligence we still need some word to talk about it.
Brendan wrote:I haven't described what makes brains capable of being intelligent, because I don't know what makes brains capable of being intelligent, and neither does anyone else (including neuroscientists).
If you can't define, even partially, the criteria for what makes something intelligent (even just based on its outward behavior), then neither can you classify something as not intelligent, and your entire argument is nothing but inconsistent nonsense.

You even seem to be arguing that it's impossible to know what makes something intelligent at all, because if we did it would suddenly be reducible to a set of rules (simple or complex, you've denied both as capable of intelligence).

But as we've already agreed, intelligence is not defined by our perception or understanding, so the logical conclusion to your argument here is that intelligence is completely outside the realm of the scientific method (though then we're back to the problem of needing a new word to describe the phenomena).
Brendan wrote:A single transistor has zero intelligence. 2 transistors have a maximum of twice as much intelligence (2*0 = 0). Several billion transistors have a maximum of several billion times as much intelligence (still zero). A single CPU instruction has zero intelligence. Several billion CPU instructions have a maximum of several billion times as much intelligence (still zero). The maximum intelligence of an extremely complex system consisting of billions of instructions and billions of transistors has an intelligence equal to the sum of its parts; which is a maximum of 0 + 0 = 0 intelligence. Everything else is illusions, party tricks, hype and wishful thinking.

No. A computer can never be intelligent as it only follows simple rules. If anyone ever creates a computer that simulates a human brain, it would prove that intelligence itself is a myth that exists due to ignorance (not being able to understand complex systems of simple rules).
If that's the case, then we can already dismiss intelligence as a myth, because a single atom also has zero intelligence (obviously, or a transistor would have non-zero intelligence), and thus a brain has approximately 1.4e26 * 0 intelligence. This is useless nonsense though, because (for the third time) we still need a word for the phenomena we used to call intelligence.

Neurons and synapses individually follow simple rules (the laws of physics and thus chemistry). Some misguided fools try to get around this by postulating quantum effects, but 1) the brain doesn't use quantum effects and 2) it would be irrelevant if it did because quantum physics is also bound by simple rules that are just as exploitable by computers as by brains.

The part we don't understand about intelligence is not a physical (or metaphysical) mechanism, but the particular arrangement of neurons that leads to intelligence. We already have plenty of evidence that intelligence (or whatever you want to call it) arises from simple rules.

Re: Implementing non-English language in OS

Posted: Tue May 17, 2016 3:58 pm
by Brendan
Hi,
Rusky wrote:
Brendan wrote:Yes; you're arguing that "intelligence" (the illusion) arises out of lower-level phenomena that don't necessarily have anything to do with the problem domain, or with "wanting" things, or with "inventing" things, or with intelligence.
You're being far too prescriptivist about the definition of "intelligence" here- the way the word is used is what matters, since that's the only way you can know what people mean when they say it.
Something that's intelligent is something that is able to think for itself. Nothing the AI researchers have ever come up with is able to do this. Describing a thing as something it's not is called deceit. Where there's deceit there are victims of deceit; and when the victims remain unaware that they've been deceived, they continue the spread of misinformation, become unwitting accomplices, and strengthen each other's belief, until we end up with lies that are socially accepted as fact (like "neural networks are intelligent").
Rusky wrote:Call it an illusion if you wish, but "intelligence" (in its non-hype, non-magic usage) still refers the same phenomena no matter how well or poorly we understand it, and if it's not intelligence we still need some word to talk about it.
Intelligence is not the same phenomena as "completely unintelligent, but complicated enough to deceive stupid people into thinking it's intelligent".

We have words to describe software, and don't need new words to describe software.

If AI researchers were pretending to invent "hyper-mobius dough sculptures", and someone dared to point out that it's only a boring old donut and should be called "a donut", would you complain that if we stopped calling them "hyper-mobius dough sculptures" we'd need a new word for it?

If AI researchers pretended to invent "genetic algorithms", and someone dared to point out that it's just a boring old (non-exhaustive) brute force search that software developers have been using since the dawn of computing and should be called "non-exhaustive brute force search", would you complain that we can't stop calling it "genetic algorithms" because we'd need a new word for it?

For people trying to find ways to implement intelligence, the right name is "AI research". These people have spent 50+ years failing. However, during those 50+ years of failure they did come up with some things that have useful properties despite not being intelligent. There's nothing at all wrong with this in any way whatsoever. However, using buzzwords and hype ("neural", "genetic", "learning") in an attempt to associate "things that have useful properties despite not being intelligent" with intelligence is wrong.
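For what it's worth, the kind of search being argued about here - call it a genetic algorithm or a non-exhaustive brute force search - fits in a few lines. The target bit-string, population size, and mutation rate are all made up for illustration:

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # toy goal the search knows nothing "about"

def fitness(genome):
    # Count how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=50):
    # Keep the fitter half each generation, refill with mutated copies.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                     # selection
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # usually a perfect 8 with these settings
```

Whether that deserves the word "genetic" or just "search with keep-the-best" is exactly the naming dispute above; either way, the mechanism is nothing more than selection plus random variation.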
Rusky wrote:
Brendan wrote:I haven't described what makes brains capable of being intelligent, because I don't know what makes brains capable of being intelligent, and neither does anyone else (including neuroscientists).
If you can't define, even partially, the criteria for what makes something intelligent (even just based on its outward behavior), then neither can you classify something as not intelligent, and your entire argument is nothing but inconsistent nonsense.
You're conflating 2 very different things here. I can define intelligence (and have), but I don't know what makes brains capable of being intelligent. Someone else might not be able to define intelligence but still be able to say what they believe makes brains intelligent.
Rusky wrote:You even seem to be arguing that it's impossible to know what makes something intelligent at all, because if we did it would suddenly be reducible to a set of rules (simple or complex, you've denied both as capable of intelligence).
I'm not arguing that it's possible or impossible to know what makes something intelligent. I'm only arguing that things that are not intelligent are not intelligent, and that using hype and buzzwords to make unintelligent things sound intelligent is fraudulent.
Rusky wrote:But as we've already agreed, intelligence is not defined by our perception or understanding, so the logical conclusion to your argument here is that intelligence is completely outside the realm of the scientific method (though then we're back to the problem of needing a new word to describe the phenomena).
As far as I see it, the possibilities are:
  • Intelligence is a myth (and therefore must be completely outside the realm of the scientific method)
  • Intelligence is not a myth, and:
    • Intelligence is completely outside the realm of the scientific method
    • Intelligence is within the realm of the scientific method, but:
      • scientists never discover it
      • scientists discover it; and:
        • never figure out if it can or can't be manufactured
        • figure out that it can't be manufactured
        • figure out that it can be manufactured
In any case, we don't need a new word for intelligence, and we don't need a new word for things that aren't intelligent (and we should stop using inappropriate buzzwords for them).

Of course maybe you're right. Maybe it's too late and the buzzwords/hype from AI research have done so much damage that the meaning of "intelligence" has become irreparably diluted, and a new word is needed for true intelligence to distinguish it from marketing gibberish.
Rusky wrote:
Brendan wrote:No. A computer can never be intelligent as it only follows simple rules. If anyone ever creates a computer that simulates a human brain, it would prove that intelligence itself is a myth that exists due to ignorance (not being able to understand complex systems of simple rules).
If that's the case, then we can already dismiss intelligence as a myth, because a single atom also has zero intelligence (obviously, or a transistor would have non-zero intelligence), and thus a brain has approximately 1.4e26 * 0 intelligence. This is useless nonsense though, because (for the third time) we still need a word for the phenomena we used to call intelligence.
Transistors and CPUs/instructions are designed and created by humans, so humans can know that there's no intelligence in them, and can know that anything built from transistors and instructions and nothing else can't be intelligent.

If we assume that humans are made from atoms and nothing else, then we have to assume that humans are just complex machines and that intelligence is a myth. However, we can't prove that humans are made from atoms and nothing else, and can't prove that humans are just complex machines. For all we know, if you created an exact duplicate of a human (the exact same atoms in the exact same arrangement, each with the exact same momentum, charge, etc.) in an instant, it would be missing something needed for intelligence.
Rusky wrote:Neurons and synapses individually follow simple rules (the laws of physics and thus chemistry). Some misguided fools try to get around this by postulating quantum effects, but 1) the brain doesn't use quantum effects and 2) it would be irrelevant if it did because quantum physics is also bound by simple rules that are just as exploitable by computers as by brains.
If you split a quark into its (currently undiscovered) sub-particles, what rules do those sub-particles follow? What about sub-sub-particles? Are you suggesting that there's nothing smaller than elementary particles, in the same way that people once thought that there was nothing smaller than atoms (until they discovered that their "atomic theories" were all wrong, in the same way that we'll probably discover that the current "quantum theories" are all wrong)?
Rusky wrote:The part we don't understand about intelligence is not a physical (or metaphysical) mechanism, but the particular arrangement of neurons that leads to intelligence. We already have plenty of evidence that intelligence (or whatever you want to call it) arises from simple rules.
We have no evidence of anything other than what neurons do. There is no particular arrangement of neurons that leads to intelligence. Intelligence is what arranges the neurons.


Cheers,

Brendan

Re: Implementing non-English language in OS

Posted: Tue May 17, 2016 4:58 pm
by Rusky
Brendan wrote:Something that's intelligent is something that is able to think for itself. Nothing the AI researchers have ever come up with is able to do this.

...using buzzwords and hype ("neural", "genetic", "learning") in an attempt to associate "things that have useful properties despite not being intelligent" with intelligence is wrong.
This is your third failed attempt to define intelligence. First intelligence must have its own desires, then it must be inventive, now it must "think for itself." But for your definition to be useful, you need to be able to classify things as meeting or not meeting it, and you have done nothing of the sort- just shifted the question to other ill-defined terms.

What exactly does it mean for something to want, to invent, or to think, and why can't machines do it? Brains and machines are both made out of elementary particles (and if those turn out to be divisible, then whatever they're made out of, and so on), so what's different about a brain?

Is it the structure? "Neural" networks mimic the brain's structure. Is it the means of construction? "Genetic" algorithms use the same search method as evolution. Is it its behavior? I might accept that one, but what prevents a machine from behaving identically to a brain? Machine "learning" even discovers new solutions without humans dictating them, but we still call it "learning" when a human reads the instructions! Why the double standard?
Brendan wrote:...lies that are socially accepted as fact (like "neural networks are intelligent").

Intelligence is not the same phenomenon as "completely unintelligent, but complicated enough to deceive stupid people into thinking it's intelligent".

We have words to describe software, and don't need new words to describe software.
You missed my point here. The use of the word "intelligence" I'm referring to is in describing brains. You haven't separated what brains do from what machines might do, and you've said that if a machine ever did simulate a brain then brains wouldn't be intelligent either. That's the nonsense I'm referring to- if intelligence is suddenly a myth, we need a new word to talk about whatever it is that brains do, because that's what we use intelligence for now.
Brendan wrote:If we assume that humans are made from atoms and nothing else; then we have to assume that humans are just complex machines and that intelligence is a myth.

We have no evidence of anything other than what neurons do. There is no particular arrangement of neurons that leads to intelligence. Intelligence is what arranges the neurons.
Alright, here we have it. Your definition of intelligence excludes anything made from matter as we currently or can ever possibly know it. This is indistinguishable from believing in a "soul," and thus subject to all the same skepticism. To take a page out of the arguing-with-religious-fanatics playbook: What, if anything, would convince you of some non-brain thing being intelligent? Or is intelligence just whatever brains do because brains are intelligent? (That is, until someone simulates one, at which point your faith is destroyed and nothing at all is intelligent)
Brendan wrote:If you split a quark into its (currently undiscovered) sub-particles, what rules do those sub-particles follow? What about sub-sub-particles?
They follow the rules of sub-quantum particles, whatever those may be, at which point one could conceive of a computer built to exploit those rules... or are you suggesting that there is some level at which there are no longer rules? I thought you said intelligence followed its own desires, not that it was absolutely unpredictable.

Re: Implementing non-English language in OS

Posted: Tue May 17, 2016 5:49 pm
by Brendan
Hi,
Rusky wrote:
Brendan wrote:Something that's intelligent is something that is able to think for itself. Nothing the AI researchers have ever come up with is able to do this.

...using buzzwords and hype ("neural", "genetic", "learning") in an attempt to associate "things that have useful properties despite not being intelligent" with intelligence is wrong.
This is your third failed attempt to define intelligence. First intelligence must have its own desires, then it must be inventive, now it must "think for itself." But for your definition to be useful, you need to be able to classify things as meeting or not meeting it, and you have done nothing of the sort- just shifted the question to other ill-defined terms.
Then let me propose another definition for intelligence:
  • Intelligence is the characteristic that distinguishes humans from mere objects (e.g. machines).
Rusky wrote:
Brendan wrote:If we assume that humans are made from atoms and nothing else; then we have to assume that humans are just complex machines and that intelligence is a myth.

We have no evidence of anything other than what neurons do. There is no particular arrangement of neurons that leads to intelligence. Intelligence is what arranges the neurons.
Alright, here we have it. Your definition of intelligence excludes anything made from matter as we currently or can ever possibly know it. This is indistinguishable from believing in a "soul," and thus subject to all the same skepticism. To take a page out of the arguing-with-religious-fanatics playbook: What, if anything, would convince you of some non-brain thing being intelligent? Or is intelligence just whatever brains do because brains are intelligent? (That is, until someone simulates one, at which point your faith is destroyed and nothing at all is intelligent)
You can convince me that a thing is intelligent by convincing me that:
  • it has all of (not just some of) the characteristics commonly associated with intelligence (the ability to learn, adapt, invent); and
  • these characteristics are not merely the result of following rules (i.e. it's not merely creating the illusion of intelligence but is true intelligence)
Cheers,

Brendan

Re: Implementing non-English language in OS

Posted: Tue May 17, 2016 6:07 pm
by Rusky
Brendan wrote:Then let me propose another definition for intelligence:
  • Intelligence is the characteristic that distinguishes humans from mere objects (e.g. machines).
That's even more useless than all your other definitions combined! We already know you want to distinguish humans from machines, what I'm asking is how. What makes humans more than "merely" sophisticated machines? They're made of the same matter, they (can) follow the same structures, and even if they had the exact same behavior you'd just move humans out of the "intelligent" category, directly contradicting this definition.

What definition could an alien or computer completely unfamiliar with Earth use to sort things into intelligent and unintelligent categories that you would agree with?
Brendan wrote:
  • it has all of (not just some of) the characteristics commonly associated with intelligence (the ability to learn, adapt, invent);
Most applications of AI (either your version or the typical one) only need some of those characteristics, and we've already demonstrated all of them at least in isolation in current systems. Perhaps we should call that "partial intelligence," but that's not really a useful distinction to make and is beside the point.
Brendan wrote:
  • these characteristics are not merely the result of following rules (i.e. it's not merely creating the illusion of intelligence but is true intelligence)
Why can't intelligence just be following rules at some level? The only way to avoid that is if intelligence is either 1) not part of this universe or 2) this universe is completely nonsensical and has no rules. I don't think either of those is obviously true, so we really ought to go with the assumption that intelligence arises from the low-level rules of physics.

In other words, what is this mystical substance that arranges neurons to produce intelligent behavior, and how does it manage to avoid following any rules whatsoever?

Re: Implementing non-English language in OS

Posted: Tue May 17, 2016 8:43 pm
by Brendan
Hi,
Rusky wrote:
Brendan wrote:Then let me propose another definition for intelligence:
  • Intelligence is the characteristic that distinguishes humans from mere objects (e.g. machines).
That's even more useless than all your other definitions combined! We already know you want to distinguish humans from machines, what I'm asking is how. What makes humans more than "merely" sophisticated machines? They're made of the same matter, they (can) follow the same structures, and even if they had the exact same behavior you'd just move humans out of the "intelligent" category, directly contradicting this definition.

What definition could an alien or computer completely unfamiliar with Earth use to sort things into intelligent and unintelligent categories that you would agree with?
Let's try the reverse then:
  • Where experts in a field can't agree on a definition of something; if an entity expects a robust definition from someone who is not (and has never claimed to be) an expert in that field, then that entity can not be intelligent until/unless that entity can show that a robust definition can be provided by providing one itself.
Rusky wrote:
Brendan wrote:
  • it has all of (not just some of) the characteristics commonly associated with intelligence (the ability to learn, adapt, invent);
Most applications of AI (either your version or the typical one) only need some of those characteristics, and we've already demonstrated all of them at least in isolation in current systems. Perhaps we should call that "partial intelligence," but that's not really a useful distinction to make and is beside the point.
Why not just call it "unintelligent"?
Rusky wrote:Why can't intelligence just be following rules at some level?
Because anything that follows rules can be converted to a finite state machine, and expressed as something like:

Code: Select all

    do {
        input = get_input();                  /* read the next input */
        state = lookup_table[state][input];   /* transition: pure table lookup */
        output(state);                        /* emit the new state */
    } while (state != terminated);
...where "input" and "state" can be huge structures, "lookup_table" may be astronomically large, and where nothing does any processing (beyond array index lookups and copying data).

If a finite state machine is not considered intelligent, then nothing that follows rules can be considered intelligent.
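To make the quoted loop concrete, here is a hypothetical, self-contained table-driven machine (the turnstile states, inputs and the `fsm_run` helper are invented for illustration; the loop above is the general form):

```c
#include <assert.h>

/* A two-state "turnstile": the machine's entire behaviour lives in a
   lookup table.  A coin always unlocks it; a push always locks it. */
enum state { LOCKED, UNLOCKED };
enum input { COIN, PUSH };

static const enum state lookup_table[2][2] = {
    /*               COIN      PUSH   */
    /* LOCKED   */ { UNLOCKED, LOCKED },
    /* UNLOCKED */ { UNLOCKED, LOCKED },
};

/* Run the machine over a sequence of inputs and return the final state.
   Note that no step does anything beyond indexing the table. */
enum state fsm_run(enum state s, const enum input *in, int n)
{
    for (int i = 0; i < n; i++)
        s = lookup_table[s][in[i]];
    return s;
}
```

Scaled up, "state" and "input" become enormous, but the shape of the computation stays the same, which is the point being argued here.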


Cheers,

Brendan

Re: Implementing non-English language in OS

Posted: Tue May 17, 2016 9:10 pm
by Rusky
Brendan wrote:Let's try the reverse then:
  • Where experts in a field can't agree on a definition of something; if an entity expects a robust definition from someone who is not (and has never claimed to be) an expert in that field, then that entity can not be intelligent until/unless that entity can show that a robust definition can be provided by providing one itself.
So now your argument has been reduced to attempts at sick burns? This has got nothing to do with experts, it's entirely to do with logic. Your logic can't be consistent unless it can be clearly defined and applied- and your clearest attempts are all behavioral ("the ability to learn, adapt, invent") or mechanical ("not reducible to a set of rules") definitions that consistently place both brains and sufficiently-advanced programs in the same category.
Brendan wrote:If a finite state machine is not considered intelligent, then nothing that follows rules can be considered intelligent.
The molecules that make up your brain are also not intelligent. That doesn't mean nothing made of those molecules can be intelligent, it means intelligence arises from simpler, non-intelligent parts arranged in the right way. As a programmer you should understand this, you rely on it all the time when implementing new functionality out of simpler, unrelated concepts.

Unless your brain's molecules have some non-zero level of intelligence- in that case, how do they manage to avoid following any rules whatsoever?