Brendan wrote: Your definition of "intelligence" is so weak that it includes complex machines; and therefore must include slightly less complex machines (which would have intelligence but be slightly less intelligent), and simple machines like toasters and washing machines (which would have intelligence but only a very small amount), and extremely simple machines like doors and windows (which would have intelligence but only an extremely tiny amount).
Absolutely not. Intelligence arises from a particular combination of non-intelligent parts, not from some mystical substance that you just accumulate to get more intelligence.
The line of reasoning you use here, and that you used before in reference to transistors and neurons, implies that intelligence can only come from something outside this universe (because otherwise we could build an intelligent computer out of whatever it comes from), which is why you keep saying that maybe intelligence is a myth. But as we've already been over, that doesn't mean intelligence doesn't exist. It just means your idea of it is wrong, because your definition excludes the very thing you set out to describe.
Brendan wrote: My definition includes free will, and it's trivial to say anything that has free will is enslaved (and not merely used), and therefore anything that is intelligent (and has free will) is enslaved and not merely used.
Free will has nothing to do with intelligence (did you even read that article?); it's a separate quality that, like sentience, is much more useful (or useless, depending on your position on what it even means) for the purposes of ethics.
Brendan wrote: I fail to see how my OS doesn't meet your flimsy definition of intelligence. If you wouldn't call my OS intelligent (even though it learns, adapts to its environment and solves problems) then where do you draw the line between intelligent and unintelligent?
Like I said, your OS doesn't do any of that itself; it just relies on your intelligence having figured it all out beforehand. For it to have any degree of intelligence in the tasks you mention, it can't just be reusing your solutions (disable memory determined to be faulty in this way; detect hardware in this way; blit pixels by combining these pieces of machine code). Write a program that does those things without you providing the solution beforehand and it'll be closer to intelligence.
Take, for example, a game AI using goal-oriented action planning. The designer doesn't tell it how to solve problems, it only gives it the ability to sense its environment and a set of actions it's capable of, and it figures out the rest on its own. Whether or not we consider that intelligent, it's certainly closer to it than your OS.
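To make that concrete, here's a minimal sketch (in Python, with made-up actions, not anything from an actual game) of what a goal-oriented action planner does: the designer only supplies actions with preconditions and effects, and the planner searches for an action sequence that reaches the goal on its own.

```python
from collections import deque

# Each action is (name, preconditions, effects), all over a set of world facts.
# The designer never writes "first get the axe, then chop wood, then make fire";
# the planner discovers that ordering by searching the state space itself.

def plan(state, goal, actions):
    """Breadth-first search over world states; returns a list of action names."""
    start = frozenset(state)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        current, steps = frontier.popleft()
        if goal <= current:              # every goal fact is satisfied
            return steps
        for name, pre, effects in actions:
            if pre <= current:           # preconditions met in this state
                nxt = frozenset(current | effects)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                          # no plan exists

# Hypothetical NPC actions for illustration:
actions = [
    ("get_axe",   frozenset(),             frozenset({"has_axe"})),
    ("chop_wood", frozenset({"has_axe"}),  frozenset({"has_wood"})),
    ("make_fire", frozenset({"has_wood"}), frozenset({"warm"})),
]

print(plan({"cold"}, frozenset({"warm"}), actions))
# → ['get_axe', 'chop_wood', 'make_fire']
```

The point isn't that this toy planner is intelligent; it's that the solution (the ordering of the actions) comes out of the search, not out of the designer's head.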
Brendan wrote: For some definitions of sentience; a computer can have human senses (hearing/microphones, touch, sight/cameras) in addition to non-human senses (the ability to sense wifi signals, etc); and something as simple as keeping track of networking load or CPU temperature and making decisions based on those statistics would be considered sentience.
That has nothing to do with sentience. Sentience has more to do with consciousness and subjective experience than with sensory input.
But really, all of this is secondary to the idea that humans are machines. If humans aren't machines, what are they? How do they manage to function without their intelligence following any rules whatsoever? If it follows no rules, why can we measure and influence thoughts and behavior by poking brains in various ways (in the extreme, to the point of terminating the intelligence by destroying the brain)? If it follows no rules, what is psychology studying? If it follows no rules, what is all the physical matter in your brain for?