
Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 2:41 pm
by Venn
Well, if that were the case, then no, no computer could ever be given consciousness and sentience on the level humans have. Which is a perfectly valid outcome of the experiment. This, in a very vague and fuzzy way, tells us when we might be able to produce a sentient computer: when we can produce a computer capable of simulating something as complex as the human brain at the quantum level in real time. For me, this means a very, very long time, on the order of centuries. And when we finally reach that point, part 2 of the experiment kicks in. Why should we make a sentient computer? What added benefit would sentience provide to what is essentially just a tool, especially when it is logical to assume that the vast majority of the computational power and memory would be devoted to maintaining that state of consciousness?

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 7:04 pm
by bonch
I'm sure that, like everything else, once the technology was understood, the cost of producing it would come down. I was hoping for a mass-production scenario where sentient robots are turned out in factories and sold at department stores, each with its own preprogrammed personality and such.

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 8:09 pm
by VolTeK
In Soviet Russia, computer processes YOU.

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 11:27 pm
by Combuster
Venn wrote:A computer of sufficient computational power and memory capacity (the exact numbers are not relevant to the experiment) is running a program which mimics exactly, down to the molecular level, the brain of a human being (to provide for the notion that consciousness may be nothing more than chemical and electrical interactions). Will sentience and sapience emerge?
Not like this. You didn't specify any inputs and outputs.

However, if we built a machine that perfectly emulates a brain and replaced a real brain with it, then it would act as such. A human emulation would probably even come to the conclusion that she is in fact a computer when given the right evidence.

The problem is that the nervous system already configures itself during the embryonic stages, which makes actually substituting a brain difficult. Not to mention that certain behaviours are driven by protein cycles outside the brain. All this makes a practical experiment extremely difficult.

Re: where are the 1's and 0's?

Posted: Wed Oct 05, 2011 3:02 am
by SDS
DavidCooper wrote:On the contrary, I was getting directly to the core of consciousness. You can torture a human because people feel pain and other kinds of discomfort. If you try to torture a computer, you're an idiot. Pain and pleasure are used to control an animal's behaviour, encouraging it to eat suitable food and discouraging it from allowing itself to be eaten by other organisms. We have this, and we evolved from simpler creatures which almost certainly have it too - the mechanism works fine in a worm, so if the worm doesn't feel pain, why should we have taken the trouble to evolve the ability to feel pain when it's completely superfluous? I chose to focus on this aspect of consciousness because it is simple and primal, not requiring great intelligence, the ability to recognise oneself in a mirror, or the capability to store data about "me".
I'm not entirely sure what you are getting at. I would certainly agree that the ability to feel pain is indicative of consciousness - as opposed to the more straightforward case of pain receptors stimulating an autonomic nervous response. I.e. it isn't the existence of the pain-in, response-out mapping, but the (slightly) more abstract awareness of pain, and dislike of pain, and the fact that this informs choice in our behaviour rather than dictating it. There is a certain level of self-awareness there.
DavidCooper wrote:You're simply introducing complexities which result from mixing inner simplicities.
Certainly introducing complexities. But that is important. There are additional degrees of abstraction and obfuscation in a conscious (well, at least in our human, conscious) perception of the world. I think it very important that we can attach perception and meaning to things which are not directly implied by the stimulus.

The example came from a lecture I went to about the challenge of the existence of consciousness (although the lecturer expressed his ideas more clearly than I seem to have managed). His ultimate argument was that consciousness provides greater ability to adapt to circumstances very different from normal evolutionary pressures, and also that the sensation of being aware of oneself, and that you are alive, makes you actively WANT to stay alive - and fight harder for it - than simple responses could ever achieve. Not least by understanding a novel threat for what it is.
DavidCooper wrote:... Emergent properties are ones that show up when things interact that are damned hard or downright impossible to predict in advance when you start with incomplete knowledge of the simple components and the environment in which they are to operate, but every aspect of the emergent properties is dictated from the base level. When people talk of consciousness being something that emerges out of complexity, they are resorting to a belief in magic. If pain is felt, something has to exist that feels it, and that cannot be just a geometrical arrangement (which is the only new thing involved when the components are put together in a particular way). If you stick a whole lot of components together and something suddenly starts to feel pain, it has to be one of the components (though the components include the fabric of space, so it isn't necessarily matter that you should point to).
Agreed up to a point. I think another analogy (although crude) is appropriate. A computer is (largely) made up of transistors in varying arrangements. A computer can be given inputs, and produce outputs. These are not a function of the transistors (and other hardware components) per se. They are a function of the software. It is plausible to imagine the neural activity in the brain as closer to software running on hardware. This gives a lot more flexibility for introspection. And it makes a lot more sense than asking what molecule is feeling pain! The harder question is at what stage, and in what way, the brain is bootstrapped...

Edit: Fix some formatting. That's what happens if you post when on the train...

Re: where are the 1's and 0's?

Posted: Wed Oct 05, 2011 4:16 am
by Solar
DavidCooper wrote:I was getting directly to the core of consciousness. You can torture a human because people feel pain and other kinds of discomfort. If you try to torture a computer, you're an idiot.
What is torture? It is the threat of damage to an entity, with the disclosure of desired information presented as the "way out" of that threat.

If I have a sophisticated robot programmed to react to its environment, and capable of "comprehending" the situation, I can apply "environmental pressure" to make the robot disclose the desired information if the robot is programmed to value its own continued operation higher than said information.

The line between applying a red-hot poker to the feet of a person or to the sensors of a robot becomes somewhat blurred if you break it down to the basics, don't you think?

(The thing is that a robot can easily be programmed to accept its own destruction. It's much more difficult with humans, as their "firmware" can't really be hardwired. ;-) )

Re: where are the 1's and 0's?

Posted: Wed Oct 05, 2011 12:47 pm
by DavidCooper
SDS wrote:I'm not entirely sure what you are getting at. I would certainly agree that the ability to feel pain is indicative of consciousness - as opposed to the more straightforward case of pain receptors stimulating an autonomic nervous response. I.e. it isn't the existence of the pain-in, response-out mapping, but the (slightly) more abstract awareness of pain, and dislike of pain, and the fact that this informs choice in our behaviour rather than dictating it. There is a certain level of self-awareness there.
"Pain" without any awareness of that "pain" cannot be pain - if it's actually pain, it must be noticed and it will automatically be disliked by whatever it is that experiences it. No one enjoys pain, but pain can lead to other feelings that may be enjoyable, such as the satisfaction of being more manly and robust than other people, but the pain itself is never pleasant.
DavidCooper wrote:You're simply introducing complexities which result from mixing inner simplicities.
Certainly introducing complexities. But that is important. There are additional degrees of abstraction and obfuscation in a conscious (well, at least in our human, conscious) perception of the world. I think it very important that we can attach perception and meaning to things which are not directly implied by the stimulus.
Of course it's important, but if you don't start by trying to understand the underlying simpler cases first, there's no point in trying to understand what happens when you start mixing them together into something so complex that you haven't a hope of seeing what's going on.
The example came from a lecture I went to about the challenge of the existence of consciousness (although the lecturer expressed his ideas more clearly than I seem to have managed). His ultimate argument was that consciousness provides greater ability to adapt to circumstances very different from normal evolutionary pressures, and also that the sensation of being aware of oneself, and that you are alive, makes you actively WANT to stay alive - and fight harder for it - than simple responses could ever achieve. Not least by understanding a novel threat for what it is.
That lecturer needs to be retrained - he is pushing baseless assertions, though that's very much the norm in this field. People want to survive, but many of them give up fairly easily when they're in dangerous situations - young children in particular can drown in shallow water because they're used to being rescued every time they fall over and they don't fight hard to survive. People often give up just because it's easier to die than to struggle on when the odds against their survival are very high. A machine, on the other hand, can be programmed never to give up trying to escape from a situation where it might be destroyed, no matter how bad or impossible the odds. The existence of consciousness in animals is probably just an accident - it may just be a side-effect of using the simplest method of building a response mechanism.

Beyond sensations (pain, pleasure, colours, other qualia), most of the things people consider to be part of consciousness can be shown to be illusions. It isn't even clear that we are capable of understanding anything consciously - we experience a sequence of qualia associated with the ideas we're processing, and at the end we feel a reward sensation if we've understood the complete idea, but we can't genuinely be conscious of the entire idea in one go in order to have a full conscious understanding of it. The actual understanding of any idea is gained when the data fits into your database of knowledge without contradicting the data already in place (unless you're understanding the idea to be wrong, or some of the existing data in your database to be wrong), and all you get at the end of the process is a feeling that you understand it consciously, even though you don't. I doubt you'll believe this right away, because I didn't believe it for a long time myself, but if you put a lot of time into trying to monitor how your mind works by thinking about how you think, you may change your mind. There are certain cases where you think you understand something consciously, but as you think about it carefully you realise that the understanding is empty and you can't even remember what it is you think you were conscious of.
A computer is (largely) made up of transistors in varying arrangements. A computer can be given inputs, and produce outputs. These are not a function of the transistors (and other hardware components) per se. They are a function of the software. It is plausible to imagine the neural activity in the brain as closer to software running on hardware. This gives a lot more flexibility for introspection.
I'm sure it does, but it won't actually help you investigate consciousness.
And it makes a lot more sense than asking what molecule is feeling pain!
The issue is that consciousness concerns sensations (qualia, feelings). If you focus on exploring everything except that, you aren't investigating consciousness.
The harder question is at what stage, and in what way, the brain is bootstrapped...
An interesting question indeed, but again it's taking you away from the issue.

Re: where are the 1's and 0's?

Posted: Wed Oct 05, 2011 1:01 pm
by DavidCooper
Solar wrote:
DavidCooper wrote:I was getting directly to the core of consciousness. You can torture a human because people feel pain and other kinds of discomfort. If you try to torture a computer, you're an idiot.
What is torture? It is the threat of damage to an entity, with the disclosure of desired information presented as the "way out" of that threat.
Torture doesn't always offer a way out - it can be done for fun (from the point of view of the person doing the torturing). It's about making someone suffer through pain, fear or any other unpleasant feeling that can be generated in them.
If I have a sophisticated robot programmed to react to its environment, and capable of "comprehending" the situation, I can apply "environmental pressure" to make the robot disclose the desired information if the robot is programmed to value its own continued operation higher than said information.
That's true enough - it might be worth allowing some information to slip out in order to protect a valuable machine in some circumstances, though the odds are that you won't get your machine back anyway if it's fallen into enemy hands, so it might as well be programmed to delete any important knowledge (or encrypt it and destroy the key so that it can only be accessed if recovered by the owner) and allow itself to be destroyed by the enemy.
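To make the encrypt-and-destroy option concrete, here's a minimal sketch in Python using the "cryptography" package, on the assumption that the machine carries only the owner's public key - every name and value in it is invented for illustration; it's the shape of the idea, not a real design:

Code:
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # In a real design the owner's key pair would be generated elsewhere,
    # with only the public half ever installed on the machine.
    owner_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    owner_public = owner_private.public_key()

    secret = b"location of the base"

    # Machine side: encrypt to the owner's public key, then drop the
    # plaintext. Capture now yields only unreadable ciphertext.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = owner_public.encrypt(secret, oaep)
    del secret

    # Owner side: recovery is only possible with the private key.
    assert owner_private.decrypt(ciphertext, oaep) == b"location of the base"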
The line between applying a red-hot poker to the feet of a person or to the sensors of a robot becomes somewhat blurred if you break it down to the basics, don't you think?
No - the machine feels nothing and doesn't care about the damage being done. The heat sensors warn it that damage is being done, but it knows that it's best to let it happen and do nothing. Put a person inside the robot and let them read a screen that says a temperature sensor is getting hot in one of the robot's feet - that person determines that it's better to let the damage be done, and neither he nor the robot feels any pain. You're only going to generate pain by getting at a device which can experience pain and triggering that pain, so there is a clear divide between the two cases.

Re: where are the 1's and 0's?

Posted: Wed Oct 05, 2011 3:34 pm
by SDS
DavidCooper wrote:No - the machine feels nothing and doesn't care about the damage being done. The heat sensors warn it that damage is being done, but it knows that it's best to let it happen and do nothing. Put a person inside the robot and let them read a screen that says a temperature sensor is getting hot in one of the robot's feet - that person determines that it's better to let the damage be done, and neither he nor the robot feels any pain. You're only going to generate pain by getting at a device which can experience pain and triggering that pain, so there is a clear divide between the two cases.
I think this demonstrates that in many ways we are saying the same thing! There is more than a warning from heat sensors that damage is being done, with an autonomic response; there is a sensation and an awareness of 'hotness' and associated feelings. An additional level of abstraction or introspection (or things which could be described with different words).

I think you are saying the same when you say '"Pain" without any awareness of that "pain" cannot be pain ... it must be noticed ... can lead to other feelings'. I.e. there is an additional level of awareness there, over and above a simple feedback loop. Just emulating observed feedback behaviour does not emulate the consciousness behind it.
DavidCooper wrote:... but if you don't start by trying to understand the underlying simpler cases first, there's no point in trying to understand what happens when you start mixing them together into something so complex that you haven't a hope of seeing what's going on.

I'm not sure I agree with you. Going back to computers, someone with a good grasp of the complexity involved in a large software project with many levels of abstraction can understand it very well without understanding how doping affects the potential gradients in a p-n junction, or anything about the correlation effects of electrons in solid-state systems. Indeed, the important thing in understanding a system is to pick the most effective level of abstraction to understand it at - whilst recognising that that level of abstraction can itself be understood in terms of lower levels.

While it seems that consciousness involves self-awareness, introspection and various other self-diagnostic properties, it seems sensible to look for a level of abstraction which can at some point hope to reference these.
DavidCooper wrote: The existence of consciousness in animals is probably just an accident - it may just be a side-effect of using the simplest method of building a response mechanism. Beyond sensations (pain, pleasure, colours, other qualia), most of the things people consider to be part of consciousness can be shown to be illusions.
I would be a little careful with that. As mentioned several times, the autonomic nervous system is substantially simpler than the conscious one, and the majority of cognitive capacity is contained within the subconscious. It would seem unlikely that something as significant to the experience of life as consciousness occurred entirely as a side effect, as it has huge consequences for the way that we choose to live. It seems almost certain that it confers some direct and powerful evolutionary advantages. The most obvious would be that it is a response mechanism which is both learnable and self-manipulable, which provides clear advantages in terms of adapting to changes in circumstance unforeseen on an evolutionary timescale (i.e. most of human history...).

You'll have no argument from me about the illusory nature of much of consciousness. I am particularly fond of how, in circumstances where physical responses to a stimulus have to be made rapidly, the conscious brain can be shown to become aware of the stimulus after action has already begun - as it is much slower than the subconscious brain. The conscious brain then retrospectively persuades itself that it made the decision. It is worth noting, however, that it is then able to override the subconscious brain.

Processes of understanding are also much more interesting when you consider consciously changing long-held beliefs. We have a depressingly strong confirmation bias, for one thing...

Re: where are the 1's and 0's?

Posted: Thu Oct 06, 2011 12:25 pm
by DavidCooper
SDS wrote:
DavidCooper wrote: The existence of consciousness in animals is probably just an accident - it may just be a side-effect of using the simplest method of building a response mechanism.
I would be a little careful with that. As mentioned several times, the autonomic nervous system is substantially simpler than the conscious one, and the majority of cognitive capacity is contained within the subconscious. It would seem unlikely that something as significant to the experience of life as consciousness occurred entirely as a side effect, as it has huge consequences for the way that we choose to live.
What I have in mind as a response mechanism is something that takes many inputs and does some kind of averaging, so all the pain, pleasure and other inputs are fed into something where they generate an average feeling, and that feeling is used to send out an output which may lead to some kind of action. Clearly in many simple creatures the system is horribly primitive, so an insect which is eating a meal may at the same time be being eaten by a larger insect behind it - it doesn't feel any pain because the pleasure inputs from eating food are weighted higher and swamp out the pain input (assuming that insects feel anything consciously at all and that they aren't just machines).
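To make that concrete, here's a toy sketch in Python of the kind of averaging I mean - the intensities, weights and threshold are all made up purely for illustration:

Code:
    # inputs: list of (intensity, weight) pairs; pain is negative,
    # pleasure positive. Only the weighted average drives behaviour.
    def blended_feeling(inputs):
        total_weight = sum(w for _, w in inputs)
        return sum(i * w for i, w in inputs) / total_weight

    # The insect mid-meal: a heavily weighted pleasure input from feeding
    # swamps the pain input from being nibbled, so no escape response fires.
    signals = [(+0.9, 5.0),   # eating: pleasant, strongly weighted
               (-0.8, 1.0)]   # being eaten: painful, weakly weighted
    print("flee" if blended_feeling(signals) < 0 else "keep eating")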
It seems almost certain that it confers some direct and powerful evolutionary advantages. The most obvious would be that it is a response mechanism which is both learnable and self-manipulable, which provides clear advantages in terms of adapting to changes in circumstance unforeseen on an evolutionary timescale (i.e. most of human history...).
That isn't at all certain as these capabilities can be programmed into non-conscious devices, although there may be an advantage somewhere in terms of simplicity of design, such as averaging weighted inputs in order to select the most urgent response.
Processes of understanding are also much more interesting when you consider consciously changing long-held beliefs. We have a depressingly strong confirmation bias, for one thing...
Like our beliefs about what we are. We generally believe that we used to be children because our memories tell us we once were, but every atom in our brains is replaced every few years, so there's nothing in there for very long. We are probably just being fooled by our memories into thinking we lived through all the events that we remember. Indeed, whatever we are that is conscious may only last for a fleeting moment before being pushed aside and replaced with someone else. We don't want to believe this kind of thing, of course, so we tend to avoid thinking about it and just allow ourselves to go on believing that we inhabit these animals for many decades regardless of any of the evidence to the contrary.

Re: where are the 1's and 0's?

Posted: Thu Oct 06, 2011 4:15 pm
by SDS
DavidCooper wrote:What I have in mind as a response mechanism is something that takes many inputs and does some kind of averaging, so all the pain, pleasure and other inputs are fed into something where they generate an average feeling, and that feeling is used to send out an output which may lead to some kind of action. Clearly in many simple creatures the system is horribly primitive, so an insect which is eating a meal may at the same time be being eaten by a larger insect behind it - it doesn't feel any pain because the pleasure inputs from eating food are weighted higher and swamp out the pain input (assuming that insects feel anything consciously at all and that they aren't just machines).
I take great issue with this again, because it deviates away from any notion of sensation. You take a whole load of inputs, average them out and have some form of coordinated response. This is too autonomic - and could occur without any awareness of the inputs. As you were indicating, feeling pain is different from a response to the stimulus, where feeling is tied to consciousness. This primitive response system you described would not be conscious.

As to insects being conscious - I'm not sure how easy it would be to gather evidence either way on that.
That isn't at all certain as these capabilities can be programmed into non-conscious devices, although there may be an advantage somewhere in terms of simplicity of design, such as averaging weighted inputs in order to select the most urgent response.
I'm not sure I agree with you. If all we had was an autonomic system for using our limbs to trap animals and eat them, I don't think it possible that, without consciousness, you could come across a computer and learn to use it without the system itself being adapted (on an evolutionary timescale).

In general, consciousness provides functionality more similar to a computer than to a simple integrated circuit - in that its program can be changed at runtime, not just its operating data. The way that you think is learned, not just specific responses. This is an extraordinary level of adaptability, which confers huge advantages.
Like our beliefs about what we are. We generally believe that we used to be children because our memories tell us we once were, but every atom in our brains is replaced every few years, so there's nothing in there for very long. We are probably just being fooled by our memories into thinking we lived through all the events that we remember. Indeed, whatever we are that is conscious may only last for a fleeting moment before being pushed aside and replaced with someone else. We don't want to believe this kind of thing, of course, so we tend to avoid thinking about it and just allow ourselves to go on believing that we inhabit these animals for many decades regardless of any of the evidence to the contrary.
There are two things in what you say. I have no issue with saying that I have lived through childhood, despite containing no molecules which are the same. This is in the same way that a river can still exist despite sharing none of the same molecules of water, and even its path having deviated for a substantial period. There is a continuity - in the case of humans, of memories and experiences. The fact that these are intangible and represented in a non-physically-obvious way in the brain does not make them unreal. Similarly, the environment I lived in (particularly with regard to food) has affected my current physical condition hugely - despite a lack of molecular continuity.

The other way of reading what you said is that we could have suddenly appeared with a whole fake past and memories. This is similar to the creationist argument that God could have created the universe as it is now, made to appear old, to 'test' us. My argument would be: if it looks old and behaves as if it is old, it is probably best to treat it as old. If it is fake, you'll never know, and it won't matter.

Re: where are the 1's and 0's?

Posted: Fri Oct 07, 2011 5:41 am
by SDS
berkus wrote:Wasn't it the present model of a neuron - a weighted average over all the inputs produces the neuron output?

Wikipedia tends to agree in general:

[image: Wikipedia's diagram of an artificial neuron - weighted inputs combined and passed through an activation function]
Which provides a lovely model which could explain the autonomic nervous system. Once again, consciousness goes a little further.
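For reference, that standard model is tiny in code. A minimal sketch in Python, assuming the usual weighted-sum-plus-activation form (all the numbers below are arbitrary, just to make it run):

Code:
    import math

    # Textbook artificial neuron: a weighted sum of the inputs plus a
    # bias, squashed through a sigmoid activation function.
    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    print(neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=-0.3))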

It is worth noting that understanding how a transistor works gives a good explanation of a simple amplifier. It does not in itself explain the observed behaviour of a modern operating system with its user. An explanation of a transistor may be necessary in an explanation, but it is hardly sufficient.

Re: where are the 1's and 0's?

Posted: Fri Oct 07, 2011 3:09 pm
by DavidCooper
SDS wrote:I take great issue with this again, because it deviates away from any notion of sensation. You take a whole load of inputs, average them out and have some form of coordinated response. This is too autonomic - and could occur without any awareness of the inputs. As you were indicating, feeling pain is different from a response to the stimulus, where feeling is tied to consciousness. This primitive response system you described would not be conscious.
You're replying without thinking things through carefully. Have you ever been so absorbed with what you're doing on a computer that you haven't shifted position on your chair for perhaps an hour? (Dangerous - DVT risk, and that's why my OS beeps every half hour to remind me to get up and do some exercise.) Anyway, you only notice the pain in your legs when you finish what you're doing or get interrupted, and that means it wasn't actually pain until you noticed it. The pain inputs were being sent to your brain, but at some stage they were being drowned out to the point that pain wasn't generated. The sensation of pain only occurs when that input is able to overpower the other inputs, and the only feeling you ever experience is the average of the probably-weighted inputs. The inputs may vary moment by moment, so you may pick up a hint of pain here and there, but sometimes none gets through at all until the activity you're absorbed in is ended. The system I'm describing would be conscious, but just not conscious of any individual inputs until they overpower the rest.
As to insects being conscious - I'm not sure how easy it would be to gather evidence either way on that.
It's likely that the only way to do it would be to find out how consciousness works first and then see if the right kind of mechanism exists in insects. I suspect it does, but I don't understand why they're designed to let themselves be eaten so easily... except that it may be a clone thing - often the DNA of many insects is identical due to the way they reproduce, so if something's started nibbling the back end off one insect, it must be better to let it continue eating that one rather than having the injured bug run away and die while the predator turns to nibbling the back end of one of the original bug's identical sisters instead.
That isn't at all certain as these capabilities can be programmed into non-conscious devices, although there may be an advantage somewhere in terms of simplicity of design, such as averaging weighted inputs in order to select the most urgent response.
I'm not sure I agree with you. If all we had was an autonomic system for using our limbs to trap animals and eat them, I don't think it possible that, without consciousness, you could come across a computer and learn to use it without the system itself being adapted (on an evolutionary timescale).
What is a computer, and how much do you need to be able to do in order to learn to use one? A pigeon can learn to use a computer, pecking at buttons in response to lights coming on in order to request food. A robot could be programmed by many of the people on this forum to do the same, if it had the right sensors on it. Our capabilities evolved in small steps, giving us a greater and greater ability to solve problems over time, firstly by trying all sorts of things to see what happens, and later by associating certain actions with success in such a way that new problems can be tackled using methods which work for similar things. We eventually reached the point where we became so capable that we were able to program our minds to do just about anything, and it will be the same with computers - get enough of the right capabilities programmed into one, and at some point it will be capable of doing the same as us, but without consciousness. Our ability to do complex stuff and the fact that we appear to be conscious should certainly make you consider whether consciousness has a role in our thinking ability, but that's as far as it goes - it is fully possible that there is no link between them at all, and indeed there is no example (other than with feelings) of any specific thing (component capability) which consciousness helps with which cannot in principle be done by a machine which lacks consciousness entirely. There may be some things (compound capabilities) where you might imagine a role for consciousness due to your lack of understanding of how those compound capabilities are built up out of component capabilities, but ultimately you're just guessing. In my own analysis of a wide range of compound capabilities, I have broken them down into their components and found no essential role for consciousness in any of them.
There are two things in what you say. I have no issue with saying that I have lived through childhood, despite containing no molecules which are the same. This is in the same way that a river can still exist despite sharing none of the same molecules of water, and even its path having deviated for a substantial period. There is a continuity - in the case of humans, of memories and experiences. The fact that these are intangible and represented in a non-physically-obvious way in the brain does not make them unreal. Similarly, the environment I lived in (particularly with regard to food) has affected my current physical condition hugely - despite a lack of molecular continuity.
It seems you haven't covered all the basics, so maybe you need to work through a couple of thought experiments.

(1): Imagine a machine that can make a perfect copy of anything you stick inside it. You stand in one side of it and a working copy of you is created in the other. The copy thinks he's you (assuming you're a he). Can the original you and the copy both be you at the same time? If someone sticks a pin in him, do you feel it too?

(2): Imagine that all the atoms that were in your head many years ago have been collected and built back together into the same arrangement as they were once in when you were a child. We're now dealing with a copy of you as a child which is actually built up from the same material as the original (hard to do this for the whole body as some of the material is retained and reused, such as with calcium in the bones, but this is just a thought experiment so we can ignore all the technical difficulties). You can stand looking at the child that you once were, and yet he isn't you. Stick a pin in him and you feel nothing (until someone hits you over the head with a baseball bat in return).

You compared yourself to a river which always has different water flowing through it, but a river is not conscious. Something in you is conscious (unless consciousness is a complete illusion), and that thing is the "I" in the machine - it is you, in your case. If there is nothing in there that's constant because everything's being replaced, then why do you imagine that you stay in there rather than being flushed out and replaced with a different "I" which is then fooled into thinking it's the same "I" by the memories stored in the brain? The constant stuff in the brain is just data (patterns in the structure of the brain), so you'd require data to be conscious for there to be a constant "I" in the machine. Of course, the consciousness puzzle is so hard to make any sense of that conscious data isn't necessarily such an outlandish idea. I wonder if the whole universe is virtual and designed in such a way as to hide our true nature from us - maybe within the reality external to it data is a thing of substance rather than mere geometry, and perhaps it's conscious too: the very act of representation may be a thing, and it may be a thing capable of multiple simultaneous representations while the things represented may be entirely virtual, and all the sensations (qualia) are states of this representation thing itself. A weird idea, but if consciousness is real, it's probably going to need a weird explanation.

Re: where are the 1's and 0's?

Posted: Fri Oct 07, 2011 3:49 pm
by SDS
DavidCooper wrote:You're replying without thinking things through carefully. Have you ever been so absorbed with what you're doing on a computer that you haven't shifted position on your chair for perhaps an hour? (Dangerous - DVT risk, and that's why my OS beeps every half hour to remind me to get up and do some exercise.) Anyway, you only notice the pain in your legs when you finish what you're doing or get interrupted, and that means it wasn't actually pain until you noticed it. The pain inputs were being sent to your brain, but at some stage they were being drowned out to the point that pain wasn't generated. The sensation of pain only occurs when that input is able to overpower the other inputs, and the only feeling you ever experience is the average of the probably-weighted inputs. The inputs may vary moment by moment, so you may pick up a hint of pain here and there, but sometimes none gets through at all until the activity you're absorbed in is ended. The system I'm describing would be conscious, but just not conscious of any individual inputs until they overpower the rest.
I don't think I have been - maybe just that my thoughts head in a different direction. I don't like your example; the reason you don't feel the pain when sitting still is that a significant portion of it is due to increased bloodflow when you move.

The point, however, is more nuanced. Firstly, you experience more than one sensation at a time. I am aware independently of pressure on my wrists, and on my ankle where I am sitting strangely. I can also see the text on the screen etc. We don't only respond to one input at a time with everything else filtered. I also don't think that that filtering is necessary for consciousness (even if it is a feature of it). A more interesting example is that you aren't continuously aware of the sensation of wearing clothes - but if you think about it you can feel them.

Secondly, such filtering does occur in the autonomic nervous system, and in the subconscious brain. As such, it is also not sufficient for consciousness. It is more of a side topic on how the neural system pre-processes inputs.
Our ability to do complex stuff and the fact that we appear to be conscious should certainly make you consider whether consciousness has a role in our thinking ability, but that's as far as it goes - it is fully possible that there is no link between them at all, and indeed there is no example (other than with feelings) of any specific thing (component capability) which consciousness helps with which cannot in principle be done by a machine which lacks consciousness entirely. There may be some things (compound capabilities) where you might imagine a role for consciousness due to your lack of understanding of how those compound capabilities are built up out of component capabilities, but ultimately you're just guessing. In my own analysis of a wide range of compound capabilities, I have broken them down into their components and found no essential role for consciousness in any of them.
I disagree. This is not about whether a non-conscious system is able to learn in a complicated manner, but about whether a system is able to learn and apply itself to problems which were outside of the scope considered when it was first 'created' or 'designed'. It would be perfectly possible to design a robot to learn to use a computer - but one which was designed solely to deal with the domain of problems associated with nomadic life in a prey/predator environment would not be likely to have the correct range of learnable patterns. I accept that such flexibility can arise as the complexity is increased - our subconscious is EXTREMELY powerful, much more than we would like to accept - but consciousness seems likely to play a part in this. From an evolutionary perspective, if consciousness fills this role, and evolved, then it will do that - even if there is another way that the problem could have been solved.
It seems you haven't covered all the basics, so maybe you need to work through a couple of thought experiments.
You might be surprised. But I won't take the implied slight.
(1): Imagine a machine that can make a perfect copy of anything you stick inside it. You stand in one side of it and a working copy of you is created in the other. The copy thinks he's you (assuming you're a he). Can the original you and the copy both be you at the same time? If someone sticks a pin in him, do you feel it too?
At the time of copying, the two copies are indistinguishable. From the perspective of either one, it would feel and behave as if 'old'. It would be disingenuous for either to act as though they did not have that age - how they achieved it does not matter from their perspective. As time passes, the two copies would diverge. They would become increasingly, if at first subtly, different from each other - but with a certain element of apparent shared history.

I'm not entirely sure what this has to do with consciousness though.
(2): Imagine that all the atoms that were in your head many years ago have been collected and built back together into the same arrangement as they were once in when you were a child. We're now dealing with a copy of you as a child which is actually built up from the same material as the original (hard to do this for the whole body as some of the material is retained and reused, such as with calcium in the bones, but this is just a thought experiment so we can ignore all the technical difficulties). You can stand looking at the child that you once were, and yet he isn't you. Stick a pin in him and you feel nothing (until someone hits you over the head with a baseball bat in return).
In the light of answer (1), as a thought experiment: so what? From that child's perspective, its apparent history coincides with a portion of mine ... your point is?
You compared yourself to a river which always has different water flowing through it, but a river is not conscious. Something in you is conscious (unless consciousness is a complete illusion), and that thing is the "I" in the machine - it is you, in your case.
Point taken. The river example was purely to demonstrate that we already ascribe labels to things which are not as tangible as something consistent and physical - and that we shouldn't be surprised at that. Our notion of existence is wider.
If there is nothing in there that's constant because everything's being replaced, then why do you imagine that you stay in there rather than being flushed out and replaced with a different "I" which is then fooled into thinking it's the same "I" by the memories stored in the brain?
Because I don't think consciousness is some external quantity which is 'flushed through' the brain. It is a product of the operation of the brain - which does have continuity of structure, and stored information. As such there is continuity in the MANNER that these processes occur (i.e. the way in which we think), rather than any tangible thing.
The constant stuff in the brain is just data (patterns in the structure of the brain), so you'd require data to be conscious for there to be a constant "I" in the machine.
Absolutely not. I don't think the same thoughts. The constant is a mixture of slowly accumulating data mixed with learnt/developed patterns or manners of thought.
A weird idea, but if consciousness is real, it's probably going to need a weird explanation.
Or at least an idea which is not quite what we were expecting. If the roots of consciousness were exactly what we thought they were, the many people (particularly scientists) who have spent a lot of time looking would have come up with some rather more robust answers by now.

As much fun as this debate is, I have to leave for Ireland in 5 hours, for the weekend. And it is probably a good idea to get at least some sleep tonight. If it is still running when I get back, I look forward with relish to continuing this chat!

Re: where are the 1's and 0's?

Posted: Sat Oct 08, 2011 4:58 pm
by DavidCooper
SDS wrote:I don't like your example; the reason you don't feel the pain when sitting still is that a significant portion of it is due to increased bloodflow when you move.
It doesn't involve movement - it's just a shift of attention when the absorbing task is finished with.
The point, however, is more nuanced. Firstly, you experience more than one sensation at a time. I am aware independently of pressure on my wrists, and on my ankle where I am sitting strangely.
How can you be sure of that? When I try to work out whether I can experience more than one sensation at a time, just trying to monitor this can itself block both of them, but it feels to me as if there is a rapid switching going on, with only one sensation actually being experienced at any one moment. It's impossible to be sure, though. I'm sitting in a cold room with a heater to one side, trying to feel the cold on one side and the warmth on the other, but I don't think I can be aware of both at once. It's like the picture that switches between being a candlestick and two faces: it seems impossible to see it as both of these things at the same time.
I can also see the text on the screen etc. We don't only respond to one input at a time with everything else filtered.
As I look at this screen, I find my attention switching around at high speed, but only landing on one thing at a time. There's movement over to the left where a pack of smilies keep moving - you won't be able to see them as they're only there when writing a post, so I'll stick some in here to illustrate the point. =D> #-o [-X [-o< :lol: :roll: If you focus your attention on a word some distance away from them and try to concentrate on the shape of one of the letters in it, the movement of the smilies repeatedly distracts your attention away from that, though only for a fraction of a second at a time. Consciousness appears to me to have an extremely small focus of attention, but it can flit between different things at very high speed rather like multitasking on a single processor.
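If it helps, here's a toy sketch in Python of that single-processor analogy: one loop (the "core") gives each sensation a brief time slice in turn, so only one is ever in focus at any instant. The sensation names and the number of ticks are made up, of course:

Code:
    from collections import deque

    # Each sensation is an endless stream of "moments" of that input.
    def sensation(name):
        while True:
            yield "attending to " + name

    # Three live inputs, one focus of attention rotating between them -
    # round-robin multitasking on a single processor.
    tasks = deque(sensation(n) for n in ("cold side", "warm side", "screen"))
    for _ in range(6):            # six scheduler ticks
        current = tasks.popleft()
        print(next(current))      # only one sensation is "felt" per tick
        tasks.append(current)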
I also don't think that that filtering is necessary for consciousness (even if it is a feature of it). A more interesting example is that you aren't continuously aware of the sensation of wearing clothes - but if you think about it you can feel them.
If there was only one input, filtering wouldn't be involved, but there are many and I'm very sure that you can't experience many at exactly the same point in time - I don't even think two at a time is possible, though trying to monitor that may be blocking it, so it's hard to tell. With your clothes example, the inputs must be being generated all the time, but they are filtered out and don't make it through to conscious awareness until more significant inputs remove themselves. Having said that though, it isn't beyond possibility that subconscious systems in the brain may be conscious in their own right, but without any mechanism to say so.
Secondly, such filtering does occur in the autonomic nervous system, and in the subconscious brain. As such, it is also not sufficient for consciousness. It is more of a side topic on how the neural system pre-processes inputs.
I never said filtering was part of consciousness, but I did suggest that whatever it is that's conscious may be capable of averaging out many inputs and experiencing the resulting average sensation. Alternatively, all that may be done non-consciously and then a specific feeling be generated in whatever it is that's conscious, though there doesn't seem to be any point in doing that if there's no function in it other than to receive an input, feel it and then send out an output to say that it was felt.
Our ability to do complex stuff and the fact that we appear to be conscious should certainly make you consider whether consciousness has a role in our thinking ability, but that's as far as it goes - it is fully possible that there is no link between them at all, and indeed there is no example (other than with feelings) of any specific thing (component capability) which consciousness helps with which cannot in principle be done by a machine which lacks consciousness entirely. There may be some things (compound capabilities) where you might imagine a role for consciousness due to your lack of understanding of how those compound capabilities are built up out of component capabilities, but ultimately you're just guessing. In my own analysis of a wide range of compound capabilities, I have broken them down into their components and found no essential role for consciousness in any of them.
I disagree. This is not about whether a non-conscious system is able to learn in a complicated manner, but about whether a system is able to learn and apply itself to problems which were outside of the scope considered when it was first 'created' or 'designed'. It would be perfectly possible to design a robot to learn to use a computer - but one which was designed solely to deal with the domain of problems associated with nomadic life in a prey/predator environment would not be likely to have the correct range of learnable patterns. I accept that such flexibility can arise as the complexity is increased - our subconscious is EXTREMELY powerful, much more than we would like to accept - but consciousness seems likely to play a part in this. From an evolutionary perspective, if consciousness fills this role, and evolved, then it will do that - even if there is another way that the problem could have been solved.
If you program a robot to be able to solve any problem it encounters (though let's limit ourselves to just those problems which a human can solve), then all such problems are automatically within the scope considered when it was designed, regardless of whether they were specifically considered or not. Such a machine would be able to solve problems which are new to it without needing consciousness. It isn't clear as to how consciousness would make the process any more efficient as it has no obvious role, but we won't be able to tell until we can see the actual mechanism of it.
It seems you haven't covered all the basics, so maybe you need to work through a couple of thought experiments.
You might be surprised. But I won't take the implied slight.
No slight intended, but there appears to be something missing in the way you see things.
(1): Imagine a machine that can make a perfect copy of anything you stick inside it. You stand in one side of it and a working copy of you is created in the other. The copy thinks he's you (assuming you're a he). Can the original you and the copy both be you at the same time? If someone sticks a pin in him, do you feel it too?
At the time of copying, the two copies are indistinguishable. From the perspective of either one, it would feel and behave as if 'old'. It would be disingenuous for either to act as though they did not have that age - how they achieved it does not matter from their perspective. As time passes, the two copies would diverge. They would become increasingly, if at first subtly, different from each other - but with a certain element of apparent shared history.

I'm not entirely sure what this has to do with consciousness though.
And that's what's missing.
(2): Imagine that all the atoms that were in your head many years ago have been collected and built back together into the same arrangement as they were once in when you were a child. We're now dealing with a copy of you as a child which is actually built up from the same material as the original (hard to do this for the whole body as some of the material is retained and reused, such as with calcium in the bones, but this is just a thought experiment so we can ignore all the technical difficulties). You can stand looking at the child that you once were, and yet he isn't you. Stick a pin in him and you feel nothing (until someone hits you over the head with a baseball bat in return).
In the light of answer (1), as a thought experiment: so what? From that child's perspective, its apparent history coincides with a portion of mine ... your point is?
The point is about what you are. You think you were a child in the past, but now that child is standing there in front of you: he isn't merely an identical copy, but is actually built out of the exact same atoms as the original and in the exact same arrangement. You are clearly not him. Were you ever him?
If there is nothing in there that's constant because everything's being replaced, then why do you imagine that you stay in there rather than being flushed out and replaced with a different "I" which is then fooled into thinking it's the same "I" by the memories stored in the brain?
Because I don't think consciousness is some external quantity which is 'flushed through' the brain. It is a product of the operation of the brain - which does have continuity of structure, and stored information. As such there is continuity in the MANNER that these processes occur (i.e. the way in which we think), rather than any tangible thing.
So let's go back to the pain issue. If pain is felt, something feels it. What is that thing? If I stick some atoms together and create a structure which can feel pain, but none of the atoms and no part of the atoms feels any pain, then what are you left with to feel the pain? Is it the geometrical arrangement of the atoms that feels pain? Is it energy passing through that feels the pain? You can pick any answer you like to try, but what happens on another occasion when you run the experiment again with all the atoms replaced and only the geometrical arrangement of atoms preserved? If the same thing is to feel the pain on both occasions, the same thing has to be there on both occasions, so if you want to claim that you are a constant throughout the life of the animal you live in, you're going to be hard pushed to identify anything in there that you, the thing that experiences the pain, can be.
The constant stuff in the brain is just data (patterns in the structure of the brain), so you'd require data to be conscious for there to be a constant "I" in the machine.
Absolutely not. I don't think the same thoughts. The constant is a mixture of slowly accumulating data mixed with learnt/developed patterns or manners of thought.
So what feels the pain? Data? Geometry?
As much fun as this debate is, I have to leave for Ireland in 5 hours, for the weekend. And it is probably a good idea to get at least some sleep tonight. If it is still running when I get back, I look forward with relish to continuing this chat!
Well, I can't guarantee that I'll be here as I don't know how long I can exist for, but someone will probably continue to answer in the same name and with reference to the same thinking set. Disagreement on this subject is always useful as it may provide clues.