
Posted: Thu Dec 21, 2006 8:54 am
by jvff
Brynet-Inc: you could also see potential threats. Imagine, for example, that based on a statistical mapping scheme you could upload your mind (possibly the personality as well), and based on that information predict every single decision the uploaded person will make during his life (as long as the same choices and circumstances are tested on the map, which brings me to a potential paradox: the uploading must also happen inside the map, along with all knowledge of the system, so it's somewhat reciprocal). At first it would be a way to help you think (maybe offload some thoughts), but imagine if advertisers used it to test ways to get your attention.

It could become a form of government: a hyper-pseudodemocracy, where every citizen is mapped and the rules of society are based on the predictions of the majority (it would be almost exactly what the majority wants). Then it could become a method of censorship: since your brain is mapped, people with power could eliminate people with diverging thoughts.

And finally, what if it becomes the way to immortality? All we become is data, and all of a sudden nature doesn't matter anymore (because we no longer need it to propagate our species, which can be done by creating more data in whatever form possible).

Anyway, just thinking outside the box. Adding some spice to it =)


Back to the present... Intelligence is a term that is too abstract. What is a smart thing? For example, smart could mean something that can use all it has learned (and possibly learn some more). Intelligent could mean smart and creative (i.e. trying something outside of its scope of information; "thinking outside the box").

Do we need smart machines? It sure helps... Do we need self-aware machines? Maybe they only need to know about us, humans. Maybe by being self-aware they can be creative (by knowing their limitations).

Could it have a personality? I think not. But it could learn to hate. In a sense, feelings in us can be described as chemical reactions (rather than electrical ones); that's why you become happy, sad, etc. But it could, by logic, come to the conclusion that we are inefficient (or bad), and that could be a form of hate.

Perhaps the simplest form of a "possible" AI is based on natural selection: trial and error. Dunno. Rambling... ;)

JVFF

Posted: Thu Dec 21, 2006 9:08 am
by Brendan
Hi,
jvff wrote:While thinking about your method I understood the complexity, mainly in the form of ambiguity. Also I think it could be better to define a three-entry system, where we can create relationships. That way we "optimize" the two-way system by providing slingshots. Therefore A leads to B, but AB also leads to C. Example: "cat is animal", "animal isn't plant", "elephant bigger than mouse".
That depends on what a "reference type" is.

For my original thinking, there were concepts (equals, less, split, create, etc) and a reference type was one of 6 different forms of a concept - 3 different temporal values (past tense, present tense and future tense) and opposites. For example, the "equals" concept becomes the reference types "was equal", "is equal", "will be equal", "wasn't equal", "isn't equal" and "won't be equal".

If a node is either an object (cat, elephant, fox, dog) or an attribute (animal, plant, red, lazy) then this seems to work well. For example, "cat" linked to "animal" with the "equals" reference type establishes that the cat is an animal.

It also seems to work well for queries - for example, a question like "did elephant create cat?" can be solved by checking if elephant and cat are linked by the "past tense create" reference type or its opposite ("elephant did create cat", "elephant didn't create cat"). If this fails then further information can be obtained by checking for links of the other "create" reference types ("elephant will create cat", "elephant won't create cat", "elephant is creating cat", "elephant isn't creating cat"). If this also fails you can check cat or elephant for links to other nodes with the create reference types ("I don't know, but elephant created cheese", "I don't know, but dog didn't create cat").
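As a rough sketch of how those fallback queries could look (a hypothetical toy, with class and function names of my own invention, not Brendan's design):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    source: str
    concept: str
    tense: str      # "past", "present" or "future"
    positive: bool  # True for "did"/"is", False for "didn't"/"isn't"
    target: str

class KnowledgeBase:
    def __init__(self):
        self.links = []

    def add(self, source, concept, tense, positive, target):
        self.links.append(Link(source, concept, tense, positive, target))

    def query(self, source, concept, tense, target):
        """Answer e.g. 'did elephant create cat?' with the fallbacks above."""
        wanted = {Link(source, concept, tense, p, target) for p in (True, False)}
        # 1. Direct hit: the exact reference type or its opposite.
        for link in self.links:
            if link in wanted:
                return "yes" if link.positive else "no"
        # 2. Same concept and nodes, but another tense.
        for link in self.links:
            if (link.source, link.concept, link.target) == (source, concept, target):
                return "not in the " + tense + " tense, but in the " + link.tense
        # 3. Partial match: same concept, sharing one node.
        for link in self.links:
            if link.concept == concept and (source in (link.source, link.target)
                                            or target in (link.source, link.target)):
                verb = "did" if link.positive else "didn't"
                return ("I don't know, but " + link.source + " " + verb
                        + " " + concept + " " + link.target)
        return "I don't know"

kb = KnowledgeBase()
kb.add("cat", "equals", "present", True, "animal")
kb.add("elephant", "create", "past", True, "cheese")
print(kb.query("cat", "equals", "present", "animal"))  # yes
print(kb.query("elephant", "create", "past", "cat"))   # falls through to step 3
```

The second query finds no "create" link between elephant and cat in any tense, so it falls through to the partial-match answer "I don't know, but elephant did create cheese", just like the last fallback in the post.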

m wrote:It seems to have something to do with cache algorithms,huh? :roll:
Cache algorithms are a little different in that information is either in the cache or it's not - there's no "half remembered" state.

The human mind is a strange thing, in that it seems to be able to store an infinite amount of information. My theory is that the more stuff you try to learn the more you forget about what you already knew, but you only forget small details that aren't so noticeable.

For a database of linked nodes, you can keep track of which nodes are used least often and "forget" nodes and/or links to make space for more information. If the database allows nodes to refer to pictures, videos, sounds, etc., then you could also reduce the size of a node using lossless and lossy compression. For a simple example, imagine a node for "dog" which includes a high quality picture of a dog and the sound a dog makes, where the picture and the sound could be compressed and reduced to a small low quality picture or sound (and the original picture and sound disposed of).
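A minimal sketch of that "forget the least-used nodes" idea, assuming a made-up cost model (a "full" node costs 2 storage units, a "partial" one costs 1) and names that are mine, not the post's:

```python
class Memory:
    LEVELS = ["full", "partial", "forgotten"]          # detail kept per node
    COST = {"full": 2, "partial": 1, "forgotten": 0}   # storage units used

    def __init__(self, capacity):
        self.capacity = capacity   # total storage units available
        self.nodes = {}            # name -> {"refs": count, "level": detail}

    def recall(self, name):
        """Recalling a node bumps its reference count, strengthening it."""
        node = self.nodes.get(name)
        if node is None or node["level"] == "forgotten":
            return "forgotten"
        node["refs"] += 1
        return node["level"]

    def learn(self, name):
        self.nodes[name] = {"refs": 1, "level": "full"}
        self._make_room()

    def _used(self):
        return sum(Memory.COST[n["level"]] for n in self.nodes.values())

    def _make_room(self):
        # While over capacity, degrade the least-referenced remembered
        # node by one detail level (full -> partial -> forgotten).
        while self._used() > self.capacity:
            victim = min(
                (k for k, v in self.nodes.items() if v["level"] != "forgotten"),
                key=lambda k: (self.nodes[k]["refs"], k),
            )
            step = Memory.LEVELS.index(self.nodes[victim]["level"]) + 1
            self.nodes[victim]["level"] = Memory.LEVELS[step]

mem = Memory(5)
mem.learn("dog")
mem.recall("dog")
mem.recall("dog")        # "dog" is referenced often, so it stays sharp
mem.learn("cat")
mem.recall("cat")
mem.learn("laundry")     # over capacity: the weakest node loses detail
```

After the last `learn`, the rarely referenced "laundry" node has been compressed to "partial" while the frequently recalled "dog" node keeps full detail, mirroring the dog-picture example above.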

This is more like the human mind - I remember doing my laundry yesterday and even remember how many loads it was and details of which items I washed (not compressed yet), I remember doing my laundry a few weeks ago but not many of the details (partially compressed), but I can't actually remember doing my laundry before that (compressed into oblivion), except for once when I forgot I had a tap running and flooded the laundry (compressed a lot, but not compressed into oblivion because the information is referenced occasionally - I think that node is linked to the "don't forget the tap is on" node ;) ).


Cheers,

Brendan

Posted: Thu Dec 21, 2006 9:43 am
by Brynet-Inc
That's a really interesting theory, Brendan..

But it's possible to remember things that happened decades or even further in your past. It seems unlikely that we actually permanently forget things (we just can't access some memories at will.. but might remember them years later..)

Another thing that might change if we could upload our minds into machines :lol:

But how can we make an artificial intelligence when we currently have only a minimal understanding of how our own minds work..

Posted: Thu Dec 21, 2006 9:53 am
by Colonel Kernel
That bit about compression reminds me of something I realized recently about my perception of time. Between seasons and holidays, time seems to drag on, but for any given notable season or holiday ("It's Christmas again already??") the time since the last occurrence a year ago doesn't seem long at all. It's easier to remember which things happened this year, across the whole year, than to remember in which year's Christmas x, y, and z happened.

The laundry example is particularly interesting -- similar events that happen very frequently almost get "run-length encoded". :) I doubt very much that I will ever remember the particular details of any given time I've done laundry, but I do know approximately how many times I've done laundry this year.

Weird stuff.

Posted: Thu Dec 21, 2006 9:55 am
by bubach
Brendan wrote:The human mind is a strange thing, in that it seems to be able to store an infinite amount of information. My theory is that the more stuff you try to learn the more you forget about what you already knew, but you only forget small details that aren't so noticeable.
I remember seeing a documentary on TV not long ago about the guy that the movie "Rain Man" is based on. He had some social troubles, but could read two pages of a book at the same time, one with each eye, finishing a 100-page book in something like 10 minutes.

He also remembered everything he read, and claimed to be able to do the poker thing (remembering the cards at a casino) but didn't want to show it because it wasn't right. He slowly learned how to be a bit more social by learning what to say and how to behave in different situations.

Apparently NASA wanted to run some tests on his brain and use him as a model for some supercomputer.

Posted: Thu Dec 21, 2006 10:17 am
by Brynet-Inc
Rain Man is a story that was written by Barry Morrow..

It was based on actual events? :?:

Posted: Thu Dec 21, 2006 10:27 am
by bubach
Not the events, but he got the idea for it when he met this guy.

Posted: Thu Dec 21, 2006 10:46 am
by Brendan
Hi,
Brynet-Inc wrote:That's a really interesting theory, Brendan..

But it's possible to remember things that happened decades or even further in your past. It seems unlikely that we actually permanently forget things (we just can't access some memories at will.. but might remember them years later..)
IMHO it depends on how often something is referenced and how long ago it occurred; details are reduced rather than the entire thing being removed.

For example, you might remember the first time you used a computer, but do you remember what the time and date was, what colour socks you were wearing, and what you did on that computer? For some of this information the mind can use other links and logic to fill in the gaps.

For example (for me), I think it must've been about 24 years ago and was probably in the afternoon because my Dad was there, and I don't think it was a weekend (but don't know why I think that - probably because there are 5 chances in 7 of being right). This means I was probably wearing either grey or blue socks, because that's what I remember wearing back when I was at school. I think I played some sort of computer game, because we didn't have word processors and spreadsheets back then, and I couldn't have known about programming when I first saw a computer.

Did I actually remember any of this? No - it's all guesswork based on other unrelated memories and logic. Do I actually remember the first time I used a computer? Honestly, I've been trying and the best I can do is some vague memory of my Dad borrowing a Commodore VIC-20 from one of his friends.

How long would it take for me to forget my own name? The "reference count" is probably fairly high and I might not forget it for 40 years or more (and I might die of old age before it happens). I'd also be reminded of it occasionally, which would increase the reference count again. I'd need to prevent this - get people to call me a different name, change my email address, find a way to get mail delivered to a different name, etc (I'd probably have to legally change my name to something else because of my driver's licence, taxation, etc). This still might not be enough - e.g. hearing someone else's similar name, or seeing the letters BST (my initials), would be hard to avoid and would also increase the reference count. I know I can't deliberately forget my name, because the act of trying to forget something includes increasing the reference counter for it ("I need to forget <oops>").

Does this mean my name is permanently burnt into my brain and impossible to forget? I really don't think it is (it's just incredibly difficult to forget in practice).


Cheers,

Brendan

Posted: Thu Dec 21, 2006 11:06 am
by Brendan
Hi,
Colonel Kernel wrote:That bit about compression reminds me of something I realized recently about my perception of time. It seems that between seasons and holidays, time seems to drag on, but for any given notable season or holiday ("It's Christmas again already??") the time since the last occurrence a year ago doesn't seem like a long time at all. It's easier to remember which things happened this year over the whole year than it is to remember in which year's Christmas did x, y, and z happen?
I'm incredibly bad at remembering the times of otherwise unrelated events. For example, I remember when Princess Diana crashed and died, including footage of the wreckage and some of the media surrounding the event at the time, but I have no idea which other events happened in that year (or exactly which year it was).
Colonel Kernel wrote:The laundry example is particularly interesting -- similar events that happen very frequently almost get "run-length encoded". :) I doubt very much that I will ever remember the particular details of any given time I've done laundry, but I do know approximately how many times I've done laundry this year.
Do you remember doing laundry X times this year, or do you remember how often you do laundry? 52 weeks per year divided by doing laundry every 2 weeks adds up to 26 times this year...

As a test, write an email containing what you ate today and send that email to yourself. As the subject write "WHAT I ATE". Every time you check your email and see this subject line shut your eyes and try to remember all the details, but don't read the email. After a month, write down how much you remembered, and then compare it to the email.

I'm guessing that each time you see the subject line and try to remember the details you'll increase the "reference counts" for this information, and after a month you'll remember much more than you'd expect (but won't remember anything about what you ate the day before or the day after).


Cheers,

Brendan

Posted: Thu Dec 21, 2006 11:20 am
by Brynet-Inc
Well, it looks like Brendan's talents don't end.

He knows about programming/computers and how the human mind works :lol:

But a good example of his theory: try to remember what you did last Wednesday. (I, for example, do practically the same thing every day, so for me Wednesday is the same as every other day..)

Posted: Thu Dec 21, 2006 11:26 am
by Tyler
Brynet-Inc wrote:Well, it looks like Brendan's talents don't end.

He knows about programming/computers and how the human mind works :lol:

But a good example of his theory: try to remember what you did last Wednesday. (I, for example, do practically the same thing every day, so for me Wednesday is the same as every other day..)
Well, his data storage theory is good... but I don't believe (though we don't actually know) that this is how the human mind works. For one thing, it is being compared to a database... the human mind is probably not much like the computers we use. Also, his technique requires loss of data, or removal to long-term storage, which is far more finite than the human mind seems to be. Alas, it is a good theory, and though he doesn't mention "display methods", it is similar to how I envisioned a Wikipedia that had a memory.

Posted: Thu Dec 21, 2006 1:16 pm
by JAAman
I've always thought of human memory as a sort of 'lossy compression': as events get older, they are compressed more, and more details are left out -- you still see the whole picture, but with fewer and fewer details (as Brendan pointed out, you don't remember details, and you remember fewer of them as memories get older).

Human memory will also drift -- if you remember something good, over time you will remember it as being better than it actually was; likewise, if it was bad, you will remember it as being worse than it actually was. This is part of the compression, which works on the links between mood/feeling memories and event/action/object memories: compressing such a link strengthens your feelings about things you already felt strongly about (for a simple, obvious example, a movie you really enjoyed a couple of decades ago will be remembered more fondly than it actually deserved; likewise for a bad memory). It's kind of like a scale: use a 32-bit number to represent a value between 'good' and 'bad', then convert that to an 8-bit number -- things which are good will 'shift' towards good, things which are bad will 'shift' towards bad, and things more neutral will become even more neutral, and will be remembered less clearly, since the mind's compression counts them as less important memories (your feelings toward them being neither good nor bad).
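The precision-loss half of that scale analogy can be sketched directly (the drift toward the extremes is a separate claim): requantising a wide "feeling" score down to 8 bits erases fine distinctions, so two mildly different memories collapse together while a strong one stays clearly separate. A toy sketch, with `requantise` being my own name:

```python
def requantise(value, from_bits=32, to_bits=8):
    """Map an unsigned from_bits value onto a coarser to_bits scale."""
    return value >> (from_bits - to_bits)

great = 0xF0000000   # strongly positive memory
good1 = 0x81000000   # two mildly positive memories,
good2 = 0x81FFFFFF   # distinguishable at full precision
assert good1 != good2

# after "compression" the two mild memories become identical...
assert requantise(good1) == requantise(good2) == 0x81
# ...but the strong memory keeps its distance from them
assert requantise(great) == 0xF0
```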
Tyler wrote:Also his technique requires loss of data, or removal to long term storage which is far more finite than the human mind seems to be
this compression, combined with relational data (as Brendan suggested), makes the mind appear to remember more than it actually needs to store, and therefore it doesn't need to be quite so 'infinite'



just my thoughts

Posted: Fri Dec 22, 2006 10:57 am
by Candy
Brendan wrote:For my original thinking, there were concepts (equals, less, split, create, etc) and a reference type was one of 6 different forms of a concept - 3 different temporal values (past tense, present tense and future tense) and opposites. For example, the "equals" concept becomes the reference types "was equal", "is equal", "will be equal", "wasn't equal", "isn't equal" and "won't be equal".

If a node is either an object (cat, elephant, fox, dog) or an attribute (animal, plant, red, lazy) then this seems to work well. For example, "cat" linked to "animal" with the "equals" reference type establishes that the cat is an animal.

It also seems to work well for queries - for example, a question like "did elephant create cat?" can be solved by checking if elephant and cat are linked by the "past tense create" reference type or its opposite ("elephant did create cat", "elephant didn't create cat"). If this fails then further information can be obtained by checking for links of the other "create" reference types ("elephant will create cat", "elephant won't create cat", "elephant is creating cat", "elephant isn't creating cat"). If this also fails you can check cat or elephant for links to other nodes with the create reference types ("I don't know, but elephant created cheese", "I don't know, but dog didn't create cat").
That's pretty much a bit of discrete mathematics. The types you're looking for are called relations, and the aspects you talk about are transitivity (elephant created cat, cat created cheese, so elephant created cheese), reflexivity (cat is cat -- which for instance doesn't hold for an ordering: cat isn't smaller than cat), symmetry (Molly is cat, cat is Molly -- much like commutativity, 1+2 = 2+1), and so on. You can compute the transitive closure of a relation, so you can determine mathematically which types of creature a cat is, and at which level it's related to a block of ice, for instance.
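The transitive closure Candy mentions is easy to sketch: treat the facts as pairs and keep adding implied pairs until nothing changes. A naive fixed-point version (function name is mine):

```python
def transitive_closure(pairs):
    """Smallest superset of pairs such that (a,b) and (b,c) imply (a,c)."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))  # newly implied relation
                    changed = True
    return closure

# the post's example: elephant created cat, cat created cheese...
facts = {("elephant", "cat"), ("cat", "cheese")}
print(transitive_closure(facts))  # ...so elephant created cheese, too
```

For large relations you'd use Warshall's algorithm instead of this quadratic fixed-point loop, but the result is the same set of pairs.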
The human mind is a strange thing, in that it seems to be able to store an infinite amount of information. My theory is that the more stuff you try to learn the more you forget about what you already knew, but you only forget small details that aren't so noticeable.

For a database of linked nodes, you can keep track of which nodes are used least often and "forget" nodes and/or links to make space for more information. If the database allows nodes to refer to pictures, videos, sounds, etc, then you could also reduce the size of node using lossless and lossy compression. For a simple example, imagine a node for "dog" which includes a high quality picture of a dog and the sound a dog makes, where the picture and the sound could be compressed and reduced to a small low quality picture or sound (and the original picture and sound disposed of).
Instead of trying to remove whole nodes at a time, consider an encoded movie. I can take the MPEG-encoded movie and chop off a bit of data at the end; that shortens the movie. I can leave out a few frames, leaving a notice saying "interpolate this bit". I can leave off the last few kbytes of a frame, which contain the finest detail. The more I cut out, the less of the detail you'll know. For computer-encoded movies this will be obvious - artifacts like those of MPEG-2 are well known. If you confabulate the bits you don't know from what you can guess (blur the image, for instance) you'll "know" the entire image and you won't notice that anything is missing. When asked, if you're a normal person, you'll know which bits you're actually remembering and which parts you don't remember but just made up to fill in missing information.
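That "drop detail, then interpolate" idea can be sketched on a one-dimensional signal: keep only every Nth sample, then reconstruct the rest by linear interpolation. The rebuilt "memory" still looks complete, but sharp details are gone. The names `forget` and `confabulate` are my own:

```python
def forget(samples, keep_every):
    """Lossy store: keep only every keep_every-th sample."""
    return samples[::keep_every]

def confabulate(kept, keep_every, length):
    """Rebuild a full-length signal by interpolating between kept samples."""
    out = []
    for i in range(length):
        pos = i / keep_every
        lo = min(int(pos), len(kept) - 1)
        hi = min(lo + 1, len(kept) - 1)
        frac = pos - lo
        out.append(kept[lo] * (1 - frac) + kept[hi] * frac)
    return out

original = [0, 2, 4, 6, 8, 10, 8, 6]          # the "real" event
remembered = confabulate(forget(original, 2), 2, len(original))
print(remembered)  # plausible everywhere, but the peak of 10 is gone
```

The reconstruction matches the original at the kept samples and fills the gaps with smooth guesses, so nothing looks obviously missing, even though the sharpest detail (the peak) has been blurred away, which is exactly the confabulation effect described above.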

I think you can conclude that the human mind isn't infinite. The human mind most likely stores less than the hard disk in my current computer. It's just using much better algorithms, and the algorithms it uses for compression are the same algorithms I use for recognising images, so I don't notice the loss. The more computer algorithms approach these algorithms, the more information you can leave out without changing the apparent quality of the signal.