
Is Wikipedia a potential Brain?

Posted: Wed Dec 20, 2006 1:16 pm
by jvff
Hi,

I was brainstorming about the way we handle and process information and suddenly drew a connection between our brains and Wikipedia. This is just theory, but here is the brain as a concept. First, given an input subject, it gets all the information related to that subject, then the information related to those, and so on. Imagine a two-dimensional net of information where the knots are subjects and the lines are connections between them. When you "bring up" a subject, what you're doing is grabbing that subject's knot and pulling the net perpendicular to the two dimensions (i.e. you're pulling in a third dimension, like holding onto a knot and lifting a net that was lying on a table).

By travelling between pieces of information you create a line of thought. Eventually the line of thought (think of it as a one-dimensional line passing through some knots) gets translated into words, and through another traversal of the net you form a sentence, hence producing output.

Now, look at Wikipedia's articles from a computer's point of view: each is just a bunch of words, some of them linking to others. So when you enter an article, it already links to the related subjects. It's a static brain, and the only thing it's missing to become a brain (in the sense of the concept above) is the traveler. Normally that's the user, but imagine if we made a program to do the travelling... AI?
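
In rough Python, that traveler could be a random walker over the link graph (the graph below is obviously made up):

Code:
    import random

    # Hypothetical toy link graph: article -> related articles.
    links = {
        "computer": ["software", "hardware", "algorithm"],
        "software": ["algorithm", "computer"],
        "hardware": ["computer"],
        "algorithm": ["software", "mathematics"],
        "mathematics": ["algorithm"],
    }

    def wander(start, steps):
        """Follow random links from a subject, producing a 'line of thought'."""
        thought = [start]
        for _ in range(steps):
            neighbours = links.get(thought[-1], [])
            if not neighbours:
                break
            thought.append(random.choice(neighbours))
        return thought

    print(wander("computer", 5))  # e.g. ['computer', 'software', 'algorithm', ...]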

I would also like to bring up consciousness, like Skynet in the Terminator series gaining self-awareness. For the concept to become self-aware, it needs some subjects relating to "I". These can't be learned; they must be hand-placed into the net, so that it knows every step it takes can be related to something outside itself. This leads to a final question: who put the "I" knots in us? Is it soul-related?

Anyway, just rambling... But it would be interesting to see what you think about it (and, most importantly, why you think what you think about it =) ).

Thx, sorry for anything,

JVFF

Posted: Wed Dec 20, 2006 1:27 pm
by Brynet-Inc
Who isn't interested in Artificial Intelligence? :lol:

Sadly, once you learn about the available concepts and technology, you realize the types of AI you see in movies won't be possible for a very long time.. decades even (or longer..)

I had a good laugh at the Wikipedia bit though LOL.. Thanks 8)

Posted: Wed Dec 20, 2006 1:36 pm
by jvff
Yeah, I know :D

However, I'm more interested in the philosophy behind it. I find it interesting that we are copying the way we handle information inside us to the outside. I also wonder if we could create bots that travel through Wikipedia, so we could ask them things like "computer = software" and get back something like "13%", based on something like the number of shared links, which we would interpret as "no, but they are connected". More interesting would be if they could be smarter and return "computer = (software, ...)", meaning software is a sub-"thing" of computers.
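
In rough Python, such a bot might compare the two articles' link sets and report the overlap (the sets below are invented):

Code:
    # Invented link sets for two articles.
    links = {
        "computer": {"software", "hardware", "algorithm", "electronics"},
        "software": {"algorithm", "computer", "programming"},
    }

    def relatedness(a, b):
        """Fraction of links the two subjects share (Jaccard index)."""
        shared = links[a] & links[b]
        either = links[a] | links[b]
        return len(shared) / len(either)

    print("computer = software? %.0f%%" % (100 * relatedness("computer", "software")))

Thx for the reply,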

JVFF

Posted: Wed Dec 20, 2006 5:56 pm
by Tyler
I can assure you... before you become overly worried about it becoming self-aware and destroying us... Wikipedia is in no way like a brain. A brain, in the Wikipedia sense, would be more like a page that selects many pieces of data and forms a single page of information. This is contrary to Wikipedia, where you have to click links to reach other topics. The brain has ways of connecting data that are so far ahead of most data storage standards that there really is no comparison. Also, data storage is only part of what the brain does.

Posted: Wed Dec 20, 2006 5:57 pm
by spix
I think you are describing Wikipedia as a kind of neural network. However, there is a small flaw: in order for it to expand its knowledge, humans must edit Wikipedia.

It seems more like you are describing Wikipedia as a method for an artificial intelligence to learn things. Wikipedia is a good way for humans to learn, so it would be natural for an artificial intelligence to also be able to learn from it.

I don't think artificial intelligence will come around any time soon. We need to understand ourselves before we have any hope of creating a life form.

Posted: Wed Dec 20, 2006 6:01 pm
by spix
Tyler wrote:I can assure you... before you become overly worried about it becoming self-aware and destroying us...
Perhaps while it is still young we should teach it the Three Laws of Robotics. ;)

Posted: Wed Dec 20, 2006 6:01 pm
by Tyler
spix wrote:I think you are describing Wikipedia as a kind of neural network. However, there is a small flaw: in order for it to expand its knowledge, humans must edit Wikipedia.

It seems more like you are describing Wikipedia as a method for an artificial intelligence to learn things. Wikipedia is a good way for humans to learn, so it would be natural for an artificial intelligence to also be able to learn from it.

I don't think artificial intelligence will come around any time soon. We need to understand ourselves before we have any hope of creating a life form.
I hate to rip on humans... actually I don't hate it... but anyway... I don't think artificial intelligence will come from trying to mimic human behaviour. A far better idea would be to create an artificial being that is actually intelligent.

Posted: Wed Dec 20, 2006 6:16 pm
by spix
Tyler wrote:I hate to rip on humans... actually I don't hate it... but anyway... I don't think artificial intelligence will come from trying to mimic human behaviour. A far better idea would be to create an artificial being that is actually intelligent.
Mimicking human behaviour wouldn't work. The idea of artificial intelligence is to learn by itself and develop its own traits. Simply mimicking behaviour wouldn't be intelligent at all.

Posted: Wed Dec 20, 2006 6:22 pm
by Tyler
spix wrote:
I hate to rip on humans... actually I don't hate it... but anyway... I don't think artificial intelligence will come from trying to mimic human behaviour. A far better idea would be to create an artificial being that is actually intelligent.
Mimicking human behaviour wouldn't work. The idea of artificial intelligence is to learn by itself and develop its own traits. Simply mimicking behaviour wouldn't be intelligent at all.
Well, I didn't mean a specific human... I mean generally. For example, the Turing test.. the idea that someone talking to the computer believes it to be a human. I see this as a fault in AI. Who would want to mimic humans?

Posted: Wed Dec 20, 2006 8:31 pm
by bubach
But in order to be smart, it needs to be able to communicate, right? And if it can do that, it should also be able to talk to a human in a way that doesn't sound stupid. That leads us to the Turing test...

Posted: Wed Dec 20, 2006 11:28 pm
by Brendan
Hi,

When I was younger I played with the idea of constructing a database where pieces of information were compressed using lossless compression (initially) and lossy compression (eventually) when the database became full. Pieces of information would be selected for increased compression depending on how important that information is (how long ago it was used and how often it is used).

For example, the database might start with "the quick brown fox jumped over the lazy dog", but after a while it'd be compressed without loss (i.e. sent to long-term memory), and then if it's still not used it'd be compressed with loss - first becoming something like "the brown fox jumped over the dog", then perhaps becoming "a fox jumped over a dog", then "a fox jumped", until eventually it's too small to reduce further and is discarded.
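
In rough Python, the lossy half might look like this; the importance scores are invented, and as a simplification words are only dropped, never substituted:

Code:
    # Invented importance scores: articles and adjectives fade first.
    importance = {"the": 0, "brown": 1, "quick": 1, "lazy": 1,
                  "over": 2, "fox": 3, "dog": 3, "jumped": 4}

    def forget(sentence):
        """Drop the single least important word, simulating lossy recall."""
        words = sentence.split()
        words.remove(min(words, key=lambda w: importance.get(w, 5)))
        return " ".join(words)

    memory = "the quick brown fox jumped over the lazy dog"
    while memory:
        print(memory)  # each pass loses a little more detail
        memory = forget(memory)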

Of course the database wouldn't store sentences, but instead would store "nodes" and both forward references and backward references to other nodes. For example, the forward references would be:
  • quick->fox
  • fox->jumped
  • jumped->lazy
  • lazy->dog
And the backward references would be:
  • fox->quick
  • jumped->fox
  • lazy->jumped
  • dog->lazy
Then you could add other sentences, e.g. "the lazy dog ran away" and "red is quick", and end up with more references, with increased "reference strengths":
  • red->quick
  • quick->fox
  • fox->jumped
  • jumped->lazy
  • lazy->dog * 2
  • dog->ran
  • ran->away
And the backward references would be:
  • quick->red
  • fox->quick
  • jumped->fox
  • lazy->jumped
  • dog->lazy * 2
  • ran->dog
  • away->ran
From here you can traverse the database from any node to find information related to that node, and you can make judgements based on the strength of references. For example, you might be unsure if the fox definitely was quick and if it jumped anything or if the dog ran away, but you could be reasonably sure that the dog was lazy. You could also end up with "red quick fox jumped lazy dog ran away".
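
In rough Python, such a store could be a pair of counters over word pairs (assuming the stop words like "the" and "is" have already been stripped):

Code:
    from collections import Counter

    forward = Counter()
    backward = Counter()

    def learn(words):
        """Record each consecutive pair as forward and backward references."""
        for a, b in zip(words, words[1:]):
            forward[(a, b)] += 1
            backward[(b, a)] += 1

    learn(["quick", "fox", "jumped", "lazy", "dog"])  # the quick brown fox...
    learn(["lazy", "dog", "ran", "away"])             # the lazy dog ran away
    learn(["red", "quick"])                           # red is quick

    print(forward[("lazy", "dog")])   # 2 -> the double-strength reference
    print(backward[("dog", "lazy")])  # 2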

From here you can add "node types" and use "reference types" instead of just forward and backward references, and then build language parsing and queries on top of it.

YOU: Was the dog lazy?
Computer searches for links between dog and lazy and finds a double-strength link
CPU: Almost certainly

YOU: Was the fox red?
Computer searches for links between fox and red and finds weak links to quick from both
CPU: I'm not sure but red things are quick, and the fox was quick

YOU: Was the dog quick?
Computer searches for links between dog and quick and finds no direct links or shared links
CPU: I don't know

YOU: Can you tell me about the dog?
Computer searches for dog and finds a strong link to lazy and a weak link to ran
CPU: The dog is lazy and may have run away

YOU: Fast means the same as quick
Computer adds references between fast and quick
CPU: OK

YOU: Was the fox fast?
Computer searches for links between fox and fast and finds weak links to quick from both
CPU: I'm not sure but fast things are quick, and the fox was quick
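
In rough Python, those answers could come from two lookups on a store like the one above (the link strengths are restated here so the sketch runs alone; the thresholds are invented):

Code:
    # Link strengths from the store above, restated here.
    forward = {("lazy", "dog"): 2, ("quick", "fox"): 1, ("red", "quick"): 1,
               ("dog", "ran"): 1, ("ran", "away"): 1, ("fox", "jumped"): 1,
               ("jumped", "lazy"): 1}

    def neighbours(node):
        """Every node directly linked to this one, in either direction."""
        out = {b for (a, b) in forward if a == node}
        inc = {a for (a, b) in forward if b == node}
        return out | inc

    def ask(a, b):
        """Answer 'was a b?' from direct links, then shared neighbours."""
        strength = forward.get((a, b), 0) + forward.get((b, a), 0)
        if strength >= 2:
            return "Almost certainly"
        if strength == 1:
            return "Probably"
        shared = neighbours(a) & neighbours(b)
        if shared:
            return "I'm not sure, but both relate to %s" % ", ".join(sorted(shared))
        return "I don't know"

    print(ask("dog", "lazy"))   # Almost certainly
    print(ask("fox", "red"))    # I'm not sure, but both relate to quick
    print(ask("dog", "quick"))  # I don't know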

Once you've implemented something like this you could point it at the internet and get it to fill its database with information. ;)

I never actually implemented it though - once you start trying to decide which node types and reference types you need, things start becoming very complicated very quickly...


Cheers,

Brendan

Posted: Thu Dec 21, 2006 4:54 am
by jvff
Very interesting, Brendan =)

While thinking about your method I understood the complexity, mainly in the form of ambiguity. I also think it could be better to define a three-entry system, where we can create relationships. That way we "optimize" the two-way system by providing sling-shots: A leads to B, but AB also leads to C. Examples: "cat is animal", "animal isn't plant", "elephant bigger than mouse".
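
In rough Python, that three-entry system could be a tiny store of (subject, relation, object) triples, using the examples above (the about() helper is hypothetical):

Code:
    # Three-entry relationships: (subject, relation, object).
    triples = [
        ("cat", "is", "animal"),
        ("animal", "isn't", "plant"),
        ("elephant", "bigger than", "mouse"),
    ]

    def about(subject):
        """Everything known in which the subject appears on either side."""
        return [t for t in triples if subject in (t[0], t[2])]

    print(about("animal"))  # [('cat', 'is', 'animal'), ('animal', "isn't", 'plant')]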

However, I think the more entries you put in a system, the clearer it becomes. And an n-entry system could be decomposed (I think) into a 2-entry system as long as the ambiguities are resolved (like the creation of different entries for the same lexical word with different meanings). Interesting topic, thx for the info,

JVFF

Posted: Thu Dec 21, 2006 5:01 am
by m
Brendan wrote: Hi,
......
It seems to have something to do with cache algorithms, huh? :roll:
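
It does: selecting entries for extra compression by how long ago and how often they were used is essentially a cache eviction policy. In rough Python, the least-recently-used flavour (a sketch that ignores the usage-frequency half of Brendan's rule):

Code:
    from collections import OrderedDict

    class LRUMemory(OrderedDict):
        """Keep at most n items; the least recently used one is evicted first."""
        def __init__(self, n):
            super().__init__()
            self.n = n

        def remember(self, key, value):
            if key in self:
                self.move_to_end(key)  # recently used -> safe from eviction
            self[key] = value
            if len(self) > self.n:
                self.popitem(last=False)  # evict the stalest entry

    m = LRUMemory(2)
    m.remember("fox", "quick")
    m.remember("dog", "lazy")
    m.remember("red", "quick")  # "fox" is evicted
    print(list(m))              # ['dog', 'red']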

Posted: Thu Dec 21, 2006 7:15 am
by spix
Tyler wrote:Well, I didn't mean a specific human... I mean generally. For example, the Turing test.. the idea that someone talking to the computer believes it to be a human. I see this as a fault in AI. Who would want to mimic humans?
Perhaps, but how do humans in general behave? I see intelligence as starting as a baby and developing by evaluating the world around us; intelligence isn't a specific behaviour but an ongoing process.

"Humans aren't very intelligent." but I see humans as the problem in that statement, not intelligence. If a machine was built with the same capacity of reason, thought and abilities to learn as a human, this machine would then be it's own sentient being and develop in it's own way, would it develop the same morals, ethics, prejiduces and emotions as humans? Perhaps, perhaps not.

The reasons humans behave the way they do are debatable and not really the point. Generally we are products of our surroundings, as a machine would also be. However, the ability to reason and learn is not the only way we interpret our surroundings: we have emotions that can cloud our judgment, and we can be irrational and selfish. This is what I mean when I say we must learn about ourselves before we can create an artificial life form.

Could a machine learn to hate? If all we are is brain meat, then the answer would be yes. If there is more to us than that, if there is some intertwining of spirit and heart that influences our being, then creating an artificial life form would be far more complicated than building a thinking machine.

I suppose it is rather philosophical, and really a question of the difference between "intelligence" and "consciousness".

As for the Turing test, that depends on your concept of reality, i.e. if I believe a machine to be intelligent, is it therefore intelligent? Is that a truth? If I believe a machine is intelligent and you do not, who is correct? Are we both correct, and the machine is intelligent in my reality but not in yours?

I ramble..

Posted: Thu Dec 21, 2006 8:09 am
by Brynet-Inc
If re-creating a conscious and intelligent mind (as we define it) is impossible, maybe we should instead find methods of copying our own minds into machines.

Mind uploading :D.. 8)

Imagine some of the possibilities that would offer for a second..
In my understanding, you wouldn't be restrained by physical limits.

Reading would be instantaneous.
Learning would be instantaneous.
Multi-threaded consciousness? :lol:

I ramble also.. 8)