Page 4 of 5

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 8:15 am
by Brendan
Hi,
Solar wrote:
Brendan wrote:Do you have any references to back this up (other than some rare research projects that haven't/won't make it to mass production, where a human has to sit and monitor the A.I. and correct it "just in case")?
Just in case you're referring to the EUREKA / Prometheus project, that wasn't A.I. at all.
To be honest, I doubt any of the prototype cars are actually A.I. in the way that David Cooper thinks. Most of them look like "system of rules" with a trivial bit of A.I. that's used for deciding whether the air conditioner should be turned on. ;)


Cheers,

Brendan

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 8:19 am
by Rusky
Brendan wrote:Do you have any references to back this up (other than some rare research projects that haven't/won't make it to mass production, where a human has to sit and monitor the A.I. and correct it "just in case")?
Depending on your definition of AI, there are some pretty well-known research projects that have done extremely well. If you include "driving a car" in AI (which it probably will be by marketers, at least until it becomes commonplace, at which point nothing is AI :P) then the EUREKA Prometheus project is an early one.

The one I was referring to is Google's- they have a much lower rate of human intervention than EUREKA Prometheus. A human has to sit and monitor the car "just in case," but at this point it's "just in case" for legal purposes rather than for safety purposes- they've never had an accident with the car driving itself, and they're 3 orders of magnitude above the EUREKA Prometheus project's non-intervention distance. They plan to license the technology to car manufacturers once driving laws are less ambiguous about driverless cars (something they're lobbying for).

There's also all the universities that enter the DARPA Grand Challenge. They've made a lot of progress in this area, although their projects are not as impressive as Google's.
Brendan wrote:
Rusky wrote:Just because intelligence is "configured" by learning doesn't mean it has to be used in production during that process.
So it's a "system that follows a fixed set of rules" (where the fixed set of rules may have been generated by A.I. at the factory) and not A.I. at all?
This is wild speculation (due to the very early stages of real AI research), but once it's configured, it may be possible to disable the learning processes. If you want to be pedantic and not call that AI, go ahead, but it would still be massively parallel, hierarchical pattern matching so I wouldn't.

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 8:53 am
by Solar
I think at this point it boils down to the definition of the term "artificial intelligence". When this whole discussion started, it was from a standpoint of strong A.I., i.e. the equivalent of human intelligence or better, capable of common-sense reasoning, judgement in the face of uncertainty, understanding of natural language and so on.

If you start calling something like Autonomous Cars "A.I.", you have come a far distance from that definition.

Or, to phrase it differently with respect to the thread title: "Will autonomous cars take over the world?" - "Nope."

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 9:11 am
by Brendan
Hi,
Rusky wrote:
Brendan wrote:Do you have any references to back this up (other than some rare research projects that haven't/won't make it to mass production, where a human has to sit and monitor the A.I. and correct it "just in case")?
Depending on your definition of AI, there are some pretty well-known research projects that have done extremely well. If you include "driving a car" in AI (which it probably will be by marketers, at least until it becomes commonplace, at which point nothing is AI :P) then the EUREKA Prometheus project is an early one.
Let's have 3 definitions/classifications:
  • Cars controlled purely by a "system of rules" alone. Stupid people in the media have a habit of calling this "A.I." even though there's no intelligence or learning of any kind. I guess if I have to advocate anything it'd be this (I personally think the basic idea of cars is completely idiotic).
  • Cars controlled purely by A.I. alone. This is what I call an "A.I. car"; and I think it's also what David Cooper is advocating.
  • Cars controlled by a mixture of both "system of rules" and A.I.
As far as I can tell, there are no cars controlled purely by A.I. alone. Those that I've been able to find usable information for look like cars controlled purely by a "system of rules" alone. Google's car might or might not be a mixture (I haven't been able to find usable information). If I search for ""google car" "artificial intelligence"" I get hits, but in these cases the "artificial intelligence" phrase is not in the article itself (e.g. it's in irrelevant navigation areas on the page), or it's in things like "Stanford Artificial Intelligence Laboratory" (not talking about the car at all), or it's in the article's title and not anywhere in a description of the car (likely to be a case of "stupid people in the media" or sensationalism).
Rusky wrote:
Brendan wrote:
Rusky wrote:Just because intelligence is "configured" by learning doesn't mean it has to be used in production during that process.
So it's a "system that follows a fixed set of rules" (where the fixed set of rules may have been generated by A.I. at the factory) and not A.I. at all?
This is wild speculation (due to the very early stages of real AI research), but once it's configured, it may be possible to disable the learning processes. If you want to be pedantic and not call that AI, go ahead, but it would still be massively parallel, hierarchical pattern matching so I wouldn't.
A system of rules is a system of rules, regardless of how that system of rules was created. An A.I. system would be something that can adapt dynamically based on past experience. For example, a "system of rules" car might see a thin layer of ice, decide that it's a smooth flat area that's safe for driving on, and repeatedly fall through the ice. An A.I. system would fall through the ice a few times but eventually figure out that "smooth and flat" doesn't necessarily mean safe for driving on (and then refuse to move on smooth flat roads).
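The distinction Brendan draws can be sketched as a toy program; the rules, thresholds, and names here are invented purely for illustration:

```python
def fixed_rule(surface):
    # A "system of rules": smooth and flat is always judged safe,
    # no matter how many times the car falls through the ice.
    return surface["smooth"] and surface["flat"]

class AdaptiveRule:
    # Adapts from experience: after enough bad outcomes, "smooth and
    # flat" is no longer trusted on its own.
    def __init__(self, tolerance=2):
        self.failures = 0
        self.tolerance = tolerance

    def judge(self, surface):
        if surface["smooth"] and surface["flat"]:
            return self.failures < self.tolerance
        return False

    def record_outcome(self, fell_through):
        if fell_through:
            self.failures += 1

ice = {"smooth": True, "flat": True}
adaptive = AdaptiveRule()
for _ in range(3):
    if adaptive.judge(ice):
        adaptive.record_outcome(fell_through=True)
# fixed_rule(ice) stays True forever; adaptive.judge(ice) is now False
```

The point is only that the fixed system's answer is frozen at the factory, while the adaptive one's answer is a function of its own history.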


Cheers,

Brendan

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 12:38 pm
by Rusky
Brendan wrote:A system of rules is a system of rules, regardless of how that system of rules was created. An A.I. system would be something that can adapt dynamically based on past experience. For an example, a "system of rules" car might see a thin layer of ice and decide that it's a smooth flat area that's safe for driving on and repeatedly fall through the ice. An A.I. system would fall through the ice a few times but eventually figure out that "smooth and flat" doesn't necessarily mean safe for driving on (and then refuse to move on smooth flat roads).
I don't think intelligence necessarily requires adaptation- I don't think we know enough to say. As human brains mature they make fewer and fewer new connections, and we still consider old people intelligent.

I don't know how far this could go before becoming unintelligent, but I think (and again I don't think we know enough to say for sure here) a car AI could benefit from the stability of sticking to the established configuration learned before production.

Anyway, I think my answer to this thread overall is that Brendan's original predictions are far too pessimistic, because of what we know and can learn by emulating natural intelligence. The AI that will result from this research will be useful, but it will not take over the world except maybe by becoming pervasive technology the way computers are today.

Is that something we can agree on or do you still think AI could never possibly be useful?

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 3:00 pm
by DavidCooper
Brendan wrote:To be honest, I doubt any of the prototype cars are actually A.I. in the way that David Cooper thinks. Most of them look like "system of rules" with a trivial bit of A.I. that's used for deciding which if the air conditioner should be turned on. ;)
I don't regard them as A.I. (in the strong sense) - they are much closer to your system, but they are doing something more like vision by using lidar, which takes them directly to a 3D representation of the world where objects are easier to identify and track. When we drive, we do almost all of it on autopilot using rule-based systems which we've programmed into ourselves, so we're not really applying intelligence directly when driving either, until some kind of problem comes up which we have to concentrate on. In many of those situations, it may be that the Google car just slows to a halt and waits for the problem to go away, relying on the intelligence of the drivers of other vehicles to work out how to solve it and to clear themselves out of its way.

Another point on using webcams for input: they have the same resolution right across the frame, whereas human eyes are blurred everywhere except the centre of vision. Moving the eyes around is slow, giving you a sharp view of only two or three parts of the scene per second (though it can be more if you include lines through the scene while your eyes are moving). We do the job with far from the best optical equipment, so webcams could be far superior. A better solution of course would be to have multiple sets of webcams - one pair of low resolution covering a wide angle, plus more pairs of high resolution collectively covering the same scene. Most of the input from the high resolution cameras would be ignored - parts would only be checked if a more detailed view of something is required.
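The multi-camera idea can be sketched roughly as follows; the detector, the 4x resolution ratio, and all names are illustrative assumptions, not a description of any real system:

```python
def interesting(cell, threshold=0.5):
    # Stand-in for a cheap detector run over the wide-angle feed.
    return cell > threshold

def detailed_view(highres, row, col, scale=4):
    # Fetch the matching high-resolution patch only on demand.
    r, c = row * scale, col * scale
    return [line[c:c + scale] for line in highres[r:r + scale]]

def scan(lowres, highres):
    # Look everywhere cheaply; pay for detail only where flagged.
    patches = {}
    for i, row in enumerate(lowres):
        for j, cell in enumerate(row):
            if interesting(cell):
                patches[(i, j)] = detailed_view(highres, i, j)
    return patches
```

This mirrors the foveation trade-off in the post: the wide low-resolution view is processed in full, and the high-resolution streams are mostly discarded.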

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 3:05 pm
by Sam111
I think the technology can be good and bad.
Good in the sense that it could help disabled people and work for us...etc
Bad in the sense that they could turn on us, get too powerful, too smart...etc :)
Then we would be worth less, i.e. a lesser species that natural selection could render unnecessary.

Also, while people think we are getting close to this level of AI, I believe you will find that you have to fully understand most of the human brain before you can hope to replicate it in AI.

So this is probably going to take a long while, if ever, to perfect.
Not to mention it may never be possible at all, going by Alan Turing's question of whether we could ever create a computer that is indistinguishable from a human. That is unsolved, which is why we use CAPTCHAs and other tests to try to make it hard for computers to pass as humans.

I'm not even sure it has been proved that we could make a machine smarter than a human...
Since I have never seen a computer or computers do something that a man couldn't do given enough time. They're just faster and don't make mistakes... humans are slower and make mistakes, but the way we make mistakes could be our greatest advantage over computers, since we can learn from our mistakes... computers can too, hence AI, but to what extent (mathematical only, or can the emotional be described completely mathematically/scientifically)?

I would be hard pressed for anybody to show me a computer smarter than a human could be. Faster, yes; smarter, I don't know (maybe they could tell us the answer to that :)

Anyway, AI is very cool stuff.
Of course, once it gets too smart for me, I could always ask it what the purpose of life is... This I believe is personal to each individual... I would love to see how an AI would answer that. Maybe I will go ask Alice lol

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 3:14 pm
by Sam111
Exactly my point: a lot of unknowns before we can perfect AI.

I foresee AI being a big controversy when it comes to laws in the future as we get closer and closer...etc

8)
idea + exploration + analysis + theory + testing + invention = good/bad stuff, but fun in the process

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 4:28 pm
by DavidCooper
berkus wrote:I gave you a direct link which you refused then to read. Blame me.
I didn't refuse to read it - I glanced through it and didn't recognise it as being that project, and when I came upon parts that I didn't want to be influenced by I saved it to read some time in the future, once I've had a proper go of my own at designing algorithms to do the same kinds of thing. Once you've got one method of doing something in your head, you're much less likely to find alternative ways which might be more efficient, so if I try to find my own ways first and then find that other people have found better ways, I don't lose anything because I can switch to doing things their way, whereas if I just go with their way or am influenced by them, I could miss something much better that I would have found otherwise.
berkus wrote:Lets take it to another deeper level then. Care to elaborate how it works in detail?
I don't care to, and I haven't designed this part of the process in detail either as my current starting point is text, but I will give you an overview of how I would do some of the initial stages. The first step's to extract a string of phonemes from the sound stream, and this string will often include sounds which aren't phonemes but have been misidentified as speech sounds. Some phonemes may also be misidentified due to unclear articulation or background noise combining with them. Others may be missing after being masked by background noise. Using stereo sound, or ideally four microphones for precise directionality, would make it easier to eliminate noise from other directions, but that isn't possible if you're taking the sound from a mono recording. Anyway, you'll end up with a string of phonemes something like this: enjwj iwlendapwy7astrykqvfonimzsam7yklajc2ys. The next stage of the process is to look up a phonetic dictionary to try to split the string of phonemes into words, and there may be multiple theories as to where some of the boundaries lie, so more than one set of the data may be passed on. If nothing fits exactly, the best fit must be looked for, and we might return to have another go at this if later stages of analysis suggest we've taken a wrong turn. There is no need to translate to standard spellings as we're following a different route from the one that would be taken if the machine was taking text as input (which is the route I've actually programmed for), but the process itself is now practically the same and merges a little further on. The meanings of words are checked to see if they fit with the context of what came before - this may pick up words which have been misheard if a similar sounding word of more likely meaning exists and would fit in the sentence. 
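The dictionary-splitting stage described above, where several boundary theories can survive, can be sketched as follows (the tiny phoneme dictionary is invented for illustration, using the start of the example string):

```python
def segmentations(phonemes, dictionary):
    # Return every way the phoneme string can be divided into known
    # words, so later stages can weigh competing boundary theories.
    if not phonemes:
        return [[]]
    results = []
    for end in range(1, len(phonemes) + 1):
        head = phonemes[:end]
        if head in dictionary:
            for rest in segmentations(phonemes[end:], dictionary):
                results.append([head] + rest)
    return results

dictionary = {"en", "jwj", "enj", "wj"}
# Both "en|jwj" and "enj|wj" survive as competing boundary theories:
theories = segmentations("enjwj", dictionary)
```

A real system would also need the best-fit fallback the post mentions when no exact split exists; this sketch only covers the exact-match case.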
Next we get into an area which checks the way the sentence is constructed, building theories as to how it might hang together - many words can behave as nouns or verbs, for example, and you have to branch every time you meet one (sometimes several ways at a time), multiplying the amount of data you have to handle to the point where it could quickly overload the memory of the machine if you go about it the wrong way. You have to go with the most likely routes instead and label certain branch points to be looked at later on if the more likely routes break down. I do not want to describe my way of doing this, nor go through all the steps that follow, other than to say that several theories as to what the sentence means may survive the process, so it's necessary to see which make the most sense and which best fit the context. It may not be possible to work out a clear winner, so the data may need to be stored in a linked form where all the surviving meanings are kept but with different probabilities tied to each, and if one meaning is ruled out later on it will change the probabilities of the surviving ones. The machine may decide to ask for clarification if it's holding a conversation (and if the information sounds as if it may be important), but that won't be possible if it's just listening to audio files. If the sentence makes so little sense that there must be an error in it, other meanings can be considered and the sounds checked against the original input to see if they may fit sufficiently well for a "mishearing" to be the most likely explanation. 
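The pruning idea in this paragraph - keeping only the most likely parse routes and labelling the rest as fallback branch points - resembles a beam search, sketched here with invented scores and categories (this is a guess at the general technique, not the poster's actual method):

```python
import heapq

def beam_parse(words, expansions, beam_width=2):
    # Each theory is (score, analysis-so-far). expansions(word) yields
    # (category, probability) alternatives, e.g. noun vs verb readings.
    theories = [(1.0, [])]
    for word in words:
        candidates = []
        for score, analysis in theories:
            for category, prob in expansions(word):
                candidates.append((score * prob, analysis + [(word, category)]))
        # Keep only the best few; the discarded ones correspond to the
        # "labelled branch points" revisited if the survivors break down.
        theories = heapq.nlargest(beam_width, candidates, key=lambda t: t[0])
    return theories
```

The beam width is what stops the branching from multiplying the data until it overloads memory, at the cost of sometimes needing to backtrack.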
The information, once the best theory as to what it is has been determined, is then rated for how likely it is to be true, and that will be based on who it came from (a code representing the source is used, so if the reliability rating of the source changes, the reliability of the data is automatically changed - this may lead to a cascade of other changes to probabilities throughout the database, depending on the significance of the source and the data) and how well it fits in with existing knowledge in the database. Most words and names are stored as 16-bit codes, but less common ones take prefixes, and large parts of sentences which are really just identifiers such as "the film we saw yesterday" can be converted into a single code which simply represents the film in question if it can be identified from that description (though the information that "we saw that film yesterday" may also be new data to the machine that could be stored) - overall this makes the stored data much less bulky than the original text or phonetic string. To avoid overloading the database, information has to be rated for its importance too, so if space becomes tight the less important stuff can be junked.
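The source-code scheme described above can be sketched as follows; the 16-bit packing and trust numbers are illustrative, not the poster's actual format:

```python
import struct

class KnowledgeStore:
    def __init__(self):
        self.codes = {}          # word -> 16-bit code
        self.source_trust = {}   # source code -> reliability 0..1
        self.facts = []          # (packed word codes, source code)

    def code_for(self, word):
        # Each new word gets the next free 16-bit code.
        if word not in self.codes:
            self.codes[word] = len(self.codes)
        return self.codes[word]

    def add_fact(self, words, source):
        # Pack the sentence as little-endian 16-bit codes, tagged with
        # a reference to its source rather than a fixed reliability.
        encoded = struct.pack(f"<{len(words)}H",
                              *(self.code_for(w) for w in words))
        self.facts.append((encoded, source))

    def reliability(self, index):
        # Looked up through the source, so re-rating the source
        # automatically re-rates every fact that came from it.
        _, source = self.facts[index]
        return self.source_trust.get(source, 0.5)
```

Because facts hold only a source reference, one change to `source_trust` cascades to every stored fact from that source, as the post describes.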

I've gone into more than enough detail there for anyone else who's doing this kind of work to know that I really am doing it, so that's all you're going to get from me for the time being.

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 5:16 pm
by DavidCooper
berkus wrote:Yup, I've got you're doing some sort of N-grams with ML over the "meanings" database, in your own and unique way of course.
Never heard of N-grams before, but having looked through the wikipedia page about them it appears to be based on sequences. I don't just store words in the order they come in, but transform them into SVO-nets (my own term) which take up the proper structure of the underlying thoughts.
I also understand that I'm unlikely to ever see anything working publicly, so good luck and hope you make something working for yourself in private.
I hope your life expectancy isn't that short, and I have no intention of keeping it private for any longer than necessary - I simply don't want to hand clues to rivals now that might help them get to the finish line before me. They've got big teams of people working on it and could eat up the ground to where I am in no time if they aren't already ahead. My only hope is that they're still bogged down in the linguistics.

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 5:33 pm
by Rusky
DavidCooper wrote:the proper structure of the underlying thoughts
as based on what

Re: Will A.I. Take Over The World!

Posted: Thu Jan 19, 2012 5:40 pm
by DavidCooper
Rusky wrote:
DavidCooper wrote:the proper structure of the underlying thoughts
as based on what
The simplest, neatest, most elegant arrangement of the components of the thought. If I tried to do the linguistics work using structures anything like Chomsky's, I'd still be bogged down in the incomprehensible mess like (I hope) everyone else. Trees don't fit.

Re: Will A.I. Take Over The World!

Posted: Fri Jan 20, 2012 3:06 pm
by DavidCooper
berkus wrote:Does subject-verb-object net properly represent thoughts of international audience also? Or is it limited to English grammar?
Whatever language you work with, you want to convert to the same structure so that the data can be compared more efficiently. SVO is the most rational order as the middle item is the link between the two end items, and the directionality (if there is one) is usually from S to O, although there is nothing to stop them going the opposite way ("I like that" = "that pleases me" [the latter arguably has the directionality right and the former wrong, but it's not important]). Trees can represent most thoughts well enough to do the job, but the clarity of what's going on is lost and you end up with an impenetrable thicket, and there are cases where some branches would need to come back together again in order to account for them properly (but I'll leave people to find those cases for themselves). I wrote up a little bit about linguistics many years ago here which gives a few glimpses of how SVO-nets work, but it deliberately only covers very simple cases. Most people will hopefully reject it all as unnecessary and stick with what they're doing, while anyone who decides to switch the way they do things will lose a lot of time doing so and not be confident that they're doing the right thing.
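Since "SVO-net" is the poster's own unpublished design, the following is only a guess at the general idea: normalising different surface phrasings of the same thought ("I like that" / "that pleases me") into one canonical subject-verb-object triple. The inversion table is invented:

```python
# Verbs whose direction runs O-to-S get flipped to a canonical form.
INVERSE = {"pleases": "likes"}

def to_svo(subject, verb, obj):
    # Store every thought in one canonical direction so paraphrases
    # compare equal regardless of how they were originally worded.
    if verb in INVERSE:
        subject, verb, obj = obj, INVERSE[verb], subject
    return (subject, verb, obj)

# "I like that" and "that pleases me" collapse to the same triple:
same = to_svo("I", "likes", "that") == to_svo("that", "pleases", "I")
```

Normalising at storage time is what lets data from any input language or phrasing be compared efficiently later, which is the motivation given in the post.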

The system I'm building will be complete in a few months (if my health allows it), though it may take a few months more to get it working properly. It will not pass the Turing Test for a number of reasons: it will not hide its intelligence (or its nature - it will tell you straight that it is software), it won't lie (initially), and it will be unable to answer questions related to spatial things where you might for example describe a shape and ask what letter it makes. It should however be able to understand text and use logic to test the compatibility of data with other data - if you feed in a logic problem like Einstein's Riddle, it should be able to return the correct answer straight away (if it can be taught to understand the relative positions of the houses). It should be able to solve other kinds of problems by hunting for the right sequence of available actions to carry out new compound tasks, and this will also allow it to write programs once it has been taught how to do so (this is not something that needs to be programmed into it - it is a system capable of learning by collecting knowledge and rules which it can then apply, and, on paper at least, programming should be within its capabilities). I've gone through everything very carefully to make sure it can be done, and I'm more than confident that it can. Its replies will not initially be like the ones you would get from a human - it will speak in SVO-net structures until I've done the reverse transformation side of things. A higher priority than that will be to get it to learn to write program code, because then it will be able to help write itself and save me a lot of work (I've put a lot of time into working out how it is going to do this kind of thing). It should already be fully intelligent even before it has any data in its main database to work on, so it should be a fast learner - all the linguistics will already be in place, though its vocabulary will initially be small.

All the components of the system are demonstrably possible, so it's hard to see why so many people think this is still decades away. I could understand it if they thought it might be another five years, and I can understand their skepticism about it happening this year or next as it doesn't seem at all likely if you haven't seen the whole system laid out on paper, but the way intelligence works is no longer a mysterious sludge of impenetrable stuff which we don't understand - it has already been broken down into fully manageable chunks which are being turned into program code right now. This stuff is on the way and it's going to knock things flying in all directions.

Re: Will A.I. Take Over The World!

Posted: Sat Jan 21, 2012 5:41 am
by turdus
@Combuster: "Did you know you just claimed there are no relations in a fractal?" No, I didn't. What makes you think that? I said every part holds the full information, so you can't bind the info point-to-point to the storage medium. For example, you can't say the Sun in a holographic picture is stored in the upper left corner; if you cut it in half, you'll have two Suns. And again, knowing how neurons transfer signals does not tell you anything about how those signals are interpreted. It's like saying that knowing how a CD reads and writes bits with a laser would tell you what's on it. No. You would need to know the 8-bit grouping, the sector concept, ISO 9660, and even the file format interpretation and decompression algorithm to display a stored picture. Hope it's clear now.
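The CD analogy describes stacked interpretation layers - raw signal, byte grouping, sectors, filesystem, file format - where the physical layer alone tells you nothing about the content. A toy version (all layer functions are illustrative stand-ins for the real CD stack):

```python
def group_bits(bits):
    # Physical layer: group each run of 8 bits into one byte.
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def decode_text(raw):
    # "File format" layer: interpret the bytes as UTF-8 text.
    return raw.decode("utf-8")

def read_disc(bits):
    # Knowing only group_bits tells you nothing about the content;
    # every layer up to decode_text is needed to recover it.
    return decode_text(group_bits(bits))

message = read_disc("0100100001101001")  # the bits for "Hi"
```

Swap out `decode_text` for an image decoder and the same bits mean something entirely different, which is turdus's point about neurons: the transfer mechanism underdetermines the interpretation.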

@Solar: "You really think an internet article trumps an advanced course and several years of undergraduate studies in Biology (with psychology as secondary subject)?" No, of course not. But as a matter of fact I had 7 years of studies in Biology (of which 4 years were an advanced course), and 2 years of AI courses at university, which were my favourite, by the way. And I've written quite a few AI algorithms, so I know what I'm talking about. And I'm convinced that today's methodology won't lead to a real, self-aware AI, although it could create one that looks like it. Proof: why is it that all of today's algorithms badly fail to pass the Turing test?
http://www.loebner.net/Prizef/loebner-prize.html

Re: Will A.I. Take Over The World!

Posted: Sat Jan 21, 2012 6:51 am
by Combuster
turdus wrote:@Combuster: "Did you know you just claimed there are no relations in a fractal?" No, I didn't. What makes you think that?
Because I have had logic and philosophy courses and you probably didn't, or at least you forgot sufficient amounts of it to not realize that your innate skill for making valid deductions is really poor.

Reductio ad absurdum at your service.