NickJohnson wrote: The issue here is that a machine intelligence like this would only have raw web pages to feed it. It could work for searches, assuming the overhead on processing is not too much, but not for "simply telling you the answer." How would it know which sources are right? If web pages become obsolete, where would its information come from? How could this ever possibly replace something like a forum, or an opinionated blog?
Firstly, a lot of Web pages exist to provide information which isn't going to change in the future. Once a search engine has hoovered up all the knowledge within them and can present it in ways targeted to individual people asking questions, so few people will ever read the original source that it is likely to disappear (if it was written by some individual rather than being owned by a library or institution of some kind). It will certainly disappear from people's radar, as they will never click through to it any more or even see it in lists of search results. Certain sites will be maintained regardless of how many people actually reach them, but hobby sites which provide enormous amounts of information (mainly overlapping with other sites, though presented in different ways, some with added teaching via interactive programs) will tend to disappear from the Web when their owners die or can't be bothered to go on paying for something that's only ever looked at by search engines. Even interactive teaching programs will be outmoded by ones generated by A.I. once it gets on top of visual processing and can see as well as we do.
How would it know the sources are right? Well, how do you know what's right? You don't always know, but you usually have a good idea, because you can compare new information with what you already know and see how well it sits with it. If it clashes badly with what you know (or think you know), you will rate it as improbable, whereas if it fits well you will rate it as plausible. If many independent sources agree with it, the probability of it being true may increase, depending on whether those sources are genuinely independent and are using radically different ways of coming to the same conclusion.

An intelligent machine would do exactly the same thing. It might get the wrong idea about how things are for a time, just as we can, but eventually it will learn something which leads to a serious contradiction in its most-favoured model of reality, at which point that whole model might fall apart and be replaced with another one. I don't think a machine would have to study a great deal before its most-probably-correct model of reality began to tie in with what scientists and academics generally hold to be true, but it would always apply probabilities to everything and never state that what it's saying is absolutely true. It would also list rival possibilities for people who want to follow them, so ideas about the Earth being flat (which might still be mathematically possible) would get a mention in the small print.

Your own machines would have a very good idea of what you know, what you're interested in and the kind of answers you're looking for (I expect most people will be looking for mainstream stuff most of the time), so the answer you get would be tailored to suit you as an individual.
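To make the weighing-of-sources idea concrete, here is a minimal sketch of it as a naive Bayesian update, under the simplifying assumption that sources are genuinely independent and each has a known reliability (the reliability figures and the prior are invented for illustration, not anything from a real system):

```python
def update_belief(prior: float, source_reliabilities: list[float]) -> float:
    """Naive Bayesian update of the probability that a claim is true,
    given independent sources that each assert the claim.

    A source with reliability r is assumed to assert a true claim with
    probability r and a false claim with probability 1 - r.
    """
    p_true, p_false = prior, 1.0 - prior
    for r in source_reliabilities:
        p_true *= r            # likelihood of the assertion if the claim is true
        p_false *= (1.0 - r)   # likelihood of the assertion if the claim is false
    return p_true / (p_true + p_false)

# A claim that clashes with the current model (low prior) needs several
# independent agreeing sources before it becomes plausible.
print(update_belief(0.1, [0.8]))            # one source: still doubtful
print(update_belief(0.1, [0.8, 0.8, 0.8]))  # three agreeing sources: now likely
```

Note how the independence assumption does all the heavy lifting here, which is exactly why the machine (like us) would have to check whether agreeing sources are genuinely independent rather than copies of one another.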
The main problem is your apparent disconnect from reality: you present such a technique (assuming you even have it) as a panacea, even though it would be ineffective even at the things you've mentioned, like driver writing and virus detection. Not to mention that computers simply don't have (and, if trends hold, will not for decades have) the computational resources of even a single human, while billions of cheap human minds are at the world's disposal.
If you can follow a recipe, you can learn to program, but creating the recipe in the first place requires a little more work. The machine might be given the task of writing a program that communicates with a piece of hardware using particular registers and command bytes. It would also be told how much data it needs to send or collect at a time and where to get it from or put it, plus any other details required for the process; and of course it knows the instruction set of the processor, how all the instructions work, and all about the registers available. That's the same starting point as for any programmer. It also needs to be told whether the program is to work by polling status ports, using interrupts for each byte or pair of bytes transferred, or using DMA (or any other system that may be used which I haven't read up on yet).

Now it will need to plan how it's going to do things, working out what needs to be done first, second, third, etc., building up a map of how the program will work. I can't see that that involves any special ability so far: if A depends on B, B must come before A, so it can reason its way to getting the order of the process sorted out. If part of the action needs to be repeated many times, it will have to recognise the advantage of using a loop, and again it will have knowledge in its database telling it that this is the best way to do things. That does require it to be able to recognise repeated patterns, but it could learn that from its knowledge database rather than having it directly programmed in. Knowledge about successful methods for doing things still counts as program code of a kind, though, and could in time become directly programmed into the A.I. (possibly by the A.I. itself, just as we acquire skills and automate them to the point where we can carry them out as if they were instinctive responses).
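The "if A depends on B, B must come before A" reasoning is just a topological sort over the dependency map. A minimal sketch (the step names are invented driver-writing steps, purely for illustration):

```python
def plan_order(depends_on: dict[str, set[str]]) -> list[str]:
    """Return the steps in an order where every step comes after
    everything it depends on (depth-first topological sort)."""
    order: list[str] = []
    done: set[str] = set()
    in_progress: set[str] = set()

    def visit(step: str) -> None:
        if step in done:
            return
        if step in in_progress:
            raise ValueError(f"circular dependency involving {step!r}")
        in_progress.add(step)
        for dep in sorted(depends_on.get(step, set())):
            visit(dep)  # everything this step depends on is placed first
        in_progress.discard(step)
        done.add(step)
        order.append(step)

    for step in sorted(depends_on):
        visit(step)
    return order

# Hypothetical plan: resetting the device must precede configuring its
# registers, which must precede transferring any data.
steps = {
    "transfer_data": {"configure_registers"},
    "configure_registers": {"reset_device"},
    "reset_device": set(),
}
print(plan_order(steps))  # ['reset_device', 'configure_registers', 'transfer_data']
```

The circular-dependency check matters: a plan where A needs B and B needs A has no valid ordering, and a planner (human or machine) has to notice that rather than loop forever.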
It should be able to recognise the start point and end point of the loop, and then it can consult its database to find out how to make a loop terminate by putting a conditional jump into it. Having created the plan for how the program will work, it then needs to turn that plan into actual machine instructions, and that's just a translation process. I can't see any point at which any special kind of thinking has to be injected to make it happen, though some injected code could certainly speed it up, turning wisdom in its knowledge database into actual programmed systems which just do the right thing straight away.
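The plan-to-instructions step really is mostly mechanical translation. A toy sketch: each abstract plan step maps to a fixed instruction template, and a polling step becomes a label plus a conditional jump back to it. The mnemonics, port numbers and register names below are invented pseudo-assembly, not any real instruction set:

```python
def translate(plan: list[tuple]) -> list[str]:
    """Translate an abstract plan into lines of pseudo-assembly.
    A ('poll', port) step becomes a busy-wait loop: read the status
    port, test the ready bit, and jump back until the device is ready."""
    asm: list[str] = []
    label = 0
    for step in plan:
        if step[0] == "poll":
            asm += [
                f"wait{label}:",
                f"    in   a, {step[1]}   ; read status port",
                f"    and  a, 0x01       ; isolate the ready bit",
                f"    jz   wait{label}    ; not ready yet: loop again",
            ]
            label += 1
        elif step[0] == "read":
            asm.append(f"    in   a, {step[1]}   ; read one data byte")
        elif step[0] == "store":
            asm.append(f"    mov  [{step[1]}], a ; store it in the buffer")
    return asm

program = translate([("poll", "0x10"), ("read", "0x11"), ("store", "hl")])
print("\n".join(program))
```

A real code generator has far more cases and has to allocate registers, but none of those cases needs a different kind of thinking from the lookup-and-substitute shown here.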
As for how much in the way of resources such an A.I. system will need, that simply isn't known at the moment. Some ways of storing knowledge are enormously more compact than others, but my own calculations, based on the information storage system I've designed for the kind of data required for its essential knowledge database, suggest that it might be possible to hold a human-level intelligence in RAM on a current machine, though it wouldn't know as much as a normal adult and I haven't factored in any visual memory at all. With the speed of current processors, it would leave us for dead in thinking speed.

If you then add a large hard drive into the equation and store data on it in a form with absolutely no duplication, I have no doubt that it could win any quiz show against contestants who have specialised in a subject all their lives, answering questions on specialised subjects it hasn't been warned of in advance. A TB of data is a million books, and a million books with zero duplication between them is a hell of a lot of knowledge. Going back to what you can store in RAM, a GB is a thousand books, and I doubt that many people could write a thousand books with zero duplication within or between them. Visual memories are a different story, but even there our visual memories are highly compressed (reduced in terms of stored components) and probably wildly inaccurate in all the unimportant aspects.
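The book arithmetic above works out if you assume roughly 1 MB of plain text per book (around 500 pages at ~2,000 characters each; that per-book figure is my assumption, made explicit here):

```python
# Assumed: ~1 MB of plain text per book (uncompressed, no images).
BYTES_PER_BOOK = 1_000_000
GB = 1_000_000_000
TB = 1_000 * GB

print(GB // BYTES_PER_BOOK)  # books per gigabyte: 1,000
print(TB // BYTES_PER_BOOK)  # books per terabyte: 1,000,000
```

Plain text also compresses by a factor of three or so with ordinary compression, so a deduplicated store would hold correspondingly more.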