Re: Personal future thoughts
Posted: Sun Jan 08, 2012 10:19 pm
Interesting quote. I guess my desire to go into law enforcement was a come-and-go idea that I had considered, but I always find myself coming back to code.
JackScott wrote:
Any machine more intelligent than a human is going to realise that humans are weak and stupid, and have little control over their own desires. They're going to exploit that. We're probably all going to die (if all the other stupid things we've done like ruin the atmosphere don't kill us first).

Yes, because we're totally going to put AI in charge of things and set it up so that its reward function is increased by exploiting humans. </sarcasm> If anything, AI is safer, because we can simply tell it to behave altruistically, and it will, instead of having to trust a human to do so. If you can't design a good cost function for making the world a better place, it's your own fault: if you can't even decide whether the world has been made better by a change, how could you possibly make decisions that help the world?
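Sarcasm aside, the point about reward functions is concrete enough to sketch. Here's a toy illustration (every name, action and weight below is invented for the example, not anyone's actual proposal): an agent simply maximises whatever objective you hand it, so the "exploit humans" outcome has to be written into the objective before it can show up in the behaviour.

Code:
# Toy illustration: behaviour follows directly from the reward
# specification. All actions and weights here are made up.

ACTIONS = {
    # action: (benefit_to_humans, benefit_to_agent)
    "cooperate": (1.0, 0.5),
    "exploit":   (-2.0, 2.0),
    "idle":      (0.0, 0.0),
}

def reward_exploitative(human_gain, agent_gain):
    # The strawman: only the agent's own gain counts.
    return agent_gain

def reward_altruistic(human_gain, agent_gain):
    # The fix lives in the specification: weight human welfare first.
    return 2.0 * human_gain + 0.1 * agent_gain

def best_action(reward):
    return max(ACTIONS, key=lambda a: reward(*ACTIONS[a]))

print(best_action(reward_exploitative))  # -> "exploit"
print(best_action(reward_altruistic))    # -> "cooperate"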
JackScott wrote:
Have you ever read the book Nineteen Eighty-Four or seen the movie The Matrix? Because I have, and what you just described gives me the exact same shivering feeling that reading that book or watching that movie did.

I think the reason machines have taken over the world in The Matrix has less to do with that being the logical outcome of AI and more to do with movie writers (and the masses they get money catering to) being afraid of CS class. Nineteen Eighty-Four, on the other hand, is all about what humans do to each other, and has nothing to do with AI.
DavidCooper wrote:
Is it any different from having cars that drive themselves?

Yes, very much so. Driving a car involves a well-defined set of parameters that can easily be computed, and there is nothing that could go wrong with a "cold" decision. Actually, I think computer-controlled traffic is long overdue.
NickJohnson wrote:
AI is safer, because we can simply tell it to behave altruistically, and it will, instead of having to trust a human to do so.

That is just an underestimation, I'd say. There is a high possibility of AI-guided machines replacing humans. Besides, sophistication leads to laziness. I personally believe that when humans come to rely on brainless tin-beings, there's no reason left to live. Why have hands when you have nothing to do with them?
NickJohnson wrote:
If you can't design a good cost function for making the world a better place, it's your own fault.

It is, but you can't force the world to pay for your fault.
Rusky wrote:
Brains are just a highly-evolved form of "artificial" intelligence, and there's nothing really stopping us from creating something similar but with better parameters...

Two things say otherwise: >10 years of hands-on experience with how software is being written, in reality. "Real" software stinks. In spades. Plus just shy of 40 years of experience of the inability of mankind to get even simple things right. We haven't even got the parameters for human judges, policemen or teachers down pat...
Solar wrote:
>10 years of hands-on experience with how software is being written, in reality. "Real" software stinks. In spades.

The basic premise of those books is that brains can't be engineered software-style. So that's pretty much irrelevant.
Solar wrote:
Plus just shy of 40 years of experience of the inability of mankind to get even simple things right. We haven't even got the parameters for human judges, policemen or teachers down pat...

We can't (and won't) control the parameters for humans nearly as well as we could an AI as described in On Intelligence.
Rusky wrote:
The (extremely simplified) idea is that brains are hierarchical predictors or pattern-matchers.

To the limits of our understanding, which is incomplete.
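For anyone who hasn't met the term, here is a deliberately tiny sketch of what "hierarchical predictor" can mean (my own toy illustration, not the model from On Intelligence): a low level learns which symbol tends to follow which, and a higher level learns the same kind of pattern over longer-range, more abstract chunks.

Code:
# Two-level toy predictor: level 1 predicts the next character from the
# previous one; level 2 predicts the next word from the previous word.
# Higher levels see longer-range, more abstract patterns.
from collections import Counter, defaultdict

def train(pairs):
    table = defaultdict(Counter)
    for prev, nxt in pairs:
        table[prev][nxt] += 1
    return table

def predict(table, prev):
    return table[prev].most_common(1)[0][0] if table[prev] else None

text = "the cat sat on the mat the cat ran"
words = text.split()

level1 = train(zip(text, text[1:]))    # character -> next character
level2 = train(zip(words, words[1:]))  # word -> next word (more abstract)

print(predict(level1, "t"))    # space: the most frequent follower of 't' here
print(predict(level2, "the"))  # 'cat': the most frequent word after 'the'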
Rusky wrote:
A predictor whose inputs are the available data on a crime...

Stop right there.
Solar wrote:
To the limits of our understanding, which is incomplete.

This is true, but we understand enough that such a system would be a viable option for some things, if not everything.
Solar wrote:
The "available data" on a crime isn't rows of data. It will come from humans. Filtered through their eyes and ears, their imperfect recollection, their own personal agendas, their abilities to express themselves.

Also true. Because of this, you advocate using other humans to analyze this data. However, I advocate using something with the good parts of a human (brain-style pattern recognition and response), without the baggage of natural selection towards a different goal, loads of other motivations, and loads of additional points of failure.
Solar wrote:
Either you try to build an AI that takes all this into account - and I will adamantly oppose the concept, on the basis that a system that judges nuances like this by a fixed set of rules can be fooled.

That's where you misunderstand. You don't "build an AI that takes all this into account," nor do you "rely on trained individuals preprocessing that 'data.'" There is no "fixed set of rules." The AI, which I should stress is less "artificial" and more "synthetic," works the same way as any "natural" intelligence, but with different parameters during its "evolution."
berkus wrote:
Heh, we'd make a nice little Mars colony, as I have always dreamed about. David, have you read "Steel Beach" by John Varley?

No, but I've just read the Wikipedia entry on it. If there's any serious idea in there about something that would make A.I. dangerous, that needs to be extracted from the story and looked at properly to see if it's a genuine threat or just something that sells stories.
Solar wrote:
Besides, with all that automation going on today, we still haven't figured out how to make it so that the machines doing the work actually feed the people, instead of just making them unemployed and/or impoverished. We have all the resources we need, all the food, all the space, but we can't figure out how to distribute it fairly and sustainably.

We've worked out how to do it, but the greed of many rich elites prevents it being done. By not having a world government we are failing to get food to the people who need it, while people who have plenty are literally killing themselves by eating too much. Our failure to set up a world health service results in population growth, because poor people compensate for all the deaths by having larger families, and the lack of their fair share of the world's food resources forces them to thrash their environment far beyond its ability to sustain itself just to scrape together enough food to stay alive.

In wealthy countries we're obsessed with job creation, to the point that half the workforce either do unnecessary work or support those who do. What governments should be doing is getting rid of as many jobs as possible, which could be done while increasing the quality of life of those laid off - the money would still be in the system to continue to pay them enough to maintain their quality of life at the very least. There are solutions, but people seem to be too stupid to agree on trying to apply them. Things are so messy that we're going to need A.I. to sort it all out.
Chandra wrote:
I personally believe that when humans come to rely on brainless tin-beings, there's no reason left to live. Why have hands when you have nothing to do with them?

Play. Computer games are going to get more real-world, particularly as augmented reality takes off. Creating these games will be a popular pastime, and the most imaginative people will probably still be able to come up with ideas which A.I. wouldn't be so likely to, given that A.I. isn't likely to understand so much about what makes things interesting or fun for people. Personally, I won't miss work at all - I'll do a lot of sailing, hillwalking, birdwatching, etc. I might cycle round the world a few times too, if I can recover my health sufficiently (sitting at a computer is killing me, and I've got another medical issue which is quite limiting, but a possible cure for that is on the cards).
Solar wrote:
The "available data" on a crime isn't rows of data. It will come from humans. Filtered through their eyes and ears, their imperfect recollection, their own personal agendas, their abilities to express themselves.

People act and lie, but they all have a history. A.I. will know a lot of their history and will have a fair idea of how reliable they are. It might be sufficient just to collect people's opinions of each other, as that would allow them to be given reliability ratings based on their considerable knowledge of each other. People would soon be sorted into a number of different groups, one of which would contain the extremely reliable while the group at the opposite extreme would contain the utterly unreliable - the system would only need to know a few things about a small number of these people to work out which group is at which extreme. Add to that the knowledge collected by A.I. through direct observation of people's behaviour, and it's easy to see how such a system would know how to weight the evidence from different sources.
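To make the idea concrete, here's a minimal sketch (my own toy model with made-up data, not anything proposed in the thread) of rating reliability from pairwise opinions: each person's score is the average opinion others hold of them, weighted by the raters' own scores, iterated until it settles - in the spirit of PageRank/EigenTrust-style reputation systems.

Code:
# Toy reputation model (illustrative only): opinions[i][j] in [0, 1] is
# how reliable person i considers person j. A person's score is the
# average opinion of them, weighted by the raters' own scores, iterated
# so that distrusted raters' opinions count for less.

def reliability(opinions, rounds=50):
    people = list(opinions)
    scores = {p: 1.0 for p in people}  # start everyone equal
    for _ in range(rounds):
        new = {}
        for p in people:
            raters = [q for q in people if q != p and p in opinions[q]]
            total = sum(scores[q] for q in raters)
            new[p] = (sum(scores[q] * opinions[q][p] for q in raters) / total
                      if total else scores[p])
        scores = new
    return scores

# Hypothetical data: C is widely distrusted, so C's own ratings of A and B
# end up carrying little weight.
opinions = {
    "A": {"B": 0.9, "C": 0.2},
    "B": {"A": 0.8, "C": 0.1},
    "C": {"A": 0.1, "B": 0.1},
}
print(reliability(opinions))  # A and B converge high, C low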
Solar wrote:
Either you try to build an AI that takes all this into account - and I will adamantly oppose the concept, on the basis that a system that judges nuances like this by a fixed set of rules can be fooled.

It's dead easy to fool people - at the moment many cases are decided by how well a performance went down with the jury. Cases should be judged by logical analysis of the actual facts, and the history of the accused should ensure that a person with a good record is given the benefit of the doubt even when the performance of the accuser would convince a human jury; whereas if the accused has a history of abusing other people, that person will already have been given the benefit of the doubt on previous occasions, and it would be safer to make more assumptions of guilt, all based on probabilities. We lock people up on the basis of probabilities at the moment, but huge mistakes are made. Using A.I. would make the calculations of probabilities right every time, though it still wouldn't guarantee the right verdict. Even so, it would get it right more often, and it would err in the direction of letting people off if their history is good while erring the opposite way if people have a bad track record, particularly in cases where an individual is dangerous.
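The "benefit of the doubt based on history" idea is essentially Bayesian updating: the accused's track record sets the prior probability of guilt, and each piece of evidence moves it by Bayes' rule. A minimal sketch, with numbers I've made up purely for illustration:

Code:
# Bayesian reading of "weight the verdict by track record" (numbers are
# invented): the accused's history sets the prior P(guilty), and each
# piece of evidence updates it via Bayes' rule.

def update(prior, p_evidence_if_guilty, p_evidence_if_innocent):
    # P(guilty | evidence) by Bayes' rule
    num = p_evidence_if_guilty * prior
    return num / (num + p_evidence_if_innocent * (1.0 - prior))

evidence = [(0.7, 0.3), (0.6, 0.4)]    # the same two observations for both

clean_record, bad_record = 0.05, 0.50  # priors from history (assumed values)
for likelihoods in evidence:
    clean_record = update(clean_record, *likelihoods)
    bad_record = update(bad_record, *likelihoods)

print(round(clean_record, 3))  # ~0.155: good history keeps the estimate low
print(round(bad_record, 3))    # ~0.778: same evidence, bad record, much higher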