Page 2 of 5

Re: Personal future thoughts

Posted: Sun Jan 08, 2012 10:19 pm
by VolTeK
Interesting quote. I guess my wish to go into law enforcement was a come-and-go idea I had considered, but I always find myself coming back to code.

Re: Personal future thoughts

Posted: Mon Jan 09, 2012 1:46 pm
by DavidCooper
Decisions about a career are only going to get harder, so you have to think carefully about what's likely to last. How long is it going to be before law enforcement is done primarily by robotic devices which can work out who's doing wrong, how wrong it is, and whether it would be right to shoot the wrong-doer on the spot? These devices will need to be programmed, but how long is it going to be before programming is done almost entirely by A.I.? There's no certainty any more, though it's a fair bet that there won't be many jobs left at all ten years from now.

Jobs involving artistic ability may last a lot longer as it'll be harder for machines to know what people like, though even that isn't at all certain as it may turn out that machines will be able to write perfect novels simply by responding to feedback from ordinary, talentless readers. Accountants will be gone; machines will judge court cases (without needing courts or lawyers); politicians will be shown to be so stupid that there won't be any point in them any more; most teachers will disappear because machines will teach children at the fastest possible speed while working with them one-to-one using the best methods for each child (and there will be no jobs to train them for anyway); armies of bureaucrats will be laid off as there'll be no paperwork to do; etc. Half the workforce in Europe is tied up doing pointless paperwork as it is, so what are they going to do when all of it can be done by a handful of machines within minutes?

Some of the jobs that will last longer will be the ones that would require highly-capable robots in addition to human-level intelligence, so nurses and surgeons will probably not be replaced so quickly. Car mechanics likewise, though robots may actually find small, fiddly tasks easier than we do. Anyway, it's up to you to do your own futurology as you may disagree with a lot of mine, but you certainly ought to think down these paths. Is there any point in slogging your guts out for years learning skills only to be sidelined by a little piece of silicon on the day you graduate? That is what you need to guard against, so try to find something that will give you a lasting advantage.

Re: Personal future thoughts

Posted: Tue Jan 10, 2012 1:04 am
by Solar
I sure hope your visions of automation are some kind of bad indigestion or something. I won't go into the discussion on whether AI can ever program (which we had elsewhere), but the day someone actually suggests putting machines in court, law enforcement, or teaching, I'll probably rent a one-way trip to Mars or something.

*shudder*...

Re: Personal future thoughts

Posted: Tue Jan 10, 2012 4:22 pm
by DavidCooper
Is it any different from having cars that drive themselves? It won't be long before people aren't allowed to drive any more because it'll be so much safer to let machines do the job. You can't have missed the developments in that area - anyone wanting a career driving lorries or taxis should forget it.

Another thing you must have noticed is the technology now being used in surveillance - bird and insect-sized robots which can carry cameras. How long do you suppose it will be before these things are able to get into everyone's houses to spy on people? We can't stop this happening because it is absolutely necessary to spy on some people (potential terrorists, people who abuse their children, etc.), but we can make sure that the images and conversations they record are analysed by intelligent machines rather than by people so that the information is only ever put before human eyes if the people being spied on have actually done something wrong. This is the way forwards to a better world, and it isn't one you'd need to escape from.

If you've been wrongly accused of a crime, would you really prefer to be judged by a jury than a machine which can out-think people and not make the huge mistakes in analysing probabilities that people do? Wouldn't it be better to have absolutely consistent sentencing rather than having some judges being massively more lenient than others? People will want the machines to take over as soon as they show themselves to be more intelligent than we are, and exactly the same will apply to politics - a country which insists on continuing to put monkeys in charge will be worse run and its people worse off.

In teaching, children are very lucky if they get more than a couple of minutes of one-to-one interaction with a teacher a day, but A.I. will soon give them instant access all day long to all of the best teachers in the world in a single package, so who's going to turn that down in favour of the current childhood-wasting system? It'll free children up for much more useful social interaction with real people out in the real world. All that interaction will be supervised and safe because everything that's said to a child will be monitored by A.I. Anyone who tries to harm a child will be prevented from doing so, so there will be no more room for bullying or child abuse. There's a new world coming and it's going to be better than anything we've ever known before. It isn't even going to be a downside that we'll no longer have any work to go to, because we'll all be paid enough to be able to enjoy a life of permanent holiday: with machines doing all the work and not wanting to be paid for any of it, all the things we want to buy will be virtually free, apart from environmental taxes (which will be necessary to avoid overexploitation of resources).

So, why would anyone want to run away from all that? I'd rather help build this new world in order to make it become a reality as soon as possible, though I don't think I'll have much chance of making a significant contribution to it: there are three big companies out there which could make a big announcement any day, and I live in continual fear of having all my work relegated to the bin by them.

Re: Personal future thoughts

Posted: Tue Jan 10, 2012 4:31 pm
by JackScott
Have you ever read the book Nineteen Eighty-Four or seen the movie The Matrix? Because I have, and what you just described gives me the exact same shivering feeling that reading that book or watching that movie did. I'm with Solar: I'm heading to Mars as soon as the machines are in charge.

Any machine more intelligent than a human is going to realise that humans are weak and stupid, and have little control over their own desires. They're going to exploit that. We're probably all going to die (if all the other stupid things we've done like ruin the atmosphere don't kill us first).

Re: Personal future thoughts

Posted: Tue Jan 10, 2012 7:01 pm
by NickJohnson
JackScott wrote:Any machine more intelligent than a human is going to realise that humans are weak and stupid, and have little control over their own desires. They're going to exploit that. We're probably all going to die (if all the other stupid things we've done like ruin the atmosphere don't kill us first).
Yes, because we are totally going to put AI in charge of things and set it up so that its reward function is increased by exploiting humans. </sarcasm> If anything, AI is safer, because we can simply tell it to behave altruistically, and it will, instead of having to trust a human to do so. If you can't design a good cost function for making the world a better place, it's your own fault: if you can't even decide whether the world has been made better by a change, how could you possibly make decisions that help the world?
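NickJohnson's point can be sketched as a toy example (all names and numbers here are hypothetical, invented purely for illustration): an agent that picks whichever action maximizes a hand-written "welfare" score is exactly as altruistic as that score is a good measure of what we actually value - nothing more.

```python
# Toy illustration (hypothetical): an agent is exactly as "altruistic"
# as the cost/reward function it is told to maximize.

def welfare(outcome):
    # Hand-designed reward: value fed people, penalize pollution.
    # Whatever this function fails to capture, the agent will ignore.
    return 2.0 * outcome["people_fed"] - 1.0 * outcome["pollution"]

def choose_action(actions):
    # Pick the action whose predicted outcome scores highest.
    return max(actions, key=lambda a: welfare(a["outcome"]))

actions = [
    {"name": "build_farm",    "outcome": {"people_fed": 100, "pollution": 30}},
    {"name": "build_factory", "outcome": {"people_fed": 40,  "pollution": 5}},
    {"name": "do_nothing",    "outcome": {"people_fed": 0,   "pollution": 0}},
]

best = choose_action(actions)
print(best["name"])  # build_farm: 2*100 - 30 = 170 beats 75 and 0
```

The hard part, as the post says, is not the maximization but writing a `welfare` function that actually reflects "making the world a better place".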

Not that I think DavidCooper is sane in his predictions or anything; AI of that capability is going to be outside our grasp for quite a while.
JackScott wrote:Have you ever read the book Nineteen Eighty-Four or seen the movie The Matrix? Because I have, and what you just described gives me the exact same shivering feeling that reading that book or watching that movie did.
I think the reason machines have taken over the world in The Matrix has less to do with that being the logical outcome of AI and more to do with movie writers (and the masses they get money catering to) being afraid of CS class. 1984, on the other hand, is all about what humans do to each other, and has nothing to do with AI.

Re: Personal future thoughts

Posted: Tue Jan 10, 2012 7:18 pm
by JackScott
With 1984, I was just saying that it gave me the same shiver of that "world I don't want to be in" feeling.

Re: Personal future thoughts

Posted: Wed Jan 11, 2012 4:01 am
by Solar
DavidCooper wrote:Is it any different from having cars that drive themselves?
Yes, very much so. Driving a car is about a very defined set of parameters that can easily be computed, and there is nothing that could go wrong with a "cold" decision. Actually, I think having traffic computer-controlled is long overdue.

Teachers, law enforcement officers, and judges don't handle movement vectors, they handle people. There's a world of difference there, and I want someone capable of making a "judgement call" to do that kind of job.

Besides, with all that automation going on today, we still haven't figured out how to make it so that the machines doing the work actually feed the people, instead of just making them unemployed and / or impoverished. We have all the resources we need, all the food, all the space, but we can't figure out how to distribute it fairly and sustainably.

I seriously doubt we could ever tune an AI to the point where it can be relied on to teach our children, or to judge our arguments.

Re: Personal future thoughts

Posted: Wed Jan 11, 2012 6:39 am
by Chandra
NickJohnson wrote:AI is safer, because we can simply tell it to behave altruistically, and it will, instead of having to trust a human to do so.
That is an underestimation, I'd say. There is a high possibility of AI-guided machines replacing humans. Besides, sophistication leads to laziness. I personally believe that when humans come to rely on brainless tin-beings, there's nothing left to live for. Why have hands when you have nothing to do with them?
NickJohnson wrote:If you can't design a good cost function for making the world a better place, it's your own fault:
It is, but you can't force the world to pay for your fault.

Re: Personal future thoughts

Posted: Wed Jan 11, 2012 8:13 am
by Rusky
Have any of you ever read On Intelligence by Jeff Hawkins, or maybe I Am a Strange Loop by Douglas Hofstadter? Until you actually think about the mechanism for an AI that would do things like judge law cases, I don't think you can really invoke "unspeakable dystopia" novels with any credibility.

Done right (and by right I don't mean configured correctly), there would really be no fundamental difference between a human making those decisions and an AI. There's no quantifiable way to define "coldness." Brains are just a highly-evolved form of "artificial" intelligence, and there's nothing really stopping us from creating something similar but with better parameters for performing tasks like law enforcement or teaching (with better meaning the AI would make "good" decisions more often than a typical human in that position).

I would be infinitely more worried about the people in control of that design and the execution of what it comes up with than I would ever be about the AI itself making "cold" decisions while teaching my children.

I do agree with Solar in that "we still haven't figured out how to make it so that the machines doing the work actually feed the people, instead of just making them unemployed and / or impoverished." But that's a different problem that I don't think AI will necessarily push one way or the other. More automation has the potential to both impoverish and enable billions; it all depends on too many other factors to blame automation alone and retreat to luddism.

Re: Personal future thoughts

Posted: Wed Jan 11, 2012 8:21 am
by Solar
Rusky wrote:Brains are just a highly-evolved form of "artificial" intelligence, and there's nothing really stopping us from creating something similar but with better parameters...
Two things say otherwise:

>10 years of hands-on experience with how software is being written, in reality. "Real" software stinks. In spades.

Plus just shy of 40 years of experience of the inability of mankind to get even simple things right. We haven't even got the parameters for human judges, policemen or teachers down pat...

Re: Personal future thoughts

Posted: Wed Jan 11, 2012 8:50 am
by Rusky
Solar wrote:>10 years of hands-on experience with how software is being written, in reality. "Real" software stinks. In spades.
The basic premise of those books is that brains can't be engineered software-style. So that's pretty much irrelevant.
Solar wrote:Plus just shy of 40 years of experience of the inability of mankind to get even simple things right. We haven't even got the parameters for human judges, policemen or teachers down pat...
We can't (and won't) control the parameters for humans nearly as well as we could an AI as described in On Intelligence.

The (extremely simplified) idea is that brains are hierarchical predictors or pattern-matchers. A predictor whose inputs are the available data on a crime, and that has been evolved based on parameters like finding the real criminal, rather than surviving and mating, wouldn't have nearly the problems of a human judge or policeman.
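The "hierarchical predictor" idea can be sketched very loosely in code. This is only an illustration of the general shape (each level matches patterns in its input and passes a summary up to the next), not of anything specific from On Intelligence; all thresholds and labels are invented:

```python
# Loose, hypothetical sketch of a two-level hierarchical pattern-matcher:
# the lower level detects coarse features, the upper level predicts from them.

def low_level(signal):
    # Level 1: turn raw numbers into coarse symbolic features.
    return ["high" if x > 0.5 else "low" for x in signal]

def high_level(features):
    # Level 2: predict a label from the pattern of level-1 features.
    highs = features.count("high")
    return "spike" if highs > len(features) / 2 else "quiet"

def predict(signal):
    # The hierarchy: raw input -> features -> prediction.
    return high_level(low_level(signal))

print(predict([0.9, 0.8, 0.7, 0.2]))  # spike
print(predict([0.1, 0.2, 0.3, 0.9]))  # quiet
```

The point of the analogy is that what gets "evolved" is the parameters at each level, which is where Rusky's crime-data predictor would differ from a mating-and-survival-shaped human brain.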

Re: Personal future thoughts

Posted: Wed Jan 11, 2012 9:38 am
by Solar
Rusky wrote:The (extremely simplified) idea is that brains are hierarchical predictors or pattern-matchers.
To the limits of our understanding, which is incomplete.
Rusky wrote:A predictor whose inputs are the available data on a crime...
Stop right there.

The "available data" on a crime isn't rows of data. It will come from humans. Filtered through their eyes and ears, their imperfect recollection, their own personal agendas, their abilities to express themselves.

Either you try to build an AI that takes all this into account - and I will adamantly oppose the concept, on the basis that a system that judges nuances like this by a fixed set of rules can be fooled.

Or you will have to rely on trained individuals preprocessing that "data" and feeding it to the "AI judge" in a format it can understand - and I will adamantly oppose the concept too, on the basis that this is again open for tampering.

(Note that our current system, using humans, is also open to tampering; I don't argue that. But if an AI-based system is open to tampering just as well, plus the opportunity for systematic tampering, you don't win anything.)

Re: Personal future thoughts

Posted: Wed Jan 11, 2012 11:55 am
by Rusky
Solar wrote:To the limits of our understanding, which is incomplete.
This is true, but we understand enough that such a system would be a viable option for some things, if not everything.
Solar wrote:The "available data" on a crime isn't rows of data. It will come from humans. Filtered through their eyes and ears, their imperfect recollection, their own personal agendas, their abilities to express themselves.
Also true. Because of this, you advocate using other humans to analyze this data. However, I advocate using something with the good parts of a human (brain-style pattern recognition and response), without the baggage of natural selection towards a different goal, loads of other motivations, and loads of additional points of failure.
Solar wrote:Either you try to build an AI that takes all this into account - and I will adamantly oppose the concept, on the basis that a system that judges nuances like this by a fixed set of rules can be fooled.
There's where you misunderstand. You don't "build an AI that takes all this into account," nor do you "rely on trained individuals preprocessing that 'data.'" There is no "fixed set of rules." The AI, which I should stress is less "artificial" and more "synthetic," works the same way as any "natural" intelligence, but with different parameters during its "evolution."

This AI's minimum level of competence, with capable enough hardware, is that of a human - nobody would let a more-poorly-performing AI make decisions. That's where the advantage of AI lies. Not in "computers don't make mistakes" because that's obviously false, but in "this brain emulation never gets tired and has no motivations other than solving crimes correctly."

Re: Personal future thoughts

Posted: Wed Jan 11, 2012 12:10 pm
by DavidCooper
berkus wrote:Heh, we'd make a nice little Mars colony, as I have always dreamed about.

David, have you read "Steel Beach" by John Varley?
No, but I've just read the Wikipedia entry on it. If there's any serious idea in there about something that would make A.I. dangerous, that needs to be extracted from the story and looked at properly to see if it's a genuine threat or just something that sells stories.

____________________________________________________________________________
Solar wrote:Besides, with all that automation going on today, we still haven't figured out how to make it so that the machines doing the work actually feed the people, instead of just making them unemployed and / or impoverished. We have all the ressources we need, all the food, all the space, but we can't figure out how to distribute it fairly and sustainably.
We've worked out how to do it, but the greed of many rich elites prevents it being done. By not having a world government we are failing to get food to the people who need it, while people who have plenty are literally killing themselves by eating too much. Our failure to set up a world health service results in population growth, because poor people compensate for all the deaths by having larger families, and the lack of their fair share of the world's food resources forces them to thrash their environment far beyond its ability to sustain itself just to scrape together enough food to stay alive. In wealthy countries we're obsessed with job creation to the point that half the workforce either do unnecessary work or support those who do. What governments should be doing is getting rid of as many jobs as possible, which could be done while increasing the quality of life for those laid off - the money would still be in the system to continue to pay them enough to maintain their quality of life at the very least. There are solutions, but people seem to be too stupid to agree on trying to apply them. Things are so messy that we're going to need A.I. to sort it all out.

____________________________________________________________________________
Chandra wrote:I personally believe that when humans tend to rely on brainless tin-beings, there's no reason to live for. Why have hands when you have nothing to do with it?
Play. Computer games are going to get more real-world, particularly as augmented reality takes off. Creating these games will be a popular pastime, and the most imaginative people will probably still be able to come up with ideas which A.I. wouldn't be so likely to, given that A.I. isn't likely to understand so much about what it is that makes things interesting or fun for people. Personally, I won't miss work at all - I'll do a lot of sailing, hillwalking, birdwatching, etc. Might cycle round the world a few times too, if I can recover my health sufficiently (sitting at a computer is killing me, and I've got another medical issue which is quite limiting, but a possible cure for that is on the cards).

______________________________________________________________________________
Solar wrote:The "available data" on a crime isn't rows of data. It will come from humans. Filtered through their eyes and ears, their imperfect recollection, their own personal agendas, their abilities to express themselves.
People act and lie, but they all have a history. A.I. will know a lot of their history and will have a fair idea about how reliable they are. It might be sufficient just to collect people's opinions of each other as that would allow them to be given reliability ratings based on their considerable knowledge of each other. People would soon be sorted into a number of different groups, one of which would contain those who are extremely reliable while the group at the opposite extreme would contain utterly unreliable ones - the system would only need to know a few things about a small number of these people to work out which group is at which extreme. Add into that the knowledge collected by A.I. through direct observation of people's behaviour and it's easy to see how such a system would know how to weight the evidence from different sources.
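The mutual-rating scheme described above resembles reputation algorithms in the EigenTrust family: each person's reliability score is computed from the opinions held about them, weighted by the raters' own current scores, and iterated until it settles. A hypothetical sketch with invented numbers:

```python
# Hypothetical sketch of deriving reliability scores from mutual opinions,
# in the spirit of reputation algorithms such as EigenTrust: each score is
# the weighted sum of opinions about a person, weighted by the raters'
# own current scores, iterated to a fixed point.

def reliability(opinions, rounds=20):
    # opinions[i][j] = how reliable person i says person j is (0..1).
    n = len(opinions)
    score = [1.0 / n] * n  # start with everyone rated equally
    for _ in range(rounds):
        new = [sum(score[i] * opinions[i][j] for i in range(n))
               for j in range(n)]
        total = sum(new)
        score = [s / total for s in new]  # normalize each round
    return score

# Persons 0 and 1 trust each other; person 2 distrusts them and
# rates only themself highly - so 2 ends up in the unreliable group.
opinions = [
    [0.9, 0.8, 0.1],
    [0.9, 0.9, 0.1],
    [0.1, 0.1, 0.9],
]
scores = reliability(opinions)
print(scores.index(max(scores)))  # 0 (persons 0 and 1 outrank person 2)
```

This only illustrates the sorting-into-groups idea; a real system would also have to resist collusion, which is exactly the kind of tampering Solar objects to.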
Solar wrote:Either you try to build an AI that takes all this into account - and I will adamantly oppose the concept, on the basis that a system that judges nuances like this by a fixed set of rules can be fooled.
It's dead easy to fool people - at the moment many cases are decided by how well a performance went down with the jury. Cases should be judged by logical analysis of actual facts, and the history of the accused should ensure that a person with a good record is given the benefit of the doubt even when the performance of the accuser would convince a human jury. If the accused has a history of abusing other people, that person will already have been given the benefit of the doubt on previous occasions, and it would be safer to make more assumptions of guilt, all based on probabilities. We lock people up on the basis of probabilities at the moment, but huge mistakes are made. Using A.I. would make the calculations of probabilities right every time, though it still wouldn't guarantee the right verdict. Even so, it would get it right more often, and it would err in the direction of letting people off if their history is good while erring the opposite way if people have a bad track record, particularly in cases where an individual is dangerous.
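The "weight the track record" reasoning above is, in effect, Bayes' rule: the same piece of evidence produces a different probability of guilt depending on the prior. A toy sketch, with all numbers invented for illustration:

```python
# Toy Bayes' rule sketch (all numbers invented): identical evidence moves
# the probability of guilt differently depending on the prior suggested
# by the accused's track record.

def posterior_guilt(prior, p_evidence_if_guilty, p_evidence_if_innocent):
    # P(guilty | evidence) via Bayes' rule.
    num = p_evidence_if_guilty * prior
    den = num + p_evidence_if_innocent * (1 - prior)
    return num / den

# Evidence that is 4x more likely if the accused is guilty...
good_record = posterior_guilt(prior=0.1, p_evidence_if_guilty=0.8,
                              p_evidence_if_innocent=0.2)
bad_record = posterior_guilt(prior=0.6, p_evidence_if_guilty=0.8,
                             p_evidence_if_innocent=0.2)

print(round(good_record, 2))  # 0.31: a good record keeps the doubt
print(round(bad_record, 2))   # 0.86: a bad record weighs against
```

Getting this arithmetic consistent is the part machines would do reliably; whether the priors and likelihoods fed in are fair is the part Solar is disputing.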

The reality is that this will simply happen all by itself - A.I. will independently judge all cases and demonstrate that it does a much better job, thereby driving the public into forcing a change to using A.I. to decide on all cases so that the world becomes a safer place to live in. At the moment we have cases where someone is wrongly found guilty, and even though it's demonstrated that an error has been made it takes years of T.V. documentaries on the case and legal marathons to overturn them. With A.I. that would not happen - the corrected facts would be fed in, the data would be crunched afresh, and out would come a new verdict there and then.