thoughts about github co-pilot
- acccidiccc
- Member
- Posts: 38
- Joined: Sun Mar 21, 2021 1:09 pm
- Location: current location
Will GitHub Copilot replace us developers? It has become pretty evident that companies want to get rid of expensive developers. What effects will it have on osdev? What is the effect on C development? Since the training data is taken from GitHub, should I host my software there? Doing so would be like a candle lighting itself for a little light. What is the general future of software engineering like, especially with tools like these? Or will it do nothing and be remembered as a flop? This really seems like the extinguish phase of Microsoft's business strategy: embracing developers with a dev-friendly Windows (WSL, PowerShell), extending it with GitHub and VS Code, and extinguishing with GitHub Copilot. I kinda worry about the future of CS.
iustitiae iniustos iudicat
Re: thoughts about github co-pilot
acccidiccc wrote: "Will github copilot replace us developers"
Have you tried Copilot? It's great for sure, but it makes mistakes. It's far from perfect. It's also unable to reason about anything; it's basically just a code-pattern matcher. It is impressive, but it will make mistakes and create bugs (I've seen examples of that), and it certainly isn't able to understand your program. In fact, AIs are far from being intelligent and capable of reasoning. Will AI one day write code? Possibly, but I think we are very, very far from reaching that point. Even self-driving cars don't have a 3D view of the world. They are impressive for sure, but they are unable to reason.
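To make the "code-pattern matcher" point concrete, here is a deliberately tiny toy sketch (my own illustration, nothing like Copilot's actual model): a bigram completer that suggests the next token purely from what most often followed it in a training corpus. There is no understanding of the program anywhere, only frequency counting.

```python
from collections import Counter, defaultdict

def train(corpus_lines):
    """Count, for each token, which tokens followed it in the corpus."""
    follows = defaultdict(Counter)
    for line in corpus_lines:
        tokens = line.split()
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1
    return follows

def suggest(follows, token):
    """Return the most frequent continuation seen in training, or None."""
    candidates = follows.get(token)
    return candidates.most_common(1)[0][0] if candidates else None

corpus = [
    "for i in range ( n ) :",
    "for item in items :",
    "for i in range ( 10 ) :",
]
model = train(corpus)
print(suggest(model, "for"))    # "i" - seen twice, vs "item" once
print(suggest(model, "range"))  # "("
```

The suggestions look plausible because the patterns are common, not because anything was reasoned about; that is the gap between "impressive autocomplete" and "understanding your program".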
acccidiccc wrote: "It has become pretty evident that companies want to get rid of expensive developers."
I am not sure what evidence you are talking about, since you haven't named any, unless you mean Copilot itself. The goal of Copilot is not to replace developers but to speed up the work of actual developers. If anything, companies want more and more developers and cannot find enough of them. This can be seen in the growing number of unfilled positions and in developer salaries that keep going up and up. Companies don't want to get rid of developers; they want more! Developers might be expensive, but the return on investment for each one of them is a net positive, even for not-so-good programmers.
acccidiccc wrote: "What effects will it have on osdev?"
osdev is a niche hobby. At worst, Copilot will do nothing to it. At best, it is a nice tool for people working on their OS.
acccidiccc wrote: "This really seems like the extinguish phase of Microsofts business strategy - embracing developers with dev-friendly windows (wsl, powershell), extending it with github and vscode and extinguishing with github copilot. I kinda worry about the future of CS."
I am not sure I am following you. Microsoft's strategy is to embrace new ideas and make life easier for everyone using software. This includes developers and non-developers. There isn't some sinister hidden agenda behind Copilot. It's just an interesting project that has existed internally for some time and has now become public.
Full disclosure: I work on an AI team within Microsoft, so I will end this post here. But the above are my views.
Last edited by kzinti on Fri Dec 24, 2021 3:26 pm, edited 1 time in total.
Re: thoughts about github co-pilot
All AI will, at some point, replace human beings. I can't stand for any of it anywhere. That new Wombo bot trend will, at some point, replace artists and voice actors; that AI Dungeon game will replace writers; and Copilot will one day replace programmers, though it will probably take the longest. Not just that, but AI can continue learning, while we have to retrain new people every generation. There is no theoretical limit to its capabilities, so there is no reason not to expect this, what I call "human obsoletion", to happen. Some studies even state that half of all jobs will be replaced by 2050. Do people think 50% is some Great AI Barrier? And yet, people will continue to push it for its "convenience" and "efficiency", without considering that, perhaps, the two are overvalued or even harmful, and without taking the human factor into account.
In the same way states are now banning cash for its inconvenience, they will probably also ban human drivers for theirs. I can imagine butter knives being banned, too. You have your friendly robot roommate who cuts butter with zero effort, so wanting a knife can only mean you're a murderer.
This is where hedonism takes you. It'll all probably end with everyone at their homes, rotting on their couches on constant life support. But it's okay, because they'll be in their happy magic VR land where everything's fine, and they'll get to do things they can already do right now anyway.
kzinti wrote: "The goal of co-pilot is not to replace developers but to speed up the work of actual developers."
I'm sure that's the goal right now, and I'm willing to believe Microsoft specifically has no hidden agenda, but even so I have concerns. There's a line between productivity and plain sloth. And knowing sloth, any automated help will only result in people putting less effort into themselves, balancing everything out again and leading to the above. I'm sure you have good intentions, but I just can't stand for that either, nor see it as anything but naive.
Re: thoughts about github co-pilot
I had written a long answer to mid's post that I am now reconsidering. Part of it was that we might want to implement the Starship Troopers option, but actually, Starship Troopers is just the final consequence of enforcing personal responsibility. I believe that is where most countries are currently falling short. I keep hearing "it was a hacker attack" and "software problem, we can't do anything", not "we failed to make our systems secure" and "we failed to ensure the software would do what we want when we bought it." The people in charge are dodging responsibility at every turn, and in the end either nobody is at fault, or they find a hapless janitor to take the blame (and the golden handshake). I mean, remember when VW's diesel scandal was supposedly the fault of one American manager (with an Asian name and appearance)? Or when all the courts in Berlin were brought to a halt by a "hacker attack" (from Russia, no less), and their IT took months to rebuild? That last one just wasn't anybody's fault. Or if it was, I didn't hear about it.
AI will aid that culture of irresponsibility. When bugs are discovered, it will no longer be anybody's fault; not even the processes need to be adjusted. It was a co-pilot error, we can't do anything about it. We can't possibly be expected to understand the code we are running.
Even now, we are seeing exactly this trend with Tesla autopilot. Autopilot is, legally, an assistance device, a driving aid. It is merely an advanced cruise control. The driver is still in command and still expected to be alert. Crucially, the driver is still legally liable for any accidents happening. But no, "drivers" are no longer paying attention when autopilot is on, and keep driving into mis-parked trailers, and, in at least one case, into deer. But what then? Drivers are raising the autopilot defence in court. It does not help for the reason above, but it shows what they are thinking. These people simply are not taking responsibility for their own actions (which in this case was to delegate all driving duties to a system they do not understand, and that has flaws they do not understand, and that has destroyed their vehicle due to these flaws), unless forced upon them by others.
Recently I heard one of the January rioters being sentenced to 5.5 years in the slammer, and apparently he yammered into the microphones about being oh so sorry that he sprayed people with a fire extinguisher before throwing it at them, but it wasn't really his fault, see, it was all those fake news doing it. My god, that guy just got sentenced to prison and he isn't taking responsibility. I could go on like this for hours, but I do want to post this at some point before New Year's. So I will close with a plea to each and every one of you (and all the people in your lives): Own Your Sh*t. And Merry Christmas.
Carpe diem!
Re: thoughts about github co-pilot
I am with nullplan here. I am very concerned with where things are going with AI, but not because it works. I am concerned because it doesn't work. His examples with Autopilot are spot-on. AI doesn't do what people think it does. AI does not "learn things". What we call machine learning is training a model on a set of data. The result is an inference engine that can use this model to produce some output based on some input. There is no reasoning anywhere in the system. It doesn't evolve by itself. In fact, most models need to be retrained periodically because they become obsolete.
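The train-once, infer-many pattern described above can be sketched in a few lines (a purely illustrative one-parameter model of my own invention, not anything from Copilot): training is the only step where data changes the model, and inference is a fixed mapping that never updates anything.

```python
def train(xs, ys):
    # Training: pick the weight w minimising squared error for y ~= w * x.
    # This is the only step in which the data shapes the model.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def infer(w, x):
    # Inference: a fixed input-to-output mapping. Nothing here updates w,
    # "reasons", or learns from the inputs it sees.
    return w * x

w = train([1, 2, 3], [2, 4, 6])   # fits w == 2.0
print(infer(w, 10))               # 20.0
# If the underlying data changes, w stays stale until we retrain:
w = train([1, 2, 3], [3, 6, 9])   # retraining yields w == 3.0
```

The inference engine keeps applying the frozen weight no matter what it sees, which is exactly why models go obsolete and need periodic retraining.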
Mid seems to be overreacting and repeating things out of science-fiction books and movies that have little to do with reality. Yes, AIs are replacing more and more jobs, just as automation is and was. But AIs, incapable of reasoning (and that's nowhere in sight), will not replace the jobs of programmers, lawyers, judges, police, firemen, artists, and so on. When attempts are made to replace jobs with AIs, they cause more problems than they solve. Again I refer to nullplan's reply: things are about to get very bad if people trust AIs and attribute to them capabilities they don't have. Unfortunately it seems to be going that way.
There are also a lot of very concerning ethical questions around the usage of AI. But that's a whole different conversation.
Re: thoughts about github co-pilot
I honestly am concerned about the licensing problems and other legal issues that Copilot is going to cause. I haven't used it myself, but from what I know it's just a fancy clipboard. You start typing, invoke the model, and it dumps a ton of code in your lap, comments and license headers and all. People keep pushing AI as though it's some kind of metacomputer, claiming it'll take over this and that, when it really won't. It might not ever. I mean, I don't even think we have adaptive AIs, where you train one and it then keeps learning from the data fed into it while the model is in use. I'm no ML expert, but the AI would need to be retrained every time you want it to utilize new data.
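That batch-retraining workflow can be sketched like so (a made-up minimal model for illustration; real systems differ in scale, not in shape): the deployed model is a frozen artifact, and incorporating new examples means rebuilding it from the full data set rather than learning incrementally at serve time.

```python
class FrozenMeanModel:
    """Toy model: predicts the mean of its training targets, frozen after fit."""

    def fit(self, ys):
        # Training bakes the data into the model once.
        self.prediction = sum(ys) / len(ys)
        return self

    def predict(self):
        # Serving never updates the model, no matter what happens outside.
        return self.prediction

data = [10, 20, 30]
model = FrozenMeanModel().fit(data)
print(model.predict())  # 20.0

data.append(100)                     # new data arrives...
print(model.predict())  # still 20.0 - the deployed model is unchanged
model = FrozenMeanModel().fit(data)  # ...so we retrain from scratch
print(model.predict())  # 40.0
```

(Some model families do support incremental updates, but the common workflow for large trained models is exactly this fit-freeze-refit cycle.)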
Re: thoughts about github co-pilot
kzinti wrote: "But AIs, incapable of reasoning (and that's nowhere in sight) will not replace the job of programmers, lawyers, judges, police, fireman, artists and so on"
I have already brought up Wombo. I've already seen people I know using it for their wallpapers instead of paying a real artist for work they might enjoy and money they might earn. I've already seen videos that used 15.ai without paying real voice actors. IOW, artists have already started being replaced. What do you think the average person will do when given a choice between spending money and waiting weeks for real art, or paying nothing and waiting two minutes for something that only looks like real art? Likewise with retail. Do you really think 10 store clerks will become 10 AI maintainers?
It seems you people took my post to be speaking of terminators and Skynet, when it's nothing of the sort. It's more insidious than that.
It's not even AI in itself; it's just the last straw. Over the years, I've witnessed people around me change "thanks" to modern technologies, as they've been deployed here quicker than elsewhere, and I've seen clearly what they've caused. That map app you have on your smartphone may have made you more efficient, but it has destroyed your spatial awareness and socially isolated you further than you already were. Cashless payment has also cut you off from further human contact and worsened your mental arithmetic. The Alexas and Googles that listen in on you have given you even less reason to get off your arse and flip a light switch. Constant availability has only made us more irritable and impatient. I can't concentrate even for a short time on something without being distracted by myself. Most people can't handle a webpage loading for more than a few seconds. That didn't use to be the case. We were using binary search in our books long before it was formally described; now it's all Ctrl+F.
It's so horrifically common that those in charge may as well change the definition of "healthy" to mean hunched, blind and plain stupid.
And AI is much worse because it learns, speeding up this regression. Yet, people will gladly accept that for the same convenience and efficiency as they had done before.
My conclusion is that it's either-or: you cannot have smart machines and smart humans at once. As AI improves, humans will regress. Willingly.
Re: thoughts about github co-pilot
mid wrote: "I have already brought up Wombo. I've already seen people I know using it for their wallpapers, instead of paying a real artist for work they may enjoy and money they may earn. I've already seen videos that used 15.ai, without paying for real voice actors. IOW, artists have already started being replaced."
You are also talking about wallpapers. Not exactly Dali.
mid wrote: "It seems you people took my post to be speaking of terminators and Skynet, when it's nothing of the sort. It's more insidious than that."
Nothing of the sort, I assure you. I do not think there is a sinister intent behind the trajectory we are currently on. People acting with the best of intentions, each making rational decisions for their particular situation, are ending up sending humanity down a path that can only end somewhere between the Matrix and Logan's Run.
mid wrote: "Over the years, I've witnessed people around me change 'thanks' to modern technologies [...] That map app you have on your smartphone may have made you more efficient, but it had destroyed your spatial awareness, and socially isolated you further than you already have."
Yes. It is laziness. Laziness is the rational choice for all animals facing a lack of anything to do (e.g. lions laze about 90% of the day). And technology allows us to do things more quickly or delegate our unpleasant duties to someone else. Replacing the duties thus abdicated, however, would be the harder choice, and so most people choose not to replace them with anything. That leaves them with a surplus of time, which is then filled with entertainment instead of anything worthwhile. Which leads to degeneration of their abilities, because the brain isn't trained anymore (and no, solving a crossword every now and again is not the same thing). And yes, I am probably guilty of that very thing myself, at least in some ways.
And then people lose their ability to even read a map, let alone place themselves on it. My siblings all have children now, and they are spending their car journeys looking into tablets. When I was a kid, I used to watch my parents drive, and in so doing learned the rules of the road (and the mechanics of driving).
But the most insidious thing is the loss of the ability to make a decision, and own the decision. You go left because the GPS tells you to go left, not because it makes sense with the information you have. The GPS was wrong, so now you are mad at the GPS. At the same time, you do not make the decision yourself anymore. No no, leave it to the GPS, then get mad at the GPS. Then you transpose that pattern on anything. You got fat because the food in the supermarket is unhealthy, not because you failed to educate yourself on cooking, or what the little numbers on the packaging mean. Corona got out of hand because the politicians are doing the wrong thing, not because tons of people are not vaccinating on spurious grounds (or failed to change their behavior before vaccines were available). You got hacked because that is just what hackers do, not because you employed an insecure system, and failed to verify its security. The kids are learning garbage in school because of corrupt school boards, not because you failed in your duty to elect a competent one. And on and on it goes.
AI merely aids in the laziness, it is not causing it.
mid wrote: "My conclusion is that it's either-or: you cannot have smart machines and smart humans at once. As AI improves, humans will regress. Willingly."
Here you make the mistake of collectivism. Not all humans are the same, not all are equally OK with losing abilities they once had, and some people like to preserve knowledge. I believe we will reach some middle ground at some point. For example, I highly doubt that self-driving cars will entirely replace human drivers. Some people are just too stubborn for that. No, skills will be retained by some people.
And I'm OK with that. We have a society because not everyone has to be equally good at everything. If we do end up at Logan's Run, someone has to be the old hermit, and I would be OK with being that at least.
Carpe diem!
Re: thoughts about github co-pilot
Mid, I think the problem you are talking about is much bigger than AI. I agree with most of what you said. I myself refuse to get an auto-driving car and rely on my smartphone as little as I can. People around me think this is funny, but I am turning into this "old hermit". I believe in developing your skills and learning new ones. This is a personal choice.
The main problem I see has to do with lacking, poor, or insufficient education. Critical thinking is not valued anymore. Lots of people don't even know what it is! The idea that everyone's opinion is valid and equal to anyone else's is a huge problem, causing most of the social and political problems in the occidental world.
A lot of this came to be because of the Internet. Instead of leveling everyone up as we had hoped, it gave a voice to idiots, ignoramuses, and bad people. Facebook is a major (if not the biggest) part of the problem. Somehow no one is holding them accountable.
I don't think things are going the right way; the post-WW2 world order is over and things are about to change a lot. There might be an argument that AI is accelerating all that, but I don't think there are more fundamental issues underneath. In the right context AI is great. But in a world where the developers of AI do it because it's fun, and where the 1% just want more money out of it, it's not being used properly or in a way that can improve our lives. We don't ask ourselves enough (if at all) whether certain AI models should be developed and used.
As for the argument that jobs are lost, I think this is great. It's not great for the people losing their jobs, so it will do a lot of harm for a generation or two. But long term, if it leads to better quality of life and more leisure time, I think that's fantastic. Obviously the current capitalist model will not (and cannot) survive, and some radical changes will happen.
Re: thoughts about github co-pilot
People have been losing jobs to automation for decades, but instead of a post-scarcity society, we've had a huge increase of stress for perhaps the majority of people. In some ways we are post-scarcity, mobile computing is everywhere, even in almost all 3rd-world countries, but too many find it hard to get enough to eat, even in the United States. Even some who don't consider themselves poor have to work excessively to feed their families.
I've changed attitudes a lot over the years. In the 80s and 90s I dreamed of automation and AI, but in this century I've gradually shifted to favouring the employment and commissioning of people. Most people like to work if it's something they enjoy and there isn't too much of it.
I have to bring in my beliefs here. I don't believe human governance can ever work well, especially not in the long term. From a broad enough perspective, it doesn't matter whether we have AI or not; something will always go horribly wrong. Human greed is a major cause of problems, but there's also the simple inability to see the future. Despite humans' best efforts to predict the results of changes, sometimes they make the wrong changes. What if AI governed humans? AI cannot literally see the future any more than humans can, so it will, at times, make horrible mistakes just like even the best humans. And in any case, what standards would be held by any hypothetical AI government? Many scenarios have been considered in which an AI government follows standards that are not the best for the humans in its care.
My hope is for God's Kingdom to take over, as presented in the Bible. The governments and economies we know, full of problems as they are, are temporary. The whole mess that has existed for almost the entirety of human existence is temporary, all of it consisting of attempts to back up the claim that humans would be better off governing themselves rather than being ruled by God. This goes back to the Garden of Eden, where the first liar suggested to Eve that she'd be better off breaking the one simple restriction God had given. God immediately made a plan, outlined it in an allegorical way to Adam and Eve, and revealed details piece by piece in the millennia since.
As for explaining in detail, I'm not at my best at present. I drew my conclusions about government, human or AI, years ago when I was in somewhat better health. I chose to put my trust in God's Kingdom then, and I've stuck to it despite the fact that I had to make a lot of changes. Philippians 4:13 is encouraging, "For all things I have the strength through the one who gives me power." What government can give its citizens the strength to get over ingrained harmful personality traits? They can fund therapy, but it doesn't always work. The true God who created humans knows exactly how we work and what to give us. But I'm digressing.
To anyone who wants to learn more about my hope, I invite you to look up the following.
Videos (< 3mins 30s each):
Why Does God Allow Suffering?
What Is God’s Kingdom?
Or these "Bible Questions Answered" categories; something like a Bible FAQ:
Human Suffering
God’s Kingdom
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Re: thoughts about github co-pilot
Artificial Intelligence: Expectation Vs. Reality
https://www.predictionhealth.com/blog/a ... p-27041318