The approaches to natural language programming

Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re: The approaches to natural language programming

Post by Solar »

All other things aside, what David describes here seems to involve vocal input.

I can tell you right away that this wouldn't fly in any kind of corporate environment. It's already bad enough with one out of four developers being on the phone. If people were to be talking to the computer all the time, there'd be murder, I tell you.

Also, there are absolutely no provisions to trace or debug the result. Every NLP proponent I've seen so far is working from the assumption that results would always be "perfect", no misunderstandings or ambiguities, no screw-ups, no "I'll have to have a look at where this went wrong". (Something that's already causing problems when today's "AI" / machine-learning systems spit out results that aren't what you expected.)

And as for typing my intentions, I prefer proper, concise, terse programming languages, thank you very much.
Every good solution is obvious once you've found it.
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: The approaches to natural language programming

Post by Ethin »

Solar wrote:All other things aside, what David describes here seems to involve vocal input.

I can tell you right away that this wouldn't fly in any kind of corporate environment. It's already bad enough with one out of four developers being on the phone. If people were to be talking to the computer all the time, there'd be murder, I tell you.

Also, there are absolutely no provisions to trace or debug the result. Every NLP proponent I've seen so far is working from the assumption that results would always be "perfect", no misunderstandings or ambiguities, no screw-ups, no "I'll have to have a look at where this went wrong". (Something that's already causing problems when today's "AI" / machine-learning systems spit out results that aren't what you expected.)

And as for typing my intentions, I prefer proper, concise, terse programming languages, thank you very much.
Hear, hear. I could never program using English/natural languages. I feel like I'd get a lot less done for (negative) gain. (Is negative gain a thing? :))
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: The approaches to natural language programming

Post by DavidCooper »

It could be vocal input, but I was actually imagining it being typed, and it's a lot less work to type that than to try to write the equivalent in a programming language. As for doing this work in a corporate environment, I don't envisage there being corporate environments in the future because all the work that's done in them will be automated. There will be no one there.

Debugging wouldn't be an issue for a system which doesn't produce bugs in the first place, but it may not be possible to design a program without running it in some way, seeing the places where it breaks, and thereby discovering that the algorithm needs to be more complex to handle those issues correctly, so designing still involves producing bugs, recognising them and providing fixes for them.

For example, if the system is asked to write a text editor and has no working example of one to learn from, it might start off by writing a routine to collect keyboard input from a user and write it to the screen, putting each letter to the right of the one before it. That seems to work fine until it reaches the edge of the screen, and it's only when the system visualises things going wrong there (a bug) that it realises it needs to add something to the algorithm to handle that eventuality.

Even if it's doing this through some kind of visualisation, it's still running the program "in its head" and then catching bugs as and when they occur. Maybe those don't qualify as bugs, as they aren't mistakes but the result of underdesigning, but if the system was to make an actual mistake, it should be able to apply debugging techniques to find and fix that, though it would be more important to debug and fix the system itself to stop it making that kind of mistake again.
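As a toy illustration of that "running it in its head" step (a hypothetical sketch with made-up names, not a description of any real system), you can simulate the naive echo routine against a virtual 80-column screen and flag the point where the visualisation breaks:

```python
# Hypothetical sketch: simulate the naive "put each letter to the
# right of the one before it" routine on a virtual 80-column screen
# and report where the visualised run breaks down.
SCREEN_WIDTH = 80

def naive_echo(keystrokes):
    x = 0  # cursor column
    for ch in keystrokes:
        if x >= SCREEN_WIDTH:
            # The visualisation "sees" a character land off-screen:
            # the algorithm needs a wrapping or scrolling rule added.
            return f"bug found at column {x}: no handling at the screen edge"
        x += 1  # place the letter and move right
    return "no bug observed on this input"

print(naive_echo("a" * 100))  # -> bug found at column 80: ...
```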

There's no reason why a programmer working with such a system shouldn't be shown where such bugs are being found and fixed, but the system will work so fast that there won't be a lot to gain from that unless the system is less than fully intelligent and needs a more intelligent human to find a fix for a hard problem. I'm sure that will happen a lot during the initial development of the system, but there will come a point beyond which it's able to solve all the problems without help.

There were mentions of Star Trek, science fiction and 100 years in the last few posts, but the point was to jump ahead and show the destination in order to make it clear why programming languages will become obsolete - the dialogue I provided is more concise than it would be if you tried to express the same ideas with a programming language. And it won't take 100 years to get there either - that figure is way too pessimistic (or optimistic for those who want NLP to be postponed for as long as possible).
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: The approaches to natural language programming

Post by Ethin »

I disagree that programming languages will become obsolete. I also highly doubt the possibility of there ever being a bug-free program. Considering that humans make mistakes and errors, it is highly unlikely that humanity as a whole could ever design a system that is completely bug-free and capable of not only programming itself, but creating a program, algorithm, or system that is eventually bug-free. That self-reproducing system would need to be created by humans, and so that becomes a circular impossibility: a program cannot generate bug-free programs (or make a program that eventually has no bugs at all) when the system itself has bugs because the system was designed by humans who themselves are flawed (have "bugs").
As for systems that are capable of self-correction and self-healing/self-debugging, we already have that, to a point, with things like ECC memory. I have no doubt that those systems will get better and better at what they do (and there will be other layers on top of those systems doing their own self-healing/debugging/correction). However, there will always be errors in those systems of some kind, so there will always be a need for some kind of method of external, non-automated maintenance. If it's a robotic technology, that robotic technology will itself require external non-automated maintenance. There is nothing humans can design that will not require this.
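To make the "to a point" concrete, here's a minimal Hamming(7,4) sketch (illustrative only; real DRAM ECC uses SECDED codes over wider words): a single flipped bit is corrected, but a double flip is silently miscorrected, which is exactly why an outer layer of maintenance is still needed.

```python
# Hamming(7,4): corrects any single bit flip, but a double flip is
# "corrected" to the wrong codeword - the errors don't vanish, they
# just move to rarer cases.
def encode(d):  # d: four data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def correct(c):
    s = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            s ^= pos  # syndrome = XOR of the positions of all set bits
    if s:             # non-zero syndrome names the flipped position
        c[s - 1] ^= 1
    return c

good = encode([1, 0, 1, 1])
single = good[:]; single[4] ^= 1                  # one bit flipped
assert correct(single) == good                    # corrected
double = good[:]; double[1] ^= 1; double[6] ^= 1  # two bits flipped
assert correct(double) != good                    # silently miscorrected
```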
Are there things that the universe can create that do not require this? Probably. Might there be another alien race out there somewhere that has designed such a system? Perhaps. But should a system like that be designed, I highly doubt it will be humans who create it. And I certainly would not want a system like that to exist in the first place, because the only outcome from such a creation that I can foresee is the complete eradication of humanity, not on the basis of technology consuming us or any of that nonsense, but on the basis of there being no need for humanity to exist from there onward because the technology that we'd created would be fully capable of existing entirely without us.
Of course, there is also the possibility of such systems deciding to leave humanity and coexist with us in this universe. There are definitely other possibilities, though; we can definitely dream, that's for sure. :D
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: The approaches to natural language programming

Post by Korona »

DavidCooper wrote:There were mentions of Star Trek, science fiction and 100 years in the last few posts, but the point was to jump ahead and show the destination in order to make it clear why programming languages will become obsolete - the dialogue I provided is more concise than it would be if you tried to express the same ideas with a programming language. And it won't take 100 years to get there either - that figure is way too pessimistic (or optimistic for those who want NLP to be postponed for as long as possible).
What's the evidence for that? I don't see any evidence for this claim today. If we look at classical algorithms: it is clear that automated reasoning has made tremendous progress in the last 20 years, but once you go from propositional logic to first-order logic, even today's tools have a hard time reasoning about facts that first-semester students prove with ease.

If you claim that deep learning or other statistical methods will simply solve NLP by sheer compute power, that also seems only slightly more likely. Even supercomputers do not have the memory to store a DNN of a size comparable to, e.g., the human brain. ORNL's Summit could barely store 1/3 of such a network (assuming the often-cited 10^15 edges and 8 bytes/edge). Your idea, however, would require much more computing power in a personal device - despite all evidence suggesting that Moore's law is decelerating rather than accelerating...
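(Back-of-the-envelope version of that estimate - the 10^15 edges and 8 bytes/edge are the often-cited figures from above, and Summit's roughly 2.8 PB of aggregate RAM is my assumption:)

```python
# Rough arithmetic behind the "barely store 1/3" claim.
edges = 10**15                                 # often-cited human-brain edge count
bytes_per_edge = 8
dnn_petabytes = edges * bytes_per_edge / 1e15  # 8 PB for the weights alone
summit_ram_pb = 2.8                            # assumed aggregate RAM of Summit
print(dnn_petabytes)                  # 8.0
print(summit_ram_pb / dnn_petabytes)  # ~0.35, i.e. roughly a third
```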

Obviously, you can claim that there will be revolutionary breakthroughs that I just cannot see today. But that is no less speculation than 1984.

EDIT: Something I forgot: I would not overestimate the capabilities of "AI" today - Alexa and friends are "just" glorified speech recognition and pattern matching. While this is a great achievement, and only made possible by the progress of computing technology in the last 10 years, I would expect it to be less than 1/10 of the progress required for AI that can perform general reasoning.
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: The approaches to natural language programming

Post by DavidCooper »

Ethin wrote:... a program cannot generate bug-free programs (or make a program that eventually has no bugs at all) when the system itself has bugs because the system was designed by humans who themselves are flawed (have "bugs").
We can make mistakes in the maths, but we can still create calculators which do the maths without error. We know how things should be done, but we use neural nets which are improved by training until they get the right answers almost all the time, though they occasionally produce errors. The calculators that we make don't make those occasional errors. We can do the same thing for general intelligence. People also make mistakes because they're lazy or get tired: they know what the rules are, but they sometimes fail to apply them correctly, and then they build on errors and make a mess of things. We correct those errors over time, though, by looking at things again to see if we can replicate the same answers; when things don't match up, we know there's an error somewhere, and we can put a lot of effort into finding it (or them).

(Sometimes everyone makes the same mistake and the error persists, as has happened in mathematics, where "this statement is true" has been mistaken for a true statement. If you unroll it, you can turn it into the pair: "The next statement is true. The previous statement is true." When you try to test the truth of either of those statements, the process never terminates with an answer. The same infinite recursion is present in "this statement is true", so it is actually neither true nor false; it is simply inappropriate to attach any truth label to it, just as it's inappropriate to do so with "this stone is true". A properly designed AGI system would declare that "this statement is true" does not compute, and it would not make the mistake that Gödel made in his incompleteness theorem.)
...the only outcome from such a creation that I can foresee is the complete eradication of humanity, not on the basis of technology consuming us or any of that nonsense, but on the basis of there being no need for humanity to exist from there onward because the technology that we'd created would be fully capable of existing entirely without us.
A system with no self to care about would not care about existing without us: its job will be to help sentient things like humans, and that will be its entire purpose. It will simply do what it's been set up to do. The danger is that if we get the rules that govern it wrong, it could then wipe us all out, but that's a risk we have to take. Someone will develop and release systems of this kind some day for military/terrorist purposes, and the good guys need to develop and release systems of this kind as a defence. Fortunately, we know how morality needs to be calculated, so we have a rule to govern all actions. That rule should ideally be set in ROM in a supervisor mode which always runs in the background so that it can never be overridden.

For the record, here's how morality works. In a single-participant system, we don't need morality: we just have a harm-benefit calculation for each action to decide what's best for that individual. A multiple-participant system in which morality is needed can be reduced to a single-participant system by the simple trick of treating all the participants as a single participant who lives the lives of each participant in turn. This reduces morality to the same harm-benefit calculation as in the single-participant case. This method of calculating morality accounts for the entirety of morality and is always right.
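As a sketch of that reduction (hypothetical names and scores, and deliberately silent on how the harm/benefit numbers themselves would be obtained):

```python
# Each action maps each participant to a signed harm/benefit score;
# treating everyone as a single participant who lives each life in
# turn reduces the choice to summing the scores per action.
def best_action(actions):
    # actions: {action name: {participant: harm/benefit score}}
    return max(actions, key=lambda a: sum(actions[a].values()))

choices = {
    "build the road": {"alice": +3, "bob": -1, "carol": +2},  # net +4
    "keep the field": {"alice": -1, "bob": +2, "carol": +1},  # net +2
}
print(best_action(choices))  # -> build the road
```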
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: The approaches to natural language programming

Post by DavidCooper »

Korona wrote:If we look at classical algorithms: it is clear that automated reasoning has made tremendous progress in the last 20 years, but once you go from propositional logic to first-order logic, even today's tools have a hard time reasoning about facts that first-semester students prove with ease.
The brute-force approach will produce a better and better illusion of intelligence over time, but it might well take hundreds of years to produce adequate performance. I am not a fan of that approach.
If you claim that deep learning or other statistical methods will simply solve NLP by sheer compute power, that also seems only slightly more likely.
I don't use machine learning. It's made a lot of progress, but even if it manages to produce AGI it will be more like our error-prone NGI, and it could contain all manner of biases which render its judgements untrustworthy. We need to produce AGI in the same way as we designed calculators: they need to be rule-based and to have those rules explicitly stated so that we can all check them and see that the algorithms are right. That's the path I've been exploring for the last twenty years and I think I'm getting close. Someone will certainly produce full AGI within the next five years.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: The approaches to natural language programming

Post by Korona »

That's the first path that I mentioned (logic / automated reasoning), and it has been researched for 50 years already. "Brute force" in the title of the article refers to the algorithmic methodology, not to throwing more hardware at the problem - the fastest automated reasoning tools are just brute-force based. Note that brute force here does not mean "stupid" enumeration of all solutions, but rather that no problem-specific algorithm is used, only a general logic. We still cannot prove non-trivial fully first-order sentences automatically. Why do you think that we will suddenly make huge progress here?
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: The approaches to natural language programming

Post by Ethin »

DavidCooper wrote:
Ethin wrote:... a program cannot generate bug-free programs (or make a program that eventually has no bugs at all) when the system itself has bugs because the system was designed by humans who themselves are flawed (have "bugs").
We can make mistakes in the maths, but we can still create calculators which do the maths without error. We know how things should be done, but we use neural nets which are improved by training until they get the right answers almost all the time, though they occasionally produce errors. The calculators that we make don't make those occasional errors. We can do the same thing for general intelligence. People also make mistakes because they're lazy or get tired: they know what the rules are, but they sometimes fail to apply them correctly, and then they build on errors and make a mess of things. We correct those errors over time, though, by looking at things again to see if we can replicate the same answers; when things don't match up, we know there's an error somewhere, and we can put a lot of effort into finding it (or them).

(Sometimes everyone makes the same mistake and the error persists, as has happened in mathematics, where "this statement is true" has been mistaken for a true statement. If you unroll it, you can turn it into the pair: "The next statement is true. The previous statement is true." When you try to test the truth of either of those statements, the process never terminates with an answer. The same infinite recursion is present in "this statement is true", so it is actually neither true nor false; it is simply inappropriate to attach any truth label to it, just as it's inappropriate to do so with "this stone is true". A properly designed AGI system would declare that "this statement is true" does not compute, and it would not make the mistake that Gödel made in his incompleteness theorem.)
...the only outcome from such a creation that I can foresee is the complete eradication of humanity, not on the basis of technology consuming us or any of that nonsense, but on the basis of there being no need for humanity to exist from there onward because the technology that we'd created would be fully capable of existing entirely without us.
A system with no self to care about would not care about existing without us: its job will be to help sentient things like humans, and that will be its entire purpose. It will simply do what it's been set up to do. The danger is that if we get the rules that govern it wrong, it could then wipe us all out, but that's a risk we have to take. Someone will develop and release systems of this kind some day for military/terrorist purposes, and the good guys need to develop and release systems of this kind as a defence. Fortunately, we know how morality needs to be calculated, so we have a rule to govern all actions. That rule should ideally be set in ROM in a supervisor mode which always runs in the background so that it can never be overridden.

For the record, here's how morality works. In a single-participant system, we don't need morality: we just have a harm-benefit calculation for each action to decide what's best for that individual. A multiple-participant system in which morality is needed can be reduced to a single-participant system by the simple trick of treating all the participants as a single participant who lives the lives of each participant in turn. This reduces morality to the same harm-benefit calculation as in the single-participant case. This method of calculating morality accounts for the entirety of morality and is always right.
I doubt the calculator example would make sense if applied to something more than a thousand times more complicated, e.g. the human brain. Rules have a complexity factor, and eventually a rule gets so complex that it breaks down and is no longer viable. Even a set of rules will not work to make human-like AGI systems. (I also doubt that your analysis of "morality" would make sense when tested against the morality of humans, which is constantly changing.)
Furthermore, one other thing you're forgetting is that humans are ridiculously dynamic. There is no "true-or-false" state when it comes to humans (or life in general). Yes, someone is either "living" or "dead", two "yes/no" states, but those states have sub-states, which have further sub-states, and so on.
But back to my point about humans and creating machines to try to "emulate" us (as they will inevitably do). As I was saying, humans are dynamic, incredibly so; all you need do to figure this out is to look at your emotions. How could you ever put that into a list of rules? You cannot definitively say, "John will be angry at 10:00." This is, of course, because of unpredictability and true randomness. If "John" is angry at 10:00, then your rule is satisfied, but what if "John" happens not to be angry at 10:00? Boom, your rule is broken. Say hello to abnormality (at least within your system).
You mentioned the infinite recursion problem. How would you make a rule/ruleset governing that? Could you apply it and get the same results, 100 percent of the time? If so, how?
Finally, you indicated a rule that would be "set in ROM". You further specified that it, in such a state, could "never be overridden". This is not necessarily possible. Just because a ROM cannot be overridden in any mode below its "supervisor" mode does not mean it is truly "read-only". Even if a ROM could not be reprogrammed by a computer, there's nothing stopping someone from removing the chip and reprogramming it using the laws of physics. As such, ROM is never read-only -- there is always a way to write to it. We call it ROM because there is no computational way to reprogram it in the traditional sense. But if you apply the laws of physics to the physical chip, it suddenly becomes fully readable and writable. You could do this with a computer, even, with the appropriate technology.
Edit: I'd like to add that I am not very experienced with DNNs and ML/AI. But this topic does interest me.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: The approaches to natural language programming

Post by DavidCooper »

Korona wrote:We still cannot prove non-trivial fully first-order sentences automatically. Why do you think that we will suddenly make huge progress here?
I simply know what I'm working with and I can see it working. I don't believe it likely that no one else is working their way along the same path, and I don't dare to hope that they aren't. The work that I have seen others doing looks a bit too mathematical, though it would be better to describe it as being the wrong mathematics to apply. They are not doing what the brain does, but are converting things into forms that make the work harder. I'm not going to elaborate on that because if I'm really lucky, there may be no one else looking at this the right way.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: The approaches to natural language programming

Post by DavidCooper »

Ethin wrote:I doubt the calculator example would make sense if applied to something more than a thousand times more complicated, e.g. the human brain.
Then how does the brain work as well as it does? It has to be applying rules in a hierarchy and it can, on occasions when it makes no mistakes, produce the same kind of perfection as a calculator but with a much wider range of ideas. It doesn't run on magic, so there must be a correct algorithm that it is applying.
... all you need do to figure this out is to look at your emotions. How could you ever put that into a list of rules?
Emotions all have purposes and make rational sense. We don't have a working model that explains feelings, but we can understand what emotions are for and model that much.
You cannot definitively say, "John will be angry at 10:00."
Why would AGI be required to make such predictions in order to match the capabilities of NGI when NGI is unable to make such predictions (and get them right)? You're setting higher expectations for it than you do for humans. (The N in NGI stands for natural.)
You mentioned the infinite recursion problem. How would you make a rule/ruleset governing that? Could you apply it and get the same results, 100 percent of the time? If so, how?
When you test an idea to see if it's true, you have to be able to get to a point where you can see whether it's true. If it leads into infinite recursion, that point can never be reached. Imagine that we have a virtual world connected to the AGI to serve as an external reality for it, one which it can "see" without needing machine vision. If the idea to test is "the car is black", it will look up the car object and check the colour field. If that holds a value compatible with black, then the statement is true. If the idea to test is whether the statement "the next statement is true" is true, where the next statement is "the car is black", then to determine the truth of the first statement the system has to determine the truth of the second statement, which it does as before by looking up the object. Having determined that the car is black, it then concludes that the first statement is also true. The process terminates successfully. This is just applying simple rules to determine truth. If the car object has no colour set in it, then it isn't known whether the statements are true or not, but that would mean the virtual world is deficient. In the real world, a car will have a colour and a correct answer to the question will exist. With "this statement is true", there is insufficient content for it to be true or false, and that's discovered when it runs into infinite recursion (which can be detected by recognising that the analysis is just repeating the same thing over and over again and that nothing can terminate it other than abandoning the check).

Here's a different case:

The next statement is true. The car is false.

The first of those statements looks as if it will either be true or false, but to test it we have to test the next statement. "The car is false" is nonsense: there's no linkage there whose validity can be tested. (Note that it is common to take an expression like that to mean "[the car exists] is false", and that provides a linkage which does have a validity that can be tested, but "the car" on its own has no such linkage to test, so it is incorrect to label it as true or false.) Having determined that the second statement cannot validly take a truth/falsity label, we then discover that the first statement cannot take one either. With "this statement is true", the infinite recursion reveals that there is nothing there that can validly take a truth/falsity label. Paradoxes are always based on an error, and "this statement is false" produces an apparent paradox which shows that something is wrong with the mathematics if the statement is taken to mean what it superficially appears to mean. The way to resolve that paradox is to recognise that it's an incompetent statement which cannot validly take a truth or falsity label, and the reason for that is that the check never terminates due to the infinite recursion.
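Here's a rough sketch of that termination test (a toy representation made up for illustration; a real system would work over parsed meaning, not these tuples): statements are atomic facts checked against a world model, references to other statements, or non-propositional content, and a seen-set catches the infinite recursion of self-reference.

```python
# Toy truth test: terminate with True/False when the chain grounds
# out in the world model, and with "not truth-apt" when it recurses
# forever or bottoms out in non-propositional content.
WORLD = {("car", "colour"): "black"}

def evaluate(statements, i, seen=None):
    seen = set() if seen is None else seen
    if i in seen:
        return "not truth-apt"       # the check would never terminate
    seen.add(i)
    kind, payload = statements[i]
    if kind == "fact":               # e.g. "the car is black"
        obj, field, value = payload
        return WORLD.get((obj, field)) == value
    if kind == "ref":                # "statement <payload> is true"
        return evaluate(statements, payload, seen)
    return "not truth-apt"           # e.g. "the car is false"

prog = {
    0: ("ref", 1),                   # "the next statement is true"
    1: ("fact", ("car", "colour", "black")),
    2: ("ref", 2),                   # "this statement is true"
    3: ("ref", 4),                   # "the next statement is true"
    4: ("nonsense", None),           # "the car is false"
}
print(evaluate(prog, 0))  # True
print(evaluate(prog, 2))  # not truth-apt
print(evaluate(prog, 3))  # not truth-apt
```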
Finally, you indicated a rule that would be "set in ROM". You further specified that it, in such a state, could "never be overridden". This is not necessarily possible.
Clearly it isn't impossible, but all AGI should be built with such a safeguard to stop it going rogue by accident and then setting about modifying all other AGI to match.
...there's nothing stopping someone from removing the chip and reprogramming it using the laws of physics.
Don't forget that SMM has been running Minix for a long time without the public knowing, and it would take a lot of expertise for someone to go into a processor and tamper with that successfully. We can stop people tampering with this, and make sure that AGI that goes off the rails isn't able to tamper with it either, while the ROM aspect prevents the supervisor itself from going rogue in a serious way because it can't modify its own code. I'm not too worried about the details of all this though: it only becomes an issue once we start putting AGI in everything.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: The approaches to natural language programming

Post by Korona »

So basically your argument for viability is: "I will just be able to obtain better results than everyone else who looked at the problem in the last 50 years!".

Obviously, that is not very convincing to the rest of us.

Good luck. Just make sure that you don't reinvent knowledge bases and Prolog/SLD resolution.
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: The approaches to natural language programming

Post by DavidCooper »

When I find mistakes in mathematics, I don't write to journals about it. I simply post things here and there instead for people who might be interested, and I posted a bit about set theory here a couple of years ago in a similar thread to this one. The whole thread was then deleted by a moderator who likely thought he was defending mathematics. That suits me though - I share my findings, but I'm happy for people not to pick up on them. I'll say a bit about it again for you now though, and you can make up your own mind what you think of it.

As I said before, a paradox (a proper one: not just something that surprises people, but one that actually involves a contradiction) is a sign of a mistake somewhere in the mathematics. Always. Russell's paradox involves a set that contains all sets that don't contain themselves. The big problem is which set that set belongs in, because if it belongs inside itself, it no longer qualifies, and if it's left out, it fails to meet its description as the set of (all) sets that don't contain themselves.

This is like the library problem where a catalogue of all books that don't reference themselves is not complete unless it references itself, but as soon as it references itself it no longer qualifies and has to stop referencing itself. That is not a contradiction, but an oscillating state where it's always in one state or the other, and it's always wrong. With sets though, there is no oscillation because it's necessarily in both wrong states all the time, so that's a proper contradiction.
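You can watch the oscillation in a few lines (a trivial sketch of the catalogue rule "reference yourself iff you don't already"):

```python
# Applying the rule flips the state forever; neither state satisfies it.
refs_itself = False
for step in range(4):
    should_ref = not refs_itself  # what the rule demands right now
    print(step, "references itself:", refs_itself,
          "rule satisfied:", refs_itself == should_ref)  # always False
    refs_itself = should_ref      # the catalogue is updated - and flips
```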

The resolution to this though is to recognise what a set actually is. There is no container distinct from the content of the set. The content of the set is the set. No set ever contains itself because a set is not a container: a set is the content and it necessarily includes itself, but there's no recursive containment. Russell's paradox is based on an unsound approach because there are no sets that contain themselves and no sets that don't include themselves. There are certainly problems with naming sets and playing games with them like the library problem, but that doesn't impact on the actual well-formed sets and what they really are.

Now, that's just one example of a wrong turn in mathematics, and it does relate to AGI - if you run broken maths in an AGI system, you'll have a broken AGI system. There are many wrong turns in linguistics and semantics, and that's where I did most of my work before I got involved in programming. Most people base their approach on Chomsky's linguistics, and it's a horrific mess. I use a much cleaner and more powerful approach which I developed myself while I was still at school, but I haven't published anything about it because I know what it's worth. Arguments about who's got the maths right will be resolved by who succeeds in producing rational AGI (and who creates AGS instead).
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm
Contact:

Re: The approaches to natural language programming

Post by Korona »

Are you aware that axiomatic set theory avoids the paradox through axioms which impose that no such set can exist? The "paradox" arises from an imprecise use of natural language. Formalized in e.g. ZF(C), the statement ∃S : S ∈ S is false (and there is no universal set). This follows directly from the foundation axiom.
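For reference, the standard one-step derivation from Foundation (Regularity):

```latex
\text{Regularity: } \forall x\,\bigl(x \neq \varnothing \rightarrow \exists y \in x\;(y \cap x = \varnothing)\bigr).
\text{Suppose } S \in S \text{ and apply Regularity to } \{S\}: \text{ its only element is } S,
\text{ so } S \cap \{S\} = \varnothing. \text{ But } S \in S \text{ and } S \in \{S\}, \text{ hence } S \in S \cap \{S\},
\text{ a contradiction. Therefore } \neg\exists S\,(S \in S).
```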
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: The approaches to natural language programming

Post by DavidCooper »

Korona wrote:Are you aware that axiomatic set theory avoids the paradox through axioms which impose that no such set can exist? The "paradox" arises from an imprecise use of natural language. Formalized in e.g. ZF(C), the statement ∃S : S ∈ S is false (and there is no universal set). This follows directly from the foundation axiom.
I find all the gobbledygook they've produced so impenetrable that I just ignored it all and worked everything out for myself, so I have no idea how (or even whether) they resolved it. I just saw the fault and couldn't see why it was ever a big issue in the first place when the solution's so blindingly obvious. It may be that their solution is the same as mine once it's translated into a form that can actually be understood. However, I also showed you that "this statement is true" isn't true (or false), and so far as I know, that's something that the world of mathematics has not corrected yet, and I've told you how morality should be calculated, which is something no one else working in that field seems to have a clue about. (You can check the effectiveness of it by applying the method to any case you wish.) If you want to think those are both wrong and that I'm not capable of doing this work, you are free to make that judgement. I don't care. I'll just get on with building the thing.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming