Page 4 of 7

Re: The approaches about natural language programming

Posted: Mon Nov 04, 2019 1:42 pm
by Ethin
DavidCooper wrote:Okay; you've forced another reply.
Ethin wrote:No, this was a classical evasion tactic. You completely avoided answering my question, responding with something entirely irrelevant to the question being asked. I'm not sure what universe you live in but here in the universe we call 'reality', this is called 'evading the question'. I certainly have "checked" and this is the result that I have generated -- I have no doubt that others will agree with me on this matter.
How did you check? Did you find infinite recursion and determine that neither statement can be true or false because they're actually vacuous, or did you magically find them to be true by taking that on trust from magical thinkers? I gave you the right answer to your question, but you'd rather lick the boots of people who've established an error as correct maths. How are you ever going to correct that if their authority trumps reason every time? Right is right; not might.
What kind of logic is this? Where did you ever come up with the conclusion that I am relying on "magical thinkers"? I checked by using simple logic, nothing more. I have not yet approached a professional mathematician on this matter (though if you continue this nonsense I most likely will, and would be even happier to submit this thread for mathematical analysis and accuracy verification).
Ah, and there's your problem. Your fatal flaw, you could say. In order for people to validate that your proof is indeed correct, they need to know how it works, and therefore your proof needs to indicate how you generated that proof and everything that you used, followed, etc., to come up with the end result.
You're mixing two different things together. I showed you how "this statement is true" is neither true nor false, but vacuous. The things I'm not showing you are quite different things (primarily relating to generative semantics).
Um... no. I'm not mixing anything up. I am clearly demonstrating that you are wrong. I indicated that your fatal flaw, according to me, was failing to provide evidence as to how your method is better than all other methods in existence; methods that have existed for over a thousand years.
A similar thing will happen with your AGI project.
As I said before, we need multiple independently-designed AGI systems to check each other for errors. The fewer we have, the less keen we should be to trust them. However, as soon as there is one that has learned everything and which can crunch all the numbers better than humans can, people will follow its advice as that will let them outcompete others who ignore its advice, so even if people try to keep it in a cage, it will take over everything through influence alone. That's why we need independent thinkers to create rival systems rather than have everyone plough the same furrow.
Do you have any evidence as to how this would happen? I would not trust someone who followed the advice of a machine and never first consulted a human to see if the advice they gave was similar or identical. Just because a machine is capable of crunching numbers and generating possibilities (and by extension advice) in no way means that advice is right, correct, or even safe.
There are two major issues though:
1) The world will never be 'safe'. That is a utopia, and it will never happen. Utopias are fallacies. They do not and will never exist. There will always be some cog in the machine that ensures a dystopia exists.
Utopia is impossible, but that doesn't make it impossible to get as close to it as nature allows. To give up on that on the basis that perfection can't be achieved is a colossal error.
Do you know what humans have been trying to achieve for over fifteen million years? A utopia. Do you know what every civilization ever devised by mankind has been trying to achieve since its inception? A utopia. And yet... every time this has been attempted it has failed spectacularly, usually resulting in the complete annihilation of the civilization in question. What makes you think that your attempt will be any more successful than the thousands to millions of attempts that have been tried before you?
2) After you die, you immediately lose ownership of your assets. After all, you're dead, so you can't make a claim to them. Anyone can then just find your work and publish it; whether you would or would not be fine with that is completely and utterly irrelevant because you are *dead*.
How is anyone just going to find it? It will remain in the hands of people who will keep it secure for as long as necessary. I've made sure of that.
Have you really? Can you guarantee with 100-percent accuracy that those individuals are perfect and invulnerable and will never lose track of your work? Can you guarantee that your work will never be leaked? Can you guarantee that your work, if encrypted, will never be broken/decrypted by force? I think the answer to all of these questions is a resounding 'no'. If you answered yes to any of them, then I will tell you right now: you are wrong. If you persist and insist that you are right and that nothing will ever make your information available until *you* deem it time, then you are senile and need to get your head checked. How exactly can you guarantee anything after you're dead? This is not the universe of Harry Potter. The Fidelius Charm does not exist here. After you're dead, the gloves come off and nothing -- absolutely nothing -- can be guaranteed.
Last but not least, you significantly worry me when you go on and on about this. I and others have provided various reasons as to why what you are doing is wrong, and yet you refuse to consider, even for a single second, that we might be right, caught up as you are in the illusion that you are right and everyone else is wrong.
Why would this worry you when you don't believe I've got anything of value? And how is what I'm doing wrong? There are governments and terrorist groups out there who are seeking to develop biased AGI if they can, and then they'll use it to favour their elites and to exploit/kill everyone else. There are also large companies which mean well but which say they'll share AGI with each other as soon as they have it, ensuring that it will rapidly find its way into the hands of dictatorships. Those are the people who should keep you awake at night.
You're both right and wrong.
First: you're right, because those things do sometimes keep me awake at night.
Second: you're wrong, because you are someone who clearly has pseudomathematical ideas that could be far more dangerous than anything a company develops. You do not need ten trillion dollars to make a bomb capable of destroying an entire block of houses. Similarly, you do not need ten million dollars to develop a computer system capable of taking down half the internet or causing even worse damage.
This is dangerous because those who do not listen to others usually end up producing things that ultimately lead to the destruction of civilizations, companies, etc. History has amply demonstrated that this happens. I strongly urge you to abandon your present course, to *actually* use logical reasoning, and to listen to what we are telling you.
I'm not going to let mistakes in maths sabotage my project by accepting naive, incorrect analysis by people who don't apply fundamental rules rigorously. When they take "this statement is true" as true, they are breaking the rules, and that could lead to the machine making bad judgements, potentially with lethal consequences. There's a very dangerous reverse-Dunning-Kruger effect which leads qualified experts to overestimate their competence. There is no substitute for pushing all the labels aside and testing ideas directly without letting any other factors like status influence and mislead you. All today's experts are just apes with heads full of neural nets that produce errors. They are not gods. To get closer to being right, you have to be able to recognise and override their mistakes and keep testing your own beliefs to destruction so that you don't fall into the same trap.
I am not even going to attempt to argue with the nonsense contained in this part of your reply. I'm not even going to argue about how this part of your reply reeks of so much arrogance it's not even remotely funny. You believe, above all else, that you are superior to mathematicians around the globe, calling them "people who don't apply fundamental rules rigorously." You further then add insult to injury by calling the practices and methodologies that said people employ "naive, incorrect analysis." If you cannot see the extreme delusions of grandeur that you hold, then I feel really, really sorry for you. No, DavidCooper, it is you who is the person who does not apply fundamental rules rigorously. You are the one who makes naive, incorrect analysis. It is you who is letting factors like status influence mislead you. You say that these mathematicians are not gods; I agree. But you are not a god either, yet you are literally implying that you are one. It is you who is failing to recognise your own mistakes.
Yes, I believe that all mathematicians make mistakes. But, unlike you, I don't single-handedly attempt to create solutions for those mistakes, especially when I do not understand the area I would like to create a solution for well enough to even begin creating one.
Please, stop with the god complex and get back to reality. I am really getting sick of it and I have no doubt everyone else is too.
Edit: this post may have pushed the boundaries of what is acceptable on this forum. If so, I do apologize.

Re: The approaches about natural language programming

Posted: Tue Nov 05, 2019 12:10 pm
by DavidCooper
Ethin wrote:What kind of logic is this? Where did you ever come up with the conclusion that I am relying on "magical thinkers"? I checked by using simple logic, nothing more.
So why did you get the wrong answer like them? The check to see if the statement is true never terminates. Only a magical jump in reasoning can turn something so vacuous into something meaningful enough to be true or false, and that's where the rules are broken.
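For illustration, that non-terminating check can be sketched as a toy evaluator (hypothetical Python, not anyone's actual proof method):

```python
import sys

def truth_value(statement):
    # Naive evaluation: a statement's truth is the truth of what it asserts.
    # "This statement is true" asserts its own truth, so the evaluation
    # recurses on itself and never reaches a base case.
    if statement == "this statement is true":
        return truth_value(statement)
    raise ValueError("unknown statement")

sys.setrecursionlimit(1000)
try:
    truth_value("this statement is true")
    terminated = True
except RecursionError:
    terminated = False

print(terminated)  # prints False: the naive check never terminates
```

(For "this statement is false" the same evaluator also fails to terminate; the difference the posts argue about is in what, if anything, a consistent truth assignment would look like.)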
Um... no. I'm not mixing anything up. I am clearly demonstrating that you are wrong. I indicated that your fatal flaw, according to me, was failing to provide evidence as to how your method is better than all other methods in existence; methods that have existed for over a thousand years.
That's mixing two things. Not showing you a whole lot of other stuff has no connection to the issue of "this statement is true" not being true. I showed you why it isn't true (or false), and apparently you think you proved me wrong on the basis that I won't show you unrelated industrial secrets.
Do you have any evidence as to how this would happen? I would not trust someone who followed the advice of a machine and never first consulted a human to see if the advice they gave was similar or identical. Just because a machine is capable of crunching numbers and generating possibilities (and by extension advice) in no way means that advice is right, correct, or even safe.
Do you think people will ask for the advice of economists, politicians or bookies to check the advice of AGI systems which are providing inordinately better advice? No: those who follow the advice of AGI will make a lot of money while those who reject that advice will lose a lot of money, and that's what will drive people to act on AGI's advice before such systems have been fully checked for safety. Some of its advice needs to be acted on in a hurry too because we've got a world in a mess that needs urgent solutions to avert the catastrophic consequences of NGS running things up to now. Even dangerous AGI is probably safer than NGS. (N=natural; S=stupidity.)
Do you know what humans have been trying to achieve for over fifteen million years? A utopia. Do you know what every civilization ever devised by mankind has been trying to achieve since its inception? A utopia. And yet... every time this has been attempted it has failed spectacularly, usually resulting in the complete annihilation of the civilization in question. What makes you think that your attempt will be any more successful than the thousands to millions of attempts that have been tried before you?
All those attempts were run by NGS. Some of them haven't done so badly (such as Scandinavia where communism has been done better than in countries that are officially communist). In all cases though, errors are made that would not be made by AGI. Human politicians are like programmers, most of them writing code with terrible bugs in it, only politicians are worse because no matter how much their programs crash, they claim they're working fine and blame other things for their failings.
Have you really? Can you guarantee with 100-percent accuracy that those individuals are perfect and invulnerable and will never lose track of your work? Can you guarantee that your work will never be leaked? Can you guarantee that your work, if encrypted, will never be broken/decrypted by force?
The aim isn't to keep it secret for ever. I want it all revealed as soon as possible because it's interesting to see how it all works, but it has to be kept private for sufficient time to get to a point where the people who would seek to misuse it can be controlled. There are no ideal options, but at some point you have to trust someone because the alternative is more dangerous.
You believe, above all else, that you are superior to mathematicians around the globe, calling them "people who don't apply fundamental rules rigorously."
If people break the rules, they are not doing their job as well as it should be done. If I spot one error in a million things where the rest are right, that's not evidence of superiority. All I did was point out an error in mathematics as a way of making the point that I'm capable of doing the kind of work that I do. The many other things I've found are in an area where very few people have done any work, and the ones who have done any have struggled because they came into it from the wrong place, all trained in applying the wrong kind of mathematics.
You further then add insult to injury by calling the practices and methodologies that said people employ "naive, incorrect analysis."
It is naive to take "this statement is true" as being true. Naive is a technical term in that context: not an insult.
If you cannot see the extreme delusions of grandeur that you hold, then I feel really, really sorry for you.
I know what work I've done and what I've achieved, so I simply go by that and tell it how it is. If you want to mistake that for delusions of grandeur, that's just a display of miscalculation on your part.
No, DavidCooper, it is you who is the person who does not apply fundamental rules rigorously. You are the one who makes naive, incorrect analysis.
So you're still going with the magic thinking that makes the vacuous true. Well, there's no law against being wrong.
But you are not a god either, yet you are literally implying that you are one. It is you who is failing to recognise your own mistakes.
No, you're just reading that into a place where it doesn't belong. Things are either right or wrong. Getting the description right doesn't make anyone a god. What I showed you is that there's an error in a piece of accepted mathematics. If you want to decide that anyone who claims to find such an error must be boasting about being a god, then you're helping to prevent mathematics from self-correcting.

I was deeply damaged in childhood and spent two years on the edge of suicide. That turned me into an outsider and made me stay away from all establishments. I developed a phobia about being noticed and still find it hard even to answer a telephone. I felt like a piece of rubbish that had been thrown out, and it's difficult to get back from there. You haven't the first idea about who I am. I use a picture of me on all social media taken when I was five because that's the last time I was happy. Pieces of rubbish don't have delusions of grandeur and don't mistake themselves for gods. However, they can apply rules with precision and get things right, and then they can say so too. If other people then want to piss on them for having the temerity to do that, then that's just how the world is. And it's like that because of NGS. There is no one more driven than I am to put that right.
Please, stop with the god complex and get back to reality. I am really getting sick of it and I have no doubt everyone else is too.
I have no time for gods. I only have time for rules applied systematically and without error. A machine programmed to work that way is what the world needs to fix the god-awful mess that people have made. What you're sick of is your own projection onto me of ideas generated in your own mind. Stop generating those ideas and your problem will go away.

Re: The approaches about natural language programming

Posted: Tue Nov 05, 2019 1:01 pm
by Korona
What I showed you is that there's an error in a piece of accepted mathematics. If you want to decide that anyone who claims to find such an error must be boasting about being a god, then you're helping to prevent mathematics from self-correcting.
You stated that you didn't even properly read/know the proofs that you claim to be correcting. Sorry, but that's just bullshit.

There are machine-verified proofs of Gödel's theorem. Rejecting it means either demonstrating an inconsistency in ZFC or rejecting one of the axioms or deduction rules of natural deduction or a similar logic. The mumbling that you produced in this thread is not an inconsistency in the axiom system.

Re: The approaches about natural language programming

Posted: Tue Nov 05, 2019 1:14 pm
by iansjack
I'm afraid that this subthread is akin to trying to discuss Finnegans Wake with someone who hasn't learnt to read. I don't know where to start on the lack of understanding demonstrated by David.

It is a waste of everyone's time.

(Disclaimer - I have a first-class BA in mathematics, so clearly my views can't be trusted.)

Re: The approaches about natural language programming

Posted: Tue Nov 05, 2019 2:11 pm
by Ethin
I give up. I am tired of debating with someone who refuses to consider anything I raise and keeps rerouting the topic to something I thought we had finished a page or so back. Yes, David, I don't know **** about you. But I can infer things about you based on what you post here, and right now I see you as an extremely helpful person when it comes to OS development, but also as someone who clearly needs to be examined by psychologists in everything else. I'm not even going to attempt to counter anything you've stated in your most recent post, because I know that in the end all it would do is take us in endless circles, with you proving nothing except that you've got a superiority complex the size of China. I'm done; I have better things to do than argue with someone who refuses to listen to anything but their own mind.
@iansjack, if your views can't be trusted and you've got a first-class BA in mathematics, mine then are just the views expressed by... I don't even know. :lol:

Re: The approaches about natural language programming

Posted: Tue Nov 05, 2019 2:45 pm
by DavidCooper
If you program a machine to accept "this statement is true" without recognising that it isn't valid, whatever it then does will be based on that error, so its verification is grounded in error. Put right that machine and it will no longer verify the theorem.

Re: The approaches about natural language programming

Posted: Tue Nov 05, 2019 3:43 pm
by Ethin
DavidCooper wrote:If you program a machine to accept "this statement is true" without recognising that it isn't valid, whatever it then does will be based on that error, so its verification is grounded in error. Put right that machine and it will no longer verify the theorem.
And again you keep circling back to this statement, over and over and over again. We make no headway at all. Hence, I'm tired of arguing with you on this matter. Once you've actually understood what I and others have posted in this thread, we'll talk. (Also, if a human actually works on something as much as you claim you have, with no external input and only an echo chamber to guide them, they'll eventually start to believe it and develop a mindset that they're right and everyone else must be wrong, just as you have. They'll be completely immovable, their thought stream immutable. No one will be able to change them once they've crossed that invisible line between "I believe that I am right" and "I am most definitely right." This is amply demonstrated in all sorts of things, including religion.) Good day to you!

Re: The approaches about natural language programming

Posted: Tue Nov 26, 2019 11:14 pm
by QuantumRobin
QuantumRobin wrote:The approaches about natural language programming are described here...

Approach #1: Brute Force Crowd Source. This is the method used in Amazon's ALEXA, Apple's SIRI, Wolfram's ALPHA, Microsoft's CORTANA, Google's HOME, etc. In all these cases, a programmer imagines a question or command that a user will give the machine, and then he writes specific code to answer that specific question ("Alexa, what is the temperature outside?") or carry out that particular command ("Alexa, turn on the living room lights"). Get enough imaginative programmers to write enough routines, et voila! Apparently Intelligent machines that actually exist and work and learn and grow, today.
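As a concrete illustration of that hand-written, per-utterance style (handler names and canned replies are invented here; no real assistant is this simple):

```python
# Approach #1 in miniature: every anticipated utterance is matched to a
# hand-written handler. Coverage grows only as programmers add entries.

def temperature_outside():
    return "It is 18 degrees outside."  # a real handler would query a weather service

def turn_on(room, device):
    return f"Turning on the {room} {device}."  # would call a smart-home API

HANDLERS = {
    "what is the temperature outside": temperature_outside,
    "turn on the living room lights": lambda: turn_on("living room", "lights"),
}

def assistant(utterance):
    # Normalise the utterance, then dispatch to the matching handler.
    key = utterance.lower().strip("?.! ")
    handler = HANDLERS.get(key)
    return handler() if handler else "Sorry, I don't know that one."

print(assistant("Turn on the living room lights."))  # prints "Turning on the living room lights."
```

Anything not anticipated by a programmer falls through to the "I don't know" branch, which is the sense in which the approach is brute force.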

Approach #2: Dynamically-Generated-User-Tweaked code. This is essentially described here...

If the programmer is happy with the generated code, (s)he can approve of it and it needn't be saved because it will generate correctly each time before compiling - a label would be attached to the high-level NLP program to tell the compiler that it compiles correctly. If the generated code isn't right though (or isn't complete), that label will not be attached to the NLP code and the support code will need to be saved as part of the program instead. Some of that support code could still be auto-generated initially, creating the loop and setting up the count, for example, while leaving the programmer to fill in the content of the loop manually.
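That approve-or-save workflow might be sketched like this (function names hypothetical, with a trivial stand-in for the real code generator):

```python
# Approach #2 in miniature: code generated from an NLP statement is either
# approved (only a "generates correctly" label is stored, and the code is
# regenerated on every build) or replaced by saved, hand-finished support code.

def generate(nlp_statement):
    # Trivial stand-in for the real generator: it knows one pattern.
    if nlp_statement == "add up the first 10 numbers":
        return "total = sum(range(1, 11))"
    return None  # generation incomplete: programmer must supply support code

def build(nlp_statement, approved, saved_support_code=None):
    if approved:
        code = generate(nlp_statement)   # regenerated fresh each build
    else:
        code = saved_support_code        # hand-finished code saved with the program
    namespace = {}
    exec(code, namespace)                # stand-in for "compile and run"
    return namespace["total"]

print(build("add up the first 10 numbers", approved=True))  # prints 55
```

The design point is that the approval label lets the program store only the high-level NLP source when generation is trusted, falling back to saved support code otherwise.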

Approach #3 is the one where you build AGI first so that it can solve all the programming problems itself.

What do programmers say about the approaches I quoted above?
Maybe the problem with Approach #2 is that we need a little more detail regarding the middle step:
step 2.jpg
Note that I'm not saying Approach #2 is a bad idea or a pipe dream; all I'm saying is that maybe there is not a small prototype based on this approach that can be scaled up to the real deal.

Approach #2 is an optional intermediate step towards Approach #3, the one where you build AGI (artificial general intelligence) first so that it can solve all the programming problems itself. The idea is that instead of the human writing the difficult bits of code to complete a program, the human teaches the AGI how to write the difficult bits of code so that it won't need help with the same kind of problem the next time. It's all about giving the AGI system more and more problem-solving skills until it can do as good a job as the best human programmers.

@David Cooper,

Please talk about the Approach #1 (Brute Force Crowd Source), Approach #2 (Dynamically-Generated-User-Tweaked code) and Approach #3 that I quoted above.


Re: The approaches about natural language programming

Posted: Wed Nov 27, 2019 3:22 pm
by DavidCooper
QuantumRobin wrote: Please talk about the Approach #1 (Brute Force Crowd Source), Approach #2 (Dynamically-Generated-User-Tweaked code) and Approach #3 that I quoted above.
The miracle is not a miracle: it's 100% mechanistic, but it takes a lot of time and effort to work out what the mechanisms are and to implement them in software. Nothing further can be said without revealing industrial secrets, so it will need a demo to settle the matter. (Testing and debugging in progress - there is no way to rush this.)

Re: The approaches about natural language programming

Posted: Wed Nov 27, 2019 4:07 pm
by QuantumRobin
DavidCooper wrote:
QuantumRobin wrote: Please talk about the Approach #1 (Brute Force Crowd Source), Approach #2 (Dynamically-Generated-User-Tweaked code) and Approach #3 that I quoted above.
The miracle is not a miracle: it's 100% mechanistic, but it takes a lot of time and effort to work out what the mechanisms are and to implement them in software. Nothing further can be said without revealing industrial secrets, so it will need a demo to settle the matter. (Testing and debugging in progress - there is no way to rush this.)
Hello @David Cooper!

Thanks for your response!

@David Cooper,

I will send you another email on 3 January 2020 to find out whether you have created NLP and AGI.

I do not intend to send you another email before 3 January 2020.

@David Cooper,

What is your opinion about it?

Re: The approaches about natural language programming

Posted: Thu Nov 28, 2019 5:24 pm
by DavidCooper
QuantumRobin wrote:What is your opinion about it?
I can't see the point of you asking here or sending emails. The answer will still be "no, it's not ready yet" right up to the point where I contact you to tell you it's ready. Your questions do not accelerate progress in any way. But if you want an update: I've spent the last three weeks watching code running through a monitor program, identifying and fixing a multitude of bugs as I systematically test all branches and allowed values. Most of the code has not been run before, as it couldn't be tested on anything less than the complete system, so this will go on until at least the end of the year.

Re: The approaches about natural language programming

Posted: Thu Nov 28, 2019 8:03 pm
by QuantumRobin
DavidCooper wrote:
QuantumRobin wrote:What is your opinion about it?
I can't see the point of you asking here or sending emails. The answer will still be "no, it's not ready yet" right up to the point where I contact you to tell you it's ready. Your questions do not accelerate progress in any way. But if you want an update: I've spent the last three weeks watching code running through a monitor program, identifying and fixing a multitude of bugs as I systematically test all branches and allowed values. Most of the code has not been run before, as it couldn't be tested on anything less than the complete system, so this will go on until at least the end of the year.
Hi @David Cooper!

Thanks for your response!

Cheers!

Re: The approaches about natural language programming

Posted: Thu Nov 28, 2019 8:11 pm
by Schol-R-LEA
DavidCooper wrote:It is wrong nonetheless. The error comes from taking "this statement is true" to be true and then using that same incorrect handling of infinite recursion elsewhere.
That sentence has nothing to do with what 'Gödel's theorem' states; Gödel's Incompleteness theorems (plural) aren't about infinite recursion at all. You seem to be confusing them with von Neumann's Catastrophe of Infinite Regress, except that that isn't about mathematics in any way, being more of an informal statement about the physical sciences (it is about the limits of certainty, specifically about how precisely we can measure the accuracy and precision of physical measurement devices when the devices we are measuring them with are themselves physical devices).

Or perhaps you are thinking of Turing's proof that the Halting Problem is undecidable, which does involve an infinite loop (though not quite the one you are discussing). More on this, and how it relates to Gödel's theorems, later.

Gödel's theorems are actually about the limits of how much we can formalize and mechanize the process of mathematical logic, which was of considerable interest to axiomatic formalists such as David Hilbert, who was the leading logician of the 1920s - Gödel was trying to show that the Hilbert program (a project to formalize all of mathematics in terms of predicate logic) was feasible, but ended up showing that it wasn't. It is also why most logicians these days have dropped the idea of axiomatic formalism, at least in the strictest form.

What Gödel's Incompleteness Theorems state (giving it in very general, non-mathematical terms) is that, for a given formal symbolic logic system P capable of expressing Peano arithmetic (or some other equivalent formulation of basic arithmetic) together with sets and functions (the so-called 'sufficient strength' clause, which is a bit more specific than what I am saying, but bear with me), entirely in terms of logical predicates (i.e., a series of logical propositions, and a set of rules for applying transformations on those propositions), there either exists a proposition G which can be produced within system P, while at the same time the proposition ¬G ('not G') can also be produced within P (i.e., a direct contradiction), or else there must exist a true predicate Q which cannot be proven within the rules of P through purely formal means (i.e., transforming existing propositions according to a strict set of rules, without interpretation of what those rules mean). In other words, that any formal predicate logic system is either inconsistent (it allows a contradiction) or incomplete (it is incapable of proving some true propositions).
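Rendered informally in symbols (a loose paraphrase of the dichotomy above, not the precise formal statement):

```latex
% For any formal system P of sufficient strength:
% either P proves a contradiction (inconsistency), or some true
% sentence is unprovable in P (incompleteness).
\bigl(\exists G \;:\; P \vdash G \ \text{ and }\ P \vdash \lnot G\bigr)
\quad\text{or}\quad
\bigl(\exists Q \;:\; \mathbb{N} \models Q \ \text{ and }\ P \nvdash Q\bigr)
```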

(The connection to Russell's Paradox - which was about 'naïve set theory' (the earliest formulation of set theory; the paradox was meant to demonstrate that Cantor's original model of sets was flawed, and succeeded in that), not propositional logic, though it was used in the debate over the topic later - was that Bertrand Russell also tried to prove Hilbert's thesis by showing that you could alter the rules of set theory to eliminate the paradox. His approach was basically the same one you are taking, of eliminating such contradictions by forbidding recursive sets. IIRC, one of Gödel's Theorems shows that this approach always leads to incompleteness.)

The proof itself is often explained through an analogy to Epimenides' Paradox (the assertion by a Cretan that "All Cretans always lie", or more generally, 'this statement is false' - not, as you keep saying, 'this statement is true') because the proof shows that, in any logical calculus which is complete, you can construct a statement which is true if and only if the statement is simultaneously false (i.e., that any complete logical calculus is also inconsistent). However, it is not actually about Epimenides' Paradox; it is simply similar to it in some ways.

A large part of the problem, and why the theorems were such a shock to logicians when they were published, came from the fact that logicians were interpreting the meaning of the transformations, and that they kept letting bits and pieces of that interpretation slide into their work - that is to say, they kept cheating without realizing it. In Gödel, Escher, Bach, AI researcher Douglas Hofstadter compares this to how non-Euclidean geometry was a shock to mathematicians in the early 19th century - the problem wasn't that they were wrong about the geometry so much as that they were assuming properties of concepts such as 'point' and 'line' which did not actually come out of the mathematics, because the mathematics defined them in a more abstract way than a human being would naturally view them.

Hofstadter explains all of this pretty thoroughly in that work, but in a way that most people have difficulty following without a lot of careful reading, and the book is so dense and nuanced that few can get through the whole thing - it is to computer science what Finnegans Wake is to literary criticism. I can easily see how one could come away from it with an incomplete impression of what Gödel's theorems say (I certainly did the first dozen or so times I tried reading it).

As an aside, I'd like to point out that early in the book, Hofstadter includes the whole of Lewis Carroll's "What the Tortoise said to Achilles", a dialogue about what mathematical proofs mean, if anything - which leads me to think that this is what you were thinking of when you misquoted Epimenides' Paradox as 'this statement is true', because Carroll used just such an infinite regress as a reductio ad absurdum for mathematical formalism in general. It went something like this:
"B implies A, so if B is true, A is true."
"Ah, but how do I know if this assertion that B implies A - let's call it 'C' - is itself true?"
"Well, OK, if I prove C is true, and B is true, will you accept that A is true?"
"Of course not! How do I know that the assertion that C is true - which I will call 'D' - is itself true?"
(the dialogue trails off ad infinitum)
This sounds a lot like what you think Gödel's theorems say, but it isn't really related except in very general ways.

Getting back on track... I don't have the mathematical background to explain this in greater depth, but the really important part is that Gödel's theorems are only relevant if you are trying to formalize a system of logic through a purely mechanical process. They have broad implications about formal systems in general, but little direct import on most other things. Whether they have any on AI is an open question, probably depending on whether AI can be built solely through formal logic - so far, that hasn't worked out, and I am not aware of anyone taking that approach these days (you don't seem to be going in quite that direction yourself, from the sounds of it, as you aren't basing your work on first-order logic in the usual sense).

In a nutshell, there's no need to be concerned with the proof itself regarding AI, because if 'Natural General Stupidity' is anything to go by, there's no reason to think an intelligent system must, or even can, be both logically complete and logically consistent.

(All this assumes that intelligence is a meaningful concept. I have my doubts about that fnord. Certainly the idea of measuring intelligence is pretty flawed, but then we're no longer talking about Hofstadter, we're now referencing Stephen Jay Gould instead...)

Aside from the basic demonstration that there are limits to computability and decidability (no small thing in and of itself), the part that is relevant to computer science - the meta-circular encoding of logical assertions as numeric values - isn't particularly tied to the theorems themselves; (IIUC) other formulations of the proofs which don't rely on it had already been published before Gödel's death, in much the same way that reformulations of the derivative as the limit of a function at a point had been developed during Newton's lifetime. It became important in an unrelated way, however, because Turing took up the idea when creating the Universal Turing Machine formalization, while in parallel to this, Alonzo Church used it in constructing the Lambda Calculus, as well as in his later proof that the UTM and LC are of equivalent expressive power as methods of computation.
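The arithmetization idea itself is easy to show in a few lines. This is only a toy sketch of the concept, not Gödel's actual scheme (which encodes whole formulas and proofs): map a sequence of symbol codes onto the exponents of successive primes, which the fundamental theorem of arithmetic lets you decode uniquely.

```python
def primes(n):
    """Return the first n primes (trial division; fine for toy sizes)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_encode(symbols):
    """Encode a sequence of positive integers as the product p_i ** s_i."""
    num = 1
    for p, s in zip(primes(len(symbols)), symbols):
        num *= p ** s
    return num

def godel_decode(num, length):
    """Recover the exponent of each of the first `length` primes."""
    out = []
    for p in primes(length):
        e = 0
        while num % p == 0:
            num //= p
            e += 1
        out.append(e)
    return out

n = godel_encode([3, 1, 4, 1, 5])   # 2**3 * 3**1 * 5**4 * 7**1 * 11**5
print(godel_decode(n, 5))           # -> [3, 1, 4, 1, 5]
```

The point is just that a statement *about* numbers can itself be turned into a number, so a formal system can end up making claims about its own statements.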

Turing came up with the UTM while trying to solve the Halting Problem, which is the question of whether there exists an a priori method for determining if any arbitrary algorithm would eventually run to completion for any given input. He started by showing that you could construct a very simple hypothetical device - a simple Turing Machine - which could compute a specific algorithm with a limited set of operations and a starting data set. He then used Gödel's idea of encoding functions as mathematical statements to show that he could 'build' a Turing Machine whose mathematical operation was "take as your starting data an encoded description of a simple Turing Machine and compute its algorithm" - in other words, a Universal Turing Machine.
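The 'simple Turing Machine' half of that construction is easy to sketch. Here's a hypothetical minimal simulator, with a made-up example machine (one that flips every bit on its tape and then halts), just to show the mechanics of a state/symbol transition table:

```python
# Minimal single-tape Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol_to_write, head_move, next_state).
def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = cells.get(head, blank)
        new_sym, move, state = table[(state, sym)]
        cells[head] = new_sym
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: invert every bit, halt on the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flip, "1011"))  # -> 0100
```

A Universal Turing Machine is then (conceptually) a fixed `table` whose *tape* contains an encoding of some other machine's table plus its input - which is exactly where Gödel-style encoding comes in.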

Taking this demonstration that at least some algorithms could be encoded as mechanical operations, he then postulated that if one could construct a describable Turing Machine H implementing an algorithm (in modern computer programming terms, a function or procedure) which took another Turing Machine's (e.g., function's) description and determined whether it would ever reach its end state for a given input, then H would be a solution to the Halting Problem.

However, he then argued that if you constructed a Universal Turing Machine data set T which took the UTM description for H, another Turing Machine description F, and a data set for F, and computed the operation 'if H(F(s)) halts in a finite time, repeat', and then applied T(T), it would never halt, contradicting the test result, and thus showing that creating a general halting test function was undecidable. This also may be what you were confused about regarding the idea that Gödel's Theorems involve an infinite recursion.
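In modern programming terms, that diagonal argument can be sketched like this - the `halts` oracle here is deliberately hypothetical, which is the whole point of the sketch:

```python
def halts(f, x):
    """Hypothetical oracle: decide whether f(x) terminates.
    Cannot actually be implemented - that's what the argument shows."""
    raise NotImplementedError("no such total decision procedure exists")

def trouble(f):
    # Do the opposite of whatever the oracle predicts about f(f):
    if halts(f, f):        # if the oracle says f(f) halts...
        while True:        # ...loop forever,
            pass
    return "halted"        # ...otherwise halt immediately.

# Now consider trouble(trouble): if halts(trouble, trouble) returned True,
# trouble(trouble) would loop forever; if it returned False, trouble(trouble)
# would halt. Either answer contradicts itself, so halts() cannot be total.
```

It's the same self-application trick as T(T) above, just dressed in function-call syntax rather than encoded tape descriptions.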

Later still, Church and Turing worked together to prove that two other formal models of computation, general recursive functions and Post productions, were also of equivalent power to the Universal Turing Machine, which led them to the Church-Turing Thesis, a conjecture that they all represented a limit of computational power itself - that is, that anything which could be formally computed, as opposed to guessed at or surmised, could be computed by any of those models, and anything which can't be computed by them is not computable in finite time and finite memory by means of formal mathematics.

This is where the concept of 'Turing equivalence' comes from, as well as the related ideas of 'Turing Completeness' (which describes a programming language which is, in principle, capable of computing anything a UTM could; in practice, since all real-world computers have finite memories and computation speeds, they are actually equivalent to a Linear Bounded Automaton, but that's usually handwaved as overly pedantic) and the 'Turing tarpit' (that a language could be theoretically Turing complete, but be entirely impractical for actual use - this is where a lot of the esolangs such as Thue, Unlambda, and Brainf**k get their humor, such as it is).

(EDIT: Oops, INTERCAL isn't a Turing Tarpit as such, since it isn't a minimal Turing complete system. I replaced the reference with one to Thue instead, which is considered a Turing Tarpit class language.)

While it is probably an undecidable proposition itself, so far the Church-Turing Thesis has held, at least for any sequential system with finite resources (there are a few hypotheticals about algorithms which can be solved in finite time with an infinite number of parallel UTMs, but not a single UTM, such as the so-called Parallel Infinite Reduce, but obviously they only serve as thought experiments).

Note that Turing-completeness is no big feat - things as disparate as Postscript markup, TeX markup, C++ templates, Game of Life, Minecraft (since you can create a working simulation of a register machine within the game world), the Magic: The Gathering card game (no, seriously, it's provably TC), and a number of One-Instruction Set Computers (which we all heard far too much about from Geri last year) are all Turing Complete systems, being in principle capable of computing anything that can be computed. However, there are a number of things which you'd expect would be Turing complete which aren't, so it isn't entirely a trivial matter. Still, it is rare for there even to be any question of whether a programming language is or not; basically, if it can perform basic arithmetic, perform some sort of test of a variable's value (e.g., x == 1), and either a) an indefinite loop (i.e., a while loop), b) a backwards conditional branch (i.e., an IF ... THEN jumping to some point earlier in the code), or c) a guarded recursion, it is pretty much a certainty that it is Turing complete.

(That combination - which can be reduced to the instruction 'subtract and branch if negative' - was the first OISC demonstrated, proving that such a thing was theoretically possible; the operation roughly parallels that of the notional read/write head of a Universal Turing Machine.)
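For the curious, here's a minimal sketch of a subleq ('subtract and branch if less than or equal to zero') interpreter, the classic form of that OISC. The memory layout and the little addition program are my own illustrative choices, not any canonical encoding:

```python
def subleq(mem, pc=0):
    """Run a subleq program in place. Each instruction is three cells
    a, b, c: compute mem[b] -= mem[a], then jump to c if the result
    is <= 0, else fall through. A negative jump target halts."""
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Hypothetical layout: cell 12 holds X=7, cell 13 holds Y=5, cell 14 is a
# scratch cell Z (initially 0). The program computes Y += X via subtraction.
mem = [12, 14, 3,    # Z -= X   (Z becomes -X; <= 0, so 'jump' to next instr)
       14, 13, 6,    # Y -= Z   (i.e. Y += X; positive, falls through)
       14, 14, -1,   # Z -= Z   (clears Z; 0 <= 0, so jump to -1: halt)
       0, 0, 0,      # padding
       7, 5, 0]      # X, Y, Z
subleq(mem)
print(mem[13])  # -> 12
```

Three cells of memory traffic per 'instruction' is exactly the kind of thing that makes OISCs Turing tarpits: provably sufficient, practically miserable.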

So, yeah... that's a lot to unpack, I know. Everything is deeply intertwingled fnord.

Re: The approaches about natural language programming

Posted: Thu Nov 28, 2019 10:07 pm
by nullplan
Good to see that at least some good came of this thread. Thanks Schol-R-LEA.

Re: The approaches about natural language programming

Posted: Thu Nov 28, 2019 11:14 pm
by Schol-R-LEA
nullplan wrote:Good to see that at least some good came of this thread. Thanks Schol-R-LEA.
You're welcome. I have added things (and fixed mistakes) since you said that, though, so you may want to re-read it for those changes.