Opinions On A New Programming Language
Re: Opinions On A New Programming Language
Objects (functions) can only share data with other objects if the data is immutable. The exception is that data can be sent (transferred) instead of shared. Once data has been sent, it can no longer be mutated by the sender, unless the receiver sends it back.
Traditional pass-by-value semantics entails creating a copy of data and then sending the copy. The use of "const" implies that data is immutable to both the callee and the caller for the lifetime of the callee. Problems arise when the callee shares the const data with another object that outlives the callee: the data becomes mutable to the caller again, while the other object still assumes it is immutable when it is actually now volatile.
One possible solution to this problem is to use a reference counter, and not allow the object to become mutable to the caller again until there are no more references to the object.
- Schol-R-LEA
- Member
- Posts: 1925
- Joined: Fri Oct 27, 2006 9:42 am
- Location: Athens, GA, USA
Re: Opinions On A New Programming Language
Wajideus wrote:
So, this is just something I noticed, but my ownership model actually parallels true object-oriented programming and CSP (communicating sequential processes).
Objects (functions) can only share data with other objects if the data is immutable. The exception is that data can be sent (transferred) instead of shared. Once data has been sent, it can no longer be mutated by the sender, unless the receiver sends it back.
Traditional pass-by-value semantics entails creating a copy of data and then sending the copy. The use of "const" implies that data is immutable to both the callee and the caller for the lifetime of the callee. Problems arise when the callee shares the const data with another object that outlives the callee: the data becomes mutable to the caller again, while the other object still assumes it is immutable when it is actually now volatile.
One possible solution to this problem is to use a reference counter, and not allow the object to become mutable to the caller again until there are no more references to the object.

If you think about it, this is a very similar problem to two other common issues in language design and semantics: the 'upward local reference problem', where a reference to a local variable is returned by the function or inserted into a mutable argument; and the 'upward funarg problem' for nested functions, in which returning a reference to a nested function creates a closure, capturing the outer function's state, which then has to become persistent.
So what you are actually talking about is capturing a part of the callee's state - but in this case, it isn't the variable, but the condition of the variable. However, since here you are trying to avoid capturing the state accidentally, it simplifies things a bit in terms of runtime handling, though possibly at the cost of some added compile-time analysis.
Now, as a not-very-brief digression, I should mention that this gets a bit involved when talking about nested functions, hence why it is called the 'upward funarg problem' - the entire nested state needs to be captured to avoid dangling references in the enclosed function, which could involve capturing several layers of function environments (especially in functional languages and the Lisp-family languages, where things like blocks and loops are often syntactic sugar over anonymous functions).
This presents problems in terms of both run-time and compile-time support. In particular, naive approaches to run-time support either require that all function environments live on the heap rather than on the stack, which presents serious problems WRT memory management, or else involve adding a lot of behind-the-scenes 'spooky action at a distance' which the programmer wouldn't be able to see. Similarly, the compile-time handling can be pretty complicated, and can also complicate or hamper some optimization rules the compiler designer might want to apply.
This is why C simply forbids nested functions - supporting closures would be a serious break from the rest of the language's semantics, and the compile-time techniques needed to handle it efficiently would have added far more complexity to the compiler than the language designers wanted (especially in 1969, when some methods weren't available to them yet). Some compilers (including GCC) support them, but it is very much a non-standard extension and usually the compiler requires you to explicitly set an option to allow them. Most languages whose syntax and semantics are partially derived from C, such as C++, Java, C#, JavaScript, etc., follow suit for the general case, but many have added (either from the outset or later on) special cases for things like lambda functions, and/or allow nesting but have rules regarding returning function references.
End of digression.
Fortunately for you, this doesn't apply to this specific problem, which is closer to the problem of dangling references to local variables. Still, the compile-time solutions to all three of them are somewhat similar (for languages which do permit upward funargs, and thus support closures), which is why I mentioned it.
One model in particular might help here: copy-on-return semantics, which is similar to copy-on-write semantics but involves making a copy of all or part of the local state from the local environment to the upward one, or giving an error which requires the programmer to make such a copy in order to resolve the conflict. You probably don't want to have the compiler do this automatically in a systems language (more on this below), but you could make it an idiom of the language, possibly enforced by compiler error checking.
The solution that seems most promising to me would be to add a compile-time check to see if references to const arguments might be returned or added to an object whose lifetime extends beyond that of the function call.
Since (AFAIK) as a practical matter you cannot determine this absolutely through static analysis, you probably want to be conservative and take any case where you can't show it won't be (which should be easier to do in static analysis) as a case where it will occur.
You as the language and compiler designer have a few different options about what to do when it does find such a conflict:
- Silently insert an operation that casts away const-ness in the upward reference, possibly by copying the data. This is a simple approach from the programmer's point of view, but it involves spooky-action-at-a-distance - the programmer doesn't know what the compiler and the run-time code are doing. It also adds some compiler complexity and overhead, and may add run-time overhead as well. It is a decent approach for application programming, where that is the kind of unnecessary detail which the language might want to suppress, but in a systems language it is unacceptable.
- Make it an error, and require the programmer to explicitly change the code to eliminate the conflict, probably by explicitly casting away const-ness but possibly in some other way. It is a more flexible approach than the previous one, giving the programmer more control over how it is handled; it avoids spooky-action-at-a-distance, and doesn't require as much compiler cleverness. This parallels Brendan's favored solution for detecting and resolving other kinds of potential run-time conflicts, such as those for array bounds and type ranges, and is a solid model, though it does require additional programmer effort.
- Take the previous method as a base case, but give additional syntax and semantics for abstracting it away in some common cases, which would somewhat reduce the cognitive load for the client-programmers while still allowing flexibility and transparency, but at the cost of additional (possibly significant) compiler complexity. This can make sense for bounds checking in type definitions (and is pretty much my own planned solution for those issues), but I am not sure how you would do this in this instance, as it is sort of a case-by-case issue. Still, it is something to think about, I suppose - maybe you could add parameter modifiers that tell the compiler how to resolve it, and then give an error message if a conflict exists that doesn't have an applicable modifier, so the programmer can either add one or address it in some other way.
- Warn about it, but don't require the programmer to handle it. This is a very unsafe approach, but unfortunately is the kind of ostrich attitude seen all too often in existing languages. I would recommend staying away from this sort of wishful thinking.
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Re: Opinions On A New Programming Language
Schol-R-LEA wrote:
One model in particular might help here: copy-on-return semantics, which is similar to copy-on-write semantics but involves making a copy of all or part of the local state from the local environment to the upward one, or giving an error which requires the programmer to make such a copy in order to resolve the conflict.

The point of ownership semantics is to avoid having to copy by communicating data instead of allowing random access.
Imagine you have a book and a pen; and you can only talk with me by writing. There are 2 scenarios that can unfold.
1. You write your message in the book, and then give it to me. I can show the book to whoever I want; even put it on public display, however you're the only one with a pen so no one else can write to it. Also, you can't write to it either until the book is solely in your possession again. In this case, the book represents immutable data.
2. You write your message in the book, and then give me both the book and the pen. We've now reversed roles. I can write in the book and give you both the book and the pen back, or just the book. In this case, the book represents mutable data that is shared by communication.
This technique is more robust than const-ness because it doesn't allow the caller to change the value of an object that's passed by reference, which in turn prevents the dangling pointers that arise when the caller destroys an object that's still referenced elsewhere because the callee promoted its scope.
Re: Opinions On A New Programming Language
Wajideus wrote:
It's definitely better in terms of flexibility and simplicity. It's one of the things I love about languages like Lisp and Self (Smalltalk).

I assume that by that parenthetical you are explaining that Self is a descendant of Smalltalk, not that it is a version of Smalltalk. The latter would be like saying that C# and Java are the same language (actually, while Self uses the Smalltalk message syntax, semantically they are probably more different than Java and C# are). Just sayin'.
(As a related aside, the tendency of both Lispers and non-Lispers to speak of the Lisp family as just 'Lisp' is... problematic. The differences between the languages in the family - Common Lisp, Scheme, Racket, Clojure, older versions of Dylan, nuLisp - are as great as those between C, C++, C#, Java, JavaScript, PHP, Perl, and the other Algol-family languages which use C's bracket syntax as their structural basis. However, there's a long history of seeing Lisp as a continuum rather than a single language, even among Lisp programmers, at least up until the late 1980s, and the one thing they all have in common - using sexprs to represent both built-in forms and functions in a mostly-uniform syntax - means that you can mostly-imitate any other Lisp just with a few functions, macros and read-macros, or even by slapping together a rump meta-circular interpreter for the other language that you can call, so the lines can easily get blurred. steps off of soapbox)
Wajideus wrote:
The downside though is that you pay for the overhead of runtime compilation and dynamic binding. This specific language is intended for the same niche as C/C++, so that's a tradeoff I'm not willing to make.

Yeah... well, that trade-off isn't as sharply divided as most people get the impression it is, and often depends less on the languages and more on the implementations. I've mentioned Stalin∇ before, and while the language it is for (Vlad) isn't quite the same as any standard Scheme, it is definitely a Lisp; I see no reason why a similar super-optimizing finishing compiler couldn't be applied to any other type of language, though maybe not to any specific existing language. The key to this is separating the semantic model from the implementation model in a way that lets you use different implementations so long as each produces the expected results.
More to the point, just because the language defaults to doing something one way doesn't mean it can't have pragmas, modifier keywords, and specializing syntax that allow the programmer to tell (or at least request or suggest to) the compiler to use some other semantics. A simple example (if now long out of date) is the register keyword in C - yeah, it is pretty much a no-op today, but that's due more to improvements in register allocation algorithms since C was designed than to a flaw in the idea of hints. The same holds as a runtime operation for the System.gc() method in Java (or the similar System.GC() in .Net languages) - the garbage collector is free to ignore the request if it concludes that there's no need to collect garbage right now, but in most instances, it will take the hint and collect it.
An analogy - not a strong one, but still - could be made between this and using an instance of an abstract class. The parent class defines the expectation - that the variable accepts a message or method invocation with a given signature, in accordance with LSP, and that the method at least generally follows the intended semantics (e.g., a 'move' method causes the object to move, somehow) - but leaves the actual implementation to the child class. The child class's designer is still free to implement it as they choose, and more to my point here, can use overloading to have a method whose semantics are related, but whose signature is more specialized than that defined by the parent class, indicating that it should use a specific form of the idea rather than the generic one.
Here, the same relation holds between the language and the implementation - the language doesn't, and perhaps shouldn't, define how the compiler implements something, but probably should have modifiers, pragmas, etc. that say, "do it this way, please" with varying degrees of forcefulness.
Schol-R-LEA wrote:
Maybe this is a Lisp-ish view, as non-toy Lisp compiler implementations are full of this sort of thing, and some are even used by (non-toy) interpreters. The highly dynamic approach inherent to Lisp makes them necessary for anything like acceptable performance. Even things as basic to the language as lists are often not what they seem to be - nearly every serious Lisp interpreter since the days of the MIT AI Lab (circa 1965) has used 'CDR-coding' to snap the pointers for lists when they are created, turning the linked lists into tagged arrays of elements (which could be addresses of a datum or another list, or any tagged datum that would fit into the same space as an address, or in some cases more exotic things such as tagged indexes into blocks of homogeneous data elements which could be treated as a unit by the garbage collector) as a way of reducing both space and time overhead.

Wajideus wrote:
It sounds more or less like it's just an optimization for linked lists that happens to work because the lists in Lisp are value types (immutable).

While it is definitely an optimization, I should point out that lists aren't immutable in most Lisp-family languages. The forms set-car! and set-cdr! (or rplaca and rplacd in Common Lisp and many older dialects) allow you to change the car (the first element of the list) and the cdr (the remaining sub-list following the car), respectively. The actual semantics vary from language to language, though.
Schol-R-LEA wrote:
In my view, it would make sense to have virtual (or at least potentially virtual) be the default, and have some modifier indicate when something has to be resolved early. Similarly, a variable in a 'parent class', 'interface', 'template', or 'typeclass' could be resolved as the actual type/class if the compiler can determine early on that it can only be an object of a certain sub-class, even if the programmer can't be certain.

Wajideus wrote:
Yeah, as I mentioned before, it's certainly debatable. Being virtual never causes problems, but being non-virtual makes a system more rigid and hard to change. That being said, I also think it's important to base the design of the language on what we do empirically rather than a one-size-fits-all approach. It would get old rather quickly if I had to constantly sprinkle keywords all over the place to improve performance because the compiler is making assumptions that are rarely the case.

OK, that is a fair point. The matter of defaulting things in a meaningful way is pretty much undecidable, because 'reasonable' is going to be different for different circumstances. Mind you, this is one of the reasons I am thinking in terms of having separate out-of-band modifiers, and I've been giving thought to ways to abstract them out as a matter of code as well - a kind of quasi-modal system where the default can be reset per module or even per library, without it spilling over into other modules (well, not too much, at least - leaky abstractions and all that).
It's sort of like something I say sometimes about Linux versus Windows versus MacOS - they all suck, but with Linux, the existence of different distros, alternate file systems and system initialization models, redundant and competing GUIs and applications, etc. give you some measure of ability to choose the way your particular system sucks to let you keep the suckiness to a manageable level for yourself. Of course, most people would find any amount of Linux to be too much suck, and unless you are a programmer yourself, the consistency of Windows or MacOS is likely to count more than the flexibility of Linux (which is why I tend to see them as complementary rather than rivals - though they all still suck).
Re: Opinions On A New Programming Language
I've more or less figured out how I want error handling to work. It's sort of a hybrid of using error codes and exceptions. Below is an example of a new type of assert/else statement used to generate an error (ignore the `$` here, it's just a dummy syntax for a parametric type):
Code: Select all
func getValue(values: [$t], index: uint) {
assert index < countOf(values) else OutOfBounds;
return values[index];
}
Code: Select all
var a = [int] {1, 2, 3}
try a.getValue(0)
{
case OutOfBounds:
print "fail";
}
The first and most obvious difference between this new syntax and exceptions is that error handling is done on function calls, not blocks.
The second difference is that errors are just numeric values. These values are indices into the `failsIf` error list in the function's metadata. For example, if a function `failsIf OutOfBounds, OutOfMemory`, then the `OutOfBounds` error returned by that function will have a value of 0 and the `OutOfMemory` error will have a value of 1. This means that the OutOfBounds returned by one function is not numerically the same as the OutOfBounds returned by another function.
The third difference between this mechanism and exceptions is that error codes are returned to the caller via a hidden function parameter instead of by jumping.
The only caveat with this error handling system is that when the `failsIf` attribute of a function changes, all functions that call that function need to be updated. Granted, it's uncommon for functions to add new error codes out of the blue, and the callers would need to be updated anyhow to properly propagate the new errors upstream.
----
EDIT:
Just a little something I wanted to add which isn't particularly important. I'm pondering the idea of splitting the `enum` type into two separate types for states and flags and splitting the `uint` type into two separate types for cardinal and ordinal numbers. Intrinsically they're the same, but semantically, they're different.
----
EDIT 2:
I'm admittedly discontent with the declaration syntax because it's a bit more verbose than I want it to be. I'd really like it to be more C like because it's closer to my comfort zone, but that creates the following problem:
Code: Select all
print x, y; // function call
int i, j; // declaration
Another annoyance I have is that functions and structures are declared with typenames on the left, whereas everything else is declared with typenames on the right:
Code: Select all
func myfn() {
var x: int;
}
There's another solution, though: if I replace block scope with function scope, identifiers can be implicitly declared. This requires, however, that identifiers inside of functions be dynamically typed. It's basically the same thing that Python does.
Code: Select all
func val(array<$t> a, uint i) -> t {
if i < a.count {
value = a[i];
}
return value; // error, value is either `t` or undefined
}
Re: Opinions On A New Programming Language
Code: Select all
Allocator Obj
{
allocate Func*(count Int = 1, instance %T!) Bool
{
object Obj* = \type_of(instance);
if (object.load_address == NULL)
load_object(object);
object.instance_count++;
instance = NULL;
return TRUE;
}
free Func*(instance %T!)
{
object Obj* = \type_of(instance);
object.instance_count--;
if (object.instance_count == 0)
unload_object(object);
}
}
allocator Allocator = new Allocator();
The first thing is that unlike in C, objects (structs) are reference types. This behavior can be overridden using the ^ punctuator, which is semantically equivalent to doing a shallow copy.
Code: Select all
v1 Vector3;
v2 Vector3 = v1;
v2.x = 5; // this changes v1.x to 5
v2 = v1^;
v2.x = 7; // this only affects v2.x
Code: Select all
x Int*;
allocator.allocate(x);
defer allocator.free(x);
Code: Select all
x Int*;
allocator.free(x);
allocator.allocate(x);
The final interesting bit is the use of the % and \ punctuators. Just like printf substitution, a % is used to specify a parameter and \ does immediate evaluation. This means that `\type_of` is evaluated at compile time. In this particular case, `type_of` is a built-in function, but this can be done with any function whatsoever.
The allocator code snippet aside, the language does support pointer arithmetic, however brackets are not used.
Code: Select all
x Int;
y Int* = x*;
y* + 1 = 6;
y*++;
if (y != 6)
print("it should be!");
Re: Opinions On A New Programming Language
About 2 weeks ago (as of 01/31/2018), I started having a huge number of breakthroughs that stemmed from a complete shift in my ideology and how I approach language design. This has completely changed the language 100% for the better, and everything mentioned before this post is thus invalidated.
Design Guidelines (Summary)
- Don't make coders repeat themselves.
- Don't try to prevent mistakes.
- "Correct" means "deterministic".
- I assert that the primary factor for deciding on the syntax of a language and adding new features is that coders shouldn't have to repeat themselves. Repetition is noise, which makes code difficult to read, write, and change.
- I assert that trying to prevent coders from making mistakes is a bad thing. Preventative measures are always overly obtuse, inefficient, and difficult to comprehend and execute in practice. The reduction of errors should come naturally as a side-effect of the simplicity of the language and its ability to express the algorithm which is being written.
- I assert that mathematical correctness is not true correctness in the field of programming; rather, true correctness is determinism based on well-defined behavior. In the context of unsigned 8-bit integers, "2 + 255 = 1" is a correct expression because it's defined to be so.
Modules
Adhering to the first design guideline (the DRY principle), I wanted to allow the ability to import multiple modules at the same time, and I also didn't want the symbols from libraries to automatically pollute the current scope. So I decided to borrow the use of `.*` from Java to explicitly indicate this.
Code: Select all
import {
io.*;
math.*;
};
Code: Select all
import std {
io.*;
math.*;
};
Cleaning Up Resources
A very common issue, and one of the remaining 3 justifications for the existence of goto, is cleaning up resources when you have multiple points of failure. For that, I decided to borrow the `defer` statement from Go: a defer pushes its statement onto a stack, so deferred statements are executed in the reverse of the order in which they were deferred. So in the below code:
Code: Select all
FILE *f1, *f2;
f1 = fopen("file1", "rb");
if (!f1)
return;
defer fclose(f1);
f2 = fopen("file2", "rb");
if (!f2)
return;
defer fclose(f2);
return;
Unconditional Loops
A very common idiom is the use of an unconditional loop. Not just the `while(true)` or `for(;;)`, but C has a `do/while` statement that basically creates an unconditional loop with an `if (condition) break;` statement at the end of it. I intended to fix this, and eliminate the second justification for goto by adding a `recur` statement:
Code: Select all
recur {
getEvents();
if (quit)
break;
update();
render();
}
Labelled Loop
To kill the 3rd and final justification for goto, which is to break/continue in nested loops, I decided to keep labels for loops and switch statements.
Bounded Arrays
I wanted to have bounded arrays, but I just couldn't decide on what the maximum size of a bounded array should be. Turns out, it's very dependent on what you're doing. So then I thought, why not do this:
Code: Select all
uint len;
int val[&len];
Use All The Things
Two particular things I wanted to deal with were the ability to use the symbols of a module in a specific scope, and allow for compositional polymorphism of structures by flattening the namespaces. For this, I added a `use` keyword:
Code: Select all
struct vec2 { float x, y; };
struct vec3 {
use vec2;
float z;
};
vec3 v;
v.x = 5;
Code: Select all
struct Table {
int val;
use proc(Table *this, char *key) int * {
return &this.val;
}
}
Table table;
table["test"] = 6;
struct Reader {
FILE *f;
use proc(Reader *this, int len, void *buf) int {
return fread(buf, 1, len, this.f);
}
}
Reader read;
int i;
read.f = fopen("test.txt", "rb");
defer fclose(read.f);
read(sizeof(i), &i);
Like Closures...But Waaaaayyy Better
Before I came up with the idea for using procs in structs, I was thinking hard about how you could bind data to a procedure, and came up with this idea:
Code: Select all
proc Read(Reader *reader, int len, void *data) int;
Reader reader;
int i;
var read = Read(reader, ..);
read(sizeof(i), &i);
Code: Select all
var read = proc(int len, void *data) int { return read(reader, len, data); }
Code: Select all
Read(reader, ..)(sizeof(int), ..)(&i);
Despite the fact that we can easily create trampolines, I do want to mention that tail-call recursion will still be a feature.
Stateful Machines
Another one of the problems on my todo list was to fix the broken switch statements in C. First (and least important) is the name. `switch` and `case` are very common words. I decided to replace them with `when`, `is`, and `or`. The `or` keyword doesn't mean `logical or` btw. It's a fallthrough case; because the `when` statement breaks by default.
But I wasn't necessarily done with switch statements yet. I wanted some way to be able to build state machines out of `when` statements. After much pondering, the idea hit me of having the value you're checking being like a tape. You read a value and either reach a breaking state or you `shift` to the next state:
Code: Select all
when ("hello") {
is 'h':
or 'l':
print("consonant");
shift;
is 'e':
or 'o':
print("vowel");
shift;
}
EDIT:
I forgot to mention, procedures are const by default, which was the original reason for having a `var` keyword. To create a function pointer, you'd use `var proc`. Both `const` and `var` can be used as either a modifier or a way to declare an identifier with type inference.
Code: Select all
var proc glBindBuffer(uint target, uint buf);
...
glBindBuffer = dlsym(dl, "symbol");
const pi = 3.1415926535;
var text = "stuff";
EDIT #2:
A proposal for the order of operations:
Code: Select all
• Unary Suffix Operators () [] . ++ --
• Unary Prefix Operators & * ! ~ ++ -- + -
• Multiplication *
• Division & Remainder / %
• Addition & Subtraction + -
• Bitmasking Operators & | ^
• Bitshifting Operators << >>
• Relational Operators == < > != <= >=
• Logical Operators && || ^^
• Ternary Conditions ?:
• Assignment Operators = += -= *= /= %= <<= >>= &= |= ^=
Code: Select all
1 ^ 4 << 1 == 10 // in C, the right hand side would be `9`
x & y == 7 // in C, this would be evaluated `x & (y == 7)`
4 / 2 * 2 == 1 // in C, this would be evaluated `(4 / 2) * 2 = 4`
false || true && false // in C, this would be evaluated `false || (true && false)`
Re: Opinions On A New Programming Language
Because the language has bounded arrays, and because we can bake parameters into procedures, we can do something like this:
Code: Select all
proc Order($T lval, T rval) int;
proc qsort(Order order, int len, $T val[&len]) int, T[&0];
proc run() {
int lpeople_len, rpeople_len;
char *lpeople[&lpeople_len] = {
"Bob", "Jill", "Tom", "Sue", "Joe"
}
char *rpeople[&rpeople_len];
lpeople => qsort(by_name, ..) => rpeople;
}
We can use `=>` to compose procedures like this into a pipeline that does mapping, filtering, and reducing operations; and then dumps the result into an identifier.
EDIT:
Also, something I forgot to mention about struct operator overloading yesterday:
Code: Select all
my_struct[x] = 5;
Code: Select all
*my_struct(x) = 5;
Re: Opinions On A New Programming Language
Code: Select all
struct File {
FILE *fp;
Reader read;
Writer write;
};
struct Reader {
FILE *fp;
use proc(use Reader reader, int len, void *data) int {
return fread(data, 1, len, fp);
}
}
struct Writer {
FILE *fp;
use proc(use Writer writer, int len, void *data) int {
return fwrite(data, 1, len, fp);
}
}
proc construct(use File file, str name, str mode) {
fopen(name, mode) => fp => read.fp => write.fp;
}
int i;
File file("test.txt", "rb");
file.read(sizeof(int), &i);
Re: Opinions On A New Programming Language
Code: Select all
struct Entity {
use Vector3 position;
};
struct Crate {
use Entity entity;
};
interf Updateable {
proc update();
};
impl Updateable (Crate crate) {
update = update_crate(crate, ..);
};
proc update(Updateable Entity entities[n], int n) {
for (entities) {
it.update();
}
}
proc update_crate(Crate crate) {
}
proc run() {
Crate crate;
update(&crate, 1);
}
Re: Opinions On A New Programming Language
Code: Select all
struct Person {
var name = "bob";
}
Code: Select all
Person person;
person.name = "tom";
Now that I've touched that base, I'm going to cover the next bit: first-class types and generics (parametric polymorphism).
Code: Select all
struct vec3(proto t) {
t x, y, z;
}
vec3(float) position;
Code: Select all
proc add(proto t, t x, t y) t {
return x + y;
}
const addf = add(float, ..);
Edit:
So, now that there's support for default parameter values, first-class types, and interfaces, I'm really leaning away from supporting function overloading. I just can't think of a good argument for having it anymore, and there's a huge bonus for not having it, which is that I can avoid name mangling and make this language a perfect drop-in replacement for C that anything can interop with.
Edit #2:
I just noticed something really awesome about prototypes that you can do:
Code: Select all
interf updateable {
proc update();
}
impl updateable (crate c) {
update = update_crate(&c, ..);
}
proc update_crate(crate *c) {
}
proc doit(updateable proto t, t things[n], int n = 1) {
for (things) {
it.update();
}
}
proc run() {
crate mycrate;
doit(crate, &mycrate);
}
Re: Opinions On A New Programming Language
Code: Select all
proc Get(char c[n], int n) int;
interf getable {
Get get;
};
impl getable (char *s) {
get = proc(char **sp = &s, char c[n], int n) int {
		char *s = *sp;
		int total = 0;
		for (c) {
			if (!*s)
				break;
			total++;
			it = *s++;
		}
		*sp = s; // persist the position so the next call resumes here
		return total;
	}
}
proc makewords(Get get) Get;
proc lowercase(Get get) Get;
proc sort(Get get) Get;
proc unique(Get get) Get;
proc print(Get get);
proc run() {
getable char *s = "an interesting example of a pipeline";
makewords(s.get) => lowercase => sort => unique => print;
}
So `print` calls the getter returned by `unique`, which calls the getter returned by `sort`, etc. And this continues until `s.get` returns 0 by reaching the end of the string.
One thing to note is that this system doesn't take buffering into account. It's also really complex to implement, which is a code smell imo. I may consider supporting channels as first-class types instead.
Also, I never mentioned this, but unlike pretty much all other languages in existence, this language treats parameters and struct members as sets, represented by a directed acyclic graph. This means that they can cross-reference each other to express meaningful relationships, as long as there is no recursion. E.g., you can't do:
Code: Select all
proc add(proto t = typeof(x), t x, t y) t;