Brendan wrote:
    I don't just have ranged types, I only have ranged types.

The problem is that this idea carries with it a dangerous set of assumptions, ones which could in fact be subverted. I am not saying it is a bad idea, per se, but rather that there is a subtle trap in it that you need to recognize.
Oh, and @embryo2 should read this too, as what I am about to say about static checks applies at least as much to runtime checks.
In the absence of other information, I can make one of two assumptions about the absolute maximum size of a ranged type in your type system: either the limit is based on the physical limitations of the underlying hardware types, or the compiler has a facility for bignum ranges, that is, ranges exceeding the maximum range magnitude of the largest system type (e.g., an integer wider than 64 bits on an x86-64 system).
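To make the dichotomy concrete, here is a minimal sketch of the former case. Rust is chosen purely for illustration, and all of the names here are mine, not anyone's actual implementation: the bounds are compile-time constants, capped by the widest machine type.

```rust
// A minimal sketch of the hardware-limited case: bounds are const generics,
// so they exist only at compile time and are capped by what the machine
// type (here i128) can hold.
#[derive(Debug, Clone, Copy)]
struct Ranged<const MIN: i128, const MAX: i128>(i128);

impl<const MIN: i128, const MAX: i128> Ranged<MIN, MAX> {
    // Runtime check on construction; a real compiler could elide this
    // wherever the value is statically known to be in range.
    fn new(v: i128) -> Option<Self> {
        (MIN..=MAX).contains(&v).then(|| Self(v))
    }
}

fn main() {
    type Percent = Ranged<0, 100>;
    assert!(Percent::new(42).is_some());
    assert!(Percent::new(101).is_none());
}
```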
In the latter case, the compiler would need to generate code for such support, and recognize the need to insert it whenever the absolute range magnitude exceeds that of the largest system type. That would be problematic, especially when it has to work in conjunction with heap memory, but entirely possible nonetheless. In principle there is the question of how to handle a range magnitude, or range boundaries, which cannot themselves be represented in the largest system type, but in practice this is unlikely to arise (I know, I know, famous last words; but for the range magnitude at least, the size of the maximum address space is likely to be smaller than the system type's range magnitude, and range boundaries often need not be represented at runtime at all). Since Thelema is a Lisp, and Lisps have traditionally included bigint, big fixnum, and big flonum at the language level, this is more or less the solution I am going with myself, but it isn't one most language designers would choose.
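The decision the compiler has to make in that latter case would look roughly like the sketch below. This is my guess at one plausible approach, not a description of Thelema's (or anyone's) actual implementation; note that only the representation choice survives to runtime, while the bounds themselves do not have to.

```rust
// Hypothetical compile-time decision: given the range magnitude in bits,
// pick a native machine type when it fits, otherwise heap-allocated limbs.
#[derive(Debug, PartialEq)]
enum Repr {
    Native { bits: u32 },       // fits in a register-sized type
    HeapLimbs { limbs: usize }, // bignum path: 64-bit limbs on the heap
}

// Bits needed to represent a magnitude (capped at u128 here only because
// this is example code, not a real compiler working with big constants).
fn bits_needed(magnitude: u128) -> u32 {
    128 - magnitude.leading_zeros()
}

fn choose_repr(magnitude: u128) -> Repr {
    let bits = bits_needed(magnitude);
    if bits <= 64 {
        Repr::Native { bits: bits.next_power_of_two().max(8) }
    } else {
        Repr::HeapLimbs { limbs: ((bits + 63) / 64) as usize }
    }
}

fn main() {
    assert_eq!(choose_repr(100), Repr::Native { bits: 8 });
    assert_eq!(choose_repr(u128::MAX), Repr::HeapLimbs { limbs: 2 });
}
```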
The former case, placing a hard limit on the range magnitude, sounds simpler and fits common existing practice... but it can be subverted fairly easily by a careless or unscrupulous coder, simply by switching the representation to one based on a dynamically resizable array or list type and implementing (however clumsily) the bignum operations by hand. By chaining in-range values together, one can kludge together a numeric type that bypasses the range checks entirely.
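To show how little effort the subversion takes, here is a sketch of the kludge (again in Rust, with all names mine): each element of the vector individually satisfies a 0..9999 range, so every per-element check passes, yet the vector as a whole encodes an arbitrarily large number in base 10,000.

```rust
// Each "digit" stands in for a value of a checked ranged type (0..=9999);
// chaining them in a Vec yields an unbounded number no range check sees.
type Digit = u16; // stand-in for something like a Ranged<0, 9999>
const BASE: u32 = 10_000;

#[derive(Debug)]
struct KludgeNum(Vec<Digit>); // little-endian limbs, each one "in range"

impl KludgeNum {
    fn add(&self, other: &KludgeNum) -> KludgeNum {
        let mut out = Vec::new();
        let mut carry = 0u32;
        for i in 0..self.0.len().max(other.0.len()) {
            let a = u32::from(*self.0.get(i).unwrap_or(&0));
            let b = u32::from(*other.0.get(i).unwrap_or(&0));
            let sum = a + b + carry;
            out.push((sum % BASE) as Digit); // always passes a 0..9999 check
            carry = sum / BASE;
        }
        if carry > 0 {
            out.push(carry as Digit);
        }
        KludgeNum(out)
    }
}

fn main() {
    // 9_999 + 1 = 10_000, which no single Digit could hold...
    let a = KludgeNum(vec![9_999]);
    let b = KludgeNum(vec![1]);
    // ...but the sum is representable as [0, 1] in little-endian base 10_000.
    assert_eq!(a.add(&b).0, vec![0, 1]);
}
```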
This is not just a technical limitation; it is fundamental, being a direct consequence of mathematical incompleteness. Any language that allows dynamically allocated values is vulnerable to it. The kludge can be done whether or not the compiler supports big ranges, but providing bigint, big fixnum, and bigfloat support systematically makes it much less likely that someone would resort to it out of poor design (malice is another matter, but there is only so much one can do at this level about code that intentionally subverts security, anyway). The choice, then, is not between supporting unranged types or not, but between explicitly disallowing dynamic memory, and with it a large number of programming possibilities, or acknowledging that you are implicitly allowing unranged types while doing everything you can to discourage them as a matter of best practice (which in practice means providing language-level support for bignums, IMO).
This is hardly limited to numeric types, either, but most of the other places such subversion could occur would involve either providing range support for non-numeric types (e.g., enums and sets) or intrinsically unrangeable types (more or less any dynamically allocated collection type).
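As a hypothetical illustration of the non-numeric case: suppose the language offers a set type ranged over a fixed enum. The same trick, using a dynamically resizable vector of in-range elements, smuggles in a set over an unbounded universe that no set-range check ever sees (the names below are, once more, invented for the example).

```rust
// Each element is a perfectly in-range value of a two-variant enum, but the
// vector's length, i.e. the "universe" of the set, is unbounded.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Flag { Off, On } // a small, fully checkable element type

struct KludgeSet(Vec<Flag>); // index = member id, with no upper bound

impl KludgeSet {
    fn insert(&mut self, member: usize) {
        if self.0.len() <= member {
            self.0.resize(member + 1, Flag::Off);
        }
        self.0[member] = Flag::On; // each element individually in range
    }
    fn contains(&self, member: usize) -> bool {
        self.0.get(member) == Some(&Flag::On)
    }
}

fn main() {
    let mut s = KludgeSet(Vec::new());
    s.insert(1_000_000); // far beyond anything a fixed set range would allow
    assert!(s.contains(1_000_000));
    assert!(!s.contains(0));
}
```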
Now, as I said, I am not arguing against your respective solutions; since there is no unequivocal solution, each approach has to be judged on a case-by-case basis, and my own is as flawed as any other. I am merely bringing up something you were probably aware of, but might not have specifically considered in this context.