Secure? How?

HoTT
Member
Posts: 56
Joined: Tue Jan 21, 2014 10:16 am

Re: Secure? How?

Post by HoTT »

Yes, they are. The actual function being called is not known until run time. It can belong to anything that inherits from the base class.
But that's not random. It's always the same given the same inputs. C is not a systems programming language by your definition, by the way.
SoulofDeity
Member
Posts: 193
Joined: Wed Jan 11, 2012 6:10 pm

Re: Secure? How?

Post by SoulofDeity »

HoTT wrote:
Yes, they are. The actual function being called is not known until run time. It can belong to anything that inherits from the base class.
But that's not random. It's always the same given the same inputs. C is not a systems programming language by your definition, by the way.
It's random at compilation time, not run time. That's why I made that distinction.

If you're trying to say that function pointers break my definition, then no. They don't. What they point to is known at compilation time.
HoTT
Member
Posts: 56
Joined: Tue Jan 21, 2014 10:16 am

Re: Secure? How?

Post by HoTT »

SoulofDeity wrote:
If you're trying to say that function pointers break my definition, then no. They don't. What they point to is known at compilation time.
Okay, I surrender.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Secure? How?

Post by Brendan »

Hi,
HoTT wrote:
Yes, they are. The actual function being called is not known until run time. It can belong to anything that inherits from the base class.
But that's not random. It's always the same given the same inputs. C is not a systems programming language by your definition, by the way.
I think what SoulofDeity is trying to say might be better described as a type of "semantic gap".

Essentially there are three languages involved. The first is the language used to describe what the software is supposed to do (e.g. maybe English). The second is the language programmers write code in (e.g. maybe C). There's a semantic gap between these languages (e.g. you can't tell C things like "find the largest prime number that is below N;" and expect it to compile); and this is what people normally mean when they say "semantic gap".

The third language is what the CPU understands (e.g. machine code). There's another semantic gap between the language the programmers use and the language the CPU understands. For a simple example, in C you can't do "x = y + z; if(overflow) { return -1; }" even though it's relatively trivial in the language the CPU understands, because of the semantic gap between C and machine code.

For different languages both gaps are different sizes. For a very high level language (e.g. maybe C#) the first gap is smaller but the second gap is larger; and for a very low level language (e.g. maybe assembly) the first gap is much larger but the second gap is tiny.

For some things (e.g. quickly slapping together a prototype) the first gap is much more important. For other things (performance/optimisation, system programming) the second gap is more important.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
SoulofDeity
Member
Posts: 193
Joined: Wed Jan 11, 2012 6:10 pm

Re: Secure? How?

Post by SoulofDeity »

@Brendan thanks, that pretty much sums up what I was talking about. :)

You can narrow the semantic gap a great deal by adding a middleman between the two extremes of Ousterhout's dichotomy (systems programming and scripting). Specializing in only one of the three ends of the two semantic gaps gives you great expressive power without making the languages complicated. Most language developers see languages as nothing more than a collection of features, and end up making something like a Swiss Army knife with a dull blade.
no92
Member
Posts: 307
Joined: Wed Oct 30, 2013 1:57 pm
Libera.chat IRC: no92
Location: Germany

Re: Secure? How?

Post by no92 »

That semantic gap has already been closed by some languages. The best example I can think of is Ruby, as Ruby (or Rails) code can almost be read like an English description of the algorithm. The only problem is that Ruby can't be compiled down to C(++) and can't access memory directly.

The next step would be to close that semantic gap with languages we can use for OSdev, which, as far as I know, hasn't been done yet. Please prove me wrong on this one!

As we're already discussing these topics here, why shouldn't we group together and create a draft spec for a language made for low-level things like OSdeving, with built-in security features? Any thoughts on that?
SoulofDeity
Member
Posts: 193
Joined: Wed Jan 11, 2012 6:10 pm

Re: Secure? How?

Post by SoulofDeity »

no92 wrote:As we're already discussing these topics here, why shouldn't we group together and create a draft spec for a language made for low-level things like OSdeving, with built-in security features? Any thoughts on that?
I would be up for it, as long as, security-wise, the language in question is designed with the intention of keeping others out, not preventing ourselves from getting in.
willedwards
Member
Posts: 96
Joined: Sat Mar 15, 2014 3:49 pm

Re: Secure? How?

Post by willedwards »

Brendan wrote:For a simple example, in C you can't do "x = y + z; if(overflow) { return -1; }" even though it's relatively trivial in the language the CPU understands, because of the semantic gap between C and machine code.
I have good news for you :) Clang and GCC are converging on intrinsics to do exactly this: http://clang.llvm.org/docs/LanguageExte ... c-builtins

I hope this proves useful to hobby OS devs :D
no92 wrote:As we're already discussing these topics here, why shouldn't we group together and create a draft spec for a language made for low-level things like OSdeving, with built-in security features? Any thoughts on that?
Only if you first promise to write an autopsy of BitC explaining in your own words the problems to resolve and the lessons learnt ;) http://www.coyotos.org/pipermail/bitc-d ... 03300.html

A shortcut may be to learn Rust? http://jvns.ca/blog/2014/03/12/the-rust-os-story/
SoulofDeity
Member
Posts: 193
Joined: Wed Jan 11, 2012 6:10 pm

Re: Secure? How?

Post by SoulofDeity »

willedwards wrote:Only if you first promise to write an autopsy of BitC explaining in your own words the problems to resolve and the lessons learnt ;) http://www.coyotos.org/pipermail/bitc-d ... 03300.html

A shortcut may be to learn Rust? http://jvns.ca/blog/2014/03/12/the-rust-os-story/
After reading that, I feel even more justified in saying "I told you so".

As for Rust, I personally have been very leery of it. The syntax feels like a false promise to me, e.g. mutability: there is no such thing as true immutability, because if you know where the value is in memory, you can change it. Forcing the programmer to explicitly declare things as mutable seems like giving them a false sense of security about what is actually happening. To put it briefly, the entire language feels like its definition of "safe" is to control the actions of the programmer rather than to protect the code itself.

I could be wrong; maybe the Rust runtime does things like creating stack canaries or return protection. But at face value, using Rust looks akin to strapping on a straitjacket and locking yourself in a padded room.
thepowersgang
Member
Posts: 734
Joined: Tue Dec 25, 2007 6:03 am
Libera.chat IRC: thePowersGang
Location: Perth, Western Australia

Re: Secure? How?

Post by thepowersgang »

After having worked with Rust, and made some wonderful crashes with it, I disagree with your opinion (that it's a false sense of security). '&mut' is not strictly just "mutable" ("immutable" pointers can have interior mutability); it's the distinction between shared and unique references.

I'm pretty optimistic that a fully-Rust OS will prevent most of the classic bugs that cause arbitrary code execution (use-after-free, stack overflows, out-of-bounds accesses), because the language is designed such that most operations are compile-time checked, and the runtime checks can be cheap. Time will tell whether ACE is possible under the Rust model (without using marked-unsafe code with bugs in it).
Kernel Development, It's the brain surgery of programming.
Acess2 OS (c) | Tifflin OS (rust) | mrustc - Rust compiler
Currently Working on: mrustc
bewing
Member
Posts: 1401
Joined: Wed Feb 07, 2007 1:45 pm
Location: Eugene, OR, US

Re: Secure? How?

Post by bewing »

I disagree with much of what has been said about security here.
There are several categories, of course. One of the most obvious cybersecurity issues today is automated password-guessing ("robopassword") attacks; the recent attacks on iCloud accounts are clear examples. This is an area of security that all modern OSes have completely failed to cover properly, and it can be done much better. You do not need a deep understanding of how security works to fix it, either.

In our overly-connected world, one of the biggest tasks of an OS is to build a wall around the area of the machine that can be accessed from "the outside" by users who should have no permissions. Many possible solutions have been suggested, and would work -- except that idiots usually defeat their own system's security before they ever get hacked.

From the desire to remotely manage your server farm, to the NSA -- there are security holes being intentionally punched through the security walls of most OSes out there, and it is a huge mistake to let this happen. So this is one answer to the OP's question -- even if you aren't a security expert, how do you make things more secure? You minimize the ability of anyone to randomly access data on the machine over a network. You allow password protection of sensitive files/partitions. You require physical connections to access secured data. You do trusted user authentication more cleverly. You resist the fad of allowing users to download binary blobs from the internet and run them (anywhere except within a sandbox, or whatever).

Once someone has managed to get logged in as a trusted user, they can obviously do damage up to their level of permissions. The key again is better authentication. And this is something any OS can do -- it doesn't come down to silly compiler tricks.

Then there's the entire other category of security: protecting the kernel and other programs from a malfunctioning program. On Intel chips this is partly an insoluble problem, because AFAIK they are inherently insecure. But in general, putting each app in its own address space works, assuming you are not so foolish as to also provide stupid syscalls in your OS that let apps affect the kernel or other apps. So this is another thing any OS designer can do without a deep understanding of security: don't intentionally create syscalls that breach the walls of each app's address space.

Then you have the idiotic category of working really hard to protect apps and programmers from themselves. Canaries and other stack-smashing detection mechanisms go here. Overflow protection/detection. Etc. etc. All of it very paternalistic and totally misguided. The only important concept is to build a wall around every app so it cannot do harm to the other programs, kernel, or system data (beyond the user's permission level, at least). As an OS, compiler, or software library designer, you should never have to protect a user program from crashing itself, because that should never affect anything else on the system. If it can, then your OS is braindamaged. And the fix for that is not adding bounds checking to the user apps -- it's fixing the braindamage in the OS.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Secure? How?

Post by Rusky »

bewing wrote:Then you have the idiotic category of working really hard to protect apps and programmers from themselves. Canaries and other stack smashing detection mechanisms go here. Overflow protection/detection. Etc. Etc. All of it very paternalistic and totally misguided. The only important concept is to build a wall around every app so it cannot do harm to the other programs, kernel, or system data (beyond the user's permission level, at least). As an OS, compiler, or software library designer: you should not ever have to protect a user program from crashing itself. Because that should never effect anything else on the system. If it can, then your OS is braindamaged. And the fix for that is not adding bounds checking to the user apps -- it's fixing the braindamage in the OS.
Disagree, for three reasons:
  • It is good to have several layers of protection in case any single one is bypassed (and that is possible even with hardware protection, e.g. this hardware bug).
  • It is often very important to keep a program secure in and of itself, because it handles sensitive data or operations- crashing other programs is nowhere near the only thing you have to worry about.
  • Programmers are not perfect and make mistakes all the time, so the more tools they have at their disposal to pinpoint and correct those mistakes the better (not that they have to use them for everything, just that they be available).
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Secure? How?

Post by Brendan »

Hi,
Rusky wrote:Disagree, for three reasons:
  • It is good to have several layers of protection in case any single one is bypassed (and that is possible even with hardware protection, e.g. this hardware bug).
There is no defence against exploitable hardware bugs like these (at least none that can be implemented before the hardware bugs are known); so for this case "several layers of protection" is just "several layers of pointless bloat that don't prevent exploitable hardware bugs".
Rusky wrote:
  • It is often very important to keep a program secure in and of itself, because it handles sensitive data or operations- crashing other programs is nowhere near the only thing you have to worry about.
This is mostly wishful thinking, because most of these bugs are impossible to detect or prevent (regardless of how much "babysitting bloat" you add). Apple's "goto fail" bug was a perfect example of this.
Rusky wrote:
  • Programmers are not perfect and make mistakes all the time, so the more tools they have at their disposal to pinpoint and correct those mistakes the better (not that they have to use them for everything, just that they be available).
Here we agree - a restricted virtual machine for the purpose of debugging and testing (that's used before programmers release software and before end users ever get anywhere near it) would help a lot to catch/avoid all of the bugs that can actually be caught (without adding pointless bloat for the end user).

Of course I also think that most of the problem is languages like C and C++ where there's far too much "implementation defined behaviour" that makes it impossible to tell the difference between bugs (e.g. accidental integer overflows) and valid code (e.g. intentional integer overflows); combined with incredibly idiotic compilers like GCC that treat "undefined behaviour" (that it does detect) as an opportunity for optimisation rather than as an error condition.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
willedwards
Member
Posts: 96
Joined: Sat Mar 15, 2014 3:49 pm

Re: Secure? How?

Post by willedwards »

I know I have posted this link like a thousand times, but it continues to be relevant: www.openbsd.org/papers/ru13-deraadt/mgp00001.html

You can try and secure a program, and lock it down.

And the attacker might still get in. If this happens, let's hope you invested in all the exploit mitigation you can get! ;)

(And yes, OpenBSD have weaponized Comic Sans)
bewing
Member
Posts: 1401
Joined: Wed Feb 07, 2007 1:45 pm
Location: Eugene, OR, US

Re: Secure? How?

Post by bewing »

Rusky wrote: Disagree, for three reasons:
OK, but I still disagree with that, because the practical end result is that the OS designers get let off the hook. They have created a braindamaged OS with a security hole that can be exploited with, say, a stack smash. They add stack-smashing prevention to the compiler for user apps and say "look! Problem solved." Except it's not.