Fork Bomb

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
Guest

Fork Bomb

Post by Guest »

Hello,

What are the available solutions to prevent fork bombs?
Is limiting the number of processes forked by a single process the only solution?
Candamir

Re:Fork Bomb

Post by Candamir »

If I were to program a fork bomb (*Candamir searching his evil side*), I would do it like this:

Code: Select all

while (1)
{
    fork();
}
This has the side effect that the forked processes will also fork new processes, and so on. So in the code responsible for the fork call, you could keep track of which process is a fork and which is not: "original" processes have a value of 0, forked processes a value of 1, forks of forks 2, etc. That way you can decide at which point you won't let any more forks happen. If you want something more sophisticated, you could decrease those values over time (if a fork with value 1 hasn't forked itself during the last 15 mins, its value becomes 0).

My approach is better than just counting the total number of forks, because unless you also count forks of forks, a process could be coded to just fork number_of_allowed_forks - 1 times...
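A minimal sketch of this depth-tracking idea in C (the names, the per-process field and the depth cap are all invented for illustration; in a real kernel the depth would live in the process control block):

```c
/* Each process carries a fork_depth: "original" processes start at 0,
   a forked child inherits its parent's depth + 1. Past an arbitrary
   cap, further forks are refused. */
#define MAX_FORK_DEPTH 8

/* Returns the depth the child should get, or -1 to refuse the fork. */
int child_fork_depth(int parent_depth)
{
    if (parent_depth >= MAX_FORK_DEPTH)
        return -1;
    return parent_depth + 1;
}
```

The fork path would call this with the parent's stored depth and fail the syscall (e.g. with EAGAIN) when it returns -1.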

Hope this helps somebody

Candamir
Pype.Clicker
Member
Posts: 5964
Joined: Wed Oct 18, 2006 2:31 am
Location: In a galaxy, far, far away

Re:Fork Bomb

Post by Pype.Clicker »

The problem with fork bombs is not so much avoiding them as keeping them mostly harmless to the rest of the system.

E.g. a single user launching a fork bomb should not be able to prevent other users from starting new processes (e.g. you should always keep at least N processes available to each user).

Also, what you typically want is to make sure the system is still responsive *even* during a fork-bomb assault, which means the system should set up resources with sufficiently high priority to perform process management even when there's apparently no process/memory/whatever left to handle anything new.

So I'd rather limit the number of processes per user (assuming the "process management recovery tool" runs as a separate user), and probably throttle the overall CPU time a user can obtain (e.g. using a max/min fairness scheduler to ensure each of N users can always get at worst their "fair share" of 1/N of the CPU time).

Note that there are other programs that typically fork a lot, but with different timing: shells... a fork-bomb mitigation scheme should make sure you can still run shells and maybe have a lot of "sleeping" processes.

I'd be tempted to use something like a "minimum wait time" between two forks by a given user, which would slowly decrease over time and quickly increase after a fork, e.g.

Code: Select all

@clocktick: this_user.forkwait-- unless this_user.forkwait == 1;
@fork():
    do_the_fork;
    wait(this_user.forkwait);
    this_user.forkwait *= 2;
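The same bookkeeping could be sketched in C like this (the struct, function names and tick granularity are invented; the actual sleeping and clock-tick wiring is left to the scheduler):

```c
/* Per-user fork throttle: forkwait decays by one each clock tick
   (never below 1) and doubles after every fork, so a fork bomb
   quickly earns itself very long waits while an idle user's
   penalty drains away. */
struct user_throttle {
    unsigned forkwait;   /* ticks to wait before the next fork */
};

void on_clock_tick(struct user_throttle *u)
{
    if (u->forkwait > 1)
        u->forkwait--;
}

/* Called on each fork; returns how long the caller should sleep
   before letting the fork proceed, then doubles the penalty. */
unsigned on_fork(struct user_throttle *u)
{
    unsigned wait = u->forkwait;
    u->forkwait *= 2;
    return wait;
}
```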
0Scoder
Member
Posts: 53
Joined: Sat Nov 11, 2006 8:02 am

Re:Fork Bomb

Post by 0Scoder »

One way could be to give each process a key that corresponds to its quota limit, and only let the process manager create these quota keys. Processes that spawn other processes can then pass any of their keys on to child processes, but cannot give them ones they do not have. Thus each time a process forked it would have to pass its quota key on (or the child would not be able to spawn processes), and so no fork bomb could ever produce more than the N processes (or threads, maybe?) that the key allows. If you then combine this with what Pype.Clicker said about limiting the CPU time each user gets (or could this be key-oriented as well?), then you should prevent the damage of fork bombs.
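A sketch of the quota-key bookkeeping (struct and function names are invented; a real kernel would hand out unforgeable handles minted by the process manager, not raw structs the process can modify):

```c
/* A quota key good for 'remaining' process creations. Keys can be
   split for children but never grown, so the total across a whole
   process tree can only shrink. */
struct quota_key {
    unsigned remaining;
};

/* Split 'give' slots off a parent's key into a child's key.
   Returns 0 on success, -1 if the parent doesn't hold that many. */
int key_split(struct quota_key *parent, struct quota_key *child,
              unsigned give)
{
    if (give > parent->remaining)
        return -1;
    parent->remaining -= give;
    child->remaining = give;
    return 0;
}

/* Spend one slot of the key to create a process. */
int key_spawn(struct quota_key *k)
{
    if (k->remaining == 0)
        return -1;
    k->remaining--;
    return 0;
}
```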

Thoughts?
OScoder
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re:Fork Bomb

Post by Solar »

This, and many other security-related problems, are the topic of the GRSecurity patchset for Linux. While the patches themselves are unlikely to help you, the papers and docs can perhaps provide an idea or two:
  • maximum number of processes per user / parent process
  • suspending / killing a process that forks suspiciously often
  • forked processes (beyond a certain limit) get suspended for an increasing amount of time before being allowed to run
Every good solution is obvious once you've found it.
Crazed123

Re:Fork Bomb

Post by Crazed123 »

In the case of a lottery scheduler, or any scheduler which assigns processes an individual timeslice, it's relatively simple to just treat tickets/ticks as a scarce resource during forks, splitting the parent's total between itself and the child. Any process that forks continually will then eventually run out of tickets/ticks, running less and less often/with less and less time with each generation of forks.
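The ticket-splitting step might look like this (a sketch; struct and function names are invented):

```c
/* On fork, split the parent's lottery tickets between parent and
   child. Deep fork chains thus starve themselves: once a process
   is down to 1 ticket, its children get 0 tickets and will never
   win the lottery. */
struct lottery_proc {
    unsigned tickets;
};

void split_tickets_on_fork(struct lottery_proc *parent,
                           struct lottery_proc *child)
{
    unsigned half = parent->tickets / 2;
    child->tickets = half;
    parent->tickets -= half;
}
```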
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re:Fork Bomb

Post by Candy »

I'm going for the "limit each user to a certain percentage of cpu time". If you're going to spoil your computer time running a forkbomb, not my problem.
Candamir

Re:Fork Bomb

Post by Candamir »

Candy wrote: I'm going for the "limit each user to a certain percentage of cpu time". If you're going to spoil your computer time running a forkbomb, not my problem.
Yes, but what about some kind of virus or something like that? The user doesn't /know/ it's a fork bomb...
Candy

Re:Fork Bomb

Post by Candy »

Candamir wrote:
Candy wrote: I'm going for the "limit each user to a certain percentage of cpu time". If you're going to spoil your computer time running a forkbomb, not my problem.
Yes, but what about some kind of virus or something like that? The user doesn't /know/ it's a fork bomb...
That's a valid point. First off, if you run arbitrary code you're an idiot anyway, but since everybody seems to have accepted postscript, flash and javascript as a given...

If you want to specifically counter a forkbomb, note the addresses at which a fork was last done (the last 10 addresses or so would suffice) and delay forks from those addresses, more as more forks are done.
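A sketch of that address table (names and the fixed table size are invented; 'ip' stands for the instruction address the fork syscall came from):

```c
#define NADDR 10

/* Per-user table of the last few addresses that issued fork();
   repeated forks from the same site earn a growing delay. */
struct fork_sites {
    unsigned long addr[NADDR];
    unsigned      hits[NADDR];
    int           next;        /* round-robin eviction slot */
};

/* Record a fork from 'ip' and return how many times that site has
   forked recently; the caller scales its delay by this count. */
unsigned note_fork(struct fork_sites *t, unsigned long ip)
{
    for (int i = 0; i < NADDR; i++)
        if (t->hits[i] > 0 && t->addr[i] == ip)
            return ++t->hits[i];
    t->addr[t->next] = ip;
    t->hits[t->next] = 1;
    t->next = (t->next + 1) % NADDR;
    return 1;
}
```

As the post notes afterwards, a wrapper function around fork() would make all forks appear to come from one address, so this is a heuristic, not a hard defence.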

Countering a forkbomb in any way is pretty pointless, however: you would still limit the user in an arbitrary way, and arbitrary limits can be worked around. The only way to actually prevent forkbombs is to hard-limit forks, which also hard-limits useful programs.

Make a system that an idiot can use and only an idiot will want to use it.
Guest

Re:Fork Bomb

Post by Guest »

Isn't limiting forks per user effect some programs the make use of fork() suck shells, which means that these programs has to be re-written.

I think we should limit the number of forks per user, but create a layer of threads for each process. These threads shouldn't be based on fork() as a 1:1 mapping, but be real user-space threads, so another scheduler within each process would handle them.

Any ideas?
Kemp

Re:Fork Bomb

Post by Kemp »

Isn't limiting forks per user effect some programs the make use of fork() suck shells, which means that these programs has to be re-written.
Just in case someone can't translate that into English:

Isn't limiting forks per user going to affect some programs that make use of fork(), such as shells, which means that these programs have to be rewritten?

And possibly, yes. Security is often a trade-off with usability.
paulbarker

Re:Fork Bomb

Post by paulbarker »

Things like this are great as optional features, since this would suit a server where there is little interactive use and you know the characteristics of all the server apps that run. A policy like this would help:

Limit all processes to 10 children
Apache has unlimited children
Children of apache are limited to 100 children.

With a good policy system the trade-off between security and usability can be controlled by the user. A good generic/default policy would be to fork-limit unsigned applications to 1 fork per 0.1 second, with the allowed rate decreasing after each fork as explained by Pype. Make the limiting quite harsh (3x the previous delay after each fork) and make exec drag the delay back down slightly (1/2 the delay). This would increase the delay by 1.5x on a fork/exec pair, which is less likely to be a fork bomb than forking repeatedly without exec.
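The arithmetic of that policy can be sketched in two lines (function names invented; delays are in arbitrary ticks, floored at 1 so the penalty never vanishes entirely):

```c
/* Each fork triples the current delay, each exec halves it, so a
   fork+exec pair nets 1.5x while a bare fork loop grows 3x per
   iteration. */
unsigned delay_after_fork(unsigned delay)
{
    return delay * 3;
}

unsigned delay_after_exec(unsigned delay)
{
    unsigned half = delay / 2;
    return half > 1 ? half : 1;
}
```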

Obviously catching two processes which loop forking and execing each other, or a process which forks and execs itself, is the next step in this line of thinking.

Also limiting forks from an eip value previously forked from would help, but may upset applications which use a wrapper around fork.

Going beyond this you get to the land of virus writers and anti-virus writers: It's a race where the virus/fork-bomb code is using more and more complicated ways of doing its damage and all the anti-virus people can do is try to keep up.

I still believe that securing against running unintended code is a much better place for people's efforts, but trojan horses and other malicious programs which are intentionally executed are still a real problem.
Solar

Re:Fork Bomb

Post by Solar »

One thing to consider: what may be unacceptable behaviour on a shell server with thousands of users (one process's forks taking 80% of the system performance) might be just what you want on a dedicated Apache webserver, or a gaming machine. Making it configurable means adding complexity to the system and giving Joe Average just enough rope to hang himself (configure his system to death).

No idea how to fix this in a way that suits everybody. For my own OS, I was thinking about offering "run modes" (server / desktop / embedded) and making certain things (like fork-limits) available only under certain modes (you'd want it on a server, but might not bother on desktop / embedded systems)...
Every good solution is obvious once you've found it.
paulbarker

Re:Fork Bomb

Post by paulbarker »

Well, if Joe User wants to mess with the configuration, that's his problem. The philosophy for my OS is that everything should be as configurable as possible, with sensible defaults provided. That way, you can choose to play around with the configuration in areas you understand and (if you're sensible) leave it alone in areas you don't.

An idea like "Run modes" would be very useful to provide those sensible defaults.

On the other hand, what exactly constitutes "sensible defaults" for fork-limiting is a very open question... I'd guess disabled unless the user enables it.
Candy

Re:Fork Bomb

Post by Candy »

I'm going for apparent persistence and both upper and lower limits per layer (assuming that anything in the layer wants to run at all times, it should get at least the lower bound and at most the upper bound). For most layers, the lower bound is some very small number and the upper bound is 100%. That in a way prevents fork bombs.

The UI server runs in its own layer, including interface code. Interface code can't fork (or exhaust any similar resource) since it has no perception of processes/threads. It runs with a lower bound of some 10% and an upper bound of 40%. The kernel threads run in a yet higher priority class with a lower bound of 5% and an upper bound of 50%.
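At its core, the per-layer bounds amount to clamping each layer's demanded CPU share (a trivial sketch; the function name and percentage units are invented):

```c
/* Clamp a layer's demanded CPU share (in percent) into its
   [lower, upper] bounds; e.g. the UI layer with bounds [10, 40]
   can never be squeezed below 10% nor hog more than 40%, no
   matter how many processes a fork bomb creates beneath it. */
unsigned clamp_share(unsigned demand, unsigned lower, unsigned upper)
{
    if (demand < lower) return lower;
    if (demand > upper) return upper;
    return demand;
}
```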

The persistence is optional: everything you have active is by default stored and restored in your next session, but you can choose to start up with nothing. There is no startup folder, registry entry or whatever. You just run what you did last time, including window settings, Z-order and documents currently open. If you start a forkbomb, log out and start anew.