In my semi-educated opinion, threads seem like an unnecessary complication at both the OS and application levels. On the OS side, you have to manage threads and their stacks and tie them back to processes, and the mechanism seems to drag in too much policy, like deciding whether a faulting thread kills the whole process or just itself. On the application side, threading invites a whole class of bugs and pitfalls that can never happen in sequential code: missing synchronization, race conditions, deadlocks.
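For concreteness, here is the canonical bug that sequential code can never have: two threads bumping an unprotected shared counter. This is a minimal sketch in C, using pthreads only because it's the common vocabulary for such examples:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;  /* shared, unprotected */

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;        /* load, add, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* You'd expect 2000000; interleaved increments usually lose some. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Two sequential processes each running bump() on their own counter can't lose an increment; it takes shared mutable state to make this possible.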
The only argument against lightweight processes seems to be performance. How big is that gap, really? I have lightning fast IPC in my design, so lightweight processes might even beat user threads for me. From what I read in TAOUP (admittedly biased...), the whole threading concept stems from the relatively slow IPC and lack of a (v)fork() call in MS-DOS/Windows. Is there truth to this?
Edit:
And what about an event-driven design in place of threads? The core of my kernel is completely event driven, and my userspace supports preemptible event handlers. Is a set of closely communicating lightweight processes that use events a good model for replacing threads? So far, that's what my design entails.
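To make that concrete, here is a minimal, self-contained sketch of the run-to-completion variant of that model. All the names are invented for illustration; a real lightweight process would drain a kernel-fed message queue rather than an in-process ring buffer, and preemptible handlers like mine would relax the run-to-completion guarantee:

```c
#include <stdio.h>

/* Sketch of the event-loop model: one sequential loop draining a
 * queue and dispatching handlers. Each lightweight process would run
 * a loop like this, fed by kernel IPC. */

enum ev_type { EV_PING, EV_QUIT };

typedef struct { enum ev_type type; int payload; } event_t;

/* Fixed-size ring buffer standing in for a kernel message queue.
 * No overflow check; demo only. */
static event_t queue[16];
static int head = 0, tail = 0;

static void enqueue(event_t ev) { queue[tail++ % 16] = ev; }
static int  dequeue(event_t *ev)
{
    if (head == tail) return 0;
    *ev = queue[head++ % 16];
    return 1;
}

int main(void)
{
    enqueue((event_t){ EV_PING, 42 });
    enqueue((event_t){ EV_QUIT, 0 });

    /* Handlers run to completion, one at a time: state shared between
     * handlers needs no locks because nothing runs concurrently. */
    event_t ev;
    while (dequeue(&ev)) {
        switch (ev.type) {
        case EV_PING: printf("ping: %d\n", ev.payload); break;
        case EV_QUIT: return 0;
        }
    }
    return 0;
}
```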
For me, this is less of an OS-level concern, because my kernel already provides primitives that, fitted together properly, could create user threads. I also couldn't care less about pthreads - POSIX is not my religion.
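As an illustration that user threads really are just a composition of context primitives, here is a sketch using POSIX ucontext, chosen only because it's widely available to try; a custom kernel would expose its own context save/restore calls in its place. Two cooperatively scheduled contexts ping-pong via swapcontext():

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thr_ctx;
static char thr_stack[64 * 1024];   /* the "thread" gets its own stack */

static void thread_body(void)
{
    for (int i = 0; i < 3; i++) {
        printf("thread: step %d\n", i);
        swapcontext(&thr_ctx, &main_ctx);   /* yield back to main */
    }
}

int main(void)
{
    getcontext(&thr_ctx);
    thr_ctx.uc_stack.ss_sp   = thr_stack;
    thr_ctx.uc_stack.ss_size = sizeof thr_stack;
    thr_ctx.uc_link          = &main_ctx;   /* where to go if it returns */
    makecontext(&thr_ctx, thread_body, 0);

    for (int i = 0; i < 3; i++) {
        swapcontext(&main_ctx, &thr_ctx);   /* run the thread a bit */
        printf("main:   step %d\n", i);
    }
    return 0;
}
```

The kernel never hears about "threads" here; everything above is userspace policy built on a save/restore-context primitive plus a second stack.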
