x11 without server.
Hi,
So in the last discussion we had, ditching x11 altogether didn't seem the right way.
Too many components still rely on it.
Now, is it problematic to ditch only the x11 server part (so no sockets/equivalent), keep the code that's in it (keyboard, joysticks, mouse...) and place it in a framework that every app will use instead of x11, with shared memory?
No code change for the drivers and apps: they think they communicate with the x11 server, but instead they talk with the framework (same function calls, contents changed).
Every app is now responsible for the drawing, with the help of the framework (it just ensures that you don't write on another part of the window).
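Purely as an illustration of the idea (every name below is made up, and it assumes POSIX shared memory just to have something concrete), the kind of call the framework might expose instead of an X socket:

```c
/* Hypothetical framework call: the app asks for a private, per-window
 * pixel buffer instead of opening a socket to an X server.
 * Only this app and the framework/driver would map the buffer. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct fw_window {
    int       shm_fd;    /* descriptor of the shared buffer       */
    uint32_t *pixels;    /* mapped pixels, owned by this app only */
    int       width, height;
};

/* fw_create_window() is a made-up name standing in for the framework. */
static int fw_create_window(struct fw_window *w, int width, int height)
{
    size_t size = (size_t)width * height * 4;

    /* In a real design the framework would create and hand over this
     * descriptor; shm_open() here is only to make the sketch runnable. */
    w->shm_fd = shm_open("/fw_demo_window", O_CREAT | O_RDWR, 0600);
    if (w->shm_fd < 0 || ftruncate(w->shm_fd, (off_t)size) < 0)
        return -1;

    w->pixels = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, w->shm_fd, 0);
    if (w->pixels == MAP_FAILED)
        return -1;

    w->width  = width;
    w->height = height;
    return 0;
}

int main(void)
{
    struct fw_window w;
    if (fw_create_window(&w, 640, 480) != 0) {
        perror("fw_create_window");
        return 1;
    }
    /* The app draws directly into its own buffer... */
    for (int i = 0; i < w.width * w.height; i++)
        w.pixels[i] = 0xFF2060A0;   /* opaque blue-ish */
    /* ...and would then tell the framework "this region changed". */
    return 0;
}
```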
Now, I guess I've come up with a (usable and realistic?) solution.
I am curious to see how you would break it. Please forget about the tons of lines of code that would have to be written in the OS.
Bye
Combuster
Re: x11 without server.
Shared memory means it's time to attack the state of other apps
Re: x11 without server.
Absolutely, Combuster.
Every app is atomically given a number at socket creation.
The shared memory is authorized only through the framework.
Hmm... every app updates its own part, and the framework just checks via the shared memory whether that is allowed.
It even becomes very easy to forbid or queue drawing in response to certain events.
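A tiny sketch of the kind of check I mean (all structures and names are invented for illustration):

```c
/* Framework-side sketch: each client got an id when it connected, and the
 * framework only accepts updates that stay inside that client's own window.
 * All names here are invented for illustration. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fw_rect { int x, y, w, h; };

struct fw_client {
    uint32_t       id;       /* handed out once, at connection time   */
    struct fw_rect window;   /* the only region this client may touch */
};

/* Does the requested damage rectangle stay inside the client's window? */
static bool fw_update_allowed(const struct fw_client *c,
                              const struct fw_rect *req)
{
    return req->x >= c->window.x &&
           req->y >= c->window.y &&
           req->x + req->w <= c->window.x + c->window.w &&
           req->y + req->h <= c->window.y + c->window.h;
}

/* Called when a client says "I updated this area"; anything outside
 * the client's own window is simply refused (or could be queued). */
static bool fw_handle_update(const struct fw_client *c,
                             const struct fw_rect *req)
{
    if (!fw_update_allowed(c, req))
        return false;   /* forbid: client tried to escape its window */
    /* ...otherwise hand the region to the compositor/driver... */
    return true;
}

int main(void)
{
    struct fw_client c = { .id = 42, .window = { 100, 100, 640, 480 } };
    struct fw_rect ok  = { 110, 110, 32, 32 };
    struct fw_rect bad = {  10,  10, 32, 32 };
    printf("inside: %d, outside: %d\n",
           fw_handle_update(&c, &ok), fw_handle_update(&c, &bad));
    return 0;
}
```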
Re: x11 without server.
Hi,
AlexHully wrote: So in the last discussion we had, ditching x11 altogether didn't seem the right way. Too many components still rely on it.

I think you'll find that (on Windows, Android, Haiku and OS/2) nothing relies on X11 at all. Even for OSes like GNU/Linux (not Android), the BSDs and Solaris (e.g. all the OSes that almost nobody wants even though they're free, simply because they're stuffed full of antiquated puss from half a century ago) you'll probably find that most software depends on libraries like Qt and GTK, and doesn't depend on X directly.
AlexHully wrote: Now, is it problematic to ditch only the x11 server part (so no sockets/equivalent), keep the code that's in it (keyboard, joysticks, mouse...) and place it in a framework that every app will use instead of x11, with shared memory?

Yes, it's problematic. For a well-designed OS you have layers: the kernel as the lowest layer; the drivers on top of that; things like file systems, the network stack and virtual terminals in the layer on top of the drivers; and things like the GUI and applications in the highest layer, nowhere near drivers.
Originally X11 was designed for this; but Linux (and other *nix clones) sucked badly (because they were stuck in "text only 1970") and failed to provide usable video drivers; so the X11 people (and others - e.g. SVGAlib) had little choice but to shove video drivers into user-space (where they never should've been for a "monolithic" kernel design) despite massive security problems and other problems (e.g. corrupted/unrecoverable device state when X11 crashed). Of course things have changed since, and now there's parts of video drivers in the kernel (e.g. KMS) to partially fix some of the original incompetence.
AlexHully wrote: No code change for the drivers and apps: they think they communicate with the x11 server, but instead they talk with the framework (same function calls, contents changed).

This sounds wrong too. For example, when the user presses a key, something needs to send the key press to the window that currently has focus (and not to all apps/processes/key-loggers).
It's best to think of it as a kind of virtualisation. Applications are given their own virtual video device, their own virtual keyboard device, their own virtual sound device, etc; and "something" (window manager) is responsible for mapping these virtual video/keyboard/mouse/sound devices to the real devices. Of course because they're virtual devices the interfaces can be abstract.
Note that almost everything works on this "kind of virtualisation" idea. For example, every process is given its own virtual CPU ("thread") and something (scheduler) is responsible for using the real CPU/s to emulate hundreds of virtual CPUs/threads; every process is given its own virtual address space and something (kernel's memory manager) is responsible for using the real/physical address space to emulate virtual address spaces; etc.
I guess what I'm saying is that if you think applications should have direct access to real devices (including video, keyboard, etc) then you've failed to understand basic/ubiquitous multi-tasking concepts.
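To make the "virtual keyboard" idea concrete, here is a minimal illustrative sketch (not taken from any real system; every name is invented) where the window manager delivers a real key press only to the focused application's virtual keyboard queue:

```c
/* Sketch only: each application owns a private "virtual keyboard" queue,
 * and the window manager delivers real key events to exactly one of them
 * (the focused app). All names/types here are invented for illustration. */
#include <stddef.h>
#include <stdio.h>

#define QUEUE_SIZE 64

struct virtual_keyboard {
    int    events[QUEUE_SIZE];   /* simplified: one key code per event */
    size_t head, tail;
};

struct app {
    const char *name;
    struct virtual_keyboard kbd;   /* this app's own virtual device */
};

/* Push a key event into one app's private queue. */
static void vkbd_push(struct virtual_keyboard *k, int keycode)
{
    k->events[k->tail % QUEUE_SIZE] = keycode;
    k->tail++;
}

/* The "something" (window manager): it alone sees the real keyboard and
 * forwards each event only to whichever app currently has focus. */
static void wm_deliver_key(struct app *apps, size_t napps,
                           size_t focused, int keycode)
{
    if (focused < napps)
        vkbd_push(&apps[focused].kbd, keycode);
    /* Nothing is broadcast: other apps (and key-loggers) see nothing. */
}

int main(void)
{
    struct app apps[2] = { [0] = { .name = "editor" },
                           [1] = { .name = "terminal" } };
    size_t focused = 1;   /* the terminal has focus */

    wm_deliver_key(apps, 2, focused, 0x1C /* pretend: Enter */);

    printf("%s got %zu event(s), %s got %zu\n",
           apps[0].name, apps[0].kbd.tail, apps[1].name, apps[1].kbd.tail);
    return 0;
}
```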
AlexHully wrote: Every app is now responsible for the drawing, with the help of the framework (it just ensures that you don't write on another part of the window).

I'd hope not. The only thing that should ever do actual drawing is the video driver (with or without GPU acceleration). Applications should only tell the video driver what they want drawn and should not draw anything themselves. This "description of what to draw" may be a list of OpenGL commands, or a set of "X protocol" requests, or whatever (mostly it depends on how you felt like designing the "virtual video device" abstraction).
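As a purely illustrative sketch (these structures are invented, not any real driver's API), such a "description of what to draw" could be as simple as a command list that the application fills in and only the video driver interprets:

```c
/* Invented example of a "describe, don't draw" interface: the application
 * builds a list of drawing commands; only the video driver interprets it. */
#include <stddef.h>
#include <stdint.h>

enum draw_op { DRAW_CLEAR, DRAW_FILL_RECT, DRAW_TEXT };

struct draw_cmd {
    enum draw_op op;
    int x, y, w, h;
    uint32_t color;
    const char *text;   /* only used by DRAW_TEXT */
};

struct draw_list {
    struct draw_cmd cmds[256];
    size_t count;
};

static void draw_push(struct draw_list *l, struct draw_cmd c)
{
    if (l->count < 256)
        l->cmds[l->count++] = c;
}

int main(void)
{
    struct draw_list list = { .count = 0 };

    /* The application only *describes* its window... */
    draw_push(&list, (struct draw_cmd){ .op = DRAW_CLEAR, .color = 0xFF202020 });
    draw_push(&list, (struct draw_cmd){ .op = DRAW_FILL_RECT,
                                        .x = 10, .y = 10, .w = 200, .h = 20,
                                        .color = 0xFF4080FF });
    draw_push(&list, (struct draw_cmd){ .op = DRAW_TEXT,
                                        .x = 14, .y = 14, .text = "Hello" });

    /* ...and would then submit the list to the (virtual) video device;
     * the driver decides how to rasterize it, with or without a GPU. */
    return 0;
}
```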
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
willedwards
Re: x11 without server.
MicroXWin is interesting: it's an Xlib that talks to a (proprietary) kernel module instead. It shows that Xlib is not strictly tied to doing IPC with an X server.
I would advocate absolutely against shared memory.
If someone (read: if you) wrote an Xlib that did client-side software-based rendering to a bitmap, then this would be a major help to all hobby OSers! It's then for your kernel (or micro-kernel) to offer composition of these bitmapped windows and deal with events.
Apart from X, another popular thing to port is SDL. You could combine the two. If your Xlib client-side software-based renderer used SDL, you could easily test it out and develop it on another platform, and then you have the job of making your kernel (or micro-kernel) offer composition of SDL 2D buffers, which is a smaller, easier target than Xlib...
(With an Xlib that actually rendered to SDL, you'd be offering an interesting way of running X window programs locally on other existing OSes like Windows.)
There is a slippery slope with 3D. It's substantially easier to do client-side software-based rendering to bitmaps than it is to try to support OpenGL etc.
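As a rough sketch of the SDL route (the SDL2 calls are standard; the client-side renderer itself is reduced to a placeholder gradient), presenting a client-rendered bitmap looks roughly like this:

```c
/* Sketch: present a client-side-rendered pixel buffer through SDL2.
 * A hypothetical Xlib replacement would fill `pixels` instead of the
 * gradient loop below; the SDL calls are standard SDL2. */
#include <SDL2/SDL.h>
#include <stdint.h>

#define W 640
#define H 480

int main(void)
{
    static uint32_t pixels[W * H];   /* the client-rendered bitmap */

    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    SDL_Window *win = SDL_CreateWindow("client-side render",
                                       SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING, W, H);
    if (!win || !ren || !tex)
        return 1;

    /* Stand-in for "Xlib draws into a bitmap": just paint a gradient. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            pixels[y * W + x] = 0xFF000000u
                              | ((uint32_t)(x & 0xFF) << 16)
                              | (uint32_t)(y & 0xFF);

    SDL_UpdateTexture(tex, NULL, pixels, W * sizeof(uint32_t));
    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, tex, NULL, NULL);
    SDL_RenderPresent(ren);
    SDL_Delay(2000);   /* show the result briefly */

    SDL_Quit();
    return 0;
}
```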
Re: x11 without server.
@Brendan: I "pretty much" understand the multitasking concept.
I found it interesting to change things without weakening security.
There is still a central place that directs the events to the right app! No need to recreate a broadcast-socket equivalent; that would annihilate performance badly.
I thought of a very tiny component which would manage the routing of events and let the framework do the rest.
Yes, ditching Qt/GTK. Just having the OS window manager, with nothing else possible (OS X/iOS, anyone?).
Layers are good, but inherently slow to traverse.
Yes, it works, but that doesn't mean it is the best way of doing things. The devs of the past had to make things work, on the very idea that optimizations would come in an iterative fashion. But the first idea may not be the best (look at the bit-twiddling pages, comparing Knuth's solution with the newer ones).
Are sockets the best thing? No. They are complicated and rubbish. Replace them with some optimized "tunnels" (cut the protocol, put your data as close to the hardware as possible) and your app's networking suddenly flies (tested here).
Do they work? Yes, they do. But in an academic way (slow but correct).
I plan to stay on a specialized path that is ready right now. It is much work, but rewarding. Academic ideas are eye-opening (and now we could talk about SQLite and so on...).
Look at userspace networking. Yes, it works, sometimes better. But it is so complicated that the use cases are not easy to link with real-world apps (it has to be simpler to be efficient). Look at the DPDK framework source code. This is the fault of the layers.
We create new OSes; this is the opportunity to change things, so why not cut some layers?
(Do you like how web browsers work, too?)
@willedwards: no shared memory because of the notification system (and the atomics/synchronization involved), or because it may be easy to overwrite the controls? Or something else?
Re: x11 without server.
I have not studied the "X11" very well so I am not really familiar with all the problems. Some time ago I glanced through the X Window System Protocol and it was quite interesting. I would rather start from "too abstracted" than the other way around (e.g. direct hardware access with some conjuring tricks that make it abstract). Maybe the "X11" ended up being too complicated and bloated but at this point I think the idea was very good. Features like architecture-independent interfaces, network transparency and the like may sound like buzzwords but are very valid design visions. However, the current "X11" is not very attractive but I guess this is an implementation issue and not a fundamental design flaw?
Re: x11 without server.
I follow you, Antti, on this point actually.
The idea is brilliant, but the operating context hurts it.
willedwards
Re: x11 without server.
When I was tech architect for graphics and UI stuff at Symbian, we had viruses that screen-scraped banking apps and the like, all because, for historic legacy reasons, the framebuffer was world-readable. We had plenty of horsepower (33 MHz is plenty for drawing UIs over IPC) and were doing retained drawing in a server, but still, for legacy reasons, the framebuffer and various other bitmap stores were world-readable (from the days of the Psions).
Shared memory is a security and robustness nightmare. Don't do it. It will make things hard to develop and debug, and when you get it working you'll realise you've built something inherently insecure and unscalable.
Re: x11 without server.
@willedwards: I concur, if the shared memory is world-readable/writable.
But if there is an exchange of an id, and a framework permission you cannot override, I don't see why it would not scale, or why it would pose a security issue.
The problem you described comes from permissions that could be overridden in the first place.
Am I right?
Re: x11 without server.
You guys are talking about two different things.
AlexHully is saying he will share the app's framebuffer for its window with the graphics driver and the app itself. That is it: it is not shared with all the other apps running. They all have their own buffer, which is accessible only to themselves and the graphics driver.
willedwards is saying the entire screen buffer was accessible to all the apps, which I concur can be bad. It is also very difficult to prevent entirely; otherwise, how do you implement things like a screen capture program? For a banking app you may want some sort of protection, but obviously that's a different ball game than normal OS dev.
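To make the distinction concrete, here is an invented sketch of the first model: each app fills only its own private buffer, and the driver-side compositor is the only thing that touches the real screen:

```c
/* Invented sketch of the "per-window buffer" model being described:
 * each app owns a private buffer; only the compositor/driver reads it
 * and copies it onto the single real framebuffer. No app can see
 * another app's pixels, and none can see the whole screen. */
#include <stdint.h>
#include <string.h>

#define SCREEN_W 1024
#define SCREEN_H 768

struct window {
    int x, y, w, h;            /* placement on screen            */
    const uint32_t *pixels;    /* the app's private buffer (w*h) */
};

/* Driver-side composition: copy each app's buffer into place. */
static void compose(uint32_t *screen, const struct window *wins, int count)
{
    for (int i = 0; i < count; i++) {
        const struct window *win = &wins[i];
        for (int row = 0; row < win->h; row++) {
            if (win->y + row >= SCREEN_H)
                break;
            memcpy(&screen[(win->y + row) * SCREEN_W + win->x],
                   &win->pixels[row * win->w],
                   (size_t)win->w * sizeof(uint32_t));
        }
    }
}

int main(void)
{
    static uint32_t screen[SCREEN_W * SCREEN_H];   /* the real framebuffer */
    static uint32_t app_a[200 * 100], app_b[300 * 150];

    memset(app_a, 0x40, sizeof(app_a));   /* each app draws only  */
    memset(app_b, 0x80, sizeof(app_b));   /* into its own buffer  */

    struct window wins[] = {
        { 10,  10, 200, 100, app_a },
        { 250, 50, 300, 150, app_b },
    };
    compose(screen, wins, 2);
    return 0;
}
```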
Schol-R-LEA
Re: x11 without server.
I was wondering if you had looked at the designs of either Wayland or Mir, and what, if anything, you thought about them.
It is helpful to remember the context that the X Window System was designed for: it was an offshoot of the earlier W, which (according to Wikipedia) originated as the graphics protocol for an experimental distributed micro-kernel OS called V. It was designed to communicate between two or more different systems, one of which was usually a smart terminal rather than a full computer. X evolved from this into a more general protocol designed for remote access to a graphical engine by a program, which is why the client-server relationship is reversed from the usual terminology (the 'client' is the program on the remote machine, and the 'server' is the terminal, or in modern systems the computer with the monitor, which provides the service of displaying the output), and it was ported to Unix simply because it already existed and no one (except NeXT) wanted to reinvent the wheel.
The important thing here is that it wasn't designed to be a rendering or compositing model, but a networking protocol. It was originally assumed that the server would be running on a dedicated system. Rendering was seen as something too hardware-specific to address at all, so X kept the whole thing fairly abstract. The whole idea of having both client and server on the same system probably didn't even occur to the designers, and would have seemed outlandish to them in 1984. The idea that you could have a system like the World Wide Web, with no explicit communication of graphical operations, and still present rendered images effectively (sort of) would have seemed just as unlikely. I guess my point is that it was intended for a very different purpose than it is now used for, a purpose that is mostly forgotten and unnecessary today.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.