I'm building a GUI interface and was wondering how to link the interface to the program code.
Should I use resources or linear APIs?
GUI
RE:GUI
In my OS I will implement low-level graphics functions in the video driver, which is loaded into the kernel address space like the other drivers; an external DLL interfaced to the applications implements higher-level functions, like those to draw windows, and calls the video driver via syscalls.
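A minimal sketch of that layering in C, with made-up names (draw_window(), SYS_VIDEO_FILL_RECT, syscall4()) since the post doesn't give any:

/* --- user-mode GUI DLL, layered above the kernel video driver --- */

#define SYS_VIDEO_FILL_RECT 40  /* hypothetical syscall number */

/* arch-specific stub that traps into the kernel */
long syscall4(long num, long a, long b, long c, long d);

/* higher-level function exported to applications */
void draw_window(int x, int y, int w, int h)
{
    /* the frame is just a filled rectangle drawn by the kernel driver */
    syscall4(SYS_VIDEO_FILL_RECT, x, y, w, h);
    /* borders, title bar, etc. would go through the same path */
}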
RE:GUI
I think you'll have to explain what you mean by both "resources" and "linear API."
You're going to need some form of API, obviously. Perhaps a couple, layered on top of each other. As for a "linear" API: if you mean that all system processes, when using this API, will think they have access to a linear frame buffer, then yes, I would agree with this.
Processes should never have to worry about bank switching or any other form of underlying graphics confusion. To the app, it should always be a single linear system.
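For illustration, a minimal sketch of how a driver could hide banking behind a flat putpixel interface; the fb_info fields and the bank_switch() helper are assumptions, not a real API:

#include <stdint.h>

struct fb_info {
    uint8_t  *window;     /* the bank window currently mapped in   */
    uint32_t  pitch;      /* bytes per scanline                    */
    uint32_t  bank_size;  /* size of one bank in bytes             */
    uint32_t  cur_bank;   /* which bank `window` currently shows   */
};

/* remaps `window` onto the given bank and updates cur_bank */
void bank_switch(struct fb_info *fb, uint32_t bank);

void fb_putpixel(struct fb_info *fb, int x, int y, uint8_t color)
{
    uint32_t off  = (uint32_t)y * fb->pitch + (uint32_t)x;
    uint32_t bank = off / fb->bank_size;

    if (bank != fb->cur_bank)   /* banking handled here, invisibly */
        bank_switch(fb, bank);

    fb->window[off % fb->bank_size] = color;  /* app sees only flat (x, y) */
}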
Anyway, I'm late... gotta go
Cheers,
Jeff
RE:GUI
So you mean an event-driven interface?
Meaning, the client application basically goes through a loop and, based on whatever event it's sent (such as a mouse button press, a keyboard key, etc.), executes a block of code?
This is how many GUIs exist today. X11, for example, is an event-driven architecture like this.
It's a simple approach, and it works well, but it's also an old approach. Continually looping is a waste of resources; there's no reason to loop through your switch statement if there is no event (i.e., default: { code_here; } is probably called more than anything else, and probably does nothing).
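A sketch of that classic loop, with placeholder event types and handlers:

#include <stdint.h>

enum ev_type { EV_NONE, EV_MOUSE_DOWN, EV_KEY_PRESS };

struct event { enum ev_type type; int x, y; int key; };

struct event get_next_event(void);  /* provided by the GUI library */
void handle_mouse_down(int x, int y);
void handle_key(int key);

void event_loop(void)
{
    for (;;) {
        struct event ev = get_next_event();  /* polled; may return EV_NONE */

        switch (ev.type) {
        case EV_MOUSE_DOWN: handle_mouse_down(ev.x, ev.y); break;
        case EV_KEY_PRESS:  handle_key(ev.key);            break;
        default:            break;  /* the common, do-nothing case */
        }
    }
}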
An update on the basic event-driven model might be to have your OS use a callback mechanism. In other words, your client app tells the OS that whenever there's a mouse click, it should call MyDefinedMouseClickFunction().
In this way, when there are no events, your program is completely dormant, and other applications can do their thing.
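A sketch of that registration style, assuming a hypothetical gui_on_mouse_click() call:

typedef void (*mouse_click_fn)(int x, int y);

/* hypothetical registration call provided by the OS or GUI server */
void gui_on_mouse_click(mouse_click_fn handler);

static void MyDefinedMouseClickFunction(int x, int y)
{
    /* react to the click; between events the process just sleeps */
    (void)x; (void)y;
}

void init_gui(void)
{
    gui_on_mouse_click(MyDefinedMouseClickFunction);
    /* no polling loop: the OS calls back only when a click happens */
}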
In an object-oriented system, this can be quite simple: define an interface for events and have every "widget" inherit from it. This way, you don't have to register your callbacks with the OS; it simply knows that this object inherits from MyGUIInterfaceObject and therefore contains methods called onMouseClick, onMouseDrag, etc., and can call them directly.
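In plain C the same idea is usually expressed as a table of function pointers embedded at the start of every widget; all names below are illustrative:

struct MyGUIInterfaceObject {
    void (*onMouseClick)(struct MyGUIInterfaceObject *self, int x, int y);
    void (*onMouseDrag)(struct MyGUIInterfaceObject *self, int x, int y);
};

struct button {
    struct MyGUIInterfaceObject base;  /* first member: a button "is a" GUI object */
    const char *label;
};

static void button_click(struct MyGUIInterfaceObject *self, int x, int y)
{
    struct button *b = (struct button *)self;
    (void)b; (void)x; (void)y;         /* react to the click here */
}

/* the system can dispatch without any registration step */
void dispatch_click(struct MyGUIInterfaceObject *obj, int x, int y)
{
    obj->onMouseClick(obj, x, y);
}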
There are, certainly, other ways to do this sort of thing. You might want to take a look at how some of the OSs on this list implement their interface APIs (those of them that have gotten that far, anyway).
Cheers,
Jeff
RE:GUI
The callback-based interface is more complex; if you block your thread in the event-receiving syscall, you don't waste any CPU time. The other good side of this approach is that you get a strictly serialized event interface: two events can't be handled at the same time, so you can spare the user-mode synchronization code. Also, you can pass events in the registers, so the kernel code never calls or even touches any user-mode memory. The kernel should provide send_event() and receive_event(), and also an event queue for each GUI thread. That is all.
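A sketch of the user-mode side of that scheme, with made-up syscall numbers and register packing:

#include <stdint.h>

#define SYS_RECEIVE_EVENT 1  /* hypothetical syscall number */

/* arch-specific stub: the event comes back packed into registers, and the
   stub returns it as one 64-bit value, so the kernel never reads or writes
   user-mode memory */
uint64_t syscall0(int num);

void handle_event(uint32_t type, uint32_t data);

void gui_thread(void)
{
    for (;;) {
        uint64_t ev = syscall0(SYS_RECEIVE_EVENT);  /* blocks: no CPU burned */
        /* strictly serialized: the next event can't be delivered until this
           handler returns, so no user-mode locking is needed */
        handle_event((uint32_t)(ev >> 32), (uint32_t)ev);
    }
}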
Viktor
btw:
Currently most callback-based interfaces use a message loop under the callback layer (because of simplicity and security).
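That is, something like this inside the GUI library, where the app registers a callback and never sees the loop (names made up):

struct msg { int type; int a, b; };

int get_message(struct msg *m);                 /* blocks inside the library */

static void (*g_handler)(const struct msg *m);  /* the app's callback */

void gui_set_handler(void (*h)(const struct msg *m))
{
    g_handler = h;
}

void gui_run(void)  /* the message loop hidden under the callback layer */
{
    struct msg m;
    while (get_message(&m))
        g_handler(&m);  /* each message is simply forwarded to the callback */
}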