
Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 1:42 am
by monobogdan
Hm, I'm curious. Take nouveau, for example: does that driver work only under X, or elsewhere too?
So, does it also paint the tty?

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 2:50 am
by Schol-R-LEA
I would say the place to start would be the Mesa website, which has all the documentation for the library.

I'm not sure what you mean by 'paints tty' (Mesa is an implementation of OpenGL, used primarily for 3-D rendering - it won't run in a text mode, and while I suppose you could implement a text-mode emulation console with it, that would be massive overkill), but no, it is not specific to X. The "stand-alone" version of Mesa can act as a rendering engine without X. In fact, the relationship is in some ways the reverse - according to the website, at least some of the X.org Direct Rendering Infrastructure drivers use Mesa as their underlying scaffolding.

Mind you, the primary purpose of Mesa is to provide OpenGL in software, without using hardware acceleration - it does ship drivers for common hardware, but the point is to provide 3-D rendering even without specific hardware support.
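
As a concrete illustration, the stand-alone build ships an off-screen interface, OSMesa, which renders into a plain memory buffer with no window system involved at all. A minimal sketch (assuming a Mesa build with OSMesa enabled, linked with -lOSMesa):

Code:
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int width = 640, height = 480;

    /* Create an off-screen RGBA context; no X server is involved. */
    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
    if (!ctx) { fprintf(stderr, "context creation failed\n"); return 1; }

    /* Mesa rasterizes straight into this malloc'd buffer. */
    void *buffer = malloc(width * height * 4);
    if (!buffer || !OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height)) {
        fprintf(stderr, "MakeCurrent failed\n");
        return 1;
    }

    /* From here on it is ordinary OpenGL. */
    glClearColor(0.2f, 0.3f, 0.8f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    /* 'buffer' now holds the rendered pixels; writing them to a file or
     * blitting them to your own display driver is up to you. */
    OSMesaDestroyContext(ctx);
    free(buffer);
    return 0;
}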

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 3:22 am
by monobogdan
Schol-R-LEA wrote:I would say the place to start would be the Mesa website, which has all the documentation for the library.

I'm not sure what you mean by 'paints tty' (Mesa is an implementation of OpenGL, used primarily for 3-D rendering - it won't run in a text mode, and while I suppose you could implement a text-mode emulation console with it, that would be massive overkill), but no, it is not specific to X. The "stand-alone" version of Mesa can act as a rendering engine without X. In fact, the relationship is in some ways the reverse - according to the website, at least some of the X.org Direct Rendering Infrastructure drivers use Mesa as their underlying scaffolding.

Mind you, the primary purpose of Mesa is to provide OpenGL in software, without using hardware acceleration - it does ship drivers for common hardware, but the point is to provide 3-D rendering even without specific hardware support.
But the X renderer is powered by VESA or by GL (depending on the graphics driver), right?

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 6:32 am
by Love4Boobies
By VESA, which is an organization, you mean VBE/Core. The latter is a standardized video interface for PCs using BIOS firmware, much like the ones that replaced it: UGA in EFI and GOP in UEFI. You can write (sucky) generic video drivers on top of them. OpenGL, on the other hand, provides an API for graphics programmers; it's a different level of abstraction and doesn't concern itself with drivers.
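
To make that level of abstraction concrete: a generic VBE "driver" boils down to asking the video BIOS (INT 10h, AX=4F01h/4F02h, from real mode or a v86 monitor) for a mode that reports a linear framebuffer, then poking pixels into it yourself. A sketch of the relevant fields - the struct subset and the put_pixel helper are mine, for illustration, and assume the LFB has already been mapped into the address space:

Code:
#include <stdint.h>

/* Subset of the VBE ModeInfoBlock returned by function 4F01h,
 * packed exactly as the BIOS fills it in. */
struct vbe_mode_info {
    uint16_t attributes;
    uint8_t  reserved0[14];
    uint16_t pitch;          /* bytes per scanline */
    uint16_t width, height;
    uint8_t  reserved1[3];
    uint8_t  bpp;            /* bits per pixel */
    uint8_t  reserved2[14];
    uint32_t framebuffer;    /* PhysBasePtr: physical address of the LFB */
    uint8_t  reserved3[212];
} __attribute__((packed));

/* All the "generic driver" really gives you: raw pixel access,
 * no acceleration. Assumes a 32bpp direct-color mode. */
static void put_pixel(volatile uint8_t *lfb, const struct vbe_mode_info *mi,
                      int x, int y, uint32_t rgb)
{
    *(volatile uint32_t *)(lfb + y * mi->pitch + x * 4) = rgb;
}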

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 7:31 am
by Solar
The Mesa rendering framework uses drivers.

One of them is the Xlib driver. DRI is another. Then there are drivers for Windows, and llvmpipe. A pipe dream for AmigaOS 4.2 is to include Gallium3D, enabling the use of Mesa on that platform as well.

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 9:37 am
by dchapiesky
Mesa can render directly to a framebuffer - however, that code was retired a few years back in favor of the fb devices provided by the OS.
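
On Linux, the fb-device route looks roughly like this - a bare-bones sketch, assuming a 32bpp /dev/fb0 and skipping all error handling:

Code:
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);

    struct fb_var_screeninfo var;  /* resolution, bits per pixel */
    struct fb_fix_screeninfo fix;  /* line length, buffer size */
    ioctl(fd, FBIOGET_VSCREENINFO, &var);
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);

    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);

    /* Fill the visible screen with a solid grey, row by row. */
    for (uint32_t y = 0; y < var.yres; y++)
        memset(fb + y * fix.line_length, 0x80,
               var.xres * (var.bits_per_pixel / 8));

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}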

As for rendering the tty - you are thinking of kmscon or wlterm, which sit between a tty device and Mesa/OpenGL/Wayland to render a terminal on the framebuffer...

kmscon and wlterm use libtsm to emulate a terminal - hence my post http://forum.osdev.org/viewtopic.php?f=1&t=31271


Confusing as hell, but powerful too.

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 10:03 am
by monobogdan
Love4Boobies wrote:By VESA, which is an organization, you mean VBE/Core. The latter is a standardized video interface for PCs using BIOS firmware, much like the ones that replaced it: UGA in EFI and GOP in UEFI. You can write (sucky) generic video drivers on top of them. OpenGL, on the other hand, provides an API for graphics programmers; it's a different level of abstraction and doesn't concern itself with drivers.
What about GL under DOS? How does it work? Since only a software GL is possible on DOS, it renders on top of VBE (or video memory), right? So MesaGL (like TinyGL) can render its contents in two ways:
VBE extensions (no problem accessing those under Linux)
VRAM (using a special block device representing the start of video memory, but

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 10:44 am
by dozniak
monobogdan wrote: What about GL under DOS? How does it work? Since only a software GL is possible on DOS,
It uses a software renderer, or a card-specific GL backend (e.g. 3dfx's Glide).

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 11:02 am
by dchapiesky
monobogdan wrote: What about GL under DOS? How does it work? Since only a software GL is possible on DOS, it renders on top of VBE (or video memory), right? So MesaGL (like TinyGL) can render its contents in two ways:
VBE extensions (no problem accessing those under Linux)
VRAM (using a special block device representing the start of video memory, but

As I said, direct fb support was removed a few versions back... you can still find Mesa for DOS if you look hard enough.

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 12:08 pm
by monobogdan
dchapiesky wrote:
monobogdan wrote: What about GL under DOS? How does it work? Since only a software GL is possible on DOS, it renders on top of VBE (or video memory), right? So MesaGL (like TinyGL) can render its contents in two ways:
VBE extensions (no problem accessing those under Linux)
VRAM (using a special block device representing the start of video memory, but

As I said, direct fb support was removed a few versions back... you can still find Mesa for DOS if you look hard enough.
No, I'm trying to find a way to draw anything on the screen, pixel by pixel, without X.

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 2:55 pm
by Korona
Solar wrote:The Mesa rendering framework uses drivers.

One of them is the Xlib driver. DRI is another. Then there are drivers for Windows, and llvmpipe. A pipe dream for AmigaOS 4.2 is to include Gallium3D, enabling the use of Mesa on that platform as well.
That is only half of the story. Under Linux (and FreeBSD, IIRC) the 3D drivers actually sit inside Mesa. GPU drivers are split between the kernel and user space: the kernel performs graphics memory management and mode setting, while the user-space part of the driver does the actual rendering by passing command buffers (which are, e.g., compiled via the LLVM AMDGPU target) to the GPU. The kernel just validates those buffers and sets up the DMA.
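
You can poke at the kernel half of that split from user space through libdrm. A small sketch that just asks the kernel driver which display connectors it manages (assumes the libdrm headers and a /dev/dri/card0 node; link with -ldrm):

Code:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Ask the kernel driver what display resources it manages. */
    drmModeRes *res = drmModeGetResources(fd);
    if (!res) { fprintf(stderr, "not a KMS device?\n"); return 1; }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        printf("connector %u: %sconnected, %d modes\n",
               conn->connector_id,
               conn->connection == DRM_MODE_CONNECTED ? "" : "dis",
               conn->count_modes);
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    close(fd);
    return 0;
}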

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 3:35 pm
by Schol-R-LEA
monobogdan wrote:No, I'm trying to find a way to draw anything on the screen, pixel by pixel, without X.
That's... exactly the opposite of OpenGL. Seriously, it is. That's so far from what you are asking about, and so far from what you probably actually want to be doing, that it indicates a serious confusion of ideas.

Mind you, this is a confusing subject. I think we all need to take a step back and reconsider this entire discussion.

Let's start over from scratch with: what are you trying to display? What kinds of images, widgets, and 'content' are you working with? What is the purpose in displaying it?

My impression is that your goal is something more along the lines of a window manager (comparable to the Windows USER subsystem, or the X Window System when run for local use - its original purpose was remote graphics, which is why it has always been a bit odd compared to other display managers meant for local use) but with a different display management model. Is this an accurate statement, and if not, how would you describe what you want?

I think we also need to review the terminology, and how most systems decompose the different aspects of this (i.e., the "graphics stack").

At the lowest level we have the device drivers, which communicate with the actual hardware. These need to be able to work either with the specific display devices - the video memory, the GPU if any, the video signal generators, and even the monitor - or with some common subset shared with disparate adapters. However, this does not mean that the driver must do all the work alone. VESA's VBE/Core defines a standard minimal interface to the hardware as an extension BIOS, which a compliant video adapter should provide as a way of interfacing with the hardware without needing any proprietary details of the adapter.

Somewhere around here you would find things like the Mesa driver framework and the X Direct Rendering Infrastructure. This level doesn't have a formal name in most systems, at least not as far as I know; it is a first abstraction layer which some software system (not necessarily the operating system itself) provides to give a uniform model for drawing pixels on the screen, while still exposing the underlying hardware. The split between 2-D and 3-D often starts around here, as a 3-D renderer generally needs a lot more direct hardware access than a 2-D one.

The next level is the renderer. This is where you really see 3-D becoming a separate thing, as most systems prior to, say, 2007 would have used strictly 2-D rendering for everything that didn't specifically require 3-D, due to the need for hardware acceleration for practical real-time 3-D rendering at the time. As Brendan has pointed out before, the Wheel of Reincarnation for graphics rendering has been swinging back towards CPU-driven rendering since 2012 or so, though dedicated rendering hardware is still dominant at the moment. Note, however, that this wheel has been turning since the very first days of computer graphics in the early 1960s, so it is a good guess that this won't be the last word on the subject.

Anyway, Mesa proper started out in the 1990s as a software 3-D renderer, but these days it abstracts over rendering backends, with software rendering serving more as a fallback mode.

This is where you need to decide how you are going to handle the differences between rendering 2-D images such as basic windows and widgets, and the more impressive but also more processing-intensive 3-D rendering. Because you can treat 2-D as a special case of 3-D, it is tempting to use 3-D for everything, but that approach has some significant downsides, especially on older hardware; you may need to consider where you can use less general 2-D rendering to avoid a lot of hardware crunching where possible.
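
To illustrate the "2-D as a special case of 3-D" point: with an orthographic projection you can push a flat widget through the full 3-D pipeline. A sketch in old fixed-function GL - the function and its coordinates are made up for illustration:

Code:
#include <GL/gl.h>

/* Draw a screen-aligned rectangle through the 3-D pipeline. The
 * orthographic projection maps pixel coordinates straight to the
 * screen, so the z axis is effectively unused. */
void draw_2d_rect(int screen_w, int screen_h,
                  float x, float y, float w, float h)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, screen_w, screen_h, 0.0, -1.0, 1.0); /* y grows downward */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_QUADS);
    glVertex2f(x,     y);
    glVertex2f(x + w, y);
    glVertex2f(x + w, y + h);
    glVertex2f(x,     y + h);
    glEnd();
}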

You also need to look at how you separate different renderable elements such as glyphs (letters, digits, text symbols, etc.), widgets (window borders, menus, icons, the mouse pointer), 2-D images such as drawings and pictures, 3-D manipulable objects, and so on. This relates to, and raises issues for, the next layer of the stack, the compositor. However, before that I need to mention another part of this layer, the widget toolkit.

The widget toolkit is the set of primitive widgets - window frames, menus, drawing spaces, textboxes, text areas, radio buttons, checkboxes, etc. - that a window manager uses. This is not a separate layer from the renderer, but side-by-side with it, and the widgets have to work together with the compositor.

The compositor is the part that combines the individual elements being rendered into the instantaneous display state, that is, the screen as it is at a given moment. In a 2-D design, this is usually done by the renderer directly, but 3-D UIs almost always have a separate compositor.

OK, quick history lesson. Early 2-D windowing systems generally composited in situ, that is, directly into the display. While this was feasible with the stroke-vector displays of the 1960s, or on raster displays that used fixed cells drawn from tables of glyphs, such as PLATO and the majority of text-oriented terminals, it was problematic for bitmapped video systems even from the outset: it meant that a large block of memory - often as much as 30% of system memory in the days of the Alto and the 128K Macintosh - had to be set aside for video, and the timing of drawing had to be synced with the vertical refresh in order to avoid flicker.

While double buffering was part of the answer, it ran into issues with time - copying that much data would take longer than the vertical refresh, so a workable double buffer needed to be done in hardware. You would have to dedicate two buffers' worth of memory in hardware (one to drive the video, and the other to draw to), and the display hardware needed a way to switch which of the two buffers was driving it. Pretty much every video system today supports this as a matter of course. However, this did nothing for the case where you have to copy a bitmapped image from general memory - something loaded from a file, say - into the drawing buffer.
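
In code, a hardware page flip boils down to a loop like this - a sketch, with the two hardware hooks stubbed out since they are entirely adapter-specific:

Code:
#include <stdint.h>

static uint8_t *page[2];   /* two full frames' worth of video memory */
static int back = 1;       /* index of the page we are drawing into */

/* Hypothetical hardware hooks: on real hardware these would program the
 * CRTC start address and wait on the vertical-retrace signal. */
static void set_display_start(uint8_t *p) { (void)p; }
static void wait_for_vblank(void)         { }

/* Flip: the finished back page becomes visible and the old front page
 * becomes the new draw target. No pixels are copied. */
static void present_frame(void)
{
    wait_for_vblank();              /* switch during retrace to avoid tearing */
    set_display_start(page[back]);
    back ^= 1;
}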

In order to cut the time further, bit blitting (BitBLT) was developed, a method in which part of the image is prepared as a mask and only the masked pixels are drawn to the video buffer. Other techniques, such as hardware sprites (which were drawn directly to the screen, bypassing the video buffer entirely), were also developed, but these were mostly used in dedicated gaming and video editing systems.
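
The heart of a masked blit is tiny. A sketch, assuming 32bpp buffers and a one-byte-per-pixel mask in which any non-zero value means opaque:

Code:
#include <stdint.h>

/* Copy only the pixels the mask marks as opaque; everything else in
 * the destination is left untouched. Pitches are in pixels here. */
void blit_masked(uint32_t *dst, int dst_pitch,
                 const uint32_t *src, const uint8_t *mask, int src_pitch,
                 int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (mask[y * src_pitch + x])
                dst[y * dst_pitch + x] = src[y * src_pitch + x];
}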

I mention all this to get to compositing. Up until 2006 or so, compositing for a window manager was done mainly as a 2-D action, and generally focused on a) determining which parts of the display had changed, b) determining which parts of the screen were visible, and c) blitting the visible sections of the windows that had changed into the draw buffer. This was easier for a tiling window manager, as there was no z-ordering - no windows overlapped, so everything could be drawn, and you could simply divide the windows into those which had changed and those which hadn't. Layering window managers were a little more complicated, because some windows might obscure parts of others, but generally it wasn't too difficult. Even so, 2-D hardware acceleration was still very useful for this, even if it wasn't absolutely necessary.
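
That 2-D compositing loop is essentially dirty-rectangle tracking. A sketch under the tiling assumption (nothing overlaps, so visibility is trivial); the window structure and blit stub here are mine, for illustration:

Code:
#include <stdbool.h>

struct window {
    int  x, y, w, h;
    bool dirty;              /* contents changed since the last frame */
    /* ... backing store, z-order, etc. ... */
};

/* Stub: copy one window's backing store into the draw buffer. */
static void blit_window(const struct window *win) { (void)win; }

/* Tiling compositor pass: no window obscures another, so we only
 * need to re-blit the windows whose contents actually changed. */
static void composite(struct window *wins, int count)
{
    for (int i = 0; i < count; i++) {
        if (wins[i].dirty) {
            blit_window(&wins[i]);
            wins[i].dirty = false;
        }
    }
}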

With the introduction of 3-D layered UIs such as Aqua and Aero, the issue of combining things became much more complex, leading to the need for a separate compositor layer. Most major window managers today have a 3-D compositor, and for a time it was almost impossible to get good performance from one without a dedicated GPU, meaning software rendering was out of the question even for the basic GUI, leading to issues that previously were mostly seen in gaming.

Getting back on track, we now get to the window manager itself, which is the part that actually decides where to put each rendered component, handles how widgets interact with each other, and just generally, well, manages the windows. This is what the X Window System was from the outset, and it acts as the glue between the lower-level aspects of the GUI and the more abstract parts such as the desktop manager.

The next layer is the desktop manager, and this is what most people are actually thinking of when they talk about a GUI, and about the differences between Windows, Mac, and the various Linux desktops such as KDE, Gnome, Unity, XFCE, Cinnamon, MATE, and so forth.

Not all systems follow quite this pattern, and not all layers are found in all of them (or in this order), but that will at least give us a common language for discussing this.

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 3:37 pm
by monobogdan
Schol-R-LEA wrote:
monobogdan wrote:No, I'm trying to find a way to draw anything on the screen, pixel by pixel, without X.
That's... exactly the opposite of OpenGL. Seriously, it is. That's so far from what you are asking about, and so far from what you probably actually want to be doing, that it indicates a serious confusion of ideas.

Mind you, this is a confusing subject. I think we all need to take a step back and reconsider this entire discussion.

Let's start over from scratch with: what are you trying to display? What kinds of images, widgets, and 'content' are you working with? What is the purpose in displaying it?

My impression is that your goal is something more along the lines of a window manager (comparable to the Windows USER subsystem, or the X Window System when run for local use - its original purpose was remote graphics, which is why it has always been a bit odd compared to other display managers meant for local use) but with a different display management model. Is this an accurate statement, and if not, how would you describe what you want?

I think we also need to review the terminology, and how most systems decompose the different aspects of this (i.e., the "graphics stack").

At the lowest level we have the device drivers, which communicate with the actual hardware. These need to be able to work with the specific display devices - the video memory, the GPU if any, the video signal generators, and even the monitor. VESA's VBE/Core defines a standard minimal interface to this hardware, which a compliant video adapter should provide an extension BIOS for.

Somewhere around here you would find things like the Mesa driver framework and the X Direct Rendering Infrastructure. This level doesn't have a formal name in most systems, at least not as far as I know; it is a first abstraction layer which some software system (not necessarily the operating system itself) provides to give a uniform model for drawing pixels on the screen, while still exposing the underlying hardware. The split between 2-D and 3-D often starts around here, as a 3-D renderer generally needs a lot more direct hardware access than a 2-D one.

The next level is the renderer. This is where you really see 3-D becoming a separate thing, as most systems prior to, say, 2007 would have used strictly 2-D rendering for everything that didn't specifically require 3-D, due to the need for hardware acceleration for practical real-time 3-D rendering at the time. As Brendan has pointed out before, the Wheel of Reincarnation for graphics rendering has been swinging back towards CPU-driven rendering since 2012 or so, though dedicated rendering hardware is still dominant at the moment. Note, however, that this wheel has been turning since the very first days of computer graphics in the early 1960s, so it is a good guess that this won't be the last word on the subject.

Anyway, Mesa proper started out in the 1990s as a software 3-D renderer, but these days it abstracts over rendering backends, with software rendering serving more as a fallback mode.

This is where you need to decide how you are going to handle the differences between rendering 2-D images such as basic windows and widgets, and the more impressive but also more processing-intensive 3-D rendering. Because you can treat 2-D as a special case of 3-D, it is tempting to use 3-D for everything, but that approach has some significant downsides, especially on older hardware; you may need to consider where you can use less general 2-D rendering to avoid a lot of hardware crunching where possible.

You also need to look at how you separate different renderable elements such as glyphs (letters, digits, text symbols, etc.), widgets (window borders, menus, icons, the mouse pointer), 2-D images such as drawings and pictures, 3-D manipulable objects, and so on. This relates to, and raises issues for, the next layer of the stack, the compositor. However, before that I need to mention another part of this layer, the widget toolkit.

The widget toolkit is the set of primitive widgets - window frames, menus, drawing spaces, textboxes, text areas, radio buttons, checkboxes, etc. - that a window manager uses. This is not a separate layer from the renderer, but side-by-side with it, and the widgets have to work together with the compositor.

The compositor is the part that combines the individual elements being rendered into the instantaneous display state, that is, the screen as it is at a given moment. In a 2-D design, this is usually done by the renderer directly, but 3-D UIs almost always have a separate compositor.

OK, quick history lesson. Early 2-D windowing systems generally composited in situ, that is, directly into the display. While this was feasible with the stroke-vector displays of the 1960s, or on raster displays that used fixed cells drawn from tables of glyphs, such as PLATO and the majority of text-oriented terminals, it was problematic for bitmapped video systems even from the outset: it meant that a large block of memory - often as much as 30% of system memory in the days of the Alto and the 128K Macintosh - had to be set aside for video, and the timing of drawing had to be synced with the vertical refresh in order to avoid flicker.

While double buffering was part of the answer, it ran into issues with time - copying that much data would take longer than the vertical refresh, so a workable double buffer needed to be done in hardware. You would have to dedicate two buffers' worth of memory in hardware (one to drive the video, and the other to draw to), and the display hardware needed a way to switch which of the two buffers was driving it. Pretty much every video system today supports this as a matter of course. However, this did nothing for the case where you have to copy a bitmapped image from general memory - something loaded from a file, say - into the drawing buffer.

In order to cut the time further, bit blitting (BitBLT) was developed, a method in which part of the image is prepared as a mask and only the masked pixels are drawn to the video buffer. Other techniques, such as hardware sprites (which were drawn directly to the screen, bypassing the video buffer entirely), were also developed, but these were mostly used in dedicated gaming and video editing systems.

I mention all this to get to compositing. Up until 2006 or so, compositing for a window manager was done mainly as a 2-D action, and generally focused on a) determining which parts of the display had changed, b) determining which parts of the screen were visible, and c) blitting the visible sections of the windows that had changed into the draw buffer. This was easier for a tiling window manager, as there was no z-ordering - no windows overlapped, so everything could be drawn, and you could simply divide the windows into those which had changed and those which hadn't. Layering window managers were a little more complicated, because some windows might obscure parts of others, but generally it wasn't too difficult. Even so, 2-D hardware acceleration was still very useful for this, even if it wasn't absolutely necessary.

With the introduction of 3-D layered UIs such as Aqua and Aero, the issue of combining things became much more complex, leading to the need for a separate compositor layer. Most major window managers today have a 3-D compositor, and for a time it was almost impossible to get good performance from one without a dedicated GPU, meaning software rendering was out of the question even for the basic GUI, leading to issues that previously were mostly seen in gaming.

Getting back on track, we now get to the window manager itself, which is the part that actually decides where to put each rendered component, handles how widgets interact with each other, and just generally, well, manages the windows. This is what the X Window System was from the outset, and it acts as the glue between the lower-level aspects of the GUI and the more abstract parts such as the desktop manager.

The next layer is the desktop manager, and this is what most people are actually thinking of when they talk about a GUI, and about the differences between Windows, Mac, and the various Linux desktops such as KDE, Gnome, Unity, XFCE, Cinnamon, MATE, and so forth.

Not all systems follow quite this pattern, and not all layers are found in all of them (or in this order), but that will at least give us a common language for discussing this.
Oh yes. So, I want to render a fast GUI. Since I know OpenGL better than any other graphics library, it seemed like a nice place to start.

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 5:21 pm
by Schol-R-LEA
facepalm

First off, please do not quote the whole of a gargantuan post; if you are going to quote something, trim it so that only the parts you are replying to are quoted, directly above the part you are replying to.

Second, your answer explains nothing, and seems to be dismissive rather than responsive. OK, you are mostly familiar with OpenGL, but what about it? You haven't said what you are really trying to accomplish here. "Render fast GUI" tells us exactly zip about what you are doing. Is it a 2-D GUI? A 3-D GUI? A combination of the two? If it is the former, OpenGL is overkill, and inappropriate.

Third, OpenGL isn't a library, it's a standard. You want an OpenGL library? You would need to either write one, or port one. Your main question seems to be "can I port Mesa to my OS without also porting X?", in which case the answer is, "dunno, that depends on your OS, but it doesn't depend on X".

Finally, you still seem to be thinking that pixel-by-pixel control of the display is the best way to improve performance, which sounds like a reasonable assumption - I made the same one years ago - but is in fact almost, but not quite, completely unlike the truth. This is one case where direct access actually performs worse than a more abstract approach, because not only does it mean more frequent, and slower, accesses to video memory over the bus, but it also ignores the acceleration hardware - even the parts which are accessible through VBE/Core.
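
To make that concrete: the usual cure is to draw into a back buffer in ordinary system RAM and push it across the bus once per frame in large sequential writes, instead of touching video memory pixel by pixel. A sketch, with the screen geometry assumed:

Code:
#include <stdint.h>
#include <string.h>

/* Draw into cheap, cacheable system RAM all frame long... */
static uint32_t backbuf[480][640];

/* ...then cross the slow bus exactly once, one big row copy at a time. */
static void flush_to_screen(volatile uint8_t *vram, int pitch_bytes)
{
    for (int y = 0; y < 480; y++)
        memcpy((void *)(vram + y * pitch_bytes), backbuf[y],
               sizeof backbuf[y]);
}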

Re: Does Mesa work only in X, or does it provide rendering everywhere?

Posted: Wed Feb 01, 2017 6:53 pm
by dchapiesky
:arrow: :arrow: Tip of the Hat to Schol-R-LEA

I couldn't bring myself to answer anymore... Thank you, kind Sir.