
Re: Concise Way to Describe Curved Surfaces

Posted: Sun Jun 28, 2015 11:30 pm
by bluemoon
Antti wrote:If not mentioned before, a simple method for making experiments is just to print a "simulated screenshot" on paper and bend it according to the simulated viewing device. I am waiting for the photos of this laboratory setup...
A minor enhancement on this idea: put it on a projector (as the screen), so you can easily see how the spot is translated.

Re: Concise Way to Describe Curved Surfaces

Posted: Mon Jun 29, 2015 8:55 am
by embryo2
Brendan

Because we both describe the same virtual world, it is possible to find a transformation from one model to the other. But your way of modeling the 3D world is more complex. Essentially, your way starts from spherical coordinates attached to some monitor and extends your vision out to the fragment of the world behind the monitor. My way is about Cartesian space and an abstraction of the world view expressed as a sphere or cylinder. The Cartesian space is much more intuitive and requires fewer calculations (fewer sines and cosines), so the greater complexity is your choice.

The following just demonstrates the transformation between the models and shows some similarities.
Brendan wrote:You've got a virtual world described by vertices and polygons/triangles. You transform vertices (rotate them about the camera, etc); then clip polygons to the edges of the viewing volume (left/right/top/bottom/near/far planes). Then you use altitude and azimuth angles for each pixel to cast a ray from camera through the pixel and out to the far clipping plane and find the first point where that ray intersects with a polygon/triangle and use that intersection point to determine the colour of the pixel.
Here you describe the same ray tracing, but miss some essential steps (I also haven't shown all the required steps). Those steps are about the real complexity of a virtual world. To determine the color of a snowflake you must calculate the average color of its thousands of faces and edges that fit into the window a pixel produces in the virtual world. For me it is also a problem, but my Cartesian space requires less complex math tricks from me.
Brendan wrote:If you have "(X, Y, Z) coordinate for the physical location of the pixel", then the only thing it's good for is calculating the altitude and azimuth angles.
No. Geometry allows me to calculate the exact point on the world-view sphere/cylinder. And the same can be said about angles - in some situations they are useful only for calculating the Cartesian coordinates of a pixel. But in fact both angles and Cartesian coordinates are translatable into each other.
Brendan wrote:You don't want to do this calculation for every pixel every frame - you want to do it once, and once you've done it the "(X, Y, Z) coordinates" can be discarded.
The same can be said about angles. But in Cartesian space I can describe a plane (for flat parts of a monitor) and use the same algorithm, with some minor corrections (a multiplication by a factor), for the whole plane.
Brendan wrote:Note that casting a ray for each individual pixel has 2 problems - there's no anti-aliasing (unless you cast 2 or more rays per pixel) and it has a lot of overhead.
In fact the sines and cosines will make your way look like a much more serious overhead. And of course, you will also be required to perform anti-aliasing using more rays.
Brendan wrote:To really fix the anti-aliasing problem you can find the edges of the pixel and cast a pyramid instead of a ray. This is extremely expensive (but gives very high quality results - equivalent to "infinite super-sampling").
It requires only 4 rays and a few multiplications and additions to determine the color. I just can't call such an approach "extremely expensive", because anti-aliasing always requires some additional work and is always several times more costly than a no-anti-aliasing solution.
Brendan wrote:To reduce overhead you can find a horizontal strip of pixels that happen to have the same altitude angle, cast a very wide pyramid (e.g. 300 pixels wide and 1 pixel tall), then sub-divide the resulting horizontal strip (using the azimuth angles).
As you can see, it is possible to find solutions that reduce the overhead. But with spherical coordinates it will be a much harder task to find a solution of similar complexity.

And as a final remark: for your way to be complete you need a third coordinate - the distance from an eye to the pixel - so the required set of parameters will always be 3 numbers: 2 angles and the distance. Next, you have to extend the spherical coordinate system into the virtual world and place the origin of that coordinate system at the location of an eye. In the case of the Cartesian system it is very cheap to place the world's origin anywhere and translate any coordinate into a new base using simple addition. In your case such a translation will look very ugly.
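To illustrate how cheap that translation is, a tiny sketch (invented names; just an illustration, not anybody's real code):

Code: Select all

#include <stdio.h>

typedef struct { double x, y, z; } Point;

/* Re-basing a Cartesian coordinate is just one addition/subtraction
 * per axis - no angles, no trigonometry. */
static Point rebase(Point p, Point new_origin)
{
    Point r = { p.x - new_origin.x, p.y - new_origin.y, p.z - new_origin.z };
    return r;
}

int main(void)
{
    Point p   = { 10.0, 5.0, 2.0 };   /* some point in the world */
    Point eye = {  1.0, 1.5, 0.0 };   /* new origin at the eye   */
    Point q   = rebase(p, eye);
    printf("(%.1f, %.1f, %.1f)\n", q.x, q.y, q.z);   /* (9.0, 3.5, 2.0) */
    return 0;
}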

Re: Concise Way to Describe Curved Surfaces

Posted: Mon Jun 29, 2015 9:00 am
by embryo2
bluemoon wrote:put it on a projector (as the screen), so you can easily see how the spot is translated.
And even simpler - if you look at a final solution and see some distortions, then the solution doesn't work correctly. So there's no need for a projector :)

But Antti's point was about an easy way to visualize distortions. Your point is about an easy way to demonstrate a distortion-free solution.

Re: Concise Way to Describe Curved Surfaces

Posted: Mon Jun 29, 2015 11:02 am
by Brendan
Hi,
embryo2 wrote:Because we both describe the same virtual world, it is possible to find a transformation from one model to the other. But your way of modeling the 3D world is more complex. Essentially, your way starts from spherical coordinates attached to some monitor and extends your vision out to the fragment of the world behind the monitor. My way is about Cartesian space and an abstraction of the world view expressed as a sphere or cylinder. The Cartesian space is much more intuitive and requires fewer calculations (fewer sines and cosines), so the greater complexity is your choice.
I'd store both angles as a vector from the camera at (0,0,0), which is only 2 values (X coord/azimuth and Y coord/altitude) because the third (Z) can always be 1. If you look at the formula for finding the intersection between a ray and a plane you'll see it doesn't involve any sine/cosine. Note that in my case the "world" that we're casting rays through does use a normal 3D Cartesian space. Essentially, the only sine/cosine involved is in the construction of the matrix used to transform vertices into "world co-ords".
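As a rough sketch of the sort of thing I mean (invented names, not my actual driver code):

Code: Select all

#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 scale(Vec3 v, double s) { Vec3 r = { v.x*s, v.y*s, v.z*s }; return r; }

/* Intersection of a plane with a ray that starts at the camera (0,0,0).
 * The ray's direction is "2 angles" stored as a vector with Z = 1.
 * Note: no sine/cosine anywhere, just multiplications and one division. */
static int ray_plane(Vec3 dir, Vec3 plane_point, Vec3 plane_normal, Vec3 *hit)
{
    double denom = dot(dir, plane_normal);
    if (denom == 0.0) return 0;                 /* ray parallel to plane */
    double t = dot(plane_point, plane_normal) / denom;
    if (t < 0.0) return 0;                      /* plane behind camera */
    *hit = scale(dir, t);
    return 1;
}

int main(void)
{
    Vec3 dir    = { 0.25, -0.10, 1.0 };         /* X = azimuth slope, Y = altitude slope */
    Vec3 point  = { 0.0, 0.0, 5.0 };            /* a plane 5 units in front of the camera */
    Vec3 normal = { 0.0, 0.0, -1.0 };
    Vec3 hit;
    if (ray_plane(dir, point, normal, &hit))
        printf("hit at (%f, %f, %f)\n", hit.x, hit.y, hit.z);
    return 0;
}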

I'm not entirely sure what your proposed alternative actually is. It seems to begin with transforming vertices into "world co-ords" in a 3D cartesian space, then doing "I don't know" to project polygons/pixels onto either a cylinder or sphere (which sounds insane given that straight lines end up being curves when mapped onto a cylinder or sphere), then doing "I don't know" to map pixels in that cylinder or sphere onto whatever shape the monitor happens to be (which sounds even more hideously complex than I'm able to imagine, and likely to cause unwanted artifacts and/or blur).
embryo2 wrote:
Brendan wrote:If you have "(X, Y, Z) coordinate for the physical location of the pixel", then the only thing it's good for is calculating the altitude and azimuth angles.
No. Geometry allows me to calculate the exact point on the world-view sphere/cylinder.
Um, what? Exactly how does "geometry" (an entire branch of mathematics) allow you to calculate the point on your sphere/cylinder without angles in any form?
embryo2 wrote:
Brendan wrote:To really fix the anti-aliasing problem you can find the edges of the pixel and cast a pyramid instead of a ray. This is extremely expensive (but gives very high quality results - equivalent to "infinite super-sampling").
It requires only 4 rays and a few multiplications and additions to determine the color. I just can't call such an approach "extremely expensive", because anti-aliasing always requires some additional work and is always several times more costly than a no-anti-aliasing solution.
You've misunderstood (which is understandable as my explanation was a little over-simplified). For each pixel I create a pyramid and clip all polygons (at least, all polygons that haven't already been clipped/discarded) against the edges of that pyramid, then do "overlapping triangle" hidden surface removal, to end up with a tiny square containing tiny triangles. The colour of the pixel is determined by doing "colour = sum of ( colour of each triangle * area of each triangle) / area of square". This is what gives results that are equivalent to "infinite super-sampling" (e.g. equivalent to casting an infinite number of rays for each pixel, without actually casting an infinite number of rays for each pixel, and without casting any rays at all). It's the clipping and "overlapping triangle" hidden surface removal that's extremely expensive.
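A sketch of just that final averaging step (assuming the clipping and hidden surface removal have already produced the tiny triangles; names invented):

Code: Select all

#include <stdio.h>

/* One tiny triangle left inside the pixel's square after the clipping
 * and "overlapping triangle" hidden surface removal. */
typedef struct { double r, g, b, area; } Frag;

/* colour = sum of (colour of each triangle * area of each triangle) / area of square */
static void pixel_colour(const Frag *f, int n, double square_area,
                         double *r, double *g, double *b)
{
    double sr = 0.0, sg = 0.0, sb = 0.0;
    for (int i = 0; i < n; i++) {
        sr += f[i].r * f[i].area;
        sg += f[i].g * f[i].area;
        sb += f[i].b * f[i].area;
    }
    *r = sr / square_area;
    *g = sg / square_area;
    *b = sb / square_area;
}

int main(void)
{
    /* Two fragments covering 70% and 30% of a unit-area pixel */
    Frag f[] = { { 1.0, 0.0, 0.0, 0.7 }, { 0.0, 0.0, 1.0, 0.3 } };
    double r, g, b;
    pixel_colour(f, 2, 1.0, &r, &g, &b);
    printf("pixel = (%.2f, %.2f, %.2f)\n", r, g, b);  /* (0.70, 0.00, 0.30) */
    return 0;
}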
embryo2 wrote:And as a final remark: for your way to be complete you need a third coordinate - the distance from an eye to the pixel - so the required set of parameters will always be 3 numbers: 2 angles and the distance.
No. Imagine a tiny picture of a goat that's extremely close to your eye. Now imagine a massive version of the same picture of a goat that's far away (but happens to cover the exact same part of your field of view). In both cases you see exactly the same thing.

Now imagine this:

Code: Select all

        GOAT
       /
      /
     /
    /
   /
  *
Where the '*' is your eye or the camera and the diagonal line is a ray from your eye to the goat.

If the monitor is a flat monitor it becomes like this:

Code: Select all

        GOAT
       /
     M/ ___ For this pixel, I only need to know the angle of this ray at this point
     M
    /M
   / M
  *  M
If the monitor is a flat monitor, but is at a strange angle it becomes like this:

Code: Select all

        GOAT
       /
    M / ___ For this pixel, I only need to know the angle of this ray at this point
     M
    / M
   /   M
  *     M

If the monitor is a bizarre convex curved monitor it becomes like this:

Code: Select all

        GOAT
       /
     M/ ___ For this pixel, I only need to know the angle of this ray at this point
     M
    / M
   /   MM
  *
If the monitor is a concave curved monitor it becomes like this:

Code: Select all

        GOAT
       /
    M / ___ For this pixel, I only need to know the angle of this ray at this point
     M
    / M
   /  M
  *   M
If the monitor is a freaky "S-bend" thing it becomes like this:

Code: Select all

        GOAT
       /
   MM / ___ For this pixel, I only need to know the angle of this ray at this point
     M
    / M
   /   MM
  *
To handle all different size and shape monitors, I only need to know how to cast the rays. I only need "altitude and azimuth" to do that. Distance is irrelevant.
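To make that concrete, a rough sketch (invented names and constants; the real driver would fill the table from the monitor's physical description, whatever its shape):

Code: Select all

#include <stdio.h>

#define WIDTH  1920
#define HEIGHT 1600

typedef struct { float x, y; } RayDir;       /* Z is implicitly 1 */

static RayDir ray_table[HEIGHT][WIDTH];      /* filled once, at driver init */

/* Hypothetical init for the simplest case: a flat, centred monitor,
 * perpendicular to the viewer. Pixel pitch and viewing distance give
 * the slopes directly. A curved monitor would just fill the table with
 * different numbers - the renderer never knows the difference, and the
 * eye-to-pixel distance is never stored anywhere. */
static void init_flat(float pitch_m, float distance_m)
{
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            ray_table[y][x].x = (x - WIDTH  / 2) * pitch_m / distance_m;
            ray_table[y][x].y = (y - HEIGHT / 2) * pitch_m / distance_m;
        }
    }
}

int main(void)
{
    init_flat(0.00025f, 0.6f);               /* 0.25 mm pixels, 60 cm away */
    printf("corner ray slope = (%f, %f, 1)\n",
           ray_table[0][0].x, ray_table[0][0].y);
    return 0;
}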


Cheers,

Brendan

Re: Concise Way to Describe Curved Surfaces

Posted: Tue Jun 30, 2015 5:14 am
by Antti
Antti wrote:I am waiting for the photos of this laboratory setup...
A little sidetrack from the discussion: I tried to find out the resolution of the widescreen monitor linked in the first post and got 2880x900 pixels. I created a simulated screenshot, printed it, and made a few experiments. This is not a useful contribution to the topic itself, but I share it anyway.

Re: Concise Way to Describe Curved Surfaces

Posted: Tue Jun 30, 2015 6:23 am
by embryo2
Brendan wrote:I'd store both angles as a vector from the camera at (0,0,0), which is only 2 values (X coord/azimuth and Y coord/altitude) because the third (Z) can always be 1. If you look at the formula for finding the intersection between a ray and a plane you'll see it doesn't involve any sine/cosine.
Well, if you are going to use angles, then how is it possible to find any intersection without trigonometry? And if I look at the formula for finding the intersection between a ray and a plane, then I see the formula for my model (Cartesian space), while all other formulas (with trigonometry involved) are just useless for me.
Brendan wrote:Note that in my case the "world" that we're casting rays through does use a normal 3D Cartesian space.
Now you see that using Cartesian space is much better for some situations. But you still reject the advice to jump into Cartesian space completely (including a monitor description without angles).
Brendan wrote:Essentially, the only sine/cosine involved is in the construction of the matrix used to transform vertices into "world co-ords".
The complete set of calculations is much bigger. And if you implement that set, then at every step you can compare your calculations with pure Cartesian-model calculations. But until a full implementation is available you will be missing the whole picture and showing me just some fragments, like the one in the example above. This example covers only a small part of the whole picture, while many more steps are required for a full, quality solution. And please do not ask me to show you all the steps (yes, I'm a bit lazy), because it wasn't my idea to create a set of monitor-based windows into a virtual world. But if you (as a man who supports his idea) show us the full set of calculations, then I will show you where your formulas are more complex than if only Cartesian space were used.
Brendan wrote:I'm not entirely sure what your proposed alternative actually is.
The proposed alternative is like this:

We define a cylinder by its center line and radius. Next we define an origin on the cylinder's center line. Next we define a pixel on the cylinder (knowing its size is enough for this). Next we construct a line per pixel corner; but because all pixels have neighbors, there will actually be one line per pixel. Each line is described by two points - the origin and the point on the cylinder.

The vertical coordinate is calculated by simple addition of a pixel's height, and the horizontal one after solving a simple triangle (maybe one sine would be preferable here over squares and a square root, but I haven't compared the formulas). Next there is a ray-tracing task that extracts a window from the virtual world, cornered by 4 lines. Lines are important here because the distance to an object (its covering triangle) can vary between the mentioned 4 lines, simply because the lines can intersect different objects. Here I omit the whole description of the virtual-world windowing calculations. Next, within the window, we calculate a weighted average for the target pixel color and store this color for future use. Now we have a complete picture of the world on our cylinder.

If we know the exact location of the monitor's pixels, then we can calculate the picture only for the monitor's projection onto the cylinder. And next, as was shown earlier, we calculate the color of every monitor pixel using 4 lines per pixel, averaging the pixel's color according to the area proportions on the cylinder's pixels. And of course, we can optimize the calculations by merging the pixel calculations for the cylinder and for the monitor, but the math here will be much harder.
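To show a small piece of this in code (only the "two points define a line" part; invented names, a sketch rather than the full algorithm):

Code: Select all

#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Point;

/* A line is just two points in Cartesian space: the origin on the
 * cylinder's center line and a point on the cylinder itself. */
typedef struct { Point a, b; } Line;

/* Any point on the line, by simple interpolation - no angles involved */
static Point line_at(Line l, double t)
{
    Point p = { l.a.x + t * (l.b.x - l.a.x),
                l.a.y + t * (l.b.y - l.a.y),
                l.a.z + t * (l.b.z - l.a.z) };
    return p;
}

int main(void)
{
    /* An invented cylinder of radius 1 around the Y axis: a point
     * (x, y, z) lies on it when x*x + z*z equals the radius squared,
     * so Z comes from a square root rather than a sine. */
    double radius = 1.0;
    Point origin = { 0.0, 0.0, 0.0 };
    Point on_cyl = { 0.6, 0.25, sqrt(radius * radius - 0.6 * 0.6) };
    Line  ray    = { origin, on_cyl };
    Point half   = line_at(ray, 0.5);
    printf("halfway point: (%f, %f, %f)\n", half.x, half.y, half.z);
    return 0;
}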
Brendan wrote:
embryo2 wrote:
Brendan wrote:If you have "(X, Y, Z) coordinate for the physical location of the pixel", then the only thing it's good for is calculating the altitude and azimuth angles.
No. Geometry allows me to calculate the exact point on the world-view sphere/cylinder.
Um, what? Exactly how does "geometry" (an entire branch of mathematics) allow you to calculate the point on your sphere/cylinder without angles in any form?
It is called analytic geometry. Here you can see the calculations without angles.
Brendan wrote:
embryo2 wrote:
Brendan wrote:To really fix the anti-aliasing problem you can find the edges of the pixel and cast a pyramid instead of a ray. This is extremely expensive (but gives very high quality results - equivalent to "infinite super-sampling").
It requires only 4 rays and a few multiplications and additions to determine the color. I just can't call such an approach "extremely expensive", because anti-aliasing always requires some additional work and is always several times more costly than a no-anti-aliasing solution.
You've misunderstood (which is understandable as my explanation was a little over-simplified). For each pixel I create a pyramid and clip all polygons (at least, all polygons that haven't already been clipped/discarded) against the edges of that pyramid, then do "overlapping triangle" hidden surface removal, to end up with a tiny square containing tiny triangles. The colour of the pixel is determined by doing "colour = sum of ( colour of each triangle * area of each triangle) / area of square". This is what gives results that are equivalent to "infinite super-sampling" (e.g. equivalent to casting an infinite number of rays for each pixel, without actually casting an infinite number of rays for each pixel, and without casting any rays at all). It's the clipping and "overlapping triangle" hidden surface removal that's extremely expensive.
It's not a misunderstanding; it's an omission of the required steps for brevity. I skipped the steps for getting the virtual-world window color for a pixel, but the base algorithm was the same as you describe in your answer - the same pyramid, clipping and other stuff, cornered by the mentioned 4 rays (or bordered by the window the rays define in the virtual world).
Brendan wrote:
embryo2 wrote:And as a final remark: for your way to be complete you need a third coordinate - the distance from an eye to the pixel - so the required set of parameters will always be 3 numbers: 2 angles and the distance.
No. Imagine a tiny picture of a goat that's extremely close to your eye. Now imagine a massive version of the same picture of a goat that's far away (but happens to cover the exact same part of your field of view). In both cases you see exactly the same thing.
No. Imagine that the goat has moved and now is far from the eye. What area should its head occupy on a screen? How many pixels should show the head? What colors should those pixels have? Now show us the solution without a distance.

Re: Concise Way to Describe Curved Surfaces

Posted: Tue Jun 30, 2015 6:26 am
by embryo2
Antti wrote:A little sidetrack from the discussion: I tried to find out the resolution of the widescreen monitor linked in the first post and got 2880x900 pixels. I created a simulated screenshot, printed it, and made a few experiments. This is not a useful contribution to the topic itself, but I share it anyway.
It can easily show what will happen if a user moves from the central point in the virtual world.

Re: Concise Way to Describe Curved Surfaces

Posted: Tue Jun 30, 2015 11:43 am
by Brendan
Hi,
embryo2 wrote:
Brendan wrote:I'd store both angles as a vector from the camera at (0,0,0), which is only 2 values (X coord/azimuth and Y coord/altitude) because the third (Z) can always be 1. If you look at the formula for finding the intersection between a ray and a plane you'll see it doesn't involve any sine/cosine.
Well, if you are going to use angles, then how is it possible to find any intersection without trigonometry?
I didn't say there would be no trigonometry; I only said there wouldn't be sine/cosine.

Please understand that there are many ways to store an angle. Degrees and radians are common. Another way is slope. For 3D, "2 angles" is the same as "2 slopes", and "2 slopes" can be stored as (e.g.) a unit vector, or any vector (e.g. a vector where X and Y represent the angles and Z is always 1). Essentially, there's no real difference between angles and vectors.
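For example (just to illustrate the equivalence; tan/atan is the trivial transformation between the forms):

Code: Select all

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* The same direction stored three ways */
    double angle  = 0.20;                    /* as an angle, in radians */
    double slope  = tan(angle);              /* as a slope              */
    double vec[3] = { slope, 0.0, 1.0 };     /* as a vector with Z = 1  */

    /* The transformations between the forms are trivial */
    printf("slope from vector: %f\n", vec[0] / vec[2]);
    printf("angle from slope:  %f radians\n", atan(slope));
    return 0;
}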
embryo2 wrote:And if I look at the formula for finding the intersection between a ray and a plane, then I see the formula for my model (Cartesian space), while all other formulas (with trigonometry involved) are just useless for me.
The formula for "intersection between plane and ray described by vector" that you'd be using is the exact same formula that I'd be using.
embryo2 wrote:
Brendan wrote:Essentially, the only sine/cosine involved is in the construction of the matrix used to transform vertices into "world co-ords".
The complete set of calculations is much bigger.
Yes; but the construction of the matrix used to transform vertices into "world co-ords", and the calculations used to create my "2 angles/vertex per pixel" during driver initialisation, are the only calculations that use sine/cosine.
embryo2 wrote:The proposed alternative is like this:

We define a cylinder by its center line and radius. Next we define an origin on the cylinder's center line. Next we define a pixel on the cylinder (knowing its size is enough for this). Next we construct a line per pixel corner; but because all pixels have neighbors, there will actually be one line per pixel. Each line is described by two points - the origin and the point on the cylinder.
In this case the point at the origin would be (0,0,0) and needn't be stored; and the point at the cylinder needs an X coord (let's call that "azimuth") and a Y coord (let's call that "altitude").
embryo2 wrote:Vertical coordinate is calculated by simple addition of a pixel's height and horizontal is calculated after solving a simple triangle (may be here one sine will be preferable over powers of 2 and square root, but I haven't compared the formulas). Next there is a ray tracing task, that extracts a window from a virtual world cornered by 4 lines. Lines here are important because the distance from an object (it's covering triangle) can vary for the mentioned 4 lines just because the lines can intersect different objects. Here I omit the whole description of the virtual world windowing calculations. Next in the window we calculate weighted average for the target pixel color and store this color for use in the future. Here we have a complete picture of a world on our cylinder.
This is essentially exactly what I'd be doing (just with pre-computed angles instead of regularly spaced angles). However, because I've used the monitor's angles and not the "angles for a cylindrical surface" I'm finished after this and I don't need to do extra work trying to map "monitor pixels" to "cylindrical surface pixels".
embryo2 wrote:If we know the exact location of the monitor's pixels, then we can calculate the picture only for the monitor's projection onto the cylinder. And next, as was shown earlier, we calculate the color of every monitor pixel using 4 lines per pixel, averaging the pixel's color according to the area proportions on the cylinder's pixels.
I'm doing "4 lines per monitor pixel"; and you're doing "4 lines per cylindrical surface pixel" followed by "4 lines per monitor pixel". It's extra unnecessary work.
embryo2 wrote:And of course, we can optimize the calculations by merging the pixel calculations for the cylinder and for the monitor, but the math here will be much harder.
No, the maths isn't harder - you just need the correct angles to start with, instead of using "wrong angles to start with, then correct angles for the unnecessary second attempt".
embryo2 wrote:
Brendan wrote:Um, what? Exactly how does "geometry" (an entire branch of mathematics) allow you to calculate the point on your sphere/cylinder without angles in any form?
It is called analytic geometry. Here you can see the calculations without angles.
See above - vectors (3D) and slopes (2D) are just another way to represent angles.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:And as a final remark: for your way to be complete you need a third coordinate - the distance from an eye to the pixel - so the required set of parameters will always be 3 numbers: 2 angles and the distance.
No. Imagine a tiny picture of a goat that's extremely close to your eye. Now imagine a massive version of the same picture of a goat that's far away (but happens to cover the exact same part of your field of view). In both cases you see exactly the same thing.
No. Imagine that the goat has moved and now is far from the eye. What area should its head occupy on a screen? How many pixels should show the head? What colors should those pixels have? Now show us the solution without a distance.
A bunch of bananas that are close to the camera; where both rays hit the bananas:

Code: Select all

        B
       /a
     M/ n
     M  a
    /M  n
   / M  a
  *--M--s
A bunch of bananas that are further away from the camera; where only one ray hits the bananas:

Code: Select all

             /
            /
           /
          /
         /
        /      B
       /       a
     M/        n
     M         a
    /M         n
   / M         a
  *--M---------s
For both cases, I only need to know the angle of the ray as it passes through the pixel.

Note: For ASCII art it's hard to show more than 2 rays. Imagine there's 1920 rays and not just 2 rays, where most rays hit the bananas when the bananas are close, and fewer rays hit the bananas when they're further away (making the bananas look smaller/take up fewer pixels).


Cheers,

Brendan

Re: Concise Way to Describe Curved Surfaces

Posted: Wed Jul 01, 2015 7:03 am
by embryo2
Brendan wrote:I didn't say there would be no trigonometry; I only said there wouldn't be sine/cosine.

Please understand that there are many ways to store an angle. Degrees and radians are common. Another way is slope. For 3D, "2 angles" is the same as "2 slopes", and "2 slopes" can be stored as (e.g.) a unit vector, or any vector (e.g. a vector where X and Y represent the angles and Z is always 1). Essentially, there's no real difference between angles and vectors.
If you use a unit vector, then you are in Cartesian space (as advised). But even while being in the right space you still talk about angles. Why? A unit vector is not an angle, a slope is not an angle, but degrees and radians are angles. And if something can be converted into an angle, that doesn't mean it is an angle. The sine of an angle can be converted back to the initial angle, but who names it "an angle"? It is always a sine.
Brendan wrote:The formula for "intersection between plane and ray described by vector" that you'd be using is the exact same formula that I'd be using.
Ok, you have made one more step into Cartesian space.
Brendan wrote:the construction of the matrix used to transform vertices into "world co-ords", and the calculations used to create my "2 angles/vertex per pixel" during driver initialisation, are the only calculations that use sine/cosine.
Have you at least written a list of the actions required for virtual reality of acceptable quality? Even if you miss some important steps, it is still highly probable that you will notice the long story ahead of you; yet you ultimately claim to know that these are "the only calculations that use sine/cosine". Even if you use analytic geometry only, you can find some non-optimal calculation steps that could be replaced with steps involving sines or cosines (or derivatives of them).

However, if you pick your angles and immediately translate them into something Cartesian-friendly (or even pure analytic geometry), then again - why use the angles at all?

And one more problem: what if a user decides to move? In my case only the point of origin changes. In your case you are required to recalculate all the angles for all your monitors, and to repeat that for every user movement. That's very efficient, isn't it?
Brendan wrote:In this case the point at the origin would be (0,0,0) and needn't be stored;
Yes, but not always.
Brendan wrote:the point at the cylinder needs an X coord (let's call that "azimuth") and a Y coord (let's call that "altitude").
No, the point needs X, Y and Z coordinates in Cartesian space. Because if you take a diagonal on the cylinder and try to work with pixels on it, then the missing Z coordinate will remind you of itself with an ugly image.
Brendan wrote:because I've used the monitor's angles and not the "angles for a cylindrical surface" I'm finished after this and I don't need to do extra work trying to map "monitor pixels" to "cylindrical surface pixels".
It's not because of the angles. I mentioned before that it is possible to optimize the solution, but then the calculation of the resulting color of the skewed virtual-world window will be less intuitive and more complex. But yes, it is possible to skip the cylinder-coloring step.
Brendan wrote:
embryo2 wrote:And of course, we can optimize the calculations by merging the pixel calculations for the cylinder and for the monitor, but the math here will be much harder.
No, the maths isn't harder - you just need the correct angles to start with, instead of using "wrong angles to start with, then correct angles for the unnecessary second attempt".
I still advise you to write down all the required steps without leaving a hole in the process; then you will see that the complexity just moves to another place, but does not disappear.
Brendan wrote:
embryo2 wrote:Imagine that the goat has moved and now is far from the eye. What area should its head occupy on a screen? How many pixels should show the head? What colors should those pixels have? Now show us the solution without a distance.
A bunch of bananas that are close to the camera...
If you like bananas, then please tell me what exact number of pixels is required to show the banana. But do not go down the path of introducing new useless entities (like a living being from the insect world, or even an alien).

Re: Concise Way to Describe Curved Surfaces

Posted: Wed Jul 01, 2015 8:55 am
by Brendan
Hi,
embryo2 wrote:
Brendan wrote:I didn't say there would be no trigonometry; I only said there wouldn't be sine/cosine.

Please understand that there are many ways to store an angle. Degrees and radians are common. Another way is slope. For 3D, "2 angles" is the same as "2 slopes", and "2 slopes" can be stored as (e.g.) a unit vector, or any vector (e.g. a vector where X and Y represent the angles and Z is always 1). Essentially, there's no real difference between angles and vectors.
If you use a unit vector, then you are in Cartesian space (as advised). But even while being in the right space you still talk about angles. Why? A unit vector is not an angle, a slope is not an angle, but degrees and radians are angles. And if something can be converted into an angle, that doesn't mean it is an angle. The sine of an angle can be converted back to the initial angle, but who names it "an angle"? It is always a sine.
There's probably 100 different ways of storing the exact same information (a direction). They're not different things; they're related via a (typically trivial) transformation. If I decide to store angles in radians and not degrees, it's still an angle. If I decide to divide a circle into 65536 steps and have "N 65536ths of a circle" it's still an angle. If I decide to store the information as "reciprocal of radians" it's still an angle. More specifically, if I decide to store it as a slope it's still an angle. It's just yet another different way of encoding the exact same information.
embryo2 wrote:
Brendan wrote:The formula for "intersection between plane and ray described by vector" that you'd be using is the exact same formula that I'd be using.
Ok, you have made one more step into Cartesian space.
I haven't done one more step towards anything. Nothing I've said has changed since I first said it.
embryo2 wrote:
Brendan wrote:the construction of the matrix used to transform vertices into "world co-ords", and the calculations used to create my "2 angles/vertex per pixel" during driver initialisation, are the only calculations that use sine/cosine.
Have you at least written a list of the actions required for virtual reality of acceptable quality?
Don't be foolish. I'm only talking (so far) about getting "meshes of polygons" displayed correctly (and mostly only because you wanted me to define what I meant by "distortion free").
embryo2 wrote:Even if you miss some important steps, it is still highly probable that you will notice the long story ahead of you; yet you ultimately claim to know that these are "the only calculations that use sine/cosine". Even if you use analytic geometry only, you can find some non-optimal calculation steps that could be replaced with steps involving sines or cosines (or derivatives of them).
For software rendering, especially on older CPUs (without SSE/AVX), it's likely I'll be struggling to get acceptable frame rates just for "bare minimum meshes of polygons". The next step is dynamic lighting, which I can do without sine/cosine. After that will be volumetric fog and transparent textures, which I can do without sine/cosine. Beyond that, I really couldn't care less.
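For example, basic diffuse lighting is just a dot product - a sketch (not my actual renderer):

Code: Select all

#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalise(Vec3 v)
{
    double len = sqrt(dot(v, v));            /* a square root, but no sin/cos */
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

int main(void)
{
    Vec3 normal   = normalise((Vec3){ 0.0, 1.0, 0.0 });  /* surface faces up   */
    Vec3 to_light = normalise((Vec3){ 1.0, 1.0, 0.0 });  /* light up and right */

    /* Diffuse brightness is the cosine of the angle between the two
     * vectors - but the dot product of unit vectors gives that cosine
     * directly, so the angle itself is never computed. */
    double brightness = dot(normal, to_light);
    if (brightness < 0.0) brightness = 0.0;
    printf("diffuse brightness = %f\n", brightness);     /* ~0.707 */
    return 0;
}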
embryo2 wrote:However, if you pick your angles and immediately translate them into something Cartesian-friendly (or even pure analytic geometry), then again - why use the angles at all?
I don't immediately transform my "angles described by vectors" into anything else.
embryo2 wrote:And one more problem: what if a user decides to move? In my case only the point of origin changes. In your case you are required to recalculate all the angles for all your monitors, and to repeat that for every user movement. That's very efficient, isn't it?
There's 2 main cases here - "desktop" (including laptop on a desk) and VR helmets. All other cases are so rare that it's safe to ignore them.

For the first case; if the user decides to move it's impossible for the OS to know they've moved so it's impossible for the OS to do anything about it. Fortunately for this case the user never really moves while they're using the computer anyway (do you think I'm doing back-flips and barrel rolls while I'm typing this?).

For the second case (where you do have motion tracking) the motion tracking only tells the OS where the "virtual camera" needs to be moved. The display is stationary relative to the user's eyes, and therefore the angles from pixels to camera/eyes are unaffected by user movement.

Note that I am planning a similar approach for 3D sound - e.g. OS "knows" where speakers are in relation to the user and uses this information to map "sounds in virtual space" to the most appropriate speakers.
embryo2 wrote:
Brendan wrote:In this case the point at the origin would be (0,0,0) and needn't be stored;
Yes, but not always.
The only case where it's not is "stereoscopic", where one display's origin is at (0,0,0) and the other display's origin is at a known/fixed point near that (depending on the interpupillary distance).
embryo2 wrote:
Brendan wrote:
embryo2 wrote:And of course, we can optimize the calculations by merging the pixel calculations for the cylinder and for the monitor, but the math here will be much harder.
No, the maths isn't harder - you just need the correct angles to start with, instead of using "wrong angles to start with, then correct angles for the unnecessary second attempt".
I still advise you to write down all the required steps without leaving a hole in the process; then you will see that the complexity just moves to another place, but does not disappear.
The only maths that matters is the following equation:

Code: Select all

direction of ray from object (through pixel) to camera in the virtual world = direction of ray from pixel to eye in real world
Everything else (volumetric fog, shadows, reflection/refraction, whatever) either happens in the Cartesian space of the virtual world; or is an effect of the ray hitting/passing through something in the Cartesian space of the virtual world; and is not affected by how the angle/s of rays are selected in any way.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:Imagine that the goat has moved and now is far from the eye. What area should its head occupy on a screen? How many pixels should show the head? What colors should those pixels have? Now show us the solution without a distance.
A bunch of bananas that are close to the camera...
If you like bananas, then please tell me what exact number of pixels is required to show the banana. But do not go down the path of introducing new useless entities (like a living being from the insect world, or even an alien).
Either you're trying to understand basic theory and don't need to know a specific number of pixels; or you're deliberately wasting my time with pedantry. Either way, the exact number of pixels is 123 (for a specific/unknown bunch of bananas, at a specific/unknown distance from the camera, at a specific/unknown screen resolution).

Note: I switched from "goat" to "bananas" because I needed a word long enough to hit both rays in the previous "close to camera" diagram. With "goat" (4 characters) I would've ended up with those 4 characters and the rays and the monitor all in the same place.


Cheers,

Brendan

Re: Concise Way to Describe Curved Surfaces

Posted: Thu Jul 02, 2015 3:55 am
by embryo2
Brendan wrote:There's probably 100 different ways of storing the exact same information
Yes, but people prefer to call an angle "an angle", a sine "a sine", and so on. But here you tell us that you prefer to call an angle "an angle" and a sine yet another "angle". Well, it's your way, but people prefer something different.

In my case two points in the 3D world can also be expressed using some angle-involving transformation, so, by your terms, I also use angles. Is that pedantry? If yes, then what about your posts demanding a clear separation of entities?
Brendan wrote:I'm only talking (so far) about getting "meshes of polygons" displayed correctly (and mostly only because you wanted me to define what I meant by "distortion free").
Ok, I am the problem, but your angles for every monitor are still less efficient than my points in the 3D world. And of course, you shouldn't care if you don't want to care.
Brendan wrote:For software rendering, especially on older CPUs (without SSE/AVX), it's likely I'll be struggling to get acceptable frame rates just for "bare minimum meshes of polygons".
Yes, 3D graphics is expensive.
Brendan wrote:
embryo2 wrote:And one more problem: what if a user decides to move? In my case only the point of origin changes. In your case you are required to recalculate all the angles for all your monitors, and to repeat that for every user movement. That's very efficient, isn't it?
There's 2 main cases here - "desktop" (including laptop on a desk) and VR helmets. All other cases are so rare that it's safe to ignore them.

For the first case; if the user decides to move it's impossible for the OS to know they've moved so it's impossible for the OS to do anything about it. Fortunately for this case the user never really moves while they're using the computer anyway (do you think I'm doing back-flips and barrel rolls while I'm typing this?).
Modern sensors (IR lasers or even Bluetooth-based) allow local position tracking. The position accuracy can be questioned, but such solutions have existed for more than a decade. So it is possible to look through a virtual window in different directions after moving along the monitor.
Brendan wrote:Note that I am planning a similar approach for 3D sound - e.g. OS "knows" where speakers are in relation to the user and uses this information to map "sounds in virtual space" to the most appropriate speakers.
Sound is modeled inaccurately by a direction-only approach. For low-frequency waves there's very little dependence on direction.
Brendan wrote:The only maths that matters is the following equation:

Code: Select all

direction of ray from object (through pixel) to camera in the virtual world = direction of ray from pixel to eye in real world
Everything else (volumetric fog, shadows, reflection/refraction, whatever) either happens in the Cartesian space of the virtual world; or is an effect of the ray hitting/passing through something in the Cartesian space of the virtual world; and is not affected by how the angle/s of rays are selected in any way.
Then you need a transform from angles to the Cartesian coordinates in the virtual world. Even if you express the angles as unit vectors, you need 4 lines through the corners of a pixel, so you need a transform from a unit vector to a line. In my case two points define a line directly.
Brendan wrote:the exact number of pixels is 123 (for a specific/unknown bunch of bananas, at a specific/unknown distance from the camera, at a specific/unknown screen resolution).
The word in bold ("distance") shows me that you see the importance of the third coordinate.
Brendan wrote:Note: I switched from "goat" to "bananas" because I needed a word long enough to hit both rays in the previous "close to camera" diagram. With "goat" (4 characters) I would've ended up with those 4 characters and the rays and the monitor all in the same place.
I insist on calculating the pixel count, so that the need for the third coordinate becomes visible.

Re: Concise Way to Describe Curved Surfaces

Posted: Thu Jul 02, 2015 6:30 am
by Brendan
Hi,
embryo2 wrote:
Brendan wrote:For software rendering, especially on older CPUs (without SSE/AVX), it's likely I'll be struggling to get acceptable frame rates just for "bare minimum meshes of polygons".
Yes, 3D graphics is expensive.
What I meant was "expensive relative to traditional methods".

For traditional methods you transform "vertices in world" directly into "2D coords on screen + depth", and you don't need to cast rays or pyramids (and just draw polygons with Z testing). Casting rays or pyramids is extra work that makes it much more expensive than normal (but works for "non-flat" screens where the traditional method doesn't).
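Roughly like this (a sketch of the traditional method only; names and constants are invented):

Code: Select all

#include <stdio.h>

typedef struct { double x, y, z; } Vertex;   /* already in camera space */

/* Traditional method: a perspective divide takes a vertex straight to
 * "2D screen coords + depth"; no rays or pyramids are cast at all. */
static void project(Vertex v, double focal, int w, int h,
                    double *sx, double *sy, double *depth)
{
    *sx    = (v.x / v.z) * focal + w / 2.0;
    *sy    = (v.y / v.z) * focal + h / 2.0;
    *depth = v.z;                            /* kept for Z testing */
}

int main(void)
{
    Vertex v = { 1.0, 0.5, 4.0 };
    double sx, sy, depth;
    project(v, 800.0, 1920, 1600, &sx, &sy, &depth);
    printf("screen (%.1f, %.1f), depth %.1f\n", sx, sy, depth);
    return 0;
}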
embryo2 wrote:
Brendan wrote:
embryo2 wrote:And one more problem: what if a user decides to move? In my case only the point of origin changes. In your case you are required to recalculate all the angles for all your monitors, and to repeat that for every user movement. That's very efficient, isn't it?
There's 2 main cases here - "desktop" (including laptop on a desk) and VR helmets. All other cases are so rare that it's safe to ignore them.

For the first case; if the user decides to move it's impossible for the OS to know they've moved so it's impossible for the OS to do anything about it. Fortunately for this case the user never really moves while they're using the computer anyway (do you think I'm doing back-flips and barrel rolls while I'm typing this?).
Modern sensors (IR lasers or even Bluetooth-based) allow local position tracking. The position accuracy can be questioned, but such solutions have existed for more than a decade. So it is possible to look through a virtual window in different directions after moving along the monitor.
While solutions may have existed for decades, the number of people that actually have the necessary hardware (at least for general purpose PC compatible home/office use cases) is so close to 0% that it's safe to ignore. If I were doing an OS specifically for X-Box (where it's much more likely that the user has a Kinect jammed at the back of a cupboard because it's useless for most types of games), I'd still doubt it's worth bothering with.

The most common use case (for normal office/GUI apps and for 3D games) is "user on chair at desk, with keyboard, mouse (or maybe joystick) and monitor/s in front of them". If there is no VR helmet, it's the "user on chair at desk" use case that I'm designing for.
embryo2 wrote:
Brendan wrote:Note that I am planning a similar approach for 3D sound - e.g. OS "knows" where speakers are in relation to the user and uses this information to map "sounds in virtual space" to the most appropriate speakers.
Sound is modeled inaccurately by a direction-only approach. For low-frequency waves there's very little dependence on direction.
Yes - to model sound correctly (e.g. taking into account things like the Doppler effect, and echo caused by sounds being reflected by walls/ceiling/floor) you need a lot more than just a single "source of sound" co-ordinate. I still need to do more research in this area; but that doesn't change the fact that I am planning a 3D sound system (e.g. where OS maps the sound/s to whatever speakers are present based on the position and capabilities of individual speakers).

Mostly the only thing I'm saying here is that for the sound system I'm planning (just like the video system I'm planning) the OS has to know where devices are in relation to the user for it to be ideal.
embryo2 wrote:
Brendan wrote:The only maths that matters is the following equation:

Code: Select all

direction of ray from object (through pixel) to camera in the virtual world = direction of ray from pixel to eye in real world
Everything else (volumetric fog, shadows, reflection/refraction, whatever) either happens in the Cartesian space of the virtual world; or is an effect of the ray hitting/passing through something in the Cartesian space of the virtual world; and is not affected by how the angle/s of rays are selected in any way.
Then you need a transform from angles to the Cartesian coordinates in the virtual world. Even if you express the angles as unit vectors, you need 4 lines through the corners of a pixel, so you need a transform from a unit vector to a line. In my case two points define a line directly.
For my method; I'd need 4 lines (one for each corner of a pixel), but those lines are shared between pixels so (e.g.) if the video mode is 1920 * 1600 I'd actually need 1921 * 1601 lines (or a little over 3 million lines).

For your method; you need 4 lines (one for each corner of a "unnecessary pixel in useless cylindrical surface"), but those lines are shared between pixels. However, because you're re-sampling these unnecessary pixels you're going to need more pixels to avoid unwanted artefacts; so (e.g.) if the video mode is 1920 * 1600 you're probably going to need 3840*3200 pixels for that "unnecessary cylindrical surface" and 3841 * 3201 lines (or a little over 12 million lines). Then (on top of that) you're going to need an additional 1920*1600 lines (or is it 1921 * 1601 lines?) to map "pixels on actual display" to those "pixels on useless cylindrical surface".

In other words (assuming 1920 * 1600) I'd need a little over 3 million lines, and you'd need a little over 15 million lines for "only slightly worse quality".
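Spelling the arithmetic out (taking the larger of the two options for the extra mapping lines):

Code: Select all

my method:    1921 * 1601               =  3,075,521 lines (a little over 3 million)
your method:  3841 * 3201 + 1921 * 1601 = 15,370,562 lines (a little over 15 million)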
embryo2 wrote:
Brendan wrote:the exact number of pixels is 123 (for a specific/unknown bunch of bananas, at a specific/unknown distance from the camera, at a specific/unknown screen resolution).
The word in bold ("distance") shows me that you see the importance of the third coordinate.
No; in that sentence it's the distance from the camera to the object in the "world" Cartesian space; which has nothing at all to do with the distance from the camera/eye to the pixel (or the distance from pixel to "object in world").
embryo2 wrote:
Brendan wrote:Note: I switched from "goat" to "bananas" because I needed a word long enough to hit both rays in the previous "close to camera" diagram. With "goat" (4 characters) I would've ended up with those 4 characters and the rays and the monitor all in the same place.
I insist on calculating the pixel count, so that the need for the third coordinate becomes visible.
I've tried my best to explain that "distance from camera to pixel" is completely unnecessary (and only "direction of ray from camera/eye" is needed). If you still don't understand, then I apologise, but I can't think of a way to make this more obvious.


Cheers,

Brendan

Re: Concise Way to Describe Curved Surfaces

Posted: Fri Jul 03, 2015 5:52 am
by embryo2
Brendan wrote:What I meant was "expensive relative to traditional methods".

For traditional methods you transform "vertices in world" directly into "2D coords on screen + depth", and you don't need to cast rays or pyramids (and just draw polygons with Z testing). Casting rays or pyramids is extra work that makes it much more expensive than normal (but works for "non-flat" screens where the traditional method doesn't).
It should also work for the two-stage approach with a cylinder/sphere, because the cylinder's "screen" is essentially the same 2D thing you mentioned. And the next step is just an intersection of rectangles in 2D.
Brendan wrote:While solutions may have existed for decades, the number of people that actually have the necessary hardware (at least for general purpose PC compatible home/office use cases) is so close to 0% that it's safe to ignore.
Head position tracking is a necessary part of every VR headset. With the Oculus Rift priced near $350, the tracking subsystem should cost many times less ($35, for example). And the market for such headsets is growing at the speed of light (99% growth rate). So I don't see how you can get numbers "close to 0%" when speaking about virtual reality.
Brendan wrote:The most common use case (for normal office/GUI apps and for 3D games) is "user on chair at desk, with keyboard, mouse (or maybe joystick) and monitor/s in front of them". If there is no VR helmet, it's the "user on chair at desk" use case that I'm designing for.
I consider this a starting case for a quality VR system. But all cases beyond the basic one should include head tracking.
Brendan wrote:Mostly the only thing I'm saying here is that for the sound system I'm planning (just like the video system I'm planning) the OS has to know where devices are in relation to the user for it to be ideal.
I see you plan to represent a lot of detail in a virtual world, but you still have no complete picture of every detail in mind. That most probably means you strongly underestimate the effort required for a full implementation of your plan. But if you have a lot of time, then why not?
Brendan wrote:
embryo2 wrote:Then you need a transform from angles to the Cartesian coordinates in the virtual world. Even if you express the angles as unit vectors, you need 4 lines through the corners of a pixel, so you need a transform from a unit vector to a line. In my case two points define a line directly.
For my method; I'd need 4 lines (one for each corner of a pixel), but those lines are shared between pixels so (e.g.) if the video mode is 1920 * 1600 I'd actually need 1921 * 1601 lines (or a little over 3 million lines).

For your method; you need 4 lines (one for each corner of a "unnecessary pixel in useless cylindrical surface"), but those lines are shared between pixels. However, because you're re-sampling these unnecessary pixels you're going to need more pixels to avoid unwanted artefacts; so (e.g.) if the video mode is 1920 * 1600 you're probably going to need 3840*3200 pixels for that "unnecessary cylindrical surface" and 3841 * 3201 lines (or a little over 12 million lines). Then (on top of that) you're going to need an additional 1920*1600 lines (or is it 1921 * 1601 lines?) to map "pixels on actual display" to those "pixels on useless cylindrical surface".

In other words (assuming 1920 * 1600) I'd need a little over 3 million lines, and you'd need a little over 15 million lines for "only slightly worse quality".
First, you still need the transform from a unit vector to a line. And second, your line-number estimation compares one incomplete algorithm with another incomplete one, while a complete algorithm includes all steps from the world description to its visualization on all monitors in use. I haven't created the full algorithm for such visualization (just like you), so it is too early to compare final results. But of course, you can always find some intermediate step that looks inefficient even when the whole algorithm performs efficiently. For example, consider scaling an image of higher resolution down to a screen of lower resolution: the quality of the resulting image will be better than that of the same image constructed directly from the VR world at the monitor's lower resolution, without the intermediate higher-resolution image.
Brendan wrote:
embryo2 wrote:I insist on calculating the pixel count, so that the need for the third coordinate becomes visible.
I've tried my best to explain that "distance from camera to pixel" is completely unnecessary (and only "direction of ray from camera/eye" is needed). If you still don't understand, then I apologise, but I can't think of a way to make this more obvious.
If you try to calculate the number of pixels then you will see the need for the distance. But if you need an explanatory example then here it is:

When a banana is 100 meters away, it would most probably occupy no more than one pixel on a screen 0.5 meters away. But if the banana is 0.5 meters away, then it would most probably occupy hundreds of screens 100 meters away. So if you delete the distance from the example, you can see the distortion you introduce with such an unrestricted intrusion into a world of complex interdependencies.
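To make the numbers concrete, a rough estimate (the banana size is invented, and the screen is assumed to cover a fixed 90 degree field of view with 1920 pixels):

Code: Select all

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double banana_m      = 0.2;                       /* invented banana size */
    double px_per_radian = 1920.0 / (PI / 2.0);       /* 90 degrees = pi/2    */
    double distances[]   = { 0.5, 100.0 };            /* eye-to-banana        */

    for (int i = 0; i < 2; i++) {
        /* Angular size of the banana as seen from the eye; the only
         * distance that enters is eye-to-object. */
        double angle = 2.0 * atan(banana_m / (2.0 * distances[i]));
        printf("banana %6.1f m away: ~%.0f pixels wide\n",
               distances[i], angle * px_per_radian);
    }
    return 0;
}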

Re: Concise Way to Describe Curved Surfaces

Posted: Fri Jul 03, 2015 8:31 am
by Brendan
Hi,
embryo2 wrote:
Brendan wrote:What I meant was "expensive relative to traditional methods".

For traditional methods you transform "vertices in world" directly into "2D coords on screen + depth", and you don't need to cast rays or pyramids (and just draw polygons with Z testing). Casting rays or pyramids is extra work that makes it much more expensive than normal (but works for "non-flat" screens where the traditional method doesn't).
It should also work for the two-stage approach with a cylinder/sphere, because the cylinder's "screen" is essentially the same 2D thing you mentioned. And the next step is just an intersection of rectangles in 2D.
A straight line (e.g. between 2 vertices in a 3D Cartesian space) becomes a curved line on the cylinder/sphere; so the traditional method won't work correctly for a cylinder/sphere.
embryo2 wrote:
Brendan wrote:While solutions may have existed for decades, the number of people that actually have the necessary hardware (at least for general purpose PC compatible home/office use cases) is so close to 0% that it's safe to ignore.
Head position tracking is a necessary part of every VR headset. With the Oculus Rift priced near $350, the tracking subsystem should cost many times less ($35, for example). And the market for such headsets is growing at the speed of light (99% growth rate). So I don't see how you can get numbers "close to 0%" when speaking about virtual reality.
Erm.

For the "no VR helmet case" the chance of the user having position tracking hardware is close to 0% and the chance that the user is moving around and isn't sitting in a chair at the desk is also close to 0%; so caring about the user moving in relation to the display is both impossible and pointless.

For the "VR helmet case" the monitor is stationary relative to the user's head; so there's no need to care about user moving in relation to the display.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:Then you need a transform from angles to the Cartesian coordinates in the virtual world. Even if you express the angles as unit vectors, you need 4 lines through the corners of a pixel, so you need a transform from a unit vector to a line. In my case two points define a line directly.
For my method; I'd need 4 lines (one for each corner of a pixel), but those lines are shared between pixels so (e.g.) if the video mode is 1920 * 1600 I'd actually need 1921 * 1601 lines (or a little over 3 million lines).

For your method; you need 4 lines (one for each corner of a "unnecessary pixel in useless cylindrical surface"), but those lines are shared between pixels. However, because you're re-sampling these unnecessary pixels you're going to need more pixels to avoid unwanted artefacts; so (e.g.) if the video mode is 1920 * 1600 you're probably going to need 3840*3200 pixels for that "unnecessary cylindrical surface" and 3841 * 3201 lines (or a little over 12 million lines). Then (on top of that) you're going to need an additional 1920*1600 lines (or is it 1921 * 1601 lines?) to map "pixels on actual display" to those "pixels on useless cylindrical surface".

In other words (assuming 1920 * 1600) I'd need a little over 3 million lines, and you'd need a little over 15 million lines for "only slightly worse quality".
First, you still need the transform from a unit vector to a line.
No, the formula for "intersection between ray and plane" uses a vector for the ray and needn't be converted.
embryo2 wrote:And second, your line-number estimation compares one incomplete algorithm with another incomplete one, while a complete algorithm includes all steps from the world description to its visualization on all monitors in use.
The steps leading up to "description of world in 3D Cartesian space" are the same for your method and mine (whatever those steps might be) and aren't a difference between our methods. The steps from "description of world in 3D Cartesian space" to "colour for each pixel" are different for the two methods, and this is where you're using more lines for worse quality.

Think of it like this: I do "A then B" and you do "A then C", where A is the same and is therefore irrelevant for the comparison, and C is worse than B.
embryo2 wrote:When a banana is 100 meters away, it would most probably occupy no more than one pixel on a screen 0.5 meters away. But if the banana is 0.5 meters away, then it would most probably occupy hundreds of screens 100 meters away. So if you delete the distance from the example, you can see the distortion you introduce with such an unrestricted intrusion into a world of complex interdependencies.
Please read my previous post. The distance from banana to camera (e.g. 100 meters) is important; but this has nothing to do with what you're complaining about. The distance from camera to pixel (0.5 meters) and the distance from pixel to banana (99.5 meters) are what you suggested are needed, and it's these distances that are completely unnecessary.


Cheers,

Brendan

Re: Concise Way to Describe Curved Surfaces

Posted: Sun Jul 05, 2015 2:49 am
by JulienDarc
What about another way to "see" it:

What if the code were left unchanged, but the angle was worked on to let your brain do all the hard work?

I mean, it all boils down to "average use". People are paid a lot to collect that data and do testing.

The angles shown in the images on the first page are no accident: your brain can correct the image easily. The distance people sit from such a screen is known on average. Hence, you just have to pick the surface angle that 90% of people will be able to correct easily.

The human brain / vision system is a fucking machine.

Ask an ophthalmologist. He/she will tell you.