Hi,
embryo2 wrote:Brendan wrote:I'd store both angles as a vector from the camera at (0,0,0), which is only 2 values (X coord/azimuth and Y coord/altitude) because the third (Z) can always be 1. If you look at the formula for finding the intersection between a ray and a plane you'll see it doesn't involve any sine/cosine.
Well, if you are going to use angles, then how it is possible to find any intersection without trigonometry?
I didn't say there would be no trigonometry; I only said there wouldn't be sine/cosine.
Please understand that there are many ways to store an angle. Degrees and radians are common. Another way is slope. For 3D, "2 angles" is the same as "2 slopes", and "2 slopes" can be stored as (e.g.) a unit vector, or any vector (e.g. a vector where X and Y represent the two slopes and Z is always 1). Essentially, there's no real difference between angles and vectors.
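As a rough sketch of what I mean (the struct and function names here are made up purely for illustration, not taken from any real code), the "2 angles" form and the "vector where Z is always 1" form carry exactly the same information, and converting between them is trivial:
Code: Select all
#include <math.h>

/* Illustration only: the same ray direction stored two ways -
   "2 angles" (azimuth, altitude) versus a vector where Z is always 1. */

typedef struct { double azimuth, altitude; } AngleForm;   /* radians */
typedef struct { double x, y, z; } VectorForm;            /* z == 1.0 */

static VectorForm angles_to_vector(AngleForm a) {
    /* The X and Y components are just the slopes (tangents) of the angles */
    VectorForm v = { tan(a.azimuth), tan(a.altitude), 1.0 };
    return v;
}

static AngleForm vector_to_angles(VectorForm v) {
    /* Recover the angles from the slopes; only needed if you ever want
       degrees/radians back - the renderer itself can stay in vector form */
    AngleForm a = { atan(v.x / v.z), atan(v.y / v.z) };
    return a;
}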
embryo2 wrote:And if I look at the formula for finding the intersection between a ray and a plane then I see the formula for my model (cartesian space), but all other formulas (with trigonometry involved) are just useless for me.
The formula for "intersection between plane and ray described by vector" that you'd be using is the exact same formula that I'd be using.
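To be concrete (a sketch with my own names for things): with the plane written as "dot(N, P) = d" and the ray as "P = t * V" starting at the camera at (0,0,0), finding the intersection takes a few multiplies, adds and one divide - no sine/cosine anywhere:
Code: Select all
/* Sketch: intersection of a plane (dot(n, p) = d) with a ray p = t*v
   starting at the camera at (0,0,0). Returns 0 if there's no hit,
   otherwise stores the hit point in *hit. */

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static int ray_plane(Vec3 v, Vec3 n, double d, Vec3 *hit) {
    double denom = dot(n, v);
    if (denom == 0.0) return 0;      /* ray is parallel to the plane */
    double t = d / denom;            /* camera at origin, so t = d / dot(n,v) */
    if (t < 0.0) return 0;           /* plane is behind the camera */
    hit->x = t * v.x;
    hit->y = t * v.y;
    hit->z = t * v.z;
    return 1;
}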
embryo2 wrote:Brendan wrote:Essentially, the only sine/cosine involved is in the construction of the matrix used to transform vertices into "world co-ords".
The complete set of calculations is much bigger.
Yes; but the construction of the matrix used to transform vertices into "world co-ords", and the calculations used to create my "2 angles/vertex per pixel" during driver initialisation, are the only calculations that use sine/cosine.
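For illustration only (made-up names, nothing from my actual renderer), this is the kind of place where sine/cosine does appear - it runs once when a matrix is built, not once per ray or once per pixel:
Code: Select all
#include <math.h>

/* Sketch: the only place sine/cosine gets used - building a rotation
   matrix (here yaw around Y, then pitch around X) used to transform
   vertices into world co-ords. Applying the matrix is multiply/add only. */

typedef struct { double m[3][3]; } Mat3;

static Mat3 make_rotation(double yaw, double pitch) {
    double cy = cos(yaw),   sy = sin(yaw);
    double cp = cos(pitch), sp = sin(pitch);
    Mat3 r = {{
        {  cy,  sy*sp,  sy*cp },
        { 0.0,     cp,    -sp },
        { -sy,  cy*sp,  cy*cp },
    }};
    return r;
}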
embryo2 wrote:The proposed alternative is like this:
We define a cylinder with it's center line and radius. Next we define an origin on the cylinder's center line. Next we define a pixel on the cylinder (knowing it's size is enough for this). Next we construct a line per every pixel corner, but because all pixels have neighbors, there actually will be a line per pixel. Each line is described with two points - the origin and the point at the cylinder.
In this case the point at the origin would be (0,0,0) and needn't be stored; and the point on the cylinder needs an X coord (let's call that "azimuth") and a Y coord (let's call that "altitude").
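As a sketch of the storage (the names and sizes here are hypothetical), the driver's pre-computed table then only needs 2 values per pixel, because the origin is always the camera at (0,0,0):
Code: Select all
/* Sketch: per-pixel ray table, filled in once during driver initialisation.
   The ray's start point is always the camera at (0,0,0), so it's never
   stored - only the two values describing where the ray goes. */

typedef struct {
    float azimuth;    /* "X coord" of the point the ray passes through */
    float altitude;   /* "Y coord" of the point the ray passes through */
} PixelRay;

#define SCREEN_W 1920   /* example resolution only */
#define SCREEN_H 1080
static PixelRay pixel_rays[SCREEN_H][SCREEN_W];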
embryo2 wrote:Vertical coordinate is calculated by simple addition of a pixel's height and horizontal is calculated after solving a simple triangle (may be here one sine will be preferable over powers of 2 and square root, but I haven't compared the formulas). Next there is a ray tracing task, that extracts a window from a virtual world cornered by 4 lines. Lines here are important because the distance from an object (it's covering triangle) can vary for the mentioned 4 lines just because the lines can intersect different objects. Here I omit the whole description of the virtual world windowing calculations. Next in the window we calculate weighted average for the target pixel color and store this color for use in the future. Here we have a complete picture of a world on our cylinder.
This is essentially exactly what I'd be doing (just with pre-computed angles instead of regularly spaced angles). However, because I've used the monitor's angles and not the "angles for a cylindrical surface", I'm finished after this and don't need to do extra work trying to map "monitor pixels" to "cylindrical surface pixels".
embryo2 wrote:If we know the exact location of monitor pixels then we can calculate the picture only for monitor projections on the cylinder. And next, as was shown earlier, we calculate colors for every monitor's pixel using 4 lines per pixel and averaging the pixel's color according to the area proportion on the cylinder's pixels.
I'm doing "4 lines per monitor pixel"; you're doing "4 lines per cylindrical surface pixel" followed by "4 lines per monitor pixel". That's extra, unnecessary work.
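As a sketch of what "4 lines per monitor pixel" looks like (trace_ray() and corner_ray() are hypothetical placeholders for "whatever returns the colour along a ray" and "look up a pre-computed corner direction"), the averaging happens directly in monitor pixels with nothing in between:
Code: Select all
/* Sketch: per-monitor-pixel sampling using the 4 corner rays of the pixel.
   corner_ray() and trace_ray() are placeholders for illustration only. */

typedef struct { float r, g, b; } Colour;
typedef struct { float x, y, z; } Dir;

extern Dir    corner_ray(int corner_x, int corner_y);   /* pre-computed at init */
extern Colour trace_ray(Dir d);

static Colour shade_pixel(int px, int py) {
    Colour c[4];
    c[0] = trace_ray(corner_ray(px,     py));
    c[1] = trace_ray(corner_ray(px + 1, py));
    c[2] = trace_ray(corner_ray(px,     py + 1));
    c[3] = trace_ray(corner_ray(px + 1, py + 1));

    Colour out = {
        (c[0].r + c[1].r + c[2].r + c[3].r) * 0.25f,
        (c[0].g + c[1].g + c[2].g + c[3].g) * 0.25f,
        (c[0].b + c[1].b + c[2].b + c[3].b) * 0.25f,
    };
    return out;
}
Because corners are shared between neighbouring pixels there are only (width+1) * (height+1) corner rays in total - effectively one per pixel - which is the same "a line per pixel" observation you made for the cylinder.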
embryo2 wrote:And of course, we can optimize the calculations by merging pixel calculations for the cylinder and for a monitor, but math here will be much harder.
No, the maths isn't harder - you just need the correct angles to start with, instead of using "wrong angles to start with, then correct angles for the unnecessary second attempt".
embryo2 wrote:Brendan wrote:Um, what? Exactly how does "geometry" (an entire branch of mathematics) allow you to calculate the point on your sphere/cylinder without angles in any form?
It is called analytic geometry. Here you can see the calculations without angles.
See above - vectors (3D) and slopes (2D) are just another way to represent angles.
embryo2 wrote:Brendan wrote:embryo2 wrote:And as a final remark, for your way to be complete you need third coordinate - the distance from an eye to the pixel, so the required set of parameters always will be the 3 numbers - 2 angles and the distance.
No. Imagine a tiny picture of a goat that's extremely close to your eye. Now imagine a massive version of the same picture of a goat that's far away (but happens to cover the exact same part of your field of view). In both cases you see exactly the same thing.
No. Imagine, that the goat has moved and now is far from an eye. What area should occupy it's head on a screen? What number of pixels should show the head? What colors should those pixels have? And now you can show us the solution without a distance.
A bunch of bananas that are close to the camera; where both rays hit the bananas:
Code: Select all
         B
        /a
      M/ n
      M  a
     /M  n
    / M  a
   *--M--s

A bunch of bananas that are further away from the camera; where only one ray hits the bananas:
Code: Select all
              /
             /
            /
           /
          /
         /      B
        /       a
      M/        n
      M         a
     /M         n
    / M         a
   *--M---------s
For both cases, I only need to know the angle of the ray as it passes through the pixel.
Note: For ASCII art it's hard to show more than 2 rays. Imagine there's 1920 rays and not just 2, where most rays hit the bananas when the bananas are close, and fewer rays hit them when they're further away (making the bananas look smaller/take up fewer pixels).
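If it helps, here's a toy 2D version of the same argument that you can actually run (purely for illustration, nothing to do with real renderer code): count how many of 1920 regularly spaced rays hit a wall of a given size at a given distance. Doubling both the size and the distance gives exactly the same count (the goat case), while moving the same-sized wall further away gives a smaller count (the bananas above):
Code: Select all
#include <stdio.h>

/* Toy 2D illustration: rays leave the camera at (0,0), pass through a
   vertical "monitor" at x = 1 with RAYS regularly spaced pixels covering
   y = 0..1, and keep going. Count how many of them hit a vertical wall
   spanning y = 0..size placed at x = dist. */

#define RAYS 1920

static int rays_hitting(double size, double dist) {
    int hits = 0;
    for (int i = 0; i < RAYS; i++) {
        double slope = (double)i / RAYS;     /* ray's Y slope (X slope is 1) */
        double y_at_wall = slope * dist;     /* height of the ray at x = dist */
        if (y_at_wall <= size) hits++;
    }
    return hits;
}

int main(void) {
    printf("size 1 wall at distance 2: %d rays hit\n", rays_hitting(1.0, 2.0));
    printf("size 2 wall at distance 4: %d rays hit\n", rays_hitting(2.0, 4.0));  /* same */
    printf("size 1 wall at distance 4: %d rays hit\n", rays_hitting(1.0, 4.0));  /* fewer */
    return 0;
}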
Cheers,
Brendan