3D Mathematics
What branches of mathematics are involved in 3D systems? I've been wanting to learn about 3D and such, but haven't really been sure what to read up on. (I'm not talking about OpenGL or any existing library, but the in-depth mathematical side of it.)
Thanks.
C8H10N4O2 | #446691 | Trust the nodes.
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
3D maths is mainly governed by one broad subject: linear algebra. One of the better books I have explains many of its aspects, and it is in excess of 600 pages.
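As a minimal sketch of the kind of linear algebra involved (my own illustration, not an example from the book): rotating a 3D point about the Z axis with a 3×3 rotation matrix.

```python
import math

def rotate_z(point, angle):
    """Rotate a 3D point about the Z axis by `angle` radians
    using a 3x3 rotation matrix - a core linear-algebra
    operation in any 3D pipeline."""
    c, s = math.cos(angle), math.sin(angle)
    # Row-major rotation matrix for the Z axis.
    m = [[c,  -s,  0.0],
         [s,   c,  0.0],
         [0.0, 0.0, 1.0]]
    x, y, z = point
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z for i in range(3))
```

The same pattern (matrix times column vector) covers scaling, shearing and, with 4×4 matrices, translation and projection too.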
Apart from that, you need all sorts of mathematics to compute the physics involved. That maths is generally rather primitive, so my guess is that if you have studied maths for a while you'll be familiar with the various aspects.
Anyway, the book I have (and can recommend) is "Fundamentals of Computer Graphics" (ISBN 1-56881-269-8). I have the 2nd edition; compared to the first it mainly adds more practical material, which is nice to know but not necessary. The first edition is, however, out of print here.
Edit: Just to show off, the book can teach you everything necessary to do this:
I have been teaching the course involved, so if you have any questions on the subject I should be able to answer them.
Thanks for the tips. I'm now reading up on it. It's amazing how many free books there are on mathematics. In a half-hour period, I've found two books just on linear algebra, and as you said, they're huge. (One is 500+ pages and the other 800+.) I'll get back to you if I have any other questions.
Thanks!
PS: Nice picture.
Hi,
I guess it depends on what sort of 3D graphics - ray tracing is different to (doom style) ray casting, which is different to polygon projection.
For doom style ray casting it's mostly 2D trigonometry, while for polygon projection it's matrices and 3D trigonometry.
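The 2D trigonometry for a Doom-style caster can be sketched like this (a hypothetical helper, not from any real engine): one ray direction per screen column, fanned across the field of view.

```python
import math

def column_ray_dirs(player_angle, fov, screen_width):
    """For a Doom-style caster: one ray per screen column,
    each a 2D unit vector obtained with plain trigonometry."""
    dirs = []
    for col in range(screen_width):
        # Spread rays evenly across the field of view,
        # centred on the player's viewing angle.
        a = player_angle - fov / 2 + fov * (col + 0.5) / screen_width
        dirs.append((math.cos(a), math.sin(a)))
    return dirs
```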
Ray tracing is a completely different field - intense calculations that can't be done in real time (but can give photo-realistic images). I've never tried this type of 3D graphics.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Brendan wrote: Ray tracing is a completely different field - intense calculations that can't be done in real time
Oh? If you don't mind putting perfectionism on the line, you can get quite far. If you know the keywords, this is the first hit on Google:
[url=http://www.demoscene.hu/~picard/h7/]Heaven 7 - demo of a real-time 3D ray-tracing animation.[/url]
Combuster wrote: Oh? If you don't mind putting perfectionism on the line you can get quite far. If you know the keywords this is the first hit on Google: Heaven 7 - demo of a real-time 3D ray-tracing animation.
Extremely impressive, but how much is hard-coded, partially pre-generated or partially pre-processed? Can I expect the same level of performance and detail from a generic 3D renderer (something capable of generating arbitrary and/or interactive images)?
For an example, see the conclusion of this article about a ray-tracing engine for Quake.
The quoted figure is 4.4 frames per second at 256 * 256 (for a single CPU). At normal game resolutions (800 * 600 or higher) this works out to around 0.6 frames per second (or lower) - roughly 100 times slower than you'd expect from ray casting (and I suspect that this is with one light source only).
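The scaling estimate above can be checked with a one-liner, assuming ray-tracing cost is proportional to the number of pixels (i.e. primary rays):

```python
def scaled_fps(fps, old_res, new_res):
    """Estimate frame rate at a new resolution, assuming cost
    is proportional to the number of pixels (primary rays)."""
    old_pixels = old_res[0] * old_res[1]
    new_pixels = new_res[0] * new_res[1]
    return fps * old_pixels / new_pixels
```

With 4.4 fps at 256 * 256, this gives roughly 0.6 fps at 800 * 600, matching the figure quoted.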
Cheers,
Brendan
I did say: "If you don't mind putting perfectionism on the line"...
Still, there are several optimisations that limit the work and are not scene-specific:
- quadtrees/BSP trees/other sorting structures can tell you which objects not to check.
- you can solve for the discriminant instead of the intersection, which tells you where an object starts and ends (and hence saves you from testing all the pixels in between)
- lookup tables - underrated, to say the least
- you can do transformations for reflections/shadows on flat surfaces, like zone portals
- use the GPU's pixel/vertex shaders to your advantage (your 'free dual-processor' system)
- caching intermediate results/object caching/heuristics
and the list goes on.
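The discriminant trick in the list above can be sketched for a ray-sphere test (an illustrative example; the names are my own): substituting the ray into the sphere equation gives a quadratic in t, and the discriminant alone tells you whether the ray hits at all, while its roots bound the object's extent along the ray.

```python
import math

def ray_sphere_hits(origin, direction, center, radius):
    """Solve |o + t*d - c|^2 = r^2 for t. The discriminant tells us
    whether the ray misses (< 0), grazes (= 0) or pierces (> 0) the
    sphere; the two roots bound the span the ray spends inside it."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    root = math.sqrt(disc)
    return ((-b - root) / (2.0 * a), (-b + root) / (2.0 * a))
```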
Apart from the consensus that
- Projective graphics scale poorly
- Raytracing algorithms scale VERY well
- We want everything to be faster and better looking
-----
- hardware raytracing = likely future?
Just a suggestion, but IMHO a raytracer would be the perfect demo for your OS (due to its distributive nature).
Combuster wrote: I did say: "If you don't mind putting perfectionism on the line"...
To me, any work done by a race-tracer could be done by a ray-caster. For example, you can cast a ray from the camera at a certain angle and see what it hits ("point A"), then cast secondary rays from "point A" to handle reflection, refraction and lighting, and then tertiary rays, quaternary rays, etc. In theory you can get the same quality of graphics, but it'd cost slightly less to do so (because you never care about rays that don't affect the final picture). It could also be done just as easily as ray tracing (where rays are cast from light sources, rather than from the camera), and can be just as scalable.
Of course, to complicate this there's "2.5D" ray casting, where you cast the rays across the floor to determine which wall you hit. This means for 800*600 graphics you only cast 800 rays instead of 480000. This is the technique used in the original Wolfenstein and Doom games, and it works much, much faster. The disadvantage is that you can't easily tilt the camera, and it's not suited to open areas (e.g. a large picture of foothills).
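A naive sketch of the idea (a real caster would use a DDA grid walk; this hypothetical helper just marches in small steps): one such ray per screen column, across a 2D tile map, instead of one ray per pixel.

```python
def cast_to_wall(grid, x, y, dx, dy, step=0.01, max_dist=100.0):
    """March a ray across a 2D tile map (1 = wall) in small steps
    and return the distance to the first wall hit, or None.
    One such ray per screen column is enough for 2.5D rendering."""
    dist = 0.0
    while dist < max_dist:
        x += dx * step
        y += dy * step
        dist += step
        if grid[int(y)][int(x)] == 1:
            return dist  # wall distance -> column height on screen
    return None
```

The returned distance is what the renderer turns into a wall-slice height for that column.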
Polygon projection is entirely different, but can also be very scalable. In general there are 3 steps:
- multiplying vertexes with a translation matrix to convert them into "screen co-ordinates" (where all vertexes can be done in parallel - well suited to SSE)
- doing backface culling, cropping and edge detection (where all polygons can be done in parallel)
- rasterizing the polygons (where each screen line can be done in parallel)
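The first of those steps can be sketched as a bare perspective projection (illustrative only; `fov_scale` is an assumed parameter, and a real pipeline would use a full 4×4 matrix):

```python
def project_vertex(v, screen_w, screen_h, fov_scale=1.0):
    """Perspective-project a camera-space vertex (x, y, z with z > 0
    pointing into the screen) to pixel coordinates: divide by depth,
    then map the result to screen space around the screen centre."""
    x, y, z = v
    sx = screen_w / 2 + (x * fov_scale / z) * (screen_w / 2)
    sy = screen_h / 2 - (y * fov_scale / z) * (screen_h / 2)
    return (sx, sy)
```

Each vertex is independent of every other, which is why this step parallelises so well.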
Combuster wrote: Just a suggestion, IMHO a raytracer would be the perfect demo for your OS (due to its distributive nature)
For generating high quality videos, any technique is embarrassingly easy to do in parallel - just generate one frame per CPU, use any type of networking to collect the final frames, and then assemble the frames in order. It's so easy it can be done on any OS.
For real-time distributed 3D there are a lot of ways it could be done, and it doesn't necessarily need to be restricted to one type of rendering. What I'd want is code that automatically adjusts the image quality in response to how quickly frames are processed, and that dynamically adjusts itself as computers come online or go offline, or as CPU load changes (i.e. dynamic quality at a fixed frame rate, rather than fixed quality at variable frame rates). Of course, what I want and what I'm able to implement before I die of old age are two different things...
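The dynamic-quality idea amounts to a feedback loop on frame time; a hypothetical sketch (all names and thresholds here are my own, not Brendan's design):

```python
def adjust_quality(quality, frame_time, target_time, step=0.1,
                   lo=0.1, hi=1.0):
    """One step of a simple feedback controller: lower the quality
    factor when a frame took too long, raise it when there is slack,
    so the frame rate stays fixed while the quality floats."""
    if frame_time > target_time:
        quality -= step          # frame was late: render cheaper
    elif frame_time < target_time * 0.8:
        quality += step          # plenty of slack: render nicer
    return max(lo, min(hi, quality))
```

The quality factor could then drive resolution, ray recursion depth, or the number of machines a frame is split across.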
Cheers,
Brendan
slasher wrote: Alboin, could you please share the links to the books you found or the search keywords you used to find them. Thanks
Here:
http://linear.ups.edu/
http://www.math.miami.edu/~ec/book/
http://joshua.smcvt.edu/linearalgebra/
http://www.numbertheory.org/book/
And this is particularly interesting, however I have not watched any of them yet:
http://ocw.mit.edu/OcwWeb/Mathematics/1 ... /index.htm
Brendan wrote: To me, any work done by a race-tracer could be done by a ray-caster. For example, you can cast a ray from the camera at a certain angle and see what it hits ("point A"), and then cast secondary rays from "point A" to handle reflection, refraction and lighting, and then do tertiary rays, quaternary rays, etc. In theory you can get the same quality of graphics, but it'd cost slightly less to do so (because you never care about rays that don't affect the final picture). It could also be just as easily done as ray tracing (where rays are cast from light sources, rather than from the camera), and can be just as scalable.
I think you got photon mapping and raytracing confused: raytracing is done from the eye, photon mapping is done from light sources (with a wide range of hybrid schemes). Raycasters are just dumb (for lack of a better word) raytracers in this respect.
btw, is "race-tracer" a deliberate typo?
Combuster wrote: I think you got photon mapping and raytracing confused: raytracing is done from the eye, photon mapping is done from light sources (with a wide range of hybrid schemes). Raycasters are just dumb (for lack of a better word) raytracers in this respect.
It seems you're correct...
According to the wikipedia page on ray tracing:
"Ray tracing is a general technique from geometrical optics of modeling the path taken by light by following rays of light as they interact with optical surfaces."
"The first ray casting (versus ray tracing) algorithm used for rendering was presented by Arthur Appel in 1968. The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of that ray - think of an image as a screen-door, with each square in the screen being a pixel."
"The next important research breakthrough came from Turner Whitted in 1979. Previous algorithms cast rays from the eye into the scene, but the rays were traced no further. Whitted continued the process. When a ray hits a surface, it could generate up to three new types of rays: reflection, refraction, and shadow."
If this is all correct, then ray tracing (in optics) means tracing from the light source(s), but ray tracing in computer renderers has the opposite meaning (tracing rays from the eye) - except where those rays stop at the first thing they hit, which is ray casting.
These definitions technically make the Doom engine a "2.5D ray tracer", as it would cast out 640 rays from the eye (for 640 * 480 mode) and if a ray hits a texture with transparent pixels it would continue tracing the ray until it hits something else. It would then draw the background (result of the second intersection) followed by the transparent pixels (result of the first intersection).
Combuster wrote: btw, is "race-tracer" a deliberate typo?
No - just a generic typo...
Cheers,
Brendan