The year I learned to write C code I was 19, in my second year of university and already tempted to drop out, so I started spending time with C and 3D graphics. I was fresh from the math exam, so the 3D matrix transformations for rotations were trivial to implement. I just wrote a function to draw triangles, used a simple z-sorting technique, and did basic shading by calculating the cosine of the angle between the observer and the surface. With just these basics I ended up with 3D "worlds" similar to the ones I saw in DOS games as a child. The whole effort was maybe 500 or 1000 lines of code, but building things from scratch, starting from nothing but the ability to draw an RGB pixel, gave me a sense of accomplishment that later shaped everything else I did. I basically spent the next 20 years creating things from scratch.
My first year university Linear Algebra textbook even had an appendix explaining how to rotate and skew 3D objects in computer graphics using the matrix multiplication I had learned that semester. I loved it.
Then I finished university and got a programming job creating forms to gather user data, put it into a database, and generate reports. Sigh.
I have the feeling that, unfortunately, most programming jobs, even in shiny startups, are more like data forms than 3D engines... That's why many programmers have OSS side projects where they do cool things.
I did something very similar at a slightly earlier age, and I still have the code, so I've decided to put it on GitHub. The last-modified date on these files is 1996.
I had forgotten about this, but in my first year, while the algebra professor was drawing 3D vectors to explain the lecture, I was thinking "that is a 2D surface, so there should be a linear transformation between the two worlds".
Later at home, I found that transformation and built a small 3D world that you could walk through, using only vectors and plain triangles :)
It is also worth noting that today's CPUs are faster than 3D graphics chips from 20 years ago, so it seems perfectly reasonable to expect good performance from a basic hand-written 3D rendering library.
This is a good tutorial, but it's important to note that scanline rasterizers are not how GPUs (or even high-performance SIMD software implementations) work. Instead, they use barycentric coordinate sign tests for better parallelism and "free" interpolation.
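For the curious, here is a minimal sketch of the edge-function approach in C, assuming counter-clockwise vertex winding; the types and names are illustrative, not from any real API:

```c
typedef struct { float x, y; } Vec2;

/* Edge function: positive when point c lies to the left of edge a->b. */
static float edge(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

/* Test every pixel center in a w*h region against the three edges with
 * sign tests; returns the number of covered pixels for demonstration. */
static int rasterize(Vec2 v0, Vec2 v1, Vec2 v2, int w, int h) {
    int covered = 0;
    float area = edge(v0, v1, v2);          /* twice the triangle area */
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            Vec2 p = { x + 0.5f, y + 0.5f }; /* sample at pixel center */
            float w0 = edge(v1, v2, p);
            float w1 = edge(v2, v0, p);
            float w2 = edge(v0, v1, p);
            if (w0 >= 0 && w1 >= 0 && w2 >= 0) {
                /* Barycentric weights w0/area, w1/area, w2/area come out
                 * for free and linearly interpolate any vertex attribute. */
                covered++;
            }
        }
    }
    (void)area;
    return covered;
}
```

Because every pixel is tested independently, the loop body can be run for many pixels at once, which is exactly what makes this formulation GPU- and SIMD-friendly.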
You're absolutely right about scanline rasterizers and GPUs, but scanline rasterization is very interesting historically. Most of the 1990s pre-GPU software renderers did scanline rasterization; for example, Quake and Thief both had pure software renderers.
Before GPUs, perspective correct texture mapping was the holy grail of 3d graphics because CPUs of the time were not fast enough to do all the divisions for each pixel, and lots of clever tricks were invented to work around this limitation. It's a bit of a shame that this article does not cover it, perhaps there's a part 2 in progress.
The way I saw it was that the tutorial was more of a "roll your own software renderer" tutorial; it diverges quite heavily from what actual hardware accelerators do.
It still takes a lot of mental effort for me to make that step. I think in old-school 3D and then have to derive the shaders from that. It works, but it is a painful process. Then again, for now it is nothing more than a hobby anyway.
Takes me back. A long time ago I wrote a simple rendering library for the 3DFx "Glide" library. It didn't do shaders but it would do mipmapped texture rendering which allowed you to have an image (texture) on your triangle. For a while I was stuck on the projection matrix and understanding screen clipping until my Dad gave me his copy of the Kodak Reference Handbook[1] third edition, copyright 1945. And they describe focal length, field of view, fstops, and lens effects very clearly.
Nitpick: this is software rendering. This is how we did it before any kind of 3D API existed. Both GL, D3D, etc. started without shaders. I still maintain a fixed-pipeline (no explicit vertex or fragment shaders) 3D app with DirectX.
One can argue that the fixed pipeline of D3D is using a kind of implicit shader, but it's not the kind of shader we usually mean when we talk about vertex and fragment shaders today.
> Back in the day — way before we had hardware accelerated 3D graphics cards, let alone programmable GPUs — if you wanted to draw a 3D scene you had to do all that work yourself. In assembly. On a computer with a 7 MHz processor.
7 MHz? That's so fast and modern. Back in the day we were writing 3d fill routines on the 6502 going 1 MHz. With no floating point and no diagonal line support. And in bare machine language, going uphill both directions in the snow! ;)
One of the first pieces of 6502 assembly I read, and spent ages deciphering, was an implementation of Bresenham's line algorithm published in some magazine. Who needs floating point...
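For reference, an all-octant Bresenham walk might look like this in C; the running error term replaces any floating-point slope, which is why it worked on a 6502. This sketch records the visited points so it is self-contained, where a real renderer would call setPixel:

```c
#include <stdlib.h>

/* Integer-only Bresenham line walk from (x0,y0) to (x1,y1). Writes up to
 * max visited points into out and returns the total point count. */
static int line_points(int x0, int y0, int x1, int y1,
                       int out[][2], int max) {
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;            /* the integer decision variable */
    int n = 0;
    for (;;) {
        if (n < max) { out[n][0] = x0; out[n][1] = y0; }
        n++;
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }  /* step in y */
    }
    return n;
}
```

The only operations are adds, compares, and a shift-free doubling: nothing a 1 MHz 8-bit CPU couldn't do.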
When people talk about 'magic' in software they mostly just mean the implementation details don't impact them. You can have two very different GPUs both implement the same WebGL calls correctly.
Kind of a shame they omitted matrices. They're one of the foundational bits of any 3D API and one of the few things that translates well from fixed-function/software raster to modern pipelines.
Still great to know the fundamentals, texture formats, tiling and other things are also really useful pieces to understand when working with 3D pipelines.
Matrices are just a convenient notational trick for a set of linear algebra expressions. They seem confusing until you realize what an identity matrix represents.
I prefer to teach 3D graphics without matrices because it's really not anything more complicated than a compact notation. Tricks like "invert and transpose to receive the normal matrix" or "take the first column to get your up vector" make no sense unless you work out the algebra of what those things mean.
And don't get me started on homogeneous coordinates which are a way to put translation in a matrix by shoving a convenient "1" constant in the input vector, and the perspective matrix which does near/far, perspective transform, and a depth remap in the same matrix, and isn't easily separable because it steals the "w=1" constant for depth remap and also adjusts the "w" afterwards. Equivalent of reusing a local variable because you're short on registers :)
True, matrices are convenient for notation, and they tend to be confusing at first. A tutorial like this is better off without the detour through linear algebra.
But without them you lose the ability to smoosh multiple transforms together, making it harder to do anything hierarchical or animated. Personally, I'd place their utility higher than "just" notational convenience.
I feel like we haven't figured out how to teach matrices, that they're inherently easy after you understand them, but we don't know how to introduce or explain them easily. Do you introduce them later & do you have any good resources for them once you do broach the subject with your students?
I like your identity example. I think I really started to "get" matrices after realizing they are literally vectors put in a stack, and that the vectors represent the state of transforming from the identity to those vectors. With that in mind, a matrix can feel much easier than a rotation involving trig functions and hand-coded dot products. You can do all kinds of rotating and other transforming without any trig once you see matrices as transforms rather than an opaque and mysterious brick of numbers. But I admit it took me years to feel that way after my first encounters with matrices.
There's nothing magical about matrix multiplication. Smooshing multiple transforms together just comes down to taking those systems of equations and combining them. If I have one transform that scales space by 2, and another that translates space by +5 (OK, yeah, this is an affine transform, not a linear one, but I wanted to focus on one coordinate for now), then I have:
f(x) = 2*x
and
g(x) = x+5
Using basic high school algebra we can find out that:
f(g(x)) = 2*x+10
and
g(f(x)) = 2*x+5
This applies all the way down. The magic isn't in numbers in a square shape, it's in basic algebra. Matrix multiplication comes down to composition of multiple systems of equations like this, and if you start from that, and nothing more than the distributive property of multiplication, you can easily derive "matrix multiplication".
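To make that concrete, here is the same 1-D example redone with 2x2 homogeneous matrices in C (a sketch; the types and names are made up for illustration). f(x) = 2x becomes [2 0; 0 1] and g(x) = x + 5 becomes [1 5; 0 1], and multiplying the matrices composes the functions:

```c
/* A 2x2 matrix acting on the column vector (x, 1); the second row stays
 * (0, 1) for affine transforms, so apply() only needs the first row. */
typedef struct { double m[2][2]; } Mat2;

static Mat2 mat_mul(Mat2 a, Mat2 b) {
    Mat2 r;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            r.m[i][j] = a.m[i][0] * b.m[0][j] + a.m[i][1] * b.m[1][j];
    return r;
}

static double apply(Mat2 a, double x) {
    return a.m[0][0] * x + a.m[0][1];
}
```

With F = [2 0; 0 1] and G = [1 5; 0 1], apply(mat_mul(F, G), x) computes f(g(x)) = 2x + 10, and mat_mul(G, F) gives g(f(x)) = 2x + 5, matching the algebra above.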
Don't get me wrong. I use matrices in everything "production" for 3D graphics -- the compact notation is extremely convenient. I just wish we'd stop ascribing magical properties to matrices like "without them you lose the ability to smoosh multiple transforms together" because that's clearly false.
I would quickly start drifting if I attended a class on 3D graphics and the teacher started writing everything open without matrices like that. You must admit the usefulness of a short notation when communicating ideas.
While understanding and talking 3D graphics is certainly possible without matrix notation, I really see no reason to purposefully omit it in teaching or other communication. Teaching situation is also a good place to practice standard notation.
I appreciate the response, but I find it rather strange, if you are really teaching 3d graphics to students. I mean, you are technically right at a pedantic level, sure, yes, you can hand-code your matrix multiplies. I didn't mean to suggest that it was impossible, and I'd appreciate some benefit of the doubt before you claim I'm making false statements. I was trying to communicate that it's not practical to do so once you start using animation or hierarchies, and despite your objections and example, I still believe that's true.
You've given a 1-d example. If you tried to do what you're talking about with a character rig, it becomes unwieldy and unfeasible almost immediately. On top of that, the very second you try to do this in 3d with 3+ transforms, you will end up with a square of numbers, you will have derived the concept of a matrix just by trying to avoid them.
I've written lots of 3d transforms by hand using hard-coded dot products just like in your example, and I've written lots of code with matrices too. I've seen others do the same. What I haven't seen is hard-coded matrix multiplies to combine transforms. It is not practical to combine more than a single pair of transforms using manual dot products. In every practical way, you do lose the ability to smoosh transforms together if you don't use matrices. And I never said it was magical, it is simply one of the algebraic benefits of using matrices, and it is easy to understand, and easy to work out the mechanics of. That doesn't mean that it's equivalent in practice, and as it turns out, it is not equivalent in practice. There are significant practical benefits to using matrices once your needs grow past the level of complexity of a basic tutorial.
Re. how to teach matrices: First teach computer graphics, then teach linear algebra :)
I switched universities after the first year, and subsequently I took a lot of courses in the "wrong order". I did computer graphics first and then linear algebra. That's the wrong way around as defined in the curriculum, but it was clearly the right way for me. And for most people I believe too.
When the matrices came, they were clearly a practical vessel for the things we needed to do in computer graphics, which were getting a little unwieldy and for which we already had an inkling that there was a pattern lying underneath.
Sometimes it is both possible and better to teach the practical hands-on craft first, to create hunger for and an appreciation of the theory, and then teach the theory.
(And then go back and blow up the "craft" ideas and go further still.)
> I prefer to teach 3D graphics without matrices because it's really not anything more complicated than a compact notation.
From a programming perspective it's much less verbose. Also the 1:1 mapping of "spaces" (camera space, world space) makes it very clean to manage transformations between them.
> and isn't easily separable because it steals the "w=1" constant for depth remap and also adjusts the "w" afterwards
I'm sure you already know this, but (x, y, z, w) == (x/w, y/w, z/w). It's easy enough to move between a 3D rep and a 4D rep of the same coordinates.
Using 4D coordinates may be somewhat awkward, as they're not necessary for representing points in 3D space, but they are a very elegant solution for lighting, where it is necessary to represent infinity.
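A sketch of the perspective divide in C, with illustrative types (this also shows the projective property that scaling all four components by the same nonzero factor names the same 3D point, while w = 0 encodes a pure direction / point at infinity):

```c
typedef struct { double x, y, z, w; } Vec4;
typedef struct { double x, y, z; } Vec3;

/* The homogeneous point (x, y, z, w) names the 3-D point (x/w, y/w, z/w).
 * Assumes w != 0; w == 0 would be a point at infinity, i.e. a direction. */
static Vec3 project(Vec4 p) {
    Vec3 r = { p.x / p.w, p.y / p.w, p.z / p.w };
    return r;
}
```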
[Edit]
Sorry, the point I was making is that I was also bummed not to see matrices in the article. They take a little effort to wrap your head around, but they're a very elegant solution to a lot of stuff in 3D.
Totally, I just think that matrices are a great abstraction around transformations. As long as certain constraints are followed you don't have to care what the previous operations are.
Look up projective geometry -- homogeneous coordinates are points in 3-d projective space. The defining characteristic of projective points is you can multiply all their coordinates by the same (nonzero) factor without changing the underlying point. (1 0 2 1) equals (2 0 4 2) equals (0.001 0 0.002 0.001). A cool fact about projective space is that every two distinct lines intersect in a point, including parallel lines!
I think once you know this math, the jump to using matrices is pretty trivial. I can understand not wanting to include it... not that I feel strongly about either direction. I just don't think it's necessary for the scope of this super useful article.
Also, describing rotations in Euler angles creates more problems than it solves in the long run.
It would have saved me a lot of headaches if quaternion-based rotation representation had been introduced to me as the default, which it is in most 3D engines.
"The green and blue colors, z-position, and normal vector are all interpolated in the same manner. (Texture coordinates behave slightly differently because there you’d also need to take the perspective into account.)"
Colors (c), z, and texture coordinates (t) should all be interpolated differently because of perspective: you need to interpolate 1/z, c/z, and t/z linearly, and then for every pixel do a division, e.g. (c/z) / (1/z) = c.
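The scheme above can be sketched as a per-pixel helper in C (hypothetical names; t is the screen-space interpolation factor along the scanline):

```c
/* Perspective-correct interpolation of an attribute c between two scanline
 * endpoints with depths z0 and z1: interpolate c/z and 1/z linearly in
 * screen space, then divide per pixel to recover c. */
static double perspective_lerp(double c0, double z0,
                               double c1, double z1, double t) {
    double inv_z    = (1.0 - t) / z0 + t / z1;               /* lerp of 1/z */
    double c_over_z = (1.0 - t) * (c0 / z0) + t * (c1 / z1); /* lerp of c/z */
    return c_over_z / inv_z;                                 /* (c/z)/(1/z) */
}
```

Note that when z0 == z1 this collapses to plain linear interpolation, which is why the error is invisible on triangles facing the camera head-on.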
It may be a lost art for game developers. Far from it for CG grad students and researchers. Quite the contrary, it's actually part of the rite of passage, heck an undergrad level prerequisite to know these things like the back of your hand, plus a whole lot more, to do graduate level CG work.
Even if you're not a researcher, but wish to write your own path tracing code for example, you would end up learning this.
I strongly believe that an understanding of how old school 3D rendering worked is an excellent thing for modern graphics programmers to have, to appreciate and understand where all of our fancy modern graphics APIs and whatnot come from. Back when I helped teach a GPU programming course, one of the assignments I gave was a full-blown software rasterizer implemented entirely in CUDA. Not so much "program in OpenGL" as "program an OpenGL". :)
In a video I saw recently, the guy suggested [1] reading old books about earlier versions of DirectX from the late 90s and early 2000s, around DirectX version 9, even if one never intends to use DirectX, because, he said, most graphics engines are built on the concepts of those versions of DirectX.
Ahh. The days. I remember before I had learned about linear algebra, I saw somebody rendering molecules as 3D wireframes. I had an Amiga back then with its "Blitter" (it could draw lines in hardware, as long as you told it which of the eight octants the line's angle fell into).
Then, being the geek I was, I sat down every day until I had figured out perspective transformation and rotation (later I found I had just done matrix multiplication). Of course I never thought of homogeneous coordinates, so translation was an extra step to be done for each point.
Even worked out "real" red-green 3D. Oh the days when I had time for this stuff. Fond memories.
Ah, the 80s & early 90s, when you had to implement everything yourself and every byte and instruction counted :)
The demo scene these days feels somehow less satisfying. The demos definitely look better, but they can plug into such a vast ecosystem of system libraries that 64KB feels like cheating.
> The demo scene these days feels somehow less satisfying. The demos definitely look better, but they can plug into such a vast ecosystem of system libraries that 64KB feels like cheating.
Some of these sceners are doing shadertoy.com type stuff that is to say, complex imagery and geometry calculated only from pixel shaders (no polygonal geometry / vertex shaders etc), ie. ever-cooler, faster, more-complex "faux (but realtime) ray-tracing" with a VERY limited set of operations, no real frameworks/libraries etc, everything fine-tuned down to the last float, endless neat tricks and approximations --- probably recaptures/continues quite a bit of the very same spirit!
Stuff like this https://www.shadertoy.com/view/4ttSWf WARNING: might crash your browser or temporarily slow it to a crawl while open, so save your work!
Sadly, WebGL performance is STILL abysmal in the newest browsers, even on current-gen Quadro workstations... what a sad joke. And here there aren't even any particular per-frame data-to-GPU/VRAM transfers!
The minimalist scene is still out there, it's just a niche within a niche.
For instance, here's a demo from 2015, done in immediate mode and 256 bytes - I've got a feeling you'll enjoy it as much as the live audience clearly did:
For another recent example, also in 2015 some wizards managed to coax 1024 colours (among other impressive visual effects) out of an original IBM PC - as in, hardware from 1981:
> For instance, here's a demo from 2015, done in immediate mode and 256 bytes - I've got a feeling you'll enjoy it as much as the live audience clearly did:
Wow to both. Yet to me, Immediate Railways (2015) blew my mind quite a bit more than Demoplex (2006) ;)
Maybe I missed a detail: a cinema room, including chairs, a middle aisle, a rotating camera, friggin' ambient occlusion AND an animated plasma on the projector screen. Absolutely jaw-dropping, not a single doubt. I used to write 4096b demos, but this is a class of its own, with even crazier skills than I have; I could have a stab at it, but I'd get stuck at 512b at least - cough - :-P
But Immediate Railways is the first 256b demo I remember seeing that actually has multiple parts! Three parts, and they actually make sense! You could even argue that the three parts connect into a (very basic, minimal, three-clause-sentence) story line. Now, I haven't been keeping up with everything in the scene, but that's a first for any 256b demo I have ever seen. In many cases a 256b demo showcases a single "thing", scene or effect that is a few times more complex than Kolmogorov's worst acid dream. Also, I think the design and colours are nicer than in Demoplex (and yes, that counts).
Fun thing to imagine though, if you'd concatenate 16 of these babies to make a single 4096b demo, you wouldn't stand a chance in a modern 4k compo :) Not even allowing 20 of them, sharing init code. The expectation of "way more than the sum of its parts" has already exploded in the step from 256b to 4096b. I'm not even sure one could fit a softsynth worthy of the music expected in a modern 4k into 256b? (ok probably one could, probably someone did, prove me wrong already :p mine was about 1100b, back in 2000)
Related: "JavaScript library for simple 3D graphics and visualisation on a HTML5 canvas 2D renderer. It does not use WebGL. Works on all HTML5 browsers, including desktop, iOS and Android."
A while back I created a small project for drawing 3D wireframe graphics using the Common Lisp LTK interface to Tk.
It's slow (uses inefficient matrix algorithms, uses Tk, etc.) but it's "fast enough" for some simple 3D scenes. Not very practical for real-life use, but it was fun.
It does seem to perpetuate -- or at least not make clear -- a misconception.
> 3D rendering without shaders
> We won’t use any 3D APIs at all
Those are two independent statements.
Metal, OpenGL, WebGL, and Vulkan are not 3D APIs. They are (2D) rasterization APIs using shaders. Any 3D-ness of the math is external to them. In contrast, OGRE, Java 3D, and three.js are 3D rendering APIs.
Two independent choices yield four types of ways to do 3D rendering. E.g., in browser they could be
               | 3D API                 | no 3D API |
---------------|------------------------|-----------|
GPU shaders    | three.js, using WebGL  | WebGL     |
no GPU shaders | three.js, using canvas | canvas    |
This article fits in the bottom-right corner.
I take notice when I hear the oft-repeated claim that OpenGL/WebGL are 3D rendering APIs. At www.lucidchart.com, in 2015 we chose to use WebGL, when available, to improve rendering performance for (2D) diagramming. Were WebGL made for 3D stuff, that would be a weird choice, but WebGL is for high-performance rasterization of all kinds.
Great article!
Reading the title, I thought it was about the "tricks" that games used when the best thing available was the fixed pipeline.
I still remember how amazed I was when I learned the good balance between performance cost and the resulting image when using textures for static lighting (lightmaps).
This is the kind of article that I enjoy reading a lot. Most tools available today mask away fundamental concepts, and many aspiring young engineers learn to use "tools". While the ability to use various tools is of paramount importance, the most valuable skill an engineer can possibly possess, in my opinion, is the ability to create new tools/concepts/whatever from first-ish principles.
I have a question about the rasterization step. When creating the scanlines, would this be a possible entry point for anti-aliasing, by giving the lines a subtle gradient that goes to near 0 alpha at the right and left edges? (and maybe also the top and bottom edges for the lines at the top and bottom of the stack). There are many ways to do anti-aliasing and this seems like one possibility to me.
Any suggestions on a good primer for what shaders are and how they work? For years I've always thought "shaders" are just effects you can layer onto a rendered scene. Say, to get an 80s effect, or bloom, or a cel shading effect, etc. I never really thought of it as a way to actually do the base scene rendering.
Other than some very specific things like the rasterization algorithm for filling in a triangle, you still can't really skip learning the concepts in the article. The concepts are still quite relevant in the post-fixed-function-pipeline world, except you do even more math and even more clever tricks.
The GPU and associated APIs just nicely abstract specific computations for you, like matrix transforms, texture sampling, depth testing, etc., but the moment you want to do anything fancy you're right back into the depths of it.
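For the question above: a fragment (pixel) shader is essentially a pure function the GPU runs once per covered pixel, and a vertex shader one it runs per vertex, so they do the base rendering, not just post effects. A software sketch of the idea in C, with a made-up gradient "shader" (all names here are illustrative):

```c
typedef struct { float r, g, b; } Color;

/* The "fragment shader": a pure function from screen position to color.
 * u, v are normalized coordinates in [0, 1]. */
static Color gradient_shader(float u, float v) {
    Color c = { u, v, 0.5f };
    return c;
}

/* The fixed machinery around it: run the shader once per pixel. */
static void draw(Color *fb, int w, int h) {
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            fb[y * w + x] = gradient_shader((x + 0.5f) / w, (y + 0.5f) / h);
}
```

A post effect like bloom or cel shading is then just another such function run in a second pass over the first pass's output.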
Nobody really wrote software rendering like that beyond CG classes. I'd think the author is simplifying for the sake of accessibility, but it's actually more complicated than what production renderers did. One could also think it's to show a GPU's internal work, but, again, GPUs don't do this either.
People in the video game industry wrote tons of this stuff. We would spend weeks figuring out how to get one or two instructions out of the rasterizer or scanline converter, etc. I know this because I was there. I wrote several software rasterizers, and I learned how to do it by reading papers and magazine articles written by other people who wrote software rasterizers.
I have no doubt that other industries did so as well.
Even more recently, companies like RAD Game Tools built very fast software rasterizers as products (e.g. Pixomatic).
Also, what's in this article is a simplified introductory take. It is actually much much more complicated than this. (It doesn't look to me like he is doing perspective-correct shading, for example.) Also this guy's code is crazy slow compared to what you'd write in the real world, but hey, it is a tutorial.
> Also this guy's code is crazy slow compared to what you'd write in the real world, but hey, it is a tutorial.
I read pandaman's comment to mean this, actually. Not that nobody wrote their own handcrafted rasterizer/poly engine whatnots.
It's that the article is both not showing what GPUs do internally, and also not quite showing what a reasonable "artisanal" poly rasterizer gfx engine would look like for a system with no GPU. On the one hand, sure, it's a tutorial, so it just shows an overview of the steps a proper engine would need to complete, but it's done with an almost pseudocode level of inefficiency. Sadly (just a little), it'll run fast enough on a modern computer that I am sure some people will just take this example and use it as the core of an engine to build something crazy cool around :) And frankly, more power to them. Analogously, if someone had told me 17 years ago that JavaScript would one day be associated with the term "web assembly", that it would be used as a low-level target for compilers, and for many purposes be the first choice in application (even graphical) development... I never expected the future to be anything less than weirder than I could possibly imagine, so let's just write code! hahaha :-)
(actually, I edited that from 15 to 17 years ago, because I remember that in 2002 there were already some inklings of what could be done in JS; XHR was on the verge of becoming mainstream, and the possibilities were all there but not quite crystallised)
Are you sure? You have shipped games with software rasterizers and they did this exact algo (no matrices in transform, computing gradient for each scanline)?
I also wrote software renderers in the 90s. I would say it was mixed. Some people used matrices, and some didn't. You really can't say that everybody used the matrix formalism across the board. For that matter, I worked on an engine at a well-known game company as recently as 2013, where there were definitely hand-optimized paths for several different cases of transforms where the constraints were known. Generally speaking, game programmers will do whatever it takes, and painstakingly optimizing algos down to minimum arithmetic operations has always been pretty common in the field.
I too wrote software rasterizers in the 90s and even shipped a game with one. If you used a single sine for each vertex, let alone computing the full rotation, you were fucked. There was no hardware that could handle this in real time in the 90s.
If you did a division for each scanline, you were running at 1/4th of the speed at most - division was that expensive on the 386 through Pentium, not to mention other platforms whose CPUs didn't even have a hardware divide.
> Generally speaking, game programmers will do whatever it takes, and painstakingly optimizing algos down to minimum processor operations has always been pretty common in the field.
I am not even talking about crazy optimizations (like using intel's x86 half register to do ghetto fixed point), I am talking about common sense stuff.
Ah. I think you're talking about something completely different from what jblow and I were talking about. If you mean that the code in the article is not optimized, and is doing crippling amounts of redundant computation, then of course I agree with you.
The guy asked what kind of magic this is - calling trig functions in inner loops?
I replied that nobody really did it and I am confused as to what the author was trying to show.
I, frankly, do not understand what you and jblow read into this other than what I said.
Hahaha, I guess it's a big misunderstanding. You wrote:
"Nobody really wrote software rendering like that beyond CG classes".
I read this as a claim that nobody in general wrote software renderers. When by "like that" you just meant using the specific techniques he used.
That said, I still have to disagree, in the sense that, to get to a fast software renderer, you start with a slow software renderer. Nobody does all the crazy optimizations a priori ... so stuff like a divide per pixel was common, say. Calling trig functions in inner loops is of course goofy, but my presumption is that in the next step of refinement those would be lifted out of the loops, because that is the way things are always done.
> Nobody does all the crazy optimizations a priori ... so stuff like a divide per pixel was common, say. Calling trig functions in inner loops is of course goofy, but my presumption is that in the next step of refinement those would be lifted out of the loops, because that is the way things are always done.
Yeah, .. but no. Depends on what era you're talking about I guess.
When I wrote my first low-level rasterizer + basic 3D poly-engine-ish thing in 1998, I honestly wouldn't have considered for a second doing a division per scanline. An integer add was one tick, a mul was 3-10 (iirc), but a division was 10-40 ticks. Depending on what point you start considering the optimizations "crazy", yes, had I known (and cared) about perspective-correct shading back then, I would have started designing the algorithm a priori (which for me usually meant on grid paper), hunting for some way of faking a sufficiently accurate reciprocal using adds, shifts and at most 2 muls (per scanline, because per pixel even a single mul was madness, obviously). With what I knew back then, I'd probably have gone for a 2nd-degree polynomial that might have been sufficient to at least give the impression it was doing better than naive bilinear :) Had I been aware of Carmack's (objectively crazy) inverse sqrt hack, I would probably have started looking in that direction (abusing the IEEE float spec on the bit level, woohooooo).
Sure you could write a perspective correct triangle rasteriser with a div per scanline, and it would be too slow, it would be a nice theoretical proof of concept, but it would also be kinda useless if it turned out you couldn't make the above crazy hacks run fast enough. They were a real hurdle that had to be crossed or you might as well not bother. Also, why save the fun stuff for last? ;-)
It reinforces the stupid idea that you rasterize 2D triangles for a 3D renderer. People who think this way are then completely shocked and mystified as to why there is perspective distortion of the surface attributes.
And then, on 2D triangles, it reinforces the bizarre notion that a triangle is not flat, so the surface attribute gradients can change from scanline to scanline.
x86 has a bizarre register architecture. The 4 "general purpose" registers are like this: EAX & 0xffff == AX == (AH << 8) | AL, i.e.
the lower 16-bit portion of a 32-bit register is a separate 16-bit register, which, in turn, is made of two 8-bit registers. If you stored an 8.8 value in AX, for example, you could immediately access the integer part through AH. Of course, 8.8 numbers are not good for all values, but you could write fast texturing loops with them (256x256 used to be a solid size for a texture, so 8.8 was okay for texcoords).
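A C sketch of that 8.8 scheme (names are made up): the high byte is the integer part, which is exactly what reading AH gives you for free on x86, and the low byte is the fraction.

```c
#include <stdint.h>

typedef uint16_t fx88;               /* 8.8 fixed point in a 16-bit word */

enum { FX_ONE = 1 << 8 };            /* 1.0 in 8.8 */

static fx88 fx_from_int(int i)     { return (fx88)(i << 8); }
static int  fx_to_int(fx88 v)      { return v >> 8; }  /* "read AH" */
static fx88 fx_mul(fx88 a, fx88 b) { return (fx88)(((uint32_t)a * b) >> 8); }
```

In an inner texturing loop the per-pixel work is then a single 16-bit add to step the texcoord, and wrap-around at 256.0 comes for free from the register overflowing, which is why 256-wide textures were so convenient.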
Yup - it happened - I remember several games that just used trig via lookup tables (esp. if they were mostly Z-only rotation, cf. Battlezone). Also, many games rasterized convex polygons, so gradient per scanline was a thing.
In those days there was so little information easily available - and very little contact with other coders. Once you had figured out something that kinda worked, you got on with making a game.
The European demo scene did a lot to move ideas around, but was not quite the same as game development.
That's one way of putting it. In the early 2000s, commercial production games were just as much "demos done inefficiently" (for good business reasons, mostly). Maybe I sense an unintended connotation with the term "kids", though.
I remember ATI came to demonstrate their GPU at the Breakpoint demoparties (02/03 or so?) .. it was not unimpressive, and I didn't really understand what a "shader" was in the context of a GPU back then (it was also a dumb choice for a name, given what "shading" meant in gfx programming back then, the term "vertex shader" was borderline nonsensical). But they mainly showed stuff we could already do, except doing it in a higher resolution (on what had to have been the most high-end consumer hardware available, ATI brought themselves, it wasn't the official compo machine, for sure). Given I now know what a GPU shader does (parallelism, domain specific instructions but mainly parallelism), I know that "the same thing but on a higher resolution", is absolutely the most obvious thing you can do with a GPU over just a CPU. In demoscene terms, boooooring :-P
Either way, at some point (late 200X's?) professional game industry blew past the demoscene in production level. It did happen. I always thought it was the Hollywood movie level budgets (because those exploded too, in that era, didn't they?), but there might have been other factors.
My memory of early 2000s is different. Commercial games gave up on software rendering with PS2, ubiquitous 3D cards on PC and Microsoft paying everyone who agreed to develop for the Xbox. Graphics programming in games then was not much different than now. It was about giving good tools to artists and designers and not about doing "cool fx". Though I cannot speak for the whole industry, obviously, just what I've experienced.
> it was also a dumb choice for a name
Blame Steve Jobs and his RenderMan :) By the time consumer GPUs with shaders came out, "shader" had long been the established term for any code processing graphics data during rendering.
> Either way, at some point (late 200X's?) professional game industry blew past the demoscene in production level. It did happen. I always thought it was the Hollywood movie level budgets (because those exploded too, in that era, didn't they?), but there might have been other factors.
I remember in the mid-90s some demo groups claimed to work on actual games (the "Into the Shadows" guys said it was a prototype for a game they were writing) and even on hardware (Pyramid3D, lol), but I don't think anything noticeable came out. Remedy Entertainment is probably the most successful exit from the demoscene, and it never set the bar for the AAA industry. The difference between demos and games is the difference between dancing and fighting: you move your body in both cases, but in dancing you are fine as long as you pull off some cool moves, while in fighting you actually need to survive and harm your opponent as much as possible. Likewise, in games you need to draw what the artists want, and in demos you can draw anything as long as it's cool. There was never a sensible competition between the two. I imagine game programmers suck at making demos just as much as democoders suck at making games :)
Q: "Did you ship a game doing this?"
A: "Lookup tables!"
If you were smart enough to use lookup tables, you were already too smart to transform each vertex in multiple steps instead of folding all transforms into a single matrix.
No one used sequential trig calls; it was still all matrix multiplications, and such. If you're trying to explain the effects without jumping into matrices though, calculating the trig like that is the way to do it.
The for loops are always present, but they're handled in the driver or the hardware itself. If you assume that all you've got for drawing graphics is a setPixel function, then they end up exposed directly in your code.
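For instance, with nothing but setPixel, a flat-bottom triangle fill exposes both loops directly. A sketch (this version counts pixels instead of plotting so it is self-contained; left_x <= right_x is assumed):

```c
/* Fill a triangle with apex (apex_x, apex_y) and a horizontal bottom edge
 * from (left_x, bottom_y) to (right_x, bottom_y): walk the left and right
 * edges down from the apex, drawing one horizontal span per scanline.
 * Returns the number of pixels touched. */
static int fill_flat_bottom(float apex_x, float apex_y,
                            float left_x, float right_x, float bottom_y) {
    int pixels = 0;
    float inv_slope_l = (left_x  - apex_x) / (bottom_y - apex_y);
    float inv_slope_r = (right_x - apex_x) / (bottom_y - apex_y);
    float xl = apex_x, xr = apex_x;
    for (int y = (int)apex_y; y <= (int)bottom_y; y++) {  /* per scanline */
        for (int x = (int)xl; x <= (int)xr; x++)          /* per pixel    */
            pixels++;     /* a real renderer would call setPixel(x, y) */
        xl += inv_slope_l;
        xr += inv_slope_r;
    }
    return pixels;
}
```

With a GPU API those same loops still run; they've just moved into the driver and the rasterizer hardware.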