Good question! CORDIC and HAKMEM Item 149 are both hardware-friendly, but have different trade-offs:
CORDIC:
- Iterative algorithm (needs multiple clock cycles)
- Accuracy improves with more iterations
- Generates both magnitude and phase
- Typical hardware implementation: 12-16 iterations for decent precision
HAKMEM (Item 149):
- One new point per clock cycle (just two shift-and-adds per step when ε is a power of two)
- Uses the recurrence: x' = x - εy, y' = y + εx' (note the second step uses the new x)
- Accuracy depends on word width and epsilon choice
- Stable in exact arithmetic if |ε| < 2: the points stay on an ellipse instead of spiraling
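For anyone curious, the whole thing is a few lines; here's a minimal Python sketch of that recurrence (the function name, epsilon, and step count are mine, not from the original item):

```python
# Minimal sketch of the HAKMEM Item 149 (Minsky) circle recurrence.
# In hardware eps is a power of two, so the multiplies become shifts.
def minsky_circle(x, y, eps, steps):
    points = []
    for _ in range(steps):
        x = x - eps * y      # x' = x - eps*y
        y = y + eps * x      # y' = y + eps*x'  (deliberately uses the NEW x)
        points.append((x, y))
    return points

# e.g. minsky_circle(1.0, 0.0, 1 / 16, 400) walks (approximately) around a circle
```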
CORDIC is more accurate, but takes as many iterations as you have bits of precision in your angle. Another demo called Warp in this contest used pipelined CORDIC to do atan2 on every pixel to create a tunnel, which is super impressive.
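For comparison, a rough Python sketch of vectoring-mode CORDIC computing atan2 (the iteration count and names are mine, not from the Warp demo; real hardware replaces the divisions with shifts and uses a small arctan lookup table):

```python
import math

# Vectoring-mode CORDIC: rotate the vector (x, y) toward the x-axis by
# +/- atan(2**-i) each iteration and accumulate the angle. Valid for x > 0;
# a quadrant pre-rotation handles the rest of the plane.
def cordic_atan2(y, x, iterations=16):
    angle = 0.0
    for i in range(iterations):
        if y > 0:
            x, y, angle = x + y / 2**i, y - x / 2**i, angle + math.atan(2**-i)
        else:
            x, y, angle = x - y / 2**i, y + x / 2**i, angle - math.atan(2**-i)
    return angle

# cordic_atan2(1.0, 1.0) comes out very close to math.pi / 4
```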
I liked Perl; it was the first language I used daily as a HW engineer. When I moved to Python more recently, what I missed most was how easy it was to do a one-liner if with regex capture. That couldn't be done in Python for a long time. I think the walrus operator helps; it's still not quite as concise, but it's closer.
My code wasn't written to be hard to decipher, and it wasn't a goal to get everything on one line by any stretch. I just didn't like that an if with a regex was two lines minimum in Python; it felt inelegant for a language that is pretty elegant in general.
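For example, the difference looks like this (the log line and regex are made up just to show the idiom):

```python
import re

line = "error: code=42"

# Pre-walrus Python: the match and the test take two statements.
m = re.search(r"code=(\d+)", line)
if m:
    print(m.group(1))

# With the walrus operator (Python 3.8+) it fits on one line, closer to
# Perl's  print "$1\n" if $line =~ /code=(\d+)/;
if (m := re.search(r"code=(\d+)", line)):
    print(m.group(1))
```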
I heard last year that the potential future of gaming is not rendering but fully AI-generated frames. It's 3 seconds per 'frame' now; it's not hard to believe it could do 60 fps in a few short years. It makes it seem more likely such a game could exist. I'm not sure I like the idea, but it seems like it could happen.
The problem is going to be how to control those models to produce a universe that's temporally and spatially consistent. Also think of other issues such as networked games: how would you even begin to approach that in this new paradigm? You need multiple models to have a shared representation that includes other players, and you need to be able to sync data efficiently across the network.
I get that it's tempting to say "we no longer have to program game engines, hurray", but at the same time, we've already done the work: we already have game engines that are very computationally efficient and predictable. We understand graphics and simulation quite well.
Personally, I think there's an obvious future in using AI tools to generate game content. 3D modelling and animation can be very time-consuming; if you could get an AI model to generate animated characters, you could save a lot of time, and you could empower a lot of indie devs who don't have 3D modelers to help them. AI tools to generate large maps would be super valuable too. Replacing the game engine itself is a taller order than people realize, and maybe not actually desirable.
20 years out, what will everybody be using routine 10 Gbps pipes in our homes for?
I'm paying $43/month for 500 Mbps at present, and there's nothing special about that at all (in the US or globally). What might we finally use 1 Gbps+ for? Pulling down massive AI-built worlds of entertainment. Movies & TV streaming sure isn't going to challenge our future bandwidth capabilities.
The worlds are built and shared so quickly in the background that with some slight limitations you never notice the world building going on behind the scenes.
The world building doesn't happen locally. Multiple players connect to the same built world that is remote. There will be smaller hobbyist segments that will still world-build locally for numerous reasons (privacy for one).
The worlds can be constructed entirely before they're downloaded. There are good arguments for both approaches (build the entire world and then allow it to be accessed, or world-build as you play). Both will likely be used over the coming decades, for different reasons and at different times: changes in capabilities will unlock new arguments for either as time goes on, with a likely back and forth where one approach pulls ahead and then the other.
Increasing the framerate by rendering at a lower resolution and upscaling, or by outright generating extra frames, has already been a thing for a few years now. Nvidia calls it Deep Learning Super Sampling (DLSS)[1]; AMD's equivalent is called FSR[2].
I just had ChatGPT explain that problem to me (I was unfamiliar with the mathematical background). It showed how to derive closed-form answers for H(2) and H(3), and then numerical solutions using RK4 for higher values. Truly impressive, and it explained the derivations beautifully. There are few maths experts I've encountered who could have hand-held me through it as well.
I didn't understand the background before the explanation, but afterwards I did. It walked me through the mathematical steps, and each was logical and okay to follow if you have basic calculus knowledge.
It was. I asked it to give more details on parts of the derivation I didn't quite follow, and it did. Overall it was able to build from the ground up to the solution and solve it both numerically and analytically (for smaller values of x).
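For anyone unfamiliar, RK4 itself is tiny; a generic single step looks roughly like this (f is a placeholder for whatever ODE right-hand side the derivation produced, which isn't reproduced in this thread):

```python
# One classic fourth-order Runge-Kutta (RK4) step for y' = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```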
This makes sense, but what does not make sense is who tested this 'ultimate mode'. I mean, they went to the trouble of adding a physical hardware switch on the motherboard for this; surely when testing there was some kind of benchmark or comparison to show this feature was an advantage. Maybe they don't test, or maybe they have 'internal firmware' that is not what the user gets, but it's a serious fail either way.
It takes a long time to get from standard to silicon, so I bet there are design teams working on PCIe 7 right now, which won't see products for two or more years.
Exactly this. If a junior dev is never exposed to the task of reasoning about code themselves, they will never know the difference between good and bad code. Codebases will be littered with code that does the job functionally but is not good code, and technical debt will accumulate. Surely this can't be good for junior devs or for the codebases long term?
To be fair, most startups already trade "easier to add stuff in the future" for "works today", even before LLMs. I'm sure we'll see a much harder turn in the "works today" direction (which the current vibe-coding epidemic already seems to signal we're in), until the effects of that turn really start to be felt (maybe 1-3 years); then we'll finally start to steer back toward maintainable and simple software.
I’m not so sure. In the short term, yes, we hear about disasters caused by developers choosing “works today” and the AI almost instantly making a mess…
But that’s the point. The feedback loop is faster; AI is much worse at coping with poor code than humans are, so you quickly learn to keep the codebase in top shape so the AI will keep working. Since you saved a lot of time while coding, you’re able to do that.
That doesn’t work for developers who don’t know what good code is, of course.