
I don't really know what I'm talking about, but doesn't this address that?

> with newly learned parameters that make up less than 0.6% of the total parameters in the generative model

0.6% sounds like a small number. Is it measuring the right thing?

Certainly, I wouldn't expect the model to necessarily be encoding exactly the set of things that they're extracting, but it still seems very significant to me even if it is "just" encoding some set of things that can be cheaply (in terms of model size) and reliably mapped to normals, albedo, and depth.

(I don't care what basis vectors it's using, as long as I know how to map them to mine.)
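To make the "cheaply mapped" idea concrete, here's a rough sketch of what a small readout head could look like: a few conv layers that take frozen intermediate features from the generative model and decode them to normals, albedo, and depth. This is not the paper's method, and the feature dimension and layer sizes are made up; the point is just that such a mapping can be tiny relative to the backbone.

  import torch
  import torch.nn as nn

  # Hypothetical readout head (sizes are assumptions, not from the paper):
  # maps frozen backbone features to normals (3ch), albedo (3ch), depth (1ch).
  class ReadoutHead(nn.Module):
      def __init__(self, feat_dim=1280, hidden=256):
          super().__init__()
          self.proj = nn.Conv2d(feat_dim, hidden, kernel_size=1)
          self.normals = nn.Conv2d(hidden, 3, kernel_size=3, padding=1)
          self.albedo = nn.Conv2d(hidden, 3, kernel_size=3, padding=1)
          self.depth = nn.Conv2d(hidden, 1, kernel_size=3, padding=1)

      def forward(self, feats):
          h = torch.relu(self.proj(feats))
          return self.normals(h), self.albedo(h), self.depth(h)

  head = ReadoutHead()
  print(sum(p.numel() for p in head.parameters()))  # ~0.34M params with these made-up sizes

With these (invented) sizes the head comes to a few hundred thousand parameters, well under the 0.6% figure, so the mapping itself doesn't have to be expensive even if the representation isn't literally normals/albedo/depth.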



0.6 percent of a model with 890 million parameters is still 5.34 million parameters.

That's still pretty big. Maybe big enough to fake normals, learn some albedo smoothing functions, and learn a depth estimator.
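Back-of-the-envelope check of that number (just arithmetic on the two figures quoted above, nothing else from the paper):

  total_params = 890_000_000   # reported backbone size
  fraction = 0.006             # "less than 0.6%"
  print(total_params * fraction)  # 5,340,000 newly learned parameters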



