I appreciate this take. I largely agree with the framing, and I think this is closer to the intended reading than some of the more heated responses in the thread. (I gather this is what's expected in the forum, and I welcome it now.)
You’re on point that the result is believable and not presented as some singular, world-ending breakthrough. Not at all. The point of Table 5 was to show that a surprisingly large amount of task-relevant signal survives under very strict constraints, not to claim that this alone replaces full inference or training. In that sense, calling it “nice but not shocking” is totally fair. It also makes a lot of the other takes in the thread more confounding than anything.
On the 224× compression language, the claim is specifically about task-specific inference paths, NOT about compressing the entire model or eliminating the teacher. I agree that if someone reads it as end-to-end model compression, that framing invites confusion. That's good feedback; I'm taking it seriously and will tighten the wording going forward.
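For anyone skimming the thread, here's a back-of-the-envelope illustration of what a ratio like that can mean for a per-token inference path. The dimensions below are placeholders chosen for round numbers, not values from the paper:

```python
# Hypothetical numbers only -- picked so the ratio works out cleanly,
# NOT taken from the paper.
hidden_dim = 4096        # assumed teacher hidden size
extraction_points = 7    # assumed number of tapped layers per token
path_floats = hidden_dim * extraction_points  # 28,672 floats per token
code_floats = path_floats // 224              # 128 floats per token
print(path_floats, code_floats, path_floats / code_floats)  # 28672 128 224.0
```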
I also agree that, viewed narrowly, this overlaps with distillation. The distinction I'm trying to surface (the part that's interesting here) is where and how early the structure appears, and how stable it is under freezing and extreme dimensional collapse. The paper deliberately avoids additional tricks, longer training, or normalization schemes precisely so that the effect size is not inflated. In other words, this is closer to a lower bound than an optimized ceiling.
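To make that concrete, here's a minimal sketch of the flavor of setup I mean, assuming a frozen teacher, a single extraction point, a linear bottleneck, and a linear head. Class names and dimensions are placeholders, not the paper's actual configuration or code:

```python
# Minimal sketch under assumptions: frozen teacher, one extraction point,
# linear bottleneck to a tiny code, linear readout -- no extra tricks or
# normalization. Names and shapes are placeholders, not the paper's setup.
import torch
import torch.nn as nn

class BottleneckProbe(nn.Module):
    def __init__(self, teacher: nn.Module, hidden_dim: int,
                 code_dim: int, num_classes: int):
        super().__init__()
        self.teacher = teacher
        for p in self.teacher.parameters():           # freeze the teacher entirely
            p.requires_grad = False
        self.down = nn.Linear(hidden_dim, code_dim)   # extreme dimensional collapse
        self.head = nn.Linear(code_dim, num_classes)  # linear readout only

    def forward(self, x):
        with torch.no_grad():        # teacher runs inference-only
            h = self.teacher(x)      # assumed to return [batch, hidden_dim] features
        return self.head(self.down(h))

# Only `down` and `head` are trained. If task accuracy holds up as code_dim
# shrinks, that's the "signal survives under strict constraints" reading --
# a lower bound, since nothing here is tuned to inflate the effect.
```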
What I would add is this: believe it or not, the paper is intentionally conservative, contrary to what the thread may suggest. It isolates one axis of the problem to make the geometry visible. There's ongoing work that relaxes some of those constraints and explores how these representations compose, persist across tasks, and interact with different extraction points. It's not ready to be released yet (and may never be), but it does address several of the gaps you're pointing out.
So basically I don't disagree with your characterization. That's exactly what it is: a first, deliberately narrow step rather than the full story. Thanks for engaging with it at that level. I appreciate your time.
> On the 224× compression language, the claim is specifically about task-specific inference paths, NOT about compressing the entire model or eliminating the teacher.
I understand that after reading the paper, but that clarification isn't in the title, and the title is what people read first. Leaving the 224× compression claim out of the title might have given you a much more favorable reception.
It's not easy to get noticed when you're not from a big lab, so don't get discouraged. It's nice work.