For the sake of clarity: Woit's post is not about the same alleged instance of GPT producing new work in theoretical physics, but about an earlier one from November 2025. Different author, different area of theoretical physics.
This thread is about "whenever a new breakthrough in AI use comes up", and the comment you replied to correctly expresses skepticism about the general case and does not claim any relation to the current case.
You reached your goal though and got that comment downvoted.
My goal was to help other people not make the same mistake as I initially did, of thinking that Peter Woit had made some criticism of the latest claim of GPT-5.2 making a new discovery in theoretical physics, which in fact he appears not to have done.
If I'd wanted that comment downvoted, I would have downvoted it myself, which as it happens I didn't. There was nothing particularly wrong with it, other than the fact that it was phrased in a way that could mislead, hence my comment.
It seems possible that people in the Philippines providing advice to Waymo vehicles in the US get some training on US road signage, traffic regulations, etc. (I can't see how it would make any sense for Waymo to pay people to do this and not give them the information they need to do it reasonably well, since the whole point is for them to handle difficult cases.)
And it would be difficult for whatever training Waymo provides to its employees to be less stringent than the lax license requirements of most US states.
This is wrong, although something quite like it is right.
Imagine that there are only 10 Waymo journeys per year, and every year one of them hits a child near an elementary school, while there are 1,000,000 non-Waymo journeys per year, and every year two of them hit children near elementary schools. In this scenario Waymo has half as many accidents but is clearly much more dangerous.
Here in the real world, obviously the figures aren't anywhere near so extreme, but it's still the case that the great majority of cars on the road are not Waymos, so after counting how many human drivers have had similar accidents you need to scale that figure in proportion to the ratio of human to Waymo car-miles.
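To make the normalisation concrete, here is a minimal Python sketch using only the made-up figures from the example above (not real Waymo or accident data):

    # Made-up figures from the hypothetical above, not real data.
    waymo_journeys, waymo_accidents = 10, 1
    human_journeys, human_accidents = 1_000_000, 2

    waymo_rate = waymo_accidents / waymo_journeys    # 0.1 accidents per journey
    human_rate = human_accidents / human_journeys    # 0.000002 accidents per journey

    # Waymo has half the raw accident count but a 50,000x higher rate per journey.
    print(waymo_rate / human_rate)  # 50000.0

The same calculation works with car-miles in place of journeys; the point is just that raw counts have to be divided by exposure before comparing.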
(Also, you need to consider the severity of the accidents. That comparison probably favours Waymo; at any rate, they're arguing that it does in this case, that a human driver in the same situation would have hit the child at a much higher and hence more damaging speed.)
> they're unlikely to systematically prefer newer editions
That seems wrong to me. Generally when a new edition of something is put out it's (at least nominally) because they've made improvements.
("At least nominally" because it may happen that a publisher puts out different editions regularly simply because by doing so they can get people to keep buying them -- e.g., if some university course uses edition E of book B then students may feel that they have to get that specific edition, and the university may feel that they have to ask for the latest edition rather than an earlier one so that students can reliably get hold of it, so if the publisher puts out a new edition every year that's just different for the sake of being different then that may net them a lot of sales. But I don't think it's true for most books with multiple editions that later ones aren't systematically better than earlier ones.)
> But I don't think it's true for most books with multiple editions that later ones aren't systematically better than earlier ones.
Most books with multiple editions are books that have been translated multiple times. It is definitely true that later translations aren't systematically better than earlier ones.
Heaney's famous translation begins "So. The Spear-Danes ..." with that "So" being an interjection, a thing that could in principle stand on its own. (You might say "So." and wait for everyone to settle down and start listening.) Even more so with things like "Yo!" or "What ho!" or "Bro!" or "Lo!". (Curious how all the options seem to end in -o.)
This is more like "So, the Spear-Danes ..." where the initial "So" has roughly the same purpose of rhetorical throat-clearing and attention-getting, but now it's part of the sentence, as if it had been "As it turns out, the Spear-Danes ..." or "You might have heard that the Spear-Danes ...".
I think the theory described in OP makes the function of "hwaet" a little different, though; not so much throat-clearing and attracting attention, as marking the sentence as exclamatory. A little like the "¡" that _begins_ an exclamation in Spanish.
Of course a word can have more than one purpose, and it could be e.g. that "hwaet" marks a sentence as exclamatory and was chosen here because it functions as a way of drawing attention.
I'm having trouble finding any evidence for that. E.g., https://web.archive.org/web/20030808111721/https://edition.c... -- here's a thing from February of that year that (if I'm understanding right) reports Ventura leaving the Reform Party because he didn't like its endorsement of Pat Buchanan for president; it mentions Trump, but only as one person Ventura might have supported as a presidential nominee, and it actually quotes Trump saying to Ventura "you're the leader". Trump was never the Reform Party's nominee nor anyone else's. (https://en.wikipedia.org/wiki/Donald_Trump_2000_presidential... says that "he never expanded the campaign beyond the exploratory phase".)
It's not entirely clear to me that there was actually such a thing as the leader of the Reform Party, especially in early 2000 when there was a lot of infighting, but if there was one it seems to me that it might have been Ventura but certainly wasn't Trump.
I stand corrected. According to Wikipedia, Trump sought the Reform Party's presidential nomination for a few months and ultimately backed out.
Regardless, that fact only supports my point. The man is a loser whose modern following is based largely on external factors like white grievance and fear.
I agree that we should be reading books with our eyes and that feeding a book into an LLM doesn't constitute reading it and confers few of the same benefits.
But this thing isn't (so far as I can tell) even slightly proposing that we feed books into an LLM instead of reading them. It looks to me more like a discovery mechanism: you run this thing, it shows you some possible links between books, and maybe you think "hmm, that little snippet seems well written" or "well, I enjoyed book X, let's give book Y a try" or whatever.
I don't think it would work particularly well for me; I'd want longer excerpts to get a sense of whether a book is interesting, and "contains a fragment that has some semantic connection with a fragment of a book I liked" doesn't feel like enough recommendation. Maybe it is indeed a huge waste of time. But if it is, it isn't because it's encouraging people to substitute LLM use for reading.
The ideal way to find similarities between two books is to read both of them. If an LLM is finding links between two books, that means that the LLM read both of the books.
To determine if a book is worth reading, I think it's better to ask someone for their recommendation or look at online reviews.
What does "challenged Wikipedia so thoroughly" mean?
(My impression is that Grokipedia was announced, everyone looked at it and laughed because it was so obviously just taking content from Wikipedia and making it worse, and since then it's largely been forgotten. But I haven't followed it closely and maybe that's all wrong.)