bird0861's comments

Aren't they one of the worst physics channels apart from just outright fraudulent/fringe grifters like ElectricUniverse? Seems like every other week or so I see someone patiently detail why they have incorrectly explained something. I think the "[particles, like photons] take all possible paths" fiasco is the most recent one I can recall.

There are things physicists themselves cannot agree on, so for some topics there is no "true" interpretation and you can only present your own.

So, sure, they deserve criticism for the "all possible paths" brouhaha, but by and large I think the channel offers many lay people access to physics in a consumable form while maintaining rigor better than most.


I haven't watched that many, but for the few I did watch, the physics was surprisingly good.

What was the many paths fiasco?


Typical quality of The Guardian unfortunately. Don't read their energy reporting if you're at all literate about any of those topics. Any time they do a story on fusion I just about have an embolism.

The water is actually ice crystals and the ice crystals form around the soot.

There is also an immense amount of water vapour being produced by the combustion of a hydrocarbon.

Sure, but water vapor doesn't spontaneously transition to a liquid and accrete onto surfaces - there needs to be a super-saturation of water vapor, and given the temperatures of jet exhaust, that's not trivial to achieve. However, the super-saturation needed for water vapor to deposit onto surfaces as ice is much lower, hence the preference for ice crystal nucleation.
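To put rough numbers on the ice-vs-liquid point, here's a back-of-the-envelope sketch using the Buck fits for saturation vapor pressure (purely illustrative; the -50 °C figure is just a typical cruise-altitude temperature, not anything from the thread):

    import math

    def e_sat_water(t_c):
        # Buck (1996) fit: saturation vapor pressure over liquid water, hPa, t_c in deg C
        return 6.1121 * math.exp((18.678 - t_c / 234.5) * (t_c / (257.14 + t_c)))

    def e_sat_ice(t_c):
        # Buck (1996) fit: saturation vapor pressure over ice, hPa
        return 6.1115 * math.exp((23.036 - t_c / 333.7) * (t_c / (279.82 + t_c)))

    t = -50.0  # roughly cruise-altitude temperature, deg C
    print(e_sat_water(t), e_sat_ice(t))
    # The ice value comes out well below the liquid value, so air that is
    # sub-saturated with respect to liquid water can still be super-saturated
    # with respect to ice - hence deposition onto ice nuclei (the soot) wins.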

stares in Lidarr


Doesn't really fill the same niche SoundCloud does. Most content on SC is non-commercial or simply not available on any streaming service.

Lidarr relies on people ripping this music and adding the metadata to MusicBrainz, which simply isn't going to happen for most SC uploads.


I thought for a moment while reading these comments that somehow SC had completely changed in terms of content and type of user. People seem to think it's a Spotify-like service or something. I essentially consumed audio shitposts and DJ mix sets on SC, stuff that you're not going to find published in a pirateable form...


You seem like the type of coworker I would accept less pay to work with. I'm actually at a crossroads right now; I did my research on my prospects and narrowed it down to the two places where I most expect to be surrounded by good coworkers and managers. Cheers.


I've been asking around for the last week about Go vs Elixir vs Zig; I'd love to get feedback here too. I only have time for one, and I'm looking for something that can replace a lot of the stuff I do with Python. I don't have time to wait for Mojo.


I fully agree with this POV but for one detail: there is a problem with sunsetting frontier models. As we begin to adopt these tools and build workflows with them, they become pieces of our toolkit. We depend on them. We take them for granted, even. And then the model either changes (new checkpoints, maybe alignment gets fiddled with) or gets sunset, and all of a sudden prompts no longer yield the results we expected from them after working on them for quite some time. I think the term for this is "prompt instability".

I felt this with Gemini 3 (and some people had a less pronounced but similar experience with Sonnet releases after 3.7): for certain tasks that 2.5 Pro excelled at, it's just unusable now. I was already a local model advocate before this, but now I'm a local model zealot. I've stopped using Gemini 3 over this. Last night I used Qwen3 VL on my 4090 and although it was not perfect (sycophancy, overuse of certain cliches...nothing I can't get rid of later with some custom promptsets and a few hours in Heretic), it did a decent enough job of helping me work through my blind spots in the UI/UX of a project, and I got what I needed.

If we have to re-tune our prompts ("skills", agents.md/claude.md, all of the stuff a coding assistant packs its context with) with every model release, then I see new model releases becoming more of a liability than a boon.
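One way to at least detect that drift is to treat the prompt set like a regression suite and rerun it before switching to a new release. A minimal sketch - the golden.json format and the call_model callable are placeholders, not any particular vendor's API:

    import json
    from typing import Callable

    def check_prompts(call_model: Callable[[str], str], golden_path: str = "golden.json"):
        # golden.json: [{"prompt": "...", "must_contain": ["...", ...]}, ...]
        # where must_contain holds strings you've already hand-verified in good outputs.
        with open(golden_path) as f:
            golden = json.load(f)
        failures = []
        for case in golden:
            out = call_model(case["prompt"])
            missing = [s for s in case["must_contain"] if s not in out]
            if missing:
                failures.append((case["prompt"][:60], missing))
        return failures  # non-empty list = the new checkpoint broke your prompts

If the failure list grows on a new checkpoint, the "upgrade" is a regression for that workflow and you can hold back (or stay local).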


That study is garbo and I suspect you didn't even read the abstract. Am I right?


I've heard this mentioned a few times. Here is a summarized version of the abstract:

    > ... We conduct a randomized controlled trial (RCT)
    > ... AI tools ... affect the productivity of experienced
    > open-source developers. 16 developers with moderate AI
    > experience complete 246 tasks in mature projects on which they
    > have an average of 5 years of prior experience. Each task is
    > randomly assigned to allow or disallow usage of early-2025 AI
    > tools. ... developers primarily use Cursor Pro ... and
    > Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing
    > AI will reduce completion time by 24%. After completing the
    > study, developers estimate that allowing AI reduced completion time by 20%.
    > Surprisingly, we find that allowing AI actually increases
    > completion time by 19%—AI tooling slowed developers down. This
    > slowdown also contradicts predictions from experts in economics
    > (39% shorter) and ML (38% shorter). To understand this result,
    > we collect and evaluate evidence for 21 properties of our setting
    > that a priori could contribute to the observed slowdown effect—for
    > example, the size and quality standards of projects, or prior
    > developer experience with AI tooling. Although the influence of
    > experimental artifacts cannot be entirely ruled out, the robustness
    > of the slowdown effect across our analyses suggests it is unlikely
    > to primarily be a function of our experimental design.

So what we can gather:

1. 16 developers completed 246 tasks, with each task randomly assigned to allow or disallow AI

2. They knew the codebase they worked on pretty well

3. They said AI would help them work 24% faster (before starting tasks)

4. They said AI made them ~20% faster (after completion of tasks)

5. ML experts predicted that programmers would be ~38% faster

6. Economists predicted ~39% faster

7. The study measured that people were actually 19% slower

This seems to have been done with Cursor, with big models, on codebases the developers knew well. There are definitely problems with industry-wide statements like this, but I feel like the biggest area where AI tools help me is when I'm working on something I know nothing about. For example: I am really bad at web development, so CSS/HTML is easier to edit through prompts. I don't have trouble believing that I would be slower trying to make an edit to code that I already know how to make.

Maybe they would see speedups if they let the engineers choose when to use AI assistance and when not to.


It doesn't control for skill/experience using the models. This looks VERY different at hour 1,000 or hour 5,000 than at hour 100.


Lazy of me not to check whether I remember this correctly, but the dev who got productivity gains was a regular user of Cursor.


I can't emphasize this enough: it doesn't matter how good the model is or what CLI I'm using, use git and a chroot at the least (a container is easier, though).

Always make the agent write a plan first and save it to something like plan.md, then tell it to update the list of finished tasks in status.md as it finishes each task from plan.md, and to let you review the change before proceeding to the next task.
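Not claiming this is anyone's exact setup, but as a rough sketch of the sandbox half - the image name, agent CLI, and branch name below are all placeholders:

    import subprocess
    from pathlib import Path

    REPO = Path.cwd()
    INSTRUCTIONS = (
        "Write plan.md before touching any code. After finishing each task from "
        "plan.md, update status.md and stop so I can review the diff."
    )

    # Work on a scratch branch so every agent change is easy to diff or throw away.
    subprocess.run(["git", "switch", "-c", "agent/scratch"], cwd=REPO, check=True)

    # Run the agent in a throwaway container with only the repo mounted.
    subprocess.run(
        [
            "docker", "run", "--rm", "-it",
            "-v", f"{REPO}:/work", "-w", "/work",
            "agent-image",              # placeholder: image with your agent installed
            "agent-cli", INSTRUCTIONS,  # placeholder: your actual agent invocation
        ],
        check=True,
    )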


Check out Mask Banana - you might have better luck using masks to get image models to pay attention to what you want edited.
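For what it's worth, the mask itself is just a single-channel image where white marks the region you want touched; a tiny Pillow sketch (filenames and coordinates are placeholders, and how you feed the mask in depends on the tool):

    from PIL import Image, ImageDraw

    src = Image.open("input.png")
    mask = Image.new("L", src.size, 0)   # single-channel, all black = "leave alone"
    ImageDraw.Draw(mask).rectangle((100, 100, 400, 300), fill=255)  # white = "edit here"
    mask.save("mask.png")                # supply alongside the image and prompt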

