Hacker News | bbor's comments

   large LLM-generated texts just get in the way of reading real text from real humans
In terms of reasons for platform-level censorship, "I have to scroll sometimes" seems like a bad one.

This feels like an oversimplification of the issue. Why moderate at all? Spam posted here would only require you to "have to scroll sometimes".

That reminds me that their new tab grouping feature is the first one to really impress me and immediately enter my workflow in… years? Probably since either reader mode or auto-translate first dropped.

Highly recommend everyone check it out. It handily trounces all the tab management extensions I’ve tried over the years on FF and Chrome.


Random thought, but Kagi is acting like I wish Mozilla would. Their main product is a search engine, but they’ve been trying out a slew of other initiatives, all of which seem well thought out and integrate LLMs in an exclusively thoughtful, opt-in way. Surely many of them will end up being failures, but I can’t help but be impressed.

Maybe it’s because I’m a power user and they tend to cater to power users, idk — that’s definitely what the comment above yours is hinting at.

But at this point, I think we can all agree that whatever Mozilla is doing now isn’t working… so maybe power users are worth a shot again?


If Mozilla tried to do something like Kagi, they would likely be castigated by half of HN for "yet another side project adventure".

Search is quite the undertaking, so I'm not really hoping that Mozilla takes that on in particular. I'm just pointing out the odd reality that I tend to trust Kagi (a for-profit) to fight the general good fight in a way I agree with more than I trust Mozilla (a non-profit).

No, because that would actually be a feature worth adding and would make it a privacy browser instead of a funnel for data to Google.

I mean, there are lots of models that run on home graphics cards. I'm having trouble finding reliable requirements for this new version, but V3 (from February) has a 32B parameter model that runs on "16GB or more" of VRAM[1], which is very doable for professionals in the first world. Quantization can also help immensely.
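To put rough numbers on the quantization point, here's a back-of-envelope sketch (weights only; real usage adds KV cache and activation overhead, so treat these as lower bounds):

```python
# Estimate weight memory: params * bits-per-param / 8, expressed in GiB.
def weight_vram_gib(params_billion: float, bits_per_param: int) -> float:
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 1024**3

fp16 = weight_vram_gib(32, 16)  # ~59.6 GiB: out of reach for one consumer GPU
int4 = weight_vram_gib(32, 4)   # ~14.9 GiB: fits on a 16 GB card
```

That's why a 4-bit quant of a 32B model lands almost exactly at the "16GB or more" figure the requirements page cites.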

Of course, the smaller models aren't as good at complex reasoning as the bigger ones, but closing that gap entirely seems like an inherently impossible goal; there will always be more powerful programs that can only run in datacenters (as long as our techniques are constrained by compute, I guess).

FWIW, the small models of today are a lot better than anything I thought I'd live to see as of 5 years ago! Gemma3n (which is built to run on phones[2]!) handily beats ChatGPT 3.5 from January 2023 -- rank ~128 vs. rank ~194 on LLMArena[3].

[1] https://blogs.novita.ai/what-are-the-requirements-for-deepse...

[2] https://huggingface.co/google/gemma-3n-E4B-it

[3] https://lmarena.ai/leaderboard/text/overall


> but V3 (from February) has a 32B parameter model that runs on "16GB or more" of VRAM[1]

No. They released a distilled version of R1 based on a Qwen 32b model. This is not V3, and it's not remotely close to R1 or V3.2.


Over the past few months, I've switched a few decently-sized Python codebases from MyPy (which I used for years) to Pyrefly (because the MyPy LSP ecosystem is somewhere between crumbling and deprecated at this point), and finally to ty after it left beta this week. I'm now running a fully Astral-ized (Rust-ized!) setup:

1. packaging with uv (instead of pip or poetry),

2. type checking with ty (instead of the default MyPy or Meta's Pyrefly),

3. linting with ruff (instead of flake8 or pylint),

4. building with uv build (instead of the default setuptools or poetry build),

5. and publishing with uv publish (instead of the default twine)

...and I'm just here to say that I highly recommend it!
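For anyone curious what the cohesion buys you: the whole stack reads from a single pyproject.toml. A minimal sketch (project name and ruff rule selection are illustrative choices, not defaults; I've used hatchling as the build backend here since it's the one I'm sure of):

```toml
[project]
name = "demo-pkg"          # illustrative name
version = "0.1.0"
requires-python = ">=3.12"

[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I"]   # pycodestyle, pyflakes, isort-style import rules

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```

Then `ruff check`, `ty check`, `uv build`, and `uv publish` all operate off that one file, which is most of what I mean by "cohesive".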

Obviously obsessing over type checking libraries can quickly become bikeshedding for the typical project, but I think the cohesive setup ends up adding a surprising amount of value. That goes double if you're running containers.[1]

TBH I see Astral and Pydantic as a league of their own in terms of advancing Python, for one simple reason: I can trust them to almost always make opinionated decisions that I agree with. The FastAPI/SQLModel guy is close, but there are still some head-scratchers -- not the case with the former two. Whether it's docs, source code, or the actual interfaces, I feel like I'm in good hands.

TL;DR: This newly-minted fanboy recommends you try out ty w/ uv & ruff!

[1] https://docs.astral.sh/uv/guides/integration/docker/#availab...


Great article that I haven't finished, but if the author ends up reading this: any good dictionary of terms needs an index!


Well, expert systems aren’t machine learning, they’re symbolic. You mention perceptrons, but that timeline is proof for the power of scaling, not against — they didn’t start to really work until we built giant computers in the ~90s, and have been revolutionizing the field ever since.


Maybe true in general, but Gary Marcus is an experienced researcher and entrepreneur who’s been writing about AI for literally decades.

I’m quite critical, but I think we have to grant that he has plenty of credentials and understands the technical nature of what he’s critiquing quite well!


Yeah, my comment was mostly about the ecosystem at large, rather than a specific dig at this particular author; I mostly agree with your comment.


I always love a Marcus hot take, but this one is more infuriating than usual. He’s taking all these prominent engineers saying “we need new techniques to build upon the massive, unexpected success we’ve had”, twisting it into “LLMs were never a success and sucked all along”, and listing them alongside people that no one should be taking seriously — namely, Emily Bender and Ed Zitron.

Of course, he includes enough weasel phrases that you could never nail him down on any particular negative sentiment; LLMs aren’t bad, they just need to be “complemented”. But even if we didn’t have context, the whole thesis of the piece runs completely counter to this — you don’t “waste” a trillion dollars on something that just needs to be complemented!

FWIW, I totally agree with his more mundane philosophical points about the need to finally unify the work of the Scruffies and the Neats. The problem is that he frames it like some rare insight that he and his fellow rebels found, rather than something that was being articulated in depth by one of the field's main leaders 35 years ago[1]. Every one of the tens of thousands of people currently working on “agential” AI knows it too, even if they don’t have the academic background to articulate it.

I look forward to the day when Mr. Marcus can feel like he’s sufficiently won, and thus get back to collaborating with the rest of us… This level of vitriolic, sustained cynicism is just antithetical to the scientific method at this point. It is a social practice, after all!

[1] https://www.mit.edu/~dxh/marvin/web.media.mit.edu/~minsky/pa...


I see where you're coming from on a methodological level, but

1. Capitalists control our society, and live completely different lives than the rest. A typical CEO is certainly quite privileged, and may even work their way up to true wealth eventually! But at the end of the day, they're still clocking in for at least 40 hours a week to do something they'd rather not do, and their life would be completely upended if they had to stop working for some reason. The difference between Pichai and Bezos dwarfs the difference between Pichai and me for these reasons, IMO.

2. Capitalists directly control ~50% of the capital in the US last time I checked. It makes sense to split any given pie in half IMO, at least to start!


“The difference between Pichai and Bezos dwarfs the difference between Pichai and me”

I don’t understand: Pichai is a billionaire.


If you consider it in absolute values, it makes sense. Bezos could give me a billion dollars, which would match my wealth with Pichai's, and he'd still have 199 billion dollars.


Yes, if you have a billion dollars then in terms of wealth Pichai is closer to you than to Bezos. But if you’re a typical HN reader (level 4 or 5), the difference between you and Pichai is pretty much infinite, while Pichai and Bezos are almost the same (relative to you): both are ultra rich.
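The absolute-vs-relative distinction is easy to see with rough, purely illustrative figures (none of these net worths are exact):

```python
import math

# Illustrative orders of magnitude: a well-off engineer (~$1M),
# Pichai (~$1B), Bezos (~$200B).
me, pichai, bezos = 1e6, 1e9, 2e11

# Absolute gap: Bezos-to-Pichai ($199B) dwarfs Pichai-to-me (~$1B).
assert (bezos - pichai) > (pichai - me)

# Relative (log-scale) gap: Pichai sits 3 orders of magnitude above me,
# but only ~2.3 below Bezos -- on that scale they're near neighbors.
orders_me_to_pichai = math.log10(pichai / me)        # 3.0
orders_pichai_to_bezos = math.log10(bezos / pichai)  # ~2.3
```

So both commenters are right; they're just measuring distance on different scales.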


How do you define "capitalists", in this context?


Probably the way it’s always been defined: those that own capital.


Yup, exactly this! To clarify a bit more for the lurkers:

Obviously the line can be hard to draw for most (intentionally so, even!), but at the end of the day there are people who work for their living and people who invest for their living. Besides not having to work, investors are very intentionally & explicitly tasked with directing society.

Being raised in the US, I often assumed that “capitalism” meant “a system that involves markets”, or perhaps even “a system with personal freedom”. In reality, it’s much drier and more obvious: capitalism is a system where the capitalists rule, just like the monarchs of monarchism or the theocrats of theocracy. There are many possible market-based systems that don’t have the same notion of personal property and investment that we do.


Ah, that might explain some communication issues I've had.

Looking it up, it seems that Marxists use the word "capitalists" to refer to the class of owners of capital. I've always used "capitalist" to refer to a market-led country or to people who believe in capitalism. My dictionary helpfully uses "capitalist" to mean anything related to capitalism.

At the very least, I'll have learnt something from this conversation :)


Lots of followers of capitalism fancy themselves capitalists, as supporters of a system that could one day enable them to own capital themselves, which feels like an even playing field in terms of future possibility. But they are not capitalists and have nothing in common with the ones they idolize. There is an in-between sense of the word where people apply the label aspirationally.


>Capitalists control our society, and live completely different lives than the rest.

Also, the Capitalists are good at keeping things hidden from us. For example, we do not know how they arrive on Earth. I certainly don't believe they aren't born to a mother and a father like the rest of us.

