Well Google has activated access to Google drive, mail, etc for most users automatically (or maybe I just clicked yes sometime) and so far I think it's a net positive for me personally and don't here from any disasters publicly.
While technically this is rooted in the technological misconstruction of a missing separation of data and instructions.
However my point is: on the other hand, that would be the same if you outsourced those tasks to a human, isn't it? I mean sure, a human can be liable and have morals and (ideally) common sense, but most major screw ups can't be fixed by paying a fine and penalty only.
We have no general-purpose solutions to the principal-agent problem, but we have partial solutions, and they only work on humans: make the human liable for misconduct, pay the human a percentage of the profits for doing a good job, build a culture where dishonesty is shameful.
The "lethal trifecta" is just like that other infamously unsolvable problem, but harder. (If you could solve the lethal trifecta, you could solve the principal-agent problem, too.)
Since we've been dealing with the principal-agent problem in various forms for all of human history, I don't feel lucky that we'll solve a more difficult version of it in our lifetime. I think we'll probably never solve it.
Sometimes time doesn't allow that for multiple communities simultaneously, but you are right. Still I think a lot of online communities are drowning in AI slob diluting the well thought about stuff that would deserve the attention.
The AI slop problem is real and getting worse. Half the "community engagement" you see now is LLM generated comments that add nothing. It actually makes genuine participation more valuable though, because people can tell the difference pretty quickly. The signal to noise ratio shift is probably the strongest argument for investing in real community presence over spray and pray posting.
True story, yesterday I tried to get some feedback from an industry relevant subreddit for a real estate quick check calculation tool (automatically extracts listing data into calculation and enables sharing investment ideas). The pure mention of AI brought up a whole crowd of fed up bullies that talked it down as vibecoding trash - which it really isn't. All those places are flooded.
People, not bullies. I can sympathize with you because I've struggled with the same, but we can't blame those people. They're now being asked every two days to give feedback on yet another tool. That used to be once every 6 months. And the overwhelming majority of those new "tools" is abandoned within a month. And there is indeed a huge amount of vibecoded slop. I've put more time and thought into our product than the last 20 such tools that got posted into our industry-relevant subreddit combined, but I can't expect the mods and users to put their time into assessing that.
I think it’s strange to dislike vibe coded things. I’ve seen a lot of cool stuff that’s mostly vibe coded. fomo.nyc for example. The problem is mostly the intention. I think a lot of vibe coded stuff isn’t solving a problem someone has its someone trying to seek profit. It’s no different from when smartphones first came out and people wanted to make a for everything when most of them didn’t solve any problems. The difference is nobody is wowed anymore by anything so your app that turns your phone into a beer kind of thing doesn’t exist in the vibe coded world.
> I’ve seen a lot of cool stuff that’s mostly vibe coded. fomo.nyc for example.
Maybe because you're not an actibe member of one of those fields whose subreddits now gets these vibecoded tools every other day. Because for those, I can tell you that the overwhelming majority isn't cool. Even when they are trying to solve a problem and aren't (yet) seeking a profit. They have very little time, energy and thought put into them. They're made by people who are passers-by, who aren't personally invested and often haven't experienced the problem first-hand; it's just a problem they heard about, or assumed would exist. And that leads to things that are a waste of time. Often they have blatantly obvious problems that show they didn't even QA it for an hour.
And to top it all off, they frrquently even vibe code their reddit posts. There is absolutely nothing of interest to interact with those posts.
The amount of reddit posts I land on from the last few years which are noticeable longer with no added value for the increased length than they would have been in the past is getting very annoying.
While this is directionally correct it does come down to a tonality, that I think wasn't justified in that case. But hey, it's the Internet and I'm not naive either.
I'm saying that the tonality is justified due to all of the other slop. You and me are simply the unavoidable casualties.
I'm saying this while in a field that's even more anti-AI/tools than yours, FWIW. By virtue of anything that mentions AI - and much of the tools that don't even mention it but are suspected - getting auto-removed by automod on the subreddit. There's only one subreddit in our field, everyone's on it, and it blanket bans anything with AI, despite the best tools out there incoporating it.
It's not even something like art, design, coding and so on where people are scared of job loss or even hobby loss, it's nothing like that at all for the community. I do suspect the mods' friends might feel threatened, as they _do_ make a living off of the community, but in our case I don't even think that's their primary reason to blanket ban it.
While that's true, my tool (as part of the flood) didn't originate from the same spring, it's just something I happen to be building that same way, I did before the LLM wave. It's not vibecoded SaaS fast food.
I checked community guidelines before and think regarding Reddit, this is where it should be resolve in my option.
Unfortunately the source doesn't matter when there so much. It is really hard to differentiate things when you are inundated. Did you try a Show HN here? It requires more luck than ever because of the same problem, but worth a try. I'll take an honest look if you do it (though hard to say if I am the target market).
Had basically the same thing happen. Posted in a side project sub, spam filter nuked it because new account. And in other subs now, anything that mentions AI gets hit with "vibecoded slop" automatically. Doesn't matter if you spent months on it.
What actually moved the needle was talking about data, not the product. I posted about my tool — crickets. Then I wrote about stuff I discovered while building it and people started engaging. Exact same product behind both posts, just "here's what I found" instead of "here's what I built." Night and day difference.
Maybe it still is supposed to sound fancy to say you didn't read any of the code. The guy definitely could very deeply understand, read and edit the code, he developed the industrial standard liberal for PDF editing (used by Dropbox etc).
Just saying what you want might be the future for development of some kinds of software, but this use case sure seems like a very bad idea.
I very much appreciate the vision he put into practice, but feel sorry for the project being acquihired kind of.
I challenged Gemini to answer this too, but also got the correct answer.
What came to my mind was: couldn't all LLM vendors easily fund teams that only track these interesting edge cases and quickly deploy filters for these questions, selectively routing to more expensive models?
Yes that's potentially why it's already fixed now in some models, since it's about a week after this actually went viral on r/localllama originally. I wouldn't be surprised if most vendors run some kind of swappable lora for quick fixes at this point. It's an endless whac-a-mole of edge cases that show that most LLMs generalize to a much lesser extent than what investors would like people to believe.
Like, this is not an architectural problem unlike the strawberry nonsense, it's some dumb kind of overfitting to a standard "walking is better" answer.
reply