Hacker News | snailmailman's comments

This case is wild and seems to perfectly encapsulate all the problems people complain about with vibecoded projects.

The "rewrite it in rust" commit is +1M lines of code. Humans haven't looked at that in depth. In about a week, they saw the tests passed and pushed it to main. Now people have started to look through it and are pointing out glaring issues. And the solution is just going to be "feed it to another AI and ask it to fix it".

The entire codebase is slop now. Nobody knows what it does. It manages to pass some tests, but it's largely a black box simply because humans haven't read it yet. The code isn't guaranteed to be anything close to 1:1 with the old codebase. It's probably vaguely shaped like the old codebase, but new bugs could be there, old bugs could be there, nobody knows anything yet.

It's going to be interesting to see how recoverable this is. They are almost certainly going to just hand every file to an AI, say "look for soundness issues and fix them", and then what? If AI is making huge, sweeping changes to the code so frequently that humans can't keep up, is that really maintainable? The only solution appears to be "even more AI", while anybody who looks closely gets scared away by the too-large-to-comprehend-and-entirely-slop codebase.

This kind of thing has been happening with many smaller projects already, but now it's a larger project and happening in a much more public way, with the intent to replace human-written, mostly-understood code with slop. I suspect the same thing, with the same problems, is happening inside all the largest companies, just not quite as obviously.


> only solution appears to be "even more AI"

That's the idea: to transform businesses to be wholly dependent on "AI" services to develop software. What better way than to re/write entire codebases until no human being understands them.

The Zig project knows this, and its so-called "anti-AI" policy is actually pro-community and cultivates human understanding. It's not about the tool or technology, per se; it's about people, knowledge, and sustainability.

In contrast, the Bun project is demonstrating that it doesn't care about any of that, YOLO-ing its way to losing the trust of its users, contributors, and maintainers. Oh well, AI will maintain the project now, since no one else can.


The one thing I can't stand about the AI zealots is their anti‑intellectualism. Even before coding agents became a thing, there were so many comments here along the lines of, "doing things properly has a learning cost! I don't have time for that nonsense because, unlike you, I'm busy actually making stuff." Now, too many people openly mock the practice of reading, writing, or understanding code altogether.

It's sad to see what hacker culture has been reduced to: outright contempt for science and engineering.


One theory is that most people writing software who are not software engineers prefer using AI because they don't think software is valuable in itself; it's only a way to solve a problem. So there are two camps, the other being people who like to solve "software problems". But that latter problem has, the thinking goes, been solved by AI.

That's exactly the thing I'm trying to call out. AI coding has attracted a flood of people whose only goal is to make a quick buck out of shoddy work. They regard science and engineering as beneath them, and they're not shy about saying it, here and elsewhere.

Any serious professional in this field knows that software development is far from a solved problem. It wasn't before LLMs, and it isn't now. Responsible development takes discipline and respect for the hard-won lessons of past and present efforts.

But no, according to many here, being responsible makes you a "luddite." "Humans make mistakes too," that's what they'll say as they'll inevitably screw over people's lives with their reckless disregard for others. "It's not my issue to solve."

Seriously, haven't techbros already caused enough damage throughout society with "move fast and break things"? A lot of people are losing patience for this nonsense.


This is because AI is most appealing to average and below-average developers and users: it makes them feel like they can finally do something.

This is more or less my take on it.

I am not against AI code, it can be perfectly fine.

The principal issue in my mind is the rate of change.

Once you rewrite a codebase like this (in a week, no less), the only way to work on it in the future is with AI tools, because no single person has any knowledge of any specific piece of the codebase anymore.

AI generated code that is run through a classic PR process would potentially be fine, but then you sorta lose the entire point of using AI.


That happened to my project as well. The main issue hasn't been that AI couldn't solve the problem, but that development became so slow, and you need more and more verification layers and CI/CD, until at some point you wish you had the simpler codebase back, with reasonable tests, a clear storyline in the code, and so on.

The mobile app is quite nice. Print-error and print-finished notifications. A webcam view when I'm not near my printer. The ability to pause it remotely if something looks off.

I use LAN mode, plus a Home Assistant plugin to restore the lost functionality. The default webcam is pretty bad, so I've also mounted a better one to my printer for a live video view at more than 1fps.

The main thing I've lost by using LAN mode is printing from my phone? I think there are ways to do that. But OrcaSlicer has so many options that are frequently worth adjusting over whatever presets other creators used that it's a strictly better experience than printing from mobile anyway.

I think there is some niche "cancel printing of one specific object" feature that I don't know how to use without the mobile app. If you are printing many objects at once and one fails, you can cancel a specific part/object using the mobile app. Not sure how to do that with OrcaSlicer + LAN mode, or if it's even possible. (Edit: OrcaSlicer doesn't support it. The Home Assistant plugin might? Bambu Studio in LAN mode doesn't support it either; it requires the mobile app.)


On iOS at least, there's a third-party alternative mobile app for LAN Mode here:

https://forum.bambulab.com/t/bambu-companion-for-iphone-no-c...

Tailscale makes remote access pretty easy for this and other related apps.

I'm unaware of an Android version, but since it's mostly MQTT, FTP, and RTSP, I assume that's just a good vibe coding session to implement.


I’ve been meaning to look into what the network plugin does more.

I see lots of repeated blocked requests in my DNS logs to a Bambu Lab domain whenever I have OrcaSlicer open. I assume there are so many because they're getting blocked and retried.

I just print over LAN though, not using the Bambu servers (or the fork mentioned in the OP). It works flawlessly.


Richard Stallman

It is relatively easy to configure. Just install Linux after Windows, and Linux will generally set up a boot-selection screen for you automatically. The installer should detect Windows and even shrink the partitions for you.

You can install a prettier-looking boot-selection menu like rEFInd, but the default works just as well, and I think the mainstream distros all set up Secure Boot too. On my PC it was very easy; on my (8-year-old) laptop I had to add some Secure Boot keys, and the BIOS was very confusing, using terms that didn't seem to match what they should have been.

My setup has worked almost entirely flawlessly and survived updates from both OSes. The only issue has been "larger" Windows feature updates putting Windows back as the first OS in the list, but that happens maybe once or twice a year? And it's a quick BIOS change to fix the order.


I've never had this experience dual-booting, neither with UEFI nor GRUB. I've been using Linux for nearly 15 years, 13 of those dual-booting, across dozens of systems from laptops to desktops. Windows would always purge the boot entries, and I'd have to manually fix booting, constantly. This happened with Ubuntu, Arch Linux, and recently NixOS, and through all the Windows editions up to 11. I had to install Windows for a LAN party recently, and lo and behold, on the second day NixOS was gone from the boot list and unbootable. Nothing of value is lost when it gets purged, but it's a damn annoying tax to pay just to be able to play video games.

Luckily, gaming on Linux now works well enough that the only reason to use Windows is gone. Well, apart from some online games played during LAN parties.


It’s a common thing for malware. But people are going to be more likely to fall for it when mainstream sites ask you to complete weird tasks with your phone to verify your identity.

I'm already sick and tired of seeing Cloudflare's "making sure you aren't a bot" checkbox everywhere. Sometimes it locks me out entirely and decides I don't get to view pages.

I see reCAPTCHA less frequently, but it's much more annoying, with all the clicking on crosswalks, or buses, or whatever. I am not looking forward to a web where Google can lock me out of not only my email but also large sections of the previously public internet. Occasionally Google decides I don't get to do searches, and that's not too much of an inconvenience; there are other search engines.


But what's the alternative? Sites need a way to prevent bots overwhelming them, and there's no perfect way to distinguish real users from bots.

One alternative is to make simple, efficient, and where appropriate even static sites that can scale to meet the demand.

The HIBP hashes distribution is a great example.


That doesn't really help if the same Huawei bot keeps re-requesting a bunch of 600 KiB JPEGs from 120 rotating IP addresses with random crap appended to the URL, like what happened to one of my servers. Efficiency doesn't really matter if you're getting hammered by bots.

I ended up aggressively IP blocking all of China, Singapore, and a few other East-Asian countries once I noticed that blocking server IP addresses just made the botnet switch to residential IPs. I didn't switch over to Cloudflare, but now a couple billion people can't read my website, which is arguably worse (but cheaper).

Also, a handful of people seeing an annoying checkbox is hardly a reason to re-architect an entire website. I am as opposed to Cloudflare taking over the internet as any sane person, but the usability story isn't really an argument for that kind of time investment.

The alternative to Cloudflare isn't some magical system that works for everyone but bots, it's hard-blocking IP ranges on the network level for anyone who doesn't fit the "normal" user profile.


Try using Anubis. It uses a PoW challenge so that scraping websites doesn't make economic sense.
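(For the unfamiliar, the idea is a hashcash-style challenge: the client must find a nonce such that hashing challenge+nonce yields some number of leading zero bits. Finding the nonce costs many hashes; verifying it costs one. A toy sketch of the scheme, not Anubis's actual implementation:)

```python
import hashlib
import itertools

def solve(challenge: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so sha256(challenge + nonce) starts with
    `difficulty_bits` zero bits (~2**difficulty_bits hashes on average)."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """A single hash for the server to check the submitted nonce."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

nonce = solve("example-challenge", 16)  # ~65k hashes: trivial for one visitor
assert verify("example-challenge", nonce, 16)
```

One page view costs a human a fraction of a second; millions of scraped pages cost a crawler real CPU money.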

Anubis is trivially bypassed by anyone that cares to bypass it. All it does is inconvenience real users with niche/older/extended browsers or those who take basic precautions against tracking and malware.

Anubis won't work now that scrapers just allocate more CPU time to beat Anubis challenges. The default configuration also permits all bots, only catching bots pretending to be browsers.

“Demand” has very little to do with any of the problems bots cause on the internet today.

You're right, we need big tech to protect us from the problems big tech created.

In the olden 20th century, we had a term for that...


You know that protection racket where the mobster comes to my corner store and says if I don't pay him, he'll come back later and rough me up? This is a worse deal than that.

Better turn on that 'free' Cloudflare 'bot' protection. Would be a shame if our, ahem, I mean, those botnets ddos'ed your site.

This is the modern version of that.

The alternative would be tarpit traps that only a bot would "see" and interact with, and thus be caught by. Default to annoying machines, not people.
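(A common way to implement that is a honeypot link: hidden from humans via CSS and disallowed in robots.txt, so anything that fetches it self-identifies as a crawler and gets banned. A minimal sketch of the ban logic; the path names are illustrative, not any particular product:)

```python
# Hidden in the page markup, invisible to humans, disallowed in robots.txt:
#   <a href="/trap" style="display:none" rel="nofollow">do not follow</a>
TRAP_PATHS = {"/trap"}
banned_ips: set[str] = set()

def handle_request(ip: str, path: str) -> int:
    """Return an HTTP status code: ban any client that touches a trap path."""
    if ip in banned_ips:
        return 403
    if path in TRAP_PATHS:
        banned_ips.add(ip)  # only a bot could have "seen" the hidden link
        return 403
    return 200
```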

Your idea works for generic crawlers.

That doesn't work for targeted bots. A major benefit of device attestation is stopping the hordes of custom bot creators who try all sorts of ways to make a buck off of your platform: SMS toll fraud, credit card testing, ad fraud, account takeovers, stolen card laundering, gift card laundering, botting for platform/ecosystem benefits; the list just keeps going.

Some apps, such as Okta, banking, and others, already check platform verification. Websites currently can't, until device attestation arrives.

Personally, I hate the concept, but I also hate spending a large amount of time fighting mal-actors on my platform in a completely unbalanced fight. There are tons of them, and they have all the profit incentive. There's a few of us, we only take losses. They can lie all they want, we can't really trust any facts except kinda the credit card and the device attestation.

Like everything, it's a shitty compromise, but, as a platform runner, if I can leverage Google's signal and cut 95% of my malicious botting users, guess what I'm going to do.


> A major benefit of device attestation is to stop the hordes of custom bot creators

Attestation is extremely ineffective at preventing this because it requires attackers be unable to compromise their own devices, even when they have permanent physical access to the hardware and can choose which model to buy and get devices known to be vulnerable.

For example, CVE-2026-31431 is from only a week ago. It's a major local privilege escalation vulnerability. If you can run unprivileged code you get root. How many people have Android phones that can pass attestation but will never see the patch because the OEM has already abandoned updating them? Tens of millions, hundreds of millions?

Attackers can trivially get root on a device that passes attestation. Many devices even have vulnerabilities that allow the private keys to be extracted.

The main thing attestation actually does is beset honest users who just want to use their non-Android/iOS device without getting a million captchas, because they chose the device they wanted to use as a real human person instead of doing as the attackers do and choosing a device for the purpose of defeating the attestation.

And it's easy to confuse this with real effectiveness because whenever you roll out any security change, the attacks may subside for a short period of time as the attackers adapt to it. But that's why it makes sense to avoid things that screw innocent people or entrench monopolies -- while the temporary effectiveness wears off, the screwing becomes permanent. Meanwhile spending the same resources on any other method of shuffling things around to make them adapt will give you the same temporary effectiveness without hurting your legitimate users.


s/stop/reduce/

I don't consider it a panacea.

People with rooted Android phones are a drop in the bucket compared to people running botnets written in ordinary programming languages. I'd be super happy if I could force people to use low-end rooted Android phones for botting. It'd massively decrease the problem versus an EC2 instance running at full tilt.

Getting and managing a fleet of rooted phones is not a trivial task.


But what's the alternative to shops strip-searching you every time you want to buy something? Shops need a way to prevent looters overwhelming them, and there's no perfect way to distinguish real shoppers from looters.

One solution is to leave a deposit worth more than anything you could loot. What that means in the computing world is those silly browser-based crypto-solvers.

What are "bots"?

If I use Claude to gather and summarize information for me, is that a "bot"? Because I recently hit that wall and it wasn't great. Turns out in our quest to fight "bots" we also force humans to do the manual labor of copy/pasting information.

Why would bots "overwhelm" a site is another discussion — I find it really hard to create a website that would be "overwhelmed" by traffic these days, computers are stupidly fast.


> Why would bots "overwhelm" a site is another discussion — I find it really hard to create a website that would be "overwhelmed" by traffic these days, computers are stupidly fast.

Are the Cloudflare walls really about reducing load? I thought it's because bots are not profitable: they don't click on ads, don't buy, etc.


Do you think the introduction of Anubis on a lot of open source websites was a coincidence? The AI companies' crawling bots don't play by the regular crawling rules, aren't good citizens, and are causing a lot of issues. If your Claude session uses the same user agent as their data-crawling bot (most of the time a site will just check for "claude" in the user agent), then yes, you will be classified as a bot as well.

mCaptcha, ALTCHA, Cap, Friendly Captcha, Private Captcha, Procaptcha, Anubis... there are literally dozens of open source alternatives that aren't feeding the Do Be Evil company, not to mention all of the commercial alternatives, if for whatever reason you do feel like paying for a service that costs nothing to offer.

Get off it. Fraud detection is nontrivial and requires ongoing effort. It's reasonable for people to be compensated for that.

CAPTCHAs are not fraud detection and not an ongoing effort

PoW challenges that make bots not viable.

You mean à la Anubis? But people also seem unhappy with that; and in any case, Anubis is designed to stop AI crawlers. It doesn't work against a targeted crawler or a targeted DoS attack.

People are unhappy with Anubis because it's not designed to stop "AI crawlers", despite being marketed as such. It's designed to stop DDoS attacks at layer 7. Anyone who pays the computing fee gets to pass, regardless of species.

What's your argument?

Maybe AI companies should have invested some of those billions of dollars in safe and equitable ways of rolling out their new surveillance machines. Oh right, that was never the point, and this only serves to further it. Got it.

I think they'd be OK w/o the surveillance machine part of it, but they have never seemed to care about anything besides advancement of the tech or its side projects.

I can imagine a world where they were fighting for displaced workers, for Altman/Elon-suggested UBI/universal "high" income plans, and where they'd compensated those in the training set, and cut deals with publishers & content creators instead of scraping anything they could get their hands on. Would they be unpopular?


yeah. webpages now load so slowly just because i have to wait for the captcha

Reminder that any company which has a legal obligation toward you (GDPR requests, refunds, filing a complaint, etc.) can be contacted directly and forced to fulfill it manually if you cannot use their web interface due to being blocked by Cloudflare and other captchas.

I want to enjoy the movie theater experience. It should be a better screen than the one in my home. It should be a better audio setup, with full surround sound. It should be great, a premium experience.

The last few movies I've seen in theaters have not been that. Two of the last three had audio-mixing problems, and dialogue was inaudible in some scenes. (I heard this got fixed later for one of the movies.) In all of them, I could hear bass from the adjacent theaters in some scenes. At the last two movies I went to see, someone in the audience brought an intermittently crying baby.

I'm done with watching movies in theaters. It's a better experience to watch at home, with headphones, a blanket, and the ability to pause for bathroom breaks.


Sounds like you should be visiting Alamo Drafthouse. They take these things extremely seriously and are for the real fans. Here is their ad: https://www.youtube.com/watch?v=1L3eeC2lJZs

Unfortunately, since they filed for bankruptcy a few years back, they have had to cut costs, and their system for ordering food was replaced (from pen and paper collected by an usher who quietly brings you your food, to a QR code with... a cellphone). People have recently been concerned that this has reduced their legendary quality. They still take audio and picture quality very seriously, in my experience.

Also, where are you located? LA and NYC have legendary theaters that are truly a special treat. It's harder to replicate that in other states, but some are still trying (e.g., NJ, the actual birthplace of the American film industry, has a few excellent theaters scattered throughout that don't tolerate poor quality or talkers).

If your story is from AMC theaters, just know that you are visiting the McDonald's of movie theaters.


Besides the ordering experience, I also feel like the food at the Alamo has gone downhill. It's not bad or anything, but it used to be legit good. Now it's just alright.

I don't even care about movie theaters; I just miss drive-ins. The last drive-in within 450 km of me closed during COVID.

Now, when you click a link on GitHub, the current page doesn't change. I want to look at the linked issue on its own page. That doesn't happen anymore.

The page I wanted to go to pops up in a small overlay on the right-hand side. The body text and content I wanted to view are in a new, weird location, with the old page still behind it in the normal spot. It's very unintuitive.

Thankfully, either the behavior has been reverted or I'm no longer in the A/B test; I can't get the popup to happen anymore. (Edit: nvm, the behavior varies depending on the repo or something? It acts completely differently on different pages; sometimes links are normal and sometimes they open in a popup. Extremely annoying.)


Also it breaks copying links. If I want to link to an issue I copy the URL. But now there's two different issues open at the same time, which one am I linking to? Original? Popup? Both?


Right, not saying it's not annoying, but "every intuition about using a browser" is a bit over the top. A link can open a dialog, has been happening for decades.


I don't think carriers have the ability to install apps on iOS. I've always thought it's weird that they can do that on Android.


They absolutely do. Some countries mandate apps that cannot be removed. While Apple doesn't allow carriers to install mandatory bloatware apps, it allows country-specific "national security" apps and background processes that don't have app icons. It's been this way almost forever, in pretty much every country, for just about every mobile device; it's just that Apple has been a bit better for users.

https://www.wired.com/story/apple-russia-iphone-apps-law/

https://9to5mac.com/2025/12/03/after-apple-refusal-indian-go...


Those articles don't seem to support what you're saying? Russia's apps aren't preinstalled, they're just offered as suggestions, and India never got their app installed. I certainly don't see anything that mentions background processes in either article either.


What? Sounds like a US thing?

