Hacker News | daemin's comments

I just thought I'd mention that Khalil Estell is doing some work regarding exceptions in embedded and bare metal programming. In the first of his talks about this topic (https://www.youtube.com/watch?v=bY2FlayomlE) he mentions that a lot of the bloat in exception handling is including printf in order to print out the message when terminating the program. For those interested he has another presentation on this topic https://www.youtube.com/watch?v=wNPfs8aQ4oo

Didn’t watch the video, but as opposed to what? Printf is definitely the most efficient way to retrieve the info needed to begin initial triage, especially in “hacking” or bring-up, where a formal process isn’t defined.

There are processors and platforms where including the standard library feature to print text to standard output significantly increases the size of the binary. In such cases just enabling exceptions also pulls in this feature, solely to print a termination message, which never gets displayed or read because the device doesn't have a way to emit standard output.
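A minimal sketch of the usual workaround, assuming a GCC/Clang-style toolchain (`minimal_terminate` is my name for illustration, not from the talks; whether the formatted-output machinery is actually dropped depends on the linker and the standard library build):

```cpp
#include <cstdlib>
#include <exception>

// On many toolchains the default std::terminate handler formats and
// prints a diagnostic (the exception's type and message), which is what
// drags printf-style machinery into the binary. Installing a minimal
// handler is one way embedded builds try to avoid that pull-in.
[[noreturn]] void minimal_terminate() noexcept {
    std::abort();  // no message formatting, no stdout dependency
}

// Install during static initialization so it is active before main().
static const std::terminate_handler previous_handler =
    std::set_terminate(minimal_terminate);
```

This only addresses the termination-message path; the unwinding tables themselves still cost space.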

One main thing that this brings to mind is whether an LLM can ever actually create a clean-room implementation of a piece of open source software, given that there is a near certainty that the software was used in its training data. Therefore it has seen it and remembered it, and could, if appropriately prompted, recreate the code verbatim.

This can also apply to people: either they have seen the code previously and are therefore ineligible to write the code for a clean-room implementation, or it gets murky when the same person writes the same code twice from their own knowledge, as in the Oracle Java case.

Coming from a professional programming perspective I can totally see the desire to have more libraries written under permissive licences like BSD or MIT, as they allow someone like me to include them in commercial closed-source products without needing to open-source the entire codebase.

However I find myself agreeing with the article in so far as this LLM generated implementation is breaking the social contract for a GPL/LGPL based library. The author could have easily implemented the new version as a separate project and there would not have been an outcry, but because they are replacing the GPL version with this new one it feels scummy to say the least.


There was a video by MKBHD where he said that every new phone manufacturer starts off being the hero, doing something different and consumer/user friendly, before, with growth and competition, they evolve into just another mass-market phone manufacturer. Realistically this is because they wouldn't be able to survive without being able to make and sell mass-market phones. This already happened to OnePlus half a decade ago when they merged with Oppo, and it's arguably happened with ASUS as well when they cancelled the small form factor phone a couple of years ago.


It is absolutely true that software can be finished, it's just that software appears to be dead if it hasn't had any work done on it for years. You don't need to keep adding features and changing the software ad infinitum.

Just like with your building analogy, and with the other car analogies presented here, software does need some maintenance every now and again to keep it up to date: security fixes, compiling for a newer platform, integrating fixes from dependencies, etc. And yes, while buildings may be finished, they still require regular maintenance if they are used.


My argument is that the maintenance overhead of a finished product (or car, or building) should require much less effort than what it takes to build it - otherwise you should seek a refund from the original manufacturer(s).


That's absolutely not true in pretty much any mid-scale software. You can have a team of 5 people make the core of an app, then need 50 people to help with support for customers. Scaling up is never cheap, but software scalability is really low despite that.

That's not even true with a car. I'm about to spend 3000 dollars on a big repair for a car I bought 9 years ago used @ 3500. Even if you adjust for inflation we're still talking about 70% of the car's worth just to keep it running. As for a refund, the blue book value tops at 850 dollars.


> I'm about to spend 3000 dollars on a big repair for a car I bought 9 years ago used @ 3500. Even if you adjust for inflation we're still talking about 70% of the car's worth just to keep it running.

That's one way to run that ROI, sure, but is it correct?

1. The original $3.5k you spent is a sunk cost; you should ignore it, so your total cost of getting a running car is only $3k.

2. Even if you don't ignore it, your total bill to get a running car is $7.5k

In either of the above situations, you should be comparing the cost to get a running car by fixing your existing car (either $3k or $7.5k) to the cost of getting a running car by selling it as-is (so, perhaps +$500 as a parts donor -$X for a replacement running car).

Regardless of which calculus you are using, it's still going to come out cheaper to fix the existing car.

What the car is "worth" (however you define it) is irrelevant to the calculus.
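For concreteness, the two comparisons above can be sketched with the numbers from the thread (the +$500 parts value and the replacement price X are the comment's own hypotheticals):

```python
# Numbers from the thread: $3,000 repair, $3,500 original purchase,
# roughly +$500 selling the broken car as-is for parts.
repair_cost = 3000
sunk_purchase = 3500   # sunk cost: ignore it when deciding what to do next
parts_value = 500

# Option A: fix the existing car.
cost_fix = repair_cost                       # $3,000

# Option B: sell as-is for parts and buy a replacement running car at X.
# Net cost is X - parts_value, so fixing wins whenever
# X > repair_cost + parts_value.
breakeven_replacement = repair_cost + parts_value

print(cost_fix, breakeven_replacement)  # prints: 3000 3500
```

So unless a comparable running car can be had for under $3,500, the repair is the cheaper path, regardless of the $850 book value.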


That actually depends on what kind of application you are building and maintaining.

From the sounds of it I assume you're talking about developing and maintaining a SaaS application, where there is no real maintenance of it; instead what you end up doing is developing it further to support larger data sets and more people. That is of course assuming that the software is successful and usage is growing.

For traditional desktop software you can declare it finished, and then maintenance is minimal and limited to only critical bugs, so you'd have a team of 5 develop it and then 1 person or less maintain it.


> You don't need to keep adding features and changing the software ad infinitum.

I agree. I never claimed anyone needs to do this.


It's because these things aren't priorities for management, so they aren't priorities for the engineers. The latest fads like AI are the priorities, and therefore they get integrated everywhere.


That's a copout, in my opinion. Most quality decisions are established by team culture and experience. If a team is full of webdevs, no amount of management investment in performance will make a good app. Conversely, a team of high-performance members will be able to deliver quality apps despite management's input.


It still depends on management to determine the culture and what requirements and limitations are most important.

I mean, take a look at what happened to Boeing: there were people who cared about quality down in the trenches, but they were overridden by the people who didn't care about quality, just about shoving items out the door.


Good example.


Visual Studio Professional did cost about $500 back in the day, although you ended up getting a perpetual licence for the software and some updates. These days they expect you to have a subscription as with all other business software.


The question is then do good C++ developers want to go work at Microsoft? With its mandatory use of AI and desire to rewrite everything in Rust?


The previous version would have been written in C rather than C++, since, as someone else has said, it's a very basic Windows application, more a wrapper around the edit control than anything complicated.

These days it would have to be written in some other language that has those Windows Runtime bindings available for it. It could be C++, but if I were to guess I'd say it's written in TypeScript and compiled to a native or .NET binary.


You forgot the .NET renaming in the early 2000s.


Which is the exact mistake they are repeating right now. Force one brand on everything, even if it has nothing to do with it.


My biggest design peeve in the examples posted is the inconsistent indentation of each section of the menu: if any single item in a section has an icon, the whole section gets indented, but if none do, it doesn't, and seeing the two side by side is jarring. This feels especially inconsistent because if a menu item has a check mark, all the items in the whole menu are indented. I would have thought Apple had the taste to keep things consistent across the whole menu; as it stands, it seems sloppy.


I imagine Steve Jobs would've asked to see whoever designed those menus, picked up their laptop and thrown it out the window...


That would indeed be the myth. The reality is what you see on the screen.


Hard for Steve Jobs to have done this for the changes of the last ~14 years...


but trivial for all the similar changes before that while the laptops were still flying out


You’ve never seen the movie Poltergeist?

