
I'm incredibly impressed that you managed to write that whole message without a single use of the most frequently used letter, except in your quotations.

Such omission is a hobby of many WWW folk. I can, in fact, think back to finding a community on R*ddit known as "AVoid5", which had this trial as its main point.

Down with that foul fifth glyph! Down, I say!


Bet they asked an AI to make the bit work /s

:-D

I did ask G'mini for synonyms. And to do a cursory count of e's in my post. Just as a 2nd opinion. It found only glyphs with quotation marks around it. It graciously put forward a proxy for that: "the fifth letter".

It's not oft that you run into such alluring confirmation of your point.


I'm having my thumbs-up back >:(

My first post took around 6 min & a dictionary. This post took 3. It's a quick skill.

No LLMs. Ctrl+f shows you all your 'e's without switching away from this tab. (And why count it? How many is not important, you can simply look if any occur and that's it)
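
A tiny script can do this for you too. What follows is just a rough draft in Python (draft.txt is a path I pick at random):

    # Rough check: find all spots at which that fifth glyph shows up.
    with open("draft.txt", encoding="utf-8") as f:
        body = f.read()

    spots = [i for i, ch in enumerate(body) if ch.lower() == "e"]
    print(f"found {len(spots)} hit(s)" if spots else "all good: no such glyph")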


I felt like the article had a good argument for why the AI hype will similarly be unsuccessful at eliminating developers.

> AI changes how developers work rather than eliminating the need for their judgment. The complexity remains. Someone must understand the business problem, evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve.

What is your rebuttal to this argument that leads you to the conclusion that developers do need to fear for their job security?


No previous tool was able to learn from its own mistakes (RLVR).

It might not be enough by itself, but it shows that something has changed compared with the previous 70-odd years.


LLMs don't learn from their own mistakes the way real developers and businesses do, at least not in a way that lends itself to RLVR.

Meaningful consequences of mistakes in software don't manifest as compilation errors but as business impacts, which so far are far outside the scope of what an AI-assisted coding tool can comprehend.


> business impacts, which so far are far outside the scope of what an AI-assisted coding tool can comprehend

That is, the problems are a) how to generate a training signal without formally verifiable results, b) hierarchical planning, c) credit assignment in a hierarchical planning system. Those problems are being worked on.
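
For problems with formally verifiable results, the signal is straightforward to construct. Here's a minimal sketch of an RLVR-style reward in Python, assuming the candidate code ships with a pytest suite (the test runner and directory layout are stand-ins for illustration, not something from this thread):

    import subprocess

    def verifiable_reward(candidate_dir: str) -> float:
        """Binary RLVR-style reward: 1.0 if the candidate's test suite
        passes, 0.0 otherwise. No partial credit, no human judgment."""
        try:
            result = subprocess.run(
                ["pytest", "-q"],
                cwd=candidate_dir,
                capture_output=True,
                timeout=300,
            )
        except subprocess.TimeoutExpired:
            return 0.0  # a hung run counts as a failure
        return 1.0 if result.returncode == 0 else 0.0

Problem a) is precisely what this sketch leaves out: a missed business requirement produces no failing test, so there is nothing to put where pytest sits.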

There are some preliminary research results that suggest that RL induces hierarchical reasoning in LLMs.


> evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve

I think you are basing your reasoning on the current generation of models. But if a future generation is able to do everything you've listed above, what work will be left for developers? I'm not saying that we will ever get such models, just that when they appear, they will actually displace developers rather than create more jobs for them. The business problem will be specified by business people, and even if they get it wrong it won't matter, because iteration will be quick and cheap.

> What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?

The entire argument is based on the assumption that models won't get better and will never be able to do the things you've listed! But once they become capable of those things, what work will be left for developers?


Yes, if we assume that AI can do the job of developers then tautologically it can do the job of developers.

It's not obvious at all. Some people believe that once AI can do the things I've listed, the role of developers will change rather than disappear (because past advances have always led to more jobs, not fewer).

And your entire argument is based on the possibility of it turning into a magic genie that can do anything.

We are actually already at the level of a magic genie or some sci-fi device. It obviously can't do everything, but what it can do is mind-blowing. And the basis of the argument is right: mere possibility is a really low bar to pass, and AGI is clearly possible.

Turning into a human-level intelligence, that is. If you believe that requires magic, well, that's your right.

A $3 calculator today is capable of doing arithmetic that would have required superhuman intelligence 100 years ago.

It's extremely hard to define "human-level intelligence", but I think we can all agree that the definition changes with the tools available to humans. Humans seem remarkably well suited to adapting so they can operate at the edge of what the technology of the time can do.


> that would have required superhuman intelligence 100 years ago

It required a ton of people of ordinary intelligence doing routine work (see "Computer (occupation)"). On the other hand, I don't think anyone has seriously considered replacing, say, von Neumann with a large collective of laypeople.


My argument would be that while some complexity remains, it might not require a large team of developers.

What previously needed five devs might be doable by just two or three.

In the article, he says there are no shortcuts to this part of the job. That does not seem likely to be true. Researching and thinking through the solution goes much faster with AI, compared to before, when I had to look everything up.

In some cases, agentic AI tools are already able to ask questions about architecture and edge cases themselves, and you only need to select which option you want the agent to implement.

There are shortcuts.

Then the question becomes how large the productivity boost will be and whether the idea that demand will just scale with productivity is realistic.


Trees are not static, unchanging things that pop into existence and can be forgotten about. Trees that don't get regular "updates" of adequate sunlight, water, and nutrients die. In fact, too much light or water can kill one, and soil that is not the right coarseness or acidity can hamper or prevent growth. Now add "bugs": literal bugs, diseases, and even competing plants that can eat, poison, or choke the tree.

You might be thinking of trees that are indigenous to an area. Even those compete for the resources of their area and face its plagues; they are just better adapted than trees accustomed to different environments, and they too go through the cycle of life.

I think his analogy was perfect, because this is the first time coding could resemble nature. We are just used to carefully curated human-made code, as there has never been such a thing as naturally occurring code, untouched by human interaction, before.

I could be misinterpreting the parent myself, but I didn't bat an eye at the comment because I interpreted it as something like "everything humans (or anything, really) do increases net entropy, which is harmful to some degree for the earth". I wasn't considering the moral good vs. harm that you bring up, so I had been reading the discussion from the perspective of minimizing unnecessary computing scope creep, with LLMs pointed to as a major offender. While I don't disagree with you and those who feel that statement is anti-human (another responder said this), this is what I think the parent was conveying, not that all human action is immoral to some degree.


Yes, this is what I meant. I used the word "harmful" in the context of the argument that LLMs are harmful because they consume resources (i.e., increase entropy).

But everything humans do does that. Everything increases entropy. Sometimes we find that acceptable. So when people respond to Pike by pointing out that he, too, is part of society and thus cannot have the opinion that LLMs are bad, I do not find that argument compelling, because everybody draws that line somewhere.


The OS doesn't really change which Pixel to get. Compare the newer Pixels you like (GrapheneOS drops support as models become older flagships, I think for security reasons) and get that one. IIRC, currently only Pixels are supported, because the bootloader can be unlocked without rooting the device.

https://grapheneos.org/faq#device-support


It's unrelated to the bootloader or rooting. Pixels are the only phones that meet the device requirements listed in the FAQ.

https://grapheneos.org/faq#future-devices


If the way you practice your religion is standardized by an authority across several churches, it is organized religion. For example, Catholic sects and Mormons are organized faiths: they have manuals for their priests to follow, and you can go to the same denomination elsewhere and get mostly the same experience. Some small churches localized to a city are also organized. Islamic sects in the Middle East are usually organized within the Sunni and Shia traditions. To my knowledge, Islam is not organized in the USA, even though an imam might align with a sect, because there is no authority imams report to and no strict standard for their congregants. Most Protestant churches are unorganized, and non-denominational churches are almost always unorganized because they are one-offs.

This is my informal understanding; I am not a religious scholar


Sometimes you need to take a step backward to go forward. By 'going back' to allowing third-party stores and apps, you introduce competition, and realistically one of them becomes the de facto store that is easy for both developers and users. On my Android phone I have lots of sideloaded apps from different sources, but since F-droid lets you connect lots of 'stores' to it, I only have one app store app: I have connected five repositories to F-droid. This is a huge win, because most of my apps come from F-droid, but there are those few that require different repos, as well as the few I can install without a store at all, by just installing the APK grabbed from the official site.

Apple's store could allow these features, but since that undermines their anti-competitive practices, the law has to come in and temporarily inconvenience you so that your life and everyone else's can be better. It'll just take some time, because Apple goes out of its way to conform to new regulations as minimally as possible, to the point of completely missing the point of a regulation when it can.


I'm not sure where you're reading that, but people are not free to rant in China. Many of my friends lost privileges because they were foolish enough to openly speak poorly about certain topics, and suddenly they were banned from WeChat, which is equivalent to being banned from the internet and from using money in noncash form. My sister was visiting and was dumb enough to get herself banned from far more services; she was scared she wouldn't be able to get back home. In a few places, they check your social score to make sure it isn't too low for you to be allowed in. I only spoke freely after checking an area for cameras, so I always kept all of my privileges, but a Chinese friend and I, after coming to the USA (I am not Chinese, I only went there for school), hope we never end up back in China. As for day-to-day life in the USA, I am unaffected by China.


There is no economic advantage to offering lower prices in the US medical sphere, as there is no way for a patient to know that you charge less than another provider. Most medical practices do not provide any estimate of cost until after a procedure; the exceptions are services usually not covered by insurance, such as dental and chiro, which do offer transparent, low prices because they compete in the free market.


Do you configure this in your firewall? How can I replicate this?


What firewall do you use?


It's in the "404" handler of the backend. It should be possible to write a Caddy or nginx module for it.
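
For anyone wanting to replicate the idea without writing such a module, here is a minimal sketch of a backend 404 handler that blocks repeat scanners, using Python's standard library (the route table, threshold, and in-memory ban list are made up for illustration, not my actual setup):

    from collections import Counter
    from http.server import BaseHTTPRequestHandler, HTTPServer

    KNOWN_PATHS = {"/", "/about"}  # stand-in for the real route table
    MISS_LIMIT = 5                 # 404s tolerated before an IP is blocked

    misses = Counter()             # per-IP count of requests for unknown paths
    banned = set()

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            ip = self.client_address[0]
            if ip in banned:
                self.send_error(403)
                return
            if self.path not in KNOWN_PATHS:
                # the "404" handler: count the miss, block repeat offenders
                misses[ip] += 1
                if misses[ip] >= MISS_LIMIT:
                    banned.add(ip)  # a real setup might push this to the firewall
                self.send_error(404)
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    HTTPServer(("", 8080), Handler).serve_forever()

The classic no-code alternative is to have fail2ban watch the access log for bursts of 404s and do the banning at the firewall.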

