imo, the OP has bad AI-assisted takes on almost every single "critical question", which makes me doubt whether he has breadth of experience in the craft. For example:
> Narrow specialists risk finding their niche automated or obsolete
Exactly the opposite. Those with expertise will oversee the tool. Those without expertise will take orders from it.
> Universities may struggle to keep up with an industry that changes every few months
Those who know the theory of the craft will oversee the machine. Those who don't will take orders from it. Universities will continue to teach the theory of the discipline.
I think this is a fair take (despite the characteristic HN negativity/contrarianism) that succinctly summarizes a point I was finding hard to articulate while reading the article.
My similar (verbose) take is that seniors will often be able to wield LLMs productively: a good-faith LLM attempt will be the first step, but will frequently be discarded when it fails to produce the intended result. (Personally I find myself swearing at LLMs when they produce trite garbage: output that gets `gco .`-ed immediately, i.e. thrown away with `git checkout .`, or LLM MRs/PRs that get closed in favor of manually accomplishing the prompted task.)
Conversely, juniors will often wield LLMs counterproductively, unknowingly accepting tech debt that neither the junior nor the LLM will be able to correct past a given complexity.
I am not sure why the OP is painting it as an "us-vs-them" question: pro-AI or anti-AI? AI is a tool. Use it if it helps.
I would draw an analogy here between building software and building a home.
When building a home we have a user providing the requirements, the architect/structural engineer providing the blueprint to satisfy the reqs, the civil engineer overseeing the construction, and the mason laying the bricks. Some projects may have a project manager coordinating these activities.
Building software is similar in many aspects to building a structure. If developers think of themselves as masons, they are limiting their perspective. If AI can help lay the bricks, use it! If it can help with the blueprint or the design, use it. It is a fantastic tool in the tool belt of the profession. I think of it as a power tool and want to keep its batteries charged so I can use it at any time.
> This substantially reduces the incentive for the creation of new IP
And as a result of this, the models will start consuming their own output for training. This will create new incentives to promote human-generated code.
> I can confirm that they are completely useless for real programming
Can you elaborate on "real programming"?
I am assuming you mean solving hard problems, since that is how the value of the work is measured. Easy problems have boilerplate solutions and have been solved numerous times in the past; LLMs excel here.
Hard problems require intricately woven layers of logic and abstraction, and LLMs still struggle with them since they do not have causal models. The value, however, lies in solving these kinds of problems, since the easy ones are assumed to be solved already.
I saw this in a past hype cycle. What happens is that it becomes a "performative" art in an echo chamber of startups, startup founders, and VCs; performative meaning doing things one thinks others want to see, rather than things that make sense.
Management is quizzing their tech teams on injecting agents into their workflows, whatever the f that means. Some of these big companies will acquire startups in the space so they are not left behind on the hype train and can claim to have agentic talent on their teams.
Those of us who have seen this movie play out know the ending.
> They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling
Maybe this will force humans to raise their game and start to exercise discrimination. Maybe education will change to emphasize this more. The ability to discern sense from pleasing rhetoric has always been a problem; every politician and advertiser takes advantage of it. Reams of philosophy have been written on this problem.
> there's plenty of other training data in the world.
Not if most of it is machine generated. The machine would start eating its own shit. The nutrition it gets is from human-generated content.
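To make that feedback loop concrete, here is a minimal toy sketch (a hypothetical Gaussian setup, not a claim about any real training pipeline): each generation is fit only to samples drawn from the previous generation's fit, and the diversity of the original "human" distribution decays.

```python
import numpy as np

# Toy illustration of a model "eating its own output": every generation
# is trained only on samples produced by the previous generation's model.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                           # generation 0: human-generated data
for gen in range(1, 31):
    samples = rng.normal(mu, sigma, size=10)   # the previous model's output
    mu, sigma = samples.mean(), samples.std()  # refit on machine-made data only
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
# sigma tends to drift toward zero across generations: the variance that
# came from human data is progressively lost (a toy form of model collapse).
```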
> I don't understand the ethical framework for this decision at all.
The question is not one of ethics but one of incentives. People producing open source are incentivized in a certain way, and it is abhorrent to them when that framework is violated. There needs to be a new license that explicitly forbids use for AI training; that may encourage folks to continue to contribute.
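For illustration, such a clause might look something like the header below (the license identifier and wording are invented for this sketch, not an existing license):

```python
# SPDX-License-Identifier: LicenseRef-No-AI-Training  (hypothetical identifier)
#
# Permission is hereby granted to use, copy, modify, and distribute this
# software, subject to the following restriction: neither this software nor
# its source code may be used, in whole or in part, as training data for
# machine-learning models.
```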
Saying people shouldn't create open source code because AI will learn from it is like saying people shouldn't create art because AI will learn from it.
In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.
> In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.
Well maybe the AI parasites should have thought of that.