It's not a blog - it's a community website, so significantly more complex (running competitions, uploading tracks, moderation, etc, etc, etc).
I don't understand the car firmware non-sequitur. I've already seen productivity gains in this domain. Why do I need to also prove them in another one? They are already demonstrated. It's like me saying "I can get places faster in a car than on foot" and you're saying "yeah but you can't get to the moon."
Kyla Scanlon is speaking from personal experience. It can be a great school if you put in the effort. Will the market reward that effort with a job? Maybe.
In my book a great school would likely have a low-ish acceptance rate. And so they could (even though they may not be happy about it) absorb some amount of declining applications by adjusting their acceptance bar. WKU's acceptance rate is like 95%. They're already taking ~everyone who applies. I question whether school quality truly plays no role at an institution which struggles to find a student they wouldn't admit.
A high acceptance rate isn't a problem in itself. It reflects a different set of operational choices.
We've decided that a four-year degree is today's high-school diploma, so that means that you need to produce a lot of them. But you don't need a top tier research university to produce a stream of reasonably competent first-year teachers, engineers, feeders into medical and law programs, etc.
That can still be an excellent school. It doesn't have to deliver moonshots; it has to serve its purpose.
Just because a school is highly selective doesn't mean that it produces a quality educational product or is serving the needs of its community. How many top schools are "selective" because they're mostly places for the children of the 1% to mingle before taking over Daddy's business? Are they going to be pushovers for grade inflation after asking for the cost of a condo for a semester's tuition?
Some people, when encountering a queue, will take the position at the end of the queue. They trust there wouldn’t be a queue if the reward weren’t worth the wait.
Why is rejecting prospective students something you expect a great school to do?
I would expect a great school to be appealing to potential students, and therefore attract a lot of them. I would also expect a great school to have high academic standards that not all applicants meet.
I work in microgrids and I completely agree. I use Claude Code every day. There's so much we don't know and so much that an LLM is not going to help you with.
It depends on the job. T-shirts yes. I enjoy building microgrids. There are many unsolved challenges. When the robots start doing it maybe it’ll be boring. That’s a long way off.
I don’t see a compelling reason for Apple to jump into the AI game. The MacBook Pro M4 is a dream to work with, and it works great with Claude Code. Creating quality products is a niche market, but that strategy still has merit.
1) AI threatens to take over how you use your phone; it threatens to reduce apps to an API that it will use on your behalf, so you don't use the apps yourself
2) By doing that it commoditizes the hardware, because the software experience is virtually identical across platforms: you say "make a dinner reservation" and it doesn't matter what calendar you use, what restaurant app, etc. (see the sketch after this list)
3) Apple is no longer assured of being able to gatekeep or ban these things, so if they aren't producing the most useful or most entrenched assistant, someone else could become people's primary interface to their iPhones
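A minimal sketch of that commoditization mechanism, assuming a generic tool-calling setup (every name here is hypothetical, not any vendor's actual API): once apps are just interchangeable handlers behind a capability, the assistant, not the app, owns the relationship with the user.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., str]

# Two different restaurant apps exposing the same capability.
def opentable_reserve(restaurant: str, time: str) -> str:
    return f"OpenTable: booked {restaurant} at {time}"

def resy_reserve(restaurant: str, time: str) -> str:
    return f"Resy: booked {restaurant} at {time}"

TOOLS = {
    "reserve_table": Tool(
        name="reserve_table",
        description="Book a restaurant table",
        # Swap this to resy_reserve and the user never notices:
        # the app behind the capability has been commoditized.
        handler=opentable_reserve,
    ),
}

def assistant(request: str) -> str:
    # A real assistant would pick the tool and arguments via the model;
    # hard-coded here to keep the sketch self-contained.
    return TOOLS["reserve_table"].handler("Some Bistro", "19:00")

print(assistant("make a dinner reservation for 7pm"))
```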
There's a lot of parallel with "super apps":
> Apple’s fear of super apps is based on first-hand experience with enormously popular super apps in Asia. Apple does not want U.S. companies and U.S. users to benefit from similar innovations. For example, in a Board of Directors presentation, Apple highlighted the “[u]ndifferentiated user experience on [a] super platform” as a “major headwind” to growing iPhone sales in countries with popular super apps due to the “[l]ow stickiness” and “[l]ow switching cost.” For the same reasons, a super app created by a U.S. company would pose a similar threat to Apple’s smartphone dominance in the United States. Apple noted as a risk in 2017 that a potential super app created by a specific U.S. company would “replace[ ] usage of native OS and apps resulting in commoditization of smartphone hardware.”
> AI threatens to take over how you use your phone; it threatens to reduce apps to an API that it will use on your behalf, so you don't use the apps yourself
It threatens to do that, sure. But the reality will likely be significantly less dramatic. The most likely outcome is that AI, like every other hyped technology, finds a niche and everything else carries on as before.
The number one reason is that Siri is an embarrassment, and that has become so much clearer now with ChatGPT and Claude next to it. Everyone is simply thinking: why can't we have that? Why do we have to talk to a low-quality agent that can't answer basic questions, while I can walk around with my AirPods in having a full conversation with ChatGPT?
I understand it is not that easy. But Apple has been neglecting Siri, or has failed to improve it, for so many years. And now the perception is that there is just no excuse anymore.
Maybe it's due to cost? I'm curious whether Apple fine-tuned the Siri LLM to run lean in order to save money, in the same vein as OpenAI losing money on even its paid queries. It has to break even somewhere, unless hydro becomes miraculously free.
AI is being integrated with everything. Web, applications, cloud, mobile - everything. Any company that neglects this is going to be forced to license and dealmake.
You're joking, but many of the codebases I've seen that were produced by or with AI support are not maintainable by any sane human. The further you go with AI, the less you can turn back.
I completely agree with the author's comment that code review is half-hearted and mostly broken. With agents, the bottleneck is really in reading code, not writing it. If everyone is just half-heartedly reviewing code, or using review as a soapbox for their individual preferences, the agent workflow will completely fall apart, as agents can easily introduce serious security issues or performance hits.
Let's be honest: many of those can't be found by just reading the code. You have to get your hands dirty and manually debug or test the assumptions.
What’s not clear to me is how agents/AI written code solves the “half hearted review” problem.
People don’t like to do code reviews because it sucks. It’s tedious and boring.
I genuinely hope that we’re not giving up the fun parts of software, writing code, and in exchange getting a mountain of code to read and review instead.
Yeah, honestly, what's currently missing from the marketplace is a better way to read all of the code, the diffs, etc. that the LLMs output. How do you review it properly and gain an understanding of the codebase when you're the person writing only a very, very small part of it?
Or even to make sure that the humans left in the project actually read the code instead of just swiping next.
No, it can't. That partially stems from the garbage the models were trained on.
Anecdata, but since we started having our devs heavily use agents, we've seen a resurgence of mostly-dead vulnerability classes such as RCEs (one with a CVE from 2019, for example), as well as a plethora of injection issues.
When asked how these made it in, the devs respond with "I asked the LLM and it said it was secure. I even typed MAKE IT SECURE!"
If you don't understand something sufficiently, you don't know enough to call BS. In cases like this, it doesn't matter how many times the agent iterates.
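To give the injection class a concrete shape, here's a minimal, hypothetical sketch (not from our codebase) of the pattern that keeps coming back, next to the boring fix:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Injection: the input is spliced into the SQL text, so a value
    # like "x' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```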
To add to this: I've never been gaslit more convincingly than by an LLM, ever. The arguments they make look so convincing, and they can even naturally address specific questions and counter-arguments while being completely wrong. This is particularly bad with security and crypto, which generally can't be verified through testing (testing only proves the presence of function, not the absence of flaws).
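A hypothetical illustration of that last point, assuming Python and the `cryptography` package: this helper passes its roundtrip test, yet the fixed nonce silently breaks AES-GCM's guarantees once a second message is encrypted under the same key.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = AESGCM.generate_key(bit_length=128)
FIXED_NONCE = b"\x00" * 12  # Bug: a nonce must be unique per message.

def encrypt(plaintext: bytes) -> bytes:
    # Nonce reuse with AES-GCM leaks the XOR of plaintexts and enables
    # forgeries, yet every call here succeeds without complaint.
    return AESGCM(KEY).encrypt(FIXED_NONCE, plaintext, None)

def decrypt(ciphertext: bytes) -> bytes:
    return AESGCM(KEY).decrypt(FIXED_NONCE, ciphertext, None)

# The test passes: it proves presence of function, not absence of flaws.
assert decrypt(encrypt(b"hello")) == b"hello"
```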
I hope he writes a personal essay about the experience after he leaves Microsoft. Not that he will leave anytime soon, but the first-hand accounts of how they talk about these systems internally are going to be even more entertaining than the wtf PRs.
This comment thread is incredible. It's like fanfiction of a real person. Of course this engineer I respect shares my opinion. Not only that, he's obviously going to quit because of this. And then he'll write a blog post I'll get to enjoy.
Of course that is what he says publicly. Can you imagine him saying anything different in this already very heated PR comment section? It would be quoted in a news headline the next second.
Hahaha. 1000% this. Also, the first example from the linked video: a "not vibe coded, promise" example of an ASCII Space Invaders clone... Of all the possible examples, something with a bunch of code in the training data going back to the '80s is the best representation of exactly what LLM coding is capable of "in 8 minutes".