An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.
I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: when that moment comes, AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.
I think a lot about how much we altered our environment to suit cars. They're not a perfect solution to transport, but they've been so useful we've built tons more road to accommodate them.
So, while I don't think AGI will happen any time soon, I wonder what 'roads' we'll build to squeeze the most out of our current AI. Probably tons of power generation.
This is a really interesting observation! Cars don't have to dominate our city design, and yet they do in many places. In the USA, NYC and a handful of less convenient cities are basically your only options for avoiding a city designed around cars. Society has largely been reshaped around the assumption that cars will be used, whether or not you'd like to use one.
What would that look like for navigating life without AI? Living in a community similar to the Amish or Hasidic Jews that don't integrate technology in their lives as much as the average person does? That's a much more extreme lifestyle change than moving to NYC to get away from cars.
"Tons of power generation?" Perhaps we will go in that direction (as OpenAI projects), but it assumes the juice will be worth the squeeze, i.e., that scaling laws requiring much more power for LLM training and/or inference will deliver a qualitatively better product before they run out. The failure of GPT 4.5, while not a definitive end to scaling, was a pretty discouraging sign.
It already has with IVRs. I wonder if, as a generalization, current technology will keep being used to provide layers and layers of "automation" for communication between people.
SDR Agents will communicate with "Procurement" Agents. Customer Support Agents will communicate with Product Agents. Coffee Barista Agents will talk with Personal Assistant Agents.
People will communicate less and less among each other. What will people talk about? Who will we talk to?
We didn't just build roads, we utterly changed land-use patterns to suit them.
Cities, towns, and villages (and there were far more of the latter then) weren't walkable out of choice, but necessity. At most, by the late 19th century, urban geography was walkable-from-the-streetcar, and suburbs walkable-from-railway-station. And that only in the comparatively few metros and metro regions which had well-developed streetcar and commuter-rail lines.
With automobiles, housing spread out, became single-family, nuclear-family, often single-storey, and frequently on large lots. That's not viable when your only options to get someplace are by foot, or perhaps bicycle. Shopping moved from dense downtowns and city-centres (or perhaps shopping districts in larger cities) to strips and boulevards. Supermarkets and hypermarkets replaced corner grocery stores (which you could walk to and from with your groceries in hand, or perhaps in a cart). Eventually shopping malls were created (virtually always well away from any transit service, whether bus or rail), commercial islands in parking-lot lakes. Big-box stores, ditto.
It's not just roads and car parks, it's the entire urban landscape.
AI, should this current fad continue and succeed, will likely have similarly profound secondary effects.
Remember, these companies (including the author) have an incentive to continue selling fear of job displacement not because of how disruptive LLMs are, but because of how profitable it is if you scare everyone into using your product to “survive”.
To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.
Afterwards, you will see the same broken promises as the company will be subject to the expectations of Wall St and pension funds.
> An engine performs a simple mechanical operation
It only appears “simple” because you're used to seeing working engines everywhere without ever having to maintain them, but neither the previous generations nor the engineers working on modern engines would agree with you on that.
An engine performs “a simple mechanical operation” the same way an LLM performs a “simple computation”.
I would argue an "AI doomer" is a negatively charged type of evangelist. What the doomer and the positive evangelist have in common is a massive overestimation of (current-gen) AI's capabilities.
“More specifically, from JetBrains, the makers of the world-famous IntelliJ IDEA IDE, whose primary claim to fame is its lovely orange, green, purple and black 'Darcula' theme”
This must be a bad attempt at a joke, right? Darcula is a nice theme (I personally prefer high contrast), but surely IntelliJ’s code inspection and automatic refactoring have always been its claim to fame.
Ha! I remember being either 5 or 6 when my uncle showed me the Minus World, and it blew my mind. That might have actually been my first exposure to "backrooms" glitches like that. What an amazing glitch. It even worked on my combo Super Mario Bros. / Duck Hunt cartridge.
MissingNo. is another good example. I have fond memories of spending untold hours in my favorite game engines trying to break free. The Jak and Daxter series were some of my favorites to break, due to the uniqueness and flexibility of the engine and the weird ways that the chunk loading system could be broken.
Ahh, "Mountain King" on the Atari 2600 was the game where I found a cool bug. If you bounced just right, you'd soar over the mountain into the glitches far above. Games didn't crash back then; they just worked with what they had.
I didn't have Mountain King for my 2600 so I looked it up. What a neat glitch. Platformer glitches are fun, I really enjoyed breaking the early Sonic games for things like the Hyper Sonic glitch, or some of the map glitches.
I think this is one thing about Super Mario Bros. 3 that felt so magical to me. With the addition of the hidden whistles and intentional "glitches" like crouching for an extended time on a white platform, running behind map elements, etc., you felt like some kind of planeswalker just bending time and space to your will. Fantastic implementation of a level skip mechanism for veteran players. It gave an already incredibly expansive game quite a lot of extra replay value, just like the Minus World.
Thank you for the reminder of MissingNo!
Takes me back to when I was a child and received a Gameboy Color without any games. I spent months just watching the start up animation on repeat before I got Pokémon yellow.
That is one of the saddest things I've ever heard. Did your parents just not know it needed games, or was it a budget thing?
I was extremely poor growing up but I did get lucky and get a Gameboy Color for Christmas with a copy of Pokémon Gold at age 5, right before my guardians went insane and forbade any non-Christian media such as "Pocket Demons" or any fantasy content. That game expanded my mind so much, introduced me to a lot of things I'd never encountered before. It seemed so mysterious and huge, especially with the entire extra Kanto campaign. Still one of the greatest and most complete games ever made.
> Ever seen an engineer do a loop and make n+1 REST calls for resources? It happens more often than you think because they don't want to have to create a backend ticket to add related resources to a call.
> With internal REST for companies I have seen so many single page specific endpoints. Gross.
As someone pointed out in reply to another comment, GraphQL is "a technological solution to an organizational problem." If that problem manifests as abuse of REST endpoints, you can disguise it with GraphQL, until one day you find out your API calls are slow for more obscure, harder to debug reasons.
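To make the "abuse" concrete, here's a toy sketch of the n+1 pattern quoted above. The resource names and the in-memory "API" are invented for illustration; each `rest_get` stands in for one network round trip.

```python
# Toy illustration of n+1 REST calls vs. a purpose-built batched
# endpoint. All names (rest_get, total_naive, the DB dict) are
# hypothetical stand-ins, not any real API.

CALLS = 0  # counts simulated network round trips

DB = {
    "order-1": {"items": ["a", "b", "c"]},
    "a": {"price": 5}, "b": {"price": 7}, "c": {"price": 9},
}

def rest_get(resource_id):
    global CALLS
    CALLS += 1  # every fetch costs one round trip
    return DB[resource_id]

def total_naive(order_id):
    # n+1: one call for the order, then one call per line item
    order = rest_get(order_id)
    return sum(rest_get(i)["price"] for i in order["items"])

def total_batched(order_id):
    # one endpoint that returns the order with items embedded
    global CALLS
    CALLS += 1
    order = DB[order_id]
    return sum(DB[i]["price"] for i in order["items"])

total_naive("order-1")    # 4 round trips for a 3-item order
n_plus_1 = CALLS
CALLS = 0
total_batched("order-1")  # 1 round trip
batched = CALLS
print(n_plus_1, batched)
```

GraphQL hides this loop behind one request, but the resolver can still fan out into the same per-item fetches server-side, which is the "harder to debug" slowness mentioned above.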
> we accidentally created an infinite event loop between two Lambdas. Racked up a several-hundred-thousand dollar bill in a couple of hours
May I ask how you dealt with this? Were you able to explain it to Amazon support and get some of these charges forgiven? Also, how would you recommend monitoring for this type of issue with Lambda?
Btw, this reminds me a lot of one of my own early career screw-ups, where I had a batch job uploading images that was set up with unlimited retries. It failed halfway through, and the unlimited retries caused it to upload the same three images 100,000 times each. We emailed Cloudinary, the image CDN we were using, and they graciously forgave the costs we had incurred for my mistake.
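For what it's worth, the usual fix for that kind of screw-up is just a retry cap. A minimal sketch (the `flaky_upload` stand-in is hypothetical, and real code would also add backoff and catch narrower exceptions):

```python
# Bounded retries: give up after max_attempts instead of looping
# forever the way the unlimited-retry batch job above did.

def with_retries(fn, max_attempts=3):
    last_err = None
    for _ in range(max_attempts):
        try:
            return fn()
        except Exception as err:  # real code: catch specific errors
            last_err = err
    raise last_err  # surface the failure instead of retrying forever

attempts = {"n": 0}

def flaky_upload():
    # stand-in for the stuck image upload; it always fails
    attempts["n"] += 1
    raise IOError("upload failed")

try:
    with_retries(flaky_upload, max_attempts=3)
except IOError:
    pass

print(attempts["n"])  # stops at 3, not 100,000
```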
> May I ask how you dealt with this? Were you able to explain it to Amazon support and get some of these charges forgiven? Also, how would you recommend monitoring for this type of issue with Lambda?
AWS support caught it before we did, so they did something on their end to throttle the Lambda invocations. We asked for billing forgiveness from them; last I heard that negotiation was still ongoing over a year after it occurred.
Part of the problem was that we had temporarily disabled our billing alarms at the time for some reason, which caused our team to miss this spike. We've since enabled alerts on both billing and Lambda invocation counts to see if either goes outside of normal thresholds. That still doesn't hard-stop this from occurring again, but at least we get proactively notified before it gets as bad as it did. I don't think we've ever found a solution to cut off resource usage if something like this is detected.
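The invocation-count alert described here is essentially a comparison against a recent baseline. A toy sketch of the idea (real setups would use CloudWatch alarms rather than hand-rolled code, and the thresholds and numbers below are made up):

```python
# Toy spike detector: flag when the newest Lambda invocation count
# far exceeds the average of the preceding window. Purely
# illustrative; in practice this logic lives in a CloudWatch alarm.

def spiking(counts, factor=10, window=5):
    """True if the latest sample is more than factor x the baseline."""
    baseline = sum(counts[-window - 1:-1]) / window
    return counts[-1] > factor * baseline

normal = [100, 110, 95, 105, 100, 102]
runaway = [100, 110, 95, 105, 100, 250_000]  # infinite-loop territory

print(spiking(normal), spiking(runaway))
```

As noted above, billing metrics lag by hours, which is why alerting on the invocation count itself catches a runaway loop much sooner.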
Earlier in the week there were threads about how AWS will never implement resource blocking like you're talking about: big companies don't want to be shut off in the middle of a traffic spike, small companies don't pay enough money, and it's not like it hurts Amazon's bottom line.
We use memory-safe languages, type-safe languages. AWS is not fundamentally billing-safe.
Just to give you nightmares. There's been DDoS in the news lately, I'm surprised nobody has yet leveraged those bot nets to bankrupt orgs they don't like who use cloud autoscaling services.
I don't know how you monitor it, part of the issue is the sheer complexity. How do you know what to monitor? The billing page is probably the place to start - but it is too slow for many of these events.
I guess you could start with the common problems. Keep watchdogs on the number of Lambdas being invoked, or any resource you spin up or that has autoscaling utilization. Egress bandwidth is definitely another I'd watch.
Dunno, just seems to me you'd need to watch every metric and report any spikes to someone who can eyeball the system.
For me? I limit my exposure to AWS as much as I reasonably can. The possibilities, combined with the known nightmare scenarios and a "recourse" that isn't always effective, don't make for good sleep at night.
> There's been DDoS in the news lately, I'm surprised nobody has yet leveraged those bot nets to bankrupt orgs they don't like who use cloud autoscaling services.
That’s interesting, because it seems like it would happen, but what’s in it for the attacker, when under threat the target can implement caps?
A severe enough bill can cause an organization to be instantly bankrupt. No opportunity to try to do something like caps.
Regardless, turning on spending caps isn't a final solution to this particular attack. With caps the site/resources will hit the cap and go offline. Accomplishing what a DDoS generally tries to accomplish anyway.
The only real solution is that you have to have a cheap way to filter out the attacking requests.
It could only be an attack of spite; you can’t really hold anyone to ransom, because the IPs of malicious traffic could be blocked, or limits set, after the initial overspend. Perhaps if the botnet was big enough.
I think you're limited to 1,000 concurrent Lambda invocations by default anyway.
That said, it's not easy to get an overview of what's going on in an AWS account (except through Billing, but I don't know how up to the moment that is).
I've been able to get AWS support to waive fees for a runaway Lambda that no one spotted for a few weeks - they wanted an explanation of what happened and a mitigation strategy from us and that was it.
It is still unresolved because AWS wants us to pay the bill so they can then issue a credit but the company credit card doesn't have a high enough limit to cover the bill.
I'm shocked no one else gave this answer earlier in the thread. If you're using PDO directly in 2021, you're absolutely doing it wrong. You don't need to use all of Laravel, or even all of Eloquent for that matter. If you don't want to depend on a framework or use an ORM, you can use "illuminate/database" (https://packagist.org/packages/illuminate/database) for a secure wrapper around PDO. No need to reinvent the wheel.
>This is somewhat the point. If using the language's standard libraries is "absolutely doing it wrong", that's an indictment of the language.
Exactly, all languages have footguns, but some have a lot more than others. You don't hear, for example, Java developers bitching about JDBC anywhere close to the extent PHP developers bitch about the various common approaches to database connections.
> If using the language's standard libraries is "absolutely doing it wrong"
You are being deliberately obtuse. Other comments in this thread offer correct examples of using PDO to avoid SQL injection. I didn’t mean it was impossible to write safe database code using the standard library—obviously, PHP is a Turing-complete language, it can be done!—I just meant it’s awkward, and verbose, and developers are unlikely to do it consistently throughout an application. Hence this type of concern is best abstracted into a library.
To your point about “indicting a language,” most languages have footguns like this. The worst you can say about PHP is that the documentation should do more to discourage new users from working with PDO directly. (And I mean the official documentation—the language maintainers can’t be held responsible for the kind of unofficial tutorials the article complains about.) But regardless of what the official docs say, most PHP development today is done using frameworks like Laravel, Symfony, and Zend Framework that do not suffer from SQL injection issues.
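The "correct examples" being referred to are prepared statements with bound parameters. Since this thread has no code of its own, here's the idea sketched with Python's stdlib sqlite3 module (PDO's positional and named placeholders work the same way conceptually; the table and values are made up for the demo):

```python
import sqlite3

# Prepared-statement pattern: the driver sends the parameter value
# separately from the SQL text, so quotes inside the value can never
# change the shape of the query.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "alice' OR '1'='1"  # classic injection attempt

safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)
).fetchall()
print(safe_rows)  # [] -- the injection attempt matches nothing

good_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", ("alice",)
).fetchall()
print(good_rows)  # [('alice',)]
```

The awkwardness complained about above is that PDO makes you do this prepare/bind dance by hand everywhere, which is exactly what a wrapper like illuminate/database abstracts away.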
I would put Laravel/PHP against anything else out there right now. Based on my (albeit limited) experiences with Node.js and Java/SpringBoot, I would choose Laravel for the vast majority of applications.