If we're being generous we could say mayyybe 20% of the layoffs are attributable to overhiring during ZIRP.
Block was doing $4B in revenue with 4K employees in 2019 before the pandemic.
They're now doing $24B in revenue with 10K employees and are going to cut back to near those previous employee levels. That would be roughly a 5X jump in revenue per employee from pre-covid, pre-AI levels.
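A quick back-of-envelope with those figures (the post-cut headcount below is my assumption, per "cut back to near those previous employee levels"):

    # Revenue per employee, using the figures above.
    # emp_after_cuts is an assumed post-cut headcount near the 2019 level.
    rev_2019, emp_2019 = 4e9, 4_000         # $4B revenue, 4K employees (2019)
    rev_now, emp_after_cuts = 24e9, 4_500   # $24B revenue, assumed headcount

    rpe_2019 = rev_2019 / emp_2019          # $1.0M per employee
    rpe_after = rev_now / emp_after_cuts    # ~$5.3M per employee
    print(f"{rpe_after / rpe_2019:.1f}x")   # ~5.3x, i.e. the ~5X jump above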
If you think code becoming 1,000X cheaper to produce doesn't radically change the number of employees needed inside a technology org, then it's time to put down the copium pipe.
The problem is that there is no hard evidence anywhere to actually prove this.
I’m going to sidestep whether or not AI productivity gains are real, but all the “data” I have seen affirming them amounts to black-box observations or vibes.
Even your evidence is just conjecture. You’re asserting that they’re going to succeed in cutting their workforce like this because AI is such a boon.
The Financial Times ran an article [1] the other week whose title said AI is a productivity boost, and then the article basically spends a bunch of words on how the signs look good that AI is useful! It then mentions that all of this is inherently optimistic and not necessarily indicative of an actual trend yet.
> While the trends are suggestive, a degree of caution is warranted. Productivity metrics are famously volatile, and it will take several more periods of sustained growth to confirm a new long-term trend.
IMHO, at the moment it is not possible to tell whether the trends reflect AI being an actual game changer or AI being used as a smokescreen to launder layoffs for other reasons. We are in a bubble for sure, and the problem is that it’s great until it’s not. Bar Kokhba was considered the messiah…until everyone was slaughtered and the Romans depopulated Judaea. Oops.
I just posted the hard evidence (the actual numbers). The company is going to produce 5-6X the revenue with a similar number of employees as they had 6-7 years ago before the overhiring boom.
But I guess we'll just have to defer to the AI experts at...the Financial Times...and their emotional vibes of the situation instead.
The future is not evidence? I don’t understand what you’re saying.
> The company is going to produce 5-6X the revenue with a similar number of employees as they had 6-7 years ago before the overhiring boom.
That’s not evidence. That’s a belief. I’m not disagreeing they overhired, but this statement contains no evidence that reducing the size of the company like this is going to yield the same or greater profits.
Just the other day, I saw a comment on HN accuse another comment of being AI for no good reason. I personally thought the comment was fine.
I know it's an unpopular opinion, but I don't really read too deeply into whether text is AI generated or not. On social platforms like HN I tend to just skim many comments anyway so it's not like the concept of "they spent no time writing so you shouldn't spend time reading" really applies.
I know some people use apps like Grammarly to improve their language and stuff, which I can respect. But at what point do we draw the line between AI assisted text and AI generated text?
I sometimes use AI to research the nuances of some topics to help me formulate a response and synthesize ideas, but if I ever get to the point where I'd be asking AI to generate a response to the comment, then I find it better to just not respond at all.
I think it depends on the purpose of the comment. I can see why someone may get frustrated with AI text if they were, say, looking for advice on xyz, as you'd usually want someone with personal or senior experience. If I want career advice, AI will give me a predictable response similar to LinkedIn, but something from a successful person in the industry carries a lot more trust and substance.
It seems X's Grok is the first case of a large LLM provider weakening its content moderation rules. If people don't push back enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.
True, CSAM should be blocked by all means. That's clear as day.
However, I think for Europe the regular sexual content moderation (even in text chat) is way over the top. I know the US is very prudish, but here most people aren't.
If you mention something erotic to a mainstream AI, it will immediately clam up, which is super annoying because it blocks using it for such discussion topics. It feels a bit like foreign morals are being forced upon us.
Limits on topics that aren't illegal should be selectable by the user, not hard-baked to the most restrictive standards, similar to the way I can switch off safe search in Google.
However CSAM generation should obviously be blocked and it's very illegal here too.
You can search Hugging Face for role-playing models that allow a decent level of erotic content, but even that doesn't guarantee a pleasant experience.
This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more so that people on Hacker News advocate for that.
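For anyone who hasn't tried it, a minimal sketch of local inference (assuming the transformers library and enough hardware; the model name is just an example, not an endorsement):

    # Run an open-weight model locally via Hugging Face transformers.
    # Model choice here is illustrative; swap in any open-weight chat model.
    from transformers import pipeline

    chat = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
    messages = [{"role": "user", "content": "Hello, what can you do?"}]
    print(chat(messages, max_new_tokens=128)[0]["generated_text"])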
Safety isn't just implemented via system prompts; it's also a matter of training and fine-tuning, so what you're saying is incorrect.
If you think people here think that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
More broadly, if you don't reasonably regulate your own models and related work, then it attracts government regulation.
I’ve run into “safeguards” far more frequently than I’ve actually tried to go outside the bounds of the acceptable use policy. For example, when I was attempting to use ChatGPT to translate a journal that was handwritten in Russian that contained descriptions of violent acts. I wasn’t generating violent content, much less advocating it - I was trying to understand something someone who had already committed a violent act had written.
> If you think people here think that models should enable CSAM you're out of your mind.
Intentional creation of “virtual” CSAM should be prosecuted aggressively. Note that that’s not the same thing as “models capable of producing CSAM”. I very much draw the line in terms of intent and/or result, not capability.
> There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
I agree, but believe we are quite far away from “reasonable safety”, and far away from “reasonable safeguards”. I can get GPT-5 to try to talk me into committing suicide more easily than I can get it to translate objectionable text written in a language I don’t know.
When these models are fine-tuned to allow any kind of nudity, I would guess they can also be used to generate nude images of children. There is a level of generalization in these models. So it seems to me that arguing for restrictions that could only be enforced effectively via prompt validation is just an indirect argument against open-weight models.
> When these models are fine tuned to allow any kind of nudity
If you're suggesting Grok is fine-tuned to allow any kind of nudity, some evidence would be in order.
The article suggests otherwise: "The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute."
Great list, thank you. The only thing I'd note is that whenever I imported a large list like this in the past, I always stopped checking my RSS reader after a while because the content wasn't interesting. I think finding feeds and adding them to a reader should happen organically over time.
This may be because most feed readers don't have a proper way to triage items. Adding a feed doesn't mean you want to read everything from said feed. Usually only a subset of articles are interesting.
I built a feed reader with that concept in mind: a separate triage stage where you only decide whether an item is worth reading or not. That makes it easier to handle large feed lists and surface the best articles from them.
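To make the triage idea concrete, here's a rough sketch of the model (a hypothetical illustration, not my reader's actual API):

    # Two-stage feed reader: new items land in a triage queue; only items
    # you explicitly keep reach the reading list.
    from dataclasses import dataclass
    from enum import Enum

    class Stage(Enum):
        TRIAGE = "triage"        # decide from title/summary: worth reading?
        READING = "reading"      # accepted into the actual reading list
        DISCARDED = "discarded"  # skipped without guilt

    @dataclass
    class Item:
        feed: str
        title: str
        url: str
        stage: Stage = Stage.TRIAGE

    def triage(item: Item, keep: bool) -> None:
        # Subscribing to a feed no longer means committing to read all of it.
        item.stage = Stage.READING if keep else Stage.DISCARDED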
Air, as a product line, quite famously started with Jobs emphasising the thinness of the MacBook Air by pulling it out of a manila envelope. Taking what are ultimately marketing terms as literal face-value descriptors isn't particularly useful.
I would guess they do it because they want to minimize the chance that someone will install an unapproved app on someone else’s phone and cause harm. I know it’s already pretty hard, but Apple seems to be very particular when it comes to this.
That’s an opinion. Apple’s take is that they sell “everything that runs on your phone has gone through our reviews, so you can trust it isn’t malware”.
That, in their opinion, makes it their job to prevent people from permanently installing software on other people’s phones. I’m sure they would remove the “permanently” if they could, but developers have to test builds so frequently that they can’t review them all.
It's not that they can't afford a $3 toothpaste; it's the environment they are in that makes it hard to prioritize things like this. It's the education and the overall life quality (or the lack thereof) that causes this problem.
You're missing the point that this is not about AI in the first place.