This is not the only possible outcome. Another approach would be not to offer software within the affected region. U.S. local news is often not available to European visitors now due to GDPR. Similarly, Canadian news outlets are not available on Facebook due to Bill C-18. If I were an indie game developer, I would consider this approach and simply avoid selling within California.
Larger game studios would likely adjust as you say. However they too could adjust in such a way that they only offer subscriptions within California as that appears to exempt them from this rule. Many outcomes are possible beyond simply adjusting to the legislation in the way you are suggesting.
> But if someone claims that the trend toward increasing AI capabilities will never reach some particular scary level, then the burden is on them to explain either:…
This is not the context in which I hear about sigmoids vs. exponentials. I hear it with regard to “the singularity”, not claims that AI won’t reach some pre-specified level. You may get AGI; you aren’t getting a singularity.
Blendle, Scroll, Flattr and several others have attempted this. It turns out no consumer actually wants to do this; it’s primarily an idea invoked on HackerNews to defend not subscribing to journalism while using ad blockers. It’s not a real business model.
Well, I've witnessed this on dozens of houses in the town where my ex-wife grew up. The local river was slow-moving in a shallow river valley. Every spring, it would flood, and houses built within a half-mile of the main river would flood up to the second floor.
Would the environmental assessment help? I'd like to think so, but when I almost bought in the area, I discovered that the floodplain maps were "optimistic."
That's not what an environmental impact assessment is. Environmental impact assessments look at potential harms to the environment, not to the property. One would consider whether building a house would impact the wildlife, and sometimes other related phenomena.
> where infringing copyright is legal as long as you're rich.
This isn’t true. A rich person and a poor person can train LLMs on copyrighted material in 2026. How they acquired those materials matters. Wealthy corporations hold no legal advantage in this space. For example, Anthropic recently settled for $1.5 billion due to acquiring books via piracy: https://www.nytimes.com/2025/09/05/technology/anthropic-sett...
My understanding is that an individual could likely pirate the same books without paying a dime (not due to differing legal standards but simply due to the fact it would be hard to identify them in many jurisdictions). In a practical sense it seems corporations are held to a higher standard in this regard.
The discrepancy is that some people equate training a model with piracy even though they are not the same thing. This is typically due to intellectual laziness (refusal to understand the differences) or willful misrepresentation (due to being ideologically opposed to generative AI). No need to make such a mistake here though.
Of course it's not the same thing -- it's way worse.
The piracy comes first, and it's exactly the same thing. GenAI Corp. can't train models on illicitly obtained media before illicitly obtaining said media. And that very thing is already what private individuals got and get sued for millions over.
The GenAI Corp., having gotten away with that unpunished, then goes on to commit further violations by commercially exploiting the media with neither a license to do so, nor any intentions to pay the rights-holders for their use.
By the media conglomerates' own math, these GenAI companies should all be drowning in lawsuits over kazillions of bajillions of dollars.
> The piracy comes first, and it's exactly the same thing. GenAI Corp. can't train models on illicitly obtained media before illicitly obtaining said media.
My contention is that this is not happening. Most generative AI companies do not source their training data from illegal torrents, and the few that did are currently paying for it. Further, I suspect the companies that get away with it today are _smaller_, not larger.
Training data is typically sourced by scraping the publicly available web.
> Of course it's not the same thing -- it's way worse.
Setting aside your own moral standards here, we should at least be able to agree that from a legal standpoint training a model is not copyright infringement.
> A rich person and a poor person can train LLMs on copyrighted material in 2026.
Updating an old adage for the modern age:
“The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.”
― Anatole France
Why do you think Google, the world's largest ad company, is paying money out of its ears to research those topics? The sooner people realize all major U.S. tech companies are contractors for the U.S. Department of War, the better.
Go ahead: use Meta's verifier, give your biometrics to OpenAI, type all your personal and financial information into Copilot for advice, email your boss to tell him Anthropic's Boris was right and you are now redundant, click on all of the ads you see, only engage with your peers on Facebook and let the algorithm decide how that goes, only drive on roads with Flock cameras to stay safe, turn off your ad blocker, don't use VPNs, etc. It's your life.