I didn't want to hassle with migrating my WordPress blog, so I now just deploy it from GitHub to Cloudflare Pages so it's served statically (fast + secure). It's free too; I wrote a blog post on it a couple years back: https://gmays.com/how-to-host-wordpress-sites-free/
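For anyone curious, the deploy step can be wired to a GitHub Action; a minimal sketch (the project name and output directory are placeholders, and note `cloudflare/pages-action` has since been superseded by `wrangler-action`):

```yaml
name: Deploy static WordPress export to Cloudflare Pages
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Push the pre-built static files to Pages
      - uses: cloudflare/pages-action@v1
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          projectName: my-blog   # placeholder Pages project name
          directory: ./public    # wherever your static export lands
```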
But these days any new site I build is on NextJS since coding agents make it a breeze.
Unfortunately, Cloudflare Pages is deprecated. While existing Pages sites will be supported for some time, for new projects the only option is Workers, which is much more complicated to configure.
You can actually still create new Pages projects, but it is de-emphasized in the UI in favor of Workers.
We've done a lot of work to make static sites on Workers just as easy to configure as Pages was. Have you tried it lately? Would love to hear what aspects you feel are still more complicated than Pages.
Just to be clear, we will not break existing sites using Pages. We will either auto-migrate them to Workers once we have all the tools in place to do so, or we'll keep supporting Pages forever.
Nice to hear! I haven't tried it since last summer. What I tried to do was deploy Jekyll from GitHub to Pages, but the only option I saw was to deploy to Workers, and I couldn't find the documentation for doing this on Workers. Maybe this has improved since last summer. I'll try again.
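From what I can tell, serving a static Jekyll build on Workers now just needs a static-assets block in `wrangler.toml`; a rough sketch, assuming Jekyll's default `_site` output directory (names are placeholders — check Cloudflare's static assets docs for current syntax):

```toml
# wrangler.toml — serve the Jekyll build as static assets on Workers
name = "my-jekyll-site"          # placeholder Worker name
compatibility_date = "2025-01-01"

[assets]
directory = "./_site"            # Jekyll's default build output
```

Then `jekyll build` followed by `npx wrangler deploy` should do it.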
Thanks! No hard limit on team size. On the collab side, I load tested it with 210 people typing simultaneously in the same doc and it handled it fine (9ms median latency, zero dropped keystrokes). So a normal team won't come close to stressing it.
Good point, it's a mix. The "it'll only get harder" is also because things are moving so fast and it takes time to learn (especially across teams) and change habits. No past paradigm has moved this quickly, which makes it hard to grok.
I also fully agree with "don’t overdo your investment into this generation of tools". IMO there are too many "cutting edge" tools trying to do all of this sexy stuff that'll be irrelevant in the next few months.
It's best to keep things simple with tooling. I push the edge on my general approach (99% of everything is AI coded) but conservative with my tools (pretty much only using Cursor now) to have at least some layer of stability. Otherwise stacking too many cutting edge things just feels too fragile, and will decay as AI improves, causing other issues. And this stuff is moving so fast and these companies are sufficiently motivated that the best things will make it into the tools, like plan/debug modes in Cursor.
I also feel that agentic coding is fast enough for now, so I don't even bother with multi-agent workflows. I still get a ton done and it's already at the edge of my ability to design coherently. Sure I could get 10X more code written in parallel with 10X more agents, but I can't design that fast, so it's just hurry up and wait with worse quality. And if that much code is needed I'm probably doing something wrong anyway.
Same. This is a surprisingly simple recipe for a happy life and helps prevent lifestyle inflation. It reminds me of PG's "Keep Your Identity Small" (https://paulgraham.com/identity.html).
That's fair, but it wasn't the point of the article because it's messy. Many would argue that core LLMs are 'trending' toward commodity, and I'd agree.
But it's complicated because commodities don't carry brand weight, yet there's obviously a brand power law. I (like most other people) use ChatGPT. But for coding I use Claude and a bit of Gemini, etc. depending on the problem. If they were complete commodities, it wouldn't matter much what I used.
Part of the issue here is that while LLMs may be trending toward commodity, "AI" isn't. As more people use AI, they get locked into their habits, memory (customization), ecosystem, etc. And as AI improves, if everything I do has less and less to do with the hardware and I care more about everything else, then the hardware (e.g. the iPhone) becomes the commodity.
Similar with AWS: if data/workflow/memory/lock-in becomes the moat, I'll want everything where the rest of my infra is.
Your comment on Intel is correct, but it's also true that TSMC could invest billions into advanced fabs because Apple gave them a huge guaranteed demand base. Intel didn't have the same economic flywheel since PCs/servers were flat or declining.
That's a good clarification on Amazon: running on commodity hardware with competitive pricing != competing on price alone. It would have been better to clarify this difference when pointing out that they're trying the same commodity approach in AI.
Amazon is doing exactly everything except competing on price and they don't run commodity hardware either. They're even developing their own chips. Sure they have "commodity" GPUs and CPUs in some lineups, but they also have Graviton.
If you get something this mundane wrong from the start I don't know how I could trust anything else from the post either.
True, but Apple is a consumer hardware company, which requires billions of users at their scale.
We may care about running LLMs locally, but 99% of consumers don't. They want the easiest/cheapest path, which will always be the cloud models. Spending ~$6k (what my M4 Max cost) every N years since models/HW keep improving to be able to run a somewhat decent model locally just isn't a consumer thing. Nonviable for a consumer hardware business at Apple's scale.
I'm somewhat bullish on Google as well, they have the opportunity if they can figure out the product (which they are bad at) and they have the edge in cloud with their models + TPUs.
But your comment about the phone could have been about horses, or the notepad or any other technology paradigm we were used to in the past. Maybe it'll take a decade for the 'perfect' AI form factor to emerge, but it's unlikely to remain unchanged.
Yes brain chips and implants will be the next form factor. Until then the slab of battery and screen in your pocket is going to be present (and probably remain for a while even after we get brain implants).
Right, but remember Microsoft was 'working on' mobile also. The issue is that they're working on it the wrong way. Amazon is focused on price and treating it like a commodity. Apple is trying to keep the iPhone at the center of everything. Thus neither is fully committing to the paradigm shift: they say it's one, but they aren't acting like it, because their existing strategy/culture precludes them from doing so.
> The issue is that they're working on it the wrong way.
So is everyone else, to be fair. Chat is a horrible way to interact with computers — and even if we accept that worse is better, its only viable future is to include ads in the responses. That isn't a game Apple is going to want to play. They are a hardware company.
More likely someday we'll get the "iPhone moment" when we realize all previous efforts were misguided. Can Apple rise up then? That remains to be seen, but it will likely be someone unexpected. Look at any successful business venture and the eventual "winner" is usually someone who sat back and watched all the mistakes be made first.
We begrudgingly accept chat as the lowest common denominator when there is no better option, but it's clear we don't prefer it when better options are available. Just look in any fast food restaurant that has adopted those ordering terminals and see how many are still lining up at the counter to chat with the cashier... In fact, McDonald's found that their sales rose by 30% when they eliminated chatting from the process, so clearly people found it to be a hindrance.
We don't know what is better for this technology yet, so it stands to reason that we reverted to the lowest common denominator again, but there is no reason why we will, or will want to, stay there. Someone is bound to figure out a better way. Maybe even Apple. That business was built on being late to the party. Although, granted, it remains to be seen whether it can keep that up absent Jobs.
What is representative, though, is simple use: All you have to do is use chat to see how awful it is.
It is better than nothing. It is arguably the best we have right now to make use of the technology. But, unless this AI thing is all hype and goes nowhere, smart minds aren't going to sit idle as it progresses toward maturity.
The problem with UX driven by this kind of interface is latency. Right now, this kind of flow goes more like:
"What burgers do you have?"
(Thinking...)
(4 seconds later:)
(expands to show a set of pictures)
"Sigh. I'll have the thing with chicken and lettuce"
(Thinking...)
(3 seconds later:)
> "Do you mean the Crispy McChicken TM McSandwich TM?"
"Yes"
(Thinking...)
(4 seconds later:)
> "Would you like anything else?"
"No"
(Thinking...)
(5 seconds later:)
> "Would you like to supersize that?"
"Is there a human I can speak with? Or perhaps I can just point and grunt to one of the workers behind the counter? Anyone?"
It's just exasperating, and it's not easy to overcome until local inference is cheap and common. Even if you do voice recognition on the kiosk, which probably works well enough these days, there's still the round trip to OpenAI and then the inference time there. And of course, this whole scenario gets even worse and more frustrating anywhere with subpar internet.
Right. We talk when it is the only viable choice in front of us, but as soon as options are available, talk goes out the window pretty quickly. It is not our ideal mode of communication, just the lowest common denominator that works in most situations.
But, now, remember, unlike humans, AI can do things like materialize diagrams and pictures out of "thin air" and can even make them interactive right on the spot. It can also do a whole lot of things that you and I haven't even thought of yet. It is not bound by the same limitations of the human mind and body. It is not human.
For what reason is there to think that chat will remain the primary mode of using this technology? It is the easiest to conceive of way to use the technology, so it is unsurprising that it is what we got first, but why would we stop here? Chat works, but it is not good. There are so many unexplored possibilities to find better and we're just getting started.
I think chat will remain dominant, but we'll go into other modes as needed. There's no more efficient way to communicate "show me the burgers" than saying it - thinking it is possible, but sending thoughts is too far off right now. Then you switch to imagery or hand gestures or whatever else when they're a better way to show something.
> Chat is a horrible way to interact with computers
Chat is like the command line, but with easier syntax. This makes it usable by an order of magnitude more people.
Entertainment tasks lend themselves well to GUI type interfaces. Information retrieval and manipulation tasks will probably be better with chat type interfaces. Command and control are also better with chat or voice (beyond the 4-6 most common controls that can be displayed on a GUI).
> Chat is like the command line, but with easier syntax.
I kinda disagree with this analogy.
The command line is precise, concise, and opaque. If you know the right incantations, you can do some really powerful things really quickly. Some people understand the rules behind it, and so can be incredibly efficient with it. Most don't, though.
Chat with LLMs is fuzzy, slow-and-iterative... and differently opaque. You don't need to know how the system works, but you can probably approach something powerful if you accept a certain amount of saying "close, but don't delete files that end in y".
The "differently-opaque" for LLM chatbots comes in you needing to ultimately trust that the system is going to get it right based on what you said. The command line will do exactly what you told it to, if you know enough to understand what you told it to. The chatbot will do... something that's probably related to what you told it to, and might be what it did last time you asked for the same thing, or might not.
For a lot of people the chatbot experience is undeniably better, or at least lets them attempt things they'd never have even approached with the raw command line.
Exactly. Nobody really wants to use the command-line as the primary mode of computing; not even the experts who know how to use it well. People will accept it when there is no better tool for the job, but it is not going to become the preferred way to use computers again no matter how much easier it is to use this time. We didn't move away from the command-line simply because it required some specialized knowledge to use.
Chatting with LLMs looks pretty good right now because we haven't yet figured out a better way, but there is no reason to think we won't figure out a better way. Almost certainly people will revert to chat for certain tasks, like people still use the command-line even today, but it won't be the primary mode of computing like the current crop of services are betting on. This technology is much too valuable for it to stay locked in shitty chat clients (and especially shitty chat clients serving advertisements, which is the inevitable future for these businesses betting on chat — they can't keep haemorrhaging money forever and individuals won't pay enough for a software service).
My experience with Claude Code is a fantastic way to interact with a (limited subset) of my computer. I do not think Claude is too far off from being able to do stuff like read my texts, emails, and calendar and take actions in those apps, which is pretty much what people want Siri to (reliably) do these days.