I loved Christmas Lemmings so much back in the day! The snowfall visualization and the little Santa lemming clearing it. I made a much less impressive snowfall demo a while back based on that (minus the clearing lemming, because I always wanted to watch the snow pile up). https://anderegg.ca/projects/flake/
In the early 2000s I was 100% sold on the idea of strict XHTML documents and the semantic web. I loved the idea that all web pages could be XML documents which easily exposed their data to other consumers. If you marked your document with an XHTML 1.0 Strict or XHTML 1.1 doctype, a web browser was supposed to show an error if the page contained an XML error. The problem was that it was a bit of a pain to get this right, so effectively no one cared about making compliant XHTML. It was a nice idea, but it didn't interact well with the real world.
Decades later, I'm still mildly annoyed when I see self-closing tags in HTML. When you're not trying to build a strict XML document, they're no longer required. Now I read them as a vestigial reminder of the strict XHTML dream.
As someone who has gotten into the idea of semantic Web long after XHTML was all the rage[0], I somewhat resent that semantic Web and XML are so often lumped together[1]. After all, XML is just one serialisation mechanism for linked data.
[0] I don’t dislike XHTML. The snob in me loves the idea. Sure, had XHTML been The Standard, it would have been so much more difficult to publish my first website at the age of 14 that I’m not sure I would have gotten into building for the Web at all. But is it necessarily a good thing if our field is based on technology so forgiving of malformed input that a middle school pupil can pass for an engineer? And while I do omit closing tags when the spec allows it, are the savings worth remembering the complicated rules for when they can be omitted, and is it worth maintaining all the branching that lets parsers handle invalid markup, when barely any HTML is hand-written these days?
[1] Usually it is to the detriment of the former: the latter tends to be ill-regarded by today’s average Web developer used to JSON (even as they hail various schema-related additions on top of JSON that essentially try to make it do things XML can, but worse).
That is a good point. If you consider XSD, then that is an XML connection; it starts to become a bit complicated, and I see why people start to dislike it. I forget about that because to me it’s just about the idea of a graph, which is otherwise quite elegant. Why not have a type-free graph with just string literals? Much richer information about what kinds of values go where can be provided through constraints, vocabularies, etc.
My favourite serialisation has got to be dumb triples (maybe quads). I don’t think writing graphs by hand is the future. However, when it comes to that, Turtle’s great.
Because the semantics of numbers and dates matters.
It's absurd that JSON defines numbers as strings and has no specification for dates and times.
I believe we lose a lot of small-p programming talent (people with other skills who could put them on wheels by "learning to code") the moment they have the 0.1 + 0.2 != 0.3 experience. Decimal numbers should be at people's fingertips; they should be the default thing that non-professional programmers get, and IEEE doubles and floats should be as exotic as FP16.
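That experience is easy to reproduce in any language that uses IEEE 754 binary doubles; a quick Python sketch of both the surprise and the decimal alternative:

```python
from decimal import Decimal

# Binary doubles cannot represent 0.1 or 0.2 exactly,
# so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# Decimal arithmetic behaves the way newcomers expect.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```
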
As for dates, everyday applications written by everyday people that use JSON frequently have 5 or more different date formats used in different parts of the application and it is an everyday occurrence that people are scratching their heads over why the system says that some event that happened on Jan 24, 2026 happened on Jan 23, 2026 or Jan 25, 2026.
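One common way that off-by-one-day bug arises (a hedged illustration, not any particular system): a bare date string gets parsed as UTC midnight and then displayed in a local timezone.

```python
from datetime import datetime, timezone, timedelta

# An event stored as "2026-01-24" is often parsed as UTC midnight...
event = datetime(2026, 1, 24, tzinfo=timezone.utc)

# ...but rendered in a UTC-5 zone, it lands on the previous day.
local = event.astimezone(timezone(timedelta(hours=-5)))
print(local.date().isoformat())  # 2026-01-23
```
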
Give people choices like that and they will make the wrong choices and face the consequences. Build answers for a few simple things that people screw up over and over and... they won't screw up!
> Because the semantics of numbers and dates matters.
Type semantics is only a small part of what is needed for systems and humans to know how to adequately work with and display the data. All of that information, including the type but so much more, can be supplied in established ways (more graphs!) without having to sprinkle XSD types on your values.
For example, say you have a triple where the object is a number that for whatever good reason must lie between 1 and <value from elsewhere in the graph> in 0.1 increments. Knowing that it is a number and being able to do math on it is not that useful when 99% of math operations would yield an invalid value; you need more metadata, and if you have that you also have the type.
Besides, the verbatim literal, as obtained, is the least lossy format. The user typed "2.2": today you round it to an integer, but tomorrow you support decimal places, and if you kept the original, the system can magically get more precise and no one needs to repeat themselves. (You can obviously reject input at the entry stage if it’s outlandish, but when it comes to storage, plain string is king.)
You're annoyed when people are trying to keep the dream alive?
Since HTML5 specifies how to handle all parse errors, and the handling of an XML self-closing tag is to ignore it unless it's part of an unquoted attribute value, it's valid HTML5.
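To make that parse rule concrete (an illustration of the behavior described above, not quoted spec text):

```html
<!-- A trailing slash before ">" is ignored by the HTML5 parser,
     so these two lines produce the same element: -->
<br>
<br/>

<!-- With an unquoted attribute value, though, the slash is consumed
     as part of the value: src parses as "x.png/" here, not "x.png". -->
<img src=x.png/>
```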
I'm not annoyed by it when people are trying to make XML compatible documents, but effectively no one is. Platforms like WordPress use self-closing image tags everywhere, but almost no one using WordPress cares about document validation. This ends up meaning that the `<img ... />` is just an empty gesture.
All of the vintage computer emulators on the Internet Archive owe a tip of the hat to the Canon Cat emulation, because that was the computer that started me on automating the original cross-compilation infrastructure.
Author here. I was also surprised to see this getting a bunch of HN traffic suddenly. I guess the Liquid Glass hate is pretty strong when a dashed-off blog post about a real blog post can randomly do numbers! Heartened to see that others are annoyed by this design as well, though. Hopefully Apple will do something about it, but I'm not holding my breath.
I don’t know if you remember iOS 7, but it was a catastrophe. Designs evolve on Apple platforms, usually in the correct direction.
Anecdotally, I have used Liquid Glass since the first beta and I honestly think there are a lot of good things there. It took me a few months, but I actually like it now (and I have some colleagues in the same boat as me).
I really need to give Deno another shot. Supply-chain attacks terrify me, and any Node project these days comes with dozens or hundreds of dependencies. It's basically impossible to vet them all, and I don't think this problem is going to go away. The Deno solution here seems like a nice way to handle this. Almost none of my dependencies need to touch my filesystem or directly access the network.
Over the last few years, I've found that most sites clients want can be built with static site generators and JavaScript. PHP is also great and easily hosted! But most times when there's a sprinkling of dynamism needed, it's OK if it happens at build/run time rather than when the page is rendered on a server. This leads to faster page loads and less to worry about security-wise. No shade! I've just been finding this has led to good outcomes for me.
You mean to say some basic company site, blog, or photo gallery that only gets updated once or twice a month, with zero dynamic content otherwise, doesn't need a whole LAMP stack?
Honestly though, with GH/CF Pages-type hosting and how simple static sites can be, it's a direction I'm ever thankful things have been moving in. It just seems so much less painful for those who aren't here to be security experts and just want a bloody site that 'just works'.
Your static site generator can generate PHP instead of HTML and have some server-side dynamism sprinkled into your mostly static site, the same way that generating JS can sprinkle in some client-side dynamism.
No clue how relevant they are today, but server side includes (SSI) solves the problem of wanting a _mostly_ static page with a little bit of dynamic content in it.
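As a sketch (assuming Apache with mod_include enabled; the fragment path is hypothetical), a mostly static page with one dynamic piece might look like:

```html
<!-- index.shtml: served as static HTML except for the include below,
     which the server resolves on each request -->
<html>
  <body>
    <h1>Mostly static page</h1>
    <!--#include virtual="/fragments/latest-posts.html" -->
  </body>
</html>
```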
I wrote up notes here https://news.ycombinator.com/item?id=45180555 but this wasn't coordinated. As I mentioned in the first paragraph of the post, I was following a thread on Bluesky where Adrian and Jeff were chatting. I posted the article there. I didn't know Jeff had posted on Hacker News about this already, and I didn't know he would post my piece. I definitely would have done another editing pass if I had known!
Author here. I woke up to a surprising amount of traffic! Some notes based on the discussion.
This wasn't coordinated between Jeff Geerling and myself. However, I did mention the post in the Bluesky thread that Jeff was included in. [0]
I concluded the piece with “[t]his space is ripe for disruption”. That was a really poor choice of words. I've since updated the piece to better match what I was trying to say. Diffs are available. [1]
On YouTube: as I mention in the piece, I think the service is excellent as a consumer, and I pay for Premium.
This piece was mostly written because I've been frustrated that YouTube is effectively the only place for user-submitted video on the internet. I wasn't going to write anything until I saw the video from RedLetterMedia that I mentioned in the post. They have a huge following and were blaming something that might be related? Or might not? It's really hard to tell! I'm not a YouTube creator, but I assume that having the metrics which determine your livelihood shift out from under you must feel awful.
> On YouTube: as I mention in the piece, I think the service is excellent as a consumer, and I pay for Premium
Why? Because the tools that allow them to take almost 50% of the revenue (they say you earn) have low friction?
I would say the opposite. There is no customer service. There are endless legal pit traps that allow larger channels and companies to prey on smaller ones, alongside the AI channels, which lead to the same end. The entire point of the platform is to push as much advertising as possible while mutating a user's search habits. Ironically, this leads to videos becoming borderline useless for many use cases, without taking them off YouTube. This is not a good platform.
I'm sure I feel this way because I don't have a bunch of content I'm afraid of being yanked from the platform. Another "benefit" of having a big YouTube presence is that I would be forever worried about implied retaliation.
I read it as: they're enabled to feel that way, and express it publicly, because their digital life and livelihood are not held hostage to the capricious monopoly.
I did imply that YouTube has monopolized the market, allowing a lower bar of service to become the norm. This latest move seems to make every aspect of YouTube's value proposition worse.
They said an LTT store message directed them to the Brodie Robertson video https://www.youtube.com/watch?v=1hVwUjcsl6s so they did their own investigation which confirmed similar things.
It looks like Youtube might be measuring views differently and perhaps getting rid of unmonetizable views which doesn't impact the number of likes or revenue. I think the annoyance is over the lack of transparency and the power Youtube holds over content creators rather than any immediate concern over loss of income etc.
> rather than any immediate concern over loss of income etc.
I don't know if that's necessarily true. Apparently there's not a significant loss of revenue _from YouTube_ from the reports of these creators. But some sponsor deals might be structured based on CPM, and so a suddenly decreased view count could have a direct revenue impact from those sponsorship deals.
They probably would prefer zero third party sponsors, because adding sponsored content dilutes the value of the on-platform ads. Features like “commonly skipped section” and the timeline view intensity histogram reduce the value of sponsorships.
But if they eliminated sponsors, creator revenue would drop significantly and so would content production.
The nuclear option would be to require all sponsored segments to register with YouTube. That would give YouTube way more control and dramatically reduce creators’ business flexibility (how do you tax a donated 3d printer?).
The WAN Show is very long and waffley and strictly for fans. LTT clips segments of the show, but the relevant segment is still nearly 40 minutes long: https://www.youtube.com/watch?v=9JJ8dur6unc
I host videos on my own server and there's Vimeo and Mux. I guess you're saying it's the free-as-in-beer service that has a social network and recommendation network attached to uploaded videos.
Mux is new to me. Looks like a video-first headless CMS with some neat AI integrations.
Vimeo does have monetization tools [1] but they’re focused on direct sales.
YouTube is just way ahead… even if you ignore the ads platform, a YouTube Premium subscription gives you WAY more ad-free content than a Vimeo purchase or a Floatplane/Nebula subscription.
> This piece was mostly written because I've been frustrated that YouTube is effectively the only place for user submitted video on the internet.
I realized this back in 2009 and tried really hard to start using other platforms, but wound up just not watching YouTube as often instead. I hope this changes. The only true competitors are places like TikTok and Instagram, but they don't feel like a true replacement to those of us who don't want to be tied to "social media". YouTube Shorts, though, is evidence that they do compete with YouTube directly.
I think YouTube even tried to have "IG Stories" at one point iirc.
One of the things that is notable about YouTube is that there was once competition (Vimeo and Dailymotion), but it effectively outdistanced them. A bit like Amazon and eBay. There are also related things semi-competing, like Twitch.tv, of course.
I suspect that the earlier video providers were "bleeding cash" for many years until the process finally reversed for whichever of them turned out to be the winner (again, like Amazon).
I think this long capital investment process is what means that no one wants to or expects to step into the ring with a large, successful player. It took that player a long time to learn to be successful, that player will fight you to keep their relative monopoly and you will have to risk a lot of money.
YouTube content creators are effectively YouTube's suppliers. YouTube is squeezing them, and it's "normal": squeezing suppliers is part of the monopolist's playbook. It's unfortunately convenient for YouTube that people have been willing to make good-quality video for nearly nothing since the tools to do so became cheaply available.
That's why there is "no competition" for Nvidia, Amazon, YouTube, etc. Not that I like the situation, but it's not an "unnatural" one.
Structurally there's only a few ways disruption can happen to a platform that has existing centralized hosting of metadata and centralized hosting of data. Either the disruptor also centralizes both, decentralizes just the data or decentralizes both.
The second isn't viable in most real world cases until something changes the huge expense of decentralized CDN fetching. My gut says that the third would be on the losing side of almost every network effect.
> This piece was mostly written because I've been frustrated that YouTube is effectively the only place for user submitted video on the internet.
Well, technically there's lots of user-submitted video posted to p*rn sites... Apparently people have even started posting educational videos there, like math and neural networks and stuff.
I ditched Google Analytics on my blog because it more than doubled the size of any page load (the “Universal” version loaded a lot of additional JS). This was fine for a while, as I wasn’t posting much. Later, I wanted to write more. This might be shallow, but I found having numbers associated with my writing helped me keep at it. I also found it fun and useful to see what got traction. I tried Plausible for a while, but it was overkill for my needs, and I grew out of the first tier quickly. I then moved to Tinylytics [0], and have been enjoying that.