Don't you find it problematic that the only reason Thiel can organize these lectures is because he is a billionaire? Is he a bona fide scholar on the subject? Would any tenured theology scholar be welcome to hold the same lectures at the Vatican?
I guess that's what you get for electing an American as the Pope. /s
The lectures were not given in the Vatican but somewhere nearby, and if you read the article you would see that all the Catholic institutions named denied involvement with the lectures.
He didn't give the lectures at the Vatican, not even at the Catholic university close to the Vatican, and even the Catholic University of America didn't have anything to do with it.
I am very much not a billionaire, but I can hire a village hall and give a lecture on the Antichrist. I may have to work a little harder to get as much press coverage, but that is not what is stopping me.
A key point that TFA misses (probably for the sake of storytelling) is that, unlike the 2006 iMac the author fondly remembers, the MacBook Neo is not a hand-me-down computer.
It is not the proverbial gift horse. You are paying fresh money for it. So it is only reasonable to have some baseline expectations about getting value out of it.
Also, an important point of the MacBook Neo criticism is that because of its cut-down features, a Neo may never graduate to a "hand-me-down computer", but instead head straight to the e-waste pile.
ARM macs are too new for us to know how the reuse/hand-me-down/legacy support world will shake out for them. There’ll be signs when the first M1 machines get axed but for now, I have no clue.
Apple wants to give me $250 trade-in for my M1 Air with 16GB, but it seems to be worth $500+ on the open market, so yeah, still above hand-me-down territory. It feels as good as the day I bought it, and literally the only reason I'm considering replacing it is that two of our three laptops now have MagSafe and I'd like to start distributing those chargers around the house. So tempting to just swap for a used M2 for a couple hundred dollars, but the chore of moving to a new computer is holding me back.
I think the most significant boundary is given by the question: "is there a plan to support new minor versions of Python?" It sounds like there is not.
There may be non-zero maintenance work happening, but a project that only maintains support for old versions and will never adopt new ones is functionally one that the ecosystem will eventually forget about. Maybe you call that "under active development" but my response is "ok, then I don't care whether it's under active development, I (and 99.9% of other people) should care about whether it's going to support new minor versions."
On the other hand, if you don't support new minor versions on day one but eventually do support them, that's quite different.
More specifically, the Scientific Python community, through SPEC 0[0], recommends that support for Python versions be dropped three years after their release. Python 3.12 was released in October 2023[1], so that community is going to drop support for it in October 2026.
Considering that PyPy is only just now starting to seriously work on supporting 3.12, there's a pretty high chance that it won't even be ready for use before becoming obsolete. At that point it doesn't even matter whether you want to call it "in active development", it is simply too far behind to be relevant.
What's the point of a three year window? It seems like a weird middle ground. Either you are in a position to choose/install your own interpreter and libraries or you are not.
If you can choose your own versions and care at all about new releases, you can track latest and greatest with at the very most a few months of lag. Six months of "support" is luxurious in this scenario.
If you can't choose your own versions, you are most likely stuck on some sort of LTS Linux and will need to make do with what they provide. In that case three years is a cruel joke, because almost everything will be more than three years old when it is first deployed in your environment.
I guess the point of a three year window is that the ecosystem, at some point, can collectively adopt new language features.
When you have some kind of ecosystem rule for that, you can make these upgrade decisions with a lot more confidence.
For example, in my project I have a dependency on zstandard. In 3.14, zstandard support was added to the standard library. With this ecosystem-wide three year support cycle, I can confidently drop the dependency in three years and use the standard library from then on.
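In the meantime, a minimal sketch of how that kind of migration can be bridged, assuming the stdlib home is compression.zstd (per PEP 784) and using the third-party package's one-shot API as the fallback:

    import sys

    # Prefer the standard library module added in Python 3.14 (PEP 784);
    # fall back to the third-party "zstandard" package on older interpreters.
    if sys.version_info >= (3, 14):
        from compression import zstd

        def compress(data: bytes) -> bytes:
            return zstd.compress(data)
    else:
        import zstandard

        def compress(data: bytes) -> bytes:
            return zstandard.ZstdCompressor().compress(data)

Once the three years are up, the else branch (and the dependency) can simply be deleted.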
I feel like it just prevents the ecosystem from going stale. Without such a rule, some important core library keeps supporting a really old version, which in turn stops smaller libraries from using new language features, so as not to exclude a large user base still on an old version.
This is silly; there's no killer feature for scientific computing being added to Python that would make an existing pypy codebase drop that dependency. Getting a codebase validated takes a long time, and dropping something like pypy would require re-validating the entire thing.
The phenomenon you're describing is why COBOL programmers still exist and, simultaneously, why it's increasingly irrelevant to most programmers.
The killer feature is the ecosystem: easily and reliably reusing other libraries and tools that work out of the box with other Python code written in the last few years. There are individually neat features motivating the effort of upgrading a widely used language and engine as well, but that kind of thinking unfortunately misses the forest for the trees.
It's a bit surprising to me, in the age of AI coding, for this to be a problem. Most features seem friendly to bootstrapping with automation (e.g. f-strings that support ' not just "), and it's interesting if any don't fall in that camp. The main discussion still seems to be framed by the 2024 comments, before Claude Code etc. became widespread: https://github.com/orgs/pypy/discussions/5145
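For the curious, a small illustration of the f-string change in question (PEP 701, landed in 3.12):

    # PEP 701 (Python 3.12+) lets an f-string reuse the quote character of
    # the string literal inside its expression; on older interpreters the
    # first print is a SyntaxError at compile time.
    d = {"key": "value"}
    print(f"value is {d["key"]}")   # 3.12+ only
    print(f"value is {d['key']}")   # portable spelling

A parser change like this is exactly the kind of mechanical feature that seems amenable to automated porting.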
The alternative is when you run a script that you last used a few years ago and now need it again for some reason (very common in research) and you might end up spending way too much time making it work with your now upgraded stack.
Sure, you could have pinned dependencies, but that's a lot of overhead for a random script...
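For what it's worth, inline script metadata (PEP 723, understood by tools like uv and pipx) has shrunk that overhead considerably; a minimal sketch with hypothetical pins:

    # /// script
    # requires-python = ">=3.11,<3.13"
    # dependencies = [
    #     "numpy==1.26.4",  # hypothetical pin, for illustration only
    # ]
    # ///
    import numpy as np

    print(np.__version__)

Running it years later with e.g. "uv run script.py" recreates the pinned stack on demand.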
We can play that game - items like GIL-free interpreters and memory views are pretty relevant to folks on the more demanding side of scientific computing. But my point is this is a head-in-sand game when the community vastly outweighs any individual feature. My experience with the scientific computing community is that the non-pypy portion of it is much bigger.
I'm not a pypy maintainer, so my only horse in this race is believing cpython folks benefit from seeing the pypy community prove Things Can Be Better. Part of that means I'd rather pypy live on by avoiding unforced errors.
Unfortunately python does add features in a drip-drip kind of way that makes being behind an experience full of niggles. This is particularly the case for the type annotation system, which is being retrofitted onto a language that obviously didn't have one originally. So it's being added slowly and very conservatively, and there are a lot of limitations and pain points that are gradually being improved (or at least progressed on). The upcoming lazy module loading will also immediately become a sticking point.
They appear to be talking about CPython releases, taking into account how long those versions continue to be supported (in the sense of security updates). That's irrelevant for PyPy, which obviously supports Python versions on a different schedule.
It's not irrelevant, because if SPEC 0 says that a particular Python version is no longer supported, then libraries that follow it won't avoid language or standard library features that that version doesn't have. And then those libraries won't work in the corresponding PyPy version. If there isn't a newer PyPy version to upgrade to, then they won't work in PyPy at all.
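As a hypothetical illustration: once SPEC 0 libraries drop 3.11, nothing stops them from adopting 3.12-only syntax such as PEP 695 type parameters, which a 3.11-level PyPy cannot even parse:

    # PEP 695 type-parameter syntax, new in Python 3.12. On any 3.11
    # interpreter this file is a SyntaxError at import time, before a
    # single line of library code runs.
    class Box[T]:
        def __init__(self, item: T) -> None:
            self.item = item

    print(Box("hello").item)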
> I think the most significant boundary is given by the question: "is there a plan to support new minor versions of Python?" It sounds like there is not.
There is literally a Python 3.12 milestone in the bug tracker.
> my response is "ok, then I don't care whether it's under active development, I (and 99.9% of other people) should care about whether it's going to support new minor versions."
It sounds a lot more like your actual response is "I don't care about pypy".
Which is fine, most people don't to start with. You don't have to pretend just to concern-troll the project.
E.g. the EU enforced mandatory USB-C charging from 2025, and is pushing to end production of combustion-engine cars by 2035. Why not just make ECC RAM mandatory in new computers starting e.g. from 2030?
AMD is already one step away from being compliant, so it's not an outlandish requirement. And regulation would also force Intel to cut their BS or risk losing the market.
OMG no. Politicians have no business making technological decisions. They make it harder to innovate, i.e. to invent the next generation of ECC under a different name.
I would argue that in the present conditions, regulation can actually foster and guide real innovation.
With no regulations in place, companies would rather innovate in profit extraction than in improving technology. And if they have enough market capture, they may actually prefer not to innovate at all, if innovating would hurt profits.
Ethernet was once carried over thick coax at roughly 3 megabits per second. By the time it was standardized as IEEE 802.3 it was at 10 megabits, still over thick coax (10BASE5); thin coax (10BASE2) followed in 802.3a. 802.3e took a step back in speed to 1 megabit, but over phone-type wire. 10BASE-T, Ethernet over twisted pair at 10 megabits per second, didn't arrive until 802.3i in 1990. Then 10BASE-F (fiber) in 1992.
Then there are various speeds: 100 M, 1000 M / 1 G, 2.5 G, 5 G, 10 G, 25 G, 40 G, 50 G, 100 G, 200 G, and 400 G. The media have included twisted pair, single-mode fiber, multimode fiber, twinax cable, Ethernet over backplanes, passive optical networks (EPON), and DWDM systems.
There have also been multiple versions of power over Ethernet using twisted pair cable. Some run over one pair, some over two, and some over the data pairs, while others use dedicated pairs for power.
There are also standards for negotiation among multiple of these speeds. There have been improvements to timestamping. There have been standards to bring newer speeds to fewer pairs or current speeds over longer distances.
There’s currently work on 1.6 Tbps links up to 30 or possibly 50 meters. There has been work in the past on using plastic optical fibers instead of glass ones. Oh, and there are standards specific to automotive Ethernet.
Ethernet itself, the name and the first implementation of a network with that name, dates from 1972 and 1973. It was on the market in 1980 and first standardized in 1983 as ECMA-82.
Ethernet, in its different configurations, supports direct host-to-host connections, daisy chains, hubbed networks, switched networks, tunnels over transport protocols like TCP or UDP, bridges over technologies like MoCA or WiFi, and even being tunneled across the open Internet.
All of these are Ethernet. They have a common lineage. They are all derived from the same origin. Token Ring, FDDI, ATM, and SONET have all been more than one thing over time too. So has WiFi. 802.11a is very little like 802.11be, but those are also similar enough to carry the same family name.
The IEEE 802.3 series has a lot of history buried in those documents.
ECC has only 10-15% more transistor count: standard ECC modules store 8 extra check bits per 64-bit word, i.e. 12.5% more DRAM. So you're only making one component of the computer ~15% more expensive. This should have been a no-brainer, at least before the recent DRAM price hikes.
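For reference, the memory overhead falls out of standard SECDED Hamming-code math (a sketch of the textbook calculation, not any particular vendor's scheme):

    # Check bits for SECDED (single-error-correct, double-error-detect)
    # over n data bits: smallest k with 2^k >= n + k + 1, plus one extra
    # overall-parity bit to upgrade SEC to SECDED.
    def secded_check_bits(data_bits: int) -> int:
        k = 0
        while (1 << k) < data_bits + k + 1:
            k += 1
        return k + 1

    print(secded_check_bits(64))  # -> 8 check bits per 64 data bits, i.e. 12.5%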
Also, while most computers may not run workloads where cosmic-ray bit flips are a serious risk factor, they're still susceptible to rowhammer-style attacks, which ECC memory makes much harder.
Finally, if you account for the current performance loss due to rowhammer counter-measures, the extra cost of ECC memory is partially offset.
It's still weird. Why not just use an effing install.sh script like everybody else? And don't tell me "security". Because after installation you will be running an unknown binary anyway.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
This reads like his objection is not to "autocratic" but to "adversaries". Autocratic friends & family are cool with him. A clear wink at a certain administration with autocratic tendencies.
Corporate statements like these get written very carefully. You can be certain that not a single word in these sentences has been placed there without considering what they do imply and what they omit.
I thought this was ambiguously worded in a beautiful way. At the moment, one could say that some autocratic adversaries of the United States and other democracies currently lead the government of the United States.
The US is already autocratic when it comes to people in many other countries, where the US government didn't like their democratically elected governments and decided to pick a new one for them instead.
China has been competing with India for decades for the most-polluted cities crown, and ranks only slightly below the US and Russia in CO2 emissions per capita. It's also the only large country whose emissions have grown over the last decade. Where does the idea come from that China somehow puts less pressure on the environment? Less than what, exactly?
By "slightly ranks below" you mean ~50-60% per capita.
>China somehow puts less pressure on the environment
PRC renewables at staggering scale.
Last year the PRC brrrted out enough solar panels that their lifetime output is equivalent to MORE than the annual global consumption of oil. I.e., the world uses about 40 billion barrels of oil per year, and the PRC's annual solar production will sink about 40 billion barrels' worth of oil emissions over its lifetime. That's a fucking obscene amount of carbon sink, and frankly, at full production, annual PRC solar + wind could on paper displace 100% of oil, 100% of LNG, and a good % of coal (again, at annual utilization) once storage is figured out.
This, BTW, functionally makes the PRC emissions-negative by a massive margin, arguably the only country that is.
It's only braindead emission accounting rules that say the PRC should be penalized for manufacturing renewables while the buyers get credited, AND fossil producers like the US aren't penalized for extraction, which the US has only increased.
Also, unlike the US and Russia, China has the green transition as official policy. There are additional savings from total electrification. (I think they also care more about the long term; being closer to the equator and the sea, they better understand the consequences of global warming.)
western liberal democracies tend to use "autocratic" as an epithet (though, i guess, there are fewer countries it's used against for which it's false now than ~50 years ago). as for the first sentence, "the opposite" of western liberal ideas will yield 10 answers from 9 people :-)
That makes your argument a no-true-Scotsman, though. Western liberal ideals are the supreme ones; you're just not doing them right!
Much has been said about the purported superiority of western values, but as we've all seen the USA was very quick to get rid of even the slightest notion of these values when Trump promised them some money and a dominant vibe.
The old world is dying, and the new world struggles to be born: now is the time of monsters.
No, my argument was that western liberal ideals are good. The commenter chimed in that some states which have historically held the mantle of western liberalism are losing their grip on it.
There's nothing contradictory or circular in both of those claims.
If someone were to present to me a better caretaker of western liberal ideals than the US and ask whether I would prefer AI empower them, the answer would be: yes.
And in fact, that is precisely what I am arguing. It is good that Anthropic, which so far has demonstrated closer adherence to western liberal ideals than the current US government, is pushing back on the current US government.
I also think it is good that Anthropic stands in opposition to China, which also does not embody western liberal ideals.
France has already developed its own (recently posted here) [1][2].
Also, the "there's no drop-in replacement" line is just making up excuses for not acting. Yes, you will not get 100% of the Office 365 features out of the box. There will be some friction.
It's simply ridiculous to see the EU bureaucracy preparing to e.g. ban Russian oil [3], making life more expensive for everyone, while balking at being forced to switch their stupid word processor.
Considering that most normal office users barely use Word features beyond changing fonts etc., I doubt that will be a big issue anyway.
Not sure if you've worked in an office recently, but on google workspace I (we) use very regularly:
- Group editing - this one's hard to get right
- Reviewing Tools
- Automated document generation
- Embedding of data-backed images from 3rd party tools
Looking at my wife who works in government, they use it even more heavily, with a lot of complicated formatting, numbering, standards etc going into each document, plus OneDrive collaborative features on top of that.
I suspect office-user people are where most of the features get used. Agreed, most people only use 15% of the features, but which 15% that is likely changes quickly person to person.
It doesn't need to be "most". "Some" or even "a few" can be enough to make a hell of a mess if those few have created documents that are key to the business in one way or another (proposals, end-user documentation, etc). And there are the other components to the suite like Powerpoint, Excel, and Project to consider.
So then act now, because the best time to act was yesterday, and the longer you wait the worse the mess and pain becomes. Not acting at all is not an option.
What France is doing is great but, as you’ll see discussed in that HN comment section, it is hardly an office suite. It’s not a full replacement by a long shot. I hope it will be one day though!
We all acknowledge the AI slop posts. The question is what fraction of the comments under those posts is also AI slop. And how long until we see AI-targeted ads, manifesting the Dead Internet Theory in full.