Just this week it also dawned on me how impractical the large corners are, after I failed twice in a row to grab the corner of a window. Tahoe is absolute amateur hour.
I have been dismissed for saying this and using self driving cars as the example. Getting from 95 percent there to 100 percent with AI is going to be nearly impossible. Not impossible, but the time and resource allocation to get even one use case, such as self driving cars, to a point of usability is going to cost trillions and take decades. For anything we might want to automate with AI, the question needs to be asked: is automating this task worth billions, if not trillions, of dollars and decades of time?
AI makes a really big first impression. It looks good at first glance, especially if you aren't good at or knowledgeable about what you are asking it to do, but as soon as you know anything about the task you realize the output is wrong or bad, sometimes incredibly so.
I don’t want this to be dismissive of the technology. It is already having an impact and will continue to do so, but expectations and investment need to be tempered.
18A supposedly has some advantages in power efficiency and a few other areas compared to TSMC's approach. Ultimately, TSMC doesn't have a 2nm product yet, so it is a pretty big deal that Intel is competitive again with TSMC's latest. Samsung is incredibly far behind at this point.
TSMC is not trailing. They’re so far ahead that Intel’s 18A is equal to their N5 family in density.
Therefore, N2 is 2 generations ahead of 18A by definition.
The power advantage you're referring to is backside power delivery. It is a long-term need, but it doesn't bring as much of a power benefit as you think. TSMC's customers don't appear to demand the feature that highly, since it's been pushed back to A16. The design changes don't yet seem worth it to designers.
By measure of density, yes, but by performance per watt 18A is likely superior to N3. So I don't think it is fair to say they are cleanly two generations ahead; more like 1 or 1.5 generations now. Yes, N2 / N3 are overall more mature and well-rounded processes at this point in time. But this is a big step for Intel, and if they can prove out 18A they could start bleeding a bit of market share away from TSMC.
They did reshuffle their board a bit after firing Pat to bring in some people with industry and domain expertise and not just academics / outside industry folks.
Without hyperbole, Liquid Glass on Mac OS is visually the worst commercial desktop UX I have ever had the displeasure of using. It is amateurish, and frankly it was ill advised to even try to unify the aesthetic so universally across devices. I think much of what is on the phone works fine. There are some pain points and some bits that are visually awkward, but generally it works and feels new and fresh. On the Mac, though, it is as if no one really cared. And that reflects really poorly on where Apple is at, because if nothing else, Apple seemed like the company that always really cared about the user experience.
There are some things that are nice. The dock looks nice. The transparent menu bar is nice enough too, and there is a toggle to turn it off if it doesn't work for you. Spotlight looks fine. But the rest is so bad that I just cannot fathom how someone at Apple did not stop it before release. I would be throwing a fit to stop it from being released if I were at Apple and had any sway at all. I assume the executive team was all looking at it and using it before release, so how did this happen? The new sidebar and the new toolbars are abominations. I cringe every time I have to use the Finder; it is just a blob of various shades of white or, if you prefer dark mode, grey.
My hope is that, if nothing else, they roll back the sidebar and toolbar changes or do a complete rethink of how they are implemented. If they rolled back the extra-rounded corners I wouldn't complain either.
Even on mobile it took a few iterations from when the design was first introduced for it to be usable. Not good mind you, just usable.
Even Apple's own marketing material had screenshots where text was near impossible to read, even for someone with good eyesight: grey text on top of highly transparent glass... what were they thinking!?
Keyword: "looks". In terms of behavior, there's a ton of delay introduced, and results change under your finger as you're selecting them, causing you to tap the wrong thing.
I liked a lot of Windows 8, but Windows perpetually suffers from needing to support older windowing systems, or some corporate use case from the early 90s that carries too much money to ever say no to implementing.
Windows 11 is, I think, worse than MacOS these days, half for still dragging the past along with it, and half for introducing a second start menu just for ads.
I think Windows' greatest strength is also its greatest weakness: backwards compatibility. MacOS's greatest weakness is its UX, which has slowly been going downhill for the past few years and took a nosedive with this release. It is a wild reversal from the mid-2000s, when Apple's UX was so far superior to anything else that it felt revelatory to switch from Windows XP to OS X.
Oh geez, I forgot about Windows 8. Visually it looked nice enough, though. Once you got out of the insane touch first overlay it was fine, but I reinstalled Windows 7 so fast I never had to spend much time with it. I guess by that measure Windows 8 was worse.
I do think Linux is accessible to many more people, but I would not say it is ready for the masses. The terminal is going to be a non-starter for your average computer user.
But, with that said, I started seriously using Linux for the first time in 2025. I bounce between Debian, Windows 11, and MacOS, and Debian is probably the most refreshing to use. I don't find Windows 11 as oppressive as others seem to, but I have turned off most of what people cite as the issues. I find MacOS's Liquid Glass redesign to be more aggressively bad.
>I don't find Windows 11 as oppressive as others seem to, but I have turned off most of what people cite as the issues.
So you debloated your Windows, but with every update you have to spin your wheels and try to remove whatever crap they put back in. And at any time there's the possibility you can no longer remove x or y. The vast majority don't have the energy to play this game, or don't know how to.
I agree, it is bad and I don't like it, but I think it is bad in a way most users won't care about. I have not really considered a version of Windows to be good since... Windows 2000... maybe 3.11. They have all had major issues, so I just kind of shrug them off when I use Windows. The enshittification of MacOS is relatively new and so still stings a bit.
I think where Microsoft is playing with fire is that while most users will not care about some of these changes, power users do. And the 5% of power users ultimately make the decisions and provide the recommendations for the other 95%. With so many apps and SaaS services going web- or web-app-only, there will be less and less reason to stick with Windows, and that is where Microsoft will start to lose control.
It's quite funny: I was in a very similar thread here a few years ago where it was flat-out stated that everything worked on Linux, just like many people state now. When I commented that, for example, DPI scaling was still broken to an unusable level, I got a ton of unhelpful comments, either recommending basic things or just "it works for me." Now most say that everything started to work in the past year, and what a great improvement it was. I know these are probably different people, but it's funny how the general sentiment is always "everything works now, but it was shit a year ago."
Who installed Linux and did the initial setup? And then I think there is a class of user that is savvy enough to, say, update their graphics drivers but not willing to use a terminal, and that is before you get into the mess that is Nvidia on Linux.
I agree: under a managed setup where a user is only really going to use a web browser and a few apps, Linux is just fine.
I installed it, but she could have done the same: insert the USB stick, run the graphical installer, remove it, boot into the new OS. That's all I did on this machine, our LR TV PC, MVR PC, DR PC (for pleasant visual videos on YouTube), her PC, etc. Some are Dell, some are Lenovo; my last PC was an HP. I personally have used Nvidia on multiple machines and models over the past two decades. On mid-2000s machines I'd sometimes have to run the driver installer .sh file I downloaded from their site. For at least the past 10 years it gets installed automatically; I didn't have to do anything.
Glad we agree on casual users. She uses Chrome and only 2 apps, same as when she was on Windows. Would you agree that probably most of the world is made up of casual users?
"The terminal is going to be a non-starter for your average computer user."
My wife has no idea what a terminal is and does not care - she rocks Arch and has no idea what that means. The people that attend my uncle's PC clinic to have their "Win 10 that won't run Win 11" converted to Linux don't care either.
My Dad's PC will shortly be running Linux after I've taken him through MS Office -> LibreOffice + Scribus + (Evolution||Thunderbird).
I started off my early IT career as a trainer - I once did a day of DTP with QuarkXPress where I was given the floppies the night before. When I hear that Linux (actually LO etc) is incapable of doing whatever, I soon find that a deep discussion about what constitutes "incapable" generally turns into a training session.
For example, I often hear about documents that apparently LO can't handle. That normally ends up with me teaching (proselytizing!) about how to use styles properly, or even the real basics such as the four tab stop types (L/R/C/decimal). Then we might segue into spreadsheets ... ahh, you'll want an array formula there ... "a what?" and off we go again.
Now, I have wandered off track here somewhat but I'm noting the other "not ready" convo that will often happen after we have covered how to find your mouse pointer or why Windows seems to still have two Control Panels and at least three half arsed IP stacks.
I do actually have a fondness for Windows, having used it since v2.0 at school in 1986ish. That fondness is rapidly going west along with VMware (consultant for 25 years).
I fucking hate being taken for a ride and basically being abused. Today, my company received an email from Broadcom telling us that we are no longer welcome as a reseller/unpaid support org. Luckily we started migrating our customers away from VMware some time ago and only the ones with the deepest pockets and greatest inertia remain. The rest are rocking Proxmox and I'm a much happier consultant too.
One day MS might tell my company that they have decided to dispense with our reseller/unpaid support services too, once they are sure that everyone is tucked up with a subscription.
Well, they can piss off too. I am capable of running email systems on prem (and do) even though I have migrated my firm from on prem Exchange to M365. I still point MX records to our place (Exim + rspamd) and run an imapd for some mailboxes. A calendar app is all that is missing.
What I hope I am getting across is that dumping Windows and co is quite a broad subject.
I think that your choice of Deborah and Ian's (bless!) distro is a really good solid starter for 10 but to be honest after a while you should be able to run any variety of Linux.
You should be able to install multiple desktop environments, e.g. GNOME and KDE Plasma and all the rest, at the same time and be able to select which session to use from your display manager (e.g. SDDM).
I have almost certainly overstayed my welcome in this thread, but before I go I will suggest that anyone who calls themselves an IT (anything) should at least have a go at all available systems. Nowadays OS/2 Warp on something like 25 floppies is not a barrier to play (spin up a VM).
It's funny, because it's Linux (and especially KDE) that bridged that gap long ago. I told my dad to just open up SFTP and edit the files. He's on Windows, of course, so there's some convoluted process. I totally forgot he can't literally just put in the URL and edit the file in Kate!
Some industries are of national security or other strategic value, so protecting them even if that means some stagnation is desirable over the offshoring of said industry.
The question is: how do you define "national security" and "other strategic value"? At the end of the day both really mean economic interest, especially in the case of the US.
So if someone says "national security" is above the economic interest of the US, I would say these people mean _their_ economic interest is above the economic interest of the US, and they use both terms as cover.
> Insofar as the country being conquered and Americans being slaughtered wholesale would be against our economic interests lol
> There are clear national security reasons for the government to prop up shipbuilding and semiconductors.
Are you saying countries without shipbuilding facilities or not producing semiconductors are being conquered and their citizens being slaughtered?
I'd say that is fearmongering by the people doing business in the name of "national security".
> Are you saying countries without shipbuilding facilities or not producing semiconductors are being conquered and their citizens being slaughtered?
Yes, that is a clear risk. For most of human history, powerful leaders have unleashed violence on their neighbors to increase their wealth and prestige. For about 70 years the Cold War balance prevented very catastrophic wars between powerful nations, but we now seem to be having an atavistic throwback to powerful nations being led by expansionist leaders. You either create your own manufacturing capacity or you are at the mercy of others.
You can call it fearmongering but I can point to the whole of human history and tell you that not only has it happened, at a certain point it is inevitable. I can point at Ukraine, right now, as an example of what happens when one country appears much weaker than an aggressive neighbor.
The United States is the greatest power the world has ever seen. While the oceans protect us, the truth is that even the White House was once burned down in a war.
The economic interest is the US's ability to convert those shipyards to military shipyards as rapidly as possible during a large-scale prolonged war. The US did not make (relatively) many ships before WW2 and then, during WW2, was briefly the largest shipbuilder in the world.
> The economic interest is the US's ability to convert those shipyards to military shipyards as rapidly as possible during a large-scale prolonged war.
Nah, that doesn't add up. The US needs _ships_ and SOTA military equipment to make sure that any military conflict is as short as possible (i.e., the US wins). Losing money on unused production capability does not make sense, because in the case of a prolonged conflict there is time to build that capability (as happened during WWII).
In reality, what you call a "prolonged military conflict" is nothing more than normal international competition. One could even argue the US has been in a prolonged military conflict since WWII.
In which case, making rational decisions based on hard economic criteria (i.e., not losing money) is the key to success.
Films rely on 24 fps, or rather on low motion resolution, to help suspend disbelief. There are things the viewer is not meant to see, or at least not see clearly. Yes, part of that specific framerate is nostalgia and what the audience expects a movie to look like, but it serves a purpose.
Higher frame rates are superior for shooting reality. But for something fictional, the lower frame rate helps the audience suspend their disbelief.
I think that whole complaint is just people getting used to how it is. Games are simply worse at lower framerates because they are interactive, and because we never had a 24 fps era; games only had lower framerates when the studio couldn't get them to run better on the given hardware.
With one caveat: in some games that use animation-inspired aesthetics, the animation itself is not smoothed out but basically runs at a lower framerate (see the Guilty Gear games), while everything else (camera movement, some effects) is silky smooth and you still get quick reaction times to your inputs.
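Roughly, that decoupling just means the character pose advances on a fixed, slower clock while input and camera are handled every rendered frame. A minimal sketch in Python of that idea (hypothetical numbers and names, not taken from Guilty Gear or any real engine):

    # Decoupled update rates (illustrative values only): the camera and
    # input logic run every frame of a 60 fps loop, while the character
    # pose only advances at a stylized 12 fps.

    ANIM_FPS = 12
    ANIM_STEP = 1.0 / ANIM_FPS

    def run(frames=120, frame_dt=1.0 / 60):
        anim_accum = 0.0   # time banked toward the next animation pose
        anim_frame = 0     # which character pose is currently shown
        camera_x = 0.0

        for i in range(frames):
            # Camera (and input handling) update every rendered frame,
            # so panning stays smooth and controls stay responsive.
            camera_x += 100.0 * frame_dt   # pan at 100 units/second

            # The character pose only steps forward at ANIM_FPS, giving
            # the choppy "animated on threes" look without input latency.
            anim_accum += frame_dt
            while anim_accum >= ANIM_STEP:
                anim_accum -= ANIM_STEP
                anim_frame += 1

            print(f"frame {i:3d}  camera_x={camera_x:7.2f}  pose={anim_frame}")

    if __name__ == "__main__":
        run()

Running it just prints the per-frame state, but the effect is visible in the output: camera_x changes every frame while the pose only changes every fifth frame.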
I'm not sure I buy that it helps the audience suspend their disbelief.
If it did horror films would be filmed at higher frame rates for extra scares.
Humans have a long history of suspending disbelief in both oral and written lore. I think that fps may be functionally equivalent to the Santa Claus stories: fun for the kids, but the adults need to pick up the bill.
Suspend disbelief in the sense that you can't see that the punch never actually landed, or that the monster that ran across the screen was actually a man in a rubber suit. When something happens fast at 24 fps it naturally blurs. It is why shaky-cam, low-resolution footage can be scary. Direct-to-VHS horror movies could be scary because you could only barely see what was happening, letting your brain fill in the gaps. At full resolution, captured with a high-speed camera, everything looks a bit silly / fake.