
Maybe, but there is also the potential for survivorship bias being a factor here too. The chance that a specific person with no football skills can throw a football 50 yards into a trash can is pretty low. But if you gather a stadium full of unskilled random people, chances are good that one of them will be able to do so, even multiple times. But you'd be wasting your time trying to discern what special football skill that person has.
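The stadium intuition is easy to check with a quick simulation. The 1% per-throw success rate below is a made-up number purely for illustration:

```python
import random

def lucky_throws(people, attempts, p_success=0.01, seed=0):
    """Count how many unskilled throwers succeed at least twice
    purely by chance. p_success is a hypothetical per-throw chance
    of landing the 50-yard trash-can shot."""
    rng = random.Random(seed)
    repeat_winners = 0
    for _ in range(people):
        hits = sum(rng.random() < p_success for _ in range(attempts))
        if hits >= 2:
            repeat_winners += 1
    return repeat_winners

# A "stadium" of 50,000 people, 10 throws each: even at a 1% per-throw
# success rate, a couple hundred people land the shot at least twice
# by luck alone -- none of them has any special football skill.
print(lucky_throws(50_000, 10))
```

The repeat winners look like they have a repeatable skill, which is exactly the trap when you only study the survivors.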

I'm not saying this means successful CEOs don't have any relevant skills contributing to their success, but it's worth considering that for the most part we're only seeing the successful ones. It's hard to say how many would-be billionaire CEOs are out there with similar skills to someone like Elon Musk who just happened to get unlucky.


One aspect of this I don't see mentioned all that often is that AI is competing for computing resources that are at least somewhat limited in the short to medium term while being pretty inefficient at utilizing those resources compared to alternatives.

A mid-range gaming PC can display impressively realistic graphics at high resolutions and framerates while also being useful for a variety of other computationally intensive tasks like video encoding, compiling large code bases, etc. Or it can be used to host deeply mediocre local LLMs.

The actual frontier models from companies like Anthropic or OpenAI require vastly more expensive computing resources, resources that could otherwise be used for potentially more useful computation that isn't so inefficient. Think of all the computing power going into frontier models but applied to weather forecasting or cancer research or whatever.

Of course it's not either or, but as this article and similar ones point out, chips and other computing resources aren't infinite and AI for now at least has a seemingly insatiable appetite and enough dollars to starve other uses.


(Sharing a comment I recently posted on a similar thread..)

For the last 2+ years, I've noticed a worrying trend: typical budget PCs (especially laptops) are being sold at higher prices with less RAM (just 8GB) and lower-end CPUs (and no dedicated GPUs). 16GB of RAM for PCs and 8GB for mobiles should have become the industry baseline years ago, but instead it is as if the computing/IT industry is regressing.

New budget mobiles are being launched with lower-end specs as well (e.g., new phones with Snapdragon Gen 6 chips and UFS 2.2 storage). Meanwhile, features that used to be offered in budget phones, e.g., wireless charging, NFC, and UFS 3.1, have quietly been moved to the premium segment.

Meanwhile, OSes and software are becoming ever more complex and bloated, more unstable (bugs), and more insecure (security loopholes ready for exploits).

It is as if the industry has decided to focus on AI and nothing else.

And this will be a huge setback for humanity, especially the students and scientific communities.


I don't think we need an "industry mandate". I suspect electronics specs are dictated by a fairly efficient market, and the consumer is being squeezed in many ways. Device manufacturers are just meeting the price points consumers will pay and dropping the expensive things that are less well understood, like dedicated GPUs and extra RAM.

> dropping the expensive things that are less understood

If "old" devices outperform new devices, consumers will gain new understanding from efficient market feedback, influencing purchase decisions and demand for "new" devices.


RAMageddon is here: https://www.tomsguide.com/news/live/ram-price-crisis-updates

Summary:

* Massive spikes: Consumer RAM prices have skyrocketed due to tight supply. Major PC companies have issued warnings of price hikes, with CyberPowerPC stating: "global memory (RAM) prices have surged by 500% and SSD prices have risen by 100%."

* All for AI: The push for increased cloud computing, as seen in the likes of ChatGPT and Gemini, means more data centers are needed, which in turn requires High Bandwidth Memory (HBM). Manufacturers like SK Hynix and Micron are now shifting priorities to make HBM instead of PC RAM.

* Limited supply: Companies are buying up all the remaining stock of standard DRAM chips, leaving crumbs for the consumer market and driving price hikes for what little supply is left.

Good luck expecting "value for money" from this "efficient" market.


Define "efficient" in this context.

The point of the gold rush now is that a large number of investors think AI will be more efficient at converting GPU and RAM cycles into money than games or other applications will. Hence they are willing to pay more for the same hardware.


Having tried both Zerotier and Tailscale, I found Tailscale to be a significant improvement. Tailscale uses Wireguard as the base encrypted protocol instead of a semi-homebrew protocol Zerotier came up with that notably lacks things like ephemeral keys/perfect forward secrecy. Tailscale also has a faster pace of improvement and is responsive to customer asks, regularly rolling out new features, improving performance, or fixing bugs. Zerotier by contrast seems to move slower, regularly promising improvements for years that never materialize (e.g. fixing the lack of PFS).

My last gripe is more niche, but I found Zerotier's single threaded performance to be abysmal, making it basically unusable for small single core VMs. My searching at the time suggested this was a known bug, but not one that was fixed before I switched to Tailscale. Not impossible to work around, but also the kind of issue that didn't endear the product to me or inspire confidence.


It's not really "basically wireguard" and you don't have to pay for it for personal use. Wireguard is indeed pretty easy to set up, but basic Wireguard doesn't get you the two most significant features of Tailscale, mesh connections and access controls.

Tailscale does use Wireguard, but it establishes connections between each of your devices, in many cases these will be direct connections even if the devices in question are behind NAT or firewalls. Not every use-case benefits from this over a more traditional hub and spoke VPN model, but for those that do, it would be much more complicated to roll your own version of this. The built-in access controls are also something you could roll your own version of on top of Wireguard, but certainly not as easily as Tailscale makes it.

There's also a third major "feature" that is really just an amalgamation of everything Tailscale builds in and how it's intended to be used, which is that your network works and looks the same even as devices move around if you fully set up your environment to be Tailscale based. Again not everyone needs this, but it can be useful for those that do, and it's not something you get from vanilla Wireguard without additional effort.
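To put a number on how much manual work Tailscale's mesh automation saves, here's a rough count of what a hand-rolled full WireGuard mesh requires, assuming (as plain WireGuard does) that every tunnel has to be configured on both of its endpoints:

```python
def mesh_tunnels(n):
    """Number of point-to-point tunnels in a full mesh of n devices."""
    return n * (n - 1) // 2

def peer_entries(n):
    """Peer entries to maintain by hand: every tunnel appears in both
    endpoints' configs."""
    return n * (n - 1)

# 5 devices is manageable; 20 devices means 190 tunnels and 380
# hand-maintained peer entries, before NAT traversal even enters
# the picture.
for n in (5, 10, 20):
    print(n, mesh_tunnels(n), peer_entries(n))
```

The quadratic growth is why "just use vanilla Wireguard" advice tends to come from people with two or three devices.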


I guess I'm still not following. Is there an example thing that you can do with Tailscale that you can't do with Wireguard? "Establishes connections between each of your devices" is pretty vague. The Internet can already do that.

I install Tailscale on my laptop. I then install Tailscale on a desktop PC I have stashed in a closet at my parents' place. If they are both logged in to the same tailnet, I can access that desktop PC from my home without any additional network config (no port forwarding on my parents' router, no UPnP, etc.).

I like to think of it as a software defined LAN.

Wireguard is just the transport protocol but all the device management and clever firewall/NAT traversal stuff is the real special sauce.


> software defined LAN

That’s such an elegant way of putting it that they should use it in their marketing.


I can guide any tech-illiterate relative to install Tailscale and connect it over the phone.

1) Download Tailscale
2) Install it
3) Log in with a Google account

done. It doesn't matter if they're on Windows or MacOS.


You can run two nodes both behind restrictive full cone NATs and have them establish an encrypted connection between each other. You can configure your devices to act as exit nodes, allowing other devices on your "tailnet" to use them to reach the internet. You can set up ACLs and share access to specific devices and ports with other users. If you pay a bit more, you can also use any Mullvad VPN node as an exit point.

Tailscale is "just" managed Wireguard, with some very smart network people doing everything they can to make it go point-to-point even with bad NATs, and offering a free fallback trustless relay layer (called DERP) that will act as a transit provider of last resort.
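As a rough illustration of the ACL side, a tailnet policy file is a JSON-style document (Tailscale uses HuJSON, i.e. JSON with comments; the group, users, and tag below are invented examples, so check the official ACL docs for exact syntax):

```json
{
  "groups": {
    "group:family": ["alice@example.com", "bob@example.com"]
  },
  "acls": [
    {"action": "accept", "src": ["group:family"], "dst": ["tag:media:443"]}
  ]
}
```

Anything not explicitly accepted is denied, so a rule like this shares only port 443 of the tagged media box with the family group and nothing else on the tailnet.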


Obviously there are confounding variables besides vaccination status, but I find it pretty compelling that the decrease in COVID mortality among the vaccinated group was significantly larger than the decrease in all-cause mortality of that group. This suggests whatever the difference was between the two groups, besides vaccination, either had a much larger impact on COVID than other causes of death or that the vaccine had some positive impact.
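To make the shape of that comparison concrete, here it is with entirely made-up numbers (these are illustrative only, not the study's figures):

```python
def pct_reduction(unvaxed_rate, vaxed_rate):
    """Relative reduction in a mortality rate between two groups."""
    return 1 - vaxed_rate / unvaxed_rate

# Hypothetical deaths per 100k, NOT taken from the study.
covid_reduction = pct_reduction(unvaxed_rate=200, vaxed_rate=40)       # ~80%
all_cause_reduction = pct_reduction(unvaxed_rate=900, vaxed_rate=630)  # ~30%

# If generic confounders (overall health, caution, access to care)
# explained everything, the two reductions should be similar. A much
# larger drop in COVID deaths specifically points at something
# COVID-specific, like the vaccine or COVID-targeted precautions.
print(covid_reduction, all_cause_reduction)
```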

One example of the former explanation I could imagine is that people who got vaccinated against COVID were probably also more likely to take other preventative measures, like wearing a mask or avoiding larger crowds of people. Those precautions would be more likely to be effective against a contagious disease like COVID but less likely to protect them against some other causes of death like heart disease.

I'm not sure how likely I find that as an explanation compared to the alternative that the vaccines provide at least some level of protection. My observation was that widespread measures specifically meant to defend against COVID, like masking and social distancing, largely went away well before the end of the time period covered by this study, at least in the US.

Amusingly, I suspect the anti-vax contingent would likely be bothered by data suggesting anything the COVID vaccinated group was doing differently protected against COVID, since their position seems to largely be that not only is the COVID vaccine useless, but so are any other measures meant to reduce the spread.


> I would like to see the metrics on how much time and resources are wasted babysitting all this automation vs. going in and updating a certificate manually once a year and not having to worry the automation will fail in a week.

I would also like to see those metrics, because I strongly suspect the cost is dramatically in favor of automation, especially when you consider the improved security posture you get from short lived certificates. My personal experience with Let's Encrypt has been that the only hassle is in systems that don't implement automation, so I get the worst of both worlds having to effectively manually renew short lived certificates every 90 days.

CRL based revocation is largely a failure, both because implementation is a problem and because knowing when to revoke a certificate is hard. So the only way to achieve the goal of security in WebPKI that's resilient to private key compromise is short-lived certificates. And the only way to implement those is with automated renewal.
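The security case is mostly about the worst-case exposure window after a key compromise, which a bit of arithmetic makes concrete (the lifetimes below are typical values, not anything mandated):

```python
def max_exposure_days(cert_lifetime_days):
    """Without working revocation, a stolen private key stays usable
    until the certificate expires, so the worst case (compromise right
    after issuance) is the full certificate lifetime."""
    return cert_lifetime_days

# A manually renewed 1-year certificate vs an automated 90-day one:
# the attacker's worst-case window shrinks by roughly 4x, and renewal
# happens ~4 times a year instead of once, which only automation makes
# tolerable.
print(max_exposure_days(365), max_exposure_days(90), 365 // 90)
```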


I'm glad there are free alternatives to Let's Encrypt, but a major PKI provider also being by far the largest browser provider feels like a disaster waiting to happen. The check on PKI providers doing the right thing is browsers including them (or not) in their trust stores. Having both sides of that equation being significantly controlled by the same entity fundamentally breaks the trust model of WebPKI. I'm sure Google has the best of intentions, but I don't see how that's in any way a workable approach to PKI security.


In addition to energy, the biggest reason I no longer use old desktops as servers is the space they take up. If you live in an apartment or condo and don't have extra rooms, even having a desktop tower sitting in a corner somewhere is a lot less visually appealing than a small NAS or mini-PC you can stick on a shelf somewhere.


Tastes differ. I personally find the 36U IBM rack in the corner of my apartment more visually appealing than some of my other furniture, and consolidating equipment in a rack with built-in PDUs makes it easier to run everything through a single UPS in an apartment where rewiring is not an option.


Some hybrid cars almost work this way. I know at least Honda's hybrids basically do what you're suggesting, but at constant highway speeds they directly couple the engine to the drive wheels. They presumably could use electric motors powered by the engine in all driving scenarios, but I believe direct engine drive at highway speeds is more energy efficient.

This is probably why most hybrid systems I'm aware of don't use only electric motors to power the drive wheels. The idea sounds cool, and I've also wondered why you can't buy something like that in the US (I think it exists elsewhere), but the efficiency math doesn't really work out. Even the engineering-complexity argument is weaker than it looks: because the engine only directly drives the wheels at certain speeds, these hybrids already get by without a lot of the mechanical drivetrain components like a conventional multi-gear transmission.
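A rough sketch of why direct coupling can win at cruise: in a series path, every watt passes through a generator, power electronics, and a motor, and those losses multiply. The per-stage efficiencies below are plausible round numbers I've assumed, not Honda's actual figures:

```python
from math import prod

def chain_efficiency(*stages):
    """Overall efficiency of energy passing through several stages in series."""
    return prod(stages)

# Series-hybrid path at highway cruise: engine -> generator -> inverter -> motor.
series = chain_efficiency(0.95, 0.97, 0.95)   # ~0.875 of engine output reaches wheels

# Direct mechanical path: engine -> lockup clutch/gearset -> wheels.
direct = chain_efficiency(0.97)               # ~0.97 of engine output reaches wheels

print(round(series, 3), round(direct, 3))
```

Under these assumptions, roughly 10% of the engine's output is lost to the electrical round trip, which is why a direct-drive clutch at steady speed is worth the extra hardware.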


Not to be nitpicky, but that's only really true if you're driving down a perfectly flat, straight highway at a constant speed. Any hills or traffic slowdowns and your car or truck is doing more work the more it weighs.


True, but regenerative braking eliminates most of that as well.
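A quick back-of-the-envelope on the hill case, using illustrative numbers (the 65% round-trip regen figure and the masses are assumptions, not measurements):

```python
G = 9.81  # m/s^2

def hill_energy_kwh(mass_kg, climb_m):
    """Potential energy to lift a mass up a climb, in kWh."""
    return mass_kg * G * climb_m / 3.6e6

def net_hill_cost_kwh(mass_kg, climb_m, regen_efficiency=0.65):
    """Energy spent climbing minus what regenerative braking recovers
    on the way back down (65% round-trip recovery is an assumption)."""
    return hill_energy_kwh(mass_kg, climb_m) * (1 - regen_efficiency)

# An extra 500 kg over a 300 m climb costs ~0.41 kWh to lift, but only
# ~0.14 kWh net once regen claws most of it back on the descent.
print(round(hill_energy_kwh(500, 300), 2), round(net_hill_cost_kwh(500, 300), 2))
```

So extra weight still costs something on hills and in stop-and-go traffic, just much less than it would without regen.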

