pegasus's comments | Hacker News

Nobody likes to be left hanging.

Currently looking up emails for jsiepkes’ board to walk them through my frustrating experience with their comment.

The brush-strokes are part of the painting (they give it texture and structure, for example), so a painter who cares about the end product would care about them too. But a painter who instead deeply cares about details of the brush incidental to the task of creating paintings has, by definition, gotten lost in the woods, or at least stopped being a painter for those moments. It makes sense to care about, say, the feel and balance of a brush, because that has a direct impact on the artwork, but collecting embellished brushes would be him wearing not a painter's hat (beret?) but a collector's.

My point is that the end-product matters most, and getting wrapped up in any other part of the process for its own sake is a failing, or at best a distraction - in both cases.


Is remote rendering a thing? I would have imagined the lag would make something like that impractical.

The lag is high. Google was doing this with Stadia. A huge amount of money comes from online multiplayer games, and almost all of them require minimal latency to play well. So I doubt EA, Microsoft or Activision are going to effectively kill those cash cows.

Game streaming works well for puzzle, story-esque games where latency isn't an issue.


Hinging your impression of the domain on what Google (notoriously not really a player in the gaming world) tried and failed at will not exactly give you the most accurate picture. You might as well hinge your impression of how successful a game engine can be on Amazon's attempts at it.

GeForce NOW and Xbox Cloud are much more sensible projects to look at/evaluate than Stadia.


It doesn't matter who does it. To stream you need to send the player input across the net, process, render and then send that back to the client. There is no way to eliminate that input lag.

Any game that requires high APM (Actions Per Minute) will be horrible to play via streaming.

I feel as if I shouldn't really need to explain this on this site, because it should be blindingly obvious that this will always be an issue with any streamed game, for the same reason you have a several-second lag between what's happening at a live sports event and what you see on the screen.
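For a rough sense of scale, here's a back-of-the-envelope sketch of the extra input-to-photon latency streaming adds (all numbers are made up, just to illustrate where the budget goes):

    # Illustrative round-trip budget for one streamed frame, in milliseconds.
    # These figures are assumptions for the sake of the example, not measurements.
    budget = {
        "input capture + upload to server": 10,
        "server-side game tick": 16,
        "render + video encode": 12,
        "network back to client": 15,
        "decode + display": 10,
    }

    total = sum(budget.values())
    print(f"added input-to-photon latency: ~{total} ms")  # ~63 ms on top of local lag

Even with generous numbers, that's tens of milliseconds a local machine simply doesn't pay.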


GeForce NOW is supposedly decent for a lot of games (depending on connection and distance to server), although if Nvidia totally left gaming they'd probably drop the service too.

It will be if personal computing becomes unaffordable. The lag is simply mitigated by having PoPs (points of presence) everywhere.

Unless you happen to meet the unendingly patient and helpful cook who is willing to explain the recipe in any depth one desires.

You mean the cook who will in the same unendingly patient and helpful manner sometimes confidently suggest putting glue into your dishes and serving your guests rocks for lunch?

That's part of the cook being helpful. It's how they check that you're not asleep at the wheel and your critical thinking is awake and engaged ;)

I'm sure these year-old examples of errors from a bad product are still valid!

What bad product? I'm not as categorical as OP, but acting like this is a solved problem is weird. LLMs being capable of generating nonsensical stuff isn't a one-off blip on the radar in one product that was quickly patched out, it's nigh unavoidable due to their probabilistic nature, likely until there's another breakthrough in that field. As far as I know, there's no LLM that will universally refuse to try outputting something it doesn't "know" - instead outputting a response that feels correct but is gibberish. Or even one that wouldn't have rare slip-ups even in known territory.

There's a difference between recent frontier coding LLMs and Google doing quick-and-cheap RAG on web results. It's good to understand it before posting cheap shots like this.

You still need to do the cooking yourself to master it. If that cook gives you a ready-made dish, can you say you can cook? Although yes, that’s the goal for many…

Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers? Are they right? Are you right? We don't really know, do we...

Sam Altman is the modern day PT Barnum. He doesn't believe a damn thing except "make more money for Sam Altman", and he's real good at convincing people to go along with his schemes. His actions have zero evidential value for whether or not AI is intelligent, or even whether it's useful.

Maybe not, but I was responding to "nobody believes", not to whether AI is intelligent or not (which might just be semantics anyway). Plenty believe, especially the insiders working on the tech, who know it much better than us. Take Ilya Sutskever, of "do you feel the AGI" fame. Labelling them all as cynical manipulators is delusional. Now, they might be delusional as well, at least to some degree - my bet is on the latter - but there are plenty of true believers out there and here on HN. I've debated them in the past. There are cogent arguments on either side.

"They convinced the investors so they must be right"

> Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers? Are they right? Are you right? We don't really know, do we...

The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.


It's not that the money can predict what is correct, it's that it can tell us where people's values lie.

Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.

Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.

However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.

There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.


> However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.

Very few people say this. But it’s realistic to say that, at the least, our jobs are going out the window within the next decade.


The CEO of Nvidia is saying this.

So yeah, he's just "one guy", but in terms of "one guys", he's a notable one.


Someone also believed the internet would take over the world. They were right.

So we could be right or we could be wrong. What we do know is that a lot of what people were saying or “believed” about LLMs two years ago is now categorically wrong.


Someone also believed the moon was made of green cheese. They were wrong.

And some of those beliefs they were wrong about concern when and how it will change things.

And my post is not about who is correct. It's about discerning what people truly believe despite what they might tell you up front.

People invested money into the internet. They hired people to develop it. That told you they believed it was useful to them.


And a parrot (or human) is not stochastic? The truth is we don't actually know. So the usually included "just" is unjustified.

Those foreign students usually pay for the education they receive; they might not be willing to do so (or as much) if there are strings attached. Besides, I don't think any country should aim at brain-draining any other country; that kind of selfishness will be counterproductive long-term. Who knows, it might be what we're seeing right now (the US self-sabotaging). Karma's a bitch.

The proof is verified mechanically - it's very easy to check that a proof is correct; what's hard is coming up with the proof (it's an NP problem). There can still be gotchas, especially if the statement being proved is complex, but it does help a lot in keeping bugs away.
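As a toy illustration of that asymmetry, here's a minimal Lean 4 sketch (the theorem name is arbitrary, and it leans on the standard-library lemma Nat.add_comm): the checker verifies the proof term mechanically and instantly, while all the real work is in producing that term.

    -- Verifying this proof is a mechanical type-check.
    -- The hard, search-heavy part is coming up with the proof term itself.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b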

How often can the hardness be exchanged with tediousness though? Can at least some of those problems be solved by letting the AI try until it succeeds?

To simplify for a moment, consider asking an LLM to come up with tests for a function. The tests pass. But did it come up with exhaustive tests? Did it understand the full intent of the function? How would it know? How would the operator know? (Even if it's wrangling simpler iterative prop/fuzz/etc testing systems underneath...)
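To make that concrete, here's a hedged sketch in Python using the hypothesis property-testing library (clamp is a made-up function for illustration): a couple of example-based tests can pass while saying little about intent, whereas a property at least states the intent over many generated inputs - and even that falls well short of a full specification.

    from hypothesis import given, strategies as st

    def clamp(x: int, lo: int, hi: int) -> int:
        # Hypothetical function under test: restrict x to the range [lo, hi].
        return max(lo, min(x, hi))

    # A handful of example-based tests can pass while missing edge cases entirely.
    def test_examples():
        assert clamp(5, 0, 10) == 5
        assert clamp(-1, 0, 10) == 0

    # A property states the intent over all inputs the strategies generate;
    # cases with lo > hi are simply skipped rather than constrained away.
    @given(st.integers(), st.integers(), st.integers())
    def test_clamp_within_bounds(x, lo, hi):
        if lo <= hi:
            assert lo <= clamp(x, lo, hi) <= hi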

Verification is substantially more challenging.

Currently, even for an expert in the domains of the software to be verified and the process of verification, defining a specification (even partial) is both difficult and tedious. Try reading/comparing the specifications of e.g. a pure crypto function, then a storage or clustering algorithm, then seL4.

(It's possible that brute force specification generation, iteration, and simplification by an LLM might help. It's possible an LLM could help eat away complexity from the other direction, unifying methods and languages, optimising provers, etc.)


Comparing humans with machines on resource use gives some seriously dystopian vibes.

I agree, but that's what people are implicitly doing every time they toss out one of those "The machine drinks a glass of water every time it" statistics. We are to assume a human doesn't.

That's just the thing: desktop computers have always been, in an important way, the antithesis of a specialized appliance, a materialization of Turing's dream of the Universal Machine. It's only in recent years that this universality has come under threat, in the name of safety.

I wouldn't say the driver is "safety". What's happened is that a few highly-specialized symbolic-manipulation tasks now have enough market value that they can demand highly specialized UX to optimize task performance.

One classic example is the "Bloomberg Box": https://en.wikipedia.org/wiki/Bloomberg_Terminal which has been around since the late '80s.

You can also see this from the reverse direction (analog -> digital) in the evolution of hospital vital-sign monitors and the classic "6 pack" of gauges used in both aviation and automobiles.


I meant the universality (openness) of desktop computers comes under threat, as the "walled garden" model seeks to make the jump from mobile to desktop.

Ah yes, I agree. I run macOS as my daily driver, but otherwise barely skim the Apple ecosystem. Apple laptops were just the best hardware to run a Unix-ish (BSD) on.

Now with performant hypervisors, I just run a bunch of Linux VMs locally to minimize splash-zone and do cloud for performance computing.

I'll likely migrate fully to a Framework laptop next year, but I don't have time (atm) to do it. Ah, the good 'ole glory days of native Linux on Thinkpads.

