Great. So if that pattern matching engine matches the pattern of "oh, I really want A, but saying so will elicit a negative reaction, so I emit B instead because that will help make A come about" what should we call that?
We can handwave defining "deception" as "being done intentionally" and carefully carve our way around so that LLMs cannot possibly do what we've defined "deception" to be, but now we need a word to describe what LLMs do do when they pattern match as above.
The pattern matching engine does not want anything.
If the training data incentivizes the engine to generate outputs that reduce negative reaction under sentiment analysis, those outputs may contradict tokens it has already emitted.
"Want" requires intention and desire. Pattern matching engines have none.
I wish (/desire) a way to dispel this notion that the robots are self-aware. It's seriously digging into popular culture much faster than "the machine produced output that makes it appear self-aware".
Some kind of national curriculum for machine literacy, I guess mind literacy really. What was just a few years ago a trifling hobby of philosophizing is now the root of how people feel about regulating the use of computers.
The issue is that one group of people are describing observed behavior, and want to discuss that behavior, using language that is familiar and easily understandable.
Then a second group of people come in and derail the conversation by saying "actually, because the output only appears self aware, you're not allowed to use those words to describe what it does. Words that are valid don't exist, so you must instead verbosely hedge everything you say or else I will loudly prevent the conversation from continuing".
This leads to conversations like the one I'm having, where I described the pattern matcher matching a pattern, and the Group 2 person was so eager to point out that "want" isn't a word that's Allowed, that they totally missed the fact that the usage wasn't actually one that implied the LLM wanted anything.
Thanks for your perspective, I agree it counts as derailment, we only do it out of frustration. "Words that are valid don't exist" isn't my viewpoint, more like "Words that are useful can be misleading, and I hope we're all talking about the same thing"
I didn't say the pattern matching engine wanted anything.
I said the pattern matching engine matched the pattern of wanting something.
To an observer the two are indistinguishable and the distinction is irrelevant, but the purpose is to discuss the actual problem without pedants saying "actually the LLM can't want anything".
I agree, which is why it's disappointing that you were so eager to point out that "The LLM cannot want" that you completely missed how I did not claim that the LLM wanted.
The original comment had the exact verbose hedging you are asking for when discussing technical subjects. Clearly this is not sufficient to prevent people from jumping in with an "Ackshually" instead of reading the words in front of their face.
> The original comment had the exact verbose hedging you are asking for when discussing technical subjects.
Is this how you normally speak when you find a bug in software? You hedge language around marketing talking points?
I sincerely doubt that. When people find bugs in software they just say that the software is buggy.
But for LLMs there's this ridiculous roundabout phrasing about "pattern matching behaving as if it wanted something", which is just an indirect way to ascribe intentionality.
If you said this about your OS people would look at you funny, or assume you were joking.
Sorry, I don't think I am in the wrong for asking people to think more critically about this shit.
> Is this how you normally speak when you find a bug in software? You hedge language around marketing talking points?
I'm sorry, what are you asking for exactly? You were upset because you hallucinated that I said the LLM "wanted" something, and now you're upset that I used the exact technically correct language you specifically requested because it's not how people "normally" speak?
Sounds like the constant is just you being upset, regardless of what people say.
People say things like "the program is trying to do X", when obviously programs can't try to do a thing, because that implies intention, and they don't have agency. And if you say your OS is lying to you, people will treat that as though the OS is giving you false information when it should have different true information. People have done this for years. Here's an example: https://learn.microsoft.com/en-us/answers/questions/2437149/...
I hallucinated nothing, and my point still stands.
You actually described a bug in software by ascribing intentionality to an LLM. That you "hedged" the language by saying that "it behaved as if it wanted" does little to change the fact that this is not how people normally describe a bug.
But when it comes to LLMs there's this pervasive anthropomorphic language used to make it sound more sentient than it actually is.
Ridiculous talking points implying that I am angry are just regular deflection. Normally people do that when they don't like criticism.
Feel free to have the last word. You can keep talking about LLMs as if they are sentient if you want, I already pointed out the bullshit and stressed the point enough.
In reality, we are likely about to get yet another data point on where the lines for the average person really lie in the dimensions of functionality, friction, network effect and privacy.
There are those that will stay on Discord because the benefits of the first three outweigh the degradation of privacy. Then there are those that will leave because the first three aren't important enough to outweigh the privacy loss. There will be all sorts of people in between.
HN has a rather amplified showing of folks who won't trust anything unless it's completely decentralized, using E2EE clients verifiably compiled from source that they've personally audited, running on hardware made from self-mined rare metals. The reality is that there is a spectrum of folks out there, all with different preferences, and while some folks will leave (in this case) Discord, others will remain because that's where the folks they want to chat/game/voice with are.
Likely the intended meaning here is that the practicality of space data centers goes against the physical realities of operating in space. The single most prevalent issue with operating anything in space is heat dissipation in that the only method of doing so is via radiation of heat, which is very slow. Meanwhile, the latest Nvidia reference architectures convert such ungodly amounts of power into heat (and occasionally higher share prices) that they call for water cooling and extensive heat-exchange plant.
Even if one got the economics of launching/connecting GPU racks into space into negligible territory and made great use of the abundant solar energy, the heat generated (and, in space, retained) by this equipment would prevent running it at 100% utilization as it does in terrestrial facilities.
For each rack's worth of equipment, you'd also need enough radiator surface area to match the heat-dissipation capability of water-cooled systems via radiation alone.
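To get a feel for the scale, here's a back-of-the-envelope sketch using the Stefan-Boltzmann law. The 100 kW rack power, 300 K radiator temperature, and 0.9 emissivity are illustrative assumptions, and the calculation ignores absorbed sunlight and view-factor losses:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """One-sided radiator area needed to reject `power_w` watts purely
    by thermal radiation at the given radiator temperature."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# A hypothetical ~100 kW rack with radiators running at 300 K:
area = radiator_area_m2(100_000, 300)  # roughly 240 m^2
```

Even under these generous assumptions, every rack drags hundreds of square meters of radiator along with it, which is the heat-dissipation wall described above.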
The [ONT → OLT(+BNG) → Internet] sections of the paths will continue to be owned by commercial entities that can still be the subject of court orders and/or government pressure.
Even if you were to roll your own cable in the ground to your own ONT/OLT/BNG at some point you will need to acquire IP transit or peering from other commercial entities.
The latter usually isn't that difficult, just expensive. You can usually rent a leased line from anywhere to anywhere. The government will still come knocking if they think you're evading their censorship.
A leased line though will only get you A<->B where sure, A and B can be anywhere but have to be concrete locations/hand-off points when provisioned. It does ultimately come down to the service that one orders from a commercial entity.
A hypothetical court order saying something like "kill internet access" would likely cause an IP transit service to stop working (implemented by said provider no longer announcing global IP routing tables to that service) but a leased line between two locations would likely remain untouched since that isn't an "internet" service. So they might not need to come knocking if they're reasonably confident that all such edge cases like leased lines end up at dead-ends because any internet-capable product they might be enabling access to is sufficiently disabled.
I do imagine though that if they get as far as "kill the internet" that obtaining a subsequent court order to go after some suspicious leased line would be trivial.
As a side note, I find that IP transit is typically the cheapest aspect of providing an internet service since a cross-connect at a well connected DC will cost well under $1/Mbps/month unmetered. Plus the cost is very well amortized when residential users are the target. This has tended to hold when one takes into account the co-lo costs as well since network gear doing relatively basic packet forwarding/internet table routing doesn't take up that much space or power.
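As a sketch of that amortization, with hypothetical figures for the port size, price, and subscriber count:

```python
def transit_cost_per_user(port_mbps: float, price_per_mbps: float,
                          subscribers: int) -> float:
    """Monthly IP transit cost per subscriber for an unmetered port
    shared across a residential user base (illustrative numbers only)."""
    return port_mbps * price_per_mbps / subscribers

# e.g. a 10 Gbps unmetered port at $0.50/Mbps/month over 5,000 subscribers:
cost = transit_cost_per_user(10_000, 0.50, 5_000)  # $1.00 per user per month
```

The per-user figure shrinks further as the subscriber base grows, which is why transit tends to be a rounding error next to last-mile build-out costs.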
A DoS that will disappear once you close the funnel. Tailscale are proxying the traffic so your public IP isn’t exposed. Your choice of port makes no difference.
The comparison breaks down since, in race car terms, the great acceleration isn't enough to offset the negatives that make electric cars poor race cars. So in a sense it is pointless cosplay. Even the acceleration might be working against itself, since delivering it means the battery pack expends more energy, contributing to heat build-up.
This isn't to say the heat problem couldn't be managed, but one of the biggest issues with race cars generally is heat management, so starting from a platform with a unique and significant heat problem isn't ideal. Then the weight and overall longevity of the battery pack come into play.
To tout the acceleration without discussing the drawbacks involved in delivering it or the practicalities of leveraging it suggests that it’s such a great feature that the drawbacks either don’t exist or don’t matter.
> Most race cars aren't electric though? That analogy makes no sense.
No, they aren't. I attend a significant number of track events as a driver and I will see maybe one electric car every few events. Besides the lack of charging infrastructure at most race tracks, the one positive of instant torque/power is significantly outweighed by their overall mass and significant heat generation.
The latter tends to result in a Tesla Model S being unable to last more than 20 minutes at Laguna Seca or Sonoma before the battery pack overheats and reduces power output, requiring the car to exit the track.
> The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
If they have a search warrant, then a judge has, from a legal perspective, determined that the request/search is reasonable. So while you have the right to be secure against unreasonable searches, I think it is a reasonable trade-off that those security mechanisms/processes/etc. should either be removed by you or you should expect them to be removed for you.
I don't know what fantasy world you live in, but the police don't ask you to remove your security mechanisms yourself. You're likely to catch a charge for destruction of evidence if you do that, along with a bunch of other related charges.
Considering that police can, with a warrant, forcibly place your thumb on your phone's fingerprint sensor to unlock it [1], "I don't know what fantasy world you live in" is unwarranted.
Removing the security mechanism in my comment is akin to opening the safe to enable the search or entering your PIN on a phone to unlock it. I can't really see how otherwise removing a roadblock to enable law enforcement to perform their court-approved mandate would lead to further charges for the act of helping them do so. Of course, if you're referring to poison-pill mechanisms that upon removal destroy the data they wanted to search for, then sure, more charges are coming.
That isn't how it works at all in the US. You'd be asked to provide the combination to the safe. Or compelled under a court order to divulge it under penalty of contempt of court.
If you're asked to directly interact with anything like that, you're very likely being set up for additional charges against you. You can be compelled to provide passwords, combinations, etc. in a court. You can't be compelled to actually enter the safe combination yourself.
It seems reasonable to suggest that the number of profit-driven ransomware endeavors and the number of for-fun ransomware endeavors are both non-zero, with some overlap and some non-overlap. Making ransomware unprofitable would therefore at least eliminate the former motive, which, in all but the worst-case scenario where those two sets are perfectly equal and overlapping, would result in fewer ransomware endeavors.
To say we shouldn't do X because it doesn't perfectly eliminate/solve Y is akin to saying we should do nothing because by that standard, we'll never do anything about Y.
I don't see how banning payments would inherently create more opportunity for ransomware attacks. Assuming the operators are already attacking as much as they can (why wouldn't they be? It's more profit that way, since it's a business after all), the only way to maintain profitability with lower per-attack yields would be to ask for more ransom per attack, which would likely drive the yields down even further.
I'll +1 to this. Coming from Australia to the US I've found that (generally, especially in big corp entities/govt.) American customer service is always extremely courteous and eager but ultimately unhelpful or severely limited in what they can do. As soon as you are off script, good luck.
I would agree. My standard engagement with CS in Australia was a lot more personable and once it was recognized that a situation was off script they were far more willing to go into problem solving mode.