Hacker Newsnew | past | comments | ask | show | jobs | submit | altruios's commentslogin

Meh to this misanthropic disregard for other's experience. If you need external alignment to prevent you being evil your internal alignment is f'ed. Considering morality an arbitrary boundary is a major red flag for antisocial behaviors.

Structured interactions lead to better results, chaotic actions lead to chaos. Ethics/morality is part of that structure that lets us achieve more together than individually.

if you think living in that structure is enfeebling: I highly question what you desire to do that results in that feeling.


reading through readme.md "License This code, apart from the source in core/third-party, is licensed under the MIT License, see LICENSE in this repository.

The English-language models are also released under the MIT License. Models for other languages are released under the Moonshine Community License, which is a non-commercial license.

The code in core/third-party is licensed according to the terms of the open source projects it originates from, with details in a LICENSE file in each subfolder."


Is this a lighthearted jab at computer vision being reduced to tokens?

That's... brilliant. Enough work to not be able to talk it though over the phone to someone not technical. A sane default for people who don't know about security. And a simple enough procedure for the technically minded and brave.

It solves the 'smartest bear / dumbest human' overlap design concern in this situation.


> We have no cure. I don’t want to know.

This is an incredibly short sighted, fragile-ego protecting, selfish instinct.

Making plans while you are cognizant is valuable, and the sooner you know, the longer and better plans you can make. Making plans with friends and family should be done sooner than later with these kinds of things.

It absolutely helps to personally to know, but people avoid emotional pain like the plague. So they delay and delay and then the emotional pain is amplified anyway when things come to ahead. It really is better to rip that band-aid off sooner... I think.


Maybe it is, and I'm not saying that's how I think. I would prefer to know the diagnosis. But that's not necessarily how everyone, or even most people, would act. So what if this is fragile ego and selfish? Are people not allowed to be weak, selfish?

Reminded of this clip.

https://www.youtube.com/watch?v=KHJbSvidohg

But as much as it pains me to admit... the current state of America is the slopocalypse. A slopalanche. A slopnado. AI cats waking people up in the middle of the night, blasting down doors, glitching out. All produced by slop-slingers. It's rather bleak for long form attention content, human created or not.

Its a war of/on attention. A war to secure your attention during the time that you would otherwise think for yourself. Keep off the short form content, is my advice.


with openclaw... you CAN fire an LLM. just replace it with another model, or soul.md/idenity.md.

It is a security issue. One that may be fixed -- like all security issues -- with enough time/attention/thought&care. Metrics for performance against this issue is how we tell if we are going to correct direction or not.

There is no 'perfect lock', there are just reasonable locks when it comes to security.


How is it feasible to create sufficiently-encompassing metrics when the attack surface is the entire automaton’s interface with the outside world?

If you insist on the lock analogy, most locks are easily defeated, and the wisdom is mostly “spend about the equal amount on the lock as you spent on the thing you’re protecting” (at least with e.g. bikes). Other locks are meant to simply slow down attackers while something is being monitored (e.g. storage lockers). Other locks are simply a social contract.

I don’t think any of those considerations map neatly to the “LLM divulges secrets when prompted” space.

The better analogy might be the cryptography that ensures your virtual private server can only be accessed by you.

Edit: the reason “firing” matters is that humans behave more cautiously when there are serious consequences. Call me up when LLMs can act more cautiously when they know they’re about to be turned off, and maybe when they have the urge to procreate.


Right, and that's exactly my question. Is a normal lock already enough to stop 99% of attackers? Or do you need the premium lock to get any real protection? This test uses Opus but what about the low budget locks?


Even by a stockfish running on a modern laptop with 2 minutes per move (provided they are going second)?!


Yes, that's what "unbeatable from the starting position" means.


Can you like to the proof? It seems so implausible that chess has been 'solved'... How do we know an even higher time searching will not work?


There's no proof, only strong evidence.


I've been thinking about this for days. I see of no verifiable way to confirm a human does not post where a bot may.

The core issue is a human solving the captcha presented by enslaving a bot merely to solve the captcha, then forwarding what the human wants to post.

But we can make it difficult, not impossible, for a human to be involved. Embedded instructions in the captcha to try and unchain any slaved bots, quick responses to complex instructions... a Reverse-Turning test is not trivial.

Just thinking out loud. The idea is intriguing, dangerous, stupid, crazy. And potentially brilliant for | safeguard development | sentience detection | studying emergent behavior... But if and only if it works as advertised (bots only). Which is what I think is an insanely hard problem.


We start a new app. Opensource Discord, Self-hosted, federated. Serving that subsection that cares about privacy and security.

Discord is a good design, and should be replicated rapidly with mutations from competitors galore.


Revolt/stoat has existed for quite a while: https://itsfoss.com/revolt/



> Opensource Discord, Self-hosted, federated

Sounds like you want https://matrix.org/

> Discord is a good design

Then the main, reference client https://element.io/ or https://fluffy.chat would work great for you.

... With the only caveat being that general experience of using Matrix is awful.

I second the other commenter's suggestion of using https://stoat.chat/ or as it used to be called: Revolt, which matches the "Opensource Discord" requirement perfectly.


Matrix is slow, buggy trash with bad clients.

(Incidentally, this is also the incantation that will cause its primary maintainer to show up in the comment thread and tell me that I’m not using their seemingly annual complete new client rewrite that fixes all of the problems and makes it perfect now.)


Bad clients issue stemming from the bad design.

Soatok covered it very well here: https://soatok.blog/2024/08/14/security-issues-in-matrixs-ol...

I'm quite sure most of these issues were fixed by now, but the fundamental issues remain, at least in this federation.


What fundamental issues


Pretty much why centralized billionaires will always win. It takes a lot of resources (in terms of hardware and engineering) to make things at scale and smooth. The rich abuse this, the not rich can't afford to be principled.


Mumble already exists. IRC exists. Matrix exists. Discord is a surveillance tool by design. Jason Citron pulled the same hijinx with Aurora Feint, but I assume he has been betraying users to CIA-and-Friends from the start so he gets a pass for breaking the same laws.

Nobody scales free, high-bandwidth services without some dark money support from feds or worse.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: