Hacker News | drdeca's comments

I don’t see why any of those should be exonerating?

Also, I feel like “nothing wrong if it does happen” is the wrong perspective regarding shooting someone. If shooting someone is necessary, then it is necessary, but that doesn’t mean nothing went wrong. Any time someone gets shot, something has gone wrong.


So if someone threatens to kill you and your family, and you shoot them, something has gone wrong? I'd say something has gone right.

Yes, something has gone wrong: someone threatened to kill me and my family, and apparently the only way to stop them from doing so was to kill them. That may be the best option available, but it is still a tragedy.

There are also many situations where shooting someone isn’t the right response to such a threat.

What is the smallest level of additional security such that, if you assumed that the TSA only provides that much additional security over the alternative of not having them, you would regard it as worth it?

And, is the actual amount of security provided greater than that amount?


Hm. It shouldn’t be too hard to add something to models to make them do that, right? I guess for that they would need to know the user’s time zone?

Can one typically determine a user’s timezone in JavaScript without getting permissions? I feel like probably yes?
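For what it’s worth, yes: in browsers (and in Node) the environment’s IANA time zone can typically be read without any permission prompt, via the Intl API:

```javascript
// No permission prompt needed: the Intl API exposes the IANA time zone
// the environment is configured with (e.g. "America/New_York").
const timeZone = Intl.DateTimeFormat().resolvedOptions().timeZone;
console.log(timeZone);
```

This reflects the system clock’s configured zone, so it can still be wrong or spoofed; it’s a hint, not a guarantee.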

(I’m not imagining something that would strictly cut the user off, just something that would end messages with a suggestion to go to bed, and saying that it will be there in the morning.)


Chatbots already have memory, and mine already knows my schedule and location. It doesn't even need to say anything directly, maybe just shorter replies, less enthusiasm for opening new topics. Letting conversation wind down naturally. I also like the idea of continuing topics in the morning, so if you write down your thoughts/worries, it could say "don't worry about this, we can discuss this next morning".


I know a few people who work 3rd shift, that is, people who have good reason to be up all night in their local timezone. They all sleep during times when everyone else around them is awake. While they are a small minority, they are enough that your scheme will not work.


I actually was considering those people. That’s part of why I suggested it shouldn’t be a hard cut-off, but just adding to the end of the messages.

Of course, one could add some sort of daily schedule feature thing so that if one has a different sleep schedule, one can specify that, but that would be more work to implement.


Ideally, sufficiently powerful AI would not be created unless the necessary safety mechanisms are established.

But also, that’s a different kind of asymmetry?


Many people don’t think there is a moral case against training a model on copyrighted data without obtaining a license to do that specifically.


I don’t see why you think that AGI can reverse the effects of another AGI?



Not convincing


> By definition you cannot have someone who is the most informed about everything.

This is not true by definition. It may be true, but not by definition. If there were an omniscient person, they would be the most informed about everything.


Anthropic? ChatGPT is the one affiliated with Microsoft.


They are saying that judging what qualifies as harm is something like judging what is good, or what is right or wrong. That’s not the same thing as evaluating whether something causes pain. You can measure whether something caused pain, sure. (Well, the sort of limitations you mentioned in measuring pain exist, but as you said, they are not a major issue.)

“Harm” isn’t the same thing as “pain”.

I would say that when I bite my finger to make a point, I experience pain, but this doesn’t cause me any suffering nor any harm. If something broke my arm, I claim that this is harm to me. While this (“if my arm were broken, that would be harm to me”) might seem like an obvious statement, and I do claim that it is a fact, not just an opinion, I think I agree that it is a normative claim. It is a claim about what counts as good or bad for me.

I don’t think normative claims (such as “It is immoral to murder someone.”) are empirical claims? (Though I do claim that they at least often have truth values.)


I'd go beyond that and even say that one might consider something harmful, but be willing to endure a certain level of harm in pursuit of something of higher value.

For example, I once asked a smoker why she smoked, and the response was "because I love it" -- when I asked if the enjoyment was worth the health risks, she said "yes; I never planned to live forever". She was making a conscious decision to seek short-term pleasure at the cost of potential longer-term damage to her health. At that point, there wasn't really anything remaining to debate about.


I didn’t mean to imply that the harmful effects of something can’t be worth it for the beneficial effects of that thing. Yeah, if someone is trapped, doing something that frees them and also breaks their arm, may well be an appropriate action for them to take.


Well, what exactly an “idea” is might be a little unclear, but I don’t think it’s clear that the complexity of ideas that result from combining previously obtained ideas would be bounded by the complexity of the ideas they are combinations of.

Any countable group is a quotient of a subgroup of the free group on two elements, iirc.
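Hedged sketch of the argument, as I remember it: any countable group G is a quotient of the free group of countably infinite rank (map generators onto a countable generating set of G), and that free group embeds in the free group on two generators:

```latex
F_\infty \twoheadrightarrow G,
\qquad
F_\infty \,\cong\, \langle\, b^{-n} a b^{n} : n \ge 0 \,\rangle \,\le\, F_2 = \langle a, b \rangle,
```

so G is a quotient of a subgroup of F_2.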

There’s also the concept of “semantic primes”. Here is a not-quite-correct oversimplification of the idea: suppose you go through the dictionary and, one word at a time, pick a word whose definition includes only other words that are still in the dictionary, and remove it. You may also rephrase definitions before doing this, as long as the meaning is preserved. Suppose you do this with the goal of leaving as few words in the dictionary as you can. In the end, you should be left with a small core of a bit over 100 words, in terms of which all the words you removed can be indirectly defined. (The idea of semantic primes also says that there is such a minimal set which translates essentially directly between different natural languages.)
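As a toy illustration of that greedy reduction (a hypothetical sketch for illustration only; `reduceDictionary` and the tiny dictionary are made up, and real semantic-primes work doesn’t proceed this mechanically):

```javascript
// `dict` maps each word to the list of words its definition uses.
// We repeatedly remove any word whose definition uses only other words
// that are still in the dictionary; the order in which removable words
// are picked can change how small the final core gets.
function reduceDictionary(dict) {
  const remaining = new Set(Object.keys(dict));
  let changed = true;
  while (changed) {
    changed = false;
    for (const word of [...remaining]) {
      const definable = dict[word].every(
        (w) => w !== word && remaining.has(w)
      );
      if (definable) {
        remaining.delete(word);
        changed = true;
      }
    }
  }
  return remaining; // the irreducible "core" vocabulary
}

// Tiny example: "big" and "large" can be defined away;
// the rest have circular definitions and must stay in the core.
const core = reduceDictionary({
  big: ['very', 'large'],
  large: ['not', 'small'],
  small: ['small'],
  very: ['very'],
  not: ['not'],
});
```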

I don’t think that says that words for complicated ideas aren’t like, more complicated?

