Hacker News | Vrondi's comments

Nobody talks about "life in the 22nd century" in the way they talked about "life in the 21st century" in recent decades, because for the past 24 years we've been at the _beginning_ of a century. Once we get halfway through the 21st century, the talk about "life in the 22nd century" will really ramp up.


You apparently never learned about the 1939 New York World's Fair's "The World of Tomorrow" expo. That didn't wait for the century's halfway point. How about the 1900 Paris Exposition and the 1933 Chicago World's Fair, both of which also featured predictions and prototypes of future technologies that got everyone from workers to sci-fi writers focused on flying cars and moving sidewalks?

Hardly anyone on this site has any sense of history and people just make shit up about the past. How sad to see a once intellectual forum turn into another Reddit or Twitter.


I've heard of those - and while I never really dug into details of what was presented or why, I believe I got the overall vibe of those expos - and that vibe is, sadly, missing today.


Except people in the first quarter of the 20th century did talk about the 21st century.

https://www.upworthy.com/11-ridiculous-future-predictions-fr...


To make it super easy to use the same "program" on your smartphone/tablet/laptop/desktop, wherever you are. And/or to simultaneously share access to it with friends/family. Yes, you can all just point a local app at a data share instead (the old way), but many apps aren't available across all these platforms to even make this possible across iOS, OSX, Linux, Android, and Windows, so this often doesn't work for a group of more than two people.


> many apps aren't available across all these platforms to even make this possible across iOS, OSX, Linux, Android, and Windows,

Thanks to all the spying/telemetry in most of those platforms, once you use your privacy-protecting app on them you lose the advantage of your data being self-hosted in the first place.


The fascinating thing is that the political right largely prefers ICE vehicles in the USA. This is a combination of factors: more rural voters who need more range, work more blue-collar jobs, haul firewood, etc., combined with a personality type that is more cautious about change, and the insane prices for things like Teslas here. Perhaps Musk's close association with President Trump will gain him a small number of new rich fans on the political right, but most common Americans in most of the country aren't even considering a Tesla to begin with, and it's the majority of folks with lower incomes that put Trump back in office this time.


That's what "self hosted" usually means.


The only thing that will be different for most is vendor lock-in will be to LLM vendors.


Some note-taking apps can already do this. Samsung note devices are decent at it.


Posts from users should never, ever be removed just for containing URLs. The Web should be allowed to exist. This is utterly despicable behavior.


Clearly there is content that would be unacceptable to post. Anything patently illegal, for example.


Or websites that look exactly like paypal, whose URLs begin in paypal.com (followed by a dot, not a slash), but that are, in fact, not Paypal.

I think that's a much more pressing concern.
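To make the trick above concrete, here is a small sketch (hostnames invented) of why a naive "starts with paypal.com" check is fooled by subdomain phishing, while a suffix-based check on the hostname is not:

```python
from urllib.parse import urlsplit

def looks_like(host: str, trusted: str) -> bool:
    """True if `host` actually belongs to `trusted` (exact match or a
    real subdomain) -- a prefix check gets this backwards."""
    return host == trusted or host.endswith("." + trusted)

# The phishing trick: the trusted name appears at the *start* of the host.
phish = urlsplit("https://paypal.com.evil.example/login").hostname
real = urlsplit("https://www.paypal.com/login").hostname

print(phish.startswith("paypal.com"))   # naive prefix check is fooled: True
print(looks_like(phish, "paypal.com"))  # suffix check is not: False
print(looks_like(real, "paypal.com"))   # legitimate subdomain: True
```

A production filter would additionally consult the Public Suffix List to find the registrable domain, since `endswith` alone can't handle things like `co.uk`.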


The parent said to block things that are illegal. Phishing is illegal.


Insanity. Absolutely. Maybe.

Clearly there's a need for some kind of bad-url blocker. You don't want compromised accounts (or clueless people) sharing nefarious links to trusted friends.

And clearly blocking distrowatch etc is bizarre overreach. And probably not intended behaviour -- it just makes no sense.

The web exists just fine. Using Facebook as a front end to the web is a terrible idea though.


I once posted a YouTube comment with a link. It got removed without notice. At first I thought it was the uploader, but no...


Amusingly, new YouTube channels can't even put links in the descriptions of their own videos.

They really dislike this whole hypertext thing.


They really want Xanadu's for-profit linking.


Imagine how bad that could have been, had it happened - extrapolating from the current state of the web.


I'm pretty sure I still see spam links in YouTube comments, though.


Yeah, anything with a link gets silently removed.

Wikipedia links seem to be an exception, maybe that’s special-cased.


The internet would look like the spam folder of a compromised email address. No thanks.


Mastodon does not restrict the posting of URLs and it does not look like “the spam folder of a compromised email address” at all.


Mastodon isn't in charge of moderation though, that's up to the individual instances.

Also, Mastodon is tiny, and spam is a numbers game.


But are you not somewhat agreeing with the point that you're implicitly arguing against: "[This isn't a problem] if I [am] only seeing updates from the people I actually know and explicitly connected to on the social graph. The current problem exists because the content is chosen algorithmically."

The size of a total network is irrelevant until you start randomly connecting nodes.


At the moment "no one" is on mastodon. The folk there are the few, and are likely a self-selecting group that are resistant to spam or scams. Therefore you don't see (much) spam or scams there.

Of course, should it become popular (side note: it won't), such that my mom and her friends are on it, then the spammers and scammers will come too. And since my mom is in my social graph, a lot of that will become visible to me.

Enjoy Mastodon now. The quality is high because the group is small and the barrier to entry is high. Hope it never catches on, because all "forums" become crap when the eternal September arrives.


Mastodon is perfect for affirming your worldview and strengthening your social bubble, because instance rules are intolerant of dissenting opinions.


There is always NOSTR. Over there you follow whoever you wish, without such artificial walled gardens.

Tip: if someone is trolling you, they can also reply to your posts with no chance of you stopping them. No perfect solution exists, I guess.


You are correct that since nostr is censorship resistant, you can't really prevent someone from posting something, but you can prevent being exposed to it on your side. If it's a single nostr account (npub) sending you something you don't want, then you can block or mute them (the blocking is done in your app on your device). If they try attacking you at scale, then you can rely on web of trust (i.e. only allow content from people you actually follow, and 2nd degree) - this is now often the default.
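The mute-plus-web-of-trust filtering described above can be sketched in a few lines. Everything here (function name, toy follow graph) is illustrative, not any particular nostr client's API; the point is that the filtering happens entirely client-side:

```python
# Client-side web-of-trust filter: accept a note only if its author is
# not muted and is within two hops of the accounts we follow.

def allowed(author: str, my_follows: set[str],
            follows_of: dict[str, set[str]], muted: set[str]) -> bool:
    if author in muted:
        return False
    if author in my_follows:                     # 1st degree: direct follow
        return True
    # 2nd degree: someone I follow follows the author
    return any(author in follows_of.get(f, set()) for f in my_follows)

follows_of = {"alice": {"bob"}, "bob": {"carol"}}
me = {"alice"}
print(allowed("alice", me, follows_of, set()))  # direct follow -> True
print(allowed("bob", me, follows_of, set()))    # friend-of-friend -> True
print(allowed("carol", me, follows_of, set()))  # 3rd degree -> False
```

A spammer attacking at scale then has to get someone inside your two-hop neighborhood to follow them, which is much harder than just posting.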


That works for our own account, to avoid seeing the texts; it doesn't prevent the troll from still posting replies to our posts.

That said, that is an exotic situation. I'm a big fan of NOSTR overall; all my recent hobby projects used npub and nsec. The simplicity and power of that combination is remarkable. No more emails, no more servers, no more passwords.


Because everyone knows, Twitter and Facebook have never arbitrarily enforced moderation on political topics they consider distasteful.


Yet. There are lots of signs spam is coming to Mastodon, and there is real concern among a fair number of people who are there. Anyone with a lot of followers will be tagged often by spam (if you tag someone, all their followers will see your post).


The simplest explanation for this would be that spammers are not targeting Mastodon.


As someone who uses Mastodon, I can assure you that spammers do target Mastodon. So far it is only a few, though, so human moderators are able to keep up. I doubt that will last long.


Mastodon looks like a barely used social network instead.


Not if I'm only seeing updates from the people I actually know and explicitly connected to on the social graph.

The current problem exists because the content is chosen algorithmically.


No. Even then. You may know assholes. User accounts may be compromised. Users may have tolerances for gore that you don't share.

Not gotchas, I’m not arguing for the sake of it, but these are pretty common situations.

I always urge people to volunteer as mods for a bit.

At least you may see a different way to approach things, or else you might be able to articulate better the reasons the rule can't be followed.


Wouldn't a less draconian solution be to hide the link, requiring the user to click through a warning? [This link has been hidden because it points to (potential malware / sexually explicit content / graphically violent content / audio of a loud Brazilian orgasm / an image that has nothing to do with goats / etc). Type "I understand" here ________ to reveal the link.]

You get the benefits of striving to warn users, without the downsides of it being abusive, or seen as abusive.


It’s not a bad option, and there may be some research that suggests this will reduce friction between mod teams and users.

If I were to build this… well first I would have to ensure no link shorteners, then I would need a list of known tropes and memes, and a way to add them to the list over time.

This should get me about 30% of the way there. Next, even if I ignore adversaries, I would still have to contend with links which have never been seen before.

So for these links, someone would have to be the sacrificial lamb and go through it to see what’s on the other side. Ideally this would be someone on the mod team, but there can never be enough mods to handle volume.

I guess we're at the mod-coverage problem. Take volunteer mods: it's very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there's a page of reports.

That is IF you get reports. People click on a malware infection, but aren’t aware of it, so they don’t report. Or they encounter goats, and just quit the site, without caring to report.

I’m actually pulling my punches here, because many issues, eg. adversarial behavior, just nullify any action you take. People could decide to say that you are applying the label incorrectly, and that the label itself is censorship.

This also assumes that you can get engineering resources applied, and it's amazing if you can get their attention. All the grizzled T&S folk I know develop very good mediating and diplomatic skills just to survive.

That's why I really do urge people to get into mod teams, so that the work gets understood by normal people. The internet is banging into the hard limits of our older free-speech ideas, and people are constantly taking advantage of blind spots among the citizenry.


> I guess we're at the mod-coverage problem. Take volunteer mods: it's very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there's a page of reports.

When I consider my colleagues in the same department: they have very different preferred work hours (one colleague would even love to work from 11 pm to 7 am, and then go to sleep, if he were allowed to). If you ensure that you have both larks and night owls on your (volunteer) moderation team, this problem should be mitigated.
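The lark/night-owl idea reduces to a simple coverage check. A toy sketch (roster data invented) that verifies a set of shifts leaves no hour of the day unmoderated:

```python
# Check that a volunteer roster covers all 24 hours, so no bad link
# sits unreviewed for 8 hours while the whole team sleeps.

def covered_hours(shifts: list[tuple[int, int]]) -> set[int]:
    hours: set[int] = set()
    for start, end in shifts:                 # end may wrap past midnight
        span = (end - start) % 24 or 24       # (h, h) means a full day
        hours.update((start + h) % 24 for h in range(span))
    return hours

roster = [(6, 14), (13, 21), (20, 7)]         # lark, evening, night owl
gaps = set(range(24)) - covered_hours(roster)
print(sorted(gaps))                            # empty list -> full coverage
```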


Then this comes back to size of the network. HN for example is small enough that we have just a few moderators here and it works.

But once the network grows to a large size it requires a lot of moderators and you start running into problems of moderation quality over large groups of people.

This is a difficult and unsolved problem.


I admit that ensuring consistent moderation quality is a harder problem than moderation coverage (the sleep-pattern ;-) problem).

Nevertheless, I do believe that there do exist at least partial solutions for this problem, and a lot of problems concerning moderation quality are in my opinion actually self-inflicted by the companies:

I see the central issue as this: the companies have deeply inconsistent goals about what they want vs. don't want on their websites. Also, even if there is some consistency, they commonly don't clearly communicate these boundaries to the users (often for "political" or reputation reasons).

Keeping this in mind, I claim that all of the following strategies can work (but also each one will infuriate at least one specific group of users, which you will thus indirectly pressure to leave your platform), and have (successfully) been used by various platforms:

1. Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even though "one side may be clearly right"). This will, of course, infuriate users who are on the "free speech" side. Also people who have a "currently politically accepted" stance on the controversial topic will be angry that they are not allowed to post about their "right" opinion on this topic, which is a central part of their life.

2. Only allow arguments for one side of some controversial topics ("taking a stance"): this will infuriate people who are in the other camp, or are on the free speech side. Also consider that for a lot of highly controversial topics, which side is "right" can change every few years "when the political wind changes direction". The infuriated users likely won't come back.

3. Mostly allow free speech, but strongly moderate comments where people post severe insults. This needs moderators who are highly trusted by the users. Very commonly, moderators are more tolerant of insults from one side than from the other (or consider comments that are insulting, but within their Overton window, to be acceptable). As a platform, you have to give such moderators clear warnings, or even get rid of them.

While this (if done correctly) will pacify many people who are on the "free speech" side, be aware that option 3 likely leads to a platform with "more heated" and "controversial" discussions, which people who are more on the "sensitive" and "nice" side likely won't like. Also, advertisers are often not fond of an environment with "heated" and "controversial" discussions (even if the platform's users actually like them).


>Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even though "one side may be clearly right").

Yup. One of my favored options, if you are running your own community. There are some topics that just increase conflict and are unresolvable without very active referee work. (Religion, Politics, Sex, Identity)

2) This is fine? Ah, you are considering a platform like Meta, which has to give space to everyone. Don't know on this one; too many conflicting ways this can go.

3) One thing not discussed enough is how moderating affects mods. Your experience is alien to what most users go through, since you see the 1-3% of crap others don't see. Mental health is a genuine issue for mods, with PTSD being a real risk if you are on one of the gore/child-porn queues.

These options are, to a degree, discussed and being considered. At the risk of being a broken record, more "normal" users need to see the other side of community running.

There are MANY issues with the layman's idea of free speech; it's hitting real limits when it comes to online spaces and the free-for-all meeting of minds we have going on.

There are some amazing things that come out of it, like people learning entirely new dance moves, foods, or ideas. The dark parts need actual engagement, and need more people in threads like this who can chime in with their experiences and get others down into the weeds and problem solving.

I really believe that we will have to come up with a new agreement on what is "ok" when it comes to speech, and part of it is going to be realizing that we want free speech because it enables a fair marketplace of ideas. Or something else. I would rather it happen from the ground up than top down.


> Ah, you are considering a platform like Meta, who has to give space to everyone.

This is what I, at least, focused on, since:

- Facebook is the platform that the discussed article is about

- in https://news.ycombinator.com/item?id=42852441 pixl97 wrote:

"Then this comes back to size of the network. HN for example is small enough that we have just a few moderators here and it works.

But once the network grows to a large size it requires a lot of moderators and you start running into problems of moderation quality over large groups of people."


As you said, consistent moderation is different from coverage. Coverage will matter for smaller teams.

There's a better alternative to all of these solutions in terms of consistency: COPE was released recently, and it's basically a lightweight LLM trained on applying policy to content. In theory it can be used to handle both the consistency and the coverage issues. It's beta, though, and needs to be tested en masse.

Eh.. let me find a link. https://huggingface.co/zentropi-ai/cope-a-9b?ref=everythingi...

I’ve had a chance to play with it. It has potential, and even being 70% good is a great thing here.

It doesn't resolve the free-speech issue, but it can work towards the consistency and rule-clarity issues.
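The interesting part of a policy-applying model is the interface: rules in, violations out. Here is a minimal stand-in sketch where a trivial keyword matcher plays the role of the model (the policy names and terms are all invented; a real deployment would swap the matcher for a call to something like the COPE model above):

```python
# Hypothetical moderation policy: rule name -> trigger terms.
POLICY = {
    "no-doxxing": ("home address", "phone number"),
    "no-spam": ("buy followers", "crypto giveaway"),
}

def apply_policy(text: str) -> list[str]:
    """Return the policy rules a post violates. The keyword match here
    is a placeholder for an LLM classifier; the interface is the point."""
    lower = text.lower()
    return [rule for rule, terms in POLICY.items()
            if any(term in lower for term in terms)]

print(apply_policy("posting his home address now"))  # ['no-doxxing']
print(apply_policy("totally fine post"))             # []
```

Whatever sits behind `apply_policy`, the win for consistency is that the same rules get applied the same way at 3 am as at 3 pm, with humans reviewing the model's calls rather than the raw queue.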

I will admit I’ve strayed from the original point at this stage though


Lord do I wish that were true. The main reason I left Facebook was less the algorithmic content I was getting from strangers, and more the political bile that my increasingly fanatical extended family and past acquaintances chose to write.


You would be surprised at the amount of crap that exists and the amount of malware that posts to fb


When you browse without a Pihole and a blocker, it does.


...have you seen the internet in the last 30 years? That's exactly what remains.


You should know that this sort of rhetoric is both

a) silly, because... it's not true. Spam, phishing attempts, illegal content - all of this should be removed.

b) more damaging to whatever you're advocating for than you realize. You want a free web? So do I. But I'm not going to go around saying stuff like "all users should be able to post any URL at any time" and calling moderation actions "utterly despicable"


Colcannon can keep you going forever.


A la Star Trek: Deep Space Nine, https://youtu.be/bPBzj90Su8A?feature=shared


What model truck do you have?


A 1500, with a 153.5 inch wheelbase

