jebronie's comments

It's either okay or not.

The answer is that it's not okay and never was. Do you really think you're pulling a gotcha here?

Photoshopping nudes of your coworkers has always been frowned upon and would get you fired if the right people heard about it. It's just that most people don't have the skill to do it, so it never became a common enough issue for the zeitgeist to care.


I am not trying to pull a gotcha and I made no claim that it is okay or not okay. Don't suggest otherwise. I also wasn't talking about coworkers or any other particular group.

My argument is that it is either okay or not, regardless of the tools used.


No, you are missing the aspect of distribution.

I'm not missing anything. An act is either immoral or not.

Creating CSAM or non-consensual sexually explicit images of others in Photoshop is immoral. If you can't see that, then you need to take an ethics course.

I made no claim that it is okay or not okay. Don't suggest otherwise. My argument is that it is either okay or not, regardless of the tools used.

Let's not mince words: it's called the "piss filter".


This could have been a couple of lines of simple and maintainable PHP.


Too bad I don't write PHP, then.


No. Even including a font from a different host is not allowed under the GDPR, because you are leaking the user's IP to that host. You are poorly informed on this topic.


But the different host IS tracking; that's how they make money from serving "free" fonts. So if what you're saying is true, that's exactly how it should be. When I go to a website, I don't want others involved.
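
Self-hosting sidesteps the whole problem anyway. A minimal sketch in Go (the paths and port are made up) that serves the font files from your own origin, so no third-party host ever sees the visitor's IP:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // ./static/fonts holds the .woff2 files you ship yourself,
        // so the browser never has to contact a third-party font host.
        fonts := http.FileServer(http.Dir("./static/fonts"))
        http.Handle("/fonts/", http.StripPrefix("/fonts/", fonts))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }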


We used to use Subway's proprietary font. We never needed to call a server for that.

Maybe don't build stuff in such a dumb and lazy way?


I don't understand why your post is flagged. You are 100% right. The point of CSRF protection is that *you can't trust the client*. This new header can just be set in curl, if I understand correctly. Unlimited form submissions here I come!


CSRF protection protects the user by preventing random pages on the web from using resources of a target website without the user being aware of it. It only makes sense when serving people using browsers. It is not a defense against curl or script kiddies.


To elaborate/clarify a bit: we defend against curl with normal auth, correct? Be it session cookies or whatever. That plus Origin/Sec-Fetch-Site (and TLS, secure cookies, HSTS) should be reasonably secure, no?


Indeed, you need some form of CSRF protection, but Sec-Fetch-Site is primarily focused on keeping the browser secure, not the server. Having said that, it's nice defence in depth for the server as well, but not strictly required as far as the server is concerned.


I'm confused. In my mind, you only really need to keep the server secure, as that's where the data is. Auth cookies and CSRF protections (e.g. Sec-Fetch-Site) are both used towards protecting the server from invalid requests (not logged in, or not coming from your actual site).

What are you referring to when you talk about keeping the browser secure?


The Sec-Fetch-Site header can't be read or written by JavaScript (or WASM, etc.); cookies (or some other tokens), on the other hand, can be. In most circumstances, allowing JavaScript to access these tokens allows for "user friendly" interfaces where a user can log in using XMLHttpRequest / an API rather than a form on a page. OOB tokens, whether on a one-off auth basis or continuous (e.g. OAuth, TOTP with every request), are more secure, but obviously require more engineering (and come with their own "usability" / "failure mode" trade-offs).
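
To make that concrete, the server-side check this enables looks roughly like the following (a Go sketch; the handler names are made up):

    // Because the browser itself sets Sec-Fetch-Site and scripts can't
    // touch it, the server can trust it more than anything JS can forge.
    func protectedHandler(w http.ResponseWriter, r *http.Request) {
        switch r.Header.Get("Sec-Fetch-Site") {
        case "same-origin", "none":
            // "none" covers direct navigation (address bar, bookmarks).
            serveForm(w, r) // hypothetical handler
        case "":
            // Header absent: an old browser or a non-browser client.
            serveForm(w, r)
        default:
            // "cross-site" / "same-site": likely a forged cross-origin request.
            http.Error(w, "cross-origin request rejected", http.StatusForbidden)
        }
    }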


> The Sec-Fetch-Site header can't be read or written by JavaScript

Perfect. It's not even meant to be, nor does it need to be. The server uses it to validate that the request came from the expected site.

As I and others have said in various comments, you seem to be lost. Nothing you're saying has any relevance to the topic at hand, and, in fact, it is largely wrong.


"Nothing you're saying has any relevance to the topic at hand. And, in fact, is largely wrong."; your confidence in your opinion doesn't make you right.

Prove it.


This is not what this is supposed to protect against, and if you are using http.CrossOriginProtection you don't even need to add any header to the request:

> If neither the Sec-Fetch-Site nor Origin headers are present, then it assumes the request is not coming from a web browser and will always allow the request to proceed.
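
For reference, wiring it up only takes a couple of lines (Go 1.25's net/http, if I'm reading the docs right; the route and port here are just illustrative):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/transfer", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "ok")
        })

        // Rejects requests whose Sec-Fetch-Site is cross-site, or whose
        // Origin disagrees with the request host; requests carrying
        // neither header pass through, exactly as quoted above.
        antiCSRF := http.NewCrossOriginProtection()
        log.Fatal(http.ListenAndServe(":8080", antiCSRF.Handler(mux)))
    }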


Wait, but if those headers are missing, then isn't there a vulnerability if someone is using an old browser and clicks on a malicious link? Do we need to also check the User-Agent or something else?


Exactly, the post talks about this too: older browsers will be vulnerable. This probably affects only a small share of the population, and it gets even smaller if you limit your service to TLSv1.3 only, since browsers old enough to lack these headers mostly predate TLSv1.3 support as well (for this to be useful you of course need to enable HTTPS, otherwise the attacker can just strip the headers from your request).

If you can't afford to do this, you still need to use CSRF tokens.
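
The token dance, for completeness, is just this (a sketch; session storage and form plumbing omitted):

    import (
        "crypto/rand"
        "crypto/subtle"
        "encoding/base64"
    )

    // newCSRFToken returns a random token to stash in the user's
    // session and embed in the form as a hidden field.
    func newCSRFToken() (string, error) {
        b := make([]byte, 32)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        return base64.URLEncoding.EncodeToString(b), nil
    }

    // validCSRFToken compares the submitted field against the
    // session copy in constant time.
    func validCSRFToken(sessionToken, formToken string) bool {
        return subtle.ConstantTimeCompare([]byte(sessionToken), []byte(formToken)) == 1
    }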


I suppose that we could just reject anything that doesn't have these tokens, depending on whether you want to allow curl etc... I might just do that, in fact.
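
Something like this strict gate on every POST would do it (a sketch reusing validCSRFToken from above; sessionTokenFor is a made-up session lookup). Note it also locks out curl unless the script first fetches the form to scrape a token:

    func requireCSRFToken(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Method == http.MethodPost {
                // Reject any submission without a matching token.
                if !validCSRFToken(sessionTokenFor(r), r.FormValue("csrf_token")) {
                    http.Error(w, "missing or invalid CSRF token", http.StatusForbidden)
                    return
                }
            }
            next.ServeHTTP(w, r)
        })
    }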


You are a foreigner.


I just tried to submit a contact form with it. It successfully solved the reCAPTCHA but failed to fill in a required field and got stuck. We're safe.


As someone who's been quite heavily involved with having a brain, I'd advocate for using the test pass rate as a metric for how many tests are passed.


This isn't Reddit.


It's way better than the GitHub thing; in my experience it produces usable PRs.


A blind monkey smashing a keyboard can produce better PRs and PR reviews than GitHub Copilot. I don't get how they managed to make Copilot so bad.

