
Really? IMO it went about as well as I expected given the audience.

I think this might be a you problem, because both Medium and Substack have allowed randoms on the Internet to post from day 1. There aren't any requirements; anyone can do it.

I'm gonna chip in and say that yes, while they allow randos to post to the same extent, I imagine the average person views a blog post/article as more legitimate when it has the branding of Substack or Medium attached to it than when it's on someone's unbranded personal website.

Funny… I’ve often felt the exact opposite.

Medium articles often look janky; if you’ve got a personal website you’ve at least figured out how to get that working, and if it looks good, that’s a positive signal!

Think myname@gmail.com vs me@myname.com


He was sanctioned for being an ICC judge; the fact that he's French didn't have an impact. He's also far from being the only one.

In fact, the Executive Order that imposed these sanctions is very broad and gives "immunity" to pretty much everyone affiliated with the US. If the ICC tries to prosecute anyone from NATO or anyone from a "major non-NATO ally" (Australia, Egypt, Israel, Japan, Jordan, Argentina, the Republic of Korea, and New Zealand), the current administration will put sanctions on those judges.

So there are 40 or so countries whose governments are effectively "immune" from being prosecuted by the ICC, but the president has the authority to add literally any country to that list.


I'm looking forward to the reaction from the public when he adds Russia to that list.

It will, no doubt, be every bit as effective as the "thoughts and prayers" that follow the weekly school shootings that no other nation on earth has.


So about as effective as the ICC in the first place.

To be a bit more precise:

Asking people for their social media accounts is not new; it's been part of the visa application process since Trump's first term.

What's new is that now, on top of that, they're asking for those social media accounts to be made public.


It has been asked in the ESTA for a long, long time, afaik even before Trump.

But can we please remember that there is a huge, huge difference between being asked to provide it optionally and being required to provide it.


Okay, let me be even more clear then: you are required to fill out every social media handle and every phone number you've used for the past 5 years as part of the DS-160 form (AKA the online nonimmigrant visa application for countries not covered by ESTA).

That's been the case since 2019. Before that, asking people to hand that info over even voluntarily was widely seen as an overreach. Now it's required for countries not covered by ESTA and still voluntary for ESTA countries.


In serious news organizations, absolutely. Journalists write the stories, fact checkers make sure every claim is backed up by evidence before it gets published.

To describe their job crudely, they're there as a way of reducing the odds of a lawsuit. At one of my previous jobs, there was a whole fact-checking team that wrote no stories themselves, but every story had to be run through them as part of the publishing pipeline.


I see errors all the time in mainstream media. Sometimes these seem to come from some kind of info file that they raid every time they have to look up a subject, so the same information gets quoted again and again (even if inaccurate). A lot of things in life are subjective and open to interpretation, especially when it comes to politics and culture.

Mainstream != serious. In fact it's quite the opposite, as serious news organizations cannot match the output of mainstream news. Even one story per month is a success for many.

In serious news organizations, there's quite a few steps between a journalist writing a draft and that draft being published. Fact-checking is one of them, having a competent "boss" (called an editor) is another.

Most news orgs have both a "serious" department and a "publish as much as possible" department, with far different requirements. In general, if you're publishing something along the lines of "X said Y", you don't need a rigorous process. If you're doing an investigation in which you're accusing someone of doing something illegal, then you need a far more rigorous process, otherwise you'd be sued out of existence pretty quickly.

Of course, having a rigorous process doesn't mean you won't get sued at all, but there's a term for that: SLAPP (strategic lawsuit against public participation). In those lawsuits, the goal is not to prove the story wrong, but simply to waste the news org's resources on defending its reporting in front of a judge instead of doing its job.


I'll use a non-American example. Such outlets present themselves as more serious than they are, particularly the BBC. The BBC presents itself as neutral, for example, when it is no such thing in many areas. When it comes to British foreign policy or the Royal Family, its biases are clear. The BBC tried to bury the Prince Andrew story repeatedly.

I pointed out to someone that the BBC was institutionally biased against Scottish independence, just by the nature of its funding. The BBC is funded by the TV licence, and if Scotland became independent, then the BBC would immediately lose 10% of its potential funding.

Other media outlets are the same. The question is who owns them, and how are they funded. State broadcasters have to kowtow to governments, or they can face trouble (as happened in Israel a few years ago when Netanyahu shut theirs down). Ones owned by major media conglomerates and corporations will reflect the interests of their owners. We have seen unions play less and less of a role in the political arena, probably partly because large profit-making corporations don't want them to be publicised.


I don't know what any of that has to do with what I explained to you. Two completely separate topics; I'm not here to indulge every gripe you have with the news.

You wanted to discuss so-called fact-checking, and we did discuss fact-checking. It is not always about facts, but about projecting a narrative.

No "indulgence" whatsoever.


I think what you wanted to do is to rant.

Not what I was thinking, but thanks anyway, pal!

> BBC

> State broadcasters have to kowtow to governments, or they can face trouble

Very good example: the BBC has criticised the government many times, and has even run embarrassing investigations and fought in court for the right to publish them. A very good one is them fighting like hell to publish the fact that MI5 was shielding an informant who is a pedophile. And they got to publish it, and to say directly that MI5 had tried to stop them via the courts on the grounds of "national security", but the courts disagreed.

So yeah, no.

> Ones owned by major media conglomerates and corporations will reflect the interests of their owners

Depends. Le Monde is a French left-wing newspaper (top 2 in France alongside the right-wing Le Figaro), which is majority owned by a holding company majority owned by one of France's premier tech billionaires (Xavier Niel). But everything is structured in such a way that he barely has any control (he can't even sell the holding company without approval from the remaining owner of Le Monde, a representative body of the journalists, staff and even readers). It has full editorial freedom.


The people doing that job are not the ones being targeted here.

> It directs consular officers to "thoroughly explore" the work histories of applicants, both new and returning, by reviewing their resumes, LinkedIn profiles, and appearances in media articles for activities including combatting misinformation, disinformation or false narratives, fact-checking, content moderation, compliance, and trust and safety.

Not only are they targeted, but so are many more.


You're quoting the NPR article, which misleadingly conflates the people we're talking about (who work for news agencies to verify their stories before publication) with social-media moderators. The State Department directive itself, if we can believe the Reuters reporting at https://www.reuters.com/world/us/trump-administration-orders..., is fairly clear that it's only talking about the latter.

How do you know?

Please link it if you have found it, because as far as I understand this story, the directive was sent out as an internal memo, and therefore neither you nor I can simply read it. Plus, the Reuters story you've linked has an almost identical paragraph:

> The cable, sent to all U.S. missions on December 2, orders U.S. consular officers to review resumes or LinkedIn profiles of H-1B applicants - and family members who would be traveling with them - to see if they have worked in areas that include activities such as misinformation, disinformation, content moderation, fact-checking, compliance and online safety, among others.


No, those with more money than you can now push even more slop than they could before.

You cannot compete with that.


Prediction: it won't.

You can't fit every security consideration into the context window.


90% of human devs are not aware of every security consideration.

90% of human devs can fit more than 3-5 files into their short-term memory.

They also know not to, say, temporarily disable auth to be able to look at the changes they've made on a page hidden behind auth, which is what I observed Gemini 3 Pro doing just yesterday.


Ok, and that’s your prediction for 2 years from now? It’d be quite remarkable if humans had a bigger short term memory than LLMs in 2 years. Or that the kind of dumb security mistakes LLMs make today don’t trigger major, rapid improvements.

Do you understand what the term "context window" means? Have you ever tried using an LLM to program anything even remotely complex? Have you observed how drastically the quality of the output degrades the longer the conversation gets?

That's what makes it bad at security. It cannot comprehend more than a floppy disk's worth of data before it reverts to absolute gibberish.


You may want to read about agentic AI; you can, for instance, call an LLM multiple times with a different security consideration each time.
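
In sketch form, that pattern is just a loop over prompts. A minimal illustration, where call_llm is a hypothetical stand-in for whatever model API you actually use:

    # Minimal sketch of "one LLM pass per security consideration".
    # call_llm is a hypothetical placeholder, not a real library call.
    CONSIDERATIONS = ["injection", "authn/authz", "secrets in code", "input validation"]

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model API here")

    def security_review(code: str) -> dict[str, str]:
        findings = {}
        for topic in CONSIDERATIONS:
            # Each call starts from a fresh context, focused on one topic.
            findings[topic] = call_llm(f"Review this code only for {topic} issues:\n\n{code}")
        return findings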

There are about a dozen workarounds for context limits (agents being one of them, MCP servers another, AGENTS.md a third), but none of them actually solves the issue of a context window being so small that it's useless for anything even remotely complex.

Let's imagine a codebase that can fit onto a revolutionary piece of technology known as a floppy disk. As we all know, a floppy disk can store <2 megabytes of data. But 100k tokens is only about 400 kilobytes. So, to process the whole codebase that fits on a floppy disk, you need 5 agents, plus a sixth "parent" process that those 5 agents report to.

Those five agents can report "no security issues found" in their own little chunk of the codebase to the parent process, and that parent process will still be none the wiser about how those different chunks interact with each other.
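
A quick back-of-the-envelope check of that arithmetic (same rough assumptions as above, not measurements: ~4 bytes per token, a 100k-token window, a just-under-2 MB codebase):

    CODEBASE_BYTES = 2_000_000    # a codebase that just about fills a floppy disk
    BYTES_PER_TOKEN = 4           # rough average: one token ~ 4 characters
    CONTEXT_TOKENS = 100_000      # assumed per-agent context window

    context_bytes = CONTEXT_TOKENS * BYTES_PER_TOKEN   # ~400 KB per agent
    workers = -(-CODEBASE_BYTES // context_bytes)      # ceiling division: 5

    print(f"{workers} worker agents + 1 parent that only ever sees their summaries")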


In one instance it could not even describe why a unit test is bad (asserting that true is equal to true), which doesn't even require context or multi-file reasoning.

It's almost as if it has additional problems beyond the context limits :)


You may want to try using it; anecdotes often differ from theories, especially when the theories are being sold to you for profit. It takes maybe a few days to see a pattern of it ignoring simple instructions even when the context is clean. Or one prompt fixes one issue and causes new ones; rinse and repeat. It requires human guidance in practice.

Steelman: LLMs aren't a tool, they're fuzzy automation.

And what keeps security problems from making it into prod in the real world?

Code review, testing, static and dynamic code scanning, and fuzzing.

Why aren't these things done?

Because there isn't enough people-time and expertise.

So in order for LLMs to improve security, they need to be able to improve our ability to do one of: code review, testing, static and dynamic code scanning, and fuzzing.

It seems very unlikely those forms of automation won't be improved in the near future by even the dumbest form of LLMs.

And if you offered CISOs a "pay to scan" service that actually worked cross-language and -platform (in contrast to most "only supported languages" scanners), they'd jump at it.
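
As a sketch, "an LLM as one more scanner in the pipeline" might look like the following; scan_with_llm is hypothetical, and it would sit alongside the deterministic tools rather than replace them:

    # Hypothetical sketch: an LLM pass as one stage of a review pipeline,
    # next to (not instead of) static analysis, tests, and fuzzing.
    import subprocess

    def scan_with_llm(diff: str) -> list[str]:
        # Placeholder: send the diff to a model and parse its flagged findings.
        raise NotImplementedError("plug in your model API here")

    def review_pipeline(base: str = "main") -> list[str]:
        diff = subprocess.run(
            ["git", "diff", base], capture_output=True, text=True, check=True
        ).stdout
        findings = scan_with_llm(diff)  # the fuzzy, reviewer-style pass
        # ...deterministic passes (linters, scanners, test suite) would run here...
        return findings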


There is an argument here that the LLM is a tool that can multiply either the addition or the removal of defects, depending on how it is wielded.

I think the father figure of a developer who was bitten by a radioactive spider once made a similar quip.

And that buys you what, exactly? Your point is 100% correct, and it's why LLMs are nowhere near able to manage or build complete simple systems, let alone complex ones.

Why? Context. LLMs, today, go off the rails fairly easily. As I've mentioned in prior comments, I've been working a lot with different models and agentic coding systems. When a codebase starts to approach 5k lines (building the entire codebase with an agent), things start to get very rough.

First of all, the agent cannot wrap its context (it has no brain) around the code in a complete way. Even when everything is very well documented as part of the build and outlined so the LLM has indicators of where to pull in code, it almost always cannot keep schemas, requirements, or patterns in line. I've had instances where APIs under development were supposed to follow a specific schema, require specific tests, and abide by specific constraints for integration. Almost always, in that relatively small codebase, the agentic system gets something wrong, but because of sycophancy it gleefully informs me all the work is done and everything is A-OK!

The kicker is that when you show it why and where it's wrong, you're continuously in a loop of burning tokens trying to put the train back on the track. LLMs can't be efficient with new(ish) codebases because they're always having to go look up new documentation, burning through more context beyond what they're targeting to build / update / refactor / etc.

So, sure. You can "call an LLM multiple times". But this is hugely missing the point of how these systems work, because when you actually start to use them you'll find these issues almost immediately.


To add onto this, it is a characteristic of their design to statistically pick things that would be bad choices, because humans do too. It's not more reliable than taking a random person off the street in SF and giving them instructions on what to copy-paste without any context. They might also change unrelated things or get sidetracked when they encounter friction. My point is that when you try to compensate by prompting repeatedly, you are just adding more chances for entropy to leak in, so I am agreeing with you.

> To add onto this, it is a characteristic of their design to statistically pick things that would be bad choices, because humans do too.

Spot on. If we look at pre-LLM "AI" historically, the data sets were much more curated, cleaned, and labeled. Computer vision is a prime example of how AI can easily go off the rails with respect to 1) garbage input data and 2) biased input data. LLMs have both as inputs, in spades and in vast quantities. Has everyone forgotten Google's classification of African American people in images [0]? Or, more hilariously, the fix [1]? Most people I talk to who are using LLMs think the data being strung into these models has been fine-tuned, hand-picked, etc. In some cases, for small models that were explicitly curated, sure. But in the context (no pun intended) of all the popular frontier models: no way in hell.

The one thing I'm really surprised nobody is talking about is the system prompt. Not in the sense of jailbreaking it or even extracting it, but I can't imagine these system prompts aren't accumulating massive tech debt at this point. I'm sure there's band-aid after band-aid of simple fixes to nudge the model in ever-so-slightly different directions based on things that are, ultimately, out of the control of such a large culmination of random data. I can't wait to see how these long-term issues crop up and get duct-taped over with the quick fixes these tech behemoths are becoming known for.

[0] https://www.bbc.com/news/technology-33347866
[1] https://www.theguardian.com/technology/2018/jan/12/google-ra...



PageRank. Everything before PageRank was more like the Yellow Pages than a search engine as we know it today. Google also had a patent on it, so it's not like other people could simply copy it.

Google was also way more minimal (and therefore faster on slow connections) and it raised enough money to operate without ads for years (while its competitors were filled with them).

Not really comparable to today, when you have 3-4 products which are pretty much identical, all operating under a huge loss.


I think it's more of a red flag that they chose a name that's one letter away from a well-known site that sells music samples: https://www.loopmasters.com/

Not a fringe unknown one either, but one with over 20 years of history, now owned by Beatport.


Meh, if they were that worried about their brand, they should have bought up the variants of their domain plus TLDs. Otherwise, they can't possibly be that concerned about their trademark.
