
To me the "web of trust" element frankly seems like the only viable solution. And in fact, it's almost here already: https://playsafeid.com/

I predict that Hacker News in particular will dislike using facial recognition technology to enable permanent ban-hammers, but frankly this neatly solves 95% of the problem in a simple, intuitive way. The approach has the capacity to revitalize entire genres, and there's lots of cool stuff you could potentially implement once you can guarantee that one account = one person.


You know, security is a nebulous concept until it suddenly isn't. I live in a country neighboring Russia. Russian infiltration, sabotage, and perhaps large-scale political assassination by means of autonomous drones (like the Ukrainian operation "Spiderweb") is a very real and frankly not unrealistic worry of mine. This is in addition to the unfortunate reality of hybrid warfare, where an uneducated populace that gets its news from TikTok is a very real security risk, one that has very nearly crashed immature European democracies. And arguably it has already succeeded in crashing the US.

In practice, encrypted messaging, and more broadly the unregulated, anonymous nature of the internet, is THE technology that enables this. Ukrainian refugees are essentially indistinguishable from Russian operatives and pose a very real security risk. The loss of the US as a reliable ally, which in practice is the new reality, is felt here in a very real way.

I think this point is largely missed by Hacker News. I am legitimately afraid that Russia might assassinate elected leaders, invade, and embroil my own country in a war that might lead to my death. To be honest, my worries are a bit overblown in my particular case; it is very unlikely that this will happen to my country. But if I lived in Poland, they wouldn't be.

I raise this point in response to your quotation marks around "security". European countries have very real, and very pressing security concerns.


Thanks for the excellent reply/comment. Having supported the US IC for the majority of my career, I'm quite aware of the threats to and from nation states, and of their behaviors.

It's easy to justify snooping. The issue (for me) is when the snooping unjustifiably infringes on my personal privacy. Governments will argue that they don't know that I'm not a threat, so they must surveil me. Unfortunately, those who are doing the surveilling can also be a threat to the people, even when the people are behaving completely in compliance with the law. You need only look at some of the recent revelations in the US press for examples.

Knowledge is power, and power corrupts.


I would argue that your reasoning is simplistic and does not account for observed geographical variation.

Japan does not have an obesity epidemic. The US has an extreme obesity epidemic. There does not seem to be any good genetic explanation; there might be culturally based behavioral explanations, but Japanese communities in the US are also more obese than ones in Japan (although less obese than the general US population).

So it is clearly possible for a society to have plenty of easily accessible, delicious food, with no major government restrictions in place, and still not have an obesity crisis. There seems to be some particularly harmful environmental and/or cultural factor in the US driving its abnormally bad epidemic, and no intervention before GLP-1 agonists managed to reverse the trend (not that there have been many). There are a lot of theories on the topic, but no clear scientific consensus beyond "all very sweet things are probably maybe bad".

PS: I am aware that Japan's "fat tax" exists and is technically a form of government restriction, but I would assume that it plays a relatively minor role overall.


I’m not sure cross-cultural comparisons are useful here. One big difference is that in Japan, friends and family will aggressively police your weight and the amount you eat, with the ideal set far below health standards for normal weight. It’s not clear how you could operationalize that into an intervention, even if you wanted to.


They’re starting to get fatter in Japan too.

I think it’s due to the increasing prevalence of dairy.


I think one major lesson of the history of the internet is that very few people actually care about privacy in a holistic, structural way. People do not want their nudes, browsing history, and STD results to be seen by their boss, but that desire for privacy does not translate into guarding their information from Google or the government. And frankly, this is actually quite rational overall, because Google is in fact very unlikely to leak this information to your boss, and if it did, that would more likely result in a legal payday than in any direct social cost.

Hacker News obviously suffers from severe selection bias in this regard, but for the general public I doubt even repeated security breaches of vibe-coded apps will move the needle much on the perception of LLM-coded apps, which means they will still sell, which means it doesn't matter. I doubt most people will even pick up on the connection. And frankly, most security breaches have no major consequences anyway, in the grand scheme of things. Perhaps the public consciousness will harden a bit when it comes to uploading nudes to "CheckYourBodyFat", but the truly disastrous stuff like bank access is mostly behind 2FA layers already.


The neural network in the retina actually pre-processes visual information into something akin to "tokens": basic shapes that are probably somewhat evolutionarily conserved. I wonder if we could somehow mimic those for tokenization purposes. Most likely there's someone out there already trying.

(Source: "The Mind Is Flat" by Nick Chater)


It's also easy to spot: when you are tired, you might misrecognize objects. I caught myself doing this on long road trips.


AFAIK this is actually a separate mechanism, part of the visual cortex rather than the retina. Essentially, recognizing even a single object requires the complete attention of pretty much your entire brain at the moment of recognition.

What I am referring to is a much more basic form of shape recognition that goes on at the level of the neural networks in the retina.


The subtle "wiggle" animation that the second hand makes after moving doesn't fire when it hits 12. Literally unwatchable.


In its defence, the code actually specifically calls that edge case out and justifies it:

    // Calculate rotations
    // We use a cumulative calculation logic mentally, but here simple degrees work because of the transition reset trick or specific animation style.
    // To prevent the "spin back" glitch at 360->0, we can use a simple tick without transition for the wrap-around,
    // but for simplicity in this specific React rendering, we will stick to standard 0-360 degrees.
    // A robust way to handle the spin-back on the second hand is to accumulate degrees, but standard clock widgets often reset.
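
For what it's worth, the "accumulate degrees" approach those comments allude to is simple enough. A minimal sketch of how it could look (my own guess in React/TypeScript, not the generated code; the component name and styling are made up):

    // Sketch only: keep a monotonically increasing rotation so the hand never
    // animates 354° -> 0° (the "spin back" the comments mention).
    import { useEffect, useState } from "react";

    function SecondHand() {
      // Degrees accumulated since mount; never reset, so the CSS transition
      // always rotates forward (354° -> 360° -> 366° -> ...).
      const [deg, setDeg] = useState(() => new Date().getSeconds() * 6);

      useEffect(() => {
        const id = setInterval(() => setDeg((d) => d + 6), 1000);
        return () => clearInterval(id);
      }, []);

      return (
        <div
          className="second-hand"
          style={{
            transform: `rotate(${deg}deg)`,
            // Overshooting easing gives the "wiggle" on every tick, including 12.
            transition: "transform 0.2s cubic-bezier(0.4, 2.0, 0.6, 1)",
            transformOrigin: "bottom center",
          }}
        />
      );
    }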


The Swiss and German railway clocks actually work the same way and stop for (half a?) second while the minute hand advances.

https://youtu.be/wejbVtj4YR0


Station clocks in Switzerland receive a signal from a master clock each minute that advances the minute hand; the second hand moves completely independently of the minute hand. This allows them to sync to the minute.

> The station clocks in Switzerland are synchronised by receiving an electrical impulse from a central master clock at each full minute, advancing the minute hand by one minute. The second hand is driven by an electrical motor independent of the master clock. It takes only about 58.5 seconds to circle the face; then the hand pauses briefly at the top of the clock. It starts a new rotation as soon as it receives the next minute impulse from the master clock.[3] This movement is emulated in some of the licensed timepieces made by Mondaine.

https://en.wikipedia.org/wiki/Swiss_railway_clock
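
Emulating that movement in software is straightforward if you just derive the angle from wall-clock time. A rough sketch (my own assumption; the real clocks are driven by the master-clock impulse rather than by computing the time):

    // Mondaine-style second hand, per the Wikipedia description above:
    // a full sweep takes ~58.5 s, then the hand waits at 12 for the next minute.
    const SWEEP_SECONDS = 58.5;

    function secondHandAngle(date: Date = new Date()): number {
      const secondsIntoMinute = date.getSeconds() + date.getMilliseconds() / 1000;
      if (secondsIntoMinute >= SWEEP_SECONDS) {
        return 0; // pause at the top, "waiting for the minute impulse"
      }
      return (secondsIntoMinute / SWEEP_SECONDS) * 360; // map 0..58.5 s onto 0..360°
    }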


The video shows closer to 2 seconds for it to finally throw itself over in what could only be described as a "Thunk". I figured it would be a little smoother.


Fixed with the prompt "Second hand doesn't shake when it lands on 12, fix it." in 131 seconds, with a bunch of useState()-s and a useEffect().


I think OP might be referring to circumcision.

And just as a small aside, not really related to OP's points, I'd just like to point out that nature pretty consistently tampers with everyone's kids' DNA, which quite regularly leads to absolute nightmare fuel. Whatever those unknowable nightmares may be, they have to be pretty gruesome in order to compete.


I'm working on a PhD concerning LLMs and somatic medicine, as an MD, and I must admit that my perspective is the complete opposite.

Medical care, at the end of the day, has nothing to do with having a license or not. It's about making the correct diagnosis in order to administer the correct treatment. Reality does not care about who (or what) made a diagnosis, or how the antibiotic you take was prescribed. You either have the diagnosis or you do not. The antibiotic helps, or it does not.

Doing this in practice is costly and complicated, which is why society has doctors. But the only thing that actually matters is making the correct decision. And actually, when you test LLMs (in particular o3/GPT-5, and probably Gemini 2.5), they are SUPERIOR to individual doctors in terms of medical decision-making, at least on benchmarks. That does not mean they are superior to an entire medical system, or to a skillful attending in a particular speciality, but it does seem to imply that they are far from a bad source of medical information. Just as LLMs are good at writing boilerplate code, they are good at boilerplate medical decisions, and there is so much medical boilerplate that this skill alone makes them superior to most human doctors. There was one study which tested LLM-assisted doctors (I think it was o3) vs. LLMs alone (and doctors alone) on a set of cases, and the unassisted LLM did BETTER than the doctors, assisted or not.

And so all this medicolegal pearl-clutching about how LLMs should not provide medical advice is entirely unfounded when you look at the actual evidence. In fact, the evidence seems to suggest that you should ignore the doctor and listen to ChatGPT instead.

And frankly, as a doctor, it really grinds my gears when anyone implies that medical decisions should be a protected domain for our benefit. The point of medicine is not to employ doctors. The point of medicine is to cure patients, by whatever means best serves them. If LLMs take our jobs because they do a better job than we do, that is a good thing. It is an especially good thing if the general, widely available LLM is the one that does so, and not some expensively licensed "HippocraticGPT-certified" model. Can you imagine anything more frustrating, as a poor kid in the boonies of Bangladesh trying to understand why your mother is sick, than getting told "As a language model I cannot dispense medical advice, please consult your closest healthcare professional"?

Medical success is not measured in employment, profits, or legal responsibilities. It is measured in reduced mortality. The means to achieve this is completely irrelevant.

Of course, mental health is a little bit different, and much more nebulous overall. However, from the perspective of someone on the somatic front, overregulation of LLMs is unnecessary, and in fact unethical. On average, an LLM will dispense better medical advice than an average person with access to Google, which is what it was competing with to begin with. It is an insult to personal liberty and to the Hippocratic oath to argue that this should be taken away simply because of some medicolegal BS.


I appreciate your perspective. May I contact you? You are invited to send me an email, my Gmail username is the same as my HN username.


To be fair, weighing the pre-test probability of a patient having a certain disease against the sensitivity/specificity of a test IS part of the ideal practice of medicine, although how important it is in practice varies somewhat between specialities. In rheumatology, for instance, it is front and center to how you make diagnoses. I was in primary care for a short while myself, and on more than one occasion deeply regretted ordering certain rheumatological screening panels (which you can end up with without asking, when looking for certain antibodies).

Explaining to a parent that their child did not, in fact, have a rare, deadly, and incurable multi-system disorder, even though an antibody that is 98% specific for it showed up on an assay we ordered for an entirely different reason, is the kind of thing that's hard to do without understanding it yourself.
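
To make that concrete, here's a back-of-the-envelope Bayes calculation. The prevalence and sensitivity figures are made up purely for illustration; only the 98% specificity comes from my example above:

    // Positive predictive value via Bayes' theorem, with illustrative numbers.
    const prevalence = 0.0001;  // hypothetical: 1 in 10,000 children
    const sensitivity = 0.95;   // hypothetical
    const specificity = 0.98;   // as in the example above

    const truePositives = prevalence * sensitivity;
    const falsePositives = (1 - prevalence) * (1 - specificity);
    const ppv = truePositives / (truePositives + falsePositives);

    console.log(`P(disease | positive test) ≈ ${(ppv * 100).toFixed(2)}%`);
    // ≈ 0.47%: the positive result is still overwhelmingly likely to be a false alarm.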


Bayesian thinking isn’t about p-values and doesn’t need to be presented that way.

If you use the Centor criteria before testing for strep, is that worse than getting out a piece of paper and researching background population prevalence?

The OP is being dogmatic about doctors needing to know the things he does, which is obviously silly.

Edit - but yes, I agree that we should think about sensitivity and specificity. I just don't think you need to be a statistician; you just need a helpful script and resources for patients who wish to know more.

