I don't think your definition of spam matches the one that I understand it to mean. Spam is random email from someone you have not had contact with before firing messages to every address they can find anywhere on the web, the dark web, etc. Or if you ask not to be added to a mailing list and are added anyway. They often use fraudulent tricks to try to get the email through filters, such as fake from addresses.
Spam is not email from legitimate companies with valid contact details that have an opt out that you forgot to click when you signed up with them. That's legitimate marketing emails. You might argue they also shouldn't exist, but they are a different category.
I get plenty of the second from Mailchimp (it's what they do), almost none of the first. Marking the second kind as spam, rather than clicking the unsubscribe link, is dangerous because it teaches your anti-spam filter to reject messages from legitimate companies. You might find that if they need to contact you for a genuine reason, e.g. a receipt for a future transaction, the message is blocked.
> I don't think your definition of spam matches the one that I understand it to mean. Spam is random email from someone you have not had contact with before firing messages to every address they can find anywhere on the web, the dark web, etc. Or if you ask not to be added to a mailing list and are added anyway. They often use fraudulent tricks to try to get the email through filters, such as fake from addresses.
I would disagree with that definition, and wikipedia and multiple dictionaries appear to agree with me; it doesn't matter how many dark patterns the company uses or whether they (claim to) let you opt out after the fact, if the message is unwelcome, it's spam.
> unsolicited usually commercial messages (such as emails, text messages, or Internet postings) sent to a large number of recipients or posted in a large number of places
> I don't think your definition of spam matches the one that I understand it to mean. Spam is random email from someone you have not had contact with before firing messages to every address they can find anywhere on the web, the dark web, etc. Or if you ask not to be added to a mailing list and are added anyway.
I don't get _only_ this from Mailchimp, but I definitely get quite a bit of this from Mailchimp, Sendgrid, and others. I've marked it spam, reported it to them (no response), and continued to receive the emails.
I can be kind of scatterbrained and generally give the benefit of the doubt, but sometimes it's pretty clear that, e.g., I most definitely did not sign up with some accountant in a different country, in a place I've never been to, to receive reminders of tax deadlines that don't apply to me and offers of accounting services I can't use. Or if I somehow did, the signup was deceptive enough that they never received meaningful consent and I'd call it spam anyway.
(And the email they're sending this to is not some easily confused gmail address or a fat finger--it's my own name at my own domain.)
Having valid contact details or an opt out on their sign up form isn't relevant given I never signed up. It's _unsolicited_, _bulk_ email. It's spam.
*Spam is not email from legitimate companies with valid contact details that have an opt out that you forgot to click when you signed up with them. That's legitimate marketing emails. You might argue they also shouldn't exist, but they are a different category.*
No, they’re all spam. It’s just that some spam is significantly worse than others.
Edit:
this just reminded me of an interaction with a customer when I worked at a dialup ISP over 20 years ago. We would routinely get abuse reports about spam coming from our network that would turn out to be a family computer with a virus. We would disable their account until we got ahold of them, and then help them run antivirus or redirect them to a local shop to fix it.
But this one time my boss is like “Hey you wanna pretend you're the email manager? We have an actual spammer sending ads for a local business through our smtp servers”. We were all laughing at the audacity of it, they were sending thousands of the same message out, I think it was for a tackle shop.
When I called the guy to let him know why we disabled his account he immediately got angry at me, I vividly remember him saying “It’s not spam, it’s for a business!!” I explained to him that it doesn’t matter, it’s just as bad, and could get the whole company blacklisted from sending emails. Turns out his friend owned the business, and convinced him to install something that sent emails through outlook express.
The reason I got that duty is because I had no problem being confrontational back then. I remember telling him that I think he should be fined, and permanently banned from the internet. But that we’ll only let him back on if he uninstalls the thing.
He called back indignantly asking why we were allowing some other spam. I had to explain that it was from another network, and we’re trying to stop it, and that if every ISP were like us then it would barely be a problem.
I wonder if that business spams through google now.
I disagree, I get plenty of spam from Mailchimp. Spammers seem to be able to add email addresses to Mailchimp without verification, and they just keep making new accounts/"campaigns" to re-add my email addresses.
Legitimate companies like to skip the legally required opt-in flow and just assume consent, without the user ever ticking or unticking a consent checkbox. That is spam too.
It's on Mailchimp to not take business from companies that abuse their system. If they get flagged as spam and their other customers have delivery issues because of that, I see that as a feature, not a bug.
> Spam is not email from legitimate companies with valid contact details that have an opt out that you forgot to click when you signed up with them. That's legitimate marketing emails. You might argue they also shouldn't exist, but they are a different category.
Yes it is. Using a dark pattern to trick me into signing up doesn't make it not spam. It's still spam.
I get plenty of Mailchimp spam from people who have bought email lists and added me to their newsletter. It’s against their ToS, and I always indicate that I did not sign up for the list when I unsubscribe. Maybe it does something.
> Marketing is only spam when it isn't previous customers, or people who have specifically opted in.
Yes, this excludes any people, customers or otherwise, who did not knowingly and willingly opt in to specifically receive marketing emails / promotional emails / any other unnecessary emails.
A good heuristic is: if somebody receives an email from you that they do not want, there's a good chance you're spamming them: maybe by calling a marketing email an "update" instead; maybe because you didn't make it abundantly clear to them when they opted in that they would receive emails of that type.
I think that's a really wrong definition of spam. Spam is untargeted junk from people you don't know, who are probably hiding their real identity using fake email headers etc. If it's a legit company with legit unsubscribe options, it's not spam.
It worries me a lot that people clicking "mark as spam" on messages from legit companies because they subscribed to the newsletter will mean that my messages with important information (order confirmations, e-tickets etc.) will get blocked.
That's a spammer's definition. Everyone else's definition is that spam is unsolicited e-mail. Which covers most marketing e-mail, and not just the cold messages, but especially marketing e-mail from vendors you had interacted with in some way in the past.
> It worries me a lot that people clicking "mark as spam" on messages from legit companies because they subscribed to the newsletter will mean that my messages with important information (order confirmations, e-tickets etc.) will get blocked.
They probably didn't subscribe to the newsletter, they were subscribed, or tricked into subscribing. Either way, it's spam, and legitimate companies do not mix transactional e-mail ("order confirmations, e-tickets, etc.") with marketing e-mail.
FWIW, I'm one of such people clicking "mark as spam" on marketing e-mail, and I do it intentionally.
> It worries me a lot that people clicking "mark as spam" on messages from legit companies because they subscribed to the newsletter will mean that my messages with important information (order confirmations, e-tickets etc.) will get blocked.
Don't send spam and I won't mark it as spam. I didn't sign up for your newsletter, don't send it to me. Creating an account or placing an order does not mean I agree to your spam.
No, it's valid for me, and I just verified. In spam filter for past month: 0 mailchimp. In valid emails: 6 emails from a service that I signed up for via mailchimp.
Checking my received emails for mailchimp I see a whole bunch of legitimate emails, including for flightschedulepro which uses it. I also see replies to my abuse reports to mailchimp saying the problems have been addressed.
I was thinking about this recently. The way to do it is to define a radius, then imagine rolling a circle of that radius around the outside of the coastline (or around the inside! Define that as well), and take the length of the equivalent track that never leaves contact with the circle.
So you get a different length depending on the radius you choose, but at least you get an answer.
You could define the radius in a scale-invariant way (proportional to the perimeter of the convex hull of the land mass for example) so that scaling the land mass up/down would also scale our declared coastline length proportionally.
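A minimal sketch of one way to compute that, assuming Python with the shapely package (the toy polygon and the 5% radius choice are made up): the rolled outline is essentially a morphological closing, i.e. buffer the land outward by the radius and then back inward, and measure the perimeter of the result.

    # A minimal sketch, assuming Python with the shapely package; the toy
    # "coastline" polygon and the 5% radius choice are made up for illustration.
    from shapely.geometry import Polygon

    def rolled_length(land: Polygon, radius: float) -> float:
        # Rolling a circle around the outside is a morphological closing:
        # dilate the land by the radius, then erode it back. Inlets too narrow
        # for the circle get bridged over, which is exactly the smoothing we want.
        closed = land.buffer(radius).buffer(-radius)
        return closed.length  # perimeter of the smoothed outline

    land = Polygon([(0, 0), (4, 0), (4, 3), (2, 0.5), (0, 3)])  # toy land mass with a notch
    # Scale-invariant radius: a fixed fraction of the convex hull's perimeter.
    r = 0.05 * land.convex_hull.length
    print(rolled_length(land, r))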
It's in no way a meaningful solution. If you're settling for a resolution, you don't need a ball-rolling analogy. We already know the length of a given coastline at given resolutions (ignoring the constant changing of the coastline itself). What's practically not feasible is getting every country on earth to agree on the right resolutions. And that's for good reasons, because the desired accuracy depends on many factors, some situational and harder to quantify than just size of the enclosed land mass.
You don't need anyone else to agree on the resolution.
You can just pick one when you are doing some work that requires knowing the length of the coastline.
I wasn't trying to say that we should all agree on a universal definition and use that for everything? That would be insane. I was just providing a way to get a stable answer for the length of the perimeter of a fractal area.
Why would it be insane? We have globally accepted answers for the area of each country (modulo territorial disputes, geological changes and similar). One would expect the same thing to be possible for the circumference. So to most people it will be a surprise that it is, in fact, impossible. It is mathematically impossible because the problem is underdetermined, and it is practically impossible because agreeing internationally on how to fully determine the mathematical problem seems unrealistic.
Not a bad idea - one issue would be when the circle approaches a 'narrow' section that widens out again. If too big to fit into the gap, the circle method would simply not count any of this as land. I think it would be unreliable compared to moving along the coastline in fixed increments (i.e. one-mile increments or one-foot increments, depending on your goal)
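For comparison, a rough sketch of the fixed-increment walk in Python (the point list and step sizes are made up, and this only approximates the classic divider method since the anchors land on chords rather than exactly on the curve):

    import math

    def divider_length(points, step):
        # Walk the polyline with a fixed "ruler": every time we get `step` away
        # (straight-line) from the last anchor, drop a new anchor and count it.
        anchor = points[0]
        count = 0
        for p in points[1:]:
            while math.dist(anchor, p) >= step:
                t = step / math.dist(anchor, p)
                anchor = (anchor[0] + t * (p[0] - anchor[0]),
                          anchor[1] + t * (p[1] - anchor[1]))
                count += 1
        return count * step

    # Shorter rulers hug more wiggles, so the measured length grows as the step shrinks.
    coast = [(0, 0), (1, 0.4), (2, -0.3), (3, 0.5), (4, 0)]  # made-up points
    print(divider_length(coast, 1.0), divider_length(coast, 0.25))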
The Planck length is an OK answer, but a coastline reaches a steady state way before that. Nature only has approximate fractals.
Way before the Planck length you'll get the surface and line energies of the material interfaces dominating the total energy. Those tend to force very smooth and very discrete lengths.
There's a time and a place for it. If you already know exactly what the program needs to do, then sure, design a user interface. If you are still exploring the design space then it's better to try things out as quickly as possible even if the ui is rough.
The latter is an interesting mindset to advocate for. In almost every other engineering discipline, this would be frowned upon. I suspect wisdom could be gained by not discounting better forethought, to be honest.
However, I really wonder how Formula 1 teams manage their engineering concepts and driver UI/UX. They do some crazy experimental things, and they have high budgets, but they're often pulling off high-risk ideas on the very edge of feasibility. Every subtle iteration requires driver testing and feedback. I really wonder what processes they use to tie it all together. I suspect that they think about this quite diligently and dare I say even somewhat rigidly. I think it's quite likely that the culture that led to the intense and detailed way they look at process for pit-stops and stuff carries over to the rest of their design processes, versioning, and iteration/testing.
Racing like in Formula 1 is extremely different from normal product design: each Formula 1 car has a user base of exactly 1: the driver that is going to use it. Not even the cars from the same team are identical for that reason. The driver can basically dictate the UX design because there is never any friction with other users.
Also, turnaround times from idea to final product can be insane at that level. These teams often have to accomplish in days what normally takes months. But they can pull it off by having every step of the design and manufacturing process in house.
There exist other ways to do the research. "Try things out" is often not just a signal of "we don't know what to do", but also a signal of "we have no idea how to properly measure the outcomes of things we try".
I'm the lead for an internal tool for a non-technical team. We iterate so quickly that the team we're building it for was like "can you guys stop changing things so quickly? We can't keep up with where anything is." which was a fair assessment.
But that’s the point, no? Prototyping is useful but beyond a proof of concept, you still need a suitable user interface. I have no problems if there’s a rationale behind UI changes, but often we have stakeholders telling us to do something inconsistent just so their pet project can be presented to the user. That’s not design.
> Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.
Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?
Related, I've been surprised that we haven't had more violence against corporations and/or their leadership in the vein of Luigi Mangione.
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
Litigation—the hope or fantasy to make a buck—soaks up a lot of the million-man animus I’d guess.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
Those unhinged people might be busy in social media bubbles, fighting endless pointless battles (or simply doom scrolling) until they're too exhausted to do anything.
I also find it so weird to pin this on the person of Altman or Amodei. These are basically fungible public faces. If they died this very moment, AI progress wouldn't halt. I don't think it would even be impacted. If anything you should be mad at governments for not legislating, if you are anti-AI.
Especially considering Amodei and Altman will be little more than footnotes in 50 years time. They seem important now but they are just the people that happened to be in charge at the moment AI happened to happen. There is more going on than a couple of billionaires taking your job away.
Hah. Yes, and especially as “you, for fuck’s sake, you are doing this” should be, upon reflection, entirely and trivially false. You could remove those two figureheads from the equation and absolutely nothing would change. If violence were ever the answer, I think you'd need to go back in time like the Terminator and whack some academics and Google researchers.
The most interesting thing in here is https://github.com/smhanov/laconic which is the author's "agentic research orchestrator for Go that is optimized to use free search & low-cost limited context window llms".
I have been doing this kind of thing with Cursor and Codex subscriptions, but they do have annoying rate limits, and Cursor on the Auto model seems to perform poorly if you ask it to do too much work, so I am keen to try out laconic on my local GPU.
EDIT:
Having tried it out, this may be a false economy.
The way it works is it has a bunch of different prompts for the LLMs (Planner, Synthesizer, Finalizer).
The "Planner" is given your input question and the "scratchpad" and has to come up with DuckDuckGo search terms.
Then the harness runs the DuckDuckGo search and gives the question, results, and scratchpad to the Synthesizer. The Synthesizer updates the scratchpad with new information that is learnt.
This continues in a loop, with the Planner coming up with new search queries and the Synthesizer updating the scratchpad, until eventually the Planner decides to give a final answer, at which point the Finalizer summarises the information in a user-friendly final answer.
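Roughly, in Python-ish pseudocode (my own paraphrase from skimming it, not the actual implementation, which is in Go; the function and field names here are made up):

    # Rough paraphrase of the laconic loop; plan/synthesize/finalize stand for
    # small-context LLM calls with the three different prompts, and search hits DuckDuckGo.
    def research(question, plan, synthesize, finalize, search, max_steps=20):
        scratchpad = ""  # the only long-lived state; everything else is discarded
        for _ in range(max_steps):
            # Planner sees only the question + scratchpad and either emits a
            # search query or decides it has enough to answer.
            step = plan(question, scratchpad)
            if step["done"]:
                break
            results = search(step["query"])
            # Synthesizer folds anything new from the results into the scratchpad,
            # so the active context stays tiny no matter how long the research runs.
            scratchpad = synthesize(question, scratchpad, results)
        # Finalizer turns the accumulated scratchpad into a user-friendly answer.
        return finalize(question, scratchpad)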
That is a pretty clever design! It allows you to do relatively complex research with only a very small amount of context window. So I love that.
However I have found that the Synthesizer step is extremely slow on my RTX3060, and also I think it would cost me about £1/day extra to run the RTX3060 flat out vs idle. For the amount of work laconic can do in a day (not a lot!), I think I am better off just sending the money to OpenAI and getting the results more quickly.
But I still love the design, this is a very creative way to use a very small context window. And has the obvious privacy and freedom advantages over depending on OpenAI.
>To manage all this, I built laconic, an agentic researcher specifically optimized for running in a constrained 8K context window. It manages the LLM context like an operating system's virtual memory manager—it "pages out" the irrelevant baggage of a conversation, keeping only the absolute most critical facts in the active LLM context window.
The 8K part is the most startling to me. Is that still a thing? I worked under that constraint in 2023 in the early GPT-4 days. I believe Ollama still has the default context window set to 8K for some reason. But the model mentioned in laconic's GitHub repo (Qwen3:4B) should support 32K. (Still pretty small, but.. ;)
I'll have to take a proper look at the architecture, extreme context engineering is a special interest of mine :) Back when Auto-GPT was a thing (think OpenClaw but in 2023), I realized that what most people were using it for was just internet research, and that you could get better results, cheaper, faster, and deterministically, by just writing a 30 line Python script.
Google search (or DDG) -> Scrape top N results -> Shove into LLM for summarization (with optional user query) -> Meta-summary.
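Something along these lines (a from-memory sketch rather than the original script; it assumes the duckduckgo_search, requests, beautifulsoup4 and openai packages, and the model name is just an example):

    import requests
    from bs4 import BeautifulSoup
    from duckduckgo_search import DDGS
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def summarize(text, instruction):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name, use whatever you like
            messages=[{"role": "user", "content": f"{instruction}\n\n{text[:8000]}"}],
        )
        return resp.choices[0].message.content

    def research(query, n=5):
        hits = DDGS().text(query, max_results=n)  # dicts with title/href/body
        summaries = []
        for hit in hits:
            try:
                html = requests.get(hit["href"], timeout=10).text
            except requests.RequestException:
                continue  # skip pages that won't load
            page = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
            summaries.append(summarize(page, f"Summarize this page w.r.t.: {query}"))
        # Meta-summary over the per-page summaries.
        return summarize("\n\n".join(summaries), f"Combine into one answer for: {query}")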
In such straightforward, specialized scenarios, letting the LLM drive was, and still is, "swatting a fly with a plasma cannon."
(The analog these days would be that many people would be better off asking Claw to write a scraper for them, than having it drive Chromium 24/7...)
> (The analog these days would be that many people would be better off asking Claw to write a scraper for them, than having it drive Chromium 24/7...)
Possibly. But possibly you have a very long tail of sites that you hardly ever look at, and that change more frequently than you use them, and maintaining the scraper is harder work than just using Chromium.
The dream is that the Claw would judge for itself whether to write a scraper or hand-drive the browser.
That might happen more easily if LLMs were a bit lazier. If they didn't like doing drudgery they would be motivated to automate it away. Unfortunately they are much too willing to do long, boring, repetitive tasks.
Not sure if the top model should be the biggest one, though. I hear opposing opinions there: a small model that delegates coding to bigger models, vs. a big model that delegates coding to small models.
The issue is you don't want the main driver to be big, but it needs to be big enough to have common sense w.r.t. delegating both up[0] and down...
[0] i.e. "too hard for me, I will ping Opus ..." :) do models have that level of self awareness? I wanna say it can be after a failed attempt, but my failure mode is that the model "succeeds" but the solution is total ass.
> Because they should 100% be liable for the latter.
Why? I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?
I think if a human designs a drug and tests it and it all seems fine and the government approves it and then it later turns out to kill loads of people but nobody thought it would... that's just bad luck! You shouldn't face serious liability for that.
If we start from the position of the marketing hype and even Sam Altman's statements, these tools will "solve all of physics". To me it's laughable, but that's also what's driven their outsized valuations. With the output driving product decisions and development, it's not hard to imagine a scenario where a resulting product isn't fully vetted because of the constant corporate pressure to "move faster" and the unrealistic hype of "solve all of physics". This is similar to Tesla's situation of selling "Full Self-Driving" when it actually isn't that, in the way most people would understand the term, which is why they lost in court over how they market their autonomous driving features.
Can't agree with this. No, not at all. That can't be true... That's not "just bad luck".

I believe this is actually a serious case of negligence or a failure of oversight - regardless of where exactly it occurred, whether on the part of the drug's manufacturer, the government agency responsible for oversight, or somewhere else. It just doesn't work that way. Any drug undergoes very thorough and rigorous testing before widespread use (which is implied by "millions of deaths").

Maybe I'm just dumb. And yeah, this isn't my field. But damn it, I physically can't imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection. With such a high mortality rate (I'll reinforce - millions of deaths cannot be "unseen edge cases"), it simply shouldn't be possible with a proper approach to testing. Please, correct me if I'm wrong.
> I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?
It’s simple. In this case, ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.
Suppose that, during the production of a hypothetical drug at a factory, a malfunction occurs in one of the production machines (please excuse the somewhat imprecise terminology), caused by a design flaw (i.e., the manufacturer is to blame for the failure; it's not a matter of improper operation). If, because of this malfunction, the drugs are produced incorrectly and lead to deaths, then at least part of the responsibility must fall on the machine's manufacturer. Of course, responsibility also lies with those who used it for production - because they should have thoroughly tested it before releasing something so critically important - but, damn it, responsibility in this case also lies with the manufacturer who made such a serious design error.
The same goes for ChatGPT. It’s clear that the user also bears responsibility, but if this “machine” is by design capable of generating a recipe for a deadly poison disguised as a “medicine” - and the recipe is so convincing that it passes government inspections - then its creators must also bear responsibility.
EDIT: I'm not sure how relevant this is, but I've just remembered the Therac-25 incidents, where some patients received overdoses of radiation due to software faults. Who was to blame - the users (operators) or the manufacturer (AECL)? I'm unsure though how applicable it is to the hypothetical ChatGPT case, because you physically cannot "program" guardrails in the same way as you could in a deterministic program.
> I physically can’t imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection.
It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example. You can't test drugs indefinitely, at some point you need to say the test is over and it looks good. What if the downsides occur past the end of the test horizon?
> ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.
ChatGPT is not intended to be a drug manufacturing tool though? If you use any other random piece of software in the course of designing drugs, that doesn't make it the software developer's fault if it has a bug that you didn't notice that results in you making faulty drugs. And that's if it's even a bug! ChatGPT can give bad advice without even having any bugs. That's just how it works.
In the Therac-25 case the machine is designed and marketed as a medical treatment device. If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with" then sure they should be liable. But that would be an insane thing to claim.
I think where there may be some confusion is if ChatGPT claims that a drug design is safe and effective. Is that a de facto statement from OpenAI that they should be held to? I don't think so. That's just how ChatGPT works. If we can't have a ChatGPT that is able to make statements that don't bind OpenAI, then I don't think we can have ChatGPT at all.
> It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example.
In that case, even if it leads to many deaths, it would be difficult - if not practically impossible - to hold anyone accountable. However, such a turn of events is difficult, or rather practically impossible, to predict, don't you think? I apologize for not clarifying this point in my original comment, but I wasn't referring to delayed effects - I was referring to what becomes evident almost immediately (for example, let's say "within a year and a half at most") after the drug is used. Yes… I'm sorry, I just didn't phrase my thought correctly. I apologize for that.
> ChatGPT is not intended to be a drug manufacturing tool though?
That’s certainly the case right now. However, LLMs like GPT, Claude, Gemini, and others weren’t created for waging war, were they? Then why did Anthropic recently have - let’s just say... "some issues in its relationship" with the DOD, if they were not involved in this, if Claude was not meant to be used in war? Why was the ban on using Gemini to develop weapons removed from its terms of service?
You’re right that LLMs weren’t created for such purposes, and to be honest, I believe that - at least for now - it’s simply unethical to use them for that. These aren’t the kinds of decisions and actions that should be outsourced to a machine that bears no responsibility - moral or legal.
> ChatGPT can give bad advice without even having any bugs. That's just how it works.
To continue my thought, this is precisely why I believe it is unethical to give LLMs any tasks whatsoever that involve human lives. There are simply no safety guarantees - not just "some", but none at all - aside from unreliable safety fine-tuning and prompting tricks. For now, that’s all we can count on.
> If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with" then sure they should be liable. But that would be an insane thing to claim.
They don't claim it yet. And, as one person (qsera) mentioned below your comment:
> The trick is to make people behave like that without actually claiming it. AI companies seems to have aced it.
They probably won't claim exactly that "ChatGPT can reliably design drugs", just because of the possible consequences. But I'm almost certain there will be something similar in meaning, though legally vague - so that, from a purely legal standpoint, there won't be any grounds for complaint. What's more, they are already making some attempts - albeit relatively small ones so far - in the healthcare sector; for example, "ChatGPT Health"[1]. I don't think they will stop there. That's a business after all.
> if ChatGPT claims that a drug design is safe and effective
I have already said before that OpenAI would not be the only one who should be held responsible in this case. The (hypothetical) user should also bear some responsibility, and in the scenario you described, the primary responsibility should indeed lie with them.

That said, I may be wrong, but it's possible to fine-tune the model so that it at least warns of the consequences or refuses to claim that "this works 100%". This already exists - models refuse, for example, to provide drug recipes or instructions for assembling something explosive (specifically something explosive, not explosives - out of curiosity during testing, I recently asked Gemma 4 how to build a hydrogen engine, and the model refused to describe the process because, as it said, hydrogen is highly flammable and the engine itself is explosive), pornography, and things along those lines. Yes, I admit, it's far from perfect. But at least it works somehow.

By the way, if I'm not mistaken, many models even include disclaimers with medical advice, like "it's best to consult a doctor".
In short, what I’m getting at is that the issue lies in how convincing the LLMs can be at times. If it honestly warns of the dangers of using it, if it says "this doesn’t work" or "this requires thorough testing", and so on, but the user just goes ahead and does it anyway - well, that’s like hitting yourself on the finger with a hammer and then suing the hammer manufacturer. It’s a different story when the model states with complete confidence that "this will definitely work, and there will be no side effects" - and user believes it; there should be some effort put into preventing such cases. But otherwise, yes, I think you’re right about the scenario you described.
And to conclude - I don’t think that when it comes to drug development, we’re talking about ordinary people or individual users. In the context of the parent post, it is implied (though I may have misunderstood) that ChatGPT would be used by entire organizations, such as pharmaceutical companies - just as LLMs in a military context are used not by individuals, but by the DOD and similar organizations. I think this shifts the level of responsibility somewhat. Because when OpenAI enters into a contract for the use of its product, ChatGPT, in the process of drug development and manufacturing, it’s kind of implied that ChatGPT is ready for such use.
EDIT: I'm sorry that my reply is so long, I'm just trying to express all of my thoughts in one which is probably not a good thing to do. I would write something like a blog post about that, but there's a lot written about this topic already, so...
Yeah, and I have also used a translator in some parts because English is not my native language.
> it simply shouldn’t be possible with a proper approach to testing.
It just has to be delayed. Like many years after application. Or trigger on very specific and rare circumstances. Not likely in a trial, but near certain at a population scale.
Or both...
On top of that, if I remember correctly, this kind of liability waiver also exists for vaccines.
> It just has to be delayed. Like many years after application.
That's one thing. In this case, I don't really know if it's possible to test for something like delayed effects. I'm not even sure you can identify them with 100% certainty, i.e. prove that these effects come from this particular drug and not from another one.
> Or trigger on very specific and rare circumstances. Not likely in a trial, but near certain at a population scale.
And this is different thing. "Specific and rare circumstances" will not lead to millions of deaths (I apologize if I’m being too nitpicky about this particular phrasing, but I want to speak specifically in the context of “millions of deaths”). “Specific and rare circumstances” occur even with fully effective and "proper" medications - this is called “contraindications.” But such rare cases, as I’ve already said, will not lead to mass deaths - precisely because they are rare. I apologize again for focusing on the "millions", but please don’t confuse the scale of the problem.
The idea that a blunt knife is more dangerous than a sharp one is a total fallacy.
Every time I've cut myself on a knife, it's been because it was too sharp, not because it was too blunt.
In the limit, a blunt knife is a sphere and a sharp knife is a sharp knife. Very obviously sharp knives are more dangerous than blunt ones because sharp knives cut better and blunt knives cut worse.
> The idea that a blunt knife is more dangerous than a sharp one is a total fallacy.
I don't think so. I personally find dull knives more dangerous because I need to apply more pressure and when it starts to cut, the knife becomes uncontrollable.
When the blade is sharp, and you know it's sharp, you respect the blade and give actual thought to what you're doing.
My wife didn't use to sharpen her knives. When I started to sharpen them, she had a couple of minor accidents, but now the accident rate is at 0.0. She even wants me to sharpen the knives when they become dull.
This is exactly what "having a feeling for the machine" is. You know and respect it for what it is. It bites back when you don't respect it. Be it a knife or a space shuttle, it doesn't matter.