>If you are a big insurer like Munich Re, and you can already see that self-driving produces far fewer accidents (a 90% reduction or so, I read recently?), and the tech is really new and "not 100% reliable", and you believe this tech will be rolled out - then one day you will start lobbying politicians to forbid manual driving, except in some rare cases.
Why would insurance companies lobby for that? 90% reduction in accidents means 90% reduction in premiums, which means 90% reduction in profits.
Do insurance companies have a history of lobbying for safety regulations?
You have a point. At the same time, insurance companies are getting out of insurance because they can't afford it. Apparently (I haven't verified this), the cost to repair cars has gone up so much that the economics don't work out, so they're leaving the market. Maybe fewer crashes would bring them back?
I believe in CA, the reason for pulling out is that CA limits the amount companies can increase premiums[1]. So fewer crashes is one way to bring them back. Allowing them to increase premiums more is another way to bring them back.
That 1st link is the CA government, not an insurance company. Also, it doesn't seem to be a regulation requiring people to do something; it's something people can optionally do and, if so, get a discount on insurance. That's not insurance companies pushing for regulation, it's insurance companies offering a competitive price to both high-risk and low-risk houses, similar to how car insurance companies base their rates on how risky a driver they estimate the person to be.
But fire insurance is different from auto insurance. Insurance companies want uncorrelated risk. Insurance companies want a high rate of car crashes, but the exact same rate of car crashes each year, because that makes planning easy. If there's a risk that in some years far more crashes will happen than in others, that's correlated risk and makes planning difficult; they don't want that.
For cars, there's not much correlated risk. For fire insurance, there is correlated risk due to wildfires. So to reduce correlated risk, insurance companies likely do want to reduce wildfires, while still wanting to increase non-wildfire fires.
Self-driving cars will increase correlated risk, because there could be some software update with a bug that's pushed out and causes a ton of crashes. (That risk does also exist with cars today, due to the various software in cars, but self driving increases the risk.)
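To make the correlated-vs-uncorrelated point concrete, here's a toy simulation in Go (all numbers are made up for illustration, not real actuarial figures). Both books of policies have the same expected number of claims; only the correlated one has a shared "bad year" factor, and that alone makes the yearly totals far noisier, which is what makes pricing hard.

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// simulate returns the mean and standard deviation of yearly claim counts
// for a book of identical policies. In the independent case every policy
// crashes with probability 1% each year. In the correlated case a shared
// "bad year" (think: a buggy software update pushed to every car) raises
// everyone's probability at once: 0.6% in nine years out of ten, 4.6% in
// the tenth, which averages out to the same 1% expected rate.
func simulate(years, policies int, correlated bool) (mean, std float64) {
	totals := make([]float64, years)
	for y := range totals {
		p := 0.01
		if correlated {
			p = 0.006
			if rand.Float64() < 0.1 {
				p = 0.046 // the shared bad year hits every policy at once
			}
		}
		crashes := 0.0
		for i := 0; i < policies; i++ {
			if rand.Float64() < p {
				crashes++
			}
		}
		totals[y] = crashes
	}
	for _, t := range totals {
		mean += t
	}
	mean /= float64(years)
	for _, t := range totals {
		std += (t - mean) * (t - mean)
	}
	std = math.Sqrt(std / float64(years))
	return
}

func main() {
	m, s := simulate(1000, 10000, false)
	fmt.Printf("independent: ~%.0f claims/yr, stddev ~%.0f\n", m, s)
	m, s = simulate(1000, 10000, true)
	fmt.Printf("correlated:  ~%.0f claims/yr, stddev ~%.0f\n", m, s)
}
```

With these made-up numbers, both books average roughly 100 claims a year, but the correlated book's year-to-year swing comes out around an order of magnitude larger.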
The 2nd link is an insurance company, but it likewise doesn't seem to be advocating for regulation.
Insurance companies aren't a monopoly. They're in competition with each other to offer lower rates. So if there's a reduction in paying out, they'll need to reduce their premiums to stay competitive with each other.
Good point that more miles driven might increase both accidents and premiums and thus increase insurance company profit.
However, how many more miles will they drive? Double? If there's a 90% reduction in accidents per mile, as KellyCriterion alluded to, then doubling miles still gives 2 × 0.1 = 0.2× today's accidents, an 80% drop overall. That means total premiums will go down, and total profit will go down.
It generally does. If accidents (and payouts) drop by 90%, revenue will ultimately drop and profits will follow. Profit margins may increase, but total profit $$ will likely drop.
Yes, this is true - but it's still beneficial enough to have fewer claims. Claims incur costs in many ways, and running a business with fewer claims would be more predictable and likely worth the minimal trade-off.
I just took a look at the layers. In some cases, e.g. the 2nd letter in the Local Registrar's signature, a single letter is partially in the background layer, and partially in the upper layer.
This is easily explained by the character separation software being not 100% accurate.
It's not at all explained if someone is fraudulently adding text. Why would someone put half of a character in one layer and half of the character in a different layer?
The quote has nothing to do with a well regulated militia. It's about whether the technical ability for internet shutdowns has been built or not.
>A country’s ability to shut down the internet depends a lot on its infrastructure. In the US, for example, shutdowns would be hard to enforce. As we saw when discussions about a potential TikTok ban ramped up two years ago, the complex and multifaceted nature of our internet makes it very difficult to achieve. However, as we’ve seen with total nationwide shutdowns around the world, the ripple effects in all aspects of life are immense. (Remember the effects of just a small outage—CrowdStrike in 2024—which crippled 8.5 million computers and cancelled 2,200 flights in the US alone?)
>The more centralized the internet infrastructure, the easier it is to implement a shutdown. If a country has just one cellphone provider, or only two fiber optic cables connecting the nation to the rest of the world, shutting them down is easy.
Nukes and tanks weren't built for internet shutdowns, and it's ridiculous to suggest that if the US government decided to do an internet shutdown, it would use a nuke for that.
What's the solution though? Stop letting kids play outside? I think the solution should be to reform CPS so it's not so traumatizing, and have more governmental awareness campaigns of the benefits of kids playing outside. I see government billboards all the time about anti-smoking, eating healthy, prediabetes screening. There can similarly be billboards promoting kids playing outside.
2) At the bare minimum, victims of CPS reports should be able to face their accuser. Current laws anonymize reporters; this is not compatible with an open and balanced justice system. There also need to be heavy penalties and liability for abusing CPS reporting -- with asymmetric risk, we'll just end up getting the same result over and over again.
3) Cultural change. People who curtail the independence of others' children should be shamed, publicly. People who let their kids have independence should be left the hell alone.
There would not be any issue with anonymous reports if CPS looked for actual evidence before doing anything else, and rejected any anonymous report as baseless if no evidence is found. Innocent until proven guilty must hold here too.
> If an offset in an array is itself secret (you have a data array and the secret key always starts at data[100]), don't create a pointer to that location (don't create a pointer p to &data[100]). Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.
That doesn't make sense to me. How can the "offset in an array itself" be "secret" if it's "always" 100? 100 isn't secret.
I think it may be about the absolute memory address where the secret is stored, which may itself be exploitable (i.e., you're thinking about the offset value, rather than the pointer value). It's about leaking even indirect information that could be exploited in different ways. From my understanding, this type of cryptography goes to extreme lengths to basically hide everything.
That’s my hunch at least, but I’m not a security expert.
The example could probably have been better phrased.
I don't see how a single absolute address could be exploitable based on my understanding of the threat model of this library. The library is in charge of erasing the secrets from memory. Once the secrets have been erased from memory, what would an attacker gain from knowing an absolute address?
The only thing that makes sense to me is a scenario with a lot of addresses. E.g. if there's an array of 256 integers, and those integers themselves aren't secret. Then there's a key composed of 32 of those integers, and the code picks which integers to use for the key by using pointers to them. If an attacker is able to know those 32 pointers, then the attacker can easily know what 32 integers the key is made of, and can thus know the key. Since the secret package doesn't erase pointers, it doesn't protect against this attack. The solution is to use 32 array indexes to choose the 32 integers, not 32 pointers to choose the 32 integers. The array indexes will be erased by the secret package.
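A minimal Go sketch of that scenario (the names, the 4-entry index slice, and the structure here are all hypothetical illustrations, not the secret package's actual API; assume the index slice lives in memory the library erases):

```go
package main

import "fmt"

// table is public: 256 non-secret integers. The secret is *which* entries
// form the key.
var table [256]uint32

// Risky: selecting key material via pointers into table. The Go GC has to
// track every live pointer, so these addresses can end up in runtime
// metadata that outlives any erasure step; an attacker who can read the
// GC's memory learns which entries were chosen, and therefore the key.
func keyFromPointers(ptrs []*uint32) []uint32 {
	key := make([]uint32, len(ptrs))
	for i, p := range ptrs {
		key[i] = *p
	}
	return key
}

// Safer: selecting key material via plain integer indexes. Indexes aren't
// pointers, so the GC never records them; if the index slice sits in
// memory that the library wipes, nothing about the selection survives.
func keyFromIndexes(idx []int) []uint32 {
	key := make([]uint32, len(idx))
	for i, j := range idx {
		key[i] = table[j]
	}
	return key
}

func main() {
	for i := range table {
		table[i] = uint32(i * 7) // arbitrary, non-secret contents
	}
	// Stand-in for the 32 secret indexes from the example above.
	secretIdx := []int{3, 141, 59, 26}
	fmt.Println(keyFromIndexes(secretIdx))
}
```

The point isn't that indexes are magic; it's that the GC only needs to know about pointers, so pointer-free bookkeeping keeps the selection out of runtime-managed memory.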