It's easy to take for granted having lots of programming experience from before the advent of LLMs. Building up that experience seems like a good strategy for developing an understanding of software engineering.
I remember writing BASIC on the Apple II back when it wasn't retro to do so!
> We are moving to disable the usage of unrestricted API keys in the Gemini API, should have more updates there soon.
It's unacceptable that the contract for client-side keys was broken in this manner, and doubly bad that it has taken Google this long to remediate the issue. The Gemini team needs to publish a postmortem explaining what broke down in their engineering process to allow this to happen.
Indeed, all the hot security scanning vendors are using custom prompts to capture a more holistic approach. There are of course plenty of legacy scanners that still focus on OS package versions and static configs, but the parts of the industry leaning into LLMs have genuine value to add.
I don't expect Claude Code Review to be a replacement for a good vendor's solution.
The pressure from internal auditors and cyber insurance providers to implement these programs will be strong. I have been at organizations where EDR was added only because the board of directors followed the recommendation of third parties. Of course, there will be new companies that haven't reached the maturity to have faced these pressures. But new companies being thoroughly compromised is hardly a recent phenomenon.
The supply chain attack is interesting in that it requires no marginal effort for an attacker to gain an initial foothold on additional targets. The bottleneck then becomes post-exploitation effort and the value of the targets.
This is a great example of a vulnerability chain that can be broken by vulnerability scanning, even with cheaper open source models. The outcome of a developer getting pwned doesn't have to be total catastrophe. With trivial privilege escalations closed off, an attacker has to be noisy and set off commodity alerting. All that blocks these entrepreneurs is the company's will to implement fixes for the 100 GitHub Dependabot alerts on its code base.
It does mean that the hoped-for 10x productivity increase from engineers using LLMs is eroded by the extra time needed for security.
This take is not theoretical. I am working on this effort currently.
It's great news for developers. Extra spend on a development/test env so devs have no prod access and prod has no SSH access; and SREs get two laptops, the second being a Chromebook that only pulls credentials when absolutely necessary.
Yes, having a good development env with synthetic data and an inaccessible, secure prod env just got extra justification. I had never considered the secondary SRE laptop, but I think it might be a good idea.
The value-add is having a workstation that's disconnected from work that would be susceptible to traditional vectors that endpoints are vulnerable to. For example, building software that pulls in potentially malicious dependencies, installing non-essential software, etc. The "SRE laptop" would only have a browser and the official CLI tools from confirmed good cloud and infrastructure vendors, e.g. gcloud, terraform.
I think that such a posture would only be possible in a mature company where concerns are already separated to the point where only a handful of administrators have actual SSO or username/passphrase access to important resources.
It's not a joke. Supply chain attacks are a thing, but Google Chromebooks are about the most trustworthy consumer machine you can run custom code on, short of a custom app on an iPad. The Chromebook would only ever have access to fetch the root AWS (or whatever) credentials that could delete, say, the load balancer for the entire SaaS company's API/website. If my main laptop gets hacked somehow, the attacker can't get the root AWS credentials because the main laptop doesn't have them. The second laptop would be used only sparingly, but it would have access to those root credentials.
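The core of that separation can be sketched in a few lines. Everything here (the marker file path, the env var name) is my own illustrative assumption, not part of any real tooling; the real protection is simply that the main laptop never stores the credentials at all, and this kind of gate just makes shared scripts refuse to fetch them anywhere but the hardened machine:

```python
import os

# Hypothetical marker that provisioning would place ONLY on the hardened
# second laptop; the daily-driver dev machine never has it.
ADMIN_MARKER = "/etc/hardened-admin-host"


def may_touch_root_credentials(environ=None, marker=ADMIN_MARKER):
    """Belt-and-braces gate: allow root-credential operations only on the
    dedicated admin machine. Wrapper scripts around gcloud/terraform can
    call this and bail out before ever requesting privileged credentials.
    """
    environ = os.environ if environ is None else environ
    return os.path.exists(marker) or environ.get("HARDENED_ADMIN_HOST") == "1"
```

On the main laptop this returns False, so even a fully compromised dev workstation has nothing privileged to steal and its tooling won't request anything privileged either.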
When you have insane amounts of capital and your GPU and talent needs are more or less met, there is a capital relief valve known as growth hacking. It only works if the consumer isn’t aware they’re being hacked.
No. They want you to believe the hype: that LLMs are limitless and the death of programmers. OpenClaw and other such agents are sold as tools that "can do anything", but behind the scenes the implication is still that a big LLM is driving them. So the two get conflated.
If someone does something to the nth degree, it's bad. If someone does something to the (n*10)th degree, are the sheeple really at fault for reacting? Don't you behave the same way in your own life?
From busterarm's profile: "Most people are stupid and/or on drugs."
The account is from 2013 but given that profile, I can't give it any credibility. After all, it could be somebody's OpenClaw having been granted control of the account.
> After all, it could be somebody's OpenClaw having been granted control of the account.
Luckily for HN, I actually have a post history. You can use my post history, textual analysis, and statistics to make an informed decision about whether I'm a bot or not, and whether I'm being consistent or spouting random bs.
The account I was responding to doesn't have anything.
> The account is from 2013 but given that profile, I can't give it any credibility.
What's in my profile is a statistical fact. It's there as a reminder, to me, not to expect everyone to see the world the same way that I do. To be comfortable with strong disagreement.
Just a hair shy of half the population is below average intelligence. Roughly 1 in 4 people has a cognitive impairment. This is of any age but trends upwards with age, reaching 2 in 3 by age 70. 1 in 4 Americans take psychiatric medication. 1 in 4 participates in illegal drug use. We haven't even touched on alcohol abuse.
My profile statement is just objective reality, whether you're comfortable with it being stated openly or not.