Hacker News | hxtk's comments

Even with the ellipsized link I knew you were talking about one of a few things because the link shows up as `:visited` for me... had to be either BigTable, MapReduce, or Spanner. All good reads.

There's some real science there for a couple of reasons. Protein is a macronutrient you can be malnourished without, even if you eat enough calories and the right micronutrients. And if most of your calories come from protein, you're probably not getting as many "burnable" calories as you think, because (1) the protein you need to meet your daily protein requirement never enters the citric acid cycle to be oxidized for ATP regeneration, (2) protein is the most filling macronutrient, and (3) excess protein that goes to the liver to be converted into carbohydrates loses around 30% of its usable calories to the energy cost of that conversion.

The way we count calories is based on how many calories are in a meal versus in the resulting scat, and that just isn't an accurate representation of how the body processes protein. A protein-heavy diet has fewer effective calories than you probably think it does, which makes it a healthy choice in an environment where most food-related health problems stem from overeating.

However I agree with your skepticism insofar as when they say "prioritizing protein" they probably mean "prioritizing meat," which is more suspect from a health standpoint and looks somewhat suspicious considering the lobbyists involved.


Most Americans get plenty of protein without trying. It's hard to see how eating more meat should help unless you think the amount of protein actually needed is much more than what the Mayo Clinic thinks: https://www.mayoclinichealthsystem.org/hometown-health/speak...

The "without trying" people probably aren't going to make much use of a food pyramid anyway. The guidelines are more aimed at people who will try.

Americans simply do not have the problem of not getting enough protein. It's a made-up idea.

As I've gotten more experience I've tended to find more fun in tinkering with architectures than tinkering with code. I'm currently working on a secure, zero-trust, bare-metal Kubernetes deployment that relies on an immutable UKI and TPM remote attestation. I'm making heavy use of LLMs for the various implementation details as I experiment with the architecture. To the extent I'm doing anything novel, it's probably because the approach isn't reasonable for engineering reasons even if it technically works, but I'm learning a lot about how TPMs work, the boot process, and the kernel.

I still enjoy writing code as well, but I see them as separate hobbies. LLMs can take my hand-optimized assembly drag racing or the joy of writing a well-crafted library from my cold dead hands, but that's not always what I'm trying to do and I'll gladly have an LLM write my OCI layout directory to CPIO helper or my Bazel rule for putting together a configuration file and building the kernel so that I can spend my time thinking about how the big pieces fit together and how I want to handle trust roots and cold starts.


So much this. The act of having the agent create a research report first, then a detailed plan, then maybe implement it is itself fun and enjoyable. The implementation is the tedious part these days; the pie-in-the-sky research and planning is the fun part, and the agent is a font of knowledge, especially when it comes to integrating 3 or 4 languages together.

This goes further into LLM usage than I prefer to go. I learn so much better when I do the research and make the plan myself that I wouldn’t let an LLM do that part even if I trusted the LLM to do a good job.

I basically don’t outsource stuff to an LLM unless I know roughly what to expect the LLM output to look like and I’m just saving myself a bunch of typing.

“Could you make me a Go module with an API similar to archive/tar.Writer that produces a CPIO archive in the newcx format?” was an example from this project.


I get this and operated this way for most of 2025. In Q4 I paired up with a peer who has a similar experience level to mine. They specialize in TypeScript, I specialize in Go, and we're both ops/platform greybeards.

This pattern of LLM usage has been great for learning the other's skill set so we can more effectively review each other's code. I can spend a week planning and iterating with Claude on TypeScript, then have my peer review and correct both the implemented outcome _and_ the plan that produced it, allowing me to learn how to drive the LLM more effectively in my non-preferred language. The same is true for him: he's able to autonomously learn and iterate on Go in a way that's efficient and respectful of my time.

More anecdotal evidence supporting the idea that these tools are a super-power for experienced engineers, especially when you have a small group of them working together across multiple languages.


Yeah, this is a lot of what I'm doing with LLM code generation these days: I've been there, I've done that, I vaguely know what the right code would look like when I see it. Rather than spend 30-60 minutes refreshing myself to swap the context back into my head, I prompt Claude to generate a thing that I know can be done.

Much of the time, it generates basically what I would have written, but faster. Sometimes, better, because it has no concept of boredom or impatience while it produces exhaustive tests or fixes style problems. I review, test, demand refinements, and tweak a few things myself. By the end, I have a working thing and I've gotten a refresher on things anyway.


The latter sounds like a reimplementation of AIDE, which is available in major Linux distributions' default package repositories.

Did you ever compare what you wrote to that?


If you did that, Bazel would work a lot better. Most of the complexity of Bazel is because it was originally basically an export of the Google internal project "Blaze," and the roughest pain points in its ergonomics were pulling in external dependencies, because that just wasn't something Google ever did. All their dependencies were vendored into their Google3 source tree.

WORKSPACE files came into being to prevent needing to do that, and now we're on MODULE files instead because they do the same things much more nicely.

That being said, Bazel will absolutely build stuff fully offline if you add the one step of running `bazel fetch //...` between cloning the repo and yanking the cable, with some caveats depending on how your toolchains are set up, and of course the possibility that every mirror of your remote dependency has been deleted.


It's how code is written at Google (including their open-source products like AOSP and Chromium), in the ffmpeg project, the Linux kernel, Git, Docker, the Go compiler, Kubernetes, Bitcoin, etc., and it's how things are done at my workplace.

I'm surprised by how confident you are that things simply aren't done this way considering the number of high-profile users of workflows where the commit history is expected to tell a story of how the software evolved over time.


"It's how code is written" then you list like the 6 highest profile, highest investment premier software projects on Earth like that's just normal.

I'm surprised by how confident you are when you can only name projects you've never worked on. I wanted to find a commit of yours to prove my point, but I can't find a line of code you've written.


Virtually all databases compile queries in one way or another, but they vary in the nature of their approaches. SQLite, for example, uses bytecode, while Postgres and MySQL both compile queries to a computation tree, essentially taking the query AST and substituting in different table/index operations according to the query planner.

SQLite talks about the reasons for each variation here: https://sqlite.org/whybytecode.html


Thanks for the reference.


It’s a real problem for defense sites because .mil is a public suffix, so all navy.mil sites are the “same site,” all af.mil sites are another, and so on.
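To make the mechanics concrete, here is a sketch of how the registrable domain (the "site" for same-site purposes) falls out of the public suffix: the suffix plus one label. The tiny hardcoded suffix set stands in for the real Public Suffix List; a real program would use something like `golang.org/x/net/publicsuffix` instead.

```go
package main

import (
	"fmt"
	"strings"
)

// publicSuffixes is a tiny stand-in for the real Public Suffix List.
var publicSuffixes = map[string]bool{"mil": true, "com": true}

// registrableDomain returns the public suffix plus one label, which
// is roughly what browsers treat as the "site" in same-site checks.
func registrableDomain(host string) string {
	labels := strings.Split(host, ".")
	for i := range labels {
		if publicSuffixes[strings.Join(labels[i:], ".")] {
			if i == 0 {
				return "" // host is itself a public suffix
			}
			return strings.Join(labels[i-1:], ".")
		}
	}
	return host
}

func main() {
	fmt.Println(registrableDomain("portal.navy.mil")) // navy.mil
	fmt.Println(registrableDomain("www.af.mil"))      // af.mil
	// Because .mil is the suffix, every *.navy.mil host shares the
	// registrable domain navy.mil and is therefore "same site".
	fmt.Println(registrableDomain("portal.navy.mil") ==
		registrableDomain("www.navy.mil")) // true
}
```

With `navy.mil` itself on the suffix list instead, `portal.navy.mil` and `www.navy.mil` would be distinct sites, which is exactly the isolation the comment says defense sites lack.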


Or if the document is just text, simply scan it in black and white (as in, binary, not grayscale).


Many fail if you do it without any additional configuration. In Kubernetes you can mostly get around it by mounting `emptyDir` volumes to the specific directories that need to be writable, `/tmp` being a common culprit. If they need to be writable and have content that exists in the base image, you'd usually mount an emptyDir to `/tmp` and copy the content into it in an `initContainer`, then mount the same `emptyDir` volume to the original location in the runtime container.

Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].

I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.

1: https://github.com/kubernetes/kubernetes/issues/48912
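The pattern described above might look like the following pod spec. This is a sketch, not a canonical recipe: the names and image are hypothetical, and the copy command assumes the image keeps its seed content in `/tmp`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  volumes:
    - name: tmp
      emptyDir: {}
  initContainers:
    # Seed the emptyDir with the image's original /tmp content.
    - name: seed-tmp
      image: example.com/app:latest   # hypothetical image
      command: ["cp", "-a", "/tmp/.", "/seeded-tmp/"]
      volumeMounts:
        - name: tmp
          mountPath: /seeded-tmp
  containers:
    # Mount the same volume at the original location at runtime.
    - name: app
      image: example.com/app:latest   # hypothetical image
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: tmp
          mountPath: /tmp
```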

