There are lots of restaurants in the US these days that charge 3% for use of any credit card. One that I've been to even has a sign posted at the entrance stating that it's legal to do so. They must have gotten a lot of complaints that it was somehow illegal, or perhaps against card processing rules, because it's one thing to post a sign saying you charge the fee; it's another for that sign to mention the legality of it.
> won't it make just doing a "git checkout" start to be really heavy?
not really? doesn't git checkout only touch the branch you're checking out? the checkpoint data is in another branch.
we can presume that the tooling for this doesn't expect you to manage the checkpoint branch directly. each checkpoint object is associated with a commit sha (in your working branch, master or whatever). the tooling presumably just makes sure you have the checkpoints for the nearby (in history) commit shas, and the system prompt for the agent will help it do its thing.
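roughly, i'd imagine the association working something like this; to be clear, this is a purely hypothetical sketch where the ref name, file layout and helper names are my guesses, not how the actual tooling works:

```python
# purely hypothetical sketch; the ref name, file layout and helpers are my guesses,
# not how the actual tooling stores checkpoints.
import subprocess

CHECKPOINT_REF = "refs/checkpoints/data"  # hypothetical ref holding checkpoint objects

def git(*args: str) -> str:
    out = subprocess.run(["git", *args], check=True, capture_output=True, text=True)
    return out.stdout.strip()

def checkpoint_path(commit_sha: str) -> str:
    # one checkpoint file per working-branch commit, keyed by its sha
    return f"checkpoints/{commit_sha}.json"

def nearby_checkpoints(depth: int = 20) -> dict[str, str]:
    """fetch the checkpoint blobs for the last few commits on the current branch."""
    shas = git("rev-list", f"--max-count={depth}", "HEAD").splitlines()
    found = {}
    for sha in shas:
        try:
            found[sha] = git("show", f"{CHECKPOINT_REF}:{checkpoint_path(sha)}")
        except subprocess.CalledProcessError:
            pass  # no checkpoint recorded for this commit
    return found
```

point being: your normal checkout/branch workflow never has to touch that ref; only the tooling reads from it.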
i mean all that is trivial. not worth a $60MM investment.
i suspect what is really going on is that the context makes it back to the origin server. this allows _cloud_ agents, independent of your local claude session, to pick up the context. or for developer-to-developer handoff with full context. or to pick up context from a feature branch (as you switch across branches rapidly) later, easily. yes? you'll have to excuse me, i'm not well informed on how LLM coding agents actually work in that way (where the context is kept, how easy it is to pick it back up again). this is just a bit of opining based on why this is worth 20% of $300MM.
if i look at https://chunkhound.github.io it makes me think entire is a version of that. they'll add an MCP server and you won't have to think about it.
finally, because there is a commit sha association for each checkpoint, i would be worried that history rewrites or force pushes MUST use the tooling otherwise you'd end up screwing up the historical context badly.
Of course that's how it works; how else do you justify a company that is making negative profit somehow being worth $300M? That's just the game, IDK why people accept it. It's not democratic, and all it does is prime the population for fraud and abuse.
I don't need to "play the game" to realize private valuations are just marketing fluff not based in reality. It's literally "the greater fool" theory in action. When the incentives are to just lie and not put out something with some actual scrutiny like a 409A, it's quite clear what game is being played.
But yes, I would totally love to invest in startups with people's pension funds. It seems like the perfect scam where the only losers are the public that allows such actions.
That's not an upside unique to LLM-written vs. human-written code. When writing it yourself, you also need to make it crystal clear. You do that in the language of implementation.
And programming languages are designed for clarifying the implementation details of abstract processes; while human language is this undocumented, half grandfathered in, half adversarially designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.
The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).
But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization to responsibly wield the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.
What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.
> that's because it was defined in decimal from the start
I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000 byte pages.
Now ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is octets! Starting with the preamble. But the payload is up to an annoying 1500 (decimal) octets. The _minimum_ frame length is defined for CSMA/CD to work, but the max could have been anything.
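Back-of-envelope, the CSMA/CD arithmetic works out roughly like this (numbers from memory, so double-check the exact figures):

```python
# rough arithmetic behind the CSMA/CD minimum on classic 10 Mbit/s Ethernet;
# numbers are from memory, treat this as a back-of-envelope sketch
MIN_FRAME_OCTETS = 6 + 6 + 2 + 46 + 4      # dst + src + type + min payload + FCS = 64
MAX_FRAME_OCTETS = 6 + 6 + 2 + 1500 + 4    # same header/trailer, max payload = 1518

slot_time_bits = MIN_FRAME_OCTETS * 8      # 512 bit times
slot_time_us = slot_time_bits / 10         # 10 bits per microsecond at 10 Mbit/s -> 51.2 us

# the sender has to still be transmitting when a collision from the far end
# propagates back, so the minimum frame bounds the allowed round-trip delay
print(MIN_FRAME_OCTETS, MAX_FRAME_OCTETS, slot_time_us)
```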
Looking around their website, they appear to be an enthusiastic novice. I looked around because I wondered: isn't a hardware architecture course part of any first-year syllabus? The author clearly hasn't a clue about hardware or how memory is implemented.
> "The author clearly hasn't a clue about hardware, how memory is implemented."
I'm the author. Actually, I'm quite familiar with how memory addressing works, including concepts related to virtual memory / memory paging. Yes, I'm not a "low-level nerd" with deep knowledge of OS internals, hardware, or machine code / assembly, but I know the basics. And yes, I already mentioned that binary addressing makes more sense for RAM (and most hardware), and yes, I would not expect 4000-byte memory pages or disk clusters.
My main points are:
1) Kilo, mega, etc. prefixes are supposed to be base 10, not base 2, but in the tech industry they are often used as base 2.
2) But this isn't the worst part. While we could agree on a 1024 magnitude for memory, the problem is that it's still used inconsistently. Sometimes a kilobyte is 1024 bytes, sometimes it's 1000, and this causes confusion. In some contexts, such as a RAM stick or a disk cluster, you can assume base 2, but in other contexts, such as file sizes, it's ambiguous (see the quick example below). Would it be good if Celsius meant different things in different contexts? I don't think so; it would certainly complicate things.
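To make point 2 concrete, here's the same byte count rendered under both conventions (just an illustration, not tied to any particular tool):

```python
# the same byte count rendered under the SI (base 10) and IEC (base 2) conventions
def fmt(n_bytes: int, base: int, units: list[str]) -> str:
    value = float(n_bytes)
    for unit in units:
        if value < base:
            return f"{value:.2f} {unit}"
        value /= base
    return f"{value:.2f} {units[-1]}"

size = 5_368_709_120  # a "5 GB" file, depending on who you ask

print(fmt(size, 1000, ["B", "kB", "MB", "GB", "TB"]))      # 5.37 GB  (SI, base 10)
print(fmt(size, 1024, ["B", "KiB", "MiB", "GiB", "TiB"]))  # 5.00 GiB (IEC, base 2)
```

Same file, two different numbers, and most software labels both of them "GB".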
When Gmail downloads the image it identifies itself as GoogleImageProxy, and will be coming from a GCP/Google ASN.
A similar signal will be there for any email provider or server-side filter that downloads the content for malware inspection.
Pixel trackers are almost never implemented in-house, because it's basically impossible to run your own email these days. So the tracker is a function of the batteries-included email sending provider. Those guys do this for a living, so they are sophisticated and filter out the provider's download of the images.
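A hypothetical sketch of what that filtering might look like on the sender's side; the UA markers and ASN list here are illustrative placeholders, not anyone's actual implementation:

```python
# hypothetical sketch of filtering out proxy/scanner fetches of a tracking pixel;
# the UA markers and ASN list are illustrative, not any provider's real config
from dataclasses import dataclass

PROXY_UA_MARKERS = ("GoogleImageProxy",)  # a real list would cover other providers' proxies too
PROXY_ASNS = {15169}  # Google's main ASN; real filters use a maintained IP/ASN list

@dataclass
class PixelHit:
    user_agent: str
    asn: int  # derived from the source IP via an IP-to-ASN lookup (not shown)

def is_real_open(hit: PixelHit) -> bool:
    """discard fetches made by the mailbox provider's image proxy or malware scanner."""
    if any(marker in hit.user_agent for marker in PROXY_UA_MARKERS):
        return False
    if hit.asn in PROXY_ASNS:
        return False
    return True

hits = [
    PixelHit("Mozilla/5.0 (via ggpht.com GoogleImageProxy)", 15169),
    PixelHit("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36", 7922),
]
print([is_real_open(h) for h in hits])  # [False, True]
```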
umm, anti-glare/matte used to be the norm for LCDs. Around 2005-2006 that changed: as laptops became more of a consumer product, and DVD watching became an important use case, glossy screens became the norm.
So, I would call it a massive step backwards! The 2006 MBP had an optional glossy screen, and the 2008 was the first one with default glossy. Around 2012 Apple dropped the matte option altogether.
The screen has an oleophobic coating, and that's the danger of alcohol: it strips the coating. For your phone, absolutely don't do this. For your laptop, it should be fine.