Hacker News | saghm's comments

This feels like the modern analog of the king, the mice, and the cheese. What cats do I need to bring in to eat my git submodules?

Probably the same reason that pretty much no other package manager (or even major email provider, when email is ostensibly the most famous use-case for it) has adopted it: the UX is atrocious.

Basically all Linux distro package managers?

That's fair, distro package managers use it. I don't know of any language that uses it though; I think NuGet uses certificates for signing rather than PGP.

I'm not convinced that Python should be the standard for package management either. Earlier this week I was trying to publish a Python package for the first time, wrapping a Rust library I wrote (for use only on Linux and Python 3.12+), and it literally took me hours to get from "I have a wheel that I can import and it works on my system" to "I have published that wheel and can install the package from PyPI on the set of systems that I'm trying to support and it actually works". Everything I've heard indicates that the situation for Python packaging is better than it ever has been with the current tooling, so I can't even imagine how bad it was for the decades before. In comparison, having never touched npm before, I was able to publish a wrapper around the same library and validate that it was working in maybe 10 minutes (most of which I spent not realizing that a certain tool was failing with a vague "file not found" error because I hadn't installed npm yet).

I'm not saying that npm is doing everything right, but I suspect that beyond the obvious low-hanging fruit that we hear about pretty consistently with npm there's probably a long tail of less obvious stuff that can be exploited that will not be specific to npm. The fundamental problems with supply-chain vulnerabilities aren't going to go away if npm magically became pip or go modules overnight.


I’m not suggesting Python’s package management was good. This thread started with a post about JS and Python, and I was responding to a message saying JS is so vulnerable to package repository attacks because its stdlib is so small. But Python has been vulnerable too in spite of a robust stdlib.

And IMO the complaints about Python packaging tooling are overblown. Setuptools on its own was a bit disappointing, but coming from PHP 20 years ago it was a revelation! Virtualenvs and requirements.txt were a further improvement, and so was pip — in an era when most other scripting languages didn’t have pinning for sub-dependencies either; you could always “pip freeze” to capture everything.

Later on, pipenv wasn’t perfect, but it was enough. I never ran into any of the headaches people keep saying poetry and uv solve. Poetry on the other hand always gives me one reason or another to beat my head against a wall.

That said, I’ve never bothered to try to publish anything and can’t comment on that end of it.


> Python should be the standard for package management

Python is the antistandard for package management. Or maybe even the eldritch horror of package management.


Curious: if we include package managers from operating system distros (e.g. Debian's apt), what, in your experience, should JavaScript/Python/Rust package managers learn or borrow from them?

Part of me wonders if the reason we see more npm attacks than pypi attacks is malware authors not wanting to deal with python packaging either

Hilarious, but wouldn't they just abuse LLMs for this if so?

Thanks to uv, all is forgiven.

I feel like I saw phrasing like this pretty often even before LLMs were a thing

If anything, it reads to me as a proactive rebuttal of complaints that they don't allow LLMs; they're definitively stating that they do allow using them for very specific purposes.

Needs to be "solicited" from a senior dev. How many requests for AI code do you think they will be making?

I can't find any reference saying that using LLMs to ask questions, in the ways the parent comment cites, needs to be solicited.

> I’m considering that the amount of vulnerable software in the wild is very, very large

I'd imagine this set is very similar to just "the set of all software in the world". Even before the AI stuff, it was a pretty good bet that any given piece of software had some vulnerability; it was just a question of how easy it was to find it.


Yes, that’s my point. Look at how fast the Calif team tackled that macOS issue. Against the top company in the world. One week from bug to exploit. In 2-5 years things will be really wild for everybody out there. We released a technology that makes it possible to design extremely complex exploits at a scale we never had to face before. What does that mean if you’re not the top company? Things will be really bad.

I think they're saying they already did

Honestly, having increment in expressions rather than a statement feels like more of a footgun than a benefit. Expressions shouldn't mutate things.

I think the history of this is that these operations were common with assembly programmers, so when C came along, these were included in the language to allow these developers to feel they weren't leaving lots of performance behind.

Look at the addressing modes for the PDP-11 in https://en.wikipedia.org/wiki/PDP-11_architecture and you'll see you can write (R0)+ to read the contents of the location pointed to by R0, and then increment R0 afterwards (so a post increment).

Back in the day, compilers were simple and optimisations weren't that common, so folding two statements into one and working out that there were no dependencies would have been tough with single-pass compilers.

You could argue that without such instructions, C wouldn't have been embraced quite so enthusiastically for systems programming, and the world would have looked rather different.


Additionally, those indirect memory instructions ended up disappearing because it complicated virtual memory implementations. It was a pain in the ass to describe the multiple places in memory an instruction could be accessing and which actually faulted to a fault handler, not to mention having to roll back all that state on more complex designs.

I worked on a more recent custom AI ISA that had that too. Pretty neat; I'm surprised it's not more common. I guess it doesn't matter so much now that memory is so much slower than ALU ops.

Python recently went the other way and added an assignment expression (the `:=` "walrus" operator). I actually wish more languages would go further and add statement expressions instead of having to imitate them with IIFEs.

C just wouldn't be C without things like a[i++]


If the past few weeks of CVEs indicate anything, it's that C being C maybe isn't a good thing...

Those things are for pointer golf and writing your entire logic inside the if statement.

Both are favorite idioms of C developers. And they are ok if done correctly, clearer than the alternative. They are also unnecessary in modern languages, so those shouldn't copy it (yeah, Python specifically).


In any language where the practice of iteration isn't achieved via C-style for-loops, having an operator devoted to increment just doesn't make sense (let alone four operators, for each of pre/post-increment/decrement). This is one of those backwards things that just needs to be chucked in the bin for any language developed post-2010.

When used well it makes for compact readable code. I don't see what it has to do with for loops or operators specifically. For example you can do the same in scheme while iterating by means of tail recursion.

> I don't see what it has to do with for loops or operators specifically.

The reason that these operators pull their weight in C is because iteration over arrays is achieved by manual incrementation (usually via the leading clauses of the for-loop) followed by direct indexing. Languages with a first-class notion of iteration don't directly index in this way, which eliminates not only the vast majority of array indexing operations in codebases but also the need to manually futz with the inductive loop variable. Case in point: Rust doesn't have `++` in any form, and it doesn't miss it, because Rust has first-class iteration; on the relatively rare occasion where you do want to increment, you can do `+= 1`, which doesn't have the footguns of `++` due to assignment being a statement rather than an expression, while leading to a simpler language by leveraging the existing `+=` syntax rather than needing a whole new set of operators.


For loops are hardly the only use case, and built-in iteration constructs frequently fall short. For example, any mildly complex loop that involves pointer juggling can benefit.

> which doesn't have the footguns of `++` due to assignment being a statement rather than expression,

So then I implement the local equivalent of inc( v ) and ... same issue, right? Plus with rust macros is there any technical reason you can't trivially implement ++ for yourself? That's the case for most lisps that I touched on earlier.


> For example any mildly complex loop that involves pointer juggling can benefit.

I'd say that when you're writing a mildly complex loop that involves pointer juggling, one should prefer to be defensive and explicit rather than cleverly trying to compress everything into one-liners.

> So then I implement the local equivalent of inc( v ) and ... same issue, right?

This isn't done in Rust because there's no benefit. It's rare to find an occasion where it's necessary to do something tricky enough to forego using iterators, and when working with raw pointers Rust code just plain doesn't use basic addition for pointer arithmetic; instead it has a variety of pointer arithmetic methods for being explicit about the desired semantics (e.g. ptr::add, ptr::offset, ptr::wrapping_add, etc).

> Plus with rust macros is there any technical reason you can't trivially implement ++ for yourself?

There's not, but people might look at you sideways. Here, I implemented it for you: https://play.rust-lang.org/?version=stable&mode=debug&editio... . It expands to nested blocks with internal assignments, which results in a well-defined semantics following the defined order of evaluation in Rust.


In Rust you hide all kinds of error-prone iterations behind the "iterator" interface. Both the "for(int x=0;..." and the "while(list[i++])" are implemented in the standard library.

People tend to use FP abstractions for the "x[i++] = f(y[j++])" though, not iteration.


I always hate C-style for-loops because even though I learned C over 40 years ago, I can never remember whether the increment comes before the test or the test comes after the increment. Fortunately, modern IDEs let me continue to be ignorant on those occasions when they’re necessary (usually because I need the index for some reason).

int d = foo ? bar() : baz();

I think if anything people have been leaning more and more into expressions over statements, because when everything is an expression you end up being able to walk the gradient of complexity a bit more nicely than when you end up with a thing that just has to be broken down to a bunch of statements.


Expressions are nice specifically because they don't tend to mutate things. The ternary operator is not at all the same as `a++` because you have to assign the result.

We wouldn't have pearls like while (*dst++ = *src++);

Another A/B test that affects 2% of users, maybe?

I tend to be the one on my team who reads them. I don't think I've ever had a coworker in close to seven years of writing Rust professionally who had trouble because they didn't read them. At most, sometimes I'll suggest using some function I remembered reading about to someone in a code review, and then they'll go "oh, cool!", use it, and then they don't need to do anything else. Meanwhile, when there are useful changes under the hood, you get them in a few weeks instead of maybe some time later in the year or next year.

I don't understand at all how someone could plausibly argue that this is a problem in this context. At absolute most, you could argue against some of the downstream effects of the rapid release cycle (like how it reinforces the small standard library size; more surface area means potentially more bugs, and putting out an extra release to fix them when there are already releases so frequently is a hard sell, so it makes sense to keep things slim), but the article doesn't have anywhere close to that level of nuance in addressing the issue, so I'm skeptical that this is even something they've considered.

