
I don't know, dropping 17000 lines of code is probably not the best way to solicit technical discussions

(https://lore.kernel.org/lkml/20240826103728.3378-1-greg@enje... is the patch set in question)



That's version 4. Here's version 2, which looks like the first patch submission:

https://lore.kernel.org/linux-security-module/20230204050954...


7000 is not a lot better.


Okay, but how long is reasonable to take to tell someone that their patch is too big to be reviewed?

> We were meticulous in our submissions to avoid wasting maintainers time. We even waited two months without hearing a word before we sent an inquiry as to the status of one of the submissions. We were told, rather curtly, that anything we sent would likely be ignored if we ever inquired about them.

It's reasonable to ask that people submit smaller patches to get reviewed, and it's reasonable to rule some things out for lack of bandwidth compared to other priorities. But it's pretty ridiculous to expect people to know that those are the reasons their code isn't getting reviewed if they're not allowed to inquire even once after two months of radio silence.


Not everything can be broken down into 5-line patches. Some of the bigger features need bigger code. You can look at the impact radius and check whether the patch is behind a feature flag to make a decision. In this case, it seems the feature is opt-in and the changes are self-contained.


I don’t follow the Linux kernel development process closely, but on most projects—open source or proprietary—that I’ve been involved in, dropping a big diff without having proposed or discussed anything about it beforehand would have been fairly unwelcome, even if it’s something that would be infeasible to break down smaller. I’d also argue that even quite substantial changes can be made with incremental patches, if you choose to do so.
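For illustration, here is a minimal sketch of what an incremental series can look like with a plain git email workflow; the branch name, output directory, and list address below are hypothetical:

    # Keep the work as a stack of small, self-contained commits, e.g.:
    #   1. add core data structures
    #   2. add the new API (still unused)
    #   3. wire the API into existing callers
    #   4. add tests and documentation

    # Generate one numbered patch per commit, plus a cover letter that
    # explains the overall design of the series:
    git format-patch --cover-letter -v1 -o outgoing/ origin/master..my-feature

    # Post the series as a single thread to the relevant list and maintainers:
    git send-email --to=subsystem-list@example.org outgoing/*.patch

Each patch stays reviewable on its own, and the cover letter carries the big-picture discussion.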


There had probably been a lot of discussion in the background

The "every change can be a small patch, can be split in such" is part now of the Linux folklore IMHO. As in, a belief that people kinda heard about but nobody has concrete proof

Also, do you know what would make it much easier to review long changes? Not using email patches

But I'm glad there's a good answer on "how to do it" in the thread: https://lore.kernel.org/rust-for-linux/Z6bdCrgGEq8Txd-s@home...


> Not using email patches

GitHub issues with "load 450 hidden comments" on them? No threads, no comparisons between multiple submissions of the same patch, changes from rebasing impossible to separate from everything else? Having to drop to the command line anyway to merge a subset that has been reviewed? Does encouraging squash merges make it easier to work with long changes?

Email is complex and bare bones, but GitHub/GitLab reviews are a dumpster fire for anything that isn't trivial.


Try a modern/patched version of Gerrit; I'm sure Linus could tune it to his preference

And I agree, GH has several issues


I've used Gerrit myself for nearly 10 years. I recognize its strengths and like it for what it does. I'm also comfortable with mailing lists, GitHub, and GitLab. I've lived in "both worlds".

IIUC, the kernel project doesn't want to use it because of the single-point-of-failure argument and other federation issues. Once the "main" Gerrit instance is down, it suddenly becomes a massive liability. This[1] is good context from the kernel.org administrator, Konstantin Ryabitsev:

"From my perspective, there are several camps clashing when it comes to the kernel development model. One is people who are (rightfully) pointing out that using the mailing lists was fine 20 years ago, but the world of software development has vastly moved on to forges.

"The other camp is people who (also rightfully) point out that kernel development has always been decentralized and we should resist all attempts to get ourselves into a position where Linux is dependent on any single Benevolent Entity (Github, Gitlab, LF, kernel.org, etc), because this would give that entity too much political or commercial control or, at the very least, introduce SPoFs. [...]"

[1] https://lore.kernel.org/rust-for-linux/20250207-mature-paste...


That's interesting, but I think the preoccupation with SPoF is overblown

By that measure, kernel.org is a SPoF as well

Maybe we can figure out a way to archive Gerrit discussions

(off-topic note: don't upvote the comment you're responding to while you're commenting, as you'll lose your comment)


SPoF is not the only concern. Gerrit search is painful. You can't search the internet for a keyword that you wrote in v10 of some Gerrit patch, unlike on list-based workflows. Also, there's too much point-and-click for my taste, but I lived with it, as that's the tool chosen by that community. It's on me to adapt if I want to participate. (There are some ncurses-based tools, such as "gertty"[1]; but it was not reliable for me—the database crashed when I last tried it five years ago.)

In contrast, I use Mutt for dealing with high-volume email-based projects with thousands of emails. It's blazingly fast, and an absolute delight to use—particularly when you're dealing with lists that run like a fire-hose. It gives a good productivity boost.

I'm not saying, "don't drag me out of my email cave". I'm totally happy to adapt; I did live in "both worlds" :-)

[1] https://opendev.org/ttygroup/gertty


Linus wrote this on kernel.org not being a SPoF: https://lore.kernel.org/rust-for-linux/CAHk-=wiQUya5_zTwLFNT...


For what it's worth, I think the main issue is archival. Like it or not, there's nothing that beats email when it comes to preservation.



That's a great post, and also the presentation linked there is very interesting and hints at some of the issues (though I disagree with some of the prescriptions there)

Yes, the model is not scaling. Yes, the current model is turning people away from contributing

I don't think Linux itself is at risk of extinction, but someday the last person in an IRC channel will turn the lights off, and I wonder whether they will then think about what they could have done better and why people don't care about finicky build systems with liberal use of M4. Probably not.


> There had probably been a lot of discussion in the background

If that were the case then the patch would have a lot more CCs.


Then you have to discuss first, before throwing thousands of lines of code at somebody.


Totally. And also: the discussion must take place FIRST!


One of the first habits I learned at BigCo was to keep a sense of the DAG in any piece of work over 400 lines, so I could make patches arbitrarily large or small, because whichever I chose, someone would eventually complain
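A sketch of what that can look like in practice with git (the branch name is hypothetical); the point is that the same stack of commits can be regenerated as many small patches or squashed into a few larger ones:

    # Inspect the stack of commits that make up the change:
    git log --oneline origin/master..my-feature

    # Reorder, split, or squash commits interactively; each resulting
    # commit becomes one patch in the eventual series:
    git rebase -i origin/master

    # To split a commit, mark it "edit" during the rebase, then:
    #   git reset HEAD^        # unstage its changes
    #   git add -p             # stage one logical piece at a time
    #   git commit             # repeat until everything is re-committed
    #   git rebase --continue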


Indeed, and that's exactly what the Rust for Linux guys are doing. They have Binder, multiple changes with various degrees of dependency on newer compilers, support for different kinds of drivers, and they are extracting them and sending them to Linux in small bits.


17KLoC? Maybe he should consider that they're probably still reviewing it through bloody eyes. But seriously, I wouldn't want to review that much code.


I'd get mildly angry if someone sent a giant patch without having discussed it with me first. (I don't work on any kernel things.) But a quicker "no, don't have time, probably never will" response would have been nice, I guess?


No feedback whatsoever for two months, just because it was 17k lines, and you are blindly defending it? If you don't know what you are posting and talking about, then maybe you shouldn't.

If someone is willing to put in the effort to submit such a huge patch, how about showing them a little bit of respect, I mean the minimum amount of respect, for that effort and sending them a one-line reply asking them to break the patch into smaller ones?

Surely that won't take more than 45 seconds. Still too hard to get the point?


> If you don't know what you are posting and talking about, then maybe you shouldn't

I hate to appeal to authority, but I have been working with large cathedral-style open source projects (mostly three: GCC, QEMU, Linux itself) for about 20 years at this time and have been a maintainer for a Linux subsystem for 12. So I do know what I am posting and talking about.

The very fact that I had never learned about this subsystem from LWN, the Linux Plumbers Conference, the Linux Security Summit, etc. means that the problem is not technical and is not with the maintainers failing to reply to it.

A previous submission (7000 lines) did get some replies from the maintainers. Apparently, instead of being simplified, the submission ballooned into something 150% larger. At some point I am not sure what the maintainers could do except ignore it.


> At some point I am not sure what the maintainers could do except ignore it.

Ignoring people is a child's approach to communication. If there is a problem, it's your responsibility to communicate it. If you have communicated the issue and the other party refuses to hear it, that is a different matter, but it is your responsibility to communicate.


Yes, as I wrote, they had communicated it before. (BTW, of course it's bazaar, not cathedral... brain fart on my part.)


Surely, if there were insufficient established context for a reviewer to easily follow a large patch, the correct response would be a swift rejection with a request that such context be established before submitting again.

It sounds like there was willingness to meet any requirements, but the submitters ended up in a position of not knowing what the requirements were or whether they had met them.


It's very easy to develop tunnel vision when working remotely with people you've never even met. It's happened to me multiple times, and there's a very high chance that it's happening here.



