Sometimes feature requests are actually bugs, and they can be illustrative of a design not being properly understood.
But I think how user feature requests are interpreted is important. Users have a frustration that you might not be aware of, but they aren't aware of all the code and constraints. The issue can even be one of design, which is still important. Very often there is a way to resolve a feature request that is not what the user explicitly asks for, but to do that you have to read between the lines, and carefully. Of course, some people go completely the wrong way with this and, cough Apple cough, decide that they know what is best for the user. It's a hard balance to strike, but I think it is very commonly framed as much simpler than it is.
There's the joke that the user is dumb, and maybe they are, but that doesn't mean the issue they face is. It's not always dumb when a person pulls on a door that says push, because it may actually be that the sign and the design are saying different things[0]. Personally, I like when users suggest methods of resolving the problem. I might throw the suggestion in the garbage, but it often gives me better context clues about what they're really asking for, and if they're thinking hard about the problem, that tells me they care about the product. They just don't have the same vantage point that I do, and that's okay.
You can have two missing features that add up to a bug in total. For example, I worked with two cloud products from the same vendor where a missing back-end HTTP feature of the CDN product interacted with a missing HTTP front-end feature of the PaaS service such that the two products that have a "natural fit" together couldn't actually be used in combination.
This made many architectures that ought to have worked a no-go, forcing customers into contorted design patterns or third-party products.
IMHO this is a bug ("Can't use your products"), but each team individually marked it as a missing feature and then just ignored it for about three years.
Also: not enough people voted the missing features up because not enough people were using the products... because they couldn't.
I know this is a bit off-topic here, but it circles back to the "statistics is hard" intro in the original blog article. You can make catastrophic business mistakes relying on statistics you don't fully understand, such as in this example of "you won't get many complaints for unusable products".
You will, however, get many complaints about the usable products... they have users who can complain.
> because not enough people were using the products... because they couldn't.
I don't think this is off topic at all. I think it is explicitly on topic, at least with respect to the underlying one. It's not just that statistics are hard; it's hard to measure things, and even harder to determine causality, which is often the underlying goal of statistics and data science: to find out why things happen. Measurements are incredibly difficult, yet people often think they are simple. The problem is that whatever you're measuring is actually always a proxy, and it has uncertainty, often uncertainty you won't know about if you don't have a good understanding of what the metric means. Putting in the hard work here always pays off, but unfortunately, if you don't, it can take time before the seams start to crack. I think this asymmetry is often why people get sloppy.
The example I like to use is the confusion around COVID statistics, and how people mis-interpreted them.
For example, the rate of infections (or deaths) per day that was reported regularly in the news is actually: rate of infections * measurement accuracy * rate of measurement.
I.e.:
If more people turn up to be tested, the "rate" would go up.
If the PCR tests improved, the "rate" would go up.
A similar thing applies with hospitalisations and deaths. It might go up because a strain is more lethal than another strain, or because more people are infected with the same strain, or because more deaths are attributed to COVID instead of something else.
It doesn't help that different countries have different reporting standards, or that reporting standards changed over time due to the circumstances!
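The conflation above can be sketched in a few lines of code. This is a toy model with made-up numbers, not real epidemiological data; the point is only that the reported figure moves when testing behavior or test accuracy changes, even while true infections stay constant:

```python
# Toy model (hypothetical numbers): the reported daily "rate" is a product
# of the true infection count, the fraction of infected people who get
# tested, and the test's accuracy (sensitivity).

def reported_cases(true_infections, testing_rate, test_sensitivity):
    """Cases that actually show up in the published statistics."""
    return true_infections * testing_rate * test_sensitivity

# The same underlying epidemic in all three scenarios.
true_infections = 10_000

# Baseline: 20% of infected people get tested, the test catches 70%.
baseline = reported_cases(true_infections, 0.20, 0.70)      # 1400.0

# More people turn up to be tested -> the reported "rate" goes up.
more_testing = reported_cases(true_infections, 0.40, 0.70)  # 2800.0

# The PCR tests improve -> the reported "rate" goes up again.
better_tests = reported_cases(true_infections, 0.20, 0.90)  # 1800.0

print(baseline, more_testing, better_tests)
```

In both non-baseline scenarios the reported number doubles or jumps by nearly a third without a single additional infection, which is exactly why comparing raw reported rates across countries (or across reporting-standard changes) is so misleading.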
[0] https://www.youtube.com/watch?v=yY96hTb8WgI