
You can measure all the parameters you want. The question is: does it really matter? I know many doctors, and one of my favorite questions is about stethoscopes. The answer is unanimous: "I could just roll up a piece of paper, and if the room is quiet enough, I can do my work." My grandpa used one made out of wood, just a cone. Once I was fascinated by a Littmann with Bluetooth audio and told a doctor friend it would be great (thinking about a present). The answer was: "That is all hype, I can do exactly the same with a $2 piece." I pointed out the possibility of recording the sound, possibly as a defense in case of being sued. She laughed out loud, said it's impractical to record everything, it would take too much time, and again, it's just a toy.

> I could just roll a piece of paper and if the room is quiet enough, I can do my work

That is true. The job is certainly doable. It's also possible to press one's ear against the patient's body to directly listen to the sounds without any tools whatsoever. The stethoscope was itself invented because a male doctor doing that to a woman's chest was uncomfortable for obvious reasons.

There is some kind of difference between a good Littmann and a cheap stethoscope. My experience is that the important sounds are just easier to hear with the Littmann. I'd love to know why that is, and why the cheap ones just can't seem to match it.


Lots of nurses and EMTs swear by the amplified Bluetooth stethoscopes, but unlike a doctor working in a nice quiet office, they're often in much noisier conditions.

The difference between a $100 mic and one that costs ten to a hundred times as much is not how well they work in perfect conditions, but how well they work in the worst conditions imaginable.

Doctors often have the seniority and authority to make the room quiet; nurses and EMTs are often working in much different conditions.


And patients. What would you think if the doctor in front of you were using a plastic thingy that seems to have come from a toy doctor's set?

Neurosymbolic programming

That’s not a particular reading

Also in Germany, also using exclusively iPhone with carplay. Not perfect, but light years better than google maps.

Way too often I have seen advocates of SOLID and patterns make religious arguments: I don't like it. That being said, I think there is nothing bad in SOLID, as long as it's treated as a set of principles and not religious dogma. About patterns, I cannot say as much positive. They are not bad per se, but I've seen them do a lot of harm. In the Gang of Four book, in the preface I think, it says something like "this list is neither exhaustive, nor complete, and often inadequate". The problem is that every single person I know who was exposed to the book tries to hammer every problem into some pattern (in the sense of [1]). They also insist on using the pattern name everywhere, like "facade_blabla". IMHO the pattern may be Façade, but putting that through the names of all classes and methods is not good design.

[1] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


> That being said, I think there is nothing bad in SOLID, as long as treated as principles and not religious dogmas

This should be the header of the website. I think the core of all these arguments is people thinking they ARE laws that must be followed no matter what. And in that case, yeah that won't work.


I agree. But I have to say, when defining the architecture, there are things you already know will be terrible bottlenecks later. They should be avoided, just as the previous comment said about defining proper indices in a database. Optimization means making something that is already "good" and correct better. There is no excuse for making half-assed, bug-ridden, shitty software under the excuse "optimization is for later". That is technical debt generation: two very different things.

That is the gist of the leftpad story, isn't it?

It is incredible to me how many "developers", even "10-year senior developers", have no idea how to use a debugger and/or profiler. I've even met some who asked "what is a profiler?" I hope I'm not insulting anybody, but to me it's like going to an "experienced mechanic" who doesn't know what a screwdriver is.

It’s because in most enterprise contexts:

1) Most bugs are integration bugs. Whereby multiple systems are glued together but there’s something about the API contract that the various developers in each system don’t understand.

2) Most performance issues are architectural. Unnecessary round trips, doing work synchronously, fetching too much data.

Debuggers and profilers don’t really help with those problems.

I personally know how to use those tools and I do for personal projects. It just doesn’t come up in my enterprise job.


If you don't have personal examples of using a profiler to diagnose an issue like "too many round trips" and identify where those round trips are coming from, then you've never inherited a complex performance problem before.

That is surprising. They have come up in every enterprise job I have had. Debuggers and profilers absolutely do help, although for distributed systems they are called something else.

> 2) Most performance issues are architectural. Unnecessary round trips, doing work synchronously, fetching too much data.

More like death by a thousand abstractions. Not that profilers will help you any more with that.


Doesn't really change the picture. If you don't know the basics of a car, then you absolutely shouldn't be driving in traffic either.

Yeah, but that analogy is sort of false. A better analogy (though it would make you look absurd) would be: "if you don't know how to take apart and reassemble the engine of a vehicle, you shouldn't be allowed to drive it on the road". You get a driver's license if you can remember a few common-sense facts and spend a bit of monitored time behind the wheel without doing anything absurdly illegal or injuring/killing somebody.

You don't use something like Datadog at your enterprise job?

I once interviewed at Microsoft. The hiring manager asked me how I would go about programming a breakpoint if I were writing a debugger. I started to explain how I would have to swap out an instruction to put an INT 3 in the code and then replace it when the breakpoint hit.

He stopped me and said he was just looking to see if I knew what an INT 3 was. He said few engineers he interviewed had any idea.


Did you get the job ... or were you overqualified?

I guess I was overqualified. Didn't get the job.

What is an INT 3?


The last time I interviewed (around 10 years ago) I was surprised that 9 of the 10 senior developers didn't know how many bits were in basic elementary types.

(Then, shortly afterward I also tried to find a new job, realized the entire industry had changed, and was fortunate enough to decide it wasn't worth the trouble.)


> 9 of the 10 senior developers didn't know how many bits were in basic elementary types

That's likely thanks to C which goes to great pains to not specify the size of the basic types. For example, for 64 bit architectures, "long" is 32 bits on the Mac and 64 bits everywhere else.

The net result of that is I never use C "long", instead using "int" and "long long".

This mess is why D has 32-bit ints and 64-bit longs, whether it's a 32-bit machine or a 64-bit machine. The result is that we haven't had porting problems with integer sizes.


It's substantially worse on the JVM. One's intuition from C just fails when you have to think about references vs primitives, and the overhead of those (with or without compressed OOPs).

I've met very few folks who understand the overheads involved, and how extreme the benefits can be from avoiding those.


Conversely I've met many folks who come into managed environments and piss away time trying to wrangle the managed system into how they think it should work, instead of accepting that clever people wrote it and guidelines when followed result in acceptable outcomes.

The sort of insane stuff I've seen on the dotnet repo where people are trying to tear apart the entire type system just because they think they've cracked some secret performance code.


>on the dotnet repo

You mean the .net compiler/runtime itself? I haven't looked at it, but isn't that the one place you'd expect to see weirdly low-level C# code?


My favourite JVM trivia, although I openly admit I don't know if it's still true, is the fact that the size of a boolean is not defined.

If you ask a typical grad the size of a bool they will inevitably say one bit, but CPUs, RAM, etc. don't work like that; they typically expect word-sized chunks of memory, meaning that a boolean of one bit becomes a word-sized chunk, assuming it hasn't been packed.


"While it represents one bit of information, it is typically implemented as 1 byte in arrays, and often 4 bytes (an int) or more as a standalone variable on the stack."

In what way is it worse? The range of values they can contain is well-specified.

And you have a frame with an operands stack where you should be able to store at least a 32-bit value. `double` would just fill 2 adjacent slots.

And references are just pointers (possibly not using the whole of the value as an address, but as flags for e.g. the GC) pointing to objects, whose internal structure is implementation detail, but usually having a header and the fields (that can again be reference types).

Pretty standard stuff, heap allocating stuff is pretty common in C as well.

And unlike C, it will run the exact same way on every platform.


I’m saying very few folks understand the cost tradeoffs of using references/objects versus using primitives directly. The difference in memory used for significant amounts of data is huge.

Not to mention indirection costs, but that’s a different issue.


That's a reasonable answer. But, I meant they seemed to have little understanding or interest. I don't interview much, and I'm probably a poor interviewer. But, I guess I was expecting some discussion.

I ran into some comp sci graduates in the early 80's who did not know what a "register" was.

To be fair, though, I come up short on a lot of things comp sci graduates know.

It's why Andrei Alexandrescu and I made a good team. I was the engineer, and he the scientist. The yin and the yang, so to speak.


Oooh, saw Andrei's name pop up and remembered his books on C++ back in the day. Ran into a systems engineer a while ago who, during a tech review, asked why some data size wasn't 1000 instead of 1024... like, err??

> That's likely thanks to C which goes to great pains to not specify the size of the basic types. For example, for 64 bit architectures, "long" is 32 bits on the Mac and 64 bits everywhere else.

Don't you mean Windows instead of Mac? Most Unix-like operating systems use LP64 while Windows uses LLP64.


Even more fun is pointers, especially when windows / macos were switching from 32-bits to 64-bits (in different ways).

Microsoft tried valiantly to make Win16 code portable to Win32, and Win32 to Win64. But it failed miserably, apparently because the programmers had never ported 16 bit C to 32 bit C, etc., and picked all the wrong abstractions.

> Even more fun is pointers, especially when windows / macos were switching from 32-bits to 64-bits (in different ways).

And yet even more of a fun time with porting pointer code was going from the various x86 memory models[0] to 32-bit. Depending on the program, the pain was either near, far, or huge... :-D

0 - https://en.wikipedia.org/wiki/X86_memory_models


Why did they design it like that? It must have seemed like a good idea at the time.

In ancient computing times, which is when C was birthed, the size of integers at the hardware level and their representation was much more diverse than it is today. The register bit-width was almost arbitrary, not the tidy powers of 2 that everyone is accustomed to today.

The integer representation wasn't always two's complement in the early days of computing, so you couldn't even assume that. C++ only required integer representations to be two's complement as of C++20, since the last architectures that don't work this way had effectively been dead for decades.

In that context, an 'int' was supposed to be the native word size of an integer on a given architecture. A long time ago, 'int' was an abstraction over the dozen different bit-widths used in real hardware. In that context, it was an aid to portability.


Was it possible to write a program taking into account this diversity, and have it work properly?

C is a portable language, in that programs will likely compile successfully on a different architecture. Unfortunately, that doesn't mean they will run properly, as the semantics are not portable.

So what’s the point of having portable syntax, but not portable semantics?

C certainly gives the illusion of portability. I recall a fellow who worked on DSP programming, where chars and shorts and ints and longs were all 32 bits. He said C was great because that would compile.

I suggested to him that he'd have a hard time finding any existing C code that ran correctly on it. After all, how are you going to write a byte to memory if you've only got 32 bit operations?

Anyhow, after 20 years of programming C, I took what I learned and applied it to D. The integral types are specified sizes, and 2's complement.

One might ask, what about 16 bit machines? Instead of trying to define how this would work in official D, I suggested a variant of D where the language rules were adapted to 16 bits. This is not objectively worse than what C does, and it works fine, and the advantage is there is no false pretense of portability.


On the one hand, in today's world asking how many bits is in an int is exactly as answerable as "how long is a piece of rope"

On the other, the right answer is 16 or 32. It's not the correct answer, strictly speaking, but it is the right one.


An 'int' is also 64 bits on some platforms.

It's the wrong question. How many bits is uint64 is a much better question, if we're at a place where that's relevant.

I mean, as a senior developer, the number of bits in an "int" is "who the hell knows, because it has changed a bunch of times during my career, and that's what stdint.h is for." And let's not even talk about machines with 32-bit "char" types, which I actually had to program for once.

If the number of bits isn't actually included right in the type name, then be very sure you know what you're doing.

The senior engineer answer to "How many bits are there in an int?" is "No, stop, put that down before you put your eye out!" Which, to be fair, is the senior engineer answer to a lot of things.


How many bits are in an `int` in C? What do you mean "at least 16", that's ridiculous, nobody would write a language that leaves the number of bits in basic elementary types partially specified‽

It is a good idea: most of the time you don't care, and on slower systems a large int is harmful since the system can't handle that much and it costs performance. Go to the faster system with larger ints when you need larger ints.

I had one tell me all ints are 16 bits, and then they said 0xffff is a 32-bit number.

Maybe I'm wrong but I suspect this might be partly due to the rise of Docker which makes attaching a debugger/profiler harder but also partly due to the existence of products like NewRelic which are like a hands-off version of a debugger and profiler.

I haven't used a debugger much at work for years because it's all Docker (I know it's possible but lots of hoops to jump through, plus my current job has everything in AWS i.e. no local dev).


On the other hand, I had to debug a PHP app in Docker using XDebug and it was mostly painless. Or, to be more precise, no more painful than debugging it on local Wamp/Xampp.

Oh, I know it's doable. I remember two people I used to work with individually getting it working. Most people didn't seem to want to bother, though.

I don’t understand why Self is placed in the list instead of Smalltalk. Smalltalk came first, and Alan Kay was the one who invented the “OOP” name.

Also ML is seen as a child of Lisp.


They should be placed alongside each other, because Self's OOP model is quite different from Smalltalk's, including how the graphical programming experience feels.

For those who have never seen it, there are some old videos (taken from VHS) on the language site: https://selflanguage.org/


> I don’t understand why self is placed in the list instead of smalltalk.

The article explains that:

> Smalltalk inherited the notion of a value and its type from earlier languages, and implemented the idea of a class. All objects had a class that gave their type, and the class was used to construct objects of that type. Self disposed of the notion of class and worked solely with objects. As this is a purer form, I have chosen Self as the type specimen for this ur-language.


Yes, but I still don't understand that explanation. Clearly Self is a descendant of Smalltalk that purified one part of it; but it is still a descendant. I understand "ur-" as indicating lineage, more about time than features. To me it's still backwards.

Although it didn't call it that, Simula-67 was basically object-oriented and both preceded and inspired Smalltalk. But syntactically it looks much like other Algol-inspired languages, so it doesn't look that interesting at first glance.

Waterfall disguised with other names and extremely expensive certification.

