
> None of this is particularly controversial

Judging by the fact that most maths degrees are listed as science degrees, it's pretty clear that "mathematics is not science" is a controversial claim.


I don't think that's true. Certainly among the people in my department while I was doing my PhD in mathematics, it was universally agreed that mathematics is not a science (because mathematicians don't do experiments as a part of their work).

Even outside of math departments I see some evidence of that agreement -- for example, why else would the "STEM" acronym list Science and Math separately?


> Judging by the fact that most maths degrees are listed as science degrees

How colleges categorize their departments has an element of politics to it, so I wouldn't pay much attention to that. I've got both a math and science degree, and none of my professors or peers would consider math to be part of science -- even considering our math department was listed under our science department. I think OP is correct that this topic is not controversial at all, at least among the people who actually do math and do science. There's no question that math and science are completely separate things.


Assembly offers just as much capability for abstraction as any other language, and I think that it demands more "ability to grasp abstraction", because you pretty much have to build those abstractions yourself.


Awkwardly, I agree on the facts, but the sentence doesn't feel right. Would you agree with this rephrasing: asm is simple in terms of the number of abstract concepts needed to define its syntax and semantics (and as far as I know this is good, since it's intended as a mother-of-all lingua franca), but of course every language out there is Turing-complete (and has some mechanism for syscalls); so in the end the only way to build Haskell-like abstractions in asm is to actually code up GHC and then code in Haskell. Which I wouldn't call programming in asm at all (just like I'm not commuting circuits by typing on this keyboard). Nor do I think this is actually what asm is used for (when written by humans). My guess is that asm is used for programming close to the metal -- crucial parts of firmware, drivers -- in situations with simple logic, where abstraction would actually get in your way (who cares about types or functions when you just need to write some values into some memory location?).


How can this make anyone feel safe if they won't even admit that it's happening?! Moreover, the people are forced to compromise on just about every issue when voting, so they naturally get swayed by marketing and personal priorities. Somehow I doubt that surveillance was much of a priority for anyone.


Firstly, it's not unjustified to say that someone who thinks vanilla JavaScript is "bare metal" obviously doesn't know what they're talking about. Secondly, as a software-engineer-in-training, I agree - software "engineering" doesn't have much in common with other engineering, but I think that's a good thing for software.


Could you imagine if software engineers DID do rigorous engineering practices?

It would be mayhem! Projects would just NEVER FINISH.


But when they do finish, they'll stand the test of time!


No, but the right way should at least be easier than the wrong way.


> No, but the right way should at least be easier than the wrong way.

Let us compare:

> bigDecimal.doubleValue() + bigDecimal2.doubleValue()

vs.

> bdResult = bd1.add( bd2 )

The first snippet is longer (two method calls, which each do a conversion, one addition, and probably an omitted conversion back to BigDecimal). The second one is just one simple method call. I would strongly argue that the second, right way is much easier than the wrong way.
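
To make the difference concrete, here is a minimal sketch (the class name and the literal values are made up for illustration):

    import java.math.BigDecimal;

    public class Example {
        public static void main(String[] args) {
            BigDecimal bd1 = new BigDecimal("0.10");
            BigDecimal bd2 = new BigDecimal("0.20");

            // Wrong way: round-trip through double, which cannot represent 0.1 or 0.2 exactly
            BigDecimal viaDouble = BigDecimal.valueOf(bd1.doubleValue() + bd2.doubleValue());

            // Right way: stay in BigDecimal, where the addition is exact
            BigDecimal bdResult = bd1.add(bd2);

            System.out.println(viaDouble); // 0.30000000000000004
            System.out.println(bdResult);  // 0.30
        }
    }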


It's possible that real world code that converts to double uses temporary variables and a more complicated expression involving other operators with different precedence. Converting all that to method invocations would require refactoring the expression and if the developers are unaware of the implications (because of a lack of education in numerical methods) they might think that switching to temporary variables and preserving the expression is the safest way to adapt the code to some library that suddenly deals with that funny new numeric type. After all, if all tests (if any exist) pass...


> It's possible that real world code that converts to double uses temporary variables and a more complicated expression involving other operators with different precedence.

The primitive types in Java and their operators have always been a hack for performance in the object-oriented type system of Java. So if such code exists in the program, there can be a very good reason for it (in particular performance), but it always has a "code smell". So it should always be commented properly.

> if the developers are unaware of the implications

You do not do such a conversion if you have not read up on the implications.


Now how about a more complex example?

bdP = bdA / pow((1 + bdr / bdm), (bdm - bdt));

bdP = bdA.divide((ONE.add(bdr.divide(bdm))).pow(bdm.subtract(bdt)));

Obviously the first is more transparent than the second, and this isn't a particularly complicated equation. Mind you, this can be mostly mitigated by splitting up the equation, but that can sometimes make solutions comparatively opaque.
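
That said, the split-up version can read reasonably well if the intermediate names carry meaning. A rough sketch (the method and variable names, and the use of MathContext.DECIMAL64, are my own assumptions; note that BigDecimal.pow only accepts an int exponent, so this assumes the exponent is a whole number):

    import java.math.BigDecimal;
    import java.math.MathContext;

    public class Compute {
        static BigDecimal compute(BigDecimal bdA, BigDecimal bdr,
                                  BigDecimal bdm, BigDecimal bdt) {
            BigDecimal ratePerPeriod = bdr.divide(bdm, MathContext.DECIMAL64);
            BigDecimal base = BigDecimal.ONE.add(ratePerPeriod);
            BigDecimal factor = base.pow(bdm.subtract(bdt).intValueExact()); // exponent must be integral
            return bdA.divide(factor, MathContext.DECIMAL64);
        }
    }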


> orders of magnitude harder

They seem to be at about the same difficulty to me; it would be nice if you could elaborate.


If a private company fails over and over again, it goes bankrupt and is dissolved. If a government department or organisation completely fails at its goal, it will generally be given more money and resources. See many, many school districts in America, the CIA (failed to predict the fall of the USSR and 9/11), the Pentagon (Iraq and Afghanistan), or the State Department (let’s destroy Libya and Syria!).

Organisations very rarely reform or change. Usually they die and other, successful ones take over their market niche. Where this doesn’t happen, or happens very rarely, things change very slowly.

Any competitor to the public school system has to compete with free at the point of access. Absent a transferable per-child budget, local school systems have a huge advantage.


I suggest you try asking on Workplace.StackExchange, too, since the people there have more experience answering this kind of question.


Rust takes a similar approach. For unit tests, you use the #[cfg(test)] attribute (/ pragma / directive) for conditional compilation, and #[test] to mark a function as a unit test, which is run whenever you run the `cargo test` command. Also, any Rust code in Markdown fences in documentation comments is, by default, also run by `cargo test`, which you can disable for an individual code block by marking it as `rust,no_run` instead of `rust`.
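
A minimal sketch of what that looks like (assuming a library crate named `my_crate`; the function and module names are made up):

    /// Adds two numbers.
    ///
    /// The fenced example below is compiled and run by `cargo test` as a doctest.
    ///
    /// ```
    /// assert_eq!(my_crate::add(2, 2), 4);
    /// ```
    pub fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    // Compiled only when testing, so it never ships in release builds.
    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn adds_small_numbers() {
            assert_eq!(add(2, 2), 4);
        }
    }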


Yes! I'd suggest learning OpenGL, then Metal, then Vulkan/DX12, which is how I'd rank them from high to low level.


Specifically, I'd suggest sticking to OpenGL 4.x and ES 3.x when learning OpenGL. Even within 4.x, 4.4+ brought a lot of changes that should be used instead of what existed before (bindless resources, texture/buffer storage, persistently mapped buffers, etc.).


I'd prefer to stay at the OpenGL level. Maybe some kind of standardized high-level API will pop up on top of Vulkan/Metal for those who don't want to go that low-level.


Metal is actually not that bad to live in. Vulkan, on the other hand, is micromanagement hell. I definitely agree with the people saying that it's for "building your own OpenGL".


Worse, because they took the extensions concept and pushed it a step further.

Every couple of weeks there is a new version, and each card supports a certain minor version.

https://vulkan.gpuinfo.org


It would always be possible to project a 3D world onto a 2D plane in any 2D graphics API, which is exactly what the projection matrix does in OpenGL.


The projection matrix transforms the 3D world into a 3D world in screen space. Two of the dimensions are the screen coordinates, and the third is depth into the screen. The depth is used for Z-buffering (hiding the stuff in back), and for fog and focus effects.
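
A rough sketch of that in code, using the standard symmetric OpenGL frustum terms (near n, far f, right r, top t); the method and variable names here are just for illustration, not anything from the API:

    public class Projection {
        // Takes an eye-space point and returns { x_ndc, y_ndc, depth }:
        // the first two are the on-screen coordinates, the third is what
        // the Z-buffer compares.
        static double[] project(double xe, double ye, double ze,
                                double n, double f, double r, double t) {
            // Clip coordinates from the classic perspective matrix
            double xc = (n / r) * xe;
            double yc = (n / t) * ye;
            double zc = -((f + n) / (f - n)) * ze - (2 * f * n) / (f - n);
            double wc = -ze;

            // Perspective divide
            return new double[] { xc / wc, yc / wc, zc / wc };
        }
    }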


Depth in OpenGL is a separate 2D buffer. You can use it to approximate the effects of 3D space, but it’s still fundamentally a set of 2D operations.

Of course, it’s all fundamentally just bits on a heap, so past a certain point the argument becomes academic.


That, I sort-of agree with. Layering isn't unique to 3D, so it's debatable whether the incorporation of a depth buffer makes OpenGL a 3D API.


It's not layering. It's depth. The depth buffer will work for cases where two objects cross in Z. Unlike the painter's algorithm.

