Judging by the fact that most maths degrees are listed as science degrees, it's pretty clear that "mathematics is not science" is a controversial claim.
I don't think that's true. Certainly among the people in my department while I was doing my PhD in mathematics, it was universally agreed that mathematics is not a science (because mathematicians don't do experiments as a part of their work).
Even outside of math departments I see some evidence of that agreement -- for example, why else would the "STEM" acronym list Science and Math separately?
> Judging by the fact that most maths degrees are listed as science degrees
How colleges categorize their departments has an element of politics to it, so I wouldn't pay much attention to that. I've got both a math degree and a science degree, and none of my professors or peers would consider math to be part of science -- even though our math department was listed under our science department. I think OP is correct that this topic is not controversial at all, at least among the people who actually do math and do science. There's no question that math and science are completely separate things.
Assembly offers just as much capability for abstraction as any other language, and I think that it demands more "ability to grasp abstraction", because you pretty much have to build those abstractions yourself.
Awkwardly, I agree on the facts, but the sentence doesn't feel right. Would you agree with this rephrasing: asm is simple in terms of the number of abstract concepts needed to define its syntax and semantics (and as far as I know this is good, since it's intended as a mother-of-all lingua franca), but of course every language out there is Turing complete (and has some mechanism for syscalls); so in the end the only way to build Haskell-like abstractions in asm is actually to code up GHC and then code in Haskell. Which I wouldn't call programming in asm at all (just like I'm not switching circuits by typing on this keyboard). Nor do I think this is actually what asm is used for (when written by humans). My guess is that asm is used for programming close to the metal -- crucial parts of firmware, drivers -- in situations with simple logic and in which abstraction would actually get in your way (who cares about types or functions when you just need to write some values into some memory location?).
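To illustrate that last point with a deliberately contrived sketch (the register address and value are made up, not from any real device):

```asm
@ Hypothetical ARM sketch: set a pin high by storing into a made-up
@ memory-mapped GPIO register. No types, no functions -- just a store.
    ldr   r0, =0x48000014    @ made-up output register address
    movs  r1, #1             @ value to write
    str   r1, [r0]           @ write the value to that memory location
```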
How can this make anyone feel safe if they won't even admit that it's happening?! Moreover, the people are forced to compromise on just about every issue when voting, so they naturally get swayed by marketing and personal priorities. Somehow I doubt that surveillance was much of a priority for anyone.
Firstly, it's not unjustified to say that someone who thinks vanilla JavaScript is "bare metal" obviously doesn't know what they're talking about. Secondly, as a software-engineer-in-training, I agree - software "engineering" doesn't have much in common with other engineering, but I think that's a good thing for software.
The first code is longer (two method calls that each do a conversion, one addition, and probably an omitted conversion back to BigDecimal). The second one is just one simple method call. I would strongly argue that the second, right way is much easier than the wrong way.
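A minimal sketch of the pattern under discussion (the values and variable names here are made up, not the actual code from upthread):

```java
import java.math.BigDecimal;

public class SumExample {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("0.10");
        BigDecimal b = new BigDecimal("0.20");

        // Wrong way: convert both operands to double, add, convert back.
        // The round trip through binary floating point loses precision.
        BigDecimal lossy = BigDecimal.valueOf(a.doubleValue() + b.doubleValue());

        // Right way: one method call, exact decimal arithmetic.
        BigDecimal exact = a.add(b);

        System.out.println(lossy); // prints 0.30000000000000004
        System.out.println(exact); // prints 0.30
    }
}
```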
It's possible that real-world code that converts to double uses temporary variables and a more complicated expression involving other operators with different precedence. Converting all that to method invocations would require refactoring the expression, and if the developers are unaware of the implications (because of a lack of education in numerical methods), they might think that switching to temporary variables and preserving the expression is the safest way to adapt the code to some library that suddenly deals with that funny new numeric type. After all, if all tests (if any exist) pass...
> It's possible that real world code that converts to double uses temporary variables and a more complicated expression involving other operators with different precedence.
The primitive types in Java and their operators have always been a performance hack within Java's object-oriented type system. So if such code exists in a program, there can be a very good reason for it (performance in particular), but it always carries a code smell, so it should always be commented properly.
> if the developers are unaware of the implications
You do not do such a conversion if you have not read up on the implications.
Obviously the first is more transparent than the second, and this isn't a particularly complicated equation. Mind you, this can mostly be mitigated by splitting up the equation, but that can sometimes make solutions comparatively opaque.
If a private company fails over and over again, it goes bankrupt and is dissolved. If a government department or organisation completely fails at its goal, it will generally be given more money and resources. See many, many school districts in America, the CIA (which failed to predict the fall of the USSR and 9/11), the Pentagon (Iraq and Afghanistan), and the State Department ("Let's destroy Libya and Syria!").
Organisations very rarely reform or change. Usually they die and other successful ones take over their market niche. Where this doesn't happen, or happens only rarely, things change very slowly.
Any competitor to the public school system has to compete with "free at the point of access." Absent a transferable per-child budget, local school systems have a huge advantage.
Rust takes a similar approach. For unit tests, you use the #[cfg(test)] attribute (/ pragma / directive) for conditional compilation, and #[test] to mark a function as a unit test; these run whenever you run the `cargo test` command. Also, any Rust code in Markdown fences in doc comments is, by default, also run by `cargo test`; you can disable that for an individual code block by marking the fence as `no_run` (compile only) or `ignore` instead of plain `rust`.
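A minimal sketch of the unit-test side (the function and test names are made up):

```rust
// Hypothetical example; `add` and the test name are placeholders.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]              // this module is compiled only for `cargo test`
mod tests {
    use super::add;

    #[test]               // marks this function as a unit test
    fn adds_small_numbers() {
        assert_eq!(add(2, 2), 4);
    }
}
```

Running `cargo test` builds the crate with that module included and executes every `#[test]` function, alongside any doc tests found in the crate's doc comments.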
Specifically, I'd suggest sticking to OpenGL 4.x and ES 3.x when learning OpenGL. Even within the 4.x line, 4.4+ brought a lot of features that should be used instead of what existed before (bindless resources, texture/buffer storage, persistently mapped buffers, etc.).
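For instance, persistently mapped buffers (available via buffer storage since 4.4) look roughly like this; this is just a sketch, with context creation, sizing, and synchronization omitted and `bufSize` as a placeholder:

```c
// Sketch: create immutable buffer storage (OpenGL 4.4) and keep it mapped.
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);

GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
glBufferStorage(GL_ARRAY_BUFFER, bufSize, NULL, flags);           // immutable storage
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, bufSize, flags); // stays mapped

// ptr stays valid for the buffer's lifetime: write vertex data into it each
// frame without remapping, using fences to avoid stomping data the GPU is
// still reading.
```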
I'd prefer to stay at the OpenGL level. Maybe some kind of standardized high-level API will pop up on top of Vulkan/Metal, for those who don't want to go that low-level.
Metal is actually not that bad to live in. Vulkan, on the other hand, is micromanagement hell. I definitely agree with the people saying that it's for "building your own OpenGL".
The projection matrix transforms the 3D world into a 3D space aligned with the screen: two of the dimensions become the screen coordinates, and the third is depth into the screen. The depth is used for Z-buffering (hiding the stuff in back), and for fog and focus effects.
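To make that concrete, here's the classic gluPerspective-style matrix (a standard formulation, not necessarily what the code upthread uses); fovy is the vertical field of view, a the aspect ratio, and n, f the near and far plane distances:

```latex
% g = \cot(\mathrm{fovy}/2), a = aspect ratio, n/f = near/far plane distances
P =
\begin{pmatrix}
  g/a & 0 & 0 & 0 \\
  0   & g & 0 & 0 \\
  0   & 0 & \dfrac{f+n}{n-f} & \dfrac{2fn}{n-f} \\
  0   & 0 & -1 & 0
\end{pmatrix}
```

Multiplying an eye-space point (x, y, z, 1) by P and dividing by the resulting w (which equals -z) yields normalized device coordinates: the first two components map to the screen, and the third is the depth value used for Z-buffering.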