Asthma, no. Diabetes, no. There is not a single person on that ship with either, they’re both disqualifying from shipboard service.
There are probably some people who are overweight to varying degrees, but again, people who get overweight and don't get back within standards get removed from the Navy.
In other cases, a business benefits from having customers park at neighboring businesses. If parking is provided, it will be taken by customers of other businesses.
Those types should not be optional. CHAR_BIT needs to be 8. It is clearly possible to implement the types even on a 6502 or Alpha. From the early days of pre-ANSI C, the language supported types for which the hardware did not have appropriate word sizes. There was a 32-bit long on the 16-bit PDP-11 hardware.
I would go beyond that, requiring all sizes that are a multiple of 8 bits from 8-bit through 512-bit. This better supports cryptographic keys and vector registers.
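It's the same trick compilers have always used. A rough sketch of synthesizing a wider type out of narrower hardware words (hypothetical u128 type, names mine):

    #include <stdint.h>

    /* Hypothetical 128-bit unsigned type built from two 64-bit halves,
       the same way 32-bit long was synthesized on the 16-bit PDP-11. */
    typedef struct { uint64_t lo, hi; } u128;

    static u128 u128_add(u128 a, u128 b)
    {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low half */
        return r;
    }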
I was on an OS development team in the 1990s. We were using the SHARC DSP, which was naturally a word-addressed chip. Endianness didn't exist in hardware, since everything was whatever size (32, 40, or 48 bits) you had on the other end of the bus. Adding 1 to a hardware pointer would move by 1 bus width. The chip vendor thought that CHAR_BIT could be 32 and sizeof(long) could be 1.
We couldn't ship it that way. Customers wanted to run real-world source code and they wanted to employ normal software developers. We hacked up the compiler to rotate data addresses by 2 bits so that we could make CHAR_BIT equal to 8.
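Very roughly, and simplified from memory rather than the actual compiler internals: a byte pointer on a word-addressed machine just carries the 2-bit offset into the 32-bit word in its low bits, so incrementing it by 1 moves one byte instead of one bus word. (The real hack rotated rather than shifted so no address bits were lost.)

    #include <stdint.h>

    typedef uint32_t byte_ptr;   /* "char *" as the hacked compiler saw it */

    static byte_ptr make_byte_ptr(uint32_t word_addr, unsigned byte_off)
        { return (word_addr << 2) | (byte_off & 3u); }

    static uint32_t word_addr_of(byte_ptr p) { return p >> 2; }  /* what goes on the bus */
    static unsigned byte_off_of(byte_ptr p)  { return p & 3u; }  /* which byte in the word */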
That was the 1990s, with an audience of embedded RTOS developers who were willing to put up with almost anything for performance. People are even less forgiving today. If strangely sized char couldn't be a viable product back in the 1990s, it has no chance today. It's dead. CHAR_BIT is 8 and will forever be so.
A horrifying case was multiplication in an x86 emulator. The opcode handler needed to multiply a pair of unsigned 16-bit values, then return a 64-bit result.
The uint16_t values got promoted to int for the multiplication, so a product above INT_MAX overflowed the signed intermediate, which is undefined behavior. (If I remember right, the result was assigned to a uint16_t as well, making the intent clear.) The compiler then assumed that the 32-bit intermediate couldn't possibly have the sign bit set, so it wouldn't matter whether promotion to a 64-bit value used sign extension or zero extension. Depending on the optimization level, the compiler would do one or the other.
This is truly awful behavior. It should not be permitted.
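Reduced to its essence (my reconstruction, not the emulator's actual code):

    #include <stdint.h>

    /* Both operands are promoted to signed int, so 0xFFFF * 0xFFFF
       overflows INT_MAX before any widening to 64 bits happens; that
       is undefined behavior, and the compiler exploits it. */
    uint64_t mul16_bad(uint16_t a, uint16_t b)
    {
        return a * b;            /* int * int: may overflow, UB */
    }

    /* Forcing an unsigned multiplication sidesteps the promotion trap. */
    uint64_t mul16_ok(uint16_t a, uint16_t b)
    {
        return (uint32_t)a * b;  /* unsigned * unsigned: well defined */
    }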
I can't really blame gcc for that one, since the most straightforward way of using signed integer arithmetic would yield a negative value if the result is bigger than INT_MAX, but it would be very weird for programs to expect and rely upon that behavior.
On the other hand, consider even the function "unsigned mul_mod_65536(unsigned short x, unsigned short y) { return (x * y) & 0xFFFF; }", which the authors of the Standard would have expected commonplace implementations to process in consistent fashion for all possible values of "x" and "y" [the Rationale describes their expectations]. It will sometimes cause gcc to jump the rails if the arithmetical value of the product exceeds INT_MAX, despite the fact that the sign bit of the computation is ignored. If, for example, the product would exceed INT_MAX on the second iteration of a loop that should run a variable number of iterations, gcc will replace the loop with code that just handles the first iteration.
See post above. There is no good way for compilers to handle that case, but gcc gets "creative" even in cases where the authors of C89 made their intentions clear.
C currently replaces my use of an array with a pointer. This sucks, because I'd have taken the address if I wanted that.
Your proposal replaces my use of an array with two things, a pointer (as before) and a length. This is not too helpful, because I already could have done that if I'd wanted to.
What is missing is the ability to pass an array. Sometimes I want to toss a few megabytes on the stack. Don't stop me. I should be able to do that. The called function then has a copy of the original array that it can modify without mangling the original array in the caller.
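For anyone who hasn't been bitten by this, the current behavior in a nutshell (plain C, nothing exotic):

    #include <stdio.h>

    void callee(int a[8])            /* despite appearances, 'a' is an int* */
    {
        printf("%zu\n", sizeof a);   /* size of a pointer, not of 8 ints */
        a[0] = 42;                   /* writes straight through to the caller */
    }

    int main(void)
    {
        int buf[8] = {0};
        callee(buf);                 /* the array decays to &buf[0]; no copy */
        printf("%d\n", buf[0]);      /* prints 42: the original got mangled */
        return 0;
    }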
> Your proposal replaces my use of an array with two things, a pointer (as before) and a length. This is not too helpful, because I already could have done that if I'd wanted to.
C doesn't have a reasonable way of doing that. I know my proposal works, because we've been using it in D for 20 years.
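Spelled out in today's C, the idea is nothing more than this (hypothetical names; D's T[] parameters carry the same two fields implicitly):

    #include <stddef.h>

    struct int_slice {               /* what D writes as int[] */
        int    *ptr;
        size_t  len;
    };

    int sum(struct int_slice s)
    {
        int total = 0;
        for (size_t i = 0; i < s.len; i++)
            total += s.ptr[i];
        return total;
    }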
Your proposal does not work, at least when making the declarations binary compatible with older code.
Note that C is a pass-by-value language, so passing an array means that the called function can modify the content without the modifications being seen in the caller.
To sort of pass arrays in an ABI-compatible way, the version for older code would require putting the array inside a struct.
Even that doesn't fully work with any ABI that I've ever heard of. The struct doesn't really get passed. Disassemble the code if you have doubts. The caller allocates space for the struct, copies the struct there, and then passes a pointer to the struct. From the high-level view of the language this is passing the struct by value, but the low-level details are not what older code expects.
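For reference, the struct trick looks like this. It gives the callee a private copy at the language level, but as described above, the ABI typically builds a hidden copy and hands over its address rather than anything array-like (sketch only):

    struct big { int data[1 << 20]; };   /* a few megabytes of ints */

    void scribble(struct big b)          /* b is the callee's private copy */
    {
        b.data[0] = 1;                   /* caller's array is untouched */
    }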
C has strayed very far from the original intent because compiler authors prioritized benchmark results at the expense of real-world use cases. This bad trend needs to be reversed.
Consider signed integer overflow.
The intent wasn't that the compiler could generate nonsense code if the programmer overflowed an integer. The intent was that the programmer could determine what would happen by reading the hardware manual. You'd wrap around if the hardware naturally did so. On some other hardware you might get saturation or an exception.
In other words, all modern computers wrap. That includes x86, ARM, Power, Alpha, Itanium, SPARC, and just about everything else. I don't believe you can even buy non-wrapping hardware that has a C99-or-newer compiler. Since this is likely to remain true, there is no longer any justification for retaining undefined behavior that gets abused to the detriment of C users.
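The canonical example of that abuse (exact behavior depends on compiler and flags, but optimizers are entitled to do this):

    /* The programmer's intent is a wraparound check, but because signed
       overflow is undefined, the optimizer may fold the whole body to
       'return 1;'.  Building with -fwrapv restores the intended
       two's-complement meaning. */
    int will_not_overflow(int x)
    {
        return x + 1 > x;
    }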
There are some add-with-saturation opcodes in SIMD ISAs with 8-bit element sizes; I think that includes x86_64, some recent Nvidia GPUs, and the Raspberry Pi 1's VideoCore IV, whose strange 2D-register-file vector unit was made for implementing stuff like VP8/H.264. They are afaik always opt-in, though.
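Right, and in scalar C you have to spell the saturation out yourself, something like:

    #include <stdint.h>

    /* Explicit, opt-in saturation for 8-bit unsigned values: the scalar
       equivalent of those SIMD add-with-saturation opcodes. */
    static uint8_t add_sat_u8(uint8_t a, uint8_t b)
    {
        unsigned sum = (unsigned)a + b;
        return sum > UINT8_MAX ? UINT8_MAX : (uint8_t)sum;
    }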
Most of the useful optimizations that could be facilitated by treating integer overflow as a license to jump the rails could be facilitated just as well by allowing implementations to behave as though integers may sometimes, non-deterministically, be capable of holding values outside their range. If integer computations are guaranteed never to have side effects beyond yielding "weird" values, programs that exploit that guarantee may be processed into more efficient machine code than those which must avoid integer overflow at all costs.
1. Behave usefully when practical, if given valid data.
2. Do not behave intolerably, even when given maliciously crafted data.
For a program to be considered usable, point #1 may sometimes be negotiable (e.g. when given an input file which, while valid, is too big for the available memory). Point #2, however, should be considered non-negotiable.
If integer calculations that overflow are allowed to behave in loosely-defined fashion, that will often be sufficient to allow programs to meet requirement #2 without the need for any source or machine code to control the effects of overflow. If programmers have to take explicit control over the effects of overflow, however, that will prevent compilers from making use of any of the overflow-related optimizations that would be consistent with loosely-defined behavior.
Under the kind of model I have in mind, a compiler would be allowed to treat temporary integer objects as being capable of holding values outside the range of their types, which would allow a compiler to optimize e.g. x*y/y to x, or x+y>y to x>0, but the effects of overflow would be limited to the computation of potentially weird values. If a program would meet requirements regardless of what values a temporary integer object holds, allowing such objects to acquire such weird values may be more efficient than requiring that programs write code to prevent computation of such values.
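Concretely, these are the kinds of functions that model would still let a compiler reduce to 'return x;' and 'return x > 0;' respectively, since the intermediate is merely allowed to hold an out-of-range value with no further consequences:

    int scaled_then_unscaled(int x, int y) { return x * y / y; }
    int grew(int x, int y)                 { return x + y > y; }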
Integer overflows that yield "weird values" in one place can easily lead to disastrous bugs in another place. So the safest thing in general would be to abort on integer overflow. But I'm sure there are applications where that, too, is intolerable. Kinda hard to meet constraint #2 then.
Having a program behave in an unreliable and uselessly unpredictable fashion can only be tolerable in cases where nothing the program would be capable of doing would be intolerable. Such situations exist, but they are rare.
Otherwise, the question of what behaviors would be tolerable or intolerable is something programmers should know, but implementations cannot know. If implementations offer loose behavioral guarantees, programmers can determine whether they meet requirements. If an implementation offers no guarantees whatsoever, however, that is not possible.
If the only consequence of overflow is that temporary values may hold weird results, and if certain operations upon a "weird" result (e.g. assignment to anything other than an automatic object whose address is never taken) will coerce it into a possibly-partially-unspecified number within the type's range, then a program can ensure that its behavior will be acceptable regardless of what weird values result from computation.
According to the published Rationale, the authors of C89 would have expected that something like:
unsigned mul(unsigned short x, unsigned short y)
{ return (x*y); }
would on most implementations yield an arithmetically-correct result even for values of (x*y) between INT_MAX+1U and UINT_MAX. Indeed, I rather doubt they could have imagined any compiler for a modern system doing anything other than yielding an arithmetically-correct result or, maybe, raising a signal or terminating the program. In some cases, however, that exact function will disrupt the behavior of its caller in nonsensical fashion. Do you think such behavior is consistent with the C89 Committee's intention as expressed in the Rationale?
> Do you think such behavior is consistent with the C89 Committee's intention as expressed in the Rationale?
No, but in general I'm ok with integer overflows causing disruptions (and I'm happy that compilers provide an alternative, in the form of fwrapv, for those who don't care).
I do think that the integer promotions are a mistake. I would also welcome a standard, concise, built-in way to perform saturating or overflow-checked arithmetic that both detects overflows as well as allows you to ignore them and assume an implementation-defined result.
As it is, preventing overflows the correct way is needlessly verbose and annoying, and leads to duplication of APIs (like reallocarray).
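gcc and clang do at least expose overflow-checked arithmetic as (non-standard) builtins, which is what reallocarray exists to hide; e.g.:

    #include <stddef.h>
    #include <stdlib.h>

    void *alloc_elems(size_t nmemb, size_t size)
    {
        size_t total;
        if (__builtin_mul_overflow(nmemb, size, &total))
            return NULL;             /* nmemb * size would overflow */
        return malloc(total);
    }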
I wouldn't mind traps on overflow, though I think overflow reporting with somewhat loose semantics that would allow an implementation to produce arithmetically correct results when convenient, and give a compiler flexibility as to when overflow is reported, could offer much better performance than tight overflow traps. On the other hand, the above function will cause gcc to silently behave in bogus fashion even if the result of the multiplication is never used in any observable fashion.
It lets you check that a+b > a for unknown unsigned b or signed b known > 0, to make sure addition didn’t overflow. I’m rather certain all modern C compilers will optimize that check out.
This is about gain-of-function research that the USA stopped in 2014 for safety reasons.
Gain-of-function is adding features to viruses. A goal of her research was to add functionality to coronaviruses that could infect humans. With funding stopped in the USA, she moved to Wuhan to continue her work.
The timeline runs for years, right up to December 2019. It includes a SARS coronavirus escape in 2017 that was caused when lab workers failed to properly inactivate the virus before taking it out of the BSL-4 high-containment facility.
Any stay-at-home order which stopped religious services would be found in violation of the US Constitution's first amendment. It's possible a judge would throw out the whole stay-at-home order. To have a stay-at-home order that survives in court, exempting religious services is necessary.
I am just not certain that this is true. The 1A also protects protests: am I allowed to organize gatherings of 10+ people for the purpose of a protest? I believe not. The 1A is comprehensive; it is not all-powerful.
Were you required to remove stars already on the resume? If not, then I know just what to do.
Where on my resume should I put the two stars? Are those the traditional hand-written style, with the lines crossing to make a pentagon in the middle? What sort of pen or pencil should I use?
We wrote the stars ourselves, normal 5-point star style. If you wrote the stars yourself, recruiters would probably cross them out and write in the stars that correctly reflect your diversity status.