Hacker News | new | past | comments | ask | show | jobs | submit | mlochbaum's comments | login

It was the subject of quite some debate, see "Panel: Is J a Dialect of APL?" at http://www.jsoftware.com/papers/Vector_8_2_BarmanCamacho.pdf . Ken and Roger backed off this stance after witnessing the controversy.

"Ken Iverson - The dictionary of J contains an introductory comment that J is a dialect of APL, so in a sense the whole debate is Ken's fault! He is flattered to think that he has actually created a new language."


Dunno why electroly is dragging me into this, but I believe you've misread the article. When it says "His languages take significantly after APL" it means the languages themselves, not their implementations.


The article: "Let's make sense of the C code by the APL guy"

Do you think the article meant to say it was more likely that the code wasn't inspired by APL?


I think the article expresses no position. Most source code for array languages is not, in fact, inspired by APL. I encourage you to check a few random entries at [0]; Kap and April are some particularly wordy implementations, and even A+ mostly consists of code by programmers other than Whitney, with a variety of styles.

I do agree that Whitney was inspired to some extent by APL conventions (not exclusively; he was quite a Lisp fan and that's the source of his indentation style when he writes multi-line functions, e.g. in [1]). The original comment was not just a summary of this claim but more like an elaboration, and began with the much stronger statement "The way to understand Arthur Whitney's C code is to first learn APL", which I moderately disagree with.

[0] https://aplwiki.com/wiki/List_of_open-source_array_languages

[1] https://code.jsoftware.com/wiki/Essays/Incunabulum


I unfortunately glossed over the part of the original comment that gives it substance: "The most obvious of the typographic stylings--the lack of spaces, single-character names, and functions on a single line--are how he writes APL too."

That's backing for a claim.

Also, I haven't once written APL. I think this might've been borderline trolling, just because of how little investment I have in the topic in reality. Sorry.


It looks like a weirdo C convention to APLers too, though. Whitney writes K that way, but single-line functions in particular aren't used a lot in production APL, and weren't even possible before dfns were introduced (the classic "tradfn" always starts with a header line). All the stuff like macros with implicit variable names, type punning, and ternary operators just doesn't exist in APL. And what APL's actually about, arithmetic and other primitives that act on whole immutable arrays, is not part of the style at all!


"the typographic stylings ... are how he writes" is what I said, isn't it? :) Well said.


It's just. So gross. Say it. Sudden interruption of slime coming up your throat. Like walking out the door into a spiderweb. Alphabetically I was mistaken but in every way that matters I was right.


Hmm. I guess if it was BQM, it would be pronounced “bequem”, which means "comfortable" in German.

And a comfortable APL is clearly an oxymoron.


Ordinarily I'd make fun of the Germans for giving such an ugly name to a nice concept, but I've always found "comfortable" to be rather unpleasant too (the root "comfort" is fine).


Well, do you know how it works? Don't judge a book by its cover and all. Although none of these are entirely aiming for elegance. The first is code golf and the other two have some performance hacks that I doubt are even good any more, but replacing ∧≢⥊ with ∧⌜ in the last gets you something decent (personally I'm more in the "utilitarian code is never art" camp, but I'd have no reason to direct that at any specific language).

The double-struck characters have disappeared from the second and third lines, creating a fun puzzle. The original post https://www.ashermancinelli.com/csblog/2022-5-2-BQN-reflecti... has the answers.


The point that the article is addressing (but you have to ignore the image and study the equations to see this!) is that this sort of shifting can't equalize everything. In the span of 3 white keys C to E at the front, you have 2 black keys at the back, so if you take r to be the ratio of back-width to white-key front-width, then you have 3 = 5r. But in the 4 keys F to B, you've got 3 black keys, so 4 = 7r. No single ratio works! So the article investigates various compromises. The B/12 solution is what seems to me the most straightforward: divide the white keys in each of the sections C to E and F to B equally at the back, and don't expect anyone to notice the difference.


I don't see the problem... Use one unit of width per semitone. Then raise the black keys up a bit. Then for the white keys, elongate them and append some extra stuff on the sides of their fronts so the white keys' fronts all have the same width as well. They are two separate "problems", not interdependent.


> Then for the white keys, [...] append some extra stuff on the sides of their fronts so the white keys' fronts all have the same width as well.

There's no way you can achieve that.


The relevant operations for matrix multiply are leading-axis extension, shown near the end of [0], and Insert +˝ shown in [1]. Both for floats; the leading-axis operation is × but it's the same speed as + with floating-point SIMD. We don't handle these all that well, with needless copying in × and a lot of per-row overhead in +˝, but of course it's way better than scalar evaluation.

[0] https://mlochbaum.github.io/bencharray/pages/arith.html

[1] https://mlochbaum.github.io/bencharray/pages/fold.html


And the reason +˝ is fairly fast for long rows, despite that page claiming no optimizations, is that ˝ is defined to split its argument into cells, e.g. rows of a matrix, and apply + with those as arguments. So + is able to apply its ordinary vectorization, while it can't in some other situations where it's applied element-wise. This still doesn't make great use of cache and I do have some special code working for floats that does much better with a tiling pattern, but I wanted to improve +˝ for integers along with it and haven't finished those (widening on overflow is complicated).


To be clear, you are referring to the preface to "An Introduction to Array Programming in Klong", right? Having just checked it, I find this to be a very strange angle of attack, because that section is almost exclusively about why the syntax in particular is important. Obviously you disagree (I also think the syntax is overblown, and wish more writing focused on APL's semantic advantages over other array-oriented languages). I think this is a simple difference in taste and there's no need to reach so far for another explanation.


Search for "teaching" at https://aplwiki.com/wiki/APL_conference. I count at least five papers about teaching non-APL topics using APL. The language is not only possible to read, it's designed for it.


"already": APL dates back to about 1966, and even K from 1993 predates Numpy and Julia. But yes, we do not live in caves and are familiar with these languages. Klong has even been implemented in Numpy, see https://github.com/briangu/klongpy.


I understand that these languages are older; I meant that in the sense that they are nonetheless still trying to recruit new users, but these potential users may already be using something that does something similar.


Oddly enough, the biggest mistake in how I presented BQN early on was thinking only APL insiders would be interested, when in fact the APLers went back to APL, and people who hadn't tried other array languages or hadn't gotten far with them were most successful with BQN. Plenty of people coming to BQN have worked with Numpy or whatever, but I don't think this has the same deterrent effect; they see BQN as different enough to be worth learning. Julia in particular is very different: I don't find that it culturally emphasizes array programming at all.

