Lisp, as McCarthy proposed it in 1958, actually had an alternate syntax called M-expressions that would be perfectly readable to any JavaScript programmer: (foo a b) became foo[a,b].
It never took off. From the earliest days -- before video terminals were a thing, let alone video-terminal editors that could match parens and indent automatically -- people just preferred working with sexprs. I think it has to do with the fact that while mexprs are easier to read with prior ALGOL experience, sexprs are easier to transform, and to reason about their transformation.
I think Lisp is one of those things where, you either get it or you don't, and once you do get it parens are a non-issue. There's a fundamental leap of insight that requires enormous activation energy to achieve.
Never took off? In addition to the other replies, look at Mathematica/Wolfram Language where they are _the_ syntax:
f[1, 2, 3]
And we can index it
Part[f[1, 2, 3], 1] == 1
Now what would the 0th element be?
Part[f[1, 2, 3], 0] == f
It is the “head”, which corresponds to the first element (the car) of a cons list.
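To make the head-at-index-0 idea concrete, here is a minimal sketch in Python (not Wolfram Language; the `expr` and `part` names are made up for illustration): an expression is just a head followed by its arguments, and asking for part 0 gives back the head, just like car on a cons list.

```python
def expr(head, *args):
    """Represent f[a, b, ...] as a plain tuple (head, a, b, ...)."""
    return (head, *args)

def part(e, i):
    """Like Part[e, i]: index 0 is the head, 1..n are the arguments."""
    return e[i]

f = expr("f", 1, 2, 3)    # corresponds to f[1, 2, 3]
assert part(f, 1) == 1    # Part[f[1, 2, 3], 1] == 1
assert part(f, 0) == "f"  # Part[f[1, 2, 3], 0] == f
```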
Also, as an aside, there is no reason why a “high-level” syntax and an abstract syntax can’t coexist. Mathematica has a syntax that can get quite cryptic (almost APL-like at times), but it is possible to “freeze” expressions and print out the full m-expression form, and thus inspect the structure of a function or similar.
Yeah it's possible, but it leads to somewhat awkward languages where you have sometimes multiple levels of desugaring going on and it's annoying. The last time I looked into Julia (IIRC >6 years ago) it had something similar going on and it really scared me off. The problem with having a "technically homoiconic" language is that you're going to end up with not only lots of desugaring, but multiple syntaxes for the same things.
The idea of m-expressions might be coming back, though: two of the popular newcomers are homoiconic languages (depending on your definition), Elixir as a functional programming language and Julia as an imperative, CLOS-inspired language.
Being less powerful than s-expressions (unable to create arbitrary syntax, due to more complex and rigid parsing rules, and less trivial to manipulate as code-as-data) is considered a feature, as it helps prevent the proliferation of language extensions while still allowing them when their benefits outweigh the costs. Even the main Elixir metaprogramming book starts by saying: "the first rule of macros: don't use macros", and in Julia you have to attach a @ to a macro call to make it clear that the normal rules of the language do not apply there. Those languages also focus on providing everything the user needs without macros, which are reserved for more advanced purposes.
While people who use Lisp eventually see the advantages of s-expressions (or at least of writing more idiomatic code in those languages), being attractive to new users coming from the more popular languages is very important (and just as Clojure interoperates with Java, it might be even easier to make a Lisp with perfect Elixir interoperability, for example, so both groups can coexist).
Re: "trivial to manipulate code as data as s-expression is considered a feature"
Although Julia goes further in other directions, just changing the surface syntax of s-expressions, like this proposal, doesn't really interfere with this feature. With alternate syntax, you can still write macros and stick to the code-as-data idea in pretty much the same way. In Lisps with a programmable reader (like Common Lisp and some Schemes), you can set up the reader to recognize something like foo[bar,baz] as an sexp, and once read, it'll be represented internally exactly as if (foo bar baz) had been read with the default reader. Then any macros and even code-walking code continue to work the same as before.
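The point above — that a different surface syntax can read into exactly the same structure, so macros and code walkers never notice — can be sketched roughly like this. This is Python standing in for a real Lisp reader (the `read_sexp`/`read_mexp` names and the nested-list representation are illustrative assumptions, not any actual reader implementation):

```python
import re

def read_sexp(src):
    """Read classic (foo bar baz) syntax into nested Python lists."""
    tokens = re.findall(r"[()]|[^\s()]+", src)
    def parse(i):
        if tokens[i] == "(":
            out, i = [], i + 1
            while tokens[i] != ")":
                node, i = parse(i)
                out.append(node)
            return out, i + 1
        return tokens[i], i + 1
    return parse(0)[0]

def read_mexp(src):
    """Read foo[bar,baz] syntax into the SAME nested-list representation."""
    tokens = re.findall(r"[\[\],]|[^\s\[\],]+", src)
    def parse(i):
        head = tokens[i]
        if i + 1 < len(tokens) and tokens[i + 1] == "[":
            out, i = [head], i + 2
            while tokens[i] != "]":
                if tokens[i] == ",":
                    i += 1
                    continue
                node, i = parse(i)
                out.append(node)
            return out, i + 1
        return head, i + 1
    return parse(0)[0]

# Both surface syntaxes yield the identical in-memory structure,
# so any macro or code-walking code downstream works unchanged.
assert read_sexp("(foo bar baz)") == read_mexp("foo[bar,baz]")
assert read_sexp("(Part (f 1 2 3) 0)") == read_mexp("Part[f[1,2,3],0]")
```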
I meant that macros being a little harder to use, and the result being a little more restricted, is the feature: a small protection against the Lisp Curse (if every custom language is just as good as the base language, then no one can agree what the base language should be). Kind of like how newer languages avoid inheritance: while it's powerful, it's easy to misuse (though Elixir's special 'using' macro is a fantastically simple way to share behavior between modules).
And while programmable reader macros allow Lisp to have any syntax you want, they go even further in producing heterogeneous environments where not even the parsing rules are shared. You can't get by knowing only the Algol/infix way of programming in Lisp, since you'll still have to read other people's code and documentation; and when you use these alternate parsers/extensions, other people have to understand the quirks of your unique parser/extension before understanding your code. Everyone using the exact same language is definitely a feature, and those languages (and Clojure, which dropped programmable reader macros and asks people to use macros more judiciously) have a point in that belief.
But as a curiosity, Julia also has its own version of reader macros [1], which can be used to give it s-expression syntax [2].
You're probably right, I didn't really go deep into the macros, mostly just injecting code with 'quote' to remove occasional boilerplate. The point was more about the ability to easily manipulate code within the code to create DSLs (like Ecto for SQL) in a "lispy" way, which Elixir is notably good at. In that sense we can add D, Rust and Nim as well, all somewhat recent languages that also provide that feature.
Yeah I had just read some stuff about Elixir not being homoiconic and wanted to just point that out (in essence, I was being pedantic), it doesn't really ding your point.
> "There's a fundamental leap of insight that requires enormous activation energy to achieve."
I don't think it requires energy per se, because s-expressions aren't actually difficult at all. The biggest hurdle seems to be willingness to learn something new.
That's a very good point. From my anecdotal experience, I actually vastly prefer Lisp-like syntax to basically any other syntax because it's so regular and clearly delimited, so I can grab whole expressions and move them around and transform them really easily. It also makes it easier to navigate code, and in my opinion easier to read, since I don't have to remember "syntax", just the behavior of functions and macros.
Weirdly enough, even if a language isn't "lispy" I would still consider s-expressions a point in its favor.
Frankly, I'll take the extra visual structure of square brackets and commas for lists over the overloaded parens and whitespace of Lispy sexprs any day.