On the “types first across languages” point, I’ve been hacking on something in that vein called Kumi (typed calculation schemas compiled to Ruby/JS). Tiny demo here: https://kumi-play-web.fly.dev
I’m still thinking about a blog post on the whole journey, but I’ve never written one of those. In the meantime, here’s a write-up I did in a Reddit post: https://www.reddit.com/r/Compilers/s/osaVyUgvUf
I’d be curious to hear your opinion on an OSS library I’m working on: a declarative dataflow DSL that statically checks and compiles/optimizes pure functions (no runtime; I’m working on a C target, but Ruby and JS are already there).
A lot of the inspiration came from my time automating Excel work as a financial analyst.
I’m building a typed, array-oriented dataflow compiler that takes small declarative schemas and emits plain Ruby and JavaScript, with a C path. It has a mid-end with inlining, common subexpression elimination, constant folding, dead code elimination, loop fusion, and LICM.
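To give a rough feel for what those mid-end passes buy you, here is a before/after in plain Ruby. This is only an illustration of the idea, not the compiler's actual emitted code, and the numbers and rates are made up:

  salaries = [52_000.0, 67_500.0]

  # Lowered naively: three array passes, and `s * 1.10` is computed twice.
  gross = salaries.map { |s| s * 1.10 }
  tax   = salaries.map { |s| s * 1.10 * 0.275 }
  net   = gross.zip(tax).map { |g, t| g - t }

  # After inlining + loop fusion + CSE (conceptually): one pass over the data,
  # with the shared subexpression bound once.
  net_fused = salaries.map do |s|
    g = s * 1.10
    g - g * 0.275
  end

  net == net_fused  # => true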
Location: Sao Paulo / Brazil
Remote: Only
Willing to relocate: No
Technologies: Ruby / Rails / JS / Linux / DevOps / IAM
Résumé/CV: https://drive.google.com/file/d/1eFJt5gpK7C-nNJWar75jmskSTIrh0rCv
Email: andremuta+hn@gmail.com
I'm André. I have around 8 years of experience as a SWE, mostly building and improving Rails applications in a secure, scalable way in the IAM domain (e.g. an identity provider that now handles 50M users), including the DevOps/infra/architecture work.
Here is my GitHub as well; you'll see some recent contributions to CRuby itself and my OSS library, Kumi.
https://github.com/amuta
If you're working on a challenging problem with a lot of unknowns, or need a very skilled debugger, I'd be very interested to hear about it. Please send me an email and let's have a conversation.
This looks like something that could work nicely with my calculation DSL (https://github.com/amuta/kumi). This is one of the scenarios I had in mind: auditable, exportable, reusable tax-calculation schemas.
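To make that concrete, here is a rough sketch of the kind of schema I mean. The field names, bracket threshold, and rates are invented, and I'm hand-waving some DSL details:

  schema do
    input do
      float :gross_income
      float :deductions
    end

    # Placeholder bracket test and rates, not real tax rules.
    trait :upper_bracket, input.gross_income > 50_000.0

    value :taxable, input.gross_income - input.deductions

    value :tax do
      on upper_bracket, taxable * 0.275
      base taxable * 0.15
    end
  end

Because a schema like this is declarative and has no runtime dependency, the same definition can be exported, versioned, and audited alongside the rates it encodes.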
Location: Sao Paulo / Brazil
Remote: Only
Willing to relocate: No
Technologies: Ruby / Rails (PostgreSQL, Redis, GitHub...), Linux / DevOps / IAM
Résumé/CV: https://drive.google.com/file/d/1eFJt5gpK7C-nNJWar75jmskSTIrh0rCv
Email: andremuta+hn@gmail.com
I'm André. I have around 8 years of experience as a SWE, mostly building and improving Rails applications in a secure, scalable way in the IAM domain (e.g. an identity provider that now handles 50M users), including the related DevOps/infra/architecture work. I'm someone who likes to tackle challenging problems.
One problem of that kind is what I'm currently building: a vector (domain-specific) language implemented in Ruby that lowers and then codegens to Ruby and JS: https://github.com/amuta/kumi
I'm very confident in my skills as a developer, but I find that this confidence is inversely correlated with how I do in short, performative technical interviews. If you evaluate me with scoped work or a debugging session with real logs and tracing, we'll both get a truer signal.
If you're working on a challenging problem with a lot of unknowns that requires deep technical understanding, I'd be very interested to hear about it. Please send me an email and let's have a conversation.
I'm still not sure exactly how to define it, but it's a Ruby library that's a mix of rules engine + spreadsheet feel + array language + static validation + compilation/codegen... that last part is mostly not merged yet, but yeah, a Ruby DSL codegenerating Ruby; it's Ruby all the way down.
https://github.com/amuta/kumi/tree/codegen-v5
(See ./golden for more context on the compilation/codegen. I barely knew what a compiler was before doing this, so I might have just created some nonsense.)
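To give an idea of what I mean by "Ruby codegenerating Ruby", the flavor of the output is something like a plain module of pure methods. This is illustrative only, with made-up names and rates, not the actual emitted code:

  module GeneratedSchema
    module_function

    # Each declared value becomes a pure method; dependency order is resolved
    # at compile time, so there is no interpreter or runtime left.
    def taxable(gross_income:, deductions:)
      gross_income - deductions
    end

    def tax(gross_income:, deductions:)
      t = taxable(gross_income: gross_income, deductions: deductions)
      t > 50_000.0 ? t * 0.275 : t * 0.15
    end
  end

  GeneratedSchema.tax(gross_income: 80_000.0, deductions: 10_000.0)  # 27.5% of 70_000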
There seems to be a gap between the theory you're advocating (which I actually agree with) and the practical execution in your own public projects, which you talk about heavily.
I haven't been able to get any of your recently published projects (Scribe, volknut, a few others) running successfully on Linux, whether by following your documentation, using the tagged releases, or reverse engineering the actual functionality and CLI arguments from the source code you provide. I feel I've wasted my time.
It's difficult to believe you when your own documentation is entirely wrong or 404'd.
I'd genuinely love to see your projects in action, since the concepts sound promising, but you need to back them up by shipping properly.
There's a reason I haven't put up release pages yet. These projects are under heavy development, and I push to GitHub mostly to checkpoint rather than to release. I'm sorry if you feel I've misled you; I've tried to be clear that what I'm sharing now is to show what I'm doing and give visibility, and that it's not ready yet. I'm committed to delivering great software, and when I tell you it's ready, you can rest assured it will work.
Understand that I'm one man working on 20 projects simultaneously, with 5+ under active development at any one moment, so release stabilization and cadence will take a little bit to lock in.
A question: would you interpret this as rank polymorphism?
schema do
  input do
    array :regions do
      float :tax_rate
      array :offices do
        float :col_adjustment
        array :employees do
          float :salary
          float :rating
        end
      end
    end
  end

  trait :high_performer, input.regions.offices.employees.rating > 4.5

  value :bonus do
    on high_performer, input.regions.offices.employees.salary * 0.25
    base input.regions.offices.employees.salary * 0.10
  end

  value :final_pay,
    (input.regions.offices.employees.salary + bonus * input.regions.offices.col_adjustment) *
    (1 - input.regions.tax_rate)
end
result = schema.from(nested_data)[:final_pay]
# => [[[91_000, 63_700], [58_500]], [[71_225]]]
I think I'm misunderstanding. Rank is explicit throughout this example, and I'm not familiar with this syntax (Ruby, maybe?), but whatever the case, I don't see the rank polymorphism.
If I'm reading the syntax correctly, this would translate in kdb/q to a raze (flatten) across the 3 dimensions (regions, offices, employees). It would probably be more natural to express as a converge, but in either case the calculations here are not possible in a rank-polymorphic way.
The broadcasting handles the rank differences automatically. When bonus (at employee level) is multiplied by col_adjustment (at office level), each employee's bonus gets their office's adjustment applied, with no flattening or manual reshaping. The structure [[[91_000, 63_700], [58_500]], [[71_225]]] is preserved.
This is from a Ruby DSL I'm working on (Kumi). The broadcasting semantics are probably quite different from traditional rank operators in q/APL?
Edit: I realized I left out the input structure:
Region 0: Office 0 has 2 employees, Office 1 has 1 employee
Region 1: Office 0 has 1 employee
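To spell out what the broadcasting does, here is the same computation written as plain nested maps over that shape. This is just the expansion the compiler conceptually performs, not its actual output, and the numbers are made up (they are not the inputs behind the [[[91_000, 63_700], [58_500]], [[71_225]]] result quoted above):

  regions = [
    { tax_rate: 0.20,
      offices: [
        { col_adjustment: 1.2,
          employees: [{ salary: 60_000.0, rating: 4.8 },
                      { salary: 50_000.0, rating: 4.0 }] },
        { col_adjustment: 1.0,
          employees: [{ salary: 55_000.0, rating: 4.6 }] }
      ] },
    { tax_rate: 0.25,
      offices: [
        { col_adjustment: 1.1,
          employees: [{ salary: 48_000.0, rating: 4.2 }] }
      ] }
  ]

  final_pay = regions.map do |region|            # region-level: tax_rate
    region[:offices].map do |office|             # office-level: col_adjustment
      office[:employees].map do |emp|            # employee-level: salary, rating
        bonus = emp[:salary] * (emp[:rating] > 4.5 ? 0.25 : 0.10)
        (emp[:salary] + bonus * office[:col_adjustment]) * (1 - region[:tax_rate])
      end
    end
  end
  # The result keeps the input nesting: [[[_, _], [_]], [[_]]]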
I didn't downvote, but I was utterly puzzled by your example. After your response below, it occurs to me that you are confusing employee rank (a business concept) with array rank (a mathematical concept). Either that, or it's a very strange explanation of rank polymorphism.