mmoskal's comments | Hacker News

Grammars work best when aligned with the prompt. That is, if your prompt gives you the right format of answer 80% of the time, the grammar will take you to 100%. If it gives you the right answer 1% of the time, the grammar will give you syntactically correct garbage.


OpenAI is using [0] LLGuidance [1]. You need to set strict: true in your request for schema validation to kick in, though.

[0] https://platform.openai.com/docs/guides/function-calling#lar... [1] https://github.com/guidance-ai/llguidance
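
For reference, this is roughly what that looks like in a Chat Completions tool definition (a sketch only; the get_weather function and its schema are made up for illustration):

  tools = [{
      "type": "function",
      "function": {
          "name": "get_weather",          # hypothetical example function
          "description": "Get the weather for a city",
          "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"],
              "additionalProperties": False,
          },
          "strict": True,                 # without this, the schema is only advisory
      },
  }]

  response = openai_client.chat.completions.create(
      model="gpt-4o",                     # assuming a model with structured-output support
      messages=[{"role": "user", "content": "What's the weather in Paris?"}],
      tools=tools,
  )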


I don't think that parameter is an option when using pydantic schemas.

  class FooBar(BaseModel):
      foo: list[str]
      bar: list[int]

  prompt = """# Task
  Your job is to reply with FooBar, a JSON object with foo, a list of strings,
  and bar, a list of ints.
  """

  response = openai_client.chat.completions.parse(
      model="gpt-5-nano-2025-08-07",
      messages=[{"role": "system", "content": prompt}],
      max_completion_tokens=4096,
      seed=123,
      response_format=FooBar,
      strict=True,
  )

  TypeError: Completions.parse() got an unexpected keyword argument 'strict'


I had a good experience with carefully spaced holes in the PCB and a 50 mil header; see https://jacdac.github.io/jacdac-docs/ddk/firmware/jac-connec...



The previous article is in the same issue, in the science and technology section. This is how they typically do it: the leader article has a longer version in the body of the paper. Leaders tend to be more opinionated.


Consciousness (subjective experience) is possibly orthogonal to intelligence (the ability to achieve complex goals). We definitely have a better handle on what intelligence is than on what consciousness is.


That does make sense; it reminds me of Blindsight, where one central idea is that conscious experience might not even be necessary for intelligence (and might even be maladaptive).


Counting to 2^61 probably is.

To actually find a collision in a 128-bit cryptographic hash function would take closer to 2^65 hashes. Back-of-the-envelope calculations suggest that with Pollard's rho it would cost a few million dollars of CPU time at Hetzner's super-low prices. Not nearly a mere mortal's budget, but not that far off, I guess.
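
Rough arithmetic behind that figure (every rate and price below is an assumption of mine for illustration, not a measured number):

  # Back-of-the-envelope cost of ~2^65 hash evaluations on rented CPUs.
  hashes_needed = 2 ** 65            # Pollard's rho on a 128-bit hash
  hashes_per_core_sec = 2e7          # assume ~20 MH/s of SHA-256 per core
  core_hours = hashes_needed / hashes_per_core_sec / 3600
  eur_per_core_hour = 0.004          # assume a cheap dedicated-server rate
  print(f"{core_hours:.1e} core-hours, roughly {core_hours * eur_per_core_hour / 1e6:.1f}M EUR")
  # -> ~5.1e8 core-hours, roughly 2.0M EUR on these assumptions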


A GUID is not a cryptographic hash function.

In any case, in 2023 I did a back-of-the-envelope estimate that you could compute 2^64 SHA-256 hashes for ~$100K using rented GPU capacity: https://www.da.vidbuchanan.co.uk/blog/colliding-secure-hashe...


That's a great analysis. As you call out in the post, the 2^64 figure is for attacking SHA256-128 (SHA-256 truncated to 128 bits). NIST recommends at least SHA-224, which makes sense given your conclusions.


Airplanes are dirty, unsafe and unclean?


The robotaxis will have someone cleaning them between uses? Maybe they can sit in the front seat!


By letting people report messes in the taxi that arrives, and pass on the vehicle if it's soiled, you can quickly identify and evict the messy people from the system.


Airplanes usually have crews cleaning them after every flight, plus a crew keeping order during the flight. Have you ever seen an airplane that just landed after a 12-hour flight? What a mess, huh?


I think this is like `unsafe`: most of your code won't have it, so you get the benefits of the borrow checker (memory safety and race freedom) everywhere else.


An important saving grace that `unsafe` has is that it's local and clearly demarcated. If a core data structure of your program can be compared to `unsafe` and has to be manually managed for correctness, it's very valid to ask whether the hoops Rust makes you jump through are actually gaining you anything.


This seems way too readable! I think you should remove the character literals in the name of purity.

Also, this is likely way more compact than Brainfuck, as the lambda calculus is written essentially as usual.

And seriously, very cool!


Thanks! I'm torn on having the character literals, actually; they're definitely syntactic sugar, but I was struggling to write programs that printed anything without them getting super unwieldy! If someone smarter than me can write a compact-enough-looking Hello World program, then consider them gone ;-)


Yeah, there's something wrong with the idea of Brainfuck having character literals. The de Bruijn indexing is definitely on point, but the lack of continuations feels wrong to me given the stated goal.

Also, shouldn't the indices be expressed as a repeated character? Like "---" for index 3. Integer literals are decidedly non-Brainfuck as well.


getchar does take a continuation of sorts (as in continuation passing), which is passed the input. In one of my initial drafts, getchar was a special form that would accept input at the point of evaluation, which was really funny and unpredictable.

putchar I feel less sure about: it acting as an identity function with a side effect is kind of weird, and I'm not sure whether changing it to take a second argument as a continuation would make it better or worse.

Regarding the de Bruijn indices, I don't think there's a huge distinction between writing 3 vs writing ---: it would still form a single lexical token, so I feel like --- is just more noise.

Perhaps a de Bruijn index register you could move around and dereference? e.g. from index 1, index 3 is >>*, then index 2 from there is <*. But that feels less functional, because you're now imperatively manipulating some hidden state.


Entirely agreed that it's nothing but more noise, but isn't that exactly how BF is? Why ----- instead of 5-? Well, because BF, of course. The point of the exercise (IMO) is having the bare minimum of parsed characters to achieve the Turing tarpit.

I quite like the movable register idea, but as you say, that's no longer "BF except lambda calculus"; it's some other esolang at that point.

I think my objection about the lack of continuations was misplaced, given that this appears to be a BF take on the lambda calculus rather than a BF take on Scheme.


You can always write it in continuation-passing style if you really want continuations! It's not pleasant but none of this is supposed to be ;-)

Agreed on having too many characters, though; I don't like that having numerical indices makes the syntax whitespace-sensitive, too.

And once I figure out how to write hello world, those character literals are gone!


Maybe my brain just isn't functioning right now, but I don't think writing in CPS is the same as having access to first-class continuations? But as previously noted, I think that was a misplaced request on my part to begin with.


It should be! e.g. if every function takes a continuation as its final argument, then:

  call/cc& = \f. \k. f k k
Then in f you can invoke the continuation k as many times as you want, but that does involve a whole-program transformation to CPS.


My line of thought had been that doing so doesn't restore execution context. But it dawns on me that without the ability to mutate variables that doesn't have the same relevance.

Still, doesn't it throw the de Bruijn indices off? Or am I wrong about that as well?

Lambda calculus makes my head hurt.


So here's my question: is the interpreter more or less compact than a Brainfuck interpreter? Which would have a lower Kolmogorov complexity, or could they be equivalent?


I have been geeking out recently on Blaise Agüera y Arcas's talks about his BFF interpreter (most recently at the Santa Fe Institute) and how it can produce information stability out of noise using simple Brainfuck primitives. I am looking forward to his two books on the subject, and I recommend checking out his talks if you are in the small category of people interested in BF interpreters.

https://www.youtube.com/live/75PAyV83YqE?si=tQNO3IFS-y7cQeR2


https://github.com/verus-lang/verus is a similar tool for Rust (developed by previous heavy users of Dafny).

