_flux's comments | Hacker News

Seems like Mermaid parsing and layout would be a useful crate by itself. I would enjoy a fast Mermaid layout command-line tool with SVG/PDF/PNG support, which I think would be quite feasible to implement with such a crate.

This is exactly the plan for v0.3.0! Extracting the ~7000 line Mermaid renderer into a standalone crate with SVG/PNG output and CLI support. Pure Rust, WASM-compatible. Stay tuned!

That's great! I'm pretty interested in that. I hooked up `mark` [1] at work to upload md files to our internal Confluence and would love to integrate a native tool to convert Mermaid diagrams to a PNG rather than using mark's built-in system, which calls out to mermaid.js and thus needs us to vendor Chromium, which I'd rather avoid!

[1] https://github.com/kovetskiy/mark


Or call the tool "Read" and it works, according to an issue comment.

But actually the solution is checking out how the official client does it and then doing the same steps, though if people start doing this then Anthropic will probably start making it more difficult to monitor and reverse engineer.

It might not matter, as some people have a lot of expertise in this, but people might still get the message and move away to alternatives.


The endgame is a small background agent that runs Claude Code every once in a while, inspects its traffic, and adjusts on the fly.

Then they'd start pinning certs and hiding keys inside the obfuscated binary to make traffic inspection harder?

And if an open source tool started to use those keys, their CI could just detect this automatically and change the keys and the obfuscation method. Probably quite doable with LLMs.


Without breaking legitimate clients?

At some point it becomes easier to just reevaluate the business model. Or just make a superior product.


Aren't Anthropic in control of all the legitimate clients? They can download a new version, possibly automatically.

I believe the key issue here is that the product they're selling is an all-you-can-eat API buffet for $200/month. The way they manage this is that they also sell the client for it, so they can more easily predict how many tokens it is actually going to consume (i.e. they can just put their new version of Claude Code into CI with some example scenarios and see it doesn't blow out their computing quota). If some third-party client is also using the same subscription, it makes it much more difficult to make the deal affordable for them.

As I understand it, using the per-token API works just fine, and I assume the reason people don't want to use it is that it ends up costing more.


> In order to have a chat with an LLM, every time the whole conversation history gets reprocessed - it is not just the last answer / question gets send to the LLM but all preceding back and forth.

Btw, context caching can overcome this, e.g. https://ai.google.dev/gemini-api/docs/caching . However, this means the (large) state needs to persist on the server side, so it may have costs associated with it.


Aren't PINs usually short, and might they even really be made out of just digits in the first place? So would there be real security benefits in adding that to the key?

You can make PINs as complex as you want; there's only a maximum length limitation of 20 characters. There's no difference between passwords and PINs in Windows, except that Windows calls it a PIN if it's used with the TPM. And yes, it does nudge you in the direction of making it simple because "TPM guarantees security", but you don't have to.

The financial aspect of the project is the service they sell; the core is open: https://github.com/typst/typst

What the core lacks is the web service that offers e.g. collaborative editing.


I personally don't enjoy the MyObject? typing, because it leads to edge cases where you'd like to have MyObject??, but it's indistinguishable from MyObject?.

E.g. if you have a list finding function that returns X?, then if you give it a list of MyObject?, you don't know if you found a null element or if you found nothing.
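For contrast, a minimal Rust sketch (my illustration, not part of the original comment) of how a proper option type keeps the two outcomes apart, because the find on a list of nullable elements nests the optionality instead of collapsing it:

  // A list of nullable elements, modeled as Vec<Option<i32>>.
  let items: Vec<Option<i32>> = vec![Some(1), None, Some(3)];

  // Iterator::find returns Option<&Option<i32>>: the outer layer says
  // "was anything found?", the inner one "was the found element null?".
  let hit = items.iter().find(|x| x.is_none());
  assert_eq!(hit, Some(&None)); // found a null element

  let miss = items.iter().find(|_| false);
  assert_eq!(miss, None);       // found nothing at all

With MyObject? semantics, both cases collapse to the same null.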

It's still obviously way better than having all object types include the null value.


When you want to distinguish `MyObj??`, you'll have to distinguish the optionality of one piece of code (wherever your `MyObj?` in the list came from) from some other (the list find) before "mixing" them, e.g. by first mapping `MyObj?` to `MyObj | NotFoundInMyMap` (or similar polymorphic variant/anonymous sum types) and then putting it in a list. This could be easily optimized away or be a safe no-op cast.

Common sum types let you get around this, because they always do this "mapping" intrinsically through their structure/constructors when you use `Either/Maybe/Option` instead of `|`. However, they still don't always let you distinguish after "mixing" various optionalities: if find for Maps, Lists, etc. all returns `Option<MyObj>` and you have a bunch of them, you also don't know which of those it came from. This is often what one wants, but if you don't, you will still have to map to another sum type like above.

In addition, when you don't care about null/not-found, you have the dual problem and need to flatten nested sum types, as the List find would return `Option<Option<MyObj>>`: `flatten`/`flat_map`/similar need to be used regularly, and they aren't necessary with anonymous sum types that do this implicitly.
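A small Rust sketch of that last point (again my illustration): when you don't care which layer was "missing", the nesting has to be collapsed explicitly:

  let items: Vec<Option<i32>> = vec![None, Some(2)];

  // find yields Option<&Option<i32>>; copied() turns it into
  // Option<Option<i32>>, and flatten() merges the two layers.
  let first: Option<i32> = items.iter().find(|x| x.is_some()).copied().flatten();
  assert_eq!(first, Some(2));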

Both communicate similar but slightly different intent in the types of an API. Anonymous sum types are great for errors for example to avoid global definitions of all error cases, precisely specify which can happen for a function and accumulate multiple cases without wrapping/mapping/reordering. Sadly, most programming languages do not support both.


> E.g. if you have a list finding function that returns X?, then if you give it a list of MyObject?, you don't know if you found a null element or if you found nothing.

This is a problem with the signature of the function in the first place. If it's:

  template <typename T>
  T* FindObject(ListType<T> items, std::function<bool(const T&)> predicate)
Whether T is MyObject or MyObject?, you're still using null pointers as a sentinel value:

  MyObject* Result = FindObject(items, predicate);
The solution is for FindObject to return a result type:

  template <typename T>
  Result<T&> FindObject(ListType<T> items, std::function<bool(const T&)> predicate)
where the _result_ is responsible for wrapping the return value. Making this not copy is a more advanced exercise that is bordering on impossible (safely) in C++, but Rust and newer languages have no excuse for it.
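For what it's worth, a borrowing find is what Rust's standard library already does; a rough sketch of the shape (hypothetical function name, not the C++ API above):

  // Returns a reference into the slice, so nothing is copied, and
  // "not found" is a distinct variant rather than a null pointer.
  fn find_object<'a, T>(items: &'a [T], pred: impl Fn(&T) -> bool) -> Option<&'a T> {
      items.iter().find(|item| pred(item))
  }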

Different language, but I find this Kotlin RFC proposing union types has a nice canonical example (https://youtrack.jetbrains.com/projects/KT/issues/KT-68296/U...)

    inline fun <T> Sequence<T>.last(predicate: (T) -> Boolean): T {
        var last: T? = null
        var found = false
        for (element in this) {
            if (predicate(element)) {
                last = element
                found = true
            }
        }
        if (!found) throw NoSuchElementException("Sequence contains no element matching the predicate.")
        @Suppress("UNCHECKED_CAST")
        return last as T
    }
A proper option type like Swift's or Rust's cleans up this function nicely.
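For comparison, a hedged Rust sketch of the same function (returning None instead of throwing); the Option return type removes both the found flag and the unchecked cast:

  fn last_matching<T>(iter: impl IntoIterator<Item = T>, pred: impl Fn(&T) -> bool) -> Option<T> {
      let mut last = None;
      for element in iter {
          if pred(&element) {
              last = Some(element); // works even when T is itself an option type
          }
      }
      last
  }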

Your example produces very distinguishable results: e.g. if Array.first finds a nil value it returns Optional<Type?>.some(.none), and if it doesn't find any value it returns Optional<Type?>.none.

The two are not equal, and only the second one evaluates to true when compared to a naked nil.


What language is this? I'd expect a language with a ?-type would not use an Optional type at all.

In languages such as OCaml, Haskell and Rust this of course works as you say.


This is Swift, where Type? is syntax sugar for Optional<Type>. Swift's Optional is a standard sum type, with a lot of syntax sugar and compiler niceties to make common cases easier and nicer to work with.

Right, so it's not like a union type Type | Null. Then naturally it works the same way as in the languages I listed.

Well, in a language with nullable reference types, you could use something like

  fn find<T>(self: List<T>) -> (T, bool)
to express what you want.

But exactly like Go's error handling via a (fake) unnamed tuple, it's very much error-prone (and the return value might contain absurd values like `(someInstanceOfT, false)`). So yeah, I also prefer languages w/ ADTs, which solve it via sum types, rather than being stuck with product types forever.


How does this work if it is given an empty list as a parameter?

I guess if one is always able to construct default values of T then this is not a problem.


> I guess if one is always able to construct default values of T then this is not a problem.

This is how Go handles it:

  func do_thing(val string) (string, error)
is expected to return `"", errors.New("invalid state")` which... sucks for performance and for actually coding.
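For concreteness, a Rust rendering of the same (value, found)/default-value pattern (my sketch, hypothetical helper): the encoding forces a T to be conjured on a miss, and nothing stops the nonsensical (someValue, false) combination:

  // (value, found) encoding: T must be constructible even when absent.
  fn find_tuple<T: Default + Clone>(items: &[T], pred: impl Fn(&T) -> bool) -> (T, bool) {
      for item in items {
          if pred(item) {
              return (item.clone(), true);
          }
      }
      (T::default(), false) // placeholder value escapes to the caller
  }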

I like Go's approach of having default values, which for a struct is nil. I don't think I've ever cared about the difference between a null result and no result, as they're semantically the same thing (what I'm looking for doesn't exist).

In Go, the default (zero) value for a struct is an empty struct.

Eh, it’s not uncommon to need this distinction. The Go convention is to return (res *MyStruct, ok bool).

An Option type is a cleaner representation.


Funny how all the links, including the ones to their own pages, are routed through google.com/url, e.g. the link "Assets Available to Download". Usually tracking isn't quite this visible.

It's because their blog is hosted on blogger.com (yeah, weird decision), which is owned by Google and does that by default.

I also have a blogger.com blog.

Why? Because I've had it for 20+ years, and I still haven't found an easy way to automatically migrate it to WordPress.


You're also presumably not a $400m+ company, which makes it more interesting.

I assure you no amount of capital trivializes the endeavour of migrating to/from WordPress.

GP speaks wisdom.


In my experience, the blog usually falls in some weird space where the marketing team owns it somehow. It’s best to leave them be and let them handle it, because if you suggest an alternative and then something goes wrong or isn’t to their liking you’ll never hear the end of it.

My point was that it's not trivial to migrate away from blogger.

Clearly engineers at Netflix have more important work to do.

It is very odd. I don’t see a good reason, not even tracking.

Aren't those just the URLs in google search results if you copy from the results page instead of clicking through to the destination?

The reason for the intermediary is that the click-through sends the previous URL as a referer to the next server.

The only real way to avoid leaking specific URLs from the source page to an arbitrary other server is to have an intermediary redirect like this.

All the big products put an intermediary in place for that reason, though many of them make it a user-visible page that says "you are leaving our product", whereas Google mostly does it as an immediate redirect.

The copy/paste behavior is mostly an unfortunate side effect and not a deliberate feature of it.


I don't understand. They are redirecting to their own S3 bucket, so who would be the recipient of the leak?

Also, isn't this what Referrer-Policy is for? https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
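For illustration, suppressing the Referer entirely takes a single response header (or rel="noreferrer" on individual links):

  Referrer-Policy: no-referrer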


Quoting web standards, you are more optimistic than I am. Unfortunately, nobody uses them consistently or accurately (look at PUT vs POST for create/update as a really good example of this: nobody agrees). It's a shame too, as there's a lot of richness to the web spec. Most people don't even use "HEAD" to ensure they aren't making wasteful REST calls if they already have the data.

I was replying to

> All the big products put an intermediary for that reason

Surely whoever maintains the big products can add headers if they want?

And this is about people who care enough about not showing up in Referer headers to do something about it, rather than people in general not understanding the full spec.


I worked on these big web products before, and the answer then was that no, you couldn't trust it to be honored, and it would have been considered a privacy incident; better to just have the redirect and no risk. You can't trust the user agents, for example.

Not sure if the reliability of the intentional mechanism has improved enough that this is just legacy, or if there are entirely new reasons for it in 2026.


The other problem is that if you're as big as Google, you cannot assume everyone will honor this, which is why they do these redirects.

Referrer-Policy is a response header, so in this case it would be Google sending it, and the browsers that would be honouring it. You have to hope that the browser makers get it right... Unless I misunderstood?

Blogger predates the existence of this header by many years. Blogger, I believe, has also been in maintenance mode for many years.

It sees periodic major updates to keep it in line with standards. That's not much more than maintenance mode, but it's more than just keeping the servers running. It seems like someone at Google pays attention to it and keeps it from falling behind, but I suspect the same was true of Google Reader until it wasn't.

>someone at Google pays attention to it and keeps it from falling behind

I feel like it's the same for Google My Maps. They even discontinued the Android app, so you can only use it on the web. It totally feels like there's a single guy keeping the whole system up.


Not if you use the ClearURLs addon ;)

And when I click them I get a page with "Did you mean netflix.com? The site you just tried to visit looks fake. Attackers sometimes mimic sites by making small, hard-to-see changes to the URL." which then sends me to the Netfçix home page. Chrome on MacOS.

It's because their S3 bucket is called "download.opencontent.netflix.com.s3.amazonaws.com". The subdomain makes Chrome think it's pretending to be "netflix.com".

But they said it sends them to Netfçix? That seems incorrect

...how is that even possible?

The iOS Gmail app does the same thing, but why? I would assume the app could just transparently relay the click through its already-open gRPC channel to Google's servers, and it would be faster for them and (more importantly) for me.

> Most of the issues (like "judder") that people have with 24fps are due to viewing it on 60 fps screens

That can be a factor, but I think this effect can be so jarring that many would realize that there's a technical problem behind it.

For me 24 fps is usually just fine, but if I find myself tracking something with my eyes that wasn't intended to be tracked, it can look jumpy/snappy. Like watching fast-flowing end credits but, instead of following the text, keeping the eyes fixed on some point.

> Films are more like dreams than like real life. That frame rate is essential to them, and its choice, driven by technical constraints of the time when films added sound, was one of happiest accidents in the history of Arts.

I wonder, though: had the industry started with 60 fps, would people now applaud 24/30 fps as a nice dream-like effect everyone should incorporate into movies and series alike?


I think this is a fair take:

> We currently do not support unprivileged use case (same as BPF). Basically, Rex extensions are expected to be loaded by privileged context only.

As I understand it, a privileged context would be one where one is also able to load new kernel modules, which also don't have any limitations, although I suppose the system could be configured otherwise for some reason.

So this is like a more convenient way to inject kernel code at runtime than kernel modules or eBPF modules are, with some associated downsides (such as being less safe than eBPF; the question about non-termination at the end of the thread seems apt). It doesn't seem like they are aiming to actually get this into the mainline kernel, and I doubt it could really happen anyway.


Yeah, I agree with this assessment. It is not an eBPF replacement, for many reasons. But it could be a slightly safer alternative to kernel modules.


> Open source means the source is available. Anything else is just political.

Where was it defined that way? And most of all, given the domain of information technology, who understands "open source" to cover cases where the source is available, e.g., only for reviewing?

The purpose of words and terms is to let people exchange ideas effectively and precisely, without needing to explain the terms from the ground up every time. Different groups having divergent definitions for the same words is counterproductive to that goal. In my view, labeling a release "open source" while placing very big limitations on how the source can be used isn't just marketing, it's miscommunication.

If "open source" and "source available" (and "open weights") mean the same thing, the how come people have come up with the two terms to begin with? The difference is recognized in official contexts as well, i.e. https://web.archive.org/web/20180724032116/https://dodcio.de... (search for "source available"; unfortunately linking directly doesn't seem to work with archive.org pages).

It doesn't seem there is any benefit in using less precise terms when better-defined ones are available.

