Somehow, in the AI world, "local-first" means a local harness talking to a remote model, almost never a local harness talking to a local model. But then "open source model" apparently also means "you can download the weights if you agree to our license" and almost never "you can see, understand, and iterate on what we did", so the definitions have already drifted a lot between the two ecosystems.
I'm not sure I understand the question. Regardless of what provider you choose - be it cloud-based or local - you have to provide setup information such as host, authentication, etc. So it "defaults" to nothing; you have to select something.
Local-first means running Atomic with local models is not an afterthought. It's a first-class citizen that works just as seamlessly as running with a cloud provider - assuming you've done the work to provision the local models and their connections yourself.
I'm not sure what the dunk is supposed to be here... Atomic supports the exact same feature set with local models as it does with OpenRouter. Is your gripe just that OpenRouter is the first option in the dropdown?
Yes. Why even call it local-first when local isn't first? Not to mention, for some reason they decided to support only Ollama instead of giving you the option to connect to any OpenAI-compatible server, which would make this work with any other inference server, such as llama.cpp and vLLM, as well as Ollama (and also with most SaaS inference providers, including OpenRouter, so the custom integration would not be necessary either: https://schizo.cooking/schizo-takes/9.html).
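To illustrate the point, here's a minimal Python sketch using the `openai` client; the base URLs below are the usual defaults for each server and may differ in your setup. One base-URL setting covers all of these backends:

```python
# Minimal sketch: the same OpenAI-compatible client works against any of
# these servers just by swapping the base URL. Ports and paths below are
# the common defaults; adjust for your own deployment.
from openai import OpenAI

backends = {
    "llama.cpp":  "http://localhost:8080/v1",    # llama-server default port
    "vLLM":       "http://localhost:8000/v1",
    "Ollama":     "http://localhost:11434/v1",
    "OpenRouter": "https://openrouter.ai/api/v1",
}

# Local servers typically ignore the API key, but the client requires one.
client = OpenAI(base_url=backends["Ollama"], api_key="unused-for-local")

resp = client.chat.completions.create(
    model="llama3.1",  # whatever model your server actually has loaded
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```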
Did you think local-first meant how a dropdown is sorted?
OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
The online documentation does not suggest that using a generic OpenAI-compatible server is an option, and it once again lists the non-local option first.
> OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
Why is this necessary over just presenting the result of `/v1/models`?
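For illustration, a minimal sketch (assuming a local Ollama-style server on its default port, but any OpenAI-compatible endpoint would do) of populating such a dropdown:

```python
# Sketch: model discovery needs no provider-specific integration; every
# OpenAI-compatible server exposes the same listing endpoint.
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # e.g. a local Ollama instance

with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
    data = json.load(resp)

# Standard response shape: {"object": "list", "data": [{"id": ...}, ...]}
for model in data["data"]:
    print(model["id"])
```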
You can say it's just the ordering of a dropdown, but to me it seems pretty clear that this thing is developed with the idea that you'll most likely use a SaaS provider.
It has supported local LLMs from the beginning; it was not something that was just tacked on. I don't know what else to tell you. Your assumptions are just wrong.
You mean you will read all the code, dependencies included, and compile it yourself to make sure? ;) Good for you, but good luck creating a popular E2E messenger that way.
If you release it as GPL or AGPL, it should be pretty difficult to obey those terms while using the code for AI training. Of course, they'll probably scoop it up anyway, regardless of license.
The legal premise of training LLMs on everything ever written is that it's fair use. If it is fair use (which is currently being disputed in court), then the license you put on your code doesn't matter; it can be used under fair use.
If the courts decide it’s not fair use then OpenAI et al. are going to have some issues.
Quite possibly. If they care a great deal about not contributing to training LLMs then they should still be aware of the fair use issue, because if the courts rule that it is fair use then there’s no putting the genie back in the bottle. Any code that they publish, under any license whatsoever, would then be fair game for training and almost certainly would be used.
TRACKING: Only privacy-respecting essentials:
- Sliplane (European hosting) server logs
- No Google Analytics, no third-party trackers
What made you think there's tracking? I want to fix any privacy concerns immediately. This is a European digital sovereignty project & privacy is the whole point.
Can you share what triggered the concern? (Specific script/banner you saw?) Thanks for your help.
Clicking on "Customize" on the cookie consent banner reveals toggles for the following:
> Analytics Cookies
> Help us understand how visitors interact with our website. We use privacy-first analytics.
Tracking.
> Marketing Cookies
> Used to track visitors and show relevant advertisements.
Ads and more tracking.
> Preference Cookies
> Remember your preferences like language and theme settings.
Do these actually require separate consent, or can they be considered functional?
I would expect that a European digital sovereignty project in which privacy is the whole point would not have a cookie consent banner at all, because it would simply not use any non-functional cookies that would require it. I see the cookie banner as a sort of "mark of shame" that nefarious websites are forced to wear.
Also, I recall hearing that there were plans to make visually emphasizing the "Accept All" button over the other options illegal, because it's a dark pattern that gets people to click the highlighted option more often.
Thank you for your persistence and pushback. Despite good intentions, I fell into the boilerplate trap.
In the meantime I:
- removed the 'mark of shame' :)
- zeroed the cookies; only localStorage remains, for Umami analytics
- added a simple opt-out in the footer, no dark patterns, just 'learn more, opt-out'
- updated the privacy policy to reflect this
Your feedback made this project better. Thank you :)
I understand the skepticism, but let me address this:
"LLM shovelware": The articles are curated from around 30 European news sources (TechCrunch Europe, Sifted, The Verge, etc.). AI is only used for:
1. Translation (EN→NL/DE/FR/ES/IT)
2. Pattern-based image generation
The curation, source selection, and quality filters are all manual.
"Self-promotion": Fair point on the account activity. I created this account specifically to share this project with HN because the community values European tech sovereignty and privacy.
Happy to answer specific questions about the implementation. The goal is NOT traffic farming; it's building a multilingual resource for European digital policy/startups.
Docker is unusable for build tools that use namespaces (of which Docker itself is one): its default seccomp profile blocks the clone/unshare calls needed to create them, unless you use privileged mode and throw away far more security than you'd need to. Docker images are also difficult to reproduce with conventional Docker tools, and using a non-reproducible base image for your build environment seems like a rather bad idea.
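A quick way to see this for yourself: a minimal Python sketch that probes whether unprivileged namespace creation works. Run it on the host and inside a default (non-privileged) container to compare; exact behavior depends on your kernel and seccomp settings.

```python
# Sketch: try to create a new user namespace. Under Docker's default
# seccomp profile this typically fails with EPERM; on most host distros
# (or in a --privileged container) it succeeds.
import ctypes
import os

CLONE_NEWUSER = 0x10000000  # from <linux/sched.h>

libc = ctypes.CDLL(None, use_errno=True)
if libc.unshare(CLONE_NEWUSER) != 0:
    err = ctypes.get_errno()
    print(f"unshare(CLONE_NEWUSER) failed: {os.strerror(err)}")
else:
    print("user namespace created")
```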