Is WebGPU a good standard at this point? I'm learning Vulkan at the moment, and 1.3 is significantly different from the previous versions of the API; apparently WebGPU is closer in behavior to 1.0. I am by no means an authority on the topic, I just see a lack of interest in targeting WebGPU from people in game engines and scientific computing.
For a text editor it's definitely good enough if not extreme overkill.
Other than that, the one big downside of WebGPU is the rigid binding model built around baked BindGroup objects. It is both inflexible and slow when any sort of dynamism is needed, because you end up creating and destroying BindGroup objects in the hot path.
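The usual workaround is to memoize baked bind groups by the resources they reference instead of recreating them per draw. A language-agnostic sketch of that caching pattern in Python (the `create_bind_group` call and names are stand-ins for the real WebGPU API, not actual bindings):

```python
# Toy model of caching "baked" bind groups so the hot path never
# re-creates one for a resource combination it has already seen.

class Device:
    def __init__(self):
        self.created = 0  # count of expensive driver-side bakes

    def create_bind_group(self, resources):
        # Stand-in for the expensive object creation.
        self.created += 1
        return ("bind_group", resources)

class BindGroupCache:
    def __init__(self, device):
        self.device = device
        self.cache = {}

    def get(self, resources):
        # Key on the (hashable) tuple of resource handles.
        key = tuple(resources)
        bg = self.cache.get(key)
        if bg is None:
            bg = self.device.create_bind_group(key)
            self.cache[key] = bg
        return bg

device = Device()
cache = BindGroupCache(device)
for frame in range(1000):
    cache.get(["albedo_tex", "normal_tex"])  # same resources every frame
assert device.created == 1  # baked once, reused 999 times
```

The catch, of course, is that a cache like this only helps when resource combinations repeat; truly dynamic bindings still churn.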
The modern Vulkan binding model is relatively fine. Your entire program has a single descriptor set containing an array of images that you reference by index. Buffers are never bound and instead referenced by device address.
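To make the contrast concrete, here is a toy Python model of that bindless scheme (the names are illustrative, not Vulkan API calls): one global descriptor array of images addressed by integer index, and buffers looked up through a device-address-like integer rather than ever being bound.

```python
# Toy model of the modern Vulkan binding scheme described above:
# one global table of images indexed by plain integers; buffers are
# never bound, just referenced by a 64-bit "device address".

images = []                 # the single global descriptor array
buffers_by_address = {}     # device address -> buffer storage

def register_image(img):
    images.append(img)
    return len(images) - 1  # shader-visible index

def register_buffer(data, address):
    buffers_by_address[address] = data
    return address          # passed to shaders in a push constant

albedo_idx = register_image("albedo pixels")
verts_addr = register_buffer([0.0, 1.0, 2.0], address=0xDEADBEEF000)

# "Shader side": everything is fetched through an index or address,
# so there is no per-draw descriptor set juggling at all.
assert images[albedo_idx] == "albedo pixels"
assert buffers_by_address[verts_addr][2] == 2.0
```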
Apparently "joy to use" is one of the new core goals of Khronos for Vulkan. Whether they succeed remains to be seen, but at least they acknowledge now that a developer hostile API is a serious problem for adoption.
The big advantage of Metal is that you can pick your abstraction level. At the highest level it's convenient like D3D11, at the lowest level it's explicit like D3D12 or Vulkan.
Bevy engine uses wgpu and supports both native and WebGPU browser targets through it.
The WebGPU API gets you to rendering your first triangle quicker and without thinking about vendor-specific APIs and histories of their extensions. It's designed to be fully checkable in browsers, so if you mess up you generally get errors caught before they crash your GPU drivers :)
The downside is that it's the lowest common denominator, so it always lags behind what you can do directly in DX or VK. It was late to get subgroups, and now it's late to get bindless resources. When you target desktops, wgpu can cheat and expose more features that haven't landed in browsers yet, but of course that takes you back to the vendor API fragmentation.
It's a good standard if you want a sort of lowest-common-denominator that is still about a decade newer than GLES 3 / WebGL 2.
The scientific folks don't have all that much reason to upgrade from OpenGL (it still works, after all), and the games folks are often targeting even newer DX/Vulkan/Metal features that aren't supported by WebGPU yet (for example, hardware-accelerated raytracing)
Having no CSD at all is unacceptable on small screens IMHO; far too much real estate is taken up by a title bar. You can be competitive with SSD by making the bars really thin, but then they are harder to click on and impossible with touch input. At the moment I have Firefox set up with CSD and vertical tabs, and only 7% of my vertical real estate is taken up by bars (including GNOME's), which is pretty good for something that supports this many niceties.
I use a lot of obscure libraries for scientific computing and engineering. If I install one from pacman or manage to get an AUR build working, my life is pretty good. If I have to use a Python library, the faff becomes unbearable: make a venv, delete the venv, change Python version, use conda, use uv, try to install it globally, change the Python path, `source .venv/bin/activate`. This is less true for other languages with local package management, but none of them are as frictionless as C (or Zig, which I use mostly). The other issue is that .venvs, node_modules and their equivalents take up huge amounts of disk and make it a pain to move folders around, and no, I will not be using a git repo for every throwaway test.
uv has mostly solved the Python issue. IME its dependency resolution is fast and just works. Packages are hard-linked from a global cache, which also greatly reduces storage requirements when you work with multiple projects.
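The storage win from hard links is easy to demonstrate with the stdlib: both directory entries point at the same inode, so the file's bytes exist on disk only once no matter how many projects link it.

```python
# Demonstrate why hard-linking packages out of a global cache saves
# disk: the linked file shares storage with the cached original.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    cached = os.path.join(d, "cache_copy.whl")
    linked = os.path.join(d, "venv_copy.whl")
    with open(cached, "wb") as f:
        f.write(b"x" * 1024)

    os.link(cached, linked)  # hard link, not a copy

    a, b = os.stat(cached), os.stat(linked)
    assert a.st_ino == b.st_ino   # same inode: one copy on disk
    assert a.st_nlink == 2        # two directory entries reference it
```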
uv is great for resolution, but it doesn't really address the build complexity of heavy native dependencies. If you are doing any serious work with torch or local LLMs, you still run into issues where wheels aren't available for your specific CUDA/arch combination. That is usually where I lose time, not waiting for the resolver.
It sounds like your understanding of modern package management is at least ten years out of date, and Python has (until recently) been among the worst, yes, so it definitely wouldn't have been a model to follow.
- AI "collaboration"
- pure maths in a cosmology paper
- Zenodo
- small number of citations from a wide range of dates
- cosmology
One of my favourite YouTube videos is Angela Collier's one on cranks. She makes the point that a motivated independent researcher can do science if they choose less ambitious problems, but these people always choose the deepest and most fundamental problems in maths and physics.
Ouch, really? That's basically just work that's not in the current hot topics. Not really a datapoint in favour of 'crank', rather a point against 'active academic/student'. I think it's admirable to look for value in older work.
One suggestion: there is the main paper, as well as supplemental supporting papers on Zenodo. Just download them. Read them. Or, if you don't have time, feed all of them to a reasoning AI and ask for analysis. Ask if it breaks GR. Ask if it is coherent.
Hint. It is. And it is falsifiable...not with stuff that maybe exists either...data that exists now or will in the very near future.
>If you called Netanyahu a monkey because of his Gaza genocide, most people who are pro-palestine will try to cancel you! Not because they think highly of him, but because it hurts the cause more than it helps.
Your reading of the current political climate is very different to mine.
I don't know about that. In my view, you can call him a murderer, genocidal, a sociopath, anything related to his actions. But calling him an epithet, comparing him to an animal, is a different thing; even physical violence is more tolerable. Of course people can say whatever they want in private, I'm talking about public discourse. Terms like "monkey" and "dog" have been used across cultures to mean really nasty things. It's dehumanizing (literally!), and it says as much about the speaker as it does about the subject.
When humans say "an animal" in English, they're referring to non-human animals. Being called an animal in itself isn't insulting either, before you go there; hardly anyone would be insulted at being called a lion. I think everyone who can read understands exactly the implication being drawn and the dehumanizing being done. Everyone from slave traders to colonialists to Nazis has used "monkey" to dehumanize people. Same with "dog", "snake", etc. in different contexts.
Indeed! Why not just call it "asynchronous software development" or something similar? "asynchronous programming" is a bad choice, partly because it will be un-googleable.
I've been trying out SQLite for a side project of mine, a virtual whiteboard. I haven't quite got my head around it, but it seems to be much less of a bother than interacting with file system APIs so far. The problem I haven't really solved is how sync and maybe collaboration are going to interact with it; so far I have:
1. Plaintext format (JSON or similar) or SQLite dump files versioned by git
2. Some sort of modern local first CRDT thing (Turso, libsql, Electric SQL)
3. Server/Client architecture that can also be run locally
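For what it's worth, the stdlib `sqlite3` module is enough to prototype the local side of any of these options. A minimal sketch of a whiteboard shape store (the schema, column names, and last-writer-wins timestamp are all invented for illustration, not a recommendation):

```python
# Minimal local store for whiteboard shapes using the stdlib sqlite3
# module. Schema and column names are invented for illustration.
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for persistence
conn.execute("""
    CREATE TABLE shapes (
        id INTEGER PRIMARY KEY,
        kind TEXT NOT NULL,          -- 'rect', 'stroke', ...
        props TEXT NOT NULL,         -- JSON blob of geometry/style
        updated_at REAL NOT NULL     -- for last-writer-wins sync
    )
""")

def upsert_shape(shape_id, kind, props, ts):
    # UPSERT that only applies a change if it is newer than what we
    # have, which is the crudest possible sync conflict rule.
    conn.execute(
        "INSERT INTO shapes (id, kind, props, updated_at) VALUES (?,?,?,?) "
        "ON CONFLICT(id) DO UPDATE SET kind=excluded.kind, "
        "props=excluded.props, updated_at=excluded.updated_at "
        "WHERE excluded.updated_at > shapes.updated_at",
        (shape_id, kind, json.dumps(props), ts),
    )
    conn.commit()

upsert_shape(1, "rect", {"x": 0, "y": 0, "w": 10, "h": 5}, ts=1.0)
upsert_shape(1, "rect", {"x": 3, "y": 4, "w": 10, "h": 5}, ts=2.0)
upsert_shape(1, "rect", {"x": 9, "y": 9, "w": 1, "h": 1}, ts=1.5)  # stale, ignored

row = conn.execute("SELECT props FROM shapes WHERE id = 1").fetchone()
assert json.loads(row[0])["x"] == 3  # the ts=2.0 write won
```

Timestamp-based last-writer-wins loses concurrent edits, which is exactly the gap the CRDT options in the list try to close.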
SQLite has a built-in session extension that can be used to record and replay groups of changes, with all the necessary conflict handling. I don't necessarily recommend session as your solution, but it is at least a good idea to see how it compares to the others.
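Conceptually, session records row-level changes into a changeset that can later be applied to another copy of the database. Here is a toy Python model of that record-then-replay idea; this is not the real session API (which is C-level and operates on SQLite tables), just the shape of the mechanism:

```python
# Toy model of what the session extension does conceptually: record a
# group of row-level changes against one store, then replay the
# resulting changeset against a replica. Dicts stand in for tables.

def record(db, ops, changeset):
    # Apply ops locally while logging (kind, key, old, new) entries.
    for kind, key, value in ops:
        if kind == "put":
            changeset.append(("put", key, db.get(key), value))
            db[key] = value
        elif kind == "delete":
            changeset.append(("delete", key, db.pop(key), None))

def replay(db, changeset):
    # Re-apply the logged changes to another copy of the data.
    for kind, key, old, new in changeset:
        if kind == "put":
            db[key] = new
        elif kind == "delete":
            db.pop(key, None)

primary, changes = {"a": 1}, []
record(primary, [("put", "b", 2), ("put", "a", 10), ("delete", "a", None)], changes)

replica = {"a": 1}
replay(replica, changes)
assert replica == primary == {"b": 2}
```

The real extension also carries the old values (as this toy log does) so it can detect conflicts when the target has diverged, which is the part that makes it interesting for sync.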
That provides a C-level API. If you know Python and want to do some prototyping and exploration, you may find my SQLite wrapper useful, as it supports the session extension; it ships with an example that gives a feel for what it is like to use.