It's not something we can pinpoint in any experiment; it's not even clear how to design one in theory. Yet we know from our very personal experience that it exists. Sounds pretty supernatural to me.
Hm, how does one not arrive at that conclusion? Almost everyone would agree we have the concept of "selfness", yet I don't think there's a scientific theory to explain how a set of physiochemical processes can produce that end result in an observer, any more than a computer has the idea of "me".
I’m not so sure. They operate that way because of scale and economy (and the tech that enables them). In a future where all industries are optimized in such a way, very little will actually flow, because most people won’t have the money to buy goods; thus factories won’t make goods, shippers won’t ship, and the global economy grinds to a halt.
We need waste as much as we need investment. The trick is to find the value in between. I think the sweet spot will be augmenting work, not necessarily optimizing it.
That doesn't seem to make sense. As things get cheaper and wages go down too because there's an oversupply of labor, those poorer people can still afford those cheaper things.
We're talking about factories using low/no labor to produce goods, right? Those goods will be cheaper because they cost less (in man-hours) to make. That's obviously already true for all the mass-produced stuff we have that's cheaper (measured by hours of work needed to pay for it) than 500-year-old artisanal furniture, cookware, clothes, etc., which were very labour-intensive.
Housing is weird because it just sucks up whatever leftover money people have. We all have to eventually spend all our income on something so it's impossible for everything to get cheaper in the long term. That doesn't mean we won't be able to afford stuff, just that we'll spend all our money just like we always have done.
Food would be made cheaper by labour-free production, just like manufactured goods.
I adopted a “props down, events up” interface for all my components (using Svelte right now, but it should work regardless; I'm importing the approach from a datastar experiment).
I describe, often in md, the visual intent, the affordances I want to provide the users, the props and events I want it to take/emit, and the general look (although the general style/look/vibe I keep in md files in the project docs).
Then I take a black-box approach as much as possible. Often I rewrite whole components, either with another pass of AI or manually. In the meantime I have a workable placeholder faster than I could manage anything frontend myself.
I mostly handle the data transitions in the page components, which have a fat model. Kinda Elm-like, except only complete, save-worthy changes get handled by the page.
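For anyone unfamiliar with “props down, events up”: a minimal framework-agnostic sketch in TypeScript. All names here (`ItemPickerProps`, `itemPicker`) are mine for illustration, not from any real codebase; the point is just that the child only reads props and reports intent via callbacks, while the parent owns the state.

```typescript
// "Props down, events up": the child never mutates parent state directly.
// It receives data via props and reports intent via event callbacks.
// All names are illustrative, not from a real codebase.

interface ItemPickerProps {
  items: readonly string[];           // data flows down
  selected: string | null;
  onSelect: (item: string) => void;   // events flow up
  onClear: () => void;
}

// A stand-in for a component render function: it only reads props.
function itemPicker(props: ItemPickerProps): string[] {
  return props.items.map(item =>
    item === props.selected ? `[${item}]` : item
  );
}

// Parent side: owns state, passes it down, handles events.
let selected: string | null = null;
const props: ItemPickerProps = {
  items: ["a", "b", "c"],
  selected,
  onSelect: item => { selected = item; },
  onClear: () => { selected = null; },
};

props.onSelect("b");                  // child "emits" a select event
const rendered = itemPicker({ ...props, selected });
console.log(rendered);                // → ["a", "[b]", "c"]
```

In Svelte the callbacks would be dispatched events instead of plain functions, but the data-flow discipline is the same, which is what makes components easy to treat as black boxes and rewrite wholesale.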
Vibe code to production perhaps not, but vibe code for regular personal use doesn’t seem out of the realm of possibility already.
Unless there is inherent complexity in the problem (and assuming subscriptions don’t get pricey soon) I can see nontechnical people getting into designing their own apps.
It makes me think of 3D printing. A lot of people got into 3D modeling because of it. And a lot of people publish 3D models of cute baubles (analogous to vibe-coded AI wrappers?), but there is genuinely useful stuff that people outside the fabrication or 3D design industries create and share, some even making money off of it.
I just can’t think of a way saas margins will stay as high as they are now.
3D printing is something I think about. LLMs do their best work with text, and 3D printers consume G-code. I’ve had Sonnet spit out perfectly good single-layer test prints. Obviously it won’t have the context window to hold much more G-code, BUT…
If there was a text based file format for models, it could generate those and you could hand that to the slicer. Like I’ve never looked, but are stl files text or binary? Or those 3mf files?
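(For what it's worth, STL comes in both flavors: an ASCII variant that's plain text built from `solid`/`facet`/`vertex` keywords, and a binary variant with an 80-byte header, a little-endian uint32 triangle count, and 50 bytes per triangle. 3MF is a zip archive containing XML.) A rough sketch of telling them apart, as a heuristic rather than a full parser:

```typescript
import { readFileSync } from "fs"; // for reading real files; demo uses in-memory buffers

// Heuristic: binary STL is exactly 84 + 50 * triangleCount bytes;
// ASCII STL starts with "solid" and contains "facet" keywords.
// Note some binary exporters also start the header with "solid",
// which is why the size check runs first.
function stlKind(buf: Buffer): "ascii" | "binary" | "unknown" {
  if (buf.length >= 84) {
    const count = buf.readUInt32LE(80);
    if (buf.length === 84 + 50 * count) return "binary";
  }
  const head = buf.subarray(0, 512).toString("latin1");
  if (head.trimStart().startsWith("solid") && head.includes("facet")) return "ascii";
  return "unknown";
}

// A tiny ASCII STL with a single triangle, for demonstration.
const ascii = Buffer.from(
`solid demo
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid demo
`);

// A minimal binary STL: 80-byte header + uint32 count of zero triangles.
const binary = Buffer.concat([Buffer.alloc(80), Buffer.from([0, 0, 0, 0])]);

console.log(stlKind(ascii));   // "ascii"
console.log(stlKind(binary));  // "binary"
```

Either way, the ASCII variant is a poor LLM target: it's a flat triangle soup with no structure to reason over, which is why something recipe-like (OpenSCAD-style) is a better fit for generation.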
If Gemini can generate a good looking pelican on a bicycle SVG, it can probably help design some fairly useful functional parts given a good design language it was trained on.
And honestly if the slicer itself could be driven via CLI, you could in theory do the entire workflow right to the printer.
It makes me wonder if we are going to really see a push to text-based file formats. Markdown is the lingua franca of output for LLMs. Same with json, csv, etc. Things that are easy to “git diff” are also easy for LLMs…
There is a text-based file format for models: it's called OpenSCAD. It's also much more information-dense than a mesh format like STL: in OpenSCAD you describe the curve, while a mesh file like STL explicitly states every element of it.
It's just gimped to the point that you can basically only use it for hobbyist projects; anything reasonably professional-looking uses STEP-compatible files, and those are much more complex to emulate and get right. STEP is a bit different: it's more like a mesh in that it contains the final geometry, but as BRep, which is pretty close to machining grade. OpenSCAD is more like what you're asking about, a textual recipe for generating curves that you pass to an engine that turns it into the actual geometry. It's just that OpenSCAD is so wholly insufficient for expressing what professional designs need that it never gets used in the professional world.
That AI can do it better than the priests (better by what measure?) is arguable, but the reason for a priest to write one is reflection, connection.
Have you ever considered that performing something might not only be a means to some output, but that the process itself is the thing?
That may or may not translate to your coding analogy, but as for the point you make about the article, I think you are way off.