empath75's comments | Hacker News

    I like to think
    (it has to be!)
    of a cybernetic ecology
    where we are free of our labors
    and joined back to nature,
    returned to our mammal
    brothers and sisters,
    and all watched over
    by machines of loving grace.

> something like a winding number (which has to be an integer). Electric charge is a kind of "defect" or "kink" in the photonic field, while color charge (quarks) are defects in the strong-force field, etc.

Quarks don't have integer charge.


Redefine the down quark charge as the fundamental unit and you lose nothing.

> you lose nothing

Then electrons have charge -3, which happens to coincide with the proton charge for no good reason.


Right, but then you have the questions of 1) why do leptons have (a multiple of) the same fundamental unit as quarks, and 2) why does that multiple equal the number of quarks in a baryon, so that protons have a charge of exactly the same magnitude as electrons?

I mean, I guess you could say that charge comes from (or is) the coupling of the quark/lepton field to the electromagnetic field, and therefore if it's something that's quantized on the electromagnetic side of that, then quarks and leptons would have the same scale. I'm not sure that's the real answer, much less that it's proven. (But it might be - it's a long time since my physics degree...)


> it's a long time since my physics degree...

Me too; I was just pointing out that a fraction might as well be an integer under some redefinition of the fundamental charge.
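
To make the rescaling concrete, here is a small Python sketch of the arithmetic; nothing here beyond the textbook charges already being discussed in this thread:

    from fractions import Fraction

    # Textbook charges, in units of the proton charge e.
    charges_in_e = {
        "up": Fraction(2, 3),
        "down": Fraction(-1, 3),
        "electron": Fraction(-1),
        "proton": Fraction(1),
    }

    # Rescale to a new fundamental unit e' = e/3: every charge becomes an integer.
    charges_in_e_third = {p: int(q * 3) for p, q in charges_in_e.items()}
    print(charges_in_e_third)  # {'up': 2, 'down': -1, 'electron': -3, 'proton': 3}

    # The proton (uud) comes out at 2 + 2 - 1 = +3, exactly the opposite of the
    # electron's -3, which is the coincidence the thread is asking about.
    assert 2 * charges_in_e_third["up"] + charges_in_e_third["down"] == -charges_in_e_third["electron"]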


It wasn't wrong. It was proved for uniform materials. This paper extends it to non-uniform materials, with an additional condition.

People should know the legal context here, which is that the Fifth Circuit is out of step with every other circuit in the country on this.

Two judges on the 5th overturned what ~150 other judges have found, including some extremely right-wing Trump-appointed ones. Just shambolic.

https://www.lawdork.com/p/fifth-circuit-immigration-detentio...


> To those who are acquainted with the principles of the Jacquard loom, and who are also familiar with analytical formulæ, a general idea of the means by which the Engine executes its operations may be obtained without much difficulty. In the Exhibition of 1862 there were many splendid examples of such looms. It is known as a fact that the Jacquard loom is capable of weaving any design which the imagination of man may conceive. It is also the constant practice for skilled artists to be employed by manufacturers in designing patterns. These patterns are then sent to a peculiar artist, who, by means of a certain machine, punches holes in a set of pasteboard cards in such a manner that when those cards are placed in a Jacquard loom, it will then weave upon its produce the exact pattern designed by the artist. Now the manufacturer may use, for the warp and weft of his work, threads which are all of the same colour; let us suppose them to be unbleached or white threads. In this case the cloth will be woven all of one colour; but there will be a damask pattern upon it such as the artist designed. But the manufacturer might use the same cards, and put into the warp threads of any other colour. Every thread might even be of a different colour, or of a different shade of colour; but in all these cases the form of the pattern will be precisely the same—the colours only will differ. The analogy of the Analytical Engine with this well-known process is nearly perfect. The Analytical Engine consists of two parts:— 1st. The store in which all the variables to be operated upon, as well as all those quantities which have arisen from the result of other operations, are placed. 2nd. The mill into which the quantities about to be operated upon are always brought.

- Charles Babbage, Passages from the Life of a Philosopher.


Thank you for this relevant quote!

ChatGPT 5.2 has gotten noticeably worse for me over the past couple of weeks, to the point that I stopped using it and just ask Claude Code questions instead.

I am not going to argue this on the basis that LLMs suck at fiction, because even if that's true, it's not really relevant. The problem is that what LLMs are good at is producing mediocre fiction tailored to the tastes of the individual reading it. What people will keep reading is fiction that an LLM wrote because they personally asked it to write it.

I don't want to read fiction generated from someone else's ideas. I want to read LLM fiction generated from my weird quirks and personal taste.


I think this is mostly a problem of making skills out of things that don't need to be skills (telling it how to do something it already knows how to do), and of having way too much context, so that the skills effectively disappear. If skills are important, information about using them needs to be a relatively large proportion of the context. The right way to do it is probably to aggressively trim anything that might distract from them.

These are all basically trade-offs between context size/token use and flexibility. If a bash or Python script, an API, or an MCP can do what you want, write the bash or Python script. You can even include it in the skill.
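
As a rough illustration of what "include it in the skill" might look like, here is a minimal Python sketch; the ticket endpoint, URL, and field names are invented for the example, and a real script would point at whatever internal tool you actually have:

    #!/usr/bin/env python3
    # Hypothetical helper script a skill could ship instead of prose instructions.
    # The agent runs one narrow, well-defined command rather than reasoning
    # through an internal API from scratch on every request.
    import argparse
    import json
    import urllib.request

    def fetch_ticket(base_url: str, ticket_id: str) -> dict:
        # Fetch a single ticket and return it as plain JSON the agent can read.
        with urllib.request.urlopen(f"{base_url}/api/tickets/{ticket_id}") as resp:
            return json.load(resp)

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Fetch one ticket as JSON.")
        parser.add_argument("ticket_id")
        parser.add_argument("--base-url", default="http://tickets.internal.example")
        args = parser.parse_args()
        print(json.dumps(fetch_ticket(args.base_url, args.ticket_id), indent=2))

The skill file then only needs a line or two telling the agent when to run the script, which keeps the always-loaded context small.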

My general design principle for agents is that the top-level context (i.e. claude.md, etc.) is primarily "information about information": a list of skills, MCPs, etc., a very general overview, and a limited amount of information that they always need to have with every request. Everything more specific lives in a skill, which is mostly some very light-touch instructions for how to use the various tools we have (scripts, APIs, and MCPs).

I have found that people very often add _way_ too much information to claude.md's and skills. Claude knows a lot of stuff already! Keep your information to things specific to whatever you are working on that it doesn't already know. If your internal processes and house style are super complicated to explain to Claude and it keeps making mistakes, you might want to adapt to Claude instead of the other way around. Claude itself makes this mistake! If you ask it to build a claude.md, it'll often fill it with extraneous stuff that it already knows. You should regularly trim it.


Thanks, super useful!

Experimenting with skills over the last few months has completely changed the way I think about using LLMs. It's not so much that it's a really important technology or super brilliant, but I have gone from thinking of LLMs and agents as a _feature_ of what we are building to thinking of them as a _user_ of what we are building.

I have been trying to build skills to do various things with our internal tools, and more often than not, when it doesn't work, it is as much a problem with _our tools_ as it is with the LLM. You can't do obvious things, the documentation sucks, APIs return opaque error messages. These are problems that humans can work around because of tribal knowledge, but LLMs absolutely cannot, and fixing it for LLMs also improves it for your human users, who have probably been quietly dealing with friction and bullshit without complaining -- or not dealing with it and going elsewhere.

If you are building a product today, the feature you are working on _is not done_ until Claude Code can use it. A skill and an MCP aren't a "nice to have"; they are going to be as important as SEO and accessibility, with extremely similar work needed to enable them.

Your product might as well not exist in a few years if it isn't discoverable by agents and usable by agents.


> If you are building a product today, the feature you are working on _is not done_ until Claude Code can use it. A skill and an MCP aren't a "nice to have"; they are going to be as important as SEO and accessibility, with extremely similar work needed to enable them. Your product might as well not exist in a few years if it isn't discoverable by agents and usable by agents.

This is an interesting take. I admit I've never thought this way.


Yeah, omnipresent LLMs are a kind of forcing function for addressing the typically significant underinvestment in (human-readable) docs. That said, I'm not entirely sold on MCP per se.


Wow, that is almost point for point what I had written down in a bunch of documents I had been spreading around at work this week. Excellent post.
