benlivengood's comments

With an optimal way of determining a fair split of gains, like the Shapley value[0], you can cooperate or defect with a probability that maximizes the other participants' expected value when everyone acts fairly.

The ultimatum game is the simplest example: N dollars of prize to split, N/2 is fair; accept with probability M / (N/2), where M is what's offered to you. The opponent's maximum expected value then comes from offering N/2; offering less (or more) results in expected value to them < N/2.
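As a quick sketch (the function names are mine, not from the comment): with acceptance probability min(1, M/(N/2)), the proposer's expected value does peak exactly at the fair offer.

```python
# Responder accepts an offer M out of prize N with probability M / (N/2),
# capped at 1. The proposer's expected value is then maximized at M = N/2.

def accept_probability(offer, prize):
    """Probability the responder accepts, linear in the offer up to the fair split."""
    fair = prize / 2
    return min(1.0, offer / fair)

def proposer_expected_value(offer, prize):
    """Expected payoff to a proposer who keeps (prize - offer) if accepted."""
    return accept_probability(offer, prize) * (prize - offer)

prize = 100
best = max(range(prize + 1), key=lambda m: proposer_expected_value(m, prize))
print(best, proposer_expected_value(best, prize))  # fair offer 50, EV 50.0
```

Offering 40 gets accepted only 80% of the time (EV 48), and offering 60 is always accepted but leaves only 40, so deviating in either direction loses.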

Trust can be built out of clearly describing how you'll respond in your own best interests in ways that achieve fairness, e.g. assuming the other parties will understand the concept of fairness and also act to maximize their expected value given their knowledge of how you will act.

If you want to solve logically harder problems like the one-shot prisoner's dilemma, there are preliminary theories for how that can be done by proving things about the other participants directly. It won't work for humans, but maybe for artificial agents. https://arxiv.org/pdf/1401.5577

[0] https://en.wikipedia.org/wiki/Shapley_value
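For concreteness, the Shapley value in [0] can be computed by brute force for tiny games. This is a sketch with a made-up 3-player game; the names and payoffs are mine, for illustration only.

```python
# Shapley value by enumerating all join orders: each player's value is the
# average of their marginal contribution over every ordering of the players.
from itertools import permutations

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            totals[p] += v(frozenset(coalition)) - before
    return {p: t / len(perms) for p, t in totals.items()}

# Toy game: any coalition of 2+ players earns 90; singletons earn nothing.
def v(s):
    return 90.0 if len(s) >= 2 else 0.0

print(shapley(["a", "b", "c"], v))  # symmetric players: 30.0 each
```

Since the players are interchangeable here, fairness forces an even three-way split; asymmetric games produce unequal but still "fair" shares in the marginal-contribution sense.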


Thanks. I'll take a look!

At Google I worked with one statistics aggregation binary[0] that was ~25GB stripped. The distributed build system wouldn't even build the debug version because it exceeded the maximum configured size for any object file. I never asked if anyone had tried factoring it into separate pipelines but my intuition is that the extra processing overhead wouldn't have been worth splitting the business logic that way; once the exact set of necessary input logs are in memory you might as well do everything you need to them given the dramatically larger ratio of data size to code size.

[0] https://research.google/pubs/ubiq-a-scalable-and-fault-toler...


In the long run I think it's pretty unhealthy to make one's career a large part of one's identity. What happens during burnout or retirement or being laid off if a huge portion of one's self depends on career work?

Economically, it's been a mistake to let wealth become stratified so unequally; we should have kept, and need to reintroduce, high progressive tax rates on income, and potentially implement wealth taxes, to reduce the necessity of guessing at a high-paying career 5 years in advance. That simply won't be possible to do accurately with coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.

Practically, once automation begins producing more value than 25% or so of human workers, we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or adopt UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI; your landlord is not going to want 2X the number of burgers and couches you get distributed, whereas they'd happily double rent in dollars.

Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence] especially in healthcare/wellness) it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this but they'll basically be optional participation for people who want to do that.


If we had done what you say (distributed wealth more evenly between people/corporations), I don't know if AI would have progressed as it has; companies would have been more selective with their investment money, and previously AI was seen at best as a long-shot bet. Most companies in the "real economy" can't afford to make too many of these kinds of bets in general.

The main reason for the transformer architecture, and many other AI advancements, really was that "big tech" has lots of cash it doesn't know what to do with. The US system also seems to punish dividends tax-wise, so companies are incentivized to act like VCs: buy lots of opportunities hoping one makes it big, even if many end up losing.


Transformers grew out of the value-add side (autotranslation), though, not really the ad-business side, IIRC. Value-add work still gets done in high-progressive-tax societies if it's valuable to a large fraction of people. Research into luxury goods is slowed by progressive tax rates, but the border between consumer and luxury goods rises a bit with redistributed wealth; more people can afford smartphones earlier and almost no one buys superyachts, so reinvestment into general technology research may actually be higher.

And I'm sure none of it was based on any public research from public universities, or private universities that got public grants.

Sure. I just know that in most companies (having seen the numbers on projects in a number of them across industries now), funding projects that give people time to think, ponder, and publish white papers on new techniques is rare and economically hard to justify against other investments.

Put it this way: a project where people have the luxury to scratch their heads for a while, betting on something that may not actually be possible yet, is something most companies can't justify financing. Listening to the story of the transformer's invention, it sounds like one of those projects to me.

They may stand on the shoulders of giants, that is true (at the very least they were trained in those institutions), but putting it together as it was done happened in a commercial setting with shareholder funds.

In addition, given the disruption LLMs have caused Google in general, I would say that, despite Gemini, it may have been better cost/benefit-wise for Google NOT to invent the transformer architecture at all (or yet), or at least not to publish a white paper for the world to see. As a use of shareholders' funds, the activity above probably wasn't a wise one.


I agree with much of what you say.

Career being the core of one's identity is so ingrained in society. Think about how schooling is directed towards producing what 'industry' needs. Education for education's sake isn't a thing. Capitalism sees to this and ensures so many avenues are closed to people.

Perhaps this will change but I fear it will be a painful transition to other modes of thinking and forming society.

Another problem is hoarding. Wealth inequality is one thing, but the unadulterated hoarding by the very wealthy means that wealth is unable to circulate as freely as it ought to. This burdens a society.


> Career being the core of one's identity is so ingrained in society

In AMERICAN society. Over there "what do you do?" is in the first 3 questions people ask each other when they meet.

I've known people for 20 years and I don't have the slightest clue what they do for a living; it's never come up. We talk about other things, and their profession isn't part of their personality.


    Education for education's sake isn't a thing.
It is, but only for select members of society: off the top of my head, those with benefit programs that cover it, like 100% disabled veterans, or the wealthy and their families.

I finally got around to reading Wheel of Time. It didn't quite take the whole year but a few solid months. If I had tried spreading it out over a longer period I wouldn't have been able to remember the overall plot or characters, I think.

What did you think of it? I got to about the third book and completely lost interest

Don't finish; read a good summary if you care enough about any character or plot point.

I enjoyed it overall, but in large part for the detail/immersion, which it sounds like wasn't enough to keep you interested.

All character growth and relationships were very slow burns, and so if you enjoy watching the same characters just dealing with things as they come up then you'd probably enjoy the other books as much as the first three. Satisfying arc resolution happens almost entirely at the end.


Presumably, like Cruise, if the safety rate is appalling then they get their permits revoked, which is 99% the same as jail for a company that only does self-driving cars.

I think I would enjoy building houses, or solar-battery-electrical installations. I like infrastructure (my favorite games include Factorio), and being able to do that in the real world sounds both useful and enjoyable/satisfying.

It would be nice if there were a common crawler offering deltas on top of base checkpoints of the entire crawl; I'm guessing most AI companies would prefer not having to mess with their own scrapers. Google could probably make a mint selling access.


commoncrawl.org

Our public web dataset goes back to 2008, and is widely used by academia and startups.


I always wanted to ask:

- How often is that updated?

- How current is it at any point in time?

- Does it have historical / temporal access i.e. be able to check the history of a page a la The Internet Archive?


- monthly

- it's a historical archive, the concept of "current" is hard to turn into a metric

- not only is our archive historical, it is included in the Internet Archive's Wayback Machine.


If the universe is infinite, that should happen somewhere (not that it's really important) around 10^10^29 meters away. Of course, you don't actually have to die for that copy to exist either, and a copy of the local galaxy etc. is not too much further away (10^10^92 m), so waking up indistinguishably somewhere else after a good night's sleep would happen occasionally too.


Infinite does not mean everything happens.

Waking up with the equivalent memories requires a body with that arrangement of neurons that isn't in ill health. That could easily be like looking for a 3 in an infinite sequence of odd numbers.


Yes, at some point Boltzmann brains become more likely, but don't forget that interventions are not limited to prior history; e.g., someone builds a machine that performs whole-body surgery based on the timing of radioactive decay. That is still more likely to yield a functioning body than a Boltzmann brain. Most likely, you end up with a weirder but more probable intervention (a low expectation of it actually working, but not infinitesimal, from an anthropic perspective).


The issue I was pointing to was that a society with a higher technical standard isn't going to naturally create a brain that remembers a vastly less technologically advanced civilization. It's possible they would create such structures artificially, but now you're dependent upon a civilization with the capacity that also happens to create this specific arrangement.


A major con of bed frames is annoying squeaks. Joints bear a lot of load, and there usually isn't diagonal bracing to speak of, so they get noisy after almost no time at all. Fasteners loosen or wear the frame materials. I have yet to find one that stays quiet for more than a few months or a year without retightening things. I haven't tried a full platform construction with continuous walls, which I expect might work better, but that also sounds annoyingly expensive and heavy.


> months or a year without retightening things

Generally manufacturers just send plain fasteners like bolts. Add your own lock washers, or something like teflon washers, and these problems tend to go away long term.


I have a metal one from Zinus and it's been 100% silent after almost two years.


If your argument is that value produced per CPU will increase so significantly that the value produced by AGI/ASI per unit cost exceeds what humans can produce relative to their upkeep in food and shelter, then yes, that seems to be one of the significant long-term risks if governments don't intervene.

If the argument is that prices will skyrocket simply because of long-term AI demand, I think that ignores the fact that manufacturing vastly more products will stabilize prices up to the point that raw materials start to become significantly more expensive, and doing so is strongly incentivized over the ~10-year timeframe for IC manufacturers.


>the value produced by AGI/ASI per unit cost exceeds what humans can produce for their upkeep in food and shelter

The value of AGI/ASI is not defined only by its practical use; it is also bounded by the purchasing power of potential consumers.

If humans aren’t worth paying, those humans won’t be paying anyone either. No business can function without customers, no matter how good the product.


Precisely the place where government intervention is required to distribute wealth equitably.

