
Is there really no way to do this?


Not officially. You can sideload UTM using AltStore, which requires you to sign the app with your own developer certificate and re-sign it about once a week to keep it running.

The iPads have had the hardware in the M-series chips and the software in the form of Apple's hypervisor framework in iPadOS for a couple of generations now, but Apple hasn't enabled it to be used officially.

I really wish they would just allow this on iPadOS. It would still maintain the sandbox model Apple wants for iOS; it would just give a (contained) outlet for doing things that are difficult in native iPadOS.
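
For a sense of what enabling this would unlock: on macOS, where the same hardware support is exposed through the higher-level Virtualization framework, booting a Linux guest takes only a handful of calls. A rough sketch in Swift (the kernel/initrd paths are placeholders, and this is the macOS API, not anything currently available on iPadOS):

    import Virtualization

    // Minimal Linux guest configuration; the file paths are placeholders.
    let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinuz"))
    bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
    bootLoader.commandLine = "console=hvc0"

    let config = VZVirtualMachineConfiguration()
    config.bootLoader = bootLoader
    config.cpuCount = 2
    config.memorySize = 2 * 1024 * 1024 * 1024  // 2 GiB

    // Wire the guest console to stdin/stdout so boot output is visible.
    let console = VZVirtioConsoleDeviceSerialPortConfiguration()
    console.attachment = VZFileHandleSerialPortAttachment(
        fileHandleForReading: FileHandle.standardInput,
        fileHandleForWriting: FileHandle.standardOutput)
    config.serialPorts = [console]

    try! config.validate()  // fail early if the configuration is incomplete
    let vm = VZVirtualMachine(configuration: config)
    vm.start { result in
        print(result)  // .success once the guest is running
    }
    RunLoop.main.run()  // keep the process alive while the guest runs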


> The iPads have had the hardware in the M-series chips and the software in the form of Apple's hypervisor framework in iPadOS for a couple of generations now, but Apple hasn't enabled it to be used officially.

Unfortunately, they removed the hypervisor framework, along with the kernel support for virtualization, a few months ago.


Within 6 months, iPadOS will have sideloading in the EU.


Source? I thought the DMA only covers iPhones.

In any case, Apple still wants to "review" apps, and we want (arbitrary) user code execution on device. That's something Apple strictly forbids on iOS/iPadOS AFAIK (which is why we can't even have Firefox add-ons). Unless we get at least true sideloading, a DMA extension to the iPad won't help.

The DMA isn't really the right tool to liberate devices, since it's about market competition, not consumer rights. I think it would be better to address this broadly alongside right-to-repair, electronic-waste reduction and consumer rights regarding actual ownership. Unconditionally locked hardware is ridiculous.

I wish they would simply unlock the bootloader so we could have Asahi Linux for iPad. They wouldn't have to do anything else. Asahi is on a trajectory to exceed macOS in performance and dev usability, but I don't think they would lose their existing app store cattle to Linux; rather, they would gain new hardware-only customers.



Thx.

Though, I don't see true sideloading (like on Android) specified. If apps still need Apple's approval, we will get the "freedom" to choose whom to pay, not what to run. I still don't see device liberation within the scope of the DMA.

However, if we're lucky, Apple may decide the app approval process isn't worth it once they're no longer allowed to extort developers, and allow unsupervised sideloading as a consequence.

In any case, requirements and reality may take much longer to align than 6 more months, considering Apple's cringeworthy tantrums so far...


> Though, I don't see true sideloading (like on Android) specified. If apps still need Apple's approval, we will get the "freedom" to choose whom to pay, not what to run. I still don't see device liberation within the scope of the DMA.

But if Apple abuses its position as platform gatekeeper with the app approval process and rules, the EU will probably slap them down. The DMA doesn't care about device liberation, but it does care about fairness, so Apple will probably only be allowed to continue this if they act in very good faith towards 3rd parties, which doesn't seem like an Apple thing to do.



Sidecar sort of does this right now, if you are fine with the iPad functioning as a low-res display that the Mac can send a few windows over to. It doesn't work amazingly well for me, but I have used it a few times.


To be fair, the people quoted in the book don't seem to be having that much fun. But yes, most of modern tech seems to be exploiting the same mechanism, generating addiction.


Absolutely, I was thinking the same. IIRC it also only tells success stories, so people burning themselves out but succeeding in the end.


Predicting whether a text was written by an LLM or not is not trivial. What was the latest number from OpenAI? 30%? As LLMs get better, it seems like we won't be able to distinguish real text from fake text. Your LLM will be able to summarize it, but it will still be 99% spam.


You don't need to predict whether it was written by an LLM; whether it's a human or a machine makes no difference to the validity of a text. You just need to be able to extract the actual information out of it and cross-check it against other sources.

The summary that an LLM can provide is not just of one text, but of all the texts about the topic it has access to. Thus you never need to access the actual texts themselves, just whatever the LLM condenses out of them.


"just" need to "extract the actual information out of it and cross check it against other sources".

How do you determine the trustworthiness of those other sources when an ever increasing portion are also LLM generated?

All the "you just need to" responses are predicted on being able to police the LLM output based upon your own expertise (e.g., much talk about code generation being like working with junior devs, and so being able to replace all your juniors and just have super productive seniors).

Question: how does one become an expert? Yep, it's right there: experts are made through experience.

So if LLMs replace all the low experience roles, how exactly do new experts emerge?


You're trusting the LLM a lot more than you should. It's entirely possible to skew those too. (Even ignoring the philosophical question of what an "unskewed" LLM would even be.) I'm actually impressed by OpenAI's efforts to do so. I also deplore them and think it's an atrocity, but I'm still impressed. The "As an AI language model" bit is just the obvious way they're skewed. I wouldn't trust an LLM any farther than I can throw it to accurately summarize anything important.


>cross-check it against other sources.

The problem comes in when 99.999999% of other sources are also bullshit.


If LLMs start writing a majority of HN comments, we won’t know what is true or not. HN will be noise and worthless then.


For HN and forums in general, I think this will mean disabling APIs and having strict captchas for posting.

Beyond HN, I think this will translate into video content and reviews becoming more trustworthy, even if it's just a person reading an LLM-produced script. You will at least know they cared enough to put a human in the loop. That and reputation. More and more credibility will be assigned based on reputation, number of followers, etc. And that'll hold until each of these systems gets cracked somehow (fake followers, plausible generated videos, etc.).


Banal is banal, whether written by a human or not.

But GPT text is inherently deceptive, even when factually flawless, because we humans never evaluate a message merely on its factuality. We read between the lines. The same way insects are confused and fly in spirals around a light, we will be flying in spirals around GPT text based on our assumptions about its nature, or the nature of the human whom we presume to have written it.


Isn't ChatGPT a 165B parameter model?


No. OpenAI haven't disclosed the parameter counts of GPT-3.5 or GPT-4, which are the models used by ChatGPT. You may be thinking of GPT-3, which is indeed a 175B parameter model.


Ah, interesting. Thought GPT-3.5 had the same structure as GPT-3, for some reason. GPT-4 would obviously be different.


GPT-3.5 is likely a finetuned Curie 13B using output from the full size GPT-3 175B.


This read more like a technical demo than a scientific paper. Wonder why they put it on arXiv.


It's not only about performance, it's also about maintainability. Codebases written by inexperienced programmers are extremely convoluted: nothing is decoupled, and functions are extremely long and do multiple things. That makes the code extremely hard to maintain or add features to.


Anthropomorphization of a text completion engine. Humanity will not be destroyed by a fancy autocomplete bot. This is just alarmist clickbait, moving on.


In what way is this a ChatGPT implementation or equivalent? It seems like a chatbot based on a different backend, so it has absolutely zero link to ChatGPT.


It is a different backend, but it should supposedly be roughly comparable to ChatGPT. Also, it looks like it's both open source and requires a lot less hardware to run and train.


It's not open source until the weights are available. I have the hardware I need to run it, but the required files are not available unless you receive special access.

You can't use what has been released unless you want to spend $500,000 on training.


With only a modicum of trolling here, I wonder what percentage of that training expense was used to identify and avoid "true things that must be muted because they offend someone".


Ignoring the subtext of "true things that must be muted because they offend someone", there's a whole section in the paper on how they didn't filter and the problems that causes. TL;DR:

> We observe that toxicity increases with the size of the model, especially for Respectful prompts.

It does outperform GPT3 slightly in terms of observed bias against protected groups (as in it is slightly less biased) but not substantially so.


It is an analogy.

ChatGPT:GPT3::ChatLLaMa:LLaMa


It uses a different engine, so this is as related to ChatGPT as a Toyota Corolla is related to a BMW car. This is an efficient and open-source chatbot, which is very good news, but the authors just wrote a clickbait title and they know it.


In formal analogies, : is pronounced "is to" and :: is pronounced "as".

The purpose here is to use a known relationship to describe a relationship involving a partial unknown.

ChatGPT is to GPT3 as ChatLLaMa is to LLaMa. It uses the relationship between ChatGPT and GPT3 to extrapolate a relationship between an unknown and LLaMa.

see Analogies.pdf https://resources.finalsite.net/images/v1584287027/brockton/...

Corolla:Toyota::3-Series:BMW. If you had heard of a Corolla, Toyota, and BMW, but not a 3-Series, you now roughly know that a 3-Series is BMW's equivalent of a Corolla.


I think I prefer the other commenter's point, referring to ChatGPT as a known learning paradigm for chatbots. But thanks for the little crash course on analogies ;)


Isn't that consistent with, and the same as, my original comment?

ChatGPT:GPT3::ChatLLaMa:LLaMa::Chatbot-through-RLHF:LLM

aka ChatGPT is a known chatbot implementation, and GPT3 and LLaMa are known LLMs.


Yes and no. You wouldn't say GPT to mean large language models or autoregressive language models. I would've thought the same to be true for ChatGPT instead of chatbots with RL from human feedback (RLHF), though perhaps the field is moving towards adopting ChatGPT as a paradigm name. Note that the title doesn't say a ChatGPT-like model based on LLaMa; it outright says "open-source implementation of ChatGPT".


> You wouldn't say GPT to mean large language models or autoregressive language models.

In the analogy, that’s exactly what you are saying. Identical to Toyota and BMW meaning “the make of the car.”

Maybe reimplementation is a more precise word, a black-box re-engineering/cloning. In this case I inferred it by knowing it was a different LLM underneath, and that this group didn't have access to the ChatGPT source code.


> Toyota Corolla is related to a BMW car.

The analogy is somewhat accurate, but also moot, since within the ML community "ChatGPT" can be used to mean either the product or the method (more specifically called RLHF) somewhat interchangeably. It's more like Google/Googling, where the largest/most popular provider becomes the de facto way to refer to a method. As someone who develops DL models, the title seems quite apt.


Yeah OP missed the target on this one. Super Mario Maker is a great example of insane creativity demonstrated by a community. AI can't do that though.


The point then still stands that if SMM2 itself isn't a hit, MarioGPT can't be either.


AI can't do that on its own yet.

AI can be leveraged by a human designer to do that with some effort. Like, humans may have good taste in level design while AI explores the concrete possibilities.

AI might be able to do this in the future by itself.


AI models might be able to do this in the future by themselves, though with current paradigms AI will generate little more than copies of existing levels, with little creativity. Sure, a composition of existing level pieces could lead to an interesting level design, though it would be more by accident than by design. Models do not maximize player enjoyment; there is no metric for that. Maybe engagement metrics could be used, but I don't think players would stick around long enough playing bad levels to reach a viable model.

New models and paradigms will come up, but until then I'd say anything AI-generated will feel pretty vanilla and somewhat incoherent.


> Models do not maximize player enjoyment; there is no metric for that

There might be! Just have some people play and rate the levels.

I think that doing this would considerably improve the quality of the levels in MarioGPT or other level-generation algorithms.
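
A toy illustration of what that feedback signal could look like (all the names here are made up for the example, not anything from MarioGPT): collect playtester ratings per generated level and keep only the best-rated ones, e.g. as further training examples for the generator.

    // Hypothetical sketch: human ratings as a selection signal for generated levels.
    struct GeneratedLevel {
        let tiles: String          // some encoding of the level layout
        var ratings: [Int] = []    // 1-5 stars collected from playtesters

        var meanRating: Double {
            ratings.isEmpty ? 0 : Double(ratings.reduce(0, +)) / Double(ratings.count)
        }
    }

    // Keep the top-rated levels, e.g. to fine-tune the generator on them.
    func selectBest(_ levels: [GeneratedLevel], keep: Int) -> [GeneratedLevel] {
        Array(levels.sorted { $0.meanRating > $1.meanRating }.prefix(keep))
    }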


My point isn't that the Super Mario Maker players weren't having fun flexing their game design muscles. My point is that a million makers on a million joycons couldn't generate enough commercially viable content for a single game. So what hope does GPT have? Both situations have similar design constraints, which, I'm arguing, leave out the critical design component necessary to make commercially viable platformers.

The reason why commercial viability is of interest is because the article claims this tool will be valuable to game developers and I don't think it will be because it doesn't solve for any problems in the business of making games. Nobody is stuck deciding where the pipes and bricks go.

To end on a positive note, lots of open world games use terrain generators as a first pass. AI might have better luck in that domain.


> My point is that a million makers on a million joycons couldn't generate enough commercially viable content for a single game.

How does this criticism follow after seeing a playlist full of creative uses of the limited systems available?

What do you expect? That these individual makers, using a proprietary tool, would somehow make a commercially viable game out of levels they can't even export and that are entirely based on the closed-source engine powering SMM? That never would have happened because of the nature of the platform, not the content being made.


Nintendo released SMM knowing it wouldn't be a direct threat to the Mario franchise. They were able to guarantee this because they know a Mario game with 80 levels needs 80 things we've never seen before. SMM ships with 200 things we've definitely seen before.

It's possible that some combination of those things is new, and good for a level's worth of content. But there aren't 80 of them. The playlist has stuff like invisible pipes, lag-spike inducers, soft-lock strategies, etc. This style of troll design is popular(?) within the SMM community, but you wouldn't sell a million copies of it in its own game.


> Nintendo released SMM knowing it wouldn't be a direct threat to the Mario franchise.

Sure, that would certainly be a plus, albeit hardly the sole reason you're making it out to be.

> They were able to guarantee this because they know a Mario game with 80 levels needs 80 things we've never seen before. SMM ships with 200 things we've definitely seen before.

That just plain does not follow. You can do millions of things with those 200 things, and combinations of those 200 things have certainly never been seen in a Mario game.

The playlist has 117 videos. This only scratches the surface of what is possible in SMM.

They are insanely challenging and certainly introduce new mechanics, which is your entire premise, that levels need gimmicks. There are levels that literally take hours due to how complex and new they are. They do things that Nintendo never intended you to do and could easily fill multiple games' worth.

Have you actually tried it out? Played some of these creative levels? Because everything you say makes me believe you have never even opened SMM let alone looked at the levels out there. There are literally tens of thousands of videos showing off thousands of new mechanics built on top of those standard widgets, in tens of thousands of hard levels. Your description of what is out there is just, plain and simple, not reality.

Just because it's not physically possible to make a commercially viable game due to the nature of the platform does not mean that there isn't enough content (and then some) out there made by a million joycons. If a person can make it, an AI can eventually figure it out.


I've played it and also watch it often on Twitch. You can go hours without seeing a decent level. What the SMM community thinks is playable content is very far off from what commercially viable content would be. Shell bounces, abusing physics with bouncy blocks, leaps of faith, "be small Mario on purpose": these are all unshippable content, fun only to folks who have accepted the constraints of SMM.

The latest Mario lets you throw your hat to control other characters and there are dozens of mechanics born from that interaction alone, each one unlike the other. SMM has no hope of ever competing with that.


Comparing a 2D platformer to a 3D one is comparing apples and oranges.

They serve entirely different purposes and have different constraints, the biggest one being an entire dimension. Odyssey's mechanic would be extremely dull in a 2D platformer.

Your opinion is also just that, an opinion. Of one person. Just because you don't find it entertaining or think it's worthy of a game doesn't make it fact.

It being relatively popular on Twitch, enough for even you to watch it, is testament in itself that there is clearly interest in the levels being created and in playing them, which in itself proves the viability of the levels. If they were anywhere near as dull, repetitive and lacking due to the constraints of SMM as you claim them to be, people wouldn't be streaming or watching them.


The 2D/3D aspect is so irrelevant. I could have chosen New Super Mario Bros U and talked about riding on the backs of dragons, micro mushrooms, ice flowers. SMM2 could add these mechanics, but by definition they aren't novel, and combining them will not generate novel content likely to be in the next ~~two dimensional~~ Mario game. That game will have a bunch of stuff never seen in any game before.

SMM is not a flop and I never claimed it is. But if the levels are so good, why doesn't anyone sell them and make Mario brother money? Because SMM levels are the 2nd slice of cake.


> The 2D/3D aspect is so irrelevant

I mean your entire premise is that some flashy new mechanic is what's holding SMM back, and you chose a shit example and I called you out on it. That's hardly irrelevant. Inconvenient for you, maybe. Next time, don't choose such a bad example as your only point. You've done it twice already in this thread, first with SMM itself.

> But if the levels are so good, why doesn't anyone sell them and make Mario brother money? Because SMM levels are the 2nd slice of cake.

No, it's literally because there is absolutely no platform for exporting or selling the levels. They are levels based entirely on the engine of SMM. They can only be played within SMM. Even if you were to take the elements and reproduce them in another engine, they wouldn't work the same because they're based on the timings of SMM.

I already said as much. There not being a platform to sell these things does not mean they don't have value. If it were possible, the best levels would most certainly be good enough for a full game.

Just because there's no riding on the back of a dinosaur does not mean that combining mechanics in new ways isn't itself a novel mechanic. You're discarding it because it isn't some flashy first-party thing. People have done crazy things within the confines of SMM, and throwing it all away because Mario isn't throwing his hat onto an enemy or getting a new power-up is absurd.


You will just have to move on and find someone else on the internet to argue with.

I think you are not trying very hard to understand my points, because they aren't that complicated and you continually misconstrue them. You quickly destroy the nuance of the conversation and then argue with your absurd black-and-white interpretations. Any attempt to clarify is met with increasing hostility.


>My point is that a million makers on a million joycons couldn't generate enough commercially viable content for a single game. So what hope does GPT have?

Your criticism is that the AI doesn't create new game functionality, even though it doesn't have access to create new game functionality?

That's an artificially impossible bar you're setting for the AI. Maybe if it did have access to create new functionality it would be able to?


That's exactly my point. The path to new game content can't be reduced to putting the blocks in the right place. You also have to come up with new mechanics out of thin air, consider the educational burden of your mechanics, the emotional tempo, how the level plays for different player types (speedsters, young players), how the mechanics reinforce the theme of the zone you're in, and more.

What I am pushing back against is the idea that, since GPT can assemble blocks, it's somehow approaching game design.


> My point is that a million makers on a million joycons couldn't generate enough commercially viable content for a single game.

The toolset is limited, so you end up with Mario levels of LittleBigPlanet.

If you provide a fuller toolset (like UnrealEd or the ability to mod), then you absolutely get viable content, enough (in the case of CS) for the original publisher of the base game to acquire your commercially viable content.


There are more good (and also terrible) Mario Maker levels than one could play in a year.

