Hacker News | danielvaughn's comments

Yeah I think it all boils down to culture. Tools like RFC (and anything else) can help propel a good culture forward. But you can't fix a broken culture with a tool.

Many years ago I had an idea for a mobile app that would effectively be an interactive tour for street art. Just a map with all known instances of public art, whether murals, quality graffiti, public sculptures, etc.

Still seems like a good idea tbh.


If you haven't tried building a grammar with tree-sitter, I highly recommend you do so. It's incredibly fun once you get into a flow state. The docs call it a zen-like experience, and that's a perfect way to describe it. It's so, so good.

I have a 2 year old daughter and I'm about to have a son in February. Walking away from them is unfathomable. I can't imagine the regret I'd feel at my old age, having lost the few short years where I get to watch my children grow up, just so I can walk to some places.

There's far more depth and mystery to be explored in raising a human than there will ever be as a tourist. The deep stupidity it takes to think otherwise is depressing to behold.


The mother left the country and went to Belgrade, where Karl was not allowed entry. It was the mother who eliminated the possibility of contact. Karl left England after the estrangement; it was part of the reason for his journey, as he told me at least.

I leave often to go to the mountains because I could not live an entirely domestic life.

It is just a day or two at a time but I realized at some point that this is what I have to do to be able to be a caring husband and father. If I don’t I will become depressed and miserable and no amount of loving them will overcome it. I am much more useful as a happy and functional human being 350 days a year than a miserable one for 365.


That's different from what the article was describing, though. What you do is just normal; you gotta take a break at times for your sanity.

I’m not defending this guy, but many fathers leave because they don’t want to have a family. It could have nothing to do with any of his other plans.

[flagged]


Seems like the "right decision" should have happened a little earlier. Or at least before repeating the mistake a second time.

not defending his goals, nor the man himself (get a vasectomy, you high corporate climber, please), but men don't have any authority on abortion

edit: and yeah, i got the ick from the walking guy too


Not giving men any authority on abortion is taking a hard stance that abortion is a female issue over a human issue.

I won't pass any judgement either way, but it's an interesting perspective.

With 100+ Million orphans in the world, having your own kids is anti-humanitarian (not anti-human) anyways, so why is being a corporate climber relevant?


> Not giving men any authority on abortion is taking a hard stance that abortion is a female issue over a human issue.

i think it should be this way. but what happens when you got someone pregnant by mistake? it can happen even when people take precautions... the man doesn't want it but the woman does. she has the right to have it, but the man shouldn't be obligated into "being a dad". maybe, in a country where abortion is legalized, the man should also be exempt from paying child support. the reverse (the man wanting and the woman not) should still depend on the woman's decision; after all, it is her body and any consequence of pregnancy falls upon her

> With 100+ Million orphans in the world, having your own kids is anti-humanitarian (not anti-human) anyways, so why is being a corporate climber relevant?

yes, i would love a law punishing people (higher taxes maybe?) for having children while there is anyone up for adoption in the country... beyond orphans, having kids is the worst offense to the climate. much more than the combined savings of giving up a car, going vegan, and skipping the occasional flight. it's serious business and i don't like the idea of scarce ecosystems and resources in 200-400 years :) i was just trying to show a case where it's somehow valid for a man to simply walk away (no pun intended; i really didn't sympathize with the plot of our corporate climber here, nor the walking guy)


>you got someone pregnant

And this is the problem, your exact phrasing. You get her pregnant. A man gets a woman pregnant. It's putting all the onus on the man in an activity that requires two consenting participants (rape is obviously excluded for this argument).

It's kinda sexist because it diminishes the responsibility of the woman involved and strengthens the responsibility of the man involved, both bad things and ever-present through many aspects of society.


> And this is the problem, your exact phrasing. You get her pregnant. A man gets a woman pregnant. It's putting all the onus on the man in an activity that requires two consenting participants (rape is obviously excluded for this argument).

have you read what i typed? where do i diminish the responsibility of a woman in my comment? i literally typed that i'm against any decision on having a child or not being made BY MEN


Men: 3-4 contraception types (mostly barrier/permanent); 0 mitigation/cancellation options.

Women: 12-15 contraception types; 4 mitigation types; 4 cancellation types.

And still it is men who are being blamed, despite all the power being in women's hands. A man often only wanted sex, not the child. And yet, if a pregnancy happens, there is nothing he can do about it, even if he was tricked or lied to.

If a woman gets pregnant, she has all the power. She is the sole decider of what to do about it. Therefore, if the child was born, it was always because the woman decided to have it.

If the woman decides to abort the child, she can also do it, without the guy/husband having any say.

This is the reason why I think that abortion rights should be extended to men as well. If women have the right to be the sole deciders in getting children aborted, then men should have the right to a financial abortion (she can decide what to do with the child; he should decide whether he wants to financially participate in the woman's decision; her body, her choice. His money, his choice.). Not only would that be fair and balance reproductive rights, it would also greatly decrease baby-trapping and the number of single mothers.

And while we are at it, make paternity tests mandatory after each birth (before taking upon oneself a 20-year financial burden for a kid who is very often not yours). This would greatly decrease adultery and paternity fraud.


Then why wonder why men feel left alone and act accordingly?

There aren't enough kids to be adopted in Western countries, even for the very small number of people who would want to adopt. The formal requirements, the time and money expenses, and the reliance on a huge amount of luck are often an insurmountable obstacle. My friends tried for many years, but were forced to abandon the process. This was incredibly sad, knowing how great parents they would have been.


I actually wasn’t referring to abortion, but rather to the various steps you can take to avoid having children if you don’t want them. Especially the second time around.

I mean as a gay man who doesn't want kids I still think that it's unfair for men to have zero reproductive rights beyond "Well don't have sex then". Women aren't told the same thing.

I believe the law should be changed: if an unintended child is unwanted by the father and the mother does not want to get an abortion (which is her choice), then the father has the right to refuse contact with the child as well as to refuse supporting the child.

Cause straight men: at the moment, as soon as you stick it in, you have zero choice and zero rights, even if you're using protection and there's been no agreement about whether you're doing it for fun or for reproductive purposes. But then none of you seem to care about it, so...?


Women are definitely told the same thing. That's the whole fight about Roe v. Wade in the US. The difference is that if a man wants the kid and the woman doesn't, the woman is the one who is putting her health and life on the line, not the man. That's why it's her choice. Or at least it used to be in the US. In many places it's not, and women die as a result. Childbirth is somehow still a top-10 killer of women; it's only birth control that dropped it from #1. Men don't die. They're not even the most financially impacted. They also get to walk away like women never get to do. A woman who is forced to carry a child rarely gets to walk out the door and forget about her family. That's why women get to choose. Until men carry the same burden in child care and child creation, it's the kind of unfairness that's inherent to the situation.

I understand why men feel this way, but realistically when a woman is stuck with a child she didn't want, which happens more often than people admit because of so many factors and systems set against the idea of abortion, she never gets to walk away.


this post reads like a parody you'd find on LinkedIn Lunatics. I mean, sure, how could the joys of raising a human being compare to a slight bump in relevant KPIs?

Given this account is just 3 hours old, I take it as satire. Please be satirical or get help.

this has to be satire :D

Jesus Christ, dude.

It's really nuts when they don't even see the problem

They know. That's why it is a brand-new throwaway account.

I'm inclined to believe it was written in jest. Then again, I shouldn't be too surprised if it wasn't.

The username literally contains “jester”. I’m surprised anyone took it seriously.

brand new account...only comment...don't take the bait.

I don’t know much about AI, but I have this image test that everything has failed at. You basically just present an image of a maze and ask the LLM to draw a line through the most optimal path.

Here’s how Nano Banana fared: https://x.com/danielvaughn/status/1971640520176029704?s=46


I just one-shotted it with Claude Code (Opus 4.5) using this prompt. It took about 5 minutes and included detecting that it was cheating at first (it drew a line around the boundary of the maze instead), so it added guardrails for that:

```
Create a devenv project that does the following:

  - Read the image at maze.jpg
  - Write a script that solves the maze in the most optimal way between the mouse and the cheese
  - Generate a new image which is of the original maze, but with a red line that represents the calculated path

Use whatever lib/framework is most appropriate
```

  Output: https://gist.github.com/J-Swift/ceb1db348f46ba167948f734ff0fc604  
  Solution: https://imgur.com/a/bkJloPT
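
For the curious, the heart of a solver like this boils down to something like the sketch below. This is my own minimal reconstruction, not the actual generated code: it assumes dark walls on a light background and hand-picked start/end pixel coordinates.

```python
# Minimal sketch of an image-based maze solver: threshold the image,
# BFS over walkable pixels (shortest path in steps), paint the path red.
from collections import deque

import cv2


def solve_maze(image_path, start, end, output_path="solved.png"):
    """start and end are assumed (x, y) pixel coordinates inside corridors."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Threshold so dark walls become 0 and light corridors become 1.
    _, walkable = cv2.threshold(gray, 128, 1, cv2.THRESH_BINARY)

    h, w = walkable.shape
    parents = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == end:
            break
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and walkable[ny, nx] \
                    and (nx, ny) not in parents:
                parents[(nx, ny)] = (x, y)
                queue.append((nx, ny))

    # Walk back from the exit, painting the path red (BGR order).
    node = end if end in parents else None
    while node is not None:
        img[node[1], node[0]] = (0, 0, 255)
        node = parents[node]
    cv2.imwrite(output_path, img)
```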

This (writing a program to solve the problem) would be a perfectly valid solution if the model had come up with it.

I participated in a "math" competition in high school which mostly tested logic and reasoning. The reason my team won by a landslide is because I showed up with a programmable calculator and knew how to turn the problems into a program that could solve them.

By prompting the model to create the program, you're taking away one of the critical reasoning steps needed to solve the problem.


Programs can solve mazes and LLMs can program. That's a different thing completely.

That just seems like an arbitrary limitation. It's like asking someone to answer a math problem but with "no thinking allowed". Like, I guess we can gauge whether a model just _knows all knowable things in the universe_ using that method... but anything of any value that you are gauging in terms of 'intelligence' is actually going to be validating its ability to go "outside the scope" of what it actually is (an autocomplete on steroids).

It depends whether you're asking it to solve a maze because you just need something that can solve mazes, or if you're trying to learn something about the model's abilities in different domains. If it can't solve a maze by inspection instead of writing a program to solve it, that tells you something about its visual reasoning abilities, and that can help you predict how they'll perform on other visual reasoning tasks that aren't easy to solve with code.

Again, think about how the models work. They generate text sequentially. Think about how you solve the maze in your mind. Do you draw a line directly to the finish? No, it would be impossible to know what the path was until you had traced it. But at that point you have backtracked several times. So, what could a model _possibly_ be able to do for this puzzle which is "fair game" as a valid solution, other than magically know an answer by pulling it out of thin air?

> So, what could a model _possibly_ be able to do for this puzzle which is "fair game" as a valid solution, other than magically know an answer by pulling it out of thin air?

Represent the maze as a sequence of movements which either continue or end up being forced to backtrack.

Basically, it would represent the maze as a graph and do a depth-first search, keeping track of which nodes it has visited in its reasoning tokens.

See for example https://stackoverflow.com/questions/3097556/programming-theo... where the solution is represented as:

A B D (backtrack) E H L (backtrack) M * (backtrack) O (backtrack thrice) I (backtrack thrice) C F (backtrack) G J
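
Here's a toy sketch of that idea (the graph is hypothetical; the point is just that the visited set and the backtrack markers can live entirely in a reasoning trace):

```python
# DFS that emits an explicit trace with "(backtrack)" markers, mirroring
# how a reasoning model could serialize its search in tokens.
def dfs_with_trace(graph, start, goal):
    trace = [start]
    stack = [(start, iter(graph[start]))]  # node + its remaining neighbors
    visited = {start}
    while stack:
        node, neighbors = stack[-1]
        if node == goal:
            return trace
        for nxt in neighbors:
            if nxt not in visited:
                visited.add(nxt)
                trace.append(nxt)
                stack.append((nxt, iter(graph[nxt])))
                break
        else:
            # Dead end: pop the node and record the backtrack.
            stack.pop()
            trace.append("(backtrack)")
    return trace

# Hypothetical maze-as-graph; edges encode which cells connect.
maze = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
        "D": [], "E": ["G"], "F": [], "G": []}
print(" ".join(dfs_with_trace(maze, "A", "G")))
# -> A B D (backtrack) E G
```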


And my question to you is: why is that substantially different from writing the correct algorithm to do it? I'm arguing it's a myopic view of what we are going to call "intelligence". And it ignores that human thought works the same way, using abstractions to move to the next level of reasoning.

In my opinion, being able to write the code to do the thing is effectively the same exact thing as doing the thing, in terms of judging whether it's "able to do" that thing. It's functionally equivalent for evaluating what the "state of the art" is, and honestly, the alternative view is naive about what these models even are. If the model hid the tool calling in the background and only showed you its answer, would we say it's more intelligent? Because that's essentially how a lot of these things work already. Because again, the actual "model" is just a text autocomplete engine and it generates from left to right.


> In my opinion, being able to write the code to do the thing is effectively the same exact thing as doing the thing

That's great, but it's demonstrably false.

I can write code that calculates the average letter frequency across any Wikipedia article. I can't do that in my head without tools because of the rule of seven[1].

Tool use is absolutely an intelligence amplifier but it isn't the same thing.

> Because again, the actual “model” is just a text autocomplete engine and it generates from left to right.

This is technically true, but somewhat misleading. Humans speak "left to right" too. Specifically, LLMs do have some spatial reasoning ability (which is what you'd expect with RL training: otherwise they'd just predict the most popular token): https://snorkel.ai/blog/introducing-snorkelspatial/

[1] https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus...


> I can write code that calculates the average letter frequency across any Wikipedia article. I can't do that in my head without tools because of the rule of seven

That is precisely the point I am trying to make. It's an arbitrary goalpost to say that knowing how to write the code doesn't count as intelligence, and that only doing it in a "chain of thought" would.


First, the thrust of your argument is that you already knew that it would be impossible for a model like Gemini 3 Pro to solve a maze without code, so there's nothing interesting to learn from trying it. But the rest of us did not know this.

> Again, think about how the models work. They generate text sequentially.

You have some misconceptions about how these models work. Yes, transformer LLMs generate output tokens sequentially, but it's odd that you mention this because it has no relevance here. They see and process input tokens in parallel, and then process across layers. You can prove, mathematically, that it is possible for a transformer-based LLM to perform any maze-solving algorithm natively (given sufficient model size and the right weights). It's absolutely possible for a transformer model to solve mazes without writing code. It could have a solution before it even outputs a single token.

Beyond that, Gemini 3 Pro is a reasoning model. It writes out pages of hidden tokens before outputting any text that you see. The response you actually see could have been the final results after it backtracked 17 times in its reasoning scratchpad.


You could actually add mazes and paths through them to the training corpus, or make a model for just solving mazes. I wonder how effective it would be, I’m sure someone has tried it. I doubt it would generalize enough to give the AI new visual reasoning capabilities beyond just solving mazes.

By your analogy, the developers of stockfish are better chess players than any grandmaster.

Tool use can be a sign of intelligence, but "being able to use a tool to solve a problem" is not the same as "being intelligent enough to solve a specific class of problems".


I'm not talking about this being the "best maze solver" or "better at solving mazes than humans". I'm saying the model is "intelligent enough" to solve a maze.

And what I'm really saying is that we need to stop moving the goalposts on what "intelligence" is for these models, and start moving the goalposts on what "intelligence" actually _is_. The models are giving us an existential crisis about not only what it might mean to _be_ intelligent, but also how intelligence might actually work in our own brains. I'm not saying the current models are Skynet, but I think there's going to be a lot learned by reverse engineering the current generation of models to really dig into how they encode things internally.


> I'm saying the model is "intelligent enough" to solve a maze.

And I don't agree. I think that at best the model is "intelligent enough to use a tool that can solve mazes" (which is an entirely different thing) and at worst it is no different than a circus horse that "can do math". Being able to repeat more tricks and being able to select which trick to execute based on the expected reward is not a measure of intelligence.


I would encourage you to read the code it produced. It's not just a simple "solve maze" function. There are plenty of "smart" choices in there to achieve the goal given my very vague instructions, made as a result of it analyzing why it failed at first and then adjusting.

I don't know how else to get my point across: there is nothing "smart" about an automaton that needs to resort to A* algorithm implementations to "solve" a problem that any 4-year-old child can solve just by looking at it.

Where you are seeing "intelligence" and "an existential crisis", I see "a huge pattern-matching system with an ever-increasing vocabulary".

LLMs are useful. They will certainly drive a lot of automation across all types of white-collar work. They will definitely lead to all sorts of economic and social disruptions (good and bad). I'm definitely not ignoring them as just another fad... but none of that depends on LLMs being "intelligent" in any way.


We know there are very simple maze-solving algorithms you could code in a few lines of Python, but no one would claim that constitutes intelligence. The difference is between applying intuitive logic and using a predetermined tool.

If you allow tool use, much simpler models can solve it.

In fact, one of the tests I use as part of GenAI Showdown involves both parts of the puzzle: draw a maze with a clearly defined entrance and exit, along with a dashed line indicating the solution to the maze.

Only one model (gpt-image-1) out of the 18 tested managed to pass the test successfully. Gemini 3.0 Pro got VERY close.

https://genai-showdown.specr.net/#the-labyrinth


super cool! Interesting note about Seedream 4 - do you think awareness of A* could actually improve the outcome? Like I said, I'm no AI expert, so my intuitions are pretty bad, but I'd suspect that image analysis + algorithmic pathfinding don't have much crossover in terms of training capabilities. But I could be wrong!

Great question. I do wish we had a bit more insight into the exact background "thinking" that was happening on systems like Seedream.

When you pose the "solve a visual image of a maze" problem to something like ChatGPT, there's a good chance it'll throw a Python VM at it, threshold the image with something like OpenCV, and use a shortest-path-style algorithm to try to solve it.


I have also tried the maze-from-a-photo test a few times and never seen a one-shot success. But yesterday I was determined to succeed, so I allowed Gemini 3 to write a Python GUI app that takes in photos of physical mazes (I have a bunch of 3D-printed ones) and finds the path. This does work.

Gemini 3 then one-shot ported the whole thing (which uses CV Python libraries) to a single-page HTML+JS version that works just as well.

I gave that to Claude to assess and assign a FAANG hiring level to, and it was amazed and said Gemini 3 codes like an L6.

Since I work for Google and used my phone in the office to do this, I think I can't share the source or file.


The reason is that image generators don't iterate on their output the way text-based LLMs do. Essentially, they produce the image in "one hit" and can't work through a complex sequence, in the same way that you couldn't one-shot this either. Try taking a random maze, glancing at it, then going off to draw a squiggle on a transparency. If you were to place that on top of the maze, there's virtually no chance you'd have found the solution on the first try.

That's essentially what's going on with these AI models: they're struggling because they only get "one step" to solve the problem instead of being able to trace through the maze slowly.

An interesting experiment would be to ask the AI to incrementally solve the maze. Ask it to draw a line starting at the entrance a little ways into the maze, then a little bit further, etc... until it gets to the end.
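
As a sketch, the experiment is just a loop. Note that `edit_image` below is a made-up stand-in for whatever image-editing model call you'd actually use, not a real API:

```python
# Hypothetical incremental-solving loop for an image-editing model.
def edit_image(prompt: str, image: bytes) -> bytes:
    # Stand-in only: swap in a real image-editing model call here.
    raise NotImplementedError

def solve_incrementally(maze_png: bytes, steps: int = 20) -> bytes:
    current = maze_png
    for _ in range(steps):
        # Each call only has to extend the line a little, instead of
        # producing the whole solution in one shot.
        current = edit_image(
            "Extend the red line a short distance further along the open "
            "corridor toward the exit. Do not cross any walls.",
            current,
        )
    return current
```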


you're kind of describing the figure in table 1 (page 8) of the diffusion forcing paper

https://arxiv.org/abs/2407.01392

of course it doesn't redraw the image on every step, so not exactly what you're suggesting (interesting idea btw) but i think it's relevant.


I tried this with Claude:

```
> [Image #1] Create a unicode "ascii-art" version of this image, with the optimal path through the maze highlighted in a solid colour.

I'll create an ASCII art version of this maze with the solution path highlighted!

  ┌─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┐
  ●●│ │ │ │       │   │       │     │ │   │   │ │ │   │
  ├─┤●└─┴─┐ ├───┐ │ ╔═╗ ├─────┤ ╔═══╝ │ ╔═╝ ╔═╝ │ │ ╔═╝ ├─┤
  │ │●●●●●└─┤   │ │ ║ │ │     │ ║     │ ║   ║   │ │ ║   │ │
  │ └───┐●╔═╧═╗ │ └─╢ │ │ ┌─╔═╝ ║ ┌───┤ ╚═╗ ╚═╗ │ └─╢ ┌─┘ │
  │     │●║   ║ │   ║ │ │ │ ║   ║ │   │   ║   ║ │   ║ │   │
  ├───┐ │●╚═╗ ╚═╧═╗ ║ └─┤ │ ║ ╔═╝ └─╔═╧═╗ ╚═╗ ╚═╧═╗ ║ └─┬─┤
  │   │ │●●●└───┐ ║ ║   │ │ ║ ║     ║   ║   ║     ║ ║   │ │
  │ ╔═╝ └───┐●╔═╝ ║ ╚═╗ │ └─╢ ╚═╗ ╔═╝ ╔═╝ ┌─╢ ╔═══╝ ╚═╗ │ │
  │ ║       │●║   ║   ║ │   ║   ║ ║   ║   │ ║ ║       ║ │ │
  │ ╚═══╗ ╔═╝●╚═╗ ╚═╗ ║ ├─╔═╧═╗ ║ ║ ╔═╝ ╔═╝ ║ ║ ╔═╗ ╔═╝ │ │
  │     ║ ║●●●●●│   ║ ║ │ ║   ║ ║ ║ ║   ║   ║ ║ ║ ║ ║   │ │
  ├───╔═╝ ║●┌─┐●└─╗ ║ └─┤ ║ ╔═╝ ║ ╚═╝ ╔═╧═╗ ║ ╚═╝ ║ ╚═╗ │ │
  │   ║   ║●│ │●●●║ ║   │ ║ ║   ║     ║   ║ ║     ║   ║ │ │
  │ ╔═╝ ╔═╝●│ └─┐●╚═╧═╗ │ ║ ║ ╔═╧═══╗ ║ ╔═╝ ╚═══╗ ╚═╗ ║ └─┤
  │ ║   ║●●●│   │●●●●●║ │ ║ ║ ║     ║ ║ ║       ║   ║ ║   │
  │ ║ ╔═╝●┌─┴─╗ └───┐●╚═╝ ║ ║ ║ ╔═╗ ║ └─╢ ╔═══╗ ╚═╗ ╚═╝ ╔═╝
  │ ║ ║●●●│   ║     │●●●●●║ ║ ║ ║ ║ ║   ║ ║   ║   ║     ║
  │ ╚═╝●╔═╧═╗ ╚═╗ ╔═╧═══┐●╚═╝ ║ ║ ║ ╚═╗ ║ ║ ╔═╧═╗ ╚═══╗ ╚═╗
  │   ●●║   ║   ║ ║     │●●●●●║ ║ ║   ║ ║ ║ ║   ║     ║   ║
  ├─╔═┘●╚═╗ ╚═╗ ║ ║ ╔═╗ └───┐●╚═╝ ╚═╗ ║ └─╢ ║ ╔═╧═╗ ╔═╝ ╔═╝
  │ ║●●●●●║   ║ ║ ║ ║ ║     │●●●●●●●║ ║   ║ ║ ║   ║ ║   ║
  │ ║●┌─╗ ╚═╗ ║ └─╢ ║ ╚═══╗ └─────┐●╚═╧═╗ ║ ║ ║ ╔═╝ ║ ╔═╝ ╔
  │ ║●│ ║   ║ ║   ║ ║     ║       │●●●●●║ ║ ║ ║ ║   ║ ║   ║
  │ ╚═╝ ║ ╔═╝ ╚═╗ ║ ╚═╗ ╔═╧═════╗ └───┐●╚═╝ ║ ╚═╝ ╔═╝ ║ ╔═╝
  │     ║ ║     ║ ║   ║ ║       ║     │●●●●●║     ║   ║ ║
  └─────╨─╨─────╨─╨───╨─╨───────╨─────┴───●●╨─────╨───╨─╨──

  The path from the mouse to the cheese is marked with ● characters. This shows the optimal route through the maze, navigating through the corridors while avoiding all the walls!
```

Suffice to say, it didn't do either part right.


>Suffice to say, it didn't do either part right.

I dunno why people are surprised by this. This is what you get with text->text. Reasoning doesn't work text->text.


Honestly, even though it failed, I'm kind of impressed that the trajectory mostly stays in the lines. If you remove all but two openings, does it work? The drawing you show has more than two openings, some of which are inaccessible from the inside of the maze.

It's ASCII art, so the "trajectory" will always stay within the lines, because you can't have the ● and ║ characters intersect each other.

The only impressive part would be that the trajectory is "continuous", meaning for every ● there is always another ● character in one of the 4 adjacent positions.
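
That continuity property is at least easy to check mechanically. A quick sketch, assuming the maze is a plain string with ● marking the path:

```python
# Check that every ● has another ● in one of the 4 adjacent cells,
# i.e. the drawn path never "teleports" across the grid.
def path_is_continuous(ascii_maze: str) -> bool:
    grid = ascii_maze.splitlines()
    dots = {(r, c) for r, row in enumerate(grid)
                   for c, ch in enumerate(row) if ch == "●"}
    return all(
        any((r + dr, c + dc) in dots
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        for r, c in dots
    )
```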


I know the characters can’t cross. By intersect, I mean two dots on either side of a boundary line in the direction of the path.

I kinda want to know what happens if you make it continue the line by one step 20 times in a row. A human can draw this gradually, the image model has to draw it in one shot all at once.

That might be an interesting training set, a bunch of mazes…

I think it’s a disingenuous read to assume the original commenter means “understanding” in the literal sense. When we talk about LLM “understanding”, we usually mean it in a practical sense: if you give an input to the computer and it gives you an expected output, then colloquially the computer “understood” your input.


At this point, I'd be happy to send my child to an entirely analog school. No computers, just books, pencils and paper.


A couple of comments:

1. I really wish Dreams had kept going, expanded beyond PlayStation, and tried to take the market from Roblox. They were infinitely more safety-minded with their content. It would be great to see a Roblox competitor.

2. Kotaku on mobile is a horrid experience. Like 20% of the screen is allocated to content; the rest is ads. My god.


I've gotten sucked into TikTok for the past couple of years, and I can really feel it. All social media is likely harmful to some degree, but TikTok is not like other social media. I've been online since the beginning of the web, and there's nothing else like it in terms of actively destroying your brain's ability to focus.

It's like the fentanyl of attention, the purest distillation of the state of mind we entered into when we mindlessly flipped through channels on TV.

If you use it enough, your brain starts to find it _very_ irritating to focus on anything for more than 5-10 seconds or so. I really can't describe how powerful of an effect it is. I don't know if the Chinese government intended to use it as some sort of covert weapon, but if they did then they're geniuses. It literally makes you stupid.


Couldn't be further from my experience. I enjoy it, watch for a bit, or even for an hour+, and then put it down. No noticeable impact on my ability to focus at all: 5 hours still flies by while coding.

Idk if I'm built different, but I generally doubt it. I find these statements about brain rot to be either hyperbolic, or at the very least reminiscent of the "violent video games make you kill people IRL" conversations of the '90s/00s.


Anecdotes are anecdotes, but my experience mirrors the above poster's, except for the timeline and platform. I got vortexed into YouTube Shorts in a way I haven't felt anything close to since maybe the early days of StumbleUpon. A very addictive rush hitting all the right synapses. I'd probably watch 2 or more hours a night, and I doubt that's even an honest account. Some furniture refinishing projects thankfully pulled me away long enough to break the cycle.

It was a very addictive sensation. I believe other accounts that mirror this and see them as non-hyperbolic having experienced it myself.


I'm sure everyone is different. I believe alcohol is probably very addictive, but even though I've had periods of my life where I was drinking heavily (mostly in social situations), I've never once felt the sensation of "needing a drink". It's completely foreign to me. Maybe it's a genetic thing, no idea. I just know deep down that I'll never become an alcoholic. But that doesn't mean it doesn't affect other people very differently.

I remember the video game arguments of the 90s; Mortal Kombat never made me violent. I can see how it might seem like history repeating itself, but in this case I'm talking about my own experience.


Your experience is at odds with the statistical results of the linked studies covering nearly 100,000 participants.


The linked study's r values all show minor to no effect. An r value of -0.37 is barely correlated. Talk to me when they're beyond -0.5.

How much of it is just constantly being told that SFVs are bad for you?

The linked article is a meta-analysis with basically zero controls that finds a statistical correlation barely better than background noise. Yawn.


> I don't know if the Chinese government intended to use it as some sort of covert weapon, but if they did then they're geniuses. It literally makes you stupid.

I don't think you need any government conspiracy for this. TikTok is an inevitable product of the attention economy — more a capitalist wart than deliberate sabotage.


China solved this.

They deliberately made SFV educational content only. TikTok had to comply or face the wrath of the CCP.

SFV about a math theorem, a physics phenomenon, or how to use a type of conjugation in English? Cool.

SFV that's just a clip of a show or movie with some stupid music overlaid? SFV about a kid doing some stupid dance? Banned.


I've heard this, but I don't think this is actually true (save for some other services that target children).

The problem isn't just the content, though. It's also the medium. SFV as a medium cripples cognition and attention span. It feeds bad habits, emotionally and intellectually.

How many of Gen Z/Alpha have the attention span to sit through a movie, let alone a book? There's even a joke that if someone says they've binged on a Netflix show, it means they're a millennial: younger generations would find that kind of sustained engagement with a subject boring.

SFV is opposed to the kind of engagement you need for deep work.


Agreed on API deprecation, the surface is so broad at this point that it's nearly impossible to build a browser from scratch. I've been doing webdev since 2009 and I'm still finding new APIs that I've never heard of before.

