johntiger1's comments | Hacker News

https://bobalearn.org/

Site where you can read and generate graded Chinese stories, in order to learn Chinese. What's a graded story? It's one written with the vocabulary of an {X}-year-old. Words are often repeated, so that you can pick them up from the surrounding context. I normally pay for book versions of these, so I thought, why not make one that's online and free?
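The core trick is simple enough to sketch. Here's a minimal illustration (a simplification, not necessarily the exact bobalearn pipeline): prompt an LLM with a whitelist of known words, then check how much of the output stays inside that list. In this sketch, call_llm is a placeholder for whatever completion API you use, and jieba handles the word segmentation.

    import jieba  # Chinese word segmentation (pip install jieba)

    def build_prompt(vocab: set[str], topic: str) -> str:
        """Ask the model for a short story that only uses the given words."""
        word_list = "、".join(sorted(vocab))
        return (f"请只用下面这些词，写一个关于{topic}的短故事，"
                f"并尽量重复使用它们：\n{word_list}")

    def coverage(story: str, vocab: set[str]) -> float:
        """Fraction of segmented tokens that fall inside the known vocabulary."""
        tokens = [t for t in jieba.lcut(story) if t.strip()]
        return sum(t in vocab for t in tokens) / max(len(tokens), 1)

    # call_llm is a placeholder for whatever completion API is used.
    # story = call_llm(build_prompt(hsk1_vocab, "小猫"))
    # if coverage(story, hsk1_vocab) < 0.95: regenerate or post-edit the story.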


I tried this one: https://bobalearn.org/read/00000000-0000-0000-0000-000000000... When listening to the audio, the words are mostly in Cantonese while only a few are in Mandarin; is something wrong here? After a page refresh it's all Cantonese now.

Also, it seems that it can't read the whole story, or did I miss the button somewhere?


Yeah, principal EM is confusing here. Wouldn't EM I report to EM II? At Meta it's typically M1 -> M2.


I think L5 Manager at Google is EM I, which is what Facebook calls M0. So an L6 manager (the vast majority of line managers) would be EM II at Google.


who watches the watchman?


Wow, this will eat Meta's lunch


Meta is so cooked. I think most enterprises will opt for OpenAI or Anthropic, and others will host OSS models themselves or on AWS/other infra providers.


I'll accept Meta's frontier AI demise if they're in their current position a year from now. People killed Google prematurely too (remember Bard?), because we severely underestimate the catch-up power bought with ungodly piles of cash.


And boy, with the $250m offers to people, Meta is definitely throwing ungodly piles of cash at the problem.

But Apple is waking up too. So is Google. It's absolutely insane, the amount of money being thrown around.


It's insane numbers like that that give me some concern about a bubble: not because AI hits some dead end, but because of a plateau that shifts the field from aggressive investment to passive-but-steady improvement.


Catching up gets exponentially harder as time passes. It's way harder to catch up to current models than it was to the first iteration of GPT-4.


Maverick and Scout were not great in my experience, even with post-training, and then several Chinese models at multiple sizes made them kind of irrelevant (dots, Qwen, MiniMax).

If anything this helps Meta: another model to inspect/learn from/tweak etc. generally helps anyone making models


There's nothing new here in terms of architecture. Whatever secret sauce there is, it's in the training.


Part of the secret sauce since o1 has been access to the real reasoning traces, not the summaries.

If you even glance at the model card you'll see this was trained on the same CoT RL pipeline as o3, and it shows when using the model: this is the most coherent and structured CoT of any open model so far.

Having full access to a model trained on that pipeline is valuable to anyone doing post-training, even if it's just to observe, but especially if you use it as cold start data for your own training.
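To make "cold start data" concrete, here's a rough sketch of my own (assuming a chat-style SFT format with a <think> delimiter; this is an illustration, not anyone's actual pipeline): sample the open model on your own prompts, keep the raw trace, and pack prompt/trace/answer triples into JSONL for supervised fine-tuning.

    import json

    def to_sft_example(prompt: str, reasoning: str, answer: str) -> dict:
        """Pack a teacher prompt/trace/answer triple into a chat-style record."""
        return {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant",
                 "content": f"<think>\n{reasoning}\n</think>\n{answer}"},
            ]
        }

    def write_jsonl(examples, path="cold_start.jsonl"):
        with open(path, "w", encoding="utf-8") as f:
            for ex in examples:
                f.write(json.dumps(ex, ensure_ascii=False) + "\n")

    # The <think> delimiter is just one common convention, not a standard;
    # swap in whatever format your SFT trainer expects.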


Its CoT is sadly closer to those sanitised o3 summaries than to R1-style traces.


It has both raw and summarized traces.


I mean that even GPT-OSS's raw traces are close to o3's summarised ones.


I believe their competition has been from Chinese companies for some time now.


They will clone it


Yeah there was no need for an ad hominem there


There are definitely some shills all over HN now... But even aside from that, the sheer novelty of it (plus the less robotic ethical alignment) is enough for many.


Looks like 225 will be affected starting in April? Or has this already occurred?


Can't her husband just stand in front of her, behind the laptop webcam? There are often ridiculously simple real-world workarounds to these complex device-security processes.


No one else is supposed to be in the same room as the test taker, for the obvious reason that they should receive no help on the test in any way.

Some households -- especially if you have small children, live in a small house without the luxury of separate rooms, are in a noisy neighborhood, etc. -- may pose a challenge.

But outside those scenarios, the candidate should know to dedicate one room to themselves for the duration of the test, preferably keep it locked from the inside, and inform others in the house not to disturb them for those 2 or 3 hours.

I am surprised to see that this simple requirement -- that there should be no other person in the video frame (which will be audited, both manually and through automatic processing) -- is considered draconian. How? Are test takers expecting to take tests in rooms where anyone else can casually walk in, move around, etc.?

To be sure, there are other quirks, like no bathroom breaks, no glancing away from the screen, no mouthing the words as you read, no covering your face, or sometimes no resting your chin on your hand as you think, that can all become very tedious and stressful.


They make you wave the webcam around the room before starting the exam.


They want to see your face and will disqualify you if you glance away from the monitor too often.


Will they disqualify me if I am naked during the exam?


Yes - they have standards

They don't actually disqualify you; most of these places (like https://www.proctoru.com or similar) just report to the exam administrator what they saw/noticed.

Some are even using AI flagging now - https://assess.com/remote-proctoring/


That sounds terrible; I often look away to the side when I am thinking...


I often stare into space (some have said it looks like a thousand-yard stare), I wonder if that would be an issue?


Ironically, Meta employees are also affected by this


Just like last time they had an outage.


Gonna have my Angle Grinder controller ready for the new Sysadmin Simulator release


Yep, in medical school that's one of the first things we learn. In theory it is also best to measure in both arms (as well as one leg if you suspect a certain diagnosis)

