Hacker News
Reversing the alignment part of LLM training (twitter.com/jxmnop)
3 points by fzliu 4 months ago | past
Unaligned GPT-OSS-20B-base extracted from OpenAI's model (twitter.com/jxmnop)
1 point by fragmede 4 months ago | past
GPT-OSS-20B extracted to a base model without alignment (twitter.com/jxmnop)
3 points by polyrand 4 months ago | past | 2 comments
Curious about the training data of OpenAI's new GPT-OSS models? I was too (twitter.com/jxmnop)
239 points by flabber 4 months ago | past | 57 comments
Curious about the training data of OpenAI's new GPT-OSS models? I was too (twitter.com/jxmnop)
4 points by tosh 4 months ago | past
Reverse Engineering training data of OpenAI's new GPT-OSS models (twitter.com/jxmnop)
1 point by amrrs 4 months ago | past
All Embedding Models Learn the Same Thing (twitter.com/jxmnop)
1 point by MrBuddyCasino 7 months ago | past
The paper that got LLM started. It was in 2003, in Montreal (twitter.com/jxmnop)
6 points by ksec 7 months ago | past
Prompting LLMs to Learn Language from a Single Book (twitter.com/jxmnop)
2 points by abetusk on June 26, 2024 | past | 1 comment
Training open-source LLMs is a losing battle, a complete dead end (twitter.com/jxmnop)
3 points by jxmorris12 on Sept 15, 2023 | past
