We’re working on an AI-first interview platform for developers: Valuate.dev
The usual approach to coding tasks doesn’t work anymore - companies are looking for AI engineers, yet it’s still unclear how to assess AI proficiency.
Our goal is to design challenges that combine prompting and coding, so we can score both how well a candidate prompts and how good the resulting code is. The aim is to bring measurement to AI prompting skills: how well-aligned a candidate's prompts are, and how they handle LLM-generated code.
At the same time, we want to keep a strong human balance in the process: hiring is a two-way street, and screening shouldn’t be fully offloaded to AI. We’re human-first.
Several tasks are already live — you can try them at https://valuate.dev