We desperately need AI-literate employees—that’s the future of work. But those same people are the ones best equipped to game our hiring processes.
There’s now an entire industry of tools designed to help candidates cheat: LeetCode Wizard, Interview Coder, Cheating Daddy. Some even have a “humanizer” feature that introduces deliberate mistakes so answers don’t seem suspiciously perfect.
I recently explored this tension with Wiktor Żołnowski (CEO of Pragmatic Coders) and Łukasz Wróbel (founder of Job for Agent) on my podcast. The conversation left me thinking about what we’re actually screening for—and how to find people who will thrive in this new reality.
But is it actually cheating? As Łukasz put it: “If pasting your questions to ChatGPT is considered cheating, your questions were never filtering the right attributes of candidates.”
The real problem isn’t candidates using AI. It’s that we’re testing 21st-century skills with 20th-century methods. We’re no longer hiring humans—we’re hiring human+AI systems. And if intelligence is becoming abundant and AI can do in seconds what used to take hours, what does the human actually bring?
Why now
This isn’t a slow-moving trend. According to METR’s research, the length of software tasks AI can handle reliably is doubling roughly every seven months. OpenAI’s own analysis shows AI performance approaching human benchmarks across professional services.
AI task capabilities over time. Source: METR
As Fiverr’s CEO put it: “Impossible is the new hard.” The hiring process needs to catch up—fast.
What to look for
This is evolving fast, but right now four competencies seem to matter:
- Judgment — making calls when the answer isn’t obvious
- Context — reading the room, navigating nuance, understanding the business
- Orchestration — coordinating across people and AI agents to get things done
- Accountability — owning outcomes and standing behind decisions
AI struggles with all of these. I don’t expect that to change anytime soon.
Here’s one interview question worth trying: “Tell me about a time you coordinated three or more people on something with no defined process.” You’ll quickly spot who’s drawing from real experience.
What’s working
Wiktor shared his agency’s approach: they brought interviews back to the office—but not to prevent AI use. Instead, they actively ask candidates to use AI during the interview and watch how they use it.
“We pay attention to how they use AI and whether they can see where the AI is wrong,” Wiktor explained. “80% of mis-hires weren’t technical-skills problems. It was culture fit, or not learning and acting on feedback.”
Canva takes a similar stance—they explicitly allow AI in interviews and design harder, more ambiguous challenges that test judgment over syntax.
Both approaches point the same way: screen for judgment, not speed.
For the full conversation, listen to this episode of Hidden Layers.
If you want to go deeper:
- Canva’s approach to embracing AI in interviews: Yes, You Can Use AI in Our Interviews
- METR’s research on AI task capabilities: Measuring AI Ability to Complete Long Tasks
- OpenAI’s analysis of AI impact on professional work: Measuring AI Ability to Complete Tasks
- CGP Grey’s 2014 video, worth revisiting in 2025: Humans Need Not Apply
Want to discuss how your team can adapt? Book a call.