
If you have ever interviewed for a job, there is a non-trivial chance that you have encountered “difficult” or outlandish interview questions. These are questions that are intentionally unexpected, abstract, or only loosely related to the actual requirements of the position. Instead of systematically assessing job-relevant skills, they are designed to surprise candidates, evaluate composure, or signal creativity.
Interviewers often advocate these questions as smart ways to assess problem-solving ability, cultural fit, or performance under pressure. The evidence tells a different story. Decades of research in industrial-organizational psychology show that unstructured, puzzle-style interviews have low predictive validity. They generate noise, not signal. At best, they measure how comfortable someone is with improvisation. At worst, they measure how similar the candidate is to the interviewer.
Specific cases
To illustrate this point, here are some common examples, ordered from least absurd, or at least somewhat defensible, to most absurd:
1. What is your greatest weakness?
Nominally work-related, although the response is usually more strategic than honest. The only rational way to respond is to disguise a strength as a flaw. It’s less a test of self-awareness than an audition for plausible humility.
2. Sell me this pen.
Some relevance to sales roles, but still an artificial performance far from any real context. Popularized by The Wolf of Wall Street, it reinforces the myth that great selling is about talking fast rather than listening, diagnosing needs, and building trust.
3. Tell me about a time you failed.
In principle, a legitimate behavioral question. In practice, it is often an invitation to narrate a carefully curated setback that highlights resilience, determination, and eventual triumph. It rewards storytelling ability more than learning agility.
4. How many tennis balls fit inside a Boeing 747?
A classic “guess-timate” puzzle meant to test structured thinking. Geeks may like it, but it predicts little beyond prior exposure to similar puzzles. If you want to measure cognitive ability, there are much more reliable and validated tools.
5. How many windows are there in New York City?
Same logic, further removed from any realistic work task. For what it’s worth, major language models estimate the figure in the tens of millions, depending on assumptions. Which illustrates the deeper point: if ChatGPT can answer in seconds, why use the question to judge human potential?
6. If you were an animal, which one would you be and why?
A thinly veiled personality quiz. It is a BuzzFeed throwback disguised as a talent evaluation. The answer usually reveals more about the interviewer’s projections than about the candidate’s traits.
7. If you could have dinner with any historical figure, who would it be?
A nice icebreaker disguised as a values assessment. It also serves as a signaling exercise: how curious, cultured, contrarian, or provocative can you appear in less than 30 seconds? Say Nelson Mandela and you are a virtue signaller. Say Steve Jobs and you signal ambition. Say Machiavelli and you signal strategic depth. But say Stalin and suddenly the interview becomes a moral inquiry: was that intellectual curiosity, dark humor, or deeply questionable judgment? The question reveals less about your leadership potential than about your appetite for reputational self-sabotage.
8. If you were a kitchen utensil, what would you be?
At this point, the exercise has become pure parody: shows like The Office come to mind. Spoon suggests reliability. Knife suggests edge. Spork implies versatility. The real variable being tested may simply be how badly you want the job, signaled by the fact that you haven’t walked out of the room.
The science
So what does the real science of interviews say?
First, there is evidence that some interviewers are not merely mistaken, but take a certain Machiavellian pleasure in putting candidates on the spot. Research on interviewer behavior shows that people higher in everyday sadism or dominance are more likely to ask questions that are stress-inducing or intentionally uncomfortable. In other words, the brainteaser can sometimes have less to do with evaluating you and more to do with interviewers enjoying deviant power dynamics.
Second, the predictive validity of unstructured interviews is consistently low. Meta-analyses spanning decades show that traditional, free-flowing interviews correlate only modestly with subsequent job performance. The problem is not the conversation per se, but the inconsistency. Different candidates receive different questions. Interviewers rely on intuition. The evaluation criteria change midway. The result is noise, bias, and overconfidence, and these issues often go unnoticed because of subsequent confirmation bias and hiring managers’ reluctance to admit mistakes. Essentially, if an interviewer likes you, they’ll still like you after they hire you, or they’ll pretend you’re doing a great job to avoid looking like a fool.
In contrast, structured interviews work. The formula is not mysterious: define the competencies that matter for the position; ask all candidates the same job-relevant questions; anchor assessments to predefined scoring rubrics; and combine interview data with other validated predictors, such as cognitive ability tests or work samples. Behavioral questions about past actions and situational questions tied to realistic work scenarios consistently outperform seemingly clever riddles and outlandish puzzles.
The role of AI
And then there’s the AI, not so much the elephant in the room but the bull in the china shop, which is already rearranging the furniture while we’re still debating the seating plan.
In a world where candidates can rehearse impeccable answers with generative tools, the theatrical interview becomes even more obsolete. Chatbots can generate polished responses to “biggest weakness” or “sell me this pen” in seconds. Ironically, the more predictable and formulaic the question, the easier it is to game. This raises the bar for employers: assessment should shift toward observable skills, simulations, work-sample tests, and data from multiple sources.
This does not mean that interviews become irrelevant. It means they must evolve. When information is abundant and answers are cheap, the premium shifts from rehearsed narratives to demonstrated ability. Instead of asking candidates what they would do, employers can observe what they actually do: solve a real problem, analyze a real case, critique a flawed strategy, or collaborate with a future teammate. AI can help candidates prepare, but it cannot fully fake sustained performance in a realistic simulation.
There is also a deeper irony. The same tools that allow candidates to refine their answers can help employers design better assessments. AI can help standardize questions, generate competency-based scenarios, detect bias in evaluations, and even identify which interview questions correlate with outcomes. In other words, AI exposes the weakness of theatrical interviews while offering the tools to fix it. The real risk is not that candidates use AI. The problem is that employers do not update their methods accordingly.
In short, the future of interviews isn’t about more complicated questions. It’s about better design. The inconvenient truth is that outlandish interview questions persist because they’re fun, easy, and ego-affirming. But recruiting is too important to leave to entertainment. If organizations are serious about talent, they must replace improvisational theater with evidence-based assessments and have the humble, self-critical honesty to truly test the outcome of their decisions, recognize when they are wrong, and make an effort to change things and improve.

