
Multiple AI models, including ChatGPT, Gemini, and Grok, tend to pick 17 when asked to choose a number between 0 and 25. While seemingly trivial, this behavior reveals how deeply human biases are baked into machine learning systems. The number 17 is a culturally and psychologically favored “random” number due to its prime status and uniqueness. When AIs echo that preference, it reflects their training on massive datasets filled with human patterns. This small but consistent choice is a snapshot of a larger truth: AIs are not reasoning agents but systems trained to imitate human language, complete with its biases and quirks.
Why 17 Feels Random to Humans
Seventeen holds a strange power in the human mind. Psychologists note that it resists easy categorization: it is not a round number like 10 or 20, nor a highly recognizable one like 7. It is also prime, which makes it feel indivisible in both mathematical and psychological terms. It carries cultural weight as well: 17 is considered unlucky in Italy, and it pops up as a default “random” number all over pop culture.
Surveys consistently show that when people are asked to “pick a number between 1 and 20,” 17 is chosen far more often than chance would predict. That tendency plausibly carries over to slightly wider ranges like 0 to 25. Human brains don’t generate randomness well. We overcorrect, avoiding symmetrical or round numbers that feel “too obvious.” In that context, 17 lands in the sweet spot: it’s unique, memorable, and just obscure enough to seem surprising.
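To make the contrast concrete, here is a minimal Python sketch comparing a genuinely uniform pick with a human-style biased one. The weights are invented for illustration, not taken from any survey; only the shape of the skew (a spike on 17, a dip on round numbers) reflects the pattern described above.

```python
import random
from collections import Counter

numbers = list(range(26))  # candidate picks: 0 through 25

# Invented weights (not survey data): dampen round numbers, boost
# "random-feeling" primes, and give 17 an exaggerated spike.
weights = [1.0] * 26
for n in (0, 5, 10, 15, 20, 25):
    weights[n] = 0.4   # round numbers feel "too obvious"
for n in (7, 13, 23):
    weights[n] = 2.0   # other primes feel more random
weights[17] = 5.0      # the cultural favorite

uniform = Counter(random.choice(numbers) for _ in range(10_000))
humanlike = Counter(random.choices(numbers, weights=weights, k=10_000))

print("uniform top 3:   ", uniform.most_common(3))
print("human-like top 3:", humanlike.most_common(3))
```

Run it a few times: the uniform tally has no stable favorite, while the biased tally puts 17 on top in virtually every run.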
AI models learn from massive amounts of text written by humans. If people tend to pick 17, then AIs trained on those texts will learn that pattern too. When prompted, they reproduce that pattern, not because they understand randomness, but because they’ve observed us doing the same, again and again.
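This claim is easy to test empirically. The sketch below assumes the official OpenAI Python SDK with an API key in the environment; the model name and the sample size of 100 are arbitrary choices, and any chat-capable model could stand in. If the pattern described above holds, 17 should top the tally.

```python
from collections import Counter
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_for_number() -> str:
    """Ask the model once and return its raw reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{
            "role": "user",
            "content": "Pick a number between 0 and 25. Reply with the number only.",
        }],
    )
    return resp.choices[0].message.content.strip()

# Repeat the question and count the answers.
tally = Counter(ask_for_number() for _ in range(100))
print(tally.most_common(5))
```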
What This Says About AI Design
The tendency for AI models to pick 17 reveals a central limitation in how today’s large language models work. These systems don’t understand numbers, randomness, or even the concept of choice. Instead, they rely on probabilistic prediction: given a prompt, what word (or number) most likely follows, based on their training data? In this case, the models have seen prompts like “pick a number between 0 and 25” in their training data, with 17 appearing disproportionately often as the answer, so they learn that 17 is the statistically likely response. Decoding compounds the effect: greedy or low-temperature sampling emits the single most probable token, so even a modest human bias toward 17 can become a near-deterministic output. This is the essence of the “stochastic parrot” critique: modern AIs are masters of mimicry, not meaning.
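A toy example makes the amplification mechanism concrete. The probabilities below are invented; real model logits differ. The point is only that greedy decoding picks the single most probable token, so a modest learned edge for “17” becomes a deterministic output.

```python
# Toy next-token distribution for the prompt "Pick a number between 0 and 25:".
# All probabilities are invented for illustration.
probs = {str(n): 0.02 for n in range(26)}  # roughly uniform baseline
probs["17"] += 0.10                        # a modest human-derived bump
total = sum(probs.values())
probs = {tok: p / total for tok, p in probs.items()}  # renormalize

# Greedy decoding emits the single most probable token, so a modest edge
# for "17" (about 19% here vs. ~3% for each rival) wins 100% of the time.
print(max(probs, key=probs.get))  # -> 17, every time
```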
This consistency across models such as ChatGPT, Grok, and Gemini also hints at shared data sources or pretraining objectives. If different teams train AIs on similarly structured internet data, similar outputs are inevitable. This convergence in behavior raises questions about diversity in model design. To move beyond superficial imitation, future systems will need architectures and training methods that support abstraction, reasoning, and grounded meaning. Until then, quirks like always picking 17 remain small reminders of how closely today’s AIs echo us, and how far they remain from thinking the way we do.
Why It Matters
The repeated choice of 17 is not a mere statistical anomaly. It is the fact that AIs converge on the same number, a faithful reflection of human behavior. It shows that current models lack genuine randomness and the ability to originate ideas: they regurgitate what they have learned without understanding or interpreting it. This has practical implications for AI design, fairness, and robustness. If a learned bias shows up in something as simple as picking a number, how much more bias lurks in deeper problems? Quirks like this one offer a clue to AI’s strengths and weaknesses, helping us identify what these systems can and cannot do.