We can confirm that there is ZERO artificial bias towards the AI player (i.e. the AI does not cheat).
In fact, if you are on NORMAL Difficulty, the AI has a "combo-breaker" turned on where it cheats in YOUR favor by avoiding lucky drops of gems!
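The actual "combo-breaker" logic isn't public, but the idea described above can be sketched: when refilling the board, occasionally re-roll a candidate gem if it would hand the AI a lucky drop. Everything here (`refill_gem`, `would_be_lucky`, the re-roll count) is a hypothetical illustration, not the real Gems of War code:

```python
import random

def refill_gem(gem_types, would_be_lucky, max_rerolls=3, rng=random):
    """Pick a replacement gem for an empty cell, re-rolling a few times
    if the candidate would give the AI a lucky drop.

    Purely illustrative: `would_be_lucky` stands in for whatever board
    analysis decides a drop is 'lucky' for the AI, and the re-roll cap
    keeps the distribution only mildly biased rather than deterministic.
    """
    gem = rng.choice(gem_types)
    for _ in range(max_rerolls):
        if not would_be_lucky(gem):
            break
        gem = rng.choice(gem_types)  # re-roll: avoid the lucky candidate
    return gem
```

Note the cap on re-rolls: the bias only nudges the odds, so the AI can still get lucky, just less often than pure chance would allow.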
The article below is reproduced from a response on our forums by the game's creative director, Sirrian (a.k.a. Steve Fawkner, author of the original Puzzle Quest game).
We had similar questions about our previous games, so we made all of our AI/gem-matching code in Puzzle Quest 2 & Galactrix publicly available, and anybody who looked at it could confirm that no cheating was taking place. We use mostly the same type of code in Gems of War.
There is a phenomenon called "Recall Bias" where our brains are hardwired to remember negative experiences more vividly.
The next section is for anyone who’s interested in game design and all this recall bias stuff.
If not, TL;DR: we ran some tests, and recall bias is surprisingly strong.
Back when we were working on the original Puzzle Quest (pre-release) we kept hearing from testers about how the AI cheated, or how the suggested matches had been engineered to be the worst possible match. We even felt that ourselves sometimes, though, having written the AI in 2 hours, I actually knew that nothing sinister was taking place.
So we decided to run some usability tests geared towards learning more about players’ perceptions of luck. We had the game log “lucky drops” when they happened, and register whether they were for the human or the AI player. We also created a version that cheated in the human’s favor, so we could test some extreme results. A lucky drop was defined as one of 3 things:
- a cascade that was unforeseeable because at least one of the gems involved dropped in, OR
- a skull that dropped in to a perfect position for a match, OR
- any gem that dropped in to set up a 4/5 of a kind for the player.
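The three criteria above amount to a simple classifier over drop events. This is a hypothetical sketch of how such logging might look; the names (`DropEvent`, `classify_drop`, `tally`) are illustrative and not from the actual Puzzle Quest or Gems of War codebase:

```python
from dataclasses import dataclass
from enum import Enum, auto

class DropKind(Enum):
    NONE = auto()
    UNFORESEEABLE_CASCADE = auto()  # cascade involving at least one newly dropped gem
    SKULL_INTO_MATCH = auto()       # skull landed straight into a match
    SETS_UP_BIG_MATCH = auto()      # new gem set up a 4- or 5-of-a-kind

@dataclass
class DropEvent:
    caused_cascade: bool         # did the falling gems trigger a cascade?
    cascade_used_new_gem: bool   # was at least one cascaded gem newly dropped in?
    skull_completed_match: bool  # did a skull land directly into a match?
    sets_up_four_or_five: bool   # does a new gem set up a 4/5-of-a-kind?

def classify_drop(e: DropEvent) -> DropKind:
    """Classify a drop per the three 'lucky drop' criteria in the text."""
    if e.caused_cascade and e.cascade_used_new_gem:
        return DropKind.UNFORESEEABLE_CASCADE
    if e.skull_completed_match:
        return DropKind.SKULL_INTO_MATCH
    if e.sets_up_four_or_five:
        return DropKind.SETS_UP_BIG_MATCH
    return DropKind.NONE

def tally(events):
    """Count lucky drops per side, given (player, DropEvent) pairs."""
    counts = {"human": 0, "ai": 0}
    for player, e in events:
        if classify_drop(e) is not DropKind.NONE:
            counts[player] += 1
    return counts
```

Logging per-side counts like this is what lets the measured luck be compared against players' later self-reported counts.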
I don’t have the actual numbers on hand, but here were a few findings from the first round of testing:
Players were asked to rate how lucky the AI player was on a scale from 1-9, where 1 = Player way luckier than the AI, 5 = Equal Luck, and 9 = AI way luckier than the player.
- If the AI had more than 20% extra lucky drops: 9 = AI extremely lucky
- If the player & AI had roughly equal lucky drops (within 10-20% of each other): 7 = AI very lucky
- If the human had about twice as many lucky drops as the AI: 5 = Equal Luck
- If the human had more than twice as many lucky drops: ratings varied, but we finally saw numbers under 5
So unless a human player is TWICE AS LUCKY as an AI, they perceive it’s cheating. I found that astounding. But here’s where it gets interesting. This isn’t taking a shot at you, wasted; I’m just sharing data from our testing that shows recall bias is such a crazy trick our brains play on us that its effects can be surprising.
So… now, after the first round of testing, we asked the players in the test to do it a second time and to count how many times they thought they were lucky compared to the AI. We wanted them to log their results, to see if they could arrive at a position of understanding that the AI didn’t cheat.
Once again I don’t have the numbers, and I’m working from memory, so this is an approximate summary:
Results for 10 players:
- 1 tester accurately recorded the lucky drops (within about 10% of what was logged)
- 4 players accurately recorded the AI lucky drops (within 10%), but under-counted human lucky drops by about 10%-80%
- 5 players over-counted the AI lucky drops (by 10%-50%), and under-counted human lucky drops by about 10%-100%
It’s a small sample size of 10 people, but the results are quite interesting, I think.