In pre-launch tests, players completed Turing Adventure in an average of two plays – which is what I was looking for.
Now, I am seeing some people fail to complete the game in three plays, while others finish it in just ten lines! This high variance seems hard to balance.
However, comparing this data with that of my very early prototype, where no player was able to complete the game in fewer than three plays, I believe this is how it is going to work: as the robots' knowledge database grows thanks to user input, the puzzles will become more and more apparent to first-time players. The puzzle in Turing Adventure is actually quite simple, so if the robots talked almost as if they were sentient, most people should be able to complete the game in a single play. This leads me to think that, as the robots learn to speak the way I want them to, I should rework the puzzles to make them harder.
Actually, that would be great news, since the game should revolve around the puzzles. The AI behind the NPCs should just be a mechanism to further immerse the player in the adventure, not an obstacle in itself!
For a future full-length game, there would be another factor to consider. As players progress through the game, they will learn what to expect from the chatbots, so talking their way out of the puzzles will become a more straightforward task. Therefore, the chatbots in those later puzzles should not need to be as polished as the robots from the early game. We’ll see.