So I was reading this thread
on the Forge, where people are discussing the merits of mathematical analysis as a way to supplement, or partially obviate, playtesting. This got me thinking about something I'm reading up on at the moment: neural networks.*
It seems to me that you could set up a structure (e.g., I want two inputs going into every conflict, with one of them, stat, having a bigger impact than the other, skill), and then plug in a ton of cases with desired outcomes.
That is, feed in a number of conflicts of varying obstacles, and in each case say what you want the likelihood of success to be (a probability between 0 and 1). As I understand it, the network should then adjust the weights on the original inputs until it produces the outcomes you want from the conditions you force on it.
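To make that concrete, here's a minimal sketch of the idea: a single sigmoid neuron (about the simplest possible artificial neural network) trained by gradient descent to map (stat, skill) pairs to a target chance of success. All the numbers here, the training cases and their target probabilities, are invented for illustration, not taken from any real game; the targets deliberately reward stat more heavily than skill, and the learned weights end up reflecting that.

```python
import math

def sigmoid(x):
    """Squash any real number into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Invented training cases: (stat, skill) -> desired chance of success.
# Raising stat moves the target much more than raising skill does.
cases = [
    ((1, 1), 0.30), ((3, 1), 0.60), ((5, 1), 0.85),
    ((1, 3), 0.40), ((1, 5), 0.50), ((3, 3), 0.70),
]

w_stat, w_skill, bias = 0.0, 0.0, 0.0
lr = 0.1  # learning rate

# Simple stochastic gradient descent on the sigmoid neuron.
for _ in range(5000):
    for (stat, skill), target in cases:
        out = sigmoid(w_stat * stat + w_skill * skill + bias)
        err = out - target  # gradient of the loss w.r.t. the neuron's input
        w_stat  -= lr * err * stat
        w_skill -= lr * err * skill
        bias    -= lr * err

# After training, w_stat > w_skill: the network has "discovered" that
# stat should contribute more, matching the outcomes we forced on it.
print(f"stat weight  = {w_stat:.2f}")
print(f"skill weight = {w_skill:.2f}")
```

A real design problem would have more inputs and probably a hidden layer, but even this toy version shows the shape of the trick: you dictate the outcomes, and the fitted weights tell you how much each input "should" matter.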
This might be enlightening. For example, it might suggest that the common value that the player and their adversary get to add to their conflict score (e.g., the number of dice you add to your stat+skill) needs to contribute between 30% and 50% of the player's total: any more, and the variability makes running multiple conflicts in a row too risky.
Or I might be blowing smoke from my ass. I wonder if anyone savvier has ever thought to go down this road? Just as a way to tweak values, not as a replacement for the play experience, I hasten to add.
*I'm referring here to what my primer, Callan's The Essence of Neural Networks, calls artificial neural networks; i.e., they're not attempting to model biological reality, just to get a certain kind of output from certain kinds of input.