Kindred Futures Roundtable – can artificial intelligence help spot problem gambling earlier, and with more accuracy?
Kindred Group has always prided itself on its leadership in the responsible gaming space. From offering customers new tools such as loss limits and more flexible time-outs, to developing the PS-EDS detection system, we have always aimed to be ahead of regulatory demands, and have been recognised as such through numerous industry awards.
However, this high standard is not a fixed target. Kindred Futures, in conjunction with Karmarama and The Friday Club, hosted a roundtable discussion in early March to explore the use of artificial intelligence and machine learning to recognise patterns in player behaviour, and from these patterns to spot developing problems and intervene earlier and more effectively.
A personal tale of addiction set the scene and showed the devastating impact this problem can have, before members of Kindred, leading academics and industry figures shared their expertise on the current situation and trends. A discussion followed, based on these presentations. The aim of the day was not to design a new machine learning capability, but to explore what would be possible, what we would be looking to achieve, and what could potentially prevent this from happening.
Machine learning relies on enough historical data to train algorithms on what each behaviour represents. In problem gambling, therefore, the potential value lies in modelling behaviours, and changes in behaviour over time, to assess whether a player is developing a problem. Past research in this space has identified factors such as: the amount a player stakes, the time they play for, how they deposit money, whether they withdraw money, what time of day they play, and what types of games they play. It is on these data points that our current responsible gaming systems are built. Artificial intelligence comes in by improving the models, scaling capacity, and noticing more nuanced connections; ultimately, a personalised ‘risk score’ could be developed for every player and tracked over time.
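As a purely illustrative sketch of the idea, the behavioural features above could be combined into a single per-player risk score. The feature names and weights below are hypothetical and hand-picked for the example; they are not Kindred's actual PS-EDS model, where a trained classifier would replace the weighted sum.

```python
# Illustrative only: a toy "risk score" built from the kinds of behavioural
# features the article mentions. Names and weights are hypothetical.

FEATURES = ["stake_per_day", "hours_played", "deposit_frequency", "night_play_ratio"]

# Hypothetical weights a trained model might learn (higher = more risk).
WEIGHTS = {
    "stake_per_day": 0.4,
    "hours_played": 0.3,
    "deposit_frequency": 0.2,
    "night_play_ratio": 0.1,
}

def risk_score(player):
    """Combine normalised behavioural features (each in [0, 1]) into a
    single score in [0, 1] via a hand-weighted sum."""
    return sum(WEIGHTS[f] * player[f] for f in FEATURES)

# Two invented player profiles for comparison.
casual = {"stake_per_day": 0.1, "hours_played": 0.1,
          "deposit_frequency": 0.05, "night_play_ratio": 0.0}
at_risk = {"stake_per_day": 0.8, "hours_played": 0.9,
           "deposit_frequency": 0.7, "night_play_ratio": 0.6}

print(round(risk_score(casual), 3))   # low score
print(round(risk_score(at_risk), 3))  # high score
```

Tracking such a score over time, rather than reading it at a single point, is what would let a model notice a player drifting towards riskier behaviour.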
However, the discussion showed that the situation is not as simple as the above paragraph suggests. Every player has a different profile, and addictive behaviour for one person is normal for another. Likewise, the way we gamble today is very different from only five years ago, so models built on historical data could become outdated if not continually developed. There is also a lack of ‘ground truth’ data on who is a problem gambler. Self-exclusion is an unreliable proxy, and while we have data on those we have reached out to, or who have reached out to us, there are others whom previous models did not catch and who need to be included in the training data. Finally, other problems arise, such as: a lack of data on social gamblers whose history is limited to once-a-month interactions; players moving between operators, so that no one operator can spot the problem; and changes in life circumstances that could explain a shift in behaviour.
The group felt, though, that these issues were not insurmountable. It may require a regulatory mandate, but could cross-operator sharing of anonymised data create better models and enable more accurate tracking of players? Likewise, changes in life circumstances may be visible through social media – Facebook have recently announced they will monitor their users’ activity to spot suicidal tendencies. Would they be interested in helping to tackle problem gambling?
Another issue discussed was that of false positives and false negatives: regardless of the complexity of the machine learning and the validity of the score it provides, a model will always miss some problem players (false negatives) and, conversely, flag some who do not have a problem (false positives). Some operators may shy away from intervening because of potential false positives, but the group felt this should not hold us back; starting an intervention programme with one or two individuals who do not have an issue is far better than being reluctant to intervene at all. Even presenting information to players, such as how long they are playing for and how this compares to the population at large, can be a softer intervention, yet enough to nudge them back towards more controlled behaviour. Human judgement is still needed to assess each case, but using artificial intelligence to provide deeper insights is an opportunity we should take.
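To make the trade-off concrete, here is a toy illustration (the scores and labels are invented, not real player data) of how moving the intervention threshold on a risk score shifts the balance between missed problem players and wrongly flagged ones:

```python
# Toy illustration: raising the intervention threshold reduces false
# positives (players wrongly flagged) but increases false negatives
# (problem players missed), and vice versa.

def confusion(scores, labels, threshold):
    """Count true/false positives and negatives for one threshold."""
    tp = fp = tn = fn = 0
    for score, is_problem in zip(scores, labels):
        flagged = score >= threshold
        if flagged and is_problem:
            tp += 1
        elif flagged and not is_problem:
            fp += 1
        elif not flagged and is_problem:
            fn += 1
        else:
            tn += 1
    return tp, fp, tn, fn

# Invented risk scores and true labels (True = problem gambler).
scores = [0.1, 0.2, 0.35, 0.5, 0.6, 0.75, 0.9]
labels = [False, False, False, True, False, True, True]

for t in (0.3, 0.55, 0.8):
    tp, fp, tn, fn = confusion(scores, labels, t)
    print(f"threshold={t}: missed problem players (FN)={fn}, "
          f"wrongly flagged (FP)={fp}")
```

No threshold eliminates both error types at once, which is why the group's view was to accept some false positives rather than avoid intervening at all.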
The afternoon was a first discussion of this opportunity and created lively debate over what problem gambling represents, and what types of behaviour we should be looking to monitor through artificial intelligence. Following on from it, Kindred will certainly explore further the nascent use of artificial intelligence for problem gambling, as we look to continually build trust with our players and help limit the effects of this small but devastating problem.