Predicting Folds in Poker Using Action Unit Detectors and Decision Trees
by Doratha Vinkemeier, Michel Valstar, Jonathan Gratch
Abstract:
Predicting how a person will respond can be very useful, for instance when designing a strategy for negotiations. We investigate whether it is possible for machine learning and computer vision techniques to recognize a person's intentions and predict their actions based on their visually expressive behaviour; in this paper we focus on the face. We have chosen as our setting pairs of humans playing a simplified version of poker, where the players are behaving naturally and spontaneously, albeit mediated through a computer connection. In particular, we ask if we can automatically predict whether a player is going to fold or not. We also try to answer the question of at what time point the signal for predicting if a player will fold is strongest. We use state-of-the-art FACS Action Unit detectors to automatically annotate the players' facial expressions, which have been recorded on video. In addition, we use timestamps of when the player received their card and when they placed their bets, as well as the amounts they bet. Thus, the system is fully automated. We are able to predict whether a person will fold or not significantly better than chance based solely on their expressive behaviour starting three seconds before they fold.
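The pipeline the abstract describes (Action Unit features plus betting-event features fed to a decision tree) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the feature layout, window choice, and synthetic data are all assumptions made for the example.

```python
# Hypothetical sketch of the fold-prediction setup: a decision tree
# trained on per-hand features. AU intensities and betting features
# here are randomly generated stand-ins, NOT the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_hands = 200

# Each row: mean AU detector outputs over a short window before the
# decision (e.g. AU1, AU2, AU4, AU12, AU15), plus two betting-derived
# features (time from card receipt to bet, bet amount).
au_features = rng.random((n_hands, 5))
bet_features = rng.random((n_hands, 2))
X = np.hstack([au_features, bet_features])

# Toy labels: folds correlate with one AU channel (purely synthetic,
# so the tree has a learnable signal to find).
y = (au_features[:, 3] + 0.1 * rng.standard_normal(n_hands) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

A shallow `max_depth` keeps the tree interpretable, which matters if one wants to inspect which AUs carry the fold signal; the paper's actual hyperparameters are not given here.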
Reference:
Predicting Folds in Poker Using Action Unit Detectors and Decision Trees (Doratha Vinkemeier, Michel Valstar, Jonathan Gratch), In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), IEEE, 2018.
Bibtex Entry:
@inproceedings{vinkemeier_predicting_2018,
	address = {Xi'an, China},
	title = {Predicting {Folds} in {Poker} {Using} {Action} {Unit} {Detectors} and {Decision} {Trees}},
	isbn = {978-1-5386-2335-0},
	url = {https://ieeexplore.ieee.org/document/8373874/},
	doi = {10.1109/FG.2018.00081},
	abstract = {Predicting how a person will respond can be very useful, for instance when designing a strategy for negotiations. We investigate whether it is possible for machine learning and computer vision techniques to recognize a person's intentions and predict their actions based on their visually expressive behaviour, where in this paper we focus on the face. We have chosen as our setting pairs of humans playing a simplified version of poker, where the players are behaving naturally and spontaneously, albeit mediated through a computer connection. In particular, we ask if we can automatically predict whether a player is going to fold or not. We also try to answer the question of at what time point the signal for predicting if a player will fold is strongest. We use state-of-the-art FACS Action Unit detectors to automatically annotate the players' facial expressions, which have been recorded on video. In addition, we use timestamps of when the player received their card and when they placed their bets, as well as the amounts they bet. Thus, the system is fully automated. We are able to predict whether a person will fold or not significantly better than chance based solely on their expressive behaviour starting three seconds before they fold.},
	booktitle = {Proceedings of the 2018 13th {IEEE} {International} {Conference} on {Automatic} {Face} \& {Gesture} {Recognition} ({FG} 2018)},
	publisher = {IEEE},
	author = {Vinkemeier, Doratha and Valstar, Michel and Gratch, Jonathan},
	month = may,
	year = {2018},
	keywords = {UARC, Virtual Humans},
	pages = {504--511}
}