Fig. 2: Chess Robot System Algorithm

Source publication
Article
Full-text available
This paper describes a chess robot system that allows remote users to play chess, using a six-axis anthropomorphic robot to move the chess pieces on the chessboard upon receiving commands from the player and from the application's chess engine. This experience allowed the application of the concept of 'learning by doing', involving the integration of multi-disciplinar...

Context in source publication

Context 1
... the virtual chessboard, the player drags and drops a white chess piece. The command associated with this move is delivered to the chess logic control module, which forwards it to the chess engine for evaluation; if the move is valid, the chess logic control module calculates the robot parameters for the move, as illustrated in the flowchart of Figure 2. ...
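To make this flow concrete, the sketch below mirrors the flowchart's logic in Python. It is an illustration only, assuming the python-chess library as a stand-in for the application's chess engine and a hypothetical square_to_robot_pose() helper in place of the paper's robot-parameter calculation; neither is taken from the paper.

    # Minimal sketch of the move-handling flow in Figure 2 (assumptions:
    # python-chess stands in for the application's chess engine, and
    # square_to_robot_pose() is a hypothetical helper that maps a square
    # name to robot parameters).
    import chess

    board = chess.Board()

    def square_to_robot_pose(square_name):
        """Hypothetical mapping from a square name (e.g. 'e2') to a Cartesian target."""
        file_idx = ord(square_name[0]) - ord("a")
        rank_idx = int(square_name[1]) - 1
        return {"x_mm": file_idx * 40.0, "y_mm": rank_idx * 40.0}  # placeholder geometry

    def handle_player_move(uci_move):
        """Validate a dragged-and-dropped move; if legal, compute pick/place targets."""
        move = chess.Move.from_uci(uci_move)
        if move not in board.legal_moves:       # engine rejects the move
            return None                         # the player must move again
        board.push(move)                        # update the logical game state
        return {
            "pick": square_to_robot_pose(chess.square_name(move.from_square)),
            "place": square_to_robot_pose(chess.square_name(move.to_square)),
        }

    print(handle_player_move("e2e4"))

Returning None for an illegal move corresponds to the flowchart branch in which the player is asked to move again; a valid move yields pick and place targets for the robot.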

Citations

... The literature documents several similar initiatives, and many studies use image processing techniques such as edge detection, the Hough line transform, and colour conversion methods to identify the chessboard's lattice [5], [6]. However, these methods often fail and lack any AI component, leading to inaccurate results; they may even produce an output when no chessboard is visible in the given frame. ...
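The classical recipe criticised here can be reproduced in a few lines of OpenCV. The sketch below (with a placeholder image path) also shows why it is brittle: HoughLines reports whichever lines clear the vote threshold, whether or not a chessboard is actually in the frame.

    # Sketch of the classical lattice-detection recipe (edge detection +
    # Hough line transform); "board.jpg" is a placeholder path.
    import cv2
    import numpy as np

    img = cv2.imread("board.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # colour conversion
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)    # edge detection
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)  # Hough line transform

    # HoughLines returns None only when no line clears the vote threshold;
    # otherwise it reports lines even if no chessboard is present, which is
    # the failure mode criticised above.
    if lines is not None:
        for rho, theta in lines[:, 0]:
            a, b = np.cos(theta), np.sin(theta)
            x0, y0 = a * rho, b * rho
            pt1 = (int(x0 + 2000 * (-b)), int(y0 + 2000 * a))
            pt2 = (int(x0 - 2000 * (-b)), int(y0 - 2000 * a))
            cv2.line(img, pt1, pt2, (0, 0, 255), 1)
    cv2.imwrite("lattice_overlay.jpg", img)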
... Initial research into chess recognition emerged from the development of chess robots using a camera to detect the human opponent's moves. Such robots typically implement a three-way classification scheme that determines each square's occupancy and (if occupied) the piece's colour [2][3][4][5][6][7]. Moreover, several techniques for recording chess moves from video footage employ the same strategy [8][9][10]. ...
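As an illustration of that three-way scheme (empty, white piece, or black piece), and not taken from any of the cited papers, a per-square classifier can be as simple as comparing each warped square against an empty-board reference frame and then thresholding the centre intensity:

    # Illustrative only: the three-way per-square scheme reduced to a simple
    # intensity heuristic. square_img and empty_ref are grayscale patches of
    # the same square, taken from the current frame and from a reference
    # frame of the empty board.
    import numpy as np

    def classify_square(square_img, empty_ref, diff_thresh=25.0):
        diff = np.abs(square_img.astype(float) - empty_ref.astype(float)).mean()
        if diff < diff_thresh:
            return "empty"          # square still looks like the empty reference
        h, w = square_img.shape
        centre = square_img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        return "white" if centre.mean() > 127 else "black"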
... Czyzewski et al. [14] achieve an accuracy of 95% on chessboard localisation from non-vertical camera angles by designing an iterative algorithm that generates heatmaps representing the likelihood of each pixel being part of the chessboard. They then employ a CNN to refine the corner points that were found using the heatmap, outperforming the results obtained by Gonçalves et al. [7]. Furthermore, they compare a CNN-based piece classification algorithm to the SVM-based solution proposed by Ding [11] and find no notable amelioration, but manage to obtain improvements by reasoning about likely and legal chess positions. ...
... Once the four corner points have been established, finding the squares is trivial for pictures captured in bird's-eye view, and only a matter of a simple perspective transformation in the case of other camera positions. Some of the aforementioned systems circumvent this problem entirely by prompting the user to interactively select the four corner points [5,7,8,12], but ideally a chess recognition system should be able to parse the position on the board without human intervention. Most approaches for automatic chess grid detection utilise either the Harris corner detector [3,10] or a form of line detector based on the Hough transform [4,6,12,17-20], although other techniques such as template matching [21] and flood fill [9] have been explored. ...
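A minimal example of that perspective-transformation step is given below; the image path and corner coordinates are placeholders standing in for detected or user-selected points.

    # Sketch of the "simple perspective transformation" step: warp the board,
    # given its four corner points, to a canonical top-down grid.
    import cv2
    import numpy as np

    img = cv2.imread("board.jpg")                    # placeholder path
    corners = np.float32([[112, 87], [905, 95],      # top-left, top-right,
                          [980, 860], [60, 840]])    # bottom-right, bottom-left
    side = 800                                       # output board size in pixels
    target = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

    H = cv2.getPerspectiveTransform(corners, target)   # 3x3 homography
    warped = cv2.warpPerspective(img, H, (side, side))

    s = side // 8                      # every square is now an s x s patch
    top_left_square = warped[0:s, 0:s]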
Article
Full-text available
Identifying the configuration of chess pieces from an image of a chessboard is a problem in computer vision that has not yet been solved accurately. However, it is important for helping amateur chess players improve their games by facilitating automatic computer analysis without the overhead of manually entering the pieces. Current approaches are limited by the lack of large datasets and are not designed to adapt to unseen chess sets. This paper puts forth a new dataset synthesised from a 3D model that is an order of magnitude larger than existing ones. Trained on this dataset, a novel end-to-end chess recognition system is presented that combines traditional computer vision techniques with deep learning. It localises the chessboard using a RANSAC-based algorithm that computes a projective transformation of the board onto a regular grid. Using two convolutional neural networks, it then predicts an occupancy mask for the squares in the warped image and finally classifies the pieces. The described system achieves an error rate of 0.23% per square on the test set, 28 times better than the current state of the art. Further, a few-shot transfer learning approach is developed that is able to adapt the inference system to a previously unseen chess set using just two photos of the starting position, obtaining a per-square accuracy of 99.83% on images of that new chess set. The code, dataset, and trained models are made available online.
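The two-stage inference described in this abstract can be summarised structurally as follows; occupancy_net and piece_net are hypothetical stand-ins for the paper's trained CNNs, and warped_board is assumed to come from the RANSAC-based localisation and projective transform.

    # Structural sketch (assumptions only) of the two-stage pipeline:
    # per-square occupancy prediction followed by piece classification on
    # occupied squares.
    def recognise_position(warped_board, occupancy_net, piece_net):
        s = warped_board.shape[0] // 8
        position = {}
        for rank in range(8):
            for file in range(8):
                patch = warped_board[rank * s:(rank + 1) * s,
                                     file * s:(file + 1) * s]
                if occupancy_net(patch) < 0.5:             # first CNN: occupancy
                    continue
                position[(rank, file)] = piece_net(patch)  # second CNN: piece class
        return position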
Preprint
Full-text available
Identifying the configuration of chess pieces from an image of a chessboard is a problem in computer vision that has not yet been solved accurately. However, it is important for helping amateur chess players improve their games by facilitating automatic computer analysis without the overhead of manually entering the pieces. Current approaches are limited by the lack of large datasets and are not designed to adapt to unseen chess sets. This paper puts forth a new dataset synthesised from a 3D model that is an order of magnitude larger than existing ones. Trained on this dataset, a novel end-to-end chess recognition system is presented that combines traditional computer vision techniques with deep learning. It localises the chessboard using a RANSAC-based algorithm that computes a projective transformation of the board onto a regular grid. Using two convolutional neural networks, it then predicts an occupancy mask for the squares in the warped image and finally classifies the pieces. The described system achieves an error rate of 0.23% per square on the test set, 28 times better than the current state of the art. Further, a few-shot transfer learning approach is developed that is able to adapt the inference system to a previously unseen chess set using just two photos of the starting position, obtaining a per-square accuracy of 99.83% on images of that new chess set. The dataset is released publicly; code and trained models are available at https://github.com/georgw777/chesscog.
... The robotic ability to manipulate board-game pieces by dead reckoning is straightforward and commonplace, and there have been attempts to integrate such ability with automated board-game play based on visual perception of game states and piece positions [21]- [23]. However, we are unaware of any prior work that reports such a combined sensorimotor game-play system that achieves the level of robustness needed for fully-automated play of a training set that is sufficiently large to support game-rule learning. ...
Conference Paper
Full-text available
We present an integrated vision and robotic system that plays, and learns to play, simple physically-instantiated board games that are variants of TIC TAC TOE and HEXA-PAWN. We employ novel custom vision and robotic hardware designed specifically for this learning task. The game rules can be parametrically specified. Two independent computational agents alternate playing the two opponents with the shared vision and robotic hardware, using pre-specified rule sets. A third independent computational agent, sharing the same hardware, learns the game rules solely by observing the physical play, without access to the pre-specified rule set, using inductive logic programming with minimal background knowledge possessed by human children. The vision component of our integrated system reliably detects the position of the board in the image and reconstructs the game state after every move, from a single image. The robotic component reliably moves pieces both between board positions and to and from off-board positions as needed by an arbitrary parametrically-specified legal-move generator. Thus the rules of games learned solely by observing physical play can drive further physical play. We demonstrate our system learning to play six different games.