


CIG 2008: Perth, Australia
- Philip Hingston, Luigi Barone: Proceedings of the 2008 IEEE Symposium on Computational Intelligence and Games, CIG 2008, Perth, Australia, 15-18 December 2008. IEEE 2008, ISBN 978-1-4244-2973-8
Special Session: Coevolution in Games
- Simon M. Lucas: Investigating learning rates for evolution and temporal difference learning. 1-7
- Phillipa M. Avery, Zbigniew Michalewicz: Adapting to human game play. 8-15
- Thomas Thompson, John Levine, Russell Wotherspoon: Evolution of counter-strategies: Application of co-evolution to Texas Hold'em Poker. 16-22
Special Session: Player/Opponent Modeling
- Roderick J. S. Baker, Peter I. Cowling, Thomas W. G. Randall, Ping Jiang: Can opponent models aid poker player evolution? 23-30
- Alan J. Lockett, Risto Miikkulainen: Evolving opponent models for Texas Hold 'Em. 31-38
- Stephen Hladky, Vadim Bulitko: An evaluation of models for predicting opponent positions in first-person shooter video games. 39-46
- Marcel van der Heijden, Sander Bakkes, Pieter Spronck: Dynamic formations in real-time strategy games. 47-54
Special Session: Computational Intelligence in Real Time Strategy Games
- Johan Hagelbäck, Stefan J. Johansson: Dealing with fog of war in a Real Time Strategy game environment. 55-62
- Nicola Beume, Tobias Hein, Boris Naujoks, Nico Piatkowski, Mike Preuss, Simon Wessing: Intelligent anti-grouping in real-time strategy games. 63-70
- Holger Danielsiek, Raphael Stür, Andreas Thom, Nicola Beume, Boris Naujoks, Mike Preuss: Intelligent moving of groups in real-time strategy games. 71-78
- Sander Bakkes, Pieter Spronck, H. Jaap van den Herik: Rapid adaptation of video game AI. 79-86
Special Session: Player Satisfaction
- Jacob Kaae Olesen, Georgios N. Yannakakis, John Hallam: Real-time challenge balance in an RTS game using rtNEAT. 87-94
- Keisuke Osone, Takehisa Onisawa: Friendly partner system of poker game with facial expressions. 95-102
- Georgios N. Yannakakis, John Hallam: Real-time adaptation of augmented-reality games for optimizing player satisfaction. 103-110
- Julian Togelius, Jürgen Schmidhuber: An experiment in automatic game design. 111-118
Simulated Car Racing
- Daniele Loiacono, Julian Togelius, Pier Luca Lanzi, Leonard Kinnaird-Heether, Simon M. Lucas, Matt Simmerson, Diego Perez Liebana, Robert G. Reynolds, Yago Sáez: The WCCI 2008 simulated car racing competition. 119-126
- Ho Duc Thang, Jonathan M. Garibaldi: A fuzzy approach for the 2007 CIG simulated car racing competition. 127-134
- Alexandros Agapitos, Julian Togelius, Simon M. Lucas, Jürgen Schmidhuber, Andreas Konstantinidis: Generating diverse opponents with multiobjective evolution. 135-142
Adversarial and FPS Games
- Michelle McPartland, Marcus Gallagher: Creating a multi-purpose first person shooter bot with reinforcement learning. 143-150
- Matt Parker, Bobby D. Bryant: Visual control in Quake II with a cyclic controller. 151-158
- Thomas Thompson, John Levine: Scaling-up behaviours in EvoTanks: Applying subsumption principles to artificial neural networks. 159-166
- John Reeder, Roberto Miguez, Jessica Sparks, Michael Georgiopoulos, Georgios C. Anagnostopoulos: Interactively evolved modular neural networks for game agent control. 167-174
Monte Carlo Approaches
- Nathalie Chetcuti-Sperandio, Fabien Delorme, Sylvain Lagrue, Denis Stackowiak: Determination and evaluation of efficient strategies for a stop or roll dice game: Heckmeck am Bratwurmeck (Pickomino). 175-182
- Kazutomo Shibahara, Yoshiyuki Kotani: Combining final score with winning percentage by sigmoid function in Monte-Carlo simulations. 183-190
- Shogo Takeuchi, Tomoyuki Kaneko, Kazunori Yamaguchi: Evaluation of Monte Carlo tree search and the application to Go. 191-198
Board Games
- Alan Blair: Learning position evaluation for Go with Internal Symmetry Networks. 199-204
- Yasuhiro Osaki, Kazutomo Shibahara, Yasuhiro Tajima, Yoshiyuki Kotani: An Othello evaluation function based on Temporal Difference Learning using probability of winning. 205-211
- Kyung-Joong Kim, Sung-Bae Cho: Ensemble approaches in evolutionary game strategies: A case study in Othello. 212-219
- Erkin Bahçeci, Risto Miikkulainen: Transfer of evolved pattern-based heuristics in games. 220-227
Video Games
- Nathan Wirth, Marcus Gallagher: An influence map model for playing Ms. Pac-Man. 228-233
- Mark Wittkamp, Luigi Barone, Philip Hingston: Using NEAT for continuous adaptation and teamwork formation in Pacman. 234-242
- Joost Westra, Hado van Hasselt, Virginia Dignum, Frank Dignum: On-line adapting games using agent organizations. 243-250
- Payam Aghaei Pour, Tauseef Gulrez, Omar AlZoubi, Gaetano D. Gargiulo, Rafael A. Calvo: Brain-computer interface: Next generation thought controlled distributed video game development platform. 251-257
Real-world and Serious Games
- Daniel Harabor, Adi Botea: Hierarchical path planning for multi-size agents in heterogeneous environments. 258-265
- Donna Djordjevich, Patrick G. Xavier, Michael L. Bernard, Jonathan Whetzel, Matthew R. Glickman, Stephen J. Verzi: Preparing for the aftermath: Using emotional agents in game-based training for disaster response. 266-275
- Mostafa Sahraei-Ardakani, Mahnaz Roshanaei, Ashkan Rahimi-Kian, Caro Lucas: A study of electricity market dynamics using Invasive Weed Colonization Optimization. 276-282
- Adrian Boeing: Morphology independent dynamic locomotion control for virtual characters. 283-289
Poster Session
- Mostafa Sahraei-Ardakani, Ashkan Rahimi-Kian, Majid Nili Ahmadabadi: Hierarchical Nash-Q learning in continuous games. 290-295
- Yuhki Inoue, Yuji Sato: Applying GA for reward allotment in an event-driven hybrid learning classifier system for soccer video games. 296-303
- Hasan Mujtaba, Abdul Rauf Baig: Survival by continuous learning in a dynamic multiple task environment. 304-309
- Thomas Thompson, Lewis McMillan, John Levine, Alastair Andrew: An evaluation of the benefits of look-ahead in Pac-Man. 310-315
- Garrison W. Greenwood, Richard Tymerski: A game-theoretical approach for designing market trading strategies. 316-322
- Vukosi Ntsakisi Marivate, Tshilidzi Marwala: Social Learning methods in board game agents. 323-328
- Shiven Sharma, Ziad Kobti, Scott D. Goodwin: Learning and knowledge generation in General Games. 329-335
- Diego Perez Liebana, Yago Sáez, Gustavo Recio, Pedro Isasi: Evolving a rule system controller for automatic driving in a car racing competition. 336-342
- Andrew Chiou, Kok Wai Wong: Player Adaptive Entertainment Computing (PAEC): Mechanism to model user satisfaction by using Neuro Linguistic Programming (NLP) techniques. 343-349
- Ian D. Watson, Song Lee, Jonathan Rubin, Stefan Wender: Improving a case-based Texas Hold'em poker bot. 350-356
- Tom Schaul, Jürgen Schmidhuber: A scalable neural network architecture for board games. 357-364
- Luis delaOssa, José A. Gámez, Verónica López: Improvement of a car racing controller by means of Ant Colony Optimization algorithms. 365-371
- Stefan Wender, Ian D. Watson: Using reinforcement learning for city site selection in the turn-based strategy game Civilization IV. 372-377
- Julien Kloetzer, Hiroyuki Iida, Bruno Bouzy: A comparative study of solvers in Amazons endgames. 378-384
- Su-Hyung Jang, Sung-Bae Cho: Evolving neural NPCs with layered influence map in the real-time simulation game 'Conqueror'. 385-388
- Benjamin E. Childs, James H. Brodeur, Levente Kocsis: Transpositions and move groups in Monte Carlo tree search. 389-395
