Mixed reality for crisis response training and analysis

Simulation-based crisis response exercises and training within military and civilian organizations often involve substantial resources, time and personnel. Although often regarded as successful, such exercises also tend to lack clear learning objectives, plans for (longitudinal) assessment of skill building, and explicit use of learning and skill-building principles.

Moreover, even though simulations are considered essential in training, the cost and time of development limit their frequency, which further compounds the lack of systematic observations of the exercises’ effects. To improve on this situation, an Exercise Management & Simulation architecture (ExManSim) is proposed that explicitly addresses the organizational and technical challenges of structured planning, execution and analysis.

Existing systems for simulation-based training focus on what objects and interactions should be present in simulations, rather than on what essential skills should be stimulated and measured. Indeed, ongoing efforts in the framework of Modeling and Simulation as a Service (MSaaS) to shorten the time for simulation development focus on offering objects and their interaction effects as services. In contrast, a central point in the ExManSim architecture is to offer skill-stimulating events as services, so that exercise managers can compose simulations in terms of learning effects, rather than merely in terms of objects and their interactions.

The architecture consists of four main components:

A front-end exercise management system (a web application for easy access on portable devices), where an exercise manager can

  • declare training objectives,
  • select which essential skills to train to reach those objectives,
  • design events that stimulate the development of those essential skills,
  • design corresponding metrics for measuring essential skill performance in the events, and
  • compose vignettes, that is, series of events, from the designed events.

The design of events and their composition into vignettes takes place in a graphical workspace. The remaining components are:

  • A back-end planning assistant based on Answer Set Programming, which suggests and validates events and vignettes, ensuring that the exercise manager designs and composes them in a logically consistent manner, in line with the essential skills to be trained.
  • A back-end stage and content generator that translates answer set programs into code structures interpreted by simulation engines, so that the exercise manager’s design changes are reflected more or less instantly in the simulations.
  • A suite of mixed reality simulation components that implement the designed vignettes, together with the associated measurement of skill performance metrics.
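The chain from objectives through skills, events and metrics to vignettes can be pictured as a small data model. The following is an illustrative sketch only; all names (`Metric`, `Event`, `Vignette`, the example event) are hypothetical and not part of the project's actual API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Metric:
    name: str            # e.g. "time to delegate"
    unit: str            # e.g. "seconds"

@dataclass(frozen=True)
class Event:
    name: str
    stimulates: frozenset        # essential skills the event targets
    metrics: tuple               # how performance in the event is measured

@dataclass
class Vignette:
    name: str
    events: list = field(default_factory=list)

    def skills_covered(self):
        # union of all skills stimulated by the vignette's events
        covered = set()
        for e in self.events:
            covered |= e.stimulates
        return covered

# An exercise manager links skills to stimulating events and metrics,
# then composes events into a vignette (a series of events).
delegation = Event("radio failure", frozenset({"adaptive delegation"}),
                   (Metric("time to delegate", "seconds"),))
vignette = Vignette("comms breakdown", [delegation])
```

The point of the model is that a vignette is validated against the skills it is meant to stimulate, not against the objects it contains.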

Vignette composition

This design promotes the idea of composing exercise and training vignettes from a selection of events that are designed to stimulate essential skills. Vignette composition can range from manual through semi-automatic to automatic. The front-end also allows exercise management to visualize information for use in before-, during- and after-action reviews, in which exercise managers may assess and evaluate a training session at critical points in time, validating the consistency and coherency between objectives, skills, events and metrics based on actual trainee performance. Pretest and posttest measurements allow essential skill performance to be assessed in controlled simulations before and after training, quantifying the effect of the training and exercises.
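At the automatic end of the composition range, selecting events so that every targeted skill is stimulated is essentially a covering problem. The sketch below uses a simple greedy set-cover heuristic purely for illustration; the actual ExManSim planning assistant is based on Answer Set Programming and additionally validates logical consistency, and the event catalogue here is invented.

```python
def compose_vignette(target_skills, catalogue):
    """Greedily pick events until every target skill is stimulated.

    catalogue maps event name -> set of skills the event stimulates.
    """
    remaining, chosen = set(target_skills), []
    while remaining:
        # pick the event covering the most still-uncovered skills
        best = max(catalogue, key=lambda e: len(catalogue[e] & remaining))
        if not catalogue[best] & remaining:
            raise ValueError(f"no event stimulates: {remaining}")
        chosen.append(best)
        remaining -= catalogue[best]
    return chosen

# Hypothetical event catalogue for a crisis response exercise
catalogue = {
    "radio failure":  {"adaptive delegation", "improvisation"},
    "casualty surge": {"triage", "prioritization"},
    "road blockage":  {"improvisation", "re-planning"},
}
events = compose_vignette({"triage", "adaptive delegation"}, catalogue)
```

A logic-based planner can go further than this heuristic: it can also reject compositions that are internally inconsistent (for instance, events whose preconditions contradict each other), which is the role of the Answer Set Programming back-end.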

The purpose of these functions is to support deliberate practice: a framework that addresses the shortcomings of “learning on the job” through a strong focus on difficult aspects and immediate, tailored feedback (by a coach or computer-adaptive system), followed by tailored re-trials integrated into the larger sequence of tasks. The mixed reality simulations compose an appropriate mix of real and synthetic actors, objects and events into a shared reality. This gives the opportunity to use existing systems and equipment together with virtual elements in an optimized learning arena. This arena can be distributed, so that trainees can be located at their normal work places. The collected metrics and other data will be utilized to analyse learning performance and, once in operational use, to improve practices and procedures.
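One way to turn the collected pretest and posttest metrics into a learning-performance figure is a paired effect size. The computation below is an assumption about the analysis, not the project's published method: Cohen's d for paired samples, i.e. mean gain divided by the standard deviation of the gains.

```python
import statistics

def paired_cohens_d(pre, post):
    """Effect size for paired pretest/posttest skill scores."""
    gains = [b - a for a, b in zip(pre, post)]
    return statistics.mean(gains) / statistics.stdev(gains)

# Invented example scores from controlled pre/post simulations
pre  = [42.0, 55.0, 48.0, 60.0, 50.0]   # pretest skill scores
post = [58.0, 63.0, 61.0, 72.0, 64.0]   # posttest skill scores
d = paired_cohens_d(pre, post)           # positive d -> skill improved
```

Measured this way, the same controlled simulation serves as both the pretest and posttest instrument, so the effect reflects the training between them rather than differences in scenario difficulty.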

Publications:

  • Stolpe, A., Rummelhoff, I., & Hannay, J. E. (2023). A logic-based event controller for means-end reasoning in simulation environments. SIMULATION, 99(8), 831–858.
  • Stolpe, A., & Hannay, J. E. (2022). Quantifying means-end reasoning skills in simulation-based training: a logic-based approach. SIMULATION, 98(10), 933–957.
  • Rouwendal, D. K., Stolpe, A., & Hannay, J. E. (2021). Toward an AI-based external scenario event controller for crisis response simulations. In Proceedings of the International Conference on Information Systems for Crisis Response and Management (ISCRAM).
  • Stolpe, A., & Hannay, J. (2021). On the adaptive delegation and sequencing of actions. In A. Adrot, R. Grace, K. Moore, & C. W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 28–39). Blacksburg, VA: Virginia Tech.
  • Rouwendal van Schijndel, D. K., Stolpe, A., & Hannay, J. E. (2020). Using block-based programming and sunburst branching to plan and generate crisis training simulations. In HCI International 2020 – Posters: 22nd International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part III. Springer.
  • Rouwendal van Schijndel, D. K., Hannay, J. E., & Stolpe, A. (2020). Simulation vignette generation from answer set specifications. In Proceedings of ISCRAM 2020.
  • Hannay, J. E., & Kikke, Y. (2019). Structured crisis training with mixed reality simulations. In Proceedings of ISCRAM 2019.

Title: MixStrEx: Mixed Reality for Structured Exercises

Partners: Fynd Reality AS, Gexcon AS, The University of Oslo, Department of Technology Systems

Period: 2018 – 2022

Funding: Research Council of Norway