The following are VR design principles Dr. Johnson-Glenberg has gathered for a manuscript that is currently under review.
Author: Dr. Mina C. Johnson-Glenberg, 2018
The full article focuses on the two profound affordances associated with VR for educational purposes: 1) the sensation of presence, and 2) the embodied affordances of gesture in a three-dimensional learning space. VR headsets with hand controls allow for creative, kinesthetic manipulation of content; these movements and gestures have been shown to have positive effects on learning. Below are 18 principles, beginning with general ones and ending with gesture-specific guidelines.
Assume every learner is a VR newbie
Not everyone will know the controls or know to look around. Users are now in a sphere. They should be induced to turn their heads… but only so far. Do not place important UI components far from each other. Do not place actionable content too far apart. E.g., do not capture butterfly #1 at 10° and force them to capture butterfly #2 at 190°. Be gentle with users’ proprioceptive systems (where the body is in space). If the content includes varying levels of difficulty, allow the user to choose the level at the start menu. This also gives a sense of agency.
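The butterfly example can be stated as a simple placement check. The sketch below is illustrative only: the 90° comfort threshold and all function names are assumptions, not values from this article.

```python
# Hypothetical helper: smallest angular difference between two objects
# placed around the user, given their yaw angles in degrees.
def yaw_separation(a_deg: float, b_deg: float) -> float:
    """Smallest angular difference between two yaw placements (0-180)."""
    diff = abs(a_deg - b_deg) % 360
    return min(diff, 360 - diff)

# Assumed comfort limit: keep successive interactive targets within a
# modest head turn of one another (the 90 degrees is illustrative).
MAX_COMFORTABLE_TURN = 90.0

def placement_ok(a_deg: float, b_deg: float) -> bool:
    """True if two actionable objects are within a comfortable head turn."""
    return yaw_separation(a_deg, b_deg) <= MAX_COMFORTABLE_TURN

print(placement_ok(10, 190))  # butterfly #1 at 10°, #2 at 190° -> False
print(placement_ok(10, 60))   # a gentler 50° turn -> True
```

A design tool could run a check like this over all pairs of successive interactive targets and flag placements that force a large turn.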
Introduce User Interface (UI) components judiciously, fewer is better
When users build their first fireworks in our chemistry lesson, they can only make one-stage rockets. The multi-chambered cylinders are not available in the interface until users show mastery of the simpler content.
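The fireworks example amounts to mastery-gated unlocking of UI components. A minimal sketch, assuming a 0.8 mastery threshold and hypothetical component names (none of these are from the lesson itself):

```python
# Assumed mastery threshold; the real lesson's criterion is not specified.
MASTERY_THRESHOLD = 0.8

def available_components(one_stage_score: float) -> list[str]:
    """Return the UI components unlocked for the learner's current mastery."""
    components = ["one_stage_rocket"]
    # The richer component stays hidden until the simpler one is mastered.
    if one_stage_score >= MASTERY_THRESHOLD:
        components.append("multi_chamber_cylinder")
    return components

print(available_components(0.5))  # novice: only the simple component
print(available_components(0.9))  # mastery shown: richer interface
```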
Scaffold – also introduce cognitive steps one at a time
Build up to complexity. As described in our lesson on Coulomb’s Law, each component or variable in the equation is revealed one at a time. Users explore and master each component in successive mini-lessons.
Co–design with teachers
Co-design means early and on-going consultations. Let the teachers, Subject Matter Experts (SMEs), or clients play the lesson/game at mid and end stages as well. Playtesting is part of the design process. Write down all comments made while in the game, and especially note where users seem perplexed; those are usually the breakpoints.
Use guided exploration
Some free exploration can be useful in the first few minutes for accommodation and to incite curiosity, but once the structured part of the lesson begins, guide the learner, using constructs like pacing, signposting, blinking objects, etc.
Minimize text reading in the headset
Use graphics or mini-animations whenever possible. Prolonged text decoding in VR headsets causes a special sort of strain on the eyes, perhaps due to the vergence-accommodation conflict; see Hoffman, Girshick, Akeley, and Banks (2008). In our VR game Catch-a-Mimic we do not make players read lengthy paragraphs on how butterflies emerge from the chrysalis; instead, a short cut-scene animation of butterflies emerging from cocoons is displayed.
Build for low stakes errors early on
Learning often requires errors to be made, and learning is facilitated by some amount of cognitive effortfulness. In the Catch-a-Mimic game, the player must deduce which butterflies are poisonous, just like a natural predator. In the first level, the first few butterflies on screen are poisonous. Eating them is erroneous and depletes the learner's health score, but there is no other way to discern toxic from non-toxic without feedback on both types. Some false alarms must be made. Later in the game, errors are weighted more heavily.
Playtest often with novices and end-users
It is crucial that you playtest with multiple waves of age-appropriate learners for feedback. This is different from co-designing with teachers. Playtesting with developers does not count. Our brains learn to reinterpret visual anomalies that previously induced discomfort, and user movements become more stable and efficient over time (Oculus, 2018). Developers spend many hours in VR and they physiologically respond differently than your end-users will.
Give players unobtrusive, immediate, and actionable feedback
This does not mean constant feedback. Feedback and adjustments must be integrated into the learner’s ongoing mental model, which takes time.
Design in opportunities for reflection (it should not be all action!)
All educators/designers are currently experimenting with how to do this in VR. Higher-level learning (cognitive change) is not facilitated by twitch-style action alone. Reflection allows the mental model to cohere. Should the user stay in the headset or not? How taboo is it to break immersion? Short quizzes are another way to enhance learning. At this stage, we know reflection should be incorporated, but we are not certain about optimal practices.
Encourage collaborative interactions
Synced multiplayer is still expensive. Try to include workarounds to make the experience more social and collaborative: pair the learner with a preprogrammed non-player character (NPC), have a not-in-headset partner interact via the 2D computer screen, or share the headset back and forth in an asynchronous manner.
Using Hand Controls/Gestures
This section focuses on using the hand controllers in VR for learning.
Use the hand controls to make the learners be “active”
Incorporate into lessons opportunities for learners to make physical decisions about the placement of content and to use representational gestures. Active learning has been shown to increase grades by up to 20% (Waldrop, 2015).
How can a body-based metaphor be applied?
Be creative about ways to get kinesthetics or body actions into the lesson. E.g., if information is going to be displayed as a bar chart, first ask users to swipe upwards and make a prediction about how high one of the bars might be. (Note: Prediction is a metacognitive, well-researched comprehension strategy (Palincsar & Brown, 1984).)
The gesture/action should be congruent with, i.e., well-mapped to, the content being learned.
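The bar-chart prediction idea above can be sketched as a tiny comparison between the learner's swipe and the true value. The scaling factor from swipe height to chart units is an assumption for illustration:

```python
def prediction_error(swipe_height_m: float, actual_value: float,
                     units_per_meter: float = 50.0) -> float:
    """Signed error between the bar value predicted by the learner's
    upward swipe and the actual value (positive = overestimate).
    The 50 units-per-meter scaling is an illustrative assumption."""
    predicted = swipe_height_m * units_per_meter
    return predicted - actual_value

# A 0.6 m swipe predicting a bar whose true value is 25 units.
print(prediction_error(0.6, 25.0))  # 5.0 (overestimate)
```

Showing the learner this signed error right after the reveal turns the prediction gesture into immediate, actionable feedback.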
Actions strengthen motor circuits and memory traces
Performing actions stimulates the motor system and appears to also strengthen memory traces associated with newly learned concepts.
Ownership and Agency
Gestural control gives learners more ownership of and agency over the lesson. Agency has positive emotional effects associated with learning. With the use of VR hand controls, the ability to manipulate content and interactively navigate appears to also attenuate effects of motion sickness (Stanney & Hash, 1998).
Gesture as assessment – Both formative and summative
Design in gestures that reveal the state of the learner's mental model, both during learning (called formative or in-process) and after the act of learning (called summative). For example, prompt the learner to demonstrate negative acceleration with the swipe of a hand controller. Does the controller speed up or slow down over time? Can the learner match certain target rates? This is an embodied method to assess comprehension; we actually consider it somewhat superior to the traditional text-based multiple-choice format, which includes a guess rate.
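The negative-acceleration swipe could be scored roughly as follows. This is a sketch under stated assumptions: controller speed is sampled at regular intervals, and the tolerance value is arbitrary.

```python
def shows_negative_acceleration(speeds: list[float], tol: float = 1e-6) -> bool:
    """True if controller speed decreases across successive samples,
    i.e., the learner's swipe is actually slowing down.
    Assumes speeds were sampled at regular intervals."""
    return all(b < a + tol for a, b in zip(speeds, speeds[1:]))

# A swipe that slows down (a correct demonstration) vs. one that speeds up.
print(shows_negative_acceleration([2.0, 1.5, 1.1, 0.6]))  # True
print(shows_negative_acceleration([0.5, 1.0, 1.8]))       # False
```

A real assessment would also smooth sensor noise and check magnitudes against target rates; this only captures the sign of the acceleration.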
Personalized, more adaptive learning
Gesture research on younger children shows they sometimes gesture knowledge before they can verbally state the knowledge, and that gesture-speech mismatches can reveal a certain readiness to learn (Goldin-Meadow, 1997). Gestures can be used as inputs in adaptive learning algorithms. Although adding adaptivity (dynamic branching) to lessons is more costly, it is considered a best practice in education and should be incorporated into VR lessons whenever possible.
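The branching step of such an adaptive algorithm can be sketched very simply. The thresholds and branch names below are hypothetical; a gesture-accuracy score is assumed to come from an assessment like the swipe check above.

```python
def next_lesson(gesture_accuracy: float) -> str:
    """Route the learner based on how well their gestures matched the
    target concept. Thresholds (0.4, 0.8) are illustrative assumptions."""
    if gesture_accuracy < 0.4:
        return "remedial_branch"   # re-teach with more scaffolding
    if gesture_accuracy < 0.8:
        return "practice_branch"   # more examples at the same level
    return "advanced_branch"       # learner is ready to move on

print(next_lesson(0.3))   # remedial_branch
print(next_lesson(0.95))  # advanced_branch
```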
If there are only resources to focus on a subset of the guidelines, then the recommendation is to follow these top contenders, the Necessary Nine:
- Use guided exploration
- Scaffold cognitive effort (and the interface) – one step at a time
- Give immediate, actionable feedback
- Playtest often – with the correct group
- Build in opportunities for reflection
- Use the hand controls for active, body-based learning
- Integrate gestures that map to the content to be learned
- Gestures are worth the time – they promote learning and agency; they lead to less simulator sickness
- Embed gesture as a form of assessment, both during and after the lesson