Community of Practice

Exploring Virtual Reality Prototype in Pedagogy: Reflections

Written by Desmond Iriaye

In the past year at the Digital Learning Commons (DLC) in Monterey, I have used a number of tools, but none left me yearning for more exploration than the exciting virtual reality (VR) sessions with students and faculty in the Learning Lab. Watching them use VR was like venturing into unfamiliar territory almost every time. The excitement and curiosity were clearly visible, whether they were using Google Cardboard, the Theta S camera, or a VR application with the HTC Vive headset. Shouts of excitement and laughter erupted as students navigated through Google Earth, painted a 360 image with the Tilt Brush app, or played adventure and action games.

It was only natural for me to express keen interest when the opportunity came to prototype a virtual reality project with Bob Cole and Professor Shalini Gopalkrishnan. Up until that point, we had gotten our hands dirty playing with VR tools, but hadn’t actually created any content for learning and teaching purposes. Now the question was: what exactly does it take to make content for immersive learning with virtual reality?

My initial discussions with Bob and Shalini brought to the fore the need to prototype a “minimum viable product,” or MVP, with current tools. The aim was to put together a series of video scenes that a VR user could interact with and make decisions about, in a “forking” interactive story. Then came the aha moment: we decided to prototype a resource for orienting future Graduate Assistants (GAs) in customer service for the Middlebury and Monterey communities.
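To make the “forking” idea concrete, here is a minimal sketch of how such a branching story could be modeled. All scene names, file names, and prompts below are hypothetical, invented purely for illustration; they are not the actual scenes from our prototype.

```python
# Hypothetical branching-story model: each scene is a 360 video clip
# plus the decision points that fork off from it.
scenes = {
    "front_desk": {
        "video": "front_desk_360.mp4",
        "prompt": "A student asks about trying VR. What do you do?",
        "choices": {
            "offer_demo": "learning_lab",
            "refer_to_staff": "staff_office",
        },
    },
    "learning_lab": {
        "video": "learning_lab_360.mp4",
        "prompt": "The student wants to try the headset. How do you start?",
        "choices": {"walk_through_setup": "wrap_up"},
    },
    "staff_office": {"video": "staff_office_360.mp4", "prompt": "", "choices": {}},
    "wrap_up": {"video": "wrap_up_360.mp4", "prompt": "", "choices": {}},
}

def next_scene(current: str, choice: str) -> str:
    """Follow one fork of the story; stay in place if the choice is unknown."""
    return scenes[current]["choices"].get(choice, current)
```

In this sketch, a viewer’s decision simply selects which scene plays next, which is all a forking interactive story needs at minimum.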

My first task was to research different VR authoring platforms. I then drafted a role play with basic interactive ideas around customer service orientation. Next, a literature search helped me identify a number of platforms that could help: Viar360, InstaVR, and sceneVR. While choosing a platform, I continued the process by taking images of the DLC Learning Lab with the Theta S 360 camera and recording audio sessions with my colleague Ianthe Duncan-Poole.

I must add that reading about the product features of these platforms took a while; it was daunting to learn the tools and create my own content at the same time. It was a good learning experience, and a continuing one, as I plan to develop more content for this project.

Although the mechanics of these platforms appear similar (upload images or videos, augment with hotspots and navigation, create an app homepage, add an icon, publish the app), each has its own specific approach. To determine mine, I decided to work backwards and decipher what the highlights of my own GA orientation with Evelyn Helminen had been in my early days at the DLC. In trying to make the orientation experience as immersive as possible, a simulated environment reachable from any part of the world, what would make it tick? When clients put on their VR headset or Google Cardboard, would they want to:

  • Learn facts they didn’t already know?
  • Become more comfortable in a specific environment?
  • Recognize visual or auditory clues?
  • Learn through repetition?
  • Make decisions they will replicate in the real-world environment?
  • Do something that can’t easily be done in the real world?

Or would they be uninterested in this type of orientation?

With all these questions in mind, I had a clearer idea of which platform to start with. InstaVR struck a chord, and I began the journey of prototyping with it. One of its best features is the ability to add gaze-based hotspots. Hotspots augment your existing panoramic images or videos, and they are particularly useful in a training environment because they can add valuable information. A hotspot can also trigger audio narration that explains how to better engage with a client.

Currently, the prototype works as an app installed on your phone, with embedded 360 images of the Learning Lab and voice-over narration on how GAs can serve clients at the DLC. With the app, you can navigate the images and click on hotspots; each hotspot plays an audio message describing a scenario GAs face when interacting with clients in the Learning Lab and how that scenario can be managed.
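Under the hood, a hotspot-to-narration mapping like the one just described can be sketched very simply. The identifiers, audio file names, and scenarios below are illustrative assumptions of mine, not InstaVR’s actual data model or API.

```python
# Hypothetical hotspot data: each hotspot in a panoramic image points to
# an audio narration clip about one client-service scenario.
hotspots = [
    {"id": "greet_client", "audio": "greeting_a_client.mp3",
     "note": "Welcoming a client who walks into the Learning Lab"},
    {"id": "vive_setup", "audio": "setting_up_the_vive.mp3",
     "note": "Walking a client through the VR headset setup"},
]

def audio_for(hotspot_id):
    """Return the narration clip for a selected hotspot, or None if absent."""
    for h in hotspots:
        if h["id"] == hotspot_id:
            return h["audio"]
    return None
```

Gazing at or clicking a hotspot would then simply look up and play the associated clip.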

See the short video here that demonstrates how the app works »

We are still exploring how to improve this feature; with every step we take, there is always something new to learn. I’d love to see how we can engage more with these VR authoring platforms to develop more content for pedagogy at Middlebury.

Post by Desmond Iriaye, Graduate Assistant in the Digital Learning Commons, 2017-18
