VRguy podcast Episode 25: Jason Jerald, Principal Consultant at NextGen Interactions


My guest today is Jason Jerald, Co-Founder and Principal Consultant at NextGen Interactions and author of The VR Book. This episode was recorded on Aug 18th, 2017.

Jason and I talk about fine motor movements and pens in VR, techniques for reducing motion sickness, and other aspects of human-computer interaction in VR.

http://sensics.com/wp-content/uploads/2017/08/rec_jasonjerald_18_Aug_2017_10_33_10.mp3


Yuval Boger (VRguy): Hello, Jason, and welcome to the program.

Jason Jerald: Thanks for having me.

VRguy: So who are you and what do you do?

Jason: My name is Jason Jerald, and what I do, like many people listening to this podcast I suspect, is virtual reality. A little bit of augmented reality, but primarily virtual reality is our focus. I’m co-founder and principal consultant at NextGen Interactions. It’s kind of funny, 10 years ago I’d tell people I do virtual reality and they’d say, “Wow, what is that?” And in some cases they’d laugh at me. And of course, everything’s changed in the last few years. It seems that I’m certainly not the only person working exclusively on virtual reality. Everyone sort of gets it, and is excited about it, and the challenge is really making those VR experiences effective instead of something we just talk about.

VRguy: And you’ve summarized a lot of your work in a book that was published recently, right?

Jason: Yeah, so I have a book, it’s called ‘The VR Book: Human-Centered Design for Virtual Reality’, and it takes a little different angle or perspective on virtual reality than a lot of other great books out there. This one is not so technically focused. It’s more about the higher-level design concepts, the design thinking of how you integrate different things together, and it’s very interdisciplinary. There’s sort of a common answer, which a lot of people don’t like, but really is the truth: the answer is usually “it depends”. We have different constraints, we have different end users, we have different goals, we’re using different hardware, and so there are very few universal truths when it comes to virtual reality design.

VRguy: If you go back three or four years, I guess in VR terms, it’s when the dinosaurs roamed the Earth, people were worried about motion sickness, and, “Oh, is it going to get me sick?” And so on and so on. Do you think that issue is primarily solved today? I mean, if you write an application and sort of follow the guidelines that major manufacturers offer, do you think that’s addressed or is that still an open question?

Jason: It’s definitely still an open question. In some cases, depending what you’re doing and what your goals are, we can certainly solve that. If you’re in a seated position and you don’t need to move virtually through the world, then you’re largely not going to have motion sickness, although there’s other challenges such as eye strain and such. Or if you’re using a wider area of tracking, you can physically walk around. However, once you want to actually move through a larger world, that can become more challenging, and there’s some great options like teleportation, which can pretty much prevent motion sickness. But again, there’s trade-offs to that. So when you teleport, for example, your users can get a little bit confused of where they moved to, or what their new orientation is, so there’s a lot of trade-offs there. There are no perfect answers, and those answers depend on what you’re trying to do.

VRguy: So I guess motion sickness is sort of a result of a disconnect between what your brain thinks you’re doing and what your body feels that is happening, or is it more than that?

Jason: Yeah, that’s definitely the most widely accepted theory, that conflict between your vestibular system in your inner ear (your sense of balance and acceleration) and what you’re seeing. I’d say that’s really the most important factor. But there are also other, lesser factors. I divide those factors into system factors, application factors, and individual user factors. System factors include challenges such as: is the hardware good enough, do you have a high enough frame rate, is there low latency, is it calibrated correctly? Those sorts of issues. Then there are also the application factors, which we as developers have control over. If we move the user’s viewpoint in crazy ways, then that can be a problem.

And then there’s also the individual user. As individuals, we think differently, so I might be more sensitive to motion sickness than someone else who’s largely immune to it. But there are also a whole lot of psychological effects there as well. For example, it’s been found that just by thinking about motion sickness, so if you suggest someone might get sick, they’re actually more likely to get sick. Then it brings up the question of, well, should we warn our users? We want to warn them, but at the same time, if we warn them, they’re more likely to feel that sickness. In my book I list some 60 factors that can contribute. There are also issues outside of motion sickness. For example, falling over, physical trauma, slamming your hand into your desk if you’re using a tracked controller (slamming your head would be bad as well), eye strain, and all sorts of other challenges that we need to be aware of and be careful of.

VRguy: So, where does the science come from? For instance, if I think about Google Earth in VR, when you teleport, say from the top of the Eiffel Tower to the Champs-Elysees in one click, the field of view narrows for a little bit and then widens back again.

Jason: Yep.

VRguy: Is that just a result of trial and error on the Google side, or is it backed by science, and if so, who’s doing that science?

Jason: There’s a really great paper by Fernandes and Feiner from 2016 that studies exactly that: the effect of decreasing the field of view when you move through the world. It’s one of my favorite papers. It’s really well done. That’s one example of a way to study that, and a way to collect data. What they do is basically every 20 seconds or so, they bring up a question to the user saying, “How comfortable do you feel?” And of course, over time, the longer you have these virtual motions through the world, you’re going to feel it more and more. So the longer a person experiences those provocative motions, the more of a problem it is. So they measure that over time. It’s not what you’d want to do in a really presence-inducing experience, but for research, breaks-in-presence are fine. And when you look at the data, you can really see the discomfort going up, pretty much for all users.

And then they have different conditions, one condition being “None of that decreased field of view,” and then “with the decreased field of view.” And there were very interesting results. Of course, it was better with that decreased field of view, and the reason why, one reason why anyway, is that we’re more sensitive to motion in our peripheral vision. And so if we take away that virtual motion in our peripheral vision, then of course there’s less discomfort. Now, there’s older research that comes from the flight simulator industry, the Kennedy Simulator Sickness Questionnaire, and that’s another example of a great way to collect data. There’s also physiological measures, or you can just simply observe your users.
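
[Editor’s note: to make the idea concrete, here is a minimal sketch of speed-driven field-of-view restriction in the spirit of the Fernandes and Feiner paper. The function names, parameter values, and the simple linear speed-to-FOV mapping are illustrative assumptions, not the parameters or method used in the paper.]

def restricted_fov_deg(virtual_speed, base_fov_deg=110.0, min_fov_deg=60.0,
                       speed_for_min_fov=5.0):
    """Map virtual locomotion speed (m/s) to a rendered field of view (degrees).

    Faster virtual motion restricts the FOV more, hiding motion in the
    periphery where we are most sensitive to it. All constants here are
    made-up, illustrative values.
    """
    t = max(0.0, min(1.0, virtual_speed / speed_for_min_fov))
    return base_fov_deg - t * (base_fov_deg - min_fov_deg)

def vignette_alpha(angle_from_center_deg, fov_deg, soft_edge_deg=10.0):
    """Opacity of a black vignette at a given angular offset from the view
    center. Inside the allowed FOV it is fully transparent; beyond it,
    opacity ramps up over a soft edge so the restriction stays subtle."""
    half_fov = fov_deg / 2.0
    if angle_from_center_deg <= half_fov:
        return 0.0
    return min(1.0, (angle_from_center_deg - half_fov) / soft_edge_deg)

# Example: moving at 2 m/s restricts the view to 90 degrees, and a pixel
# 50 degrees off-center is about half covered by the vignette.
print(restricted_fov_deg(2.0))                          # 90.0
print(vignette_alpha(50.0, restricted_fov_deg(2.0)))    # 0.5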

There are different ways of studying motion sickness, but similar to what I was saying about design, there’s really no perfect way to collect all this data. So you have to think about what you are trying to determine, and what ways your users are comfortable with you collecting that information, for example. There’s a whole lot to consider. One of the biggest and most controversial topics is how to collect data: with formal studies done in universities and such, which are looking for very specific answers and really trying to get at the truth, versus constructionist approaches where we want to get answers quickly, where maybe we don’t run statistics but are just loosely collecting data, comments, and such from our users. So there’s a whole wide range of different ways to collect that data.

VRguy: You mentioned the Feiner paper. If you could recommend a couple more seminal papers that you think people from industry should be reading, what would those be?

Jason: I’d definitely say the Kennedy paper [Editor’s note: links at the end of the transcript] on the Simulator Sickness Questionnaire, which goes back to the ’90s. There are some challenges with that paper (none of these ways of measuring sickness is perfect), but it is definitely a classic, and it’s the most commonly used method of collecting that information. So even if you’re not going to use that method, it at least gives you a great background on the ways other people are collecting data, and helps you understand, when you’re reading these papers, what they’re doing as far as data collection. And of course, the Feiner paper, definitely. That’s one of my favorites.

VRguy: How does the university research compare with some of the corporate research? You know, Microsoft research, or Oculus, or some of the work that Valve is doing. How would you look at those in comparison?

Jason: They’re definitely related. There’s certainly scientific, industrial research, and then there’s the more qualitative, anecdotal research that a lot of the game studios do. They’ll try some new method and then just kind of put some users through it and say, “What do you think?” So they’re both valid, but quite different, and there’s sort of a controversy, depending on who you ask, about which is the better way to do it. And again, I’ll say the answer is “it depends”: what are you trying to determine? If you’re looking to publish in a scientific journal, then you need to do the more scientifically rigorous sort of study.

So a good example of that is with one of our own games at NextGen Interactions. We’re exploring reducing sickness, in some ways expanding upon that concept from Fernandes and Feiner, with something we call dynamic rest frames. Well, let me back up. There’s a way to reduce sickness called rest frames. All that means is that you have a stabilized cue in the environment: even though you’re moving through the environment, something stable that does not move is in your view as well. An easy way to think about that is a cockpit, or in a car, for example. The game EVE: Valkyrie is a great example of that. That can help, because now not all the pixels on the screen are moving in a way inconsistent with your vestibular system. Instead, only the things outside of your cockpit are moving, and the cockpit is stable, so it’s more comfortable.

What we did was combine that concept of the cockpit with the concept from Fernandes and Feiner of fading in those rest frames, which is what we call ‘dynamic rest frames’. When you’re not moving, you have more situational awareness because you have an entire field of view you can look around in. But when you’re moving, we sort of fade in the rest frames. We essentially fog up the windows on the cockpit, so there’s less visual motion in conflict with your vestibular system. We’re taking a more constructionist approach of just observing ourselves, getting users in there, going to conferences, and collecting simple questionnaires about how it feels and such.
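
[Editor’s note: a rough sketch of how a ‘dynamic rest frame’ fade might be driven per frame. The smoothing scheme and all constants are assumptions for illustration; this is not NextGen Interactions’ actual implementation.]

import math

def update_rest_frame_opacity(current_opacity, virtual_speed, dt,
                              speed_for_full_opacity=2.0, fade_rate=3.0):
    """Advance the cockpit/rest-frame opacity by one frame.

    When the user moves virtually, the stabilizing geometry (the 'fogged
    windows' of the cockpit) fades in; when they stop, it fades back out,
    restoring full situational awareness. Exponential smoothing keeps the
    fade gradual rather than popping in and out.
    """
    target = max(0.0, min(1.0, virtual_speed / speed_for_full_opacity))
    blend = 1.0 - math.exp(-fade_rate * dt)
    return current_opacity + (target - current_opacity) * blend

# Example: starting fully transparent and moving at 1.5 m/s at 90 fps, the
# rest frame fades toward 75% opacity over successive frames.
opacity = 0.0
for _ in range(90):  # one second of frames
    opacity = update_rest_frame_opacity(opacity, 1.5, 1.0 / 90.0)
print(round(opacity, 2))  # ~0.71 after one second, approaching 0.75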

We’ve gotten some really amazing results. I’m not claiming to solve it completely, far from it, but we’re definitely helping to increase that comfort. We’ve had some people say they’ve never been able to play that type of first-person, motion-based game in virtual reality, that they can’t last more than 30 seconds. By using these different concepts of dynamic rest frames and the decreased field of view, these users who were at first very skeptical have told us, “I can’t believe this. I’m able to play a good amount of time without getting sick. This is the first time I’ve been able to do that.”

Now that’s very anecdotal, and you could argue it’s very biased, because there are all sorts of factors going on when collecting that information. It’s not scientific. We’re working with Duke University, where I’m also adjunct faculty in addition to my primary role at NextGen Interactions, and we’re conducting scientific studies to investigate that in a more objective manner. We’re working with Regis Kopper and one of his graduate students, Zekun Cao, on studying this, and we hope to get this published soon. So we’ll not only have a fun game example for dynamic rest frames, but also something for people to look at for more of the details of how it works, how to implement it, some idea of how effective it is, and what we think it might be appropriate for. We want to put that out in the world so other people can take advantage of these things.

VRguy: You mentioned that in a seated position, simulator sickness is largely resolved. What do you think are the main unresolved issues these days? What would you want the researchers and the industry to focus on with regards to human interaction?

Jason: There’s locomotion, which you can do in so many ways, and it’s one of my favorite things to work on. We’ve looked at locomotion in all sorts of different ways, and there are some really fun, engaging ways you can move through the world. And then there’s interaction: how do I interact with objects in the world, other characters, whatever it might be. That’s a huge area as well, whether you’re interacting to locomote through the world, or just reaching out to grab something. There’s the straightforward, more direct interaction: if you have hand tracking, reach out, intersect an object, pull the trigger, and you pick up the object. Those work extremely well. That’s a very visceral sort of direct experience, but there are all sorts of other ways you can interact with objects in the world as well, ways that go beyond how we interact in the real world, depending on what you’re trying to do.

In some cases, you want to replicate the real world. If you’re doing a training application for what is essentially a physical task, then you probably want the physical interaction to match the real-world equivalent as closely as possible, something we call interaction fidelity. But in some cases, you want to go beyond what you can do in reality. You could argue that if we’re just trying to replicate reality, then what’s the point? We’d just use reality. What if I want to reach out and grab something further away, without having to walk over to it? An example technique that expands beyond that direct hand selection is a really cool one called the Go-Go technique. Within about two thirds of the reach of your arm, you interact normally, but when you extend further out, your virtual hand starts to move non-linearly further out into the distance. It’s motivated by Inspector Gadget, a cartoon back in the ’90s, where he can really stretch his arm out and grab things far away. So that’s an example of something that doesn’t really make sense in the real world, unless you have Inspector Gadget’s arm, but that can actually be pretty effective and pretty comfortable in virtual reality.
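
[Editor’s note: the published Go-Go technique (Poupyrev et al., 1996, linked below) maps real hand distance to virtual hand distance with a quadratic offset beyond a threshold of roughly two thirds of arm length. Here is a minimal sketch of that mapping; the gain value is illustrative.]

def go_go_virtual_distance(real_dist, arm_length, k=0.6):
    """Map the real hand's distance from the body (meters) to the virtual
    hand's distance, following the Go-Go non-linear mapping.

    Within about two thirds of arm length the mapping is one-to-one; beyond
    that, the virtual hand extends by an extra quadratic term so far-away
    objects become reachable without walking. `k` controls how aggressively
    the reach grows and is an assumed value.
    """
    threshold = (2.0 / 3.0) * arm_length
    if real_dist <= threshold:
        return real_dist
    return real_dist + k * (real_dist - threshold) ** 2

# Example with a 0.7 m arm: at 0.4 m the virtual hand is at 0.4 m (direct),
# but at a full 0.7 m reach it is pushed slightly further out, and with a
# much larger k the same gesture can reach several meters.
print(go_go_virtual_distance(0.4, 0.7))            # 0.4
print(round(go_go_virtual_distance(0.7, 0.7), 3))  # 0.733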

VRguy: What about fine motor movements? For instance, I can understand how with tracked controllers I could walk around a jet engine, rotate it, grab a part, zoom in. But what happens if I want to just write a note to myself: oh, you know what? We need to make this the other way, or change the color. How do you get handwriting and text, or fine motor movements, in VR?

Jason: I would say text entry, in general, is one of the biggest challenges of VR. I haven’t seen a lot of great solutions out there. I think there’s some potential for good solutions, for example, chorded keyboards. If you had more buttons on each controller, then you could do combinations of buttons and such to do text entry. There’s been a lot of resistance to that in traditional computing, but I think there’s more of a reason to do it in VR. So I’m hoping to see those types of out-of-the-box solutions for text entry. Actually drawing text is certainly a challenge, and that’s one of the advantages of using a mouse: you’re constrained to a physical mousepad, confined to that 2D physical constraint of your desk, so you can get a lot more control. Whereas if you’re in free-form space, it’s really difficult to hold your hand steady, and so text entry can certainly be more challenging.

There are other ways, although maybe not for text entry, to be able to interact with things precisely. You can have non-isomorphic rotations: for example, I may rotate my hand 30 degrees, and maybe that only maps to a virtual rotation of 5 degrees, and that way you can have more control. So that’s one example. Another example is that we’ve been working with HTC’s Vive Studios, Sixense, and Digital ArtForms on an application called MakeVR, which is an immersive CAD program using the ACIS CAD engine, so it’s truly a CAD engine versus just moving vertices around and such.
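
[Editor’s note: a minimal sketch of a non-isomorphic rotation gain applied to each frame’s controller rotation, using the 30-degrees-to-5-degrees ratio from the conversation. The axis-angle representation and the per-frame delta approach are assumptions made for illustration.]

import numpy as np

def scaled_virtual_rotation(delta_axis, delta_angle_deg, gain=5.0 / 30.0):
    """Scale one frame's real hand rotation down for finer virtual control.

    The controller rotated by `delta_angle_deg` about `delta_axis` this
    frame; the manipulated object is rotated about the same axis by only
    `gain` times that angle, so 30 degrees of wrist motion yields about
    5 degrees of virtual rotation.
    """
    axis = np.asarray(delta_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)   # keep the rotation axis normalized
    return axis, delta_angle_deg * gain

# Example: a 30-degree twist about the vertical axis becomes a 5-degree
# virtual rotation, applied cumulatively frame by frame.
axis, angle = scaled_virtual_rotation([0.0, 1.0, 0.0], 30.0)
print(axis, angle)  # [0. 1. 0.] 5.0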

One thing that makes MakeVR interesting is the ability to locomote with your hands instead of your feet. You can also walk around, because it’s using HTC’s system. But you can also zoom in, sort of like you were mentioning, Yuval; Google Earth is doing this in different ways now as well, being able to zoom in and work more precisely. In this case, we can grab the world and make ourselves smaller, so we’re the size of an ant. The way we do that is you first grab the world with your hands. We call it 3D multi-touch, similar to how you can pinch in a 2D application with 2D multi-touch; here you’re using your hands instead of your fingers. You grab the world with both hands and then spread your hands apart, and you essentially zoom in about the mid-point between your hands. You can also think of it as grabbing the world as an object, grabbing the fabric of space itself, stretching your arms apart, and now you are scaled as if you were the size of an ant, or as if the world were much larger. Essentially, we’re zoomed in so we can work in a more refined manner, in more detail.
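
[Editor’s note: a rough sketch of the two-handed ‘3D multi-touch’ idea: scale the world about the mid-point between the hands by the ratio of the current hand separation to the separation when the grab started, and translate it so the grabbed region follows the hands. MakeVR’s actual math is not published here, so treat this as an illustration, with rotation about the inter-hand axis omitted.]

import numpy as np

def two_hand_world_update(left_grab, right_grab, left_now, right_now):
    """Compute a uniform scale factor and a translation for the world from a
    two-handed grab, ignoring rotation for brevity.

    Spreading the hands apart returns a scale > 1 (the world grows around
    you, so you effectively shrink to ant size); bringing them together
    returns a scale < 1. The pivot is the current mid-point between the
    hands, and the translation keeps the grabbed region under the hands.
    """
    grab_sep = np.linalg.norm(right_grab - left_grab)
    now_sep = np.linalg.norm(right_now - left_now)
    scale = now_sep / grab_sep if grab_sep > 1e-6 else 1.0

    pivot = (left_now + right_now) / 2.0
    translation = (left_now + right_now) / 2.0 - (left_grab + right_grab) / 2.0
    return scale, pivot, translation

# Example: hands start 0.3 m apart and end 0.6 m apart, so the world is
# scaled up 2x about the point between the hands.
scale, pivot, _ = two_hand_world_update(
    np.array([-0.15, 1.2, -0.4]), np.array([0.15, 1.2, -0.4]),
    np.array([-0.30, 1.2, -0.4]), np.array([0.30, 1.2, -0.4]))
print(scale)  # 2.0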

Now there are still challenges to that, because it’s still difficult to hold your hands steady, so we’ve been integrating precision tools. 2D grids, for example: you can take that grid, snap it to any other object, and then take another object and snap it to the grid points, so it skips along specific grid points whose resolution and such you can set. Or if you want a one-dimensional ruler, so you can move something up exactly five units, you can do that. So we’ve been having a whole lot of fun exploring how to do precision within virtual reality. There’s certainly a whole lot to think about. And it’s also very much human-centered design in general: sometimes what you think is going to be a great idea doesn’t work at all, so there’s a whole lot of iteration there. And vice versa: sometimes you think something probably won’t work. You really don’t know until you actually try it and explore that space. That’s one of the reasons designing and developing for virtual reality is so exciting, because there are so many unknowns and so many things to explore.
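
[Editor’s note: the grid-snapping precision tool described above amounts to rounding a dragged position to the nearest point of a user-placed grid. A minimal sketch, with made-up parameter names, follows.]

import numpy as np

def snap_to_grid(position, grid_origin, cell_size):
    """Snap a dragged object's position to the nearest grid point.

    `grid_origin` is where the user placed (or snapped) the grid, and
    `cell_size` is the resolution they set; as the hand moves, the object
    skips from grid point to grid point instead of following hand jitter.
    """
    position = np.asarray(position, dtype=float)
    grid_origin = np.asarray(grid_origin, dtype=float)
    local = position - grid_origin
    return grid_origin + np.round(local / cell_size) * cell_size

# Example: with a 5 cm grid anchored at the origin, a slightly shaky hand
# position snaps cleanly to the nearest grid point, roughly [0.1, 0.05, 0.0].
print(snap_to_grid([0.118, 0.052, -0.024], [0.0, 0.0, 0.0], 0.05))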

VRguy: It still feels like the pen is missing in VR. I mean, just like the pen found its way to the Palm Pilot many years ago, with handwriting, or if I’m an illustrator I would work with a sort of digital pen in front of my computer. I think that’s still missing in VR.

Jason: Yes.

VRguy: Let me go back to… You mentioned eye strain before, as a concern with human-centered design. Is that just in the focal distance, where my eyes are focused, or is that the disconnect between where the objects are and where I’m looking? Is the solution to that just different human interface, or is it different displays, sort of light field displays? How do you see that?

Jason: Yeah, certainly. You probably know this as well as or better than anyone, since you create a lot of these head-mounted displays, but I can summarize what you already know and maybe your listeners aren’t aware of. It’s essentially about your ability to focus out in the distance. In the real world, if we focus on something, we don’t consciously notice it typically, but things at a certain distance are going to be in focus, and something further out is not going to be in focus. Focus being what you can clearly see in detail.

The challenge is that we also have the convergence of our eyes. Because we have two eyes, if I look at something close to me, my eyes are turned inward; they’re converged, looking at that object. But if our optics actually show everything in focus two meters out, then there’s a disconnect between the physical convergence of my eyes and the accommodation, the lens of my eyes focusing on something two meters out in the distance. As human beings, we’ve had many years of experience of those two things being connected and consistent with each other, and they’re now in conflict when using most of today’s HMDs. So that’s another example of a sensory conflict, similar to the vestibular-visual conflict that results in motion sickness. This is the accommodation-vergence conflict, and it can cause strain on the eyes.
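
[Editor’s note: the size of the mismatch is easy to quantify. The vergence angle for a fixation distance d and inter-pupillary distance IPD is 2 * atan(IPD / 2d), while accommodation in most current HMDs stays locked to the optics’ fixed focal distance. A small illustrative calculation, with assumed values, follows.]

import math

def vergence_angle_deg(ipd_m, fixation_distance_m):
    """Angle (degrees) through which the two eyes converge to fixate a point
    at the given distance, for a given inter-pupillary distance."""
    return 2.0 * math.degrees(math.atan((ipd_m / 2.0) / fixation_distance_m))

# With a 63 mm IPD: an object rendered at 0.5 m asks the eyes to converge
# about 7.2 degrees, while the fixed ~2 m focal plane of the optics
# corresponds to only about 1.8 degrees, so vergence and accommodation
# disagree (the values here are illustrative).
print(round(vergence_angle_deg(0.063, 0.5), 1))  # 7.2
print(round(vergence_angle_deg(0.063, 2.0), 1))  # 1.8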

One of the solutions for that, at a high level, is not having a single focal distance. As you can imagine, that’s hugely challenging, and some people are looking at light fields. Magic Leap is supposedly looking at that. That’s a big advantage they’d have, to be able to make that more comfortable, especially for longer periods of usage. This isn’t bad if it’s only a couple of minutes at a time, but if you want to use VR for hours at a time, then a lot of people will start getting headaches because of that conflict. So a lot of researchers are looking to solve that problem.

VRguy: So as we start bringing our conversation to a close, if you had control over what Sensics, and Oculus, and Valve, and others, and Google work on in the next 18 months or so, what would you have us do?

Jason: Yeah, boy, there are so many things to work on. I’m very happy to see companies are now offering two-handed controllers. A lot of the prior research actually focuses on one-handed interaction, but there’s so much more you can do with two hands. What I think is going to totally shift things, and is sort of the barrier that needs to be broken down for VR to go mainstream, is the mobile space: taking some of the things that we know work really well, that Oculus, HTC, and others are doing with hand tracking. Of course, there are always improvements we can make to the head-mounted displays, working with OSVR (Open Source Virtual Reality), for example. There’s always room for improvement there.

But I really think the breakthrough is quality hand tracking. If we can get that working on mobile at a low cost, and take in those lessons that we’ve learned from the HTC Vive and Oculus with hand tracking, which we know can be so engaging and so effective, then you really start opening up the design space of what you can do, because you bring the hands into the world. When we’re able to start doing that with quality tracking on mobile displays, combined with phone-based or dedicated headsets, I really think that’s going to open up the mobile VR space and the value it can offer, which is really needed to get the large number of users that we’d all love to have in this industry.

VRguy: Excellent. So, Jason, thank you so much. Where could people connect with you to learn more about what you’re doing?

Jason: Yeah, so you can follow me on Twitter at @TheVRBook, or you can shoot me an email at Jason@NextGenInteractions.com, and of course, you can just do a Google search as well to find me.

VRguy: Perfect. Well, thanks so much for joining me today.

Jason: Absolutely. Thanks for having me, Yuval.

LINKS:

VR Apocalypse—A game that utilizes dynamic rest frames: http://store.steampowered.com/app/554940/VR_Apocalypse/

MakeVR—an immersive modeling application that utilizes 3D Multitouch and precision https://www.viveport.com/apps/9e94a10f-51d9-4b6f-92e4-6e4fe9383fe9

The VR Book: Human-Centered Design for Virtual Reality http://www.thevrbook.net

PAPERS:

Fernandes, A. S., and Feiner, S. K. (2016). Combating VR Sickness Through Subtle Dynamic Field-of-View Modification. In 3D User Interfaces (3DUI), 2016 IEEE Symposium on (pp. 201–210). IEEE. http://ieeexplore.ieee.org/document/7460053

Kennedy, R. S., Lane, N. E., Berbaum, K. S., and Lilienthal, M. G. (1993). A Simulator Sickness Questionnaire (SSQ): A New Method for Quantifying Simulator Sickness. International Journal of Aviation Psychology, 3(3), 203–220.

Poupyrev, I., Billinghurst, M., Weghorst, S., and Ichikawa, T. (1996). The Go-Go Interaction Technique: Non-linear Mapping for Direct Manipulation in VR. In ACM Symposium on User Interface Software and Technology (pp. 79–80). ACM Press. http://dx.doi.org/10.1145/237091.237102

Young, S. D., Adelstein, B. D., and Ellis, S. R. (2007). Demand Characteristics in Assessing Motion Sickness in a Virtual Environment: Or Does Taking a Motion Sickness Questionnaire Make You Sick? In IEEE Transactions on Visualization and Computer Graphics (Vol. 13, pp. 422–428).
