VRUI

All off topic discussions go here. Everything from the funny thing your cat did to your favorite tv shows. Non-programming computer questions are ok too.
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

VRUI

Post by AndrewAPrice »

I preordered an Oculus Rift. I want to do something fun and experimental with it.

I found this video on YouTube: "Concept for an Oculus Rift user interface."

That has got me thinking about interfaces designed for head mounted displays. I like experimental things, where you throw away every preconception of how something should work and try to design something different.

With head mounted displays, such as the Oculus Rift, you have added elements like stereoscopy (a different image per eye), rotation (being able to turn your head), and position tracking (being able to move your head to look around something). It gives the illusion of being immersed in a 3D environment. I think this could make for an interesting user interface.

There are limitations - while you do have the freedom to view the world impressively, most users interactions will still be limited to the standard input devices - the keyboard and mouse. Some people have paired the Oculus Rift up with alternative input devices like motion tracking gloves, Wiimotes, or Kinect cameras. I'm going to be talking specifically about UIs you could design with a head mounted display + keyboard + mouse, but it would also be interesting to talk about UIs you could design with alternative input devices too.

The problem with representing things spatially is that we can often only see part of what we're interacting with at once. Programs are complex systems that need to be both comprehended and interacted with.

I don't think comprehending programs spatially will be a challenge. Our brains are capable of understanding things spatially even when we can only see a little at one time. We can easily comprehend the layout of a house - even a multi-story house with a three-dimensional layout - despite only seeing part of the interior at a time.

If a program is represented as a 3D object, we can look around it, inside it, and get a pretty good idea of how the major parts of it are laid out. It's easy to comprehend.

The next element is interacting with a complex system. We need to make the interface intuitive and accessible. We want all relevant information accessible conveniently. This does not mean it has to be all on screen at once, but that we can access it near immediately when needed.

For example, when driving a car, we can only focus our vision on a limited range of inputs at one time - the dashboard, our mirrors, the road ahead. Yet we feel that all of that information is accessible because we can simply turn our head or move our eyes, and have any information relevant to driving available to us.

If we are multitasking, such as reading a book and watching a movie simultaneously, we want both things conveniently accessible. The most convenient action is rotating our head between the two items, to the point that both feel accessible on demand. If the book were in one room and the movie were playing in another, having to walk between the rooms would lower the accessibility of both tasks, and we would lose the ability to switch from one to the other near instantaneously.

In a VR user interface, we don't necessarily need the entire interface on screen at once. We just want everything accessible near instantaneously. We could represent our interface mapped onto a sphere, and we would feel relatively unrestricted, as if we could see it all simultaneously and conveniently, because all we would have to do is rotate our head.
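As an illustration of the sphere idea, here is a minimal sketch (the panel names and angular extents are made up for the example) of how a sphere-mapped interface could resolve which application the user is currently looking at from head yaw/pitch:

```python
# Hypothetical sketch: UI panels mapped onto a sphere around the user.
# Each panel occupies a rectangle in (yaw, pitch) space, in degrees.
panels = {
    "editor":   {"yaw": (-60, 0),  "pitch": (-20, 20)},
    "browser":  {"yaw": (0, 60),   "pitch": (-20, 20)},
    "terminal": {"yaw": (-30, 30), "pitch": (-60, -20)},
}

def panel_in_view(head_yaw, head_pitch):
    """Return the panel the user's head is pointed at, or None."""
    for name, rect in panels.items():
        y0, y1 = rect["yaw"]
        p0, p1 = rect["pitch"]
        if y0 <= head_yaw < y1 and p0 <= head_pitch < p1:
            return name
    return None
```

Switching focus is then just a head rotation: looking 30 degrees left lands on the editor, 30 degrees right on the browser, downward on the terminal.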

This is the same as multi-monitor setups. Even though we are only focusing on one screen at once, it's more convenient to simply look between two screens, than to switch windows on a single screen. It gives the illusion that both are accessible simultaneously.

[continued in next post]
My OS is Perception.
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: VRUI

Post by AndrewAPrice »

In a VRUI, some things are very easy for us to imagine. For example, in a "fullscreened" video game, we would be completely immersed inside the game. That's how technology like the Oculus Rift is used today.

But, in a multitasking environment - we don't want to be fully immersed in a single application at once.

How would we represent our applications, such as two database programs opened simultaneously? Would each program simply be a flat plane that floats in 3D (as current 3d windowing managers do) or would they be made up of 3D widgets that have their own depth and dimension?

What if you wanted to 'window' your immersive 3D game? Would it simply look like a floating portal into a 3D world? Or is there a way to show two or more immersive 3D worlds simultaneously that we can rotate our head around? I think that challenge is more interesting than simply having 3D widgets.

Perhaps it's not possible, and the best way is a Linux-style terminal switching system, where you press ALT+F1, ALT+F2 to switch between the running immersive UIs. But there will still be times when you want a video game and a web browser, for example, open simultaneously so you can reference one while interacting with the other.

I'm trying to see how you could create a productive, multitasking system using a head-mounted display.
My OS is Perception.
SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

Re: VRUI

Post by SpyderTL »

Just throwing out some ideas...

I like the virtual / holographic display UI that is used in Dead Space. Basically all of the program "windows" float virtually as you walk through the 3D environment. This may work well for a 3D OS front end.

[screenshot: Dead Space's in-world holographic UI]

Another idea, the (running) program window could be represented as a cylinder, that you can "stand" inside and view the inside surface of, and turn your head 360 degrees to see the entire surface. Then for multiple applications you could "stack" these cylinders on top of each other, vertically, so you could look up and down to see multiple applications, or use the mouse/keyboard to slide the whole cylinder stack up or down, like hitting Alt-Tab in windows allows you to cycle through the windows.

Programs that are not currently running, or data files that are not currently open can be placed around the 3D environment, and clicking on them would "open" them by adding them to the 360 degree cylinder stack that you are currently "standing" in. You could even have multiple stacks running at the same time, and be able to leave one stack in a certain place, and "walk" to another stack and attach it so that it would follow you around, instead.
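The cylinder stack described above could be sketched roughly like this (a toy model; the class and app names are invented for the example) - each running app is one 360-degree cylinder, cycling slides the stack so a different cylinder sits at eye level, and opening a program appends it to the stack you're standing in:

```python
# Hypothetical sketch of the cylinder-stack idea: each running application is
# a 360-degree cylinder; Alt-Tab-style cycling slides the whole stack so a
# different cylinder sits at eye level.
class CylinderStack:
    def __init__(self, apps):
        self.apps = list(apps)   # bottom-to-top order
        self.focused = 0         # index of the cylinder at eye level

    def cycle(self, step=1):
        """Slide the stack up or down by `step` cylinders; return the new focus."""
        self.focused = (self.focused + step) % len(self.apps)
        return self.apps[self.focused]

    def open(self, app):
        """Clicking a closed program adds it to the stack you're standing in."""
        self.apps.append(app)
```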
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: VRUI

Post by AndrewAPrice »

In a video game, like your screenshot, you're exploring the game's level, so the elements appear holographically inside that level. The OS doesn't have a level you're exploring. If you add a character that can walk, jump, and is affected by gravity, etc., plus a 3D level - you're creating a video game. Is there something more flexible?

The stackable cylinders are a good idea. They don't have to be cylinders - just applications stacked vertically. Each application is given a min Y/max Y range it can occupy.
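A simple sketch of that allocation (the even split and the total height are assumptions for the example - a real system might size bands per application):

```python
# Hypothetical sketch: divide the vertical field evenly among stacked apps,
# giving each a (min_y, max_y) band it may draw into.
def allocate_bands(apps, total_height=10.0):
    band = total_height / len(apps)
    return {app: (i * band, (i + 1) * band) for i, app in enumerate(apps)}
```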

I also thought you could probably map applications on the inside of a sphere. Imagine if you had a spherical screen wrapped around you. A mouse would work (mostly) intuitively (some fun stuff might happen at the poles :D) because you're still dealing with a 2d surface. Applications could deal with depth if they want to, but anything they draw has to appear within their sphere mapped curved rectangle "window".
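To make the pole problem concrete, here's a toy sketch of sphere-mapped mouse movement: horizontal motion wraps around in yaw, while vertical motion is clamped short of the poles, where a 2D cursor stops behaving intuitively (the 85-degree cutoff and the sensitivity value are arbitrary assumptions):

```python
import math

# Hypothetical sketch: the mouse moves a cursor over the inside of a sphere.
# Yaw wraps around a full circle; pitch is clamped just short of the poles.
POLE_LIMIT = math.radians(85)  # assumed cutoff to avoid the poles

def move_cursor(yaw, pitch, dx, dy, sensitivity=0.01):
    yaw = (yaw + dx * sensitivity) % (2 * math.pi)
    pitch = max(-POLE_LIMIT, min(POLE_LIMIT, pitch + dy * sensitivity))
    return yaw, pitch
```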
My OS is Perception.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: VRUI

Post by Combuster »

Or you can submit a variant of command lists to the launcher and have things be really 3D rather than some compositing.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: VRUI

Post by AndrewAPrice »

Combuster wrote:Or you can submit a variant of command lists to the launcher and have things be really 3D rather than some compositing.
That is what I'm thinking. The problem we face (and this applies in general to any method that updates live rather than rasterizing the output - 2D or 3D) is that the command list's drawing complexity can slow things down.

With a basic 2D window manager (one that is pixel perfect and doesn't scale, rotate, etc.) the final window contents can be rasterized down, no matter how complex the drawing routine is, into an array of pixels that can efficiently be blitted to the screen.
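The 2D case can be sketched in a few lines - whatever the complexity of the drawing routine that produced the window, the final copy into the framebuffer is a fixed-cost pixel copy (framebuffer layout here is a toy row-of-rows assumption):

```python
# Hypothetical sketch of the 2D case: however complex the drawing routine,
# the window ends up as a pixel array that is copied (blitted) into the
# framebuffer at a cost proportional only to the window's size.
def blit(framebuffer, window_pixels, x, y):
    """Copy a rasterized window (list of rows) into the framebuffer at (x, y)."""
    for row, pixels in enumerate(window_pixels):
        for col, px in enumerate(pixels):
            framebuffer[y + row][x + col] = px
    return framebuffer
```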

With our 3D 'window manager', rotating our view (or even moving our head to the side with head tracking) would change our angle and position in the 3D world, requiring the command list to be redrawn so we see the 3D object from the new perspective - unless the window was rasterized/flattened into a sphere-mapped texture, in which case you'd lose depth.

This may not be a problem if the 3D perspective is static, but when you adjust your view (which, with a sensitive head-tracking unit, may happen slightly on every frame), you're going to have to redraw everything within view each frame, unless you decide to flatten/rasterize inactive applications.
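The trade-off in the paragraph above could be modelled roughly like this (a toy cost model; the window flags are invented for the example) - active windows replay their command list whenever the view changes, while flattened inactive windows are drawn from a cached texture at near-zero cost:

```python
# Hypothetical sketch: count how many command lists must be replayed this
# frame. Inactive windows are assumed flattened to a cached texture, which
# costs only a cheap textured-quad draw (and loses depth, per the text).
def frame_cost(windows, view_changed):
    replays = 0
    for w in windows:
        if w["active"] and (view_changed or w["dirty"]):
            replays += 1    # full redraw from the command list
        # inactive windows: drawn from their cached flat texture, ~free
    return replays
```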

You could rasterize into a 3D representation (voxels) - which would have constant memory/drawing requirements no matter the original complexity - but because our display technology isn't voxel-perfect (whereas a basic 2D window manager running on a 2D screen can be pixel-perfect), we would have issues with image quality.
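The constant-memory claim is easy to see: a voxel grid's footprint depends only on its resolution, not on how complex the command list that filled it was (4 bytes per voxel is an assumed RGBA format):

```python
# Hypothetical sketch: a cubic voxel grid has a fixed memory cost regardless
# of the complexity of the scene rasterized into it.
def voxel_memory_bytes(dim, bytes_per_voxel=4):
    """Memory for a dim x dim x dim voxel grid."""
    return dim ** 3 * bytes_per_voxel
```

Even a modest 64-cube grid per window already costs a megabyte, which hints at why image quality vs. memory becomes the limiting factor.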
My OS is Perception.
onlyonemac
Member
Posts: 1146
Joined: Sat Mar 01, 2014 2:59 pm

Re: VRUI

Post by onlyonemac »

SpyderTL wrote:Another idea, the (running) program window could be represented as a cylinder, that you can "stand" inside and view the inside surface of, and turn your head 360 degrees to see the entire surface. Then for multiple applications you could "stack" these cylinders on top of each other, vertically, so you could look up and down to see multiple applications, or use the mouse/keyboard to slide the whole cylinder stack up or down, like hitting Alt-Tab in windows allows you to cycle through the windows.
Kind of like a cylindrical SphereXP? Not quite VR, but SphereXP is a basic 3D window manager for Windows, and I think it's the most user-friendly 3D interface I have ever seen - one which actually enhances the user experience instead of being hopelessly complicated to use while offering little advantage to the user. It was the last thing that made Windows fun for me before I switched to Linux (and I'm actually quite upset that I couldn't find something similar for Linux - perhaps we need a sphere-based X server?).
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing