Lenovo and HTC plan to launch some of the first standalone Daydream VR headsets later this year, and Google has developed a reference platform with Qualcomm that will let other companies bring their own to market.
Since these devices don’t use your phone or a computer, all you have to do is put on the headset to start a virtual reality session. There’s nothing to set up, and no wires to plug in (or trip over).
While first-gen Daydream systems only let you look around a room, Daydream 2.0 will work with standalone headsets to offer six degrees of freedom (6DoF) positional tracking, which means you can also move up and down, left and right, or forward and backward.
But at the Google I/O conference, Google representatives are explaining to developers that they may need to do things a bit differently when creating apps for the new standalone headsets.
Existing Daydream-compatible apps will work on the new headsets. But since there’s no phone that you can remove to complete some actions on a touchscreen, Google says developers need to create VR-friendly experiences for doing things like signing into an account or updating payment information.
Google also notes that many people will likely be using Daydream standalone headsets in relatively tight spaces. So the company discourages developers from creating apps that ask people to actually walk around. Instead, the idea is to let people sit or stand, but also to lean and shift their balance for a more realistic experience.
For example, the company showed off a dodgeball demo game, where you lean out of the way of incoming balls.
Among other things, Google encourages use of “teleportation” as an alternative to walking. You can move through virtual worlds with the use of a Daydream motion controller by pointing at the space where you want to be and clicking to teleport to that position.
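The point-and-click teleport mechanic is simple enough to sketch in a few lines. This is an illustrative approximation, not Daydream's actual API, and every name in it is made up: the controller's pointer ray is intersected with the floor plane, and a click moves the player's origin to the hit point.

```python
# Hypothetical sketch of pointer-based teleportation (names are illustrative,
# not from any real VR SDK): cast a ray from the controller, intersect it
# with the floor plane y = 0, and on a click move the player to that point.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def intersect_floor(origin: Vec3, direction: Vec3) -> Optional[Vec3]:
    """Return where the pointer ray meets the y = 0 floor, or None if the
    ray points upward or parallel to the floor."""
    if direction.y >= 0:
        return None
    t = -origin.y / direction.y  # distance along the ray to reach y = 0
    return Vec3(origin.x + t * direction.x, 0.0, origin.z + t * direction.z)

def on_click_teleport(player_pos: Vec3, controller_pos: Vec3, aim: Vec3) -> Vec3:
    """On a controller click, jump the player to the pointed-at floor spot;
    if the pointer misses the floor, stay put."""
    target = intersect_floor(controller_pos, aim)
    return target if target is not None else player_pos
```

The key design point is that the jump is instantaneous: because the player never translates continuously through the space, there is no path along which they could drift into a real-world wall.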
It might not be as natural as walking, but it should help keep you from bumping into real-world walls that you can’t see. Other VR systems (like the HTC Vive) have technology to detect physical objects in your path and warn you to stay out of their way.
I don’t think there are any VR systems that use cameras in the way you describe. The Vive, for instance, actually has a camera, but it isn’t used in normal interaction and doesn’t even need to be on. The “chaperone” system is based on a fixed, user-drawn boundary positioned relative to the static laser base stations; it doesn’t use cameras at all, just precise timing of laser sweeps hitting IR sensors on the tracked components.
Cameras, lasers… whatever. They both use light, right? I’ll update the article 🙂
Sort of? I think you’re still giving the impression that these devices are smarter than they actually are. Don’t get me wrong, I love my Vive, but the chaperone system is just a static, user-drawn set of borders. The Vive doesn’t have the capability to “detect physical objects” besides the headset, the controllers, and the upcoming Trackers. If someone moves a chair, or a cat wanders into the designated area, the Vive has no way of knowing about the change.

The universe, as far as the Vive knows, consists of two Lighthouses, a headset, and some controllers. The chaperone space is relative to the Lighthouses; as long as the Lighthouses are fixed *relative to each other*, the chaperone walls and the “floor” beneath them will stay fixed relative to the Lighthouses, even if the room were spinning, gravity were turned off, or objects were randomly moving through the space (so long as they didn’t block tracking).
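To make the commenter’s point concrete, here is a hypothetical sketch (not SteamVR code; every name is invented) of what a static chaperone amounts to: a fixed polygon stored in tracking-space coordinates, with a warning raised whenever the headset’s x/z position comes within some threshold of any wall segment. Nothing in it senses the real room.

```python
# Illustrative model of a static chaperone boundary (all names invented).
# The border is a fixed, user-drawn polygon in tracking space; the system
# only compares the headset's position against it, never the real room.
import math
from typing import List, Tuple

Point = Tuple[float, float]  # (x, z) in tracking space, metres

def dist_to_segment(p: Point, a: Point, b: Point) -> float:
    """Distance from point p to the line segment a-b."""
    ax, az = a
    bx, bz = b
    px, pz = p
    dx, dz = bx - ax, bz - az
    seg_len_sq = dx * dx + dz * dz
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, pz - az)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (pz - az) * dz) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), pz - (az + t * dz))

def chaperone_warning(headset: Point, border: List[Point],
                      threshold: float = 0.3) -> bool:
    """True when the headset is within `threshold` metres of any wall of the
    user-drawn border polygon. A moved chair or a wandering cat changes
    nothing here, which is exactly the commenter's point."""
    n = len(border)
    return any(
        dist_to_segment(headset, border[i], border[(i + 1) % n]) < threshold
        for i in range(n)
    )
```

For example, with a 2 m x 2 m square border, a headset in the center triggers no warning, while one 10 cm from a wall does.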