Introduction

In June of this year, a group of four of us from Readify had the opportunity to attend the Microsoft Mixed Reality Partner Hack workshop in Melbourne, a four-day event where we got a first in-depth look at the new HoloLens 2. There were essentially two goals: the first was to port an existing HoloLens 1 application over to MRTK v2 and test it on a HoloLens 2 unit, and the second was to incorporate some of the unique new features of HL2. It was also an amazing opportunity to speak with, and learn from, some of the core hardware, software, and design team members working on HoloLens.

We decided to convert Croptimum, an application we had developed for the University of Queensland (UQ). It is a simulation, based on real historical data, that allows students to experiment with and observe the growth and yield of sorghum and wheat crops in various types of soil.

My aim with this post is to avoid hearsay and really focus on the things we got to work with hands-on. The Microsoft website does a pretty good job of enumerating features of HL2, but let's dive deeper!

The New, The Shiny

Before getting into the finer details of design considerations for HoloLens 2, it's worth taking a look at some of the new features we had heard so much about. Again, I'm focusing on the things we actually got to experience first-hand.

Form Factor

I can confidently say that compared to HL1, HL2 is more comfortable... a lot more comfortable! While the total weight is only slightly lower (just over 500 g), distributing the weight by moving the battery behind your head makes a massive difference in comfort for your head and neck; it's one of those design choices that in hindsight seems so obvious. Additionally, HL2 sits on your head much more like a baseball cap than like goggles (a metaphor they used in the workshop). If you have used HL1 for any extended period of time, you have probably found yourself balancing it as high on your head as possible to relieve the extra pressure while keeping the FoV centred. That is essentially how HL2 is designed to be worn by default, with no pressure on your nose whatsoever.

It's also worth mentioning the ability to flip the visor up. Not only is it a great addition for heavy users of HoloLens; as a developer, it's fantastic being able to switch quickly between my computer monitor and the HL2 display.

Field of View (FoV)

Of course, another big improvement is the increased field of view. Beyond just being larger, it has a roughly 3:2 aspect ratio, which feels more natural than HL1's 16:9. Here is a rough comparison:

I admit that while it's definitely a big improvement, it doesn't feel like as big a change as it should, perhaps because you still notice the clipping, especially at the bottom of your vision; it is human nature to look slightly downward. That was certainly the case with our particular application, since the main crop field is often placed somewhere in the lower half of your vision. The improvement is especially noticeable when you get up close to a hologram, which really matters given that so much of HL2's story is about detailed hand tracking and interacting with holograms using your hands.

As a side note: one question I wasn't able to get a clear answer on is whether the FoV limitations in HL1 and HL2 come down purely to cost, and will continue to improve indefinitely, or whether there is a hard ceiling on how large a FoV this technology can achieve...

Display

HL2 uses a new laser-based display system rather than the LCoS-based technology used in HL1. The result is not only an improved FoV but also a much brighter display. In fact, they showed us a couple of newer units that looked noticeably brighter and more colour-accurate than the units we were testing with, which shows Microsoft is still tuning even the hardware right up until release. We noted that the brightness of these units was quite comparable to our laptop screens, and while we only tested them indoors, it's safe to say they will get a lot more mileage than HL1 in outdoor scenarios.

It's worth mentioning that the new display has some particular aliasing artifacts that were not present in HL1. To be fair, I didn't notice this myself at first; it was pointed out to me while looking at a hologram of a plant's roots, which were very thin.

Hand Tracking

This is one of the features that most directly changes how we'll build apps for HL2. The new and improved depth sensor is able to track hand movement and gestures to a surprising level of accuracy, down to individual finger and joint movements. This unlocks a whole bunch of new interaction models, and it feels like even Microsoft's designers are only scratching the surface of them.
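
To give a flavour of what this looks like in code, here's a minimal sketch using MRTK v2 (the toolkit we ported to) that polls the pose of a single joint each frame. The MRTK calls (HandJointUtils.TryGetJointPose, TrackedHandJoint, Handedness) are part of the toolkit; the class name and the idea of snapping an object to a fingertip are just for illustration.

```csharp
// Illustrative sketch (MRTK v2): follow the right index fingertip each frame.
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

public class FingertipProbe : MonoBehaviour
{
    void Update()
    {
        // Ask the hand tracking system for the right index fingertip pose.
        if (HandJointUtils.TryGetJointPose(
                TrackedHandJoint.IndexTip, Handedness.Right, out MixedRealityPose pose))
        {
            // The pose is in world space, so it can drive this object directly.
            transform.position = pose.Position;
            transform.rotation = pose.Rotation;
        }
    }
}
```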

There are some marked limitations to this technology though: because it's ultimately image (depth) based, it can't detect anything that's occluded. So if my right hand covers my left hand, tracking of the left hand breaks. As such, it's best to avoid any interactions where your hands may cross over one another. Similarly, trying to push a holographic button that's sitting flat on a table is not going to work so well, since your own hand ends up hiding your fingertips from the sensors. Another limitation is that, currently, gestures don't always register accurately. Some gestures, like a "pinch" (and an exaggerated one at that), are a lot more effective than a full-hand grab, though I suspect at least some of that can and will be improved on the software side. That is all to say that a solid dose of design care and creativity is needed when building features that use hand tracking. We'll touch on some of these later.
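
It also helps to see why an exaggerated pinch is such a reliable gesture to detect: it boils down to a distance check between two well-tracked joints. Below is an illustrative sketch rather than the toolkit's own implementation (MRTK v2 surfaces pinch/select through its input system); the threshold value and class name are assumptions to be tuned per app.

```csharp
// Illustrative sketch: crude pinch detection from MRTK v2 joint data.
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

public static class PinchCheck
{
    // Assumed threshold: roughly 2-3 cm between fingertips reads as a pinch.
    const float PinchThreshold = 0.025f;

    public static bool IsPinching(Handedness hand)
    {
        bool gotThumb = HandJointUtils.TryGetJointPose(
            TrackedHandJoint.ThumbTip, hand, out MixedRealityPose thumb);
        bool gotIndex = HandJointUtils.TryGetJointPose(
            TrackedHandJoint.IndexTip, hand, out MixedRealityPose index);

        // If either joint isn't currently tracked (e.g. occluded), report no pinch.
        if (!gotThumb || !gotIndex)
        {
            return false;
        }

        return Vector3.Distance(thumb.Position, index.Position) < PinchThreshold;
    }
}
```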

The "Wow" Moments

AR technology, much the same way as VR just a few years ago, is at such an early stage of maturity that every incremental change feels like a massive leap. The combination of all the new features of HL2 makes it a no-brainer over HL1. For me, there were a couple of magic moments that helped it feel like this tech is here to stay, at least in some shape or form. My scientific definition of a "wow" moment is when I physically can't stop grinning...

The first was when I had placed a hologram in a conference room and decided I wanted to create a recording, but the room was too busy. I simply picked up the hologram with one hand, walked 20 meters out of the conference room and into the lobby, dropped it on a bench, turned it around by grabbing a corner, and started recording... Probably the closest thing I can compare it to is the first time I grabbed a smartphone from my pocket and checked my email.

The other was when I placed a hologram of a crop field, which we would typically project at a small scale that fits on a table, and decided to scale it up to near full size. It's hard to describe being able to just walk up to and inspect a life-sized holographic sorghum plant. It's one of those moments you simply can't reproduce on any other digital medium!
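
For what it's worth, none of that direct manipulation needed bespoke gesture code on our side. In MRTK v2, a hologram becomes grabbable, movable, rotatable and scalable by adding a couple of stock components; here's a rough sketch (the helper name is ours, and the exact setup varies between toolkit versions).

```csharp
// Illustrative sketch: make an existing hologram hand-manipulable in MRTK v2.
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public static class HologramSetup
{
    public static void MakeGrabbable(GameObject hologram)
    {
        // A collider is needed so hand rays and fingertips can hit the hologram.
        if (hologram.GetComponent<Collider>() == null)
        {
            hologram.AddComponent<BoxCollider>();
        }

        // Handles one- and two-handed move, rotate and scale gestures.
        hologram.AddComponent<ManipulationHandler>();

        // Additionally required for near interaction (reaching out and grabbing).
        hologram.AddComponent<NearInteractionGrabbable>();
    }
}
```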

In part 2 of this post, I'll delve deeper into design considerations with HoloLens 2 and some of our learnings in converting an existing experience for HoloLens 2.