Advantages and disadvantages of Deviceless

Developing augmented reality projects for the HoloLens 2 can be quite challenging. Even when you have a device, the deployment time for testing can be long and repetitive. What if you are left working on a project without easy access to a device? Especially at the present time with COVID-19, where everyone is working from home, sharing a HoloLens with your colleagues can be tough or simply not feasible. In this series I will discuss various topics that aid development and give you an idea of the concepts used in building Mixed Reality applications. In part one of two blog posts I will cover the following:

The pros and cons of device vs deviceless
Unity's build procedure for HoloLens applications
Unity's pre-processor directives

Physical Device

Firstly, let’s talk about the benefits and drawbacks of both device and deviceless scenarios.

Having a physical device gives you access to practical testing without the need to simulate the natural environment or surrounding area where the HoloLens 2 will be utilised. This greatly mitigates the risks that arise from assuming your application will work in the targeted environment.

Another huge benefit of having a device is 'what you see is what you get'. Testing your solution on a device allows you to truly verify that everything is in working order, for example that the user interface works well in different lighting conditions.

Another advantage of having a device is the ability to perform user testing. This is especially true if your application has a wide use case: you want participants to test as much as possible using the device. There may be issues that are not picked up until different users interact with the UI and follow the intended flow through the application.

There are negatives to having a physical device too, which is where the techniques below come in to help you overcome them.

If you are working with Unity, building and packaging for deployment to the HoloLens 2 can be lengthy. This is due to the build procedure that runs to provision the application for multiple platforms and device architectures. Alterations to the application take time: small changes such as changing the colour of a UI element or updating hard-coded text all require a rebuild to be visible on the device. However big or small the change is, it still needs to go through this process.

My last two points for negatives are maintenance and availability. Since you have a physical apparatus, you will need to take care of it, making sure it has charge and that everything is functional. Picking up a device you share with your team and finding it has no charge can be a common occurrence. Lastly, availability can be a downfall: since these devices are not cheap, you might not have the luxury of one per team member, which means sharing, and that can cost time and cause problems with testing.

Deviceless – Unity Editor/Emulator

When I refer to deviceless, I am talking about the Unity Editor’s inbuilt simulation tools and the Microsoft HoloLens 2 emulator.

Starting with the benefits, these largely mirror the cons discussed earlier, with some additions. Firstly, maintenance is almost non-existent. Since everything is simulated and emulated, you can fire up the editor and test quickly and efficiently. Debugging your code through the Unity scripts is straightforward, and you don't have to wait for a physical connection to be established with the device. This is by far the best part of testing deviceless and one of its major benefits. Working with your designer colleagues on changes to the application or solution you are building often calls for rapid iterations, and quick testing in the editor keeps that progress going.

Working deviceless also has its drawbacks, and I will explain them so you are aware of them when working on a project.

Testing is much harder as the input is now simulated. An example of this is testing distance and scale. You must adhere to design guidelines for positioning UI elements within the user's field of view, because something that looks right in the Unity editor, such as the placement of a popup box, may turn out to be hard for the user to reach or interact with on the actual device. This affects ease of access for the user. The same applies to visuals: a bright red colour might look great in the Unity editor or emulator but might not come through the same on an actual device. Another downside to working deviceless is the hand manipulation controls; they can be cumbersome and take some getting used to.

Unity HoloLens 2 Build Procedure

I briefly spoke about the build procedure that must take place to get your application built and running on a mixed reality headset such as the HoloLens 2. It can take a substantial amount of time due to the complex processes involved. Unity outlines instructions in their documentation that can be followed to reduce this time. However, testing in the Unity editor is ideal, as I will explain further down.

[Diagram: flow of the Unity build procedure for a HoloLens 2 UWP application]

Above is a modified diagram showing the flow of the build procedure involved in producing a HoloLens 2 UWP application. Unity allows developers to build for many different platforms and architectures. To enable this, Unity uses a process called IL2CPP. It takes the written C# scripts from your Unity application, compiles them to Intermediate Language (IL), and then converts this to C++ code ready to be compiled for a specific architecture and platform; in this case the HoloLens platform, which is ARM, or, if you are running the emulator, x86_64.

Unity’s Platform Dependent Compilation

One of the benefits of working with Unity's C# scripting engine is the ability to use pre-processor directives. For those reading who have never used them before, these useful scripting macros give you the ability to conditionally compile parts of your C# script logic. There are many ways to use this to your advantage when working on HoloLens 2, such as when using device-specific features like spatial anchors. Below are just some of the directives that Unity provides.

Define: Function
UNITY_EDITOR: #define directive to call Unity Editor scripts from your game code.
UNITY_EDITOR_WIN: #define directive for Editor code on Windows.
UNITY_EDITOR_OSX: #define directive for Editor code on Mac OS X.
UNITY_EDITOR_LINUX: #define directive for Editor code on Linux.
UNITY_STANDALONE_OSX: #define directive to compile or execute code specifically for Mac OS X (including Universal, PPC and Intel architectures).
UNITY_WSA: #define directive for the Universal Windows Platform.

An example use case for these pre-processor directives is using UNITY_EDITOR to isolate code that emulates spatial anchors. Since we cannot test this functionality inside the editor, it can be emulated instead. Below is an example splitting the script logic into two parts: one that executes when the app is running on a device, using the WorldAnchorStore, and another that runs inside the editor.
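A minimal sketch of what that split could look like, using Unity's legacy WSA WorldAnchorStore API on the device path (the AnchorPlacer class and anchorPrefab field are illustrative names, not from the original post):

```csharp
using UnityEngine;
#if !UNITY_EDITOR
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Persistence;
#endif

public class AnchorPlacer : MonoBehaviour
{
    // Prefab to instantiate at the anchored position (illustrative).
    public GameObject anchorPrefab;

#if UNITY_EDITOR
    // Editor path: no real anchor store exists, so we simply
    // instantiate the object at the requested position to emulate one.
    public void PlaceAnchor(string id, Vector3 position)
    {
        Instantiate(anchorPrefab, position, Quaternion.identity);
        Debug.Log($"[Editor] Emulated spatial anchor '{id}' at {position}");
    }
#else
    // Device path: instantiate the object, attach a WorldAnchor and
    // persist it in the WorldAnchorStore so it survives app restarts.
    public void PlaceAnchor(string id, Vector3 position)
    {
        GameObject go = Instantiate(anchorPrefab, position, Quaternion.identity);
        WorldAnchor anchor = go.AddComponent<WorldAnchor>();
        WorldAnchorStore.GetAsync(store => store.Save(id, anchor));
    }
#endif
}
```

Because the directive is evaluated at compile time, only one of the two method bodies ever ends up in the built player; the editor build never even references the WSA assemblies.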

Both of these snippets instantiate game objects in the scene but have different implementations. Using this technique, we can implement similar concepts in our own applications. In a scenario where you want more control, you can use another handy directive called UNITY_WSA. Let us say you want to determine whether your code is explicitly running on a HoloLens. Below is a snippet showing how you could go about doing this.
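One possible sketch, combining the UNITY_WSA directive with a WinRT device-family check (the DeviceCheck helper is my own illustrative name; the Windows.* call requires a UWP build where ENABLE_WINMD_SUPPORT is defined):

```csharp
using UnityEngine;

public static class DeviceCheck
{
    // Returns true when the app is running on a HoloLens (illustrative helper).
    public static bool IsRunningOnHoloLens()
    {
#if UNITY_WSA && !UNITY_EDITOR
    #if ENABLE_WINMD_SUPPORT
        // On UWP builds we can inspect the device family via WinRT.
        return Windows.System.Profile.AnalyticsInfo.VersionInfo.DeviceFamily
               == "Windows.Holographic";
    #else
        return false;
    #endif
#else
        // In the editor or on other platforms we are never on a HoloLens.
        return false;
#endif
    }
}
```

Note the two layers here: UNITY_WSA narrows the code at compile time to UWP builds, while the device-family string distinguishes a HoloLens from, say, a Windows desktop at run time.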

Utilising this technique, I have created a POC video showing this function in use. One example of where this could be applied is rendering HoloLens-specific UI elements in your application.

Example Code Script:
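As a hedged sketch of that idea (the PlatformUI class and the two menu fields are illustrative assumptions, not the exact script from the POC):

```csharp
using UnityEngine;

public class PlatformUI : MonoBehaviour
{
    // UI roots to toggle per platform (illustrative names).
    public GameObject holoLensMenu;
    public GameObject fallbackMenu;

    void Start()
    {
#if UNITY_WSA && !UNITY_EDITOR
        // UWP build running on the device: show the HoloLens-specific UI.
        holoLensMenu.SetActive(true);
        fallbackMenu.SetActive(false);
#else
        // Editor or other platforms: show the fallback UI instead.
        holoLensMenu.SetActive(false);
        fallbackMenu.SetActive(true);
#endif
    }
}
```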

Wrapping Up

Saving time by testing in the Unity editor is by far the best approach to get instant feedback and iterate on your project. Utilising the pre-processor directives built into Unity allows you to implement functionality that you would usually have to test on the device itself via the build pipeline. In part two of this series, "Working deviceless", I cover the MRTK (Mixed Reality Toolkit) input simulation options, the HoloLens 2 emulator, and the functionality that assists in rapid development of robust applications.