Slides and code samples for VS Live Redmond

HoloLens, MVVM, Technical stuff, Universal Windows Platform UWP, VSLive, Windows 10, Work, WPF, Xamarin, XAML

VS Live just took place in Redmond, and I had a great time. I had three sessions in one day, and I was really exhausted in the evening, but it was absolutely worth it. Speaking in Building 33 (the conference center on Microsoft campus) was an amazing experience. I have spent so many hours in this building, listening to amazing speakers of Microsoft and others, during MVP summits and other events… so really it was quite magical to be on the speaker side this time, in room St Helens.


Thanks to everyone who came to my talks! I hope they were informative and useful, and that they encourage you to try these technologies and techniques.

Here are the pages for the talks I gave:

Windows Presentation Foundation (WPF) 4.6
Windows Presentation Foundation is what people are using to build real applications for the enterprise, the industry, the workplace, and for every situation where Windows 10 Universal isn’t quite ready yet. Far from being dead, WPF is 10 years old this year, and it’s still alive and kicking. It gives Universal Applications a run for their money. In this session, you’ll learn what is new in Windows Presentation Foundation, where it’s going in the future, and what you can achieve with WPF that Universal Application developers can only dream of. We’ll also see how these two roads cross and how existing WPF applications can be brought to Windows 10 using the Centennial bridge. Finally we’ll discover new features and tools recently implemented for WPF developers.

Windows 10 – The Universal Application: One App To Rule Them All?
Windows 10 and the Universal Windows Platform offer a lot of productivity and flexibility around targeting the broad set of devices that run Windows. As a developer, you have a lot of choice–from building a single binary that is identical on all devices, through to an app that adapts to the type of device and on to the point of building an entirely different app for each class of device. What’s the right thing to do? How should you think about building the “One App to Rule Them All?” What are the design and implementation trade-offs you need to consider? This session dives into these areas with a hands-on approach and shows what it really means to be building apps across families of Windows devices that have different capabilities. We will also talk about bridges (and especially the iOS Bridge to Windows 10), and new platforms such as Continuum and HoloLens (with live demos).

Building Truly Universal Applications with Windows 10, Xamarin and MVVM
With Windows 10 supporting an unprecedented number of platforms and form factors (from IOT to phones to tablets to laptops and desktops to XBOX and Surface Hub, and even the new holographic computer HoloLens), the name ‘Windows 10 Universal application’ is fairly accurate. But to be honest, shouldn’t a truly Universal application run on Windows 7, iOS and Android devices too? Thankfully, this is possible thanks to a clever architecture pattern named Model-View-ViewModel, the .NET portable class libraries and the Xamarin frameworks. With these tools, we can structure an application so that most of the code is shared across all the platforms, and then build truly native UI that adapts without any compromises to the device it runs on. In this session, we will understand exactly how such universal applications are built. Laurent Bugnion, a XAML/C# expert, Microsoft and Xamarin MVP who started making universal applications before it was even a thing, will show you practical knowledge with a lot of demos. Come hear from the creator of the popular MVVM Light Toolkit how this powerful but simple library can be leveraged to help you target more users than you ever dreamed of!

Happy coding!
Laurent


Unity: Adding children to a GameObject in code and retrieving them

HoloLens, Technical stuff, Unity, Work

Update: The following feedback was pointed out to me:

  • You should be careful when using GameObject.Find(…). First, it is bad for performance (note that in my code, I was using it in the Start method and then caching the result, which is OK from a performance standpoint according to the documentation). Second, it relies on strings to pass the name of the object you are looking for, which is not easily maintainable. Instead, it is better to set a public field in your script and then assign the GameObject that you need in the Unity editor. To keep things simple, I won’t be doing this here, but check the blog post here for more information
  • When you remove objects from a scene, you should always start with the last object and work your way backwards.

End of update

When you work in the Unity editor, it is quite natural to use hierarchies of objects. For instance, you will have a table object, and on this table object you want to place some cup objects; if you move the table, you want the cups to move too. That’s quite a natural thing to do because it corresponds to the way that things are organized in “real life”. In fact, it even makes sense to have a hierarchy where the parent is an empty GameObject (which will be invisible); this way you can create logical groups of items.

For example, you can go to the Editor’s Hierarchy panel, create an empty GameObject (right-click on the panel and select Create Empty), and name it “Container”. Then you set a transform on the Container, for example position = (0, 0, 2), meaning that the Container will be positioned 2 meters in front of the origin point.


Then you can right-click on the Container and select 3D Object / Cube to add a cube to this parent. If you do so, the new Cube will appear at the exact same position as the Container, as you would expect.


In code, however, this is a little trickier. This is where we realize that there is no true hierarchy of objects in Unity, but rather a hierarchy of transforms. Let’s see how to do that:

Creating new objects below the Container

Since we have an empty GameObject named Container in the scene, we can retrieve it in a script with the following code:

_container = GameObject.Find("Container");

The _container field, as you would expect, is of type GameObject. If you inspect it in the debugger, you will see that it is placed, as expected, at (0, 0, 2) (this is the transform.localPosition).
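As a side note, here is a minimal sketch of how this lookup can be cached in a script’s Start method, as mentioned in the update above (the class name ContainerDemo is just an example):

using UnityEngine;

public class ContainerDemo : MonoBehaviour
{
    // Cached reference to the Container GameObject.
    private GameObject _container;

    private void Start()
    {
        // GameObject.Find is acceptable here because it runs only once and the result is cached.
        _container = GameObject.Find("Container");
    }
}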

Now let’s create a new Cube and add it to the GameObject. Unfortunately, we notice that there is no Add or AddChild or similar method on the GameObject class. This is where we need to work with Transforms. The following code helps:

var newCube = GameObject.CreatePrimitive(PrimitiveType.Cube);
newCube.transform.SetParent(_container.transform, true);
newCube.transform.localScale = new Vector3(0.2f, 0.2f, 0.2f);

In this snippet, we create a new Cube of 20 cm size, and we specify that the Cube’s transform’s parent is the Container’s transform (incidentally, Unity says that you should use SetParent and not the parent property directly, even though it can be set; welcome to the ugly world of scripting ;)). This will effectively create a hierarchy like we have in the editor. But there is a catch! If you run the code now, you will notice that the new Cube doesn’t appear, as you would expect, at (0, 0, 2), but at (0, 0, 0). If you run this on a HoloLens, you will get pretty confused, because (0, 0, 0) is probably going to be your head, so the Cube is placed around your head and you won’t even see it until you move to a different location.

If you inspect the newCube in the debugger, you will see that its localPosition is set to (0, 0, -2). So it seems that Unity went out of its way to misunderstand what I was trying to do, and forced the newCube location to be at (0, 0, 0) globally, which means (0, 0, -2) relative to the parent. Ugh…

When reviewing the documentation, I found an overload of SetParent which takes a parameter named worldPositionStays of type bool. I thought that was promising, but setting this parameter to true didn’t change a thing. I also tried variations of the calls above. The newCube still appeared at (0, 0, 0). So to fix it, I forced the localPosition to be at (0, 0, 0) (relative to the Container). This way the global position of the Cube is (0, 0, 2).

newCube.transform.localPosition = new Vector3(0f, 0f, 0f);

This sounds unnecessarily complicated, so if I am doing things the wrong way, please add your knowledge in the comments, thanks!

Retrieving the children

Now how can we retrieve the children and iterate over them? For instance, this can be useful if you want to clean the scene by removing all the Cubes, but leaving the Container in place so the user can add more Cubes. Here too, we need to work with the transform hierarchy. Here is the code:

Note: For this simple demo I am not going to update the code, BUT as was pointed out in the comments, you should rather remove objects starting from the end of the hierarchy!

foreach (Transform t in _container.transform)
{
    var cube = t.gameObject;
    Destroy(cube);
}

In this code, we get the transform property of the Container and iterate through all its children. For each child transform, we retrieve its gameObject property, which corresponds to the Cube that we want to delete, and we delete it by calling the Destroy method.
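For reference, here is a minimal sketch of the reverse-order removal mentioned in the note above, iterating from the last child back to the first:

for (var i = _container.transform.childCount - 1; i >= 0; i--)
{
    // GetChild returns the child transform at the given index.
    var cube = _container.transform.GetChild(i).gameObject;
    Destroy(cube);
}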

Hopefully this quick tutorial will help you when you do this kind of thing. Structure is very nice, but structuring a scene in code is a bit tricky, so this should save you some time. Again, if you see something that can be improved, don’t hesitate to point it out in the comments.

Happy coding!
Laurent


Slides and sample code for my presentations at #VSLive Boston

.NET, Conferences, Universal Windows Platform UWP, VSLive, Windows 10, Work, WPF, XAML

Thanks to everyone who came to my sessions at VSLive Boston. I had a great time. I hope it was informative and useful. I am aware that you take time out of your job to come and see us speak and I really hope that you found it worth your time.

I had two sessions:

Windows 10 – The Universal Application: One App To Rule Them All?

You can find the slides and sample code for this session here. This page also links to a video showing how Windows 10 Universal apps work on HoloLens!

Windows Presentation Foundation (WPF) 4.6

Here are the slides and sample code.

Thanks again for your warm welcome in Boston!! I even had some time to visit the city and had a blast exploring the historic sites.

Happy coding
Laurent


A world of devices – Upcoming talk

Personal, Technical stuff

In September, I will give a special session in Amsterdam, at an event titled LevelUp Mobile. My talk is titled “A World of Devices”. Here is the abstract:

How do you feel when you forget your phone? If, like Laurent, you feel lost, you are probably also living in a world where devices are augmenting you and making you more connected, more efficient and more skilled (or is it addicted?). In this session, Laurent Bugnion, geek, developer and gadget addict will show you next generation devices and the collaboration between them. From smartphones to smartwatches, from IOT and phablets to XBOX and Surface Hub, from Cortana to HoloLens, we live in a world of devices where software is coming out of the computer more and more. We’ll take a quick look at the past, a good look at the present and a glance at the future with lots of demos.

This talk is based on thoughts that have been in my mind for many years already: our devices, even the so-called smartphone (which, let’s face it, is not a phone but rather a pocket-sized computer with which you can, if you have to, place calls over the phone network), are augmenting us with superhuman abilities. How else could we know, with a precision of a meter or less, where we are positioned geographically? How else could we have access, from almost anywhere in the world (OK, that’s an exaggeration, but it is close enough for most of the civilized world), to a very significant portion of human knowledge? How else could we have access, anywhere and at any time, to enough movies, music and books to educate and entertain us for multiple lifetimes? How else could we keep in touch 24/7 with our families and friends all over the world?

The smart device we have in the pocket is effectively turning us into cyborgs. Augmented humans. And this is only the beginning.


These days, we witness an incredible number of new devices, with new form factors. Maybe the most intriguing ones are the ones placed in the “wearable” category. The most common are smartwatches (which, like smartphones, are much more than “just” watches). In their lowliest form, they need a connection to a smart device to be effective. But many are doing more than that. For instance, the Microsoft Band 2 has a built-in GPS, so it can record a run even without the help of a connected device. It also has a wealth of sensors that can be used to build mobile applications: a pressure sensor (so you can use it as a barometer / altimeter), heart rate, pedometer, distance, stairs, accelerometer and gyroscope (so you can use it to control 3D scenes or even physical objects), and even a UV meter to tell you when you should put on sunscreen.

We also saw the appearance, and then the quasi-disappearance, of connected glasses made by Google. It was certainly an interesting sociological experiment. Very soon we witnessed a lot of resistance in the population. Many felt harassed, their privacy violated by the glass wearers (who were often referred to as “glassholes”). They felt that the wearers were displaying arrogance and invading their private lives, possibly recording video or taking pictures. The fact that the device had a low battery life and a low resolution did nothing to ease this concern. Neither did the fact that our lives are already recorded all the time, either by so-called security cameras or by the constant picture-taking that everyone is doing these days. For some reason, the Google glasses were different and were disturbing the peace much more than any other device. This is (partially, to be completely fair) why they failed to gain traction, and why you don’t see them anymore these days. There is no doubt in my mind that more such devices will appear on the marketplace at some point in the future, although probably in a different form factor, less conspicuous and closer to normal glasses. There are already amazing devices available, such as this model which allows an app to inform a person with impaired vision about what is happening around him/her. See Saqib Shaikh’s great video shown at Microsoft Build this year!

This brings us, of course, to augmented and virtual reality. Even though these are not new domains, we are seeing a lot of interest and new devices this year. Oculus Rift is releasing its second iteration, and HTC is selling its Vive device, which is pretty great. Even on the low end of the marketplace, we see devices like the Samsung Gear, which takes advantage of smart devices and turns them into a poor man’s VR headset. And on the AR front we have, of course, Microsoft HoloLens, released to developers: an incredible device that not only places holograms in your everyday world, but also interacts with the real world and allows, for example, virtual objects to bounce off real tables and walls, and virtual characters to sit in real chairs.

And the most beautiful part is when all these devices, from small to large, interact together and create a connected, augmented environment. This is the promise of Windows 10 and its universal applications, of cross-platform computing which has never been so promising, and of wireless protocols that connect devices at very short or very long range and allow them to collaborate.

At the LevelUp event, I will be talking about these concepts but also demonstrating some of the futuristic features that are already available to developers and to the public. I am really looking forward to having the occasion to share these thoughts with the audience, and to demonstrating how far we have come with devices and their interactions.

Happy coding!
Laurent


Back from Sweden! Slides and code for my #DevSum16 session

.NET, Conferences, HoloLens, Technical stuff, Universal Windows Platform UWP, Windows 10, XAML

I am back from Stockholm and the DevSum conference! It was a great trip, what a beautiful city and lovely people! It was great to be there with so many friends from our beautiful community, and especially my colleague and friend Rene Schulte.

On Thursday, I started by helping my good friend Tim Huckaby with his keynote. My role was very limited, just monitoring the live stream coming from Tim’s HoloLens. Tim is a great speaker and his keynote was very interesting and funny. So nice to see him! Later, Rene did a talk about HoloLens 3D development. Always a great pleasure to see Rene show how to use Unity and build 3D apps!


On Friday, it was my turn to speak. I had a Windows 10 Universal session and I decided to spice it up with a few demos of new platforms. It was a bit scary because there were quite a few moving pieces, and everything had to play together perfectly. And it almost did! The only thing I didn’t think about was that the Continuum dock is “protected” with HDCP. That means that you cannot connect the Dock to a projector. Such an annoying (and useless) feature! Thankfully, the conference center where DevSum took place had awesome technicians, and one of them saved the day by connecting an HDMI-to-VGA adapter, which circumvented the issue.

The talk was quite beefy: we defined what a Universal app is, then we talked about Adaptive UI. We discussed the Centennial and Islandwood bridges before talking about Continuum and HoloLens. The final demo saw me switch my presentation to my Windows 10 mobile phone (950XL), show some slides in PowerPoint, then demo some Universal apps on this platform. Finally, I started the HoloLens application on my phone in Continuum mode. Since this is also a Universal app, it adapted to the big screen beautifully and I could stream what I was seeing through my HoloLens. I demonstrated how universal applications run on the holographic platform, including our own apps. It’s just as simple as deploying them to the HoloLens!


We definitely live in exciting times, and it’s really great to be working on these new platforms. You can find the slides and sample code for this talk on my website. The session was recorded, and I will tweet when the recording is available for your viewing pleasure, so stay tuned to my Twitter feed. Thanks to everyone who came to see this session. I really hope it was informative and useful to you!

Happy coding
Laurent


My #HoloLens 101 notes

HoloLens, Technical stuff, Work

Yes, finally I have a HoloLens device (thank you, IdentityMine, for footing the bill and facilitating the purchase). Even though IdentityMine has a long history of 3D development, and we have been working with HoloLens devices for the past few months in collaboration with Microsoft, so far my input has been limited to helping the team brainstorm concepts, test the apps, give feedback, and help with some presales activities. Unlike some of my colleagues, I am not a 3D developer yet, and I am eager to learn.

Here are a few notes I took during the learning stages. I am sure that many of you are in the same boat as I am, and these may come in handy if you want to progress with HoloLens and Unity3D development.

A few preliminaries

As I mentioned, IdentityMine has been developing for HoloLens for a while now. You can see a session that my esteemed colleague Rene Schulte and I gave at the recent Build 2016 conference about an application that we built.

Even though you can code HoloLens applications with any 3D framework that supports Direct3D, the consensus these days (and the solution that Microsoft recommends) is to use Unity. This is middleware: a set of APIs that produces code which then runs on a number of platforms, including HoloLens.

Rene Schulte has a very good blog post with recommendations. You should probably read it too!

Charging the device

You would think that charging the device is trivial! Well in fact… it charges with a micro-USB cable similar to what you use for your mobile phone (unless you have the latest generation, which uses USB-C). As for mobile phones, there are two kinds of cables: charging only, and charging+data. Whichever you use, make sure that you test your cables and use the ones which charge the fastest. When testing cables to charge my mobile phones, I found huge discrepancies (some cables charge up to 3-4 times faster than others). A good choice is to use the cable and charger which come with the HoloLens device itself!

With a full charge you will be able to use the device for a few hours. I didn’t time it really, but when learning to code, I leave the device on and use it on and off for a whole evening without needing to charge. You can then charge it overnight, which is very convenient (it takes a while to get a full charge, so plan accordingly!).

Calibrating

Before you start using the device, you need to calibrate it. This is important so that your eyes can focus properly on the holograms. The Calibration app takes a couple of minutes to execute. The very first time you start the device, you will also get a tutorial about gestures. Later on, the Learn Gestures app can be started separately. Probably a good idea to run through this basic information! Note that you can modify and retrieve the Interpupillary distance (IPD) from the Device Portal (see below). This is an important value for a good experience!

Using apps

What better way to get started than to use apps! For a first try, the Holograms app (preinstalled) is cool: It allows you to place holograms anywhere in the room, and observe them from diverse angles. There are even some video holograms. It’s a good idea to use this app to learn to interact with the menus. Note that you will need an internet connection for this app to work!

In addition to the preinstalled apps, you have a choice of apps developed specifically for HoloLens in the store. I like HoloTour, which gives you a guided tour of Rome and a high plateau in Peru. I hope we will see more content coming soon! Also available are games like Young Conker and RoboRaid. Another app named Holo Anatomy will teach you about the human body. While HoloTour, Young Conker and RoboRaid are what I would call full-blown apps, Holo Anatomy is more of a proof of concept. Still cool to try!

Make sure to try the Fragments game too! It is probably the most compelling holographic experience at the moment, and it integrates very nicely with the room you are in. One of the characters was even sitting on my office chair, for example!

There is also a great app developed based on suggestions from the user community, called Galaxy Explorer. This app is available as source code on GitHub, and you need to build it yourself. See below in the “Learning to deploy” section.

You can also install 2D universal apps for Windows 10. Any UWP app should run fine though some (like Skype) have been customized for the HoloLens device. I installed OneDrive (which allows me to watch videos stored in the cloud), Groove Music (which lets me stream music from my OneDrive), Cast (a great universal podcast app which synchronizes content between all Windows 10 devices) and a few more.

Unfortunately I cannot find a way to download and save a file on the device, so for now the only media you can consume is online streamed media. This is OK but I hope we’ll be able to get some media on the drive for offline consumption at some point.

Here are a few screenshots. Note that some are screen grabs of the portal’s video stream, so the quality is not great, but they illustrate the point. The actual quality is much better!

The game “Fragments” is one of the most impressive HoloLens experiences at the moment.

Groove Music and Skype pinned to my home office wall

On a HoloSkype call with my colleague Rene. We both can draw on the scene (him in green and me in blue).

Finding your device’s IP address

On many occasions you will need your device’s IP address. To find it, put the device on, and then say “Hey Cortana”. After Cortana shows up and you hear the sound meaning that she is listening, ask “What is my IP address?”. It will show up. Alternatively, you can open the Settings app, then navigate to Network & Internet, Wi-Fi, Advanced Options, and write down the IPv4 address. Note that you can also find it using your router’s configuration menu, if you have access to it. It might be a good idea to configure your router to always assign the same IP address to the device.

Using the device portal or the app

The device portal is a must-try. Opening it is quite easy: first you need to find your device’s IP address (see above). Then you can enter this IP address in a web browser (ignore warnings about an invalid security certificate), and it will connect to your device, which acts as a web server. Note that some networks block this, so make sure that the network you use allows it.

Alternatively, you can try the HoloLens app available here. The cool thing is that it is a universal app, and so you can also let it run on a Windows 10 mobile phone. UWP FTW!!

The web portal lets you do the following operations:

  • Check the device status and other information such as device name, Windows version, online status, temperature, battery level, etc.
  • Check your interpupillary distance (IPD) and modify this value.
  • Change sleep settings. I recommend setting a high enough value so the device doesn’t go to sleep while you are coding, which can cause deployment to fail!
  • See (almost) in real time what the user is seeing. This is called Mixed Reality Capture (MRC). It even lets you make videos and take pictures (which include the holograms!). Note however that there is a lag of a couple of seconds between what the user sees and what is shown on the screen. Also, the screen refresh rate will drop to 30 FPS while MRC is active (instead of ideally 60 FPS). If you have fast animations, this can be a little disconcerting.
  • See the list of processes running on the device and get performance information.
  • See a list of installed apps and access some management tools. You can even side-load an app from this page.
  • Get crash information.
  • Force the device to run in kiosk mode (i.e. disabling the Start menu, disabling Cortana and hiding pinned applications). Great for demos!
  • View logs.
  • Simulate rooms: you can take a capture of a given room and save it. Then you can pass this recording to another developer, who can use this page to load it onto his device. Great when you need to test an app in some specific room conditions.
  • View networking settings and information such as device IP, MAC address, etc.
  • Use virtual input, allowing you to simulate keyboard input. Note however that you can simply use a Bluetooth keyboard if you want to easily enter text.

The universal app seems to show the live stream with less lag. Definitely worth a try!

Taking screenshots and videos

You can take screenshots and videos of what you see in two different ways. Note however that the resolution of the screenshots and videos is not going to be as good as the real thing, because the device lowers its rendering resolution when screen capture is on.

Using the device portal

If you are an operator for someone else trying the headset, launch the device portal (see above) and navigate to the Mixed Reality Capture section. You will see buttons allowing you to record, take a photo, or even see the live stream (with a delay of a few seconds).

Using Cortana

More spontaneously, if you are in a great experience on HoloLens and want to share it with the world, you can take a screenshot by saying “Hey Cortana” and then “Take a picture”. Similarly, for videos, say “Hey Cortana” and then “Take a video”. When you are done filming, say “Hey Cortana” and then “Stop video”. This takes a bit of practice to get right, so make sure to rehearse before you do it for real. The pictures and videos will then show up in the Photos application on the device, as well as in the Device Portal, from where they can be downloaded.

Letting people try the HoloLens

For a better experience, you should always get a new user’s interpupillary distance (IPD)! Use the Calibration app to do that. Alternatively, there are some devices one can purchase, but they require some know-how, so make sure to learn how to use them. Once a user’s IPD has been determined, you can always modify it or retrieve it from the Device Portal (see above).

One thing I noticed after letting a few “newbies” try it out: the “tap” gesture is not as easy as it sounds. A good friend never got it right. So in addition to the calibration, it is worth taking some time to run the Learn Gestures application too. At first, it is a bit hard to guide new users to start the Calibration application. Here is what I did to help:

  • Connect the UWP app to the device and observe what they are seeing. Sure, it drops the frame rate, but in the beginning it is really helpful to know what they are looking at.
  • You can do some gestures for them. Simply put your own hand in front of the visor and bloom or air tap. For example, to teach my friend how to air tap, I first told her to simply look at a tile, and then I air tapped myself. This helped her understand the gaze gesture better.
  • There is a clicker that comes with the HoloLens device, which you can use instead of the “air tap” gesture.

It is pretty overwhelming at first, and for us it is easy to forget that. I was happy to have the occasion to observe a few people trying it out for the first time and to learn from the experience.

About Unity

It’s easy to confuse Unity (the 3D middleware) with Unity (the inversion-of-control framework). The fact that both of these are used by Microsoft makes it even more confusing. In case of doubt, make sure to write Unity3D instead of just Unity. In this post, I will simply say Unity, and this is NOT the IoC one ;)

When you install a new version of Unity3D, it might install side by side with older versions. DO NOT GET CONFUSED! If you open a HoloLens Unity project with a version of Unity that is not suitable, you will get VERY confusing error messages. Make absolutely certain that you have the correct version of Unity open. Currently, the version is

Learning to code

The HoloLens Academy is a Microsoft offering with a growing number of tutorials. You can find all the information on its webpage.

Note that you don’t strictly need a device to get started. There is a free emulator which works with Visual Studio and lets you try your code out (see below). It’s a good way to wait for a device, as getting one can take a while due to overwhelming demand.

Learning to deploy

Before you even get to coding, it would be a good idea to learn to build and deploy a project. For instance you can follow these steps to download and build an existing project, the open source Galaxy Explorer:

  • Go to the Galaxy Explorer GitHub repo
  • Fork or download the code as a Zip file, and extract it.
  • Start Unity3D
  • In the start dialog, press Open
  • Navigate to the folder GalaxyExplorer and press Select
  • Wait until Unity loads the project
  • Open the menu File / Build Settings
  • Make sure that the Windows Store platform is selected
  • Set SDK to Universal 10
  • Set UWP Build Type to D3D
  • Make sure that Build and Run is set to Local Machine.
  • Press the Build button
  • In the Build Windows Store folder selection menu, create a New Folder and name it App.
  • Make sure that the App folder is selected.
  • Press Select Folder.

This will start the build process. Note that this only creates the Visual Studio project with all the necessary files. You will still need to open the created Solution file. Follow these steps:

  • Navigate to the App folder that you created earlier.
  • Open the GalaxyExplorer.sln file in Visual Studio 2015.
  • Make sure to select the following configuration: Release / x86 / Remote Machine.
  • If the Remote Connections dialog shows up, enter your device’s IP address in the Address field and make sure that “Universal (Unencrypted Protocol)” is the Authentication Mode selected. Then press Select to establish a connection to your device.
  • If this is the first time you deploy, you will need to enter a pin. The pin should be shown on your device, but if it is not, go to Settings, Update, For Developers, Pair. Then enter the pin into the Visual Studio dialog.
  • Finally, select Debug / Start without Debugging.

If everything is configured correctly, the application should start, and you can feel confident that deploying works for your future studies.

Don’t let errors like “System.Object doesn’t exist” startle you

If you run the examples of the HoloLens academy, you might encounter some weird errors when you generate the Visual Studio code from Unity and then open the resulting SLN file in Visual Studio. You will see a LOT of errors because the Nuget packages have not been restored yet. Do not fret: just build the solution, which will force the Nuget packages to be restored, and all the ugly errors should go away.

Match the file name and the class name

In C#, it is highly recommended that the file name and the class name match, and that you have only one class per file. But these are recommendations only. When coding in Unity, however, I had some compilation errors because the file name didn’t match the class name. If you follow the early tutorials (for example the 101), they will ask you to create a new C# script file with a certain name (for example GazeGestureManager). Make sure to enter that name as you create the script. If you don’t do this, but instead create a new script file with the default name and then rename the script file to GazeGestureManager, Unity will refuse to do anything with this file. This is because the class name inside the file and the file name don’t match. So be careful to follow the steps exactly.
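For example, a script created as GazeGestureManager.cs should contain a class with exactly the same name. A minimal sketch (the body is just a placeholder, not the tutorial’s actual logic):

using UnityEngine;

// The class name must match the file name (GazeGestureManager.cs) for Unity to accept the script.
public class GazeGestureManager : MonoBehaviour
{
    private void Start()
    {
        // Tutorial-specific initialization goes here.
    }
}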

Sometimes Unity starts MonoDevelop instead of Visual Studio

In the course of the tutorials, you will sometimes have to edit some C# script files. Unity comes preinstalled with the MonoDevelop code editor, but using Visual Studio is more comfortable. To ensure that Visual Studio is launched, check the following settings: Edit / Preferences / External Tools / External Script Editor and make sure it is set to Visual Studio 2015.

Even with this setting properly configured, it can happen that Unity gets confused and starts MonoDevelop anyway. In my experience, the file opens in Visual Studio anyway, and you can just close MonoDevelop and move on.

Starting the emulator

The emulator is a good way to get started when you don’t have a device with you. Strictly speaking, it’s not really faster to deploy to the emulator than to a real device, and of course the experience is not comparable, so you will probably prefer an actual device if you have a chance. You can also simulate gazing at objects, tapping them, walking around etc (see below).

The emulator can be started from Visual Studio directly (it is called HoloLens Emulator in the list of all the emulators and devices). If you don’t see it, you might have to install it first :)

I had some cases where the emulator refused to start, because my machine didn’t have enough memory (for example on a Surface Pro 3 with 8GB RAM, the emulator requires 2GB RAM and Windows decided there wasn’t enough left… annoying!). In that case, you should do the following:

  • Create a BAT file.
  • Enter the following command:
    "C:\Program Files (x86)\Microsoft XDE\10.0.11082.0\XDE.exe" /sku HDE /video 1268x720 /vhd "C:\Program Files (x86)\Windows Kits\10\Emulation\HoloLens\10.0.11082.1039\flash.vhd"
  • Save the BAT file and restart your machine
  • Before you do anything else, run the BAT file to start the emulator.
  • Then only start Visual Studio and deploy to the emulator, which is already running.

Issues when deploying to the device

In theory, you can deploy to the device using a USB cable attached to your laptop, or using a network connection and “Remote Machine” (not Remote Device, as stated in some places in the official documentation).

To deploy using your internet connection, you must make sure that the device is connected to the same network as your PC. I had a few issues when trying to deploy at the hotel, and I suspect that there was a firewall or something which was preventing me from successfully deploying. I am still investigating this. At home it seems to work fine.

Make sure the device is awake…

This may sound silly but it happened to me a few times… I take the device off, place it on the table upside down (which seems the safest way), and then after I make changes to the code I hit Ctrl-F5… and it fails. What happens is that after a few minutes, the device goes to sleep. Even if you wake it up quickly after you start deploying, it doesn’t seem to work (I think there is a delay for the device to reconnect to the network). So really, make sure that it is up and running before you hit Ctrl-F5!

FAT vs NTFS

I seem to have issues deploying to the device when the source code is on a FAT filesystem (SD card). If I copy the same source code to my main SSD (which is NTFS), then it works. I will investigate more. It’s not completely surprising, because I had the same issue before with Windows 10 UWP applications, and the HoloLens is a Windows 10 device after all. It would be nice to be able to do this from an SD card though, because a typical HoloLens project is pretty large. I’ll update this if I find a solution.

Deploying over USB

Here too, I am having issues when trying to deploy over USB. I am using the original HoloLens USB cable, and I am getting a cryptic error (Unexpected Error: -2145648626). Here too, I will try to find more information and update this post. So far I haven’t managed to solve it, but here are a few indications from other people in the community that might help you:

  • If your HoloLens device goes to sleep, and then you wake it up and try to deploy to it, it might fail. In this case, restart Visual Studio and try again.
  • Sometimes the USB connection keeps connecting and disconnecting continuously. In this case, shut the device off, then plug the USB cable in. This will wake the device up. Then you can try deploying again.

Overall, it seems that deploying over Wi-Fi is more reliable than over USB. That’s a shame, because USB is faster. My guess is that it’s a firmware issue, and I hope it will get fixed some time in the future.

Conclusion

Well, that’s a pretty long post… so I will stop here and publish. I am not stopping my investigations and learning, however, so you can probably expect more such notes in the future. I hope you enjoyed reading and that it is useful!

Happy holocoding!
Laurent


Slides and sample code for #XamarinEvolve and #Techorama

.NET, MVVM, Techorama, WPF, Xamarin, XAML

These past weeks have been busy with travel and speaking. After the wonderful time in San Francisco for Build 2016, I had a few precious weeks to prepare for Xamarin Evolve (Orlando, FL) and Techorama (Mechelen, Belgium). I just came back, and it is time to post the slides and sample code; for Xamarin Evolve we even have a video of the talk!

Xamarin Evolve


Evolve took place in tropical Orlando, and it was pretty nice to see the sun, enjoy warm temperatures and even get some pool time on the day after the conference ended. I had a great time there. I talked about the DataBinding system in MVVM Light, which applies to Xamarin.iOS and Xamarin.Android. This critical part of all MVVM applications is there to ensure the connection between the ViewModel layer (typically in a portable class library and shared across all platforms) and the View layer. In Xamarin.Forms and on Windows, we don’t need an external databinding framework because we already have one (this is what you use when you write Text="{Binding MyProperty}" in XAML). But in Android and iOS, there is no such concept, and this is where the MVVM Light platform-specific extensions come in handy.
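To make this concrete, here is a minimal sketch of what such a binding can look like in a Xamarin.Android activity (the layout, view IDs and the MainViewModel class are illustrative assumptions, not code from the talk):

using Android.App;
using Android.OS;
using Android.Widget;
using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Helpers;

// Hypothetical ViewModel with a single bindable property.
public class MainViewModel : ViewModelBase
{
    private string _welcomeTitle = "Hello MVVM Light";

    public string WelcomeTitle
    {
        get { return _welcomeTitle; }
        set { Set(ref _welcomeTitle, value); }
    }
}

[Activity(Label = "BindingDemo", MainLauncher = true)]
public class MainActivity : Activity
{
    // Keep references so the ViewModel and the binding are not garbage collected.
    private MainViewModel _vm;
    private Binding _titleBinding;
    private TextView _titleText;

    public MainViewModel Vm
    {
        get { return _vm ?? (_vm = new MainViewModel()); }
    }

    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);
        SetContentView(Resource.Layout.Main);

        _titleText = FindViewById<TextView>(Resource.Id.TitleText);

        // One-way binding from Vm.WelcomeTitle to the TextView's Text property.
        _titleBinding = this.SetBinding(
            () => Vm.WelcomeTitle,
            () => _titleText.Text);
    }
}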

Here is the abstract (which was modified by Xamarin themselves… I normally don’t really use this marketing-y tone ;):

An In-Depth Study of the MVVM Light Databinding System

Living in the dark ages and still wiring up properties manually between your user interface and data source? Databinding is a technique to automatically synchronize a user interface with its data source and can vastly simplify how your app displays and interacts with data. While databinding is available out of the box for Xamarin.Forms and Windows applications, additional components are needed in Xamarin.Android and Xamarin.iOS. In this session, learn how to leverage databinding in your cross-platform applications as you master MVVM Light databinding and the MVVM pattern.

I created a page for this presentation on my website. There you will find the slides, video recording as well as the sample code.

Note: At the moment, some of the Xamarin Evolve videos are not working properly. Xamarin is informed. Thanks for your patience.

Techorama


Techorama is one of my favorite conferences, created by the community for the community after the cancellation of TechDays Belgium. Gill, Pieter and Kevin created a hell of a show, which has grown to host more than 1000 visitors these days. The venue is awesome too: it is a movie theater, and we get to project our slides and code on a huge screen. This year there were quite a few renowned speakers from the US and, in fact, the whole world. Even though I got to spend only one night at home after coming back from Orlando before flying again, which was quite tiring and a bit stressful, I was really looking forward to going to Mechelen.

I hope you enjoyed my session there about WPF. It was a fun session where I talked about the differences between WPF and the Windows 10 Universal platform, about new developments in WPF (especially tools such as the Live Visual Tree, the Live Property Explorer, and XAML Edit and Continue), and about the Windows Bridge “Project Centennial”, which takes a classic Windows app and “packs” it to transform it into a Universal application.

We finished with an exciting demo of a new feature shown at Xamarin Evolve the week before: Xamarin Workbooks, which allow you to create a rich document (using Markdown) with titles, subtitles, images etc., and to include snippets of C# that will be executed by the Xamarin Inspector tool. Because the tool supports Android, iOS and WPF, it was a great find and it fit well in my session, which aimed to show that WPF is still very current and state of the art. So I happily changed my presentation to include it in the demos.

Windows Presentation Foundation (WPF) 4.6

Windows Presentation Foundation is what people are using to build real applications for the enterprise, the industry, the workplace, and for every situation where Windows 10 Universal is not quite ready yet. Far from being dead, WPF is 10 years old this year, alive and kicking, and gives Universal Applications a run for their money! In this session, Laurent Bugnion, a Microsoft Windows Developer MVP and Microsoft Regional Director, WPF expert since 2006, will show you what is new in Windows Presentation Foundation, where it is going in the future, and what you can achieve with WPF that Universal Application developers can only dream of.

The presentation’s page is on my website, and will give you access to the slides and the demo source code. Make sure to check the last couple of slides for more resources!

One more thing

I recently discovered (not sure how I missed that) that my session about Windows 10 UWP at the Future Decoded conference in London last year had been recorded. I added the video to the presentation’s page. So in case you want to know how to adapt your UWP app to multiple platforms, this is where you can go!

Happy coding!
Laurent


Releasing MVVM Light V5.3 to Nuget

.NET, MVVM, Work, Xamarin, XAML

With Xamarin Evolve coming up (I am currently on the way to Orlando!), it was time to release a new version and take care of a few bugs, improvements and new features.

Update: To be clear, I didn’t remove the Windows 8.1 version; it is still available! Only the Windows 8 version is removed.

In version 5.3, the majority of the activity was on the data binding system for Xamarin.iOS and Xamarin.Android, with many improvements in stability and performance, as well as a few new features. Existing code will continue to work as is, and at most you might have a few “deprecated” warnings, which will give you time to update your code. I also added 3 classes and a few helpers around them, which should help you greatly when you work with lists: ObservableRecyclerAdapter (for Android RecyclerView), ObservableTableViewSource (for iOS TableView) and ObservableCollectionViewSource (for iOS CollectionView). These objects will be detailed in separate blog posts.

In other areas not directly Xamarin-related, you will also find quite a few bug fixes. Here is a list of changes. You will also find a few explanations below. I will publish additional blog posts with more details about the Observable list components mentioned above.

Removed old frameworks

In an effort to simplify, I removed the Silverlight 4, Windows Phone 7.1 and Windows 8 versions of MVVM Light from the repo. This does NOT affect the Silverlight 5, Windows Phone 8.0 and 8.1, Windows 8.1 and, of course, Windows 10 versions, which are still very well supported. The main reason for not supporting these older versions is that they require Visual Studio 2012 and cannot be maintained or tested in VS2015. I hope that this doesn’t affect your projects too much. Should that be the case, please send me a note. We can always find a solution!

Fixing Nuget install.ps1 and documentation

There was a silly bug in the install.ps1 script that is run after the Nuget installation is performed; it is fixed now. Also, I added a more explicit description to the packages mvvmlight and mvvmlightlibs, in order to underline the differences between the two.

Documentation: Closures not supported

In the same vein, I added explicit documentation warning that closures are not supported in the Messenger Register method, as well as in the RelayCommand Execute and CanExecute delegates. This is sometimes an issue. The reason is that these components use WeakReference to avoid tight coupling with the objects using them. When we store the delegate that will be executed by the Messenger or the RelayCommand respectively, we store it as a WeakAction instance. This causes the delegate to be “dehydrated” and stored as a MethodInfo. When we “rehydrate” the delegate, we notice that the closure has been lost, and the call fails. I am experimenting with ways to work around that limitation (i.e. storing the delegates as actual Action or Func instances and not dehydrating them, which will of course cause strong coupling to occur… but this would be acceptable in some cases if the developer explicitly opts into it). For the moment, closures are not supported and the developer needs to find another way, for example storing the parameter in a field or a list of some kind.
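To illustrate, here is a minimal sketch of the difference (the ViewModel, message type usage and field names are just examples):

using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Messaging;

public class MyViewModel : ViewModelBase
{
    // Workaround: store the value in a field instead of capturing a local variable.
    private readonly string _prefix = "Received: ";

    public MyViewModel()
    {
        // Not supported: capturing the local variable creates a closure, which is lost
        // when the delegate is dehydrated into a WeakAction.
        // var localPrefix = "Received: ";
        // Messenger.Default.Register<NotificationMessage>(this, msg => Handle(localPrefix + msg.Notification));

        // Supported: the delegate only uses instance members, so no closure is created.
        Messenger.Default.Register<NotificationMessage>(this, msg => Handle(_prefix + msg.Notification));
    }

    private void Handle(string text)
    {
        // Process the message here.
    }
}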

ViewModelBase

I optimized the code a bit there and made it less redundant. I also made the RaisePropertyChanged methods public (was: protected) following a user’s request. This can make sense in certain scenarios when an external object wants to call this method on another VM.

NavigationService

Here too I fixed some bugs. I also added new features for iOS:

  • Navigating with an unconfigured key in iOS: In iOS, the key passed to the NavigateTo method is often the same value as the corresponding Storyboard ID set in the Storyboard designer for a given controller. Until V5.3, you had to configure the key for the NavigationService, which was often leading to code such as
    Nav.Configure("SecondPage", "SecondPage");
    which didn’t quite make sense. As of V5.3, if you call NavigateTo with a key that is not configured, we will attempt to find a corresponding controller with this same Storyboard ID. Of course, if you want to use a different ID, you can still configure the navigation service like before.
  • Retrieving the parameter without deriving from ControllerBase: Until now, the implementation used to retrieve a navigation parameter from the NavigationService forced you to derive your controller from ControllerBase. That was annoying, especially in cases where you had to derive from another base class anyway. I removed this restriction. Instead, you can now retrieve the NavigationService from the ServiceLocator in the ViewDidLoad method, and then call the GetAndRemoveParameter method to get the navigation parameter (if there is one). This is the same method as in the Android NavigationService (a short sketch follows this list).
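For reference, here is a minimal sketch of what this can look like in a controller’s ViewDidLoad. It assumes the iOS NavigationService was registered as INavigationService with the ServiceLocator at startup, and the exact signature of GetAndRemoveParameter may differ slightly from this sketch:

public override void ViewDidLoad()
{
    base.ViewDidLoad();

    // Assumption: the navigation service was registered in the ViewModelLocator at startup.
    var nav = (NavigationService)ServiceLocator.Current.GetInstance<INavigationService>();

    // Retrieves (and removes) the parameter that was passed to NavigateTo for this controller, if any.
    var parameter = nav.GetAndRemoveParameter<string>(this);
}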

Binding system

I fixed a few bugs here which were causing the data binding to fail in certain scenarios. I also added a lot of unit tests to test current and new features (currently about 300 unit tests which run in both iOS and Android), as well as a few features. Expect samples very soon!

  • FallbackValue: This value can be set in the SetBinding method (or in the Binding constructor) and will be used in case an error occurs when the Binding is being resolved. This can happen, for example, if you have a complex path expression and one of the elements is null, which would cause a NullReferenceException to happen inside the Binding. This exception is caught and nothing happens in the application, but the FallbackValue will be used as the Binding value if the user has set it. This can be used for information purposes. For example, if your source property is MyViewModel.SelectedItem.Name, and nothing is selected, the SelectedItem is null and the FallbackValue (for instance “Nothing selected yet”) is used.
  • TargetNullValue: This value can also be set in the SetBinding method (or in the Binding constructor) and will be used in case the Binding value is null. This can also be used for information purposes.
  • SetCommand with static parameter: I added an overload to the SetCommand extension method which can be used to pass a static parameter to the ICommand’s Execute and CanExecute methods. Prior to V5.3, you had to define a binding for this (and use the SetCommand(…, Binding) method) which was very cumbersome in case the binding value never changed. Now you can use the SetCommand method with a simple value (or if you want to observe the value you can still use the SetCommand method with a binding).
  • SetCommand for ICommand: In previous versions, you could only use SetCommand with a RelayCommand (or RelayCommand<T>), which was an oversight. Now I modified this method to work with any ICommand.
  • New name for UpdateSourceTrigger: This method is now named ObserveSourceEvent. The old method is still available but marked as deprecated. The old name was confusing for users.
  • New name for UpdateTargetTrigger: Similarly, this method is now named ObserveTargetEvent. The old method is still available but marked as deprecated.
  • Binding with implicit event names: When you set a binding on a UI element in Android and iOS, you must also specify which event should be observed for changes. This is necessary because properties in these elements, unlike in Windows, aren’t dependency properties. For instance, you can choose between FocusChange or TextChanged for an EditText, etc. In the huge majority of cases however, the same event is used over and over for a given element. In V5.3, these common events are now observed implicitly:
    on Android, the TextChanged event is used for an EditText and the CheckedChange event for a CheckBox; on iOS, the ValueChanged event is used for UISwitch, the Changed event for UITextView, and the EditingChanged event for UITextField. Note that existing code does not need to be changed, unless of course you want to simplify it. And like before, you can continue to use ObserveSourceEvent and ObserveTargetEvent (formerly UpdateSourceTrigger and UpdateTargetTrigger) to specify a different event if needed (a short sketch follows this list).
  • SetCommand with implicit event names: Like with the Binding class, when we set a Command on a control, we specify which event must be observed to trigger the command. For commonly used controls, this is mostly the same event. To simplify the code, you don’t have to explicitly specify these events anymore. The events that are observed implicitly are: On Android the Click event for Button, and the CheckedChange event for CheckBox. On iOS, the TouchUpInside event for UIButton, the Clicked event for UIBarButtonItem and the ValueChanged event for UISwitch. Of course you can also continue to specify these events explicitly, or any other event you wish to observe instead.
  • Binding to private fields and local variables: In previous versions, you could only set a binding on public properties (for example public MainViewModel Vm, or public CheckBox MyCheck); in V5.3, you can also set bindings on objects which are saved as private fields. You can even create a new Binding (using the Binding constructor instead of the SetBinding method) on elements which are defined as local variables.
  • Binding on RecyclerView, TableViewSource and CollectionViewSource cells: The feature above (binding on local variables) can be useful, for example to create a new binding between a data item and the cell that represents it in a list control. I’ll have blog posts for RecyclerView, TableViewSource and CollectionViewSource.
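As an illustration, here is a minimal sketch of what the V5.3 binding and command setup can look like in a Xamarin.iOS controller. The MainViewModel class, the NameTextField and SaveButton outlets and the exact overloads are illustrative assumptions, not the library documentation:

using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Command;
using GalaSoft.MvvmLight.Helpers;
using UIKit;

// Hypothetical ViewModel with a bindable property and a command.
public class MainViewModel : ViewModelBase
{
    private string _userName;

    public string UserName
    {
        get { return _userName; }
        set { Set(ref _userName, value); }
    }

    public RelayCommand SaveCommand { get; private set; }

    public MainViewModel()
    {
        SaveCommand = new RelayCommand(() => { /* save logic here */ });
    }
}

public partial class MainController : UIViewController
{
    // Keep references so the ViewModel and the binding are not garbage collected.
    private MainViewModel _vm;
    private Binding _nameBinding;

    public MainViewModel Vm
    {
        get { return _vm ?? (_vm = new MainViewModel()); }
    }

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        // NameTextField and SaveButton are outlets defined in the storyboard (hypothetical names).
        // Two-way binding: in V5.3 the EditingChanged event of UITextField is observed implicitly.
        _nameBinding = this.SetBinding(
            () => Vm.UserName,
            () => NameTextField.Text,
            BindingMode.TwoWay);

        // In V5.3 the TouchUpInside event of UIButton is observed implicitly,
        // and any ICommand (not only RelayCommand) is accepted.
        SaveButton.SetCommand(Vm.SaveCommand);
    }
}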

As you can see, this is a massive change set and I am really happy that your excellent feedback has led to these improvements. Of course the task is not over and there is more coming. V5.3 should greatly improve working with data binding in MVVM Light, in Xamarin.iOS and Xamarin.Android. As always, keep your feedback coming!

For more information about data binding in Xamarin.iOS and Xamarin.Android, you can watch my Xamarin Evolve 2016 session (slides, sample code and video recording will be posted ASAP).

Happy coding!
Laurent


Why you shouldn’t buy the Earin True Wireless Earbuds

Technical stuff

I don’t usually write product reviews, but here I will make an exception, because my disappointment in both the product and the customer service is as high as the expectations I had.

The firm Earin went through Kickstarter and created a product that, on paper, is pretty amazing: a pair of super small earbuds that connect via Bluetooth and don’t have any wire between them. Basically, you put one earbud in one ear, the other in the other ear, and you’re good to go. No cables, nothing.

I had this product delivered to my hotel in the US (in San Francisco for Build), so I could have it faster and try it during my trip. I ordered on March 16, and it was delivered to me on March 28. The packaging is pretty amazing, a mix of very noble-looking cardboard and magnets holding it together. Great first impression. Also, the idea of providing a charging case is really nice. You can see details and pictures here.

Now, my Jabra Bluetooth headset is starting to get a bit old, and the battery is not holding a charge as well as it used to. So I was in the market for something else. The price for the Earin gave me pause (249 euros!), but I thought that it looked like a really great product. Boy, was I wrong.

Update: The last email from their customer service representative:

“Alright. You are very welcome to write back to the customer support once you feel cooperative and want to get the issue solved.
I wish you a very pleasant day.”

Passive aggressive much? Please tell me this person is going to get fired over this…

The issues

First, I paired the Earin L (left earbud) with my phone as instructed. Note that I have used a lot of Bluetooth devices with this phone, such as the Jabra I mentioned, the JBL speaker in my office, my car’s stereo device and more. Never had a single issue.

Unfortunately, the Earin kept dropping the connection. But even worse, the connection between the left and right earbuds is extremely weak. In retrospect I should have expected it, because there is a big bag of bones and fluids between them: my head. This is a basic flaw of the product, and I cannot believe that they think it will work right. So not only was the connection between the phone and the left earbud bad, but the connection between the left and the right earbud was even worse. And there is nothing you can do about that one, because you cannot change the configuration of your ears.

Contacting customer service

I tried to talk to customer service, but I had a bad experience there too. First, they told me that they could not refund me because I bought through Amazon. OK, fair enough; I contacted the Amazon reseller and am waiting for an answer.

More annoyingly, the tone of the representative is borderline insulting. Maybe it’s a language thing, I’m not sure (Earin is located in Sweden), but you would expect them to be more respectful towards customers who pay a lot of money for a defective product.

Then we tried some technical troubleshooting. In the end, it boiled down to “you’re holding it wrong”: if I held the phone closer to my head, it would magically work better. Note that this doesn’t solve the bad connection between the two earbuds, so I wonder how holding the phone closer to my face would make things better. Also, I don’t wear shirts, so I don’t have a pocket for that, and I use my phone on other occasions than jogging, so I cannot always use an armband.

The customer service representative’s reply:

“Thank you for the answers. So you are listening with a Windows phone as sending device, outdoors, with your sending device in the back pocket of your trousers. Okay. In this situation you write that you experience audio drop outs very frequently. Well, you are using the EARIN in far from optimal conditions, so I can understand that you get drop outs then.
As I wrote previously, it usually comes down to one thing: the distance between the phone and the master earbud. I cannot emphasize this enough. 
If that distance is not good enough to keep a drop out free connection, you can place the sending device closer: in the shirt pocket, or attached to an arm band on your over arm (usually used by joggers), of course on the same side as the master earbud.”

In other words, fuck you very much.

Lack of features

In addition to the inherent defectiveness of the product, there are also missing features that most other Bluetooth headsets have.

  • No volume control on the earbuds. You need to take your phone out to control the volume.
  • No skip / forward / backward control on the earbuds. You need to take your phone out to skip a song.
  • No Cortana trigger on the earbuds. On my (old) Jabra, I can do a long press to start Cortana, which is super useful for example when skiing. Nothing like that on the Earin.

In short, the earbuds are just dumb earbuds. And they don’t even function as such.

Conclusion

I guess I got carried away by my enthusiasm and, like Fox Mulder, I wanted to believe. But in the end, there is basic physics at play here: the head is going to cause interference. And the lack of features is just the last straw that makes me return the item.

I cannot stress this enough: steer clear of this company. They don’t know what they built, and they don’t accept that it doesn’t work.

Laurent


Build keynote (day 2) Part 4

Build, Windows 10

 

Qi Lu (Office 365)

  • Reinventing productivity
  • Largest user base (1.2 billion users!)
  • 3 billion minutes of Skype calls daily!
  • Connect to MS Office services
    • Unified API and SDKs
    • Single sign on (Azure)
    • Real time data
    • Intelligence
  • Microsoft Graph demo
  • OneDrive file picker integrated in DocuSign allows selecting files from OneDrive directly
  • Recipient selector can select users based on topics etc, even mistyped names
  • Shows custom integration of add-ins in Office
  • Starbucks CTO Gerri Martin-Flickinger
    • Shows integration of gift cards into Outlook
  • Conversations as a platform
    • Human language as extensible UI
    • Ubiquitous
    • WeChat in China
  • Announces general availability of Office 365 group connectors
    • Allows connecting services into the group conversation
  • Skype Web SDK integrates Skype within a website
  • Visit dev.office.com to start building
  • Incredible innovation

Steven Guggenheimer, John Shewchuk

  • Muzik demo
    • Programmable buttons
  • Project Murphy
  • Spotify video
  • Shows some additional UWP samples
  • Vuforia demo (AR middleware), turns 2D images into 3D meshes
  • Students, Imagine Cup
    • Group of 8th graders on stage
    • Their experiment goes in space on May 31st
    • Built with Lego and Microsoft!