Microsoft

In 2003, the Connectix Virtual PC and Virtual Server products (and their associated staff) were purchased by Microsoft. Since I was one of the engineers working on Virtual PC at the time, the acquisition became my ticket to the Pacific Northwest.

After the move, my first task was to lead the development of Microsoft Virtual PC 2004 and assist in the release of Microsoft Virtual Server 2005. These first releases were focused on polishing the rough edges and fixing the gaping security holes we had inadvertently introduced.

Once those were out the door, I took on a lead position on one of the three teams developing Microsoft’s new virtualization technology, Hyper-V. We had about three years to create a new design from scratch, and shipped the first version in 2008.

Following the successful launch of Hyper-V, I decided to take a break from management and became an individual contributor once again. I worked on a couple of tangentially related projects, including system support for mounting VHD images that shipped in Windows 7.

In 2009 I began working with another developer on an effort to port Windows to run on ARM processors. Eventually this work became Windows RT (sigh), which shipped on the original Microsoft Surface machines. Our version of the NT kernel also shipped in Windows Phone 8.

Microsoft Virtual PC 2004

When we developed Virtual PC for Windows at Connectix, we designed it for speed, and the best way to make it scream was to take complete and utter control of the entire machine for brief periods of time without the underlying OS knowing about it.

The reason this got us great performance was that we could directly leverage the processor's MMU to manage the guest's memory. Without that, every single memory access would have had to be checked in software for permissions and mappings, and that would have crippled performance.

In order to begin taking over the machine, we first had to gain access to kernel mode. To do this, we just identified an unused interrupt vector in the OS’s vector table, and patched it to point to our own code—which lived in user mode. (Security experts are rightly freaking out at reading this.)

Once in control, we would run as long as we could, until an interrupt was signaled. When that happened, we would immediately put the OS’s page tables back and then hop to the OS’s interrupt handling code, pretending like nothing at all had happened.
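To make that a bit more concrete, here is a rough sketch, in C++, of what the vector patch amounted to on 32-bit x86. None of this is actual Virtual PC code: the gate layout comes from the Intel manuals, the names are invented, and the part where the monitor obtained a writable view of the IDT from user mode is deliberately left out.

    #include <stdint.h>

    // Not Virtual PC source, just the shape of the trick on 32-bit x86.
    #pragma pack(push, 1)
    struct IdtGate {
        uint16_t offsetLow;    // handler address, bits 0..15
        uint16_t selector;     // code segment selector for the handler
        uint8_t  reserved;
        uint8_t  typeAttr;     // present flag, DPL, gate type
        uint16_t offsetHigh;   // handler address, bits 16..31
    };
    #pragma pack(pop)

    // Point an otherwise unused vector at our own entry code. With DPL=3 in
    // typeAttr, a plain `int <vector>` instruction reaches the gate, which is
    // how the monitor got control whenever it wanted to take over the machine.
    void HijackVector(IdtGate* idt, int vector, void (*monitorEntry)())
    {
        const uint32_t target = reinterpret_cast<uintptr_t>(monitorEntry);
        IdtGate& gate = idt[vector];
        gate.offsetLow  = static_cast<uint16_t>(target & 0xFFFF);
        gate.offsetHigh = static_cast<uint16_t>(target >> 16);
        gate.typeAttr   = 0xEE;  // present, DPL=3, 32-bit interrupt gate
    }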

A good analogy is if, right after leaving for work, someone breaks into your house, rearranges your furniture, and hangs out. Then, when they see you pull up in the driveway, they put everything back exactly the way it was and slip out the window, with you none the wiser.

Of course, the other thing that was happening around the time of our purchase was the release of Windows XP SP2, which was the big security-focused version of Windows XP. All code had to go through a security audit, especially code from outsiders.

Thus we were told that we needed to come up with a clean, sanctioned way to do this switch: installing a driver that could be called from user mode, with the OS performing proper permission checks.
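The user-mode side of that arrangement looked, in spirit, something like the sketch below. The device name and IOCTL code are made up for illustration; the point is that the world switch became an ordinary driver request, gated by the OS's normal access checks when the device is opened.

    #include <windows.h>
    #include <winioctl.h>
    #include <cstdio>

    // Illustrative only: this is not the real Virtual PC driver interface.
    #define IOCTL_EXAMPLE_WORLD_SWITCH \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

    int main()
    {
        // Opening the device object is where the OS applies its access checks.
        HANDLE device = CreateFileW(L"\\\\.\\ExampleVmm", GENERIC_READ | GENERIC_WRITE,
                                    0, nullptr, OPEN_EXISTING, 0, nullptr);
        if (device == INVALID_HANDLE_VALUE) {
            std::printf("open failed: %lu\n", GetLastError());
            return 1;
        }

        // The switch becomes an explicit, auditable driver request instead of
        // a patched interrupt vector; the driver can validate every call.
        DWORD bytesReturned = 0;
        if (!DeviceIoControl(device, IOCTL_EXAMPLE_WORLD_SWITCH,
                             nullptr, 0, nullptr, 0, &bytesReturned, nullptr)) {
            std::printf("ioctl failed: %lu\n", GetLastError());
        }

        CloseHandle(device);
        return 0;
    }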

Apart from security improvements, the rest of our efforts in this release went toward rebranding and minor tweaks. All told, it took a little more than a year to complete the work, and then it was time to move on to the real reason Microsoft bought Connectix: creating the next generation.

Building Hyper-V

By the time we began work on Hyper-V, VMware had already made significant inroads in defining and dominating the x86 virtualization market. Microsoft wanted to make virtualization a key part of Windows Server 2008, and the Connectix engineers were at the heart of this effort.

One advantage of starting late is that both Intel and AMD were in the process of introducing virtualization extensions that formalized virtual machine support in the processor. (We eventually retrofitted that support into later versions of Virtual PC as well.)
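For the curious, detecting those extensions is a simple CPUID check. This sketch is mine rather than product code; the bit positions come from the Intel and AMD manuals, and firmware can still disable the feature even when the bit is set.

    #include <intrin.h>
    #include <cstdio>

    // CPUID.01h:ECX[5] reports Intel VT-x (VMX);
    // CPUID.80000001h:ECX[2] reports AMD-V (SVM).
    int main()
    {
        int regs[4] = {};

        __cpuid(regs, 1);
        const bool hasVmx = (regs[2] & (1 << 5)) != 0;   // ECX bit 5

        __cpuid(regs, 0x80000001);
        const bool hasSvm = (regs[2] & (1 << 2)) != 0;   // ECX bit 2

        std::printf("Intel VT-x: %s\n", hasVmx ? "yes" : "no");
        std::printf("AMD-V:      %s\n", hasSvm ? "yes" : "no");
        return 0;
    }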

The effort to develop Hyper-V was big enough that we split it into three parts: one team handled the hypervisor, one team handled VMBus and virtual devices (like networking and storage), and I led the team that handled the rest, including the user-facing parts of the product.

Even within my third of the project, we were dealing with a very broad collection of components, including:

  • The VID (Virtualization Infrastructure Driver), which talked directly to the hypervisor to control virtual machines at the lowest level.
  • The VMWP (Virtual Machine Worker Process), which served as a proxy for each virtual machine running on the system.
  • The VMMS (Virtual Machine Management Service), which hosted the WMI classes that enabled the creation and management of machines from the outside world (a small client-side sketch of using them follows this list).
  • The Virtual Machine Manager MMC snap-in, which provided a basic user interface for managing machines. (The expectation was that more fully featured managers would be developed down the road.)
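To give a feel for how the VMMS was driven from the outside, here is a minimal client-side sketch that lists virtual machines through its WMI classes. This is illustrative code, not anything from the product; the original namespace was root\virtualization (later releases moved to root\virtualization\v2), and the Msvm_ComputerSystem query also returns an entry for the host itself.

    #include <windows.h>
    #include <wbemidl.h>
    #include <comdef.h>
    #include <iostream>
    #pragma comment(lib, "wbemuuid.lib")

    int main()
    {
        if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED))) return 1;
        CoInitializeSecurity(nullptr, -1, nullptr, nullptr, RPC_C_AUTHN_LEVEL_DEFAULT,
                             RPC_C_IMP_LEVEL_IMPERSONATE, nullptr, EOAC_NONE, nullptr);

        IWbemLocator* locator = nullptr;
        if (FAILED(CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                                    IID_IWbemLocator, reinterpret_cast<void**>(&locator)))) {
            CoUninitialize();
            return 1;
        }

        // Connect to the Hyper-V v1 WMI namespace hosted by the VMMS.
        IWbemServices* services = nullptr;
        if (FAILED(locator->ConnectServer(_bstr_t(L"ROOT\\virtualization"), nullptr, nullptr,
                                          nullptr, 0, nullptr, nullptr, &services))) {
            locator->Release();
            CoUninitialize();
            return 1;
        }
        CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                          RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                          nullptr, EOAC_NONE);

        IEnumWbemClassObject* vms = nullptr;
        if (SUCCEEDED(services->ExecQuery(_bstr_t(L"WQL"),
                                          _bstr_t(L"SELECT * FROM Msvm_ComputerSystem"),
                                          WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                                          nullptr, &vms))) {
            IWbemClassObject* vm = nullptr;
            ULONG count = 0;
            while (vms->Next(WBEM_INFINITE, 1, &vm, &count) == S_OK) {
                VARIANT name;
                if (SUCCEEDED(vm->Get(L"ElementName", 0, &name, nullptr, nullptr))) {
                    std::wcout << name.bstrVal << std::endl;   // friendly VM name
                    VariantClear(&name);
                }
                vm->Release();
            }
            vms->Release();
        }

        services->Release();
        locator->Release();
        CoUninitialize();
        return 0;
    }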

Over the course of the project, my team grew from around 6 to 12 people, and I ended up not actually writing very much code, as managing a team that size required all my attention.

As a result, after the successful launch of Hyper-V as part of Windows Server 2008, I decided to take a break from management and became an individual contributor once again. This meant finding something fun to work on.

Unfortunately, these things don't always happen quickly. I killed time working on a couple of related projects, including system support for mounting VHD images, which shipped in Windows 7.
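That VHD support surfaced publicly as the virtual disk API in virtdisk.h, new in Windows 7. Here is a minimal sketch of attaching a VHD read-only; the path is just an example, and it needs to run elevated.

    #include <windows.h>
    #include <initguid.h>
    #include <virtdisk.h>
    #include <cstdio>
    #pragma comment(lib, "virtdisk.lib")

    int main()
    {
        VIRTUAL_STORAGE_TYPE storageType = {};
        storageType.DeviceId = VIRTUAL_STORAGE_TYPE_DEVICE_VHD;
        storageType.VendorId = VIRTUAL_STORAGE_TYPE_VENDOR_MICROSOFT;

        OPEN_VIRTUAL_DISK_PARAMETERS openParams = {};
        openParams.Version = OPEN_VIRTUAL_DISK_VERSION_1;
        openParams.Version1.RWDepth = OPEN_VIRTUAL_DISK_RW_DEPTH_DEFAULT;

        HANDLE vhd = INVALID_HANDLE_VALUE;
        DWORD status = OpenVirtualDisk(&storageType, L"C:\\disks\\example.vhd",
                                       VIRTUAL_DISK_ACCESS_ATTACH_RO | VIRTUAL_DISK_ACCESS_GET_INFO,
                                       OPEN_VIRTUAL_DISK_FLAG_NONE, &openParams, &vhd);
        if (status != ERROR_SUCCESS) {
            std::printf("OpenVirtualDisk failed: %lu\n", status);
            return 1;
        }

        // Surface the VHD's contents as read-only volumes on the host.
        ATTACH_VIRTUAL_DISK_PARAMETERS attachParams = {};
        attachParams.Version = ATTACH_VIRTUAL_DISK_VERSION_1;
        status = AttachVirtualDisk(vhd, nullptr, ATTACH_VIRTUAL_DISK_FLAG_READ_ONLY,
                                   0, &attachParams, nullptr);
        if (status != ERROR_SUCCESS) {
            std::printf("AttachVirtualDisk failed: %lu\n", status);
        }

        // Without ATTACH_VIRTUAL_DISK_FLAG_PERMANENT_LIFETIME, closing the
        // handle detaches the disk again.
        CloseHandle(vhd);
        return 0;
    }

And then I found what was probably the perfect project for me.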

Windows on ARM

It was in 2009 that I first heard of a small incubation effort to get Windows up and running on ARM processors. At the time I encountered it, the project was being driven by one person, who already had the OS booting with little official help.

But the time was right for official ARM support to become important. The dawn of multicore mobile processors was here, and Windows CE's lack of multiprocessor (MP) support was beginning to put Microsoft at an even greater disadvantage.

Since ARM processors were getting more powerful all the time, and since NT was designed from the start to support running with multiple processors, it seemed inevitable that someone would make it an official ask soon, so why not start now?

This project, it turned out, was absolutely perfect for my skills. It involved both porting (which I had done at LucasArts and in my MAME work) and bare metal programming (which I had done throughout my career dating back to my early games).

One of the first things we did was to define the runtime environment requirements. At a minimum, we chose to standardize on modern ARMv7 processors, and to require both hardware floating-point support (VFP) and SIMD support (NEON).

Doing this meant we could ignore a lot of backwards-compatibility issues and focus on supporting just the high-end, fully featured processors like the ARM Cortex-A9/A15 and the Qualcomm Snapdragon class of CPUs.
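Requiring VFP and NEON across the board meant the compiler and the OS could assume hardware floating point and 128-bit SIMD everywhere, with no runtime feature checks or scalar-only fallback builds. A small example (my illustration, not OS code) of the kind of loop that baseline made safe to write:

    #include <arm_neon.h>
    #include <cstdio>

    // With NEON guaranteed on every supported CPU, vector code can be used
    // unconditionally; the leftover elements fall back to plain VFP math.
    void AddFloats(const float* a, const float* b, float* out, int count)
    {
        int i = 0;
        for (; i + 4 <= count; i += 4) {
            float32x4_t va = vld1q_f32(a + i);        // load four floats
            float32x4_t vb = vld1q_f32(b + i);
            vst1q_f32(out + i, vaddq_f32(va, vb));    // add and store four at a time
        }
        for (; i < count; ++i) {
            out[i] = a[i] + b[i];                     // scalar tail runs on VFP
        }
    }

    int main()
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        float out[8] = {};
        AddFloats(a, b, out, 8);
        std::printf("%g %g\n", out[0], out[7]);       // expect 9 9
        return 0;
    }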

Another controversial call we made was to fully embrace the Thumb-2 ISA. Since there was no legacy code running on these systems, there was no need to support two ISAs (both A32 and Thumb), so we standardized on the more compact form throughout the OS.

This ISA decision meant our compiler division needed to create new versions of the compiler optimized for Thumb-2, and also needed to support the ARM form of the NT exception unwinding model, which I ended up designing.

A lot of the work I did involved digging into other people's code and removing bad assumptions. Over the course of its history, Windows NT has actually run on a lot of different CPU architectures: MIPS, Alpha, PowerPC, IA64 (aka Itanium), and both x86 and x64.

But over time, all except for the Intel-based ones (IA64, x86, and x64) had been dropped, and even IA64 was on the way out. So a certain myopia had inevitably crept into some of the code, and we had to help educate people once again about RISC platforms like ARM.
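A typical bad assumption, reconstructed from memory rather than quoted from any real source file, was treating unaligned memory access as free because it happens to work on x86:

    #include <stdint.h>
    #include <string.h>

    // On several RISC targets (and for some ARM access types) a misaligned
    // load faults or gets fixed up slowly; on x86 it silently works.
    uint32_t ReadLengthBroken(const uint8_t* packet)
    {
        // Assumes the CPU tolerates a misaligned 32-bit load; true on x86,
        // not guaranteed elsewhere (and undefined behavior in portable C++).
        return *reinterpret_cast<const uint32_t*>(packet + 3);
    }

    uint32_t ReadLengthPortable(const uint8_t* packet)
    {
        // memcpy lets the compiler pick a safe access sequence for the target;
        // on x86 it still compiles down to a single load.
        uint32_t value;
        memcpy(&value, packet + 3, sizeof(value));
        return value;
    }

The portable version costs nothing extra on x86, and it keeps the RISC targets honest.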

All in all, it was very rewarding to get the full version of Windows up and running on a new platform. We actually had pretty much everything running smoothly before the concept of Windows RT was introduced, which imposed some artificial limitations on the system.

Eventually, Windows RT shipped on the Microsoft Surface and Surface 2 tablets. Shortly thereafter, Windows Phone 8 shipped, using Windows NT as its foundation instead of Windows CE.

And now, when I use my phone I know that my code is running millions of times per second.