...

Virtual TPM (vTPM) implementation in Xen

Date: 2009-01-22 10:25
Tags: en / tpm / xen

Recently I have been doing some research on vTPM, especially its use in Xen, at the System Software Lab. at POSTECH in Korea. In this article I will try to explain how the vTPM has been implemented in this hypervisor.

Xen is a Virtual Machine Monitor (VMM, or hypervisor). It allows running multiple operating systems on the same computer using one of these two virtualization technologies: paravirtualization (the guest OS must be modified to use the hypervisor ABI instead of certain architectural features) or hardware-assisted virtualization (HVM; the guest OS runs unmodified if the CPU supports the Intel VT or AMD-V technologies).

Two years ago (2006), several groups of researchers started to work on the virtualization of the TPM (Trusted Platform Module) so that Virtual Machines (VMs) could use TPM functionalities. This research led to one fundamental paper from IBM, "vTPM: Virtualising the Trusted Platform Module", and later to one from Intel, "TPM Virtualization: Building a General Framework". These papers serve as the basis for the vTPM implementation in Xen, which was jointly developed by people from IBM and Intel.

1. Introduction

1.1 General Architecture

There are different models and implementations of TPM virtualization. The one present in Xen has been developed by IBM and Intel. More precisely, IBM wrote the model-independent part of the vTPM implementation (the split driver in the Linux kernel) and Intel wrote the model-specific part (the vTPM manager, the vTPM instance, the hot-plug script). That said, the vTPM model presented by IBM and the one presented by Intel have many similarities.

Basically, the vTPM architecture looks like the figure below. If you don't understand it, you should read IBM's paper.

vTPM architecture
(figure from the paper: "vTPM: Virtualising the Trusted Platform Module" by IBM)

1.2 Differences

There are some fundamental differences between the implementation in Xen and the paper from IBM.

First, there is no PCR mapping between the vTPM and the hardware TPM (PCRs 0 to 8 are not the same in the vTPM and in the hardware TPM) because there is disagreement on how to do it correctly, especially on how to handle the signatures for quotes: the vTPM would sign the complete quote, but it does not own the mapped PCRs, which is a problem. There is also the question of how to do it with an HVM guest, since this type of VM has its own BIOS/bootloader and it is hard to determine which measurement should be extended into which PCR. So the developers simply leave the PCRs as in the default boot configuration of a TPM.

Second, the vTPM migration uses a different protocol. Take a look at the section "Migration" (at the end of this document) for more information.

2. Components

2.1 vTPM Instance

A vTPM instance is the TPM of a VM. It is supposed to implement the full TCG TPM Specification version 1.2. Each VM has its associated vTPM instance running throughout the lifetime of the VM, so there are as many vTPM instances as there are running VMs, and each vTPM instance is uniquely associated with one VM.

The vTPM implementation in Xen is software-based, so a vTPM instance is just a piece of software running in the Dom0 (the developers wanted to add the possibility to run the vTPM instances inside a VM, but this feature hasn't been finished; you should, however, be able to run an instance inside the VM it is associated with). Instead of writing everything from scratch, the Intel developers patched Mario Strasser's software-based TPM Emulator for Unix to make it work as a vTPM instance.

This TPM emulator is an almost-complete, flexible and low-cost TPM implementation released under the GNU General Public License version 2. The core of the emulator is implemented in a user-space daemon called tpmd, and applications can access the TPM capabilities through named pipes, through the included TPM device driver library (tddl), or through the character device file /dev/tpm provided by the included kernel module called tpmd_dev. See this page for more information about the architecture of the TPM emulator.
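
As an illustration of that last access method, here is a minimal sketch (not taken from the emulator's sources) that sends a raw TPM 1.2 TPM_PCRRead command for PCR 0 to /dev/tpm and reads the response back; the bytes follow the standard TPM 1.2 wire format (tag, total length, ordinal, then the PCR index, all big-endian):

    /* Minimal sketch: send TPM_PCRRead(0) to the emulator's /dev/tpm.
     * Not taken from the emulator sources; error handling kept short. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char cmd[14] = {
            0x00, 0xC1,             /* TPM_TAG_RQU_COMMAND */
            0x00, 0x00, 0x00, 0x0E, /* paramSize = 14 bytes */
            0x00, 0x00, 0x00, 0x15, /* TPM_ORD_PcrRead */
            0x00, 0x00, 0x00, 0x00  /* pcrIndex = 0 */
        };
        unsigned char rsp[4096];
        int fd = open("/dev/tpm", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, cmd, sizeof(cmd)) != (ssize_t)sizeof(cmd)) {
            perror("write"); return 1;
        }
        ssize_t n = read(fd, rsp, sizeof(rsp));
        if (n < 10) { perror("read"); return 1; }
        /* Bytes 6..9 of the response hold the TPM return code; 0 is success. */
        printf("response: %zd bytes, return code %02x%02x%02x%02x\n",
               n, rsp[6], rsp[7], rsp[8], rsp[9]);
        close(fd);
        return 0;
    }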

To turn the TPM emulator into a vTPM instance, the parts that are useless for a vTPM instance have been removed (so there is no TPM device driver library nor kernel module anymore). To communicate with the modified user-land daemon, the named pipes have been renamed to reflect the instance id (also called dmi_id) of the TPM (used for paravirtualized VMs), and a new communication channel through UNIX sockets has been added (used for HVM VMs).

The vTPM instance daemon is called vtpmd and its source code can be found in the directory tools/vtpm/ of the Xen source tree. This daemon is not launched directly by the user but automatically by the vTPM Manager. It takes three parameters:

When a vTPM instance is started, during its initialization phase it tries to restore its previous data (only with the startup mode "save"). The data of a vTPM instance is its non-volatile memory (keys, ...). It is saved each time the TPM_SaveState command is executed, or after every command if vtpmd has been compiled with TPM_STRONG_PERSISTENCE defined (in the file tools/vtpm/vtpm/tpm/tpm_emulator.h). The vTPM instance does not store or restore its data by itself; it goes through the vTPM Manager for disk access.

While running, a vTPM instance just waits for commands, executes them, and sends the responses over the same communication channel.
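
To make that loop concrete, here is a rough sketch of what it could look like; TPM_MAX_BUF and the vtpm_*/tpm_* helper names are hypothetical placeholders, not the real symbols from tools/vtpm/:

    /* Conceptual sketch of the vtpmd main loop; the names used here are
     * placeholders, not the real ones from the Xen sources. */
    static void vtpm_instance_loop(void)
    {
        uint8_t cmd[TPM_MAX_BUF], rsp[TPM_MAX_BUF];
        size_t  cmd_len, rsp_len;

        for (;;) {
            /* Block on the per-instance pipe/socket until a command arrives. */
            if (vtpm_receive_command(cmd, &cmd_len) < 0)
                break;

            /* Let the TPM 1.2 emulator core execute the command. */
            tpm_handle_command(cmd, cmd_len, rsp, &rsp_len);

            /* Answer over the same communication channel. */
            vtpm_send_response(rsp, rsp_len);

    #ifdef TPM_STRONG_PERSISTENCE
            /* Ask the vTPM manager to store the non-volatile memory after
             * every command (otherwise only on TPM_SaveState). */
            vtpm_save_nvm();
    #endif
        }
    }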

2.2 vTPM Manager

The vTPM manager creates and manages vTPM instances. When a VM is created, the manager spawns a vTPM instance that will be associated with this VM.

When running a paravirtualized DomU, the vTPM manager has an additional purpose: it redirects the TPM commands from the DomU (by listening to the Back-End, /dev/vtpm) to the associated vTPM instance.
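
A simplified view of this redirection could look like the sketch below (buffer size and helper names are hypothetical); the 4-byte instance identifier that the Back-End prepends is explained in the paravirtualization section:

    /* Conceptual sketch of the redirection done by vtpm_managerd;
     * TPM_MAX_BUF and the helper names are hypothetical. */
    uint8_t  buf[4 + TPM_MAX_BUF];
    ssize_t  n = read(be_fd, buf, sizeof(buf));   /* be_fd: the open /dev/vtpm */
    if (n > 4) {
        uint32_t dmi_id;
        memcpy(&dmi_id, buf, 4);   /* instance id prepended by the Back-End
                                      (see the paravirtualization section) */
        /* Forward the unmodified TPM command to that instance's channel. */
        vtpm_forward_to_instance(dmi_id, buf + 4, (size_t)(n - 4));
    }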

The manager requires a hardware TPM (or an emulated one) on the computer it is running on. The TPM is needed to bind the non-volatile memory of the vTPM instances to the hard drive (that is, to encrypt it with a TPM key and the PCRs, so that the data can only be decrypted by that TPM). It is also needed to pass TPM commands through to the hardware TPM (but this feature is never used).

Five TPM commands have been introduced to allow the vTPM instance to talk with the vTPM manager (so that the vTPM manager can execute actions on behalf of the vTPM instance):

Five other TPM commands have been introduced to allow other components to talk with the vTPM manager. These are privileged commands, which are not sent by the VMs but by the Xen hot-plug script or the vTPM migration daemon:

Take a look at the file tools/vtpm_manager/manager/vtpm_manager.h for more information about the TPM/vTPM commands.
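
For reference, every one of these commands is carried in the usual TPM wire format; the sketch below only illustrates that layout (field names chosen here for illustration), the real definitions being in vtpm_manager.h:

    /* Illustrative layout of a TPM-style request as exchanged with the
     * vTPM manager; all fields are big-endian on the wire. */
    #include <stdint.h>

    struct tpm_cmd_header {
        uint16_t tag;        /* e.g. TPM_TAG_RQU_COMMAND (0x00C1) */
        uint32_t paramSize;  /* total size of the command, header included */
        uint32_t ordinal;    /* which command; the vTPM-specific commands
                                use their own ordinals */
    } __attribute__((packed));
    /* The command parameters follow the header; the response starts with
     * the same kind of header plus a 4-byte return code. */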

The vTPM manager daemon is called vtpm_managerd and its source code can be found in the directory tools/vtpm_manager/manager/ of the Xen source tree.

The vTPM manager launches three threads: one to listen for commands coming from the hot-plug script (and the vTPM migration daemon), another to listen for the TPM commands coming from the paravirtualized guests (Back-End), and a third to listen for the vTPM instance responses.

The vTPM manager uses some files:

2.3 Hot-plug script

The Xen hot-plug script is tools/hotplug/Linux/vtpm-impl. It is launched on xend events to perform administrative tasks on the vTPMs along with the VMs (the events are: create, start, resume, reset, suspend, delete and migrate). Basically, it just sends commands to the vTPM manager.

2.4 Communication channels between the components

There are two different ways to communicate with the vTPM instances.

Communication channels with the vTPM manager

For a paravirtualized DomU, there are two named pipes (unidirectional) available in the Dom0 that are used for the communication between the vTPM instance and the vTPM Manager:

For an HVM guest, it is a little bit different. A UNIX socket (bidirectional) in the Dom0 is used for the same purpose, except that the communication is between the vTPM instance and the HVM hardware emulator (ioemu): /var/vtpm/sockets/%d.socket (HVM_RX_FIFO_D).
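
For illustration, here is a minimal sketch of how a process in the Dom0 could attach to the socket of a given instance (the path pattern comes from above; the stream socket type and everything else are assumptions):

    /* Minimal sketch: connect to the per-instance vTPM UNIX socket.
     * Only the path pattern is taken from the text; the rest is assumed. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int vtpm_connect(int dmi_id)
    {
        struct sockaddr_un addr;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        snprintf(addr.sun_path, sizeof(addr.sun_path),
                 "/var/vtpm/sockets/%d.socket", dmi_id);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;   /* bidirectional: write TPM commands, read responses */
    }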

In both cases (paravirtualized DomU or HVM), two additional named pipes are used by the vTPM instances to send special commands to the vTPM manager (to store and restore non-volatile memory):

The vTPM Manager uses two other named pipes, through which the hot-plug script and the vTPM migration daemon send commands to the vTPM manager and receive the responses. The commands sent over these pipes are privileged commands (open, close, destroy, migrate a vTPM instance):

NOTE: In the special file names, %d is the vTPM instance identifier (dmi_id).

3. Virtualization modes

3.1 Paravirtualization

To make TPM functionality available to a paravirtualized DomU, Xen uses the split-driver model, so the vTPM driver is split in two parts: the Front-End (FE) driver, which runs in the DomU and provides the TPM device to the guest, and the Back-End (BE) driver, which runs in the Dom0 and exposes the multiplexed commands to the vTPM manager through /dev/vtpm.

This driver is based on the Xen network driver. It is simple (around 800 lines of code for the FE and around 1000 lines for the BE) because all the work is done by the vTPM manager and by the vTPM instance.

Data exchange between the FE and the BE is handled through XenBus, which is just a nice API on top of grant tables (in this case a single shared memory ring) and an event channel for asynchronous notifications of activity.
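
The sketch below is only a conceptual picture of such an exchange, not the real tpmif interface: one grant-mapped page with two producer counters and an event-channel kick (the notify_event_channel() call is a hypothetical stand-in, and real code would also need memory barriers):

    #include <stdint.h>
    #include <string.h>

    /* One page shared between the Front-End and the Back-End. */
    struct vtpm_shared_page {
        uint32_t req_prod;        /* bumped by the FE after writing a request  */
        uint32_t rsp_prod;        /* bumped by the BE after writing a response */
        uint8_t  buf[4096 - 8];   /* command/response bytes                    */
    };

    /* Front-End side, conceptually: publish a TPM command and kick the BE. */
    static void fe_send(struct vtpm_shared_page *sh,
                        const uint8_t *cmd, uint32_t len)
    {
        memcpy(sh->buf, cmd, len);    /* copy the command into the shared page */
        sh->req_prod++;               /* tell the BE a new request is there    */
        notify_event_channel();       /* hypothetical: event-channel kick      */
    }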

PV Xen
(The VMs on the left and the Dom0 on the right)

The Back-End prepends a 4-byte vTPM instance identifier to each TPM command to identify the vTPM instance of the VM. The identifier is prepended in the BE so that a VM cannot forge commands and send them to another vTPM instance. The commands are then multiplexed into the character device file /dev/vtpm. This special file is read by the vTPM manager, which redirects each command to the proper vTPM instance.
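
A sketch of that framing, with names and the maximum command size chosen here for illustration:

    /* Illustrative framing on /dev/vtpm: a 4-byte instance identifier in
     * front of the untouched TPM command. Not taken from the Xen sources. */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    #define TPM_MAX_CMD 4096   /* assumed maximum TPM command size */

    static void be_forward(int vtpm_fd, uint32_t dmi_id,
                           const uint8_t *cmd, size_t len)
    {
        uint8_t buf[4 + TPM_MAX_CMD];
        if (len > TPM_MAX_CMD)
            return;
        memcpy(buf, &dmi_id, 4);       /* id chosen by the BE, not by the guest */
        memcpy(buf + 4, cmd, len);     /* the guest command is not modified */
        write(vtpm_fd, buf, 4 + len);  /* vtpm_fd: the open /dev/vtpm */
    }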

PV BE

3.2 Hardware assisted virtualization (HVM)

In an HVM domain, the virtual hardware is emulated by ioemu (a modified version of qemu), so the guest operating system does not need to be modified. This program runs in the Dom0 and provides an emulated TPM in such a way that the VM believes it is a real hardware TPM.

The Trusted Computing Group has defined a specification called the TPM Interface Specification (TIS), which defines a standard way to interact with a hardware TPM. So if a driver implements this specification, it is able to drive almost all the TPMs built nowadays. On Linux, this driver is called tpm_tis (drivers/char/tpm/tpm_tis.c).

The emulated TPM of ioemu implements TIS in the file tools/ioemu-dir/hw/tpm_tis.c (only 1100 lines of code) and redirects every TPM command to the associated vTPM instance. So you just have to install a TIS driver in the VM to get the vTPM working.
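
For reference, here is a sketch of the TIS register layout that tpm_tis and the ioemu emulation have to agree on, as far as I recall it from the specification (double-check the offsets against tpm_tis.c before relying on them):

    /* TIS memory-mapped register layout, as I understand it from the
     * specification; offsets are relative to the locality base. */
    #define TIS_BASE        0xFED40000UL          /* MMIO base of locality 0 */
    #define TIS_LOCALITY(l) (TIS_BASE + (l) * 0x1000)

    #define TPM_ACCESS      0x0000                /* request/relinquish locality */
    #define TPM_INT_ENABLE  0x0008
    #define TPM_STS         0x0018                /* status / command-ready / go */
    #define TPM_DATA_FIFO   0x0024                /* command and response bytes  */
    #define TPM_DID_VID     0x0F00                /* device and vendor id        */

    /* A guest driver writes the command bytes to TPM_DATA_FIFO, sets the
     * "go" bit in TPM_STS, polls TPM_STS until data is available, then
     * reads the response back from TPM_DATA_FIFO. */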

HVM Xen

4. Migration

There are three steps to perform a vTPM migration. I will detail these steps using three sequence diagrams. In these diagrams, vtpmd is the vTPM instance process, vtpm_managerd is the vTPM manager, vtpm-impl is the hotplug script that deals with the vTPM, vtpm_migrator is the vTPM migration client (it runs on the source machine) and vtpm_migratord is the vTPM migration daemon (it runs on the destination machine and waits for incoming migrations). The descriptions in brown are the command lines used to spawn new processes, those in green are the TPM commands (introduced specifically for the vTPM migration protocol) and those in red are simple descriptions of the action performed.

Step 0: verifies that the hotplug script supports vTPM instance migration.

vTPM migration (step 0)

Step 1: consists of the exchange of the migration key. vtpm_migrator (on the source) connects to vtpm_migratord (on the destination) to get the migration key.

vTPM migration (step 1)

Step 2: consists of the migration of the vTPM instance. vtpm_migrator (on the source) connects to vtpm_migratord (on the destination) to send the vTPM instance state, which has been packed and probably encrypted with the migration key.

vTPM migration (step 2)
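
Conceptually, the transfer performed in step 2 boils down to something like the sketch below; the helper names are hypothetical and the real wire framing of vtpm_migrator/vtpm_migratord is not shown:

    /* Very simplified view of step 2 on the source side; the helpers are
     * hypothetical placeholders, not the real migration code. */
    uint8_t *blob;
    size_t   blob_len;

    /* Ask the vTPM manager to pack the instance state, protected with the
     * migration key obtained during step 1. */
    vtpm_pack_instance(dmi_id, migration_key, &blob, &blob_len);

    /* Ship it to vtpm_migratord on the destination host, which hands it
     * to its own vTPM manager so the instance can be recreated there. */
    int s = connect_to_destination(dest_host, dest_port);
    send_all(s, blob, blob_len);
    close(s);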

But the current vTPM migration implementation in Xen doesn't really work. It would require more work to understand and fix it.
