Xen is a high performance VM strategy that allows specially modified guest kernels to run concurrently on the same hardware. Control is shared among the competing running OS instances by the "hypervisor" (which is the Xen software specifically). The hypervisor is responsible for booting the dom0 or domain 0 which is the only instance that has direct access to the real physical hardware. Guest operating systems (domU or domain Unprivileged) negotiate with the hypervisor for resources. Xen is like an operating system for your operating systems in the same way that the normal Linux kernel is an operating system for your processes.
Xen is notable for being able to manage VMs at a low level on hardware that does not support native VM switching. I think that modern CPUs are capable of handling a lot of the functionality of the Xen dom0. I’m not sure how this affects the added utility of Xen in the modern world. I think it would be hard for CPU based virtualization technologies to do live migrations of a domU from one machine to another. This may be the compelling advantage that Xen maintains. Xen also can target processors not traditionally equipped for native VM operations (XenARM).
Here is a good article about Xen contrasting it with the main alternative, KVM.
Some notes for the Ubuntu people.
For some crazy reason, I feel like I would like my Xen to run on Gentoo. The reason for this is that I do not think that letting automagical "help" do the heavy lifting works for Xen. Too many things (kernel version, Xen version, distro version, etc, etc…) constantly change to allow for a hands off approach to work for more than about one day. Better to start taking control as much as possible as soon as possible. Also, Gentoo is ideal for highly specialized installations, for example, a single purpose specialized server typically seen in VMs.
BIOS
Before you begin check the BIOS (hold DEL key for ASUS on boot) to see if virtualization features of the CPU are enabled. In theory this isn’t required for paravirtualization, but it can’t hurt to do this now while it’s easy. In my BIOS this was called Vanderpool Technology.
Look for one of these in the flags section of /proc/cpuinfo.
- vmx: Intel Vanderpool hardware virtualization technology
- svm: AMD Pacifica hardware virtualization technology
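A quick way to check without squinting at the whole flags line (just a convenience sketch; it prints vmx or svm if present, and nothing if the feature is absent or disabled in the BIOS):
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u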
Installation
I’m going to assume a stripped down Gentoo installation for dom0 because I’m not too sure how one would get an OS that only hosts VMs any other way.
- Lance’s very useful notes - he seems to do things exactly like me. High five, Lance!
I will also assume 64 bit and native hardware VM capabilities as mentioned in my VM notes. I’m also going to assume that the kernel will be compiled independently of Gentoo.
First you need to emerge:
- app-emulation/xen - This includes the hypervisor which lives in /boot/xen-4.3.2.gz. There is also /boot/xen-syms-4.3.2 and some sym links such as /boot/xen.gz to the hypervisor.
- app-emulation/xen-tools - The complete collection of supporting files. This includes the tools used to start and manage VMs such as /usr/sbin/xl. It also includes libraries (C & Python), man pages, scripts, and much much more. Before emerging this, it’s probably a good idea to make sure that the qemu USE flag is set. If you get errors mentioning qemu, this could be the issue.
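In other words, something along these lines should do it (a sketch; adjust to however you normally manage USE flags, and note that /etc/portage/package.use may be a directory on your system, in which case put the line in a file inside it):
# enable the qemu USE flag for xen-tools before emerging
echo "app-emulation/xen-tools qemu" >> /etc/portage/package.use
emerge -av app-emulation/xen app-emulation/xen-tools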
Kernel Configuration
Next is configuring the kernel to properly accommodate the dom0. This is the OS that the hypervisor runs first upon power up and it provides an environment from which to manage the other guest OS images (domU). For simplicity, it’s often best to compile in the features needed by both the dom0 and the domU (so the same kernel can serve as either).
Set the following.
General setup --->
    (xed-xen) Local version - append to kernel release
Processor type and features --->
    [*] Linux guest support --->
        [*] Enable paravirtualization code
        [*] Paravirtualization layer for spinlocks
        [*] Xen guest support
Bus options (PCI etc.) --->
    [*] Xen PCI Frontend
    <*> PCI Stub driver
[*] Networking support --->
    Networking options --->
        <*> 802.1d Ethernet Bridging
            [*] IGMP/MLD snooping
        [*] Network packet filtering framework (Netfilter) --->
            [*] Advanced netfilter configuration
                [*] Bridged IP/ARP packets filtering
            <*> Ethernet Bridge tables (ebtables) support --->
Device Drivers --->
    [*] Block devices --->
        <*> Xen virtual block device support
        <*> Xen block-device backend driver
    Input device support --->
        [*] Miscellaneous devices --->
            -*- Xen virtual keyboard and mouse support
    [*] Network device support --->
        <*> Xen network device frontend driver (NEW)
        <*> Xen backend network device
    Character devices --->
        [*] Xen Hypervisor Console support
        [*] Xen Hypervisor Multiple Consoles support
    [*] Virtualization drivers ----
    Xen driver support --->
        [*] Xen memory balloon driver (NEW)
            [*] Scrub pages before returning them to system
        <*> Xen /dev/xen/evtchn device (NEW)
        [*] Backend driver support (NEW)
        <*> Xen filesystem (NEW)
            [*] Create compatibility mount point /proc/xen
        [*] Create xen entries under /sys/hypervisor (NEW)
        <*> userspace grant access device driver
        <*> User-space grant reference allocator driver
        <*> Xen PCI-device backend driver
        <*> Xen ACPI processor
    Graphics support --->
        <*> Frame buffer Devices --->
            <*> Xen virtual frame buffer support
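Once that’s saved, it can be reassuring to verify that the important options actually landed in the .config before building. A hedged sketch from the kernel source directory (the grep patterns are just the obvious ones, not an exhaustive list):
# show only the enabled Xen/paravirt options (disabled ones are commented out)
grep -E 'CONFIG_XEN|CONFIG_HVC_XEN|CONFIG_PARAVIRT' .config | grep -v '^#'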
Some of the most illuminating documentation I found was in the kernel compile help, so I reproduced the interesting bits of that here to avoid having to slog through menuconfig to get at it.
Xen virtual console device driver.
Xen driver for secondary virtual consoles.
This driver implements the front-end of the Xen virtual block device
driver. It communicates with a back-end driver in another domain
which drives the actual block device.
The block-device backend driver allows the kernel to export its block
devices to other guests via a high-performance shared-memory
interface.
The corresponding Linux frontend driver is enabled by the
CONFIG_XEN_BLKDEV_FRONTEND configuration option.
The backend driver attaches itself to any block device specified
in the XenBus configuration. There are no limits to what the block
device can be, as long as it has a major and minor number.
If you are compiling a kernel to run in a Xen block backend driver
domain (often this is domain 0) you should say Y here. To compile this
driver as a module, choose M here: the module will be called
xen-blkback.
This driver provides support for Xen paravirtual network
devices exported by a Xen network driver domain (often domain 0).
The corresponding Linux backend driver is enabled by the
CONFIG_XEN_NETDEV_BACKEND option.
If you are compiling a kernel for use as a Xen guest, you
should say Y here. To compile this driver as a module, choose
M here: the module will be called xen-netfront.
This driver allows the kernel to act as a Xen network driver domain
which exports paravirtual network devices to other Xen domains. These
devices can be accessed by any operating system that implements a
compatible front end.
The corresponding Linux frontend driver is enabled by the
CONFIG_XEN_NETDEV_FRONTEND configuration option.
The backend driver presents a standard network device endpoint for
each paravirtual network device to the driver domain network stack.
These can then be bridged or routed etc in order to provide full
network connectivity.
If you are compiling a kernel to run in a Xen network driver domain
(often this is domain 0) you should say Y here. To compile this driver
as a module, choose M here: the module will be called xen-netback.
The balloon driver allows the Xen domain to request more memory from
the system to expand the domain's memory allocation, or alternatively
return unneeded memory to the system.
Scrub pages before returning them to the system for reuse by
other domains. This makes sure that any confidential data is not
accidentally visible to other domains. It is more secure, but
slightly less efficient. If in doubt, say yes.
The evtchn driver allows a userspace process to trigger event
channels and to receive notification of an event channel
firing. If in doubt, say yes.
Support for backend device drivers that provide I/O services
to other virtual machines.
The xen filesystem provides a way for domains to share information
with each other and with the hypervisor. For example, by reading and
writing the "xenbus" file, guests may pass arbitrary information to
the initial domain. If in doubt, say yes.
The old xenstore userspace tools expect to find "xenbus" under
/proc/xen, but "xenbus" is now found at the root of the xenfs
filesystem. Selecting this causes the kernel to create
the compatibility mount point /proc/xen if it is running on a xen
platform. If in doubt, say yes.
Create entries under /sys/hypervisor describing the Xen hypervisor
environment. When running native or in another virtual environment,
/sys/hypervisor will still be present, but will have no xen contents.
Allows userspace processes to use grants.
Allows userspace processes to create pages with access granted
to other domains. This can be used to implement frontend drivers or as
part of an inter-domain shared memory channel.
The PCI device backend driver allows the kernel to export arbitrary
PCI devices to other guests. If you select this to be a module, you
will need to make sure no other driver has bound to the device(s)
you want to make visible to other guests.
The parameter "passthrough" allows you to specify how you want the PCI
devices to appear in the guest. You can choose the default (0) where
PCI topology starts at 00.00.0, or (1) for passthrough if you want
the PCI device topology to appear the same as in the host.
The "hide" parameter (only applicable if backend driver is compiled
into the kernel) allows you to bind the PCI devices to this module
from the default device drivers. The argument is the list of PCI BDFs:
xen-pciback.hide=(03:00.0)(04:00.0)
If in doubt, say m.
This ACPI processor uploads Power Management information to the Xen
hypervisor.
To do that the driver parses the Power Management data and uploads
said information to the Xen hypervisor. Then the Xen hypervisor can
select the proper Cx and Pxx states. It also registers itself as the
SMM so that other drivers (such as ACPI cpufreq scaling driver) will
not load.
To compile this driver as a module, choose M here: the module will be
called xen_acpi_processor. If you do not know what to choose, select
M here. If the CPUFREQ drivers are built in, select Y here.
Grub
The Xen system works by first booting into a Xen hypervisor which is something that comes from http://xen.org and not your real kernel. Then it immediately loads and spawns your dom0 which will be your controlling OS. This needs to be configured correctly in Grub for it to all work. With the strong pressure to go with Grub-2.0, that’s what will be shown here.
timeout=5
menuentry 'Xen Hypervisor' {
root=hd0,2
multiboot /boot/xen.gz dom0_mem=2048M
module /boot/vmlinuz root=/dev/sda2
}
menuentry 'Normal Kernel: vmlinux' {
root=hd0,2
linux /boot/vmlinuz root=/dev/sda2
}
menuentry 'Normal Kernel: vmlinux.old' {
root=hd0,2
linux /boot/vmlinuz.old root=/dev/sda2
}
Here is a list of all the hypervisor options, for example dom0_mem as shown.
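Additional options just get appended to the multiboot line. A hedged sketch (dom0_max_vcpus is a documented Xen option, but double check it against your Xen version before relying on it):
multiboot /boot/xen.gz dom0_mem=2048M dom0_max_vcpus=2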
Note that where Grub1 numbered partitions from 0, Grub2 numbers them from 1; confusingly, devices are still numbered from 0 in both. In other words Grub2 uses (D,P+1) where D=Grub1 device and P=Grub1 partition. I find that one of the best ways to get this stuff straight is to just boot some kind of grub and then play around with tab completion in the command interpreter. Then come back and mount everything again and you’ll have the right volume ids. This is probably best to do after the first thing you hope is right, isn’t.
I had a weird problem where the boot would stop and…
Setting system clock using the hardware clock [UTC] ...
Give root password for maintenance
(or type Control-D to continue):
This was because one of the grub2 magic config scripts made a boot
profile which had the word single
in it. That is bad. Don’t use that
unless you’re sure you need it. Also from the automagically generated
profile, I don’t know what placeholder
was holding places for. It
doesn’t seem essential.
Checking For Hypervisor And Correct Xen Kernel
Look at /proc/xen/capabilities to see if that exists and is active. If not, the service daemons will not run.
If the /proc/xen file system is simply not mounted, consider adding something like this to /etc/fstab.
xenfs /proc/xen xenfs defaults 0 0
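To get it mounted right away without a reboot, ordinary mount usage should work (assuming the xenfs support was compiled in as shown earlier):
mount -t xenfs xenfs /proc/xen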
Also dmesg | grep Xen
should show this somewhere.
[ 0.000000] Booting paravirtualized kernel on Xen
[ 0.000000] Xen version: 4.3.2 (preserve-AD)
If this isn’t working and your kernel is ready for Xen but not running under the hypervisor, you should be able to find this in dmesg.
Booting paravirtualized kernel on bare hardware
Try xl dmesg
and look for the following.
__ __ _ _ _____ ____
\ \/ /___ _ __ | || | |___ / |___ \
\ // _ \ '_ \ | || |_ |_ \ __) |
/ \ __/ | | | |__ _| ___) | / __/
/_/\_\___|_| |_| |_|(_)____(_)_____|
(XEN) Xen version 4.3.2 (@(none)) (x86_64-pc-linux-gnu-gcc (Gentoo 4.7.3-r1 p1.4, pie-0.5.5) 4.7.3) debug=n Tue May 20 16:19:29 PDT 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: dom0_mem=2048M
Also xl info
should produce a lot of interesting stuff.
Xen Services
In the /etc/init.d directory, there should be some init scripts for Xen services such as xen-watchdog, xenconsoled, xendriverdomain, xenstored, xencommons, xendomains, and xenqemudev.
I added this.
rc-update add xenconsoled default
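That takes care of the next boot; to start it immediately, normal OpenRC usage applies (a sketch):
/etc/init.d/xenconsoled start
rc-status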
Make sure that this xenconsoled
service is started or else when you
try to use xl
to manage things it will hang so severely that it
pretty much can’t be killed.
It seems to start up most of the other ones as dependencies. This command seems to be equivalent to xl list.
# /etc/init.d/xendomains status
Name ID Mem VCPUs State Time(s)
Domain-0 0 2048 4 r----- 6.8
Auto Starting VMs
Once you have some VMs that you like to run, you might want them to come back up automatically after a power cycle rather than needing a manual restart. I’m not 100% sure this works, but there are rumors that the way to accomplish it is something like the following.
rc-update add xendomains default
cd /etc/xen/auto
ln -s /xen/conf/mylilVM.conf
In theory, this should start mylilVM
when the hypervisor starts.
Set Up Images
mkdir -p /xen/kernels
cp /boot/vmlinuz-3.14.xed-xen /xen/kernels/
mkdir /xen/disks
dd if=/dev/zero of=/xen/disks/ext4-gen2-core.img bs=1M count=10240
mkfs.ext4 -F -L XENGUEST /xen/disks/ext4-gen2-core.img
mkdir /mnt/img1
mount -o loop,rw /xen/disks/ext4-gen2-core.img /mnt/img1
mkdir /xen/configs
Install Guest OS
mkdir /mnt/img1
mount -o loop,rw /xen/disks/ext4-gen2-core.img /mnt/img1
wget -O - ftp://ftp.gtlib.gatech.edu/pub/gentoo/releases/amd64/current-iso/stage3-amd64-20140515.tar.bz2 | tar -xjf - -C /mnt/img1
mount -t proc none /mnt/img1/proc/
mount --rbind /dev /mnt/img1/dev/
cp -L /etc/resolv.conf /mnt/img1/etc/
chmod 644 /mnt/img1/etc/resolv.conf
cp /etc/portage/make.conf /mnt/img1/etc/portage/
mv /mnt/img1/etc/fstab /mnt/img1/etc/fstab.orig
echo "/dev/sda1 / ext3 noatime 0 1" > /mnt/img1/etc/fstab
time rsync -aP /usr/portage /mnt/img1/usr/
chroot /mnt/img1 /bin/bash
echo America/Los_Angeles > /etc/timezone
ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
date MMDDhhmmYYYY.ss
time emerge --sync
time emerge -v1 portage
time emerge -v @system
time nice emerge -vuDN vim ssh vixie-cron metalog bc htop gentoolkit app-misc/screen lftp ntp
eselect editor set vim
vi /etc/profile
emerge -C nano
passwd
Note that the portage tree is just replicated from dom0. This probably could be effectively shared. This is one of the kinds of tips that go with a fully optimal Gentoo cluster installation where the nodes all share resources (e.g. binary packages).
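For example, one way to avoid the copy during installation work (untested here, just a sketch) is to bind mount the dom0 tree into the image while it is loop mounted; for a running domU you would want something network based (e.g. NFS) instead, since the image won’t be mounted on dom0 then.
# share dom0's portage tree with the chroot instead of rsyncing it
mkdir -p /mnt/img1/usr/portage
mount --bind /usr/portage /mnt/img1/usr/portage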
Prepare Guest’s Console
The first time I successfully got the guest OS to fire up, it went through the full startup looking fine and then just stopped. I thought it maybe hung, but after more poking around, I realized that it could be shut down properly too. So what was it doing just sitting there?
Well, just sitting there. An OS needs a way to interact with it and it turns out that the guest OS can’t use "real" consoles as defined in /etc/inittab. To overcome this, you need the "Xen virtual console device driver" mentioned earlier to be enabled with CONFIG_HVC_XEN=y and then, importantly, hooked up to the guest. To do that add the following line to the guest’s /etc/inittab.
h0:12345:respawn:/sbin/agetty 9600 hvc0 screen
Now exit the chroot and mounted installation image.
exit
cd /mnt
umount /mnt/img1/proc /mnt/img1
Configuration Files
Create a configuration file for the guest.
kernel = "/xen/kernels/vmlinuz-3.14.4xed-xen"
memory = 512
name = "gen2-core"
disk = ['file:/xen/disks/ext4-gen2-core.img,sda1,w']
root = "/dev/xvda1"
extra = "raid=noautodetect"
The name property must be unique.
The extra property just adds arbitrary kernel options to the kernel execution. I tried boot_delay but I do not think it worked.
The disk property as defined above maps to virtual /dev/sda1 on the guest. Note that if you’re not using a loop mounted file: for a disk, you should use the prefix phy: for a "physical" device. Note also that you’d leave off the /dev which will be assumed (unlike the file method). So an LVM volume would be phy:lvm/vg_ablab/root and a real disk partition would be phy:sdb3.
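Putting that together, a disk line using the LVM example above might look something like this (a hypothetical, untested sketch):
disk = ['phy:lvm/vg_ablab/root,xvda1,w']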
There was some weirdness with the kernel not mounting the root properly. I thought that if I set sda1 in the disk parameter, then the root parameter would take that too. But no, reading the errors carefully, it says that only /dev/xvda was available. Odd as this configuration looks, it works.
Set Up Networking
Normally this needs to be done at this point, but when getting started it can be helpful to skip networking just to get the VMs running, and then come back to it. When setting up subsequent VMs after the procedure is worked out, this is the point in the process where networking should be included.
It can be a good idea to run a DHCP server on dom0 and do most of your configuration there. For details on the network configuration, see that section in this document.
Start A Virtual Domain
Try to spawn a domU.
xl create /xen/configs/gen2-core.config
If you get a xc: error: panic check the spelling and paths (they must be full) in the config file. Although the error message isn’t exemplary, it sometimes has the real story hidden in it, such as xc_dom_kernel_file failed: No such file or directory.
The next bit of progress was to get a clean run out of xl create but unfortunately, nothing seemed to come of it. It simply gave the following message and ended with a clean exit code.
Parsing config from /xen/configs/gen2-core.config
Daemon running with PID 6253
When debugging such things, this can be helpful.
cat /var/log/xen/xl-gen2-core.log
Finally, I had a better look with this.
xl create /xen/configs/gen2-core.config && xl console gen2-core
Now I could see the guest kernel trying to boot. It looked like the kernel was found and started correctly, but it couldn’t find my image.
Once the VM is started, if you didn’t already connect to it, the console function does it.
xl console gen2-core
Note: Use CTRL-] to get out of a console.
I do this kind of testing in a GNU Screen session and once the VM is running in one screen terminal, you can jump back to the dom0 in another and check on things.
# xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 2048 4 r----- 636.3
gen2-core 39 512 1 -b---- 2.7
# xl uptime
Name ID Uptime
Domain-0 0 1 day, 12:15:57
gen2-core 39 18:08:43
- r: running
- b: blocked - can mean sleeping due to lack of activity
- p: paused
- s: shutdown
- c: crashed - rare, must be configured not to restart on crash
- d: dying
You can shut down a running VM with the shutdown command or the destroy command. The former is like a normal shutdown (which you can do from within the VM environment if possible) and the latter is like yanking out the power plug. I think you can use either the ID number listed in xl list or the full "Name".
xl shutdown 34
xl destroy gen2-core
Once you are able to get a VM to boot, connect to the console, log in, and play with the system a bit, the next step is to sort out networking.
Networking
w|
i| dom0 dom4
r| |----------------| virtual |-----|
e\---|eth0 /--vif4.0|---------|eth0 |*D
|*A| | *C | |-----|
| | | |
| \----br0 |
| *B | | dom7
| | *C | |-----|
| \--vif7.0|---------|eth0 |*D
|----------------| |-----|
- A= Real physical ethernet (e.g. Tigon3 [partno(BCM95721) rev 4201])
- B= 802.1d Ethernet Bridging
- C= XEN_NETDEV_BACKEND
- D= XEN_NETDEV_FRONTEND
The back end system allows for traffic to go to the real network using bridging, routing, or NAT.
To get a predictable, durable MAC address, this can be set. The Xen Project has its own MAC prefix (OUI), 00:16:3e.
vif= ['mac=00:16:3e:xx:xx:xx']
vif= ['mac=00:16:3e:ba:da:55']
vif= ['mac=00:16:3e:ba:da:55,bridge=br0']
To just let the Xen system pick some nice MAC, use this:
vif= ['bridge=br0']
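If you do want to pin down your own MAC in that range, here’s one quick way to cook one up (a bash sketch; paste the result into the mac= setting above):
printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))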
Bridging
- Linux Bridging in general
On dom0 configure the network to do bridges. (Untested! I’m also not 100% sure about "enp2s0" vs "eth0".)
dns_servers="99.99.0.252"
bridge_br0="enp2s0"
config_br0="99.99.243.113 netmask 255.255.255.224"
routes_br0="default via 99.99.243.97"
Then configure the network scripts.
cd /etc/init.d; ln -s net.lo net.br0
emerge net-misc/bridge-utils
rc-update add net.br0 default
rc-update del net.enp2s0
After rebooting, things should come back like this.
# ifconfig br0 | grep -B1 inet
br0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST> mtu 1500
inet 99.99.243.113 netmask 255.255.255.224 broadcast 99.99.243.127
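brctl (from the bridge-utils package emerged above) should also be able to confirm that the physical interface is actually enslaved to the bridge:
brctl show br0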
Note: The network seemed completely functional with the device being br0. Be careful though; if something goes wrong, you could easily be locked out.
Note: In the good old days, using the ifconfig command would show you all of the interfaces that the kernel knew about. Same for dmesg | grep eth. But I’ve just noticed that even if both of those turn up empty, if you try ifconfig -a on the dom0, you can be surprised to find that it’s there waiting to be configured. Definitely do this explicit check.
domU Network Setup
Setting up the domU’s network is pretty easy since its eth is idealized. Just create a simple networking script.
# === Static Host with normal eth0 device
config_eth0="99.99.243.114 netmask 255.255.255.224"
routes_eth0="default via 99.99.243.97"
dns_servers_eth0="99.99.0.252"
Set that to activate on boot.
cd /etc/init.d; ln -s net.lo net.eth0
rc-update add net.eth0 default
rc-update add sshd default
Monitoring
- xl list - Shows VMs currently running.
- xl info - Dumps a lot of information about CPUs, memory, versions, etc.
- xl top - Runs an interactive thing like top which shows VMs instead of processes.
- xl uptime - Simple list of running VMs with their uptimes.
- xl dmesg - Shows the dmesg from the Xen hypervisor. This is a good way to check if the hypervisor is actually running.
Rough Notes On Migration
Here is how to move a VM from one physical machine to another.
- Shut down VM: drop a /etc/nologin to prevent any other logins, look for activity in ps, etc. Make sure any users stop what they're doing in a reasonable way.
- Move the filesystem: Start by creating a new file system on the target machine:
VM=mysite
lvcreate -l 64 -n $VM.swap -v xen_vg
lvcreate -l 1000 -n $VM -v xen_vg
mkswap /dev/xen_vg/$VM.swap
mkfs.ext3 /dev/xen_vg/$VM
- Copy over everything:
[root@source-dom0 ~]# mount /dev/xen_vg/mysite /mnt/temp/
[root@target-dom0 ~]# mount /dev/xen_vg/mysite /mnt/images/
[root@source-dom0 ~]# rsync -av /mnt/temp/* 172.22.14.12:/mnt/images/
[root@target-dom0 ~]# umount /mnt/images/
[root@source-dom0 ~]# umount /mnt/temp/
[root@target-dom0 ~]# cp /etc/xen/configs/hkn.config /etc/xen/configs/mysite.config
[root@target-dom0 ~]# vi /etc/xen/configs/mysite.config   *edit the LV the drives point to*
[root@target-dom0 ~]# domain-start-hostname -f mysite -h mysite -c mysite.config
[root@target-dom0 ~]# xl con mysite
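For completeness, live migration (the compelling advantage mentioned at the top) is a different animal from this manual copy; xl has a migrate subcommand for it. A hedged sketch, assuming both dom0s can reach the VM’s storage and root ssh between them works (target-dom0 is a placeholder hostname):
xl migrate gen2-core target-dom0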
Performance
After aspersions were cast by Little Penguin ("…virtual machines are acceptable, but come on, real kernel developers don’t mess around with virtual machines, they are too slow.") I became curious and wondered what the performance implications were. I did not do exhaustive testing but I did do one test thoroughly and the results were quite interesting.
I ran this command three times in a variety of environments.
dd if=/dev/urandom of=/dev/null bs=1024 count=100000
The time it took to complete this (in seconds) is summarized in the following table.
OS           Kernel                        1          2          3          average
CentOS 6.5   2.6.32-279.19.1.el6.x86_64    11.9595    12.0183    11.9948    11.9909
Gentoo       3.14.4 - bare hardware        7.56831    7.56305    7.56534    7.56557
Gentoo       3.14.4 - Xen dom0 (alone)     7.67515    7.68314    7.67696    7.67842
Gentoo       3.14.4 - Xen dom0 (w/domU)    7.67795    7.67635    7.6815     7.67860
Gentoo       3.14.4 - Xen domU             7.45921    7.45356    7.45782    7.45686
The CentOS machine is a production machine running on the exact same kind of hardware. The Gentoo tests used the exact same custom compiled kernel in all modes.
Troubleshooting
Having problems with some error messages relating to qemu? Try to have qemu enabled in your USE flags. The Gentoo Xen package maintainer tells me this:
using USE="qemu" will enable qemu from app-emulation/xen-tools
which tested and released by xen upstream (recommend use case).
while USE="system-qemu -qemu" will use qemu from app-emulation/qemu
which not tested by xen upstream, and may have problem.
it's kind of qemu-unbundled version (experimental use case).
you may wonder why we provide this, see[1]
[1] http://wiki.gentoo.org/wiki/Why_not_bundle_dependencies