Original message
"Release of the Xen 4.6.0 hypervisor"
Posted by Anonymous, 15-Oct-15 08:26

> Can someone clearly explain for which tasks Xen is currently preferable
> to KVM, and why? Thanks.

Xen is self-contained and compact, while KVM is built on the infrastructure of a system as complex and bloated as the Linux kernel.

https://www.qubes-os.org/en/doc/user-faq/#why-does-qubes-use...

3.2. Xen vs. KVM security architecture comparison

Most Xen advocates would dismiss KVM immediately by saying that it's not a true bare-metal hypervisor, but rather more of a type II hypervisor added on top of a (fat and ugly) Linux kernel. The KVM fans would argue that while Xen itself might indeed be much smaller than Linux with KVM, it's not a fair comparison, because one also needs to count Xen's Dom0 code as belonging to the TCB, and Dom0 runs a pretty much regular Linux system, which in the end comes down to pretty comparable code bases we need to trust
(Linux + KVM vs. Xen + Linux).

Both of those arguments are oversimplifications, however.

The thin vs. fat hypervisor argument
In the KVM architecture each VM is just another type of Linux usermode process, with the exception that such a “VM process” doesn't have access to the standard Linux system call interface, but instead interacts with the kernel via VMX or SVM intercepts. The kernel thus effectively becomes the hypervisor for all the VM processes. One should note, however, that the VM-hypervisor interface in this case is much simpler than the regular process-kernel interface.
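To make the “VM process” model concrete, below is a minimal sketch of the KVM usermode API (the /dev/kvm ioctl interface documented in the kernel tree). Error handling and the guest register/code setup are omitted, so treat it as a skeleton of the control flow rather than a working launcher:

#include <fcntl.h>
#include <linux/kvm.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    /* The whole hypervisor interface is one device node plus ioctl()s. */
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);   /* VM state lives in the kernel,
                                                 tied to this process */

    /* Guest "physical" memory is ordinary anonymous mmap()ed memory. */
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0,
        .memory_size = 0x1000,
        .userspace_addr = (uint64_t)(uintptr_t)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    /* A vCPU is yet another file descriptor; its shared state is mmap()ed. */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    /* (Guest code would be copied into mem and registers set up here.) */

    /* KVM_RUN enters the guest; a VMX/SVM intercept returns control to
       this process, and run->exit_reason says why (I/O, MMIO, HLT, ...). */
    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);
        if (run->exit_reason == KVM_EXIT_HLT)
            break;              /* the guest executed HLT */
        /* other exit reasons are what an I/O emulator would handle */
    }
    return 0;
}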

That simplicity is not the whole story, however. In particular, the hypervisor still uses (or is free to use) the whole Linux kernel infrastructure, with all its drivers and internal interfaces. This blurs the line between which code in the Linux kernel is, and which is not, used for handling the various VM-generated events. This is not the case with a true bare-metal hypervisor like Xen: in Xen, at no point does the execution path jump
out of the hypervisor to e.g. Dom0. Everything is contained within the hypervisor.

Consequently it's easier to perform a careful security audit of the Xen hypervisor code, as it's clear which code really belongs to the hypervisor.

At the same time, the above argument cannot be automatically transferred to Xen's Dom0 code.

The main reason is that it is possible to move all the drivers and driver backends out of Dom0. The same is true for moving the I/O Device Emulator (ioemu) out of Dom0.
The only element (that is accessible to VMs) that must be left in Dom0 is the XenStore daemon, which is responsible for managing a directory of system-wide parameters (e.g. where each of the backend drivers is located). That represents a very small amount of code that needs to be reviewed.
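For illustration, here is a hedged sketch of how a tool might query that directory through libxenstore, the client library shipped with Xen; the path follows the conventional /local/domain layout and is an example, not a guarantee:

#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
    /* Connect to the XenStore daemon. */
    struct xs_handle *xs = xs_open(0);
    if (!xs)
        return 1;

    /* Example query: where is the backend of network device 0 of domain 1? */
    unsigned int len;
    char *val = xs_read(xs, XBT_NULL,
                        "/local/domain/1/device/vif/0/backend", &len);
    if (val) {
        printf("vif backend: %.*s\n", (int)len, val);
        free(val);
    }

    xs_close(xs);
    return 0;
}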


The I/O Emulation vs. PV drivers

KVM uses a full-virtualization approach for all its virtual machines. This means that every virtual machine in KVM must have an associated I/O emulator; KVM uses the open-source QEMU emulator for this purpose. In the KVM architecture the I/O emulator runs on the host, although as an unprivileged non-root process.

Xen, on the other hand, allows for both fully virtualized and para-virtualized virtual machines. This offers the option of either removing the I/O emulator from the system completely (in case the user wants to run only PV guests, which don't need I/O emulation), or hosting the I/O emulators in dedicated minimal PV domains. Xen even provides a dedicated mechanism for this, the so-called “stub domains”.
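As a concrete illustration, current Xen toolstacks expose stub domains through a single guest-config switch. The fragment below is a hypothetical minimal xl.cfg (names and paths are made up; only the last option is the relevant one, and it assumes the stub-domain image is installed):

# hypothetical minimal HVM guest config (xl.cfg)
name   = "hvm-guest"
type   = "hvm"
memory = 1024
disk   = [ 'phy:/dev/vg0/hvm-guest,xvda,w' ]

# host the QEMU I/O emulator in a dedicated minimal stub domain
# instead of running it as a Dom0 process
device_model_stubdomain_override = 1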

The I/O emulator is a complex piece of software, and thus it is reasonable to assume that it contains bugs and can be exploited by an attacker. In fact both Xen and KVM assume that the I/O emulator can be compromised, and both try to protect the rest of the system from a potentially compromised emulator. They differ, however, in the way they try to do so.

KVM uses standard Linux security mechanisms to isolate and contain the I/O emulator process, such as address space isolation and standard ACL mechanisms. Those can be further extended, e.g. by SELinux sandboxing. This means that the isolation quality KVM provides cannot be significantly better than what a regular Linux kernel can provide to isolate usermode processes.
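To make that concrete, the sketch below shows the kind of standard mechanisms meant here, as a hypothetical emulator launcher might use them; the UID/GID values are illustrative, and real deployments would layer on namespaces, seccomp, or SELinux:

#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/prctl.h>
#include <unistd.h>

static void drop_privileges(uid_t uid, gid_t gid)
{
    /* forbid ever regaining privileges, e.g. through setuid binaries */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0)
        exit(1);
    /* drop the group IDs first, then the user IDs */
    if (setresgid(gid, gid, gid) != 0 || setresuid(uid, uid, uid) != 0)
        exit(1);
}

int main(void)
{
    drop_privileges(65534, 65534);   /* e.g. nobody/nogroup; illustrative */
    /* ...exec the I/O emulator here: it now runs unprivileged, but its
       containment is only as strong as the kernel's process isolation */
    return 0;
}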

Xen uses virtualization for I/O emulator isolation, specifically paravirtualization, in the very same way as it is used for regular VM isolation, and doesn't rely on Linux to provide any isolation.


Driver domains support

An essential feature of Qubes OS is the ability to sandbox various drivers, so that even in the case of a bug in a driver the system is protected against compromise. One example is a buggy WiFi driver or 802.11 stack implementation that could be exploited by an attacker operating in an airport lounge or a hotel. Another is a bug in the disk driver or virtual filesystem backend that
could be exploited by an attacker from one of the (lesser-privileged) VMs in order to compromise other (more-privileged) VMs.

In order to mitigate such situations, the Qubes architecture assumes the existence of several so-called driver domains. A driver domain is an unprivileged PV domain that has been securely granted access to a certain PCI device (e.g. the network card or disk controller) using Intel VT-d. This means that e.g. all the networking code (WiFi drivers and stack, TCP/IP stack, DHCP client) is located in an unprivileged domain rather than in Dom0.

This brings huge security advantages -- see the specific discussions about the network domain and disk domain later in this document.
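Expressed in today's xl toolstack, a driver domain boils down to an ordinary unprivileged guest plus a PCI assignment. The following xl.cfg fragment is a hypothetical example; names and the PCI address are made up for illustration:

# hypothetical network driver domain (xl.cfg)
name   = "netvm"
type   = "pv"
memory = 512
kernel = "/boot/vmlinuz-netvm"

# hand the physical NIC to this unprivileged domain via the IOMMU (VT-d)
pci = [ '0000:02:00.0' ]

# other guests would then point their network frontends at it, e.g.:
#   vif = [ 'backend=netvm' ]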

KVM, on the other hand, doesn't have support for driver domains. One could argue that it would be possible to add support for driver domains (that is, for hosting PV driver backends in unprivileged domains) using Linux shared memory, because KVM allows VT-d to be used for secure device assignment to unprivileged VMs, but that would probably require substantial coding work. More importantly, because KVM doesn't support PV domains, each driver domain would still need to make use of an I/O emulator running on the host. As explained in the previous paragraph, this would diminish the isolation strength of a driver domain on a KVM
system.

Summary

We believe that the Xen hypervisor architecture better suits the needs of our project. The Xen hypervisor is very small compared to the Linux kernel, which makes it substantially easier to audit for security problems. Xen makes it possible to move most of the “world-facing” code out of Dom0, including the I/O emulator, networking code and many drivers, leaving a very slim interface between other VMs and Dom0. Xen's support for driver domains is crucial to the Qubes OS architecture.

KVM relies on the Linux kernel to provide isolation, e.g. for the I/O emulator process, which we believe is not as secure as Xen's isolation based on virtualization enforced by a thin hypervisor. KVM also doesn't support driver domains.

 
