Proxmox Virtual Environment

Proxmox Virtual Environment (Proxmox VE or PVE) is open-source software for hyper-converged infrastructure. It is a hosted hypervisor that can run operating systems, including Linux and Windows, on x64 hardware. It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel[5] and allows deployment and management of virtual machines and containers.[6][7] Proxmox VE includes a web-based management interface[10][11] and command-line tools, and provides a REST API for third-party tools. Two types of virtualization are supported: container-based with LXC (starting from version 4.0, replacing OpenVZ, which was used up to version 3.4[8]), and full virtualization with KVM.[9]
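
As an illustration of the REST API mentioned above, the following minimal sketch (not taken from the article's sources) authenticates with an API token and lists the nodes of a Proxmox VE installation. The host name, token ID and secret are placeholders for a hypothetical setup.

    # Minimal sketch: query the Proxmox VE REST API with Python and the
    # 'requests' library. Host and token are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"               # hypothetical PVE host
    TOKEN = "root@pam!mytoken=<secret-uuid>"            # API token (PVE 6.2+)

    headers = {"Authorization": f"PVEAPIToken={TOKEN}"}

    # List all nodes in the cluster (GET /api2/json/nodes).
    # verify=False only because a default install uses a self-signed certificate.
    r = requests.get(f"{HOST}/api2/json/nodes", headers=headers, verify=False)
    r.raise_for_status()
    for node in r.json()["data"]:
        print(node["node"], node["status"])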

[Screenshot: Proxmox VE 7.4 administration interface]

  • Developer: Proxmox Server Solutions GmbH
  • Written in: Perl,[1] Rust[2]
  • OS family: Linux (Unix-like)
  • Working state: Current
  • Source model: Free and open-source software
  • Initial release: 15 April 2008
  • Latest release: 7.4[3] / 23 March 2023
  • Available in: 25 languages[4]
  • Update method: APT
  • Package manager: dpkg
  • Platforms: AMD64
  • Kernel type: Monolithic (Linux)
  • Userland: GNU
  • Default user interface: Web-based
  • License: GNU Affero General Public License
  • Official website: pve.proxmox.com

Proxmox VE is licensed under the GNU Affero General Public License, version 3.[12]

History

Development of Proxmox VE started when Dietmar Maurer and Martin Maurer, two Linux developers, discovered that OpenVZ had no backup tool and no management GUI. KVM was emerging in Linux at the same time and was added shortly afterwards.[13]

The first public release took place in April 2008. It supported container and full virtualization, managed with a web-based user interface similar to other commercial offerings.[14]

Features

Proxmox VE is an open-source server virtualization platform that manages two virtualization technologies, Kernel-based Virtual Machine (KVM) for virtual machines and LXC for containers, through a single web-based interface.[9] The source code is published under the GNU AGPL, version 3. The company sells optional subscription-based customer support;[15] a subscription also gives access to an enterprise software repository.[16]

VM emulated/paravirtualized hardware

Because Proxmox VE uses KVM (with QEMU) as the hypervisor for virtual machines, it natively supports (from the GUI) many emulated and paravirtualized hardware components, such as:

  • Motherboard chipset: i440fx or q35 (many versions)
  • BIOS: SeaBIOS or OVMF (UEFI)
  • Processor types: host, kvm32, kvm64, qemu32, qemu64, or specific CPU generation types (features/flags) such as athlon, opteron, phenom, broadwell, coreduo
  • Disk controllers: LSI53C895A, LSI53C810, MegaRAID SAS 8708EI, VirtIO SCSI (paravirtualized), VirtIO SCSI Single (paravirtualized), VMware PVSCSI
  • Display: Standard VGA, VMware compatible, SPICE, Serial, VirtIO GPU (paravirtualized), VirGL GPU (paravirtualized)
  • Network: Intel E1000, VirtIO (paravirtualized), Realtek 8139, VMware vmxnet3
  • RNG: VirtIO RNG
  • Balloon capabilities (VirtIO)
  • Support for QEMU guest agent.[17]

Proxmox VE also supports PCI and USB pass-through of devices from the physical server. Additional options, such as CPU flags and other emulated hardware types that QEMU supports, are available from the CLI (shell).
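
As a rough sketch of how such hardware choices map to the REST API, the call below would create a VM with a q35 chipset, OVMF (UEFI) firmware, a paravirtualized VirtIO SCSI controller and a VirtIO network card. The host, token, node name, VM ID and storage names are assumptions, not taken from the article.

    # Sketch: create a VM using several of the hardware options listed above
    # (q35, OVMF/UEFI, VirtIO SCSI, VirtIO NIC). All identifiers are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    vm_config = {
        "vmid": 100,                     # new VM ID
        "name": "demo-vm",
        "machine": "q35",                # motherboard chipset
        "bios": "ovmf",                  # UEFI firmware (an EFI disk is also needed in practice)
        "cpu": "host",                   # processor type
        "cores": 2,
        "memory": 2048,                  # MiB
        "scsihw": "virtio-scsi-single",  # paravirtualized disk controller
        "scsi0": "local-lvm:32",         # 32 GiB disk on storage 'local-lvm'
        "net0": "virtio,bridge=vmbr0",   # paravirtualized NIC on bridge vmbr0
        "agent": 1,                      # enable QEMU guest agent support
    }

    # POST /api2/json/nodes/{node}/qemu creates the virtual machine.
    r = requests.post(f"{HOST}/api2/json/nodes/pve1/qemu",
                      headers=headers, data=vm_config, verify=False)
    r.raise_for_status()
    print(r.json()["data"])  # task ID of the creation job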

Supported VM/Linux container operating systems

Since Proxmox VE uses KVM in the background, it supports as virtual machine guests all operating systems that KVM supports.[18]

For containers, since Proxmox VE uses LXC, any Linux distribution can run as a Linux container.

Nested virtualization

Proxmox VE also supports so-called nested virtualization, an operating mode in which a hypervisor, such as PVE or another product (e.g. VMware ESXi, Microsoft Hyper-V or VirtualBox), runs inside a virtual machine on another hypervisor instead of on real hardware. In other words, a hypervisor installed on hardware runs a guest hypervisor (as a virtual machine), which in turn can run its own virtual machines.

This is very useful for testing, for example running a new version of Proxmox VE on an existing Proxmox VE server to try out new features and run virtual machines under it. The same applies to other hypervisors (e.g. VMware ESXi, Microsoft Hyper-V, VirtualBox) that one might want to run inside Proxmox VE for evaluation.

Beyond testing, it is possible to run virtual machines within this kind of nested virtualization that act as hypervisors only when necessary; this is specifically the case with some network device and network emulators.
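
A common prerequisite for nested virtualization is exposing the host CPU's virtualization extensions to the guest, typically by setting the guest's CPU type to "host" (nesting must also be enabled in the KVM kernel module on the physical host). The sketch below shows only the guest-side setting via the API; the host name, node and VM ID are hypothetical.

    # Sketch: give an existing VM the host CPU type so that VT-x/AMD-V is
    # visible inside it, which a nested hypervisor needs. Identifiers are
    # placeholders; nesting must also be enabled in kvm_intel/kvm_amd on the host.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    # PUT /api2/json/nodes/{node}/qemu/{vmid}/config updates the VM configuration.
    r = requests.put(f"{HOST}/api2/json/nodes/pve1/qemu/100/config",
                     headers=headers, data={"cpu": "host"}, verify=False)
    r.raise_for_status()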

Central management

Proxmox VE uses integrated web-based management built on cluster technology and the Proxmox VE cluster filesystem (pmxcfs[22]). This means there is no single point of failure in the cluster, since every node (physical host) stores all configuration files in a database-driven, multi-master, cluster-wide filesystem shared among the nodes via Corosync. Additionally, every node has all the roles and functions (management/firewall/storage/network/...) needed. As the cluster grows, the level of redundancy grows with it. This also means the user can configure anything on a single node inside the cluster, and the changes are synchronized across the cluster and immediately available to the other nodes.[23]

Network model

Proxmox VE uses a bridged network model by default. Bridges are analogous to physical network switches operating at OSI layer 2, but implemented in software. All virtual machines and Linux containers can share a single bridge, or multiple bridges can be created to isolate (segment) the network. Each bridge can be either a Linux native bridge or an Open vSwitch bridge. Bonds (link aggregation), either Linux native or Open vSwitch, can also be used natively, with the mode best suited to the setup (e.g. balance-rr, active-backup, balance-xor, broadcast, LACP, balance-tlb, balance-alb).

All of the above interfaces support VLANs (802.1Q). An additional SDN module (in testing as of v7.2.x) provides VLAN, QinQ, VxLAN and EVPN for network isolation/segmentation, among other SDN-specific options.[24]
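
As an illustration (not from the cited documentation) of how such a bridge might be defined programmatically, the sketch below creates a VLAN-aware Linux bridge on one node; the node name, interface names and surrounding setup are hypothetical, and the new definition still has to be applied before it becomes active.

    # Sketch: define a VLAN-aware Linux bridge attached to a physical NIC.
    # Node and interface names are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    bridge = {
        "iface": "vmbr1",           # new bridge name
        "type": "bridge",
        "bridge_ports": "eno2",     # physical uplink
        "bridge_vlan_aware": 1,     # allow 802.1Q tags on guest interfaces
        "autostart": 1,
    }

    # POST /api2/json/nodes/{node}/network creates the interface definition.
    r = requests.post(f"{HOST}/api2/json/nodes/pve1/network",
                      headers=headers, data=bridge, verify=False)
    r.raise_for_status()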

Storage model

Proxmox VE supports flexible local and remote (network) storage back-ends.[25]

Several local storage types and several remote (network) storage types are supported.

In addition, storages can be divided into two groups:

  • File-level storage - allows access to a fully featured, POSIX-compliant filesystem.
  • Block-level storage - allows access to raw (block) data.

In addition, each storage must be assigned the content types it is allowed to hold, depending on its intended use (VM/container disk images, ISO images, LXC templates, backups, etc.).
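
To illustrate the point about content types, the sketch below adds an NFS storage to the cluster and restricts it to disk images and backups. The server address, export path and storage ID are placeholders, not values from the article.

    # Sketch: register an NFS storage and declare which content types it may
    # hold (here: VM disk images and backups). All names are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    storage = {
        "storage": "nfs-backup",       # storage ID
        "type": "nfs",
        "server": "192.0.2.10",        # NFS server
        "export": "/export/proxmox",   # exported path
        "content": "images,backup",    # allowed content types
    }

    # POST /api2/json/storage adds the storage definition cluster-wide.
    r = requests.post(f"{HOST}/api2/json/storage",
                      headers=headers, data=storage, verify=False)
    r.raise_for_status()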

Storage replication

Proxmox VE provides redundancy for guests using local storage by replicating guest volumes to another PVE node (host), a mechanism called storage replication (available only for ZFS).[28] These replication jobs can run automatically at a selected time interval.[29]
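
A minimal sketch of how such a ZFS replication job might be created through the API follows; the job ID, target node and 15-minute schedule are assumptions for a hypothetical cluster.

    # Sketch: replicate guest 100's volumes to node 'pve2' every 15 minutes.
    # Requires ZFS-backed local storage; all identifiers are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    job = {
        "id": "100-0",        # <guest ID>-<job number>
        "type": "local",
        "target": "pve2",     # replication target node
        "schedule": "*/15",   # every 15 minutes
    }

    # POST /api2/json/cluster/replication creates the replication job.
    r = requests.post(f"{HOST}/api2/json/cluster/replication",
                      headers=headers, data=job, verify=False)
    r.raise_for_status()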

High-availability cluster

Proxmox VE (PVE) can be clustered across multiple server nodes.[30]

The official documentation covers tested cluster sizes of up to 32 physical servers (nodes), regardless of CPU core count or other hardware limitations, but even larger clusters are possible.[31][32] This limit is not imposed by PVE itself but comes from Corosync limits.

Since version 2.0, Proxmox VE has offered a high-availability option for clusters based on the Corosync communication stack. Starting with PVE 6.0, Corosync 3.x is used (not compatible with earlier PVE versions). Individual virtual servers can be configured for high availability using the built-in ha-manager.[33][34] If a Proxmox node becomes unavailable or fails, the virtual servers can be automatically moved to another node and restarted.[35] The database- and FUSE-based Proxmox Cluster filesystem (pmxcfs[36]) makes it possible to perform the configuration of each cluster node via the Corosync communication stack with an SQLite engine.[11]
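
As a sketch of how a single guest is placed under ha-manager's control, the call below registers VM 100 as an HA resource in the started state; the VM ID, restart/relocation counts and host details are assumptions.

    # Sketch: manage VM 100 with ha-manager so it is restarted or relocated
    # if its node fails. Identifiers are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    resource = {
        "sid": "vm:100",      # service ID: vm:<vmid> or ct:<vmid>
        "state": "started",   # desired state
        "max_restart": 2,     # restart attempts on the same node
        "max_relocate": 1,    # relocation attempts to other nodes
    }

    # POST /api2/json/cluster/ha/resources registers the HA resource.
    r = requests.post(f"{HOST}/api2/json/cluster/ha/resources",
                      headers=headers, data=resource, verify=False)
    r.raise_for_status()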

VM/containers migration

At least since 2012, in an HA cluster, live virtual machines can be moved from one physical host to another without downtime.[37][38] KVM and OpenVZ live migration has been supported since Proxmox VE 1.0, released 29 October 2008[39] (starting with version 4.0, OpenVZ was replaced by LXC).[40]

PVE supports the following VM/container migration types:[41]

  • Offline
  • Online (with some limitations[42] related to LXC containers)

PVE also supports VM/container migration from local storage (local to the host/hypervisor) or from shared storage (e.g. NFS/iSCSI/Ceph).
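
The sketch below shows how an online (live) migration of a VM to another node might be triggered via the API; the node names, VM ID and the local-disk option are assumptions for a hypothetical cluster.

    # Sketch: live-migrate VM 100 from node 'pve1' to node 'pve2'.
    # Identifiers are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    params = {
        "target": "pve2",          # destination node
        "online": 1,               # live migration (VM keeps running)
        "with-local-disks": 1,     # also move disks that live on local storage
    }

    # POST /api2/json/nodes/{node}/qemu/{vmid}/migrate starts the migration task.
    r = requests.post(f"{HOST}/api2/json/nodes/pve1/qemu/100/migrate",
                      headers=headers, data=params, verify=False)
    r.raise_for_status()
    print(r.json()["data"])  # task ID to poll for progress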

Clones and templates

PVE supports clones and templates. A clone[43] is an exact copy of a VM/container, and a template[44] can be created from a VM/container. Templates can then be cloned to create new VMs/containers.[45]
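
A minimal sketch of the template-and-clone workflow via the API follows: convert VM 100 into a template, then clone it to a new VM 101. The IDs, names and host details are placeholders.

    # Sketch: turn VM 100 into a template and create a full clone from it.
    # All IDs and names are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}
    base = f"{HOST}/api2/json/nodes/pve1/qemu"

    # 1. Convert the stopped VM 100 into a template.
    requests.post(f"{base}/100/template", headers=headers, verify=False).raise_for_status()

    # 2. Clone the template to a new VM with ID 101 (full, independent copy).
    clone = {"newid": 101, "name": "clone-of-100", "full": 1}
    r = requests.post(f"{base}/100/clone", headers=headers, data=clone, verify=False)
    r.raise_for_status()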

Virtual appliances

Proxmox VE has pre-packaged server software appliances which can be downloaded via the GUI. It is possible to download and deploy appliances from the TurnKey Linux Virtual Appliance Library.[46][47][48]

Data backup

PVE includes backup software, vzdump, which supports data compression and online operation (snapshot mode).[49] Since 2020 there has also been a more advanced client-server solution called Proxmox Backup Server (PBS), which offers deduplication, compression, authenticated encryption and incremental backups.[50]

Backups can only be made to storages configured with the backup content type; these may be:[51][50]

  • NFS, SMB/CIFS, iSCSI, Ceph RBD, Proxmox Backup Server (even on local disk)

There are a few levels of backup:

  • A scheduled backup job, defined at the cluster level, of selected VMs/containers to a selected backup storage
  • A manual backup of an individual VM/container

There are also a few backup modes (a short example of starting a backup via the API follows the list):

  • Stop mode - provides the highest consistency: the VM/container is stopped, the backup is created, and it is then started again.
  • Suspend mode - provides slightly lower consistency: the VM/container is suspended, a backup is then created in snapshot mode, and when finished it is resumed.
  • Snapshot mode - provides the lowest operational downtime: the VM/container is snapshotted and backed up live, without being stopped (use of the guest agent is advised in this mode).
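
As a sketch of how a snapshot-mode backup could be started through the API rather than the GUI scheduler, see below; the node, VM ID, storage name and compression choice are assumptions.

    # Sketch: back up VM 100 in snapshot mode to the storage 'nfs-backup'
    # with zstd compression. Identifiers are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    job = {
        "vmid": 100,
        "storage": "nfs-backup",   # must allow the 'backup' content type
        "mode": "snapshot",        # stop | suspend | snapshot
        "compress": "zstd",
    }

    # POST /api2/json/nodes/{node}/vzdump starts the backup task on that node.
    r = requests.post(f"{HOST}/api2/json/nodes/pve1/vzdump",
                      headers=headers, data=job, verify=False)
    r.raise_for_status()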

Snapshots

PVE also supports creating point-in-time snapshots of a VM/container. Many snapshots can be created (for example before updating a VM/container), and it is possible to roll back to any one of them.[52]
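
A sketch of this snapshot workflow via the API: take a snapshot before an update and roll back to it afterwards if needed. The snapshot name, VM ID and host details are placeholders.

    # Sketch: create a snapshot of VM 100 and later roll back to it.
    # Identifiers are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}
    snap = f"{HOST}/api2/json/nodes/pve1/qemu/100/snapshot"

    # Take a snapshot named 'before-update'.
    requests.post(snap, headers=headers,
                  data={"snapname": "before-update"}, verify=False).raise_for_status()

    # ... later, if the update went wrong, return to that state:
    requests.post(f"{snap}/before-update/rollback",
                  headers=headers, verify=False).raise_for_status()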

Integrated firewall

PVE includes an integrated firewall at three levels (a short example follows the list):[53]

  • Cluster wide.
  • Per PVE node (host/hypervisor).
  • Per VM/Linux container (guests).
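
To illustrate the per-guest level, the sketch below adds a single inbound rule to a VM's firewall rule set. The VM, node and rule details are assumptions, and the guest's firewall must also be enabled in its firewall options for the rule to take effect.

    # Sketch: allow inbound SSH to VM 100 in its per-guest firewall rule set.
    # Identifiers are placeholders.
    import requests

    HOST = "https://pve.example.com:8006"
    headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=<secret>"}

    rule = {
        "type": "in",        # direction
        "action": "ACCEPT",
        "proto": "tcp",
        "dport": "22",
        "enable": 1,
    }

    # POST /api2/json/nodes/{node}/qemu/{vmid}/firewall/rules adds the rule.
    r = requests.post(f"{HOST}/api2/json/nodes/pve1/qemu/100/firewall/rules",
                      headers=headers, data=rule, verify=False)
    r.raise_for_status()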

Known limitations

Proxmox VE 7.2 (May 2022) has the following known limitations:[54]

  • Host (physical server) system maximum RAM: 12 TB
  • Host (physical server) system maximum CPU core count: 768 logical CPU cores
  • Number of hosts (physical servers) in a high-availability cluster: 32+ (official; more are possible[31][32]).
  • Maximum number of bridges per host (physical server): 4094.[55]
    • Maximum number of logical networks when using VLANs: 4094.[55]
    • Maximum number of logical networks when using VxLANs (with SDN): 16 777 216.[56]
    • Maximum number of logical networks when using QinQ (so-called stacked VLANs): 16 777 216.[56]

References

  1. "Proxmox Manager Git Tree". Retrieved 4 March 2019.
  2. "Proxmox VE Rust Git Tree". git.proxmox.com.
  3. "Proxmox VE 7.4 released!". 23 March 2023. Retrieved 23 March 2023.
  4. "projects / proxmox-i18n.git / tree". Retrieved 16 November 2022.
  5. "Proxmox VE Kernel - Proxmox VE". pve.proxmox.com. Retrieved 2017-05-26.
  6. Simon M.C. Cheng (31 October 2014). Proxmox High Availability. Packt Publishing Ltd. pp. 41–. ISBN 978-1-78398-089-5.
  7. Plura, Michael (July 2013). "Aus dem Nähkästchen". IX Magazin. Heise Zeitschriften Verlag. 2013 (7): 74–77. Retrieved July 20, 2015.
  8. "Proxmox VE 4.0 with Linux Containers (LXC) and new HA Manager released". Proxmox. 11 December 2015. Retrieved 12 December 2015.
  9. Ken Hess (July 11, 2011). "Proxmox: The Ultimate Hypervisor". ZDNet. Retrieved September 29, 2021.
  10. Vervloesem, Koen. "Proxmox VE 2.0 review – A virtualisation server for any situation", Linux User & Developer, 11 April 2012. Retrieved on 16 July 2015.
  11. Drilling, Thomas (May 2013). "Virtualization Control Room". Linux Pro Magazine. Linux New Media USA. Retrieved July 17, 2015.
  12. "Open Source – Proxmox VE". Proxmox Server Solutions. Retrieved 17 July 2015.
  13. "Proxmox VE 1.5: combining KVM and OpenVZ". Linux Weekly News. Retrieved 2015-04-10.
  14. Ken Hess (April 15, 2013). "Happy 5th birthday, Proxmox". ZDNet. Retrieved October 4, 2021.
  15. "Support for Proxmox VE". Retrieved 2021-10-19.
  16. "Package Repositories". Retrieved 2021-10-19.
  17. "QEMU guest agent". QEMU.org. QEMU.org. Retrieved 10 August 2022.
  18. "KVM". KVM. KVM.org. Retrieved 10 August 2022.
  19. "KVM wiki: Guest support status". Retrieved 2007-05-27.
  20. "Running Mac OS X as a QEMU/KVM Guest". Retrieved 2014-08-20.
  21. "status". Gnu.org. Retrieved 2014-02-12.
  22. "6. Proxmox Cluster File System (pmxcfs)". Proxmox VE Administration Guide. Retrieved 23 November 2022.
  23. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  24. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  25. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  26. "Roadmap". Proxmox. Retrieved 2014-12-03.
  27. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  28. "Storage Replication". Proxmox VE Administration Guide. Retrieved 14 November 2022.
  29. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  30. Wasim Ahmed (2014-07-14). Mastering Proxmox. Packt Publishing Ltd. pp. 99–. ISBN 978-1-78398-083-3.
  31. "Proxmox VE Forum". Proxmox VE Forum. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  32. "Proxmox Cluster Manager". Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  33. "PVE HA Manager Source repository". Retrieved 2020-10-19.
  34. "Proxmox VE documentation: High Availability". Retrieved 2020-10-19.
  35. "High Availability Virtualization using Proxmox VE and Ceph". Jacksonville Linux Users' Group. Archived from the original on 2020-11-30. Retrieved 2017-12-15.
  36. "Proxmox Cluster File System (pmxcfs)". Proxmox VE Administration Guide. Retrieved 15 November 2022.
  37. "Qemu/KVM Virtual Machines - Proxmox VE".
  38. "Introduction to Proxmox VE". Linuxaria. Retrieved 2014-12-03.
  39. "Roadmap - Proxmox VE".
  40. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  41. "Guest Migration". Proxmox VE Administration Guide. Retrieved 15 November 2022.
  42. "Migration". Proxmox VE Administration Guide. Retrieved 14 November 2022.
  43. "10.4. Copies and Clones". Proxmox VE Administration Guide. Retrieved 23 November 2022.
  44. "10.5. Virtual Machine Templates". Proxmox VE Administration Guide. Retrieved 23 November 2022.
  45. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  46. "Announcing TurnKey OpenVZ optimized builds (+ Proxmox VE channel)". Alon Swartz. 24 February 2012. Retrieved 20 July 2015.
  47. The Proxmox developers have released several virtual appliances, which are ready-made OpenVZ templates that can be downloaded directly from within the Proxmox web interface.
  48. "The next server operating system you buy will be a virtual machine". Ken Hess. 15 October 2013. Retrieved 20 July 2015.
  49. "How to backup a Virtual Machine on Proxmox VE". Manjaro Linux Tutorial and Guide site. September 23, 2017. Retrieved October 4, 2021.
  50. "Introduction". Proxmox Backup Documentation. Proxmox Server Solutions GmbH. Retrieved 10 August 2021.
  51. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  52. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  53. "Proxmox VE Administration guide" (PDF). Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  54. "Proxmox VE comparison". Proxmox VE. Proxmox Server Solutions GmbH. Retrieved 10 August 2022.
  55. "Network config". Proxmox VE. Proxmox. Retrieved 10 August 2022.
  56. "Software Defined Network". Proxmox VE. Proxmox. Retrieved 10 August 2022.