My first experience with Linux was 10 years ago. Although I started computing earlier than that (I had attended computer classes since 1995), I remained a n00b because the beige boxes scared me. I only started falling in love with computers and IT stuff when my father purchased our first family PC (powered by a Pentium III) in 2000. With our own PC I could freely tinker with it without having to worry too much. Although the PC came pre-installed with Windows 98 SE, an article in a local PC magazine piqued my curiosity about Linux, and I ended up installing Red Hat in a dual-boot environment. I admit that I fell in love with Linux, but I loved Windows more for its ease of use, so I set Windows as the default OS. Actually, nobody in my family knew there was Linux on the PC because it was only bootable via a boot diskette. Despite using Windows most of the time, I kept using Linux occasionally out of curiosity and, without realizing it, started mastering it in the process.
A couple of years after my introduction to Linux, I learned about a special kind of software that would let me use Linux without turning my beige box into a dual-boot system: the virtual machine. I installed Connectix Virtual PC and began experimenting with various Linux flavors, often more than two at a time. Sure, running such a load on a Pentium III box with a maximum of only 512MB of RAM was painful, but for a geek it was a pain worth bearing. Still, it wasn't enough to make me a Linux convert, because I thought Linux was too immature for a beginner's use. Around the same time I introduced Linux to my family, and nobody accepted it. Yes, the heavy reliance on the CLI freaked out my family members; unlike Windows, which they could fix themselves, they'd be left dumbfounded whenever they faced a problem in Linux. That convinced me further that Linux was still not good as a beginner's OS. Well, perhaps I'd used the wrong distro, but how was I to know that the one I was comfortable with might be too scary for others?
Fast forward a few years, and I had almost given up being a Linux evangelist to my family. Yeah, I knew it wouldn't succeed because I still wasn't using Linux as my main OS by that time. It's not that I didn't want to, but virtualization software consumed too much of my limited system resources, even though my PC was among the most powerful of its day. Even if I had set my PC up as a dual-boot machine and dedicated all system resources to whichever OS I booted, it wouldn't have helped either, because I wasn't happy with the hassle of rebooting the machine just to switch to the other OS. Then I thought: why don't the computer developers simplify this? My computing knowledge was pretty limited at the time. All I could think of for improvement was either to make virtualization less resource-hungry or something I described as "hardware-level virtualization". The former seemed impossible, because no matter how small a footprint the virtualization software has, the overall system resources are still shared between the host OS and the guest OS. As for the latter, I thought it was ridiculous until I read an article in another local PC magazine about so-called "hardware-assisted virtualization" in 2005, around the same time as the emergence of multi-core consumer CPUs.
From what I understood, hardware-assisted virtualization was similar to my vision of hardware-level virtualization, where system resources are partitioned at the hardware level instead of at the software level as in traditional software virtualization. The article covered both AMD's "Pacifica" and Intel's "Vanderpool". I thought the technology I'd been waiting for had arrived, but I was wrong. It was all lies. The article described a machine where we could boot into both systems at once without installing any virtualization software, switch between the OSes in real time without rebooting (let's call it "double-boot" instead of dual-boot), or reload the same OS without restarting it. It sounded nice, because should the current working environment crash, the other loaded copy of the OS would take over, and this could happen without the user even noticing. However, I still haven't seen my dream of a "double-boot" system come true, even though the technology is already available. The technology becomes useful only when virtualization software is installed, which means it still needs the host-guest relationship between the OSes, and that, I think, defeats the purpose of having hardware-level virtualization. I was highly disappointed. There was, however, one time at a local PC expo where I saw an Apple representative demonstrate switching between Mac OS X and Windows XP in real time using a certain key combination. I asked him whether any virtualization software was installed, and he told me the Mac used only Boot Camp. I'm not sure whether that was true or just some trick, because I never really had a chance to use Windows on a Mac, but whatever system it was, I only want to see the "double-boot" system come true.
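Incidentally, those CPU extensions are easy to spot these days: on Linux, Intel's VT-x ("Vanderpool") shows up as the `vmx` flag in `/proc/cpuinfo`, and AMD-V ("Pacifica") as `svm`. Here's a small Python sketch of that check; the sample flags line is made up for illustration, and on a real machine you'd feed it the contents of `/proc/cpuinfo` instead:

```python
# Detect hardware virtualization support from /proc/cpuinfo-style text.
# "vmx" = Intel VT-x (codename "Vanderpool"), "svm" = AMD-V ("Pacifica").

def detect_virt(cpuinfo_text):
    """Return a description of the virtualization extension, or None."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line looks like "flags\t\t: fpu vme de ... vmx ..."
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "VT-x (Intel 'Vanderpool')"
            if "svm" in flags:
                return "AMD-V (AMD 'Pacifica')"
    return None

# Hypothetical sample; real flag lists are much longer.
sample = "flags\t\t: fpu vme de pse tsc msr vmx sse2"
print(detect_virt(sample))  # VT-x (Intel 'Vanderpool')
```

On an actual box you could do the same thing with `grep -E 'vmx|svm' /proc/cpuinfo`. If the flag is missing, either the CPU predates the extensions or they're disabled in the BIOS.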