This question is difficult because a desert island would not offer the convenience of electrical power or cable TV, nor would it provide a particularly computer-friendly environment. If I understand your meaning, I would have to choose tape for now, for two reasons. First, tape enables me to back up data and remove it safely to an alternate location at the lowest possible cost. Second, tape gives data portability, meaning I could restore files and data sets to whatever hardware platform is available post-disaster; most disk mirroring schemes require like arrays on both sides of the mirror. In a few years, as volume virtualization software improves and as storage arrays become commodity purchases with less proprietary software, I would choose disk. From a pricing, technology and management perspective, what are the most compelling advantages of disk over tape? Are there any?
Cost comparisons are fallacious. Who cares if media costs are becoming similar? That proves nothing about cost of ownership. Supposed technological advantages of one over the other are also fairly meaningless. Tape has used the same read/write head technology, PRML and other techniques found in disk for many years. But it is a linear rather than a rotational medium, and there's the rub. Ultimately, lots of tape and lots of disk are equally painful to manage.
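To make the cost-of-ownership point concrete, here is a minimal sketch, using entirely made-up figures, of why similar media prices say little once hardware, labor and facilities are counted:

```python
# Illustrative total-cost-of-ownership (TCO) comparison.
# All dollar figures are hypothetical, for demonstration only;
# real costs vary widely by vendor, scale and staffing.

def tco(media_cost, hardware_cost, annual_labor, annual_facilities, years=3):
    """Total cost of ownership over a given number of years."""
    return media_cost + hardware_cost + years * (annual_labor + annual_facilities)

# Suppose media prices really are "similar":
tape = tco(media_cost=5_000, hardware_cost=40_000,
           annual_labor=30_000, annual_facilities=2_000)
disk = tco(media_cost=6_000, hardware_cost=90_000,
           annual_labor=45_000, annual_facilities=8_000)

print(f"Tape 3-year TCO: ${tape:,}")   # media is a small slice of the total
print(f"Disk 3-year TCO: ${disk:,}")
print(f"Media share of tape TCO: {5_000 / tape:.1%}")
```

In this sketch the media line item is only a few percent of either total, so near-equal media prices prove nothing about which solution costs less to own.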
Yeah, I love that comment: "Tape is dead and SAN has killed it." Ironically, most SANs have been deployed for the express purpose of sharing a tape library. I think the most important advances in tape are already known: remarkable capacity improvements and read/write speed improvements. But it is a balancing act; tape technology improvements depend on operating system, network and disk drive improvements to be fully realized. You can't just quote the transfer speed of a tape drive and use it as an indicator of system performance. If the disk virtualization software being used doesn't understand or work well with the backup software, you can see data restores that take an unacceptably long time to complete. We need to move away from device-specific evaluation of technology and toward systemic evaluation of solutions. There are too many variables as storage becomes more networked in design to evaluate performance strictly at the device level. What are the biggest issues in effectively protecting data?
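The point about drive speed versus system performance can be sketched as a simple bottleneck calculation; the component names and throughput figures below are hypothetical, chosen only to show that the slowest link in the path, not the tape drive's rated speed, governs restore time:

```python
# Effective restore throughput is bounded by the slowest component
# in the data path, not by the tape drive's native transfer rate.
# All throughput figures are hypothetical examples.

pipeline_mb_per_s = {
    "tape drive (native)": 160,
    "network link": 50,
    "disk array writes": 120,
    "backup server software": 80,
}

bottleneck = min(pipeline_mb_per_s, key=pipeline_mb_per_s.get)
effective = pipeline_mb_per_s[bottleneck]

dataset_gb = 500
actual_hours = dataset_gb * 1024 / effective / 3600
rated_hours = dataset_gb * 1024 / pipeline_mb_per_s["tape drive (native)"] / 3600

print(f"Bottleneck: {bottleneck} at {effective} MB/s")
print(f"Restoring {dataset_gb} GB takes about {actual_hours:.1f} hours, "
      f"not the {rated_hours:.1f} hours the drive's rated speed suggests.")
```

In this example a 160 MB/s tape drive delivers only 50 MB/s end to end because the network is the constraint, which is why performance has to be evaluated systemically rather than device by device.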
Without a doubt, the biggest issue is heterogeneity. We have purchased storage on an ad hoc, or knee-jerk, basis for over a decade. When we ran out of disk, we simply bought whatever array was hot at the moment and fielded it in our environment. The other key issue in data protection is ensuring that we are spending money wisely to protect the right data. Few companies have done the work necessary to identify the right data to include in backups or mirrors. Finally, I think that the dearth of effective storage management, and I mean that in the broadest sense and not just in terms of software, is contributing to the vulnerability of data.