Market Roundup
May 27, 2005
AMD Specs Virtualization Technology
This week at the LinuxWorld Summit,
AMD released the specification for Pacifica, its virtualization technology for
server, workstation, mobile, and desktop systems. This technology will allow
users to run multiple versions of operating systems on one system, creating
virtual systems. AMD has indicated that Pacifica technology will be available
in client and server processors in the first half of 2006. Pacifica extends
AMD64 technology with Direct Connect Architecture to enhance virtualization,
doing so by means of a new model and features for the processor and memory
controller. Pacifica by itself will not provide virtualization; rather, it is
designed to enhance and extend software-based virtualization from vendors such
as Microsoft, VMware, and XenSource. According to AMD,
the new technology will help reduce complexity and increase security as well as
provide backward compatibility with existing virtualization software.
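For readers who want to check whether a given machine exposes these extensions, Pacifica-class AMD processors advertise an "svm" (Secure Virtual Machine) CPU flag under Linux, while Intel's counterpart appears as "vmx". The following sketch is our own illustration, not AMD tooling, and assumes a Linux-style cpuinfo listing:

```python
# Sketch: detect hardware virtualization extensions from Linux cpuinfo text.
# "svm" marks AMD's Pacifica-class support; "vmx" marks Intel's equivalent.

def has_hw_virtualization(cpuinfo_text):
    """Return which hardware virtualization flag, if any, is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "svm" in flags:
                return "amd-svm"
            if "vmx" in flags:
                return "intel-vmx"
    return None
```

On a Linux system, passing the contents of /proc/cpuinfo to this function reports whether the processor exposes either vendor's extensions.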
Virtualization technology has been a mainstay of UNIX
systems and mainframes for years. EMC’s VMware was the original choice for
x86-based systems, but the introduction of Microsoft’s Virtual Server and
XenSource’s Open Source virtualization technology has driven increased interest
in the market. The ability to run multiple operating systems on various virtual
systems (sometimes called partitions) has been used to improve server
utilization rates, and has interesting security and performance implications
for desktop environments as well. We believe this technology will be adopted
fairly quickly as new products emerge and vendors help IT managers adapt the
technology to their environments. Intel, of course, has announced its own
virtualization technology, which means that within the next year,
in addition to trying to sort out the impact of dual core technology and 64-bit
extension technology, customers will also have to keep track of virtualization
technologies. While these new capabilities are all good, the volume market is
more hesitant and confused than bullish about adopting these products. It’s not
that they aren’t inherently useful; rather, it is very difficult to sort
out whose implementation really works best for a particular type of workload,
how well the technology has been implemented for memory controllers or specific
iterations of a chip, and so forth. The OEMs also have multiple chips to choose
from in launching new servers and have also taken widely divergent paths. While
we applaud the technological advances, we beg the vendors and OEMs to sort out
the product profusion and option overload and help customers navigate to the
right solution for their platform needs.
From a virtualization technology viewpoint, it is a good thing that AMD and Intel have taken this course. Companies like IBM and Sun that own and control both chip and operating system development have always put virtualization features in both the hardware and the software. Optimal virtualization capability really dictates a joint hardware and software solution. We don’t anticipate that changing either. The problem for the x86 market is that you have Intel or AMD controlling the chip; Microsoft and various Linux gurus controlling the operating system; and VMware, Microsoft, and various Linux gurus controlling the hypervisor code. Fortunately for users, these companies are actively partnering across the various technology layers so that customers can reap the benefits regardless of which combination they deploy. The negative side is that this requires more development time than, say, Solaris on SPARC or AIX/i5OS on Power, due to the various permutations, combinations, and machinations of multiple vendors’ products. We also remain hopeful that once virtualization reaches the desktop, someone in marketing might actually work out how to articulate the business benefits of virtualization without first inundating buyers with deep technical dives, arcane technology arguments, and pseudo-religious wars about architectural implementations. Now that will be something to see.
BlueArc Introduces New Software Suite for Titan Network Storage System
BlueArc Corporation has announced
availability of a new software suite for its Titan storage systems. The latest
software release consists of five major components. BlueArc's
new Virtual Server supports up to eight logical servers per Titan, each able to
have separate IP addresses, management policies, and
dedicated port bandwidth. The Data Migrator enables
administrators to create policies based on rules, parameters, and triggers to
migrate data from one tier of storage to another. Support for iSCSI enables sustained wire speed performance for
block-level data transfer over standard IP networks, using SCSI commands, with each
Titan able to support over 8,000 LUNs. BlueArc's Remote Volume Mirroring enables drives in
different RAID arrays to be synchronously replicated at the block level, to
support real-time data mirroring at locations up to ten kilometers apart. In
addition, BlueArc introduced WORM file system support
for users seeking to prevent data on a disk from being modified or deleted
for a specified time period. BlueArc's new software
is available immediately for existing customers. Pricing details were not
announced.
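BlueArc has not published the Data Migrator's policy syntax, but the rules-parameters-triggers pattern described above can be sketched generically. The following Python sketch is purely illustrative; the function name, thresholds, and the choice of last-access time and file size as the policy parameters are our own assumptions, not BlueArc's:

```python
import os
import time

# Hypothetical rule-based tiering policy in the spirit of the Data Migrator:
# select files that have sat idle long enough, and are large enough, to be
# worth moving to a cheaper storage tier.

def select_for_migration(root, min_idle_days=90, min_size_bytes=1_000_000):
    """Yield paths under `root` not accessed in `min_idle_days` days and
    at least `min_size_bytes` in size."""
    cutoff = time.time() - min_idle_days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff and st.st_size >= min_size_bytes:
                yield path
```

A real migration engine would add triggers (capacity thresholds, schedules) on top of such a selection rule, which is what the announcement's rules/parameters/triggers framing suggests.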
For those who are into storage, May has proven to be an
exciting month. We have seen many storage-related announcements from the big
boys, and this week we have heard from BlueArc as
well. Interestingly, each vendor has articulated a story, whether it is central
or ancillary, about virtualization in their recent announcements. While the
implementation and degree of virtualization varies with each vendor, it is
clear that virtualization is top of mind. Perhaps we should simply
dub May 2005 as Storage Virtualization month. But seriously, the buzz in the
marketplace related to virtualization cannot be overlooked.
So, the bigger question, at least for BlueArc, is: will embracing virtualization, WORM, iSCSI, and other new features help BlueArc to further its position in the marketplace? Possibly, but virtualization is not unique, or for that matter well differentiated, among the players. BlueArc’s claim to fame is the ability to scale to the stratosphere, as it sports a single file system that can support 256TB while delivering 20Gbps throughput across multiple storage tiers. In this context the Data Migrator makes a great deal of sense, given that it is unlikely that all files in such a massive file system will have the same corporate value at any point in time. So too do the remote mirroring and other capabilities that aim to help storage administrators calm the wild information beast while providing a cost-effective level of service internally. Sounds good, so what market forces will keep BlueArc in check? The scope of its customer base. The BlueArc challenge is that its existing customer base is largely defined by users who thrive on massive amounts of data, e.g., financial, life sciences, digital entertainment, and education, among others. These customers in general have an economic impetus to purchase highly engineered, high-end solutions to address high-end problems. But the market for storage as a whole is less performance-constrained and more resource-constrained (human and budget). So while BlueArc’s unified high-end NAS and SAN vision and support for multiple tiers of storage is impressive, it is a premium solution focused on customers that have enormous storage needs, which is not typical of SMBs, one of the hottest market opportunities at present. SMBs are being aggressively courted by the market leaders offering Express bundles focused on this resource-constrained audience.
Nevertheless, BlueArc seems to be doing well for itself, but the long-term question for the company is whether its product offerings can successfully develop a following outside its current base, or whether it will be relegated to the upper echelon of the IT ecosystem like many others who were formerly in the mainstream of IT.
The End of the Line: HP and PA-RISC
According to news reports, HP will announce the final
processor upgrade to its HP-9000 UNIX servers at the end of this month with the
introduction of the PA-8900, which will feature a much larger L2 memory cache
than the PA-8800. The new chip’s processing speed will offer a slight
improvement over the PA-8800 chip, which it replaces, and will be the last of
the PA-RISC chips that HP ships. The company said it will support existing
HP-9000 servers until 2011. After the PA-8900 is out the door, HP will switch
exclusively to Intel’s Itanium chips to power its Integrity line of high-end
servers.
HP has long argued that Itanium is in fact an industry standard
and that its adoption and development of the chip will bring it great success
in the future. We have argued otherwise, and see Itanium as largely an HP
product that is supported only to the slightest degree possible by other
vendors. These vendors may even offer Itanium-based products, but do so without
a great deal of enthusiasm or effort. Why should they? Itanium has not taken
the market by storm and instead looks more and more like a strategic blunder by
HP with each passing day.
But what about PA-RISC customers? Sure, HP has said it will support these customers for six more years, but one has to wonder just how much enthusiasm customers will have for purchasing the last model off the assembly line. Certainly HP is trying to migrate existing PA-RISC customers over to Itanium, but it does not appear to be having a great deal of success in doing so. It appears HP is hoping that by announcing the end of the line for its PA-RISC offerings it will get fence-sitting customers to make a decision about future IT deployments. One has to wonder just how well that strategy will work, given Itanium’s lukewarm reception in the market to date. While we expect many PA-RISC customers to get off the fence as the roll-out of PA-RISC-based products comes to an end, we believe many of those customers are not going to fall on HP’s side of the fence and may in fact look for other technology platforms with wider market acceptance than Itanium. Perhaps HP can change the market perception of Itanium, but we believe that it will be facing customers who are, in essence, risk averse.
Anonymously Yours: IBM Protects Data Privacy
IBM has announced a new software product that the company
claims will make it easier to share information without revealing private or
sensitive details contained therein. The DB2 Anonymous Resolution offering
allows companies to aggregate data in a form that does not include personal or
private information, and then share it with other companies or entities. The
DB2 Anonymous Resolution product creates digital signatures of data and removes
personal information before the data is sent out of the enterprise. The
original form of the data is stored internally. The DB2 Anonymous Resolution
product is part of the company’s Entity Analytics technology, which is used to
assemble accurate and current data across multiple databases by identifying multiple
entries for a single individual or data point. The DB2 Anonymous Resolution
offering is available immediately.
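IBM has not disclosed the internals of its digital-signature scheme, but the general technique the announcement describes, replacing personal identifiers with one-way digests so records can still be matched across parties without exposing the underlying data, can be sketched. Everything in this illustration (the keyed HMAC-SHA-256 digest, the normalization step, the field names) is our own assumption, not IBM's implementation:

```python
import hashlib
import hmac

def anonymize(record, pii_fields, key):
    """Replace PII fields with keyed one-way digests; keep other fields as-is.

    Both parties must share `key` and normalize values the same way for
    digests of the same person to match across data sets; the originals
    stay inside the enterprise, as the announcement describes.
    """
    out = {}
    for field, value in record.items():
        if field in pii_fields:
            norm = value.strip().lower().encode("utf-8")
            out[field] = hmac.new(key, norm, hashlib.sha256).hexdigest()
        else:
            out[field] = value
    return out
```

Because identical normalized inputs always produce identical digests, two enterprises can detect that they hold records for the same individual while neither ever sees the other's personal data.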
The recent rash of high-profile identity theft cases brings
the issue of identity security home to both individuals and companies in a
theory-free fashion. Theft or inadvertent release of data has become an
increasingly routine event, with increasingly harmful effects, as improved data
correlation creates more extensive profiles of individuals. Yet at the same
time enterprises are being asked to share and move data within their value net
or with other entities for analytic or predictive studies. The balancing act
between sharing data and protecting it gets more delicate with each passing
week.
In essence, the DB2 Anonymous Resolution product creates a second tier of data, one not as valuable as the original data set but perfectly suited for the task at hand: compiling statistical information models. The idea of tiered data arrangements (in other words, giving data denominations like currency) is one that could provide enterprises with the means to safely share information in a more precise fashion. Instead of using $100 bills for each and every transaction, the DB2 Anonymous Resolution offering allows enterprises to “pay” with exact change when sharing data. We believe the fact that DB2 AR automates the process of data tiering could well lead to broader acceptance of the idea of creating different denominations of information, making both the sharing and protecting of information more granular and effective across value nets, thereby increasing the value of those networks in the process.