Market Roundup

EMC Avamar Extends to NAS and Virtual Machines
Microsoft Tackles Identity Management: Peanuts for the Elephant in the Living Room?
New Storage Offerings from Big Blue

EMC Avamar Extends to NAS and Virtual Machines
As demand for storage continues its unprecedented growth, IT
personnel are under increasing pressure to deliver more storage capacity, but
at the same time stay within often tight budgetary guidelines. Simply buying
more capacity is not the most cost-effective approach to addressing storage
needs; rather, maximizing the utilization and efficiency of the existing
storage infrastructure is more often than not the sound approach. Judicious use
of archiving and ILM can certainly help; however, one of the best ways to
reduce secondary storage demand is to eliminate duplication of files resident
on disks as well as eliminating duplicative backup traffic across the network.
This is where we see
File systems are a wonderful hierarchical storage paradigm,
but the many branches of the file tree can mask significant duplication
in storage. For example, a given file that was distributed to several users as
an email attachment may be placed in several different directories on a central
file store. Not only does this duplication waste storage capacity, but when
backups are applied to the file store, the problem is exacerbated: each
copy of the identical file is backed up, sent across the network, and
then stored independently on the backup volumes. In virtualized server
environments, the duplication can become even more extreme. Consider the system
boot and configuration files for a virtual server: each virtual machine
accesses the same set of files, yet operationally each acts as if it were its
own physical server. Traditional backup schemes would make copies of each
virtual server’s environment and send them off to the backup store. Hence, the
greater the success in consolidating servers, the greater the duplication when
those virtual machines are backed up.
Fortunately, Avamar’s new support for VMware Consolidated Backup (VCB) addresses
this irony. By eliminating duplication as an inherent part of the backup
process, organizations should be able to significantly reduce the amount of
storage needed for backups as well as reduce the overall impact on the network
infrastructure in the delivery of backup data.
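As a rough illustration of the principle, assuming a simple fixed-size chunking scheme rather than Avamar's actual algorithms, the toy store below keeps each unique chunk only once, so two virtual machines sharing the same boot image contribute that image to the backup just one time.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept (and sent) only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}         # digest -> chunk bytes (stands in for the backup target)
        self.bytes_received = 0  # what actually crossed the "network"

    def backup(self, data: bytes):
        """Return the file's recipe: an ordered list of chunk digests."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:       # only previously unseen chunks are stored/transferred
                self.chunks[digest] = chunk
                self.bytes_received += len(chunk)
            recipe.append(digest)
        return recipe

    def restore(self, recipe):
        return b"".join(self.chunks[d] for d in recipe)

# Two "virtual machines" sharing an identical (hypothetical) boot image but different data.
store = DedupStore()
boot_image = b"\x90" * 100_000
vm1 = store.backup(boot_image + b"vm1 app data")
vm2 = store.backup(boot_image + b"vm2 app data")
total_logical = 2 * (100_000 + 12)
print(f"logical bytes backed up: {total_logical}, bytes actually stored: {store.bytes_received}")
assert store.restore(vm1).startswith(boot_image)
```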
This announcement once again illustrates the thought
leadership of EMC in the data deduplication arena.
Microsoft Tackles Identity Management: Peanuts for the Elephant in the Living Room?
Microsoft this week announced a series of offerings that
foster improved interoperability for online identity management. Microsoft has
been a recurring voice in the creation of an identity metasystem, an ecosystem
designed to enable the exchange of personal identity information on the
Internet so all parties may understand whom they are working with online. Three
core elements make up the identity metasystem: the people who are presenting
their identities, the website or online service
requesting proof of identity, and the identity providers who assert some
information about those people. The projects announced today improve
interoperability for each of the three metasystem components and represent the
next step in Microsoft's commitment to deliver interoperability by design.
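As a loose illustration of the three roles, the sketch below has a hypothetical identity provider sign a claim that a relying party then verifies; a real metasystem exchanges standards-based security tokens (for example, via WS-Trust) rather than the toy HMAC used here.

```python
import hashlib
import hmac
import json
import time

# Toy stand-in for the three metasystem roles. Real deployments exchange
# signed security tokens rather than HMAC blobs over a shared key.
IDP_KEY = b"identity-provider-secret"   # hypothetical key trusted by the relying party

def idp_issue_token(subject: str, claims: dict) -> dict:
    """Identity provider: assert claims about a subject and sign them."""
    body = {"sub": subject, "claims": claims, "iat": int(time.time())}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def relying_party_verify(token: dict) -> dict:
    """Relying party (the website): accept the token only if the signature checks out."""
    expected = hmac.new(IDP_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        raise ValueError("token was not issued by a trusted identity provider")
    return json.loads(token["payload"])

# The subject (user) obtains a token and presents it to the website.
token = idp_issue_token("alice@example.com", {"age_over_18": True})
print(relying_party_verify(token))
```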
In September 2006 Microsoft announced the availability of
thirty-eight Web services specifications under the Open Specification Promise
(OSP). A subset of those specifications, such as WS-Trust and
WS-SecureConversation, addressed identity metasystem scenarios and have
led to interoperable identity solutions such as Novell's Bandit project and the
Eclipse Foundation's Higgins Trust Framework Project. Microsoft is now making
the Identity Selector Interoperability Profile available under the OSP to enhance
interoperability in the identity metasystem for client computers using any
platform. An individual open source developer or a commercial software
developer can build identity selector software without paying licensing fees to
Microsoft and without worrying about future patent concerns related to
the covered specifications for that technology. Microsoft is also starting four
open source projects that will help Web developers support information cards,
the primary mechanism for representing user identities in the identity
metasystem. These projects will implement software for specifying a website's
security policy and accepting information cards in Java for Sun Java System Web
Servers and Apache Tomcat, among other Web platforms.
The growth of identity theft and the general public's distrust of
doing business on the Internet are deterrents to the overall growth of Internet
commerce. Identity management is in some ways the Augean Stables of security:
governments, businesses, and organizations recognize the need to validate the
identities of those who access their IT infrastructure but have been loath to
put big bucks behind cleaning up the mess. Many approaches have been
offered, ranging from public key encryption schemes to two-factor authentication
featuring tokens with time-limited identity numbers, but none has achieved general
acceptance. While vendors jockey for relative market position, no single approach has emerged
as a standard.
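For readers unfamiliar with such tokens, the following is a minimal sketch of how a time-limited code can be derived from a shared secret, loosely following what was later standardized as TOTP (RFC 6238); the seed value is hypothetical.

```python
import hashlib
import hmac
import struct
import time

def time_limited_code(secret, period=30, digits=6, at=None):
    """Derive a short-lived numeric code from a shared secret (TOTP-style derivation)."""
    counter = int((time.time() if at is None else at) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"shared-seed-provisioned-in-the-token"      # hypothetical shared seed
print(time_limited_code(secret))                      # changes every 30 seconds
```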
We believe the widespread penetration of Active Directory, OpenLDAP Directory, and the capabilities of Microsoft Identity Lifecycle Manager 2007 put these products in the eye of the storm. Over the past several months we have noticed an increasing number of smaller, specialized vendors who are leaning on Active Directory in particular as an anchor to their implementation. We will be watching to see if there are large and powerful early adopters of this technology such as health care providers, payers, and insurers. Once a critical mass has been established, we believe a standard will emerge mostly because everyone wants one, and Microsoft could just be the appropriate party to catalyze a solution to this problem.
New Storage Offerings from Big Blue
May is shaping up to be the Month of Storage. With three of
the big players—EMC, IBM, and HP—all making storage announcements in recent days, the competitive pace shows no sign of slowing.
Not all that long ago, “serious” storage solutions were
typified by high-speed FC disks with rotation speeds of 10K RPM or faster. These
solutions were definitely high performance, but carried a commensurate price. It
is rather remarkable that in a very short period of time, low-cost,
high-capacity SATA and SAS drives are now being included as options,
often targeted at second-tier workloads, in enterprise-oriented NAS and even SAN solutions.
Duplicate files are one of the more obvious, yet not
always easy to tackle, sources of waste in enterprise storage. From an efficiency
perspective, having multiple copies of the same content just doesn’t make
sense. Happily, the industry is beginning to address this in a more strategic
manner. One approach to reducing file duplication is deduplication in the backup
process, as addressed by EMC's Avamar, discussed above.
Overall, it is clear from the last several days that the major storage players are continuing to up the competitive ante in their storage solutions. We are pleased to see attention paid to deduplication as well as to unification and management efforts surrounding enterprise storage. With the roadmaps that some vendors have released regarding blades and other storage technologies, we expect that 2007 will remain an interesting year for the storage marketplace, and ultimately for the organizations that choose to avail themselves of the interesting wares that vendors bring to market.
Cisco and
Extending their strategic alliance, Cisco and
The planned Cisco/
When two large complementary vendors work together, it is
often necessary to look behind the curtain to see if there is more to the
relationship than appears at first glance. Cisco is all about the network and
Cisco has moved to outflank Symantec in an attack from
another direction with its recent acquisition of Broadware. Not only does that
acquisition expand Cisco’s presence in the physical security world, but it is
another step down the road of fostering its customers’ use of the IP network as
a platform to converge applications from diverse portions of the organization.
Video and safety/security systems are a step along the road to mainstreaming
the often complex world of SCADA applications, not to mention kiosks, special-purpose
workstations, and the like. We believe this announcement is more noteworthy for the
growing interrelationships being developed by Cisco than for the specific
products covered. We also believe that both
HP Announces StorageWorks XP24000 Disk Array
HP has introduced the HP StorageWorks XP24000 Disk Array, a high-end fault-tolerant storage solution that targets organizations seeking to consolidate workloads, reduce datacenter management costs, and provide 24x7 operational continuity. The XP24000’s new processor and provisioning technologies offer increased performance and lower power consumption than previous XP arrays as well as hot-swappable components, non-disruptive online upgrades, dynamic partitioning, and thin-provisioning software. XP StorageWorks Thin Provisioning Software also lowers power consumption and heat generation by reducing the total number of disks required in typical configurations. The XP24000 supports 4GBps interconnections between disks and hosts, and supports up to 1,152 disk drives and 332TB of capacity. It features enhanced HP StorageWorks XP Array Manager software, which allows customers to easily configure, manage, partition, and secure their data through a Web-based interface. With the HP StorageWorks XP External Storage Software, the XP24000 can virtualize HP and third-party storage to consolidate and manage multi-tier storage solutions while supporting up to 247 petabytes of external storage. In addition, the XP External Storage software integrates storage resources while aiding in data migration, array repurposing, and tiered storage. The HP StorageWorks XP24000 is expected to be available in July 2007. The company will continue to sell and support previously announced XP disk arrays, including the XP12000 and XP10000.
This latest offering from the HP StorageWorks line features various incremental improvements as well as more significant upgrades and new capabilities. To us, the most notable aspects of this announcement are the new crossbar and associated 4GBps end-to-end bandwidth, the increased scale of up to 332TB of internal capacity plus up to 247PB of externally virtualized storage, XP Array Manager, and thin provisioning. The upgrades in the processors, throughput, and scalability are all indicative of HP's continued investment in storage performance, but they also enable organizations to undertake larger-scale consolidation projects than they previously would have been able to. Hundreds of terabytes of storage is not usually in the scale of small organizations; however, it may well be for mid-sized firms or those involved in M&A activities, as well as life sciences, media, or other image-intensive verticals. In addition, the increased scale of externally managed resources allows for consolidation, at least from a management perspective, of other storage silos, even if the data is not physically moved to internal XP24000 drives.
Although storage vendors of most every stripe continue to improve the performance and scope of their hardware, during the past few years the capability and importance of storage software has increased considerably. At the same time, the sheer volume of data being stored by organizations is growing dramatically, and so have the associated costs for storage software. We are pleased by HP’s decision to simplify its licensing model and to provide the equivalent function of several disparate products within its XP Array Manager software at a lower overall price. In addition, the elimination of license fees associated with external storage for raw capacity priced products encourages organizations to bring more of their storage under the XP24000 umbrella. Organizations garner considerably more value and productivity from their storage infrastructure through deployment of operational management software. By moving to a base price plus model, organizations have a simpler cost model to manage as their storage needs grow, but also have a model that is more conducive to departmental chargebacks or other cost-sharing regimens. Since the cost is segregated into base-plus-storage, incremental expense can be more easily matched to the users of the storage. With expectations of increased transparency both inside and outside of organizations, this approach simply makes more sense and allows IT to manage expense more effectively.
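A simple illustration, with entirely hypothetical figures, of how a base-plus-capacity bill can be apportioned across departments:

```python
def chargeback(base_cost, per_tb_cost, usage_tb):
    """Split a base-plus-capacity storage bill across departments.

    The base platform cost is shared evenly; the capacity component is billed
    in proportion to each department's actual usage. All figures are hypothetical.
    """
    shared = base_cost / len(usage_tb)
    return {dept: round(shared + tb * per_tb_cost, 2) for dept, tb in usage_tb.items()}

# Hypothetical example: $200,000 base platform cost, $1,500 per usable TB per year.
print(chargeback(200_000, 1_500, {"finance": 40, "engineering": 120, "marketing": 15}))
```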
Of course, the most effective way to manage expense is to not incur the expense in the first place, or at least to defer the incurrence of the expense until a later time. This is one aspect of thin provisioning that we find intriguing. Through a thin provisioning approach, planning and allocating of storage for long-term needs continue, but the deployment of the storage is staged or otherwise aligned with the actual present need for capacity. As a result, strategic planning continues, which is a good thing, but expenses are in better alignment with productive value received, which is even better. Perhaps this is why an increasing number of vendors have started talking about thin provisioning.
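The following toy volume, which is not HP's implementation, illustrates the idea: logical capacity is promised up front, but physical blocks are consumed only when data is first written.

```python
class ThinVolume:
    """Toy thin-provisioned volume: logical capacity is promised up front,
    but a physical block is allocated only when that block is first written."""

    def __init__(self, logical_blocks: int, block_size: int = 4096):
        self.logical_blocks = logical_blocks
        self.block_size = block_size
        self.allocated = {}                    # logical block number -> data

    def write(self, block_no: int, data: bytes):
        if not 0 <= block_no < self.logical_blocks:
            raise IndexError("write beyond provisioned logical size")
        self.allocated[block_no] = data[:self.block_size]

    def read(self, block_no: int) -> bytes:
        # Unwritten blocks read back as zeros and consume no physical space.
        return self.allocated.get(block_no, b"\x00" * self.block_size)

    @property
    def physical_bytes(self) -> int:
        # One physical block is consumed per logical block that has been written.
        return len(self.allocated) * self.block_size

# A 1TB logical volume that has only ever seen a few writes.
vol = ThinVolume(logical_blocks=(1 << 40) // 4096)
vol.write(0, b"boot sector")
vol.write(123_456, b"application data")
print(f"logical: {vol.logical_blocks * vol.block_size} bytes, physical: {vol.physical_bytes} bytes")
```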
Overall, we believe the target organizations would appreciate the improved price/performance and scalability that this solution offers. However, from a strategic perspective, the notion of thin provisioning is one that is well positioned to resonate in the marketplace. The combination of enhanced software with simplified pricing, increased performance and capacity, and thin provisioning illustrates HP’s desire to improve the price/performance of its solutions while simultaneously bringing customers’ capital investments more in line with current need and value received.