Market Roundup October 28, 2005

Storage Infrastructure Management Goes Open Source
Aperi, a new Open Source community, was launched this week
with the mission to give customers more choices for deploying open standards-based
storage infrastructure software. The community
was announced by IBM, and will also include Brocade, Cisco, CA, Engenio,
Fujitsu Ltd., McData, Sun, and NetApp. The
organization plans to develop a common storage management platform that gives
customers greater flexibility in how they manage storage environments. Aperi differentiates itself from other
industry groups, such as the Storage Networking Industry Association (SNIA),
explaining that other groups have focused on establishing standards, but Aperi
members plan on collaborating to develop an open source-based platform. Aperi will be managed by an independent,
non-profit, multi-vendor organization, and the resulting platform will be
provided free of charge. IBM plans to
donate part of its storage infrastructure management technology to the open
source community. Other members will
have the option to donate intellectual property as well. Aperi plans to build upon existing open
storage standards, including SNIA’s Storage
Management Initiative Specification (SMI-S).
Aperi is the latest in a wave of open and collaborative
communities that IBM has been instrumental in founding and promoting. Beginning with Eclipse in 2001, and
continuing with Power.org and Blade.org, IBM has pursued the notion of collaboration
with more than just presentations and discussions; it is contributing
intellectual property and providing access to its patent portfolio in some
instances. As an example, IBM announced
a major initiative this week to pledge royalty-free access to its patent
portfolio for the development and implementation of selected open healthcare
and education software standards built around web services, electronic forms,
and open document formats. In keeping with what has worked previously, IBM
intends to use the Eclipse organization as the model for the Aperi
initiative. Thus far, IBM has earned credibility by launching organizations and then stepping aside to let them run rather than trying to control their direction.
This bodes well for Aperi, as a thinly disguised vendor vehicle wouldn’t
get very far, but a real community just might.
On the other hand, we do have concerns with the principles behind this latest initiative. Storage is a strange area of IT. While most companies have server architectures, network architectures, and even application architectures, only a small subset has a data architecture. Not all companies have used storage resource management, and many may not know where their data sits, either from an application viewpoint or from a device standpoint. They may not even know the relative value of data from application to application within the organization. Good data management begins with knowing what the data is and what sort of management it requires. While better container management is a good idea, having a good grip on the content would be an even better first step. While we applaud trying to get the containers to work together better, we think that content needs to be tackled first, and Aperi does nothing for this particular problem.

Additionally, while storage vendors and the storage departments of system vendors frequently function as though in their own separate universe, storage is just a specialized form of server: a data server. We would be happier to see an organization that comprises all the management vendors looking at common management across the organization, not just for storage. We’d also like to see participation from more vendors with management expertise. (Where are EMC, HP, HDS, Microsoft, and Symantec?) We believe that data is an integral part of IT and that the containers should be as much like other servers as possible. We are loath to support any further differentiation that may evolve separately from management at a higher level.

At the same time, any time the vendors actually try to make users’ lives easier, we must genuinely applaud and encourage them. We hope Aperi is a success, that other vendors join in, and that openness truly is a socially transmittable behavior.
Doing the IP Shuffle… Cisco Style
Cisco has announced a new series of Carrier Ethernet
products designed for service providers of various sizes, including telcos. The
10 Gigabit Ethernet offerings are designed to provide what Cisco calls Next
Generation Network functionality, including high-speed business-class IP
services as well as Layer 2 and 3 VPN services and voice, data, and video
capabilities, dubbed the “triple play” services. The products are positioned to
help carriers migrate their existing networks from more traditional
connectivity such as frame relay or ATM to more intelligent and capable IP
oriented networks. The products introduced included new routers, switches, and
other hardware designed to make the migration to IP more flexible and
appealing. Most of the products will be available by the end of this year.
10 Gigabit Ethernet is cutting edge, and the speed is
important, but what is more important is the flexibility that a more
intelligent, evolved, modern (or pick some of your favorite enhanced
metaphors), transport mechanism offers combined with Cisco's driving of
intelligence down into the network so that the network is keenly aware of what
data types are being passed through. This allows for SLAs that support decent
video and audio interaction alongside more pedestrian data uses, such as file
transfer, etc. One competitive platform is the high-speed cable infrastructure
coming into many homes where service providers are offering TV, telephone, Internet,
and so forth. 10 Gigabit Ethernet would be an alternative platform, which from a data
perspective would be simpler, since it all could simply be IP-over-Ethernet
supporting a variety of services. Of course, 10 Gigabit Ethernet has line-length
limitations like everything else (including cable), so there will need to be some degree of infrastructure upgrade/refresh beyond simply dropping Cat 6 wiring or its fiber equivalent into every home, office, cubicle, or space station (Starbucks).
Nevertheless, this brings convergence further along, and while Cisco might wish it would displace all other existing network infrastructure, it won’t do that overnight, or perhaps ever, as the competing installations have their pluses and minuses as well. However, this does raise the “what is possible to do after all” bar higher than it has been before. Then again, as with all carrier-focused solutions, it is easy to wire the CO, or mega data center, but that last mile into the home and office building remains a most difficult challenge. Perhaps the fact that an increasing number of homes and offices already have Ethernet wiring in them is one of the factors that will help Cisco along this path and help its customers grab a share of the pot of gold whose rainbow rises so brilliantly from delivering a plethora of converged services along that oh-so-valuable last mile.
IBM has announced the WebSphere
Application Server Community Edition, designed for mid-sized companies, large
enterprise departments, and business partners who would like to develop open source-based
applications or services. The new offering includes technology from Gluecode, which IBM acquired in May. The WAS CE offering is
based on the J2EE-certified Apache Geronimo application server, and IBM is
offering subscription-based support if the customer or partner requests it. The
Gluecode technology offers developers the ability to
build off of the Geronimo Application Server to provide Web applications,
messaging, and transactions; a JSR 168 portlet
container; the Apache Derby internal database; and a wide range of management
and centralized configuration and control features. Gluecode
has offered its base code for no cost; more feature-rich versions have come
with subscription pricing. The IBM WAS CE is also free for download. Support
services can be acquired for $900 per server per year.
As of this week, IBM is a company with a market
capitalization of more than $131 billion with quarterly revenues of more than
$20 billion. Yet the company, despite being highly focused on meeting sales and
revenue targets (just talk to IBMers for four or five
minutes, you’ll understand how much metrics mean within the company), seems to
be willing to offer up substantial goodies to the world at large for free. One might think that Big Blue has suddenly caught a case of Internet fairy-dust intoxication, succumbing to the classic “give it away for free but make it up on volume” affliction. Netscape tried it, and other than bamboozling AOL, proved that that particular model needs more than just a short-term plan of “give it away now and we’ll figure out something later to make money.”
Not only is IBM well versed in the traditional ways of making money; the company has enough institutional wisdom to explore new ways to bring in customers and revenue. The WAS CE is yet another example of how IBM, unlike most if not all of its competitors, understands that investing in the future may mean doing things that in the short term show no specific revenue opportunities but could in the long term bring new, persistent revenue streams for the foreseeable future. IBM’s commitment to open source technology is not only a way to move IBM hardware and software; it is a means to empower its channel and business partners, giving them the tools to offer new applications and services to their customers.

IBM’s WAS CE emphasis on mid-tier companies and enterprise department-level development is clearly a business partner enablement program, one ensuring that the partners that own both the relationship and niche market expertise with the end customer have new and more cost-efficient ways to deliver offerings not only now but well into the future. IBM, unlike its competitors, continues to make a concerted, company-wide effort to capture SMB customers by investing in its ISV/SI/business partner ecosystem. At this point, it is simply running away from the competition in this regard, meaning that in the coming years it will be reaping a fine harvest indeed.
In 1994, the U.S. Congress instituted CALEA, the
Communications Assistance for Law Enforcement Act. Specifically designed for telephone
companies, the act requires that those companies’ communications services be
accessible to court-ordered wiretapping, and that all innovations be approved
by the FBI before being implemented, at the cost of the telephone
companies. Recently, the FCC demanded that the Internet and all aspects of its communications (email, instant messaging, VoIP) be subject to the same act (with the exception of having innovations subject to FBI pre-approval), based upon the fact that Internet communications have become a significant replacement for traditional telephony. This decision is being challenged in court by
a plethora of different organizations, including the Electronic Frontier
Foundation, the Center for Democracy and Technology, and the American Council
on Education.
Governmental eavesdropping existed long before
technology. America’s founding fathers had no expectation of privacy when they sent communiqués, but most ordinary citizens today have a reasonable expectation of privacy, provided their anti-spyware software is working adequately. We as a society do tend to give our government access privileges when we deem it warranted. But government agencies have, in overzealous times, tended to cross the lines demarcated by various statutes unless their processes are transparent. In the quest to apply wiretap rules to the Internet,
crossing those lines may well be inevitable.
With telephones and land lines, there are specific geographic locations that an agency can physically visit with its equipment for the purpose of wiretapping one specific line. Mobile telephones pose a problem for government agencies simply because they are mobile: the signal of a person in a car moves from tower to tower, making attempts at wiretapping problematic; however, signals and locations can eventually be tracked down. The Internet, with its near-complete lack of geographic location, increases the difficulty exponentially. If something isn’t defined by a location, how could it possibly be tapped? At what point or location should it be tapped? As it exits the sender’s ISP, or as it arrives at the recipient’s ISP? Things seem to be an all-or-nothing proposition when it comes to the Internet.

One possible solution is to cast an extremely broad net, perhaps setting up network listening posts and eavesdropping on everyone in order to zero in on a few. This, of course, presents a virtual Pandora’s box of problems. It would stretch the trust of any agency to focus only on the information pertaining to its specific investigation. What about the hoops vendors would have to jump through to make their products accessible to government agencies? What about Internet traffic that originates from overseas or leaves the U.S.? The Internet is not one thing or place, and the information that travels across it does not travel in one packet. Collecting all the information is a tedious prospect requiring tremendous reserves, and knowing when and where to collect it requires a psychic’s sensitivity. Remember the FBI’s Carnivore? And naturally, those the government is most interested in catching will be the most elusive, and the government could wind up with a lot of phone calls discussing what time the kids need to be picked up from soccer, or who forgot what at the grocery store.
The idea of wiretapping the Internet is not just physically problematic; it also opens a door that we are all rather ill-equipped to walk through.