September 3, 2004
EMC has announced new products in its Celerra NAS line: the NS500, which is available in both integrated and gateway (NS500G) configurations, and the NS704G gateway. The NS500 comes in one- or two-data-mover configurations designed for mid-market needs, but can also be upgraded to the higher-performing NS700 or NS704G. The Celerra NS500 is available immediately with a list price of $40,000 for a 1TB, single-data-mover, integrated configuration, including CIFS and SnapSure. The NS704G provides increased performance and advanced clustering capabilities, and also offers an upgrade path for EMC Celerra NS500G, NS600G, and NS700G customers. The NS704G features four data movers and comes in both single and dual ControlStation configurations. It is available immediately with a list price of $165,000 for a four-data-mover configuration with CIFS and SnapSure.
In an unrelated announcement, IBM announced the new TotalStorage DS300 and DS400. The DS300 is an iSCSI storage server designed to give mid-market clients cost-effective options for transporting data over standard Internet protocols, while the DS400 is a 2Gb Fibre Channel system. Both systems support Windows and Linux, work with IBM's eServer xSeries and BladeCenter solutions, and are part of the company's Express Portfolio offerings. Single-controller models of the DS300 and DS400 are scheduled to begin shipping on September 24, with dual-controller models scheduled for December 17. IBM said the entry-level DS300 would cost under $3,000, but did not provide other pricing information.
Both the EMC and IBM announcements illustrate how technologies once reserved for enterprises inexorably migrate down the IT food chain toward lower-end businesses. While this behavior has become de rigueur in the server market, it has been less apparent (or at least less recognized) in the data storage sector. That situation is likely to change over the next twelve to eighteen months, however, as businesses of every size come to grips with both the need for, and the pain of deploying, storage solutions that support the requirements of a growing range of government and industry regulations. In IBM's case, the new TotalStorage DS300 and DS400 are designed to ease the migration woes of mid-market customers faced with the all too arduous prospect of stepping into the wide world of SAN. Using the new products' iSCSI capabilities, customers can create new SANs over their existing Gigabit Ethernet networks. In addition, the links between the DS300 and DS400 and IBM's xSeries servers and BladeCenter solutions, as well as the company's Express Portfolio, demonstrate how a systems vendor like IBM can leverage existing products and partnerships to foster systemic traction for new ones.
In a sense, EMC's new Celerra solutions follow a somewhat opposite course toward the same mid-market Promised Land. The NS500 is the new entry point for the company's existing NAS line, and is aimed straight at the lower-end sweet spot that NetApp has mined so successfully over the years. But both the NS500G and NS704G NAS gateways give EMC's traditional customers an easy way of bringing network-based information into the fold of its other storage solutions. Using an EMC gateway, customers can easily integrate NAS into their existing SANs, directing network-based information toward a range of EMC solutions including CLARiiON, Centera, and Symmetrix. In doing so, EMC adds yet another brick to the tiered storage base of its ILM strategy. The primary goal of most IT vendors is to develop new products that build on or out from previous innovations and expertise. By that measure, both IBM's new DS products and EMC's new Celerra solutions are likely to succeed.
IBM and Intel have announced the public availability of the design specifications for switches, adapter cards, appliances, and communication blades for the IBM eServer BladeCenter platform. The companies said the open BladeCenter design specifications are intended to let hardware developers more easily develop and build compatible networking switches, blade adapter cards, and appliance and communications blades, and thereby participate in the rapidly growing blades market served by the IBM eServer BladeCenter and Intel Enterprise Blade Server platforms. Through the release of the design specifications, the companies hope to harness the development power of the industry and more quickly deliver a comprehensive solution roadmap for their diverse customer base.
Two decades ago, the world didn't know a PC from a DEC PDP-11. IBM and Intel changed all that when they co-developed a product that had no market and no application software, and whose OS development they farmed out to a young man named Gates. We all know what has happened since to the fortunes of all three companies, and while each made missteps along the route to PC dominance, the trip has been good for users, the technology industry, and enterprises big and small. A pivotal "right" decision was to open the PC architecture, not for modification, but for use by all comers who could build a board to fit in its backplane. As the PC caught on, hardware, software, and vertical applications of all stripes hit the market, and the competing architectures (with the exception of Apple's) withered and died under the weight of shrinking market share. With this new announcement, the dynamic duo of hardware is at it again. If they can convince the third-party, value-add marketplace of the BladeCenter's PC-like potential, HP, Sun, and Dell will eventually be forced to accept and adopt it as the one and only blade architecture. While it is unlikely that any of these players will give up their own proprietary (there's that word again) blade architectures easily, let alone soon, history has shown that in a commodity world a standard platform is an absolute requirement for success. The question is not whether IBM and Intel have the best blade platform, but whether the market will trust them enough to let lightning strike the target a second time.
The City of Philadelphia is going forward with a $10 million plan to offer wireless access throughout the city's 135 square miles. The city is planning to place thousands of wireless transmitters around the city, perhaps on lampposts, to provide high-speed Internet access for any computer equipped with a standard wireless networking card. The city has not determined whether, or how much, it would charge for the service, but city officials said any fees would be well below those charged by commercial broadband providers. New York City is also considering a plan for near-universal wireless deployment, and Amsterdam officials say they have completed their city-wide rollout of wireless access using only 125 transmitters. Other, smaller U.S. cities have rolled out partial wireless or wired networks.
At the beginning of the 20th century, cities throughout the U.S. rapidly rolled out new infrastructure technologies like electricity and natural gas, gaining an edge in attracting the businesses that would employ their citizens and drive their tax bases for years to come. We believe municipal wireless will follow the same pattern, continuing unabated until most U.S. urban areas have wireless coverage as ubiquitous as mobile phone coverage. Cities like Philadelphia will demonstrate the substantial value of extending that infrastructure beyond its present limits and into the ether. Like their counterparts a century ago, cities that get ahead of the curve are going to enjoy a wide range of economic opportunities that will not be available to those behind it.
First, whether the city of Philadelphia charges a nominal fee for the service or nothing at all, it is providing access to a greater number of local businesses, residents, and transients, a positive development for both information dissemination and commerce. Second, by selecting and driving a single infrastructure, the city is laying the foundation for a whole new generation of smart devices that will be instantly linked to the ubiquitous network. The city will be able to monitor and manage vast portions of its existing infrastructure in ways that save money, time, and in many cases lives: a traffic light knocked out by a wayward bus will instantly notify the network of its condition, as will the bus. We suspect that many of the disparate communications networks now in existence can and will be replaced by the wireless network, offering further cost savings and simplification of city infrastructure. We also suspect a low-cost wireless alternative will put pressure on the prices of costlier broadband options, and will certainly give the city leverage to strike more consumer-friendly deals with providers seeking franchises within the city limits. As a result, local businesses will become more efficient and citizens will have more disposable income, while the network provides both citizens and the machinery of city government with more accurate, timely, and granular information, driving more economic activity while lowering civic costs substantially. Even Philadelphians will cheer that.
Industry researchers have released server sales guesstimates for the second quarter of 2004. Total server sales hit $11.5 billion in Q2, a year-over-year gain of 7.7%. Both Itanium- and Opteron-based platforms showed sales increases. Itanium server sales were $319 million in Q2, up from $287 million the previous quarter and $70 million in Q2 2003. Total Itanium shipments were 5,665 units in Q2, compared to 2,717 for the same period in 2003, for an average sales price of approximately $56,000 per server. HP, the server vendor with perhaps the most to gain, sold 4,789 Itanium servers, or 85% of the units sold. IBM sold 208 units, compared to two units a year ago. The majority of the remaining Itanium-based units were shipped by SGI, NEC, Bull, and Dell. In comparison, 60,000 Opteron-based servers were shipped in Q2 for $191 million in total revenue; Q2 2003 shipments were 2,735 units for $8 million in revenue. Average sales prices for these systems remained under $10,000. White-box server vendors dominated Opteron sales at $138 million. Among the big vendors, Sun Microsystems led IBM and HP with $22 million in sales on 5,254 units; IBM sold 3,780 units for revenues of $14 million, and HP sold 2,754 units for revenues of $8 million. Dawning, a Chinese server vendor and major player in the Chinese National Research Center for Intelligent Computing, shipped 2,324 units for $5 million in revenue. No other vendor shipped more than 200 units in the quarter. Itanium and Opteron sales growth, although substantial, still trails Xeon sales by a wide margin.
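The reported revenue and unit figures can be cross-checked against the quoted average selling prices with a few lines of arithmetic; this sketch simply divides the revenue and shipment numbers given above (no other data is assumed):

```python
# Sanity-check the implied average selling prices (ASPs) from the
# quarterly revenue and unit figures quoted above.
figures = {
    "Itanium Q2 2004": (319_000_000, 5_665),   # ~$56,000/server as reported
    "Opteron Q2 2004": (191_000_000, 60_000),  # well under $10,000/server
    "Opteron Q2 2003": (8_000_000, 2_735),
}

for label, (revenue, units) in figures.items():
    asp = revenue / units
    print(f"{label}: ~${asp:,.0f} per server")
```

The Itanium figure works out to roughly $56,300, consistent with the "approximately $56,000" reported, while the Opteron ASP of about $3,200 underscores how far down-market those boxes sit.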
The dramatic year-over-year growth in near-commodity server shipments and revenues, although small in comparison to the total server market, has to be good news for both AMD and Intel. Intel probably harbors some regret that technical issues caused it to miss a window of opportunity in the 1998-2000 timeframe, when delivering its 64-bit "new and improved technology" to market might have pre-empted AMD's 64-bit extension adventures. AMD for its part made a big bet on Opteron, and it appears to be paying off. The ebb and flow of the market, partners, and competitive action will determine AMD's future fortunes. Intel may have been on the bench, but it is now back in the game. With the EM64T Xeon offering, Intel will likely prove once again that it is adept at the game, even if it didn't hit a home run in the first inning. Itanium as we know it today will continue to search for a market niche, but in all probability will be folded into some future Intel chip effort that combines attributes of both EM64T and Itanium.
It strikes us as odd that at a time when Sun and HP are rumored to be having trouble meeting demand for Opteron servers, and IBM is experiencing slow HPC Itanium sales, IBM has not moved more quickly with its eServer line to capitalize on Opteron technologies, or indeed on its own Linux on POWER solutions. Part of the problem may be competing loyalties to Intel on xSeries servers. Nevertheless, IBM may be leaving money on the table for HP and others. Meanwhile, it is clear that enterprise IT shops are looking for leading performance and price/performance in servers, with the ability to address more than 4GB of memory along with complete compatibility and outstanding performance for 32-bit applications. This is a technical spec that IT management wants to put to bed. After all, CIOs have more pressing issues: alignment of business processes and goals with IT service offerings, meeting compliance and regulatory demands, and consolidation and IT infrastructure cost containment, all in pursuit of greater productivity, responsiveness, competitiveness, and profitability. Successful vendors have recognized that enterprise customers have moved esoteric technology debates to the background and pushed business issues front and center.
Veritas has announced the purchase of KVault Software Limited (KVS) for $225 million. The UK-based company is a leader in email archiving software for Microsoft environments. Veritas intends to have KVS operate as a separate unit within the company. With the purchase of KVS, Veritas has announced the end of life of its Data Lifecycle Manager (DLM) product and will offer the KVS Enterprise Vault product in its place, while continuing to support existing DLM customers for two more years.
Veritas' strategy is called Utility Computing. Although we strongly dislike the name, we believe the idea behind it is a good, if ambitious, one. Veritas would like to see storage turned into a set of services, enabling IT departments to provide the right level of service for the right application at the right price. Done right, this is a more efficient use of resources, and therefore helps lower costs. It also gives IT managers better control of the overall storage environment, something that is rather tricky with storage distributed across servers and locations. Email archiving is perhaps one of the best examples of a storage service: it is easy for most people to grasp, since they use and reuse email constantly, and it is a fundamental part of many emerging compliance regulations, making it a straightforward sell to the executive level. It doesn't hurt that Veritas is a strong brand in the backup world, and so this purchase, this product, and this direction all fit nicely within Veritas' strategic plans and reflect a positive move.
The ambitious part of Veritas' plans is that Utility Computing sits on a par with the visions of industry behemoths like HP, with its Adaptive Enterprise, and IBM, with its On Demand computing. Veritas is much smaller than either company, and its product offerings are much more focused. At the same time, for most companies the concepts of On Demand computing are vague and not something they readily relate to, a necessary evil of architecting a vision that spans numerous products, industries, and customer needs. The concept of utility services for storage is much easier to grasp, and it addresses the sort of real business problem that managers are actively seeking to solve. In addition, both Veritas and KVS are strongly committed to hardware neutrality, which will appeal to customers with heterogeneous IT environments, which are the norm. In the long term, this should boost the credibility and demonstrability of Veritas' strategic direction. In the short term, it will improve Veritas' competitive position in backup by providing a stronger product offering than it previously had. It is also interesting to note that some 25% of KVS's business was sold into environments built around EMC, which, through its Legato division, remains Veritas' largest backup competitor. The purchase may also provide Veritas a short-term competitive entry into accounts it might not have seen otherwise.