IBM has introduced a new high-end UNIX server, the eServer p690, code-named "Regatta." The p690 can be configured with up to thirty-two of IBM's Power4 microprocessors, each of which contains two 1 GHz-plus processors, large memory caches, I/O and a high-bandwidth system switch. The p690 runs AIX 5L, supports 64-bit Linux, and can be run as a single large server or divided into as many as sixteen "virtual" servers running any combination of AIX 5L and Linux. Additionally, the p690 offers robust clustering features and provides virtualization capabilities similar to those of IBM's z900 mainframe, allowing users to create virtual servers with single or multiple processors and to dynamically reconfigure them while the machine is running. The p690 includes multiple layers of self-healing technologies derived from IBM's eLiza project, including sensors to detect system problems early on and system logic designed to locate the root causes of problems before they trigger chain-reaction failures. Pricing for the p690 begins at $450,000 for an eight-way, 1.1 GHz system with 8 GB of memory and 36.4 GB of storage. The system will begin shipping in volume in December 2001.
Given the plethora of recent server introductions from IBM, Sun and HP, we believe it is wise to examine both the text and subtext of such announcements. On the surface, the IBM p690 offers both obvious and subtle improvements over the company's previous high-end UNIX products and the competition's. The p690's support of AIX 5L and 64-bit Linux continues IBM's enthusiastic embrace of open source options across its entire product line, and the inclusion of technologies from the eLiza project reaffirms its dedication to deploying mainframe-derived self-healing features across the IBM product line. On the performance side, IBM claims its Power4 processor's "server on a chip" technologies allow the company's products to achieve similar or better performance with fewer processors than the competition requires. Of equal interest to customers will likely be IBM's software-based virtualization features, which allow commercial applications to be deployed on single processors, a substantially more flexible solution than the hardware-focused partitioning of Sun Micro, which requires partitions to contain at least four processors. This issue can be financially crucial to enterprise users, since Oracle and other database vendors price products according to the number of CPUs they run on. Flexibility and finance are twin themes running throughout the Regatta announcement. Not only does IBM claim that p690 users can accomplish more work with fewer processors, but it also prices the p690 at roughly half the cost of a similarly configured Sun Fire 15K (aka Starcat).
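To put hypothetical numbers on the point: assume a database license priced at $40,000 per CPU (an illustrative figure, not an actual vendor quote). A workload confined to a single-processor partition carries a $40,000 license, while the same workload forced into a four-processor minimum partition carries a $160,000 license (4 CPUs × $40,000), four times the software cost for identical work. Multiplied across dozens of partitions, partition granularity becomes a budget line item in its own right.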
In fact, Sun Micro plays heavily in the subtext of today's announcement (though far more subtly than Sun's open hectoring of IBM in the Starcat launch last week). This none-too-gentle give and take can be traced to the shifting sands of the high-end UNIX arena, where the erosion of Sun's once apparently insurmountable market share has been shadowed by gains from IBM, Compaq and HP. That said, how does Regatta match up against Starcat? Some may view any comparison as a matter of apples and oranges, since Starcat can be scaled up to 106 processors as opposed to Regatta's 32. But do Sun's bigger-is-better arguments carry enough weight to re-energize its market dominance, especially when factoring in Sun's claims that Starcat provides mainframe capabilities in a UNIX box? We think not. Given the shaky state of the economy and the excessive build-out of IT resources in the past half decade, we believe that enterprises of every size are currently more interested in affordability and flexibility than they are in owning the biggest server on the block. Rather than following Sun's Swiss Army knife approach to product design, which suggests one product can fill every high-end computing requirement, the p690 indicates IBM is sticking with the idea of developing products that are built to perform and priced to sell.
Intel has introduced a dozen new microprocessors based on the company's new 0.13-micron manufacturing technology, designed specifically to enhance battery life in mobile PCs while maintaining robust levels of performance. Included in the announcement were the high-performance 1.2 GHz Pentium III-M chip and the ultra-low-voltage 700 MHz Pentium III-M, the latter of which operates at 0.95 volts while consuming less than half a watt of power. The announcement also included four other Pentium processors and six Celeron chips that span a variety of mobile computing needs and applications. In an unrelated announcement, IBM stated that it had launched a company-wide initiative to improve the energy efficiency of information technology for enterprises and consumers, and would establish a low-power computing program coordinated from the company's research facility in Austin, Texas. IBM has also opened a low-power consulting practice and is accelerating the development of low-power products including servers, storage systems, desktop computers and ThinkPad notebook computers.
These two announcements may seem connected by tenuous low-power threads, but we believe they both touch some deeper issues worth discussing. While a host of companies have publicly embraced the notion of ubiquitous computing, with users having their processing cake and eating it whenever, wherever and however they want it, the challenge of running increasingly powerful mobile PCs loaded with increasingly bulky applications has severely affected battery life. Smart power-saving techniques and packing along extra batteries work up to a point, but Intel's announcement illuminates the critical link between laptop batteries and basic user satisfaction. If the ubiquitous computing model is to work, manufacturers must aggressively explore the same sorts of lower-power-consumption technologies that Intel is pursuing.
IBM's energy-efficiency initiatives largely target the higher end of the computing and product scale and touch areas of concern that have long been part of enterprise computing. Not so very long ago, when IBM produced mainframes whose water-cooling systems provided the only effective means of keeping the computers from overheating, Amdahl gained some fame by producing the first IBM-compatible mainframes with workable air-cooling systems. The days of water-cooled computers are long over, replaced by advanced cooling solutions and air-conditioned data facilities. But while improvements in computing performance have allowed users to save money on data processing, fluctuating energy costs have kept the price of operating energy-hog data processing facilities on the high side. Anyone who lived in California during the first half of 2001, when badly botched energy deregulation and a spot market gone mad with greed led to rolling blackouts across the state, understands just how serious the issue of energy and energy conservation can be. IBM's company-wide initiative may be unique for now, but more vendors are likely to climb aboard the energy-saving bandwagon over time.
Excite@Home filed for Chapter 11 bankruptcy protection last Friday, reporting that it had $150 million in cash and $1.1 billion in debt. As part of the bankruptcy plan, the company intends to sell its high-speed Internet access business to AT&T Broadband for $307 million, unless a better offer is forthcoming. With the asset purchase, AT&T would gain Excite@Home's 3.6 million subscribers, and AT&T indicated that it would hire a portion of Excite@Home's roughly 1,600 employees. Excite@Home also indicated that it has retained the investment bank Houlihan Lokey Howard & Zukin Capital to assist it in pursuing strategic alternatives and continuing the bidding process for its access service. The service provider stated that it would continue to operate its high-speed service while a final sale awaits approval. The sale of the Internet service business to AT&T is subject to approval by the U.S. Bankruptcy Court in San Francisco.
We have been repeating a simple refrain for some time: it costs a bundle to build out a network, especially the dreaded "last mile." As CLECs, DLECs and others have gone under for similar reasons, the reality of today's capital markets dictates that only the very big, with very deep pockets, will be able to bear this cost while awaiting their return through the annuities that this kind of business offers. It is not surprising that Ma Bell, oops, AT&T Broadband, invested in this company and other significant cable operations, as these acquisitions provided a ready-built "last mile" connection into homes and businesses across the country. While the carnage continues, AT&T picks up the pieces at fire-sale prices, customer lists and all, and we believe it will continue to weave its new network access points (the cable) together into a large, ultimately multi-regional access provider.
For some time, it has been posited that higher-speed access would lead to more and better use of the network. Unfortunately for those who bet it all on "content (advertising) is king," the results have been disappointing. But we believe the changing nature of applications, network services and Internet user behavior only reaffirms the intrinsic value of cable Internet access, along with other forms of persistent access. At a time when the computing model is beginning a transition to service-based computing, the need for persistent access to myriad dynamic, interactive, perhaps interchangeable business and personal information objects and services will be a driving force behind persistent connectivity. While Excite@Home has bought the farm, it is not the idea of persistent access over cable that should be judged flawed, but rather the extraneous business activities that cost the company far more than they ever returned in revenue.
Intel has reportedly asked PC vendors to cease producing consumer PCs with floppy disk drives in the second half of next year. According to the reports, corporate machines will still include the 1.44 MB drives. Intel is also asking that serial ports and PS/2 ports be discontinued at the same time. The rationale behind the move is the integration of USB 2.0 into motherboards by the middle of next year. According to news reports, the parallel port will not be removed.
While not earth-shattering by any means, the apparent pending demise of the floppy disk drive is an occasion to take stock of just how far computing has come in the past 10-15 years. At one time, floppy drives not only provided the means for installing software but also enabled, in many cases, de facto networks. Disks changed hands and moved up and down the food chain in scores of organizations, becoming a new-fangled form of the classic manila interoffice mail envelope with the tie-down string. The distance between such simple "networking" and today's instantaneous worldwide file transfers between PCs is great indeed. Yet as substantial as that distance is, we believe it will be dwarfed by the changes we see as we turn our rangefinder toward the future, where an even greater distance lies between now and a time five years on. We see the PC, and more importantly the applications running therein, going through transformations that will make our present application environment seem as clunky as that old, buzzing floppy drive. Monolithic application packages will be replaced by a more ad hoc, on-the-fly application assembly environment that will allow for greater customization, finer application granularity and revolutionary changes in the enterprise decision-making process as it concerns IT purchases and deployment. Stay tuned.
The Sageza Group, Inc.
Veterans Blvd, Suite 500
Redwood City, CA 94063-1743
650·366·0700 fax 650·649·2302
Europe (London) 44·020·7900·2819