Market Roundup
October 19, 2001
This Week
Verizon Reaches 1,000,000 DSL Subscriber Mark – Now The Real Work Begins
IBM Describes “Autonomic” Computing Strategy/Effort
Microsoft/NEC Announce Strategic Alliance
WebMD Announces Severance Packages for Top Execs
Verizon announced this week that it now has more than 1,000,000 DSL subscribers nationwide. The company says its target for this year is 1.2 to 1.3 million DSL subscribers, a number it may not now hit due to slowing market conditions after the events of September 11. Verizon claims to have spent more than $1 billion to date on its DSL rollout and to have more than 32,000,000 lines ready for DSL service. The company has been pricing its DSL service aggressively, with an introductory rate of $29.95 per month for the first three months of service, and it says that wait times for DSL service have dropped from twenty-two days earlier this year to ten days now.
One million is always a nice number to trumpet, especially if you are providing Internet connectivity. Beyond its PR value, though, Verizon’s accomplishment really doesn’t mean all that much. What will mean something is the day when DSL and other broadband services are not only ubiquitous but also reliable and consistent. At present, DSL services deliver on their prime marketing message – they are faster than dialup connections – and not a whole lot more. Drop-offs are still part of the DSL experience, as are significant degradations in access speeds. While DSL companies love to taunt cable broadband providers with advertisements about network performance issues, we have seen our fair share of speed loss on the DSL side as well. Whether that is due to the DSL providers’ network infrastructure or to the larger Internet backbone is, in our opinion, largely irrelevant. As we move into a new era of Service Computing that requires a resilient, high-capacity network to deliver the necessary components to desktops on an ad hoc, as-needed basis, demand for network reliability is going to jump through the roof. In this emerging environment – and yes, Windows XP is going to give us an early taste of it with its automatic update features – a less-than-nine-nines delivery environment is going to seem as useless and frustrating as a 14.4 modem dragged out of the closet and fired up in an emergency. As Service Computing makes its presence felt more widely in the coming years, connectivity companies like Verizon are going to need to focus on something more than subscriber numbers. Having millions of subscribers with spotty or unreliable service is not just going to be a PR nightmare; it is going to slow the adoption and acceptance of Service Computing. And that would be very bad news for the industry at large, indeed.
IBM has released a document entitled “Autonomic Computing: IBM’s Perspective on the State of Information Technology” that describes the company’s vision of the future of computing and the steps needed to make that vision real. In IBM’s view, computing complexity is increasing to the point where human IT administrators will be unable to manage effectively the number of tasks required of them. Add to that the growing shortage of skilled IT workers, and the result is an equation with potentially disastrous results. IBM considers the solution to this problem to be a migration to automated computing processes akin to the human autonomic nervous system, which regulates and manages bodily processes by responding automatically to circumstances and stimuli. IBM cited its own efforts, including its eLiza initiative and Tivoli e-sourcing and systems management solutions, as well as academic work in grid computing, as contributing to what will eventually become autonomic computing. But the company also cautioned that for autonomic computing to succeed, vendors will need to approach solutions systemically and fully embrace open standards rather than proprietary solutions.
While we find the IBM report notable on a number of levels, its greatest value may be as a roadmap to what we at Sageza refer to as Service Computing. Over the past eighteen months, increasing computing performance and decreasing price points have been leading to a time when both hardware and software will be regarded as mere commodities. Stir into the mix the growth of heterogeneous computing environments, the ongoing build-out of broadband Internet connectivity, and the development of high-performance distributed computing and networking technologies, and the result will be a Web-based infrastructure that can adequately support software-as-service business models such as Microsoft’s .NET initiative and SunONE. Over time, we believe that computing processes and capabilities of every kind will be deployed and sold in models similar to telephone and utility services.
How does IBM’s autonomic computing effort fit into this scenario? For Service Computing to succeed, it will need to be supported by a robust, highly complex computing infrastructure based on open standards. For such an infrastructure to perform, its management, diagnostic, and repair processes must be highly automated, to the point of being autonomic in the sense described by IBM’s newly published report. Can this possibly work? Yes and no. Advances in technology and shifts in the competitive landscape suggest that a time is fast approaching when Service Computing could exist technically and logistically. To our way of thinking, the potential benefits that Service Computing offers outweigh the risks and potential downsides, but vendors who believe it threatens their own proprietary products will also influence the depth and breadth of its eventual adoption. What will eventually come of all this is impossible to say, but IBM’s report on autonomic computing proposes a future in which Service Computing will be the norm.
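To make the idea of autonomic operation slightly more concrete, the sketch below shows – purely as an illustration, with hypothetical names and no connection to IBM’s actual eLiza or Tivoli technology – the kind of monitor/diagnose/repair loop that such an infrastructure would need to run continuously, without a human administrator in the path.

```python
import random
import time

# Toy illustration (all names hypothetical): an "autonomic" control loop that
# monitors a managed resource, diagnoses failures, and applies a repair action
# automatically -- the sort of self-management the IBM report argues for.

class ManagedService:
    """Stand-in for a piece of infrastructure under autonomic management."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def check_health(self):
        # Simulate an intermittent failure for demonstration purposes.
        if random.random() < 0.3:
            self.healthy = False
        return self.healthy

    def restart(self):
        print(f"[repair] restarting {self.name}")
        self.healthy = True


def autonomic_loop(service, cycles=5, interval=1.0):
    """Monitor -> diagnose -> repair, repeated on a fixed interval."""
    for _ in range(cycles):
        if service.check_health():
            print(f"[monitor] {service.name} is healthy")
        else:
            print(f"[diagnose] {service.name} failed its health check")
            service.restart()
        time.sleep(interval)


if __name__ == "__main__":
    autonomic_loop(ManagedService("web-frontend"))
```

In a real Service Computing environment, of course, the health checks, diagnoses, and repair actions would span thousands of heterogeneous components rather than a single simulated service, which is precisely why the underlying standards need to be open rather than proprietary.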
Microsoft and NEC have announced a strategic alliance to develop platform products, SI support, and Internet services that will lead to new wireless and broadband business solutions. The alliance includes agreements to jointly test the integration of Microsoft’s 64-bit Windows .NET server products with NEC’s next-generation 64-bit IA server (code-named “AsAmA”), and to pair Microsoft’s appliance server software with NEC’s next-generation “blade” server products. Additionally, the two companies will evaluate and test IP-SAN technologies with NEC’s iStorage products and will performance-test NAS solutions for future use. Finally, Microsoft and NEC will jointly pursue strategies for enhancing next-generation fault-tolerant or continuous-operation PC servers. No calendar or roadmap for developing or deploying new products was included in the announcement.
The Microsoft/NEC announcement is not at all surprising, as it follows similar strategic agreements Microsoft inked last year with Japanese hardware vendors Hitachi and Fujitsu. The deal also reflects the ongoing global effort Microsoft is putting behind its 64-bit server products, including deals announced in August of this year to deploy Windows Advanced Server Limited Edition on Itanium-based products from IBM, Compaq, HP, and Dell by the end of the year. Microsoft’s XP-based 64-bit OS products are scheduled for release next year. The projected winners we see in this deal include Microsoft-committed businesses, which will enjoy a growing range of Windows options for workstations and fault-tolerant servers, as well as hardware vendors, including those listed above, that are committing themselves to an Itanium future. NEC also stands to gain from the deal by raising its profile from the PC products for which it is best known to the production of higher-end Intel-based servers.
Microsoft, too, continues its blue-ribbon performance. Despite the company’s ongoing antitrust woes, controversies surrounding the upcoming XP release, and complaints regarding projected changes in its product licensing rules, Microsoft continues to deliver on its enterprise product strategy. One result has been the increasing penetration of Microsoft Internet Information Server (IIS), especially in commercial and e-commerce Web sites, and the company’s plan to develop fault-tolerant products could help open lucrative new markets to it. If we had to pick prospective losers here, Sun Microsystems would probably lead the list, for a pair of reasons. First, Sun’s role as the lone hardware vendor dedicated solely to proprietary chipsets and OS products is likely to put the company at an increasing disadvantage when its competitors begin to roll out new Itanium products. Second, Sun’s unbridled efforts to demonize all things Microsoft (most recently by attempting to promote its own iPlanet server software as inherently more secure than IIS) are growing increasingly strident. Sun, in short, sounds like a company in trouble. Considering its success in executing its 64-bit server strategy, Microsoft acts as if trouble is the furthest thing from its mind.
WebMD has joined the list of failing companies that have announced sizable severance packages for departing or departed executives. The company will pay former President Marvin Rich $1 million a year for the next three years, and Patricia Fili-Krushnel, the company’s former chief executive for consumer sales and services, will receive $750,000 per year for the same period. WebMD previously paid former co-Chief Executive Jeffrey Arnold $4 million when he resigned last October, and other executives received smaller severance packages as well. WebMD’s stock is now trading around $4.50 a share, down from its all-time high above $100 a share in early 1999. The company posted a loss of $1.9 billion for the first half of this year.
While there is sure to be no small amount of groaning and griping about the golden parachutes offered to executives of this once high-flying Internet bubble rider, we will – for the most part, anyway – refrain from adding our boot to the collective disgust over sizable payouts to executives who clearly failed in their mission. While the golden parachutes here represent some of the gross over-compensation we have seen throughout the Internet bubble, we think there is much more meaningful fodder in the roots of WebMD’s failure to gain real traction in the healthcare markets. Let’s start with the name, shall we? WebMD had all the initial good looks of an Internet start-up: it was going to combine two increasingly core elements of American life, health care and the Internet, both of which are certainly growing sectors of the economy. A virtual can’t-miss, right? Well, no. The health care industry is incredibly complex, with an arcane labyrinth of well-entrenched players, including HMOs, insurance companies, and pharmaceutical companies, to name just a few. Mix in an increasingly complex matrix of local, state, and federal regulatory and legislative oversight, and one has an industry that is all but impenetrable. And herein lies the lesson. Vertical market expertise will not only continue to be a necessity for vendors attempting to expand their markets; it will grow in importance. As we move away from the one-size-fits-all mentality of applications and hardware that dominated the early years of the Internet, we will see increasing demands from market sectors for highly integrated, knowledgeable solutions to specific needs. Simply offering to merge the Internet and fill-in-the-blank market sector is not going to cut it in the future, just as it hasn’t in the recent past. Vendors seeking to penetrate new markets need to choose partners and expertise providers that will help them mine gold from those markets, not golden parachutes.
The Sageza Group, Inc.
900 Veterans Blvd, Suite 500, Redwood City, CA 94063-1743
650·366·0700 · fax 650·649·2302 · Europe (London) 44·020·7900·2819
Copyright © 2001 The Sageza Group, Inc. May not be duplicated or retransmitted without written permission.