July 28, 2006
This week HP announced that it has signed a definitive agreement to purchase Mercury Interactive Corp., headquartered in Mountain View, California. The all-cash deal will see HP pay $52 per share, valuing the acquisition at approximately $4.5 billion net of existing cash and debt. It is worth noting that at the end of December 2005 Mercury’s cash and investment balance stood at some $1.4 billion, making HP’s net investment just over $3.1 billion. The acquisition will be conducted by means of a tender offer for all of the outstanding shares of Mercury, followed by a merger of Mercury with an HP subsidiary. The tender offer is subject to the usual closing conditions and comes close on the heels of Mercury’s recent restatement of its financial returns for the fiscal years 2004, 2003, and 2002. HP expects to commence the tender offer promptly, and the merger is expected to close in the fourth quarter of 2006.
This deal marks a major landmark in HP’s development as a software company. It is a major acquisition, indeed the largest by some way since the purchase of Compaq some four years ago. When the deal is completed Mercury is expected to become part of the Software division, and the new HP Software business will account for over $2 billion in annual revenue. Following the close of the transaction, the sales forces of the combined organization will start to reference-sell each other’s products. In addition to extending HP’s software array with Mercury’s Business Technology Optimization (BTO) solutions, the acquisition will add considerable software “depth” and experience. Founded in 1989, Mercury currently boasts twenty-six offices worldwide and some 2,600 staff, which will allow HP to boost its high-level business service management sales skills.
There is no doubt that Mercury Interactive will sit well alongside HP’s many recent software acquisitions and the HP OpenView suite of systems management offerings. Together the portfolio will offer considerable breadth and depth. HP is well positioned to manage the usual logistical and personnel challenges associated with a reasonably large acquisition. The company’s greatest effort must go into establishing consistent and easily understandable business value messaging for its ever-broadening software solutions. HP as a brand is very well known, but HP has a lot of work to do, internally and externally, to promote and exploit its undoubted potential as a supplier of valuable business-critical software solutions.
IBM has announced the latest high-end servers in its System p family, the IBM System p5-590 and System p5-595, both equipped with POWER5+ processors and supporting up to thirty-two and sixty-four cores respectively. The new servers feature sixteen-core units called “books,” each containing two eight-core multichip modules (MCMs) with four dual-core POWER5+ processors apiece. Each processor chip contains 1.9MB of L2 cache and an integrated memory controller, and the MCM provides 36MB of L3 cache per dual-core processor chip. Each book offers sixteen memory card slots supporting up to 512GB of RAM. The IBM Virtualization Engine allows each server to accommodate up to ten virtual server partitions per processor core. The sixty-four-core p5-595 running a single instance of IBM DB2 9 on AIX 5L, using IBM System Storage DS4800, processed 4,016,222 transactions per minute on the TPC-C benchmark. The company noted that its improved processor performance is due to the Dual Stress process that IBM originally developed for state-of-the-art video gaming consoles. This process simultaneously stretches and compresses the silicon to deliver up to a 24% transistor speed increase, at the same power levels, compared with similar transistors produced without the technology.

IBM also announced the IBM Tivoli Usage and Accounting Manager, which collects information from operating systems, databases, networks, storage systems, applications, and virtualized environments and tracks which part of an organization consumes each of these resources, enabling administrators to accurately monitor and bill for individual usage of virtualized resources. Additionally, IBM is launching the IBM Server Consolidation Factory for System p, which delivers a complete solution including hardware, middleware, consulting and deployment services, and financing to help customers move to a virtualized environment on System p.
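The configuration figures quoted for these machines imply some impressive fully loaded totals. As a back-of-the-envelope sketch, the following arithmetic is derived solely from the numbers in the announcement; the variable names are ours for illustration, not IBM designations, and the results are configuration arithmetic rather than official specifications.

```python
# Maximum-configuration totals for a p5-595, derived from the
# per-book and per-chip figures quoted in the announcement.

CORES_TOTAL = 64          # p5-595 maximum core count
CORES_PER_BOOK = 16       # two eight-core MCMs per book
CHIPS_PER_BOOK = 8        # four dual-core POWER5+ chips per MCM x 2 MCMs
L3_PER_CHIP_MB = 36       # MB of L3 cache per dual-core chip
RAM_PER_BOOK_GB = 512     # sixteen memory card slots per book
PARTITIONS_PER_CORE = 10  # IBM Virtualization Engine limit

books = CORES_TOTAL // CORES_PER_BOOK                  # 4 books
l3_total_mb = books * CHIPS_PER_BOOK * L3_PER_CHIP_MB  # 1152MB of L3
ram_total_gb = books * RAM_PER_BOOK_GB                 # 2048GB (2TB) of RAM
partitions_max = CORES_TOTAL * PARTITIONS_PER_CORE     # 640 virtual partitions

print(books, l3_total_mb, ram_total_gb, partitions_max)
```

The last figure is perhaps the most telling for consolidation projects: a single fully configured p5-595 could, at the stated limit, host several hundred virtual server partitions.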
The IT motor speedway is one place where the drive for bigger, better, faster, and cheaper never seems to diminish. We see this in these announcements, but we also see benefits derived from seemingly innocuous consumer electronics finding their place inside some of the world’s largest and most powerful servers. The latest System p5 offerings are powerful systems with a lot to offer large organizations beyond simple benchmark bragging rights, as they feature some of the most granular virtualization capabilities available today. The Tivoli Usage and Accounting Manager addresses what has been a missing component in many organizations’ IT consolidation efforts, namely the ability to track and manage the use of virtualized resources by many different departments and users. While IT consolidation makes a great deal of sense from an operational perspective, the politics of cost allocation within organizations have proven a challenge for some wishing to further rationalize IT resources. Hopefully, the Tivoli offerings will assuage much of this resistance as department budget managers come to realize that they will not be bearing the financial burden of someone else’s IT excess.
The Dual Stress process is interesting in that it illustrates the build-a-better-mousetrap approach available to systems vendors who develop their own processor technologies. By maintaining a sizable investment in microchip design and manufacturing, IBM has been able to raise the bar on transistor performance while simultaneously improving energy efficiency and hence lowering heat generation. While we have heard much in the marketplace of late about improved cooling (which is very important), not generating the heat in the first place is an even better approach to cooling, and operationally a less expensive one. The process breakthroughs that Dual Stress realized would likely not have been cost-effective from an R&D perspective were it not for the millions of processors that could be sold into consumer electronics offerings in addition to the server marketplace. For some time we have heard the refrain from systems vendors that they are bringing high-end technology down the product line to systems that mere mortals can purchase. With Dual Stress it is ironic that something developed for the low end of the computing food chain could have so much value and impact at the rarefied levels of computing supremacy.
Lastly, the potential value of the Server Consolidation Factory for System p should not be overlooked. While many organizations are filled with very capable IT talent, having that in-house talent undertake system migration and consolidation initiatives can significantly reduce the resources available for operating the rest of the business. The expansion of this service to include System p should catch the attention of organizations that would prefer a little outside help to smooth the technology transition they are undertaking, while allowing in-house talent to remain focused on the primary customer, the business itself.
Overall, these announcements are a further indication that IBM hardly considers the UNIX marketplace a stagnant one. The power, capability, and manageability of these offerings are state-of-the-art and raise the competitive bar in this market. Together with upcoming announcements regarding entry-level systems, they demonstrate the scalability of the Power architecture from the smallest of consumer electronics through entry-level machines all the way to the high end of data center computing.
Sun Microsystems and Greenplum have announced a new data storage appliance based on Sun's new Sun Fire X4500 data server, which is powered by Dual-Core AMD Opteron processors, along with Greenplum's parallel distribution of PostgreSQL, known as Bizgres MPP. The turnkey appliance is targeted at organizations seeking to analyze hundreds of terabytes of business data in a cost-effective fashion. The appliance is based upon open source software including Solaris 10, PostgreSQL, and Solaris ZFS, and supports industry standards and interfaces including SQL, ODBC, and JDBC. Solaris ZFS is based on a transactional object model that removes most of the traditional constraints associated with I/O operations, resulting in performance gains. The companies claim the Query-In-Storage™ design is capable of scanning 1TB per minute and can scale to hundreds of terabytes of usable capacity while consuming only 90W of energy per TB of data. In addition, the appliance features failover and mirroring capabilities to deliver continuous uptime, while leveraging Sun's global support operations to help ensure 24x7 operation. The Data Warehouse Appliance will be available later this quarter in usable database capacities of 10TB, 40TB, and 100TB. Price points for the 40TB and 100TB configurations begin at $15,000 per usable TB, with pricing for the 10TB configuration starting at $25,000 per usable TB.
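It is worth putting the vendors' per-terabyte claims into whole-system terms. The sketch below simply multiplies out the figures quoted above for each announced configuration; the scan rate and power draw are the companies' claims, not independent measurements, and the prices are the stated starting points.

```python
# Whole-system implications of the per-TB claims for each
# announced Data Warehouse Appliance configuration.

SCAN_TB_PER_MIN = 1.0   # claimed Query-In-Storage scan rate
WATTS_PER_TB = 90       # claimed power draw per TB of data

configs = {             # usable TB -> starting price per usable TB ($)
    10: 25_000,
    40: 15_000,
    100: 15_000,
}

for tb, price_per_tb in configs.items():
    full_scan_min = tb / SCAN_TB_PER_MIN        # minutes to scan all data
    power_kw = tb * WATTS_PER_TB / 1000         # total claimed power draw
    list_price = tb * price_per_tb              # stated starting price
    print(f"{tb}TB: ~{full_scan_min:.0f} min full scan, "
          f"~{power_kw:.1f}kW, ${list_price:,} starting price")
```

At the claimed rate, even the largest 100TB configuration could in principle be scanned end to end in well under two hours while drawing roughly 9kW, which is the crux of the cost-effectiveness argument being made.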
This is an interesting move by Sun and Greenplum. Appliances usually mean devices that are simple to install and hopefully simple to manage, something that is always a boon to ever-IT-personnel-starved SMBs. However, in this offering, the target market seems to be large organizations with tremendous data throughput that need access to all of their information all the time. Telecommunication companies first come to mind, but so do most organizations with a high degree of customer information or those that manage complicated products with extensive parts detail and supporting information. In this market segment, it’s encouraging to see a concrete example of an open source solution in action.
Business Intelligence has become increasingly important, but many organizations seem to think that it is too difficult and/or expensive to undertake. BI is a very competitive market with quite a few heavyweight vendors, and it is interesting to note that more than a few of the BI software players run on Sun hardware. Will this deal between Sun and Greenplum have a negative impact on those relationships? Can Sun and Greenplum promote the Data Warehouse Appliance effectively enough to establish a significantly large user base, or will this offering be just another example of an open source idea that made it to market but not much further? Given the complexity of BI, we would tend to think that the traditional view of open source, the roll-your-own low-cost approach, would not match market need. But the importance of BI, and the fact that it remains out of reach of many, may cause this offering to gain some traction. Since it is backed by a leading services organization, some of the usual open source concern could be alleviated. We will be watching this space closely to see just how receptive the market is to this new offering.
The last few weeks have witnessed various elements of the legislative and administrative arms of the European Union (EU) flexing their muscles in the area of commerce. Over the course of the last ten days we have seen developments in the mobile/cell phone service area, with the EU announcing plans to curtail the costs imposed by service providers on customers who use their mobile phones while traveling abroad. The proposals drawn up by the Information and Media department seek to cap the wholesale costs associated with the termination rates charged by service providers, and the Commissioner hopes that the limits will eventually reduce roaming charges by up to 70%. The caps will apply to charges on both calls made and calls received by customers while abroad within the EU.
Originally the Commissioner had hoped to ban roaming charges completely but objections by both the industry itself and the many national regulators led to a change of mind. However, if the proposals do lead to a significant reduction in roaming charges the intervention of the EU is, for once, likely to be welcomed very warmly. After all, many of the players in the mobile space are multinational companies that run the networks in many of the countries, yet even their own customers today pay considerable roaming charges.
The fact that the EU has felt compelled to intervene in what is otherwise a relatively open market does raise some interesting questions. Central among these is why the market has not led to the lower prices for international roaming that the EU clearly feels should be available. Could it be that in this market price competition is simply not working? If so, why should that be? It also appears certain that there is still room for considerable consolidation among the mobile service providers in Europe. There are many suppliers that are essentially national in scope and serve their national markets only. Is this viable going forward? Will the European national governments allow vendor consolidation to take place rapidly?
It is perhaps the intervention of the EU itself that is most interesting. In everyday life, the role of the EU is considered by the man and woman in the street to add little “value”; in fact the EU is often the butt of sour jokes concerning its bureaucratic nature and its own perceived high costs. This intervention has received considerable publicity, has pushed many of the service providers onto the defensive, and should lead to a reduction in customers’ bills. On its own this intervention by the EU may not be too significant. But coming as it does at a time when the EU/Microsoft fines are again in the news, one might posit that the EU is girding its loins, ready to become more vocal in promoting “competition.” One can only speculate about where the EU might next intervene in a functioning market.