Market Roundup

May 18, 2007

IBM’s Project Big Green

EMC Releases ControlCenter 6.0

Autonomy Touts Forensic Preventative Medicine with New Module

Industrial Defender Targets Highly Vulnerable SCADA Market

 


IBM’s Project Big Green

By Clay Ryder

IBM has announced it is redirecting $1 billion per year across its businesses to dramatically increase the energy efficiency of IT operations. Called Project Big Green, IBM’s initiative targets corporate data centers where energy constraints and costs can limit their ability to grow. The initiative includes a new global green team of 850+ energy efficiency architects from across IBM. The company stated that for an average 25,000 square foot data center, the potential energy savings should be upwards of 42%, which, based on the energy mix in the U.S., would equate to a 7,439-ton reduction in carbon emissions per year. Project Big Green outlines a five-step approach to improving energy efficiency. The five steps are: Diagnose: energy assessment, virtual 3-D power management, and thermal analytics; Build: plan, build, or update to an energy-efficient data center; Virtualize: IT infrastructures and special-purpose processors; Manage: control with power management software; and Cool: exploit liquid cooling solutions inside and outside of the data center. The company also stated that it will soon launch The Energy Efficiency Incentive Finder, a central website for details about energy efficiency incentives and programs that are available from local utility companies, governments, and other participating agencies anywhere in the world. IBM Global Financing is positioned as part of Project Big Green to provide a green wrapper of financing solutions to help organizations acquire the hardware, software, and services they need to build an energy-efficient data center while aligning upfront costs with anticipated project benefits.
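
As a rough illustration of the arithmetic behind such a claim, consider the back-of-the-envelope sketch below (in Python). The power density and grid emissions factor are our own illustrative assumptions, not IBM figures, which is why the result does not match IBM's exact number.

# Illustrative back-of-the-envelope estimate of data center energy savings and
# the associated carbon reduction. All inputs are assumptions for the sake of
# the example, not IBM figures.

FLOOR_AREA_SQFT = 25000            # data center size cited by IBM
POWER_DENSITY_W_PER_SQFT = 100     # assumed average facility load
HOURS_PER_YEAR = 8760
SAVINGS_FRACTION = 0.42            # the 42% potential savings cited
CO2_TONS_PER_MWH = 0.6             # assumed U.S. grid emissions factor, circa 2007

baseline_mwh = FLOOR_AREA_SQFT * POWER_DENSITY_W_PER_SQFT * HOURS_PER_YEAR / 1e6
saved_mwh = baseline_mwh * SAVINGS_FRACTION
co2_saved_tons = saved_mwh * CO2_TONS_PER_MWH

print(f"Baseline consumption: {baseline_mwh:,.0f} MWh/year")
print(f"Energy saved:         {saved_mwh:,.0f} MWh/year")
print(f"CO2 avoided:          {co2_saved_tons:,.0f} tons/year")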

IBM announced several products/services to address each of the steps outlined in Project Big Green. Among those announced for the Diagnose step are the IBM Data Center Energy Efficiency Assessment, which utilizes a new standard metric to rate data center energy efficiency and create a plan to increase efficiency; Mobile Measurement Technology, which measures 3-D temperature distributions through a new mobile position monitoring system; and the IBM Thermal Analysis for High Density Computing service. For the Build step, IBM announced the Energy Efficiency Self Assessment as well as the IBM Scalable Modular Data Center, a pre-configured 500 or 1,000 square foot energy-efficient data center solution, among other offerings. Other announcements included Tivoli management software that will expand the IBM Cool Blue portfolio to monitor power consumption, set power policies, and track energy usage to facilitate chargeback to departments; PowerExecutive software, part of the IBM Systems Director portfolio, which will be available across all IBM systems and storage as of November 2007; and a patented “stored cooling” offering, the IBM Data Center Stored Cooling Solution service, which dramatically increases the efficiency of the end-to-end cooling system.

Egad! When IBM does something big, it really does something big. The greening of the data center has been a top-of-mind topic, and we have seen many vendors bring products or announcements to bear that illustrate their desire to be seen as an energy-efficient IT partner for businesses. But with few exceptions, most announcements have been around point products or specific segments of the larger data center energy management and efficiency equation. A notable exception came last fall, when Hewlett-Packard announced its Dynamic Smart Cooling and displayed initiatives that addressed the totality of the data center. Likewise, in these announcements we see IBM’s Project Big Green as a strategic initiative whose impact will reach far beyond IBM and its customers, helping to set the tone and overall marketplace direction with respect to data center energy efficiency.

While there are five very logical steps articulated by Big Blue for organizations seeking enhanced data center energy efficiency, for most organizations the first step, Diagnose, is the most relevant. We view the IBM Data Center Energy Efficiency Assessment positively, in part because of its use of a new standardized metric to gauge overall energy efficiency. The lack of easily understood metrics has made it difficult for organizations to discuss how efficient their data centers are at present, since few organizations have the disparate collection of specialized knowledge requisite for such an undertaking. Further, the lack of distributed and easily moveable thermal monitoring points within the data center presents a challenge in assessing the thermal dynamics and hence the degree of resource utilization and waste taking place. It is clear to us that without a thorough assessment of heat and power usage levels it would be almost impossible for any organization to gauge its efficiency and plan for improvement. At the same time, we expect that most organizations do not have the skill set or free time to undertake such a study, which makes the availability of outside services from trusted third parties all the more important.
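
IBM has not, in this announcement, spelled out the metric its assessment uses. For illustration only, the sketch below computes one commonly cited data center efficiency metric, PUE (Power Usage Effectiveness), and its inverse DCiE, from hypothetical meter readings.

# A minimal sketch of a common data center efficiency metric, PUE, and its
# inverse DCiE. Whether IBM's assessment uses this exact metric is not stated
# in the announcement; the readings below are hypothetical.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Total facility power divided by power delivered to IT equipment."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: fraction of power reaching IT gear."""
    return it_equipment_kw / total_facility_kw

# Hypothetical meter readings: 2,000 kW drawn at the utility feed,
# 1,100 kW actually reaching servers, storage, and network gear.
print(f"PUE:  {pue(2000, 1100):.2f}")    # ~1.82: typical of an untuned facility
print(f"DCiE: {dcie(2000, 1100):.0%}")   # ~55% of power does useful IT work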

As noted by Big Blue, provisioning software can reduce power consumption on servers by up to 80%. Hence the value of Tivoli management software addressing power consumption through power policies and tracking energy usage, combined with PowerExecutive’s ability to allocate, match, and cap power and thermal limits at the system, chassis, or rack level, becomes very apparent. We believe that once organizations have a clear understanding of their power and thermal envelopes in the data center, such software will become a no-brainer for the data center manager. Although the reduced cost of power consumed will likely be a welcome result, more important is the reclamation of power and cooling capacity. As less power is drawn, more is available for future growth, and the same can be said for cooling capacity. In an era of blades and other high-density form factors, this headroom for growth is more important than ever as organizations continue to deploy these densely packed IT technologies. This is a winning scenario: operational costs can decrease in the present, and capital expenditures for facilities can be reduced in the future as well.
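
To make the power-capping idea concrete, here is a minimal sketch of one way a rack-level budget might be apportioned across servers. It is our own illustration of the general technique, not the PowerExecutive interface or algorithm, and the wattage figures are hypothetical.

# A minimal sketch of rack-level power capping: given a rack power budget,
# allocate per-server caps in proportion to measured demand. Our own
# illustration of the general technique, not the PowerExecutive interface.

def allocate_power_caps(rack_budget_w: float, measured_draw_w: dict) -> dict:
    """Scale each server's cap so the rack total never exceeds its budget."""
    total_demand = sum(measured_draw_w.values())
    if total_demand <= rack_budget_w:
        # Plenty of headroom: let each server run at its measured draw.
        return dict(measured_draw_w)
    scale = rack_budget_w / total_demand
    return {server: draw * scale for server, draw in measured_draw_w.items()}

# Hypothetical rack of blades drawing more than its 8 kW budget.
demand = {"blade01": 3200, "blade02": 2800, "blade03": 3500}   # watts
caps = allocate_power_caps(8000, demand)
for server, cap in caps.items():
    print(f"{server}: cap at {cap:,.0f} W")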

Finally, we are always intrigued by liquid cooling. At one time this was considered the dinosaur of IT, yet like its partner the mainframe, the technology has reinvented itself to become very relevant again. Chilled doors have streamlined the implementation of liquid cooling; however, in the highest echelons of computing, namely massively scaled systems, we see the potential for chilled plumbing to make a comeback. Liquids can effectively remove heat from the insides of cabinets and racks, but if the heat is simply exchanged back into the data center air, the heat has merely been moved, not removed. Once this heat is delivered outside of the data center, the full potential of liquid cooling becomes evident.
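
A little physics explains the attraction: the heat a chilled-water loop carries away is the product of flow rate, specific heat, and temperature rise. The figures in the sketch below are illustrative assumptions, not measurements from any particular product.

# A rough illustration of why liquid is attractive for heat removal:
# the heat carried away by a chilled-water loop is Q = m_dot * c * dT.
# The flow rate and temperature rise below are illustrative assumptions.

SPECIFIC_HEAT_WATER = 4186        # J/(kg*K)
flow_rate_kg_per_s = 0.5          # assumed loop flow rate (~30 L/min)
temp_rise_c = 10                  # assumed water temperature rise across the rack

heat_removed_kw = flow_rate_kg_per_s * SPECIFIC_HEAT_WATER * temp_rise_c / 1000
print(f"Heat removed by the loop: {heat_removed_kw:.1f} kW")   # ~20.9 kW

# Removing the same ~20.9 kW with air at a 10 C rise would take roughly
# 20900 / (1.2 kg/m^3 * 1005 J/(kg*K) * 10 K) ~ 1.7 m^3/s of airflow,
# which is why dense racks favor liquid.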

Overall, the extent of this initiative is considerable, even by Big Blue standards. We are glad to see the company restate its intentions to be a primary player in energy efficiency and expect the competitive pressure in this arena to help address one of the most daunting issues facing corporate IT managers at present.

EMC Releases ControlCenter 6.0

By Clay Ryder

EMC has released EMC ControlCenter 6.0, the latest version of the company's flagship Storage Resource Management (SRM) solution. ControlCenter 6.0 offers comprehensive support for VMware Infrastructure, including discovery, problem management, compliance, change management, provisioning, and reporting of VMware ESX Server host and guest servers to enable SRM in virtual environments. EMC ControlCenter complements VMware's VirtualCenter software by providing end-to-end storage relationship information from a VMware ESX Server host to the physical array devices. Users can view properties, capacity, and usage information for a VMware ESX Server host and corresponding virtual machines. The solution discovers individual VMware ESX Server guests and reports the capacity of virtual disk files and raw storage devices mapped to each virtual machine guest. ControlCenter 6.0 also enables users to provision, mask, and zone storage to VMware ESX Server hosts. With the introduction of new reporting methods, users can easily access the data discovered by ControlCenter and tailor it to specific business requirements. For example, the new Query Builder feature within EMC StorageScope provides open access to a centralized reporting repository as well as customized reporting. Additionally, ControlCenter 6.0 now brings EMC ControlCenter StorageScope and EMC ControlCenter StorageScope File Level Reporter together, providing a single interface and infrastructure for enterprise-wide and file-level reporting. ControlCenter 6.0 also offers expanded heterogeneous platform support through expanded active management capabilities for Hitachi storage arrays and CTP-certified SMI-S 1.1 storage providers. EMC ControlCenter 6.0 will be available at the end of June.

As expected, this latest release features a variety of incremental improvements and product enhancements worthy of a mature SRM solution. However, what we find most compelling about this offering is how it works in conjunction with VMware technology to fold together physical and virtual environments, at least with respect to SRM. As virtualization continues its ascent to become a commonly accepted data center technology, managing these resources becomes more challenging, as does the task of integrating their existence into the typically better-managed physical realm. This is on top of the already often scattered monitoring of physical networks, servers, and storage resources, which EMC has sought to tame through its Smarts technology. With this release of ControlCenter we see tangible results afforded to organizations that desire to overcome the virtual/physical divide.

For virtualization to achieve its logical conclusion, all resources in the data center will have both a physical identity and potentially multiple virtual identities. Effectively mapping the dynamic virtual world onto the largely static physical world is a continuous, real-time activity. The ability to discover resources, associate virtual ones with their present physical instantiation, provision resources, and report upon their usage and state demands a strategic effort, one that is typified by this latest EMC offering. The ability to leverage data through Query Builder or other mechanisms bodes well for organizations that, for compliance or best-practices reasons, need to be able to report on the state of the infrastructure in a variety of ways. Being able to create reports on logical constructs such as files and directories, combined with information on servers, storage devices, and arrays, not only portends a new level of understanding of the information infrastructure but can also bolster any compliance policies put in place.
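
As a concrete, if simplified, illustration of the kind of virtual-to-physical relationship chain an SRM tool must maintain, consider the sketch below. The schema and records are our own invention and do not represent ControlCenter's repository format or the Query Builder interface.

# A minimal sketch of the virtual-to-physical mapping an SRM tool maintains:
# ESX host -> virtual machine -> virtual disk -> array device. The schema and
# records here are our own illustration, not ControlCenter's repository format.

from dataclasses import dataclass

@dataclass
class VirtualDisk:
    vm: str
    vmdk_path: str
    capacity_gb: int
    array: str
    device: str        # e.g. LUN identifier on the physical array

inventory = [
    VirtualDisk("finance-db", "[ds01] finance-db/finance-db.vmdk", 200, "ArrayA", "LUN_0041"),
    VirtualDisk("web-front",  "[ds01] web-front/web-front.vmdk",   40,  "ArrayA", "LUN_0042"),
    VirtualDisk("mail",       "[ds02] mail/mail.vmdk",             120, "ArrayB", "LUN_0007"),
]

# Example "report": total virtual capacity backed by each physical array.
usage_by_array = {}
for disk in inventory:
    usage_by_array[disk.array] = usage_by_array.get(disk.array, 0) + disk.capacity_gb
for array, gb in sorted(usage_by_array.items()):
    print(f"{array}: {gb} GB of virtual disk capacity provisioned")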

Overall, we see this as an important step forward in the continued maturity of ControlCenter as well as a good illustration of how EMC is integrating the formerly disparate worlds of networks, servers, and storage. The graphical interfaces and reporting tools are well positioned to ease the manageability of resources, especially for organizations whose degree of IT specialization is limited yet who still face an increasingly complex IT infrastructure. The unified view afforded by the new reporting capabilities, combined with the inclusion of certain Hitachi-based storage solutions, enhances the value proposition not only of ControlCenter, but of virtualization and consolidation initiatives overall. By seeking to bridge and map the virtual and physical divides, EMC is responding to a growing need in the data center as organizations of all sizes seek to virtualize, consolidate, and otherwise get their resources under control.

Autonomy Touts Forensic Preventative Medicine with New Module

By Lawrence D. Dietz

Autonomy Corporation plc today announced the immediate availability of IDOL ECHO, a new module that allows global organizations to forensically account for, track, and trace the lifecycle of every single piece of data within an organization. This includes telephone calls, voicemails, emails, instant messages, documents, and videos. Based on Autonomy's pan-enterprise platform, the Intelligent Data Operating Layer (IDOL), and leveraging Autonomy's Meaning-Based Computing technology, ECHO is a key development in helping organizations proactively protect the integrity of their data and employees in an increasingly regulated environment. Autonomy's IDOL ECHO delivers fully auditable and accountable monitoring of information use to global enterprises. With its ability to conceptually understand both structured and unstructured data, IDOL ECHO promises to quickly and accurately audit the entire information lifecycle from creation to deletion.

IDOL ECHO can follow the traffic pattern of data, such as the path of an email attachment or voicemail. For example, companies can track who has read, heard, forwarded, and retained a message. It will also detect the influence of a message's content, including who within the enterprise has taken, re-purposed, been persuaded by, or even plagiarized what he or she has heard or seen. IDOL ECHO can also trace information as it jumps from mail to phone conversation to document. For example, person A receives a call from person B, then relays the ideas in the parking lot to person C, who in turn instant-messages them to person D. ECHO can then be asked “who has heard this?” In addition, ECHO can report where an author lacks influence or how a community has behaved.
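
In essence, the “who has heard this?” question is a reachability query over a propagation graph whose edges record each hand-off of the content. The sketch below illustrates that general idea only; it says nothing about how IDOL ECHO is actually implemented.

# A minimal sketch of "who has heard this?" as graph reachability: each edge
# records that a piece of content passed from one person to another (call,
# email, IM, document). General idea only, not Autonomy's implementation.

from collections import deque

# Hypothetical hand-off edges for one piece of content, mirroring the example above.
passed_to = {
    "person_b": ["person_a"],   # B phones A
    "person_a": ["person_c"],   # A relays the ideas to C in the parking lot
    "person_c": ["person_d"],   # C instant-messages D
    "person_d": [],
}

def who_has_heard(origin: str) -> set:
    """Breadth-first walk over the propagation graph from the originator."""
    seen, queue = set(), deque([origin])
    while queue:
        person = queue.popleft()
        for recipient in passed_to.get(person, []):
            if recipient not in seen:
                seen.add(recipient)
                queue.append(recipient)
    return seen

print(sorted(who_has_heard("person_b")))   # ['person_a', 'person_c', 'person_d']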

The December 2006 changes to the U.S. Federal Rules of Civil Procedure have significantly elevated the importance of discovery of electronically stored information (ESI). The new rules have formalized the need for large organizations to be aware of what information they have stored electronically, where that information is located, and in what format. Rule 26(f) of the FRCP, for example, requires the parties to have an early discussion of electronic discovery (e-discovery), roughly seventy days after the complaint is filed. This conference is a key preparatory step for the initial disclosures required by Rule 26(a)(1), which must include ESI and be made within two weeks of that conference, ahead of the court's scheduling conference held about ninety days after the complaint has been filed. While only 8% to 11% of all U.S. litigation is pursued in Federal Courts, the absolute number of cases is still quite high, and many state court systems look to the Federal courts for guidance in setting their own formal or informal rules. Also, given judges' general discomfort with ESI and the evidence derived from it, we expect to see more formal guidance from the courts to streamline the use of ESI going forward.
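
The interplay of these deadlines is easier to see as dates. The arithmetic below is purely illustrative, using the approximate intervals cited above and a hypothetical filing date; actual deadlines are set by the rules and the court, not by this sketch.

# Purely illustrative arithmetic on the approximate FRCP intervals cited above;
# actual deadlines are set by rule and by the court, not by this sketch.

from datetime import date, timedelta

complaint_filed = date(2007, 5, 18)                              # hypothetical filing date
rule_26f_conference = complaint_filed + timedelta(days=70)       # early e-discovery discussion
initial_disclosures = rule_26f_conference + timedelta(days=14)   # Rule 26(a)(1) ESI disclosures
scheduling_conference = complaint_filed + timedelta(days=90)     # court scheduling conference

print("Rule 26(f) conference:  ", rule_26f_conference)
print("Initial ESI disclosures:", initial_disclosures)
print("Scheduling conference:  ", scheduling_conference)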

Thus far most vendors in the e-discovery space have targeted the reactive aspects of e-discovery. Very few vendors, aside from perhaps the archiving vendors, have promoted the need for organizations to take preventative action by installing forensic software. Sageza believes there is quite a bit of truth to the old adage “an ounce of prevention is worth a pound of cure.” Organizations that are heavily involved in litigation, whether as plaintiff or defendant, would do themselves a favor through preparation. Since the discovery phase is far and away the most expensive component of complex litigation, organizations that are on top of their ESI are in a far better position to defend or attack than those that are flailing about trying to determine what is where. Forensics software such as that from Autonomy, Guidance Software, and NetWitness, among others, should be considered as a means to reduce litigation expenses by organizations that are frequent participants in the judicial system.

Industrial Defender Targets Highly Vulnerable SCADA Market

By Lawrence D. Dietz

Industrial Defender, Inc., previously known as Verano, Inc., this week launched its Co-Managed Security Services (CMSS) platform, the industry's first and only outsourced cyber-risk management and monitoring service designed for real-time process control and SCADA environments. With its new CMSS offering, Industrial Defender provides the final element of a completely integrated Cyber Risk Protection Lifecycle, which helps critical infrastructure organizations in the power, water, energy, transportation, and chemical industries assess, mitigate, and now manage risk. Industrial Defender acquired the CMSS offering from e-DMZ Security LLC in September 2006 and has now fully incorporated the outsourced offering into the Industrial Defender platform for a complete cybersecurity lifecycle solution. Offering a complete co-managed monitoring and management program for the perimeter, network, and host environments, Industrial Defender's CMSS team continuously captures global cybersecurity intelligence to provide customers with unique insight into cybersecurity attacks and vulnerabilities that can adversely affect the performance and integrity of process control/SCADA systems and associated networks. The outsourced CMSS model thus promises to enable cost savings, reduce total cost of ownership (TCO), improve overall revenue flow, and free up employees to focus on managing the operational aspects of their business. The Co-Managed Security Service offering is the third and final component of Industrial Defender's new Risk Protection Lifecycle portfolio, joining Risk Assessment (Industrial Defender Consulting Services) and Risk Mitigation (Industrial Defender Technology Suite).

The world of Supervisory Control and Data Acquisition (SCADA) security is among the most complex segments of the security space. The nature of SCADA and the demanding physical environments where these systems function combine to present very significant challenges, both operationally and in terms of cost. As an increasing number of SCADA processes move onto the Internet, concern for their security has grown considerably. Dick Clarke, the former Cybersecurity Czar under President Bush, repeatedly sounded the alarm for heightened SCADA security in his numerous public presentations and in the attention given SCADA in the U.S. National Strategy to Secure Cyberspace released by the White House in February 2003.

Most organizations employing SCADA are in complex businesses. They are mainly manufacturers of chemicals or similar products, processors of petrochemicals, or operators of key segments of the national critical infrastructure such as electrical power production, water distribution, and sewage processing. A managed security services offering with special expertise in SCADA challenges fills a solid niche, and we applaud Industrial Defender’s efforts to extend the range of security services available in this vital segment of our critical infrastructure.


The Sageza Group, Inc.

32108 Alvarado Blvd #354

Union City, CA 94587

510·675·0700 fax 650·649·2302

 

sageza.com

 

Copyright © 2007 The Sageza Group, Inc. May not be duplicated or retransmitted without written permission.