Market Roundup, August 5, 2005
This week EMC announced improvements to its CLARiiON family of midrange storage arrays. Among the changes and additions is UltraPoint Technology, which uses point-to-point connectivity between individual disk drives to increase scalability and reliability, and adds advanced diagnostics to help isolate problems when they arise. EMC also expanded the functionality of CLARiiON software. Virtual LUN Technology allows customers to move volumes within a CLARiiON CX system without disrupting applications, improving performance and capacity utilization. EMC has also upgraded SAN Copy so that CLARiiON arrays can copy data between systems; copies to other CLARiiON and Symmetrix arrays, as well as arrays from IBM, HP, Sun, and HDS, are supported. New CX series models will be available with these capabilities.
The key point about these announcements is that they make the CLARiiON family more like the Symmetrix family. This is impressive for a couple of reasons. One is that while Symmetrix is an original creation of EMC, CLARiiON came to EMC through the purchase of Data General. EMC engineers have clearly done a lot of important work to make these two very different classes of systems function together. While key differences remain, the gap between them continues to narrow, especially as the underlying software, the point of contact for most administrators, continues to converge; training on one system lays the foundation for understanding the other, which surely eases management challenges. The second reason is that no other vendor owns both its high-end and its low-end systems: HP and Sun use OEMed HDS systems, and IBM's DS4000 systems are OEMed from Engenio (formerly LSI Logic). While this hasn't mattered much to users in the past, it becomes strategically significant as the distinction between the midrange and high-end classes continues to blur and the two systems become more alike than different.
The challenge behind what EMC is doing lies in the marketing. Granted, the technology is not simple, but compared to the marketing, the technology is a relatively straightforward exercise. Unlike the recent Symmetrix announcement, which was about scale, scale, and scale, the CLARiiON announcement is built on a collection of subtle, incremental changes that individually may not draw much attention but together represent real growth in platform maturity and capability. Of course, depending on customer need, a subset of the announcements will probably matter most.
The way to solve storage problems such as insufficient space, slowing performance, or backup backlogs used to be fairly straightforward: purchase another array. While this may be a short-term analgesic, however, the long-term benefit is more likely to be found by applying the principles of ILM: putting the right storage in the right place at the appropriate point in the information lifecycle, and having the tools to make that happen. Managing workloads and moving data between LUNs or arrays may be more appropriate, but managers need to take the time to learn what their data is, where it is, what it's doing, and where it needs to go next. CLARiiON customers should take the time to see how these new capabilities might help.
SuSE Linux to Return to Open Source Development
News reports this week indicate that Novell will launch a community-based Linux distribution called openSUSE at the upcoming LinuxWorld conference. The company plans to set up a Web site, opensuse.org, and will allow developers to work with the source code. The move by Novell follows a similar decision by Red Hat, which created the Fedora Project and allowed the community at large to develop a code base. At a certain point, the code is locked down, run through quality assurance testing, and then sold as a product. Novell is expected to follow a similar path, allowing the open source community to contribute code to the developing code base. News reports indicated that Novell will oversee the initial development direction of SuSE Linux but will eventually turn oversight over to a committee.
To date Novell has held a good part of its enterprise code close to the vest, essentially turning an open source product into a more proprietary solution, even if one based on industry standards. As we saw with Sun's refusal to allow Java to be overseen by a public standards body, maintaining that kind of control can provoke a negative response from customers and developers alike. To our minds, the Red Hat (and apparently soon-to-be Novell) approach is a much wiser strategic decision than trying to assert control over the development path of the code base.
Our thinking here is not particularly complex; it's a simple matter of dancin' with who brung ya. Linux's success has been built on a base of developers that can grow as large as the market will bear. Novell has the opportunity to tap into a huge well of development talent that in many cases may produce innovations and solutions Novell engineers had never considered. Tapping into this near-limitless pool of high-powered talent can only speed and extend the development of the code base while adding little internal cost. Not only do the financials make sense, but the intangibles do as well. Allowing the open source community to participate continues a successful development process that has substantial future opportunities in the form of LAMP and other open source projects. Opening SuSE Linux to community development is a smart way to ensure that momentum remains as open source projects continue to transform software development.
IBM and NetApp Debut First Results of New Alliance
This week IBM announced new storage offerings for small and mid-market businesses (SMBs). The IBM TotalStorage N3700, a network-attached storage (NAS) device, is the first product of IBM's alliance with Network Appliance. The N3700 is designed for companies with fewer than 1,000 employees, but IBM believes it can also serve larger corporations whose branch or remote offices have NAS requirements. The N3700 scales to 16TB and supports IP-attached storage as well as iSCSI SAN technology. It also has mirroring capability within the appliance as well as to a remote device for disaster recovery. In addition to the product announcement, NetApp and IBM announced that the companies plan to work more closely together in the virtualization and blade server product segments. In particular, they plan to focus on integrating IBM's storage virtualization technologies with the NetApp V-Series and FAS storage systems. NetApp also indicated it will join IBM's Blade.org initiative as one of the nine founding members.
The partnership is a way for both NetApp and IBM to strengthen their market presence against competitor EMC. NetApp has strength in NAS and IBM in disk and tape. EMC is a strong presence in the NAS and disk markets, and offers tape products through its relationship with ADIC. If IBM and NetApp can form a genuinely symbiotic relationship and leverage each other's strengths in the marketplace, then perhaps they can form a united front against EMC. Of course, some of us wonder whether IBM, being what it is, may purchase NetApp outright if the relationship gets comfortable. This wouldn't necessarily be a bad thing if IBM could combine NetApp's presence among SMBs and its NAS capabilities with IBM's corporate strength.
On the other hand, the ongoing problem with storage is that the industry is just beginning to work out how to sell true end-to-end solutions. In systems, having desktops, servers, and storage (meaning disk arrays) is important, but end-to-end storage solutions involving arrays, tape, NAS, and SAN don't really exist yet. This is because most users don't have data architectures the way they have server architectures, and they tend to purchase tactical answers rather than sort out strategic solutions. However, companies are beginning to realize that business continuity and disaster recovery are significantly harder and more expensive if all storage everywhere is treated equally, so they're working with vendors to establish data architectures and build end-to-end solutions that fit information lifecycle needs rather than serve as parking garages for data bits. In a world where customers purchase solutions for information's sake, the NetApp and IBM alliance looks genuinely strategic.
Car makers and security experts are expressing concerns that a new type of computing device might be vulnerable to hackers or viruses: the car. Both groups said in published reports that the issue is being closely watched, even though no known infections have occurred to date. New cars invariably contain some level of computerization, at a minimum for monitoring emissions and engine performance. Other models have more sophisticated computing capabilities that adjust ride settings, power delivery to the wheels, and the like. Also, many cars have on-board navigation systems and entertainment capabilities. Experts say that the Bluetooth technology that allows consumers to connect different devices to the car's computing system could be the bridge by which a virus is transmitted. Bluetooth was the vector used to spread the Cabir cell phone worm, which reached more than twenty countries.
This should really come as no surprise, and for the moment it may be something to chuckle about, but the future may bring events that are much less humorous. As automobiles become ever more sophisticated and computerized, they begin to resemble modern aircraft, which use computers not only for navigation but also for maintaining aerodynamic stability and, in some cases, flying themselves. A number of U.S. military aircraft are totally dependent on these computers to fly: without their constant manipulation of wing surfaces, planes like the F-16 would become very expensive (and dangerous) rocks falling from the sky.
While such failures in an automotive computing system would usually be far less catastrophic, the implications for safety and monitoring systems across the board become much more dire as time goes on and connectivity becomes more ubiquitous. For example, infections or disruptions of hospital computer systems that run delicate monitoring machinery or surgical procedures could cause substantial problems, since returning to doing things by hand may no longer be possible in a society that is increasingly dependent on such systems to function. From our point of view, the issue of computer infections and damage should be viewed the way epidemiologists view the potential for a worldwide pandemic: since air travel allows any person to go anywhere on the globe, the vectors of potential infection are limitless. Substitute connectivity for air travel and the same holds true for computer viral infections and the possibility of “pandemics.” Given our continued and increasing dependence on IT infrastructure for the simplest of transactions and day-to-day life, such pandemics could be highly disruptive if not destructive. Are we ready?