Citrix Announces Citrix Desktop Server
Citrix Systems this week announced Citrix Desktop Server
1.0, which was previously known by code name Project Trinity. Desktop Server is
positioned for organizations wanting to improve desktop security, performance,
and reliability for their employees while also enabling their IT organizations
to more easily deliver, manage, and maintain desktops. Citrix Desktop Server is
the first purpose-built solution with “DynamicDelivery,” a technology that
automatically selects the right type of virtual desktop on demand, giving IT
administrators the flexibility to deliver the appropriate desktop for a given
user at lower cost while providing each user with a personalized computing
experience. It also improves IT flexibility and manageability through a unified
desktop management console that manages all desktops in a common way,
regardless of how they are installed in the datacenter. The solution delivers
Windows desktops from the datacenter as a secure on-demand service that
supports popular methods for installing desktop operating systems in a
datacenter, including virtual machine environments, blade PCs, and Windows
Terminal Services. Employees who occasionally work from an alternate location
such as a home office will benefit since the desktop is delivered as an on-demand
service and is instantly on, always available, and accessible from any suitably
equipped location. IT can also perform proactive performance tuning on these
virtual desktops to provide additional CPU allocation, memory, or storage,
according to changing business and end-user needs. Desktop Server 1.0 is
scheduled for availability in Q2 2007.
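Citrix has not published the selection logic behind DynamicDelivery, but a minimal sketch of the kind of policy-driven choice the announcement implies might look like the following. The profile fields, backend names, and rules are our own illustrative assumptions, not Citrix's implementation or API.

```python
# Illustrative sketch only: the desktop types and selection policy below are
# assumptions meant to convey the idea of on-demand desktop selection, not
# Citrix's actual DynamicDelivery implementation or API.
from dataclasses import dataclass


@dataclass
class UserProfile:
    name: str
    needs_dedicated_hardware: bool  # e.g., CAD or media work
    is_task_worker: bool            # light, standardized workload
    works_remotely: bool


def select_desktop_backend(user: UserProfile) -> str:
    """Pick a hosting method for the user's virtual desktop."""
    if user.needs_dedicated_hardware:
        return "blade PC"                   # dedicated hardware per user
    if user.is_task_worker:
        return "Terminal Services session"  # many users per server, lowest cost
    return "virtual machine"                # personalized desktop per user


if __name__ == "__main__":
    for u in (UserProfile("analyst", False, True, True),
              UserProfile("engineer", True, False, False),
              UserProfile("manager", False, False, True)):
        print(u.name, "->", select_desktop_backend(u))
```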
To sum it up in a phrase, this is pretty darn cool. Citrix
has a long history in thin client and remote application technologies, and to
some this announcement may just be more of the same. But for the savvy IT
professional, this solution offers much more. Anyone who has maintained thick
client Windows desktops in a commercial setting knows the tradeoffs of local
performance and control versus manageability. In many scenarios, the
operational and maintenance costs sap the organization's ability to remain
competitive and even to retain qualified IT staff. Through this solution,
Citrix has broken new ground by delivering valuable Windows applications to a variety
of alternative desktop form factors from a variety of server form factors. By
supporting virtualized servers and blades as well as traditional Windows
Terminal Services, Citrix can deliver server-based desktop applications from
current as well as leading edge server form factors that are gaining traction
in the enterprise. In addition, the support for virtualized server environments
makes this offering all the more appealing to IT as it grants even greater
flexibility in how application servers are deployed. However, the cleverness
of this solution goes one step further with its implicit follow-the-user
mentality.
Consider the growing ranks of part-time/full-time home
office/roaming employees. Desktop Server allows these employees the flexibility
to work from home, the office, a suitably equipped hotel room, or almost anywhere
sufficient, securable connectivity is available. The access device
could be a Windows-based terminal at an airport lounge, a desktop system at
home, or a laptop in the hotel room. In each case, the applications, data, and
user environment remain constant and under centralized maintenance, backup,
and policy controls, and they are not dependent on a specific thin-client access
device, as was the case with many past solutions. Further, for older desktops
that cannot be upgraded to current or future releases of Windows, a hybrid
model in which the thick client accesses remote applications recovers value
from past capital expenditures by extending the machines' useful life while
shifting much of the operational headache of locally installed applications and
data to the server.
The thin client is not new, and we have been and remain strong supporters of the approach in the right scenario. With today's announcement, Citrix has, in our opinion, raised the bar in remote application delivery and is offering a new level of deployment flexibility on both the server and client side. While this solution is not a universal fix-all for the management and deployment issues of Windows applications, we believe its strategic value will be recognized by IT professionals charged with maintaining such environments. The notion of a consistent user experience that is flexibly deployed, centrally managed, and seamlessly follows the user has long seemed beyond the reach of mere IT mortals. However, with Citrix Desktop Server, we believe this once-elusive pipe dream has become a reality that may be within the reach of IT professionals who would much rather deliver applications and add value to their organizations than toil away their days patching, upgrading, and deep-sixing a fleet of Windows-based desktops.
AT&T Moves into Web-Managed Security Services: More Moisture in the Cloud?
AT&T Inc. has announced the availability of AT&T Web
Security, a network-based security service that provides advanced Web content
and instant-messaging filtering. Available to companies in the United States
and around the world, AT&T Web Security is the newest addition to AT&T’s
enterprise security portfolio, which is focused on providing companies with
security services “in the cloud” to help remove the dependency on hardware and
software while supporting a “defense in depth” architecture with security features
built into different network layers and supporting processes. AT&T Web
Security provides companies with network-based capabilities to perform
Web-content filtering and screening for malware and spyware, and IM filtering
for malware, without dedicated hardware or software requirements. AT&T Web
Security is designed to monitor all unencrypted Web traffic, including HTTP
requests and replies, as well as IM traffic. It can operate
independently or become fully integrated with AT&T’s other managed security
solutions.
AT&T Web Security features include monitoring and
reporting of Web traffic at the network level; customers may also choose to
monitor individual end users, which requires that the customer install
software on the user’s PC. Other features include a Web-based portal for
administration and reporting, including customized browser alert capabilities
and automated reports, near-real-time scanning of requested Web sites and files
to ensure that even trusted locations and files are monitored,
and IM-filtering capabilities with storage. AT&T delivers a suite of
Security and Business Continuity Services to help assess vulnerabilities,
help protect infrastructure, detect attacks, respond to suspicious activities
and events, and design enterprise networking environments for nonstop
operations.
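AT&T has not disclosed the internals of its filtering engine. Purely as an illustration of what network-level filtering of unencrypted Web requests involves, a toy sketch might look like the following; the category database, hostnames, and blocking policy are invented for the example and do not reflect AT&T's service.

```python
# Rough illustration of network-level web filtering; the blocklist, categories,
# and policy here are invented for the example and do not reflect AT&T's service.
from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"malware", "spyware"}

# A real service would consult a continuously updated reputation database;
# this static dictionary stands in for that lookup.
CATEGORY_DB = {
    "known-bad.example.com": "malware",
    "tracker.example.net": "spyware",
    "news.example.org": "news",
}


def filter_request(url: str):
    """Return (allowed, reason) for an unencrypted HTTP request."""
    host = urlparse(url).hostname or ""
    category = CATEGORY_DB.get(host, "uncategorized")
    if category in BLOCKED_CATEGORIES:
        return False, f"blocked: host categorized as {category}"
    return True, f"allowed: category {category}"


if __name__ == "__main__":
    for u in ("http://known-bad.example.com/payload.exe",
              "http://news.example.org/today"):
        print(u, "->", filter_request(u))
```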
While the notion of web security as a managed service may
not be new, AT&T has always been a force in the marketplace. Sageza
believes that managed security services have been less popular than they deserve to be.
Organizations that don’t regard information security as a core part of their
business should be especially attracted to services that handle
functions that would otherwise distract the company and perhaps consume scarce
resources. It is interesting to note that if organizations want to monitor
particular end users, software (likely an agent) would have to be installed
on those target machines. We regard this as a limitation. However, the Web
portal for administration and reporting and the ability to monitor and filter
IM are countervailing strong points.
Over time we believe managed security services will evolve out of much of today's product-bound security functionality. Detecting malicious code and preventing it from harming the IT infrastructure are especially good functions to be addressed this way. It appears to us that managed security services will ultimately look like software as a service (SaaS) in other sectors of the IT marketplace.
Sun Microsystems Donates Storage Technologies to OpenSolaris Community
Sun Microsystems has announced it is donating storage
technologies for developers to the OpenSolaris
community. Sun and its partners are contributing Point-in-Time Copy data
service and Remote Mirror data service; NFS v4.1 (parallel NFS); YANFS
(formerly WebNFS); iSCSI
device drivers; OSD device drivers and related software; and the QLogic Fibre Channel HBA driver. OpenSolaris already provides a native CIFS client, UFS,
Solaris Volume Manager, Traffic Manager (multipathing
I/O support), a Fibre Channel framework with drivers,
and traditional target drivers. Sun also indicated that it has plans to open-source
the following technologies in the future: Sun StorageTek
QFS shared file system, Sun StorageTek Storage
Archive Manager, Sun StorageTek 5800 client
interfaces and simulator/server, and other storage-related technologies. The
company also announced that the previously Sun-only administration features of the
Solaris ZFS file system would be donated to OpenSolaris.
The features donated include ZFS Clone Promotion, which allows storage users to
turn a clone back into the active file system; Recursive Snapshots; Double
Parity RAIDZ, which protects data if up to two devices fail; and Hot Spares for
ZFS Storage Pool Devices. In addition, Sun announced the formation of an open-source
storage development community on OpenSolaris.org to help give developers a
quicker way to deploy data-intensive applications. The community includes
developers adding data management functionality and customizing the storage
stack for new applications and platforms, system administrators implementing
Solaris technology in data centers, educators and
students researching data management in universities, and new users exploring
the technology.
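For readers less familiar with the donated data services, the essence of a point-in-time copy (and of a ZFS snapshot) is copy-on-write: the snapshot preserves references to the blocks as they existed at a moment in time, and only data that is later overwritten diverges between the live view and the snapshot view. The toy sketch below illustrates that idea only; it is not Sun's implementation and uses made-up names throughout.

```python
# Pedagogical sketch of copy-on-write point-in-time snapshots, in the spirit of
# the donated Point-in-Time Copy service and ZFS snapshots; this is not Sun's
# implementation, just an illustration of the idea.
class CowVolume:
    """A toy block 'volume' with copy-on-write snapshots."""

    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block number -> data
        self.snapshots = {}                    # snapshot name -> frozen block map

    def snapshot(self, name):
        # A snapshot copies only the block map (references), not the data
        # itself; unchanged blocks are shared between live and snapshot views.
        self.snapshots[name] = dict(self.blocks)

    def write(self, blockno, data):
        self.blocks[blockno] = data            # the live view changes...

    def read_snapshot(self, name, blockno):
        return self.snapshots[name][blockno]   # ...the snapshot view does not


if __name__ == "__main__":
    vol = CowVolume(["a", "b", "c"])
    vol.snapshot("before-upgrade")
    vol.write(1, "b-modified")
    print(vol.blocks[1])                           # b-modified (live data)
    print(vol.read_snapshot("before-upgrade", 1))  # b (point-in-time view)
```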
We find this announcement interesting for a few reasons. First, it is more evidence of Sun's desire to be considered a leading proponent of open-source solutions. Second, it illustrates the priceless nature of key networking and storage technologies. Third, it implicitly states Sun's view and direction for the future of entry-level storage.
Sun continues down a path to change the rules of the systems vendor game as much as it can. Increasingly we have seen the company embrace standard architectures, components, and alternative sources for technology that complement its legacy of proprietary, higher-end solutions. Overall, we see this as a smart move for a company that can no longer lay claim to the market supremacy it once enjoyed. Further, it recognizes that the capital and operational cost containment pressures organizations have faced in recent years affect which solutions are purchased and how. One aspect of alternative sourcing is the open source community and industry-standard IT commodity building blocks. Sun has been moderately successful in delivering entry-level and mid-market solutions based upon this approach for servers, and it is clear that the firm would like to see the same happen with storage.
As we have articulated in the past, there are certain technologies that ultimately become priceless, in the sense that they must be ubiquitously available to achieve their maximum potential and that they must be without price/cost in order to become ubiquitous. Java was one early priceless technology, as was the IP protocol stack. Today, in the context of networked storage, this could also be said for NFS, YANFS, and iSCSI, as well as certain basic storage management functions such as point-in-time copy and remote mirroring. By widely distributing these technologies, the hope and potential lies in their becoming expected levels of capability in all storage solutions no matter the product position or price point. However, this does not preclude complementary value-added software or other technology; rather, it promotes the notion of the key value of these capabilities.
The StorageTek acquisition by Sun was viewed with skepticism by some, confusion by others, and a “let’s see what they ultimately do with it” attitude by many. While the totality of StorageTek technology is not included in today’s announcement, key interfaces and core management technology are. From this, we can surmise that Sun wants to drive storage directions in this regard and that it considers these technological nuggets as ways to bolster its position in the larger storage discussion. At the lower end of the market, storage and its backup and archiving are for many small organizations at best a tactical endeavor, and at worst something that is considered expensive and a good topic on which to assume the ostrich position. However, reality dictates that storage should be a strategic undertaking, even for the smallest of operations. Although Sun stated that a driving factor in this open source endeavor was to get the community to develop less expensive storage solutions using off-the-shelf hardware and open-source software, it may turn out that Sun really wants the lower echelon of the marketplace to start thinking about storage on par with how it thinks about servers. As such, Sun would be able to engage these customers with a strategic message about storage, and ultimately grow the potential customer set into higher value-added storage purchasers who would have an affinity for Sun’s other storage offerings. This would be consistent with its approach to OpenSolaris: get folks using the platform and recognizing its value, and then sell complementary technologies and services to the now-faithful customer.
Overall, we are impressed by Sun’s tenacity in its open source playbook, and believe that some segments of the market will respond positively to these announcements. Although the lines of distinction between storage and servers are blurring, storage is not a one-size-fits-all solution. Deploying, operating, and maintaining medium-sized and large storage solutions takes a skill set and technology base that is not easily addressed by undifferentiated, low-cost, off-the-shelf components. We do not believe these announcements will have any significant impact on that segment of the market; however, for the lower end of the market it could be a different matter. Just as we have witnessed some interesting solutions woven out of open source plus commodity hardware in the server space, we would not be surprised to see similar undertakings in storage that target tertiary needs or the lowest entry point in the storage market. Although it would be easy to dismiss the net impact of this announcement as posturing by Sun, we think it would be premature to do so. Open source has had a significant impact on how software is viewed, even by large enterprises, and we think it would be foolhardy to dismiss open source’s potential impact on storage technology, even if it just gets smaller organizations thinking more strategically about storage.
Application Recovery: Stratus Addresses Advance Planning
Stratus Technologies, Inc. has announced a new set of
application and disaster recovery solutions to protect and rapidly recover
business-critical applications from outages, natural disasters, and other
disruptive events. Under the terms of a new partner agreement between
Double-Take Software and Stratus, these new solutions may be purchased with or
without continuously available Stratus ftServer
systems and currently include Double-Take real-time data replication and
failover, in the Windows Advanced Server Edition for Windows Server 2003
Enterprise Edition or the Virtual Systems Edition for VMware and
Microsoft virtualization products; an add-on software module for Windows Advanced
Server Edition that replicates the operating system, applications, and data to an
image server for event-driven installation on a recovery server; and industry
best-in-class service and support directly from Stratus Technologies. Stratus'
new application and disaster recovery solutions powered by Double-Take Software
are potentially a potent combination offering several benefits, including
continuous replication, automatic application failover, data recovery and
migration, compliance archiving, snapshot management, and centralized backup
services for virtual environments.
Double-Take, from Double-Take Software, combines real-time
replication and failover technologies to ensure users remain online in case of
a failure. Double-Take has no distance limitations, thereby providing failover
protection during local and regional failures. Working with Stratus to help
customers achieve and sustain the availability of information systems,
Double-Take Software combines multi-level intelligent compression, scheduling,
and bandwidth throttling to ensure the efficient replication of data across
standard LAN/WAN connections. Because it is hardware- and application-agnostic,
Double-Take enables the customer to leverage existing investments in
systems and connectivity while still achieving disaster recovery and continuity
goals. In addition to application and disaster recovery solutions, Stratus
offers other complementary services including application-recovery
configuration and failover/failback testing services;
hosted managed services for data replication, failover, and data vaulting at
Stratus-authorized data centers; and Web-based training for Double-Take for
Windows Advanced Server Edition.
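Double-Take's replication engine is proprietary, but the general mechanism the announcement describes, queuing changed data, compressing it, and throttling the send rate across a LAN/WAN link, can be sketched in a few lines. Everything below is an illustrative assumption about the general technique rather than the product's actual design.

```python
# Rough sketch of asynchronous replication with compression and bandwidth
# throttling; this illustrates the general mechanism only and is not
# Double-Take's implementation.
import time
import zlib
from collections import deque


class ThrottledReplicator:
    def __init__(self, max_bytes_per_sec: int):
        self.max_bytes_per_sec = max_bytes_per_sec
        self.queue = deque()  # pending change records (compressed byte strings)

    def record_change(self, data: bytes):
        # Compress before queuing, as replication products commonly do, to
        # reduce the bandwidth needed on the WAN link.
        self.queue.append(zlib.compress(data))

    def drain(self, send):
        # Send queued changes to the target, sleeping after each chunk so the
        # average rate stays under the configured cap.
        while self.queue:
            chunk = self.queue.popleft()
            send(chunk)
            time.sleep(len(chunk) / self.max_bytes_per_sec)


if __name__ == "__main__":
    sent = []
    rep = ThrottledReplicator(max_bytes_per_sec=1_000_000)
    rep.record_change(b"block 17 rewritten" * 100)
    rep.drain(sent.append)
    print(f"replicated {len(sent)} chunk(s), {sum(map(len, sent))} bytes on the wire")
```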
Most organizations are unable to deal with the full spectrum of business
interruptions. Many have “plans” incorporating remote hot or cold sites and
some even rehearse their plans. However, Sageza believes that organizations in
general have not paid sufficient attention to the spectrum of disasters. While
total disasters such as the destruction of key facilities may be planned for,
partial disasters, especially the “one-day disaster” where the event is over in
a day, are often left to chance.
We believe that combinations such as the Stratus/Double-Take offering profiled above can give users a way to address aspects of their business recovery plans that may not have received the appropriate degree of attention. Business recovery is the name of the game: keep the organization functioning with minimal negative impact while restoring the IT infrastructure to full capacity. Failover capabilities and geographic mirroring may be significant areas for organizations to address to ensure that they can continue to operate their critical functions regardless of the type or severity of the unplanned incident that befalls them.