Category Archives: Data Center

SDN and the Cloud – quick thought

If SDN and its “sister,” NFV, actually live up to the hype that has been circulating, could we reach a day when cloud infrastructure providers are no longer “independently” purchased by the data center manager (or CMO, or COO, or whatever flavor of “business-driven cloud consumer” you choose)?  Instead, could we see ecosystems in which the SDN management software has a direct link to specific cloud providers (e.g., one for compute, another for storage)?  Some have called that “real-time infrastructure.”  My question, though, is this: could each SDN ecosystem concurrently offer an optimized set of APIs so that the SDN management software can dynamically provision and de-provision pre-determined, contractually bound, cloud-sourced resources in real time, from a pre-selected cloud provider in that ecosystem?
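As a thought experiment, that broker role can be sketched in a few lines. Everything here (the class names, the lease scheme, the provider list) is hypothetical; no real SDN product’s API is implied.

```python
# Hypothetical sketch: an SDN management layer provisioning capacity from a
# pre-vetted ecosystem of cloud providers. All names are illustrative
# assumptions, not any vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    service: str          # e.g. "compute" or "storage"
    vetted: bool          # contractually bound member of the ecosystem

@dataclass
class SDNManager:
    ecosystem: list
    leases: dict = field(default_factory=dict)

    def provision(self, service: str, units: int) -> str:
        # Pick any vetted provider for the service; the data center manager
        # never needs to know which one was chosen.
        provider = next(p for p in self.ecosystem
                        if p.vetted and p.service == service)
        lease_id = f"{provider.name}-{service}-{len(self.leases)}"
        self.leases[lease_id] = (provider, units)
        return lease_id

    def deprovision(self, lease_id: str) -> None:
        # Release the contractually bound resources when no longer needed.
        self.leases.pop(lease_id)

mgr = SDNManager([Provider("CloudA", "compute", True),
                  Provider("CloudB", "storage", True)])
lease = mgr.provision("compute", units=10)  # resources appear on demand
mgr.deprovision(lease)                      # ...and are released when idle
```

The point of the sketch is the indirection: the consumer asks for “compute,” not for “CloudA,” which is exactly what would make the provider interchangeable within a vetted ecosystem.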

At that point, the data center manager really doesn’t care who the specific cloud provider is, assuming that the ecosystem has properly vetted that cloud provider.  If that is possible, then is it possible that one of the very large global infrastructure providers would own both ends (the SDN Management environment AND the cloud infrastructure services)?  Do IaaS cloud providers really then focus their attention on SDN developers, rather than data center managers?


The Cloud rains on a brittle market

Market trends gain strength from the correctness of their promises. Is it cheaper, faster, more secure? Dilute the promises and a forceful market becomes brittle. Crack a brittle market and revenue disappears.

By the way, whatever happened to Netbooks? Four years ago Netbooks were going to save the PC industry. Now they are gone, victims of the brittle market. IT departments loved Netbooks as their convenient answer to popular tablets. In a brittle market, Netbooks were swamped by BYOD.

The market for enterprise computing equipment versus cloud services is in the brittle stage. The promises of the cloud gain strength while counter forces are losing at the flanks. This post is not about the cloud. This is about the market. Promises of lower cost, ease of use, scalable power are strengthening. The opposition is brittle and when it breaks the flow of dollars will shift dramatically.

The dollars in question are those spent by corporations on servers, storage and networking – the data center. Corporations buy name brand equipment. Cloud providers develop their own equipment and save money by doing so. They also change the dynamic of revenue growth and profit generation in the IT market. A shift from corporate computing to the cloud means more than a redirection of dollars. It means a fundamental shift as cloud service providers challenge OEM equipment vendors in the development of new technology.

The cloud does to IT management what robots did to manufacturing, so naturally there is internal management resistance. Management’s main and most plausible objection has been security.  They also point to the increased cost of data connections. These concerns are crucial, but one day security will be solved, and the cost of data transport will continue to decline.  As that happens, the brittle market will crack.

What happens then? Well, consider that Google, like all of the large-scale cloud providers, does not use the state-of-the-art, high-end servers of the type you see at Interop. They optimize their cost, power consumption and space utilization with a vast array of commodity systems. On top of that, the Google File System competes directly with the value delivered by classic storage system vendors, demonstrating that cloud providers dilute the need for major manufacturers.

Imagine a world where pick up/ drop off laundry service was incredibly cheap and effective. Would you own a washer and dryer?

Like Westcon Group, Carousel Industries is Bullish on Virtualization – and Teamwork

As a premier global distributor, it’s our responsibility to work closely with our partners to ensure the greatest success possible for their customers.  The recent efforts of Westcon and Carousel Industries to virtualize our own data centers have enabled our companies to give customers the perspective they need to fully realize the benefits of virtualization and cloud computing.
To elaborate on this great success story, Kevin Gulley, Editor of Carousel Connect, discusses these efforts in further detail.

Like Westcon Group, Carousel Industries is Bullish on Virtualization – and Teamwork

At Carousel Industries, we understand the value of teamwork. That’s why we partner with the best and brightest IT vendors in the industry, including Westcon Group. So when they asked us to pull together a guest blog post for the Westcon blog, we were thrilled.  Westcon Group shares our passion for using technology to drive business value, whether it’s through unified communications, data center solutions, infrastructure or security.

And like Westcon, Carousel also “eats its own dog food” when it comes to technology by finding ways to use it effectively on our own internal networks. One recent success story we share is virtualization technology.

To keep up with our rapidly growing requirements for data center resources, in early 2010 Carousel set out to execute on a server virtualization strategy in our Exeter, R.I. headquarters. The idea was to provide greater application functionality and flexibility, especially for mobile users, while giving us some breathing room for expansion and lowering power and cooling costs.

The results have exceeded expectations.  We replaced 100 physical servers with just five machines for peak periods and three for nighttime hours. All told, we’ve virtualized more than two-thirds of our enterprise server infrastructure thus far. As a result, we’re saving lots of money on power, cooling, IT support and operational costs.
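As a rough illustration of where the power savings come from, here is a back-of-envelope calculation for a 100-to-5 consolidation. The wattage and electricity-rate figures are illustrative assumptions, not Carousel’s actual numbers.

```python
# Back-of-envelope power savings from consolidating 100 physical servers
# down to ~5. All constants are assumptions for illustration only.
WATTS_PER_SERVER = 400    # assumed average draw per physical server
RATE_PER_KWH = 0.12       # assumed electricity cost, USD per kWh
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(servers: int) -> float:
    """Yearly electricity cost for a given number of always-on servers."""
    kwh = servers * WATTS_PER_SERVER / 1000 * HOURS_PER_YEAR
    return kwh * RATE_PER_KWH

before = annual_power_cost(100)
after = annual_power_cost(5)
print(f"before: ${before:,.0f}/yr  after: ${after:,.0f}/yr  "
      f"saved: ${before - after:,.0f}/yr")
```

Under these assumed figures the electricity line item alone drops by roughly an order of magnitude, before counting the matching reduction in cooling load.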

Westcon Group has gone even further, as it’s one of the few companies that has virtualized 100% of its server infrastructure. And it is likewise getting some big benefits, as outlined in this story from InformationWeek, which quotes Bill Hurley, CTO and Executive VP of Westcon Group:

The new virtualized environment requires fewer system administrators to manage, saving on managed services expenses, lowered the cost of data center consolidation, and lowered electricity consumption in the new digs. Hurley said getting to 100% virtualized has saved Westcon $1.1 million over a two-year period. He now runs 350 virtual servers on the 22 UCS blades, with some blades hosting only 3-4 virtual machines and some hosting 25-30.

Virtualization is about more than cost savings, as Bill highlighted during a recent CIO Tech Talk he did with us. The technology lays the foundation for lots of other applications that help Carousel and Westcon Group drive more business value, including unified communications, mobility solutions, security and, in particular, cloud computing.

We learned quite a bit during the course of our virtualization project – so much so that we wrote a white paper called “Best Practices in Data Center Virtualization” to share our experiences. One of those best practices is to make good use of expert resources and technology partners.

If you need help with your virtualization project, you’d do well to leverage the expertise and resources of a partner like Westcon Group. For years, Carousel has benefited from the experienced team at Westcon; we encourage you to do the same.

Hot, Not Hot, and Be On The Lookout For….

Hot
– Flat network – already discussed in earlier posts, but continues to be an early, “going to get hotter” topic. Each of the vendors has recently made, or is about to make, significant announcements about converged Ethernet/fabric/2-tier/1-tier offerings.  Driven in large part by the need for a lower-latency data center network optimized for virtualization, the message is: the network is the data center, and the data center is the network.

– Data Center to Data Center networking – really a subset of the above, but there are nuances such as WAN Acceleration technologies specifically designed for DC to DC as opposed to DC to Campus. This nuance will become more and more of a marketing issue for those better positioned as opposed to those perhaps not really in that DC-to-DC space.

– SBCs – starting to get recognition of their importance relative to their role in UC. They can be considered the switch/firewall equivalent for VoIP/UC. As companies and the public overall migrate to VoIP and SIP, SBCs become critical. Expect steady growth, with inevitable over-hype by the media once they understand the technology in the next few months.

– Cloud failures – the stories will remain hot for a while. In addition to service failure there will be offering failures – established vendors pulling out of initial cloud forays.

Not Hot

– Cloud success stories – this will take a backseat for a while, but cloud successes will definitely continue nonetheless.

Be on the lookout for:

– Virtualization security – as vendors continue to realize the exposure that virtualization presents, more and more messaging and positioning will appear. The exposure is two-fold. First, the obvious: a new layer in the stack introduces new opportunities for bad people to do bad things. But second, and perhaps not as obvious, is the governance associated with the potential consolidation of previously physically separate servers/applications/data onto one single physical server. The IT group doing the consolidation may not recognize the compliance risks they are introducing.  And potentially even more interesting, the hypervisor does not yet have a mechanism to process business rules associated with the company’s compliance or regulatory policies.
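To make the governance point concrete, here is a hypothetical placement check of the kind a hypervisor cannot do today. The zone labels and the incompatibility rules are invented for illustration; real compliance regimes would define their own.

```python
# Illustrative sketch of the governance gap: a placement check that refuses
# to consolidate workloads from different compliance zones onto one physical
# host. Zone names and rules are hypothetical; no hypervisor API is modeled.

# compliance zone assigned to each workload by policy (assumed labels)
ZONES = {"payroll-db": "pci", "web-frontend": "dmz", "hr-app": "pii"}

# pairs of zones that must never share a physical host (assumed policy)
INCOMPATIBLE = {frozenset({"pci", "dmz"}), frozenset({"pii", "dmz"})}

def can_place(vm: str, host_vms: list) -> bool:
    """True if vm may be consolidated onto a host already running host_vms."""
    for other in host_vms:
        if frozenset({ZONES[vm], ZONES[other]}) in INCOMPATIBLE:
            return False
    return True

print(can_place("payroll-db", ["hr-app"]))        # allowed under these rules
print(can_place("payroll-db", ["web-frontend"]))  # pci next to dmz: blocked
```

The IT group doing a consolidation would need something like this check in the provisioning path, because nothing at the hypervisor layer enforces it for them.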

– PoE – probably not the most exciting discussion point, but dedicated PoE vendors have technologies coming out that can help support the powering of all the new video demand on the network. This is especially important for the growth of outdoor video/signage (think stadiums and traffic). Many of the vendors embed PoE, but some of it is “just enough” and really does not provide the flexibility companies will need as they grow their video usage.

– Tablet videoconferencing – there is definitely the potential for a schism to appear; I think it is already appearing. We could end up with high-end videoconferencing rooms and many low-end videoconferencing endpoints being tablets. The debate over video quality is over: pretty much every device now has HD capabilities. With the growth of tablets, whether iPads or Android devices, the consumerization of IT is forging some new paths in video and UC.

What is a Private Cloud ???

I recognize that the media has moved the term “cloud computing” into an over-hyped state.  But, as a CIO, I also know that there is real value in utilizing the cloud – the “Public” Cloud.  What has me concerned is that the media is now calling everything “the cloud,” breaking it into public cloud services and private cloud services, and I think I am missing the point with “Private Clouds.”

The categories of “cloud services” are, in simple terms:

1. Infrastructure as a Service (IaaS) – the “storage as a service” or “compute as a service” type of offering.

2. Platform as a Service (PaaS) – the Amazon EC2 or Microsoft Azure type of offering.

3. Software as a Service (SaaS) – applications delivered as hosted, subscription-based services.

One of the most appealing aspects of the cloud is that the cloud concept is based on a “pay by the drink” model.  You only pay for what you use.  When you’re not using it, you don’t pay – like a utility.
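A minimal sketch of that utility model, with an assumed hourly rate (not any provider’s actual pricing): the bill tracks consumption, and idle time costs nothing.

```python
# "Pay by the drink": usage is metered per hour and billed only for hours
# actually consumed. The rate is an assumption for illustration.
RATE_PER_VM_HOUR = 0.10   # assumed USD per VM-hour

def monthly_bill(usage_hours: list) -> float:
    """usage_hours: VM-hours consumed on each day of the month."""
    return sum(usage_hours) * RATE_PER_VM_HOUR

# a workload that runs 8 hours on each of 22 working days and is
# switched off on the remaining 8 days of the month
weekday_usage = [8] * 22 + [0] * 8
print(monthly_bill(weekday_usage))  # charged only for the 176 hours used
```

Owning the same capacity would cost the same whether the workload ran or sat idle; that asymmetry is the whole appeal, and it is exactly what a CIO would have to replicate internally to offer a true private cloud.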

But this is where the benefit of the private cloud seems to break down.  It breaks down on two levels.  First, as a CIO, do I have, or want to invest in, the capability to provide my enterprise with a pay-by-the-drink model and the associated billing functionality?  Second, even if I had the capability, do I really want that as the model for my enterprise IT service?

The above presumes that when one talks about a private cloud they are not just talking about virtualization.  Virtualization is a great opportunity to more effectively and efficiently manage the data center.  Westcon’s data center is 100% virtualized.  We are a big proponent and find great value in virtualization.  And, the underlying principle that accelerates cloud offerings really is virtualization.  But, by definition a private cloud is more than just a virtualized data center.  The CIO delivering a private cloud has to provide the abovementioned cloud services while doing so with a pay-by-the-drink billing capability, competitively priced.

There has to be more.  For example, even if tomorrow the CIO made IaaS/PaaS/SaaS offerings available to his or her business units with a pay-by-the-drink usage tracking and billing capability, are the internal business units prepared to take on the responsibilities associated with consuming such services?  I know it’s been very fashionable to question the value of IT, but the truth of the matter is that every well-managed firm utilizes IT to compete more effectively.  Can the CIO compete with the public cloud offering on price and still provide the competitive value inherent within a business-process-savvy internal IT organization?  Few CIOs can compete with Google or Microsoft on price.  Therefore the CIO is left with monetizing the infrastructure sitting in the enterprise’s data center.  And the CIO must either monetize the business process services inherent within IT or dismantle those services.  This will not create value for the enterprise.  And I doubt the CFO wants to hear about all the capital infrastructure write-offs the CIO would need to incur to become price competitive.

There is no doubt that the public cloud can create value for the CIO and the enterprise.  But it requires proper planning, and its value in the short term is incremental.  But the concept of the private cloud is different.  It requires a substantial upheaval within the IT organization as well as within any business unit that relies on the IT organization.  It is unclear to me where the cost/benefit is within that internal upheaval.

Then again, if the private cloud is really just virtualization, then let’s just call it virtualization, and reinforce the value of virtualization’s benefits.

The World is Flat – And the Data Center Network?

Thomas Friedman, in his book “The World Is Flat: A Brief History of the Twenty-First Century,” analyzes how the world (in terms of commerce) became a level playing field as a result of globalization.

Is the data center network becoming flat?

Businesses’ reliance on IT to achieve more with less has never been greater. The flexibility and scalability of a fully virtualized or cloud data center will play a key role for IT organizations in their quest to keep up with the demands placed on them by the CXO.

Achieving a fully scaled-out, dynamic virtual data center (where applications and virtual servers can move seamlessly to other hosts) and a converged network (where all data center traffic, be it storage, messaging, or voice, moves onto a single network) is not possible with the current multi-tiered network.

The data center network is the critical enabler of all services delivered from the data center.  Many data center networks in operation today were designed and architected to support a multi-tier network.

These setups were designed for traffic patterns that predate virtualization. They are not optimal for today’s brave new world of server consolidation, virtual machines, cloud computing, and 10 Gigabit switches.

The multi-tier network was created as a workaround for the limitations of Spanning Tree Protocol (STP).

The main goal of STP was to give us a loop-free network. To achieve this, STP makes sure that there is only a single active path to each network device.  STP did achieve its goal, but not without introducing limitations. Some of these limitations (listed below) contribute to the roadblocks that need to be addressed in order to achieve a fully scaled-out and dynamic data center.

  • Wasted bandwidth – by blocking some network paths to avoid loops, STP leaves part of the available bandwidth unused
  • The active path is not always the most cost-effective – this impacts virtual machine and application portability
  • Failover time – when a device fails, STP reconfigures the network and sets up new pathways, but it does so relatively slowly. This is not acceptable in today’s networks
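The wasted-bandwidth point is easy to see on a toy topology: a spanning tree over N switches keeps only N − 1 links active and blocks every other link, no matter how much capacity those links have. (Real STP elects a root bridge by bridge priority and MAC address; a plain BFS tree stands in for it in this sketch.)

```python
# Toy illustration of STP's "wasted bandwidth" limitation: on a full mesh
# of four switches, a spanning tree keeps 3 links active and blocks the
# other 3, even though the blocked links have usable capacity.
from collections import deque

# full mesh of four switches: six links in total
links = {("A", "B"), ("A", "C"), ("A", "D"),
         ("B", "C"), ("B", "D"), ("C", "D")}

def spanning_tree(links, root="A"):
    """Return the set of links kept active by a BFS tree from the root."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nxt in sorted(neighbors[node]):
            if nxt not in seen:          # first path to nxt stays active
                seen.add(nxt)
                active.add(tuple(sorted((node, nxt))))
                queue.append(nxt)
    return active

active = spanning_tree(links)
blocked = links - active                 # links STP would put in blocking
print(f"active links: {len(active)}, blocked links: {len(blocked)}")
```

Half the links in this mesh carry no traffic at all, which is exactly the capacity a flattened fabric (via TRILL or a vendor equivalent) aims to reclaim.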

The workaround for STP’s limitations has been to keep Layer 2 networks relatively small and join them together via Layer 3 segments. Welcome to the three-tier network.

Then came virtualization and the unified network. It soon became obvious that the three-tier network is not ideally suited to supporting these new technologies.

For example, in order to do a non-disruptive VMotion, the source host and target host, as well as their storage, need to be on the same Layer 2 network. In other words, live migration can only happen within a single subnet.
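That subnet constraint can be checked mechanically. A small sketch using Python’s standard ipaddress module (the addresses and the /24 prefix are made-up examples):

```python
# Check the live-migration constraint described above: source and target
# hosts must sit in the same Layer 2 subnet. Addresses are examples.
import ipaddress

def same_subnet(host_a: str, host_b: str, prefix: int = 24) -> bool:
    """True if both host addresses fall inside the same /prefix network."""
    net = ipaddress.ip_network(f"{host_a}/{prefix}", strict=False)
    return ipaddress.ip_address(host_b) in net

print(same_subnet("10.1.20.11", "10.1.20.57"))  # same /24: migration possible
print(same_subnet("10.1.20.11", "10.2.30.57"))  # different subnet: blocked
```

In a three-tier design the Layer 3 boundaries between small Layer 2 islands make the second case the common one, which is precisely why migration domains stay small until the network is flattened.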

All of this (and a host of other issues) leads to a requirement to make the data center network more intelligent. The buzzword for this is FLATTENING THE NETWORK.

According to estimates from one analyst firm, if all businesses eliminated a single layer from their networks, they could collectively save $1 billion in IT spending.

So what is the way forward and how are the vendors responding?

The way forward is technology that addresses the STP issues while flattening the network down to two tiers, and if possible one tier.

Transparent Interconnection of Lots of Links (TRILL) is a proposed IETF standard aimed at eliminating the aggregation/distribution layer and creating a switch fabric. TRILL’s goal is to make the network more intelligent and eliminate the shortcomings of STP.

Radia Perlman (the creator of STP) is a member of the IETF working group developing TRILL.

TRILL is an emerging standard, and some analysts believe we are at least two years away from a mature, standards-compliant implementation of technology such as TRILL. However, vendors such as Brocade, Cisco, Extreme, HP/3Com and Juniper have all come out with approaches that flatten the network down to two tiers, and in some cases one tier.

Westcon has over 25 years of experience in the networking business, and our focus is to work with our customers and help them with the transition. The skills we have acquired over the years, and the fact that we carry the majority of these vendors, mean we are well placed to educate our customers and help them negotiate the new world of a FLAT network.


If you are a regular reader of this blog you know that Westcon is a distributor of Networking, UC, Security, and Data Center technologies.  We do about $3.0 billion in revenue per year and distribute solutions to over 70 countries.  We conduct business under two brand names – Westcon and Comstor.  As CTO I am responsible for both the technology that runs the enterprise and also setting the overarching strategic direction of the firm from a technology perspective.  So, I get to work with the stuff we distribute.

There have been a number of posts on our Data Center Consolidation project and our migration to virtualization.  Our journey toward selecting the right platform was important on a number of levels.  Obviously we wanted to make the best selection for our enterprise from an ROI perspective.  But we also planned ahead and used the decision-making process as a framework that we would share with our customers (resellers, SIs, service providers) to help them understand the decision-making process of the average mid-size enterprise end-user/buyer.

The Framework

Substantial technology decisions must always be premised on rational business justification.  The bigger the purchase, the greater the demand for a coherent, articulate business justification (i.e., not tech-talk).  The learning process involved in acquiring new technology has to happen at two levels: what is the technological benefit, and what is the business benefit?  In addition to learning about the technology and the business rationale, there is the need to actually experience the technology in action, especially if the technology is on the leading edge, such as virtualization or cloud.  And no technology purchase happens in a vacuum: understanding how the new technology can be architected into a heterogeneous environment needs to be part of the process.  Lastly, the larger the decision, the greater the need for a clear plan, with specific milestones, to ensure business and technology decisions are synchronized for success.

LEAP – Learn, Experience, Architect, Plan

This high-level decision-making process became a framework that the channel can use in conversations with their customers.  In order to deliver on this framework, Westcon recognized that an environment was needed to step through the four steps of the process: learn, experience, architect and plan.  To achieve that goal, Westcon will begin launching LEAP Centers around the globe in the coming months.  The first center opens in Brussels, Belgium this week (September 29, 2010), followed by the opening of the Denver, Colorado LEAP Center on October 20, 2010.  Additional sites in Australia, Brazil, Singapore, and South Africa will come online in the future.

Each LEAP center is designed to help the channel (with end-users when desired) learn about new technologies, the business value of the technology, and how the technology actually performs in live, heterogeneous technology environments.  The goal is for Westcon’s customers (Reseller, SI or Service Provider) to truly understand these new technologies and what the proper architectures are for solutions that include these new technologies.  Ultimately the goal of the LEAP center is to create a more informed channel partner who can successfully sell the solutions efficiently and effectively.

Our experience as an average IT organization showed that our engineers’ “new technology skepticism” was eradicated when they had the chance to work with the equipment in real time.  As an example, in the case of our data center consolidation, the original selling points for our technology configurations were superseded by more important benefits once the team got their hands on the equipment. Appreciation for these new capabilities would not have been possible without working with the equipment.  That is a key objective of the LEAP Center.

Your LEAP Center

Each LEAP Center is a fully configured data center, running hardware and software from multiple manufacturers in order to create a more realistic IT environment.  The center contains equipment from vendors that, in some cases, Westcon does not even distribute.  Westcon has full-time engineers dedicated to the running of each LEAP Center.   But make no mistake about it, the LEAP Center is a center for our customers.  We built it for our channel customers.  Our goal is for our customers to be successful in deepening relationships with, and ultimately selling to, their end-users by creating greater value to their customers through the knowledge gained in a LEAP experience.   We want our customers to consider the LEAP center as an important component of their portfolio of sales strategies, tactics and tools.  LEAP Centers will help our customers Learn and Experience the technology, but also be an environment wherein they can bring in their customers to work on specific Architectures and Plans.  In order to do that, Westcon has committed millions of dollars in the development of the LEAP Center.  Ultimately, Westcon believes that this investment for our Resellers, Systems Integrators, and Service Providers will generate profitable success for our customers, their customers, our vendor partners and the channel as a whole.

We look forward to seeing you at one of the LEAP Centers soon!