The next tech boom is already underway

Gregory Ness · October 20, 2008 · Short URL: https://vator.tv/n/4a6

Virtualization and cloud computing are driving us into a new era of network automation

Cloud computing has become a reality, yet the hype surrounding the cloud has started to outrun the laws of physics and economics.  The robust cloud (the vision of all software delivered on demand, replacing the enterprise data center) will crash into some of the same barriers and diseconomies facing enterprise IT today.

Certainly there will always be a business case for elements of the cloud, from Google's pre-enterprise applications to Amazon's popular services to the powerhouse CRM, HR and other hosted offerings.  Yet there are substantial economic barriers to entry rooted in the nature of today's static infrastructure.

We've seen this collision between new software demands and network infrastructure many times before, as it has powered generations of innovation around TCP/IP, network security and traffic management and optimization.

It has produced a lineup of successful public companies well positioned to lead the next tech boom, which may even be recession-proof.  Cisco, F5 Networks, Riverbed and even VMware stand to benefit from this new infrastructure and the level of connectivity intelligence it promises.  (More about these companies and others later in this article.)

Static Infrastructure Meets Dynamic Systems and Endpoints

I recently wrote about clouds, networks and recessions, taking a macro perspective on the evolution of the network and a likely coming recession.  I also cited virtualization security as an example of yet another collision between more robust systems and static infrastructure, one that has slowed technology adoption and created demand for newer and more sophisticated solutions.

I posited that VMware was a victim of expectations raised by the promise of the virtualized data center and tempered by technological limitations its technology partners could not address quickly enough.  Clearly the network infrastructure has to evolve to the next level and enable new economies of scale.  And I think it will.

Until the current network evolves into a more dynamic infrastructure, all bets are off on the payoffs of pretty much every major IT initiative on the horizon today, including cost-cutting measures meant to shrink operating costs without shrinking the network.

Automation and control have been both a key driver and a key barrier, for the adoption of new technology as well as for an enterprise's ability to monetize past investments.  Increasingly complex networks are requiring escalating rates of manual intervention.  This dynamic will have more impact on IT spending over the next five years than the global recession, because automation is often the best answer to the productivity and expense challenge.

Networks Frequently Start with Reliance on Manual Labor

Decades ago the world's first telecom networks were simple and fairly manageable, at least by today's standards.  Fewer people had telephones then than have their own blogs today.  Neighborhoods were also very stable, and operators often personally knew many of the people they were connecting.

Those days of course are long gone, and human operators are today involved only in exceptional cases and highly automated fee-based lookup services.  The Bell System eventually automated the decisions made by those growing legions of operators, likely because scale and complexity were creating the same diseconomies that larger enterprise networks face today.  The phone companies went on to build massive networks serving more dynamic rates of change and, ultimately, new services.  Automation was the best way to escape the escalating manual labor requirements of the growing communications network.

TCP/IP Déjà vu

A very similar scenario is playing itself out in the TCP/IP network as enterprise networks grow in size and complexity and begin handling traffic between more dynamic systems and endpoints.  The recent Computerworld survey (sponsored by Infoblox) shows larger networks paying a higher IPAM price per IP address than smaller networks.  As I mentioned earlier at Archimedius, this is clear evidence of networks growing into diseconomies of scale.

Acting on a hunch, I asked Computerworld to pull more data based on network size, and they were able to break their findings down into three network-size categories: 1) under 1,000 IP addresses; 2) 1,000 to 10,000 IP addresses; and 3) more than 10,000 IP addresses.  Because the survey was based on only about 200 interviews, I couldn't break the trends down any further without taking statistical leaps with small samples.

Consider what it takes to keep a device connected to an IP network and ensure that it's always findable.  First, it will need an unused IP address. In a 1.0 infrastructure administrators use spreadsheets to track used and available IPs and assign them to things that are "fixed", like printers and servers. 

In a 2.0 world, servers are virtual and dynamic, and they move around even more frequently than wireless laptops and phones.  DHCP can assign addresses dynamically, along with plenty of other configuration data (such as the addresses of critical infrastructure elements like the gateway router and the DNS server, and even device-specific configuration info).  But the pools of addresses DHCP hands out still have to be managed, there are lots of reasons why admins need to know which device received a particular address, and applications need to be able to reach devices by name (e.g. a Windows host name) rather than by IP address.
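
To make that bookkeeping concrete, here is a minimal sketch in Python of the record-keeping a 1.0 shop does by hand in a spreadsheet: pick a free address from a pool, note which device received it, and update the name-to-address mapping that stands in for DNS.  The class and names are hypothetical illustrations, not any vendor's product.

    import ipaddress

    class ManualIpam:
        """Toy stand-in for the spreadsheet: an address pool, an allocation table and a DNS-like map."""
        def __init__(self, cidr):
            self.pool = list(ipaddress.ip_network(cidr).hosts())  # available addresses
            self.allocations = {}                                  # ip -> device name
            self.dns = {}                                          # host name -> ip

        def allocate(self, hostname):
            ip = self.pool.pop(0)            # find an unused address
            self.allocations[ip] = hostname  # record which device received it
            self.dns[hostname] = ip          # update DNS so the device stays findable
            return ip

        def release(self, hostname):
            ip = self.dns.pop(hostname)      # forget the name...
            del self.allocations[ip]         # ...clear the allocation...
            self.pool.append(ip)             # ...and return the address to the pool

    ipam = ManualIpam("10.0.0.0/24")
    print(ipam.allocate("print-server-01"))  # 10.0.0.1

In a 1.0 shop every one of those updates is a manual step, and each one is a chance for the spreadsheet, the DHCP server and DNS to drift out of sync.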

Perhaps it takes 30 minutes on average to find an address, allocate it, get a device configured, update the spreadsheet and update DNS.  That was more manageable in a static world, though the increasing cost per IP to perform these tasks in larger networks is a direct consequence of manual systems breaking down in the face of scale.  Now consider a 30-minute process for a device - or a virtual application instance - that changes IPs every few hours, or faster.  When a 1.0 infrastructure meets 2.0 requirements, things start to break pretty quickly.
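
As a back-of-the-envelope illustration (the 30-minute figure and the change rates are assumptions for the sake of argument, not survey data), the labor gap between a static device and a dynamic one looks something like this:

    MINUTES_PER_CHANGE = 30                    # assumed manual effort per address change

    # a "fixed" device might be re-addressed a couple of times a year
    static_changes_per_year = 2
    # a VM that changes IPs every four hours racks up thousands of changes a year
    dynamic_changes_per_year = 365 * 24 // 4   # 2,190 changes

    static_hours = static_changes_per_year * MINUTES_PER_CHANGE / 60     # 1 hour per year
    dynamic_hours = dynamic_changes_per_year * MINUTES_PER_CHANGE / 60   # 1,095 hours per year

    print(static_hours, dynamic_hours)         # 1.0 vs. 1095.0 hours of manual work per device

On those assumptions a single dynamic workload would consume roughly half an administrator's working year by itself, which is exactly the kind of diseconomy the Computerworld numbers hint at.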

That is why, even for the simple act of managing an enterprise network's IP addresses, which is critical to the availability and proper functioning of the network, expense and labor per address actually go up as addresses are added.  As TCP/IP continues to spread and take productivity to new heights, management costs are already escalating.

This is a very fundamental observation based on one of the most common network management tasks.  You can assume there are other cost curves that are even steeper, given the complexity of other tasks and their reliance upon manual labor.

Some enterprises are already paying even higher expenses per IP address, and chances are they don't even know it, because these expenses are hidden within network operations.  Reducing headcount risks increasing these costs further, or forcing substantial sacrifices in network availability and flexibility.

IPAM as the Switchboard Metaphor

If something as simple and straightforward as IP address management doesn't scale, imagine the impacts of more complex network management tasks, like those involved with consolidation, compliance, security, and virtualization.  There are probably many other opportunities for automation tucked away within many IT departments in the mesh between static infrastructure and moving, dynamic systems and endpoints.

This will force enterprise IT departments into discussions similar to those that likely took place decades ago within the Bell System, when telecom executives looked at the dramatic increase in the use and distribution of telephones and the mushrooming requirements for operators, switchboards, offices, salaries and benefits.  One can only imagine the costs and challenges we would face today if the basic connection decisions were still made by a human operator.

The IPAM counterpart to the switchboard of yesteryear is the spreadsheet of today.  Networking pros in most enterprises manage IP addresses using "freeware" that has an ugly underside: it produces escalating hidden expenses that are only now being recognized, mostly by large enterprises.  Mix the growth of the network with new dynamic applications, new forms of mobility and a little human error, and you have a recipe for availability, security and TCO issues.

Many of these switchboards could probably be bought or manufactured today for a song; yet it is the other costs (the labor, availability and flexibility hits that drive TCO) which make them cost-prohibitive.

Server Déjà vu

Another TCO fable that is similarly bound to take the steam out of cloud fantasies has to do with hardware expenses.  The cloudplex will utilize racks of commodity servers populated with VMs that can scale up as needed, saving electricity and making IT more flexible.  That makes incredibly good sense, but are we really there yet?  No.

Servers have a very large manual labor component, according to an IDC report hosted at Microsoft.com.  The drumbeat of real estate and electricity savings may play well to the bigger-picture buyer; yet perhaps the real payoff of virtualization is its potential to automate manual tasks, like creating and moving a server on demand.

Just how many organizations have launched virtualization initiatives only to find out that their infrastructure wouldn't let them save electricity, real estate or people power?  The network infrastructure simply wasn't intelligent enough to enable anything more than virtualization-lite, because the links between the infrastructure and the software were still constrained by manual processes.

Yet one of the core promises of virtualization is to automate the deployment of server power.  If this is constrained by Infrastructure1.0 (as I'm suggesting), then VMware and its partners need to address the "static infrastructure meets dynamic processing power" challenge rapidly in order to achieve the levels of growth once expected in 2007.  With Microsoft now in the virtualization market thanks to Hyper-V, VMware's window of first-mover advantage is starting to close.

Virtualization security now risks becoming a metaphor for other technology-related issues that could slow down the adoption of virtualization in the lucrative production data center market. 

Netsec Wasn't Ready for Virtsec 

The lack of connectivity intelligence in network security meant that security policy, for example, would limit VMotion to hardware-centric hypervisor VLANs.  Network security infrastructure wasn't prepared for the challenge of protecting moving, state-changing servers, despite the promise of a stellar lineup of VMsafe partners.

The promise of virtualization that drove VMware's stock price into the clouds eventually met lowered growth expectations, as deployments were slowed by the same lack of connectivity intelligence that has undermined other business cases for virtualization's unquestionable power to someday unleash new economies of scale and computing power.  These issues will hit the cloud dream too, just as they have already hit other initiatives, albeit on a smaller and less visible scale.

Today plenty of new initiatives face mounting pressure for connectivity intelligence and automation, and the resulting ecosystem finger-pointing has already left enterprise CIOs holding the bag.  Whether or not we enter a global recession, these pressures will continue and likely worsen.  They are artifacts of years of application, network and endpoint intelligence promises colliding with static TCP/IP infrastructure.

Saving money by cutting network operations or capital budgets is the equivalent of Ma Bell laying off operators or closing switchboards in the midst of unstoppable growth.  Automation is the only way out, as Cisco's Chambers hinted recently.

Back to the Clouds and Virtualization

Cloud computing is dynamic computing power on a massive scale, delivering new economies for IT services and applications.  The business case lies in the gap between those economies and the prices enterprises are already paying for their own services, once operations, sales, marketing and new infrastructure requirements are accounted for.

As much as cloud computing has rallied behind the prospect of electricity and real estate savings, the business case still feels like a dotcom hangover in some cases.  Virtualization is still a bit hamstrung in the enterprise by the disconnect between static infrastructure and moving, state-changing VMs; and labor is the largest cost component of server TCO (IDC findings) and a significant component of network TCO (as suggested by the Computerworld findings).  So just how much will real estate and electricity savings offset other diseconomies and barriers in the cloud game?  I think cloud computing will also have to innovate in areas like automation and connectivity intelligence. 

For the network to be dynamic, it needs continuous, dynamic connectivity at the core network services level.  Network, endpoint and application intelligence will all depend upon connectivity intelligence in order to evolve into dynamic, automated systems that don't require escalating manual intervention in the face of network expansion and rising system and endpoint demands.

Getting beyond Infrastructure1.0's Zero-Sum Game

Whether you "cloudsource" or upsize your network to address any of a number of high level business initiatives the requirements for infrastructure2.0 will be the same.  You can certainly get to virtualization and cloud (or consolidation or VoIP, etc) with a static infrastructure; you'll just need more "operators", more spreadsheets and other forms of manual labor.  That means less flexibility, more downtime and higher TCO; and you'll be going against the collective wisdom of decades of technologists and innovations.

This recession-proof dynamic gives the leaders in TCP/IP, netsec and traffic optimization an inherent advantage, if they can get the connectivity intelligence necessary to deliver dynamic services.  They have the expertise to build intelligence into their gear, as they have already demonstrated.  They just haven't had the connectivity intelligence to deliver the dynamic infrastructure.  Yet that is inevitable.

The Potential Leaders in Infrastructure2.0

Cisco is the leader in TCP/IP and has the most successful track record when it comes to executing in the enterprise IT market.  Cisco has kept up with major innovations in security and traffic management as well, and it is likely to become a leader in Infrastructure2.0 as enterprises seek to boost productivity while their networks continue to become strategic to business advantage in an uncertain world economy.

F5 Networks has become the leader in application-layer traffic management and optimization, thanks to its uncanny ability to monetize the enterprise web, or the enterprise initiative to deliver core applications over the WAN and Internet.  Its ability to merge load balancing with sophisticated application intelligence positions it to play an important role in the development of dynamic infrastructure.

Riverbed has come on the scene thanks to its ability to optimize a vast array of network protocols, letting its customers empower their branch offices like never before.  While many tech leaders focused on the new data center, Riverbed achieved stellar growth by focusing on the branch office boom enabled by breakthroughs in traffic management and optimization.  It was a smart call that has positioned Riverbed to be a leader in the emerging dynamic network.

Infoblox is the least known of the potential I2.0 leaders.  It is a private company that already counts more than 20% of the Fortune 500 as customers.  Its solutions automate core network services (including IPAM), enabling dynamic connectivity intelligence for TCP/IP networks.  (Disclosure: I left virtualization security leader Blue Lane Technologies in July to join Infoblox, largely because of its record of revenue growth, sizable customer base and the promise of core network service automation.)  Infoblox's founder and CTO is also behind the IF-MAP standard, a new I2.0 protocol that holds promise as a key element for enabling dynamic exchange of intelligence among infrastructure, applications and endpoints (think MySpace for your infrastructure).
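
To illustrate the "MySpace for your infrastructure" idea, here is a minimal conceptual sketch in Python of the publish-and-search model behind IF-MAP: participants publish small pieces of metadata about identifiers (a MAC address, an IP, a user), and any authorized participant can look them up.  The classes and method names below are hypothetical simplifications for illustration, not the actual IF-MAP protocol bindings.

    from collections import defaultdict

    class MetadataMap:
        """Toy shared metadata store: identifier -> list of (key, value) facts."""
        def __init__(self):
            self.graph = defaultdict(list)

        def publish(self, identifier, key, value):
            # e.g. a DHCP server publishes which IP a MAC address received
            self.graph[identifier].append((key, value))

        def search(self, identifier):
            # e.g. a firewall asks what is known about an endpoint before admitting it
            return list(self.graph[identifier])

    imap = MetadataMap()
    imap.publish("mac:00:16:3e:aa:bb:cc", "ip-address", "10.0.0.42")
    imap.publish("mac:00:16:3e:aa:bb:cc", "authenticated-user", "jdoe")
    print(imap.search("mac:00:16:3e:aa:bb:cc"))

The real standard adds features such as subscriptions and an XML-based transport, but the core idea is the same: infrastructure elements share what they know about endpoints instead of each keeping its own manually maintained list.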

VMware is executing on the promise of production virtualization and clearly now has the most experience in addressing the challenges of integrating dynamic processing power with static infrastructure.  I think VMware's biggest challenge will be deciding how much it has to build or acquire in order to address them.  Not all of its technology partners are adequately prepared for the network demands of dynamic systems and endpoints.  VMsafe was a big step forward on the marketing front, but partners have been slow to deliver virtsec-ready products.

Google has no doubt benefited from the hype surrounding cloud computing.  They've been investing in cloudplexes and new pre-enterprise cloud applications.  While I do have reservations about their depth of infrastructure experience (versus the Nicholas Carr prediction of the eventual decline of enterprise IT), I think one would be hard-pressed not to include them as a player driving requirements for a more dynamic infrastructure.

Microsoft has recently become more vocal on both the virtualization and cloud fronts and has tremendous assets with which to force innovation in infrastructure, in the same way that its more powerful applications have influenced endpoint and server processing requirements.  It is likely to play a similar role as the network becomes more strategic to the cloud.

There are no doubt other players (both public and private) that promise to play a strategic role in this next technology revolution, including those delivering more power, automation and specialization around network, endpoint and application intelligence as well as enabling more movement and control in virtual and cloud environments.  All are welcome to join the conversation.

These leaders are well positioned to play a substantial part in the race to deliver Infrastructure2.0, and strategic enterprise networks promise to be big winners.  The dynamic infrastructure will change the economics of the network by automating previously manual tasks, and it will unleash new potential for application, endpoint and network intelligence.  It will also play a major part in the success or failure of many leading networking and virtualization players, as well as enterprise IT initiatives, during periods of economic weakness and beyond.  Infrastructure2.0 is the next technology boom.  It is already underway.
