Monday, August 4, 2014

Simplicity and focus trump cheap and even free.

AWS recently released Zocalo. 

“Fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity.

Users can comment on files, send them to others for feedback, and upload new versions without having to resort to emailing multiple versions of their files as attachments. Users can take advantage of these capabilities wherever they are, using the device of their choice, including PCs, Macs, and tablets. Amazon Zocalo offers IT administrators the option of integrating with existing corporate directories, flexible sharing policies, audit logs, and control of the location where data is stored.

Customers can get started using Amazon Zocalo with a 30-day free trial providing 200 GB of storage per user for up to 50 users”.

But who cares?

Well, for starters: Dropbox, Box, Accellion, Huddle, Soonr, SugarSync, Google, Microsoft, and Egnyte.  With Apple, AWS, Google, and Microsoft now jumping into the fray of share, sync, and collaboration, the limbo music is starting to play.  AWS fired the latest shot below the waterline with Zocalo pricing starting at $5 per user per month for 200GB of storage. Today, Dropbox charges twice that for half as much storage.
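To put rough numbers on that, using only the figures cited here (Zocalo at $5 for 200GB, Dropbox at roughly twice the price for half the storage), the per-gigabyte math works out like this:

    # Per-gigabyte price comparison implied by the figures in this post.
    zocalo_price, zocalo_gb = 5.00, 200     # $5 per user per month for 200 GB
    dropbox_price, dropbox_gb = 10.00, 100  # "twice that for half as much storage"

    print(zocalo_price / zocalo_gb)    # 0.025 -> $0.025 per GB per month
    print(dropbox_price / dropbox_gb)  # 0.1   -> $0.10 per GB per month, four times Zocalo's rate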

As it has already demonstrated with compute, networking, and storage, when AWS competes, disruption and commoditization soon follow.

But Dropbox et al., with nearly $1.5B in cumulative funding behind them, are not going to throw in the towel any time soon.

Why?  Because simplicity and focus always win.

Box and Dropbox have made it extraordinarily simple for people to use cloud-based storage and become untethered from earthly storage persistence.  They have focused on simplifying the user experience, usability, and economic consumption models of their products for enterprise IT. In many cases, as with security and enterprise integration (e.g., LDAP, ACLs, SSO, SharePoint, Salesforce), Box and Dropbox have done a great job of removing complexity.

AWS will succeed in one thing, and that’s elongating sales cycles.  POCs, user trials, and price comparisons will become the norm for all the players.  A real market.  Longer sales cycles are not what Box needs right now. The company’s line item for sales and marketing expenses expanded from $99.2 million for the fiscal year ending January 31, 2013, to $171 million for the fiscal year ending January 31, 2014. That accounts for the majority of Box’s $100 million increase in operating costs over the same period.

Zocalo will resonate with developers accustomed to AWS, much as HipChat and Fuze are used in the bowels of engineering while the superior experience of WebEx, GoToMeeting, and TelePresence still rules.  AWS’s offerings are bountiful and confusing; the plethora of services on the AWS price book is overwhelming. At the same time, there is very little cross-sell between the core strengths of S3 and EC2 and Zocalo, and AWS has not exactly demonstrated strength in moving up the stack to lines of business or verticalizations.

Bottom line: Zocalo will create sales chaff for the real enterprise share/collaboration market and be a niche product, relegated to IT infrastructure with a minimal attach rate to core AWS.


Tuesday, June 10, 2014

Embrace the Rate of Change and other lessons from Captain Ramius


Two recent dynamics in enterprise IT are serving to widen the gap between application and infrastructure advancement.

1. Outsourcing Activities Are Slowing Infrastructure Advancement

Outsourcing is now more popular than ever, and the rise of the Service Provider is fueling this change.  Look at the growth of traditional on-prem outsourcing contracts and the use of those contracts to migrate enterprise workloads to vendor-managed clouds.  This is why the predominant growth at the likes of HP, IBM, and CSC, and certainly where they are betting future growth will occur, is in their Service Provider businesses. This is the legacy of the outsourcing business, where EDS, IBM Global Services, and CSC competed for long-tail, 3-5 year on-prem outsourcing deals. As IBM builds up its enterprise cloud business with SoftLayer, the margins can only be tolerated within these services organizations. These management and operating contracts are now the vehicles for migrating workloads to private and public clouds.

This migration of on-prem relationships to the vendor cloud will continue, as the large systems houses leverage their long-term customer relationships to move those workloads onto their own clouds.

An interesting dynamic kicks in once a company has decided to migrate an enterprise workload into a Service Provider relationship: the rate of change is now tied to the framework of the outsourcing contract.  In other words, the Service Provider has agreed with the customer to manage a set environment for a set price.  If the customer then wants to migrate to, say, a different platform or application version, that is a scope change to the management contract and has cost implications.  This dynamic usually kills platform migrations of legacy systems.

The true costs are the limits put on business agility.

2. The Use of New App Dev Models Is Accelerating

So while the legacy slows, new app development accelerates.  It accelerates with the rise, availability, and ease of use of the next-generation platforms.  You can run CouchDB, MongoDB, Hadoop, and the like as on-prem open-source systems, or consume the same capabilities from AWS as a service.  You can provision resources instantaneously and be productive immediately.  Code snippets are grabbed from open-source libraries and code banks. Almost any function, from an asset depreciation module to molecular dynamics, already exists in a code bank somewhere: http://en.wikipedia.org/wiki/List_of_free_and_open-source_software_packages
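As a minimal illustration of that instant provisioning, here is a sketch using AWS’s Python SDK (boto3); the bucket name is a placeholder, and it assumes AWS credentials are already configured:

    import boto3

    # Provision an object store bucket on demand: no hardware ordered,
    # no storage admin involved, usable seconds after the call returns.
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="example-appdev-scratch-bucket")  # placeholder; names must be globally unique

    # Start using it immediately.
    s3.put_object(
        Bucket="example-appdev-scratch-bucket",
        Key="hello.txt",
        Body=b"provisioned and productive in one step",
    )

The same pattern applies to the managed database services: a table or cluster is an API call, not a procurement cycle.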

This new App Dev is all done on commodity infrastructure.  It has to be: infrastructure that is RESTful, API-driven, and 100% commoditized.  The rate of hardware commoditization is visibly killing EMC, NetApp, IBM, and HP, among others.  Those models and margins will simply cease to exist.

It is time to narrow the gap

So while the legacy slows to a crawl due to technical and business inertia, new app dev is screaming.  The combination is great for the vendors who accept this accelerated change, divorce themselves from the past, and fully embrace the future.

The only way IT can deliver infrastructure for the new paradigm of App Dev is to adopt a public-cloud-like model and deliver services on top of commodity infrastructure: services delivered with speed, QoS, control, and costs that match, if not beat, the public cloud.

This infrastructure simply cannot be delivered on antiquated platforms that are tied to supporting the past.  Remember the old COBOL compilers that ran on the PC, taking COBOL code and compiling and executing it locally?  They were short-lived bridges of backward compatibility, rooted in the past rather than the future.

That is why Captain Ramius was right when he said, “Upon reaching the new world, Cortez burned his ships. As a result, his men were well motivated.”
Legacy is just that.  The past.  Therefore, #Demandincompatibility and move ahead to deliver agile value to the business.





Thursday, May 15, 2014

Local Clouds and The Coming Death of Legacy Stacks in the Cloud


Let me tell you a little secret about the “cloud.” It’s that right now in the enterprise, it’s a local game.

A few options come to mind when we think of the enterprise, like the predominant force of AWS in test and development environments. Then you have providers like HP-ES, IBM/SoftLayer, Rackspace, Terremark, and Google that are all trying to play enterprise production workload catch-up.

For the time being, I think it’s safe to call the enterprise private cloud a local game.

Take KIO Networks in Mexico City, LG CNS in South Korea, or T-Systems in Germany; the strength of these local service providers is that they are entrenched in the local economy. They are tied in with government entities through contracts, investment tax incentives, or, in some cases, board relationships.  Governments encourage these types of businesses, as they are clean, provide local jobs, and are a good high-tech face for the country.

The key characteristics of these service providers are that they have the local connections and the P&L margin expectations for the long-tail economics of a service provider business.

In the case of KIO Networks, margins are so tight that they build their data centers in cooler zones of Mexico City or bury them in the side of a mountain, because not running their chillers for several months out of the year is a competitive differentiator. Coming from a world of plump enterprise software/systems deployments, margins like this seemed like a foreign concept to me.

Just like our friends at AWS, service providers don’t write the books; they just sell them, and they are perfectly happy living with the retail economics.

But in this never-ending quest for margin, service providers need to standardize on control layers to manage each plane: compute, networking, and storage. Provisioning and managing chargeback across these heterogeneous resource pools, at times customer-dictated and at other times commodity, is vital.
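As a purely illustrative sketch of what that chargeback layer boils down to (the pool names, rates, and usage figures below are invented), it is essentially metering usage per tenant across pools with very different unit costs:

    from collections import defaultdict

    # Hypothetical unit costs per pool: customer-dictated proprietary gear
    # is metered at a higher rate than the commodity pool.
    RATE_PER_GB_MONTH = {"proprietary-san": 0.90, "commodity-jbod": 0.12}

    usage_records = [
        # (tenant, pool, GB-months consumed) -- invented sample data
        ("acme", "proprietary-san", 500),
        ("acme", "commodity-jbod", 4000),
        ("globex", "commodity-jbod", 12000),
    ]

    def chargeback(records):
        """Roll usage up into a per-tenant bill across heterogeneous pools."""
        bills = defaultdict(float)
        for tenant, pool, gb_months in records:
            bills[tenant] += gb_months * RATE_PER_GB_MONTH[pool]
        return dict(bills)

    print(chargeback(usage_records))  # {'acme': 930.0, 'globex': 1440.0}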
This long-term platform migration, driven primarily by the evolution of the service provider, is what poses a sea-change threat to the major legacy profit pools of IBM, HP, EMC, NetApp, and others.

In the early stages of a customer’s journey into the private/public cloud, they generally dictate the same legacy platforms they have run on for decades, with these environments lifted and shifted into the service provider’s data center. In other words: let’s move my expensive proprietary boxes off of my data center floor and onto yours.

This is precisely where the market is now.  But this phenomenon is just a hosting/colo way station on the way to the true public cloud.

Going forward, software-driven reliability, fault tolerance, and compliance delivered on top of commodity infrastructure selected by the service providers will be a way of life.

Service providers’ margins will never tolerate the proprietary stacks of today.

These business models are too far out of sync. AWS’s cloud doesn’t run on proprietary gear; why should yours?

Monday, May 5, 2014

Rewriting the Entire Customer Experience.


As I embark on my next career move as Chief Customer Officer of Formation Data Systems, I’m struck by the sheer magnitude of the opportunity: the prospect of disrupting the traditional enterprise storage market, and of a next-generation data management layer that can holistically unlock the value of traditional databases, NoSQL databases, and AWS S3. The technical challenges and broad transformational opportunities are exhilarating.

But what gets me fired up beyond belief is the chance to truly define how a new company rewrites the entire customer experience.

Now that’s cool stuff.

To be part of a revolution in how customers obtain knowledge about Formation Data Systems: the company, the people, and the products. Providing fuel to enable customers to make intelligent decisions and interact with a product and a company in a completely new way.
   
In the past, IT was “sold” through traditional means: marketing awareness, campaigns, and marketing touches turning into leads, which turn into prospects, which turn into deals and sales.

We thought we were getting fancy when we started selling to LOB as well as IT, or took the populist approach of bypassing IT altogether. The whole experience was an asynchronous push.  The sales “firewall” was built to protect the customer from the technical complexities and harsh realities of the product.

Enter the spin doctors obfuscating complexity with PowerPoint.  

I believe this cycle is antiquated and outmoded, not only in how companies can and should interact with their customers, but also in how customers and potential customers seek to understand disruptive technology and how it can improve their lives.

Customers expect and deserve more. 

Customer interactions should be enlightening and educational, where technical and business ideas are exchanged and refined collaboratively, and where flexible problem solving and options define customer success.

Today, via social and affinity networks, technically savvy customers are exchanging ideas with scores of like-minded colleagues. Via these informal networks, the true customer experience begins long before a salesperson ever interacts with a customer. Customers don’t want to see high-level PowerPoints because chances are they’ve already pre-read them on SlideShare.


So, after a long run at some of the most distinguished companies in Silicon Valley (PeopleSoft, Vignette, Documentum, EMC, and SAP), I’m truly honored to be able to take that depth of work and define the next generation of customer experience with you at Formation Data Systems.

The Rise of Hybrid Cloud Computing


Even if you’re not a technologist, I want you to understand that hybrid cloud computing is all about choice:
Choice about where your data resides. Choice about how your data is managed. Choice about where your data processing actually happens.
These choices can be used, as the sketch below illustrates:
  • to make economic decisions to lower the total cost of ownership of data,
  • to maximize your quality of services, or
  • to comply with regulatory constraints on data sovereignty.
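A hypothetical sketch of what exercising those choices can look like in practice (the placement policy, location names, and cost figures are invented for illustration):

    # Hypothetical placement policy: keep regulated data in its sovereignty
    # domain, otherwise pick the cheapest location available.
    LOCATIONS = {
        # location: (cost per GB-month, sovereignty domain) -- illustrative numbers
        "on-prem":      (0.20, "DE"),
        "eu-central-1": (0.10, "DE"),
        "us-east-1":    (0.03, "US"),
    }

    def place_data(sovereignty_required, allowed_domain="DE"):
        """Choose where a dataset lives based on sovereignty and cost."""
        candidates = {
            name: cost
            for name, (cost, domain) in LOCATIONS.items()
            if not sovereignty_required or domain == allowed_domain
        }
        # Economic choice: the cheapest compliant location wins.
        return min(candidates, key=candidates.get)

    print(place_data(sovereignty_required=True))   # -> "eu-central-1"
    print(place_data(sovereignty_required=False))  # -> "us-east-1"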
In today’s environment, vendors are moving fast. If we wind the clock back a year, the main cloud services like Microsoft Azure, Amazon, Terremark, or Rackspace were fairly proprietary, closed environments. But they all quickly realized that IT heterogeneity is what customers want.

If I take my infrastructure and my workloads and move them to the cloud, my ability to do so in a closed, homogeneous cloud is very limited.

Customers Want Choice, Not Monolithic Options
Managing a mix of platforms is a reality for CIOs’ deployment models. And ultimately, that’s what the cloud is: It’s a deployment model.
It’s the transportability of workloads that makes the hybrid cloud so important. Terremark, Rackspace and Amazon have visions to make this happen: To seamlessly transport workloads, so it doesn’t matter where your workload resides—whether it’s on premises or in the cloud.
Three years ago, this was called cloudbursting. This idea stalled and fizzled because the technology hadn’t arrived. But now we’re able to seamlessly transport workloads and data across multiple clouds: Public and private.
Amazon is just starting down this path—where you can submit a workload to a queue and Amazon will understand your needs for specific types of storage, compute cycles, and memory. Amazon will also give you some options for creating this cloud environment.
These options may include a priority queue where you pay extra and move to a higher priority. But if you’re okay with waiting a few hours and don’t mind the workload being run somewhere else in the world, then you’ll be charged a different fee.
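The closest thing AWS offers to that priority-versus-price trade-off today is its Spot market. As a hedged sketch (using the boto3 SDK, with a placeholder AMI and bid price), the idea looks like this:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Bid for spare capacity: the workload runs whenever the going spot price
    # drops below our maximum, trading urgency for a lower cost.
    response = ec2.request_spot_instances(
        SpotPrice="0.05",  # the most we are willing to pay per instance-hour (placeholder)
        InstanceCount=1,
        Type="one-time",
        LaunchSpecification={
            "ImageId": "ami-12345678",   # placeholder AMI with the batch workload baked in
            "InstanceType": "c3.xlarge",
        },
    )
    print(response["SpotInstanceRequests"][0]["State"])

Pay the on-demand rate and the work starts now; bid low on the spot market and it runs whenever, and wherever, capacity is cheap.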
In the past, high-performance computing was physically located on-premises. But with the cloud, you remove the sunk capital costs. Instead, you get on-demand access, paid for based on the urgency and priority to your organization.
Cloud computing allows anyone to gain access to supercomputer-like power, without traveling anywhere. Projects that require massive amounts of big-data manipulation and storage, like space exploration, genome sequencing, or finding energy reserves can all benefit.
Not Everyone Has A Supercomputer In The Basement 
A few years ago, I was working with a company in Boston doing human-genome sequencing. This is the perfect example of the value of big data, because it’s going to affect you and me as human beings.
To run a simulation of genomic sequencing data, this organization needed time on the IBM Blue Gene supercomputer, one of the fastest computers in the world at the time. They actually had to physically travel to the machine’s location and wait for processing time to become available.
Now, fast forward to the present: You can contact Amazon or Rackspace, who now have this type of computing capability, and you can rent the time and processing power. This really illustrates what cloud computing is all about.
I can now offer that Blue Gene machine to someone who wants to access it for a little while, in the cloud.
These Changes Are Bringing A Tectonic Shift 
And it’s all being driven by hybrid cloud computing. From the business perspective, it’s the route to a seamless, data-centric world.
Things that were limited by on-site physical capacity and storage behind my four walls are no longer holding us back. Suddenly, I’m only limited by my imagination and my ability to build the business.