I had heard about this EMC startup but didn’t know much about what they really do. My friend Arik Blum from HP takes the time to send interesting technology updates to his own private distribution list.
Atmos is COS, “cloud optimised storage”, with web services such as SOAP and REST for access. Cloud Optimized Storage (COS) systems are geographically dispersed yet managed as a single entity.
Information inside the Atmos repository is stored as objects. Policies can be created to act on those objects, allowing Atmos to apply different functionality and different service levels to different types of users and their data – for example replication, de-duplication, and deletion.
Atmos is designed for multi-Petabyte deployments. There are no LUNs. There is no RAID. There are only objects and metadata: billions of objects, globally distributed, with policy-based information management.
As new data gets written into the Atmos infrastructure, it gets synchronously mirrored to N locations (depending on the policy). The goal for Atmos was to provide a low-cost bulk storage system for emerging markets like Web 2.0 companies and other industries with lots of user-generated content.
- From a hardware perspective, there’s nothing radical here. Drives are all SATA-II 7.2K 1TB capacity.
- Front-end connectivity is all IP based, which presumably includes replication too.
A few open questions are pending regarding Atmos:
· What resiliency is there to cope with component (e.g. a hard disk) failure?
· What is the real throughput for replication between nodes?
· Where is the metadata stored and how is it kept concurrent?
· Where is the rich metadata going to come from?
· Is 1Gb/s enough to replicate my data to a remote site synchronously?
· Is there battery-backed write cache to protect in-flight data against a hardware failure?
· How long will it take to replicate a 1TB drive over IP?
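On that last question, a quick back-of-the-envelope (assuming a dedicated 1Gb/s link running at full line rate with zero protocol overhead) suggests the answer is measured in hours:

```python
# Time to push 1TB over a 1Gb/s link, best case: line rate, zero overhead.
tb_bits = 1e12 * 8        # 1TB = 10^12 bytes = 8 x 10^12 bits
link_bps = 1e9            # 1Gb/s

seconds = tb_bits / link_bps
print(f"{seconds:,.0f} s = {seconds / 3600:.1f} hours")   # 8,000 s = 2.2 hours
```

Add TCP overhead and contention and a full-drive rebuild over IP takes well over two hours, which also puts a bound on how “synchronous” replication over 1Gb/s can realistically be.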
With the announcement of Atmos today, EMC has also created a new acronym: Cloud Optimized Storage (COS). Think of COS in terms of the evolution of three earlier storage system acronyms over the last ten years:
SAN -> NAS -> CAS -> COS.
What follows is a brief description of this evolution in terms of value to the customer:
SAN Value = Centralized, secure multi-tenancy for blocks.
NAS Value = Centralized, secure multi-tenancy for files.
CAS Value = Centralized, secure multi-tenancy for objects (content + metadata).
COS Value = Globalized, secure multi-tenancy for content with rich policies.
In my mind the two distinguishing values that Cloud Optimized Storage adds to the party are captured by the words “Globalized” and “rich policies”.
- COS implies that the storage is globally accessible. The conventional understanding behind SAN, NAS, and CAS systems was that they were “frames” or “racks” that lived within the walls of a data center. Cloud optimized storage systems are geographically dispersed yet managed as a single entity.
- COS also implies that rich metadata glues everything together. Centera introduced the option of appending metadata to content; COS introduces the imperative of attaching policies to content.
The Special Sauce of COS
Rich metadata in the form of policies is the special sauce behind Atmos and is the reason for the creation of a new class of storage system. Atmos contains five “built-in” policies that can be attached to content:
- Replication
- Compression
- Spin-down
- Object de-dup
- Versioning
When any of these policies are attached to Atmos, COS techniques are used to automatically move the content around the globe to the locations that provide those services. Customers can place content into Atmos (using REST/SOAP or CIFS/NFS/IFS) and then associate that content with one of the built-in policies. The Atmos architecture also allows for extensible policies. Customers that want to specify policies outside of those natively offered by Atmos can develop their own. For example, picture a customer that wants to add “Cheap Power” as a policy; Atmos can be programmed to globally move content to a location with the cheapest power rates.
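As a rough illustration of what placing content with a policy could look like from the client side, here is a hedged sketch in Python; the endpoint, authentication scheme, and header names are assumptions made for illustration, not documented Atmos API details:

```python
import requests  # third-party HTTP client: pip install requests

ATMOS_ENDPOINT = "https://atmos.example.com/rest/objects"  # hypothetical endpoint

def store_with_policy(data: bytes, policy: str, token: str) -> str:
    """Create an object and tag it with metadata that a built-in policy
    (e.g. 'replication' or 'compression') can match against."""
    resp = requests.post(
        ATMOS_ENDPOINT,
        data=data,
        headers={
            "Authorization": f"Bearer {token}",       # assumed auth scheme
            "Content-Type": "application/octet-stream",
            "x-emc-meta": f"policy={policy}",         # illustrative metadata header
        },
    )
    resp.raise_for_status()
    return resp.headers["Location"]                   # ID/URL of the new object

# e.g. object_url = store_with_policy(photo_bytes, "replication", token="...")
```

The key point is that the client only names a policy; where the content physically lands is the system's decision.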
Let’s Stop Here
When it comes to how the Atmos software has been built, there’s much more to say. I’ll be back with more detail on its internals, and I’ll also do some comparative analysis of COS against conventional SAN/NAS/CAS technologies. Covering those items in this post, however, would take the emphasis away from this straightforward definition of COS:
Cloud Optimized Storage: global storage with a policy focus.
EMC Atmos is EMC’s first Cloud Optimised Storage offering, designed for policy-based information storage, distribution, and retrieval at global scale. GA code shipped at the end of June, and customers and partners have been deploying Atmos repositories in their own environments since the second half of ’08.
While some competitors were flapping their gums and asking whatever crazy questions came into their heads, EMC was shipping a product whose team didn’t miss a single milestone and met their ship date. Now that the marketing machine has spun up and the EMC sales sledgehammer is about to drive those competitors into the ground, I’ll be following their backtracking with some enthusiasm.
So what is EMC Atmos? What Atmos isn’t is a clustered file system or a warmed-over NAS offering, clustered or otherwise. Atmos(phere) was designed from the ground up by the Cloud Infrastructure and Services Division (CISD) with a number of distinct characteristics.
- Information inside the Atmos repository is stored as objects. Policies can be created to act on those objects, and this is a key differentiator: it allows Atmos to apply different functionality and different service levels to different types of users and their data. That is managing information, which is what we should be doing, as opposed to wrangling blocks and file systems as we tend to do.
- There is no concept of GBs or TBs in EMC Atmos; those units of storage capacity are too small. Atmos is designed for multi-Petabyte deployments. There are no LUNs. There is no RAID. There are only objects and metadata.
- There is a unified namespace. Atmos operates not on individual information silos but as a single repository regardless of how many Petabytes containing how many billions of objects are in use spread across whatever number of locations available to who knows how many users.
- There is a single console for management regardless of how many locations the object repository is distributed across. This global-scale approach means that Atmos had to be an autonomic system, automatically reacting to environmental and workload changes, as well as failures, to ensure global availability.
What those traits should highlight for you is that Atmos isn’t a SAN offering, isn’t a NAS offering, and isn’t a CAS offering either. It’s a COS offering: cloud optimised storage, with web services such as SOAP and REST for access.
There’s a lot of info on Atmos on the various blogs and up on EMC.com, but this entry is about “Building EMC Atmos”, and for that information I went to one of the Atmos architects, Dr. Patrick Eaton. Patrick Eaton received his PhD from Berkeley and was one of the primary members of Professor John Kubiatowicz’s OceanStore project. As I learned from speaking to him, he’s been thinking about problems like this for a number of years, and if he wasn’t building globally distributed storage systems he’d be indulging his passion for music, working in digital sound for a company like Yamaha or Korg.
With a tinge of regret he tells me that these days he’s more of a consumer than a creator of music, but as he’s been busy building something new from the ground up, that’s understandable.
In person he’s taller and younger than I had expected, he smiles easily and comes across as an open personality. Clearly not one of these academic types who had their sense of humour surgically removed before they submitted their thesis.
As I was to learn, Atmos started with five people out at the EMC Cambridge facility, working on its floor-to-ceiling whiteboards, looking to solve a problem.
“Fundamentally this was a distributed systems problem. How do you take a loose collection of services distributed across a wide area and make them operate as you want them to operate?”
Fortunately for me this isn’t a question I have to answer, or I’d need more than the floor-to-ceiling whiteboards. He pauses for a split second before moving on.
“EMC is really good at selling high-end storage to really high-end people. If you can drop tens of dollars per GB on a storage system, man, does EMC have offerings for you. But data growth is continuing to explode, and not everybody has data that justifies that level of expenditure, or the financial resources to spend that much money on storage. So EMC was coming across a customer segment for whom they didn’t have an offering, and the goal for Atmos was to provide a low-cost bulk storage system for these emerging markets, like Web 2.0 companies or other industries with lots of user-generated content.
Yes, you can put that stuff on regular SAN or NAS systems, and that’s what customers have been doing, as the only other option was to start writing and maintaining their own storage software and building their own storage hardware. That’s far from ideal, as the value of these companies is in their applications and the services those applications provide.
What we needed to do was provide a terabyte at something like ten or more times cheaper than existing SAN or NAS storage systems can offer. That is the problem Atmos was designed to solve, and a key part of the product vision comes from the policy-driven features of Atmos.
Yes, you’re targeting the bulk storage market, the TME and Web 2.0 spaces with those mountains of user-generated content, but people want to use that storage in very different ways.
Some people want to have one data centre, some want two, others want many more.
Some need to support different types of workloads and various object sizes, control where they locate specific objects, and keep those objects close to their customers regardless of where on the planet a customer is located relative to where the data was first stored.
At the core of the Atmos design is how we allow customers to define policies for how data actually hits disk. There are no administrators saying “Joe’s photos should be on this particular piece of spinning rust”; rather, they write policies describing how Joe is a subscription customer, so his files require a certain number of copies for backup and a certain rolling retention policy in case he cancels his account, and thus should be in this data centre here and not in one thousands of miles away.
But if Joe packs up the family and the dog and moves across the country, his data may be replicated to the data centre now closest to him, depending on the policies applied to his files.
Information management is something EMC talks about a lot, so providing a storage solution designed with policy-based information management at its core is a big thing we wanted to do with Atmos. You’re not just storing information; you’re replicating it to where it’s needed and putting it as close to the user as possible. You’re compressing it, de-duplicating it, or deleting it depending on what policies are applied to it, and if it hasn’t been accessed in a while you can even spin down the drives inactive objects are stored on to save power.
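To make that concrete, a policy like the one he describes for Joe might be declared as something like the following sketch. The schema is entirely hypothetical; we didn’t go into Atmos’s actual policy language.

```python
# Hypothetical policy for the "Joe is a subscription customer" example.
# Field names and semantics are illustrative, not the real Atmos schema.
subscriber_policy = {
    "selector": {"customer_tier": "subscription"},  # which objects it applies to
    "replicas": 3,                                  # copies kept for backup
    "retention_days": 90,                           # rolling retention if he cancels
    "placement": "nearest-data-centre",             # keep data close to the customer
    "on_idle": "spin_down",                         # idle objects can land on spun-down disks
}
```

The point being that when Joe moves across the country, only the placement evaluation changes; nobody has to edit the policy.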
Multi-tenancy, could we talk about that a bit more? Could I offer storage as a service to different users or organisations?
“Yes, you could. Multi-tenancy means that Atmos can support many different tenants with logical isolation. Each tenant can have their own private namespace under the Atmos namespace, but tenants are not aware of other tenants or of the objects belonging to those tenants.
You could be providing services to users out on the Internet and hosting application test and dev as well as providing services to your internal business units, but none of those tenants would know about each other.”
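A minimal sketch of what that logical isolation implies, if you assume (and this is my assumption, not a described Atmos internal) that tenant namespaces are simply scoped subtrees of the one global namespace:

```python
# Tenant-scoped namespace resolution: each tenant sees only its own subtree.
def resolve(tenant: str, path: str) -> str:
    """Map a tenant-relative path into the single global namespace."""
    if ".." in path.split("/"):
        raise ValueError("path escapes the tenant's namespace")
    return f"/{tenant}{path}"

# Two tenants using the same path still land on different objects:
print(resolve("acme", "/photos/joe.jpg"))    # /acme/photos/joe.jpg
print(resolve("globex", "/photos/joe.jpg"))  # /globex/photos/joe.jpg
```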
We were talking about this being a low-cost solution; what’s low cost at the scale we’re discussing here? Sure, there’s capacity cost, but it’s not just that…
“Well, not only does the initial cost of delivering the product to the doorstep have to be low, it also has to be something the customer can maintain very easily. We’re talking about the Petabyte range when we’re talking about deploying this, so one of the key design elements was how to provide a customer-installable, configurable, and maintainable implementation.
Going back to the traditional EMC model of “We’ll make sure it works but you’re going to pay for it”, where parts show up at your door with a service engineer attached: that shoots the entire low-cost target out of the water if it happens more than a few times a year.
That’s why a lot of the installation, configuration and maintenance can be done by the customer themselves.
Low cost, low touch, incredible scale and density. Billions of objects globally distributed with policy-based information management. Petabytes of storage which could be in the same room or distributed around the world, but with a single point of management. Those were some of the design goals.”
Okay, so you’ve built and shipped Atmos. We were talking about having this pre-announcement chat back when you were just about to head off on holiday this past summer, right after the code went GA. So what have you learned from building a product as opposed to working on a project?
“I learned a lot about managing cross-continent teams. Maybe 50% of our developers and 80% of our QA are split between Beijing and Shanghai in China. That’s a 12-hour difference, which can be challenging since there’s no overlap during the day, and there are cultural communication differences to factor in.
When the group was smaller I was exposed more to customer interactions, and it was always interesting to get feedback and find out how they planned on using Atmos as opposed to how you think they’ll use it. Now that it’s up and running in their environments I get a different kind of feedback, as I’m watching how they’re actually using the product in production.
I was also blessed to join this group when there were five of us. I’ve been able to grow with the group and assume some responsibility and some leadership, which has stretched me, and it’s a stretching that a lot of freshly minted PhDs don’t get so early in their careers. It was pretty natural, when there were five people here and maybe ten over there, that I could take well-defined pieces of the system and lead them through implementation. Now that we’ve grown to over a hundred people, you can’t just take the people who’ve been there the longest and have them do that.
I’ve been really blessed that way, and really fortunate to have been able to join an organization in its infancy and grow with it. The opportunity here has really been amazing.”
You moved from California to Massachusetts to join EMC and build Atmos from the ground up. How did the move to the east coast turn out for you?
“We love it here. My wife and I are from the mid-west, which does have winters, so the seasons have made a welcome return. California has beautiful weather, but it can start to feel like Groundhog Day, while here the seasons are refreshing. The city is nice, and I tell my manager all the time that we need to recruit more in California, as there aren’t a whole lot of places in the US you can draw from and, with a straight face, tell people that Boston has more affordable houses and better commutes.
Californians you can say that to, and it’s true.”
Thursday, 13 November 2008
http://storagearchitect.blogspot.com/2008/11/obligatory-atmos-post.html
I feel drawn to post on the details of Atmos and give my opinion on whether it is good, bad, innovative or not. However, there’s one small problem. Normally I comment on things that I’ve touched – installed, used, configured, broken, etc. – but Atmos doesn’t fit this model, so my comments are based on the marketing information EMC have provided to date. Unfortunately the devil is in the detail, and without the ability to “kick the tyres”, so to speak, my opinions can only be limited and somewhat biased by the information I have. Nevertheless, let’s have a go.
Hardware
From a hardware perspective, there’s nothing radical here. Drives are all SATA-II 7.2K 1TB capacity. This is the same as the much-maligned IBM/XIV Nextra, which also offers only one drive size (I seem to remember EMC a while back picking this up as an issue with XIV). In terms of density, the highest configuration (WS1-360) offers 360 drives in a single 44U rack. Compare this with Copan, which provides up to 896 drives maximum (although you’re not restricted to that size).
To quote Storagezilla: “There are no LUNs. There is no RAID.” So exactly how is data stored on disk? What methods are deployed to ensure data is not lost due to a physical issue?
What is the storage overhead of that deployment?
Steve Todd tells us:
“Atmos contains five “built-in” policies that can be attached to content:
· Replication
· Compression
· Spin-down
· Object de-dup
· Versioning
When any of these policies are attached to Atmos, COS techniques are used to automatically move the content around the globe to the locations that provide those services.”
So, does that mean Atmos is relying on replication of data to another node as a replacement for hardware protection? I would feel mighty uncomfortable thinking I needed to wait for data to replicate before I had some form of hardware-based redundancy – even XIV has that.
Worse still, do I need to buy at least two arrays to guarantee data protection?
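The overhead question above is easy to put numbers on. Whatever Atmos actually does, full-copy replication costs far more raw capacity than parity RAID, which is presumably part of the price paid for geo-dispersal:

```python
# Usable fraction of raw capacity under different protection schemes.
def replication_usable(copies: int) -> float:
    return 1 / copies                 # N full copies of every object

def raid5_usable(drives: int) -> float:
    return (drives - 1) / drives      # one drive's worth of parity per set

print(f"2-way replication: {replication_usable(2):.0%} usable")  # 50%
print(f"3-way replication: {replication_usable(3):.0%} usable")  # 33%
print(f"8-drive RAID-5:    {raid5_usable(8):.0%} usable")        # 88%
```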
Front-end connectivity is all IP-based, which presumably includes replication too, although there are no details of replication port counts or even IP port counts, other than an indication of 10Gb availability if required.
One feature quoted in all the literature is Spin Down. Presumably this means spinning down drives to reduce power consumption, but spin-down depends on data layout, and there are two issues. If you’ve designed your system for performance, data from a single file may be spread across many spindles – how do you spin down drives when they all potentially contain active data? If you’ve laid data out on single drives, then you need to move all the inactive data to specific spindles before you can spin them down – which means putting the active data on a smaller number of spindles, impacting performance and, in the case of a disk failure, redundancy. The way in which Atmos does its data layout is something you should know, because if Barry is right, his XIV issue could equally apply to Atmos.
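To see the layout problem in miniature: before anything can spin down, cold objects have to be segregated onto their own spindles, roughly as in this sketch (the idle threshold and the mechanism are my assumptions, not anything EMC has described):

```python
import time

def partition_for_spin_down(objects: dict, idle_days: int = 30):
    """Split objects into hot and cold sets, where `objects` maps
    object ID -> last-access time (seconds since the epoch)."""
    cutoff = time.time() - idle_days * 86400
    hot = {oid for oid, atime in objects.items() if atime >= cutoff}
    cold = set(objects) - hot
    # The cold set must then be migrated onto dedicated drives before those
    # drives can spin down -- concentrating the active set on fewer spindles,
    # which is exactly the performance/redundancy trade-off described above.
    return hot, cold
```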
So to summarise, there’s nothing radical in the hardware at all. It’s all commodity-type hardware – just big quantities of storage. Obviously this is by design, and perhaps it’s a good thing, as unstructured data doesn’t need performance. Certainly, as quoted by ’zilla, the aim was to provide large volumes of low-cost storage, and compared to the competition Atmos does an average job of that.
Software
This is where things get more interesting, and to be fair, the EMC message is that this is a software play. Here are some of the highlights:
Unified Namespace
To quote ‘zilla again:
“There is a unified namespace. Atmos operates not on individual information silos but as a single repository regardless of how many Petabytes containing how many billions of objects are in use spread across whatever number of locations available to who knows how many users.”
I’ve highlighted a few words here because I think this quote is interesting; the implication is that neither the volume of data nor its geographical dispersion has any impact.
If that’s the case, (a) how big is this metadata repository, (b) how can I replicate it, and (c) how can I trust that it is concurrent and accurate in each location?
I agree that a unified namespace is essential; however, there are already plenty of implementations of this technology out there, so what’s new with the Atmos version? I would want to really test the premise that EMC can provide a concurrent, consistent namespace across the globe without significant performance or capacity impact.
Metadata & Policies
It is true that the major hassle with unstructured data is managing it with metadata-based policies, and this feature of Atmos is a good thing. What’s not clear to me is where this metadata comes from. I can get plenty of metadata today from my unstructured data: file name, file type, size, creation date, last accessed, file extension and so on. There are plenty of products on the market today which can apply rules and policies based on this metadata; however, to do anything useful, more detailed metadata is needed.
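The metadata I’m describing is the sort of thing any filesystem hands you for free, as below; the “rich” metadata has to come from somewhere else:

```python
import os
import time

# The metadata every filesystem already exposes for an unstructured file:
st = os.stat("report.pdf")               # path is illustrative
print("size (bytes):", st.st_size)
print("modified:    ", time.ctime(st.st_mtime))
print("accessed:    ", time.ctime(st.st_atime))
# Customer tier, retention class, placement hints -- the "rich" metadata --
# isn't here; it has to be supplied by the application via the object API.
```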
Presumably this is what the statement from Steve means: “COS also implies that rich metadata glues everything together”. But where does this rich metadata come from?
Centera effectively required programming to its API, and that’s where REST/SOAP would come in with Atmos. Unfortunately, unless there’s a good method for creating the rich metadata, Atmos is no better than the other unstructured-data technology out there.
To quote Steve again: “Rich metadata in the form of policies is the special sauce behind Atmos and is the reason for the creation of a new class of storage system.”
Yes, it sure is, but where is this going to come from?
Finally, let’s talk again about some of the built-in policies Atmos has:
· Replication
· Compression
· Spin-down
· Object de-dup
· Versioning
All of these exist in other products and are not innovative. Extensible policies are more interesting, although I suspect that is not a unique feature either.
On reflection I may be being a little harsh on Atmos; however, EMC have stated that Atmos represents a new paradigm in the storage of data. If you make a claim like that, you need to back it up.
So, still to be answered:
· What resiliency is there to cope with component (e.g. HDD) failure?
· What is the real throughput for replication between nodes?
· Where is the metadata stored and how is it kept concurrent?
· Where is the rich metadata going to come from?
Oh, and I’d be happy to kick the tyres if the offer was made.