Object Storage 201 Q&A

Now available on demand, our recent live CSI Webcast, “Object Storage 201: Understanding Architectural Trade-Offs,” was a highly rated event that almost 250 people have seen to date. We did not have time to address all of the questions during the live event, so here are answers to the ones we missed. If you think of additional questions, please feel free to comment on this blog.

Q. In terms of load balancers, would you recommend a software approach using HAProxy on Linux or a hardware approach with proprietary appliances like F5 and NetScaler?

A. This really depends on your use case. If you need HA load balancers, or load balancers that can maintain sessions to particular nodes for performance, then you probably need commercial versions. If you just need a basic load balancer, using a software approach is good enough.

Q. With billions of objects, which erasure codes are more applicable in the long term? Reed-Solomon, where code words are very small, resulting in many billions of code words, or fountain-type codes such as LDPC, where one can use long code words to manage billions of objects more efficiently?

A. Tracking erasure code fragments has a higher cost than replication, but the trade-off is higher HDD utilization. Rateless coding lowers this tracking overhead because each fragment has equal value. Reed-Solomon, by contrast, requires knowledge of fragment placement for repair.
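
To make the capacity side of that trade-off concrete, here is a minimal Python sketch (the 3-way replication and 8+4 erasure layout are illustrative assumptions, not recommendations):

# Raw capacity consumed per logical byte stored, for two protection schemes.
# The specific parameters (3 copies, 8+4 fragments) are illustrative only.

def replication_overhead(copies):
    # N-way replication stores N full copies of every byte.
    return float(copies)

def erasure_overhead(data_fragments, parity_fragments):
    # A k+m erasure code stores (k+m)/k bytes per logical byte
    # and survives the loss of any m fragments.
    return (data_fragments + parity_fragments) / data_fragments

print(replication_overhead(3))   # 3.0 -- triple replication
print(erasure_overhead(8, 4))    # 1.5 -- 8+4 code, tolerates any 4 lost fragments

The erasure-coded layout tolerates the loss of any four fragments (versus two lost copies with triple replication) while consuming half the raw capacity; that is the higher HDD utilization referred to above, and the price is tracking twelve fragments per object instead of three whole copies.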

Q. What is the impact of having HDDs of varying capacity within the object store?  Does that affect hashing algorithms in any way?

A. The smallest logical storage unit is a volume. Because scale-out object storage does not stripe volumes, there is no impact. Hashing, which is used for location, is not aware of volume size, so a separate database is used on a per-volume basis to track free space. Hashing algorithms can be modified to suit the underlying disks. The problem is not so much whether they can be designed a priori for the underlying system, but the rigidity they introduce by tying placement very tightly to topology. That makes failure and exception handling hard.

Q. Do you think RAID6 is sufficient protection with these types of Object Storage Systems or do we need higher parity based Erasure codes?

A. RAID6 makes sense for a direct-attached storage solution where all drives in the RAID set can maintain sync. Unlike filesystems (with a few exceptions), scale-out object storage systems are “storage as a workload” systems that already have protection built in. So the question is what data protection method is used on solution X as opposed to solution Y. You must also think about what you are trying to do. Are you trying to protect against a single disk failure, a node failure, or a site failure? For disk failures, RAID is great, but not if you are trying to survive node or site failures. Site failure is an erasure coding sweet spot, but hard to solve from a deployment perspective.

Q. Can you briefly explain how the hash function decides the correct data placement among the available storage nodes?

A. Take a look at the following links: http://en.wikipedia.org/wiki/Consistent_hashing and https://swiftstack.com/openstack-swift/architecture/
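
To make the idea concrete, here is a minimal consistent-hashing sketch in Python (the node names and virtual-node count are illustrative assumptions). Each object maps to the first node at or after its hash position on the ring, so adding or removing a node relocates only the keys adjacent to it:

import bisect
import hashlib

class ConsistentHashRing:
    """A minimal consistent hash ring with virtual nodes (illustrative only)."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted list of (hash_value, node) points on the ring
        for node in nodes:
            for i in range(vnodes):
                point = self._hash(f"{node}-{i}")
                bisect.insort(self.ring, (point, node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, object_name):
        # The owning node is the first ring point at or after the object's hash,
        # wrapping around to the start of the ring if necessary.
        idx = bisect.bisect(self.ring, (self._hash(object_name), ""))
        return self.ring[idx % len(self.ring)][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("photos/vacation/img_0042.jpg"))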

Q. What do you consider to be a typical ratio of controller to storage nodes? Is it better to separate the two, or does it make sense to consolidate where a node is both controller and storage?

A. The flexibility of scale-out object storage makes these two components independently scalable. The systems we test all have separate controllers and storage nodes so we can test this independence. This is also very dependent on the object store technology you use. We know of some object stores that require 1 GB of RAM per TB of data, while others use a tenth of that. The compute requirement depends on whether you are using erasure coding, and which codes. There is no one answer.
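
As a back-of-the-envelope illustration of how those ratios drive controller sizing (all figures below are assumptions, not measurements from any particular product):

# Rough RAM sizing for a storage node using the two ratios mentioned above.
# The node capacity is an illustrative assumption.
usable_tb_per_node = 48  # e.g., 12 x 4 TB drives (hypothetical)

ram_heavy_gb = usable_tb_per_node * 1.0  # 1 GB RAM per TB of data
ram_light_gb = usable_tb_per_node * 0.1  # a tenth of that

print(ram_heavy_gb, ram_light_gb)  # 48.0 GB vs. 4.8 GB per node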

Q. Is the data stored in the storage repository interchangeable with other vendors’ controller units? For instance, can we load LTO tapes from vendor A’s library into vendor B’s library and have full access to the data?

A. The data stored in these systems is part of the “storage as a workload” principle, so the system metadata used to track objects is stored as a function within the controller. I would not expect any stored content to be interchangeable with another system architecture.

Q. Would you consider the Seagate Kinetic Open Storage Platform a radical architectural shift in how object storage can be done? Kinetic basically eliminates the storage server, POSIX and RAID – all of the “busy work” that storage servers are involved in today.

A. Ethernet drives with a key-value interface provide a new approach to designing object storage solutions. It remains to be seen how compelling they are for TCO and infrastructure availability.

Q. Will the inherent reduction in blast radius by the move towards Ethernet-interface HDDs be a major driver of the Ethernet HDD in object stores?

A. Yes. We define blast radius as the extent to which a compute failure impacts access to connected hard drives. As we lower the number of hard drives connected to each compute node, the blast radius is reduced. For Ethernet drives, you may need redundant Ethernet switches to minimize the blast radius. Blast radius can also be minimized through intelligent data placement in software.
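
A quick way to see the effect (the drive counts below are illustrative assumptions):

# Fraction of a cluster's drives made unreachable by a single compute failure.
# Drive counts are illustrative assumptions.

def blast_radius(drives_behind_failed_unit, total_drives):
    return drives_behind_failed_unit / total_drives

print(blast_radius(60, 600))  # 60-drive JBOD behind one server: 10% of the cluster
print(blast_radius(1, 600))   # one Ethernet drive with its own NIC: under 0.2%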

Join SNIA-CSI at the OpenStack Summit

Get the tips you need when implementing multiple cloud storage APIs. The SNIA Cloud Storage Initiative (CSI) is hosting a Birds of a Feather session, “Tips to Implementing Multiple Cloud Storage APIs,” at the OpenStack Summit in Paris on November 5th at 9:00 a.m. in Room 212/213.

There are three main object storage APIs today: OpenStack’s Swift (open but not standardized), Amazon’s S3 (proprietary yet a de facto standard) and SNIA’s CDMI (an ISO standard). Supporting all three might sound expensive or difficult, yet not doing so could be costly when customers want innovation, industry-standard solutions and interoperability in your product.

What about the similarities and differences between the APIs, and can they be reconciled? Can these APIs be effectively and efficiently implemented in a single product? I hope you’ll join us at this session to learn about and discuss various ways to cope with this situation. You will discover best practices and tips on how to implement these three protocols in your cloud storage solution.

Register now. I look forward to seeing you on November 5th at the OpenStack Summit.


New Webcast: Object Storage – Understanding Architectural Trade-Offs

The Cloud Storage Initiative (CSI) is excited to announce a live Webcast as part of the upcoming BrightTalk Cloud Storage Summit on October 16th: Object Storage 201: Understanding Architectural Trade-Offs. It’s a follow-up to the SNIA Ethernet Storage Forum’s Object Storage 101: Understanding the What, How and Why behind Object Storage Technologies.

Object-based storage systems are fast becoming one of the key building blocks for a cloud storage infrastructure. They address some of the shortcomings and provide an alternative to more traditional file- and block-based storage for unstructured data.

An object storage system must accommodate growth (and yes, the rumors are true – data growth is a huge and accelerating problem), be flexible in its provisioning, support multiple geographies and legal frameworks, and cope with the inevitable issues of resilience, performance and availability.

Register now for this Webcast. Experts from the SNIA Cloud Storage Initiative will discuss:

  • Object Storage Architectural Considerations
  • Replication and Erasure Encoding for resilience
  • Pros and Cons of Hash Tables and Key-Value Databases
  • And more…

This is a live presentation, so please bring your questions and we’ll do our very best to answer them. We hope you’ll join us on October 16th for an unbiased, deep dive into the design considerations for object storage systems.


What the Cloud Storage Initiative Is Doing at SDC

The SNIA Storage Developer Conference (SDC) is less than a week away. We’re looking forward to the conference and in particular want to make note of some exciting news and events that pertain to work the CSI is doing to promote standards that will increase the adoption, interoperability and portability of data stored in the cloud.

SDC Conference session: Introducing CDMI v1.1 – Tuesday, September 16th, 1:00 p.m., presented by David Slik. This session introduces the new CDMI 1.1, provides an overview of the capabilities the Technical Work Group has added to the standard, and covers what CDMI implementers need to know when moving from CDMI 1.0.2 to CDMI 1.1.

Cloud Interoperability Plugfest – Participants at the 12th Cloud Interoperability Plugfest will be testing the interoperability of their cloud storage interfaces based on CDMI. We always have a large showing of CDMI implementations at this event, but we are also looking for implementations of Amazon S3 and the OpenStack Swift, Cinder and Manila interfaces.

It’s not too late to register for this Plugfest. Find out how here.

SDC 2014 is going to be exciting and educational. It’s “one stop shopping” for IT professionals who focus on the tools, technologies and developments needed for understanding and implementing efficient data storage, management and security. The CSI hopes to see you there.


Getting Started with the CDMI Conformance Test Program

Together with our partner, TATA Consultancy Services, we recently hosted a great live Webcast to launch the Conformance Test Program (CTP) for the SNIA Cloud Data Management Interface (CDMI). CDMI is an ISO/IEC standard that offers end users simplicity and data storage interoperability across a wide range of cloud solutions. Interoperability and portability of data stored in the cloud have become top IT priorities. The CTP tests for conformance against the specification and provides purchasers of certified cloud storage solutions the assurance that these solutions meet CDMI interoperability standards.

Our Webcast is now available on demand. It details the benefits of the CDMI CTP program and explains how any cloud storage vendor or ISV can begin the CTP process. I encourage you to check it out to learn:

  • Key benefits of the CDMI standard for vendors and end users
  • Growing adoption of the CDMI standard
  • The suite of conformance tests required to achieve CDMI CTP certification
  • How to begin the CTP process

In addition to the Webcast replay, I encourage you to check out our CDMI CTP Frequently Asked Questions (FAQ). Getting started is easy. Just fill out the CTP form and you’ll be on your way.  

New Cloud Storage Meme – “Enterprise DropBox”

In a number of recent presentations on cloud storage, I have started by asking the audience, “How many of you use DropBox?” I have seen rooms where more than half of the hands go up. Of course, the next question I ask is, “Does your corporate IT department know about this?” – sheepish grins abound.

DropBox has been responsible for a significant fraction of the growth in the number of Amazon S3 objects – that’s where the files end up when you drop them into that icon on your laptop, smartphone or tablet. However, if that file is a corporate document, who is in charge of making sure the data and its storage meet corporate policies for protection, privacy, retention and security? Nobody.

Thus there is now growing interest in bringing that data back in-house and on premises so that business policies for the data can be enforced. This trending meme has been termed “Enterprise DropBox”. The basic idea is to offer an equivalent service and set of applications that let corporate IT users store their corporate documents where the IT department can manage them.

Is this “Private Cloud”? Well, yes, in that it uses capitalized corporate storage equipment. But it also sits “at the edge” of the corporate network, so as to be accessible by employees wherever they happen to be. In reality, Enterprise DropBox needs to be part of an overall Bring Your Own Device (BYOD) strategy to enable frictionless innovation and collaboration for employees.

Who are likely to be the players in this space? Virtualization vendors such as Citrix (with its ShareFile acquisition) and VMware (with its Project Octopus initiative) look to be first movers, along with startups such as Oxygen Cloud. It’s interesting that major storage vendors have not picked up on this yet.

Digging into how this works, you find that every vendor has a storage cloud with an HTTP-based object storage interface that is then exposed to the internet with secure protocols. Each interface is just different enough that there is no interoperability. In addition, each vendor develops, maintains and distributes its own set of client “apps” for operating systems, smartphones and tablets. A key feature is integration of authentication and authorization with the corporate LDAP directory, both for security and to reduce administrative overhead. Support for quotas and departmental chargeback is essential.

Looking down the road, however, this proliferation of proprietary clients and interfaces is already causing headaches for the poor device user, who may have several of these apps on their devices (all maxed out to their “free” limit). The burden on vendors is the development cost of creating and maintaining all those applications on all those different devices and operating systems. We’ve seen this before, however, in the early days of the Windows ecosystem. You used to have to purchase a separate FTP client for early Windows installations. Want NFS? A separate client purchase and install. Of course, now all those standard protocol clients are built into operating systems everywhere. Nobody thinks twice about it.

The same thing will eventually work its way out in the smart device category as well, but not until a standard protocol emerges that all the applications can use (as FTP and NFS did in the Windows case). The SNIA’s Cloud Data Management Interface (CDMI) is poised to meet this need as its adoption continues to accelerate. CDMI offers a RESTful HTTP object storage data path that is highly secure and has the features that corporate IT departments need in order to protect and secure data while meeting business policies. It enables each smart device to have a single embedded client to multiple clouds – both public and private. No more proliferation of little icons all going to separate clouds.

What will drive this evolution? You – the corporate customer of these vendor offerings. You can ask the Enterprise DropBox vendors simply to “show me CDMI support in your roadmap”. Educate your employees about choosing smart devices that support the CDMI standard natively. Only then will the market forces compel the vendors to realize that there is no value in locking in their customers. Instead they can differentiate on the innovation and execution that separates them from their competitors. Adoption of a standard such as CDMI will actually accelerate the growth of the entire market as the existing friction between clouds gets ground down and smoothed out by virtue of this adoption.

Validating CDMI Features – Metadata Search

Here we go again: another cloud offering announcement validates an existing standardized feature of CDMI. The new Amazon CloudSearch offering lets you store structured metadata in the cloud and perform queries on the metadata. They missed an opportunity, however, to integrate this with their existing cloud object storage offering. After all, if you already have object storage, why not put the metadata with the data object instead of separating it out in a separate cloud?

CDMI lets you put the user metadata directly into the storage object, where it is protected, backed up, archived and retained along with the actual data. CDMI’s rich query functions are then able to find the storage object based on the values of the metadata without talking to a separate cloud offering with a new, proprietary API.

CDMI standardizes a Query Queue that allows the client to create a scope specification (equivalent to a WHERE clause) to find specific objects that match the criteria, and a results specification (equivalent to a SELECT clause) that determines the elements of the object that are returned for each match. Results are placed in a CDMI queue object and can be processed one at a time, or in bulk. This powerful feature allows any storage cloud that has a search feature to expose it in a standard manner for interoperability between clouds.

An example of the metadata associated with a query queue is as follows:

{
     "metadata" : {
          "cdmi_queue_type" : "cdmi_query_queue",
          "cdmi_scope_specification" : [
               {
                    "domainURI" : "== /cdmi_domains/MyDomain/",
                    "parentURI" : "starts /MyMusic",
                    "metadata" : {
                         "artist" : "*Bono*"
                    }
               }
          ],
          "cdmi_results_specification": {
               "objectID" : "",
               "metadata" : {
                    "title" : ""
               }
          }
     }
}

When results are stored in a query queue, each enqueued value consists of a JSON object of MIME-type “application/json”. This JSON object contains the specified values requested in the cdmi_results_specification of the query queue metadata.

An example of a query result JSON object is as follows:

{
     "objectID" : "00007E7F0010EB9092B29F6CD6AD6824",
     "metadata" : {
          "title" : "Vertigo"
     }
}

Thus, if you are using your storage cloud to store music files, for example, all of the metadata for each MP3 object can be stored right along with the object, and CDMI’s powerful query mechanisms can be used to find the files you are interested in without invoking a separate search cloud with disassociated metadata.
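
As a rough sketch of how a client might create such a query queue over CDMI’s RESTful HTTP interface (the endpoint, credentials and queue path here are hypothetical; the body mirrors the query-queue metadata shown above):

import requests  # third-party HTTP client (pip install requests)

CLOUD = "https://cloud.example.com"  # hypothetical CDMI endpoint
AUTH = ("alice", "secret")           # hypothetical credentials

# Query-queue metadata: find objects under /MyMusic whose artist matches
# *Bono*, returning each match's object ID and title (as in the example above).
query_queue = {
    "metadata": {
        "cdmi_queue_type": "cdmi_query_queue",
        "cdmi_scope_specification": [
            {
                "domainURI": "== /cdmi_domains/MyDomain/",
                "parentURI": "starts /MyMusic",
                "metadata": {"artist": "*Bono*"},
            }
        ],
        "cdmi_results_specification": {
            "objectID": "",
            "metadata": {"title": ""},
        },
    }
}

response = requests.put(
    f"{CLOUD}/cdmi_queues/music_query",  # hypothetical queue path
    json=query_queue,
    headers={
        "Content-Type": "application/cdmi-queue",
        "X-CDMI-Specification-Version": "1.0.2",
    },
    auth=AUTH,
)
response.raise_for_status()
# Once the cloud processes the queue, each dequeued value is a JSON object
# shaped like the query-result example above.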

Validating CDMI features – Object Expiration

Validating yet another feature of the CDMI standard (see previous post for an earlier one), Amazon announced their Object Expiration feature for S3. While not a new concept for storage interfaces, it is the first cloud implementation of this capability that I know of. The idea is simply to have the server side of the cloud do object deletion on your behalf automatically, once the lifecycle of that data has completed.

As part of overall Data Lifecycle Management, object deletion is the most common terminal state for data. CDMI has standardized the interface for this capability in cloud storage with a comprehensive Retention and Hold Management feature (Chapter 17). The granularity of the standard CDMI feature is finer than that of the S3 feature, in that it allows for retention and deletion on individual objects (although you could accomplish this in S3 with prefix = object name, it doesn’t scale using the header fields that Amazon uses). The S3 prefix mechanism can be used to scope the expiration policy down to individual “directories” (forward-slash-terminated parts of object names), and CDMI allows this as well for the semantically equivalent CDMI sub-containers.
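
For contrast with S3’s header-based mechanism, here is a sketch of what placing standard CDMI retention metadata on a single object might look like (the endpoint, object path, policy tag and dates are hypothetical; the metadata names follow Chapter 17 of the spec, with the retention period expressed as an ISO 8601 time interval):

import requests  # third-party HTTP client

# Hypothetical CDMI endpoint and object path.
response = requests.put(
    "https://cloud.example.com/records/report-2011.pdf",
    json={
        "metadata": {
            "cdmi_retention_id": "fy2011-records",  # illustrative policy tag
            # Retention period as an ISO 8601 time interval (start/end):
            "cdmi_retention_period": "2011-01-01T00:00:00Z/2018-01-01T00:00:00Z",
        }
    },
    headers={
        "Content-Type": "application/cdmi-object",
        "X-CDMI-Specification-Version": "1.0.2",
    },
    auth=("alice", "secret"),  # hypothetical credentials
)
response.raise_for_status()
# Until the interval ends, the cloud should refuse modification or deletion;
# a separate hold (cdmi_hold_id) can pin the object indefinitely for litigation.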

Complying with Regulations

Although the ability to delete objects when their lifecycle completes is useful, it is insufficient for complying with regulations such as Sarbanes-Oxley, or for eDiscovery needs during litigation. Most enterprises need to show that the data has not been modified during its lifecycle. In addition, if a subpoena is issued for the data, you DO NOT want the object deleted, even if its retention period has expired – this can cost you millions of dollars in a pending court case…

The CDMI standard anticipates that storage clouds will want to offer more robust, full-featured retention and hold management for corporate data, and that a standard means of achieving it will be needed. Take a quick look at Chapter 17 (it’s quite compact while being comprehensive) and investigate using the standard way to achieve this function. If you are a cloud vendor trying to emulate the S3 interface, good luck to you – Amazon will continue to expand the definition of what “S3” means (like adding this feature), forcing you to constantly modify your cloud’s storage interface to keep up (as well as requiring you to reverse-engineer any bugs that exist).

Validating CDMI features – Server Side Encryption

One of the features of many storage systems and even disk drives is the ability to encrypt the data at rest. This protects against a specific threat – the disk drive going out the back door for replacement or repair. So it was only a matter of time before we would see this important feature start to be offered for Cloud Storage as well. Well, today Amazon announced their Server Side Encryption capability for their S3 cloud offering. This feature was anticipated by the CDMI standard interface when it was finalized as a standard back in April 2010.

Standard Server Side Encryption

So, how does CDMI standardize this feature? Well, as usual, it starts with finding out whether the cloud actually supports the feature and what choices are available. In CDMI, this is done through the capabilities resource – a kind of catalog or discovery mechanism. By fetching the capabilities resource for objects, containers, domains or queues, you can tell whether server-side encryption of data at rest is available from the cloud offering (yes, this is granular for a reason). The actual capability name is cdmi_encryption (see section 12.1.3). This indicates that the cloud can encrypt the data at rest, and also indicates what algorithms are available to do this encryption. The algorithms are expressed in the form ALGORITHM_MODE_KEYLENGTH, where:

  • “ALGORITHM” is the encryption algorithm (e.g., “AES” or “3DES”).
  • “MODE” is the mode of operation (e.g., “XTS”, “CBC”, or “CTR”).
  • “KEYLENGTH” is the key size (e.g., “128”, “192”, “256”).

So the cloud can offer the user several different algorithms of different strengths and types, or if it only offers a single algorithm (such as the Amazon offering), the cloud storage client can at least understand what that algorithm is.

So how does the user tell the cloud that she wants her data encrypted? Amazon does this with a proprietary header, of course, but CDMI does it with standard Data System Metadata that can be placed on any object, container of objects, queue or domain. This metadata is called cdmi_encryption (see section 16.4) and contains merely a string with a value chosen from the list of available algorithms in the corresponding capability. There is also a cdmi_encryption_provided metadata value to tell the client whether or not their data is being encrypted by the cloud.

Lastly, there is a system-wide capability called cdmi_security_encryption (section 12.1.1) that tells the user whether the cloud does server side encryption at all.
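
Putting discovery and request together, a client sketch might look like this (the endpoint, container name and returned algorithm list are hypothetical; the capability and metadata names are from the spec sections cited above):

import requests  # third-party HTTP client

CLOUD = "https://cloud.example.com"  # hypothetical CDMI endpoint
CDMI_VERSION = {"X-CDMI-Specification-Version": "1.0.2"}
AUTH = ("alice", "secret")           # hypothetical credentials

# 1. Discovery: fetch the container capabilities resource and read the
#    cdmi_encryption capability, a list of ALGORITHM_MODE_KEYLENGTH strings.
caps = requests.get(
    f"{CLOUD}/cdmi_capabilities/container/",
    headers={"Accept": "application/cdmi-capability", **CDMI_VERSION},
    auth=AUTH,
).json()
algorithms = caps.get("capabilities", {}).get("cdmi_encryption", [])
print(algorithms)  # e.g., ["AES_XTS_256", "AES_CBC_128"] (illustrative)

# 2. Request: set the cdmi_encryption data system metadata on a container
#    to ask for one of the advertised algorithms.
if "AES_XTS_256" in algorithms:
    requests.put(
        f"{CLOUD}/encrypted_container/",  # hypothetical container
        json={"metadata": {"cdmi_encryption": "AES_XTS_256"}},
        headers={"Content-Type": "application/cdmi-container", **CDMI_VERSION},
        auth=AUTH,
    ).raise_for_status()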

Server-side encryption is an important capability for cloud storage offerings to provide, which is why CDMI standardized it in advance of cloud offerings becoming available. We expect more clouds to offer this in the future, and customers will soon realize that, without CDMI implementations, these offerings lock them in and impose a high cost of exiting that vendor.

Plan to Attend Cloud Burst and SDC

Cloud Storage Developers will be Converging on Santa Clara in September for the Storage Developer Conference and the Cloud Burst Event

Cloud Burst Event

There are a multitude of events dedicated to cloud computing, but where can you go to find out specifically about cloud storage? The 2011 SNIA Cloud Burst Summit educates and offers insight into this fast-growing market segment. Come hear from industry luminaries, see live demonstrations, and talk to technology vendors about how to get started with cloud storage.

The audience for the SNIA Cloud Burst Summit is IT storage professionals and related colleagues who are looking to cloud storage as a solution for their IT environments. The day’s agenda will be packed with presentations from cloud industry luminaries, the latest cloud development panel discussions, a focus on cloud backup, and a cocktail networking opportunity in the evening.

Check out the Agenda and Register Today…


Storage Developer Conference

The SNIA Storage Developer Conference is the premier event for developers of cloud storage, filesystems and storage technologies. This year there is a full cloud track on the agenda, as well as some great speakers. Some examples include:

Programming the Cloud

  • CDMI for Cloud IPC – David Slik, Technical Director, Object Storage, NetApp
  • Open Source Droplet Library with CDMI Support – Giorgio Regni, CTO, Scality
  • CDMI Federations, Year 2 – David Slik, Technical Director, Object Storage, NetApp
  • CDMI Retention Improvements – Priya NC, Principal Software Engineer, EMC Data Storage Systems
  • CDMI Conformance and Performance Testing – David Slik, Technical Director, Object Storage, NetApp
  • Use of Storage Security in the Cloud – David Dodgson, Software Engineer, Unisys
  • Authenticating Cloud Storage with Distributed Keys – Jason Resch, Senior Software Engineer, Cleversafe
  • Resilience at Scale in the Distributed Storage Cloud – Alma Riska, Consultant Software Engineer, EMC
  • Changing Requirements for Distributed File Systems in Cloud Storage – Wesley Leggette, Cleversafe, Inc.
  • Best Practices in Designing Cloud Storage Based Archival Solution – Sreenidhi Iyangar, Senior Technical Lead, EMC
  • Tape’s Role in the Cloud – Chris Marsh, Market Development Manager, Spectra Logic