Deciphering the Economics of Building a Cloud Storage Architecture

Building a cloud storage architecture requires storage vendors, cloud service providers, and large enterprises alike to consider new technical and economic paradigms in order to enable a flexible and cost-efficient architecture. That’s why the SNIA Cloud Storage Technologies Initiative is hosting a live webcast, “Create a Smart and More Economic Cloud Storage Architecture” on November 7th.

From an economic perspective, cloud infrastructure is often procured in the traditional way – prepay for expected future storage needs and over-provision for unexpected changes in demand. This requires large capital expenditures and slows cost recovery, which depends on fluctuating customer adoption. Giving large enterprises and cloud service providers flexibility in the procurement model for their storage allows them to more closely align the expenditure on infrastructure resources with the cost recovery from customers, optimizing the use of both CapEx and OpEx budgets.
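
As a back-of-the-envelope illustration of that trade-off, the sketch below compares prepaying for peak capacity against paying monthly for actual usage. All prices and growth figures are invented assumptions, not vendor quotes:

```python
# Toy 36-month cost comparison: prepay for peak capacity (CapEx) vs.
# pay monthly for actual usage (OpEx). All figures are invented.
PRICE_PER_TB_PREPAID = 300.0   # assumed one-time cost per TB provisioned
PRICE_PER_TB_MONTHLY = 20.0    # assumed monthly cost per TB consumed
MONTHS = 36

# Demand grows linearly from 100 TB to 400 TB over the period.
demand_tb = [100 + 300 * m / (MONTHS - 1) for m in range(MONTHS)]

capex_total = max(demand_tb) * PRICE_PER_TB_PREPAID              # provision for peak, day one
opex_total = sum(tb * PRICE_PER_TB_MONTHLY for tb in demand_tb)  # pay only for what is used

print(f"prepaid (CapEx) outlay:     ${capex_total:,.0f}")
print(f"pay-as-you-go (OpEx) total: ${opex_total:,.0f}")
# Which model wins depends on utilization and the price ratio: steady
# near-peak demand favors prepay; uncertain or bursty demand favors OpEx.
```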

From a technical perspective, clouds inherently require unpredictable scalability – both up and down.

Building a storage architecture that can rapidly allocate resources for a specific customer need and reallocate them as customer requirements change allows for storage capacity optimization, creating performance pools in the data center without compromising responsiveness to changing needs. Such an architecture should also align with the data-center-level orchestration system to allow an even higher level of resource optimization and flexibility.
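
To make that allocate-and-reclaim cycle concrete, here is a toy Python sketch of a capacity pool of the kind a data-center orchestrator would drive; the tenant names and sizes are invented for illustration:

```python
# Toy capacity pool: allocate per tenant, reclaim on release. Invented names.
class StoragePool:
    def __init__(self, capacity_tb: int):
        self.capacity_tb = capacity_tb
        self.allocations = {}          # tenant -> TB allocated

    def free_tb(self) -> int:
        return self.capacity_tb - sum(self.allocations.values())

    def allocate(self, tenant: str, tb: int) -> None:
        if tb > self.free_tb():
            raise RuntimeError(f"pool exhausted: {tb} TB asked, {self.free_tb()} TB free")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + tb

    def release(self, tenant: str) -> None:
        self.allocations.pop(tenant, None)   # capacity returns to the pool

pool = StoragePool(capacity_tb=500)
pool.allocate("tenant-a", 120)
pool.allocate("tenant-b", 80)
pool.release("tenant-a")                      # requirements changed
print(pool.free_tb())                         # 420: immediately reusable
```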

In this webcast, you will learn:

  • How modern storage technology allows you to build this infrastructure
  • The role of software defined storage
  • Accounting principles that impact CapEx and OpEx
  • How to model cloud costs for new applications and/or for re-engineering existing applications
  • Performance considerations

Marketing Your New Website From the Cloud

Launching a new website is an exciting venture, but it’s crucial to have a robust marketing strategy in place to ensure it reaches its intended audience. With cloud-based tools and services at your disposal, marketing your new website can be both efficient and effective. Here are some strategies to consider:

Leverage Cloud Analytics:

Data-Driven Insights: Cloud-based analytics platforms like Google Analytics provide invaluable insights into your website’s performance. Track visitor behavior, demographics, and engagement metrics to fine-tune your marketing efforts.

SEO Optimization: Use cloud-based SEO tools to optimize your website’s content and structure. Identify relevant keywords, monitor rankings, and ensure your site is search engine-friendly.

Social Media Marketing:

Content Scheduling: Cloud-based social media management tools like Hootsuite and Buffer allow you to schedule posts, ensuring a consistent online presence. Create engaging content that promotes your website’s value.

Paid Advertising: Platforms like Facebook Ads and Google Ads offer cloud-based advertising solutions. Target specific demographics, interests, and behaviors to reach your ideal audience.

Email Marketing:

Email Campaigns: Cloud-based email marketing platforms like Mailchimp or SendinBlue can help you create and automate email campaigns. Build a subscriber list and send personalized content, including newsletters, product updates, or exclusive offers.

Content Creation and Collaboration:

Cloud Storage: Use cloud storage solutions like Google Drive or Dropbox to collaborate with your team on content creation. Share documents, images, and videos effortlessly.

Content Management Systems (CMS): Many CMS platforms, such as WordPress or Joomla, are cloud-based. They provide user-friendly interfaces for updating your website’s content regularly.

Performance Monitoring:

Uptime Monitoring: Cloud-based website monitoring services like UptimeRobot notify you immediately if your site experiences downtime. Ensuring your site is always accessible is crucial for user experience.

Load Testing: Perform load testing using cloud-based tools to simulate heavy traffic and ensure your website can handle increased user loads without slowing down.
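
As a minimal sketch of what a load test does under the hood, the script below fires concurrent requests and reports a rough 95th-percentile latency. The URL and request counts are placeholders; dedicated cloud load-testing services remain the better choice for realistic traffic shapes, and you should only test sites you own:

```python
# Minimal load-test sketch: concurrent GETs with rough latency stats.
# URL and volumes are placeholders; this is an illustration, not a tool.
from concurrent.futures import ThreadPoolExecutor
import time

import requests

URL = "https://example.com"   # replace with your own site
REQUESTS = 200
WORKERS = 20

def hit(_):
    start = time.perf_counter()
    status = requests.get(URL, timeout=10).status_code
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

latencies = sorted(t for _, t in results)
print(f"ok: {sum(1 for s, _ in results if s == 200)}/{REQUESTS}")
print(f"p95 latency: {latencies[int(0.95 * len(latencies))]:.3f}s")
```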

Security and Backups:

Cloud Security: Protect your website from cyber threats with cloud-based security solutions. These services offer real-time threat detection and mitigation.

Automated Backups: Use cloud backup services to automatically back up your website’s data and files. This ensures you can quickly recover in case of data loss.
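
A minimal sketch of such a backup job, assuming the boto3 SDK and an S3-compatible bucket; the site path and bucket name are hypothetical, credentials are assumed to come from the environment, and scheduling is left to cron or a cloud scheduler:

```python
# Sketch: archive the site tree and upload it to object storage with boto3.
# Paths and bucket are hypothetical; credentials come from the environment.
import datetime
import pathlib
import tarfile

import boto3

SITE_ROOT = pathlib.Path("/var/www/mysite")   # hypothetical site directory
BUCKET = "mysite-backups"                     # hypothetical bucket

stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
archive = pathlib.Path(f"/tmp/site-{stamp}.tar.gz")

with tarfile.open(archive, "w:gz") as tar:
    tar.add(SITE_ROOT, arcname=SITE_ROOT.name)  # compress the whole site tree

boto3.client("s3").upload_file(str(archive), BUCKET, archive.name)
print(f"uploaded {archive.name} to s3://{BUCKET}/")
```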

Scalability:

Cloud Hosting: Consider hosting your website in the cloud for scalability. Cloud hosting services like AWS, Azure, or Google Cloud can accommodate traffic spikes without performance issues.

Marketing your new website from the cloud offers a plethora of tools and services that can streamline your efforts. By leveraging cloud analytics, social media marketing, email campaigns, content creation, performance monitoring, security, and scalability solutions, you can reach your target audience effectively and ensure your website’s long-term success. Stay agile, adapt your strategies based on data insights, and continuously optimize your online presence to stay ahead in the competitive digital landscape.

Expert Answers to Cloud Object Storage and Gateways Questions

In our most recent SNIA Cloud webcast, “Cloud Object Storage and the Use of Gateways,” we discussed market trends toward the adoption of object storage and the use of gateways to execute on a cloud strategy.  If you missed the live event, it’s now available on-demand together with the webcast slides. There were many good questions at the live event and our expert, Dan Albright, has graciously answered them in this blog.

Q. Can object storage be accessed by tools for use with big data?

A. Yes. Big data tools can access object storage in near real time through HDFS connectors such as S3, but suitability is conditional on latency: if the object storage is based on hard drives, it should not be used as the primary storage, as it would run very slowly. The guidance is to use hard-drive-based object storage either as an online archive or as a backup target for HDFS.
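
For example, a minimal PySpark sketch of that connector path might look like the following; it assumes a Spark build with the hadoop-aws S3A connector on the classpath and credentials in the environment, and the bucket and path are hypothetical:

```python
# Sketch: read archive data from object storage via the Hadoop S3A connector.
# Assumes Spark with hadoop-aws available and credentials in the environment;
# the bucket and path are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3a-archive-scan").getOrCreate()

# Throughput-oriented scans work well; latency makes this a poor substitute
# for primary HDFS storage, per the guidance above.
df = spark.read.json("s3a://archive-bucket/events/2017/")
print(df.count())
```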

Q. Will current block storage or NAS be replaced with cloud object storage + gateway?

A. Yes and no.  It’s dependent on the use case. For ILM (Information Lifecycle Management) uses, only the aged and infrequently accessed data is moved to the gateway+cloud object storage, to take advantage of a lower cost tier of storage, while the more recent and active data remains on the primary block or file storage.  For file sync and share, the small office/remote office data is moved off of the local NAS and consolidated/centralized and managed on the gateway file system. In practice, these methods will vary based on the enterprise’s requirements.
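
As a rough sketch of that ILM pattern (not any particular gateway’s implementation), the script below moves files untouched for 90 days from a NAS mount to an object-storage bucket. The paths, bucket name, and age threshold are illustrative, and a real gateway would also leave behind a stub or catalog entry:

```python
# Minimal ILM-style tiering sketch: files unread for 90 days move from
# primary NAS to object storage. All names and thresholds are illustrative.
import os
import time

import boto3

NAS_ROOT = "/mnt/nas/projects"     # hypothetical primary file storage
BUCKET = "cold-tier"               # hypothetical object-storage bucket
AGE_LIMIT = 90 * 24 * 3600         # 90 days, in seconds

s3 = boto3.client("s3")
now = time.time()

for dirpath, _, filenames in os.walk(NAS_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if now - os.path.getatime(path) > AGE_LIMIT:   # infrequently accessed
            key = os.path.relpath(path, NAS_ROOT)
            s3.upload_file(path, BUCKET, key)          # copy to the cheap tier
            os.remove(path)                            # reclaim primary capacity
```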

Q. Can we use cloud object storage for IoT storage that may require high IOPS?

A. High-IOPS workloads are best supported by local SSD-based object, block, or NAS storage. Remote or hard-drive-based object storage is better deployed with low-IOPS workloads.

Q. What about software defined storage?

A. Cloud object storage may be implemented as SDS (Software Defined Storage) but may also be implemented by dedicated appliances. Most cloud Object storage services are SDS based.

Q. Can you please define NAS?

A. The SNIA Dictionary defines Network Attached Storage (NAS) as:

1. [Storage System] A term used to refer to storage devices that connect to a network and provide file access services to computer systems. These devices generally consist of an engine that implements the file services, and one or more devices, on which data is stored.

2. [Network] A class of systems that provide file services to host computers using file access protocols such as NFS or CIFS.

Q. What are the challenges with NAS gateways into object storage? Aren’t there latency issues that NAS requires that aren’t available in a typical Object store solution?

A. The key factor to consider is workload.  If the workload of applications accessing data residing on NAS experiences high frequency of reads and writes then that data is not a good candidate for remote or hard drive based object storage. However, it is commonly known that up to 80% of data residing on NAS is infrequently accessed.  It is this data that is best suited for migration to remote object storage.

Thanks for all the great questions. Please check out our library of SNIA Cloud webcasts to learn more. And follow us on Twitter @SNIACloud for announcements of future webcasts.

IP-Based Object Drives Q&A

At our recent SNIA Cloud Storage webcast “IP-Based Object Drives Now Have a Management Standard,” our panel of experts discussed how the SNIA release of the IP-Based Drive Management Standard eases the management of these drives. If you missed the webcast, you can watch it on-demand.

A lot of interesting questions came up during the live event. As promised, here are answers to them:

Q. Am I correct in thinking that each IP based drive will have a unique IP address?

A. Each Ethernet interface on the drive will have its own unique IP Address. Object Drives may be deployed in private address spaces (such as in a fully configured rack). In such configurations, two Object Drives might have the same IP address, but would be on completely separate networks.

Q. Assuming vendors will be using Redfish, will the API calls be made through existing middleware or directly to the BMCs (baseboard management controllers, specialized service processors that monitor the physical state of a computer) on the platforms?

A. Redfish can be supported by host based middleware, the enclosure’s BMC, or may be supported directly from the drive.
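
As a minimal sketch of what such a Redfish client looks like regardless of where the service runs (host middleware, the enclosure’s BMC, or the drive itself): only the standard Redfish service-root and Systems paths are used, and the address and credentials are placeholders.

```python
# Sketch: query a Redfish service over HTTPS. Only standard paths are used;
# the address and credentials are placeholders, and certificate verification
# is disabled only because of the placeholder address.
import requests

BASE = "https://192.0.2.10"     # placeholder management address
AUTH = ("admin", "password")    # placeholder credentials

root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
print("Redfish version:", root.get("RedfishVersion"))

# Enumerate the standard Systems collection exposed by the service.
systems = requests.get(f"{BASE}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    print("system:", member["@odata.id"])
```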

Q. Would a drive with native iSCSI protocol and an Ethernet interface be considered an “IP Drive”?  

A. Yes. This is why we use the generic IP Drive term as it allows for multiple protocols to be supported.

Q. What are the data protection schemes supported in the existing products in this space?

A. Examples of data protection typically used with IP drives include erasure coding and traditional RAID.
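
To make the capacity trade-off between those schemes concrete, here is a quick generic calculation of raw capacity consumed per unit of usable data; the layouts shown are common examples, not tied to any particular product:

```python
# Raw capacity consumed per unit of usable data for common layouts.
def overhead(data_units: int, parity_units: int) -> float:
    return (data_units + parity_units) / data_units

print("RAID-1 mirror (1+1):", overhead(1, 1))    # 2.00x raw per usable
print("RAID-6 (8+2):       ", overhead(8, 2))    # 1.25x, survives 2 failures
print("Erasure code (10+4):", overhead(10, 4))   # 1.40x, survives 4 failures
```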

Q. Is this approach similar to the WD Ethernet Drive?

A. The WD Ethernet Drive is an IP based drive.

Q. Do you expect to see interposers with higher Ethernet bandwidth that could be used with SSD vs. HDDs?

A. Yes, there are multiple examples starting to appear in the market of interposers for SSDs.

Q. Is this regular Ethernet or NVMe over Fabrics?

A. Regular Ethernet. This does not require Converged Ethernet, nor anything layered on that. NVMe over Fabrics could utilize IP-based Drive Management in the future.

Security and Privacy in the Cloud

When it comes to the cloud, security is always a topic for discussion. Standards organizations like SNIA are in the vanguard of describing cloud concepts and usage, and (as you might expect) are leading on how and where security fits in this new world of dispersed and publicly stored and managed data. On July 20th, the SNIA Cloud Storage Initiative is hosting a live webcast “The State of Cloud Security.” In this webcast, I will be joined by SNIA experts Eric Hibbard and Mark Carlson who will take us through a discussion of existing cloud and emerging technologies, such as the Internet of Things (IoT), Analytics & Big Data, and more, and explain how we’re describing and solving the significant security concerns these technologies are creating. They will discuss emerging ISO/IEC standards, SLA frameworks and security and privacy certifications. This webcast will be of interest to managers and acquirers of cloud storage (whether internal or external), and developers of private and public cloud solutions who want to know more about security and privacy in the cloud.

Topics covered will include:

  • Summary of the standards developing organization (SDO) activities:
    • Work on cloud concepts, Cloud Data Management Interface (CDMI), an SLA framework, and cloud security and privacy
  • Securing the Cloud Supply Chain:
    • Outsourcing and cloud security, Cloud Certifications (FedRAMP, CSA STAR)
  • Emerging & Related Technologies:
    • Virtualization/Containers, Federation, Big Data/Analytics in the Cloud, IoT and the Cloud

Register today. We hope to see you on July 20th where Eric, Mark and I will be ready to answer your cloud security questions.

Containers, Docker and Storage – An Expert Q&A

Containers continue to be a hot topic today as is evidenced by the more than 2,000 people who have already viewed our SNIA Cloud webcasts, “Intro to Containers, Container Storage and Docker” and “Containers: Best Practices and Data Management Services.” In this blog, our experts, Keith Hudgins of Docker and Andrew Sullivan of NetApp, address questions from our most recent live event.

Q. What is the major challenge for storage in containerized environment?

A. Containers move fast. Users can spin up and spin down containers extremely quickly. The biggest challenge in production-bound container environments is simply keeping up with the movement of data.

Docker Engine does not delete base container images when the container is shut down. Likewise, Registry assumes you’ve got unlimited storage on hand. For containers that push frequent revisions (as would be the case in a continuous delivery environment), that leads to a lot of orphaned container images that can fill up all available storage if left unchecked.

There are some community-led scripts that will help to keep things in control. That’s the beauty of community-led technology.
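
In that community-script spirit, a minimal cleanup sketch using the docker-py SDK might look like this; the filters are illustrative, and since prune operations delete data, you would want to review what `docker image ls` reports first:

```python
# Sketch of a periodic cleanup job using the docker-py SDK. Filters are
# illustrative; prune operations delete data, so review before scheduling.
import docker

client = docker.from_env()

# Remove stopped containers, then images no longer referenced by any container.
reclaimed = client.containers.prune()
print("containers reclaimed:", reclaimed.get("SpaceReclaimed", 0), "bytes")

reclaimed = client.images.prune(filters={"dangling": True})
print("images reclaimed:", reclaimed.get("SpaceReclaimed", 0), "bytes")
```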

Q. What about the speed of retrieving the data from storage?

A. That’s where being a solid storage architect comes in. Every storage system has different strengths and weaknesses, so it’s important to engineer your solution to fit your performance goals. Docker containers are running on the main kernel of the host system. IO is not constrained by abstraction, as in the case of virtual machines. Rather, it is constrained more by density – hundreds of containers on a host can push massive IOPS, so you want your pipes fat and data sources close to the host systems.

Q. Can you expand on moving Docker Volumes from On-Premise bare metal to Cloud Service Providers? Data Migration? Encryption? 

A. None of these capabilities are built into Docker Engine. We rely on external storage systems to provide those features. Private-to-cloud replication is primarily a feature of software-based companies, like Portworx, Blockbridge, or Hedvig. Encryption and migration are both common features across other companies as well. Flocker from ClusterHQ is a service broker system that provides many bolt-on features for the storage systems they support. You can also use community-supplied services like Ceph to get you there.

Q. Are you familiar with “Flocker,” which apparently is able to copy persistent data to another container? Can you share your thoughts?

A. Yes. ClusterHQ (makers of Flocker) provide an API broker that sits between storage engines and Docker (and other dynamic infrastructure providers, like OpenStack), and they also provide some bolt-on features like replication and encryption.

Q. Is there any sort of feature in the volume plugins that allows a persistent volume to re-connect to a container if the container is moved across multiple hosts?

A. There’s no feature in plugins to cover that specifically. The plugin API is very simple. In practice, what you would do is write your plugin to expose volumes to Docker Engine on every host that it’s possible to mount that volume. In your container specification, whether it’s a Compose file, DAB file, or what have you, specify the name of your volume. Wherever that unique name is encountered, it will be mounted and attached to the container when it’s re-launched.
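
As a small sketch of that named-volume pattern via the docker-py SDK: the image, volume name, and mount path are illustrative, and in practice the driver would be your storage vendor’s plugin rather than the default local driver.

```python
# Sketch: create a named volume and attach it to a container by name.
# Illustrative names; a real deployment would use a vendor volume plugin.
import docker

client = docker.from_env()

client.volumes.create(name="app-data", driver="local")

# Any host whose plugin can resolve "app-data" can re-attach it by name.
client.containers.run(
    "nginx:alpine",
    detach=True,
    volumes={"app-data": {"bind": "/usr/share/nginx/html", "mode": "rw"}},
)
```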

If you have more questions on containers, Docker and storage, check out our first Q&A blog: Containers: No Shortage of Interest or Questions.

I also encourage you to join our Containers opt-in email list. It will be a good way to keep up with everything SNIA Cloud is doing on this important technology.

Cloud Storage: Solving Interoperability Challenges

Cloud storage has transformed the storage industry; however, interoperability challenges that were overlooked during the initial stages of growth are now emerging as front-and-center issues. I hope you will join us on July 19th for our live Webcast, “Cloud Storage: Solving Interoperability Challenges,” to learn about the major challenges facing businesses that use services from multiple cloud providers and move data from one cloud provider to another.

We’ll discuss how the SNIA Cloud Data Management Interface standard (CDMI) addresses these challenges by offering data and metadata portability between clouds and explain how the SNIA CDMI Conformance Test Program helps cloud storage providers achieve CDMI conformance.
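
As a minimal sketch of what that portability looks like on the wire, the request below reads an object and its metadata through CDMI’s standard HTTP interface; the endpoint, path, and credentials are placeholders.

```python
# Sketch: read an object plus its metadata via CDMI's HTTP interface.
# Endpoint, path, and credentials are placeholders; the headers are the
# content type and version header defined by the CDMI standard.
import requests

BASE = "https://cdmi.example.com"
HEADERS = {
    "X-CDMI-Specification-Version": "1.1",
    "Accept": "application/cdmi-object",
}

resp = requests.get(f"{BASE}/container/report.txt",
                    headers=HEADERS, auth=("user", "secret"))
obj = resp.json()

# Value and metadata travel together, which is what makes CDMI objects
# portable between conforming clouds.
print(obj["objectName"], obj["metadata"])
```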

Join us on July 19th to learn:

  • Critical challenges that the cloud storage industry is facing
  • Issues in a multi-cloud API environment
  • Addressing cloud storage interoperability challenges
  • How the CDMI standard works
  • Benefits of CDMI conformance testing
  • Benefits for end user companies

You can register today. We look forward to seeing you on July 19th.

Upcoming Webcast: The Impact of International Data Protection Legislation on the Cloud

Data privacy vs. data protection has become a heated debate in businesses around the world as governments across the globe propose and enact strong data privacy and data protection regulations. Join us on November 18th for our next Cloud Storage live Webcast, “Data Privacy vs. Data Protection: The Impact of International Data Protection Legislation on the Cloud.”

Frameworks that mandate noteworthy changes, such as defining a data breach to include data destruction, adding the right to be forgotten, and requiring breach notifications, among many other new elements, are changing the rules when it comes to data protection. The implications of this, and other proposed legislation, on how the cloud can be utilized for storing data are significant. Join this live Webcast to hear:

  • “Directives” vs. “regulation”
  • General data protection regulation summary
  • How personal data has been redefined
  • Substantial financial penalties for non-compliance
  • Impact on data protection in the cloud
  • How to prepare now for impending changes

Our experts, Bob Plumridge, SNIA Europe Board Member; Eric Hibbard, Chair SNIA Security TWG, and I will all be available to answer your questions during the event. I encourage you to register today for this timely discussion. We hope to see you on November 18th!

OpenStack File Services for HPC Q&A

We got some great questions during our Webcast on how OpenStack can consume and control file services appropriate for High Performance Computing (HPC) in a cloud and multi-tenanted environment. Here are answers to all of them. If you missed the Webcast, it’s now available on-demand. I encourage you to check it out and please feel free to leave any additional questions at this blog.

Q. Presumably we can use other than ZFS for the underlying filesystems in Lustre?

A. Yes, there are plenty of other filesystems that can be used besides ZFS. ZFS was given as an example of a modern, scale-up filesystem that has recently been integrated, but essentially you can use most filesystem types, with some having more advantages than others. What you are looking for is a filesystem that addresses the weaknesses of Lustre in terms of self-healing and scale-up. So any filesystem that allows you to easily grow capacity whilst also being capable of protecting itself would be a reasonable choice. Remember, Lustre doesn’t do anything to protect the data itself. It simply places objects in a distributed fashion across the Object Storage Targets.

Q. Are there any other HPC filesystems besides Lustre?

A. Yes, there are, and depending on your exact requirements Lustre might not be appropriate. Gluster is an alternative that some have found slightly easier to manage and that provides some additional functionality. IBM has GPFS, which has been implemented as an HPC filesystem, and other vendors have their scale-out filesystems too. An HPC filesystem is simply a scale-out filesystem capable of very good throughput with low latency. So under that definition a flash array could be considered a high-performance storage platform, or a scale-out NAS appliance with some fast disks. It’s important to understand your workload’s characteristics and demands before making the choice, as each system has pros and cons.

Q. Does “embarrassingly parallel” require bandwidth or latency from the storage system?

A. Depending on the workload characteristics, it could require both. Bandwidth is usually the first demand, though, as data is shipped to the nodes for processing. Obviously, the lower the latency, the faster jobs can start and run, but it’s not critical, as there is only limited communication between nodes, which is what normally drives the low-latency demand.
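
A quick back-of-the-envelope model (all figures illustrative) shows why bandwidth dominates when nodes work independently:

```python
# Toy model: each node independently pulls its slice of the dataset, so
# latency is paid once per node while bandwidth does the real work.
DATASET_GB = 1000
NODES = 100
BANDWIDTH_GBPS = 1.25    # ~10 GbE per node, in gigabytes per second
LATENCY_S = 0.005        # 5 ms to first byte

per_node_gb = DATASET_GB / NODES
load_time_s = per_node_gb / BANDWIDTH_GBPS + LATENCY_S

print(f"{per_node_gb:.0f} GB per node loads in {load_time_s:.3f} s")
# 10 GB / 1.25 GB/s = 8.0 s; the 5 ms of latency is lost in the noise.
```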

Q. Would you suggest to use Object Storage for NFV, i.e Telco applications?

A. I would for some applications. The problem with NFV is that it actually covers a surprising breadth of applications, some of which have very limited data storage needs. For example, there is little need for storage in a packet-switching environment beyond the OS and binaries needed to stand up the VMs. In this case, object is a very good fit, as it can be easily and geographically distributed, ensuring the same networking function is delivered in the same manner. Other applications that require access to filtered data (maybe billing-based applications or content distribution) would also be good candidates.

Q. I missed something in the middle; please clarify, your suggestion is to use ZFS (on Linux) for the local file system on OSTs?

A. Yes, this was one example, and one where some work has recently been done in the Lustre community. This affords the OSSs the capability of scaling capacity upwards as well as offering the RAID-like protection and self-healing that come with ZFS. Other filesystems can offer some of those same things, so I am not suggesting it is the only choice.

Q. Why would someone want/need scale-up, when they can scale-out?

A. This can often come down to funding. A lot of HPC environments exist in academic institutions that rely on grant funding and sponsorship to expand their infrastructure. Sometimes it simply isn’t feasible to buy extra servers in order to add capacity, particularly if there is already performance headroom. It might also be the case that rack space, power, and cooling are factors, in which case adding drives to cope with bigger workloads might be the only option. You do need to consider whether the additional capacity would also provoke the need for better performance, so we can’t just assume that adding disk is enough, but it’s certainly a good option and a requirement I have seen a number of times.

Cloud Storage Development Challenges – An SDC Preview

This year’s Storage Developer Conference (SDC) is expected to draw over 400 storage developers and professionals. On August 4th, you can get a sneak preview of key cloud topics that will be covered at SDC in this live Webcast, where David Slik and Mark Carlson, Co-Chairs of the SNIA Cloud Technical Work Group, together with Yong Chen, Assistant Professor at Texas Tech University, will discuss:

  • Mobile and Secure – Cloud Encrypted Objects using CDMI
  • Object Drives: A new Architectural Partitioning
  • Unistore: A Unified Storage Architecture for Cloud Computing
  • Using CDMI to Manage Swift, S3, and Ceph Object Repositories

You’ll learn how encrypted objects can be stored, retrieved, and transferred between clouds, how Object Drives allow storage to scale up and down by single drive increments, end-user and vendor use cases of the Cloud Data Management Interface (CDMI), and we’ll introduce Unistore – an innovative unified storage architecture that efficiently integrates heterogeneous HDD and SCM devices for Cloud storage systems.

I’ll be moderating the discussion among this expert panel. It should be an enlightening and lively hour. I hope you’ll register now to join us.