The ReleaseTEAM Blog: Here's what you need to know...
Going Cloud-Native: Data Storage and Costs
In our last two blogs, we discussed the benefits and pitfalls of going cloud-native and why organizations should take extra care when implementing observability in cloud environments. This month, we examine how organizations working in one or more clouds can simplify data management and control costs.
Cloud data grows…quickly
DevOps teams using cloud services often self-provision the resources they need to build products, which reduces bottlenecks in the development process. Cloud storage provides more flexibility to accommodate growing data sets and spikes in storage demand compared to on-premises data centers.
However, when a large team of developers, testers, and ops engineers all provision their own cloud storage to meet individual, immediate needs, unexpected cloud storage costs can follow.
Strategies for reducing cloud storage costs
Classify data and choose the right storage type
Major cloud providers, including AWS, Azure, and Google Cloud, offer tiered pricing structures for storage. Pricing typically depends on both the storage type (file, object, or block storage) and how frequently the data is accessed.
Frequently accessed data is the most expensive to store. Infrequently accessed data (such as backups, which may rarely be touched) can move to long-term, lower-cost storage. Class names vary between cloud providers, but they generally follow a naming convention of standard (more expensive, frequent access), nearline, cold/glacier (long-term storage), and archive. In the table below, we show a sample of the storage options available from different cloud providers:
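Whatever the provider-specific class names, the core decision reduces to access frequency versus price. Here is a minimal sketch of that logic; the tier names are the generic ones above, and the per-GB prices are illustrative assumptions, not any provider's actual rates:

```python
# Generic tier table: (name, minimum days between accesses, assumed $/GB-month).
# Prices are illustrative only -- check your provider's current rate card.
TIERS = [
    ("standard", 0,   0.023),
    ("nearline", 30,  0.010),
    ("cold",     90,  0.004),
    ("archive",  365, 0.001),
]

def choose_tier(days_between_accesses: float) -> str:
    """Pick the cheapest tier whose access-frequency floor the data meets."""
    eligible = [t for t in TIERS if days_between_accesses >= t[1]]
    return min(eligible, key=lambda t: t[2])[0]

print(choose_tier(1))    # read daily -> "standard"
print(choose_tier(45))   # read roughly monthly -> "nearline"
print(choose_tier(400))  # yearly backups -> "archive"
```

In practice, most providers also charge retrieval and early-deletion fees on colder tiers, so the cheapest storage rate is not always the cheapest total cost.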
Use data deduplication to shrink your data footprint
Deduping reduces storage by avoiding repeated copies of identical data. For example, suppose you and I are both retouching the same photo, and we each save our work. Deduping stores the identical part of our projects (the original image) only once, plus the parts that differ: your layer adding a shimmering light and my layer adding a mustache to the subject. Without deduping, both complete project files are stored, using more space and potentially costing more.
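The photo example above can be sketched as content-addressed storage: each chunk of data is keyed by its hash, so identical chunks are kept once. This is a simplified illustration, not any particular dedup product's implementation:

```python
import hashlib

def dedupe_store(files):
    """Store each unique chunk once; each file becomes a list of chunk hashes."""
    store = {}      # chunk hash -> chunk bytes (kept only once)
    manifests = {}  # filename -> ordered list of chunk hashes
    for name, chunks in files.items():
        refs = []
        for chunk in chunks:
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # identical chunks are not re-stored
            refs.append(digest)
        manifests[name] = refs
    return store, manifests

# Two project files sharing the same base image, plus one unique layer each:
base = b"original-image-bytes" * 1000
files = {
    "your_project": [base, b"shimmering-light-layer"],
    "my_project":   [base, b"mustache-layer"],
}
store, manifests = dedupe_store(files)

naive = sum(len(c) for chunks in files.values() for c in chunks)
deduped = sum(len(c) for c in store.values())
print(f"without dedup: {naive} bytes; with dedup: {deduped} bytes")
```

Both manifests reference the same hash for the base image, so the shared image is stored once and the deduped total is nearly half the naive total.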
Evaluate multi-cloud storage
If your DevOps teams are utilizing services hosted in multiple cloud providers, they may need copies of the same data available in each cloud. Multiple copies mean paying to store the same data two or three times, driving up total costs for your business. Copying data out of one cloud to upload it to another can be even more expensive because of egress charges, especially if it happens regularly.
Multi-cloud storage, such as the services available from Pure Storage, Dell, or Faction, can avoid these duplicate copies without sacrificing cloud choice or performance: a single copy of your data is stored outside, but in close proximity to, the cloud providers your teams use.
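A back-of-envelope comparison makes the trade-off concrete. The sketch below compares keeping a full copy in each cloud (with periodic cross-cloud syncs) against one adjacent copy; every rate and the sync frequency are illustrative assumptions, not real provider or vendor pricing:

```python
# All rates below are assumed for illustration -- not actual provider pricing.
def per_cloud_copies(size_gb, clouds, storage_rate=0.023,
                     egress_rate=0.09, syncs_per_month=4):
    """Monthly cost of a full copy in each cloud plus cross-cloud re-syncs."""
    storage = size_gb * storage_rate * clouds
    egress = size_gb * egress_rate * syncs_per_month * (clouds - 1)
    return storage + egress

def single_adjacent_copy(size_gb, storage_rate=0.030):
    """Monthly cost of one copy in a multi-cloud storage service."""
    return size_gb * storage_rate

size = 10_000  # a 10 TB data set
multi = per_cloud_copies(size, clouds=3)
single = single_adjacent_copy(size)
print(f"per-cloud copies: ${multi:,.2f}/mo vs. adjacent copy: ${single:,.2f}/mo")
```

Even with a higher assumed per-GB rate for the adjacent copy, eliminating duplicate storage and recurring egress dominates the comparison as the data set and sync frequency grow.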
There are many benefits to pursuing a cloud-first strategy, but it takes planning to ensure that your teams use the right resources cost-effectively. Planning ahead empowers your teams to select cloud storage when they need it without wasting time trying to figure out which type of storage or access is the best fit.