
We explore the importance of aligning technical decisions with business requirements through the story of a customer who ultimately reduced storage costs by 40% by doing exactly that.
The customer builds an imaging solution for optometrists. Initially, they opted for Google Cloud Persistent Disks because of their speed, resizable capacity, and snapshot-based backups. The goal was to let optometrists retrieve patient images quickly, ideally cutting retrieval time from three seconds to one second.
However, the choice of Persistent Disk led to unintended consequences. The company amassed petabytes of images on it, substantially increasing storage costs without a corresponding benefit to the business.
Think for a moment about the average visit to an optometrist. Patients may get their eyes imaged, and those images are uploaded to cloud storage for future reference. Whether retrieval takes two seconds or ten typically makes no significant difference to patient satisfaction or clinical outcomes. More often than not, these images are never accessed again once stored, except for compliance reasons.
The reality is that opting for the fastest and most robust storage solution—Persistent Disk—resulted in immense costs primarily because a considerable portion of the stored data was rarely, if ever, accessed. What could have been a straightforward solution turned into a significant expense due to a mismatch between technology and business needs.
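To get a feel for the gap, here is a rough back-of-the-envelope comparison. The per-GB prices below are assumptions (approximate US-region list prices, which change over time), not the customer's actual rates, and the 1 PB figure is chosen only to match the scale of the story:

# Back-of-the-envelope comparison; both prices are assumptions, not the
# customer's actual rates or bill.
PD_STANDARD_PER_GB_MONTH = 0.040   # pd-standard Persistent Disk, $/GB-month (assumed)
GCS_COLDLINE_PER_GB_MONTH = 0.004  # Cloud Storage Coldline, $/GB-month (assumed)

data_gb = 1_000_000  # ~1 PB, matching the "petabytes of images" in the story

pd_monthly = data_gb * PD_STANDARD_PER_GB_MONTH
coldline_monthly = data_gb * GCS_COLDLINE_PER_GB_MONTH

print(f"Persistent Disk: ${pd_monthly:,.0f} per month")       # ~$40,000
print(f"Coldline:        ${coldline_monthly:,.0f} per month")  # ~$4,000

At that scale, an order-of-magnitude difference in per-GB pricing dominates the bill, which is exactly why rarely accessed data doesn't belong on disk-class storage.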
Simply asking "Do you really need it this fast?" made it clear that storing these rarely accessed images on Persistent Disk was unnecessary.
The customer decided to transition their storage to Google Cloud Storage with Autoclass enabled, which automatically moves infrequently accessed objects to cheaper storage classes, ultimately leading to a 40% reduction in storage costs.
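As a minimal sketch of what that change looks like (assuming the google-cloud-storage Python client; the bucket name and location here are hypothetical, not the customer's), Autoclass is a single property set on the bucket:

from google.cloud import storage

client = storage.Client()

# Hypothetical bucket name and location, for illustration only.
bucket = client.bucket("patient-imaging-archive")
bucket.autoclass_enabled = True  # GCS moves cold objects to cheaper classes automatically
bucket = client.create_bucket(bucket, location="us-central1")

print(f"Autoclass enabled: {bucket.autoclass_enabled}")

The appeal for a workload like this one is that nobody has to predict access patterns up front: objects that stop being read drift down to colder, cheaper storage classes on their own, while anything that is read again gets promoted back.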
This clip comes from Episode 18 of Cloud Masters, where three Technical Account Managers at DoiT shared cloud management strategy tips with us, using real customer experiences to highlight why those strategies can be so impactful.
📺 Watch the full episode.
6 Responses
Very useful guide.
The link to calculate the optimal amount of slots doesn’t work (“BQ SE max configuration.sql”), can you fix it please?
Not sure which link you are referring to…
The link is fixed.
ec2 instance connect appears to be locked down to SSH and RDP protocols (ports 22 and 3389 only), meaning you can’t use it for databases in the way this post suggests. You still need to ssh to some instance then connect to the DB from there – the advantage is you don’t need to expose that ec2 instance publicly.
If you go through the above guide, you’ll just get the following error:
awscli.customizations.ec2instanceconnect.websocket - ERROR - {"ErrorCode":"InvalidParameter","Message":"The specified RemotePort is not valid. Specify either 22 or 3389 as the RemotePort and retry your request."}
did you actually try the above out successfully?
also discussed here: https://repost.aws/questions/QU_h42-ck0R-alITadXrrXSQ/rds-configuration
Always curious to learn more about Cloud data