Cloud Masters Episode #117
LLM Security Risks and Mitigation Strategies
We discuss the top security risks to be aware of when implementing LLMs in your product, and how to prevent them from occurring in the first place.

Episode notes

Key Moments

00:00: Introduction
01:18: Prompt injection
04:10: Consequences of insufficient validation
06:25: Human-in-the-loop
08:25: When LLM agents miscommunicate
16:30: Training data poisoning
24:45: Concern with out-of-the-box LLMs
29:50: Language and regional variations
34:10: Shared responsibility
37:40: Sanitizing LLM outputs
39:10: Giving too much access to data
43:29: Overreliance on LLM outputs
45:00: Balancing user trust vs. speed
50:08: The potential of state-sponsored LLMs
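
To make one of the episode's themes concrete, here is a minimal Python sketch of the kind of output sanitization discussed at 37:40: escaping markup and redacting credential-like strings before an LLM response is shown to users. The function name and redaction patterns are illustrative assumptions, not code from the episode.

import html
import re

# Illustrative patterns only; real deployments would tune these to their own threat model.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),               # strings shaped like AWS access key IDs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # generic "api_key=..." style strings
]

def sanitize_llm_output(text: str) -> str:
    """Escape markup and redact credential-like strings before rendering LLM output."""
    # Escape HTML so model output cannot inject markup or scripts into a web UI.
    cleaned = html.escape(text)
    # Redact anything that looks like a credential the model may have echoed back.
    for pattern in SECRET_PATTERNS:
        cleaned = pattern.sub("[REDACTED]", cleaned)
    return cleaned

if __name__ == "__main__":
    raw = "<script>alert(1)</script> api_key: sk-12345"
    print(sanitize_llm_output(raw))

This is only a starting point; the episode stresses that sanitization is one layer alongside validation, least-privilege data access, and human review.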

Additional resources:

About the guests

Mehdi Nemlaghi
As an AI/ML subject matter expert, Mehdi helps companies architect AI/ML solutions to their challenges. He was previously Chief Algorithms Officer at SESAMm, setting the technical direction for ML projects. Additionally, he was Head of Data Science at GROUPE M6, where he managed a team of five data scientists and two data analysts who built data products and developed algorithms to increase customer retention and reduce churn.
Eduardo Mota
Eduardo is a Senior Machine Learning Specialist, providing architecture advice to hundreds of companies wishing to deploy AI and ML solutions to enhance their product or resolve operational issues.
Gad Benram
Gad is the founder and CTO of TensorOps, which offers expert services for AI-driven applications as well as AIOps and AI cost optimization.

Related content

The cost impact of Large Language Models (LLMs) in production
We cover the ever-growing importance of Large Language Models (LLMs) in applications, how LLM costs can easily compound once in production, and a breakdown of the costs associated with using LLMs.
No longer a pipe dream — Gen AI and data pipelines
Exploring the impact that Gen AI will have on data pipelines and data engineering overall.
Observability of LLMs in Google Cloud
ML and AI specialists Eduardo Mota and Sascha Heyer join us to explore the complexities of observability of LLM-powered features. Packed with tons of real-life customer anecdotes and best practices, they discuss the challenges and strategies for monitoring Gen AI systems, emphasizing the importance of metrics in understanding system interactions, especially given Gen AI’s non-deterministic nature.
