
ASTRI developed an integrated security framework to safeguard enterprises using Large Language Models (LLMs) against sensitive data leakage and harmful outputs. The system combines a user-defined sensitive-data taxonomy, domain-specific model fine-tuning, format-preserving encryption (FPE), and real-time policy management to keep confidential information protected and AI adoption compliant and reliable.
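The value of format-preserving encryption in this setting is that a protected value keeps the length and character classes of the original, so downstream prompts and models still receive plausibly shaped input. The sketch below is a simplified, illustrative keyed transform only, assuming a hypothetical `fpe_digits` helper; production FPE uses NIST-standardized modes such as FF1 or FF3-1, not this scheme.

```python
import hmac
import hashlib

def fpe_digits(value: str, key: bytes, decrypt: bool = False) -> str:
    """Toy format-preserving transform: shift each digit by a keyed,
    position-dependent offset so the output has the same length and
    the same digit/separator layout. Illustrative only -- real FPE
    uses NIST-standardized ciphers (FF1/FF3-1)."""
    out = []
    for i, ch in enumerate(value):
        if not ch.isdigit():
            out.append(ch)  # preserve separators such as '-'
            continue
        # Derive a deterministic per-position offset from the key.
        offset = hmac.new(key, str(i).encode(), hashlib.sha256).digest()[0] % 10
        if decrypt:
            offset = -offset
        out.append(str((int(ch) + offset) % 10))
    return "".join(out)

key = b"org-secret-key"          # hypothetical per-organization key
plain = "4111-1111-1111-1111"
token = fpe_digits(plain, key)   # same shape: 19 chars, hyphens intact
assert fpe_digits(token, key, decrypt=True) == plain
```

Because the token round-trips with the key, authorized systems can recover the original value, while the LLM only ever sees the masked form.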
As LLMs are adopted across the finance, legal, and education sectors, the risks of data leakage and undesirable responses grow. Existing LLMs lack precise, organization-specific sensitive-data detection, cannot anonymize data while preserving contextual semantics, and do not support dynamic policy customization, creating security, compliance, and trust challenges.
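Organization-specific detection is typically driven by a taxonomy that maps each sensitive-data category to a recognition rule and a handling policy. The following is a minimal sketch under assumed names (`TAXONOMY`, `scan`, and the example patterns are illustrative, not ASTRI's actual schema):

```python
import re

# Hypothetical user-defined taxonomy: category -> (pattern, policy action).
# Patterns and actions are assumptions for illustration only.
TAXONOMY = {
    "hk_id": (re.compile(r"\b[A-Z]\d{6}\(\d\)"), "encrypt"),
    "email": (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "redact"),
}

def scan(text: str):
    """Return (category, matched_text, action) for every taxonomy hit."""
    hits = []
    for category, (pattern, action) in TAXONOMY.items():
        for match in pattern.finditer(text):
            hits.append((category, match.group(), action))
    return hits

hits = scan("Contact alice@example.com, staff ID A123456(7).")
```

An administrator could extend `TAXONOMY` at runtime to change what is detected and how each category is handled, which is one straightforward way to realize the dynamic policy customization described above.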