ASTRI developed an integrated security framework to safeguard enterprises using Large Language Models (LLMs) against sensitive data leakage and harmful outputs. The system combines a user-defined sensitive data taxonomy, domain-specific model fine-tuning, format-preserving encryption (FPE), and real-time policy management to ensure confidential information is protected and AI adoption remains compliant and reliable.
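To make the pipeline concrete, here is a minimal sketch of how a taxonomy-driven detection step and FPE could fit together before a prompt leaves the trusted boundary. The source does not describe ASTRI's implementation, so everything here is illustrative: the `TAXONOMY` patterns, `KEY`, and `protect` are hypothetical names, and the toy balanced-Feistel cipher stands in for a vetted FPE scheme such as NIST FF1/FF3-1. Because the ciphertext keeps the plaintext's length and character class, downstream parsing of the prompt still works, and the transformation is reversible so values can be restored in the LLM's response.

```python
import hashlib
import hmac
import re

KEY = b"demo-key"  # illustrative secret; use a managed key in practice

def _round(value: int, rnd: int, mod: int) -> int:
    """Keyed Feistel round function (HMAC-SHA256 used as a PRF)."""
    digest = hmac.new(KEY, f"{rnd}:{value}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % mod

def fpe_encrypt_digits(digits: str, rounds: int = 10) -> str:
    """Toy balanced-Feistel FPE over an even-length decimal string:
    the output has the same length and character class as the input."""
    assert len(digits) % 2 == 0, "sketch handles even-length strings only"
    h = len(digits) // 2
    mod = 10 ** h
    left, right = int(digits[:h]), int(digits[h:])
    for r in range(rounds):
        left, right = right, (left + _round(right, r, mod)) % mod
    return f"{left:0{h}d}{right:0{h}d}"

def fpe_decrypt_digits(digits: str, rounds: int = 10) -> str:
    """Inverse of fpe_encrypt_digits (run the rounds backwards)."""
    h = len(digits) // 2
    mod = 10 ** h
    left, right = int(digits[:h]), int(digits[h:])
    for r in reversed(range(rounds)):
        left, right = (right - _round(left, r, mod)) % mod, left
    return f"{left:0{h}d}{right:0{h}d}"

# Hypothetical user-defined taxonomy: category -> detection pattern.
TAXONOMY = {
    "account_number": re.compile(r"\b\d{16}\b"),
    "hkid_like": re.compile(r"\b[A-Z]\d{6}\b"),
}

def protect(prompt: str) -> str:
    """Swap detected sensitive values for format-preserving tokens
    before the prompt is sent to the LLM."""
    for category, pattern in TAXONOMY.items():
        if category == "account_number":
            prompt = pattern.sub(lambda m: fpe_encrypt_digits(m.group()), prompt)
        else:
            prompt = pattern.sub("[REDACTED]", prompt)  # non-reversible fallback
    return prompt

if __name__ == "__main__":
    original = "Transfer HKD 500 from account 4000123412341234 today."
    print(protect(original))  # account number replaced by a same-format token
```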
As LLMs are adopted across the finance, legal, and education sectors, the risks of data leakage and undesirable responses grow. Existing LLMs lack precise, organization-specific sensitive data detection, cannot anonymize data while preserving contextual semantics, and do not support dynamic policy customization, creating security, compliance, and trust challenges.
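One way to read "dynamic policy customization" is a small policy layer consulted on every request, so administrators can change how each data category is handled without retraining or redeploying the model. The sketch below assumes that reading; `Policy`, `POLICIES`, and `enforce` are hypothetical names, and the `ENCRYPT` branch reuses `fpe_encrypt_digits` from the sketch above:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"      # pass the value through unchanged
    MASK = "mask"        # replace with a fixed placeholder
    ENCRYPT = "encrypt"  # swap in a reversible, format-preserving token
    BLOCK = "block"      # reject the whole request

@dataclass(frozen=True)
class Policy:
    category: str
    action: Action

# Hypothetical policy table. A real deployment might fetch this from a
# policy service on each request, so an administrator's change takes
# effect on the very next prompt.
POLICIES = {
    "account_number": Policy("account_number", Action.ENCRYPT),
    "hkid_like": Policy("hkid_like", Action.BLOCK),
}

def enforce(category: str, value: str) -> str:
    # Default-deny posture: unknown categories are masked, not passed through.
    policy = POLICIES.get(category, Policy(category, Action.MASK))
    if policy.action is Action.BLOCK:
        raise PermissionError(f"policy blocks prompts containing {category}")
    if policy.action is Action.MASK:
        return f"[{category.upper()}]"
    if policy.action is Action.ENCRYPT:
        return fpe_encrypt_digits(value)  # reversible token, defined above
    return value  # Action.ALLOW
```

Because the table is consulted per request, tightening a category from MASK to BLOCK applies immediately, which is one plausible interpretation of the framework's real-time policy management.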
Hong Kong Applied Science and Technology Research Institute (ASTRI) was founded by the Government of the Hong Kong Special Administrative Region in 2000 with the mission of enhancing Hong Kong’s competitiveness through applied research. ASTRI’s core R&D competence is grouped under four Technology Divisions: Trust and AI Technologies; Communications Technologies; IoT Sensing and AI Technologies; and Integrated Circuits and Systems. This expertise is applied across six core areas: Smart City, Financial Technologies, New-Industrialisation and Intelligent Manufacturing, Digital Health, Application Specific Integrated Circuits, and Metaverse.