Safe AI isn’t necessarily trustworthy AI, IIT professor writes
By Neha Kumari
Trust and safety in artificial intelligence (AI) are fundamentally different frameworks, and they often lead to contrasting design decisions, evaluation methods and architectural choices.
Safety focuses narrowly on preventing harm directly caused by the AI system itself, primarily through internal technical controls and security measures.
