Self-regulation in AI models on the cards
Firms deploying artificial intelligence (AI) technologies with a consumer interface will need to either self-certify that their models do not cause harm to users, or have them certified by third-party agencies. Sources said the government will not itself check the robustness or safety of use cases developed with such technologies. Instead, it will broadly lay down standards such as reliability, explainability, transparency, privacy, and security, against which firms will be required to test their AI models. Such a move would adopt a light-touch regulatory approach without deterring innovation.