CERT-In Warns Users Against ‘Vulnerabilities’ In AI Apps

CERT-In Warns Users Against ‘Vulnerabilities’ In AI Apps

SUMMARY

The Indian Computer Emergency Response Team (CERT-In) has warned against vulnerabilities in AI design, training and interaction mechanism

The cybersecurity watchdog noted that all the AI apps are not safe and therefore advised users signing up for them to consider using an anonymous account not linked to their personal or professional identity

As per CERT-In’s latest advisory, the “vulnerabilities” include technical issues such as data poisoning, adversarial attacks, model inversion, prompt injection and hallucination exploitation

The Indian Computer Emergency Response Team (CERT-In) has warned against multiple “vulnerabilities” in AI design, training and interaction mechanism.

The cybersecurity watchdog noted that all the AI apps are not safe and therefore advised users signing up for them to consider using an anonymous account not linked to their personal or professional identity.

As per the latest advisory, the “vulnerabilities” include technical issues such as data poisoning, adversarial attacks, model inversion, prompt injection and hallucination exploitation.

The advisory further said that AI has accelerated automating of routine tasks, fostering creativity and supporting business functions such as customer services, logistics, medical diagnosis and cybersecurity.

“Artificial Intelligence has become a hallmark of innovation, revolutionising industries ranging from healthcare to communications. AI is increasingly used to handle activities traditionally undertaken by humans,” CERT-In said.

The note highlights six major types of attacks that pose significant threats to AI applications’ security, reliability and trustworthiness:

Data poisoning: Manipulating training data to produce inaccurate or malicious outputs

Adversarial attacks: Changing inputs to trick AI models into giving wrong predictions

Model inversion: Extracting sensitive information about training data through analysis

Model stealing: Copying AI models by repeatedly querying them

Prompt injection: Introducing malicious instructions to bypass AI safeguards

Hallucination exploitation: Taking advantage of AI’s tendency to generate fabricated outputs

Threat actors can take advantage of the rising demand for AI apps to create fake apps designed to trick users into downloading them, the advisory said.

If someone downloads these fake AI apps on their devices, it maximises the opportunity to install malware designed to steal all their data, the advisory says, asking users to practice due diligence before clicking the ‘download’ button in order to minimise AI cybersecurity risks.

This also comes days after the Ministry of Electronics and Information Technology (MeitY) recommended setting up a dedicated AI governance board to review and authorise AI applications in the country.

The framework called for empowering the proposed board with powers to ensure AI initiatives align with legal instruments and address ethical considerations. In addition to the governance board, MeitY has also advocated for an “AI Ethics Committee” to design and integrate standard AI practices into all project stages, according to the competency framework.

The regulatory push also parallels rapid developments in India’s AI ecosystem, with both public and private sectors making significant investments. Earlier this month, MeitY launched the IndiaAI Compute Portal, a unified datasets platform AIKosha, and an accelerator programme for homegrown AI startups as part of the larger INR 10,300 Cr IndiaAI Mission.

In the private sector, Jio Platforms recently announced plans for a cloud-based PC to help users deploy compute-intensive AI applications, alongside developing “JioBrain,” a machine learning-as-a-service offering for enterprises.

According to a BCG and Nasscom report, India’s AI market could reach $17 billion by 2027 with a CAGR of 25-35%.  IDC forecasts AI spending in India will hit $6 billion by 2027, growing at 33.7% annually from 2022.

Note: We at Inc42 take our ethics very seriously. More information about it can be found here.

You have reached your limit of free stories
Become A Startup Insider With Inc42 Plus

Join our exclusive community of 10,000+ founders, investors & operators and stay ahead in India’s startup & business economy.

2 YEAR PLAN
₹19999
₹7999
₹333/Month
UNLOCK 60% OFF
Cancel Anytime
1 YEAR PLAN
₹9999
₹4999
₹416/Month
UNLOCK 50% OFF
Cancel Anytime
Already A Member?
Discover Startups & Business Models

Unleash your potential by exploring unlimited articles, trackers, and playbooks. Identify the hottest startup deals, supercharge your innovation projects, and stay updated with expert curation.

CERT-In Warns Users Against ‘Vulnerabilities’ In AI Apps-Inc42 Media
How-To’s on Starting & Scaling Up

Empower yourself with comprehensive playbooks, expert analysis, and invaluable insights. Learn to validate ideas, acquire customers, secure funding, and navigate the journey to startup success.

CERT-In Warns Users Against ‘Vulnerabilities’ In AI Apps-Inc42 Media
Identify Trends & New Markets

Access 75+ in-depth reports on frontier industries. Gain exclusive market intelligence, understand market landscapes, and decode emerging trends to make informed decisions.

CERT-In Warns Users Against ‘Vulnerabilities’ In AI Apps-Inc42 Media
Track & Decode the Investment Landscape

Stay ahead with startup and funding trackers. Analyse investment strategies, profile successful investors, and keep track of upcoming funds, accelerators, and more.

CERT-In Warns Users Against ‘Vulnerabilities’ In AI Apps-Inc42 Media
CERT-In Warns Users Against ‘Vulnerabilities’ In AI Apps-Inc42 Media
You’re in Good company