The vast majority of professionals in Pakistan are embracing artificial intelligence (AI) tools in their daily work, but most remain untrained in their safe and responsible use, according to Kaspersky’s latest report, “Cybersecurity in the Workplace: Employee Knowledge and Behavior.”
The study revealed that 86% of professionals in Pakistan now rely on AI tools for various work-related tasks. However, only 52% of these users have received any formal training on the cybersecurity aspects of AI, leaving a significant gap in awareness about potential risks such as data leaks, prompt injections, and misuse of neural networks.
Some 98% of respondents in Pakistan said they understand what “generative artificial intelligence” means, and for many, that understanding extends well beyond theory. AI has become embedded in their day-to-day work: 68% use it for writing or editing text, 52% for drafting emails, 56.5% for creating images or videos, and 35% for analyzing data.
The report warns, however, that this growing dependence on AI is not matched by adequate training: 21% of professionals admitted they have received no AI-related training whatsoever. Among those who did, 66% were taught how to use AI tools effectively and create prompts, while only 52% received any instruction on AI-related cybersecurity risks.
Despite this, most workplaces appear to be embracing AI integration. 81% of employees said generative AI tools are officially permitted within their organizations. In contrast, 15% reported that such tools are banned, while 4% were unsure about their company’s stance. Kaspersky notes that, in many cases, employees use AI without clear corporate oversight — a phenomenon often referred to as “shadow IT.”
To address this, the report stresses the need for companies to develop comprehensive policies that clearly define how AI may be used within the organization. Such policies, Kaspersky suggests, should set boundaries on what types of data can be processed through AI tools, specify which tools are approved, and restrict AI use in sensitive functions.
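To make the idea concrete, the sketch below shows one hypothetical way such a policy could be expressed and checked in code. It is an illustration only, not part of Kaspersky’s report: the tool names, data classifications, restricted functions, and the AIUsePolicy class with its is_request_allowed helper are all assumptions introduced here.

```python
# Illustrative sketch of an AI-use policy check (hypothetical; not from the report).
# A policy lists approved tools, the data classifications each tool may process,
# and business functions where AI use is restricted regardless of tool.

from dataclasses import dataclass, field


@dataclass
class AIUsePolicy:
    # Tools the organization has formally approved.
    approved_tools: set[str] = field(default_factory=set)
    # Data classifications each approved tool may handle.
    allowed_data: dict[str, set[str]] = field(default_factory=dict)
    # Business functions where AI use is not permitted at all.
    restricted_functions: set[str] = field(default_factory=set)

    def is_request_allowed(self, tool: str, data_class: str, function: str) -> bool:
        """Return True only if the tool is approved, the data classification
        is permitted for that tool, and the business function is not restricted."""
        if function in self.restricted_functions:
            return False
        if tool not in self.approved_tools:
            return False
        return data_class in self.allowed_data.get(tool, set())


# Example policy with made-up values, purely for illustration.
policy = AIUsePolicy(
    approved_tools={"corporate-assistant"},
    allowed_data={"corporate-assistant": {"public", "internal"}},
    restricted_functions={"legal-review", "payroll"},
)

print(policy.is_request_allowed("corporate-assistant", "internal", "marketing"))      # True
print(policy.is_request_allowed("corporate-assistant", "confidential", "marketing"))  # False: data class not allowed
print(policy.is_request_allowed("public-chatbot", "public", "marketing"))             # False: tool not approved
```

In practice, such rules would live in governance documents and be backed by training and monitoring; the code simply shows how the three elements the report mentions (approved tools, permitted data, restricted functions) fit together.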
Rashed Al Momani, General Manager for the Middle East at Kaspersky, said:
Kaspersky recommends that organizations train employees on responsible AI usage and provide IT specialists with advanced knowledge of AI exploitation techniques and defense strategies. The company’s Automated Security Awareness Platform offers specialized AI security training, while its Large Language Models (LLMs) Security course focuses on protecting systems that integrate AI tools.
The cybersecurity firm also advises ensuring that all employee devices have security solutions installed, such as its Kaspersky Next suite, to protect against phishing, malware, and fake AI tools. Finally, it urges companies to create comprehensive AI-use policies guided by Kaspersky’s best practices for secure implementation.