Boztek

Big on prevention, even bigger on AI

ESET World 2024 convened hundreds of cybersecurity professionals, analysts, and decision-makers, showcasing the company’s vision and innovations while addressing critical trends in cybersecurity, particularly focusing on artificial intelligence (AI). Central themes of the conference included ESET’s advanced threat research and evolving perspectives on AI’s role in cybersecurity.

The conference opened with ESET’s Chief Technology Officer, Juraj Malcho, who highlighted the major challenges and opportunities presented by AI. He posed fundamental questions about AI’s revolutionary potential, suggesting that while current AI technologies, especially large language models (LLMs), have transformed industries, their limitations necessitate careful consideration of their applications in cybersecurity. AI can streamline cyber defense by dissecting complex attacks and optimizing resource allocation, which is particularly beneficial for understaffed IT departments.

Following Malcho, Juraj Jánošík and Filip Mazán from ESET provided an insightful overview of AI and machine learning, detailing their biological inspirations and the construction of artificial neural networks. Mazán emphasized that increasing model complexity can actually reduce utility, and that human oversight remains crucial to refining these models and mitigating the risk of operational failure.

Mazán also described how ESET’s internal AI practices lead to more efficient and accurate threat detection. Despite the capabilities of LLMs, he noted significant limitations like misinformation generation—termed “hallucinations”—where models produce confident but inaccurate content. Such inaccuracies pose risks, especially in cybersecurity contexts.

Jánošík elaborated on additional limitations of contemporary AI, including issues of explainability, transparency, and reliability. Current models, which rely on intricate statistical correlations, are often difficult for humans to interpret, and their proprietary nature limits understanding of their decision-making processes. Generative AI’s potential to produce misleading information also raises concerns: legal incidents have already arisen from platforms giving users inaccurate advice, as in a case involving Air Canada’s chatbot.

On the offensive side, Jake Moore, ESET’s Global Cybersecurity Advisor, showcased how AI tools facilitate attacks, enabling techniques such as RFID card cloning and the creation of deceptive deepfakes. He presented a startling demonstration in which his deepfake impersonation of a company’s CEO managed to fool numerous individuals, underscoring the urgent need for vigilance regarding the credibility of digital content.

The conference culminated in a panel discussion with Jánošík, Mazán, and Moore, moderated by Victoria Pavlova, on the ramifications of AI deployment for businesses. The panelists were united in concern over the general public’s lack of awareness of AI’s capabilities and the potential for exploitation by malicious actors. While sophisticated AI-generated malware is not imminent, threats such as enhanced phishing campaigns and deepfakes built from publicly available material remain pressing.

Moreover, data privacy issues stemming from AI use were a significant focus, especially in the context of EU regulatory frameworks such as the GDPR and the AI Act, whose reach is limited on a global scale. There was consensus that enterprises should prioritize internal data protection, using enterprise versions of generative models to mitigate the risks associated with public data storage.

To address these challenges, Mazán advised organizations to start with open-source models that fulfill basic needs before transitioning to more complex solutions, thereby maintaining control over sensitive data. Jánošík rounded off the discussion by emphasizing that organizations must recognize both the advantageous and detrimental aspects of employing AI, stressing the need for clear guidelines coupled with common sense to ensure secure operational practices.

The conference highlighted the dual nature of AI’s evolution—a toolkit for advancing cybersecurity practices and a set of challenges requiring careful navigation. Raising awareness, encouraging critical thinking, and fostering responsible AI use emerged as essential strategies in managing the evolving landscape of AI in the cybersecurity domain.