Daniel Trauner of Axonius discussed new company research that has uncovered a surprising paradox in how organizations are investing in AI: the "double-edged sword" conundrum, in which AI is both deployed as a security tool and feared as a capable enabler of malicious activity.

Security budgets are increasing, with much of that spend going to AI to compensate for workforce skill gaps. Yet 72% of respondents express major concerns about AI adoption. This raises the question: are teams adopting AI as a "follow the crowd" solution to a problem they don't fully understand?

Daniel and Julian highlighted the opposing forces at play, concern about AI on one side and pressure to add AI features on the other, and traced the history of AI and its current state in the industry. While they acknowledged the excitement around advances in AI technology, they cautioned that it is not yet a silver bullet. They also talked about the race to incorporate AI into businesses and the open questions around where to build it into products.

The speakers discussed the risks associated with AI, particularly around data governance and the potential for employees to misuse the technology, and they emphasized the importance of establishing policies and governance to mitigate those risks. Daniel shared a successful use case: a basic chatbot that employees can use to ask questions about security policies and procedures. The bot retrieves relevant passages from the company's policies and constructs an answer based on the question asked, a retrieval-then-answer pattern sketched below.
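
Since the conversation only describes this chatbot at a high level, the following is a minimal, hypothetical sketch of the retrieval-then-answer pattern, not Axonius's actual implementation. The policy snippets, the keyword-overlap retrieval, the answer_policy_question helper, and the model name are all illustrative assumptions.

```python
# Hypothetical sketch of a policy Q&A chatbot: retrieve relevant policy text,
# then ask a language model to answer strictly from that text.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy "policy corpus"; a real deployment would index the company's actual documents.
POLICY_SNIPPETS = [
    "Laptops must use full-disk encryption and a screen lock of 5 minutes or less.",
    "Suspected phishing emails must be reported to the security team within 24 hours.",
    "Production data may not be pasted into external AI tools without written approval.",
]

def retrieve(question: str, snippets: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; production systems typically use embeddings."""
    q_words = set(question.lower().split())
    ranked = sorted(
        snippets,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer_policy_question(question: str) -> str:
    """Build a prompt from retrieved policy excerpts and return the model's answer."""
    context = "\n".join(retrieve(question, POLICY_SNIPPETS))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer only from the provided policy excerpts."},
            {"role": "user", "content": f"Policy excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_policy_question("Can I paste customer data into an external AI tool?"))
```

Grounding the model's answer in retrieved policy text, rather than letting it answer from general knowledge, is what makes this kind of internal chatbot useful for the governance concerns discussed above.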

The speakers also discussed the importance of thinking human-first when considering use cases for AI. They talked about the use of natural language processing tools for crafting phishing emails and writing code, as well as the increase in high-impact vulnerabilities being exploited by attackers.

You can find the full findings of the report here.