Published: May 18, 2023
By Jordan MacAvoy
Recently, we conducted a survey to gather insights from professionals like yourself, and the results were nothing short of eye-opening. It’s clear that AI and the capabilities it unlocks will have a profound impact on our field. The recent leaps in accessibility and the democratization of tools (thanks, OpenAI, for kickstarting it) will no doubt revolutionize all walks of life, and how companies manage security, risk, privacy, and compliance will be no different.
The survey findings came through loud and clear: AI is poised to fundamentally reshape our industry. It’s not just another passing trend; it’s a game-changer that demands our attention. But what does this mean for us? According to the survey, the majority of respondents agree that AI will make certain tasks considerably easier. Imagine streamlined network monitoring, efficient control testing, and the ability to implement a robust information security framework with relative ease. Exciting, right?
However, as we embark on this AI-infused journey, we must also address the need to adapt our information security policies to this new landscape. As AI continues to mature, it’s essential that our policies evolve in parallel to ensure we harness AI’s potential securely and responsibly. We must strike a delicate balance between innovation and safeguarding our digital assets. Sam Altman, CEO of OpenAI, spoke about this topic this week (see video below):
The Skynet-like concern (a Terminator movie reference, for those who don’t know) is real, and safeguards must be put in place, but a concern that hits closer to home for many is the fear of job displacement. While AI does have the potential to reduce the number of full-time employees required on information security teams, it is more likely that it will automate most or all routine and repetitive tasks, freeing already stretched-thin security teams to focus on high-value, mission-critical work. In this way, AI will be a powerful ally to resource-constrained teams.
While the promises of AI are compelling, we must also acknowledge the concerns raised by the survey participants. The maturity of AI technology and the quality of its work are legitimate considerations. Trusting machines with sensitive company information and critical security responsibilities demands a cautious approach. We must ensure that AI systems are thoroughly vetted, robust, and capable of delivering reliable results before fully embracing them.
Now, let’s dive into the intriguing insights provided by our survey respondents:
The last bullet encourages us to proceed with caution. The survey respondents highlighted specific concerns that deserve careful consideration. Issues such as the exposure of confidential data in low-security or public technologies, as well as the appropriate establishment of internal policies and guardrails for AI usage, are paramount. We must ensure that the benefits of AI are balanced with robust security measures and stringent privacy protocols.
Before we conclude, I want to acknowledge the additional comments shared by some survey participants. It’s clear that workload challenges, particularly those related to third-party risk and security questionnaires, remain a significant hurdle. As we navigate the AI-powered landscape, it’s crucial to strike the right balance. That said, AI is here to stay; it will fundamentally shape the future of this type of work, and we should all be both embracing and preparing for that eventuality.