Expert warns AI’s biggest issue isn’t the mass layoffs — but something way more dangerous
The influence of artificial intelligence on American businesses has been profound. We are in the midst of an industry-wide shift, with companies rapidly adopting AI technologies to reshape how they operate. Unfortunately, this has also triggered a significant wave of layoffs, with top companies like Amazon, Nike, and others cutting thousands of jobs as they restructure their operations around AI-driven systems. However, Alfredo Hickman, chief information security officer at Obsidian Security, recently argued that while layoffs affect people directly, they are not the biggest worry when it comes to large-scale AI adoption.
According to a report by CNBC, Hickman claimed that when it comes to AI, “We’re fundamentally aiming at a moving target.” By that, he means that AI is developing at a breakneck speed, and it won’t be long before the systems get so complex that humans won't be able to fully comprehend, control, or predict them. Interestingly, Hickman said he recognized the scale of the issue during a conversation with the founder of a company that builds core AI models, who admitted they do not know where AI will be in the next two to three years. As Hickman put it, “The technology developers themselves don’t understand and don’t know where this technology is going to be.”
While AI has undeniably streamlined many processes, there is a growing gap between how companies expect AI-driven systems to behave and how those systems actually behave after deployment. Because these systems outpace human comprehension, AI can introduce small errors that go completely unchecked. These errors can accumulate over months or even years, ultimately resulting in massive losses for a company.
Noe Ramos, vice president of AI operations at Agiloft, addressed the issue of small errors scaling over time, saying, “Autonomous systems don’t always fail loudly. It’s often silent failure at scale.” Even when an error is minor, she claims, its impact spreads quickly, and by the time companies notice it, the damage is done. “Those errors seem minor, but at scale over weeks or months, they compound into that operational drag, that compliance exposure, or the trust erosion. And because nothing crashes, it can take time before anyone realizes it’s happening,” she said.
Interestingly, most of these issues boil down to either people placing blind faith in AI or companies putting intense pressure on engineers to adopt the technology as quickly as possible. Hickman addressed the rate at which US companies are adopting AI-driven systems, saying, “It’s almost like a gold rush mentality, a FOMO mentality, where organizations fundamentally believe that if they don’t leverage these technologies, they are going to be put into a strategic liability in the market.” Oftentimes, such a high rate of adoption leaves no window for experimentation, increasing the chances of things going wrong down the line.
That said, John Bruggeman, the chief information security officer at CBTS, believes the only solution is a kill switch that lets companies stop the situation before it gets out of hand. “The CIO should know where that kill switch is, and multiple people should know where it is if it goes sideways,” he insisted.