At a recent safety conference, I was excited to see a talk on artificial intelligence in construction. My excitement quickly turned to disappointment when I realized the speaker was merely promoting a construction camera system with basic AI functionality: tracking hard hat compliance and flagging puddles for cleanup. I'd seen these surface-level applications before. I wanted to hear about how the current wave of generative AI is revolutionizing construction work at a fundamental level. The presentation made clear that our industry is still grappling with how to implement AI meaningfully.
In this article, I'll explore two critical domains where AI is transforming safety management: first, its potential to automate traditional safety tasks like risk assessments and data analysis, and second, perhaps more importantly, how safety professionals can leverage AI to enhance or augment their own effectiveness.
This distinction between automation and augmentation is crucial for understanding AI's true potential in construction safety. Another way to look at it is to ask whether generative AI is acting as an automator or as an augmenter.
PC Magazine defines automation as "Hardware and software that performs an operation without manual intervention. Automation is an umbrella term for computers and applications that emulate tasks typically performed by people." In other words, this is the much-fretted-over idea of AI as a replacement for people. AI as an augmenter, by contrast, supports and enhances a person's performance or productivity.
Figure 1. Made using Flux 1.1 with the prompt: "A clean, minimalist illustration of two construction buckets side by side against a white background. The left bucket is labeled 'AI AUTOMATION' and contains icons representing automated safety tasks - small symbols of checklists, data charts, cameras, and warning signs. The right bucket is labeled 'AI AUGMENTATION' and contains tools representing enhanced human work - a magnifying glass, a pencil, a clipboard, and a brain symbol. Both buckets should be the classic orange/yellow color of construction equipment, with clear black text labels. The icons inside should be simple, black line drawings floating slightly above each bucket, suggesting activity. Add a subtle shadow beneath each bucket for depth."
The research on AI's capabilities in construction safety tells an interesting story. Recent studies show that tools like ChatGPT can outperform human experts in certain aspects of risk management. One study even found that GPT-4 scored higher than human experts in a blind peer review of construction project risk management (Can ChatGPT exceed humans in construction project risk management?).
But AI is far from perfect; the same paper discusses how it struggles with more nuanced problems and cannot reproduce the human intuition that comes from training, education, and years of experience.
When I've experimented with AI vision models, I've found their ability to identify hazards impressive. That said, I struggle to see a future where we deploy cameras across construction sites and command an AI overlord to monitor compliance. No doubt some eager construction technologists are already working on such a project, but the whole thing feels misguided, and not only because of its obvious Big Brother undertones. We may well need AI capable of identifying hazards and assessing risks, but only if it can show us something that isn't obvious, not just a missing hard hat. And the technology must be introduced into the work environment thoughtfully enough that it doesn't alienate the workforce.
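For readers who want to try this kind of experiment themselves, the sketch below shows roughly how one might set it up. It is a minimal illustration, assuming the OpenAI Python SDK and a vision-capable model such as gpt-4o; the photo URL and prompt wording are hypothetical placeholders, not a monitoring product.

```python
# Minimal sketch: asking a vision-capable model to flag non-obvious hazards
# in a single site photo. Assumes the OpenAI Python SDK with an API key in
# the OPENAI_API_KEY environment variable; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

PHOTO_URL = "https://example.com/site-photos/slab-pour-east-wing.jpg"  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "You are assisting a construction safety professional. "
                        "List hazards visible in this photo, but skip the obvious "
                        "PPE items such as hard hats and vests. Focus on conditions "
                        "a busy walkthrough might miss, and note your uncertainty."
                    ),
                },
                {"type": "image_url", "image_url": {"url": PHOTO_URL}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Even in a quick test like this, the value is in what the model surfaces beyond the checklist items, which is exactly the bar I set above.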
Figure 2. Generated using Ideogram with the prompt: "A cartoon-style illustration of a safety professional sitting at a desk, wearing a construction bucket labeled 'AI AUGMENTATION' tilted slightly on their head like an unusual hard hat. The bucket should have glowing blue circuit patterns and small floating holographic icons emerging from it (safety symbols, clipboard, brain, magnifying glass). The professional is wearing typical office attire and is working at a modern computer setup with multiple screens. On the screens, show safety reports and risk assessments being actively enhanced with AI assistance (represented by subtle blue highlighting and floating text suggestions). The professional looks confident and slightly amused, suggesting they're embracing this new technology. The overall style should be clean and professional but with a touch of whimsy, using a warm color palette with blue AI accents. Include a small coffee mug on the desk with a safety slogan for extra personality."
The other use case is the one I've been demonstrating in these articles and the one I've relied on since I first discovered generative AI for myself. That use case treats generative AI as a tutor, assistant, teacher, mentor, planner, analyst, reviewer, editor, and a whole host of other roles it serves well.
The distinction between these two domains, automation and augmentation, is crucial for understanding how AI will reshape safety management.
When I use ChatGPT to help me write a job hazard analysis, I'm not replacing my expertise but amplifying it. The AI helps me consider angles I might have missed, suggests more straightforward ways to communicate risks, challenges my assumptions, and checks my biases. But ultimately, I'm the one making the decisions and ensuring the analysis fits our specific workplace needs.
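To make that workflow concrete, here is a minimal sketch of how such a session can be structured, again assuming the OpenAI Python SDK; the draft text and model name are placeholder assumptions. The important part is the instruction: the model reviews and questions my draft rather than writing the analysis for me.

```python
# Minimal sketch: using a chat model as a reviewer of a human-written JHA
# draft rather than as its author. Assumes the OpenAI Python SDK; the draft
# text and model name are placeholders.
from openai import OpenAI

client = OpenAI()

jha_draft = """
Task: Setting roof trusses with a rough-terrain forklift.
Step 1: Stage trusses near the lift zone. Hazard: pinch points. Control: gloves, spotter.
Step 2: Hoist trusses to the roof line. Hazard: suspended load. Control: tag lines, exclusion zone.
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a construction safety reviewer. Do not rewrite the "
                "document. Instead: (1) list hazards or steps the draft misses, "
                "(2) flag controls that seem weak or generic, and (3) ask "
                "clarifying questions about site-specific conditions."
            ),
        },
        {"role": "user", "content": f"Review this job hazard analysis draft:\n{jha_draft}"},
    ],
)

print(response.choices[0].message.content)
```

The output is a list of gaps and questions, not a finished JHA; deciding what belongs in the final document remains my job.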
I use AI daily. It has so fundamentally changed how I work that I'm convinced I have significantly increased my productivity. Furthermore, I've seen coworkers overcome writing blocks and use AI to help them with perceived shortcomings that made them avoid certain types of safety work, like writing reports and proposals.
Of course, there are risks associated with using AI; they are innumerable and perhaps the topic of a separate article. But we mustn't let the risks keep us from getting to know this technology better. We don't have a choice: technology is moving so fast that the transformation of our workplaces is inevitable. For now, at least, I think the writing on the wall is clear: AI is best positioned as an augmenter, not an automator, of our safety abilities.
Figure 3. Ideogram image created using the prompt: "A split-screen illustration showing two construction site scenarios. On the left, a sterile, automated site with surveillance cameras, drones, and robots monitoring workers (shown in cold, mechanical blues and greys) and telling them how to do their jobs. On the right, a warmer scene showing a safety professional collaborating with workers, using a tablet with holographic AI assistance floating above it, highlighting the human-AI partnership. The contrast should be clear but not cartoonish, using an architectural technical drawing style."
As we stand at this technological crossroads, the path forward for AI in construction safety is becoming more apparent. While the industry may be distracted by flashy automation solutions like AI-powered surveillance systems, the real revolution is happening more quietly in the daily work of safety professionals discovering AI's potential to make their lives easier and the products of their work more palatable to those they serve.
The distinction between automation and augmentation isn't just semantic – it represents two fundamentally different approaches to incorporating AI into safety management. The automation path, focused on replacing human functions with AI systems, risks missing the nuanced, relationship-based nature of effective safety management.
The augmentation approach described throughout this article leverages AI to enhance our capabilities, allowing us to work more effectively while maintaining the human judgment and contextual understanding crucial in our field. The future of safety management isn't about choosing between human expertise and artificial intelligence; it's about finding the sweet spot where both can work together.