Have you been receiving better-quality emails lately? Ones that begin with phrases such as “I hope this email finds you well” and end with neat summaries of the critical points covered in the body? These messages are written clearly, often in a gentle, courteous and upbeat tone. If you’ve noticed changes like these, you might be the recipient of communications wholly or partially written by artificially intelligent chatbots such as ChatGPT or Google’s Bard.
Chatbots like these offer evident and immediate value to their many users. From the construction manager developing an implementation plan to the safety manager crafting a procedure, the chatbot provides a quick and easy way to produce text that mostly makes sense (more on this later). The accessibility of these tools means that people working in industries like construction are trying them out. Chances are, even if you are unaware of it, people within your organization are already teaming up with chatbots to craft emails, draft letters, write Excel formulas and develop safety procedures.
Getting caught up in all the real and imagined benefits this technology offers is easy. After all, AI chatbots produce human-like responses to our prompts and provide nuanced answers to complex questions. It's hard not to see that they have the potential to free us from many mundane administrative tasks. AI chatbots could enable safety professionals to focus on what matters most – ensuring a safe and healthy work environment. However, the picture is not entirely rosy, and there are growing concerns about the risks of relying too heavily on AI.
Natural Language Processing
AI chatbots rely on Natural Language Processing (NLP), a branch of computer science concerned with training computers to understand human language. These chatbots are built on Large Language Models, which are trained on massive text libraries often filled with information pulled from the internet. This training provides the chatbot with a model of language that helps it predict the next word in a sentence based on the probability that specific terms will follow those that came before them. When I typed the phrase “life is but a…” into ChatGPT, it continued with the word “dream” four out of five times and provided slightly different explanations of the origins of the complete phrase. The fact that these chatbots are, at some level, no more than probabilistic mathematical algorithms has led some researchers, like University of Washington Professor Emily Bender, to declare that they should not be called artificial intelligence and that the writing they produce should be referred to as synthetic text.
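The next-word prediction described above can be illustrated, in drastically simplified form, with a toy bigram model: count how often each word follows each other word in a tiny corpus, then pick the most probable follower. The corpus and function names below are hypothetical stand-ins; a real Large Language Model is vastly larger and uses neural networks rather than simple counts, but the probabilistic idea is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the massive text libraries
# a real Large Language Model is trained on (hypothetical data).
corpus = (
    "life is but a dream "
    "life is but a dream "
    "life is but a dream "
    "life is but a dream "
    "life is but a walking shadow"
).split()

# Count how often each word follows each preceding word (bigram counts).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = followers[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("a"))  # → ('dream', 0.8)
```

In this sketch, “dream” follows “a” in four of five occurrences, so the model predicts it with probability 0.8 — echoing the informal ChatGPT experiment above, where “dream” appeared four out of five times.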
Synthetic Text Is Not a Demonstration of Comprehension
Bender warns us not to be too quick to mistake synthetic text generated by chatbots for a demonstration of comprehension. ChatGPT’s ability to answer a question is a complex form of mimicry supported by its underlying algorithm. So, what does that mean for the many people using these tools, including those working within construction safety?
Let’s Start by Being Honest with Ourselves
AI chatbots don’t seem to be going away anytime soon. Early adopters are using this technology already, and it’s increasingly likely that people who work for or with you are using them. So have a conversation with your team about using these tools. In the context of safety management, evaluating the potential benefits and concerns associated with using AI chatbots is essential. While they can help with administrative tasks, they should not be relied upon to replace human expertise, critical thinking and judgment.
It’s essential to recognize that chatbots have limitations and can make mistakes. They cannot truly understand context, and they have been known to confidently produce entirely false passages of information, known as hallucinations.
No matter how you end up using chatbots, keep them in context: they are tools to help you improve your writing or your work, not a replacement for your own intent and best judgment.