AI is technology. Wading through the hype to understand what it can and cannot (yet) do, knowing your professional pain points, and identifying internal technology champions on your team are three keys to building a foundation that harnesses artificial intelligence (AI) effectively. Approached correctly, AI can raise the bar on construction performance, speakers at Smart Foundations: Preparing for AI and ML Integration in Construction told attendees in July. 

“You have to know yourself, and I don’t just mean therapy,” half-joked Alan Espinoza, founder at ConTech. “As an organization, you have to know what your pain points are and what your processes are,” he said. Only then can that knowledge be used to leverage AI to find efficiencies that ease those challenges. 
However, panelists also stressed that a robust AI solution is only as strong as the data feeding it. Urging attendees to “take a step back” before leaping into AI, Ivy Nguyen, principal of Autotech Ventures, said it was vital to understand “the minimum requirements needed to be able to apply AI.”

According to Nguyen, first and foremost, you need clean, reliable data, and you must be able to capture and access it with relative ease. “Whether it’s sensors [or] it’s instrumenting your software and workflow products,” the data must capture “how something is done. You need something to store that data and easily retrieve it; you need the tools to manipulate that data and pour it into an AI model. All those pieces need to exist in order for you to start running an AI model,” she said.

“Data is the fuel; [you] have to have a good strategy around your data,” agreed Dan Williamson, director of artificial intelligence at Ryan Companies. However, it can’t be a rudderless strategy, he stressed. “It’s important to put someone in charge of thinking about your future strategy related to AI,” he said. Noting the responsibility often falls to an innovation team, Williamson said such a team works well if there is clear “leadership around it...because it changes so fast.”

And while AI can sometimes “hallucinate” by guessing at an answer, Joel Hutchins, chief product officer of Slate, said he was more worried about a different kind of hallucination: “It’s hard to navigate through all the [hype] and the over-promising on the capabilities of AI.”

Stressing the need for increased employee education, he said, “The dialogue around AI is important. It’s not magic. Ignore the hype. Focus on the outcomes. What problem are you solving?”

Agreeing it is vital to have a senior staffer as an AI champion, he suggested the ultimate litmus test will be lower down the organization chart. “Ask boots on the ground if [AI is] helping,” he said.

“Test[ing] AI applications in workflow from the top down, then get[ting] feedback from the bottom up is crucial,” said Espinoza. And find ways to inspire the entire team, he added. For example, identifying easy ways for AI to help, say with summarizing meetings, is the “low-hanging fruit to help people see value in [AI, and] you get more traction finding those little wins.”

Taking a deeper dive into AI “hallucinations,” Nguyen reminded attendees how important it is to understand where AI is sourcing its data. “Traceability is important,” she said, urging a “trust but verify” approach and noting that ChatGPT can give seven different answers to the same question, a kind of unchecked imprecision that is unacceptable in many instances.

Williamson summed up, “AI is just another technology. It’s more important you figure out [the] problem to solve; don’t start with applying AI [for the sake of applying AI]; that’s a recipe for failure.”