Engineering has always been hands-on. Whether blueprints and project designs originate at drafting tables or on computers running sophisticated CAD or BIM software, humans have been at the controls.
Now, artificial intelligence (AI) is flipping the script. Over the last few years, AI has appeared in tools, applications, and processes that touch nearly every aspect of engineering. These systems—including generative AI models that can chat, write content, summarize complex documents, and automate processes—increasingly take humans out of the driver’s seat.
“AI is exploding onto the scene. It is redefining the fundamental way architects and engineers work,” says Mark Blankenship, director of risk management at Willis Towers Watson and a member of ACEC’s Risk Management Committee. “There are enormous benefits associated with the technology, and it is something that firms in the A/E/C space cannot ignore. But it’s also critical to address the ethical and legal risks related to AI.”
To be sure, as AI moves into the mainstream of engineering, firms must adjust and adapt. There’s a need for policy updates, new technology controls, employee training, and various other guardrails that allow a firm to tap into the power of the technology while mitigating the risks that AI introduces.
“Firms should have checks and balances in place, especially since AI is changing so rapidly,” says Lillian Minix, marketing communications manager at Timmons Group, a full-service engineering firm headquartered in Richmond, Virginia. “Things can go astray without the right understanding of what AI does and doesn’t do well.”
Engineering firms have steadily adopted AI over the last several years. It has helped improve processes, automate tasks, and put data to work in new and innovative ways. Yet, the introduction of generative AI—built on large language models (LLMs) that write text and generate images—has changed everything. Suddenly, AI can do things that previously only humans could.
Firms are now using AI to generate designs; write proposals; establish interactive knowledge bases; and analyze project costs, timelines, and performance. Blankenship says a web search he conducted a year ago turned up about 100 software applications in the A/E/C space that offer AI features. Recently, the number topped 1,100. And while public models like ChatGPT, Gemini, and Copilot have garnered the bulk of attention, some firms are also developing proprietary generative AI models using their own data and designs. This makes it possible to tailor results to the specific needs of the firm.
Moving faster and innovating better are central to AI. Yet, caveats exist. Certain types of AI are often called “black boxes” because scientists and software developers don’t always fully understand how models learn, how they arrive at conclusions, and whether the information they produce is accurate and free of copyright infringement. Data scientists train AI models on huge volumes of publicly available data, which can lead LLMs to repurpose words, documents, designs, and concepts in curious and sometimes problematic ways.
“Ownership of data is a huge concern,” observes Susan Turrieta, a principal at Walter P Moore and Associates and a member of ACEC’s Technology Committee. “Things can get fuzzy if you are inserting client text, images, and other content into a model—or pulling content from a large public model. Unless you actively label the data (a human adding context or identifying specific objects or concepts) before it goes into an AI system, it’s impossible to know where it will go or when, where, or how it will show up.”
It’s more than an abstract problem. Litigation surrounding copyright ownership is escalating. For example, The New York Times sued Microsoft and OpenAI, alleging they used articles from the newspaper to train AI models, resulting in Copilot and ChatGPT mimicking writing styles, copying language patterns, and serving up false attributions. This undermines its fundamental business model, the Times argues. “This case has implications that extend across industries and businesses,” Blankenship says.
There’s also the fact that on an ethical level, some clients aren’t comfortable receiving machine-generated content for proposals, designs or renderings, and other materials. Recently, Blankenship encountered a case in which a company that had issued an RFP for a project received two nearly identical proposals from different engineering firms. “Only five words were different in the two proposals…You don’t want to receive a call with someone saying, ‘We need to talk.’ You don’t want to be viewed as a company that lacks creativity and innovation,” he says.
The potential ethical and legal concerns don’t stop there. Generative AI models produce so-called hallucinations—invented facts and information—that can directly lead to legal issues. Training biases can result in underperforming systems that discriminate when hiring or fuel bad business decisions. There’s also the risk of data leakage, particularly in proprietary systems. When data scientists train generative AI models on large datasets, sensitive information—including payroll records and intellectual property—can get swept into them. Later, unauthorized staff might view the data or gain access to the model or its training data.
The takeaway? “Unchecked mistakes in AI output can cause delays and compromise the structural integrity of a project,” says Brent C.J. Britton, a partner at the law firm of Bochner, PLLC. “They can lead to lost money, lawsuits, and reputational damage that undermines the value of the firm.”
It’s essential for firms to devise a strategic framework to oversee the use of AI. Most importantly, human oversight is critical, says Nick Decker, engineering segment lead for software firm Egnyte. “It’s the responsibility of engineers and the innovation group to push the boundary and move fast,” he says. “It’s the responsibility of a governance team to ensure that a firm is minimizing risk but not blocking innovation. There’s a balancing act that needs to play out—with executive guidance.”
Strong policy controls are vital. “You have to understand how your organization uses AI tools,” says Catherine Bragg, senior vice president and deputy general counsel at global engineering and consulting firm TRC Companies, Inc., and a member of ACEC’s Risk Management Committee.
It’s important to define roles and establish clear policies about how, when, and where AI can be used. “You also have to constantly review things to make sure your policy is up to date,” Bragg says.
According to Britton, firms can take several steps to minimize risks. The process starts with thorough testing and validation. Any AI system should be rigorously tested before deployment, and all output should be proofread and fact-checked before it is used internally or externally. Firms might allow employees to use generative AI as a tool to help write emails, documents, and other materials, but the text should never be copied and pasted without a review. “No one should be allowed to rely on raw AI output,” he says.
Reporting issues and concerns with AI is essential, Britton says. It’s wise to establish clear internal communication channels, seek feedback, and listen to employees when they report AI risks.
As firms look to build their own generative AI systems, it’s vital to feed them high-quality, unbiased training data and to ensure the model is trained correctly, adhering to best-practice data science methods. “In this day and age—when even basic facts are in dispute—finding unbiased training data may be the most difficult part of using AI productively,” Britton says.
At Timmons Group, the importance of responsible AI—using the technology ethically and for beneficial purposes—takes center stage. “It can enhance creativity, speed processes, and lead to improved results when we use it conscientiously,” Minix says.
As a result, the firm emphasizes the value of using AI and makes tools available to teams, yet it also focuses on plugging human judgment into any interaction with AI technologies. “As humans, we own the cognizance of our relationships with AI. No one is responsible for AI’s outcomes other than the people who are guiding its solutions,” she says.
The ACEC Risk Management Committee’s AI Risk Subcommittee, led by Bragg and Blankenship, released detailed guidelines in July on the use of AI at engineering firms (see sidebar below). Among other things, the committee suggests a thorough review of AI policies at the board level, assigning oversight and responsibility among senior staff, developing an authorized use policy, and identifying use cases that could result in disciplinary action and termination. “A firm should focus on transparency and disclose the use of AI to clients,” says Blankenship.
“Nobody is going to build the guardrails for you. It’s critical to devote time and resources to getting AI right,” he says.
Establish an AI ethics framework.
Develop clear guidelines on AI use, including data privacy, bias, transparency, and accountability.
Focus on risk assessment.
Identify potential risks associated with AI applications and pinpoint mitigation strategies.
Conduct regular audits and reviews.
Have periodic evaluations of AI systems to ensure compliance with policies and ethical standards.
Keep an eye on data quality.
Use robust data governance practices to promote data accuracy, security, and privacy. Plug in diverse data sets, bias detection tools, and model evaluation methods to reduce bias.
Elevate explainable AI.
Aim for AI models that deliver clear explanations for decisions. This enhances transparency and builds trust.
Maintain strong security.
Build in technology controls and employee access controls to prevent unauthorized users from gaining access to AI models and systems.
Keep humans in charge.
Maintain human oversight in critical AI applications to avoid unintended results. Define clear roles and responsibilities for AI development, deployment, and management.
Establish an accountability framework.
Create mechanisms for identifying and rectifying AI-related issues and problems.
For more information, visit: www.acec.org/resource/guidelines-on-the-use-of-ai-by-design-professional-firms/
For some engineering firms, the accumulated risks and dangers of AI may seem like a reason to apply the brakes and take a wait-and-see attitude. But Bragg says AI is here to stay, and embracing the technology can enhance workers’ creativity and productivity. “AI offers enormous opportunities and, in order to remain competitive, firms must make it part of their business.”
Engineers and others must also understand that AI doesn’t represent an existential risk to their jobs—at least not for the foreseeable future. “A bigger threat for workers is falling behind and failing to keep up with the skill level required for today’s engineering firm,” Bragg says.
The goal should be to help employees maximize their understanding of AI and use it in new and productive ways, she says. “The idea is to solve complex problems and advance the engineering field.”
Samuel Greengard is a technology and business writer based in West Linn, Oregon. He has contributed to Entrepreneur, Information Week and Wired.