Column: Making AI Work in Engineering Research

The strengths, weaknesses, and strategies necessary to soundly integrate AI into research workflows.
AI has been bringing dramatic change to many fields, including mechanical engineering research, where it is being used to streamline work, speed up processes, and, by extension, accelerate innovation.

Artificial intelligence allows researchers to quickly analyze large amounts of data, perform complex calculations, and search for relevant research. It can be used to automate the writing of code, which makes it easier for researchers to include software-enabled features in new designs.

That capability can also enhance the tools researchers use in their work. For example, the open-source software R is widely used for statistical analysis, but writing R code can be difficult and time-consuming. “AI tools can write R very quickly and very accurately,” said Robert Marchini, strategy manager at ASME. “That can save several days of having to write and debug R code.”
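
To make that concrete, here is a minimal sketch of the kind of R script an AI assistant might draft from a plain-language request such as “test whether surface treatment affects fatigue life.” The file name and column names are invented for illustration, and any AI-generated code should still be reviewed and tested by the researcher.

    # Hypothetical example: load test data and check whether surface
    # treatment affects fatigue life (file and column names are illustrative)
    data <- read.csv("fatigue_tests.csv")

    # Fit a linear model of fatigue life against treatment and load level
    model <- lm(cycles_to_failure ~ treatment + load_mpa, data = data)

    # Report coefficient estimates, standard errors, and p-values
    summary(model)

    # 95 percent confidence intervals for the estimated effects
    confint(model, level = 0.95)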

AI is also being used to enhance other engineering tools, including simulation programs, design applications, and digital twins.

In short, AI is proving to be a powerful tool for researchers. But its use creates potential problems on several fronts. Researchers need to take steps to address those issues, or their efforts risk causing harm rather than driving innovation.
 

Avoiding the pitfalls


AI is not infallible. It can make mistakes that lead to anything from flawed research publications to dangerous engineering designs. And the result could be legal and safety problems, eroded trust, and, ultimately, ethical issues for engineers, who have a fundamental responsibility to avoid doing harm to people and society.

The key to the safe and effective use of AI is strong governance of the technology. The appropriate level of governance and the nature of AI policies will depend on the specific organization’s situation. But there are several key areas of risk that should be considered.
 

Bias and data integrity


AI is only as good as the data it uses to develop recommendations. But datasets might be inaccurate, out of date, or so incomplete that they don’t reflect the real situation. These issues can lead to erroneous results or biased recommendations that can be ineffective or harmful.

AI bias has reportedly led to problems such as unfair categorization of minorities in the criminal justice system or discrimination against women in hiring. But bias may come in less obvious forms. For example, explained Marchini: “AI tools are generally trained on the English language, so if you’re pulling research papers, the system may not know about papers that are very important in the field that are in Chinese, Hindi, or any other language.”

To avoid such problems, engineering organizations should develop policies for curating diverse, representative datasets. They should also be alert to the problems inherent in synthetic data created by AI to fill gaps in datasets. Often, the generation of such data “is very much a black box, and it can act in ways that cannot be fully understood by researchers, and so may have implicit bias,” Marchini said. Researchers should also follow up with ongoing monitoring for bias and inaccuracy and make adjustments accordingly.
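
As one small illustration of what ongoing monitoring can look like, the sketch below audits how a tabular dataset is distributed across a grouping variable before it is used. The file name, the source_language column, and the 5 percent threshold are all hypothetical; real monitoring policies would be tailored to the data and the risks involved.

    # Hypothetical audit: check how records are distributed across groups
    # before training or analysis (names and threshold are illustrative)
    data <- read.csv("training_data.csv")

    # Share of records per source language
    shares <- prop.table(table(data$source_language))
    print(shares)

    # Flag groups that fall below a chosen representation threshold
    underrepresented <- names(shares)[shares < 0.05]
    print(underrepresented)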

Meanwhile, AI prompts should provide context that helps the system focus on the right data. “Instead of saying ‘give me some information about this,’ you could say, ‘you are to do this, and you are to consider these factors when you create your result,’” Marchini noted. “Giving the tool more information about what it’s supposed to be doing will lead to better outcomes.”
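
As a fuller illustration of the contrast Marchini describes (the wording below is invented for this column, not a template from the source), a bare prompt such as “give me some information about fatigue in titanium alloys” might instead read:

    You are assisting a mechanical engineering literature review on fatigue
    in additively manufactured titanium alloys. List peer-reviewed papers
    from the past ten years, include relevant non-English publications,
    note each paper's dataset size and test conditions, and flag any claim
    you cannot support with a checkable citation.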
 

Intellectual property and privacy


AI’s possible use of proprietary data can create plagiarism or copyright infringement problems. It could also lead to the inadvertent exposure of trade secrets or personally identifiable information. Meanwhile, the widespread use of AI is still relatively new, and the legal issues are still evolving, creating uncertainty about the use of certain types of data in AI systems.

In that environment, AI governance policies should err on the side of caution—and organizations should require AI users to license the material they draw on. “You need to have clear legal authorization from the owners of the information put into an AI tool,” Marchini said. “This requirement provides a lot of clarity and helps ensure the data is used in an ethical and responsible way.” And as always, sensitive and proprietary data should be handled in a safe and secure manner.


Validation and verification


Again, AI is powerful, but it is not infallible. It makes mistakes and produces hallucinations, often because it perceives patterns or connections that do not exist, and it can only do so much. “AI is very good at some things, very bad at other things, and it has its limits,” Marchini said. “For example, ask any AI tool to generate 100 completely random words. It just cannot do that.” In general, an AI tool is not thinking. It is making informed guesses based on what it has been trained on.

That reality makes human oversight of AI essential. AI does not know whether its recommendations are correct, but humans can apply real-world experience, common sense, and ethics to make that determination. Thus, governance policies should require human validation and verification of AI outputs, especially in complex situations.

There are tools that can help detect bias or hallucinations, among other things. But at this point in AI’s adoption, Marchini said, verification and validation will often involve traditional manual effort. When AI provides a citation for a paper, the best way to confirm that the citation is correct is to follow the link and read the paper. It may also be useful to involve people from various disciplines in assessing outcomes.

“If you are doing an engineering design review, you should have multiple people approving any changes. That’s the most effective way. And that’s true regardless of whether the design was informed by AI,” Marchini emphasized.
 

Transparency


When engineering researchers use AI in their work, other researchers and reviewers need to know when a recommendation or report was created with AI. That helps ensure the responsible use of AI and builds trust in the technology and engineering research in general. It is also useful for others who want to replicate results—but that may not always be easy. “AI frameworks are rapidly evolving, and AI programs generally are making adjustments on the fly to the correlations they look for when answering prompts,” said Aaron Koskelo, editor of ASME’s Journal of Verification, Validation, and Uncertainty Quantification. “Thus, the ‘model’ whose results one is trying to reproduce may not be static. Even if the model is static, the outcome could still change, depending on which of several ‘plausible’ paths happens to be taken by the AI. A worst-case example of a bad outcome is the occurrence of hallucinations. All of this can make reproducing previous results an inexact endeavor.”

The key to transparency is clear documentation of AI’s role in the research. This is especially important when publishing a research paper. Papers should explain the tools, algorithms, datasets, parameters, and prompts that were used in the effort. They should also include an analysis of possible AI biases and errors, and the steps taken to mitigate such problems. And they should explain how AI was used in interpreting results and producing the paper itself.

Overall, Marchini said that researchers should keep two related “north stars” in mind: disclosure and accountability. That is, they should explain what was done with AI and take responsibility for the findings regardless of AI’s contribution. In the end, it is the human’s name on the paper.

Peter Haapaniemi is a freelance business and technology writer based in the Detroit area.