
AI drug discovery systems could be repurposed to make chemical weapons

Drug Discovery: A demonstration using drug design software shows how easy it is to generate toxic compounds.

In 2020, Collaborations Pharmaceuticals, a company that specializes in finding novel drug candidates for rare and communicable diseases, received an unusual request.

The private Raleigh, North Carolina, company was asked to speak at an international conference on chemical and biological weapons.

The discussion focused on how artificial intelligence software, generally used to develop drugs for treating conditions such as Pitt-Hopkins syndrome or Chagas disease, could be diverted to more sinister purposes.

Sean Ekins, Collaborations’ CEO, began brainstorming with Fabio Urbina, the company’s senior scientist, about how to respond to the invitation.

It didn’t take them long to come up with an idea: instead of using animal toxicology data to weed out a drug’s dangerous side effects, Collaborations could use its AI-based MegaSyn software to generate a list of toxic molecules similar to VX, a notorious nerve agent.
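
Conceptually, the misuse the team describes amounts to flipping the sign on a toxicity term in the model’s scoring objective: where a normal drug-discovery run penalizes predicted toxicity, the inverted run rewards it. Below is a minimal, hypothetical Python sketch of that inversion. MegaSyn’s internals are not public, so `predict_activity` and `predict_toxicity` here are illustrative stand-ins for trained predictive models, not the company’s actual code.

```python
# Minimal, hypothetical sketch of inverting a toxicity objective.
# `predict_activity` and `predict_toxicity` stand in for trained models
# that map a molecule to a bioactivity or toxicity estimate.

def score_candidate(molecule, predict_activity, predict_toxicity,
                    reward_toxicity=False):
    """Combine predicted bioactivity and toxicity into one score.

    In ordinary drug discovery, high predicted toxicity lowers a
    candidate's score; flipping the sign rewards toxicity instead.
    """
    activity = predict_activity(molecule)
    toxicity = predict_toxicity(molecule)
    return activity + (toxicity if reward_toxicity else -toxicity)


# Toy usage with constant stand-in predictors:
fake_activity = lambda mol: 0.8
fake_toxicity = lambda mol: 0.9
print(score_candidate("CCO", fake_activity, fake_toxicity))   # approx. -0.1
print(score_candidate("CCO", fake_activity, fake_toxicity,
                      reward_toxicity=True))                  # approx. 1.7
```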

The researchers ran MegaSyn overnight, and it generated 40,000 compounds, including not only VX itself but also other known chemical weapons agents and a slew of new, potentially dangerous molecules.

A little code, open-source data, a 2015 Mac computer, and less than six hours of machine time were all it took. “It simply felt a little weird,” Urbina says of the software’s output, noting how similar the process was to the company’s commercial drug-development work.

“Using these generative models to develop promising new medicines wasn’t any different from what we’d done before.”

Collaborations presented its findings at Spiez CONVERGENCE, a conference held every two years in Switzerland to assess new trends in biological and chemical research that could pose national security concerns.

Urbina, Ekins, and their colleagues even published a peer-reviewed commentary on the company’s research in Nature Machine Intelligence, and then briefed the White House Office of Science and Technology Policy on the findings.

Filippa Lentzos, co-director of King’s College London’s Centre for Science and Security Studies and a co-author of the article, says, “Our feeling is that [the research] could offer a valuable springboard for policy development in this area.”

The surprising part was how uncannily similar the exercise was to the company’s day-to-day work. The researchers had previously used MegaSyn to generate molecules with therapeutic promise that act on the same molecular target as VX, according to Urbina.

These drugs, known as acetylcholinesterase inhibitors, can help treat neurodegenerative conditions such as Alzheimer’s disease.

For their experiment, the researchers simply instructed the software to generate compounds similar to VX without specifying the molecule’s actual structure.

Many drug-discovery AIs, including MegaSyn, rely on artificial neural networks. “Essentially, the neural net is telling us which routes to take to go to a specific destination, which is biological activity,” says Alex MacKerell, director of the University of Maryland School of Pharmacy’s Computer-Aided Drug Design Center, who was not involved in the study. A molecule’s “score” is determined by a set of criteria, such as how well it inhibits or activates a given protein. The higher a molecule’s score, the more likely it is to achieve the desired effect.
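
To make MacKerell’s description concrete, here is a brief, hypothetical sketch of the generate-score-rank loop such systems run. Nothing here reflects MegaSyn’s real interface; `generate` and `score` are assumed callables standing in for a trained generative model and a scoring model.

```python
# Hypothetical generate-score-rank loop; not MegaSyn's actual code.
# `generate` returns a candidate molecule (e.g., a SMILES string) and
# `score` maps a molecule to a number, where a higher score means the
# molecule is more likely to show the desired biological activity.

def rank_molecules(generate, score, n_candidates=40_000, top_k=100):
    """Generate candidates, score each one, and keep the top_k best."""
    candidates = [generate() for _ in range(n_candidates)]
    scored = sorted(((score(mol), mol) for mol in candidates), reverse=True)
    return scored[:top_k]
```

The 40,000-candidate default here simply echoes the size of the overnight run described above; in a real system, each `score` call would itself be a neural-network inference.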

According to the company’s scoring mechanism, many of the novel compounds MegaSyn generated were predicted to be more toxic than VX, a result that made both Urbina and Ekins uncomfortable.

They wondered whether running the software had already crossed an ethical line and decided not to do anything further to narrow down the results computationally, much less test the compounds in any way.

“I believe their ethical sense was spot on,” says Paul Root Wolpe, a bioethicist and director of Emory University’s Center for Ethics, who was not involved in the study. Wolpe writes and thinks frequently about artificial intelligence and other emerging technologies.

Once the authors believed they could demonstrate that this was a potential threat, he says, “their obligation was not to push it any further.”

However, some experts believe the study did not go far enough in answering key questions about whether using AI software to find toxins could practically lead to the development of an actual biological weapon.

“What appears viable in theory may not be possible in practice,” says Sonia Ben Ouagrham-Gormley, an associate professor in the biodefense program at George Mason University’s Schar School of Policy and Government, who was not involved with the research.

Despite this caveat, Elana Fertig, associate director of quantitative sciences at Johns Hopkins University’s Sidney Kimmel Comprehensive Cancer Center, who was not involved in the study, believes that the ease with which an AI can rapidly generate a large number of potentially hazardous molecules could still speed up the development of lethal bioweapons.

The authors of the paper propose several ways to monitor and control who can use these technologies and how they are used, including waitlists that would require users to go through a prescreening process to verify their credentials before they could access models, data, or code that could be easily misappropriated.

They also recommend offering drug discovery AIs to the general public via an application programming interface (API), which is a software interface that allows two programs to communicate with one another.

Users would have to specifically request molecular data through the API. Ekins said in an e-mail to Scientific American that an API could be configured to generate only molecules with little potential for toxicity and to “require users to [use] the tools/models in a specified way.” The number of users with access to the API could be limited, as could the number of molecules a user can generate at once.
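
These proposed controls map naturally onto a simple policy check. Below is a hedged sketch of how such API gating might look; the field names, limits, and the `allow_request` helper are illustrative assumptions, not a description of any real service.

```python
# Illustrative sketch of API access controls like those the authors
# propose; all names and limits here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ApiPolicy:
    credentials_verified: bool      # user passed a prescreening/waitlist check
    max_requests_per_day: int       # cap on total API calls per user
    max_molecules_per_request: int  # cap on batch size per call
    max_predicted_toxicity: float   # refuse to return highly toxic outputs

def allow_request(policy: ApiPolicy, requests_today: int,
                  n_molecules: int, predicted_toxicity: float) -> bool:
    """Return True only if every control in the policy is satisfied."""
    return (policy.credentials_verified
            and requests_today < policy.max_requests_per_day
            and n_molecules <= policy.max_molecules_per_request
            and predicted_toxicity <= policy.max_predicted_toxicity)
```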

Nonetheless, Ben Ouagrham-Gormley believes that such regulation is premature unless the technology is shown to readily facilitate bioweapon development.

Urbina and Ekins, for their part, see their effort as a first step toward drawing attention to the issue of technological misuse. “We don’t want to paint these things in a negative light, because they have a lot of worth,” Ekins explains. “However, there is a negative aspect to it. There’s a note of warning there, and I think it’s essential to think about.”
