Antibiotic Identified by AI (nature.com)
176 points by bookofjoe on Oct 17, 2023 | hide | past | favorite | 50 comments


Pretty well done - especially the experimental validation. The main conclusions are accurate, but there is some BS around the periphery. A few notes below.

- The secret sauce here was the large, high quality experimental dataset they generated.

- Chemprop is meh.

- Is a Tanimoto similarity < 0.3 really dissimilar? It depends on the chemical fingerprint parameters they are using in RDKit. For ECFP4 the expected similarity of two random molecules is about 0.1, and 0.3 is about the cutoff you would use for a similarity-based virtual screen with ECFP4 fingerprints.

- The cheminformatic filtering steps that they did on their hits was well done.

- The experimental validation of the target was well done. A good illustration of all the preclinical work it takes to validate a molecule and mechanism on the way to an IND.

- They got very lucky. Most of the time your compound hits are not all that potent and require extensive med chem optimization. That was not the case here.
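On the Tanimoto point above: the similarity is just intersection over union of the on bits of two fingerprints. A pure-Python sketch (in practice you'd generate Morgan/ECFP4 fingerprints with RDKit and use its built-in similarity functions; the bit positions below are made up for illustration):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprints,
    represented as sets of on-bit indices."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# toy fingerprints (made-up bit positions, not real ECFP4 output)
a = {1, 4, 7, 9, 12}
b = {1, 4, 8, 9, 15}
print(tanimoto(a, b))  # 3 shared bits / 7 total on bits ≈ 0.429
```

With ECFP4's sparse bit vectors, two unrelated molecules share few bits, which is why random pairs land around 0.1.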


Yes, and also a good counterexample to some of the "thinkism" touted in AI, i.e., this is a result based on carefully curated data not just some machine just "thinking up" something.


The pharma industry has been using a combination of computational techniques and high throughput screening for decades.

I really don't see what's new here (except that they found a promising compound).

The part of the abstract that I have access to doesn't say how far abaucin got in the compound screening pipeline. Has it been shown effective in vitro or in vivo? What side effects have been screened for? Does it reach the infection site? Does it survive the digestive tract? Does it interact with common foods?

Here's a wikipedia article about how high throughput drug screening works. The inputs to these processes are traditionally computationally generated:

https://en.wikipedia.org/wiki/High-throughput_screening

Choice quote from wikipedia:

The term uHTS or ultra-high-throughput screening refers (circa 2008) to screening in excess of 100,000 compounds per day.

edit: Also, what percentage of compounds generated with this computational technique make it to each stage of the screening pipeline, and how does that compare to existing computational techniques?


I’ll have to double check, but I think they tested it in an animal (mouse) model. Not sure how far the compound got in the pipeline for its original indication, but since it is from the repurposing hub they might be able to jump to phase 2.


Thanks for the info. I've been delving slowly into this field as part of the lab I'm working with, but I come from a purely CS background, so I'm not sure how much chemistry I need to know to build decent ML models. For now I've been getting away with xgboost and RDKit. There are so many quirks in this field that I'm not sure how one can apply ML to it, tautomers being one of them.


Another useful resource might be this blog: https://depth-first.com/ (Richard Apodaca has some very clear explanations of topics in the field).


Alex Tropsha has some pretty solid articles on best practices.


I’ll take luck when it comes to new antibiotics.

Do you think we will come up with AIs that can model potency? How can we further improve leveraging AI beyond discovery?


They already do. QSAR has been a thing for a long time. The limiting factor is almost always the data. The publicly available SAR data in chembl / pubchem is trash, and why would you spend time and money designing and generating a good training dataset for a model when you could just be optimizing the compound?

Keep in mind that the dataset generated during optimization and the one you would prefer to train a model are quite different.

Also, in practice optimizing a lead compound is about more than just potency. You are also worried about things like toxicity, bioavailability, off target effects (selectivity), synthesis routes / yields (manufacturability), etc. And functional readouts (the assays you run) are not consistent across diseases / indications / targets / compounds.

That being said I am almost certain someone has used Bayesian optimization for an assist.
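For the curious, a Bayesian optimization loop can be surprisingly small. Here is a toy, dependency-free GP-UCB sketch over a made-up one-dimensional "potency" surface; the `potency` function, kernel lengthscale, noise term, and acquisition constant are all invented for illustration, and a real setup would use a proper GP library and a real assay readout:

```python
import math

def rbf(a, b, ls=0.2):
    # squared-exponential kernel
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-4):
    # GP regression: mu = k.T K^-1 y, var = k(xq, xq) - k.T K^-1 k
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    k = [rbf(x, xq) for x in xs]
    alpha = solve(K, ys)
    v = solve(K, k)
    mu = sum(ki * ai for ki, ai in zip(k, alpha))
    var = rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v))
    return mu, max(var, 0.0)

def potency(x):
    # made-up assay readout to maximize (peak at x = 0.7)
    return -(x - 0.7) ** 2

xs, ys = [0.0, 1.0], [potency(0.0), potency(1.0)]
grid = [i / 50 for i in range(51)]

def ucb(x):
    mu, var = gp_posterior(xs, ys, x)
    return mu + 2.0 * math.sqrt(var)

for _ in range(10):
    xq = max(grid, key=ucb)      # pick the most promising candidate
    xs.append(xq)
    ys.append(potency(xq))       # "run the assay"

best = xs[ys.index(max(ys))]     # should land near 0.7
```

The point of the machinery is exactly what the comment says: each "assay" is expensive, so you spend a model on deciding which compound to measure next instead of measuring everything.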


IMHO comparing QSAR to modern deep learning methods is sort of like comparing linear regression to modern deep learning methods: a technique that is wholly unsuitable for the sort of problems that deep learning is good at.

While I generally agree the data quality is poor, let's be clear: by exploiting the data in PDB and UniProt, DeepMind "solved protein structure prediction". There is a lot more value in the data that's out there; it's just mostly underutilized.


> How can we further improve leveraging AI beyond discovery?

Poorly. AI does not extrapolate. It can explore a defined interpolation space, even in input regions where there is little data, but it typically cannot extrapolate well beyond the convex hull of its training data, the provided classes, or other hard boundaries.

In effect, you have to be crafty and clever about the space an AI is to search within.
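A toy illustration of the interpolation-versus-extrapolation point: fit a straight line by least squares to y = x² sampled on [0, 1], then query it inside and far outside the training range (the function and sample points are made up for illustration):

```python
# Fit a straight line (least squares) to y = x^2 sampled on [0, 1],
# then query it inside and far outside the training range.
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx        # slope = 1.0, intercept = -0.15

def predict(x):
    return intercept + slope * x

err_interp = abs(predict(0.5) - 0.5 ** 2)   # ~0.1: tolerable inside [0, 1]
err_extrap = abs(predict(3.0) - 3.0 ** 2)   # ~6.15: wildly wrong at x = 3
```

Inside the hull of the training data the model is merely imperfect; outside it, the error grows without bound, which is the "be crafty about the search space" point.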


Thank you for this insight — it’s very enlightening for someone outside of this professional area.


Here in Canada there was a recent article about a woman with a UTI. She was given antibiotics, but the infection kept returning; that went on for eight years. By the end of it she had to have a kidney removed, and the other kidney was at risk too. Then they tried bacteriophages in combination with antibiotics, and she was cured within a short time.

https://www.ctvnews.ca/health/first-canadian-trial-successfu...


Social media comments and journalists love to talk about how many may die if we fail to regulate and slow down the progress of AI. ("six month pause", etc)

Very few discuss how many lives may be lost if we regulate and slow down the progress of AI.

Rather than constantly focusing on x-risk, we should be just as often discussing the reasons to believe that life-saving and perhaps even civilization-saving breakthroughs may come about due to advances in AI technology.


the antibiotic in question: https://en.wikipedia.org/wiki/Abaucin


This is what I think AI can be great at: finding the hidden global maxima where humanity gets stuck in a local maximum.


That's step 1 of 10; you still need to test toxicity, side effects, absorption, etc.


This looks like a decent non-paywalled summary:

https://www.cidrap.umn.edu/antimicrobial-stewardship/artific...


I thought the same with the NIH article, but the date on that one is May and the Nature paper is October, unless I am confused.


Cesar & Nunez (October) is basically just a summary of Liu et al (May).


It's from May. The Wikipedia article on this antibiotic has sources from then.

This link is a better, non-paywalled version from then:

https://www.bbc.co.uk/news/health-65709834


Argh, behind a paywall! If you think it's a breakthrough, its announcements and papers should be in the public domain! That way, the rewards and punishments emerge naturally.


I have access provided by my university, but I didn't dare upload it elsewhere because my university said the downloaded file contains information that could be linked back to me. If only we had a system for this... oh wait, that thing is called Sci-Hub.


Do they diff multiple independently requested copies to anonymize the source? I still see request information in the PDFs from sci-hub.


I can't find it (yet) on AA either.

Too bad.

https://annas-archive.org/search?q=10.1038%2Fs41589-023-0144...


I thought only non-paywalled (or easily bypassed) articles were allowed on HN.

I tried archive.is with no luck.

How can we have any meaningful discussion on the article without reading it?


Yay for AI, I guess.


Good point: what if we asked AI to find a nonpaywalled version/PDF?


Wonderful. Now don’t make the same mistake we made with all the other ones: sharing the recipes with every health department in the world, which allows them to be given to people for every cough and sniffle (there’s got to be some way to make certified doctor prescriptions a prerequisite for distribution worldwide...). Antibiotic resistance is getting scarier and scarier just because of how carelessly the antibiotics we have have been distributed.


We could easily reduce the risks, but we're collectively too lazy to do it: stop feeding antibiotics to animals, stop giving out strong antibiotics without supervision, and sanction countries that do not restrict access to antibiotics.


Resistance to antibiotics can't come free for bacteria. If we stop using some, shouldn't the resistance vanish as quickly as it appeared?


It's not as if there is some mechanism by which they actively "fight" the antibiotics. Once their protein structure has evolved (naturally selected under external pressure), the resistance comes for free. Remove the antibiotic and they will start drifting away from resistance, yes, but the baseline population will still carry it, and as soon as you reintroduce the same antibiotic the resistant bacteria become the dominant strain again. The only way to prevent this is to make sure they never become dominant under selection pressure in the first place, although for most cases it's already too late.


Not quite. Often an evolutionary change may come at a cost to fitness under circumstances that differ from the current environment.

As such, once the antibiotics are stopped, non-resistant bacteria may be better able to compete for resources and multiply.

There's a reason super resistant strains are most commonly found in hospital and health care settings: https://www.cdc.gov/drugresistance/biggest-threats.html

If the ongoing cost of the mutation was truly free, one would expect it to spread out beyond these places more prominently.

Edit: I may have under-emphasized that there is no guarantee, and we shouldn't rely on this property.


Amazing. This is the future.


Checkmate, biotics.


Not so fast: those bacteria have a way of mutating to beat antibiotics. This war has led to strains that are harder to treat, and there's no sign of it stopping.


* Assuming you were 100% correct, that means it's a rate problem. Assume it takes time X for bacteria to evolve resistance and time Y for us to develop each new antibiotic. As of now, X << Y, but Y is falling. Once Y < X, we are developing new antibiotics faster than resistance evolves, and it's basically a solved problem.

* Fitness is not a 1-D line. Every time a bacterium evolves resistance, it pays a cost for doing so (in lab experiments it's pretty easy to generate resistance by adding antibiotics, and the resistance factors disappear very quickly once the antibiotic is removed, suggesting a fitness cost). So even while they're getting harder to treat, they're also getting less robust in general.
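The rate argument can be sketched as a toy model. X and Y here are the hypothetical resistance and development times from the comment above, not anything measured:

```python
def arsenal_over_time(x_resist, y_develop, horizon):
    """Toy model: a new antibiotic arrives every y_develop years (one at
    t = 0), and each becomes useless x_resist years after introduction.
    Returns the count of still-effective antibiotics at each year."""
    counts = []
    for t in range(horizon + 1):
        developed = t // y_develop + 1
        expired = (t - x_resist) // y_develop + 1 if t >= x_resist else 0
        counts.append(developed - expired)
    return counts

# Y < X: development outpaces resistance; the arsenal builds up
# to roughly X/Y working drugs
growing = arsenal_over_time(x_resist=20, y_develop=5, horizon=40)
# Y > X: resistance outpaces development; never more than one working drug
stuck = arsenal_over_time(x_resist=5, y_develop=10, horizon=40)
```

The crossover at Y = X is the whole game: below it the steady-state arsenal is X/Y drugs, above it you are perpetually down to zero or one.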


> Fitness is not a 1-D line

I never thought about it this way, but this is really insightful. I was assuming it would be an endless war, but if the cost of maintaining resistance gets prohibitively high, we have a real advantage.


We really have no option but to hope we can keep finding new antibiotics until we find a final solution using genetic engineering or nano bots.


If we have enough of them we can just rotate them around. Stop using a particular antibiotic for a few decades and the resistance against it drops.


> We really have no option but to hope we can keep finding new antibiotics

Sure we do. We could stop giving out antibiotics like candy for every sprained ankle and cough to not cultivate resistance.


> we find a final solution using genetic engineering or nano bots

Why would such solutions be immune to the bacteria evolving escape?

Or does the 'final' bit means you accidentally kill us all by genetic engineering or nano bots?


Because if we have those tools we can always easily find new sites to attack. Molecular/nanoscale tools that can target specific cells would solve most of our illnesses, bacteria or no.


It may come as a surprise then that we already use molecular tools to fight bacteria. That's what antibiotics are - fairly small, relatively simple molecules.

The problem with bacteria is they are a moving target - it's not as if they aren't constantly under attack from all sides already.

Bacteriophage ( viruses that target bacteria ) are ubiquitous, there is bacteria on bacteria action ( in fact most current antibiotics are molecules one bacteria or fungi developed to kill another ), and organisms like us - are constantly trying to kill bacteria through multiple mechanisms.


I know, we're locked in an arms race with bacteria (and fungi).


The bacteria have anti-antibiotics. We need anti-anti-antibiotics.


Phages are one answer: basically bacteria hitmen. Last I knew, they're approved to protect your lunch meat, but not for the things you'd think they'd be front-runners for, like MRSA.


Certainly makes sense that eliminating biotics would be one of the first things the Synthetics would want to do to pave way for the Reaper invasion.


Now find a way to remove the hydrocarbons created by useless protein-folding computation done in vain.




