The AI-drug-discovery sector in 2026 has split into two camps. One camp wants to design new molecules from first principles. The other wants to make existing pharma R&D faster, cheaper and less likely to fail. The first camp gets the magazine covers. The second camp is closer to a working business.
Owkin sits firmly in the second camp. The Paris-and-New-York-based platform has spent close to a decade building what its leadership calls a “biological artificial superintelligence”, anchored on one of the largest networks of proprietary patient data in oncology. Sanofi committed around $180m to the partnership in 2021. K Pro, the company’s main agentic AI co-pilot for biopharma R&D teams, became available in the AWS Marketplace in November 2025.
Andrea Tassistro is Owkin’s chief transformation officer. Asked at the Health Tech Global Summit in Basel in early March 2026 what the radically optimistic version of his work would look like, he offered the kind of inversion most AI-pharma coverage is too cautious to publish: that today’s roughly 95% drug-discovery failure rate could, in the long run, become the opposite. He was equally direct about the binding constraint. The mathematical engineering inside an AI model, he argues, is increasingly commoditised; the proprietary patient data it needs to actually reason about biology is not. Most of the scientific talent is in Europe, most of the permissive data access is in China, and most of the capital is in the US. Each, on its own, is a partial answer.
He also takes the unsentimental view that ageing should be improved rather than solved, a less performative position than the longevity sector usually gets from someone in his seat.
How do you describe Owkin in simple terms?
Owkin is an agentic drug discovery platform — an infrastructure designed to build biological artificial superintelligence to support pharma, reduce drug discovery spending, reduce time to market, and increase the number of molecules that reach the market.
Within that, you have K Pro.
K Pro is our main product. It’s the main agent. K Pro is a tool that helps scientists make better decisions in preclinical trials, and now across all phases of drug discovery.
What’s the difference between your AI and an LLM?
Large language models need very good data to become powerful and to do reinforcement learning. On the web, you can scrape vast amounts of data, so the edge becomes the mathematical science behind the model.
In biology it’s different. You can’t just scrape the web to train an LLM properly. You need access to real patient data to train models that can reason accurately. Patient data isn’t available to everyone. Because we have one of the largest networks of patient data, our models tend to outperform other reasoning models in biology.
If this works at scale, what impact could what you’re doing have on the speed of drug discovery?
Ideally, we can compress time in the clinical phases. If you can target the right patient for the right molecule at the right time, with the right practices, you can reduce time significantly.
There is also a non-compressible element: laboratory work where you need to test. If we move towards a world where labs are automated, then we can compress timelines further.
Can you quantify any of that? How long does it take now, and how long might it take?
It’s very difficult to quantify, and I don’t want to give a number that I’ll be held accountable for. But ideally, once the industry standards are ready to accept this kind of intelligence, you could see something like a 10–15% reduction in time in clinical trials.
Reducing time also reduces investment because you spend money more efficiently in trials. But the other big point is that if you reduce time and spending, you can pursue more targets with the same resources. Instead of one drug reaching the market, maybe you have four, five, or ten.
A lot of molecules that are shelved today could potentially go back into clinical trials with higher success rates and reach the market.
If we take a radically optimistic view, what’s the best-case scenario outcome?
Best case: we reduce failure dramatically. If bringing a drug to market today carries something like a 95% failure rate, maybe in the future it becomes the opposite.
A large reasoning model could test the right molecular structures for success in specific patient groups. And over time we could move towards tailor-made drugs because people respond differently.
For simple issues like a headache or flu, taking the same drug is fine. But for oncology, immunology and more aggressive diseases, the future could be much more personalised.
Paint a picture of what that means for patients. If you flipped a very high failure rate to a very high success rate, what changes for a patient that isn’t possible now?
It would mean a patient’s health data exists in a form that can be analysed properly. Based on that analysis, drugs could be built or selected for them.
At least initially it might not mean one unique drug per person, but more refined segmentation. For example, instead of “breast cancer” being treated as one bucket, you might split it into ten patient types and have ten different drugs — each better matched to how those patient types respond.
Are you relying on other areas of medicine to progress in order for you to move forward? For example, would wider access to genomes help?
Yes. Today in Europe, access to patient data is extremely complicated and highly regulated — for good reasons. But that level of regulation makes it harder for companies like Owkin to achieve what they need.
There needs to be a change in how we access patient data if we want these breakthrough moments and more tailor-made solutions.
If flipping the failure rate like that is the ambition, what’s the biggest challenge to getting there?
The biggest challenge is harmonising data to the highest possible level — so that, like the internet, the main problem becomes the mathematical reasoning behind the models rather than the quality and accessibility of data.
We also may end up with different large reasoning models for different therapeutic areas. You’d have the best model for oncology, another for radiology, another for immunology, and so on.
Beyond what you’re doing today, where else could AI be applied in healthcare?
We’ve discussed this internally, and it’s not something for us, but if drugs become tailored to patient types, manufacturing becomes a major challenge.
You can’t use the same mass-production model if you’re producing more personalised therapies, but you still need to deliver them at a fair price and at scale.
So a major next application will be AI in drug manufacturing — potentially involving robotics — focused on how to build tailored drugs at scale for a growing population.
You’re working in drug discovery, which contributes to healthspan and longevity. Outside of your own work, what interests you most about longevity?
I’m interested in rejuvenation. I’m not an expert, but the idea is how we can slow down cell ageing, and maybe even reverse aspects of ageing — so you could have the energy level of a 20-year-old even if you’re 45.
Bryan Johnson and Blueprint are an example. In my view, what he’s doing — putting his body through extreme testing — is amazing. If it becomes a real path to rejuvenation, that’s extraordinary. At the very least it has raised awareness.
Do you think ageing is something that can be solved, or just managed better?
I don’t think it can truly be “solved”. There’s also an ethical question: do we even want to solve ageing?
There are already many people on Earth — how do you manage the number of humans on the planet if lifespans extend dramatically? And many people don’t actually want to live for 300 years.
So maybe ageing isn’t solved, but improved. You still grow old, but you live better — because we give the right treatments, help people stay in shape, exercise, have more energy, sleep better, and so on.
At this health tech conference, what do you think is underappreciated or deserves more attention?
I wouldn’t say it’s under the radar, but the biggest topic right now is access to patient data. I’m meeting lots of people working on it.
It’s the biggest challenge we have in healthcare: how to unlock data to build the best models and try to solve biology, which is close to impossible at a purely human level. There are too many parameters. It needs an artificial intelligence to do it.
Do you think the data-access barrier is a symptom of regulations built for a previous era, or is it more ideological?
It’s built for a previous era. It’s good that regulation exists, but you need to decide where to set the boundary.
For example, China has far less regulation and supplies a huge proportion of generic drugs in Europe. How much do we want to compete in that segment? And how do we do that without losing the ethical standards we want to keep when it comes to data access?
I thought you were building in Europe and that your AI development was in the US. How do you compare the two landscapes?
Our AI development is mainly in France. The company is US, but most of the team is based in Paris.
Honestly, I think the talent level in Europe is better than in the US — not only educationally, but in terms of depth across topics.
The US advantage is access to capital. That allows companies to test and try more than Europeans can. That’s how they keep an edge.
So one way Europe could compete is simply having more capital. And in any case, Europeans tend to be more efficient — it’s in our DNA to be more efficient and professional when building technology.