Cool Startup: twoXAR

Andrew M. Radin (left) with friend and twoXAR business partner Andrew A. Radin.


It’s not every day that you meet someone with the same name as you. And it’s even less likely this person will have similar interests and be someone with whom you might want to start a business.

But that’s exactly the story of the two Andrew Radins, founders of twoXAR.

Chief Business Officer Andrew M. Radin met his co-founder and Chief Executive Officer Andrew A. Radin battling over a domain name–you guessed it–andrewradin.com. About six or seven years ago, the former asked the latter, who owned the domain, if he could buy it from him and was told (in not so many words) to get lost.

Somehow, this exchange sparked a friendship, first on Facebook, then through commonalities such as travel to China, work in science and tech, and their independent entrepreneurial pursuits. A little over a year ago, as Andrew A. Radin completed work on a computational method to enhance drug and treatment discovery, he naturally thought of joining forces with his namesake and friend, Andrew M. Radin.

For Andrew M., who was just completing his MBA at MIT Sloan, the timing was right and the discovery compelling enough to turn down other appealing job offers and join Andrew A. in forming the aptly named twoXAR (pronounced TWO-czar). Based in Silicon Valley, the company predicts the efficacy of drug candidates by applying statistical algorithms to various data sets. We caught up with Andrew M. Radin recently to hear about their exciting new venture and their progress.

Tell us about what you do at twoXAR.

We take large, diverse, independent data sets–biological, chemical, clinical, whatever we can get our hands on; some subsets include gene expression assays, RNA-seq, protein binding profiles, chemical structures, and drug libraries (tens of thousands of drugs)–and use statistical algorithms to predict the efficacy of drug candidates in a human across therapeutic areas. The raw output from our technology (the DUMA Drug Discovery Platform) is the probability that a given drug will treat a given disease. It all takes only a matter of minutes.
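
DUMA’s internals are proprietary, so purely to make the idea concrete, here is a minimal sketch (my own illustration, not twoXAR’s method) of how several independent evidence streams for a single drug-disease pair might be folded into one probability-like score:

```python
# Illustrative sketch only: DUMA is proprietary, so this is a generic way to
# combine independent evidence streams into one drug-disease score.
import numpy as np

def combine_evidence(stream_scores, weights=None):
    """Combine per-stream evidence scores (each in (0, 1)) for one drug-disease
    pair into a single probability-like score via a weighted sum of log-odds."""
    scores = np.clip(np.asarray(stream_scores, dtype=float), 1e-6, 1 - 1e-6)
    weights = np.ones_like(scores) if weights is None else np.asarray(weights)
    log_odds = np.sum(weights * np.log(scores / (1 - scores)))
    return 1.0 / (1.0 + np.exp(-log_odds))  # map back to a probability

# Hypothetical evidence for one candidate drug against one disease:
# gene-expression signature match, protein-binding overlap, structural similarity
print(round(combine_evidence([0.8, 0.6, 0.7]), 2))  # -> 0.93
```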

Where do you get your data sets?  Are they from clinical trials?

Some of our data comes from clinical trials, but we pride ourselves on using data sets that are largely independent of each other and come from a variety of sources along the biomedical R&D chain–as early as basic research and as late as clinical data from drugs that have been on the market for 30 years. All of these data sets are extremely noisy, but we specialize in identifying signal in this noise and then seeking overlapping evidence from radically different data sets to strengthen that signal.

These data come from proprietary and public sources. The more data we have, the better results DUMA delivers.

Could you give an example of how you could use this tool in pharmacologic research?

Our technology allows us to characterize the attributes of a disease beyond just gene expression. We can examine how a drug might relate to a myriad of evidence streams, allowing a researcher to build more confidence in a prediction of drug efficacy.

Let’s take Parkinson’s Disease as an example. Existing treatments focus on managing the symptoms. The real societal win would be to stop, and possibly reverse, the progression of the disease altogether. This is what we are focusing on.

For Parkinson’s disease, we acquired gene expression data on over 200 patients from the NIH, examined over 25,000 drug candidates, and found a handful of promising candidates across a variety of mechanisms of action.

So you can “test out” a drug before actually running a clinical trial?

That’s the idea. Using proprietary data mining techniques coupled with machine learning, we’ve developed DUMA, an in silico drug discovery platform that takes a drug library and predicts the probability that each of those drugs will treat the disease in question in a human body. We can plug in different drug libraries (small molecules, biologics, etc.) and different disease data sets as desired.

At this stage we are taking our in silico predictions to in vivo preclinical studies before moving to the clinic. Over time we aim to demonstrate that computational models can be more predictive of efficacy in humans than animal models are.

It seems, intuitively, that this would be really valuable, but I would imagine that your clients would want to see proof that this model works.  How do you prove that you have something worthwhile here?

Validation is critical, and we are working on a number of programs to demonstrate the effectiveness of our platform. First, we internally validate the model by putting known treatments for the disease into DUMA while blinding the system to their current use. If the known treatments are concentrated at the top of the resulting list, we know it’s working. Second, we take the drug candidates near the top of the list that are not yet known treatments and conduct preclinical studies with clear endpoints to demonstrate efficacy in the physical world. We are currently conducting studies with labs that have experience with these animal models, with the goal of publishing our methods in peer-reviewed journals.
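
As a purely illustrative example of that first check, one simple way to quantify “known treatments concentrated at the top” is recall@k on the ranked drug list. This is my own sketch of the kind of metric one might use, not necessarily the one twoXAR uses:

```python
# Minimal sketch of the blinded-validation idea: after ranking a drug library by
# predicted efficacy, check what fraction of known treatments land in the top k.

def recall_at_k(ranked_drugs, known_treatments, k):
    """Fraction of known treatments that appear in the top-k of the ranked list."""
    top_k = set(ranked_drugs[:k])
    hits = sum(1 for drug in known_treatments if drug in top_k)
    return hits / len(known_treatments)

# Hypothetical example: five candidates ranked by predicted efficacy,
# two of which are (blinded) known treatments for the disease.
ranked = ["drug_A", "drug_B", "drug_C", "drug_D", "drug_E"]
known = {"drug_A", "drug_C"}
print(recall_at_k(ranked, known, k=2))  # -> 0.5 (one of the two known drugs in the top 2)
```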

You have a really advanced tool to come up with potentially great treatments, but what’s to say that’s better than what’s going on out there now?  How do you prove it’s better or faster? 

If you look at drug industry trends, the top drug companies have moved out of R&D and become marketing houses–shifting the R&D risk to startups and small and medium drug companies. Drug prospecting is recognized to be extremely risky, and established methods that produced exciting results in the past have, over time, become less effective at striking the motherlode. Meanwhile, the drug industry suffers from the same big data woes as many industries–they can produce and collect petabytes and petabytes of data, but that goldmine is near-worthless if you don’t have the tools to interpret it and extract the gold. Advances in data science enable twoXAR to analyze, interpret, and produce actionable results with this data orders of magnitude faster than the industry has in the past.

It seems that this could be scaled up to have many different applications.  How do you see twoXAR transforming the industry? 

In terms of scale, not only can computational platforms look at more data, faster and without bias, than humans can, but much smaller teams can accomplish more. At twoXAR, we have a handful of people in a garage, and we can essentially do the work of many wet lab teams spanning multiple disease states. Investors, researchers, and patient advocacy groups are very interested in what we are doing because they see the disruptive potential of our technology and how it will augment the discovery of new life-saving treatments for our families and completely recast the drug R&D space. One of the things I learned at MIT from Professors Brynjolfsson and Little is that the exponential growth of technological progress often takes us by surprise. I predict that tectonic shifts in the drug industry will come much more quickly than many folks expect.

To learn more about twoXAR, visit their website and blog.

This article was originally published on MedTech Boston.

Precision Medicine: Pros & Cons

23 chromosomes (image from Scientific American)

This past week, President Obama announced a proposed $215 million genetic research plan called the Precision Medicine Initiative. Under the plan, the NIH would receive $130 million toward a project to map the DNA of 1 million people, the National Cancer Institute would receive $70 million to research the genetic causes of cancer, the FDA would receive $10 million to evaluate new diagnostic drugs and devices, and $5 million would be spent on tech infrastructure to analyze and safely store the data.

Not surprisingly, this announcement sparked some online controversy. If internet pundits are to be believed, this plan will prevent you from ever finding a mate, landing a job, or getting health insurance; make us all part of a giant genetic experiment to tailor human beings; put us into crippling debt; and line the pockets of Big Pharma. I’m not even sure I covered it all… The complaints ranged from reasonable to ridiculous. The most amusing are the conspiracy theorists who are certain that Obama must be plotting a genetic apocalypse.

But, in all seriousness, I have to admit I have concerns as well, despite being mostly optimistic about this news.

Here are some of the exciting positives offered by the precision medicine plan:

  • New diagnoses:  We may finally be able to identify genetic causes of diseases that were previously unknown.
  • Prevention vs. disease management:  Knowing genetic risks ahead of time can help us to focus more on preventing disease rather than reacting after-the-fact, once the disease occurs.
  • Early diagnosis:  We may be able to detect diseases earlier and at a more treatable stage.
  • Protective genes:  Some people have certain genes that protect them against diseases or prevent them from “expressing” their bad genes.  Studying these differences may help us to learn how to protect ourselves against those diseases.
  • Drug development:  Therapies can be developed in a faster and more efficient way by targeting certain genetic problems, rather than using the traditional trial-and-error method.
  • Personalized treatments:  Treatments can be tailored to a patient’s unique genetic aberration, and we can avoid giving patients treatments that we know may cause adverse reactions or fail to work.
  • Population health:  We can study genetic patterns in populations of patients to find out causes of diseases, develop treatments, and find ways to prevent disease.
  • Healthcare costs:  There’s a potential to reduce healthcare costs if focus changes to prevention rather than treatment of disease and also if we can streamline drug development.

But, let’s also look at the potential downsides:

  • Data storage:  We already know that gene sequencing of an individual produces MASSIVE amounts of data.  Sequencing a million people is going to produce unimaginable amounts of data (see the rough estimate after this list).  How will we store all this big data and analyze it to make any sense of it?
  • Privacy/Security:  Is there anything more personal and vulnerable to cyber-attack than your genetic information?  I wonder if the $5 million allotted to this effort will really be enough.
  • Data relevance:  According to Obama, the data will be collected from 1 million volunteers.  That’s not a random cross-section of people in the US and may not represent the population well enough to support population-health recommendations.  I’d argue that only certain types of people would sign up and others won’t.  Would we miss certain disorders?  Would we see too much of another disorder in a population of volunteers for this project?
  • Culture:  How do we prevent people from abusing this information to screen potential partners, deny insurance coverage, or deny jobs?  How will this affect culture?  Will we be cultivating a different kind of racism, on a genetic basis?  Are we on the path to a real-life version of the movie Gattaca?
  • Ownership:  Who will claim ownership of this data?  Will it be the government?  I’d argue that this data should be owned by the individuals it comes from, but the experience of the genetic sequencing (now genetic ancestry) company 23andMe is worrisome.  For the time being, the FDA has blocked the company from giving individuals access to their own genetic information.  Will this change as part of the new initiative or not?
  • Drug/device industry:  Genetic research and the development of treatments have been very promising and productive in the private sector.  How will government involvement affect research?  Will our governmental agencies work cooperatively with private companies or compete with them?  Again, if the experience of 23andMe is any indication, this is a real concern.
  • Healthcare costs:  Yes, there’s potential to decrease costs, but there’s also potential to greatly increase them.  It’s no small feat to genetically map a population, analyze the information, store it safely and securely, and develop recommendations and treatments.
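
For a sense of scale on the data-storage point above, here is a rough back-of-the-envelope estimate. The per-genome figures are my own assumptions (typical 30x whole-genome sequencing sizes), not figures from the initiative:

```python
# Rough, illustrative estimate only; per-genome sizes are assumptions, not
# figures from the Precision Medicine Initiative.
people = 1_000_000
gb_per_genome_raw = 100       # assumed: ~30x whole-genome aligned reads (BAM)
gb_per_genome_variants = 0.1  # assumed: compressed variant calls (VCF)

pb_raw = people * gb_per_genome_raw / 1_000_000        # GB -> PB
pb_variants = people * gb_per_genome_variants / 1_000_000

print(f"raw reads: ~{pb_raw:.0f} PB, variant calls only: ~{pb_variants:.1f} PB")
# -> raw reads: ~100 PB, variant calls only: ~0.1 PB
```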

Part of me is excited about the potential, and I think it probably does take a huge governmental initiative to tackle and impact population health. But another part of me is concerned about government invading a space that is so personal and private, and I wonder whether it could slow down progress in developing life-saving therapies in the private sector.

What do you think?  Are you excited or nervous about President Obama’s Precision Medicine Initiative?