Cool Startup: GenoSpace

Daniel Meyer

Healthcare is drowning in a deluge of data. Decision-makers must somehow make sense of a heterogeneous array of information — demographic, clinical, patient-generated, treatment and outcomes data. The latest waves of information also include data from mHealth and genomic sources. It’s not hard to imagine that many in the healthcare industry suffer from information overload and struggle with a bit of ‘analysis paralysis.’ How can organizations make sense of all this big data and actually harness it to improve healthcare and outcomes?

One company helping answer this question is GenoSpace, an ambitious genomic and health data management startup based in Cambridge, Mass. Its current chairman, John Quackenbush, and CEO, Mick Correll, worked together in the Center for Cancer Computational Biology at Dana-Farber before co-founding the company in 2012. Contracts with notable customers like the Multiple Myeloma Research Foundation (MMRF) and PathGroup funded GenoSpace before its first round of outside funding in 2014.

It was around that time that GenoSpace hired Daniel Meyer, an entrepreneur with a background in venture capital, as Chief Operating Officer. According to him, it was GenoSpace’s ability to attract high-quality customers early on (a rarity for most early-stage companies in life sciences) that convinced him to join. Recently, we sat down with Meyer to learn more about how GenoSpace helps healthcare organizations make sense of all the big data.

Tell us about what you do at GenoSpace.

When you’re dealing with genomics and other biomedical data, there are a variety of users with different reasons for using it. So you could have an institution whose users are engaged in research, clinical development, lab medicine and clinical care. They have different software application needs that cut across the same or similar data sets. One of the things we try to do is develop the tools, the interfaces and the experience that will enable all of those different people to get the most from the data.

Could you go over your major offerings?  

We have three primary categories of offerings. The first is analysis and interpretation of a single assay result together with phenotypic and other clinical data. The second is interactive analysis of data from many individuals as a group, such as from a large observational study; where we really excel is when a customer has integrated demographic, clinical, genomic, treatment, outcomes and other data. The third is enabling patients to directly report and interact with their data. We’ve created software applications and web-based sites for patients to upload their data, track their results and better understand their condition. Although we have a core competency in genomic data, we don’t deal only with genomic data. Research and clinical care rarely rely on a single data type.

Now that President Obama has announced the Precision Medicine Initiative to expand genomic study, do you expect your work to expand as well?

We think it’s a fascinating announcement and those are the types of initiatives we support. One of the interesting things is that we have customers right now solving many of the problems that the initiative will face. For example, we have been working with Inova, a healthcare system based in Northern Virginia that serves more than two million people per year in the metro DC area. They have been collecting a rich set of whole-genome sequencing data together with structured clinical data on thousands of people. Their data management and analysis needs map directly to those of precision medicine initiatives like the one announced by the White House.

I’d imagine that you’d have greater demand on the private side.  

We have spent most of our time there. Our first clinical lab customer, PathGroup, is delivering industry-leading molecular profiling across a wide geographic footprint that includes both big cities and smaller cities and towns. Our ability to help them bring academic-quality medicine to community oncology has a huge impact. Roughly 85% of oncology patients are treated in a community setting. If you’re only deploying in major cities with academic medical centers, you’re missing most of them.

What are your next plans? Any new projects or goals?

We are looking to expand to different customer use cases. That can be in terms of therapeutic indication, such as rare diseases or neurologic or cardiac disease. But it can also mean integrating different kinds of data. We have a lot of experience working with demographic, diagnostic, treatment and outcomes data together with genomic results, and there are more opportunities to expand.

Are you also working on using machine learning to do predictive analytics?

We think about that a little differently. There’s supervised analysis, where the user asks questions and gets answers about the data, and there’s unsupervised analysis. Many of our customers are not looking for a black box. Our goal is not to replace molecular pathologists, but to work hand in hand with them to make their work better, more operationally efficient and more sustainable, particularly if it’s a commercial entity.

That last piece is underappreciated by a lot of folks. We do a lot of work in genomics and in precision medicine and there’s a lot of science and advanced technology. All that work is lost in most settings if you don’t deliver it properly. You have to understand the science and the innovation, but also how to get it in the hands of people who can impact patients. That’s a big part of what we do.

Any final thoughts?

One of the fun things about being here is we have folks with a lot of different capabilities—in software engineering, interactive design, data science, etc. For a lot of the interesting problems that people are trying to solve in medicine, it takes that interdisciplinary team approach as opposed to a whole bunch of people with the same type of experience.

To learn more about GenoSpace, visit their website at genospace.com or follow them on Twitter at @GENOSPACE.

This article was originally published on MedTech Boston.

Cool Startup: twoXAR

Andrew Radin x 2
Andrew M. Radin (left) with friend and twoXAR business partner Andrew A. Radin.


It’s not every day that you meet someone with the same name as you. And it’s even less likely this person will have similar interests and be someone with whom you might want to start a business.

But that’s exactly the story of the two Andrew Radins, founders of twoXAR.

Chief Business Officer Andrew M. Radin met his co-founder and Chief Executive Officer Andrew A. Radin battling over a domain name–you guessed it–andrewradin.com. About six or seven years ago, the former asked the latter, who owned the domain, if he could buy it and was told (in not so many words) to get lost.

Somehow, this exchange sparked a friendship, first on Facebook, then through commonalities such as travel to China, careers in science and tech, and independent entrepreneurial pursuits. A little over a year ago, as Andrew A. Radin completed work on a computational method to enhance drug and treatment discovery, he naturally thought of joining forces with his namesake and friend, Andrew M. Radin.

For Andrew M., who was just completing his MBA at MIT Sloan, the timing was right and the discovery compelling enough to turn down other appealing job offers and join Andrew A. in forming the aptly named twoXAR (pronounced TWO-czar). Based in Silicon Valley, the company predicts the efficacy of drug candidates by applying statistical algorithms to various data sets. We caught up with Andrew M. Radin recently to hear about their exciting new venture and their progress.

Tell us about what you do at twoXAR.

We take large, diverse, independent data sets, including biological, chemical and clinical data. Some subsets include gene expression assays, RNA-seq, protein binding profiles, chemical structures and drug libraries (tens of thousands of drugs), whatever we can get our hands on. We use statistical algorithms to predict the efficacy of drug candidates in a human across therapeutic areas. The raw output from our technology (the DUMA Drug Discovery Platform) is the probability that a given drug will treat a given disease. It all takes only a matter of minutes.
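That general shape, many noisy per-drug evidence scores combined into a single probability-like ranking, can be sketched in a few lines of Python. This is a toy illustration under our own assumptions (invented stream names, random data, a simple z-score average and logistic squash), not twoXAR’s actual DUMA implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_drugs = 10_000

# Toy stand-ins for independent evidence streams, e.g. how strongly a drug
# reverses a disease expression signature, overlaps disease-linked protein
# targets, or resembles known active chemical structures. (Hypothetical names.)
evidence = {
    "expression_reversal": rng.normal(size=n_drugs),
    "protein_binding": rng.normal(size=n_drugs),
    "chemical_structure": rng.normal(size=n_drugs),
}

# Z-score each stream so no single noisy source dominates, then average them.
combined = np.mean(
    [(s - s.mean()) / s.std() for s in evidence.values()], axis=0
)

# Squash the combined score into a 0-to-1, probability-like value per drug.
prob = 1.0 / (1.0 + np.exp(-combined))

# Rank the library and keep the most promising candidates.
top10 = np.argsort(prob)[::-1][:10]
print(top10, prob[top10])
```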

Where do you get your data sets?  Are they from clinical trials?

Some of our data comes from clinical trials, but we pride ourselves on using data sets that are largely independent from each other and come from a variety of sources along the biomedical R&D chain–as early as basic research and as late as clinical data from drugs that have been on the market for 30 years. All of these data sets are extremely noisy, but we specialize in identifying signal in this noise and then seeking overlapping evidence from radically different data sets to strengthen that signal.

These data come from proprietary and public sources. The more data we have, the better results DUMA delivers.
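The overlapping-evidence idea has a simple statistical core: a weak but consistent signal accumulates across independent data sets while uncorrelated noise tends to cancel. The toy sketch below (our illustration, not twoXAR’s method) shows this with Stouffer’s classic z-score combination:

```python
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.5   # a weak real drug-disease association, buried in noise
k = 6               # number of independent, noisy data sets

# One noisy measurement versus k independent noisy measurements.
z_single = true_effect + rng.normal()
z_each = true_effect + rng.normal(size=k)

# Stouffer's method: combined z = sum(z_i) / sqrt(k). The signal grows
# roughly as sqrt(k) * effect while the noise stays at unit scale.
z_combined = z_each.sum() / np.sqrt(k)

print(f"single data set:      z = {z_single:+.2f}")
print(f"{k} data sets combined: z = {z_combined:+.2f}")
```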

Could you give an example of how you could use this tool in pharmacologic research?

Our technology allows us to characterize the attributes of a disease beyond just gene expression. We can examine how a drug might be related to a myriad of evidence streams, allowing a researcher to build more confidence in a prediction of drug efficacy.

Let’s take Parkinson’s disease as an example. Existing treatments focus on managing the symptoms. The real societal win would be to stop, and possibly reverse, the progression of the disease altogether. This is what we are focusing on.

For Parkinson’s disease, we’ve acquired gene expression data on over 200 patients, sourced from the NIH, examined over 25,000 drug candidates, and found a handful of promising candidates across a variety of mechanisms of action.

So you can “test out” a drug before actually running a clinical trial?

That’s the idea. Using proprietary data mining techniques coupled with machine learning, we’ve developed DUMA, an in silico drug discovery platform that takes a drug library and predicts the probability that each of those drugs will treat the disease in question in a human body. We can plug in different drug libraries (small molecules, biologics, etc.) and different disease data sets as desired.

At this stage we are taking our in silico predictions to in vivo preclinical studies before moving to the clinic. Over time we aim to demonstrate that computational models can be more predictive of efficacy in humans than animal models are.

It seems, intuitively, that this would be really valuable, but I would imagine that your clients would want to see proof that this model works.  How do you prove that you have something worthwhile here?

Validation is critical, and we are working on a number of programs to demonstrate the effectiveness of our platform. First, we internally validate the model by putting known treatments for a disease into DUMA while blinding the system to their current use. If the known treatments are concentrated at the top of our list, we know it’s working. Second, we take the drug candidates near the top of the list that are not yet known treatments and conduct preclinical studies with clear endpoints to demonstrate efficacy in the physical world. We are currently conducting studies with labs that have experience with these animal models, with the goal of publishing our methods in peer-reviewed journals.
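That first, blinded check is essentially a rank-enrichment test. Here is a minimal sketch of how one might score it (our own assumption, with an invented helper name and toy numbers, not twoXAR’s code): hide the known-treatment labels from the ranker, then summarize how concentrated those drugs are at the top of the list with a rank-based ROC AUC.

```python
import numpy as np

def rank_auc(scores, is_known_treatment):
    """AUC = probability that a known treatment outranks a random other drug,
    computed from the Mann-Whitney U statistic on the score ranks."""
    order = np.argsort(scores)                    # ascending scores
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    pos = is_known_treatment
    n_pos, n_neg = pos.sum(), (~pos).sum()
    u = ranks[pos].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

rng = np.random.default_rng(2)
n_drugs = 25_000
known = np.zeros(n_drugs, dtype=bool)
known[:40] = True                     # 40 blinded known treatments (toy)

# Toy scores: known treatments get a modest boost over background noise.
scores = rng.normal(size=n_drugs) + 1.5 * known

print(f"AUC = {rank_auc(scores, known):.3f}")  # near 1.0 = concentrated on top
```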

You have a really advanced tool to come up with potentially great treatments, but what’s to say that’s better than what’s going on out there now?  How do you prove it’s better or faster? 

If you look at drug industry trends, the top drug companies have moved out of R&D and become marketing houses, shifting the R&D risk to startups and small and medium drug companies. Drug prospecting is recognized to be extremely risky; established methods produced exciting results in the past but have, over time, become less effective at striking the motherlode. Meanwhile, the drug industry suffers from the same big data woes as many industries: it can produce and collect petabytes of data, but that goldmine is near-worthless without the tools to interpret it and extract the gold. Advances in data science enable twoXAR to analyze, interpret, and produce actionable results with this data orders of magnitude faster than the industry has in the past.

It seems that this could be scaled up to have many different applications.  How do you see twoXAR transforming the industry? 

As for scale, not only can computational platforms look at more data faster than humans and without bias, but much smaller teams can accomplish more. At twoXAR, we have a handful of people in a garage, and we can essentially do the work of many wet lab teams spanning multiple disease states. Investors, researchers and patient advocacy groups are very interested in what we are doing because they see the disruptive potential of our technology: it will augment the discovery of new life-saving treatments for our families and completely recast the drug R&D space. One of the things I learned at MIT from professors Brynjolfsson and Little is that the exponential growth of technological progress often takes us by surprise. I predict that tectonic shifts in the drug industry will come much more quickly than many folks expect.

To learn more about twoXAR, visit their website and blog.

This article was originally published on MedTech Boston.