Although gene mutations are the primary drivers of carcinogenesis, an array of complex and tumor-specific molecular interaction networks determines cancer cell behavior. To learn more about this line of inquiry, The ASCO Post recently spoke with Andrea Califano, Dr., Professor of Chemical Biology and Systems Biology in the Department of Biomedical Informatics at Columbia University Medical Center, New York. Dr. Califano’s lab combines computational and experimental methodologies to reconstruct the regulatory and signaling logic of human cells in a genome-wide fashion.
Please tell the readers a bit about your background and the road to your current work in systems biology and cancer at Columbia.
I began my career as a theoretical physicist in Florence, Italy, which is where I did my doctoral studies, followed by postdoctoral work at the Massachusetts Institute of Technology in computational physics. At that time, physics had pretty much dried up as a discipline, so I went to IBM to do research in artificial intelligence (AI). There, I started the first computational biology group in 1991, which became the IBM Computational Biology Center.
My lab examines master regulators and their networks, which allows us to understand the key genetic events and vulnerabilities of the cancer cell that can be used to develop therapies.— Andrea Califano, Dr.
I left IBM to start my own experimental lab and founded a company called First Genetic Trust, but when I was offered a position at Columbia, I decided that it would better fit my long-term goals. I set up a lab and learned experimental biology, and we now have about 30 people here, split pretty evenly between computational and experimental biologists, as well as personnel working in drug discovery or running clinical studies.
Please describe your work in master regulator analysis and how it gives us a deeper understanding of the oncogenic process.
What we call cancer systems biology is essentially model-based biology. Instead of using brute force to go after one gene at a time, we build and use genome-wide models to better understand how the cell works, which is the foundation of our work. These models suggest that rather than concentrating on the etiology of a cancer, we should instead focus on the mechanisms that homeostatically maintain the cancer cell state.
There is a huge paradox in cancer. If you look at the transcriptional state of the cancer cell and you examine a particular tumor subtype—say, triple-negative breast cancer—you’ll see that the transcriptional state of cells from different patients is really no more different than what you would observe between their normal cells, for instance, their fibroblasts or B cells. In sharp contrast, when you look at the genetics of triple-negative breast cancer, the mutations are all over the place, yet these tumor cells collapse into a transcriptional state that is extremely similar across patients. In normal cell physiology, this is accomplished by a process called canalization, which is the foundation of the homeostasis that allows the cell to maintain its state, largely independent of genetic variants and other exogenous factors.
To buffer against this variability and prevent things from going awry, the cell needs an awful lot of machinery. We have shown that, in cancer, this machinery is implemented by a small set of proteins, called “master regulators,” that are responsible for canalizing the effect of most mutational events in a patient. We have shown that master regulators work in concert, within tightly autoregulated modules, to ensure the stability of the cancer cell state.
Think of this cellular machinery as an air conditioner. When the temperature of a room goes a little bit above or below the set point, loops in the system signal for adjustments to maintain a stable temperature. The master regulators act like environmental sensors that prevent changes in the environment, such as the delivery of a drug, from producing catastrophic effects in the cancer cell. They prevent cancer cells from becoming unstable and easy to kill but, in doing so, they also provide a new class of tumor vulnerabilities.
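As a purely illustrative analogy (not part of Dr. Califano’s actual models), the thermostat-style feedback he describes can be sketched in a few lines of Python: a corrective loop keeps a state variable near its set point despite random perturbations, just as master regulator modules buffer the cancer cell state against mutations and drugs. All names and parameters here are hypothetical.

```python
# Illustrative sketch only: a feedback loop that holds a "state" variable
# near a set point despite random perturbations, analogous to how master
# regulator modules canalize perturbations to the cancer cell state.
import random

def run_feedback(set_point=21.0, gain=0.5, steps=100, seed=0):
    rng = random.Random(seed)
    temp = set_point
    for _ in range(steps):
        temp += rng.uniform(-1.0, 1.0)     # exogenous perturbation (e.g., a drug)
        temp += gain * (set_point - temp)  # corrective feedback (master regulators)
    return temp

# Despite 100 random shocks, the state stays close to the set point.
print(abs(run_feedback() - 21.0) < 3.0)
```

Disabling the feedback (setting `gain=0`) lets the perturbations accumulate as a random walk, which is the intuition behind collapsing the master regulator module to destabilize the tumor state.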
My lab examines master regulators and their networks, which allows us to understand the key genetic events and vulnerabilities of the cancer cell that can be used to develop therapies. In the process, we search for drugs that will collapse the entire master regulator module and destroy the cancer cell.
Please tell us a bit about your version of N-of-1 trials.
The term “N-of-1” is a bit of an oxymoron because N means how many patients you need to achieve statistical significance, and it’s impossible to achieve meaningful statistics with only one patient. However, in the jargon, N-of-1 describes an approach where we attempt to identify therapies that are tailored to a specific patient in a somewhat quantitative and data-driven fashion. So instead of doing one N-of-1 trial, you’re doing many of them, thus producing statistically sound results. In our main N-of-1 study, we profile the patient’s RNA rather than DNA to identify treatments for patients across 14 distinct malignancies who have failed at least three lines of therapy; we have already evaluated 39 drugs with extremely encouraging results, and we have six such trials already open or opening up.
When you have mutations in an individual oncogene, it turns out that there’s an entire piece of machinery that is required to prevent cancer cells from dying. The cancer can leverage this machinery to also rapidly adapt to drugs targeting individual proteins. In addition, among the billions of cells in a typical tumor mass, there may be mutants that are not affected by the drug and will take over.
“We finally seem to have the tools to understand the most ‘atomic’ level of this disease, how it starts, develops, and responds to treatment cell by cell.”— Andrea Califano, Dr.
However, we often lack a mechanistic rationale for why some patients in a study may respond and others may not, or may initially respond only to later relapse with a drug-resistant tumor. If the N-of-1 approach is successful, clinical trials may be designed to track the evolution of tumor vulnerabilities after each treatment and to identify the optimal treatment for cancer patients, drawing from a vast arsenal of available FDA-approved and late-stage experimental drugs that we can use individually or in combination.
To reverse engineer the patient’s tumor—much as you would an unidentified piece of machinery that no longer functions properly—we analyze its RNA profile, using systems biology approaches such as OncoTreat and OncoTarget, which are New York State CLIA-certified. These leverage computational models that require supercomputers to understand how tumor cells are regulated, to identify the critical master regulator proteins whose activity is necessary to maintain the tumor state, and to prioritize drugs that can invert their activity.
Artificial Intelligence and Cancer Research
Artificial intelligence is gaining traction in certain areas of cancer research. What role does it play in your research?
AI leverages a branch of computer science called “machine learning” to develop algorithms that can predict specific events from large training data sets. With that, the AI can make inferences about things it hasn’t seen before. The problem with that approach in biology is that it only works really well when the interconnectivity among the various features that make up your space is relatively modest.
For instance, if you’re learning about people’s buying patterns in supermarkets, it’s a model where the connectivity is relatively low because there’s very little interaction between a box of diapers and a bottle of olive oil. You can use a model that treats events as statistically independent, and therefore, when you see something that is not statistically independent, you think, “Aha! There’s a correlation.”
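The supermarket example above can be made concrete with a small, purely hypothetical simulation (the item names and probabilities are invented for illustration): under an independence baseline, a genuinely independent pair of items scores near 1.0 on the standard “lift” measure, while a planted association stands out well above it.

```python
# Hypothetical market-basket sketch: items bought independently show a
# lift near 1.0; a planted association (bread -> butter) stands out.
import random

def simulate(n=10_000, seed=1):
    """Each basket is (bread, butter, olive_oil); butter depends on bread."""
    rng = random.Random(seed)
    baskets = []
    for _ in range(n):
        bread = rng.random() < 0.3
        butter = rng.random() < (0.8 if bread else 0.1)  # correlated with bread
        olive_oil = rng.random() < 0.2                   # independent of both
        baskets.append((bread, butter, olive_oil))
    return baskets

def lift(baskets, i, j):
    # lift = P(i and j) / (P(i) * P(j)); ~1.0 indicates independence
    n = len(baskets)
    p_i = sum(b[i] for b in baskets) / n
    p_j = sum(b[j] for b in baskets) / n
    p_ij = sum(b[i] and b[j] for b in baskets) / n
    return p_ij / (p_i * p_j)

baskets = simulate()
print(lift(baskets, 0, 1))  # bread vs butter: well above 1 (correlated)
print(lift(baskets, 0, 2))  # bread vs olive oil: close to 1 (independent)
```

In biology, by contrast, nearly every “item” interacts with many others, so this independence baseline breaks down, which is the point Dr. Califano makes next.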
Much of AI uses big data, but in biology, the term big data is somewhat hyped to mean anything that doesn’t fit on a spreadsheet. Although biology data aren’t very big at all compared with those collected by Google or the NSA, for instance, they are not just complicated; they are highly complex. The diagram of the space shuttle is complicated because it has a lot of parts, but we understand in a very predictable way how those parts work together; a complex system is one whose behavior we cannot predict even though we have all the parts. Given biology’s complexity, AI doesn’t work that well; rather, it is much better suited to the study of clinical data. To make AI productive in molecular biology and cancer research, we need extremely powerful models that simplify the data, so that not all 20,000 or so genes and their products look like independent variables.
Please share a closing thought about cancer research moving forward.
There have been a number of pivotal points during the past 30 years when we’ve said, “Aha, a breakthrough.” Now, with the ability to study cancer at the single-cell level, we finally seem to have the tools to understand the most “atomic” level of this disease—how it starts, develops, and responds to treatment, cell by cell by cell. With that, we can start to make drugs that target the dependencies and escape mechanisms of individual cancer cells. And we can start tackling what is probably the most formidable challenge in cancer: discovering how the cancer reservoir gets replenished and how the cancer cell creates a cloak of invisibility so the immune system won’t attack it. ■
DISCLOSURE: Dr. Califano is a founder, shareholder, director, and consultant for DarwinHealth Inc. In the past 12 months, he has also consulted for AbbVie, the Encheng Group, and Shanghai Cell Therapy.