Department of Systems Biology bioengineer Harris Wang describes the goals of the Human Genome Project-Write (HGP-write), an international initiative to develop new technologies for synthesizing very large genomes from scratch.

In June 2016, a consortium of synthetic biologists, industry leaders, ethicists, and others published a proposal in Science calling for a coordinated effort to synthesize large genomes, including a complete human genome in cell lines. The organizers of the project, called GP-write (for work in model organisms and plants) or sometimes HGP-write (for work in human cell lines), envision it as a successor to the Human Genome Project (retroactively termed HGP-read), which 25 years ago promoted rapid advances in DNA sequencing technology. As reading the genome became more efficient and less expensive, it in turn enabled a revolution in how we study biology and attempt to improve human health. Now, by coordinating the development of new technologies for writing DNA on a whole-genome scale, GP-write aims to have a similarly transformative impact.

Among the paper’s authors were Virginia Cornish and Harris Wang, two members of the Columbia University Department of Systems Biology whose contributions to the field of engineering biology have in part made the idea of writing large-scale DNA sequences imaginable. We spoke with them to learn more about what GP-write hopes to accomplish, its potential benefits, and how the effort is evolving.

Electronic media offer valuable tools for learning, but what is the best way to integrate these technologies within the traditional university setting? Brent Stockwell, a faculty member in the Columbia University Department of Systems Biology, recently asked himself this question about blended learning, an educational approach he had begun incorporating into his undergraduate biochemistry class. As Columbia News reports, the results of this investigation have been published in the journal Cell:

DIGGIT identifies mutations upstream of master regulators.

A new algorithm called DIGGIT identifies mutations that lie upstream of crucial bottlenecks within regulatory networks. These bottlenecks, called master regulators, integrate the effects of such mutations and become essential functional drivers of diseases such as cancer.

Although genome-wide association studies have made it possible to identify mutations that are linked to diseases such as cancer, determining which mutations actually drive disease, and the mechanisms by which they do so, has been an ongoing challenge. In a paper just published in Cell, researchers in the lab of Andrea Califano describe a new computational approach that may help address this problem.

Comparing human and mouse prostate cancer networks

Computational synergy analysis depicting FOXM1 and CENPF regulons from the human (left) and mouse (right) interactomes showing shared and nonshared targets. Red corresponds to overexpressed targets and blue to underexpressed targets.

Two genes work together to drive the most lethal forms of prostate cancer, according to new research by investigators in the Columbia University Department of Systems Biology. These findings could lead to a diagnostic test for identifying those tumors likely to become aggressive and to the development of novel combination therapy for the disease.

The two genes—FOXM1 and CENPF—had been previously implicated in cancer, but none of the prior studies suggested that they might work synergistically to cause the most aggressive form of prostate cancer. The study was published today in the online issue of Cancer Cell.

A panel at the Helix Center, titled "Synthetic and Systems Biology: Reinventing the Code of Life," included Columbia University professors Saeed Tavazoie and Andrea Califano, as well as Michael Hecht (Professor of Chemistry, Princeton University), Mark Fishman (President, Novartis Institutes for BioMedical Research), Christopher Mason (Assistant Professor of Physiology and Biophysics, Institute for Computational Biology, Weill Cornell Medical College), and Michael Waldholz (Medical Science Writer and Media Consultant).

Advances in genomics and the development of new technologies over the past decade have given biologists the ability to engineer DNA to perform specific functions. This emerging science, called synthetic biology, holds great potential for a number of applications: researchers have already reprogrammed algae to produce biofuels, designed bacteria that can sense and consume toxic substances, and used living cells to manufacture compounds that can serve as drugs.

Synthetic biology has emerged in parallel with systems biology, and in many ways the two sciences are closely intertwined. As systems biology improves our mechanistic understanding of how biology functions at the molecular level, synthetic biology builds on this knowledge to push biology in new directions, from using cells to synthesize molecules all the way to synthesizing new forms of biological life.

In a public roundtable discussion at the Helix Center in New York City, Columbia University Department of Systems Biology professors Saeed Tavazoie and Andrea Califano joined a panel of experts in discussing the intersection of systems and synthetic biology, and the role that these two disciplines will play in the development of the biological and biomedical sciences in the coming years.

Models of Evolution

In Charles Darwin's seminal treatise On the Origin of Species there is only one image, which visualizes evolution as following a branching pattern in which species diverge into lineages over time like the limbs of a tree. With the increasing availability of genomic data, scientists have attempted to understand evolution at the molecular level by using a similar phylogenetic paradigm, but as Department of Systems Biology Assistant Professor Raul Rabadan, MD/PhD student Joseph Chan, and Stanford University mathematician Gunnar Carlsson point out in a new paper published in the Proceedings of the National Academy of Sciences, tree-based models have a number of shortcomings when applied in this way. By developing a new mathematical approach based on a method called persistent homology, the researchers produced several insights into viral evolution that could not be found using other means.
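To give a flavor of the idea: persistent homology tracks how topological features of a dataset appear and disappear as a distance scale grows. The paper's analysis of higher-dimensional features (which can signal reticulate events such as recombination) requires more machinery, but dimension-0 persistence, which recovers the clonal, tree-like clustering of sequences, can be sketched in a few lines. The toy sequences and the use of Hamming distance below are illustrative assumptions, not data or code from the paper.

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def zero_dim_persistence(seqs):
    """Dimension-0 persistent homology of the Hamming-distance filtration.

    Every sequence is a component 'born' at scale 0; as the scale grows,
    components merge at increasing pairwise distances (single linkage,
    tracked with a union-find). Each merge 'kills' one component, giving
    a bar (0, d); one infinite bar (the final component) always remains.
    Returns the sorted list of finite death scales.
    """
    n = len(seqs)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted((hamming(seqs[i], seqs[j]), i, j)
                   for i, j in combinations(range(n), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # two components merge at scale d
            parent[ri] = rj
            deaths.append(d)  # one bar dies: (0, d)
    return deaths

# Four invented 8-base sequences: three close relatives and one outlier.
seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "TTTTACGT"]
print(zero_dim_persistence(seqs))  # prints [1, 1, 3]
```

The two short bars reflect the tight cluster of related sequences, while the long bar marks the distant outlier; in the paper's framework, it is loops appearing in dimension 1 that flag non-tree-like (reticulate) evolution.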

High-throughput screening’s ability to perform thousands of experiments efficiently and under carefully controlled conditions has made it an important tool for basic and translational biological research. At Columbia University, the JP Sulzberger Columbia Genome Center and the Chemical Probe Synthesis Facility provide a flexible platform for researchers interested in applying high-throughput experimentation in their work. On December 17, 2012, the Genome Center hosted a symposium to spotlight its capabilities in high-throughput screening, to explain the important role that synthetic chemistry plays in high-throughput screening, and to describe some recent research projects at Columbia that have utilized these tools.