doi:10.1038/nindia.2011.150 Published online 24 October 2011
The classic approach to doing science is centered on observation and classification — observe a measurable quantity (e.g. flower colour or plant height), collect data from a large number of individuals, find patterns and classify organisms into categories. Analytical techniques were rarely used in understanding biology, primarily because of the nature of the questions it posed and the unavailability of microscopic and biochemical descriptions.
The science of taxonomy was built on the foundation of searching for common patterns and classifying organisms hierarchically. Mendel made the first bold attempt to look beyond a horizontal (population-based) description and move vertically from the higher level of phenotype down to the underlying genes, without the aid of a microscope! After the Mendelian era, biology became predominantly biochemical and microscopic. However, biological data was mostly qualitative, analysed manually, and did not require special mathematical techniques or computational infrastructure for interpretation.
As technological tools got more sophisticated, scientists observed a soup of parts in a tiny cellular bowl and asked: how are these parts created, used and recycled? What is the role of these parts in determining higher order behaviour?
A paradigm shift happened with genome sequencing and microarray technologies introduced in the early 1990s. Suddenly there was a surge in real-time biological data at various levels of molecular descriptions. Instead of focusing on one gene, people could now study hundreds of gene expression events together. The boundaries of a traditional system were expanded to include more details.
By the mid-90s, systems biology was taking shape from the seeds sown way back in 1944 by Norbert Wiener, who foresaw the need for a systems approach. Simultaneously, interesting technological advances were taking place in computer science. Microprocessors got faster and storage got cheaper. The time was ripe to collect and store large amounts of data in computers for analysis and interpretation. Systems biology, as a formal discipline, was born.
For many years, people were (and probably still are) confused about this new discipline. Proposed by some as a specialized field of science, it is viewed by many as an 'approach' rather than an independent discipline.
A grand challenge in systems biology is to build a molecular inventory of the organism and connect the molecules into meaningful correlations that can explain higher order behaviours.
Given that different biological processes demand different modeling strategies, it remains to be determined if a virtual molecular connectivity reactor can be developed to execute qualitative and quantitative molecular interactions seamlessly1.
The benefits of building virtual models, bio-molecular pathways and networks are many — in selecting drug targets, in identifying potential toxic effects of lead compounds, in lowering R&D costs, in speeding up drug discovery or in developing smarter therapeutic strategies.
The success of virtual cell models depends upon the completeness and accuracy of the underlying data. Identifying systems, building biologically accurate models with appropriate parameters, and performing sensitivity analysis together provide a robust ecosystem for carrying out drug development studies.
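Sensitivity analysis of the kind mentioned above asks how strongly a model's output shifts when a parameter is nudged. A minimal sketch, using an entirely hypothetical toy model (a protein synthesised at rate k_syn and degraded at rate k_deg, with steady state P* = k_syn / k_deg), not any model from the cited studies:

```python
# Local sensitivity analysis on a hypothetical kinetic model:
# dP/dt = k_syn - k_deg * P, with steady state P* = k_syn / k_deg.

def steady_state(k_syn, k_deg):
    """Analytical steady-state protein level of the toy model."""
    return k_syn / k_deg

def sensitivity(param_name, params, delta=1e-6):
    """Normalised local sensitivity d(ln P*)/d(ln k) via finite differences."""
    base = steady_state(**params)
    perturbed = dict(params)
    perturbed[param_name] *= (1 + delta)   # nudge one parameter by delta
    shifted = steady_state(**perturbed)
    return ((shifted - base) / base) / delta

params = {"k_syn": 2.0, "k_deg": 0.5}
print(sensitivity("k_syn", params))  # ~ +1: output scales with synthesis rate
print(sensitivity("k_deg", params))  # ~ -1: output falls as degradation rises
```

Parameters with sensitivity coefficients near zero can be fixed crudely, while highly sensitive ones demand accurate measurement — which is one way model building feeds back into experimental priorities.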
There have been significant contributions in this direction, be it the development of a virtual heart2 or construction of the first Leishmania major metabolic network that included 560 genes, 1,112 reactions, 1,101 metabolites at eight unique sub-cellular localisations3. Biologists have also built a genome-scale constraint-based model of the Pseudomonas aeruginosa strain PAO1, mapping 1,056 genes whose products correspond to 833 reactions and connect 879 cellular metabolites4.
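Genome-scale constraint-based models like those cited above are typically analysed by flux balance analysis: assume the metabolic network is at steady state (stoichiometric matrix S times the flux vector v equals zero), bound each flux, and maximise a biomass objective by linear programming. A minimal sketch on a made-up three-reaction network (the real models connect hundreds of reactions in exactly this way):

```python
# Toy flux balance analysis (FBA) on a hypothetical three-reaction network:
#   v1: uptake -> A,   v2: A -> B,   v3: B -> biomass
from scipy.optimize import linprog

# Stoichiometric matrix S: rows are internal metabolites A and B,
# columns are reactions v1, v2, v3.
S = [
    [1, -1,  0],   # metabolite A: produced by v1, consumed by v2
    [0,  1, -1],   # metabolite B: produced by v2, consumed by v3
]
b = [0, 0]                                 # steady state: S . v = 0
bounds = [(0, 10), (0, None), (0, None)]   # uptake v1 capped at 10 units

# Maximise biomass flux v3 (linprog minimises, so negate the objective).
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=b, bounds=bounds, method="highs")
print([round(v, 3) for v in res.x])  # optimal fluxes: [10.0, 10.0, 10.0]
```

At steady state every flux must balance, so the uptake cap propagates through the chain and biomass production is limited to 10 units — the same logic that lets genome-scale models predict growth phenotypes from gene content alone.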
With more data arriving at the sequence, expression, structure and molecular interaction levels, the community will increasingly see virtual cell models used to address practical problems of concern to humans.