Blog

Outside the Echo Chamber

posted Sep 24, 2017, 5:20 PM by Vondriska Lab

Growing up in the 1980s, Saturday morning cartoons aside, the only television programs I was allowed to watch were Nova and National Geographic. My parents were not formally educated beyond high school, but they were solidly middle class: we had a comfortable life and they taught me to be curious and value objectivity. In short, they inadvertently taught me to think like a scientist.

In the present day, when the average American cannot afford to purchase a new car[1], advocating for more federal spending on basic scientific research is unlikely to resonate with a large portion of the electorate. When we as scientists are lamenting how to support the next generation of investigators, our own long-term job security and whether there should be a limit on the number of grants any one person can have, spare a thought for the average American voter, whose real income has not increased since the previous millennium.[2]

That research into the fundamental basis of life is a worthy pursuit of mankind (and thus a worthy use of taxpayer money) is self-evident to the scientist and to most non-scientists in academia. This statement does not hold true for the American population as a whole. 

That it is wise to recruit the most talented people to pursue a scientific goal, regardless of what country they hail from, is uncontentious in academia. When your family or your neighbor's family is struggling to make it paycheck to paycheck, hand-wringing about being able to hire programmers or post-docs from abroad (especially if it is not commonly appreciated what a programmer or a post-doc actually does) can ring hollow.

Thus, while it is certainly true that adherence to the pure objective of increasing human knowledge must be practiced within the scientific community[3],[4], this vigilance must be matched by a demonstration of the return on investment in nakedly economic and social terms to the American electorate. Scientists have the duty at once to resist commoditization of science, while also quantifying what they do and attaching value accordingly.

This is not a partisan issue—it is an economic one. Democrats and Republicans can be vocal supporters of basic science research (notable examples include Newt Gingrich, Arlen Specter, Sheldon Whitehouse and of course Al Gore) but these folks come from the upper middle class, from a highly educated portion of the population whose jobs (and whose families' livelihoods) are unlikely to be negatively affected by globalization. These politicians, while driven to an unknowable degree by their own altruistic convictions, are servants to their constituents. As soon as those constituents fail to see science as a priority, the politicians will stop advocating for it or risk being thrown out.

To be clear: this is not an argument that because times are tough it is acceptable to discriminate, to weaken principles or narrow the goals of scientific inquiry. Quite the opposite. Because of contemporary media balkanization, the task of publicly defending the indiscriminate, objective nature of scientific inquiry has become ever more pressing. Equally critical is the need to establish in the public consciousness—the way President John F. Kennedy did with his speeches about the space race—that our country's competitiveness, its leadership role in the world, is rooted in its support of science and technology.


What is the solution? 

Connect science with the economy and national security

Various post-war factors conspired to make the United States a beacon for scientific research in the second half of the 20th century: preserving this position is critical to long-term social and economic stability.[5] Because the United States is a leader in many emerging technologies, our scientists and legislators (and by proxy, the values of the American people) in large part determine the guardrails within which entire scientific disciplines are pursued around the globe.

The fact that many ambitious and highly educated people want to train in the United States is a boon for the economy: most scientists arrive already educated in their home countries, meaning that no United States taxpayer investment was required for their training. While working in the United States, they pay taxes and spend money. Any basic discoveries they make become, by way of publication and/or intellectual property, a domestic asset to recruit future talented investigators. This is a net gain for the economy and an insurance policy regarding American global leadership.

Engage politicians (or become one)

The old adage about writing one's congressman has some truth to it. So does the one about talking over the fence to one's neighbor. Even better, for those so inclined, is to engage the political process directly: the US Congress has a paucity of STEM doctorates currently serving, although President Obama's administration was notable in that the Secretaries of Defense and Energy held doctorates in physics.[6] The medical community has a much greater representation in federal government[7]—engagement of basic scientists in this theatre could only strengthen the institutional commitment to funding research.

Change the way students are taught

Undergraduate education is the best window, in terms of scale of impact and receptiveness of the target audience, to convey to the next generation of American citizens the manifold benefits to society from basic research. Students majoring in biology and chemistry go on to be physicians, lawyers, CEOs: we educate these students in the central dogma and the theory of evolution—why not also in the economic benefits of basic research and the communication of science to the general public? A goal should be to empower students of science, most of whom do not go on to be practicing scientists in academia or industry, with the quantitative data on how basic science advances the national interest.

Improve messaging through federal funding agencies, NGOs and academic institutions

Governmental funding agencies like the National Institutes of Health and the National Science Foundation must answer to Congress. The scientists and administrators at the NIH and NSF deal on a daily basis with how to justify to Congress (and thus to the public) how tax dollars are spent. How can this channel of communication be improved, more tightly linking basic research to job creation, intellectual property and GDP? The NIH in particular has pursued initiatives over the past few years aimed at better quantification and accountability for federal grant dollars—grant recipients need to go further in aiding this objective. So too with non-governmental organizations and our own home institutions, along with stakeholders to whom they answer: that medical breakthroughs and technological leaps arise from studying fundamental processes needs to be continually restated, in language that governmental and NGO leadership can wield with constituents. The instinct of many institutions to foster cross-departmental research teams focused on disease areas is wise—but these efforts must be firmly rooted in a public relations campaign to explain what basic research is, and that if we do not have it, there will be nothing to translate to the clinic.

Engage non-science media

The modern technological ecosystem has led us to take for granted the continual appearance—like care packages from a more advanced civilization—of new devices, services and tools. But this complacency belies the connection between government funding on the front end and academic/private sector R&D, new start-ups and intellectual property on the back end. This connection needs to be clearly articulated to the public, emphasizing not only the high-profile technological innovations that lead to things like iPhones and electric cars, but also the less headline-grabbing small business innovations that arise from basic research in universities or private companies. Individual disciplines have begun to recognize the need to measure the contribution that public funding of basic research delivers to society[8]—these quantitative data need to be articulated to the non-science public. As has been recognized, it is imperative that this dialog with the public eschews hyperbole: we are not to become salesmen, but rather, responsible stewards of public investment and honest communicators of the resulting benefits to society.

I imagine that the idea of tying fundamental research to pedestrian concerns like jobs, the economy and national security is distasteful to many scientists. So be it. It matters not how pure the scientist’s motives are for the use of public funds—we cannot expect the public to tacitly agree to support such endeavors indefinitely. Communicating directly to the public the return on investment from basic science is every scientist’s responsibility.   



[1] https://www.the-tls.co.uk/articles/public/trump-dynasty-luttwak/

[2] https://en.wikipedia.org/wiki/Household_income_in_the_United_States

[3] Michele Pagano. Nature. 2017; 547:381; PMID: 28748949

[4] William G. Kaelin. Nature. 2017;545:387; PMID: 28541345

[5] Reif L.R. Foreign Affairs. May/June 2017.

[6] https://en.wikipedia.org/wiki/List_of_United_States_politicians_with_doctorates

[7] https://en.wikipedia.org/wiki/Physicians_in_the_United_States_Congress

[8] Hill, JA et al. Circulation Research. 2017. doi: 10.1161/CIRCULATIONAHA.117.029343; PMID: 28684531

N>1

posted Aug 21, 2017, 4:49 PM by Vondriska Lab

Pretty early on it was decided there had to be a strict limit on the number of times you could go back and help yourself. And you had to list all of these sojourns clearly on your curriculum vitae, replete with vectorial time stamping, making it clear which degrees were taken first (in the now defunct, but still intuitively useful, unidirectional sense of time). In the not uncommon situation where exams or experiments actually came out worse the second time round, which version was to be considered current had to be stated.

Each reversion, as they became known, had to be logged with departure time/place, recall time/place, time spent re-doing dissertation work (‘time’ is used here in a reference-free context…like how counting to three before jumping off the diving board has no meaningful orientation within the 24 hours of a day, rather only within the context of getting up the nerve to enter the void), official seals verifying lack of contact with and/or attempts to sabotage present day competitors and return time/place, along with newly developed mentors/collaborators for COI purposes.

Year zero for academic time travel was 2017. Some folks were doing it clandestinely for a few months before government spooks caught the geeks based on time pulse signals. It is commonly accepted that several of these initial trips were forward, i.e. into the future-future, ahead of 2017, evinced by the overnight patenting of a bunch of certainly imaginable, but highly technical and as yet infeasible, really cool stuff.

Google crashed when ~10,000 people were simultaneously credited with the 1962 Nobel Prize in Physiology or Medicine.

Traveling forward was outlawed soon thereafter, as more senior investigators lamented that trainees would lose focus in the laboratory and just want to look for the answer in the back of the book, as it were.

Like always, the biological and social sciences followed their physical science cousins into the breach. Within a year, traveling back to repeat experiments and improve error bar appearance and lower p values and whatnot was so ubiquitous that reviewers and editors started to ask for this to be declared in the Methods of accepted papers. I was giving a lecture myself on some of our preliminary animal data that autumn and after the perfunctory applause a colleague approached the mic and said, “During your lecture I had my post-doc revert and conduct these studies in humans. Our paper is In Press. Just a heads up.”  Biotech and Pharma—IP in general—were obviously a straight up nightmare. Something had to be done.

NYT page three headline that winter: “Fight Over Credit Intensifies as Third Group this Month Cures Cancer, Heart Disease and Alzheimer’s”

The tragedy of the commons for scientists had become existential: you could go back and discover something for the first time, but you couldn't prevent others from doing the same. The question then became: did you want to spend all your time in continual reversion loops with your competitors, accumulating an excellent number of diverse time stamps on your passport but having little time for teaching and service (and forget about more pedestrian stuff like mowing the lawn or changing your oil), or did you reserve time travel for personal use and then have to watch the field pass you by?

Thankfully, governments sent some of their best people, plus the journals and some of the larger scientific NGOs, to huddle and produce what became known as The Cabos Consensus. Cabo San Lucas was chosen to host the meeting because its large distance from any of the portals in the then still nascent reversion industry precluded people from going back and redoing the negotiations midstream ad nauseam, which would have been really annoying and counterproductive. It was also ideal because all parties liked deciding the future and the past at once in a location with favorable currency exchange rates and terrific weather, food and beaches. The consensus stated, basically, that the aforementioned official declarations of research-related time travel needed to be made in dossiers, which restored some semblance of normal academic collegiality, yardsticking and professional advancement. Nobody wanted to go back and change that.



[Author’s note: This is a work of fiction in the mold of Nature’s Futures series.]

My heart will grow on

posted Jan 27, 2017, 7:03 PM by Vondriska Lab   [ updated Jan 27, 2017, 7:04 PM ]

Developmental biology provides new promise for organ transplantation

 

A combination of genetic engineering and stem cell biology could revolutionize treatment of multiple human diseases

 

When, on the back of Pegasus, Bellerophon encountered the Chimera, the beast was doubtless the stuff of nightmares: a body part lion and part goat, whose tail was a serpent. In contemporary times, a chimera is considerably less frightening, referring in biology to an organism composed of cells from genetically different organisms. This can happen naturally, for example when two sperm fertilize one egg during sexual reproduction, or in the laboratory, when pluripotent cells, rather than germ cells, are combined with a developing blastocyst (the early structure which has just begun to commit to the distinct lineages that will in turn generate the organs and tissues of the animal). In the laboratory, however, chimeras are usually not made in the Homeric sense: that is, typically the cells, which are mechanically injected into the blastocyst, are from the same species as the blastocyst. In recent years, scientists have figured out how to get cells from one species to incorporate into the developing blastocyst of another species, but this has been an inefficient process limited mostly to rodents and lower organisms.

As they report this week in the journal *Cell, Professor Juan Carlos Izpisua Belmonte at the Salk Institute in La Jolla, California and colleagues have potentially revolutionized experimental chimerism by combining two recent technological innovations: the first is CRISPR-Cas9, which enables rapid and precise genome editing, and the second is induced pluripotency, which generates—from a small sample of somatic cells—pluripotent cells (called iPS cells) that can give rise to all cells in the body. Unlike the mythical chimera, the chimera generated in the laboratory does not have body parts built solely from the cells of (and thus resembling) one organism or the other: experimental chimeras do not have the head of a pig and the feet of a man. Instead, each body part is a mosaic of cells from each organism. To circumvent this issue, Professor Izpisua Belmonte and his team used CRISPR-Cas9 to selectively delete, in the mouse blastocyst, genes controlling the development of a given organ, and then injected into these mouse blastocysts pluripotent cells from a rat. In this scenario, if chimerism occurs, the targeted organ should be made up of cells from the rat, not the mouse: remarkably, the investigators demonstrate that they have pulled this off for both the pancreas and the heart.

But a mouse is not a man. So in a separate set of experiments, the researchers attempt to extend these observations to larger animals, choosing the pig to provide the blastocyst. Here the investigators employed the second tool—induced pluripotency—to generate iPS cells from human fibroblasts (taken from the foreskin, incidentally, although in principle any adult cell will work). When these induced pluripotent stem cells were injected into the pig blastocysts and the blastocysts subsequently implanted into a surrogate sow, resultant pig fetuses had organs containing human cells. At this point the process is very inefficient, with only a vanishingly small number of the cells in the pig being of human origin, although with further technical innovation, these numbers should be increased substantially.

If these two advances can be combined, the results would be potentially groundbreaking for biomedical research. iPS cells, which are experimentally generated in the lab from normal adult skin cells, obviate the ethical concerns associated with the use of human embryos or embryonic stem cells. If CRISPR-Cas9 can be perfected in pigs, the possibility exists that human organs and tissues could be grown in these animals.

The most provocative use for such human-pig chimeras would be organ transplantation. How might this work? Say your doctor informs you that your heart is failing and you need a transplant. Rather than get on a waiting list, you would instead donate some skin cells and have a heart—your heart, in fact, since it comes from your cells and would be genetically identical to you—grown up in a pig over a few months, after which a surgeon would transplant your new heart from the pig to you.

Substantial technical hurdles remain before such practices will be a reality, but there are many other nearer-term uses for this technology, including drug screening and disease modeling. After slaying the Chimera, Bellerophon’s arrogance led him to challenge the gods, with predictably tragic results. How researchers advance these new findings towards helping mankind will determine whether they share his hamartia.

*Link to the paper: http://www.cell.com/cell/fulltext/S0092-8674(16)31752-4

 

Cardiovascular Proteomics Grows Up: Will It Move Out of the Basement?

posted Jul 22, 2015, 5:38 PM by Vondriska Lab

Please see the American Heart Association's Science News Section for my commentary (link) about the recent scientific statement from the AHA on cardiovascular proteomics (link).

A Scientific American President

posted Mar 10, 2015, 8:40 AM by Vondriska Lab

As we brace for the coming tsunami of campaigning for the next American presidency—expected to consume over a billion US dollars from each of the two major parties, plus perhaps again as much from party-affiliated special interest companies and individuals—should not the scientist, like any other voter, consider what values one's profession could contribute to the stewardship of the nation? Maybe the STEM electorate should consider putting forth one of our own: could some of the successes of the German economy be attributed to the analytical mind of Angela Merkel, who holds a doctorate in physical chemistry? (But then, Marion Barry had a graduate degree in organic chemistry and Mahmoud Ahmadinejad has a doctorate in civil engineering…physician Rand Paul publicly disparages vaccination, one of the greatest accomplishments in the history of medicine). No, rather than looking for a scientist to run for office, what the scientific community can do is act as a voter bloc. What sort of values might we advocate for, notwithstanding our own personal (subjective) political tendencies?

 

Reliance on Data

While the candidate surely appreciates the polling data streaming to the campaign bus, how many politicians understand the actual numbers behind the positions they hold? Stump speeches rarely allow for the data behind the dogma to be completely presented, but debates and campaign policy statements are places where the candidates should be taken to task for a granular defense of the positions on domestic and foreign issues. How many previous candidates wilted when asked for the specifics on the positions they take? Scientific thinking can ensure that the politician is held to the details.

 

Hypothesis-Based Inquiry

Every candidate will refuse, out of hand, to answer hypothetical questions. And they will get away with this refusal. This is absurd. Effectiveness as a leader of any group is defined, perhaps principally, by how one can deal with the unknown, a concept that is inherently hypothetical. We should stop electing leaders for their positions on events past. Candidates must be made to cogently propose and respond to hypothetical situations about all issues of national importance. Only if the public demands that the candidate has the gravitas to articulate, and then publicly defend, logical decision-making about the future (informed by wisdom of the past, fine) can we expect that s/he will have the boldness, when elected, to prosecute this vision.

 

Creativity

Unlike the scientist, the president would be imprudent to approach the job as an experimentalist. A trial-and-error approach to foreign or domestic affairs seems unwise. But like the scientist, the president can seek to innovate new solutions. How might this look in real terms? Beyond the sound bites, the scientific candidate would explore creative approaches to existing, intractable problems. And in turn, the politician would be judged on her/his ability (and willingness) to marshal the right people to quantitatively evaluate the efficacy of a proposed solution. Creativity is a mixture of boldness and humility, a willingness to take on seemingly intractable problems while remaining grounded at once in reality (history, for the politician, or the published literature, for the scientist) and optimism.

 

Peer Review, Transparency & Reproducibility

To suggest a change in the method, the substance, of politics, in any organization, let alone the American government, is a fool's errand. Human social interaction will not change in an election cycle. The electorate can, however, have a strong influence on what politicians need to talk about to get elected. Jon Stewart and Matt Drudge have powerful effects on what politicians must say and even what actions they must take. Most scientists do not have audiences of similar scale, but the principle of vocal criticism of the government is the same, be it on a blog or over the hedges with the neighbor.

 

The scientist can wait for the campaign fury in the run-up to November 2016 to blow over and hope an American president is elected who not only supports science but also brings to bear on governance the virtues of science. But inaction and waiting are not the methods of science. While the means of politics are fixed, the goals are malleable and the scientist must be willing to wield the hammer. The reliance on government funding to support scientific research and medical/engineering advances—apart from individual civic duty—means that federal politics is (another) area where the scientist cannot remain silent.

 

Tom Vondriska

March 10, 2015

Scientific Research and National Security

posted Sep 20, 2013, 12:19 PM by Vondriska Lab

It is hard to consume news media in the United States today and not be inundated with gloomy predictions about the impending budget sequester. Regardless of how one views the political infighting amongst our elected officials, there are certain truths about how a failure to resolve the budget impasse will impact many aspects of our daily lives. Critical among these is scientific research. The National Institutes of Health have already initiated cuts to the budgets of peer-reviewed, “funded” grant applications, notwithstanding pay-lines that remain at or near the single digits for many agencies. The impact of decreasing federal investment on the future of scientific research, particularly in the area of supporting young investigators emerging from training, has been poignantly highlighted [1], as has been the general need for scientists to better communicate the value of scientific discoveries to the public [2], so as to justify investment of tax dollars.

     I submit that the even more insidious effect of a failure to support scientific research comes in the realm of national security. To be clear, I am talking about all scientific research, not just that associated with defense. Ever since Western European nations made the remarkable decision to create a profession in which a person was tasked with investigation of the unknown (that is, the scientific researcher), the most creative and motivated scientists have sought PhD training in the countries that afforded the best infrastructure for such investigation. It was not always so, but following World War II, a confluence of geopolitical changes and wise public investment made the United States the most sought-after destination for academic training at all levels. Years later, the United States' putting a man on the moon was an unequivocal message to the rest of the world: this is the place where dreams can be pursued, where big ideas have the chance to be realized, regardless of whether you are a physical, social, biological or natural scientist (and regardless of your political inclinations).

     My colleagues who have immigrated to the United States over the past 25 years invariably cite the opportunity for career development and scientific research in the United States as the number one reason they came. This is a clear example of American soft power in the global domain arising from public investment in scientific research at home. It is an obvious success story for the openness of our society, too; but this openness would have no impact on national security if we were unable to attract the most brilliant scientists and engineers.

     No serious debate on our country's governance questions the need for a strong military…in fact, few would question the necessity that the United States should have the strongest military. This logic is no less true for scientific research—including infrastructure and human talent. Educating the public about the importance of scientific exploration goes beyond explaining the fruits of the discoveries for human health, the development of new technologies and our knowledge of the natural world. Our communication with the public and elected officials must make clear the causal link between American preeminence in scientific research and our national security. Our competitors around the globe clearly understand this connection and accordingly are investing heavily in science. The magnitude of damage to our scientific infrastructure, and the extent of attrition of the most talented scientists, inflicted by continued cuts to publicly funded research are impossible to predict. It is untenable to argue, however, that we can remain leaders in years to come if we continue to starve the engine that attracted the best from around the world while, at the same time, our competitors wisely fuel their own.

 

Tom Vondriska

September 20, 2013

  

1.         Alberts B. Am I wrong? Science. 2013;339:1252

2.         Bubela T, Nisbet MC, Borchelt R, Brunger F, Critchley C, Einsiedel E, Geller G, Gupta A, Hampel J, Hyde-Lay R, Jandciu EW, Jones SA, Kolopack P, Lane S, Lougheed T, Nerlich B, Ogbogu U, O'Riordan K, Ouellette C, Spear M, Strauss S, Thavaratnam T, Willemse L, Caulfield T. Science communication reconsidered. Nature biotechnology. 2009;27:514-518

 

On Knowing Things

posted May 3, 2013, 4:47 PM by Vondriska Lab

To know something is so fundamental to the western psyche that considering a definition of knowledge may seem pedestrian. How many times per day do we think or utter the phrase “I know” as a response in the affirmative or to register agreement? We know an infinite number of individual things: our home address, the name of our first pet, famous world capitals, specialized information associated with our professions—the colors in a favorite painting or lines of a sonnet, perhaps. We also know seemingly very complex things: manners associated with a culture in which we were raised, written and spoken language(s), mathematical principles, and how to play a musical instrument are just a few examples. Some of these types of knowledge clearly have active cognitive elements as well as subconscious ones, but all seem to fit in some way within the scope of what we as humans feel justified in claiming we know.

            But what is the meaning of knowledge? Such a broad question cannot be tackled as a categorical imperative in an essay of this scope—indeed, as will be discussed below, the meaning of knowledge has evolved to encompass different things within different sub-disciplines of the same field, let alone between, say, the sciences and humanities—so we endeavor to consider this question in the context of biomedical science. The fundamental basis of scientific inquiry from the dawn of human existence shares the thread of (i) sensory-based observation, (ii) formation of theory (or, prediction) and (iii) experimentation. Knowledge, then, should emerge from this formula and should require the input of each of the three components. This is an important distinction, wherein scientific knowledge has differed from the non-scientific variety: principles of scientific knowing are reasonably impervious to the Weltanschauung of the masses (e.g. humans knew the world was flat based on scientific principles…until scientific principles allowed star gazers and experimentalists [sea-farers] to prove otherwise) whereas conventional knowledge is not (e.g. ancient Romans knew the correctness of “rendering unto Caesar,” while modern western societies have a more nuanced knowledge of taxation and shared governance). Cultural shifts within societies over time, and between societies at a given moment, establish widely varied understandings of knowing and knowledge; scientific knowledge is not at the whim of cultural, political, or emotional tangents. (As an aside, this is of course not to say that science cannot be politicized, as there are many unfortunate examples where it has been. Furthermore, cultural, political and emotional factors all play a large role in the method and scope of scientific inquiry, as citizens and politicians shift the emphasis of inquiry to new needs and interests of the population; these factors do not, however, affect the fundamental meaning of scientific knowledge.) Here we argue that the basis for scientific knowledge is uniquely human and we would define it as the ability to predict and/or reproduce a phenomenon in the natural world through the use of the five senses and, as is increasingly the case in all of science, specialized tools for dissection, data collection and data analysis. Scientific knowledge is universal, which is to say, it is always the same, in any language or under any social construct, until a new theory, experiment and observation necessitates a new synthesis.

            There is a critical distinction between knowledge, which is pursued and obtained in many fields (including, as dealt with herein, science), and truth, which is the sole purview of mathematics, religion and some disciplines of philosophy. Truth is not the domain of science, because truth is not an unbiased attempt to rationally explain the world and it discards the assumption of fallibility, which itself enables the existence of scientific knowledge. Whether anything is permanent or eternal is a question with interesting cosmic considerations—some would argue central to the drive of human inquiry—but ontological permanence is not within the scope of scientific observation. There is also a distinction to be made between knowledge and information, the latter herein referring to a medium of the former: information is the content, the substance; knowledge includes the retrieval, processing and usage.

            The features of human inquiry in biomedical science are changing on a scale that requires reevaluation of what it means to know something and what it means to reproduce an observation. The purpose of this essay is to address how scientific language and reasoning can evolve to respond to this change. We begin with a consideration of the evolution of scientific inquiry, particularly with regard to how the use of tools for data collection has changed the scale of information humans can acquire and how this has in turn changed the means for generating knowledge.

 

Use and Evolution of Tools, Changes in the Acquisition of Information and Knowledge

            Early man distinguished himself from, and out-competed, ancient hominids based on improved cognitive abilities and use of tools. In this regard, humans and their evolutionary predecessors are still thought to be distinct from non-humans based on the human ability to store and process information to make future decisions; apply this to things like prevention of conception or treatment of wounds sustained hunting or in combat, and you have the dawn of biomedical knowledge. Agriculture likewise represents an example of non-medical scientific knowledge. Although precise determinations of when language arose vary (based in part on the definition of language), pre-human primates were verbally communicating 1-2 million years ago, whereas fully modern language has probably been around for 100,000 years. Knowledge transfer, then, would have taken the form of oral traditions from the time behaviorally modern humans arose (~50,000 years ago) until the emergence of written language, around 3,200 BC in Mesopotamia.

            Fast-forward to the 20th century and the emergence of the computer. A mere half century before the present, the totality of human knowledge, while vast, could be reasonably quantified in a straightforward manner: count the number of books. Consider early thinkers like Socrates or experimentalists like Leonardo da Vinci, Galileo Galilei and Benjamin Franklin: these scientists used their senses and engineered tools and methods to acquire knowledge beyond the state-of-the-art of their contemporaries. Through the middle of the last century, humans recorded their observations, theories and data, and performed their calculations and analyses, on paper, storing them in printed books or articles. With the emergence and now ubiquity of computers in all aspects of human existence, the type and scale of information to be obtained and stored has undergone a quantum leap. From everyday things like email and Wikipedia to the advanced calculations performed by engineers, physicists and life scientists—humans now possess an ability to acquire and process information orders of magnitude greater than possible with the human mind alone. (As an interesting aside, it is certainly an unresolved question (Sparrow, Liu et al. 2011) what effect off-loading information storage from our brains to computers and the Internet will have on human cognition, intelligence and creativity, as well as on pure social constructs like communication. Whether delegating cognitive tasks to computers frees our minds for creative pursuit or atrophy is also unknown, although the experiment is ongoing.) But does this information constitute knowledge? When all human knowledge was stored in books, one could reasonably argue that no single person knew the totality of human knowledge, but that this sum could rightfully be considered the domain of human civilization as a whole—because someone knew every individual piece of information. Not so in the present day: much of modern human activity relies on information never processed in the mind of a human being, and that no human being retains for calling forth. So we are left with the need to determine what in the vast amounts of information that surrounds human societies constitutes knowledge—when can something be said to be known in the present day?

            In addition to storing information (some of which one assumes passes muster for being knowledge), computers are prime examples of tools that have enabled new types of information to be acquired in the modern era, but they are certainly not the only example. Science and technology are the areas in which perhaps the advances of tools have most fundamentally changed what it means to know something. Gone are the days when the curious, innately skilled amateur could make ground-breaking discoveries by simply using a scalpel and scissors. In biomedical research, the frontier of human knowledge has progressed to the nano-scale (one billionth of a meter), wherein new discoveries are made by manipulating and measuring molecules, which can only be visualized and quantified with the most sophisticated (and expensive) instrumentation and using complex methodologies known only to very specialized researchers (see (Chen, Mias et al. 2012) for just one example in the area of large-scale biology). It is virtually impossible for a person without formal laboratory training, most times involving graduate education, to make a novel discovery about basic biology and/or medicine. Similar changes have occurred in the acquisition of knowledge about the universe, driven now by data from telescopes launched deep into space and theories based on observations that would be impossible with the tools our recent ancestors used to observe that the earth was not the center of the cosmos. This change in the scale of human inquiry necessitates a change in the language used to describe knowledge.

 

What Knowledge Is Not

            The most intuitive method of acquiring information about the world is through direct observation with the five senses. Observation, however, does not constitute knowledge. The recalling of an observation and the processing of the observed information are separate from the act of observation, with only the combination of all three constituting knowledge. Take the example of the dog who observes that begging at the dinner table produces food scraps from his owners: the observation of the food is not enough—for the dog to have knowledge about this experience and hence for him to manipulate the information to affect his actions, he must identify the event (the begging, in this case) and process, or associate, the event with another fact or outcome (in this case, receiving the leftover bones). Here, the dog can be said to have knowledge that beseeching his owners produces a meal.

            Neither information nor data are knowledge for a similar reason—they are inert and, alone, neither convey, nor contain, meaning. Meaning is imparted by the mind of the being that processes the information. (As an aside, the phrase “knowledge in books”, ergo, is a contradiction in terms. Information in books only becomes knowledge when a human being reads and comprehends it.) Knowledge, unlike data or information, requires a sentient being for its existence. One should also acknowledge at this juncture the concept of truth, which is neither a prerequisite for, nor a result of, knowledge. For obvious reasons one's concept of truth is a central issue in a discussion of knowledge; as discussed above, truth is a dogmatic absolute with no place in science.

 

What Knowledge Is

            Knowledge is the processing of past events to influence the present. I know that I left my car keys on the refrigerator and so I fetch them from there on my way out the door in the morning. There need be no philosophically more complex synthesis to understand, at its core, what knowledge is. Why, then, is it so challenging to estimate what humans know as a species, and how knowledge is transferred (or not) between individuals and generations?

            Take the area of science: the chemistry professor clearly knows the makeup of a DNA molecule: the chemical structures of the bases, the positioning of the sugars and phosphates, the logic for bonding between the bases in the double helical structure of the molecule…these are immutable facts, tested by thousands of experiments by different scientists all over the world. This information is contained in textbooks and remains knowledge when passed on to new minds through lectures or reading. Here the iterative path to knowledge is ostensibly clear: observation, overt experimentation, and interpretation. But what about scientific knowledge not obtained from textbooks, lectures or personal instruction? Knowledge that seems innate? Take the surgeon. An anatomy professor may know the Latin name of every nerve, muscle and ligament, but you would hire the skilled physician—whose textbook knowledge is perhaps indeed formidable—based instead on her practical knowledge to do your gallbladder surgery or hip replacement. It seems, then, that the physician here knows something that is completely reliant on: (i) her specific mind; (ii) her training; and (iii) her command of her experience in real-time. Similar examples exist in laboratory science, where some individuals just have “good hands” (or not) at the bench, a trait that is present or absent without correlation with knowledge acquired in a didactic manner. Other commonplace examples are a racecar driver or Wall Street trader: these individuals possess a core set of—perhaps universally attainable—facts that their own minds allow them to manipulate in a manner that creates specialized knowledge. The surgeon and the racecar driver possess knowledge of the variety that circumvents cognition, proceeding straight from the subconscious to the action, demonstrating that while this trait (cognition) may be sufficient for knowledge, it is not necessary. The antithetical consideration is one in which knowledge exists but is not applied—where cognition is engaged but action does not occur. This knowledge is no less valid than the foregoing examples.

            These considerations carry two intriguing implications: first, knowledge need not be communicable to exist; and second, while information or truth may be universal, knowledge is not. Neither of these is particularly satisfying from a scientific point of view, wherein we pride ourselves on the ability to codify nature and reduce to practice (if not also to principle) that which is observed. What value is there for the scientist in considering definitions of knowledge? What role, if any, does the scientist play in the collective knowledge of the human species?

 

Evolution of Science, Evolution of Knowledge

            Unlike the humanities, which play a no less fundamental role in the creation of human knowledge, science carries the unique requirement that such knowledge be transmittable through the five senses and observable in the same way (there are certain thought experiments here where this may not play out, but suffice it to say, the potentiality of observation by the five senses must exist, even if the reality does not). A novel may create soaring emotions in a reader, but the writer has relied on the theater of the reader's mind, not any real sensory stimulation, for the creation. The scientist is bound by what can be shown, or, more important, what can be reproduced. No one need reproduce Beethoven's Moonlight Sonata for it to be a transcendent observation about the universe (if one chooses to think in such terms). But any scientific observation, no matter how stunning, is meaningless if the observer cannot codify it somehow and if another person cannot reproduce it.

            We propose that the semantics around knowledge and the manner in which knowledge is codified by scientists should actively take into account the following basic principles in the current stage of human scientific development: (i) what it means to know a principle in a given scientific discipline should be addressed (debated may be a better term…) explicitly and publicly by scientists in that field, to avoid rampant accumulation of data and review papers; (ii) knowledge should be codified using a logic that is appropriate for the given discipline (an example in genome biology is as follows: DNA is an inviolable genetic code for RNA—an ATG is necessary and sufficient for an AUG, and in turn for a methionine—but whether there is a histone code, wherein specific modifications to chromatin constitute necessary and sufficient conditions for a subsequent event, is an area of active debate (Strahl and Allis 2000; Lee, Smith et al. 2010)); (iii) scientists must become overall better communicators, as has been well-recognized, not only with the non-scientific public but also, no less importantly, amongst scientific disciplines. We need to learn to constantly and actively evolve our language to accommodate our own discoveries and those of other fields, including non-science; (iv) scientists need to learn to think like non-scientists and incorporate, when appropriate, knowledge structures from the humanities. As has been discussed in detail elsewhere (Shlain 1991), human insight, and thus knowledge as well, proceeds by heuristic strides in humanities and science in parallel, with each of these disciplines, which are often (perhaps inappropriately) segregated in modern education, fundamentally changing the logic structure in the other throughout recorded history; and (v) the codification of knowledge within and between disciplines must be an active process and must be continually revisited as the nature of human inquiry advances. What makes humans unique is our ability to universalize individual experience—how this process changes to adapt to modern methods of experiencing the world has fundamental implications for the intellectual evolution of the species.
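            To make the contrast in point (ii) concrete, consider a minimal, purely illustrative sketch in Python (the codon table is deliberately truncated to a few entries): the genetic code can be written down as a fixed lookup table—an ATG on the coding strand always yields an AUG and, in turn, a methionine—whereas no comparably fixed table can yet be written for chromatin modifications.

    # Illustrative only: a few entries of the standard genetic code.
    CODON_TABLE = {"AUG": "Met", "UGG": "Trp", "UUU": "Phe", "GGC": "Gly"}

    def transcribe(dna_codon):
        # Coding-strand DNA to mRNA: replace thymine with uracil.
        return dna_codon.upper().replace("T", "U")

    def translate(dna_codon):
        # A given DNA codon is necessary and sufficient for one amino acid.
        return CODON_TABLE[transcribe(dna_codon)]

    print(translate("ATG"))  # -> Met, deterministically, in any cell and any lab

            No analogous function could be written today that maps a given histone modification to a guaranteed downstream outcome; that asymmetry is precisely the point of codifying knowledge with a logic appropriate to the discipline.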

 

Matthew Thompson, Brad Picha, Thomas M. Vondriska

May 2013


Literature Cited


Chen, R., G. I. Mias, et al. (2012). "Personal omics profiling reveals dynamic molecular and medical phenotypes." Cell 148(6): 1293-1307.

Lee, J. S., E. Smith, et al. (2010). "The language of histone crosstalk." Cell 142(5): 682-685.

Shlain, L. (1991). Art & Physics: Parallel Visions in Space, Time and Light. USA, Perennial.

Sparrow, B., J. Liu, et al. (2011). "Google effects on memory: cognitive consequences of having information at our fingertips." Science 333(6043): 776-778.

Strahl, B. D. and C. D. Allis (2000). "The language of covalent histone modifications." Nature 403(6765): 41-45.

 

 

Evolution Redux

posted Sep 25, 2012, 5:02 PM by Vondriska Lab

To many of us practicing science on a daily basis, the following claim may seem odd or even absurd at first pass: the recent report in Nature (PMID: 22992527; link) on the long-term Escherichia coli experiment from Richard Lenski's lab at Michigan State University has reintroduced the scientific method to one of the cornerstones of modern scientific thought—evolution.

 

Lenski began an experiment in evolutionary biology in 1988, propagating bacterial populations founded from individual clones and splitting the cells daily, for nearly 55,000 generations leading up to the present day. The purpose was to observe evolutionary change within these populations in real time and to understand the role of various environmental and stochastic events in enabling selection. At regular intervals over the ensuing decades, aliquots of those cells—an archive spanning the ~55,000 generations—were frozen down by Lenski (or, likely, a grad student/post-doc). Herein lies the utility of this Herculean feat of patience and experimentation: because the ancestors to the current generation (and all those that will come after it…the experiment is still ongoing) are still alive and in the suspended animation of the -80°C freezer, this experiment in evolution can be repeated!
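For a sense of the arithmetic, here is a rough sketch assuming the commonly cited 1:100 daily dilution of the cultures (a protocol detail not quoted in this post), which corresponds to about log2(100) ≈ 6.6 generations per day:

    import math

    # Assumed protocol parameters (illustrative; the 1:100 daily transfer is
    # the commonly cited design, not a figure taken from the paper itself).
    daily_dilution = 100
    generations_per_day = math.log2(daily_dilution)  # ~6.6 doublings to regrow 100-fold

    target = 55_000  # generations
    days = target / generations_per_day
    print(f"{generations_per_day:.2f} generations per day")
    print(f"~{days:,.0f} days (~{days / 365:.0f} years) to reach {target:,} generations")

Daily transfers sustained for over two decades are what add up to tens of thousands of generations.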

 

This is just what Lenski and his team did in the paper by Blount et al. Having observed spontaneous acquisition of a phenotype allowing utilization of citrate as an energy source, the lab traced the change to a genomic rearrangement that removed a block on citrate transporter expression around generation 30,000. However, this rearrangement alone was not enough to make the new phenotype an effective target for selection. Through elegant additional work, the lab proposes a model in which 1) a genomic rearrangement—in this case the transposition of an active promoter upstream of an evolutionarily silenced gene—causes re-expression of a seemingly new protein for the given organism; 2) this change is compounded by random potentiating mutations; and 3) under the relevant selective pressures (in this case apparently positive ones, given the availability of oxygen and citrate in the growth medium), new traits arise. Because this bug's ancestors were in the freezer, they could be thawed and the experiment repeated, with the same phenotype emerging a second time under the same controlled laboratory conditions.

 

That natural selection occurs is rarely denied, even by the non-scientific community, but proving evolution can occur has been more of a bugaboo. In fact this is true for many areas of science, where theories become adopted due to the impracticality or impossibility of directly testing them (see PMID: 22914130, link, for a recent discussion of this point for the Higgs Boson and origin of the universe). Lenski and colleagues have taken evolution out of this fuzzy area of overwhelming evidence coupled with experimental intractability. This feat is remarkable for many reasons, including Lenski's dedication to the task (What other mysteries will be revealed when the organism gets to generation 200,000? 1,000,000?), the scientific community's support, the funding from the NSF, NIH and other sources (including several forward-thinking private foundations, per the paper's acknowledgements), and, most of all, the exciting discovery getting at the How? of evolution, that is, the principles governing rapid emergence of phenotypes through slow genetic change over thousands of generations (or as Lenski and colleagues describe, potentiation, actualization and refinement of a trait). Perhaps most important, it is a victory against the defeatism of “it's too complex to have happened by chance.”

 

Tom Vondriska -- September 25, 2012

Standards for Research and the Meaning of Knowledge in Science

posted May 25, 2012, 2:23 AM by Vondriska Lab   [ updated Apr 27, 2016, 9:35 AM ]


Brad Picha, Matt Thompson & Tom Vondriska -- May 12, 2012

Please see Nature, PMID: 22552083

Reform Education: Bottom Up, Not Top Down

posted May 8, 2012, 10:29 PM by Vondriska Lab   [ updated May 25, 2012, 2:29 AM ]


In his recent opinion piece in Nature, Professor Mark Taylor discusses (Nature. Vol 472. 261. 21 April 2011) the idea that academic institutions currently produce too many PhD-level investigators in multiple fields, leading, in his words, “to a cruel fantasy of future employment”. With this observation, along with his title (“Reform the PhD system or shut it down”) and the contention that “most doctoral programs conform to a model defined in European Universities in the middle ages, in which education is a process of cloning that trains students to do what their mentors do”, Taylor displays a sense of hyperbole, but he nevertheless hits on a critical challenge for academic mentors as well as administrators of doctoral programs across disciplines.

Toward the end of his otherwise honed prescription for reforming doctoral education, Taylor attempts to expand the scope of his thesis with the following statement: “Although significant change is necessary at every level of higher education, it must start at the top, with total reform of PhD programs in almost every field” (emphasis added). By expanding the mandate of PhD program reformation to addressing problems with higher education in general, Taylor mistakenly places the onus on more advanced degrees to improve the quality of the less advanced ones. This is counter-productive for a demand-based, product-driven educational system. The goal of reforming doctoral programs should be to improve doctoral education. To improve undergraduate education, the focus should shift to the undergraduate student as a product in itself, emphasizing the importance of liberal arts exposure and teaching the individual how to think, perform critical analysis and hypothesize (as poignantly articulated by Professor James Economou at a recent UCLA post-doctoral event, see here). Now the focus of undergraduate education has shifted, in many cases, to producing fodder for graduate school and/or bringing students up to speed on what they missed from high school; especially in the sciences, few undergraduates come out of school adept at actually doing what they have a degree in. For this to be effective, high school education must again be made to mean something, such that undergraduate institutions can teach liberal arts, humanities, philosophy and science, rather than basic math, grammar, composition and rudimentary social sciences. The reform of the larger educational system, then, must come from the bottom up.

Tom Vondriska -- May 22, 2011

