Category Archives: Open Access

Ten papers demonstrating unequivocally that scientists have a sense of humour

Of all the adjectives that come to mind when you think of academic or scientific writing, there’s one I’d bet sinks to the bottom of the list regardless of the audience. You might think a paper is unintelligible, incomprehensible, jargon-filled, complicated, detailed, sometimes exciting, often boring, but certainly not funny. I mean, science is serious business, and no researcher in his or her right mind would dare compromise citations for laughs.

Or would they?

Every year the scientific community waits on the edge of its collective seat for the announcement of the Ig Nobel Prizes, celebrating real research that “makes people laugh, and then think.” The lucky winners become the laughing stock for a while, but the giggling is often followed by a sober realization: “Someone actually studied that? Seriously?”

Part of what makes the Ig Nobels so unabashedly funny is that often, they start out with a completely serious question. Does a person’s posture affect their estimation of an object’s size? (Apparently so, especially if you lean slightly to the left). Can a doctor accidentally make your butt explode while performing a colonoscopy? (Turns out it’s rare, but I’m sure…um…relieved that they did the research).

What’s a little harder to find in the scientific literature are examples of researchers being intentionally cheeky. But such examples do exist, and they call out for a Top 10 List.

Here it goes:

10. “The unsuccessful self-treatment of a case of writer’s block” (D. Upper, Journal of Applied Behavior Analysis, 1974)

The reviewer’s comment on this paper sums it up best: “Clearly it is the most concise manuscript I have ever seen — yet it contains sufficient detail to allow other investigators to replicate Dr. Upper’s failure.” 

Which is precisely what other investigators set out to do. That it took 33 years and a research grant of $2.50 underscores the scope of the problem. Writing, it seems, is hard.

Many papers tackling such a difficult problem are lost to researchers outside of the discipline, but the beauty of Upper’s pivotal work is that it can be readily applied to different fields. Consider its recent application to a study in molecular biology:

Excerpt from Lorenz et al. (2011) ViennaRNA Package 2.0, Algorithms for Molecular Biology, 6:26

Only time will tell whether science will have the last…or first…word on this mystifying phenomenon.

9. “Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?” 

What at first seems like just another paper full of jargon about faster-than-light particles reduces to a simple and elegant conclusion: “Probably not.”

8. “Synthesis of Anthropomorphic Molecules: The NanoPutians” (Chanteau & Tour, Journal of Organic Chemistry, 2003)

They drew stick figures. With molecules!

Careful, however, not to illustrate their nano-peeps in a way that misrepresents their equilibrium state, the authors add this caveat: “…the liberties we take with the nonequilibrium conformational drawings are only minor when representing the main structural portions; conformational license is only used, in some cases, with the NanoPutians’ head dressings.”

Science…solving bad hair days, one molecule at a time.

7. “Trajectory of a falling Batman” (Marshall et al., Journal of Physics Special Topics, 2011)

This study is exactly what it sounds like. They analyzed the path of a falling Batman to determine whether our beloved caped crusader could indeed survive on a Batwing and a prayer. Their grim conclusion? Splat.

Or, as the authors put it (albeit far less eloquently in my opinion): “Clearly gliding using a batcape is not a safe way to travel, unless a method to rapidly slow down is used, such as a parachute.”  Noted.

6. “The case of the disappearing teaspoons: longitudinal cohort study of the displacement of teaspoons in an Australian research institute” (Lim et al., British Medical Journal, 2005)

Don’t you just hate it when your co-workers steal all the teaspoons?  If you’re a real scientist, you don’t get mad. You get a publication!

5. Chicken, chicken, chicken.

There are few things more satisfying in science than publishing a really important paper, and then being asked to present it at a really important conference. This is one of those truly remarkable papers that must be seen to be appreciated.

Now, you might think that a study such as “chicken chicken chicken” could have little application outside the poultry world. But you’d be wrong. Indeed, I know at least a few people who have adopted “chicken chicken” as the universal code word for a scientific talk that has gone on way too long. So here’s my public service announcement to researchers everywhere: if you’re ever speaking and the audience starts muttering “chicken chicken” to themselves, they’re not hungry — they want you to stop.

4. “Santa and the moon” (Barthell, CAPJournal, 2012)

Do you remember when you were a kid on Christmas morning, and how you carefully examined the wrapping paper for its scientific accuracy before meticulously unwrapping the toy you’d been waiting for all year?

I totally thought so.

This one belongs in the category of “You might be a scientist if…”

Parents: If your child ever exhibits signs of trauma from the inaccurate portrayal of moon phases on wrapping paper, it might be time to have a serious talk about graduate school. Might I recommend this book at bedtime?

3. “Absolute dating of deep-sea cores by the Pa231/Th230 method and accumulation rates: a reply” (Journal of Geology, 1963)

Sometimes, despite their best efforts, scientists make mistakes. Luckily, when this happens, there are usually other scientists happily willing to point it out. Such was the case with this paper, in which some scientists pointed out an error in the original paper, and the authors simply replied, “Oh well, nobody is perfect.”  Gotta admire their honesty.

2. Two body interactions: a longitudinal study

Who says romance is dead? Nothing quite says “publish or cherish” like a marriage proposal embedded in a scientific paper!

1. 20 more hilarious scientific papers in five minutes.

Thanks to Seriously, Science?  (or, the artists formerly known as NCBI ROFL).

More on Myriad, its implications beyond medicine, and what I think would have made more sense

Yesterday, when I heard the news about the Supreme Court’s ruling on the Myriad gene patenting case (read yesterday’s post for details), I was frustrated for two reasons. First, my newsfeed was full of headlines saying that genes could no longer be patented, which only partially reflects the true implications of the ruling. Second, the Court’s decision itself only partially dealt with the fundamental issue in the whole case.

To be fair, the split ruling, which invalidated Myriad’s patents on isolated human DNA, did effectively address the immediate concern over patient access to genetic testing services relying on the naturally occurring BRCA1 and BRCA2 sequences. Myriad no longer has a complete monopoly over the BRCA tests. This may provide some relief to other labs wishing to offer the test (and patients hoping to receive it at a competitive cost), but the immediate benefits are likely to be short-lived: Myriad’s patents were due to expire by 2015 anyway, and single-gene testing methods are rapidly losing steam as it becomes feasible to sequence whole genomes at a lower cost than the individual tests. So yes, it’s a win, but beyond the ideological victory, it seems too little, too late.

In its compromise — overturning patents on isolated DNA while upholding patents on synthetic cDNA — the Court failed to recognize that the central problem with gene patents is that our genetic information, in whatever chemical form it is expressed, is inherent to nature. It’s not a human invention. And as I argued yesterday, the process of creating cDNA from a naturally occurring sequence is completely obvious to anyone working in the field. It’s a standard procedure — and indeed cDNA forms the basis for many applications of genetic engineering.

Take plant biotechnology, for example. Let’s say I find a plant that tolerates stress extraordinarily well, and I can attribute that tolerance to a specific gene. If I wanted to isolate that gene, I probably wouldn’t go digging through the chromosomes looking for the sequence — there’s a lot of extra “junk” in the chromosomal DNA sequences, stuff that isn’t functionally relevant for my purposes. Instead, I’d go looking for the active version of the gene, its “transcript” — the mRNA molecule that has already had all the junk taken out and only contains the information that’s relevant to me. The end product, when I copy that mRNA transcript, is cDNA.  According to the Supreme Court’s ruling, that sequence I’ve just copied is no longer a product of nature, and I could patent it, as is, even before I’ve done anything useful with it. Of course, the next step might be to take that cDNA and introduce it into a crop plant, thereby conferring stress tolerance to that crop. I could also patent that. But unlike the first step of merely arriving at cDNA, in applying it, I’ve done something inventive.
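For readers who like the nuts and bolts, here’s a toy sketch in Python of just how mechanical that copying step is (the sequence is made up, and real reverse transcription is of course done by an enzyme, not a script): the first-strand cDNA is simply the DNA complement of the mRNA, and the “coding” strand of the finished double-stranded cDNA is the mRNA with every U swapped for a T.

```python
# Toy illustration only: a made-up, already-spliced mRNA (introns removed),
# "copied" into cDNA the way reverse transcription does, at the sequence level.

DNA_PARTNER = {"A": "T", "U": "A", "G": "C", "C": "G"}  # RNA base -> paired DNA base

def first_strand_cdna(mrna: str) -> str:
    """First-strand cDNA: the DNA complement of the mRNA, read in the opposite direction."""
    return "".join(DNA_PARTNER[base] for base in reversed(mrna))

def cdna_coding_strand(mrna: str) -> str:
    """Coding strand of the double-stranded cDNA: the mRNA with every U replaced by T."""
    return mrna.replace("U", "T")

mrna = "AUGGCCGAAUUCUAA"  # hypothetical spliced transcript
print("mRNA:              ", mrna)
print("first-strand cDNA: ", first_strand_cdna(mrna))
print("cDNA coding strand:", cdna_coding_strand(mrna))
```

The point of the exercise: nothing in that copy is new information. The sequence content is exactly what nature already wrote, minus the introns.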

But I could have just as easily stopped at the cDNA step, taken my patent on the cDNA, and effectively blocked anyone else from doing anything useful with it. If anyone wanted to use my gene, they could certainly  (thanks to yesterday’s ruling) go back to the original source and re-isolate it, but the process would result in the creation of a cDNA identical to the one I have a patent on. To make matters worse, many  patents don’t just cover a single unique sequence; they cover that sequence and anything closely related to it. This is intended to stop people from taking my patented sequence, making a minimal number of changes that have no functional relevance, and calling it a different sequence. However, the nature of genetic information is such that genes with similar functions tend to have similar sequences. Nature does not often completely reinvent the wheel. And since cDNA represents the essential functional part of a gene — the instructions that actually produce something — it also tends to reflect the least variable regions of the original DNA sequence. So one patent on a cDNA can, intentionally or not, create widespread barriers to the application of similar genes.

So yes, the ruling against patents on isolated, naturally occurring DNA is a win, but I don’t think it will have much meaningful impact on the biotech industry. The industry will go on operating in an ever-increasing tangle of gene patents (pardon me, cDNA patents), and the outcome is essentially the same: companies will still have the right to own the basic genetic building blocks, regardless of whether they’ve done anything inventive with them.

Here’s what I would have preferred to see happen, and what I think would have been far more consistent with the legal precedent cited in the ruling. Throughout the Myriad case, all of the decisions have cited Diamond v. Chakrabarty as the relevant precedent for determining whether isolated DNA or cDNA has “markedly different characteristics from any found in nature.” That case concerned the patent eligibility of a genetically-modified bacterium that could clean up oil spills. Was the bacterium a product of nature, or had the modifications resulted in something entirely new? The Court ruled that the genetically-modified organism was patent-eligible, because of its markedly different characteristics arising from human intervention.

While I thought Diamond v. Chakrabarty was kind of an apples-to-oranges comparison with Myriad (a whole organism with obvious human invention vs. isolated pieces of DNA), the case does present a compelling standard for biotechnology-related patents. The oil-eating bacterium became patentable at the point where its character, as a result of human invention, became unlike anything found in nature. Makes sense.

Rather than allowing patents on every individual building block, why not regard the building blocks as open technology, and provide patent protection on the unique products that arise from combining  different building blocks in different ways for different purposes? Isn’t that the essence of innovation — taking the pieces that are available and assembling them in creative ways to solve problems? It’s kind of hard to foster this kind of innovation when one person owns all the blocks, or each block is owned by somebody different and you have to come to some agreement about how the blocks can be used.

We now live in a world of big data and synthetic biology. We have so much information about different genes from different species, and the technology to combine them in infinitely diverse ways. Yet, today, the first step in any biotechnology project is to establish freedom-to-operate — navigating the potential patent land mines that litter the landscape.

With Myriad, the Supreme Court sought to strike a balance between the established expectations from 30 years of gene patenting practice, and supporting ongoing innovation. I get that. But I wish they could have looked farther ahead than the implications of Myriad and seized the opportunity to clear a path for future innovation that isn’t based on who has access to which building blocks.


A fireside lesson in science communication from the woefully misinformed

Having spent the better part of my research career working in the field of plant biotechnology, I’m no stranger to scientific controversy. As a researcher, I’ve met my fair share of people — including other scientists — who hold strong opinions about genetically-modified food: about whether it’s safe for humans and the environment, whether it’s really needed to address global food security, whether it should be labeled, who really benefits (other than biotech companies and patent attorneys), and even why, as a biotechnologist, I should burn in Hell.

I’m not easily rattled by a scientific debate.

I’m well-versed in the arguments both for and against, and my personal opinions on the issue are neither black nor white. I’ve spent much of my career immersed in the scientific evidence and the best I can settle on personally is an evolving shade of grey — biotechnology is not inherently good or bad, neither panacea nor Pandora’s Box. When it comes to a debate, as a scientist, my default setting is to keep calm and consider the evidence.

But last weekend I was sitting around a campfire with a couple of my scientist friends when an acquaintance (not a scientist) started challenging us on various scientific topics — from genetically-modified foods to factory farming to Angelina’s double mastectomy. It was one of the most exasperating evenings I’ve ever spent discussing science. It was also one of the most educational, for what it highlighted about the challenges of modern science communication.

When I started writing about science, I had an editor who encouraged me to focus on the art of translation — turning technical jargon into something accessible to a reader he fondly referred to as “Joe Lunchbucket.” Joe Lunchbucket was the reader who browsed the paper over a ham sandwich and would only read a science article if it was wrapped with a shiny, friendly bow that said “Gee whiz, this is cool, you gotta see this!” In time, however, I learned that translation wasn’t enough. It’s hard to get people excited about science for science’s sake — there has to be a compelling story, something for the audience to connect with above the science itself. After years of being trained as a researcher to strip the humanity out of my writing, I had to learn to put it back in, to stop “writing about science” and start writing about people and the process of discovery.

Joe Lunchbucket made me a better writer, but now I think he’s a relic of a simpler journalistic era. Science communication is far messier today than even when I first dipped a toe in the water a decade ago. It’s no longer fundamentally about telling a good story or making sense of the jargon, although those skills are still vital. Now it’s as much about rising above the noise, fighting fiction with fact, battling stubborn misperceptions, and countering fear-mongering emotional arguments with…logic?

If only it were that easy…

Case in point: our opponent in last weekend’s campfire debates was a middle-aged woman, raised in a small rural town, no post-secondary education. What she knows about science, she’s learned from decades-old high school lessons and the internet. She’s health-conscious and environmentally-conscious and I’ll give her the benefit of the doubt that her motivation to be informed about issues she cares about is sincere. But this is a person who still believes, despite her “research”, that a woman shouldn’t run because her uterus might fall out, that commercial chickens are no longer raised with legs, and that wheat gluten is an evil product of genetic engineering.

How do you even begin to argue with a person who reacts to information without having even a basic understanding of it? How do you dispel misinformation about science with a person whose sole connection to the subject is grounded in emotion rather than logic? How can you compete calmly against sensationalist drivel?

I can tell all the stories I want, I can counter with facts based on peer-reviewed literature, I can even empathize with the difficulty of distinguishing the good information on the internet from the bad. But who am I but a scientist? I’m one of the people who once made a living advancing the very technology she fears. Why would my knowledge of the subject come as any comfort to her? What have I done to earn her trust?

Granted, it also doesn’t help that stereotypes persist about scientists being cold and unfeeling and distant,  or that we have a government that wants its scientists to cater to commercial interests, undermining our ability to represent ourselves as objective, trustworthy sources of information. But what have we done as researchers, to help ourselves?

I think we have done science a disservice in separating the process of doing science from the process of communicating it to the public. As researchers, our professional obligation is to publish our work in peer-reviewed academic journals, to share it with other researchers. We write in a technical language and publish in a medium that largely excludes the public. We can partially address the issue of access by insisting that publicly-funded research be made available free to the public, and I’m a strong advocate for such open access initiatives. But access alone isn’t enough. It doesn’t do anything to address the issue of comprehension.

To support access without comprehension only opens the door to the further spread of misinformation — perpetuated by well-intentioned (and sometimes not so well-intentioned) people who understand just enough of the scientific detail to get it wrong. Traditionally, we’ve relied on professional journalists to get the story right — sometimes with mixed results. Now we have a whole slew of advocacy groups and “citizen journalists” who flood the internet with their own interpretation of the science. The loss of journalistic gatekeepers isn’t necessarily a bad thing as far as public engagement is concerned, but as researchers, it’s clear that we can’t continue to rely on others to get the story right. We can’t shut ourselves out of the public conversation and then expect our voices to be respected. Now more than ever, researchers need to be proactive in engaging directly with the public.

We’re also working in a political climate that is becoming increasingly hostile to basic research. Researchers are under significant pressure from the government and funding bodies to deliver short-term economic outcomes — to focus on applied, industry-friendly research with commercial applications. So far, the pushback from researchers seems to focus on the threat to academic freedom, but does anyone outside of academia really understand what that means? Will Joe Lunchbucket have any sympathy for a bunch of tenured Ivory Tower white coats complaining that they can no longer do whatever they want?  The threat to academic freedom is very real and very serious, but it’s not an argument that’s going to resonate with people who don’t really understand what researchers do. And as long as we’re relying on other messengers to explain the value of research to the public, we’re hardly in a position to complain that the public doesn’t comprehend the severity of the situation.

When I first got involved in science writing and outreach as an undergraduate student, one of my professors responded with an air of disgust: “What self-respecting scientist does that?”

At the time, it was enough to temporarily shake my conviction. But a decade later, I wonder what self-respecting scientist can afford not to?

The frontier is in your living room: playing games for science

Now that I’m out of the lab, people sometimes ask me if I ever miss doing science, and I’m never quite sure how to answer. It all depends on how you define “doing science.” I think the question they are really asking is whether I miss the lab work, which, for me, is a different question. “Lab work” and “doing science” stopped being synonymous at some point after I fell down the administrative rabbit hole of lab management. In that sense, I missed “doing science” long before I left the lab.

But what does it mean to “do science”, anyway?

This is hardly a semantic question. Modern science is often perceived as something carried out by experts. When we say something is “scientifically proven”, it implies (rightly or wrongly) that it has passed some rigorous and authoritative standard, and the gatekeepers of that standard are, naturally, the highly-trained professionals we call scientists. But in turning science as a whole over to the expert, we introduce a barrier to broader public understanding and appreciation for science as a means of learning about the world.  In other words, if science is something done by experts, and you’re not an expert, then science is off-limits to you.

It’s an ironic twist considering that science itself is a relatively young profession. The earliest pioneers in science were often self-taught amateurs and tinkerers — people who had little formal training but were passionately curious. Their work laid the foundation for modern science, and yet, if they were working today in their basements and garages, we might question the validity of their contributions to science. Today they would be viewed as quacks, operating on the fringes of science — not “real” scientists.

Obviously, the nature of some fields of research is much different today than it was 200 years ago. There are all kinds of sane, practical reasons not to be doing molecular biology in your basement. But I think we have to be cautious about confusing the practice of science with the spirit of it. The spirit of scientific inquiry demands that there be a place for the amateurs and the tinkerers — those who relish what Einstein once called “the sheer joy of finding out.”

Recently, I was reminded of a paper published a couple of years ago in Biology Letters, written by a group of 10-year-olds, about an experiment their class had conducted with bees. I remember being charmed by the elegant simplicity of it all, right down to their hand-drawn crayon figures – these children had embarked on a project from which they came to understand science as a simple process of asking questions and finding the answers. In a recent TED talk, their advisor recounts the challenges of trying to get the students’ work published — the “experts” in this case did not believe that children could make a valuable contribution to science. And yet, they’d answered a question that none of the experts had ever answered before. They discovered something new.

To me, that’s what the spirit of science is all about, and that’s something I believe should be accessible to everyone. People need to know how science works. Science simply can’t thrive as a mysterious black box that only experts can decipher.

One of the wonderful consequences of the growing popularity of “crowdsourcing” is that it can also be applied to scientific problems. Thanks to websites like Scistarter, Zooniverse, and Games for Change, researchers can now directly engage the public in collecting or analyzing data on a large scale. Through so-called “citizen science” projects,  you can contribute to research just by observing what’s in your own neighbourhood, or by playing a computer game. The very definition of what it means to “do science” is changing, and I think it’s only for the better.

It also means, incidentally, that in some ways I get to “do” more science now than I did in my final year in the lab. I mean, if I am going to spend a Saturday playing games instead of writing papers, why not get sucked into a real-life puzzle like Fold.it, EteRNA, or Phylo?

Projects like this also open up new interdisciplinary possibilities. Have graphic design skills? You can make infographics for NASA. Or if you’re into music, you might try your hand at describing digitized collections of piano scores. Simply by moving these projects online and tapping into the public’s interests, there’s a whole new opportunity to break down disciplinary boundaries and stereotypes about what constitutes “real science.”

Perhaps most importantly, citizen science projects have the incredible potential to start a public discussion about science, something that politicians, journalists, and scientists often aim to do, only to fall seriously short of engaging their audience.

It will be interesting to see how these early forays into public engagement in science play out (no pun intended). I hope it’s  the beginning of a more enlightened era for scientific communication, one that engages the public actively rather than expecting them to be passive consumers of knowledge handed down by experts.

Publish or perish — what’s behind door #3?

There has been a lot of discussion over the past year about the state of academic publishing — from the exploitive business models of the publishing heavyweights, to the crisis this creates for libraries, and the emerging case for open access. Longtime readers of this blog will know that I am a strong proponent of open access initiatives, and that I believe this kind of public debate on the issue is long overdue.

But I also wonder if it misses the point.

You see, we’re still not addressing the issue of what makes researchers beholden to publishers in the first place. The publishing models may be changing, but the culture that drives the explosive growth of academic publishing has not. So while open access promises to bring down one barrier to scientific communication, we continue to ignore what is potentially a much more formidable and serious challenge — the sheer volume of papers being published.

Although it’s difficult to determine accurately, it has been estimated that there are more than 50 million scholarly papers in existence right now, with more than 1.5 million new articles being added each year (and steadily climbing). That’s about 3 papers being published every minute.  And the sad reality is that the vast majority of these papers will never be cited.
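For anyone who wants to check that back-of-the-envelope number, it’s just the annual total spread across the minutes in a year; a quick sketch (in Python, assuming the 1.5-million estimate above):

```python
papers_per_year = 1_500_000        # the estimate cited above
minutes_per_year = 365 * 24 * 60   # 525,600 minutes in a year
print(papers_per_year / minutes_per_year)  # ~2.85, i.e. roughly 3 papers every minute
```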

This raises an obvious question: if the purpose of scholarly publishing is to communicate our results, but the majority of papers go unread or uncited, are we actually communicating? More to the point, why do we continue to “publish-or-perish” if most of our publishing efforts are for naught?

The truth is, it’s not primarily about communication anymore; it’s about satisfying institutional demands that treat publications as a proxy for research excellence. That idea took root over a century ago, when William Rainey Harper, then president of the University of Chicago, first proposed that advancement in academic rank and salary should be tied “more largely” to research productivity. The policy was quickly adopted by other research-intensive universities. By the 1930s-40s, the phrase “publish-or-perish” had been coined to describe the pressure on academics to publish, and today, it’s practically synonymous with the academic lifestyle. Today, scholarly publications are not only used to measure individual performance, but also to help determine institutional rankings and to measure national performance on innovation.

Today, the pressure to publish comes not from a demand by the consumers of knowledge (i.e. other scholars), but from a demand by the merchants of knowledge — those whose profits and prestige are linked directly to growth in research publications.

What about the rest of us?

Short-term career advancement aside, the publish-or-perish culture doesn’t really benefit individual researchers, who invest much of their time and energy either in producing papers or in reviewing them — a huge waste if most of those articles fail to reach their audience.

It’s not particularly beneficial to students, since researchers are often forced to prioritize research output over teaching. For grad students, publish-or-perish means they spend much of their time learning and conforming to an academic career structure that will not serve the 70-80% of them who will not have access to stable academic careers no matter how much they publish. That’s another post entirely.

It doesn’t benefit scientific integrity, since the unrestrained growth in publications places more and more stress on the system of peer-review. The pressure to publish is so intense that researchers will resubmit their rejected manuscripts multiple times, working their way down from the top-tier journals to the electronic slush piles with lower standards of peer-review. Each published paper, then, often represents multiple rounds of peer-review, each a burden on the system, and even then, it’s still possible for researchers to pay a bottom-feeder journal just to get the paper out, however bad it is. Bad research in turn places its own burden on the system, as precious resources are invested in a futile effort to validate it.

In recent years we’ve seen everything from rampant plagiarism and duplication of publications, to data manipulation or fabrication, to elaborate “citation cartels” used to artificially inflate the prestige of certain journals. Just last week, a researcher was caught using his own email address under a fake name so that he could “peer-review” his own papers!

On one hand, we might be comforted by the fact that the peer-review system is catching these cases of obvious misconduct, but on the other hand, shouldn’t we be more alarmed about the institutional pressures that are leading to the behaviour in the first place?

And what of the higher-profile retractions? What does it say about the effectiveness of our current system of peer-review when the highest-profile journals (Nature, Science, Cell, etc.) also tend to have the highest rates of retraction? True, these journals tend to be on the cutting edge, and we know that edge can cut both ways. These journals also have large audiences: more eyes means more potential for catching a mistake. But these are also the journals that tend to get the most press, so high-profile retractions have the potential to erode public trust in science. And when researchers chase publication in these journals as a career-making move, rather than because the quality of their data warrants it, it can only harm the collective image of the research community.

Public trust isn’t the only thing at stake. Scholarly communication is, by its highly technical nature, exclusive to a particular audience. Even with the expansion of open access publishing, which would make papers accessible to the public, there is still a significant barrier for the public in understanding that original research. And yet, never before has the public understanding and acceptance of science been so important. The big issues of the day — energy, sustainability, climate — require a certain acceptance of scientific information. But outreach activities often don’t count for much in faculty evaluations — so researchers tend to remain focused on writing papers for other scholars. In effect, the publish-or-perish system places more value on scholarly papers that go unread than on the myriad ways that researchers could otherwise be connecting with a non-academic audience to inform the public debate.

This is perhaps the most tragic waste of our scientific knowledge, and the point at which I begin to seriously question the ethical footing of the publish-or-perish culture. In an era of economic instability and fiscal restraint, when public access to scientific knowledge is key to advancing the most pressing social, economic, and political issues of the day, is it ethical for publicly-funded institutions to continue demanding that researchers prioritize scholarly publications (largely for the sake of prestige), at the expense of actually reaching the audience that can most benefit from the research?

What’s the alternative and how do we make it happen?
