Tuesday, September 6, 2016

Examples of pre-interview questions

Last year, several postdocs at my institute (including me) were applying for faculty positions at North American institutions. Frequently, before on-campus interviews, a 'long' list of people is asked to take part in phone/Skype interviews before a short list for campus visits is decided on. Since this step is now so common, the postdocs put together an informal list of all the questions people had been asked during this initial interview*.

I found the list helpful. The usual caveats apply - different types of institutes and search committees will have different priorities and focus on different types of questions (e.g. teaching vs. research). Thinking about the answers to these questions ahead of time can help you develop a vision of how you approach teaching and research, and communicate that vision clearly.

(*Thanks to Iris Levin for originally curating this list)

Big picture questions:
Why X institution?
What do the liberal arts mean to you? Why are you interested in a career at a liberal arts college?
Tell us about contributing to XX college’s emphasis on liberal arts in practice, interdisciplinary and/or international aspects of education
How will our Biology Dept enhance your teaching and research?

Teaching focused questions:
General approach
What courses are you best suited to teach, and how would you teach them?
What does a typical day in your class look like?
What do you feel you would add to graduate and undergraduate training in the department?
What is the biggest challenge in teaching?
You will teach X course every semester; how would you keep it exciting?
How would you teach a lab differently for introductory, intermediate or advanced students?

Specifics about courses
How would you teach X class?
What sort of interdisciplinary and/or first-year seminar course would you teach?
What sort of non-majors course would you teach? How would you teach it differently for non-majors vs. majors?
What new course(s) would you develop and how?
Tell us about your approach to teaching an XXX course for students who have had one introductory biology course
Tell us about incorporating quantitative and analytical reasoning into an XXX course
Tell us about using open-ended, inquiry-based group work in an introductory biology course

Research focused questions:
Approach and interests
Briefly summarize your most significant research contribution.
Tell us about your research program
You work on xyz – how would you conduct your research here?
How do you see your research complementing that of others in the department, and what do you view as your unique strengths?
Where do you see yourself in 5 years? Where do you see yourself in 10 years?
Who would you collaborate with here? 
How would you collaborate with faculty and bridge different fields?
What sort of projects would you do with graduate students? 
How would undergrads be involved with your research and what would the outcomes be?
Tell us about your approach to mentoring undergraduates in research

Funding
What sources of funding would you pursue to support your research program?
What grants would you apply to? 

Integration with teaching
What contributions would your research make to these courses?
How would you involve students in your research inside or outside the classroom?

Misc (what type of colleague would you be?):
How would you contribute to the larger campus community?
How do you address diversity in your teaching and research?
What do you feel you can contribute to efforts to cultivate a wide diversity of people and perspectives at XX College?
Describe what you know about X college, how you would fit in, and any concerns.
How do you deal with conflict?
What has been the biggest obstacle in your professional development?


If you have more to add, please comment!

Friday, September 2, 2016

Science in many languages.

The lingua franca of biology is English, although through history it has variously been Latin, German, or French. Communication is fundamental to the modern scientific landscape, and English dominates the international ecological community. To be indexed by Scopus, a journal must be written at least in part in English. All major ecological journals are published in English, and clear, understandable writing is unquestionably an advantage in getting work published. Large international conferences are usually conducted in English. Sometimes there is no translation for a key word and the English version is used directly, regardless of the language of the conversation. Even the base commands in coding languages like R are in English. There is an undeniable but sometimes unmentioned advantage to being a native English speaker in science.

A common language is inevitable and necessary to communicate in a time of global connectivity, but it is also necessary to acknowledge that many scientists speak English as a second (or third, or fourth) language, and barriers can arise as a result. The activation energy needed to move between languages is high, and it can take longer to read and write. But sometimes the costs are more subtle: for example, students may be less likely to give oral talks at conferences because of concerns about being understood. Even if they are relatively proficient, the question period after talks is difficult, since questions are often spoken quickly, are not clearly phrased, and come in a variety of accents. That's a difficult situation to address directly, but there are ways to facilitate communication across a variety of English proficiencies. And many of these are simply good practices for communication in any language.

First: slow down. Some of us are guiltier than others, but if you speak too fast, you lose listeners. This is another reason to consciously try to breathe and relax during presentations and lectures. Some people speak so quickly that even native English speakers have trouble following along. Now imagine listening to that talk while needing a little extra processing time.

When you give lectures and presentations, make sure that the slides and the verbal component each convey the overall message. I've followed talks in French and Spanish before, because the slides were well composed (and in English). If someone misses something you say, it should be possible to follow the important points from the slides alone, and vice versa. This is good advice for any talk. Don't be boring, but also be aware of when overuse of idioms or culture-specific references prevents understanding.

Sometimes fluent English speakers unknowingly dominate conversations because they speak faster and may be more confident in expressing themselves. In group activities like workshops and meetings, allow breaks in the conversation so that non-native speakers (or just less dominating personalities and quieter people) have a chance to express themselves as well.

An ear for accents comes from practice listening. Practice speaking improves accent. It’s a mutually beneficial relationship.

Also, remember that culture and language interact. English is interesting in that we have no pronouns differentiating between formal and informal relationships (we have ‘you’, not ‘tu’/‘vous’, etc.). This can make English speakers seem informal and friendly, or disrespectful, depending on the context. Keep this context in mind when interpreting interactions.

Thursday, September 1, 2016

#EcoSummit2016 The internationalism of ecology – variety is the spice of science

To look around at the faces, or to hear the languages, at any science conference is to see the world in a single place at a single time. Science is one of the truly global enterprises, involving people from all regions. Of course this is not to say that science isn't disproportionately dominated by some countries and regions, but geography does not have a monopoly on ideas. Over the past seven years, 15 graduate students and postdoctoral researchers from 9 different countries have come through my lab. The question is: does this internationalism influence science? Or does science happen in the same way regardless of who is doing it?

Caroline and I have had a couple of conversations on this topic, and we have both noticed that there seem to be cultural differences in various aspects of how science is done. Of course there is substantial variation among people regardless of their geographical origin, but there are important and maybe subtle differences. From how many hours a day people work, to how professors interact with students and junior researchers, to how quickly new ideas and tools are adopted, there are noticeable differences among geographical regions.

This geographical variation results in different priorities and emphases, and different rates of scientific production, but there is no ideal way. As students move around, international collaborations grow, and people meet and talk at conferences, the best parts of these cultural differences are transferred. I can say that from my year in China, how I view certain elements of my science has changed, and I suspect my Chinese students would say the same about their interactions with me.


The Ecosummit conference we are at is a very international meeting, with 88 countries represented. This makes for fertile ground for sharing not only scientific ideas and methods, but also notions of what it means to be a successful scientist. This variety is the spice of good science.

Wednesday, August 31, 2016

#EcoSummit2016: Conferences – the piñata of ideas.


One of the greatest benefits of attending conferences is that they represent learning opportunities. I don't necessarily mean learning about new techniques or analyses, though you can undoubtedly find out about these at conferences; rather, conferences are opportunities to hear about new concepts, ideas and paradigms. In some ways conferences are like a piñata of ideas – they are chock-full of new ideas, but you never know which you'll pick up.

Ecosummit is not the typical conference I go to; it is much more diverse in the topics of the talks and the disciplines of the attendees. This diversity – from policy makers, to social scientists, to ecologists – means that I am exposed to a plethora of new concepts. Here are a few nuggets that got me thinking:

  • Knowledge-values-rules decision-making context. Policy decisions are made at the interface of scientific knowledge, human values (what is important to people – e.g., jobs), and rules (e.g., economic laws). This seems like a nice context in which to think about policy, though it is not clear how we prioritize new knowledge or alter values.

  • Adaptation services. I work on ecosystem services (e.g., carbon storage, pollination support, water filtration, etc.), but I learned that ecosystems also provide adaptation services. These are aspects of ecosystems that will help human societies adapt to climate change (e.g., new products).

  • Trees and air pollution. The naive assumption most of us make about trees in urban areas is that they improve local air quality. However, I saw a couple of talks suggesting this may not necessarily be the case. Some species in North America (red oak, sweet gum, etc.) release volatile organic compounds, and spruce plantations may not take up nitrogen oxides, and in fact might release them. Thus we need to be careful about how we sell the benefits of urban trees.

  • Transformative. This is a term I have certainly heard and used before, but in listening to a wide variety of talks, I realize it is used in different contexts to mean different things. I think it best to avoid this term.

  • A-disciplinary. I heard a guy say in a talk that he was a-disciplinary and so was not bound to the dogmas and paradigms of any discipline (I already have a hard time wrapping my head around interdisciplinary, multidisciplinary, transdisciplinary, etc.). He then presented a new paradigm and a number of prescribed, well-formulated tools for moving from idea, to communication, to action. I think the irony was lost on him.

Tuesday, August 30, 2016

#EcoSummit2016 Day 2: History and ecology go hand in hand.

The role for history in ecology can be tough to generalize. For a neo-ecologist, 5 years may be a long time scale; for a restoration ecologist, 50 years might be; for a paleoecologist, millions of years could matter. But multiple talks today argued that without considering history we are lost. Whether it is using climate history to understand how the effects of past climate change are still being felt today (from Jens-Christian Svenning), or the oft-mentioned debate about whether local biodiversity is truly in decline, the past is necessary to understand our changing planet. [The various papers on this debate came up in at least 3 of the talks I saw.] Regardless of which way the diversity relationship goes, Frederic De Laender pointed out that loss of functioning due to increased environmental stress meant that local communities were changing anyway.

Related to this is the question of how ecologists should incorporate and understand the role of human history in their studies. Are humans simply a disturbance? A covariate in a statistical analysis? Or an intrinsic component of ecology across the globe? What is a baseline for 'naturalness' in the absence of humans anyway? Further, human records can be potentially misleading as ecological research tools. For example, P. Szabo showed that the popular conception in the Czech Republic (based on archival data) is that beech forests are the true 'natural' forest, and that coniferous forests were simply the result of forestry plantations. Policy reflects this and promotes the preservation of broad-leaved forests. However, analysis of paleo-pollen data showed that spruce and other conifers appear to have dominated for thousands of years in some regions. What then is the true 'natural' forest? From Emily Southgate's fascinating talk showing how the development of an oil refinery in the 1800s and its impacts could be investigated using historical land surveys, to maps showing the still-unfilled ranges of European tree species, the legacy of the past is clear in present-day data.

Monday, August 29, 2016

#EcoSummit2016 Day 1 - Reconciling the warp and weft of ecology

For the first time since 2008 I didn't make it to ESA; instead I get to attend my first EcoSummit, here in Montpellier. Participants represent a more European contingent than at a typical ESA, which is a great opportunity to see a slightly different group of people and topics.


Two plenary talks were particularly memorable for me. First, Sandra Diaz gave a really elegant talk that spanned from patterns of functional diversity to the philosophy of ecology. A woven carpet provided the central analogy. A carpet includes the warp – the underlying structure of the carpet – and the weft – the supplementary threads that produce the designs. Much like species, a great diversity of colors and patterns arises from the weft, but the warp provides the underlying structure. The search for a small number of general functional relationships is one way ecologists can look for the structural fabric of life. Much like Phil Grime, an earlier speaker, Diaz has attempted to identify generalities in ecology. It's worth reading the paper she discussed for much of her talk, which attempts to describe a global spectrum of plant function (Diaz et al. 2016). Diaz noted, however, that your focus should be determined by your questions. And you need both details and generalities if you want to provide predictions at a global scale but with local resolution.

The other plenary of note was from Stephen Hubbell (it actually preceded Diaz's talk), and it provided a contrasting approach. Hubbell discussed a number of detailed analyses to derive a general conclusion about the processes maintaining tropical tree diversity. Data from Barro Colorado Island provide information about changes in growth rates, abundances, presence/absence, and distances between species, and show seemingly large shifts in abundance and composition through time. Hubbell (in a fairly provocative mood) suggests that this shows that 'community ecology is a failure'. I would argue against that statement; what Hubbell really seemed to be saying is that expectations of equilibrium and equilibrium models (Lotka-Volterra, etc.) are not useful. Instead, factors such as weak stabilizing mechanisms and demographic stochasticity may be enough to explain high-diversity regions.

Tuesday, July 26, 2016

Summer hiatus, back for EcoSummit



As you may have noticed, the EEB & Flow is taking a much needed summer break.

We'll be back for the EcoSummit Congress from Montpellier, France starting Aug. 29th. :-)

Monday, July 18, 2016

The Forest, the Trees, and the Phylo-diversity Jungle

with Florent Mazel

As has been a recurrent topic on the blog recently (here, and here, and elsewhere), it is difficult to know when it is appropriate and worthwhile to write responses to published papers. Further, a number of journals don't provide clear opportunities for responses even when they are warranted. And maybe, even when published, most responses won't make a difference anyway.

Marc Cadotte, our coauthors, and I experienced this firsthand when we felt a paper of ours had been misconstrued. We wanted to provide a useful, positive response, but whether the time investment was worthwhile was unclear. The journal then informed us that they didn't publish responses. We tried instead to write a 'News and Views' piece for the journal, which it ultimately declined to publish. And really, a response piece is at cross-purposes with the usual role of N&V (positive editorials). In the end, rather than spend more time on this, we made the manuscript available as a preprint, found here.

The initial response was to a publication in Ecography by Miller et al. (2016) [citations below]. Their paper does a nice job of asking how well 32 phylo-diversity metrics and nine null models discriminate between community assembly mechanisms. The authors first simulated communities under three main assembly rules: competitive exclusion, habitat filtering, and neutral assembly. They then tested which combinations of metrics and null models yielded the best statistical performance. Surprisingly, only a fraction of the phylo-diversity metrics and null models exhibited high statistical power coupled with a low Type I error rate. Miller et al. conclude that, for this reason, some metrics and null models proposed in the literature should be avoided when asking whether filtering and competition play an important role in structuring communities. This is a useful extension for the eco-phylogenetic literature. However, the authors also argue that their results show that a framework for phylo-diversity metrics introduced in a paper by my coauthors and me (Tucker et al. 2016) was subjective and should not be used.

What was disappointing is that there is a general issue (how can we best understand phylogenetic metrics for ecology?) that could benefit from further discussion in the literature.

Metrics can be analysed and understood in two ways: (1) by grouping them based on their underlying properties (e.g. by comparing mathematical formulations); and (2) by assessing context-dependent behaviour (e.g. by comparing metric performance in relation to particular questions). The first approach requires theoretical and cross-disciplinary studies to summarize the main dimensions along which phylo-diversity metrics vary, while the second provides a field-specific perspective to quantify the ability of a particular metric to test a particular hypothesis. These two approaches have different aims, and their results are not necessarily expected to be identical.

One reason there are so many metrics is that they have been pooled across community ecology, macroecology and conservation biology. The questions typically asked by conservationists and macroecologists, for example, differ from those of community ecologists, and different metrics frequently perform better or worse for different types of problems. The second approach to metrics provides a solution to this problem by explicitly simulating the processes of interest for a given research question (e.g. vicariance or diversification processes in macroecological research) and selecting the most appropriate metric for the task. The R package presented by Miller et al., as well as others (e.g. Pearse et al. 2015), helps facilitate this approach, and it can be very useful to a field when this is done thoroughly.
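To make the machinery concrete, here is a purely illustrative sketch using the picante package (this is not Miller et al.'s pipeline; the phylogeny, assemblages and settings are all made up). It builds a random tree and some random communities, then tests one candidate metric (mean pairwise distance, MPD) against one candidate null model. A real power analysis in the spirit of Miller et al. would instead generate communities under specific assembly processes and repeat this across many metric-null combinations.

```r
# Minimal, hypothetical example: one metric (MPD) against one null model.
library(ape)      # rcoal, cophenetic
library(picante)  # ses.mpd

set.seed(7)
tree <- rcoal(30)                                    # a random 30-species phylogeny
comm <- matrix(rbinom(10 * 30, 1, 0.4), nrow = 10,   # 10 made-up presence/absence assemblages
               dimnames = list(paste0("site", 1:10), tree$tip.label))

# Standardized effect size of MPD under a taxa-shuffle null model;
# negative mpd.obs.z is often read as clustering, positive as overdispersion.
ses <- ses.mpd(comm, cophenetic(tree), null.model = "taxa.labels", runs = 99)
head(ses[, c("mpd.obs", "mpd.obs.z", "mpd.obs.p")])
```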

But this approach has some limitations as well: it is inefficient and sensitive to choices made in the simulation process, and it doesn't provide a framework or context in which to understand results. The general approach fills this need. The Tucker et al. paper took this approach and classified 70 phylo-diversity metrics along three broad mathematical dimensions: richness, divergence and regularity, i.e. the sum, mean and variance of phylogenetic distances among the species of an assemblage, respectively. This framework is analogous to a system for classifying functional diversity metrics (e.g. Villéger et al. 2008), allowing theoretical linkages between phylogenetic and functional approaches in ecology. We also carried out extensive simulations to corroborate the metric behaviour classification system across different assembly scenarios.
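For readers unfamiliar with the three dimensions, here is a toy illustration of what they mean for a single assemblage (my own minimal sketch, not the Tucker et al. code; the tree and assemblage are made up):

```r
# Richness-, divergence- and regularity-type quantities for one assemblage,
# computed as the sum, mean and variance of pairwise phylogenetic distances.
library(ape)

set.seed(1)
tree <- rcoal(20)                   # a random 20-species phylogeny
dmat <- cophenetic(tree)            # pairwise phylogenetic distances
comm <- sample(tree$tip.label, 8)   # one hypothetical 8-species assemblage

d     <- dmat[comm, comm]
pairs <- d[lower.tri(d)]            # the unique pairwise distances within the assemblage

c(richness   = sum(pairs),          # richness-type: total phylogenetic difference
  divergence = mean(pairs),         # divergence-type: e.g. behaves like MPD
  regularity = var(pairs))          # regularity-type: spread of the distances
```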

The minor point to me is that, although Miller et al. concluded this tripartite framework performed poorly, their results appear to provide independent support for the tripartite classification system. (And this is despite some methodological differences, including using a clustering algorithm instead of an ordination approach for metric grouping). The vast majority of metrics used by Miller et al. on their simulated communities group according to this richness-divergence-regularity classification system (see our Fig 2 vs. Miller et al.'s Fig 1B). And metrics like HAED and EED, which stem from a mathematical combination of richness and regularity dimensions, are expected to sometimes cluster with richness (as observed by Miller et al. but noted as evidence against our framework), and sometimes with regularity. There is specific discussion on this type of behaviour in Tucker et al., 2016.
Tucker et al. Fig. 2. "Principal components analysis for Spearman's correlations between the α-diversity metrics shown in Table 1. Results represent measures taken from 800 simulated landscapes, based on 100 simulated phylogenetic trees and eight landscape types defined in Table 2 (see Appendix S2 for detailed methods). (A) All metrics excluding abundance-weighted metrics and those classified as parametric indices. (B) As in A, but with abundance-weighted metrics included (underlined). (C) As in B, but with parametric indices (black), and indices that incorporate multiple dimensions (underlined) included (e.g. all α-diversity metrics). X and Y axes are scaled to reflect explained variance (PC1 = 41.8%; PC2 = 20.5% for the PCA performed with all metrics, shown in (C))."

Miller et al. Fig 1B. "Dendrogram of intercorrelations among the phylogenetic community structure metrics, including species richness itself (labeled richness). Group 1 metrics focus on mean relatedness; Group 2 on nearest-relative measures of community relatedness; and Group 3 on total community diversity and are particularly closely correlated with species richness. Four metrics, PAE, EED, IAC, and EAED show variable behavior. They do not consistently cluster together or with each other, and we refer to their placement as unresolved. The branches of the dendrogram are colored according to the metric classifications proposed by Tucker et al. (2016): green are "regularity" metrics, pink are "richness" metrics, and yellow are "divergence" metrics."
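For anyone curious what this ordination-versus-clustering comparison involves, here is a rough, self-contained sketch (made-up data and only four common metrics, nothing like the full 32- or 70-metric analyses): compute several metrics across simulated assemblages, then group the metrics either by ordination of their correlations or by hierarchical clustering.

```r
# Toy comparison of the two grouping approaches on a handful of metrics.
library(ape)
library(picante)

set.seed(3)
tree <- rcoal(40)
comm <- matrix(rbinom(50 * 40, 1, 0.3), nrow = 50,
               dimnames = list(paste0("site", 1:50), tree$tip.label))
dmat <- cophenetic(tree)

metrics <- data.frame(
  SR   = rowSums(comm),       # species richness
  PD   = pd(comm, tree)$PD,   # Faith's PD (a richness-type metric)
  MPD  = mpd(comm, dmat),     # mean pairwise distance (divergence-type)
  MNTD = mntd(comm, dmat))    # mean nearest-taxon distance

cors <- cor(metrics, method = "spearman")
biplot(prcomp(metrics, scale. = TRUE))   # ordination view, in the spirit of Tucker et al.'s Fig. 2
plot(hclust(as.dist(1 - cors)))          # clustering view, in the spirit of Miller et al.'s Fig. 1B
```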
The major point is that dismissing general approaches can lead to more confusion about phylogenetic metrics, leading users to create even more metrics (please don't!), to conclude that particular metrics should be discarded, or to adopt hard-to-interpret metrics because some study found they were highly correlated with a response. Context is necessary.

I think both approaches have utility, and importantly, both approaches benefit each other. On one hand, detailed analyses of metric performance offer a valuable test of the broader classification system, using alternative simulations and codes. On the other hand, broad syntheses offer a conceptual framework within which results of more focussed analyses may be interpreted.

For example, comparing Miller et al.'s results with the tripartite framework provides some additional interesting insight. They found that metrics closely aligned with only a single dimension are not the best indicators of community assembly. In their results, the metrics with the best statistical performance are sometimes Rao's quadratic entropy and IntraMPD. Because of the general framework, we know that these are classified as 'hybrid' metrics that include both the richness and divergence dimensions of phylogenetic diversity. Taking it one step further, because the general framework connects with functional ecology metrics, we can compare their findings about Rao's QE/IntraMPD to results using the corresponding dimensions in the functional trait literature. Interestingly, functional ecologists have found that community assembly processes can alter multiple dimensions of diversity (e.g. both richness and divergence) (Botta-Dukát and Czúcz 2016), which may provide insight into why a hybrid metric is useful for understanding community assembly.
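As a small illustration of why a metric like Rao's quadratic entropy straddles dimensions, here is a minimal sketch of its usual formula on made-up data (packages such as picante offer packaged versions, e.g. raoD, which may scale the distances differently):

```r
# Rao's Q = sum over species pairs of d_ij * p_i * p_j: its value responds both to
# how many (and how evenly abundant) species are present and to how divergent they are.
library(ape)

set.seed(5)
tree  <- rcoal(20)
dmat  <- cophenetic(tree)
abund <- setNames(c(5, 3, 2, 2, 1), sample(tree$tip.label, 5))  # hypothetical abundances

p <- abund / sum(abund)         # relative abundances
d <- dmat[names(p), names(p)]   # pairwise distances among the species present
Q <- sum(outer(p, p) * d)       # Rao's quadratic entropy for this one assemblage
Q
```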

In summary, there is both a forest and individual trees, and both of these are valid approaches. I hope that we can continue to complement broad-scale syntheses with question- and hypothesis-specific studies, and that as a result the field can be clarified.

References:
Botta-Dukát, Z. and Czúcz, B. 2016. Testing the ability of functional diversity indices to detect trait convergence and divergence using individual-based simulation. - Methods Ecol. Evol. 7: 114–126. 

Bryant, J. A. et al. 2008. Microbes on mountainsides: contrasting elevational patterns of bacterial and plant diversity. - Proc. Natl. Acad. Sci. U. S. A. 105: 11505–11. 

Graham, C. H. and Fine, P. V. A. 2008. Phylogenetic beta diversity: linking ecological and evolutionary processes across space in time. - Ecol. Lett. 11: 1265–1277. 

Hardy, O. 2008. Testing the spatial phylogenetic structure of local communities: statistical performances of different null models and test statistics on a locally neutral community. - J. Ecol. 96: 914–926. 

Isaac, N. J. B. et al. 2007. Mammals on the EDGE: conservation priorities based on threat and phylogeny. - PLoS One 2: e296. 

Kraft, N. J. B. et al. 2007. Trait evolution, community assembly, and the phylogenetic structure of ecological communities. - Am. Nat. 170: 271–283. 

Pavoine, S. and Bonsall, M. B. 2011. Measuring biodiversity to explain community assembly: a unified approach. - Biol. Rev. 86: 792–812. 

Pearse, W. D. et al. 2014. Metrics and Models of Community Phylogenetics. - In: Modern Phylogenetic Comparative Methods and Their Application in Evolutionary Biology. Springer Berlin Heidelberg, pp. 451–464. 

Pearse, W. D. et al. 2015. pez : phylogenetics for the environmental sciences. - Bioinformatics 31: 2888–2890. 

Tucker, C. M. et al. 2016. A guide to phylogenetic metrics for conservation, community ecology and macroecology. - Biol. Rev. Camb. Philos. Soc. doi: 10.1111/brv.12252.

Vellend, M. et al. 2010. Measuring phylogenetic biodiversity. - In: McGill, A. E. M. B. J. (ed), Biological diversity: frontiers in measurement and assessment. Oxford University Press, pp. 193–206. 

Villéger, S. et al. 2008. New multidimensional functional diversity indices for a multifaceted framework in functional ecology. - Ecology 89: 2290–2301. 

Webb, C. O. et al. 2002. Phylogenies and Community Ecology. - Annu. Rev. Ecol. Evol. Syst. 33: 475–505. 

Winter, M. et al. 2013. Phylogenetic diversity and nature conservation: where are we? - Trends Ecol. Evol. 28: 199–204.

Thursday, June 30, 2016

The pessimistic and optimistic view of BEF experiments?

The question of the value of biodiversity-ecosystem function (BEF) experiments (their results, their relevance) has become a heated one in the literature. An extended argument over the last few years has debated the assumption that local biodiversity is in fact in decline (e.g. Vellend et al. 2013; Dornelas et al. 2014; Gonzalez et al. 2016). If biodiversity isn't disappearing from local communities, the logical conclusion would be that experiments focussed on the local impacts of biodiversity loss are less relevant.

Two papers in the Journal of Vegetation Science (Wardle 2016 and Eisenhauer et al. 2016) continue this discussion regarding the value of BEF experiments for understanding biodiversity loss in natural ecosystems. From reading both papers, it seems as though, broadly speaking, the authors agree on several key points: that results from biodiversity-ecosystem functioning experiments don't always match observations about species loss and functioning in nature, and that nature is much more complex, context-dependent, and multidimensional than typical BEF experimental systems. (The question of whether local biodiversity is declining may be more contested between them.)

Biodiversity-ecosystem function experiments typically involve randomly assembled plant communities containing either the full complement of species or subsets containing different numbers of species. Communities containing fewer species are meant to provide information about the loss of species diversity from a system. Functions (often including, but not limited to, primary productivity or biomass) are eventually measured and analysed in relation to treatment diversity. Although some striking results have come out of these types of studies (e.g. Tilman and Downing 1996), they can vary a fair amount in their findings (Cardinale et al. 2012).
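For readers unfamiliar with the design, here is a toy sketch of its logic, entirely made up: the species pool, the monoculture yields and the complementarity term are hypothetical, and no real experiment is this simple. It assembles random plots across a richness gradient and then looks at the resulting function-richness relationship.

```r
# Toy BEF design: random assembly across a richness gradient, then regress function on richness.
set.seed(42)
pool  <- paste0("sp", 1:20)
yield <- setNames(runif(20, 2, 10), pool)     # hypothetical monoculture yields

simulate_plot <- function(richness) {
  spp  <- sample(pool, richness)              # random assembly from the species pool
  base <- mean(yield[spp])                    # expectation if species simply substitute for each other
  comp <- 0.3 * log(richness)                 # an assumed complementarity bonus
  base + comp + rnorm(1, sd = 0.5)            # plot-level function, with noise
}

design <- rep(c(1, 2, 4, 8, 16), each = 20)   # richness treatments, 20 replicate plots each
fun    <- sapply(design, simulate_plot)
summary(lm(fun ~ log(design)))                # the resulting diversity-function relationship
```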

David Wardle's argument is that BEF experiments differ a good deal from natural systems: in natural systems, BEF relationships can take different forms and explain relatively little variation, so extrapolating from existing experiments seems uninformative. In nature, changes in diversity are driven by ecological processes (invasion, extinction), and experiments involving randomly assembled communities and randomly lost species do nothing to simulate these processes. Wardle seems to feel that the popularity of typical BEF experiments has come at the cost of more realistic experimental designs. This is something of a zero-sum argument (although in some funding climates it may be true...). It is true, though, that big BEF experiments tend to be costly and take time and labour, meaning that there is an impetus to publish as much as possible from each one. And given that BEF experiments have already changed drastically in design once, in response to criticisms about their inability to disentangle complementarity from portfolio effects, they do not seem inflexible about design.

Eisenhauer et al. agree in principle that current experiments frequently lack a realistic design, but suggest that plenty of other types of studies (looking at functional diversity or phylogenetic diversity, for example, or using random loss of species) are being published as well. For them, too, there is value in having multiple similar experiments: this allows meta-analysis and the aggregation of comparisons, and will eventually help to tease apart the important mechanisms. Further, realism is difficult to obtain in the absence of a baseline for a “natural, untouched, complete system” from which to remove species.

The point that Eisenhauer et al. and Wardle appear to agree on most strongly is that real systems are complex, multi-dimensional and context-dependent. Making the leap from a BEF experiment with 20 plant species to the real world is inevitably difficult. Wardle sees this as a massive limitation; Eisenhauer et al. see it as a strength. Inconsistencies between experiments and nature are information, highlighting when context matters. By having controlled experiments in which you vary context (such as by manipulating both nutrient level and species richness), you can begin to identify mechanisms.

Perhaps the greatest problem with past BEF work is the tendency to oversimplify the interpretation of results: to conclude that 'loss of diversity is bad' with less attention to 'why', 'where', or 'when'. The best way to address this depends on your view of how science should progress.

Wardle, D. A. (2016), Do experiments exploring plant diversity–ecosystem functioning relationships inform how biodiversity loss impacts natural ecosystems?. Journal of Vegetation Science, 27: 646–653. doi: 10.1111/jvs.12399

Eisenhauer, N., Barnes, A. D., Cesarz, S., Craven, D., Ferlian, O., Gottschall, F., Hines, J., Sendek, A., Siebert, J., Thakur, M. P., Türke, M. (2016), Biodiversity–ecosystem function experiments reveal the mechanisms underlying the consequences of biodiversity change in real world ecosystems. Journal of Vegetation Science. doi: 10.1111/jvs.12435

Additional References:
Vellend, Mark, et al. "Global meta-analysis reveals no net change in local-scale plant biodiversity over time." Proceedings of the National Academy of Sciences 110.48 (2013): 19456-19459.

Dornelas, Maria, et al. "Assemblage time series reveal biodiversity change but not systematic loss." Science 344.6181 (2014): 296-299.

Gonzalez, Andrew, et al. "Estimating local biodiversity change: a critique of papers claiming no net loss of local diversity." Ecology (2016).

Tilman, David, and John A. Downing. "Biodiversity and stability in grasslands." Ecosystem Management. Springer New York, 1996. 3-7.

Cardinale, Bradley J., et al. "Biodiversity loss and its impact on humanity." Nature 486.7401 (2012): 59-67.

Tuesday, June 14, 2016

Rebuttal papers don’t work, or citation practices are flawed?

Brian McGill posted an interesting follow-up to Marc's question about whether journals should allow post-publication review in the form of responses to published papers. I don't know that I have any more clarity as to the answer to that question after reading both (excellent) posts. Being idealistic, I think that when there are clear errors, they should be corrected, and that editors should be invested in identifying and correcting problems in papers in their journals. Based on the discussions I've had with co-authors about a response paper we're working on, I'd also like to believe that rebuttals can produce useful conversations and ultimately be illuminating for a field. But pragmatically, Brian McGill pointed out that rebuttals seem to rarely make an impact (citing Banobi et al. 2011). Many times this was because citations of the flawed papers continued, and "were either rather naive or the paper was being cited in a rather generic way".

Citations are possibly the most human part of writing scientific articles. Citations form a network of connections between research and ideas, and are the written record of progress in science. But they're also one of the clearest points at which biases, laziness, personal relationships (both friendships and feuds), taxonomic biases, and subfield myopia are apparent. So why don't we focus on improving citation practices? 

Ignoring more extreme problems (coercive citations, citation fraud, how to cite supplementary materials, data and software), as the literature grows more rapidly and the pressure to publish increases, we have to acknowledge that it is increasingly difficult to know the literature thoroughly enough to cite broadly. A couple of studies found that 60-70% of citations were scored as accurate (Todd et al. 2007; Teixeira et al. 2013) (whether you see that as too low or reassuringly high depends on your personality). Key problems were the tendency to cite 'lazily' (citing reviews or synthetic pieces rather than delving into the literature within) or 'naively' (citing high-profile pieces in an offhand way without considering rebuttals and follow-ups, a key point of the Banobi et al. piece). At least one limited analysis (Drake et al. 2013) showed that citations tended to be much more accurate in higher impact factor journals (IF > 5), perhaps (speculating) due to better peer review or copy editing.

Todd et al. (2007) suggest that journals institute random audits of citations to ensure authors take greater care. This may be a good idea, but one that is difficult to institute in journals where peer reviewers are already in short supply. It may also be useful to have rebuttal papers considered as part of the total communication surrounding a paper: the full text would include them, they would be automatically included in the downloaded PDF, and there would be a tab (in addition to author information, supplementary material, references, etc.) for responses.

More generally, why don't we learn how to cite well as students? The vast majority of advice on citation practices that a quick Google search turns up concerns avoiding plagiarism and stylistic conventions. Some of it is philosophical, but I have never heard a deep discussion of questions like 'What's an appropriate number of citations for an idea? For a manuscript?' or 'How deep do I cite? (Do I need to go back to Darwin?)'. It would be great if there were a consensus advice publication on best practices in citation, like the sort the BES is so good at producing.

Which is to say that I still hope rebuttals can work and be valuable.