Sunday, 23 April 2017

Sample selection in genetic studies: impact of restricted range

I'll shortly be posting a preprint about the methodological quality of studies in the field of neurogenetics. It's something I've been working on with a group of colleagues for a while, and we are aiming to make recommendations to improve the field.

I won't go into details here, as you will be able to read the preprint fairly soon. Instead, what I want to do here is to expand on a small point that cropped up as I looked at this literature, and which I think is underappreciated.

It's to do with sampling. There's a particular problem that I started to think about a while back, when I heard someone give a talk about a candidate gene study. I can't remember who it was, or even what the candidate gene was, but basically they took a bunch of students, genotyped them, and then looked for associations between their genotypes and measures of memory. They were excited because they found some significant results. But I was, as usual, sitting there thinking convoluted thoughts about all of this, and wondering whether it really made sense. In particular, if a common genetic variant has such a big effect on memory, would this really show up in a bunch of students – people who presumably have pretty good memories? Wouldn't you instead expect an alteration in the frequencies of the genotypes in the student population?

Whenever I have an intuition like that, I find the best thing to do is to try a simulation. Sometimes the intuition is confirmed, and sometimes things turn out to be different and, very often, more complicated.

But this time, I'm pleased to say my intuition seems to have something going for it.

So here are the nuts and bolts.

I simulated genotypes and associated phenotypes using the handy mvrnorm function from R's MASS package. For the examples below, I specified that a and A are equally common (i.e. minor allele frequency is .5), so we have 25% aa, 50% aA, and 25% AA. The script lets you specify how closely the genotype is related to the phenotype, but from what we know about genetics, it's very unlikely that a common variant would have a correlation with the phenotype of more than about .25.
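The actual script is linked at the end of this post; here is a minimal sketch of the general approach, with my own variable names, an illustrative correlation r, and an assumed latent-variable trick for generating the genotypes:

library(MASS)
set.seed(42)
n <- 10000   # simulated population size
r <- .25     # assumed genotype-phenotype correlation
dat <- mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, r, r, 1), 2, 2))
phenotype <- dat[, 2]
# Code genotype as number of A alleles (0 = aa, 1 = aA, 2 = AA) by cutting
# the latent variable at its 25th/75th percentiles, giving a 1:2:1 ratio
genotype <- cut(dat[, 1], breaks = qnorm(c(0, .25, .75, 1)), labels = FALSE) - 1
table(genotype) / n   # close to the expected .25, .50, .25

(Note that discretising the latent variable attenuates the observed genotype-phenotype correlation somewhat below r.)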

We can then test for two things (each sketched in code below):
1)  How far does the distribution of genotypes in the sample (i.e. people who are aa, aA or AA) resemble that in the general population? If we know that MAF is .5, we expect this distribution to be 1:2:1.
2) We can assign each person a score corresponding to number of A alleles (coding aa as zero, aA as 1, and AA as 2) and look at the regression of the phenotype on the genotype. That's the standard approach to looking for genotype-phenotype association.
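Continuing the sketch above, both checks are essentially one-liners in R:

# 1) Does the genotype distribution match the expected 1:2:1?
chisq.test(table(genotype), p = c(.25, .50, .25))
# 2) Regression of phenotype on allele count (0/1/2)
summary(lm(phenotype ~ genotype))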

If we work with the whole population of simulated data, these values will correspond to those that we specified in setting up the simulation, provided we have a reasonably large sample size.

But what if we take a selective sample of cases who fall above some cutoff on the phenotype? This is equivalent to taking, for instance, a sample from a student population at a selective institution, when the phenotype is a measure of cognitive function. You're not likely to get into the institution unless you have good cognitive ability. Then, working with this selected subgroup, we recompute our two measures, i.e. the proportions of each genotype, and the correlation between the genotype and the phenotype.

Now, the really interesting thing here is that, as the selection cutoff gets more extreme, two things happen:
a) The proportions of people with different genotypes start to depart from the values expected for the population in general. We can test when the departure becomes statistically significant with a chi square test.
b) The regression of the phenotype on the genotype weakens. We can quantify this effect by computing the p-value associated with the correlation between genotype and phenotype (see the sketch just below).
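Here is illustrative code (again, a sketch rather than the actual script) for tracking both measures as the cutoff varies:

# Apply increasingly extreme z-score cutoffs on the phenotype and record
# the p-value from each test in the selected subsample
cutoffs <- seq(-1, 1.5, by = .5)
results <- t(sapply(cutoffs, function(z) {
  keep <- phenotype > z
  p_chisq <- chisq.test(table(factor(genotype[keep], levels = 0:2)),
                        p = c(.25, .50, .25))$p.value
  p_reg <- summary(lm(phenotype[keep] ~ genotype[keep]))$coefficients[2, 4]
  c(cutoff = z, p_chisq = p_chisq, p_reg = p_reg)
}))
round(results, 4)   # the regression p typically rises as the chi square p falls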

Figure 1: Genotype-phenotype associations for samples selected on phenotype

Figure 1 shows the mean phenotype scores for each genotype for three samples: an unselected sample, a sample selected with z-score cutoff zero (corresponding to the top 50% of the population on the phenotype) and a sample selected with z-score cutoff of .5 (roughly selecting the top third of the population).
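In code, the three sets of means plotted in Figure 1 are simply (continuing the sketch above):

tapply(phenotype, genotype, mean)                                  # unselected
tapply(phenotype[phenotype > 0], genotype[phenotype > 0], mean)    # cutoff z = 0
tapply(phenotype[phenotype > .5], genotype[phenotype > .5], mean)  # cutoff z = .5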

It's immediately apparent from the figure that the selection dramatically weakens the association between genotype and phenotype. In effect, we are distorting the relationship between genotype and phenotype by focusing just on a restricted range. 

Figure 2: Comparison of p-values from a conventional regression approach and a chi square test on genotype frequencies, in relation to sample selection

Figure 2 shows the data from another perspective, by considering the statistical results from a conventional regression analysis when different z-score cutoffs are used, selecting an increasingly extreme subset of the population. If we take a cutoff of zero – in effect selecting just the top half of the population – the regression effect (predicting phenotype from genotype), shown in the blue line, which was strong in the full population, is already much reduced. If you select only people with z-scores of .5 or above (equivalent to an IQ score of around 108), the regression is no longer significant. But notice what happens to the black line. This shows the p-value from a chi square test comparing the distribution of genotypes in each subsample with the expected population values. If there is a true association between genotype and phenotype, then the greater the selection on the phenotype, the more the genotype distribution departs from expected values. The specific patterns observed will depend on the true association in the population and on the sample size, but this kind of cross-over is a typical result.

So what's the moral of this exercise? Well, if you are interested in a phenotype that has a particular distribution in the general population, you need to be careful when selecting a sample for a genetic association study. If you pick a sample that has a restricted range of phenotypes relative to the general population, then you make it less likely that you will detect a true genetic association in a conventional regression analysis. In fact, if you take a selected sample, there comes a point when the optimal way to demonstrate an association is by looking for a change in the frequency of different genotypes in the selected population vs the general population.

No doubt this effect is already well-known to geneticists, and it's all pretty obvious to anyone who is statistically savvy, but I was pleased to be able to quantify the effect via simulations. It is clear that it has implications for those who work predominantly with selected samples such as university students. For some phenotypes, use of a student sample may not be a problem, provided they are similar to the general population in the range of phenotype scores. But for cognitive phenotypes that's very unlikely, and attempting to show genetic effects in such samples seems a doomed enterprise.

The script for this simulation should be available here: https://github.com/oscci/mysqing.
(I am a github novice, but I'm sure someone will tell me if I've got that wrong).






Sunday, 5 March 2017

Advice for early career researchers re job applications: 1. Work 'in preparation'

Image from: https://fdudhwala.wordpress.com/2017/01/05/first-time-publishing-woerries/
I posted a couple of tweets yesterday giving my personal view of things to avoid when writing a job application. These generated a livelier debate than I had anticipated, and made me think further about the issues I'd raised. I've previously blogged about getting a job as a research assistant in psychology; this piece is directed more at early career researchers aiming for a postdoc or first lectureship. I'll do a separate post about issues raised by my second tweet – inclusion of more personal information in your application. Here I'll focus on this one: 
  • Protip for job applicants: 3+ 1st author 'in prep' papers suggests you can't finish things AND that you'll be distracted if appointed

I've been shortlisting for years, and there has been a noticeable trend for publication lists to expand to include papers that are 'in preparation' as well as those that are 'submitted' or 'under review'. One obvious problem with these is that it's unclear what they refer to: they could be nearly-completed manuscripts or a set of bullet points. 
  
My tweet was making the further point that you need to think of the impression you create in the reader if you have five or six papers 'in preparation', especially if you are first author. My guess is that most applicants think that this will indicate their activity and productivity, but that isn't so. I'd wonder whether this is someone who starts things and then can't finish them. I'd also worry that if I took the applicant on, the 'in preparation' papers would come with them and distract them from the job I had employed them to do. I've blogged before about the curse of the 'academic backlog'. While I am sympathetic about supporting early researchers in getting their previous work written up, I'd be wary of taking on someone who had already accumulated a large backlog right at the start of their career.

Many people who commented on this tweet supported my views:
  • @MdStockbridge We've been advised never to list in prep articles unless explicitly asked in the context of post doc applications. We were told it makes one looks desperate to "fill the space."
  •  @hardsci I usually ignore "in prep" sections, but to me more than 1-2 items look like obvious vita-padding
  • @larsjuhljensen "In prep" does not count when I read a CV. The slight plus of having done something is offset by inability to prioritize content.
  • @Russwarne You can say anything is "in preparation." My Nobel acceptance speech is "in preparation." I ignore it.
  • @DuncanAstle I regularly see CVs with ~5 in prep papers... to be honest I don't factor them into my appraisal.
  • @UnhealthyEcon I'm wary if i see in-prep papers at all. Under review papers would be different.
  • @davidpoeppel Hey peeps in my labs: finish your papers! Run – don't walk – back to your desks! xoxo David. (And imho, never list any in prep stuff on CV...)
  • @janhove 'Submitted' is all right, I think, if turn arounds in your field are glacial. But 'in prep' is highly non-committal.

Others, though, felt this was unfair, because it meant that applicants couldn't refer to work that may be held up by forces beyond their control: 
  • @david_colquhoun that one seems quite unfair – timing is often beyond ones's control
  • @markwarschauer I disagree completely. The more active job applicants are in research & publishing the better.
  • @godze786  if it's a junior applicant it may also mean other authors are holding up. Less power when junior
  • @tremodian All good except most often fully drafted papers are stuck in senior author hell and repeated prods to release them often do nothing.
 But then, this very useful suggestion came up:  
  • @DrBrocktagon But do get it out as preprint and put *that* on CV
  • @maxcoltheart Yes. Never include "in prep" papers on cv/jobapp. Or "submitted" papers? Don't count since they may never appear? Maybe OK if ARKIVed
The point here is that if you deposit your manuscript as a preprint, then it is available for people to read. It is not, of course, peer-reviewed, but for a postdoc position, I'd be less interested in counting peer-reviewed papers than in having the opportunity to evaluate the written work of the applicant. Preprints allow one to do that. And it can be effective:
  • @BoyleLab we just did a search and one of our candidates did this. It helped them get an interview because it was a great paper
But, of course, there's a sting in the tail: once something is a preprint it will be read by others, including your shortlisting committee, so it had better be as good as you can get it. So the question came up, at what point would you deposit something as a preprint? I put out this question, and Twitter came back with lots of advice:
  • @michaelhoffman Preprint ≠ "in prep". But a smart applicant should preprint any of their "submitted" manuscripts.
  • @DoctorZen The term "pre-print" itself suggests an answer. Pre-prints started life as accepted manuscripts. They should not be rough drafts.
  • @serjepedia these become part of your work record. Shoddiness could be damaging.
  • @m_wall I wouldn't put anything up that hadn't been edited/commented by all authors, so basically ready to submit.
  • @restokin If people are reading it to decide if they should give you a job, it would have to be pretty solid. 
All in all, I thought this was a productive discussion. It was clear that many senior academics disregard lists of research outputs that are not in the public domain. Attempts to pad out the CV are counterproductive and create a negative impression. But if work is written up to a point where it can be (or has been) submitted, there's a clear advantage to the researcher in posting it as a preprint, which makes it accessible. It doesn't guarantee that a selection committee will look at it, but it at least gives them that opportunity.



Thursday, 23 February 2017

Barely a good word for Donald Trump in the Houses of Parliament


I am beginning to develop an addiction to Hansard, the official record of debates in the House of Commons and the House of Lords. It's a fascinating account of how major political decisions are debated, and I feel fortunate to live in a country where it is readily available on the internet the day after a debate.

The debate on Donald Trump's state visit was particularly interesting, because it was prompted by a public petition signed by 1.85 million people, which read:

Donald Trump should be allowed to enter the UK in his capacity as head of the US Government, but he should not be invited to make an official State Visit because it would cause embarrassment to Her Majesty the Queen.

I've been taking a look at the debate from 20th February, which divided neatly down party lines, with the Conservatives and a single DUP member supporting the state visit, and everyone else (Labour, Lib Dems, SNP and Green) opposing it.

A notable point about the defenders of the State Visit is that virtually none of them attempted to defend Trump himself. The case that speaker after speaker made was that we should invite Trump despite his awfulness. Indeed, some speakers argued that we'd invited other awful people before – Emperor Hirohito, President Ceausescu, Xi Jinping and Robert Mugabe – so we would be guilty of double standards if we did not invite Trump as well.

It was noted, however, that this argument did not hold much water, as none of these other invitees had been extended this honour within a week of taking office, and other far less controversial US presidents had never had a State Visit.

The principal argument used to support the government's position was a pragmatic one: it will be to the benefit of the UK if we work with the US, our oldest ally, and its new President. That way we may be able to influence him, and also to achieve good trade deals. Dr Julian Lewis (Con) went even further, and suggested that by cosying up to Trump we might be able to avert World War 3:

…given he is in some doubt about continuing the alliance that prevented world war three and is our best guarantee of world war three not breaking out in the 21st century, do they really think it is more important to berate him, castigate him and encourage him to retreat into some sort of bunker, rather than to do what the Prime Minister did, perhaps more literally than any of us expected, and take him by the hand to try to lead him down the paths of righteousness? I have no doubt at all about the matter.

He continued:
What really matters to the future of Europe is that the transatlantic alliance continues and prospers. There is every prospect of that happening provided that we reach out to this inexperienced individual and try to persuade him – there is every chance of persuading him – to continue with the policy pursued by his predecessors.

I can't imagine this is an argument that would be appreciated by Trump, as it manages to be both patronising and insulting at the same time.

The closest anyone dared come to being positive about Trump was when Nigel Evans (Con) said:

We might not like some of the things he says. I certainly do not like some of what he has said in the past, but I respect the fact that he is now delivering the platform on which he stood. He will go down in history as the only politician roundly condemned for delivering on his promises. I know this is a peculiar thing in the politics we are used to here – politicians standing up for something and delivering – but that is what Trump is doing.

But most of those supporting the visit did so while attempting to distance themselves from Trump's personal characteristics, e.g. Gregory Campbell (DUP):

My view is that Candidate Trump and Mr Trump made some deplorable and vile comments, which are indefensible - they cannot be defended morally, politically or in any other way - but he is the democratically elected President of the United States of America.

Others made the point in rather mild and general terms, e.g. Anne Main (Con):

Any of us who have particular concerns about some of President Trump's pronouncements are quite right to have them; I object completely to some of the things that have been said.

If we turn to the comments made by the speakers who opposed the state visit, the language they used to portray Trump was considerably more vivid, with many focusing on the less savoury aspects of his character. Paul Flynn (Lab) referred to the 'cavernous depths of his scientific ignorance'. Others picked up on Trump's statements on women, Muslims, the LGBT community, torture, and the press:

I think of my five-year-old daughter when I reflect on a man who considers it okay to go and grab pussy, a man who considers it okay to be misogynistic towards the woman he is running against. Frankly, I cannot imagine a leader of this country, of whatever political stripe, behaving in that manner. David Lammy (Lab)

President Trump's Administration so far has been characterised by ignorance and prejudice, seeking to ban Muslims and deny refuge to people fleeing from war and persecution. Kirsten Oswald (SNP)

Even if one were the ultimate pragmatist for whom the matters of equality or of standing against torture, racism and sexism do not matter, giving it all up in week 1 on a plate with no questions asked would not be a sensible negotiating strategy. Stephen Doughty (Lab)

I fought really hard to be elected. I fought against bigotry, sexism and the patriarchy to earn my place in this House. By allowing Donald Trump a state visit and bringing out the china crockery and the red carpet, we endorse all those things that I fought hard against and say, "Do you know what? It's okay."  Naz Shah (Lab)

Let me conclude by saying that in my view, Mr Trump is a disgusting, immoral man. He represents the very opposite of the values we hold and should not be welcome here. Daniel Zeichner (Lab)

We are told that Trump is very thin-skinned and gets furious when criticised. It is also said that he doesn't read much, but gets most of his news from social media and cable TV, and is kept happy insofar as his staff feed him only positive media stories. If so, then I guess there is a possibility his team will somehow keep Hansard away from him, and the visit will go ahead. But it's hard to see how it could possibly succeed if he becomes aware of the disdain in which he is held by Conservative MPs as well as the Opposition. They have made it abundantly clear that the offer of a state visit is not intended to honour him. Rather they regard him as a petulant but dangerous despot, who might be bribed to behave well by the offer of some pomp and ceremony.

The petition to withdraw the invitation has been voted down, but it has nevertheless succeeded in forcing the Conservatives to make public just how much they despise the US President.





Saturday, 18 February 2017

The alt-right guide to fielding conference questions


After watching this interview between BBC Newsnight's Evan Davis and Sebastian Gorka, Deputy Assistant to Donald Trump, I realised I'd been handling conference questions all wrong. Gorka, who is a former editor of Breitbart News, gives a virtuoso performance that illustrates every trick in the book for coming out on top in an interview: smear the questioner, distract from the question, deny the premises, and question the motives behind a difficult question. Do everything, in fact, except give a straight answer. Here's what a conference Q&A session might look like if we all mastered these useful techniques.

ED: Dr Gorka, you claim that you can improve children's reading development using a set of motor exercises. But the data you showed on slide 3 don't seem to show that.

SG: That question is typical of the kind of bias from people working at British Universities. You seem hell-bent on discrediting any view that doesn't agree with your own preconceived position.

ED: Er, no. I just wondered about slide 3. Is the difference between those two numbers statistically significant?

SG: Why are people like you so obsessed with trivial details? Here we are showing marvellous improvements in children's reading, and all you can do is to pick away at a minor point.

ED: Well, you could answer the question? Are those numbers significantly different?

SG: It's not as if you and your colleagues have any expertise in statistics. The last talk by your colleague Dr Smith was full of mistakes. She actually did a parametric test in a situation that called for a nonparametric test.

ED: But can we get back to the question of whether your intervention had a significant effect.

SG: Of course it did. It's an enormous effect. And that's only part of the data. I've got lots of other numbers that I haven't shown here. And if we go back to slide 3, just look at those bars: the red one is much higher than the blue one.

ED: But where are the error bars?

SG: That's just typical of you. Always on the attack. Look at the language you are using. I show you all the results in a nice bar chart, and all you can do is talk about error. Don't you ever think of anything else?

ED: Well, I can see we aren't going to get anywhere with that question, so let me try another one. Your co-author, Dr Trump, said that the children in your study all had dyslexia, whereas in your talk you said they covered the whole range of reading ability. That's rather confusing. Can you tell us which version is correct?

SG: There you go again. Always trying to pick holes in everything we do. Seems you're just jealous because your own reading programs don't have anything like this effect.

ED: But don't you think it discredits your study if you can't give a straight answer to a simple question?

SG: So this is what we get, ladies and gentlemen. All the time. Fake challenges and attempts to discredit us.

ED: Well, it's a straightforward question. Were they dyslexic or not?

SG: Some of them were, and some of them weren't.

ED: How many? Dr Trump said all of them were dyslexic.

SG: You'll have to ask him. I've got parents falling over themselves to get their children enrolled, and I really don't have time for this kind of biased questioning.

Chair: Thank you Dr Gorka. We have no more time for questions.

Friday, 17 February 2017

We know what's best for you: politicians vs. experts


I regard politicians as a much-maligned group. The job is not, after all, particularly well paid, when you consider the hours that they usually put in, the level of scrutiny they are subjected to, and the high-stakes issues they must grapple with. I therefore start with the assumption that most of them go into politics because they feel strongly about social or economic issues and want to make a difference. Although being a politician gives you some status, it also inevitably means you will be subjected to abuse or worse. The murder of Jo Cox led to a brief lull in the hostilities, but they have resumed with a vengeance as politicians continue to grapple with issues that divide the nation and that people feel strongly about. It seems inevitable, then, that anyone who stays the course must have the hide of a rhinoceros, and so by a process of self-selection, politicians are a relatively tough-minded lot.

I fear, though, that in recent years, as the divisions between parties have become more extreme, so have the characteristics of politicians. One can admire someone who sticks to their principles in the face of hostile criticism; but what we now have are politicians who are stubborn to the point of pig-headedness, and simply won't listen to evidence or rational argument. So loath are they to appear wavering that they dismiss the views of experts.

This was most famously demonstrated by the previous justice secretary, Michael Gove, who, when asked if any economists backed Brexit, replied "people in this country have had enough of experts". This position is continued by Theresa May as she goes forth in the quest for a Hard Brexit.

Then we have the case of the Secretary of State for Health, Jeremy Hunt, who has repeatedly ignored expert opinion on the changes he has introduced to produce a 'seven-day NHS'. The evidence he cited for the need for the change was misrepresented, according to the authors of the report, who were unhappy with how their study was being used. The specific plans Hunt proposed were described as 'unfunded, undefined and wholly unrealistic' by the British Medical Association, yet he pressed on.

At a time when the NHS is facing staff shortages, and as Brexit threatens to reduce the number of hospital staff from the EU, he has introduced measures that have led to the demoralisation of junior doctors. This week he unveiled a new rota system with a mix of day and night shifts that had doctors, including experts in sleep, up in arms. It was suggested that this kind of rota would not be allowed in the aviation industry, and is likely to put the health of doctors as well as patients at risk.

A third example comes from academia, where Jo Johnson, Minister of State for Universities, Science, Research and Innovation, steadfastly refuses to listen to any criticisms of his Higher Education and Research Bill, either from academics or from the House of Lords. Just as with Hunt and the NHS, he starts from fallacious premises – the idea that teaching is often poor, and that students and employers are dissatisfied – and then proceeds to introduce measures that are designed to fix the apparent problem, but which are more likely to damage a Higher Education system which, as he notes, is currently the envy of the world. The use of the National Student Survey as a metric for teaching excellence has come under particularly sharp attack – not just because of its poor validity, but also because the distribution of scores makes it unsuited for creating any kind of league table: a point that has been stressed by the Royal Statistical Society, the Office for National Statistics, and most recently by Lord Lipsey, joint chair of the All Party Statistics Group.

Johnson's unwillingness to engage with the criticism was discussed recently at the Annual General Meeting of the Council for Defence of British Universities (where Martin Wolf gave a dazzling critique of the Higher Education and Research Bill from an expert economics perspective).  Lord Melvyn Bragg said that in years of attending the House of Lords he had never come across such resistance to advice. I asked whether anyone could explain why Johnson was so obdurate. After all, he is presumably a highly intelligent man, educated at one of our top Universities. It's clear that he is ideologically committed to a market in higher education, but presumably he doesn't want to see the UK's international reputation downgraded, so why doesn't he listen to the kind of criticism put forward in the official response to his plans by Cambridge University? I don't know the answer, but there are two possible reasons that seem plausible to me.

First, those who are in politics seldom seem to understand the daily life of people affected by the Bills they introduce. One senior academic told me that Oxford and Cambridge in particular do themselves a disservice when they invite senior politicians to an annual luxurious college feast, in the hope of gaining some influence. The guests may enjoy the exquisite food and wine, but they go away convinced that all academics are living the high life, and give only the occasional lecture between bouts of indulgence. Any complaints are thus seen as coming from idle dilettantes who are out of touch with the real world and alarmed at the idea they may be required to do serious work. Needless to say, this may have been accurate in the days of Brideshead Revisited, but it could not be further from the truth today – in Higher Education Institutions of every stripe, academics work longer hours than the average worker (though fewer, it must be said, than the hard-pressed doctors).

Second, governments always want to push things through because if they don't, they miss a window of opportunity during their period in power. So there can be a sense of, let's get this up and running and worry about the detail later. That was pretty much the case made by David Willetts when the Bill was debated in the House of Lords:

These are not perfect measures. We are on a journey, and I look forward to these metrics being revised and replaced by superior metrics in the future. They are not as bad as we have heard in some of the caricatures of them, and in my experience, if we wait until we have a perfect indicator and then start using it, we will have a very long wait. If we use the indicators that we have, however imperfect, people then work hard to improve them. That is the spirit with which we should approach the TEF today.

However, that is little comfort to those who might see their University go out of business while the problems are fixed. As Baroness Royall said in response:

My Lords, the noble Lord, Lord Willetts, said that we are embarking on a journey, which indeed we are, but I feel that the car in which we will travel does not yet have all the component parts. I therefore wonder if, when we have concluded all our debates, rather than going full speed ahead into a TEF for everybody who wants to participate, we should have some pilots. In that way the metrics could be amended quite properly before everybody else embarks on the journey with us.

Much has been said about the 'post-truth' age in which we now live, where fake news flourishes and anyone's opinion is as good as anyone else's. If ever there was a need for strong universities as a source of reliable, expert evidence, it is now. Unless academics start to speak out to defend what we have, it is at risk of disappearing.

For more detail of the case against the TEF, see here.

Sunday, 8 January 2017

A common misunderstanding of natural selection



© cartoonstock.com
My attention was drawn today to an article in the Atlantic, entitled ‘Why Do Humans Still Have a Gene That Increases the Risk of Alzheimer’s?’ It noted that there are variants of the apolipoprotein E (APOE) gene that are associated with an 8- to 12-fold increased risk of the disease. It continued:
“It doesn’t make sense,” says Ben Trumble, from Arizona State University. “You’d have thought that natural selection would have weeded out ApoE4 a long time ago. The fact that we have it at all is a little bizarre.”

The article goes on to discuss research suggesting there might be some compensating advantage to the Alzheimer risk gene variants in terms of protection from brain parasites.

That is as may be – I haven’t studied the research findings – but I do take issue with the claim that the persistence of the risk variants in humans is ‘a little bizarre’.

The quote indicates a common misunderstanding of how natural selection works. In evolution, what matters is whether an individual leaves surviving offspring. If you don’t have any descendants, then gene variants that are specific to you will inevitably disappear from the population. Alzheimer’s is an unpleasant condition that impairs the ability to function independently, but the onset is typically long after child-bearing years are over. If a disease doesn’t affect the likelihood that you have surviving children, then it is irrelevant as far as natural selection is concerned. As Max Coltheart replied when I tweeted about this: “evolution doesn't care about the cost of living in an aged-care facility”.

Monday, 2 January 2017

My most popular posts of 2016

The end of the year is a good time to look over the blog, to see which posts garnered most attention. Perhaps not surprisingly, among the six most popular were several pieces I wrote related to scientific publishing and the reproducibility crisis: examining the problem and potential solutions. This is a topic I have become increasingly passionate about, as I worry about how the current system encourages us to waste precious time and money pursuing 'exciting' and 'ground-breaking' results, rather than doing thoughtful, careful science. Of course, these are not mutually exclusive, but, sadly, the emphasis on innovation and productivity can lead people to cut corners, and we then end up with findings that are not a solid basis on which to build further research.


Here are the top 6 posts of 2016:




Here are slides and references from a talk I gave on reproducibility


Here is a link to an advanced course for early career researchers on this topic that I am running with Marcus Munafo and Chris Chambers


Here is a link to a complete catalogue of my blog posts.


Happy New Year!







Thursday, 22 December 2016

Controversial statues: remove or revise?



The Rhodes Must Fall campaign in Oxford ignited an impassioned debate about the presence of monuments to historical figures in our Universities. On the one hand, there are those who find it offensive that a major university should continue to commemorate a person such as Cecil Rhodes, given the historical reappraisal of his role in colonialism and suppression of African people. On the other hand, there are those who worry that removal of the Rhodes statue could be the thin end of a wedge that could lead to demands for Nelson to be removed from Trafalgar Square or Henry VIII from King’s College Cambridge. There are competing petitions online to remove and to retain the Rhodes statue, with both having similar numbers of supporters.

The Rhodes Must Fall campaign was back in the spotlight last week, when the Times Higher ran a lengthy article covering a range of controversial statues in Universities across the globe. A day before the article appeared, I had happened upon the Explorers' Monument in Fremantle, Australia. The original monument, dating to 1913, commemorated explorers who had been killed by 'treacherous natives' in 1864. As I read the plaque, I was thinking that this was one-sided, to put it mildly.

Source: https://en.wikipedia.org/wiki/Explorers%27_Monument
 
But then, reading on, I came to the next plaque, below the first, which was added to give the view of those who were offended by the original statue and plaque. 

Source: https://en.wikipedia.org/wiki/Explorers%27_Monument
I like this solution.  It does not airbrush controversial figures and events out of history. Rather, it forces one to think about the ways in which a colonial perspective damaged many indigenous people - and perhaps to question other things that are just taken for granted. It also creates a lasting reminder of the issues currently under debate – whereas if a statue is removed, all could be forgotten in a few years’ time. 
Obviously, taken to extremes, this approach could get out of control – one can imagine a never-ending sequence of plaques like the comments section on a Guardian article. But used judiciously, this approach seems to me to be a good solution to this debate.

Friday, 16 December 2016

When is a replication not a replication?



Replication studies have been much in the news lately, particularly in the field of psychology, where a great deal of discussion has been stimulated by the Reproducibility Project spearheaded by Brian Nosek.

Replication of a study is an important way to test the reproducibility and generalisability of the results. It has been a standard requirement for publication in reputable journals in the field of genetics for several years (see Kraft et al, 2009). However, at interdisciplinary boundaries, the need for replication may not be appreciated, especially where researchers from other disciplines include genetic associations in their analyses. I’m interested in documenting how far replications are routinely included in genetics papers that are published in neuroscience journals, and so I attempted to categorise a set of papers on this basis.

I’ve encountered many unanticipated obstacles in the course of this study (unintelligible papers and uncommunicative authors, to name just two I have blogged about), but I had not expected to find it difficult to make this binary categorisation. But it has become clear that there are nuances to the idea of replication. Here are two of those I have encountered:

a)    Studies which include a straightforward Discovery and Replication sample, but which fail to reproduce the original result in the Replication sample. The authors then proceed to analyse the data with both samples combined and conclude that the original result is still there, so all is okay. Now, as far as I am concerned, you can’t treat this as a successful replication; the best you can say of it is that it is an extension of the original study to a larger sample size. But if, as is typically the case, the original result was afflicted by the Winner’s Curse, then the combined result will be biased (see the simulation sketch after this list).
b)    Studies which use different phenotypes for Discovery and Replication samples. On the one hand, one can argue that such studies are useful for identifying how generalizable the initial result is to changes in measures. It may also be the only practical solution if using pre-existing samples for replication, as one has to use what measures are available. The problem is that there is an asymmetry in terms of how the results are then treated. If the same result is obtained with a new sample using different measures, this can be taken as strong evidence that the genotype is influencing a trait regardless of how it is measured. But when the Replication sample fails to reproduce the original result, one is left with uncertainty as to whether it was a type I error, or a finding that is sensitive to how the phenotype is measured. I’ve found that people are very reluctant to treat failures to replicate as undermining the original finding in this circumstance.
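To illustrate point (a), here is a small simulation of the Winner’s Curse in R, with made-up parameters: if only ‘significant’ discoveries are followed up, the combined Discovery + Replication estimate is inflated relative to the true effect.

set.seed(1)
true_r <- .1   # assumed small true genotype-phenotype correlation
n_disc <- 100; n_rep <- 100
sim_one <- function() {
  g <- rbinom(n_disc, 2, .5)   # allele count 0/1/2, MAF = .5
  p <- true_r * as.numeric(scale(g)) + rnorm(n_disc, sd = sqrt(1 - true_r^2))
  # a 'discovery' only if significant and in the expected direction
  if (cor.test(g, p)$p.value >= .05 || cor(g, p) <= 0) return(NA)
  g2 <- rbinom(n_rep, 2, .5)
  p2 <- true_r * as.numeric(scale(g2)) + rnorm(n_rep, sd = sqrt(1 - true_r^2))
  cor(c(g, g2), c(p, p2))   # combined-sample estimate
}
combined <- replicate(5000, sim_one())
mean(combined, na.rm = TRUE)   # well above true_r: the combined estimate is biased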

I’m reminded of arguments in the field of social psychology, where failures to reproduce well-known phenomena are often attributed to minor changes in the procedures or lack of ‘flair’ of experimenters. The problem is that while this interpretation could be valid, there is another, less palatable, interpretation, which is that the original finding was a type I error.  This is particularly likely when the original study was underpowered or the phenotype was measured using an unreliable instrument. 

There is no simple solution, but as a start, I’d suggest that researchers in this field should, where feasible, use the same phenotype measures in Discovery and Replication samples. Where that is not feasible, they could pre-register their predictions for the Replication sample prior to looking at the data, taking into account the reliability of the phenotype measures and the power of the Replication sample to detect the original effect, given its sample size.
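For example, the pre-registration could include a power calculation along these lines (using the pwr package; the numbers are purely illustrative):

library(pwr)
# Power of a replication sample of n = 200 to detect a Discovery effect of r = .15
pwr.r.test(n = 200, r = .15, sig.level = .05)
# Or: the n needed for 80% power to detect that effect
pwr.r.test(r = .15, power = .80, sig.level = .05)
# If the replication phenotype measure is less reliable, the expected effect
# should first be attenuated, e.g. r_expected <- r * sqrt(rel_disc * rel_rep)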

Tuesday, 13 December 2016

When scientific communication is a one-way street


Together with some colleagues, I am reviewing a set of papers that combine genetic and neuroscience methods. We had noticed wide variation in methodological practices and thought it would be useful to evaluate the state of the field, with the ultimate aim of identifying both problems and instances of best practice, so that we could make some recommendations.

I had anticipated that there would be wide differences between studies in statistical approaches and completeness of reporting, but I had not realised just what a daunting task it would be to review a set of papers. We had initially planned to include 50 papers, but we had to prune it down to 30, on realising just how much time we would need to spend reading and re-reading each article, just to extract some key statistics for a summary.

In part the problem is the complexity that arises when you bring together two or more subject areas, each of which deals with complex, big datasets. I blogged recently about this. Another issue is incomplete reporting. Trying to find out whether the researchers followed a specific procedure can mean wading through pages of manuscript and supplementary material: if you don’t find it, you then worry that you may have missed it, and so you re-read it all again. The search for key details is not so much looking for a needle in a haystack as being presented with a haystack which may or may not have a needle in it.

I realised that it would make sense to contact authors of the papers we were including in the review, so I sent an email, copied to each first and last author, attaching a summary template of the details that had been extracted from their paper, and simply asking them to check if it was an accurate account. I realised everyone is busy and I did not anticipate an immediate response, but I suggested an end of month deadline, which gave people 3-4 weeks to reply. I then sent out a reminder a week before the deadline to those who had not replied, offering more time if needed.

Overall, the outcome was as follows:
  • 15 out of 30 authors responded, either to confirm our template was correct, or to make changes. The tone varied from friendly to suspicious, but all gave useful feedback.
  • 5 authors acknowledged our request and promised to get back but didn’t.
  • 1 author said an error had been found in the data, which did not affect conclusions, and they planned to correct it and send us updated data – but they didn’t.
  • 1 author sent questions about what we were doing, to which I replied, but they did not confirm whether or not our summary of their study was correct.
  • 8 did not reply to either of my emails.

I was rather disappointed that only half the authors ultimately gave us a useable response. Admittedly, the response rate is better than has been reported for people who request data from authors (see, e.g. Wicherts et al, 2011) – but providing data involves much more work than checking a summary. Our summary template was very short (effectively less than 20 details to check), and in only a minority of cases had we asked authors to provide specific information that we could not find in the paper, or confirmation of means/SDs that had been extracted from a digitised figure.  

We are continuing to work on our analysis, and will aim to publish it regardless, but I remain curious about the reasons why so many authors were unwilling to do a simple check. It could just be pressure of work: we are all terribly busy and I can appreciate this kind of request might just seem a nuisance. Or are some authors really not interested in what people make of their paper, provided they get it published in a top journal?




Friday, 28 October 2016

The allure of autism for researchers

Data on $K spend on neurodevelopmental disorder research by NIH: from Bishop, D. V. M. (2010). Which neurodevelopmental disorders get researched and why? PLOS One, 5(11), e15112. doi: 10.1371/journal.pone.0015112

Every year I hear from students interested in doing postgraduate study with me at Oxford. Most of them express a strong research interest in autism spectrum disorder (ASD). At one level, this is not surprising: if you want to work on autism and you look at the University website, you will find me listed as one of the people affiliated with the Oxford Autism Research Centre. But if you look at my publication list, you find that autism research is a rather minor part of what I do: 13% of my papers have autism as a keyword, and only 6% have autism or ASD in the title. And where I have published on autism, it is usually in the context of comparing language in ASD with developmental language disorder (DLD, aka specific language impairment, SLI). Indeed, in the publication referenced in the graph above, I concluded that disproportionate amounts of research, and of research funding, were going to ASD relative to other neurodevelopmental disorders.

Now, I don’t want to knock autism research. ASD is an intriguing condition which can have major effects on the lives of affected individuals and their families. It was great to see the recent publication of a study by Jonathan Green and his colleagues showing that a parent-based treatment with autistic toddlers could produce long-lasting reduction in severity of symptoms. Conducting a rigorous study of this size is hugely difficult to do and only possible with substantial research funding.

But I do wonder why there is such a skew in interest towards autism, when many children have other developmental disorders that have long-term impacts. Where are all the enthusiastic young researchers who want to work on developmental language disorders? Why is it that children with general learning disabilities (intellectual disability) are so often excluded from research, or relegated to be a control group against which ASD is assessed?

Together with colleagues Becky Clark, Gina Conti-Ramsden, Maggie Snowling, and Courtenay Norbury, I started the RALLI campaign in 2012 to raise awareness of children’s language impairments, mainly focused on a YouTube channel where we post videos providing brief summaries of key information, with links to more detailed evidence. This year we also completed a study that brought together a multidisciplinary, multinational panel of experts with the goal of producing consensus statements on criteria and terminology for children’s language disorders – leading to one published paper and another currently in preprint stage. We hope that increased consistency in how we define and refer to developmental language disorders will lead to improved recognition.

We still have a long way to go in raising awareness. I doubt we will ever achieve a level of interest to parallel that of autism. And I suspect this is because autism fascinates because it does not appear just to involve cognitive deficits, but rather a qualitatively different way of thinking and interacting with the world. But I would urge those considering pursuing research in this field to think more broadly and recognise that there are many fascinating conditions about which we still know very little. Finding ways to understand and eventually ameliorate language problems or learning disabilities could help a huge number of children and we need more of our brightest and best students to recognise this potential.