https://penguinsandcheese.wordpress.com/2011/10/13/is-it-possible-to-prove-a-research-hypothesis/#comments (reply to someone's comment on my own blog)
Hello! So, the blogs have changed slightly. This is going to be my blog for weeks 4 and 5. I’m going to be discussing why reliability is important when we’re doing research.
So first, what is reliability? And how can we measure it?
I’m pretty sure that if you’re reading this blog then you know all about reliability in a study, but I shall briefly explain anyway.
Firstly, reliability is very closely linked with validity, but it's important not to get the two mixed up. Validity is whether your test measures what you want it to measure. Reliability is a little different.
What is it?
Reliability is simply the consistency of a measure. We could consider a result reliable if we get the same one repeatedly. So, if we conducted an experiment to see if mood is affected by weather and found that it was, would we also find that same result when the test was conducted again? (This is an example of test-retest reliability… which I'll come to in a second.)
If so, the results could be considered reliable.
How can we measure it?
There are actually many ways to test reliability:
- Test-retest reliability – the same test is administered at two different points in time, to assess its consistency across time.
- Inter-rater reliability – two or more raters score the same test independently; their scores are then compared to assess how consistent the raters' estimates are.
- Parallel-forms reliability – two different versions of a test, created from the same content, are given to the same people and the scores compared.
- Internal consistency reliability – two (or more) questions (most often on a questionnaire) ask the same thing. If the answers match, that shows the measure is internally consistent.
I have only gone into these in very brief detail; there is more if anybody wants it here:
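To make the first of these a bit more concrete: test-retest reliability usually comes down to correlating scores from two administrations of the same test. Here's a minimal sketch in Python — the mood scores are entirely made up for illustration, and the `pearson_r` helper is just a hand-rolled correlation, not any particular package's method:

```python
# A minimal sketch of test-retest reliability: correlate scores from two
# administrations of the same test. All data here is hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical mood scores from the same ten participants, tested twice.
time1 = [12, 15, 9, 20, 14, 11, 18, 7, 16, 13]
time2 = [13, 14, 10, 19, 15, 10, 17, 8, 15, 14]

r = pearson_r(time1, time2)
print(round(r, 2))  # values near 1 suggest the measure is consistent over time
```

A correlation near 1 would suggest the test gives consistent results over time; a low correlation would be a warning sign about reliability.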
Now, is reliability important in Psychological testing? Well, of course it is. If an experimenter got different results every time they did the test, then how would they know which results were the true ones? They couldn’t answer their research question. If they found from one test that weather does affect mood, and on another they found that weather has no effect on mood – does weather affect mood? There’s no way to know because those results aren’t reliable.
Now, if that were the case then that's when they might look at the validity of the study and see if it was something other than the weather affecting people's mood – were they actually measuring what they set out to measure? (Which is where reliability and validity are so closely linked, and where it might be easy to get the two confused.)
So, I shall end by saying that if results aren’t reliable, there’s probably no point in putting any store in these results at all.
I’m going to be having a couple of weeks off from writing my blogs (please, don’t cry), so see you all after exam week, and if you’re a 2nd year Psychology student at Bangor (which I’m sure you are) good luck in your exams!!
Hello! Back again for week 3! (isn’t the year just flying by!).
We've been given a wild card this week! I got to choose what I was going to discuss all by myself (well… I kinda chose it from a list of possible blog topics, but we won't worry about that).
So, I have decided to tell you all about research hypotheses.
I'm sure the majority of you are aware of what a research hypothesis is, but for those who aren't, allow me to explain.
There are a few definitions of ‘research hypothesis’, but generally, this one seems good enough to illustrate my point: ‘A proposition about the nature of the world that makes predictions about the results of an experiment.’ (Taken from: www.sinauer.com/fmri2e/html/glossary.html ). So, I think that the key word here is ‘predictions’. You’re making a prediction about what you think will happen when you carry out your research.
OK…an example. Imagine that you want to research what colour swans are. Now, in your life, I’m sure that most, if not all, of you have only ever come across white swans and therefore may think that all swans are white. So, you make your research hypothesis that all swans are white. You go out, scour several places in search of swans, and you keep count of how many different coloured swans you find. At the end of your research, you take a look at your data, and sure enough all the swans you saw were white. Great, your research hypothesis was right, right?
Nope… you can't say unequivocally that all swans are white, because you didn't see all the swans in the world. Would you ever be able to be absolutely sure that you had seen every swan in the world? Could you ever prove, for sure, that your research hypothesis is correct? I'm afraid not. It's never possible to prove a research hypothesis; you can only collect a lot of data that strongly supports it.
You can, however, disprove a research hypothesis. Thinking about the swans again, you could have spent years and years finding all the swans you could. Searching high and low in all the parks and ponds you could find, and STILL, you could have only found white swans! Someone could come along, and in an instant ruin your years of work by finding a black swan. Well, there we go….your research hypothesis was that all swans are white. You have tons and tons of data showing that swans are white, and this person goes and finds a black swan!! So, if this black swan exists, ALL swans can’t be white. Which means, unfortunately, your research hypothesis is wrong (and you’ve wasted years of your life looking at swans…good job this is hypothetical).
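That asymmetry — supporting data can never prove a universal claim, but a single counterexample disproves it — can be sketched in a few lines of Python. The function and the observations are made up purely to illustrate the logic:

```python
# A universal hypothesis ("all swans are white") can only be disproved,
# never proved, by a finite set of observations.

def hypothesis_survives(observations):
    """True while no counterexample has been seen; this is never a proof."""
    return all(colour == "white" for colour in observations)

years_of_data = ["white"] * 10_000          # tons of supporting observations
print(hypothesis_survives(years_of_data))   # True: supported, not proven

years_of_data.append("black")               # one black swan turns up...
print(hypothesis_survives(years_of_data))   # False: hypothesis falsified
```

No matter how large the list of white swans grows, the first value can only ever stay `True` — it never becomes a proof. One black swan flips it to `False` for good.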
Not that I stole the swan story from a famous example or anything…but there is a book by Nassim Nicholas Taleb, called ‘The black swan: the impact of the highly improbable’ which discusses this in detail.
The idea of a black swan may have been highly improbable, but it was never impossible (as my picture above demonstrates). You can never say that a research hypothesis has been proven. Even if all the data in the world strongly supports it, that only makes it highly probable; we can never say that it is impossible to find evidence of the opposite.
To finish off, here’s a video of Nassim himself giving a talk on ‘black swans’. (I personally think I explained it better 😉 but there you go).
Thanks for coming again, next week I have another wild card…so who knows what’s in store for you!
Hello, and welcome back to week 2!
Today, I am posed with the question of whether we need statistics to understand our data.
Now, when it comes to this question, I am inclined to say no for one simple reason: I had a lot of trouble with statistics in the first year of my degree, yet show me some data that has been collected (in fact, even data I collected myself last year, for that matter) and I could tell you what's going on.
For example, let’s look at Milgram’s famous study of obedience.
He asked participants to play the role of 'teacher', and every time their 'student' got a question wrong, they had to give them an electric shock of increasing severity (e.g. 'slight shock', 'moderate shock', 'danger: severe shock' and the slightly worrying 'XXX').
(For those of you who don’t know the study, there wasn’t actually anyone getting an electric shock…it was all a trick. Sneaky these psychologists, aren’t they?)
Anyway, the results showed that of the 40 participants in the study, 26 delivered the maximum shocks while 14 stopped before reaching the highest levels. It is important to note that many of the subjects became extremely agitated, distraught and angry at the experimenter. Yet they continued to follow orders all the way to the end.*
Now, imagine you know nothing about SPSS, t-tests, ANOVAs etc. and you're just looking at these results. Surely you can tell that every single participant gave an electric shock at least once, and 65% of them (26 of 40) gave the maximum shock there was, just because they were told to by an authority figure. You don't need statistics to conclude that these participants would obey an authority figure even if it meant harming others, and even harming their own mental well-being.
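And in fairness, the only arithmetic hiding in that conclusion is a simple proportion — hardly an ANOVA. A quick sketch, using the figures from the study as reported above:

```python
# Descriptive summary of Milgram's obedience results, as reported above.
total_participants = 40
gave_maximum_shock = 26

proportion = gave_maximum_shock / total_participants
print(f"{proportion:.0%} of participants delivered the maximum shock")
```

Which rather supports the point: a descriptive summary like this is counting, not "statistics" in the scary sense.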
I'm not saying that we don't need statistics. But do we need statistics to UNDERSTAND data that is put in front of us? I don't think so. I, personally, only get confused once the statistics come in.
I appreciate that there are data sets that may be too big or too complex to work out what's going on just by looking at the data in front of you. And yes, for these we would need statistics to help us make sense of what we're looking at. But again, do we really NEED an in-depth understanding of statistics, or is it just a tool to help us make quick conclusions from our data?
I'm probably rather biased, as I've never really gotten my head around statistics very well, but still, I have yet to come across some data where I couldn't see what was going on without having to apply a lot of complicated statistics to it. Then again, maybe I just haven't looked hard enough 😉
Next week, I’ve been given somewhat of a free pass. So I hope you all enjoy surprises.