Undecided? See why you should join!
These are really interesting times to be involved in the social and behavioral sciences. In recent years we've seen high-profile fraud cases, a spotlight on questionable research practices, and several failures to replicate what we all thought were key results in our fields. So it's no wonder that more and more people are asking: can we still trust the social and behavioral sciences when so much of it seems like sloppy, rather than solid, science? If you're interested in this question, then this is the course for you, because this course is on research methods in the social and behavioral sciences, but with a strong focus on research integrity. You'll learn about the methodological concepts and principles that guide scientific research. You'll learn how scientific discovery should work, and how it actually works. In short videos that will look exactly like this, I'll explain how social science should work. I'll discuss the fundamental scientific principles and some history and philosophy of science. We'll look at the scientific method and its criteria for evaluation. And we'll cover research design, measurement, sampling, and research ethics.
I'll interview some of my colleagues on how science actually works, and how it can sometimes result in sloppy science.
You'll also find this out for yourself in the assignments. I'm from the University of Amsterdam, and my modest hope for you is that you'll become a more critical consumer of research results, and that you'll recognize sloppy science when you see it.
Welcome to Quantitative Methods!
Hi there, and welcome to the course. I'm so happy you joined us. Before we dive in, I thought I'd tell you a little bit about how the course is organized and what to expect. The main goal is very simple. I'll be happy if at the end of this course, you're familiar with the basic methodological concepts. If you understand what the scientific method is and what the standard research methods and procedures are.
Now since we only have six weeks, I've packed an enormous amount of information into the lecture videos. Almost every sentence contains new and relevant information.
Now to keep all this information digestible, I use very simple examples that I reuse as often as possible. Now, some of you will get really annoyed seeing the same examples about my cat and lonely elderly people over and over again, but please know that I repeat them for a reason. This way, you won't have to spend your energy trying to figure out complicated new examples all the time. You can focus all your efforts on understanding the methodological concepts.
This is where you work to reach goal number two: applying your new knowledge to evaluate the methodological quality of research.
The third goal of this course is to let you see the value of methodology. This is where the focus on research integrity comes in. The recent fraud cases and the spotlight on questionable research practices present a unique opportunity to learn why methodology is so relevant.
By discussing how and why things can go wrong, I hope that you'll see how important and interesting methodology really is, and that you'll maybe even be inspired to improve the current state of the art in your own field of interest.
To spark this interest, I've interviewed some of my colleagues about research integrity in their field. Now, these interviews are more laid back and conversational than the lecture videos.
I've also included small writing assignments that you can use to discuss research integrity with your fellow learners. I hope this is where the sparks will really fly.
Okay, if you want to know more about the course goals and the topics, you can find them on the general info page. You can find scheduling information about the lectures, quizzes, and assignments in the syllabus under Course Info.
My name is Ana. I lecture, do research, manage a pre-master program, and coordinate all blended learning efforts for the Social and Behavioral Sciences Department at the University of Amsterdam.
You can read all about my research interests and my teaching experience on the page with info on the course team.
You'll mostly be seeing and hearing me, but this course is truly a team effort. On the team page, you can read about our awesome illustrator; our chief director, cameraman, sound technician, and chief editor; our director, editor, and cameraman; our assistant editors and content editors; and, last but not least, the colleague who developed almost all of the quiz questions.
What makes knowledge scientific?
To really appreciate the scientific method, it helps to first look at non-scientific ways of gaining knowledge. We will start the course by considering what makes us decide we know something in day-to-day life. When and from whom do you accept a description or an explanation as true? Think about this question for just a minute and write down the sources of knowledge that first come to mind. Do this before watching the first video and then see whether you came up with the same sources!
1.01 Non-scientific Methods
To see why we need the scientific method, let's take a look at what people base their knowledge on in day-to-day life.
Let's consider my own strong belief that my cat, Misha, loves me most of all the people in his life. I just know he loves me more than anyone else. I feel this in my heart of hearts.
Now, is such a belief a good basis for knowledge? Well, no. Simply believing something doesn't make it so. Things we believe in strongly can turn out to be false. Also, what if someone else holds an opposing belief? What if my fiancé believes that Misha loves him more?
We could count the number of supporters for each belief, and require a majority or a consensus. But this isn't a very solid basis for knowledge either. Just because most people accept something as true doesn't mean it is true. For centuries, practically everybody thought the Earth was flat. Turns out they were wrong; it's round.
The opinion of authority figures like political leaders, experts, and scientists is just that: an opinion.
Authorities may have access to more or better knowledge, but they also have an important personal stake in getting their views accepted, their careers and reputations depend on it.
Suppose my fiancé gets a so-called cat whisperer to declare that Misha loves him more. Of course I'm going to be skeptical about this expert opinion, especially if my fiancé paid for it.
I can find my own cat expert to oppose my fiancé's cat whisperer. But then we would just have two opposing opinions again. What we need is evidence.
Well, suppose I regularly observe that after getting home from work, Misha always comes to sit on my lap and not my fiancé's.
I'm supporting my statement about the world, that Misha loves me more, with an observation of the world, namely on whose lap he sits after work.
This gathering of evidence through casual observation is a better foundation for knowledge than the previous ones. But it's still not good enough.
This is because people just aren't very good at observing. We tend to selectively observe and remember things that agree with our beliefs.
For example, I might very conveniently have forgotten that Misha always sits on my fiancé's lap at breakfast.
There are many biases besides selective perception that make casual observation a tricky source of knowledge. And the same goes for our ability to use logic.
Logical reasoning would seem like a solid basis for knowledge. But our informal logical reasoning isn't always consistent. There's an almost endless list of fallacies, or logical inconsistencies, that people regularly make in their day-to-day reasoning.
If we want to develop accurate knowledge and make sure that our explanations of the world are valid, then we need something more. We cannot depend on subjective, unverifiable sources like beliefs, opinions, and consensus. And we can't trust casual observation and informal logic, because they can be heavily distorted by our beliefs.
We need systematic observation, free from any bias, combined with consistently applied logic. In other words, we need the scientific method.
What are the essential qualities of a systematic method?
The casual, informal methods discussed earlier aren't very effective. We need a systematic approach combined with consistent application of formal logic. What does this mean? What principles come to mind when you hear the term "the scientific method"? Write them down! If you think of principles that are not discussed in the following video, then please post them on the forum. I'm very interested to hear what you think!
1.02 Scientific Method
We need the scientific method to make sure our attempts to explain how the world works result in valid knowledge. Opinions, beliefs, casual observation, and informal logic won't do. They're too subjective and too susceptible to error.
The scientific method is based on systematic observation and consistent logic. Applying the scientific method increases our chances of coming up with valid explanations. It also provides a way to evaluate the plausibility of our scientific claims, or hypotheses, and the strength of the empirical evidence that we provide for these hypotheses in our empirical study or research.
The scientific method can be described according to six principles. If our study meets these principles, then it can be considered scientific.
A hypothesis can then be compared to and compete with other scientific claims to provide the best possible explanation of the world around us.
The first principle requires that a hypothesis is empirically testable. This means that it should be possible to collect empirical or physical evidence or observations that will either support or contradict the hypothesis.
Suppose I hypothesize that my cat loves me more than he loves my fiancé. To test this hypothesis empirically, we need to collect observations or data.
Suppose we both agree that a cat is unable to express love the way humans do. Well, then there's nothing to observe. The hypothesis is not empirically testable.
The second principle is replicability. A study and its findings should be replicable, meaning we should be able to consistently repeat the original study.
If the expected result occurs only once, or in very few cases, then the result could just have been coincidental. A hypothesis is more plausible if it's repeatedly confirmed. And this requires that it's possible to repeat or replicate a study.
Let's say I have convinced my fiancé that if the cat loves someone more, the cat will spend more time on their lap. Now suppose I observed that this week, the cat sat on my lap twice as long as on my fiancé's lap.
Well, the hypothesis would be considered plausible if we can show that the result is the same in the following weeks.
But what if the cat dies after the first week of observation? Then we would not be able to check the hypothesis for ourselves. The study is no longer replicable.
To see if results replicate, we have to be able to repeat the study as it was originally conducted. Suppose we do something differently and we find different results. Is this a failure to replicate? No, the failed replication could be caused by our change in procedure.
The third principle, objectivity, aims to allow others to repeat the study for themselves, without needing the original researcher.
Anybody should be able to get the same results based on the description of the assumptions and the procedures. A researcher should therefore be as objective as possible about assumptions, concepts, and procedures. This means that all these elements should be clearly and explicitly defined, leaving no room for subjective interpretation.
Suppose I count my cat's face rubbing as an expression of love. But I fail to explicitly tell my fiancé about this. Then my procedure for measuring love is subjective. Even if we systematically observe the cat at the same time, the result will depend on who is observing him. I will conclude the cat shows love more often than my fiancé will.
In this example the results are subjective and therefore incomparable and we might not even be aware of it. If we do not explicitly discuss and agree on what counts as love and what doesn't, then our measurement procedure for cat love is not objectively defined.
The fourth principle is transparency. Being transparent is closely related to being objective. In science, anyone should be able to replicate your results for themselves, your supporters but also your critics.
This means that researchers need to publicly share what assumptions were made, how concepts are defined, what procedures were used, and any other information that's relevant for accurate replication.
The fifth principle states that a hypothesis should be falsifiable. Falsifiability is a very important principle. A hypothesis is falsifiable if we're able to at least imagine finding observations that will contradict our hypothesis. If we can't imagine what such contradictory data would look like, well, then the hypothesis cannot be disproven.
Ask any person with a very strong belief, for example a religious belief, what evidence would convince them that their belief is false. No matter what contradictory evidence you propose, they will probably argue that these facts do not contradict their strong belief. This puts statements based purely on belief, such as religion, outside the domain of science. If there is no form of evidence that will be accepted as disproving a hypothesis, then it's pointless to argue about the hypothesis or to even look for confirmation, since the conclusion is already drawn.
The sixth principle is that a hypothesis should be logically consistent, or coherent. This means there shouldn't be any internal contradictions, for example, a supporting assumption that disagrees with the hypothesis itself.
This means, among other things, that researchers should be consistent in what they count as confirmatory and contradictory evidence.
I hypothesize that my cat loves me more, and so I expect him to sit on my lap longer. But what if he spends more time on my fiancé's lap? I could say that the cat can feel that sitting on my lap is uncomfortable for me, so he sits on my lap less often precisely because he loves me more. Of course, this is logically inconsistent. I've changed the interpretation of the results after the data are in to suit my hypothesis. Incidentally, this also makes my hypothesis unfalsifiable. I will always conclude that my cat loves me, whether he sits on my lap often or not at all.
So to summarize: the scientific method requires that we formulate hypotheses that are empirically testable, meaning the hypothesis can be supported or contradicted by observations.
Replicable, meaning we should be able to consistently repeat the original study.
Objective, meaning the hypothesis can be tested independently by others. Transparent, meaning the hypothesis and results are publicly shared so they can be tested by anyone.
Falsifiable, meaning that finding contradictory evidence is a possibility. And finally, logically consistent, meaning that the hypothesis is internally consistent, and the conclusion, to support or reject the hypothesis, based on the observations, is logically sound.
One final point. The scientific method is only effective when it's used with the right attitude. In order to come up with better hypotheses, researchers need to be critical of their own studies and those of others. This means they have to be open and transparent. They have to accept critique and let go of their pet hypotheses if others provide better explanations. Only then can science function like an evolutionary system, where only the fittest, or most plausible, hypotheses survive.
What's the difference between a hypothesis and a theory?
We use the scientific method to support and evaluate scientific claims about how the world works. These scientific claims come in different shapes and sizes. It's good to be familiar with the terms we use for different types of claims. The term 'theory', especially, is used very differently inside and outside science. Before you watch the video, answer this question: which is more certain, a hypothesis or a theory?
1.03 Scientific Claims
Until now, I've talked about statements, hypotheses and explanations of the world around us. And I've used these general terms without specifying what they mean exactly. It's time to clarify this.
Scientific claims about the world around us can be categorized into different types. Some scientific claims describe or explain more phenomena than other claims. Also, some scientific claims provide more plausible descriptions or explanations of the world around us. We find some claims to be more certain, better supported by evidence, than others.
In science, the most basic claim is an observation. An observation can be an accurate or inaccurate representation of the world. Suppose I observe that my cat, which has a ginger colored coat, weighs six and a half kilograms.
Most scientists would accept this observation as a probably fairly accurate reflection of a specific aspect of the world around us, assuming the weight scale is valid and reliable.
But in terms of explanatory power, they would find this observation very uninteresting. Because an observation on its own is not very informative. It doesn't describe a general relation between properties, and it doesn't explain anything.
That doesn't mean observations are unimportant. Observations are the building blocks of the empirical sciences. But they're not very useful on their own. An observation on its own is the least interesting type of scientific claim, since it has no explanatory power.
A hypothesis is a statement that describes a pattern or general relation between properties. A hypothesis can also explain the pattern that it describes.
Take this hypothesis: ginger cats will, on average, be overweight more often than cats with a different fur color. And I can extend this hypothesis with an explanation for the relation between fur color and obesity, for example by stating that the genes for ginger fur color and for signaling fullness of the stomach are linked.
The plausibility of a hypothesis can range from very uncertain to very certain. A hypothesis can be unsupported, and therefore uncertain, for example if it's new and still untested. A hypothesis can also be strongly supported by many empirical studies, and therefore more certain.
Laws are very precise descriptions of relations or patterns. So precise that they're usually expressed as mathematical equations.
For example, if I drop my cat's food bowl from a height of 56 meters, and I know the Earth's gravitational constant, then I can predict very accurately how long it will take for the bowl to hit the ground by using Newton's gravitational laws.
Laws allow for very precise predictions, but they usually don't explain the relationships that they describe. In this case, between distance, time, and gravity.
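As a quick worked example (a sketch assuming the standard value $g \approx 9.81\,\text{m/s}^2$ and ignoring air resistance; the video doesn't state these numbers), the law for an object falling from rest relates distance, time, and gravity as:

$$h = \tfrac{1}{2}\,g\,t^{2} \quad\Longrightarrow\quad t = \sqrt{\frac{2h}{g}} = \sqrt{\frac{2 \times 56\ \text{m}}{9.81\ \text{m/s}^2}} \approx 3.4\ \text{s}$$

The equation predicts the fall time very precisely, but notice that it says nothing about why objects accelerate towards the Earth in the first place.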
Of course, in the social sciences, laws are hardly ever formulated. We still understand too little of people and groups to be able to specify patterns in their behavior with such a degree of precision that we can postulate scientific laws.
In day-to-day life, 'theory' means an unsubstantiated statement, an educated guess. In science, however, 'theory' refers to a broad, overarching explanation of many related phenomena.
In the natural and behavioral sciences, a theory is built up out of hypotheses that are very strongly supported by empirical evidence.
In the social sciences, where qualitative and historical comparative approaches are more dominant, a theory is considered highly plausible when it has withstood attempts to refute it based on logical grounds as well as historical or qualitative analysis.
So in science, theories are the most well-established explanations, the closest thing to certainty that we have, because they consist of hypotheses that have survived the scrutiny of the scientific method.
There have been many well-substantiated theories that were ultimately replaced, like Newton's mechanics, which made way for the special theory of relativity. In science, there is no certainty, only a provisional best explanation.
Who developed the scientific method and when?
Now that you know the basics, we will briefly see how the scientific method developed throughout history. We'll start with the ancient Greeks and end in our own, modern times. Don't worry about being quizzed about exact dates, this isn't a history class. Just see if you can think of some names of important philosophers or scientists that you associate with scientific methods.
Also, I realize that this view of history is strongly oriented towards western Europe. If you can provide information about parallel developments - for example in China and India - in the forums, this would be most welcome! While watching these videos, challenge yourself: for each thinker and view I describe, try to decide whether you agree with their views and why (not).
1.04 Classical Period
The first thinkers to seek natural or earthly explanations instead of divine explanations were ancient Greek scholars like Thales, Pythagoras, and Democritus. But the first to really consider how to obtain knowledge were Plato and Aristotle more than 2300 years ago.
To Plato, the external world and the objects in it are just imperfect reflections or shadows of ideal forms.
Plato was a philosophical realist. He thought reality, in his case, the world of forms, exists independently of human thought. To Plato, these forms are not just abstract concepts in our mind, they really exist but separately from the physical world.
Plato thought that since the physical world we see is an imperfect reflection of reality, we can't learn the true nature of reality through sensory experience. He insisted that knowledge about the ideal forms can only be gained through reasoning. Plato is therefore referred to as a rationalist.
Plato's student, Aristotle, was a realist just like Plato. He thought that reality exists independent of human thought. But to Aristotle, reality is the physical world. There is no separate plane of existence where abstract forms live.
Aristotle also disagreed with Plato on how we can gain knowledge about the true nature of things. Aristotle was an empiricist. He believed our sensory experience gives an accurate representation of reality. So we can use our senses to understand reality.
But that doesn't mean Aristotle was interested in observations only. He still saw reasoning as the best way to understand and explain nature. He, in fact, developed formal logic, more specifically, the syllogism.
Here's an example of a syllogism: all humans are mortal; all Greeks are humans; therefore, all Greeks are mortal.
If the two premises are true then the conclusion is necessarily true. By using this conclusion as a premise in a new syllogism, our knowledge builds up.
But, of course, this only works if the premises are actually true. Consider this one: all mammals are furry; all cats are mammals; therefore, all cats are furry. The first premise is false, which means the conclusion is not necessarily true. Not a good basis for building knowledge.
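To make this distinction concrete, here is a minimal sketch of the syllogism in Lean (the names Person, Human, Greek, and Mortal are hypothetical predicates, introduced just for this illustration). The proof goes through on the form of the argument alone; whether the premises are actually true of the world is a separate question that logic cannot settle.

```lean
-- Aristotle's syllogism, formalized: if the two premises hold,
-- the conclusion follows by form alone.
variable (Person : Type) (Human Greek Mortal : Person → Prop)

example
    (h1 : ∀ p, Human p → Mortal p)  -- major premise: all humans are mortal
    (h2 : ∀ p, Greek p → Human p)   -- minor premise: all Greeks are humans
    : ∀ p, Greek p → Mortal p :=    -- conclusion: all Greeks are mortal
  fun p hg => h1 p (h2 p hg)
```

The same derivation would check just as well with 'furry' in place of 'mortal': logic guarantees validity, not the truth of the premises.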
So how can you be sure a premise is true? Well, you can prove it using another syllogism. But, of course, you have to keep proving those premises, so there has to be a set of starting premises that you can accept as undisputedly true.
According to Aristotle, these fundamental premises can be determined through observation of basic patterns or regularities in the world.
Unfortunately, he was unaware that some of his own observations were too selective, leading to fundamental premises that we know now are just plain wrong.
For example, he thought, based on his observations, that insects have four legs and that men have more teeth than women.
Aristotle probably came to these conclusions based on observations of the mayfly, which walks on four legs but, like other insects, actually has six legs.
It's also likely that he examined his own teeth and those of male friends, but only examined the teeth of servant women, who were more likely to be malnourished and have fewer teeth. He didn't realize it, but his observations were inaccurate.
Even so, Plato's and Aristotle's views remained dominant for almost 2,000 years. It took until the end of the 16th century for people to realize that Plato's and Aristotle's views were flawed.
How did the scientific method develop after Plato and Aristotle? Well, the ancient Greeks made many scientific advances. For example, Ptolemy described the movement of planets by placing the earth at the static center of the universe with the planets, including the sun, in a circular orbit, each moving in their own little cycle along their orbital path.
These cycles within cycles were necessary to explain the weird phenomenon of retrograde motion, where planets would sometimes appear to move backwards.
Ptolemy's model allowed for accurate predictions, but it's thought that people didn't really believe that it described the actual motion of the planets. It only ‘saved the phenomena’.
After the demise of the Greek city states, during the rise and fall of the Roman Empire, and the first centuries of the Middle Ages, very few scientific advances were made. Plato's and later Aristotle's philosophical ideas remained dominant until a new scientific revolution at the end of the 16th century, starting the Age of Enlightenment.
First, around the turn of the 10th century, Arab and Persian scholars such as Ibn al-Hasan, Al Biruni, and Ibn Sina, started using systematic observation and experimentation, emphasizing unbiased observation and not just logical reasoning.
Second, building on the work of their predecessors, the Englishmen Grosseteste and Roger Bacon advocated the use of both induction and deduction.
Induction means using particular observations to generate general explanations. Deduction means predicting particular outcomes based on general explanations.
A third important development was the invention of the printing press. This created the perfect conditions for a scientific revolution. More scholarly works became available to a wider audience.
Among these works was ‘De Revolutionibus Orbium Coelestium’ by Copernicus. This was the fourth important development to lead up to the Scientific Revolution.
In Copernicus's new model of planetary motion, the planets, including Earth, moved in circles around the sun.
Now, this didn't exactly agree with religious doctrine. The church accepted Aristotle and Ptolemy's model with Earth at the center of the universe.
Many historians believe Copernicus was afraid to publish his work because he feared the church would punish him for contradicting their doctrine.
He did eventually publish his new model, but he added a special dedication to the pope, arguing that if Ptolemy was allowed to formulate a model with strange cycles that only saved the phenomena, well, then he should be given the same freedom.
He was implying that his model was also intended, not as an accurate representation, but just as a pragmatic model.
Whether he truly believed this is unclear; he died shortly after the publication, which actually did not cause an uproar until 60 years later. Now, according to many, the Scientific Revolution and the Age of Enlightenment started with Copernicus. But others feel the honor should go to the first man to refuse to bow down to the Catholic Church and maintain that the heliocentric model actually described physical reality.
This man, of course, was Galileo Galilei.
1.05 Enlightenment
Galileo is considered the father of modern science because he set in motion the separation of science from philosophy, ethics, and theology, which were all under strict control of the Catholic Church.
Others had already quietly advocated a scientific approach based on observation and experimentation instead of theological reasoning. But Galileo was the first to do this very explicitly.
He also opposed several of Aristotle's theories, which were accepted by the Catholic Church as doctrine.
For example, he disproved the Aristotelian view that heavy objects fall to the Earth more quickly than lighter objects. Galileo did this with a thought experiment, showing that besides observation, he also valued logical reasoning.
Of course, he's most famous for disputing the Aristotelian and Ptolemaic view that the Earth is the center of the universe. He supported Copernicus's heliocentric view, where the sun is the center of the universe.
Galileo made systematic observations of the planet Venus that could only be explained if the planets revolved around the sun instead of the Earth.
Now, to Copernicus, the heliocentric model just saved the phenomena, meaning that the model accurately predicts our observations of the planets but doesn't actually correspond to physical reality. Galileo, by contrast, maintained that it described reality itself.
The Catholic Church did not appreciate Galileo's disruptive ideas. They brought him before the inquisition and put him under house arrest until his death.
Although Descartes also rejected many of Aristotle's ideas, he did agree with Aristotle that knowledge should be based on first principles.
Because he felt our senses and mind can easily be deceived, he decided to discard every notion that's even the least bit susceptible to doubt.
And once he'd removed everything that he doubted, he was left with only one certainty, namely that he thought and therefore he must exist. Cogito, ergo sum.
This eventually led him to conclude that we only know the true nature of the world through reasoning.
Francis Bacon thought, just like Descartes, that scientific knowledge should be based on first principles. But in contrast to Descartes, Bacon maintained that this should happen through inductive methods.
Induction means that observations of particular instances are used to generate general rules or explanations.
Suppose every time I have encountered a swan, the swan was white. I can now induce the general rule that all swans are white.
Bacon believed that all knowledge, not just the first principles, should be obtained only through this inductive method: generating explanations based on sensory experiences.
This is why he is considered the father of empiricism, where 'empiric' means relating to experience or observation.
Now David Hume took empiricism to the extreme, accepting only sensory data as a source of knowledge and disqualifying theoretical concepts that didn't correspond to directly observable things.
This led him to conclude that the true nature of reality consists only of the features of objects, not of the physical objects themselves.
This extreme form of empiricism is called skepticism. I'll give you an example. Let's take as a physical object, a cat.
Now what makes this cat a cat? Well, its properties. Its tail, its whiskers, coloring, fur, body shape.
If you take away all the properties that make it a cat, you're left with, well, nothing. The essence of a cat is in its features.
Hume also showed us the problem of induction. Even though you've consistently observed a phenomenon again and again, there is no guarantee your next observation will agree with the previous ones.
For a long time, from the perspective of Europeans at least, all recorded sightings of swans showed that swans are white. And then black swans were discovered in Australia.
In other words, no amount of confirmatory observation can ever conclusively show that a scientific statement about the world is true.
So if you require that all knowledge must be based on observations alone, that means you can never be sure you know anything.
Partly in reaction to Hume's skepticism, at the start of the 19th century a philosophical movement known as German Idealism gained popularity.
The idealists believed that we mentally construct reality. Our experience of the world is a mental reconstruction. Scientific inquiry should therefore focus on what we can know through our own reasoning. Now, the idealists concerned themselves mainly with questions about immaterial things like the self, God, substance, existence, and causality. They were also criticized heavily for using obscure and overly complicated language.
On the eve of the second Industrial Revolution, around the turn of the 19th century, scientists started to lose patience with the metaphysics of the idealists.
Their musings on the nature of being had less and less relevance in a period where scientific, medical, and technical advances were rapidly being made.
At the start of the 20th century, a new philosophy of science came on the scene that proposed a radical swing back to empiricism. This movement is called logical positivism.
1.06 Modern Science
After the First World War, a group of mathematicians, scientists and philosophers formed the Wiener Kreis, in English called the Vienna Circle.
They were unhappy with the metaphysics of the German idealists, who focused on first principles of knowledge and the fundamental nature of being.
The Vienna Circle, with members like Moritz Schlick, Otto Neurath, and Rudolf Carnap, felt idealist questions about the self and existence were meaningless because they were unanswerable. They proposed a new philosophy of science called Logical Positivism.
The logical positivists redefined science as the study of meaningful statements about the world. Now, for a statement to be meaningful, it has to be verifiable, which is known as the verification criterion. It means that it should be possible to determine the truth of a statement.
There are two types of meaningful statements: analytic statements and synthetic statements. Analytic statements are tautological, necessarily true. Examples are 'bachelors are unmarried' and 'all squares have four sides'. They are a priori statements, like definitions and purely logical statements.
They don't depend on the state of the world, and therefore don't require observation to be verified. They can be used in mathematics and logic. New combinations of analytic statements can be verified with formal logic.
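As a toy illustration of this a priori character, here is a minimal Lean sketch (the Man structure and isBachelor definition are hypothetical, made up for this example). Because 'bachelor' is defined as an unmarried man, the claim 'all bachelors are unmarried' holds by definition alone:

```lean
structure Man where
  married : Bool

-- By definition, a bachelor is an unmarried man.
def isBachelor (m : Man) : Prop := m.married = false

-- Analytic, a priori: "all bachelors are unmarried" follows
-- from the definition alone; no observation of the world is needed.
example (m : Man) (h : isBachelor m) : m.married = false := h
```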
Synthetic statements, by contrast, depend on the state of the world. Examples of synthetic statements are 'all bachelors are happy' and 'all cats are born with tails'. These statements are a posteriori: they can only be verified through observation. The logical positivists thought these statements should always be publicly accessible.
Also, statements are not allowed to refer to unobservable entities like electrons or gravity, because these can't be observed directly.
If a statement makes reference to an unobservable entity, or is neither tautological nor logically or empirically verifiable, then that statement is meaningless. Subjects like metaphysics, theology, and ethics were thereby nicely excluded from science.
Of course, the criterion of verification through observation couldn't deal with the problem of induction. No amount of confirmatory evidence is ever enough to definitively prove or verify a statement. It's always possible that a contradictory observation will be found in the future. So the strong criterion of verification was weakened by requiring only confirmation instead of verification.
Another very strict rule also had to be changed. Not allowing reference to unobservable entities created big problems. Entities like electrons, gravity, and depression cannot be observed directly, but they are indispensable in scientific explanations.
This, together with the problem of induction, led to a more moderate version of logical positivism, called logical empiricism.
Karl Popper, who was nicknamed 'the official opposition' by the Vienna Circle, was one of their main critics. He argued that the distinction between meaningful and meaningless statements should be based on the criterion of falsification, not verification.
Karl Popper argued that we can never conclusively verify or prove a statement with observations but we can conclusively disprove it with contradictory evidence.
Popper proposed that scientists should actively engage in risky experiments. These are experiments that maximize the chance of finding evidence that contradicts our hypothesis. If we find such contradictory evidence, we inspect it for clues on how to improve our hypothesis.
Now, Willard Van Orman Quine showed that this criterion is also problematic. In the Duhem-Quine thesis, he states that no hypothesis can be tested in isolation. There are always background assumptions and supporting hypotheses.
Now, if contradictory evidence is found, then according to Popper, our scientific explanation is wrong and should be rejected.
But according to Quine, we could always reject one of the background assumptions or supporting hypotheses instead. This way we can salvage the original hypothesis.
Thomas Kuhn pointed out that science doesn't develop out of strict application of either the verification or the falsification principle. Hypotheses aren't immediately rejected or revised if the data don't agree with them. Science takes place within a certain framework, or paradigm.
Hypotheses are generated that fit within this paradigm. Unexpected results lead to revision of hypotheses, but only as long as the revised hypotheses still fit the framework. If this is impossible, the results are just ignored.
But when more contradictory evidence accumulates, a crisis occurs, which leads to a paradigm shift. A new paradigm is adopted and the cycle begins again.
Even in its weaker form of logical empiricism, logical positivism couldn't stand up to the critique of Popper, Quine, and others. Since then, we've progressed to a more pragmatic philosophy of science.
Today's scientists follow the hypothetico-deductive method: combining induction and deduction, requiring falsifiability, and accepting repeated confirmation only as provisional support for a hypothesis.
Philosophically, many scientists would probably be comfortable with Bas van Fraassen's constructive empiricism, which states that science aims to produce empirically adequate theories.
Accepting a scientific theory doesn't mean accepting it as a definitive, true representation of the world. According to a constructive empiricist, a scientific statement is accepted as true as far as our observations go. Whether the statement truthfully represents the unobservable entities simply can't be determined.
We just have a current best explanation for our observations. That's it.
What is your philosophy of science?
I've discussed the main scientists and philosophers who've shaped the scientific method into what it is today. Their views can be categorized into different philosophical views. These views differ according to how they think knowledge can be obtained and what the nature of the world is (which determines what is knowable). In fact, I've already discussed many of these views in the videos on the history of the scientific method. If you have the time, try to rewatch these videos and see if you can spot these philosophical views.
1.07 Epistemology
Before you accept the hypothetico-deductive method as the best way to gain knowledge about the world, there are at least two important philosophical questions about knowledge that you should answer for yourself.
The first question concerns the nature of reality. What is real? What exists? And therefore, what is out there that we can gain knowledge of in the first place?
The philosophical field that deals with these types of problems is called ontology, the study of being. The second question concerns the way in which knowledge can be acquired.
Assuming there is a reality out there, that is in principle knowable, then what knowledge of reality is accessible to us, and how do we access it?
The field of philosophy that is concerned with these types of problems is called epistemology. The study or theory of knowledge.
I'll start with the last question first. Assuming there is a reality out there that is knowable, how do we obtain this knowledge?
Well there are many different epistemological views. I'll just discuss the two most important views here.
The first view is rationalism. According to rationalists, we can use our mind's capability for logical, rational thought to deduce truths about the world, without having to resort to experience.
Philosophers like Plato and Descartes coupled rationalism with the idea that at least some of the abstract concepts about the structure of nature are innate. We're born with them.
That means, our mind simply has the capability of understanding these concepts. Because we already know them, we just have to remember or recognize them, by using our reasoning.
Empiricism opposes this view. According to the empiricist view, sensory experience is the most important way, and according to some strict empiricists, even the only way to obtain knowledge about the world.
Aristotle is considered the first empiricist. He thought that the foundational truths about nature come from sensory experience. We can obtain more knowledge through deductive reasoning, but observation is the basis of all our knowledge.
Aristotle didn't believe in innate ideas. In fact, he coined the term ‘tabula rasa’, to indicate everyone was born as a blank slate. Our knowledge is not predefined. The mind is open to any idea.
Of course, Aristotle wasn't a radical empiricist. He didn't object to rational thought entering into the mix, and he wasn't worried about using abstract, not directly observable concepts.
I guess Galileo can be considered a moderate empiricist. He put a lot of emphasis on observation and experimentation, but he also relied heavily on logical reasoning. Galileo in fact famously said, that ‘the book of nature is written in the language of mathematics’.
He had no problem using thought experiments, and included references to unobservables in his hypotheses.
Later empiricists such as Bacon, but especially Hume and the logical positivists, were very strict empiricists, maintaining that only sensory experience could lead to true knowledge about the world.
They considered statements about universal properties that cannot be observed directly to be meaningless.
The contemporary flavor of empiricism is van Fraassen's constructive empiricism. It emphasizes the role of sensory experience in both inductive and deductive methods. But it allows for theoretical terms that don't have physical, directly observable counterparts.
In constructive empiricism, the aim is to come up with empirically adequate explanations, which can be considered true, if they accurately describe the world, as far as the observables go.
1.08 Ontology
Let's turn to the subject of ontology, or the study of being, which asks, what is the nature of reality?
Well, there are many competing views. And before we can dive into the philosophical views themselves, I'll first explain two main points on which these views differ from each other. The first main point is whether reality exists independently of human thought.
When we refer to objects we perceive in the world, are we referring to actual entities that exist outside of us? Or are we referring to mental representations that are constructed by our mind and that can only be said to exist in our mind?
The second main point concerns the ontological status of particulars and universals. With particulars, I mean specific instances or occurrences in which a property can be observed.
Let me give an example. Love is a general property that we cannot observe directly, but that is instantiated or expressed in behavior.
So when my cat climbs on my lap and takes a nap, that could be a particular instance of the universal property love.
Another example of an unobservable universal property is gravity. Gravity is expressed in particular instances. For example, when I drop my cat's food bowl and it falls to the ground.
So let's look at some different ontological views and see where they stand on the question of particulars versus universals and the question whether reality exists externally or only in our mind.
Idealism is a philosophical view that states that reality as we perceive it exists entirely in our mind. The existence of an external, physical world is irrelevant since our perception of it is determined by our mental processes. Reality is in effect a mental construct. Gravity and love exist, but only in our mind. And the same goes for their particular occurrences.
So an idealist would say that the cat sleeping on my lap and the bowl falling to the ground are also mental constructions.
The question whether universal, unobservable entities are real, external, independent entities is therefore less relevant for Idealism, because both particulars and universals are considered to exist; they're just both mental representations.
Idealism can be contrasted with Materialism. Materialism is a position that accepts an external world independent of our mind. Materialism also states that everything in this independent physical reality consists entirely of matter.
This means that everything is a result of the interaction of physical stuff, including our consciousness, feelings and thoughts. These are all just by-products of our brain interacting with the physical world.
Materialism is only about what stuff is made of. Like Idealism, it's not strongly associated with a view on the distinction between universals and particulars.
Realism is a different position. Just like Materialists, Realists maintain that external reality exists independent of human thought. But Realists also maintain that universals like love and gravity are real.
Platonic Realism refers to Plato's position that universals like gravity and love really exist independent from our observation, but on a separate abstract plane.
Scientific Realism is more moderate and states that it's possible to make consistently supported claims using universals in statements about observable phenomena.
In Scientific Realism, universals like love and gravity are therefore given the same ontological status as observable particulars.
Unobservables are assumed to exist since they're useful and often even necessary to formulate successful scientific claims.
Finally, we have Nominalism. This view opposes realism as far as universals are concerned. It accepts reality as independent of human thought, but denies the existence of universals. In Nominalism, there is no such thing as gravity or love. There are only falling objects and cats that frequently sit in your lap purring.
According to Nominalists, we just use the terms gravity and love because they help us to make sense of the world, but these universals don't actually exist.
1.09 Approaches
The development of the scientific method I've discussed up until now, was focused mainly on the natural sciences. Physics, astronomy, biology. But during the second half of the 19th century, the social sciences started to arrive on the scene.
During this time, people were shifting back to the ontological view of realism, which assumes that the physical world is real. The world we perceive is external, and exists independently from our thought.
The epistemological view was becoming more positivistic, meaning that scientists thought that we can gain knowledge about the true nature of the world through observation and experimentation.
This realist, positivist view was mostly applied to natural phenomena. But as the social sciences developed and became distinct scientific fields, the question arose whether the realist views should also be applied to social and psychological phenomena.
According to the view called objectivism, the ontological position of realism does indeed apply. Psychological and social phenomena like intelligence and social cohesion are external, independent properties that exist separately from our mental representation of these properties.
According to constructivism, the nature of social phenomena depends on the social actors involved. This means reality is not independent and external. Instead, reality is considered primarily a mental construction that depends on the observer and the context.
For example, properties like happiness or femininity are not external, not unchanging, and cannot be objectively defined.
How these properties are perceived and what they mean depends on the culture and social group the observer is part of, and on the specific historical period.
So, if our psychological and social reality is constructed, subjective, elusive, how do we obtain any knowledge about it? What epistemological position fits the ontological position of constructivism?
The answer lies in the interpretivist views. These views all assume that a researcher's experience or observation of a social phenomenon can be very different from how the people who are involved in the social phenomenon experience it themselves.
The focus should therefore lie with understanding the phenomenon from the point of view of the people involved.
The three interpretivist views I want to discuss are called hermeneutics, phenomenology, and verstehen. They differ slightly on how this understanding of psychological and social reality can be gained.
The term hermeneutics comes from the theological discipline concerned with interpretation of scripture.
Hermeneutics aims to explain social phenomena by interpreting people's behavior within their social context. Researchers need to take context into account and try to understand how people see the world, in order to understand their actions in it.
Phenomenology is closely related to hermeneutics. It starts from the premise that people are not inert objects. They think and feel about the world around them. And this influences their actions. To understand their actions, it's necessary to investigate the meaning that they attach to the phenomena that they experience.
Now to achieve such an understanding of someone else's experiences, researchers need to eliminate as many of their own preconceived notions as they possibly can.
Verstehen is the third interpretivist view. It has close ties with hermeneutics and phenomenology. Verstehen is mainly associated with sociologist Max Weber.
Researchers need to assume the perspective of the research subjects to interpret how they see the world. Only then can a researcher try to explain their actions.
For example, if European researchers investigate happiness in an isolated Amazonian tribe, they should do so from the tribe's perspective, taking the tribe's social context into account.
For this tribe it might be that the community is more important than the individual. This could mean that happiness is considered a group property that does not even apply to individuals.
Now in order to grasp such a totally different view of the world, researchers need to immerse themselves in the culture of the person, or the group that they're investigating.
Now, of course, there are some problems with the constructivist, interpretivist view. First, there's the problem of layered interpretation. The researcher interprets the subjects' interpretations, and then interprets the findings again as they're placed in a framework or related to a theory. With every added layer of interpretation, there's more chance of misinterpretation.
A second problem concerns generalization. When, as in our example, happiness is subjective and means different things in different cultures, we just cannot compare it across cultures.
This means we can never come up with general theories or universal explanations that apply to more than just particular groups and particular periods of time.
A third problem is the difference in frames of reference. If the researcher's frame of reference is very different from the subjects', it can be hard for the researcher to assume the subjects' point of view. This makes it hard to find out what the relevant aspects of the social context even are.
The constructivist-interpretivist view is generally associated with a qualitative approach to science.
That means observations are made through unstructured interviews or participatory observation, where the researcher becomes part of a group to observe it.
The data are obtained from one or just a few research subjects. The data are analyzed qualitatively by interpreting text or recorded material.
In contrast, the objectivist-positivist view is associated with quantitative research methods. Observations are collected that can be counted or measured, so that data can be aggregated over many research subjects. The subjects are intended to represent a much larger group, possibly in support of a universal explanation. And the data are analyzed using quantitative statistical techniques. Now, although a qualitative approach is usually associated with a constructivist view of science and a quantitative approach with an objectivist view, there is no reason to limit ourselves to only qualitative or only quantitative methods.
1.10 Goals
Of course the ultimate, general goal of science is to gain knowledge, but we can distinguish more specific goals. These goals differ in terms of the type of knowledge we want to obtain, and for what purpose we want to obtain it.
Universalistic research aims to describe or explain phenomena that apply to all people, all groups, or all societies. For example, we could hypothesize that playing violent computer games leads to more aggressive behavior in general. The specific game, or the type of person playing it, is not relevant here, because we assume the relation between violent game play and aggression holds for any violent game, be it GTA, Call of Duty, or any other game.
The scientific method can also be used for particularistic research. Particularistic research is aimed at describing or explaining a phenomenon that occurs in a specific setting, or concerns a specific group.
For example, we could investigate the change in the number of Dutch teenagers hospitalized for alcohol poisoning, just after the legal drinking age was first raised from 16 to 18 years in the Netherlands.
The point here is to investigate the size of an effect for a specific group, in a specific location, during a very specific time.
We wouldn't necessarily expect to find the same effects in a different country or in ten years' time if the drinking age was changed again.
Okay, so the goal of research can either be universalistic or particularistic, or in less fancy terms, aimed at obtaining general versus specific knowledge.
A very closely related, and largely overlapping distinction, is between fundamental and applied research. Applied research is directly aimed at solving a problem. It develops or applies knowledge in order to improve the human condition.
Suppose we want to help depressed people, and we think that depression is caused by loneliness. We could create a program, that aims to lower depression, by making people less lonely. We could give lonely depressed people a cat to take care of, and investigate if their feelings of depression actually go down, now that they're no longer lonely.
Applied research can be contrasted with fundamental research. In fundamental research, the aim is to obtain knowledge, just for the sake of knowing. The only purpose of fundamental research is to further our understanding of the world around us, nothing more. It doesn't have an immediate application. It doesn't directly solve a problem.
For example, we might investigate the relation between loneliness and depression, in a large scale survey study, to see whether people who feel lonelier, also feel more depressed, and vice versa.
The aim here is to show there's a relation between loneliness and depression. Maybe we want to see if this relation exists for both men and women, and for different cultural and age groups.
But note that we do not state how depression can be treated. The goal is to know more about the relationship, not to help depressed people.
Most fundamental research is universalistic. But in some cases, fundamental research can be particularistic, when research is done in a very specific setting.
For example, we could investigate the relation between playing violent computer games and aggressive behavior in a very specific group: young delinquent first offenders in Amsterdam who all come from privileged backgrounds.
This very specific problem group could provide interesting new insight into the relation between violent game play and aggression.
Applied research is often particularistic, aimed at solving a problem for a specific group in a specific context. But it can also be universalistic. Take the cat intervention aimed at lowering depression. We could expand this applied research study by comparing a group of people who take care of a friendly cat that seeks their company with a group whose cat avoids any contact.