Introduction to Correlational Research in Cognitive Psychology
- Correlational research explores relationships between two or more variables without implying causation.
- Examples include studying correlations between height and weight, family background and career choices, or physical attractiveness and social help.
Visualizing Relationships: Scatter Plots and Regression Lines
- Data from multiple participants on two variables can be visually represented using scatter plots.
- The regression (or best-fit) line summarizes the trend by minimizing the squared distance of data points from the line.
- Patterns in scatter plots can indicate types of relationships, such as linear, curvilinear, or no relationship.
Types of Relationships Between Variables
- Positive Linear Relationship: Both variables increase together (e.g., higher optimism associated with healthier behaviors).
- Negative Linear Relationship: One variable increases while the other decreases.
- Curvilinear Relationship: Variables relate in a nonlinear, curved pattern.
- No Relationship: Variables do not show any systematic pattern.
Pearson Correlation Coefficient (r)
- Quantifies strength and direction of a linear relationship.
- Ranges from +1 (perfect positive) to -1 (perfect negative), with 0 indicating no linear relationship.
- Strength categories: around 0.20 (weak), above 0.80 (strong).
- Significance testing (p-values) determines if the correlation is statistically reliable.
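The coefficient described above can be sketched in a few lines of code. This is an illustrative pure-Python computation, not part of the lecture itself; the optimism and health scores below are hypothetical, and a real analysis would use a statistics package that also reports the p-value.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: optimism (1-9) and reported health behavior (1-25)
optimism = [2, 3, 4, 5, 6, 7, 8, 8, 9]
health = [5, 7, 6, 10, 12, 15, 14, 18, 20]

r = pearson_r(optimism, health)
r_squared = r ** 2  # coefficient of determination: proportion of variance shared
print(f"r = {r:.2f}, r^2 = {r_squared:.2f}")
```

With these made-up numbers r comes out strongly positive; the sign gives the direction of the relationship and the magnitude its strength (around .20 weak, above .80 strong).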
Interpretation and Limitations of Correlations
- A significant correlation implies predictability but not causation.
- A non-significant correlation does not prove the absence of any relationship; results may be influenced by sample size or range restriction.
- Restricted range (e.g., only high SAT scores admitted to college) reduces correlation magnitude.
Reporting Correlations
- Report the correlation coefficient (r), sample size (n), and significance level (p-value).
- Use correlation matrices to present multiple variable correlations simultaneously.
Multiple Regression Analysis
- Extends correlational research to predict an outcome variable based on multiple predictors.
- Computes partial regression coefficients (beta weights) to assess each predictor's unique contribution controlling for others.
- Example: Predicting college GPA from social support, study hours, and SAT scores.
- Statistical software often used to calculate and report these analyses.
Practical Implications
- Selecting variables with adequate population variance enhances correlation accuracy.
- Large, diverse samples improve reliability of correlational findings.
- Understanding these statistical tools aids in designing and interpreting research within cognitive psychology.
This summary encapsulates key lessons on correlational research design, offering actionable insights for students and researchers engaging with cognitive psychology experimentation and data analysis.
Hello and welcome to the course Basics of Experimental Design for Cognitive Psychology. I am Dr. Ark Verma from the Department of Cognitive Science, IIT Kanpur. This is the second week, and we have been talking about descriptive research designs so far. As I said earlier, I wanted to present a brief survey of the other two kinds of research designs in some detail before we go on to experimental research designs. In the last two or three lectures we saw different kinds of descriptive research designs: we talked about case studies, observational research, and surveys, interviews, and questionnaires. Now we move to a slightly different angle and start with correlational research designs. Now, as we have seen before,
correlational research designs are typically used to search for and describe relationships between two variables. Say, for example, you want to study two particular variables; let us say you want to study the correlation between weight and height. You can very simply collect the weights of all the people in your class, collect their heights, and perform a statistic; we will talk about that. It gives you a correlation, and that correlation will tell you whether height and weight are actually related or not. There could be other factors as well; any number of things could be contributing here. Another example, the one given in this book by Stangor, is that a researcher might be interested in the relationship between, say, family background and career choices. Or between diet and disease: what kinds of diets lead to what kinds of diseases? Or, for example, the physical attractiveness of a person: how good-looking a person is, and whether that relates to the amount of help they will get if they are stranded in a public place. So you can ask any kind of question, collect data, and compute the correlation between the two things. Let's say on a scale of 1 to 10, how does physical attractiveness vary? And on a scale of 1 to 10, where 1 is least help and 10 is most help: when physical attractiveness is highest, do you also get the highest amount of help? When it is lowest, do you get no help at all? Something like that.

Now, these variables, whichever you want to study, can be related in a variety of ways; there can be various kinds of relationships between variables. And sometimes it could also be the case that there are not only the two variables you are concerned with, but other variables in play as well. So let us try to see what kinds of relationships exist, and a more detailed discussion can be had there. Here is an example cited in Stangor: the study of Scheier and colleagues (1994). They collected data from 20 participants, students, on two scales: one a Likert-scale measure of optimism, how optimistic a person is, and the other a measure of reported health behavior, that is, what kinds of healthy habits people are following.
Now, the optimism scale varies from 1 to 9, where 1 is least optimistic and 9 most optimistic; higher numbers indicate a more optimistic personality. The health scale ranges from 1 to 25, where higher numbers indicate that the individual reports engaging in more healthy activities: eating good food, exercising every day, waking up and sleeping at the right time, maybe dietary restrictions, and so on. Now, one hypothesis the researchers might be testing is that people who are more optimistic about their life might not be following extremely healthy habits, because they would think that whatever they eat, whatever they do, there will be no bad consequences for their health. On the other hand, people who are less optimistic, and slightly more conscious about life decisions, may be following healthier behavior, because they constantly have this fear of, let's say, developing diabetes, cancer, or hypertension, and therefore they might be following more healthy habits. Again, this is just a hypothesis; I do not even know whether these researchers actually went with it, but it gives you a flavor of what might have been done.

Now, an estimate of the relationship between the two variables cannot be made using just the raw data, because there are too many scores: 20 participants, each with a score between 1 and 9 on one scale and between 1 and 25 on the other, so there are at least 40 values to be analyzed. On the face of it, these 40 values are not organized in any useful manner. So how do you start organizing this data? One way is to create a visual representation of how the values are distributed in relation to each other. In the next slide I will show you a scatter plot of this data, where the x-axis indicates the predictor variable and the y-axis shows scores on the outcome variable. Each dot is the combined score of an individual on both of these scales. Look at this: this is optimism against reported health behaviors. It seems to be a linear relationship, because it is well captured by a line, and it is interesting that the people who are extremely optimistic are also following good healthy habits. So let us try to analyze this. Again, remember, this is not a causal relationship; it is just an associative relationship. We will talk about this in a bit.
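Before going on to the regression line, here is a minimal sketch of how such a best-fit line is computed. The closed-form least-squares formulas below are standard; the sample points are hypothetical.

```python
def best_fit_line(xs, ys):
    """Least-squares regression line y = a + b*x: the (a, b) that minimize
    the sum of squared vertical distances from the points to the line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Points lying exactly on y = 1 + 2x are recovered exactly
a, b = best_fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # → 1.0 2.0
```

For noisy data the recovered line will not pass through every point; it is simply the single straight line that summarizes the cloud best in the squared-distance sense.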
Now, what is this plot? The scatter plot provides a visual depiction of the relationship between the variables. In the scatter plot just shown, you could see that the points fall in a fairly regular pattern, with data from most individuals located in the lower-left corner or in the upper-right corner. The straight, obliquely rising line that you saw is referred to as the regression line. It is also called the best-fit line, as it is the line that minimizes the squared distance of the data points from the line; it tells us how distant each of these data points is from the best-fitting line. We will discuss this further; it tells us something extremely important.

Now, based on this kind of example, you can work out that there are a few different kinds of relationships possible between variables. First, a linear relationship: if the association between the variables on the scatter plot, like the one you just saw, looks like a straight line, it can be called a linear relationship. What more? If the straight line indicates that individuals who have above-average values on one variable, say optimism, also have above-average values on the other variable, say healthy habits, and both increase in the same direction, that is what we call a positive linear relationship. You can look at the graph here; it represents a positive linear relationship.

On the other hand, in some cases you will find that individuals who have above-average values on one variable have below-average values on the other; then the relationship is defined as negative linear. From the data collected here, the pattern looks like a positive linear relationship, but in a different population it could easily be the opposite. Say, for example, more optimistic people might feel, "I will never get a disease, I will never be on the wrong side of health," and therefore follow minimal healthy habits; it is just a belief. So with the same pair of variables, across different populations, both directions are possible; what is actually true is reflected in the data once you collect it.

Now, relationships between two variables are not always in a linear form; sometimes the relationship is nonlinear. For example, the relationship between a pair of variables may not be summed up by a single straight line. You can see this in graphs D and E here: the relationship is curvilinear. The values of one variable are changing in response to changes in the values of the other variable, but these changes are not best described by a straight line. So there is some relationship; it is not that these variables are unrelated to each other. But this relationship cannot be best understood by a straight line. These are called curvilinear relationships.
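A quick synthetic sketch shows why r misses curvilinear relationships: a perfectly U-shaped dependence yields r = 0 even though y is completely determined by x. The data here are made up for illustration.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [x * x for x in xs]  # y is completely determined by x, but not linearly

print(pearson_r(xs, ys))  # → 0.0: no *linear* association at all
```

By the symmetry of the parabola, the positive and negative products cancel exactly, so the linear statistic reports nothing despite the perfect (nonlinear) dependence.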
Okay. On the other hand, sometimes there may be no relationship whatsoever between the values of the two variables. You can see here that there is no discernible pattern, and this can be called an independent relationship: changing one variable does not have any impact on the other.

Now, you can see something interesting here. The r shown in each panel is the Pearson product-moment correlation coefficient, and it tells us something about the nature of the relationship. I will describe it in more detail later, but since we have the figure here, let us talk about it now. This r is positive in a positive linear relationship and negative in a negative linear relationship; it is .82 here and −.70 here. What does .82 mean? It means a strong correlation; if it were .22, or .16, or .05, it would be a weak correlation. Similarly for the magnitude on the other side: −.70 means a moderate-to-strong negative correlation. But at the bottom, graphs C, D, and E show r = 0, which tells us that when there is no relationship, the value of r will tend toward zero. In the other two, because the relationship cannot be summed up by a straight line, the r value also comes out as zero; other statistics might be able to tell us more about those kinds of relationships.

All right, let us talk about some statistical characteristics of correlations. As I was saying, this r is the Pearson correlation coefficient. It is the statistic normally used to summarize and communicate the strength, that is, the magnitude (.20 is weak, .80 is strong), and the direction (+.20 is positive, −.20 is negative) of a correlation. So the strength and direction of the correlation are depicted by the value of r, the Pearson correlation coefficient.

Again, something I already told you: the Pearson correlation coefficient, frequently referred to simply as the correlation coefficient, is denoted by the letter r, and the number associated with r indicates both the direction and the magnitude of the association. The direction is indicated by the sign of the coefficient: plus or minus tells you whether it is a positive or a negative relation. The strength, or effect size, of the linear correlation is indexed by the magnitude of the number: .20, .60, .80, .90, and so on.
Now, how do you interpret r? It is fairly simple. The calculation of the correlation coefficient will be discussed in more detail later, but basically you also carry out some kind of significance testing, asking whether the correlation is significant, say at p < .05 or p < .01. We will talk about this in more detail later, but significance basically tells us whether we can take this relationship seriously, whether we can seriously interpret these variables as actually related to each other. This is extremely helpful when you are trying to predict the change in values of variable B based on the change in values of variable A. A significant r indicates that there is a linear association between the variables, and this is knowledge you can use about a person's score on one variable to predict the value of the other. So, for example, since optimism and health behavior, remember Scheier's example, are significantly positively correlated, we can use optimism to predict health behavior; perhaps what comes out of this is that more optimistic people follow more healthy habits. The extent to which we can predict, however, is indexed by the effect size of the correlation, .40, .60, and so on, which is the value of r.

Now, each test statistic, as we know, also has an associated statistic that indicates the proportion of variance it can account for in the data. For r this is r-squared, referred to as the coefficient of determination: how much of the data this statistic is explaining.

Now, when the correlation coefficient is not statistically significant, this indicates that there is no reliable positive linear or negative linear relationship between the variables; the result also has to be in the range where the p-value is significant. But if r is non-significant, does that mean the data is not useful and we should discard it? No. A non-significant r does not necessarily mean that there is no systematic relationship between the variables. It might sometimes indicate that although one variable cannot be used to predict the other linearly, the Pearson correlation coefficient is simply not able to provide a good estimate of the extent to which prediction is possible. So a non-significant relationship does not say there is no relation; rather, the linear effect size is much smaller. This represents a limitation of the correlation coefficient, and it is very common with curvilinear relationships, where other statistics
are used instead. Now, a reduced correlation can arise in other ways as well. For example, sometimes the data on one of the variables fall in a very restricted range: the size of the correlation may be reduced if there is a restriction of the range of the variables being correlated. Let us take the example from Stangor's book. This can occur when the sample under study does not cover the full range of a variable. The example was SAT scores as a predictor of college performance. It turns out that the correlation between SAT scores and measures of college performance, such as grade point average (GPA), is only about .30, which indicates a fairly weak correlation. Why does it come out so weak? Probably because the size of the correlation is reduced by the fact that only students with very high SAT scores get admitted to college in the first place. So when you look at the SAT score distribution against the GPA distribution, the SAT distribution is in a very limited range, and because it is in a very limited range, its ability to predict GPA is diminished. So when there is a smaller-than-normal range on one or both of the measured variables, the value of the correlation coefficient will automatically be reduced and will not really represent an accurate picture of the true relationship between the variables. Therefore, before planning to conduct a correlational study, we should have a very good idea about the distributions of the variables in the population, and choose measures that carry a wide range of values. That is also why correlational studies typically have a larger sample size: they do not handpick their participants; they collect a large range of participants.

This used to happen a lot in bilingualism research, when the practice was to divide people into low-proficient and high-proficient bilinguals, with some threshold, let's say a score of 70 on a particular test, above 70 and below 70, and then carry out the experiment. At some point somebody suggested that we should instead treat proficiency as values of one variable across a continuum. So now you could pool all of this data together and look at proficiency as having a range. The number of participants increases: where earlier there were, say, 60 in each group, now you have 120 on the same continuum, and in that sense it allowed people to understand the effect of language proficiency on other variables much better.

You can see this here: the correlation between high school GPA and college GPA across all students is about .80. However, since only students with a high school GPA above about 2.5 are admitted to college, the data on both variables are available only for the students who fall within this circled area. Within this circled area, you can see that the correlation is typically much lower; it comes down to about .30. So this is, in that sense,
interesting. Now, how do we report correlations? When the research hypothesis involves testing or presenting the relationship between two quantitative variables, the best statistic to report is r. If the null hypothesis is that the variables are independent, then r should be equal to zero; if the research hypothesis is that the variables are not independent, then r should have some value, either greater than or less than zero; some magnitude will be there. In some cases, the correlation between the variables can be reported in the text of the research report. For example: as predicted by the research hypothesis, the variables of optimism and reported health behavior were significantly positively correlated in the sample. You can see here that we have indicated the sample size for r, the value of r, and whether this is a significant value or not.

Now, sometimes there are many variables involved. When there are many variables involved, how do you present the correlations? You typically present them in a correlation matrix. You can see here there are four variables, all related to each other as well, and this is how they are reported; the values marked with an asterisk are the ones that are significant. All correlations are based on n = 155. Here is also a correlation matrix as it comes out as output in the IBM SPSS software; I am sure everybody knows about it. Now, this is one way you can present a correlation matrix. But sometimes you
are interested in knowing the different variables that are playing a part, so we carry out multiple regression. As we know, the primary objective of correlational research is to investigate the relationship between variables, but it is also possible to study the relationships among many variables. For example, suppose the researcher's objective is to predict the GPA of a sample of college students, and the scientist decides to use three predictor variables: perceived social support, number of study hours per week, and SAT score. So now there are three predictor variables the person is interested in. In such a research design, where there is more than one predictor variable, what you are conducting is called a multiple regression: multiple predictors and a single outcome. Multiple regression is a statistical technique based on the Pearson correlation coefficient; what it does is calculate correlation coefficients between each of the predictor variables and the outcome variable, and among the predictor variables as well. So there are several predictor variables and one outcome variable, and all of these are trying to predict how college performance fares in terms of perceived social support, number of study hours per week, and the SAT score initially obtained. Look at this: there are different predictors and one outcome variable, and you can see the correlational values here. Social support is .14, study hours .19, and SAT score .21. So this is broadly what we are getting: the coefficient for social support is not significant, and the other
two are significant. Now, because there are so many of these correlations to compute, this is typically done using statistical software. Two pieces of information are especially important. First, the ability of all the predictor variables together to predict the outcome variable: this is indicated by the multiple correlation coefficient, the capital R. And just as the small r has r-squared, there is also a capital R-squared, which is the proportion of variation in the outcome that is explained in this data. R-squared is the proportion-of-variance measure, and both can be directly compared to r and r-squared, because broadly these are just correlations.

Second, the regression analysis produces statistics that indicate the relationship between each of the predictors and the outcome; that is also why, in the correlation matrix, you can write down correlation values between variables A, B, and C as well. These are called regression coefficients, or beta weights. Each regression coefficient can be tested for statistical significance, so you can say which ones are significant and which are not; that is why p-values are indicated by means of an asterisk in the table. You can see here: this one is not significant; these ones are. When you put this in a paper or a table, you will mark a star against the significant coefficients.

Also understand that the regression coefficients are not exactly the same as the zero-order correlations, because they represent the effect of each predictor in the regression analysis while holding constant, or controlling for, the others. Remember, because you are performing a multiple regression, these b values are not the same as the values of r we saw earlier: the value of b for social support is arrived at controlling for study hours and SAT score; this one is the contribution of study hours to predicting the outcome variable, GPA, controlling for social support and SAT score; and this one is the contribution of SAT score to GPA, controlling for study hours and social support. That should make clear why the beta weight is written there.

The result is that regression coefficients can be used to indicate the relative contributions of each of the specific variables. For instance, the regression coefficient b = .19 indicates the relationship between study hours and college GPA, controlling for both social support and SAT score; that is what I was telling you. In this case, you can see that the regression coefficient was statistically significant, and the relevant conclusion is that estimated study hours do have the power to predict GPA in college.

All right, so this is a little bit about the correlational style of research that I wanted to talk about. I will continue this in the next lecture. Thank you.
Correlational research design investigates the relationships between two or more variables without assuming that one causes the other. It helps to identify patterns, such as whether variables move together positively, negatively, or in a non-linear way, providing a foundation for understanding links between cognitive or behavioral factors.
Scatter plots display data points for two variables across multiple participants, revealing potential relationships visually. A regression line is fitted to minimize the distance between points and the line, summarizing the overall trend, whether it's positive, negative, curvilinear, or absent, making complex data easier to interpret.
The Pearson correlation coefficient quantifies the strength and direction of a linear relationship between two variables, ranging from +1 (perfect positive) to -1 (perfect negative), with 0 indicating no linear relationship. Values around 0.20 are considered weak, above 0.80 strong, and significance testing (p-values) confirms if the correlation is statistically meaningful.
A significant correlation means two variables tend to change together predictably, but it does not demonstrate that one variable causes the other. Other factors like third variables or chance could explain the relationship, so correlational designs can inform but not confirm causal connections.
Limitations include sample size effects where small samples may miss significant relationships, and restricted range where limited variance in variables (e.g., only high scorers in a sample) reduces correlation magnitude. Non-significant results do not always mean no relationship exists, cautioning researchers to consider context and data quality.
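The restricted-range effect can be demonstrated with simulated data. This is a hedged sketch, not the lecture's actual dataset: the "SAT-like" and "GPA-like" numbers below are synthetic, generated with a fixed seed, and the cutoff of 1300 is arbitrary.

```python
import random
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

rng = random.Random(42)
sat = [rng.uniform(400, 1600) for _ in range(2000)]      # full applicant pool
gpa = [1.0 + s / 800 + rng.gauss(0, 0.35) for s in sat]  # outcome = signal + noise

r_full = pearson_r(sat, gpa)

# A selective college only ever observes applicants above a high cutoff
kept = [(s, g) for s, g in zip(sat, gpa) if s > 1300]
r_restricted = pearson_r([s for s, _ in kept], [g for _, g in kept])

print(f"full range: r = {r_full:.2f}; restricted: r = {r_restricted:.2f}")
```

With the full pool the correlation is strong, but among the selected subgroup the same underlying relationship yields a much smaller r, mirroring the SAT/GPA example from the lecture.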
Multiple regression analyzes how several predictor variables collectively and uniquely contribute to an outcome variable. By computing partial regression coefficients (beta weights), it shows each predictor’s effect controlling for others—for example, predicting college GPA from study hours, social support, and SAT scores—enhancing predictive accuracy.
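The idea of partial (beta) weights can be illustrated in the two-predictor case, where a simple closed form exists. The zero-order correlations plugged in below (.19 for study hours, .21 for SAT, and .10 between the two predictors) are hypothetical round numbers, not output reported in the lecture.

```python
def betas_two_predictors(r_y1, r_y2, r_12):
    """Standardized partial regression weights for two predictors of one
    outcome; each beta controls for the other predictor (the closed form
    of multiple regression for exactly two predictors)."""
    denom = 1.0 - r_12 ** 2
    beta1 = (r_y1 - r_y2 * r_12) / denom
    beta2 = (r_y2 - r_y1 * r_12) / denom
    return beta1, beta2

def r_squared_two_predictors(r_y1, r_y2, r_12):
    """Proportion of outcome variance the two predictors explain jointly."""
    return (r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12 ** 2)

# Hypothetical zero-order correlations with GPA
b_hours, b_sat = betas_two_predictors(0.19, 0.21, 0.10)
big_r2 = r_squared_two_predictors(0.19, 0.21, 0.10)
print(f"beta(hours) = {b_hours:.3f}, beta(SAT) = {b_sat:.3f}, R^2 = {big_r2:.3f}")
```

When the predictors are uncorrelated (r_12 = 0), each beta equals its zero-order correlation; the more the predictors overlap, the more each weight is adjusted for the other, which is exactly the "controlling for" idea described above.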
Researchers should report the correlation coefficient (r), sample size (n), and significance level (p-value) clearly. When multiple variables are involved, using correlation matrices helps summarize relationships efficiently. Transparent reporting ensures clarity and facilitates accurate interpretation by readers.
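Reporting can be mechanized with a small helper; the function below formats a correlation in the conventional "r(df) = .xx, p < .xx" style. The formatting choices (df = n − 2, dropped leading zero) follow common APA-style practice, and the numbers in the example calls are made up.

```python
def report_r(r, n, p):
    """Format a correlation for a results section, e.g. 'r(18) = .52, p < .01'.
    Degrees of freedom for a correlation are n - 2."""
    r_txt = f"{r:.2f}".replace("0.", ".", 1)  # APA style drops the leading zero
    if p < 0.001:
        p_txt = "p < .001"
    elif p < 0.01:
        p_txt = "p < .01"
    elif p < 0.05:
        p_txt = "p < .05"
    else:
        p_txt = f"p = {p:.2f}"
    return f"r({n - 2}) = {r_txt}, {p_txt}"

print(report_r(0.52, 20, 0.004))   # → r(18) = .52, p < .01
print(report_r(-0.30, 155, 0.20))  # → r(153) = -.30, p = 0.20
```

A helper like this keeps reporting consistent across many correlations, which matters most when filling in a full correlation matrix.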