Research exam 1
From Study Guide
| Question | Answer |
|---|---|
| Over the last several years, both __ and ___ have adopted ___ ___ ___ as a basic tenet of the professions | ASHA & AAA; evidence-based practice |
| Where did the concept of EBP arise from? | internal medicine |
| What does EBP require clinicians to do? | integrate high-quality scientific clinical research evidence with individual clinical expertise to ensure ethical and optimal client management |
| EBP requires clinicians to integrate high-quality __ __ __ __ with individual __ __ to ensure __ and __ __ management | scientific clinical research evidence; clinical expertise; ethical and optimal client management |
| Define scientific research | the systematic, controlled, empirical, and critical investigation of hypothetical propositions about the presumed relations among natural phenomena |
| Scientific research is ___, ___, ___, and __ investigation of __ propositions about the __ __ among ___ __ | systematic, controlled, empirical, critical; hypothetical; presumed relations among natural phenomena |
| What are the basic steps of the scientific method? | 1. statement of problem 2. formation of preliminary hypothesis 3. deduction & reasoning 4. formation of final hypothesis 5. observation 6. apply results of observation to hypothesis 7. accept hypothesis...or form new problem statement |
| describe statement of the problem | a question, concern, curiosity |
| describe formation of prelim. hypothesis | your initial thoughts |
| describe deduction & reasoning | apply current knowledge and your own experience, review the literature, ask experts |
| describe formation of final (working) hypothesis | based on results of deduction/reasoning |
| describe observation | testing/experiment |
| describe apply results of observation to hypothesis | compare the results of the testing/experiment against the working hypothesis to see whether it is supported |
| describe accept hypothesis | you've answered the question OR you form a new problem statement |
| What should a well thought out statement do? | 1. express a relationship between 2 or more variables 2. be stated clearly and unambiguously 3. should imply possibilities of empirical testing |
| A well thought out statement should... | be answered empirically via observation/measurement |
| a well thought out statement should have the type/manner... | of observations/measures implicitly or explicitly specified within the question |
| a well thought out statement should have the meaning of... | all key words/concepts explicitly apparent or operationally defined |
| how can you make sure you've considered everything in writing your problem statement? | Platt's method of strong inference |
| name other considerations in selecting research questions | personal interest, personal knowledge areas, potential value, complexity of procedures, availability of equipment/facilities, availability of population, financial resources, amount of time needed, collaborative assistance needed |
| what is the null hypothesis? | stating that there is no relationship between specified populations |
| why do researchers like the null hypothesis? | it removes researcher bias |
| what are the 3 types of variables? | 1. independent 2. dependent 3. extraneous/intervening |
| what is an independent variable? | the variable that is manipulated or applied by the investigator |
| what is the dependent variable? | the variable to which the independent variable is being applied, the one which may be affected by application of the independent variable |
| what is an extraneous/intervening variable? | factors other than the independent variable that are capable of causing change in the dependent variable (other variables) |
| list the different types of experimental designs | bivalent, multivalent, parametric, historical, descriptive, experimental, between subjects, within subjects, mixed |
| what does bivalent mean? | two values - examines the effect of two values of the independent variable on the dependent variable |
| what is multivalent? | many values - examines the effect of several values of an independent variable on a dependent variable |
| what is parametric? | many VARIABLES - more than one independent variable; yields TWO SETS OF RESULTS |
| parametric yields ___ sets of results | TWO |
| who proposed bivalent, multivalent, and parametric? | Plutchik |
| what are these based on? | number of variables in the design |
| what are the advantages of bivalent? | it is the simplest design and requires the least testing time per participant |
| what are the disadvantages of bivalent? | to get multiple values of a single independent variable, you'd have to do bivalent experiments multiple times (one for each value you are interested in) |
| what are the ad of multivalent? | you would be able to get multiple values of a single independent variable with a single multivalent experiment |
| what are the dis of multivalent? | it would take more testing time per participant |
| what is the ad of parametric? | best when you're concerned with multiple independent variables, especially when you think there may be interaction between them in their effects on the dependent variable |
| what is the disad of parametric | |
| what is historical? | categorization of research design. involves investigation of documentary evidence. (patient records) no direct testing of participant by the researcher |
| what is the ad of historical? | allows researchers to acquire hundreds/thousands of subjects without having to test each; can be used when direct testing would be unethical |
| what is the disad of historical? | because the researcher does not directly test, you have no idea of the quality of the data |
| what is descriptive? | categorization of research design. measures performance ONLY (you're not applying any variable, no treatment or therapy) (normative data would be an example) |
| what is the a of descriptive? | common for SLP/A in the form of normative data or in describing traits/characteristics of special groups and how they differ from the normal/control |
| what is the disa of descriptive? | in conducting comparative research, you must assure/assume that the two groups are equal in all attributes except the one of interest |
| what is experimental? | categorization of research design. empirical testing of relationships between/among variables (apply the variable and see what happens) |
| what is the a of experimental | allows direct application of the scientific method through direct application of the IV on the DV, so you can tell empirically if the IV has an effect on the DV (if you have removed all extraneous variables) |
| what are the disa of experimental | eliminating or controlling all extraneous variables that could interfere with or affect the DV; you can't assume that you have proven cause and effect unless the design allows that |
| what is between subjects? | performance of 2 or more separate groups of subjects are compared |
| what are the a of between subjects | very common |
| what are the disa of between subjects | must be able to assume that the groups are equal in all pertinent attributes |
| what is within subjects? | performance of same subjects is compared for different conditions/treatments |
| what are the a of within subjects? | study performance of a single group with the advantage that you know subjects are equal across conditions |
| what are the d of within subjects | must be able to assume that the CONDITIONS are equal in all pertinent features; vulnerable to sequence/order/carry-over effects |
| what is mixed? | combination of both between and within; often used in parametric designs, and look at some variables via between, some via within |
| what are the a of mixed? | |
| what are the disa of mixed? | |
| compare single subject designs to group designs other than the # of subjects used | used when homogeneity of a population can't be assumed/assured, when very few individuals are available for testing, or for gathering pilot data in deciding whether or not to do an experiment |
| what are the a of single subject designs? | |
| what are the disa of single subject designs? | |
| what are the a of group designs? | |
| what are the disa of group designs? | |
| describe and differentiate between different types of single subject designs in the text book including: A-only, a-b, a-b-a, a-b-a-b, b-a-b | A-only: baseline observation only, no treatment; A-B: baseline followed by a treatment phase; A-B-A: baseline, treatment, then withdrawal back to baseline (shows whether the effect reverses); A-B-A-B: adds a second treatment phase so the study ends on treatment; B-A-B: treatment, withdrawal, treatment - used when starting with a baseline is impractical or unethical |
| List the types of single subject designs | A only, B only, AB, ABA, ABAB, BAB |
| What are multitreatment designs? | single subject designs that compare the effects of two or more different treatments on the same subject (e.g., A-B-A-C-A) |
| what are multiple baseline designs? | single subject designs in which baselines are established and treatment is introduced at staggered times across different behaviors, subjects, or settings, so effects can be shown without withdrawing treatment |
| What are the different types of variance and what is the specific way to deal with each? | desired systematic (maximize it), systematic extraneous (control/eliminate it), random error variance (minimize it) |
| what is desired systematic variance and how do you deal with it? | the effect of the Independent Variable on the Dependent Variable; you want to maximize it |
| what is systematic extraneous variance and how do you deal with it? | a feature of the Dependent Variable or other extraneous factor that affects several or all subjects and/or has consistent effects; this is undesired variance and should be controlled or eliminated |
| what is random error variance and how do you deal with it? | random variables that may affect only one or a small number of subjects, or affect subjects differentially; also undesired, and should be minimized |
| define empirical | based on direct observation and/or measurement |
| define main effects | effect of each independent variable |
| define interaction | combined effect of independent variables (operating on each other) |
| define external validity | how applicable the results of our research are to a larger population (real world) |
| define internal validity | how well we measure what we say we are measuring. |
| define reliability | reliability relates to replicability; that is, if we continue to make similar measures on similar groups, do we get similar results. |
| define sequence/order effects | systematically improve or impair a subject's performance on a series of tasks. This usually encompasses what researchers refer to as "History" and, to a lesser extent, "Maturation" and "Test" effects (to be covered later). |
| what are sequence/order effects also called? | order effects, carry-over effects |
| define counterbalanced design | the order of conditions/treatments is systematically varied across subjects so that sequence/order effects are balanced out |
| define retrospective design VS post facto designs | retrospective: looks back at existing data/records after the events of interest have occurred; post facto: the "independent variable" has already occurred and is not manipulated by the researcher |
| define time-series design | repeated measures of the dependent variable are taken over time, both before and after the treatment is introduced |
| define qualitative data | non-numerical, descriptive data based on observation (e.g., narratives, interviews) |
| define quantitative data | numerical data obtained through measurement or counting |
| define a priori vs. post hoc designs/decisions | a priori: rationalism/intuition; uses pure reason based on assumptions considered to be self-evident, with little or no concern given to observation/experience. post hoc: historical, after the fact |
| define history effects | events or variables, in addition to the independent variable, that occur between repeated measures and may confound the effects of the IV on the DV |
| define maturation effects | changes within the subjects themselves, occurring between repeated measures that may confound the effects of the IV on the DV. |
| define test/ test practice effects | If subtests are sequenced on the basis of increasing complexity, subjects may gain confidence and do better on the harder tests than subjects given tests sequenced in decreasing complexity (early failure may demoralize and further impair performance). |
| list instrumentation effects | 1. calibration 2. poor reliability 3. observer errors |
| describe calibration | can be internal, such as change in performance from unstable components (such as an overheating transistor), or external, such as changes in barometric pressure or electrical/magnetic fields. |
| describe poor reliability for instrumentation | measurement increments may not be sensitive enough; instruments may be misused or instructions not properly followed. |
| describe observer errors for instrumentation | can result from fatigue or boredom, or from errors in reading the instrument or recording information. In addition, the observers themselves may be a source of error: Rosenthal Effect (Experimenter Bias) |
| define statistical regression effects | if subjects are selected from the ends of the performance curve/range, it is somewhat more likely that incorrect scores are present, so greater change is seen upon re-test than would be expected from the influence of the IV |
| define experimenter bias effects (differential subject selection) | Rosenthal effect & differential subject selection |
| what is differential subject selection? | Experimental groups are not selected on equality of attributes (including aspects of the DV that interact with the IV); is another form of Experimenter Bias. |
| define rosenthal effect? | (Experimenter Bias): the fact that participants know they are being watched/evaluated may change their performance (observer attributes & observer expectations) |
| define mortality effects | loss of subjects during the course of an experiment; especially troublesome if subjects are matched and/or treatments counterbalanced. |
| define hawthorne effect | Changes in subject(s) performance simply because they know they are involved in an experiment/being observed (ex: placebo effect) |
| define interaction effects | Independent, Dependent, and/or Extraneous variables may interact, enhancing or decreasing each other’s effect on the DV. |
| define reactive/interactive effects of testing | Subjects exposed to testing/experimental procedures may react to a variable in a different way than they would in a normal setting. |
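Several of the cards above (independent vs. dependent variable, the null hypothesis, and the bivalent between-subjects design) can be tied together in a short simulation. The sketch below is illustrative only: the "listening condition" IV, the score distributions, and the group sizes are all made-up values for the example, not from the study guide.

```python
import random
import statistics

random.seed(0)  # make the illustration reproducible

# Bivalent, between-subjects sketch: one independent variable
# ("listening condition") with two values, one dependent variable
# (a word-recognition score). Two separate groups are compared.
quiet_group = [random.gauss(90, 4) for _ in range(30)]  # IV value 1
noisy_group = [random.gauss(78, 4) for _ in range(30)]  # IV value 2

# The null hypothesis states the two population means do not differ;
# a large observed difference is evidence against it.
diff = statistics.mean(quiet_group) - statistics.mean(noisy_group)
print(f"observed mean difference: {diff:.1f}")
```

Note that a between-subjects comparison like this only supports a causal claim if the groups are assumed equal in all pertinent attributes except the IV, exactly as the "disa of between subjects" card warns.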