BROOKINGS, S.D. — A survey of more than 1,000 farmers showed that a large majority did not understand how to interpret research results and, according to Sara Berg, SDSU Extension Agronomy Field Specialist, the language used in reporting may be to blame.
“Terms commonly used and understood in the research world can be confusing, unless you understand their true meaning,” Berg explained.
Berg is part of a multi-state team of Extension personnel working together to clear up confusion among producers when it comes to research. Together they are publishing a series of articles that delve into four research topics: best practices for side-by-side comparison trials; how to set up on-farm research; the topic of this article, what common research terms mean; and a fourth article, not yet released, that will focus on helping producers distinguish legitimate research from biased information produced to sell inputs.
To view past articles, visit iGrow.org and search by Sara Berg’s name.
In addition to Berg, the team includes: Lizabeth Stahl, University of Minnesota; Josh Coltrain, Kansas State University; John Thomas, University of Nebraska-Lincoln.
Lizabeth Stahl, Extension Educator with University of Minnesota, is the author of this article.
Not significantly different
“When a producer sees two numbers that are clearly not the same labeled as ‘not significantly different,’ it can be confusing,” Stahl said.
In agricultural research, Stahl explained, even when there is a five-bushel-per-acre difference, one may not be able to say with any confidence that the treatments actually differ, depending on how the study was set up and the amount of error found within the study.
“We encourage producers to consider the purpose of the research,” Stahl said. “Research is typically conducted so that we can use the results to help make the best decisions possible in the future.”
Least significant difference
Another term, the least significant difference (LSD), describes the minimum amount by which two values must differ for that difference to be considered statistically significant.
“In a hybrid variety trial, for example, the LSD describes the minimum number of bushels per acre by which two hybrids must differ before we would consider them ‘significantly different,’” she explained.
There is no way to calculate the LSD if a researcher simply splits a field in half and puts one treatment on one side of the field and a different treatment on another side of a field.
“In this scenario there is no way to sort out if a difference in observed yields was due to underlying factors such as soil type, planting population, drainage, compaction, disease, insect pressure, harvest issues, topography, etc., or the treatment,” she said.
Stahl further explained that when the LSD is calculated at the .05 significance level, this means researchers can be 95 percent certain that the treatments (or hybrids, etc.) really did differ in yield if the difference between them was equal to or greater than the LSD.
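The calculation described above can be sketched in a few lines of code. This is a minimal illustration of Fisher's LSD formula, LSD = t × sqrt(2 × MSE / r); the error mean square, replication count, and critical t value below are hypothetical, not figures from the trials discussed in this article.

```python
import math

def lsd(t_crit, mse, reps):
    """Fisher's LSD: the minimum difference between two treatment
    means needed to call them significantly different."""
    return t_crit * math.sqrt(2 * mse / reps)

# Hypothetical values for a replicated hybrid trial:
t_crit = 2.18   # two-sided critical t at the .05 level, 12 error df
mse = 45.0      # error mean square from the ANOVA, (bu/ac)^2
reps = 4        # number of replications per hybrid

print(round(lsd(t_crit, mse, reps), 1))  # prints 10.3 (bu/ac)
```

Under these assumed numbers, two hybrids would need to differ by at least about 10.3 bushels per acre before the difference would be declared significant at the .05 level.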
“A significance level of .05 or .10 is most commonly used in agricultural research,” she said.
No significant difference
What does it mean when data is labeled as having “no significant difference?”
Stahl explained the answer this way. “This can occur when there is so much variability in the results due to other factors that researchers can’t make a conclusion with confidence, or when the treatments or hybrids in the study simply don’t differ in yield,” she said.
For example, results from a University of Minnesota tillage trial demonstrate the importance of statistical analysis in determining whether a yield difference is likely “real.”
Three long-term tillage systems were evaluated at multiple locations over three years across southern Minnesota. Tillage treatments were randomized and replicated four times at each location.
At one site in 2011, the average corn yield under strip tillage was 10 bushels per acre greater than under the moldboard plow. Yet the difference was not statistically significant.
Based on the results, researchers could not conclude that one tillage system produced significantly higher yields than another.
“Although average yields were numerically different, statistical analysis determined researchers could not say with any confidence that the tillage systems resulted in different yields,” Stahl said. “If yields are not statistically different, don’t treat them differently. Resist the temptation to attach economics to average yields if they are not significantly different. Doing so could lead to poor and costly decisions in the future.”
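The situation Stahl describes can be illustrated with a simple two-sample t-test. The replicated yields below are hypothetical, not the trial's actual data: the strip-till mean is 10 bushels per acre higher, yet the plot-to-plot variability is large enough that the difference is not statistically significant.

```python
import math
import statistics

# Hypothetical yields (bu/ac) from four replications of each system
strip_till = [195.0, 172.0, 208.0, 181.0]   # mean 189
moldboard  = [170.0, 192.0, 165.0, 189.0]   # mean 179

def pooled_t(a, b):
    """Two-sample t statistic, assuming equal variances."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))  # standard error of the difference
    return (statistics.mean(a) - statistics.mean(b)) / se

t = pooled_t(strip_till, moldboard)
t_crit = 2.447  # two-sided critical t at the .05 level, 6 degrees of freedom

print(round(t, 2))        # prints 0.96
print(abs(t) < t_crit)    # prints True: not significant
```

Even though the means differ by 10 bushels per acre, the t statistic (about 0.96) falls well short of the critical value, so the data cannot distinguish the two tillage systems with confidence.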
— SDSU Extension