Research Studies


Kudos to Brian Klug and Anand Lal Shimpi for measuring and graphing the (very nonlinear) relationship [article;graph] between iPhone 4 bars and signal strength, which helps explain what’s going on with the iPhone death grip.

Also, here’s a suggestion to AT&T, who might want to replace the slogan “More Bars in More Places.”

AT&T  iPhone image © Patrick Hoesly (Creative Commons Attribution 2.0 Generic)

One Response to “Bars, Schmars.”

  1. Steve Says:

    In case anyone wonders about the red dots under “America”: it’s Microsoft Word 2003 trying to be helpful with a Smart Tag.

Leave a Reply

For many years, things have been somewhat more likely to end up looking worse than previously thought than to end up looking better. According to a study released today (Study Sheds Light on Previous Thought), however, the preponderance of things ending up looking worse than previously thought is greater than previously thought and is increasing.

Things that, according to a Google News Archives search on the exact phrase, were or might have been “better than previously thought” in news reports in 2010 to date:

  • World energy demand
  • David Hoffman and Cheryle Jackson, in the Illinois primary
  • How well planes can cope with small amounts of ash in the air
  • Canada’s labour market
  • Consumer spending in Japan
  • The outlook for global economic growth
  • The UK economy
  • The US economy
  • The Barbados hockey team
  • The Oregon Ducks
  • How well patients tolerate cancer drugs blocking more than one target
  • How well beta interferon works against MS, for patients who respond to it

Things that, according to a Google News Archives search on the exact phrase, were or might have been “worse than previously thought” in news reports in 2010 to date:

  • The magnitude of the Gulf of Mexico oil leak
  • The danger posed by the Gulf oil spill to the US food supply
  • Greece’s debt crisis
  • The side effects of statin drugs
  • Total US job losses during the current recession
  • Ireland’s fiscal crisis
  • California budget numbers
  • The European economy
  • Job losses in Arizona
  • The US economy
  • Destruction from Yemen’s northern war
  • The impact of climate change
  • New York State’s budget hole
  • Britain’s economic downturn
  • The impact of childhood obesity on chronic diseases and life expectancy
  • Eric Abidal’s torn leg muscle
  • Canadian debt
  • The danger from texting while driving [related link]
  • The New Jersey budget crisis
  • Soil and creek contamination from the Oeser Co. wood treatment plant

Before the 1970s, previous thought was rarely challenged. For most of the twentieth century, in fact, it was rarely reported that things were or might be “better than previously thought” or “worse than previously thought.”

On only three occasions between 1900 and 1970 [Source: Google News Archives] did news reports indicate an inaccuracy of previous thought about something in such precise terms. These things were the English water famine (worse, 1929), treasury conditions (better, 1937), and apple prospects in three Washington State counties after a spring freeze (better, 1965).

Beginning in the 1970s, however, “worse than previously thought” began to appear regularly in the press. By the 1990s, “better than previously thought” had also caught on, and by 2005, “better than previously thought” had become more frequent than “worse than previously thought,” prompting several researchers to project a long-lasting reversal in the direction of error of previous thought.

The study released today, however, shows that between 2005 and 2010, there was a complete reversal of the reversal. The preponderance of “worse than previously thought” over “better” is now the strongest in well over a decade. Adding to the importance of this result is the fact (apparent in the lists above) that less serious things are being noted as “better than previously thought,” while more serious things are noted as “worse.”

One Response to “Study Sheds Light on Previous Thought”

  1. Tom Thomson Says:

    Might have been a good idea to point out that the two lists (worse than previously thought and better than previously thought, both in 2010 to date) don’t have an empty intersection.

Leave a Reply

In July, I griped here about misleading news reports of a higher crash risk among texting truckers than among non-texting ones. I pointed out that the research quoted had not shown an increased crash risk, and in fact observed fewer crashes (zero, in fact) among the texting truckers. The data might suggest a decreased crash risk among texting truckers, I noted. The reason for the confusion was that crashes and near-crashes (which included sudden and possibly crash-preventing maneuvers) hadn’t been separated in the calculations. Nevertheless, the increased crash-or-near-crash risk was widely reported as an increased crash risk. I wondered whether more near crashes might in fact be a positive thing; those who occasionally swerve suddenly might be paying more attention to the road than those who rarely do.

Today, researchers and others are expressing surprise that just-in real crash data doesn’t support what they don’t realize the earlier research didn’t show in the first place.

If researchers (or journalists) are surprised by today’s news, they probably didn’t examine the research very closely. They may have believed the misleading headlines instead.

Disclaimers: I haven’t read all the research, and some studies might have in fact shown an increased crash risk, unlike the one I mentioned. That might be reason for real surprise. In addition, today’s data doesn’t specifically show that less phone use means no fewer accidents, because laws don’t always change behavior. But some of the reports I read did suggest that there was less phone use, yet no lower crash rate, in places that instituted bans.

Ironically, the crash data out today could make things worse. If drivers think phone use while driving is not as unsafe as previously thought, they might be less careful when using a phone while driving, and phone-related crashes might increase. Common sense suggests that multitasking requires greater concentration. If you do use a phone while driving, drive with even more care than usual. In other words, this might be one situation where being wary has benefits, even when it’s not warranted by the facts – especially because being somewhat over-cautious while driving has no serious downside, it seems to me. (It’s not like it infringes upon hundreds of millions of people’s civil rights, like acting on other unwarranted fears can and does…)

One Response to “Texting While Driving, or Patting My Back”

  1. Steve Kass » Study Sheds Light on Previous Thought Says:

    [...] The danger from texting while driving [related link] [...]

Leave a Reply

Original title: Over 90% of Research Studies Make Me Want to Scream (P < 3E-12).

Shania Twain is in the news today. No, her new album still isn’t out, but her face is in the spotlight. It turns out someone “applied” the latest “research” to “determine” that she has the perfect face, “scientifically” speaking. The distance between her eyes and mouth is precisely 36% of the length of her face, and her interocular distance is exactly 46% of its width. These proportions, according to an article in press at Vision Research, are universally optimal (among low-resolution, mostly Photoshopped images of a few white women).

Garbage. Poppycock. Nonsense. Balderdash. Crap, crap, crap of a research paper, right from sentence 1: “Humans prefer attractive faces over unattractive ones.”

But you came here for the pictures. (more…)

Leave a Reply

Today’s clicking (especially from fivethirtyeight.com) led me to two strikingly similar declamatory reports about high school students’ knowledge of civics, complete with chart-laden survey results.

“Arizona schools are failing at [a] core academic mission,” concludes this Goldwater Institute policy brief.

“Oklahoma schools are failing at a core academic mission,” announces this Oklahoma Council of Public Affairs article.

When asked to name the first president of the United States, only 26.5% of the Arizona high school students surveyed answered correctly. Only 49.6% could correctly name the two major political parties in the United States. An even smaller percentage of Oklahoma high school students gave correct answers to these and other questions from the U.S. citizenship test study guide. None of the thousands of students surveyed in either state answered all ten questions correctly.

The shocking thing is that these are garbage studies. Made-up numbers, probably. The acme of vulpigeration. Evil. Makes me sick. (Glad I coined the word, though.)

No way these are real studies. Danny Tarlow over at This Number Crunching Life has taken a mathematical hammer to the Oklahoma “study” quite effectively. (The blatant similarity of the Arizona “study” blows away any shred of possibility that the Oklahoma study is legit. I’d love to see Danny’s face when he sees the Arizona report.)

What’s frightening is that this kind of snake oil has far too good a chance of surviving as fact (which it isn’t) and influencing public policy.

The guilty parties? The Goldwater Institute, which as you might guess is a conservative “think” tank. The OCPA, which describes itself as “the flagship of the conservative movement in Oklahoma.” Matthew Ladner, the author of both reports, who is vice president of research for the Goldwater Institute. And last but not least, Strategic Vision, LLC, which Ladner says “conducted” the studies. In my opinion, the word is concocted. Read about them yourself.

[Updated with correct business name: Strategic Vision, LLC.]

Leave a Reply

The following subheadline on the Scientific American website caught my eye today (and not only because of the missing period):

New research makes the case for hard tests, and suggests an unusual technique that anyone can use to learn

I may be a bit thick, because neither the article nor the research paper it mentioned suggested any unusual technique to me. But this was better than my last wild goose chase reading episode, when I vainly sought a footnote on a cereal box (there was a dagger: †, but no footnote. Can you believe that?).

Henry Roediger and Bridgid Finn, the Scientific American article’s authors, write that researchers Kornell, Hays, and Bjork found that “learning becomes better if conditions are arranged so that students make errors.” There’s that pesky word “better.” Better than what? The eternal unanswered question. My guess is that Scientific American is reporting that Kornell et al. have found that learning under a) conditions arranged so that students make errors is better than learning under b) conditions arranged so that students do not make errors. In other words, that the researchers found errorful learning to be better than errorless learning. Not that it’s a bad article, but it would be nice if Roediger and Finn had stated what they’re reporting a bit more clearly. (This is why I give writing assignments to my statistics students. By the end of the semester, they better learn not to use adjectives like better without answering “Better than what?”.)

Anyway, Kornell et al. do mention errorless learning in their paper, recently published in the Journal of Experimental Psychology: Learning, Memory and Cognition® (yes, the name of the journal is a registered trademark), but they don’t study it. The abstract notes that they examine the question of “what happens when one cannot answer a test question—does an unsuccessful retrieval attempt impede future learning or enhance it?” Kornell et al. didn’t exactly examine this question either, because they didn’t (and possibly couldn’t) isolate what part of the learning in their scenario was “future” learning. In addition, they only studied learning after wrong answers, so one must be careful not to assume their research sheds light on getting test questions wrong vs. getting them right. (Suppose a researcher reported that “Student learning among African-Americans is enhanced when they are given test questions they cannot answer.” If the researcher only studied African-Americans and made no comparison to other populations, the reported finding might easily be misinterpreted.)

What Kornell et al. did was compare two scenarios for learning previously unknown information. One scenario was unsuccessful retrieval attempts (the students were asked to provide the not-yet-learned information as answers to test questions, and they answered incorrectly). In this scenario, the retrieval attempt was followed by feedback that included a brief presentation of the new information (i.e., the correct test question answer). The second scenario was a longer-lasting presentation of the new information with no retrieval attempt (the students were not asked to answer a test question, and it’s unclear in some of the experiments whether the students knew what kind of test question they would later be asked). Not surprisingly, unsuccessful retrieval attempts enhanced learning (as measured by scores on a test containing questions like those in the retrieval attempt), when compared to presentation of new information with no retrieval attempts. Despite the Scientific American article’s subheadline, this research makes no case that “hard tests” are better for learning than non-hard tests. They may be, but this research doesn’t help us figure it out. The research does support the value of tests, hard or not-hard, so long as there’s feedback with the right answer.

One Response to “Using flashcards is better than just reading them”

  1. Andrew Willett Says:

    That’s what happens when you let Björk work on your research project. Half your research budget gets spent on crazy outfits and acts of visionary musical weirdness, which means you have to cut corners elsewhere.

    † (That would have driven me crazy as well.)

Leave a Reply

Almost every semester, I use the AOL Breach data as a point of departure for something in at least one of my classes. The data is fascinating. Most data is fascinating, but this data is particularly so: at once shocking, funny, creepy, poignant, sad, frightening, noble, ignoble, shrewd, and lewd. It’s also rich in the way data can be rich. Its completeness makes it ripe for discovering all sorts of patterns of human thought and behavior: for a sample of several thousand AOL accounts, it includes the complete account search history during March, April, and May of 2006, with timestamped search strings and the result rank and destination of click-throughs.

It’s AOL data week in one of my classes now. This morning, I proposed several nontrivial questions about the data that could be answered with SQL queries. We looked at the results and discussed what they might say about the unwitting study subjects. Then I asked my students to suggest some questions of their own. What are the typical time-of-day and day-of-week patterns of an individual AOL customer’s searches? Are there identifiable differences in the patterns (and by extension in the sleep, social, and perhaps employment or school behavior) of people whose searches included, say, “britney”? For what kinds of searches do users most often click through several pages of results? And so on.

One of my students suggested an excellent simple question. What are the most common searches of the form “how to …”? Out of millions of queries in the AOL data, there were many thousands of “how to … ?” searches. The most frequent was “how to tie a tie,” requested 92 times by a total of 47 distinct users. The rest of the top ten (in terms of most distinct users asking the question) were how to write a resume, gain weight, have sex, get pregnant, write a book, write a bibliography, start a business, lose weight, and make money, each sought by a dozen or more different people. AOL converted the queries to lower case and removed much of the punctuation, but they didn’t correct spelling. Consequently, how to masterbate and how to masturbate appear separately at ranks 49 and 51 respectively. The question would have nearly hit the top 10 without the misspellings.
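A minimal sketch of the kind of query behind that count, using SQLite and a toy in-memory table. The column names AnonID and Query follow the headers of the AOL release; the sample rows here are invented, not actual AOL data.

```python
# Rank "how to ..." searches by the number of distinct users who asked them,
# breaking ties by total requests -- the same ranking described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (AnonID INTEGER, Query TEXT)")
conn.executemany("INSERT INTO searches VALUES (?, ?)", [
    (1, "how to tie a tie"), (2, "how to tie a tie"),
    (2, "how to tie a tie"),            # a repeat by the same user
    (3, "how to write a resume"), (1, "weather new york"),
])

rows = conn.execute("""
    SELECT Query,
           COUNT(DISTINCT AnonID) AS users,
           COUNT(*) AS requests
    FROM searches
    WHERE Query LIKE 'how to %'
    GROUP BY Query
    ORDER BY users DESC, requests DESC
""").fetchall()

for query, users, requests in rows:
    print(query, users, requests)
```

Against the real data, the same GROUP BY over millions of rows is what surfaces “how to tie a tie” at the top; note that COUNT(DISTINCT AnonID) counts the 47 distinct users separately from the 92 total requests.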

Here’s a PDF file of the top 1000 “how to” queries submitted through AOL Explorer by a sample of AOL users in the spring of 2006. You can probably guess that it’s not safe for work. Although there are no pictures, plenty of sex, drugs, and gambling is spelled out, and there are more than a few questions likely to offend in one way or another. Have a look.

2 Responses to “#836. How to be a sex goddess”

  1. Greg Everitt Says:

    Wow professor, this list is… people are interesting, is all I’m saying.

  2. Steve Kass » Why, why, why? Says:

    [...] AOL data (see #836. How to be a sex goddess) was a little thin on "why is he" queries, but a broader "why is" search [...]

Leave a Reply

Will 42 still be less than 57 in the New World Order?

What the Daily News says.

Leave a Reply

Percentage of Americans who believe

in God 80% (a)
that Barack Obama was born in the United States of America 77% (b)
in Heaven 73% (a)
in Hell 62% (a)
that openly gay persons should be allowed to serve in the military 59% (f)
in the devil 59% (a)
that these numbers add up to 100% (except for rounding) 54% (c)
Darwin’s theory of evolution 47% (a)
that all of the Old Testament is the “Word of God” 37% (a)
in witches 31% (a)
in reincarnation – that they were once another person 24% (a)
that all of the Torah is the “Word of God” 14% (a)
that Barack Obama is a Muslim 11% (e)
that Lee Harvey Oswald acted alone in JFK’s assassination 10% (d)
  (a) Harris Interactive
  (b) The Daily Kos
  (c) Steve Kass
  (d) CBS News
  (e) Pew Research
  (f) The Washington Post

Leave a Reply

Headlines today, all citing the same study by the Virginia Tech Transportation Institute:

From the press release for the study (emphasis mine):

In VTTI’s studies that included light vehicle drivers and truck drivers, manual manipulation of phones such as dialing and texting of the cell phone lead to a substantial increase in the risk of being involved in a safety-critical event (e.g., crash or near crash). However, talking or listening increased risk much less for light vehicles and not at all for trucks. Text messaging on a cell phone was associated with the highest risk of all cell phone related tasks.

From a New York Times article containing further detail about the VTTI study (emphasis again mine):

In the two studies, there were 21 crashes and 197 near crashes — defined as an imminent collision narrowly avoided — from a variety of causes, including texting. There were also about 3,000 other near crashes that were somewhat easier for the truckers to avoid, the researchers said. There also were 1,200 unintended lane deviations. To determine how the events compared to safer conditions, the researchers compared what occurred in the dangerous situations to 20,000 segments of videotape chosen at random.

In the case of texting, the software detected 31 near crashes in which the cameras confirmed the trucker was texting, though no actual collisions occurred. In the random moments of videotape, there were six instances of truckers’ texting that did not result in the software detecting a dangerous situation.

I couldn’t find the study data on the VTTI web site, but I found some PowerPoint presentations and other documents about the study. None of them quoted a “crash risk.” The increased risks were for “crash/near crash” incidents. The numbers quoted by the Times make it pretty clear that “crash/near crash” included crashes, near crashes, “other near crashes” and unintended lane deviations, because that’s the only way the odds ratio comes out to 23:

[image: odds-ratio calculation]
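A back-of-the-envelope check. Assuming — and this grouping is my inference, not VTTI’s stated method — that the 21 crashes, 197 near crashes, roughly 3,000 other near crashes, and 1,200 lane deviations all count as “events,” and using the texting counts the Times reported:

```python
# Numbers from the Times article; the grouping of all four incident types
# into one "event" category is my reading of how the 23 was obtained.
events = 21 + 197 + 3000 + 1200      # crashes + near + other near + deviations
texting_events = 31                  # texting confirmed during an event
baseline_segments = 20000            # randomly chosen video segments
texting_baseline = 6                 # texting during a random segment

odds_texting_given_event = texting_events / (events - texting_events)
odds_texting_baseline = texting_baseline / (baseline_segments - texting_baseline)
odds_ratio = odds_texting_given_event / odds_texting_baseline

print(round(odds_ratio, 1))  # → 23.5
```

Only with all four incident types in the numerator pool does the ratio land at about 23; count crashes alone and, with zero crashes while texting, the headline number is unobtainable.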

Do I think texting while driving is a bad idea? Yes. Do I think you’re 23 times more likely to have an accident while texting than while not texting? I don’t know. The VTTI study suggests not, and it might suggest the opposite: while texting, you’re less likely to have an accident (but more likely to brake, swerve, or change lanes unintentionally). I can’t say whether the study suggests either of these things with statistical significance, because I don’t have the data, but it does seem to be the case that the study subjects, when texting, were more likely to swerve or slam on their brakes, but not crash.

I can think of some possible reasons for this, too: if you’re texting, you’re probably not sleeping (sleeping being a significant risk factor for actual crashes, but probably not for swerving and braking), and if you’re texting, you might know you’re creating a dangerous situation and be more prepared to swerve or brake suddenly.

One Response to “Texting While Driving”

  1. Steve Kass » Texting While Driving, or Patting My Back Says:

    [...] July, I griped here about misleading news reports of a higher crash risk among texting truckers than among non-texting [...]

Leave a Reply
