Science


IMG_3848

Adapted from Modernist Cuisine’s recipe for Mac ’n’ Cheese, with thanks to Linda.

Ingredients:
  9 ounces of water
  2 teaspoons food-grade sodium citrate
  10 ounces of cheese, about half white cheddar and half Manchego

Equipment:
  Stick blender
  Grater

Grate the cheese (the finer the better, though coarsely grated is okay) and set it aside.
Bring the water and sodium citrate to a simmer (180°F) in a 2-quart saucepan.
Turn off the heat.
Add the grated cheese a handful at a time, mixing continuously with the stick blender.

Tips:
Be sure each handful of cheese is blended into the liquid before adding the next handful. Adding and blending all the cheese in should take no more than a minute or two.
Use immediately or refrigerate. It’s supposed to keep for a week, but I haven’t tried.
Cleanup can be messy, and a non-stick saucepan helps.
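If you want to scale the batch up or down, the ratio is what matters. Here’s a quick sketch of mine (not from Modernist Cuisine), assuming rough conversions of 1 ounce ≈ 28.35 g, 1 fluid ounce of water ≈ 29.6 g, and about 5 g per teaspoon of sodium citrate:

```python
# Scale the sauce while keeping the original ratio of
# 9 oz water : 2 tsp sodium citrate : 10 oz cheese.
# Gram conversions are rough assumptions: 1 oz = 28.35 g,
# 1 fl oz water = 29.6 g, 1 tsp sodium citrate = 5 g.

def scaled_recipe(cheese_grams):
    """Return (water_g, citrate_g) for a given cheese weight in grams."""
    factor = cheese_grams / (10 * 28.35)   # relative to the 10 oz base
    water_g = 9 * 29.6 * factor
    citrate_g = 2 * 5.0 * factor
    return round(water_g, 1), round(citrate_g, 1)

print(scaled_recipe(283.5))   # the original batch
```

Weighing the sodium citrate is more reliable than measuring it by volume, if you have a scale.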

Velveeta® is a registered trademark of the Kraft Foods company. The specks in the picture are bits of wax from the cheese. Be more careful than I was if you care about specklessness.

Leave a Reply

As you know from my previous How Do You Arrange Your Cheez-Its? posts, Sunshine Cheez-Its are the perfect food. Better yet, some varieties have serving sizes — such as 14, 27, and 29 crackers — that subdivide into great combinations of perfect cubes and perfect squares.

Today’s Cheez-It variety is Scrabble Junior Cheez-It. One serving, can you believe it, is 26 letter crackers! And of course, not only can you arrange a serving into several combinations of perfect squares, you can also arrange a serving alphabetically. You can even do both at once!

 

 

A25 1

Figure 1. A serving of Scrabble Junior Cheez-Its arranged alphabetically.
26 = 25 + 1

 

9 1 16

Figure 2. A serving of Scrabble Junior Cheez-Its arranged as three squares.
26 = 9 + 1 + 16

9 9 4 4

Figure 3. A serving of Scrabble Junior Cheez-Its arranged as four squares of two sizes.
26 = 9 + 4 + 4 + 9
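Finding every such arrangement by hand gets tedious. Here’s a quick brute-force sketch (mine, not from the post) that enumerates the ways to write a serving size as a sum of a few perfect squares:

```python
from itertools import combinations_with_replacement

def square_partitions(n, max_parts=4):
    """All ways to write n as a sum of at most max_parts perfect squares."""
    squares = [k * k for k in range(1, int(n ** 0.5) + 1)]
    found = set()
    for r in range(1, max_parts + 1):
        for combo in combinations_with_replacement(squares, r):
            if sum(combo) == n:
                found.add(tuple(sorted(combo, reverse=True)))
    return sorted(found, reverse=True)

print(square_partitions(26))  # the Scrabble Junior serving size
```

For the Amazing Spider-Man serving, square_partitions(29) finds the frustum 16 + 9 + 4 as well as 25 + 4, and raising max_parts to 5 also finds 25 + 1 + 1 + 1 + 1.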

Leave a Reply

[This will be brief and sloppy, since I should be packing, not blogging. With luck, there will be an update later this month/year/decade, but don’t hold your breath.]

Today went like this:

5:00 a.m. Wake up a good four hours before my usual wakey-uppy time, because in order to get to the Marvin Hamlisch Memorial Choir rehearsal, I had to catch the 6:21 train.

7:45 a.m. – 1:00 p.m. Rehearse for and sing at Marvin’s funeral. While it isn’t the subject of this post, highlights of the event were a) President Bill Clinton, b) John Updike’s Perfection Wasted, a sucker punch if there ever was one, and c) Terre Blair Hamlisch’s heartbreakingly stunning eulogy to her late husband.

12:45 – 2:30 p.m. Lunch at Serafina on 61st (a martini and a plate of paglia e fieno) with fellow singers Andy, Darcy, and the just-married Baninos. Disappointingly, although we were all dressed in black, no one asked “Who died?” Only in New York.

3:45 – 8:03 p.m. Procrastinate.

8:04 p.m. See Dr. Rubidium’s provocative and pithy tweet,

People, eggs are bad for you AGAIN. jezebel.com/5934776/your-b… … via @Jezebel #untiltheyrenot

PithyTweet

“Your Breakfast Is Trying to Murder You: Eggs Are Almost as Bad for You as Cigarettes,” Jezebel crowed.

Well, I love me my eggs, and egg slander is up there with salt slander and sugar slander as a high crime against food. Eggs give Cheez-Its a run for the money as the perfect food, and this had to be wrong.

Here’s the hard-boiled truth. The latest research on eggs and heart disease is flawed. Eggs are not going to kill you.

Jezebel and other news outlets have jumped on Egg yolk consumption and carotid plaque, a paper recently published in the journal Atherosclerosis, which claims that a person’s carotid plaque increases exponentially with their egg yolk consumption. (This paper is referred to below as EWKY, for Eggs Will Kill You.)

Most likely there is no exponential relationship at all. But if you believe the authors’ statistics, perhaps you will believe what I can prove by an identical analysis:

The length of objects, measured in centimeters, grows exponentially with length measured in inches.

Of course, this is ridiculous. The length of an object in centimeters is exactly 2.54 times its length in inches. The relationship is linear, not exponential. After you finish reading this post, I hope you’ll realize that the egg slander in Atherosclerosis is also ridiculous.

The “exponential” dependence of plaque on egg yolk consumption is an artifact of skewed data.

I created a data set with the same distribution as the EWKY data to investigate a hypothetical relationship between inches and centimeters, analyzing it with the same flawed method the EWKY authors used for the relationship between egg yolk consumption and plaque.

Briefly, the authors of EWKY treated “quintile” as a scale variable, which it is not (quintile number is merely ordinal).
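Here’s a minimal sketch (with invented data, not the EWKY set) of how treating quintile number as a scale variable makes a perfectly linear relationship look nonlinear when the underlying data is skewed:

```python
import random
import statistics

random.seed(1)

# Right-skewed "lengths" in inches, converted exactly linearly to cm.
inches = sorted(random.expovariate(1 / 20) for _ in range(1000))
cm = [x * 2.54 for x in inches]

# Split into quintiles and look at the mean centimeters per quintile,
# as if quintile number (1-5) were a measurement scale.
means = [statistics.mean(cm[i * 200:(i + 1) * 200]) for i in range(5)]
for q, m in enumerate(means, start=1):
    print(f"quintile {q}: mean {m:.1f} cm")

# The quintile-to-quintile jumps grow sharply, so a plot of mean against
# quintile number looks "exponential" even though cm = 2.54 * inches
# is exactly linear.
```

The jump from the fourth quintile to the fifth dwarfs the jump from the first to the second, purely because the data is skewed.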

Here are the histograms of my data and the EWKY data. Pretty much the same.

MyHistogramEWKYhistogram

And here are error bar charts of my data and the EWKY data. Both appear to show a clearly non-linear relationship.

The error bar chart from EWKY is the sole justification for the claim of an exponential cigarette-like relationship to plaque:

MyExponentialEWKYexponential

(My error bars are much shorter because the correlation between inches and centimeters is perfect. The relationship between egg-yolk years and plaque is not perfect, but it’s nevertheless not exponential.)

There are other statistical gaffes in EWKY, but I don’t have time to delve into them. I’ll mention the worst very quickly.

First, most of the EWKY analysis compares lifetime egg yolk consumption to plaque. Lifetime anything consumption is a proxy for age, and atherosclerosis is strongly age-dependent. Nowhere do the authors of EWKY provide convincing evidence that the relationship between egg yolk consumption and plaque is anything but an artifact of the proxy for age.

Furthermore, the authors pay no heed to the always-important question of effect size. They provide a single analysis that shows a statistically significant relationship between the non-age-proxy measurement of egg yolk consumption per week (as opposed to over a lifetime) and plaque that’s independent of age:

Screen shot 2012-08-15 at 0.02.39

The difference in plaque area between the <2 eggs/week group and the ≥3 eggs/week group (an arbitrary split, and one that ignores the several hundred subjects who ate from 2 to 2.99 eggs/week) is about 1/20 of a standard deviation, otherwise known as squat. The fact that p < 0.0001 after adjustment for age is irrelevant, because with a large enough sample, significance will appear for an even smaller effect (micro-squat).
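The sample-size point is easy to demonstrate. Here’s a sketch (with hypothetical group sizes, not the study’s) using a two-sample z test for a difference of 1/20 of a standard deviation:

```python
import math

def two_sided_p(effect, sd, n_per_group):
    """Two-sided p-value for a difference of means between two
    equal-size groups, via a z test."""
    se = sd * math.sqrt(2 / n_per_group)   # standard error of the difference
    z = effect / se
    return math.erfc(z / math.sqrt(2))

effect = 1 / 20   # 1/20 of a standard deviation: squat
print(f"n = 500 per group:    p = {two_sided_p(effect, 1.0, 500):.3f}")
print(f"n = 50,000 per group: p = {two_sided_p(effect, 1.0, 50_000):.2e}")
# The effect hasn't changed; only the sample size has.
```

The same micro-squat effect goes from nowhere near significant to p far below 0.0001 as the sample grows.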

Gotta run, gotta pack. Thanks for listening.

Leave a Reply

[Note added 2012/07/26 4:42PM EDT: The Spider-Man designs appear on only one side of each Amazing Spider-Man Cheez-It. Crackers were arranged design-side up.]

CheezIt

My friendly neighborhood Shop-Rite is carrying more and more varieties of Cheez-Its lately, but it didn’t take long to decide on The Amazing Spider-Man Cheez-Its.

The top reasons? 

1. Serving size is a sum of consecutive squares.
2. Flavor is not Monterey Jack.
3. Spider-Man, Spider-Man!

 

 

Spidey1694

Figure 1. A serving of The Amazing Spider-Man Cheez-Its arranged as a frustum.
29 = 16 + 9 + 4

 

The Amazing Spider-Man Cheez-It is not the perfect food, but it’s closer to it than Cheez-It BIG Monterey Jack. Spideys taste almost like the original, but they’re a little too hard and crunchy. I can’t imagine eating an entire box in one sitting.

Spidey254c

Figure 2. The Pythagorean identity 3² + 4² = 5² was applied to the previous serving.
29 = 25 + 4

The Amazing Spider-Man Cheez-Its are slightly less uniform in size and shape than regular Cheez-Its or Cheez-Its BIG. The crackers chosen for these figures are a more uniform sample than one would expect in a random serving.

Spidey251111

Figure 3. Another sum-of-squares serving of The Amazing Spider-Man Cheez-Its.
29 = 25 + 1 + 1 + 1 + 1

Another Steve on the internet has written more about The Amazing Spider-Man Cheez-Its, so I’ll just wrap up with two more arrangements. You should definitely head over to the other Steve’s blog when you’re done here.

Spidey99911b

Figure 4. One serving of The Amazing Spider-Man Cheez-Its. (Aztec arrangement)
29 = 9 + (1 + 9 + 1) + 9

 

Spidey99911a

Figure 5. A variation of Figure 4 with more symmetry.
29 = 9 + (1 + 9 + 1) + 9

Leave a Reply


FactsCrop

As you know from my last and first How Do You Arrange Your Cheez-Its? post, not only are Sunshine Cheez-Its the perfect food, the serving size of Cheez-Its is 27 crackers, a perfect cube.

As the name suggests, Cheez-It BIG crackers are bigger¹ than Cheez-Its, and a serving contains fewer crackers — 14 instead of 27. While 14 is not a perfect cube, it is the sum of consecutive perfect squares, which is nearly as wonderful.

 

C941

Figure 1. One serving of Cheez-Its BIG arranged as a pyramid. 
14 = 9 + 4 + 1

             

C8321c

Figure 2. One serving of Cheez-Its BIG arranged like a pyramid.
14 = 8 + 3 + 2 + 1

 

C232322

Figure 3. One serving of Cheez-Its BIG arranged unlike a pyramid.
14 = (2 + 3 + 2 + 3 + 2) + 2

 

Unfortunately, Cheez-Its BIG, the Monterey Jack variety (which I purchased by mistake), is not the perfect food, and I can only recommend it for arranging, not for snacking.

 

PyramidOfTheSun

Figure 4. The author (center) ascending a Pyramid not made of Cheez-Its BIG.

 


¹ “Twice the Size!*”, the front of the box declares.²

² The asterisk leads to a footnote, in smaller type: “*Than Original Cheez-It® Crackers volume.” My rough measurements confirm this. One Cheez-It BIG is about 35% larger than a Cheez-It in each of its larger dimensions, and about 10% thicker.
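The footnote’s claim checks out arithmetically, taking my rough measurements at face value:

```python
# Volume ratio implied by my rough measurements: ~35% larger in each of
# the two bigger dimensions and ~10% thicker.
ratio = 1.35 * 1.35 * 1.10
print(f"implied volume ratio: {ratio:.2f}")  # close to the advertised 2x
```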

Leave a Reply

News this week of an Adderall shortage, and this report, which calls into question the widely-held belief that methamphetamines cause brain damage and cognitive impairment, prompt me to rescue an old statistical parody I wrote (and posted on my now-moribund Drew web page) in 2003, a few years before I had this soapbox. The news links above are also well worth visiting.

Cocaine’s brain effects might be long term [“news”]

Insulin’s metabolic effects might be long term [parody]

BOSTON, March 10, 2003 (UPI) — Cocaine and amphetamines
might cause slight mental impairments in abusers that
persist for at least one year after discontinuing the
drugs, research released Monday reveals.

MADISON (NJ), March 16, 2003 — Insulin might cause metabolic
disorders in abusers that persist for at least one year after
discontinuing the drug, research released Monday reveals.

However, experts outside the study said the findings were inconclusive
and pointed out although cocaine has been widely abused for decades,
impaired cognitive function is not seen routinely or even known to exist in
former abusers.

"Overall, the abusers were impaired compared to non-abusers on the function of attention and motor skills," Rosemary Toomey, a psychologist at Harvard
Medical School and the study’s lead investigator, told United Press International.

“Overall, the abusers were impaired compared to non-abusers
on tests of sugar metabolism,” Rosemary Toomey, a psychologist
at Harvard Medical School and the study’s lead investigator,
told United Press International.

Previous studies have yielded inconsistent findings on whether
cocaine abuse led to long-term mental deficits. Some studies found
deficits in attention, concentration, learning and memory six months
after quitting. But a study of former abusers who were now in prison
and had abstained from cocaine for three years found no deficit.

Few studies have looked at the long term effects of insulin
abuse, although doctors and scientists generally believe
the drug is harmful. One study of former abusers who
were now in prison and had abstained from insulin for
three years found a higher than normal death rate.

To help clarify these seemingly conflicting results, Toomey’s team,
in a study funded by the National Institute on Drug Abuse, identified
50 sets of male twins, in which only one had abused cocaine or
amphetamines for at least one year. Amphetamine abusers were
included because the drug is similar to cocaine and could have the
same long-term effects on the body.

To address the lack of careful studies, Toomey’s team, funded by
the National Institute on Drug Abuse, identified 50 sets of male
twins, in which only one had abused insulin for at least one year.

Most of the pairs were identical twins, meaning they share the exact
same genetic pattern. This helps minimize the role biological
differences could play in the findings and gives stronger support to the
mental impairments being due to drug abuse.

Most of the pairs were identical twins, meaning they
share the exact same genetic pattern. This helps minimize
the role genetic differences could play in the findings and gives
stronger support to the impairments being due to insulin abuse.

The abusers, who averaged age 46 and had not used drugs for at least
one year, scored significantly worse on tests of motor skills and
attention, Toomey’s team reports in the March issue of The Archives
of General Psychiatry.

The abusers, who averaged age 46 and had not used
insulin for at least one year, scored significantly worse
on tests of sugar metabolism, Toomey’s team reports in
the March issue of The Archives of General Metabolism.

The tests all were timed, which indicates the abusers have
"a motor slowing, which is consistent with what other investigators
have found in other studies," Toomey said.

The tests all were performed after fasting, which indicates the abusers
have “an impaired metabolism unrelated to diet, which is consistent
with the consensus in the medical community,” Toomey said.

Still, the abusers’ scores were within normal limits and they actually
performed better on one cognitive test, called visual vigilance, which
is an indication of the ability to sustain attention over time. This
indicates the mental impairment is minor, Toomey said. "In real life,
it wouldn’t be a big impact on (the abusers’) day-to-day functioning
but there is a difference between them and their brothers," she said.

The finding is significant, she added, because given that the study subjects
are twins and share the same biological make-up, they would be expected
to have about the same mental status. This implicates the drug abuse
as the cause of the mental impairment.

The finding is significant, she added, because given that the study
subjects are twins and share the same biological make-up, they would
be expected to have about the same metabolic status. This
implicates the drug abuse as the cause of the impairment.

Among the abusers, the mental test scores largely did not vary in
relation to the amount of cocaine or amphetamine used. However,
on a few tests the abusers did score better with more stimulant use.

Among the abusers, poorer test scores were consistently associated
with increased levels of insulin abuse. Among the heaviest abusers,
not one scored better than his non-abusing twin.

"The results seem to me to be inconclusive," Greg Thompson,
a pharmacist at the University of Southern California’s
School of Pharmacy in Los Angeles, told UPI.

“The results seem to me to be conclusive,” Greg Thompson,
a pharmacist at the University of Southern California’s
School of Pharmacy in Los Angeles, told UPI.

This is "because both twins are within a normal range
(and) sometimes the cocaine-abusing twin did better than the
non-abusing twin and sometimes not," Thompson said.

This is “because almost without exception, only the non-abusing
twin is within a normal range (and) the insulin-abusing twin did
worse than the non-abusing twin,” Thompson said.

In addition, cocaine has been abused by millions of people, going
back as far as the 1930s and before, he said. "You’d think you’d be
seeing this as a significant clinical problem and we are not," he said.

In addition, insulin has been abused by millions of people,
and poor sugar metabolism among former insulin abusers
has been reported by physicians going back as far as the 1930s
and before, he said. “This is a significant clinical problem,” he said.

Of more concern to Thompson is the effect stimulants such as Ritalin,
which are used to treat attention deficit disorder, are having on
children. "This would be a much bigger problem I would think if
it’s true stimulants impair cognitive function," he said.

Of more concern to Thompson is the effect daily insulin injections
are having on children. Insulin is commonly prescribed to control
diabetes (frequent urination, weight gain, and fatigue syndrome).
“Many of these children will become former insulin abusers, and
poor sugar metabolism will be a major healthcare issue for
them in the years to come,” he said.

"Before I’d worry about the 46 year-old abuser, I’d want to know about the
3 year old being treated for ADD (attention-deficit disorder)," Thompson said.

“Before I’d worry about the 46 year-old abuser, I’d want to
know about the 3 year old being treated for diabetes,” Thompson said.

One Response to “Maybe That Wasn’t Your Brain on Meth”

  1. Terri Says:

    Yea I just realized I can read your blog in 75 degree sunshine! Happy me thanks you!

Leave a Reply

See also:
How Do You Arrange Your Cheez-Its? [#2]
How Do You Arrange Your Cheez-Its? [#3]

CheezItServingSize

Sunshine Cheez-Its are the perfect food, but did you know that the serving size of Cheez-Its is 27 crackers, a perfect cube?

Although individual Cheez-Its are not themselves cubes, or even exactly square, the possibilities are still endless.

Here are two of mine. What are yours?


CheezIt1

Figure 1. One serving of Cheez-Its arranged cubically.
27 = 3 × 3 × 3


CheezIt2

Figure 2. One serving of Cheez-Its arranged non-cubically.
27 = (9+4) + 1 + (9+4)

Leave a Reply

NASA’s Solar Dynamics Observatory captured last week’s X1.4-class solar flare on camera. Higher resolution video and more information here.

One Response to “Spectacular Video from NASA”

  1. terri Says:

    relaxing to watch
    warms me in my cold house

Leave a Reply


The possible existence of heteroscedasticity is a major concern in the application of regression analysis, including the analysis of variance, because the presence of heteroscedasticity can invalidate statistical tests of significance that assume the effect and residual (error) variances are uncorrelated and normally distributed. —Wikipedia

Perhaps I’m overeager to use one of my favorite words, but the more I look at Figure 11 of The Neutrino Preprint, the more I think I see a hint of heteroscedasticity in the residuals. If present, it would support the possibility that the model used for the best fit analysis (a one-parameter family of time-shifted scaled copies of the summed proton waveform) was not appropriate. See my previous post for some background.

Screen shot 2011-09-25 at 22.35.44

The figure above (which is the bottom half of Figure 11) shows the best fit of the complete summed proton waveform (red) vs. the observed neutrino counts (black), summarized using 150 nanosecond bins. For both extractions (left and right), the residuals of the fit (the distances from the red curve to each black dot) appear possibly heteroscedastic in two ways.

First, they seem to be slightly (negatively) correlated with the time scale — positive residuals are more likely towards the beginning of the pulse, negative residuals towards the end. Second, there may be a slight negative correlation of the variance of the residuals with the time scale as well. The residuals seem to become more consistent — vary less in either direction from zero — from left to right. [I didn’t pull out a ruler and calculate any real statistics.]
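The two informal checks above can be made slightly less informal. Here’s a sketch with toy residuals (invented numbers, not the paper’s data): a negative correlation of residuals with time indicates the first pattern, and a negative correlation of their absolute values with time indicates the second.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy residuals showing both patterns: positive residuals early,
# negative late, with spread shrinking over time.
times = list(range(10))
residuals = [1.8, -1.6, 1.4, -1.2, 1.0, -0.8, 0.6, -0.4, 0.2, -0.1]

trend = pearson(times, residuals)                      # check 1
spread = pearson(times, [abs(r) for r in residuals])   # check 2

print(f"residual trend: {trend:.2f}, spread trend: {spread:.2f}")
```

A formal test (Glejser’s, say, regressing absolute residuals on time) would be the next step; this just quantifies what the eyeball suggests.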

To be fair, there is little evidence of heteroscedastic residuals in Figure 12 (below), which shows a zoomed-in detail of the beginning and end of each extraction, summarized into 50 nanosecond bins. In all, only about a sixth of the waveform is shown at this resolution. (A data point appears to have been omitted from this figure; between the first two displayed bins in the second extraction, there should probably be a black point to indicate that zero neutrinos were observed in that 50 ns interval.)

Screen shot 2011-09-25 at 22.43.39

The authors report some tests of robustness; for example, they analyzed daytime and nighttime data separately and found no discrepancy. They also calculated and report a reduced chi-square statistic that indicates a good model fit. They may also have measured the heteroscedasticity of the residuals, but they don’t mention it.

They do say a fair bit about how they obtained the summed proton waveform (the red line) used for the fit, but so far I don’t see any indication that they considered the possibility of a systematic process occurring over the length of each proton pulse that caused the ratio of protons to observed neutrinos to vary.

Then again, I don’t understand every sentence in the paper that might be relevant, such as this one: “The way the PDF [the probability density functions for the proton waveform] are built automatically accounts for the beam conditions corresponding to the neutrino interactions detected by OPERA.” And I’m not a physicist or a statistician.

One Response to “Heteroscedasticity in the Residuals?”

  1. Eric Jones Says:

    Here’s an update on the original results, http://news.sciencemag.org/scienceinsider/2011/11/faster-than-light-neutrinos-opera.html, which appears to rule out the statistical argument (which I really liked).

Leave a Reply

[I’ve posted a follow-up here: Heteroscedasticity in the Residuals?]

When applying statistics to find a “best fit” between your observation and reality, always ask yourself “best among what?”

The CERN result about faster-than-light neutrinos is based on a best fit. If the authors were too restrictive in their meaning of “among what,” they might have missed figuring out what really happened. And what might have really happened was that the neutrinos they detected had not traveled faster than light.

The data for this experiment was, as usual, a bunch of numbers. These numbers were precisely-measured (by portable atomic clocks and other very cool techniques) arrival times of neutrinos at a detector. The neutrinos were created by shooting a beam of protons into a long tube of graphite. This produced neutrinos, some of which were subsequently observed by a detector hundreds of miles away.

Over the course of a few years, the folks at CERN shot a total of about 100,000,000,000,000,000,000 protons into the tube; they observed about 15,000 neutrinos. The protons were fired in pulses, each pulse lasting about 10 microseconds.

A careful statistical analysis of the data, the authors report, indicates that the neutrinos traveled about 0.0025% faster than the speed of light. Whooooooosh! Furthermore, because the experiment looked at a lot of neutrinos and the results were consistent, the experiment indicates that in all likelihood the true speed of the neutrinos was very close to 0.0025% faster than the speed of light, and almost without doubt at least somewhat faster.
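That 0.0025% is easy to check on the back of an envelope, assuming the roughly 730 km CERN-to-Gran Sasso baseline and the roughly 60 ns early arrival reported in the preprint:

```python
# Back-of-envelope check of the reported excess speed, assuming the
# ~730 km CERN-to-Gran Sasso baseline and the ~60 ns early arrival.
c = 299_792_458            # speed of light, m/s
baseline = 730_000         # meters (approximate)
light_time = baseline / c  # seconds for light to cover the baseline
early = 60e-9              # seconds

excess = early / light_time
print(f"light travel time ≈ {light_time * 1e3:.3f} ms")
print(f"excess speed ≈ {excess * 100:.4f}%")
```

Sixty nanoseconds out of about 2.4 milliseconds is indeed about 0.0025%.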

If the experimental design and statistical analysis are correct (and the authors are aware they might not be, though they worked hard to make them correct), this is one of the great experiments of all time.

So far, I haven’t read much scrutiny of the statistical analysis pertaining to the question of “among what?” But Jon Butterworth of The Guardian raised one issue, and I have a similar one.

Look at the graph below, from the preprint.

Screen shot 2011-09-24 at 16.23.45

The statistical analysis of the data was designed to measure how far to slide the red curve (the summed proton waveform) left or right so that the black data points (the neutrino observation data) fit it most closely.

The experiment didn’t detect individual neutrinos at the beginning of the trip. The neutrinos were produced by 10-microsecond proton bursts, and neutrinos were expected to appear in 10-microsecond bursts at the other end. The time between the bursts, then, should indicate how fast the individual neutrinos traveled.

To get the time between the bursts, slide the graphs back and forth until they align as closely as they can, and then compare the (atomic) clock times at the beginnings and ends of the bursts.
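The sliding itself is conceptually simple. Here’s a toy sketch (triangular pulses, not the experiment’s waveforms) of finding the shift that minimizes squared error:

```python
def best_shift(template, observed, max_shift):
    """Return the integer shift of `template` (in bins) that best fits
    `observed`, by grid search over squared error."""
    best, best_err = 0, float("inf")
    n = len(observed)
    for s in range(-max_shift, max_shift + 1):
        err = sum(
            (observed[i] - template[i - s]) ** 2
            for i in range(n)
            if 0 <= i - s < len(template)   # only where the pulses overlap
        )
        if err < best_err:
            best, best_err = s, err
    return best

template = [0, 1, 2, 3, 4, 3, 2, 1, 0, 0, 0, 0]
observed = [0, 0, 0, 0, 1, 2, 3, 4, 3, 2, 1, 0]  # same shape, delayed 3 bins

print(best_shift(template, observed, 5))
```

The crucial assumption hiding in this procedure, and the subject of the rest of this post, is that the observed pulse really is a shifted copy of the template.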

For this to give the right travel time, and more importantly, to be able to evaluate the statistical uncertainty, the researchers appear to have assumed that the shape of the proton burst upstream of the graphite rod exactly matched the shape of the neutrino burst at the detector (once adjusted for the fact that the detector sees about one neutrino for each 10 million billion or so protons in the initial burst).

Why should the shapes match exactly? If God jiggled the detector right when the neutrinos arrived, for example, the shapes might not match. More scientifically plausibly, though, at least to this somewhat-naïve-about-particle-physics mathematician, what if the protons at the beginning of the burst were more likely to create detectable neutrinos than those at the end of the burst? Maybe the graphite changes properties slightly during the burst. [Update: It does, but whether that might affect the result, I don’t know.] Or maybe the protons are less energetic at the end of the bursts because there’s more proton traffic.

The authors don’t tell us why they assume the shapes match exactly. There might be good theory and previous experimental results to support the assumption, but if so, it’s not mentioned in the paper. The authors do remark that a given “neutrino detected by OPERA” might have been produced by “any proton in the 10.5 microsecond extraction time.” But they don’t say “equally likely by any proton.”

If protons generated early in the burst were slightly more likely to yield detectable neutrinos, then the data points at the left of the figure should be scaled down and those at the right scaled up, if the observational data is expected to indicate the actual proton count across the burst.

If that’s the case, then the adjusted data might not have to be shifted quite so far to best match the red curve. And the calculated speed would be different.

Whether this would make enough of a difference to bring the speed below light-speed, I don’t know and can’t guess from what’s in the preprint. And of course, there may be good reasons for same-shape bursts to be a sound assumption.

[Disclaimer: I’m a mathematician, not a statistician or a physicist.]

7 Responses to “My $0.02 on the FTL Neutrino Thing”

  1. Steve Kass » Heteroscedasticity in the Residuals? Says:

    [...] family of time-shifted scaled copies of the summed proton waveform) was not appropriate. See my previous post for some [...]

  2. Joe Says:

    You kind of shoot yourself in the leg with your speculations.

    You go on about how uncertainty about the neutrino creation process could have distorted the resulting measurements.

    But if you look at the graph you posted, it seems clear that there are multiple peaks within the graph that are shifted by exactly the same amount as the whole graph.

    The red line is a computer prediction based on neutrinos traveling -at- the speed of light.
    Notice that the shape of the red graph pretty much has exactly the same shape as the data points, just shifted.
    This means that the simulation used for the prediction has a very precise understanding of the neutrino generation process and what the resulting measurement amplitude series will be.
    The only discrepancy is the detection time.

    If what you say were true then the arriving data points would have had distorted rise and fall but would otherwise have its peaks match the predicted graph to at least fall on the speed of light instead of faster than the speed of light.

    So based on that graph i think you are thinking in the wrong direction to find the flaw (if there is one).

  3. Steve Says:

    You’ve missed my point.

    “Pretty much exactly the same shape” is not a statistical or mathematical statement. The data (black points) do not fit the red curve exactly when shifted. They come close, and among all possible horizontal shifts, 1048.5 ns gives the closest fit. But the six-sigma statistical claim assumes that the distribution from which the black data points were a random sample is a copy of the shifted red line and not any similar but different shape.

    This assumption is not addressed in the paper. The shifted red line used for the statistics is the shape of the proton waveform hundreds of miles upstream of the detector at Gran Sasso. The data is not a random sample of protons from that waveform. The data is a sample (presumably random) of neutrinos hundreds of miles away, produced from the precisely-understood waveform of protons by several intermediate processes (including pion/kaon production when the proton beam strikes the graphite target and subsequent decay of the particles produced at the target into neutrinos later on). The arrival waveform clearly has a similar shape, but the authors give no theoretical or statistical evidence to suggest it must have an identical shape.

    If the intermediate processes systematically change the shape of the proton waveform even slightly (as it becomes a pion/kaon waveform and then a neutrino waveform), the statistics reported are not valid.

    In addition, the data in the paper is only a summary of the actual data into bins (150 ns wide for Figure 11, and 50 ns wide for Figure 12). The experimental result yields a neutrino speed only 60 ns faster than light-speed, so it’s impossible to “notice” the best fit to such high precision only from the paper’s graphs. In Figure 11, where the multiple peaks are visible, “exactly the same amount” can’t be determined to 60 ns accuracy. Even if the black data points, when shifted by 1048.5 ns, all lay exactly on the red line (and they do not at all), one cannot conclude that the actual data (not given in the paper, which summarizes it into bins) fits just as perfectly.

  4. Philip Meadowcroft Says:

    Is the same true at the detection end? If the first detection in any way compromises the likelihood of another detection in the same burst.

    May be insignificant due to the low number detected per burst.

  5. Gareth Williams Says:

    OK, so add an extra parameter. Scale the red line from 1 at the leading edge to a fraction k at the trailing edge (to crudely model the hypothesis that the later protons, for whatever unknown reason, are less efficient at producing detectable neutrinos), and find what combination of translation and k produces the best fit.

    If there is no such effect we should get the same speed as before and k=1. But if we get speed = c and k = 0.998 (say) then we have an indication where the problem is.

    It would be interesting in any case to just try a few different constant values of k and see how sensitive the result is to that.

    (It also occurs to me that k could arise from a problem with the proton detector, if the sensitivity changes very slightly from the beginning to the end of the pulse you would get the same effect).

    This does not look too hard. I would do it myself but I am busy today [/bluff]

  6. Steve Says:

    Philip: I think there was a similar question at the news conference given by OPERA, and it was answered to the satisfaction of the person who asked.

    Gareth: Yes, absolutely. If the complete neutrino arrival data is posted, I might try this. But I would be happy to see you do it for me!

  7. Gareth Williams Says:

    What you said, I think:

    http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.5275v1.pdf

Leave a Reply

Next Page »