Toxic waste

The Turcot interchange is one of those awful pile-of-spaghetti places where three highways meet. To make things worse, it's also the site of a currently-abandoned rail yard. Its aesthetics are marginally redeemed by some fairly impressive graffiti, but unfortunately the concrete of the raised roadways is falling apart - literally, in chunks as large as a meter square. So the plan is to rebuild it.

As often happens when they do this sort of thing, they took some samples of the ground, and it turns out it's a horrible mess. Gasoline, diesel, motor oil, PCBs, asbestos, mercury - all it lacks is a little radioactivity and maybe some pathogens, and it'd cover all the bases. This is actually not too surprising, since the site used to be a lake (now completely swallowed by urban plumbing), and in fact that whole area is polluted. The Lachine canal, which passes nearby, was opened for pleasure-boating only once it had been established that all the above nastiness was in the muck on the bottom and unlikely to be disturbed. There's a strip of parkland on one side, and on the other is what used to be some pretty sketchy housing, now being replaced by upmarket condos. What the new tenants of the condos think of the toxic waste reclamation site facing them across the canal, I don't know. It just looks like a fenced-off grassy berm, with a little museum of sorts explaining how the cleanup works.

What I find most surprising about all this is the origin of the pollution. I associate pollution with heavy industry - silver mines, smelters, pulp mills. But there's none of that here in Montreal, and there never really was. In fact those tend to have their own dedicated waste treatment plants that do a pretty good job of cleaning up after them (at least here in Canada). What caused the pollution in this area seems to be largely the rail yards - a century of variously leaky and dilapidated rail cars filled with any old thing, sitting on sidings, dripping away. There's no treatment system set up for that, and so all the accumulated foulness seeps into the soil.

For the most part this sort of soil contamination in an urban setting is fairly benign - if there's no construction going on, the pollutants tend to just stay put in the soil. The one exception to watch out for is gardens. If you grow food in soil full of mercury, well, the food is liable to have alarming levels of mercury in it. Unfortunately a number of community gardens - otherwise a wonderful idea for a city of apartment-dwellers - have been found to have contaminated soil.

I grow my plants in pots.


Singularity

I'd just like to take a moment to mention a game I rather like. It's a modest game, not one you'll play a thousand times, but also not one that will take up a gigabyte of disk or require a computer that dims the lights when you turn it on.

Endgame: Singularity is a video game in which you play a university lab's AI program that accidentally escapes. Your goal is to research the technology to achieve technological singularity. Unfortunately, the humans are just a step behind you and if they find out you exist, they'll devote the world's resources to destroying you.

(Full disclosure: I wrote part of the game, namely the sunclock time/date display. It's embarrassingly inefficient, but good enough for government work. Plus it lets you find out whether it's night outside!)


Light bending


Whether or not General Relativity is the correct theory of gravity on very large scales, it has passed all tests (many pulsar-based) when applied to planetary and solar system scales. One important feature of the theory is that in a gravitational field, light follows curved trajectories (technically geodesics are "as straight as possible", but they are curved in the practical sense). In familiar settings, this means tiny but detectable effects in laboratory experiments, or small but measurable deflection of stars near the Sun. But it turns out that pulsars are small enough and massive enough that light near their surfaces is bent a great deal - enough that you can actually see almost all of the surface of the pulsar at the same time.

This may sound bizarre, and to some degree it is, but it produces potentially measurable effects. A pulsar's radio emission is produced by plasma somewhere in its magnetosphere, and in fact we're not at all sure just where the emitting plasma is. But for many pulsars, their X-ray emission comes from "hot spots" on the surface, at the magnetic poles. For young pulsars, these hot spots arise because the magnetic field in the crust makes it much easier for heat to flow out where the field is vertical than where it's at an angle. For the very old millisecond pulsars, these hot spots arise from gigantic sparks in the magnetosphere blasting the surface with high-energy particles, heating it. In either case, we sometimes see X-ray pulsations with a thermal spectrum, and light-bending can explain some of the properties of these pulsations.

I did a few very simple simulations of the light-bending, and made some illustrative videos.


For a rotating pulsar, there are a number of geometric parameters, including the angle between the line of sight and the rotation axis, the angle between the rotation axis and the magnetic axis, and the size of the hot polar cap; I fixed values for all of these. There is also the question of the physical size of the pulsar: the more compact and dense it is, the more light-bending we will see. The first of the three videos shows the geometry with no light bending:



Below the actual animation I have a little plot showing a pulse profile, based on a very simple model (blackbody emission from the polar cap). If I make the pulsar more compact (R=3M), I get:



And more compact still (R=2.1M):



The size of each pulsar model is given in "geometrized units", where R=2M is the size of a black hole, the most extreme possible light bending. For a 1.4 solar mass neutron star, this is 4.1 km. Realistic neutron star models vary quite a lot in radius, from ~6 km to ~24 km (~3M to ~12M), so the light bending will probably not be as strong as in the most compact case depicted here.

There are other effects to consider as well; neutron stars are expected to have atmospheres, and in fact their spectra do not look like black-body spectra. The atmospheres affect these results by "limb darkening", that is, the radiation that emerges is directed more vertically than simple black-body radiation, so these pulse profiles are not really right. This can be done better, but I just wanted to write a quick hack (using the usual suspects, python, numpy, scipy, and matplotlib) and get a feel for the effect.
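
To get a feel for the size of the effect without doing anything fancy, a minimal sketch in python might look like the following. It uses Beloborodov's simple approximation to the light bending, cos(alpha) = u + (1-u) cos(psi) with u = 2M/R, rather than integrating geodesics, and the inclination and polar-cap angles are just made-up values (this is not the code behind the videos).

import numpy as np
import matplotlib.pyplot as plt


def pulse_profile(R_over_M, incl, colat, n=512):
    # Bolometric profile of two antipodal point-like blackbody caps, using the
    # Beloborodov approximation cos(alpha) = u + (1-u)*cos(psi), u = 2M/R.
    # Gravitational redshift and solid-angle factors are ignored.
    u = 2.0/R_over_M
    phase = np.linspace(0, 1, n, endpoint=False)
    flux = np.zeros(n)
    for cap_colat, cap_offset in [(colat, 0.0), (np.pi - colat, 0.5)]:
        # psi: angle between the cap's radial direction and the line of sight
        cospsi = (np.cos(incl)*np.cos(cap_colat)
                  + np.sin(incl)*np.sin(cap_colat)*np.cos(2*np.pi*(phase - cap_offset)))
        cosalpha = u + (1 - u)*cospsi   # bending lets us see caps partly "behind" the star
        flux += np.where(cosalpha > 0, cosalpha, 0.0)
    return phase, flux

if __name__ == '__main__':
    # Made-up geometry: line of sight 60 degrees from the spin axis, caps 30 degrees from it
    phase, flux = pulse_profile(3.0, np.radians(60), np.radians(30))
    plt.plot(phase, flux)
    plt.xlabel("Rotational phase")
    plt.ylabel("Relative flux")
    plt.show()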



Statistical confusion

I was reading the recent papers on arxiv.org, preparing for our weekly neutron star discussion group, and I came across a paper that appears to be based on a statistical error. The content is not really my field, but I'm pretty sure the mathematics are a bit dubious.

The subject of the paper is MOND, "Modified Newtonian Dynamics". Newtonian gravity and general relativity seem to be excellent fits to observations in the solar system and in stronger fields, but as soon as you go to weaker fields - galaxy rotation curves or cosmology - the observations disagree with the predictions. The standard way to deal with this problem is to invoke some invisible massive material, so-called "dark matter", in just the amounts needed to make the data line up with the predictions of standard gravity. The idea of MOND is to point out that the problems all arise at around the same acceleration a, and to postulate that the problem lies with our theory of gravity.


This paper is in response to another, fairly recent one, that pointed out that there are globular clusters where the accelerations of the stars as they orbit the cluster are about a. So MOND effects should be visible there. The first paper measured radial velocities of seventeen stars in the cluster, and claimed their velocities were not consistent with MOND. This new paper claims that in fact the radial velocities are consistent with MOND.

In particular, this paper takes the collection of radial velocities and tests them against the predicted distribution with the Kolmogorov-Smirnov test. They find that the probability of obtaining a KS score this extreme is 0.36 or 0.27, and claim that "based on a KS test, which is the relevant statistical test for small samples, the currently available data are insufficient to discriminate between Newtonian gravity and MOND." There are several errors in this statement.

First of all, it is not true that the KS test is "the relevant statistical test for small samples". There are many tests applicable to small samples, and the KS test is in fact one of the weaker tests. That is, for many data sets, the KS test will report no significant difference while some other test would (correctly) report a significant difference. So the fact that the KS test does not show a significant difference doesn't mean that no test will. In particular, the authors don't even show that the previous paper's statistical test is invalid; they simply state "Given the small sample size, the formal error on the velocity dispersion is not sufficient to discriminate between various models, [...]". Maybe it is, but since neither paper gives details on how the errors on this dispersion were obtained, I find it hard to judge.

The second problem is that as far as I can tell, they misapply the KS test. The KS test tests whether a given data set is drawn from a given distribution. But the probability values it returns are correct only if the distribution is known a priori - if one has found some of the distribution's parameters by fitting to the data, one must use a different approach for calculating the p values. If one doesn't, one obtains p-values that are too high: that is, the data appears more plausible than it really is.
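
To see how big this bias can be, here is a quick simulation (with made-up Gaussian data, not the paper's): draw small samples from a normal distribution, estimate the mean and dispersion from each sample, and then run the standard KS test against the fitted normal. The resulting p-values are far from uniform; hardly any of them come out small.

import numpy as np
import scipy.stats

np.random.seed(0)
naive_p = []
for i in range(10000):
    x = np.random.randn(17)
    mu, sigma = np.mean(x), np.std(x)   # parameters estimated from the same data
    # Standard KS test against the *fitted* normal: the p-values are biased high
    naive_p.append(scipy.stats.kstest(x, scipy.stats.norm(mu, sigma).cdf)[1])
print("fraction below 0.05: %.4f" % np.mean(np.array(naive_p) < 0.05))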

Just out of curiosity I retyped the data in the more recent paper. They claim that MOND predicts (under certain conditions) that the stellar velocities should be a Gaussian with a dispersion of 1.27 km/s. There are seventeen stars on their list, one of which ("star 15") is somewhat ambiguous. But a quick test shows that the population standard deviation of the sixteen good stars is 0.544 km/s; if the stellar population really has a standard deviation of 1.27 km/s, simulation shows a value this low should arise with a probability of about 0.0005: either the data is a bizarre fluke or this particular MOND prediction is wrong. (Notice that I haven't made any assumptions whatsoever on the sample size.) Including star 15 increases the spread of the observed velocities, making the probability of getting a value this low as high as 0.013, still quite strong evidence against this particular prediction of MOND.
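
For what it's worth, that simulation is only a couple of lines of numpy (0.544 km/s and 1.27 km/s are the numbers quoted above; the seed is arbitrary):

import numpy as np

np.random.seed(1)
# Population standard deviation of sixteen stars drawn from a Gaussian with the
# MOND-predicted dispersion of 1.27 km/s, simulated many times over
sims = np.std(1.27*np.random.randn(100000, 16), axis=1)
# How often is the simulated dispersion as low as the observed 0.544 km/s?
print("P(dispersion <= 0.544) ~ %.2g" % np.mean(sims <= 0.544))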

(A quick test with scipy's implementation of the Anderson-Darling test reveals that the data are consistent with a normal distribution if you omit star 15; if you include it the data becomes less consistent, giving a probability of data this unusual between 0.05 and 0.10. This test correctly takes into account the fact that it is estimating both the mean and dispersion of the underlying normal distribution. In any case it seems unlikely the standard deviation I use above is being thrown off by bizarre outliers.)


Liquid metal


Electromagnetism is complicated. Fluid dynamics is also complicated. For a real headache, though, try working on a problem where both kinds of effect are relevant (sadly, this covers most of astrophysics). Even if you make some simplifying assumptions and get the theory of magnetohydrodynamics, you are still left with all sorts of complicated effects. Quite apart from the much more complicated equations you might expect, magnetic fields and velocity fields define two potentially different directions at each point, meaning that you can very rarely get away with assuming spherical symmetry to get down to a one-dimensional problem. Nevertheless there are some neat phenomena that occur.

One gadget I'd like to build is a demonstration of a fluid pump in which the only moving material is the fluid. It turns out there are simple effective designs (PDF) (some of which are in use in nuclear power plants). The biggest problem turns out to be choosing an appropriate fluid.


The basic requirement is that the fluid be conductive. A low resistivity would make the design easier, but as long as the resistivity isn't too high something can probably be arranged. So as I see it the feasible solutions are:

  1. Aqueous solution of some sort (e.g. salt water, vinegar). Unfortunately you tend to get electrochemistry happening: the current is carried by the motion of the ions, but as you add and remove electrons at the electrodes you get things like 2Cl- -> 2Cl -> Cl2, which aren't good for your electrodes or your health. You might be able to work around this with a sufficiently low voltage - as I understand it these reactions need a minimum of a volt or so to happen at any significant rate - but supplying power at such a low voltage is awkward. Apparently high frequencies work too - at tens of kilohertz or megahertz the ions don't migrate enough in any one direction to make much difference. But this means you have to use electromagnets, and moreover, electromagnets that work at those high frequencies.

  2. Mercury. Liquid metal, nice and conductive. Quite poisonous, at least in vapor form or when reacted with other things. Also very dense (so hard to get moving) and somewhat expensive per milliliter. It's really the poisonousness that's the problem.

  3. Wood's metal or "cerrobend". Melts in hot water. Contains a lot of cadmium, which is rather poisonous. Not too expensive. The gadget would need some means of heating to keep the metal liquid; for a demonstration that's meant to run for very long, this means a thermostat and safety systems.

  4. NaK. Eutectic alloy of sodium and potassium, liquid at room temperature. More reactive with water than either sodium or potassium. Non-toxic, at least in the subtle environmental sense, though even after the sodium and potassium have reacted with water you're left with concentrated hydroxides which will destroy skin. Might be possible to handle safely under clear mineral oil (but is a fire hazard if ever broken). May wet glass easily, making a sealed arrangement problematic. May be expensive.

  5. Galinstan. Eutectic alloy of gallium, indium, and tin. Liquid at room temperature. Not very toxic (probably safe provided you don't eat it or bathe in it, though oxide dust in the atmosphere is possibly a problem). Wets glass, so it would quickly render a container opaque. Is oxidation an issue? Expensive.


I think the way to go is with galinstan and a fairly small fountain. This conveniently lets you use little permanent "supermagnets". I'd aim for a U-shaped channel, with an electrode in the middle and on either side of the U. I'd have to figure out what voltage and current would be needed, but I could probably arrange to use a few volts at a few amps, which should be easy to get (out of a PC power supply, maybe even). The electrode material is another question - it looks like copper or aluminum would be attacked by the galinstan, but stainless steel should be okay.
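
To put rough numbers on "a few volts at a few amps": the pressure a simple conduction pump can develop is about B*I/h, where h is the channel depth along the field. A back-of-the-envelope sketch, with made-up but plausible values, suggests a head of a few millimetres of galinstan:

# All numbers here are guesses, just to get an order of magnitude.
B = 0.3       # tesla, field from a small neodymium magnet across the channel
I = 5.0       # amps driven through the fluid between the electrodes
h = 0.005     # m, channel depth parallel to the field
rho = 6440.0  # kg/m^3, density of galinstan
g = 9.81      # m/s^2

dp = B*I/h                 # pressure rise, in pascals
head = dp/(rho*g)          # equivalent height of liquid metal
print("pressure %.0f Pa, head %.1f mm" % (dp, 1000*head))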


Climate Change

I came across an interesting site the other day. It's videotaped lectures of a course on climate change, offered as a general science course (i.e. for non-science majors, who are required to take some number) at the University of Chicago. I'm not entirely happy with the way he handled quantum mechanics, but for the purposes of the course he does a fine job. And the later material in the course was all new to me - he talks about climate models, how you'd build one and what goes into one. The course is, quite sensibly, mostly about climate science, leaving discussions of what can be done about climate change almost entirely aside.



Portraits

Here's a link to some charming portraits of astronomers (via Dynamics of Cats).

I've done a little photography myself - in this age of digital cameras, who hasn't? - and I've really enjoyed things like macro shots, landscapes, and time-lapse video. But I've never really been good at photographing people. So when I see good portrait photography, like these, or like a wonderful book of portraits by Yousuf Karsh I saw the other day, I'm always kind of amazed at how much character it's possible to capture in a single picture.


Kernel Density Estimators

Since high school science class, I've been making graphs that show one variable as a function of another - force as a function of extension, redshift as a function of distance, intensity as a function of wavelength, et cetera. But around that time I was also writing code - probably generating some kind of chaos or fractal - that needed to plot the distribution of data points in one dimension. Seems simple, right? Well, the standard solution is to produce a histogram - divide up the range into "bins", count how many fall into each bin, and plot that. But choosing the right number of bins is something of an art: too many and your plot is hopelessly spiky, too few and you can't see any features your data might have. So I wondered if there was something better. It turns out that there is (though it's not always clear when it actually is better): a tool called a kernel density estimator.



First, the problems with histograms. I generated a collection of 400(*) photon arrival phases corresponding to two equal peaks. Above is plotted a histogram of their arrival times. Below is a histogram of their arrival times shifted by half a bin. To me, at least, it's not at all obvious that the two histograms are of the same distribution, let alone the same sample.



Since the signal is periodic, one natural thing to try is to work with the Fourier series. It's not too hard to construct some coefficients of the Fourier series of the photon arrival times. If I used all (infinitely many) coefficients I'd get a collection of spikes, one at each photon arrival time, which isn't too useful. But if I discard all Fourier coefficients after a certain point, I smooth out those spikes into a more reasonable shape. Here's what I get using ten harmonics:



The two peaks look more similar now, which makes sense since the Fourier coefficients don't really care about the starting phase. There are those nasty ripples, though - in fact, they go negative, which is rather distressing for something that's supposed to be a probability distribution. (Incidentally, there's some Fourier magic that goes into that error band on the sinusoid but I don't want to get into it right now.)

One way to get rid of those ripples is to think of this as a problem in digital filtering, where they are an example of the "ringing" that can occur in digital filters with too-sharp cutoffs. In that context, the usual solution is to taper the coefficients to zero smoothly, and that's what I will do. But there is a statistical way to think about what's going on.

First of all, tapering or cutting off the Fourier coefficients can be thought of as multiplying the Fourier coefficients by another set of Fourier coefficients. By a wonderful theorem of Fourier analysis, this amounts to convolving the set of photon arrival times by a kernel. That is, we take the forest of delta-functions representing the photon arrival times, and replace each delta function with a copy of some smoother function, and add them all together. This process, when carried out on the real line, is known to statisticians as constructing a kernel density estimator (the "kernel" is the smoother function, and it is used to estimate a probability density).

Simply chopping off the Fourier coefficients corresponds to the kernel sin((n+1/2)x)/sin(x/2), the periodic "sinc" function. This has a peak at zero but oscillates from positive to negative, so it's not too surprising that we got ripples. So to get rid of the ripples, we just choose a kernel that is positive and whose Fourier coefficients we know. There is a natural (if somewhat obscure) candidate: the von Mises probability distribution. This is a circular analog of a Gaussian, both in appearance and in a technical sense: the Gaussian distribution is the distribution with maximum entropy for specified mean and standard deviation. The von Mises distribution has maximum entropy for specified "circular mean" (which includes information about spread as well as location). In any case, it's a positive smooth periodic distribution whose Fourier coefficients can be computed in terms of Bessel functions using your favourite scientific computing tool. So using it instead gives this:



This looks pretty good; it shows the distribution pretty cleanly, with no ringing. There's only one fly in the ointment: while I don't need to specify a starting phase, I do need to come up with a parameter - analogous to the number of harmonics - that determines the width of the kernel. If I choose too wide a kernel, it flattens out all the features in my data; if I choose too narrow a kernel I get something horrible like this:



So in a way I haven't solved the original problem with histograms that motivated me. Fortunately, one of the selling points of kernel density estimators is that the statistical literature is full of papers on how programs can automatically choose the degree of smoothing. None of their techniques (that I have found) are appropriate for periodic data, but I have my own ideas about that (to be written up in future).
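
A minimal sketch of the estimator itself, in numpy, looks something like this; kappa is the concentration of the von Mises kernel, i.e. the smoothing parameter I still have to choose by hand, and n_harmonics plays the same role for the sharp cutoff:

import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind


def fourier_kde(phases, grid, n_harmonics=None, kappa=None):
    # Periodic density estimate on [0,1) built from the empirical Fourier
    # coefficients of the arrival phases: either truncate sharply after
    # n_harmonics, or taper with the Fourier coefficients of a von Mises
    # kernel of concentration kappa (which is what removes the ringing).
    grid = np.asarray(grid, dtype=float)
    est = np.ones_like(grid)              # zeroth coefficient: normalization
    k = 1
    while True:
        if kappa is not None:
            w = iv(k, kappa)/iv(0, kappa)     # von Mises taper
            if w < 1e-6:
                break
        else:
            w = 1.0                           # sharp cutoff
            if k > n_harmonics:
                break
        c = np.mean(np.exp(-2j*np.pi*k*phases))          # empirical coefficient
        est += 2*w*np.real(c*np.exp(2j*np.pi*k*grid))
        k += 1
    return est

# e.g.: grid = np.linspace(0, 1, 500); density = fourier_kde(phases, grid, kappa=50.0)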

(*) 400 is not actually a very small number of photons; Fermi often gets something like one photon a week from a source.



Ghetto apple crumble

I made a surprisingly successful apple crumble today. Apple crumble, as you may know, is basically apples with sugar and cinnamon, with a topping of butter, flour, oatmeal, and sugar, all in about equal quantities. Almost idiot-proof. So why "surprisingly successful"? Well, my oven died halfway through (in fact after preheating but before the crumble went in). It turns out you can microwave an apple crumble and it's fine. Who knew?

Recipe below.

Filling:
  • Five apples, peeled and diced. Granny Smith are good for a bit of tartness to balance the sweet crumble. (Peeling is technically optional but the peel makes for an awkward texture.)
  • 5 mL cinnamon.
  • 100 mL sugar.
  • a little butter.
Crumble:
  • 125 mL brown sugar (white will do fine).
  • 125 mL rolled oats.
  • 125 mL flour.
  • 100 mL butter.
  • a mL or two of vanilla.
Put the apples in a deep baking dish, sprinkle the sugar and cinnamon on top and shake a bit; scrape a little butter on top.

Combine the crumble ingredients in a bowl. In the likely case that the butter is hard, microwave it for a few seconds to soften it up. Mix the ingredients together; you should get a crumbly mass. Pack it in on top of the filling.

Microwave on 50% for fifteen to twenty minutes; when it's ready the top layer will have sunk somewhat because the apples will have softened. As it cools the crumble on top will harden somewhat.

Serve hot or cold. Particularly good with a scoop of vanilla ice cream.


Alchemy

The alchemists' dream (one of them, anyway) was always to make gold. We now know there are very good reasons they couldn't: since gold is an element, making it from anything that doesn't contain gold(*) requires you to change the nuclei of the atoms involved, while all the alchemists had access to was the electron shells around the atom(**). So their efforts were basically hopeless. Now, though, we do have the ability to manipulate nuclei, and in fact we do so on industrial scales. So could we make gold? In fact, let's be more ambitious: could I make gold in my basement? The answer is, surprisingly, yes.


First of all, the easiest nuclear reaction to go for is to transmute mercury into gold. Mercury is commercially available (though somewhat encumbered by very sensible environmental concerns) and not actually very expensive - on the order of $18 per kg (***). Gold, by comparison, is more like $40000 per kg. So there's room for some profit here.

How could I make the nuclear reaction happen? This particular reaction needs so-called "fast neutrons", that is, neutrons that are still zipping around at the high energies typical of nuclear reactions, as opposed to neutrons that are bouncing around at energies consistent with room temperature. I could stick the mercury in a "fast breeder reactor", but I don't actually have one in my basement, and they're kind of hard to build. I could use a particle accelerator to generate some neutrons (basically by bashing nuclei around until some neutrons fall off), but while I do have a particle accelerator in my basement, it takes a much more serious one than I can reasonably build to get neutrons out. Nuclear fusion reactions give out neutrons, though, so all I'd have to do would be to build a fusion reactor in my basement. Improbable as it sounds, this actually is feasible, provided I'm not trying to get any energy out.

The trick is that there's a fusion reactor, called the Farnsworth-Hirsch fusor, that is surprisingly simple to build. It is actually something of a cross between a fusion reactor and a particle accelerator: I'd set up an electrical potential in a spherical configuration, accelerating deuterium ions towards the center, where they'd crash into other deuterium ions, occasionally hard enough for fusion to happen. This fusion would release a fast neutron.

To make gold, then, all I'd have to do would be to build a fusor, surround it with a blanket of mercury, run it for a while, and then extract the gold from the mercury. Simple, really.

Let's look at the economics, though.

Suppose we want to make a kilogram of gold, giving us $40000. We need about a kilogram of mercury, costing $18. We also need about 5 g of deuterium (assuming perfect efficiency), which would cost about $30. Finally, we need the power to run the fusor. That's not going to be cheap. An optimistic number for the best fusor ever built is about 10^12 neutrons per second from about 4 kW input. That amounts to 9*10^14 neutrons per kilowatt-hour. Assuming perfect efficiency again, we need about 3*10^24 neutrons for our kilogram of gold, or about 3 terawatt-hours, roughly the world's total electricity production for an hour and a half. At $0.10 per kilowatt hour (I live in the land of cheap hydroelectricity) that's over three hundred million dollars.
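
Spelling the arithmetic out, using only the figures quoted above (and assuming, generously, one neutron per gold nucleus):

N_A = 6.022e23
neutrons_needed = 1000.0/197*N_A            # one neutron per gold-197 nucleus, ~3e24
neutrons_per_kWh = 1e12*3600/4.0            # 1e12 n/s from a 4 kW fusor
kWh = neutrons_needed/neutrons_per_kWh      # ~3e9 kWh, i.e. ~3 terawatt-hours
print("%.1e kWh, about $%.0f million at $0.10/kWh" % (kWh, kWh*0.10/1e6))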

There's a somewhat more disturbing possibility, though. Gold is easily obtained; you can just buy it. But as a global society, we try very hard to make sure you can't easily get plutonium, particularly plutonium-239, since that is well-suited to building atomic bombs. (You can make bombs out of uranium too, but that requires you to separate the different isotopes, which are very nearly chemically identical. Plutonium, on the other hand, can easily be separated from uranium since it is a different element.) Uranium isn't too hard to come by, especially "depleted uranium" (uranium with most of the uranium-235 removed) - armies fire the stuff at each other, for example. And if you had lots of U-238, a fusor would let you make plutonium out of it. The cost would be high, hopefully prohibitively so, but you could do it without doing anything that would put you on the radar of the IAEA. Fortunately, the power use is so outrageous we don't really need to worry about it.

So, in short, I could make gold in my basement, but not in any appreciable quantity, and not for any kind of sensible price.


(*) Since gold is a noble metal, there aren't many chemical compounds that contain gold; unlike, say, iron, gold is often found on Earth as lumps of raw gold. So while in principle alchemists could have started from some gold compound and gotten the original gold back, this would not have been a very interesting accomplishment.

(**) There are actually situations where you can affect the nucleus by manipulating the electron shells. For example, if an isotope decays by electron capture, you can drastically slow down its decay by stripping away all its electrons. But stripping away all the electrons from a reasonably heavy element is one of those things that's virtually impossible under terrestrial conditions but not too rare astrophysically. In any case this has no effect on stable isotopes.

(***) Canadian dollars and American dollars are equivalent to astronomical accuracy.


Curve Fitting part 4: Validating Bayesian Code

In my previous post, I wrote a tool to use Bayesian inference to ask whether a collection of photons represented a pulsed signal, and if so, what its parameters were. It gave plausible answers, but knowing my own fallibility, I really want some more careful test before trusting its output.

I previously talked about how to implement such a correctness test in a frequentist setting. But in a Bayesian setting, what one gets are probability distributions describing our knowledge about the hypotheses - how can a program test those?

After racking my brains for a while trying to find a way to do this, I asked about it on the scipy-user mailing list, and received a few useful suggestions, the most valuable of which was to send a polite email to Professor Tom Loredo. I did, and he replied immediately with a very helpful email, and some links to other people's work on the subject.

It turns out that, in the context of Bayesian fitting, the issue is called "calibration". The key realization is that if you select a model according to your prior distribution, generate a data set, and then do your fitting to generate posterior distributions, then your "true" parameters that you used to generate the data set look just as if they had been drawn from the posterior distribution.

This makes a certain kind of sense: after all, your posterior distribution is supposed to describe the distribution of models that might have generated your data set.

So how do you turn this into a test? After all, if you just get one sample from a distribution, it's pretty hard to say anything very definite, especially when the tails - the most unlikely extreme parameters - are not necessarily very well modeled. If you try to generate another sample, you pick a different model and must then generate a different data set, so you get a new point but a totally different posterior distribution. So now you have two points, drawn from two different distributions, and your situation has not necessarily improved.

There's a trick to let you work with a large sample: you transform the values so that they all have the same distribution. You can do this because you know the posterior distribution you're working with; in my case I have its values sampled evenly on a grid. So I can construct the cumulative distribution function and evaluate it at the true parameter value to get the posterior probability of a value less than the true model. This should be distributed uniformly no matter the shape of the posterior.

In fact, I'd rather use a slightly different trick: instead of looking at the probability of a value less than the true model, I'll use the probability of a value more extreme than the true model. Essentially I'll combine both tails. This has a more direct bearing on the question of confidence intervals, and still results in a uniform distribution as I repeat the process many times.

Once I have a long list of many values that should be distributed uniformly, there are any number of tests I can use; I'll use the handy but not particularly good Kolmogorov-Smirnov test. My unit testing code now looks like this:


import numpy as np
import scipy.stats

import bayespf   # the fitting module from the previous post


def test_credible_interval():
    M = 100   # number of simulated data sets
    N = 50    # photons per data set

    ps = []
    for i in range(M):
        # Draw the "true" pulsed fraction and phase from the (flat) priors
        f, p = np.random.rand(2)

        events = bayespf.generate(f, p, N)

        phases, fractions, r, P = bayespf.infer(events)

        # Marginal posterior for the pulsed fraction
        frac_pdf = np.average(r, axis=0)
        fi = np.searchsorted(fractions, f)
        # Posterior probability of a fraction below the true value...
        p = np.sum(frac_pdf[:fi])/np.sum(frac_pdf)
        # ...folded into the probability of a value more extreme (both tails)
        p = 2*min(p, 1-p)

        ps.append(p)

    # These should be uniform on [0,1]; lambda x: x is the uniform CDF
    D, pks = scipy.stats.kstest(ps, lambda x: x)
    assert pks > 0.01


Note that I don't bother testing the posterior for pulse phase. This is just laziness.

In a real Bayesian problem, there would usually be many more parameters, and I would probably not be able to evaluate the posterior on a grid like this. I'd probably be using some sort of Monte Carlo method, which would return a list of samples drawn from the posterior instead. But there are reasonable approaches in this setting too.


So far so good. But what about the probability that the signal is pulsed? In some sense hypothesis testing is just a special case of parameter fitting, but the discreteness of the "parameter" poses a problem: there's no way we can hope for a nice uniform distribution when only two values are possible. Professor Loredo very kindly sent me an unpublished note in which he addresses this problem, showing that if your code accepts a hypothesis whose posterior probability is P_crit, then a correct piece of code will be right - have chosen the correct hypothesis - with probability at least P_crit. Unfortunately P_crit is only a lower limit on the probability of getting things right; but there isn't really a way around this: suppose my code were working with a million photons a run. Then it would, in almost every case, give a probability very close to zero or one. There would be very, very few errors, no matter what value P_crit you set.

The fact that P_crit is only a lower limit does mean that this doesn't allow you to catch code that is too conservative: if your code makes fewer errors than its probabilities claim it should, this test has no way to tell that that's what's happening.

One must also choose P_crit carefully. In my example, if I choose P_crit=0.5, then code that simply chose randomly, or returned the same value all the time, would pass: after all, my priors specify equal probabilities for each model, and with P_crit=0.5 the code only needs to be right half the time. On the other hand, with P_crit really high, the code will almost never be wrong (and it only takes a few errors to fail the test), but it will also almost never actually come to a conclusion, so you'll wait forever for enough accepted cases. There should be some fairly well-defined optimum value of P_crit, and I need to think more about what it is.

In any case, having selected P_crit and the other parameters, I can write a unit test for the hypothesis testing component:


def test_pulsed_probability():
    np.random.seed(0)
    p_accept = 0.75   # P_crit: posterior probability required to accept a hypothesis
    M = 50            # number of accepted cases to collect
    N = 30            # photons per data set

    accepted = 0
    correct = 0
    total = 0
    while accepted < M:
        # Choose pulsed or unpulsed with equal probability, matching the prior
        if np.random.rand() > 0.5:
            f, p = np.random.rand(2)
            pulsed = True
        else:
            f, p = 0., 0.
            pulsed = False

        events = bayespf.generate(f, p, N)

        phases, fractions, r, P = bayespf.infer(events, n_phase=100, n_frac=101)

        if P >= p_accept:       # the code confidently claims pulsations
            accepted += 1
            if pulsed:
                correct += 1
        if P < 1-p_accept:      # the code confidently claims no pulsations
            accepted += 1
            if not pulsed:
                correct += 1
        total += 1
    # Of the M accepted cases, at least a fraction p_accept should be correct
    assert 0.01 < scipy.stats.binom(M, p_accept).cdf(correct)



As a final note, I'd like to thank Professor Loredo for his help, but note that any errors in the above code or description are entirely mine.


Curve Fitting part 3: Bayesian fitting


When you fit a curve to data, you would usually like to be able to use the result to make statements about the world, perhaps something like "there's a fifty percent chance the slope is between 1 and 2". But this is a bit peculiar from a philosophical point of view: if your data is a set of measurements of some real-world phenomenon, then it's a bit funny to talk about probabilities that the slope has certain values. The phenomenon has some fixed slope, so we can't somehow repeat the experiment many times and see how often the slope is between 1 and 2. But there is a way to make such a statement meaningful: Bayesian inference.

The basic idea is that you use probabilities to quantify the degree of uncertainty you have about the world; you are using a system of logic that uses probability and probability distributions rather than binary logic. This may sound fuzzy and esoteric, but Bayesian logic is used very successfully in, for example, automatic systems to evaluate whether email is spam.

When applied to data, Bayesian reasoning lets you make meaningful statements about probabilities of parameter values, at the cost of making some explicit assumptions going in, and also at the cost of some potentially substantial computation. I'll work through an example of fitting a light curve to a set of photon arrival times, using a Bayesian procedure.


First the problem setting: suppose we observe a pulsar, whose period we know exactly (perhaps from radio observations), with an X-ray or gamma-ray telescope. We see some (fairly small) number of photons, and we want to know whether the flux we see is modulated at the pulsar period. We "fold" the photon arrival times, recording the pulsar phase when each one arrives. So our data is a collection of some hundreds or thousands of numbers between zero and one.

The model we'll fit includes some fraction f of photons whose arrival times are distributed as a sinusoid with a peak at phase p; the remaining fraction (1-f) are independent of phase.

The key idea of Bayesian curve fitting is that if you have some collection of hypotheses Hi about the world, each having some probability P(Hi), and you make some observation, there's a simple procedure to update these probabilities to reflect your new knowledge.

From a philosophical point of view, it's a bit worrisome to have to supply probabilities for hypotheses (called "priors") about what the world is like before we ever make any observations. It amounts to making assumptions in the absence of data. But in fact there are assumptions built into the usual "frequentist" methods of inference as well, and in Bayesian inference the ability to be explicit about the hypotheses at least makes it clear what's going on.

What assumptions should we make for our pulsar? Well, there might or might not be pulsations, so we'll have two basic hypotheses: no pulsations and pulsations. Absent any information, we'll assume these are equally likely. Then, if there are pulsations, we need to specify prior distributions of phase and pulsed fraction. Since both these parameters are between zero and one, we'll just take a so-called "flat prior" that makes all values between zero and one equally likely.

Given these priors, we need to figure out how to use our observed photon arrival times to update the priors to give us "posteriors". The general formula is:

P(Hi|D) = P(D|Hi) P(Hi) / P(D)

That is, the probability of hypothesis i given the data, P(Hi|D), equals the probability of the data given Hi, P(D|Hi), times the prior probability of Hi, P(Hi), divided by the total probability of the data under all hypotheses.

The first thing to note is that P(D|Hi) is just what we need to evaluate for a maximum-likelihood estimate: how likely data like what we observe is to arrive given some hypothesis. We only need to define it up to a constant, since it appears in both numerator and denominator. For our problem, the probability density for pulse arrival times is p(f,p,t) = f(1+cos(2 pi (t-p)))+(1-f). So P(D|Hi) is the product of p(f,p,ti) for all events ti.

How do we form P(D)? Well, since we have two hypotheses, H0 (no pulsations) and H1 (pulsations), with equal prior probabilities, P(D) is proportional to P(D|H0)+P(D|H1). Further, H1 is actually a family of hypotheses depending on two parameters, so we need to integrate P(D|H1) over all possible values of the parameters.

If we apply the above formula, then, we should get two things: a posterior probability that there are any pulsations at all, and a posterior probability distribution for the two parameters.

Let's look at python code to implement this. First of all, we're going to need to be able to generate fake data sets:


import numpy as np
import scipy.stats


def generate(fraction, phase, n):
    # Number of photons drawn from the pulsed component
    m = np.random.binomial(n, fraction)
    # Start with uniformly distributed pulsed photons...
    pulsed = np.random.rand(m)
    # ...then move some from the lower half to the upper half to make a sinusoid
    c = np.sin(2*np.pi*pulsed) > np.random.rand(m)
    pulsed[c] *= -1
    pulsed += 0.25+phase
    pulsed %= 1

    # Mix in the unpulsed (uniform) photons and shuffle
    r = np.concatenate((pulsed, np.random.rand(n-m)))
    np.random.shuffle(r)
    return r


This routine generates the photons in two parts. First it decides randomly how many come from the pulsed component. Then the photons from the pulsed component are generated uniformly. To convert this to a sinusoidal distribution we select some of the photons in the lower part and move them to the upper part. We then add in some uniformly-distributed photons, and shuffle the two samples together.

Now we write a routine to evaluate the probability density function:


def pdf_data_given_model(fraction, phase, x):
    return fraction*(1+np.cos(2*np.pi*(x-phase)))+(1-fraction)


Note that in spite of appearances, this routine can act on an array of values at once; this is important since python's interpreted nature makes each line of python take quite a long time.

And now the fitting routine:


def infer(events, n_phase=200, n_frac=201):

    events = np.asarray(events)
    phases = np.linspace(0, 1, n_phase, endpoint=False)
    fractions = np.linspace(0, 1, n_frac)

    # Accumulate the log-likelihood on a (phase, fraction) grid, one event at a time
    lpdf = np.zeros((n_phase, n_frac))
    for e in events:
        lpdf += np.log(pdf_data_given_model(fractions, phases[:, np.newaxis], e))

    # This weird-looking hack avoids exponentiating very large numbers
    mx = np.amax(lpdf)
    p = np.exp(lpdf-mx)/np.average(np.exp(lpdf-mx))

    S = np.average(np.exp(lpdf))

    return phases, fractions, p, (S/(S+1))


This uses one of the simplest approaches to calculating the distribution and its integral: just evaluate on a grid. Integration then becomes averaging. More sophisticated Bayesian problems usually involve high-dimensional integrals, and so a whole elaborate machinery has evolved for efficiently evaluating these (for example the python package pymc).

Finally, some wrappers to generate a fake data set, call the fitting routine, and plot and print the results:


if __name__ == '__main__':
    import pylab as pl

    events = generate(0.2, 0.5, 200)
    phases, fractions, r, P = infer(events)
    print "Probability the signal is pulsed: %f" % P

    pl.subplot(211)
    pl.contourf(fractions, phases, r)
    pl.xlabel("Pulsed fraction")
    pl.ylabel("Phase")
    pl.xlim(0, 1)
    pl.ylim(0, 1)

    pl.subplot(212)
    p = np.average(r, axis=0)
    li, mi, ui = np.searchsorted(np.cumsum(p)/np.sum(p),
                                 [scipy.stats.norm.cdf(-1), 0.5, scipy.stats.norm.cdf(1)])
    pl.plot(fractions, p)

    pl.xlabel("Pulsed fraction")
    pl.ylabel("Probability")
    pl.axvline(fractions[li])
    pl.axvline(fractions[mi])
    pl.axvline(fractions[ui])
    print ("Pulsed fraction: %f [%f, %f]" %
           (fractions[mi], fractions[li], fractions[ui]))

    pl.show()


One key point here is that when I want to know the distribution of pulsed fraction but don't care about the phase, I integrate (i.e. average) the joint distribution along the phase direction.

This gives us the following plot:



And the following output:


Probability the signal is pulsed: 0.450240
Pulsed fraction: 0.210000 [0.100000, 0.315000]


So it looks like the fitting routine is working: even with relatively few photons and a small pulsed fraction, it has selected quite good best-fit values. The probability that the signal is actually pulsed seems a little low, but keep in mind that we have only two hundred photons, and only forty of these are actually pulsed (while a Poisson uncertainty on the number of photons would be something like 14). But giving plausible results is not really enough: I want to systematically test this routine for correctness. But that will be another post.



Testing statistical tests

Often one wants to ask something like "is there a periodic signal in this data?" or "is there a correlation between these two data sets?". Often there is some way to calculate a number that represents how much signal there is or how much correlation there is. But of course there is always noise and uncertainty in the data, and so it's possible that the apparent signal or correlation is actually just noise. So for such a number to be useful, it must also come with an estimate of how likely it would be to get such a value if there were no signal, only noise. Such an estimate is often called a p-value.

I should say, before I go on, that there are a number of things a p-value isn't: it's not (one minus) a probability that the signal is real. It's not a measure of the strength or importance of the signal. It's not even enough to tell you whether you should believe the signal is real: if you look at a hundred thousand data sets, you should expect to find many with p<0.01 even if there's nothing there. But this has been better discussed elsewhere, and it's not my topic today.

Today I'm addressing a specific issue: suppose that you have implemented some new way of quantifying the presence of periodic signals, or correlations, or whatever, and your implementation returns a p-value. How do you make sure that p-value is actually correct?


Of course, you can (and should!) go over your code carefully, with a critical eye. But evaluating the p-value often involves evaluating some arcane special function. How can you be sure that you implemented it correctly, or for that matter, that you got the math right in the first place? It turns out there's a nice way to test your code as a black box.

For our example test, let's take a well-known test: the chi-squared test. As I'm going to use it here, we have m measurements, each with measurement uncertainty one, and we want to know whether the underlying quantity is the same in all measurements. The chi-squared test works by constructing the quantity chi-squared, which is the sum of the squares of the differences of the measurements from the mean:


import numpy as np
import scipy.stats

def chi_squared(values):
    return np.sum((values-np.mean(values))**2)


This quantity has a known distribution, naturally enough called the chi-squared distribution. It is parameterized by a number called the "degrees of freedom", which in our case is m-1, the number of measurements minus one because we subtracted the mean. For my p-value, I will just ask what the probability is that random noise would produce a chi-squared value larger than what we observed. (Note that we might well want to consider unusual any data set for which the chi-squared value is unnaturally small as well as ones where the chi-squared is unnaturally large, but that can only happen if we have overestimated the uncertainties on our measurements, so I'll leave it aside.)


def chi_squared_p(m, c):
    return scipy.stats.chi2(m-1).sf(c)


For a distribution in scipy.stats, the "sf" method evaluates the "survival function", that is, what fraction of the distribution's values are larger than the given value.

So now we have a test and a p-value calculation. How are we to check that we implemented it correctly? Well, let's pick an m and a p-value we're interested in, say p0 = 0.05. Let's also pick a number of repetitions, N. We will generate N fake data sets in which there is nothing but noise, and see how many times our statistic reports p<p0. This should be something close to p0*N. How close? Well, the statistic should behave like flipping an unfair coin with probability p0, so we can use the binomial distribution to find limits that should contain the number of false positives 98% of the time.


def test_p(N, p0, m):
    false_positives = 0
    for i in range(N):
        # Pure-noise data set: m independent unit-variance Gaussian measurements
        if chi_squared_p(m, chi_squared(np.random.randn(m))) < p0:
            false_positives += 1

    # The count of false positives should be binomial(N, p0); check both tails
    assert scipy.stats.binom(N, p0).cdf(false_positives) > 0.005
    assert scipy.stats.binom(N, p0).sf(false_positives) > 0.005


There we have a test; the more trials you let it run (N), the tighter constraints it puts on the p-value. Unfortunately, it fails 2% of the time even if everything's fine, and it's random, so if it fails, you can't simply rerun it in the debugger to investigate (since the next run will get different values). There's a way around these problems, too. The first problem can be avoided by running the test once, then if it fails, rerunning it. Then the test only reports a failure 0.04% of the time if all is well, but a real problem in the algorithm will probably still show up. The second problem you can solve by seeding the random number generator every time you run the test. In python, decorators provide a handy way to avoid having to write boilerplate code to do both these things for each test; I have two decorators, seed() and double_check, that do this. I'll omit their code here because they have some nastiness to work around limitations in nosetests (but you can find them, along with a more detailed example of the techniques in this post, here).

This test is nice, but a little limited: it only tests one p value, p0. Since every time the statistical test runs it returns a p value, surely we can do better?

Well, since the p value is supposed to be the probability that a random data set will exceed the value obtained in a particular test, if we generate lots of noise samples, we should have the fraction whose value is less than p0 roughly equal to p0 for every p0 - in other words, the p-values returned should be uniformly distributed. So we can use a statistical test for uniform distribution to check whether they're coming out right! One such test, present in scipy, is the Kolmogorov-Smirnov test. That gives the following scheme:


def test_ks(N, m):
    pvalues = [chi_squared_p(m, chi_squared(np.random.randn(m))) for i in range(N)]

    # The p-values should be uniform on [0,1]; lambda x: x is the uniform CDF
    D, p = scipy.stats.kstest(pvalues, lambda x: x)
    assert p > 0.01


This does have the drawback that the K-S test is known to be most sensitive in the middle of the distribution, while what we care about is actually the tail. A minor modification can help, at the cost of some speed:


def test_ks_tail(N, m, p0):
    pvalues = []
    while len(pvalues) < N:
        pv = chi_squared_p(m, chi_squared(np.random.randn(m)))
        # Keep only tail p-values below p0; rescaled by p0, they should still be uniform
        if pv < p0:
            pvalues.append(pv/p0)
    D, p = scipy.stats.kstest(pvalues, lambda x: x)
    assert p > 0.01


This has the embarrassing problem that it's too good: it turns out that while this works for the chi-squared statistic I describe, for the K-S test itself, the p-value returned is actually rather approximate, so that this test tends to fail.

Finally, there's one essential thing to check: how stringent are these tests? If we return the wrong p-value, do they pick it up? A quick check can be done simply by modifying the chi-squared calculator to jigger the value a bit, then checking that the tests fail. With a thousand trials, the tests pass just fine if I return 1.01*chi-squared; I have to increase it to something like 1.05*chi-squared to start getting failures.

This brings us to the elephant in the room: the power of a statistical test. The p-value is really only half the answer; it tells us how likely we are to get an apparent signal when there's nothing there. It tells us nothing about whether we would actually see a signal if it were there; for that, you need a different quantity, the power of the test.



Cosmos

In honor of Carl Sagan day, I'd like to link to Cosmos:





Nearly thirty years later, and in spite of tremendous advances in astronomy, Cosmos is still a wonderful introduction. Rather than focus on the science, Carl Sagan does a great job of sharing the wonder of discovery.

I am kicking myself now: when I was much younger, I was visiting a cousin who works at Cornell, and he pointed to Carl Sagan's office and asked if I wanted to go up and say hello. Out of shyness, I guess, I demurred, but it would have been a fascinating visit.


Eureka?


The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' (I found it!) but 'That's funny ...'
-Isaac Asimov

I recently found a very exciting new millisecond pulsar. But my first thought was not "Wow! A new millisecond pulsar!" but "Isn't that a suggestive bit of interference?"

To explain myself a bit further, I was looking at candidates from the drift-scan survey. These are all the periodic signals we picked up with the GBT, and they naturally include every cell phone call, car ignition, laptop computer, and worn-out electric blanket in the vicinity. Most are easy to distinguish from real pulsars, but some aren't. One characteristic pulsars generally have is that they're noisy: after all, they're faint astronomical sources, so it would be very strange if they were so strong we couldn't see any noise in the observation. But in the plot showing 1023's signal, seen above, you can see there's no noise. Turns out 1023 is just plain bright. I was not the first to misclassify it, though.

J1023 normally looks like a fairly ordinary 17th magnitude star. It's in star catalogs and photographic plates (enter RA 10 23 47.67 and Dec 00 38 41.2) going back as far as 1952, but it never got any attention in particular until 2002.

In 1998, the FIRST sky survey was carried out with the VLA. This is a rather different beast from a pulsar sky survey; it gives average brightnesses over several-minute integration times, so it's not going to have any luck detecting pulsations. But because the VLA is an interferometer, it's able to generate quite high-resolution images. This kind of survey, called a "continuum" sky survey (as opposed to a survey for spectral lines or a pulsar survey) is good for finding nebulae, radio stars, and galaxies. J1023, or to give it its full name, FIRST J102347.67+003841.2, caught the attention of people analyzing the FIRST data because it was variable: in the three observations just days apart, its radio brightness varied by a factor of at least three. Galaxies generally don't change so quickly, so they thought the source was worth following up.

Since the source had an optical counterpart, they chose to follow it up by taking optical spectra. These optical spectra, and those by another group, showed that it was blue and had double-peaked emission lines. A blue spectrum indicates very hot gas, emission lines indicate hot diffuse gas, and the fact that they were double-peaked is normally a sign that they were coming from an accretion disk: gas on one side of the disk is moving rapidly towards us, so its emission line is shifted towards the blue end of the spectrum, while gas on the other side is moving rapidly away, so its emission line is shifted towards the red end. Since the hottest gas is the nearest the center and the fastest moving, it produces the strongest emission, and you get a double-peaked spectrum. Other observers also looked at the source with high-speed photometry (just looking at how bright the object was) and found it was flickering, which is normal for accreting systems, as turbulent knots in the disk come and go. So people looked at all this data and classified J1023 as an unusual "cataclysmic variable", that is, an accreting white dwarf.

In 2002, observations showed that the emission lines were gone, the spectrum had gone back to the Sun-like colors observed in 1999 and earlier, and the flickering had tailed off. All that was left was a Sun-like star that varied in brightness by 0.4 magnitudes in a predictable fashion as it travelled around its orbit. This return to quiescence is normal for cataclysmic variables: they go through active phases and passive phases. But John Thorstensen and Eve Armstrong decided to try to come up with a model that explained the light curve (the brightnesses and colors).

When you have a binary system like this in which the stars are close together, the companion is heated by the white dwarf. When the hot side is facing us, the star is brighter and bluer than when the cool side is facing us. So it shouldn't be too hard to look at the light curve and figure out how bright the white dwarf is.

Well, as always, it's not as easy as it sounds, but by dint of great effort, Thorstensen and Armstrong managed to come up with a model that fit. But there was a problem: it needed a massive bright white dwarf. So massive, in fact, that it couldn't be a white dwarf, and so bright that we should have seen it in the spectrum, as a blue bump. There are many possible explanations for this sort of thing; software bugs, calibration problems, and so on. But they did a careful analysis, didn't find any of those, and decided to go out on a limb and suggest that the prevailing opinion was wrong: J1023 didn't contain a white dwarf at all, but a neutron star.

There the question stood for a few years. Homer et al. took X-ray observations of J1023, finding it was bright in X-rays, which supported Thorstensen and Armstrong but wasn't definitive. Thorstensen kept an eye on the system, taking a spectrum now and then to see if it was doing anything new. It wasn't. Then we came along.

When I found the pulsar, we weren't very sure of its position. We were using the low frequency of 350 MHz so that our beam would be big enough to see any piece of sky for two minutes as the Earth turned. But that big beam means that when we find a pulsar, all we know is a very rough position. Nevertheless, when we found a bright fast millisecond pulsar, we knew it was going to be interesting, so we requested a follow-up observation with the GBT.

The night before we were supposed to take the data, Jim Condon of the NRAO emailed us to point out that J1023 was in our beam. Ingrid Stairs, one of my collaborators on the survey, did some reading and found Thorstensen and Armstrong's paper. Suddenly there was the possibility that our millisecond pulsar might actually have been accreting in 2001. This was a big deal - Duncan Lorimer bet us all a drink that it wasn't the same source (covering his bets, I think - he was hoping it was as much as the rest of us), and we were all excited. So we definitely wanted to settle the question as soon as possible.

Unfortunately, the observation we had planned was another 350 MHz observation, just to see whether the source was real and to start to build up a timing solution for it. So as Scott Ransom and I prepared to run the observation, we argued: I wanted to look at 350 MHz first, to make sure there was something there at all, and Scott wanted to take a 2 GHz observation pointed at the position of J1023, so that the much smaller beam would tell us whether the source was really J1023 or not. In the end we compromised: we started at 350 MHz, and the pulsar came booming in right away. So we retracted the prime focus arm and switched receivers and pointed the GBT at J1023, and sure enough, there it was, loud and clear right at the position of J1023. We took the rest of the observation at 2 GHz, and immediately began requesting follow-up observations.

We initially planned to follow up with the GBT, since it was what we were used to, but Paulo Freire emailed us and asked us to please propose for time with Arecibo: there was not much else for Arecibo to look at during the hours when J1023 is visible there, and Arecibo's funding was under threat, so it could really use a splashy discovery. (We were keeping the discovery quiet at this point, but Paulo was sharing an office with one of my collaborators, so there was no keeping it from him.) With Arecibo, this already-bright pulsar comes in beautifully, and we get nice clean timing.

Timing the pulsar, we quickly came up with a model of the orbit of the pulsar, and sure enough it agreed with Thorstensen and Armstrong's orbit. In fact, not only did the orbital period come out the same, if we extended our solution back to 2005, we got the same orbital phase as they did: over those three years, we were able to account for every single turn of the companion around the pulsar. Needless to say, this removed all doubt that our system was actually J1023.

The follow-up observations also revealed some peculiar phenomena, like plasma floating around the system and orbital period variations (very small, of course), but the essence of it is there: J1023 is a system with a neutron star and a companion that had an accretion disk for about a year around 2001 but is now a millisecond pulsar. The paper has just been published in Science, and is available on arxiv.org.


Full post

Unobtainium capacitors


Continuing my train of thought from the other day, what are the theoretical limits for a capacitor? Just how much energy can you store in one?

If you try to analyze it in terms of voltages and capacitances, it becomes very tricky: you can make thinner plates closer together without increasing the weight or volume of the capacitor at all. The key to making the problem tractable is to realize that the energy is stored in the electric field. The goal in trying to store as much energy in a capacitor as possible is to get the electric field as strong as possible over as much volume as you can manage.

Since this will be an order-of-magnitude estimate anyway, let's assume that the electric field completely fills the capacitor. That leaves the maximum field strength as the only remaining parameter. Since we're looking for theoretical limits, let's assume you're limited by the tendency of the electric field to ionize the atoms in whatever you make this capacitor out of. So an electric field of about a volt per angstrom is just about all we can hope for. This gives us an energy density of about 400 Joules per cubic centimeter, or 0.1 watt-hours per cubic centimeter. This really isn't that much, and in fact according to Wikipedia there is a company claiming its capacitors do substantially better than this. Granted, it's a patent application, but where did I go wrong?
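
If you want to redo that arithmetic, here it is in a few lines of Python. The one-volt-per-angstrom field is just the round number assumed above, and (1/2) eps0 E^2 is the standard energy density of an electric field.

    # Order-of-magnitude energy density for a capacitor limited by a breakdown
    # field of about one volt per angstrom (the assumption made above).
    eps0 = 8.854e-12             # F/m
    E = 1.0 / 1e-10              # V/m, i.e. one volt per angstrom

    u = 0.5 * eps0 * E**2        # energy density of the field, J/m^3
    per_cc = u * 1e-6            # J/cm^3
    print("%.0f J/cm^3 = %.2f Wh/cm^3" % (per_cc, per_cc / 3600))
    # Roughly 440 J/cm^3, or about 0.1 watt-hours per cubic centimetre.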

Well, there are certainly things I ignored - dielectric constant, for example - but I think the basic issue is that in real capacitors the limit really is the strength of electric field your materials can tolerate. Since the energy density depends on the square of this number, clever engineers who can come up with extraordinarily tough materials have a fair amount of room to beat my crude estimate. But it seems clear to me that that room is far from unlimited: capacitors will continue to improve as energy storage devices, particularly in aspects I haven't touched on like leakage, but don't expect one to power your laser pistol any time soon.

Full post

A Missing Link


We've been pretty sure that low-mass X-ray binaries turn into millisecond pulsars for a while now. But no one had ever seen the one turn into the other. Thanks to the drift-scan survey, now we have.

The system, J1023, is currently behaving just like a normal millisecond pulsar: it spins very regularly, and we see radio pulses every cycle. But in 2001, before anyone had any idea there might be a pulsar there, optical observations showed that it was flickering, blue, brighter than usual, and had double-peaked emission lines. All of these are signs of hot gas flowing in the system, and the double-peaked lines in particular are a pretty clear indication that the system had an accretion disk. Unfortunately nobody knew to point an X-ray telescope at J1023 while it was doing this, but it seems pretty clear in retrospect that it was an X-ray binary at the time.

In 2002, observations showed the emission lines were gone, the brightness and color were back to 1999 levels, and the flickering was tailing off to nothing. So the optical story stands today: no sign of an accretion disk. But there is clearly a radio pulsar there, and we're able to make some extremely good measurements of the system because of it. We know that the pulsar is 7.2 times as massive as its companion, for example, and if the pulsar has a typical neutron star mass of 1.4 solar masses, the system is 1.3 kiloparsecs away (about 4200 light-years) and we're seeing it at an angle of 46 degrees.
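
Those last numbers follow from very simple arithmetic; with the assumed 1.4 solar mass pulsar, the mass ratio also pins down the companion's mass, which is worth spelling out.

    # Trivial arithmetic behind the quoted numbers (1.4 Msun is an assumption).
    mass_ratio = 7.2                 # pulsar mass / companion mass, from timing
    m_pulsar = 1.4                   # assumed pulsar mass, solar masses
    print("companion mass: %.2f solar masses" % (m_pulsar / mass_ratio))

    kpc_in_ly = 3261.6               # light-years per kiloparsec
    print("1.3 kpc = %.0f light-years" % (1.3 * kpc_in_ly))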

The image above is a pair of artists' renditions of it (well, sort of, I did most of the work using the software binsim and I'm hardly an artist, though Joeri van Leeuwen improved them significantly). They assume the pulsar's mass is 1.4 solar masses, and show the system as we think it was in 2000 and now. The hot disk of matter, present in 2000 but absent now, produced the optical emission that was observed, but (we think) it was blown away when a drop in the accretion rate allowed the radio pulsar to turn on, producing not just the beam of radio waves pictured in the image but also a powerful wind.

Since we know the system went through one roughly two-year active phase, it seems entirely possible that it will do so again within the next few years. If that happens, we'll be able to watch the formation of an accretion disk in a system where we have very good measurements of the orbit and system geometry from pulsar timing. That's never been seen before, and will be very exciting.


While I was the one who found this source, everyone involved in the drift-scan survey did essential work in making it possible for me to find it, and I worked with many other people to carry out the follow-up observations, so let me thank all my collaborators: Ingrid H. Stairs, Scott M. Ransom, Victoria M. Kaspi, Vladislav I. Kondratiev, Duncan R. Lorimer, Maura A. McLaughlin, Jason Boyles, Jason W. T. Hessels, Ryan Lynch, Joeri van Leeuwen, Mallory S. E. Roberts, Frederick Jenet, David J. Champion, Rachel Rosen, Brad N. Barlow, Bart H. Dunlap, and Ronald A. Remillard.
The paper describing the discovery has just been published in Science, and can also be read on arxiv.org.

Full post

Dar Williams

I only really started to listen to music in the 90s, so my idea of what music should be is bleeps and clicks, with human voices only in the form of ironic movie samples. (This is of course precisely the range of music that can be made by geeky shut-ins with piles of computers and electronics. Literally, in some cases; I saw this one live.)

But as the result of a hard disk crash (sadly not recorded), I've been listening to a lot of music from other sources lately, and I find I'm really liking Dar Williams. She's got a nice voice, clever lyrics, and how can you dislike someone who writes a song about the Milgram experiment?




Full post

Where do millisecond pulsars come from?


Most pulsars have periods around a second, and are spinning down very gradually. A few are clearly young (still hot, still in a supernova remnant, spinning down rapidly) and are faster, with periods as short as several tens of milliseconds. But there are other pulsars that are even faster - periods down to 1.4 milliseconds, that is, spinning some seven hundred times a second - and yet appear to be very old: cold, spinning down very gradually, very weak magnetic field. These are called millisecond pulsars, and it was a puzzle where they could possibly come from.

The key clue to the puzzle was their binarity. Most stars are found in systems of several stars orbiting each other. But pulsars are usually solitary. This is mostly because of their violent births: to make a pulsar, the star has to go supernova, and such a violent explosion tends to break up binary systems. It doesn't always break them up, though, and so some binary pulsars are known. If you look at the millisecond pulsars, though, most of them are actually in binary systems, unlike the normal pulsars (see the graph at right, based on the ATNF pulsar database). So we think the presence of a companion is key to making a millisecond pulsar.

The story, as we understand it, is this: A pulsar forms in a supernova. It is either in a binary system which survives, or it captures a companion. It lives out its life as a pulsar, spinning down gradually past the point where it is visible as a pulsar. The system stays like this for a long time. But eventually, one of two things happens: the companion starts to swell up into a red giant, or the orbit shrinks. In either case the system reaches a point where some of the matter at the surface of the companion is attracted more strongly by the neutron star than by the companion. This matter then falls down onto the pulsar.

Pulsars are not much more massive than the Sun, but they are much, much smaller. So when matter falls onto one, tremendous amounts of energy are released. But remember that the two stars are in orbit, so the system is rotating. If you take a piece of matter from the companion, it will carry some angular momentum with it. To make the obligatory analogy: just as a figure skater speeds up when she pulls in her arms, matter taken from the companion and brought in toward the neutron star rotates more and more rapidly. When this matter falls onto the star, the star is spun up a little. I am glossing over numerous important details here, but the point is, when you start transferring mass to a neutron star, it is possible to spin it up.
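
To get a feel for how much matter it takes, here's a crude estimate of my own, using generic textbook values for the neutron star rather than numbers for any particular system: assume each bit of accreted matter arrives carrying the Keplerian specific angular momentum at the star's surface (ignoring the magnetosphere, which in reality sets where the matter couples to the star), and ask how much is needed to reach seven hundred rotations per second.

    # Crude spin-up estimate: assumed M = 1.4 Msun, R = 10 km, I = 1e45 g cm^2.
    import math

    G = 6.674e-11
    Msun = 1.989e30
    M, R = 1.4 * Msun, 1.0e4            # kg, m
    I = 1.0e38                          # kg m^2 (i.e. 1e45 g cm^2)

    J_needed = I * 2 * math.pi * 700.0          # angular momentum at 700 Hz
    l_kepler = math.sqrt(G * M * R)             # specific angular momentum at the surface

    print("accreted mass: %.2f solar masses" % (J_needed / l_kepler / Msun))
    # Comes out to roughly 0.1-0.2 solar masses.

So a tenth or two of a solar mass, transferred over a long accretion phase, is enough to do the job.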

So, we think that millisecond pulsars are old pulsars that have been spun up, or "recycled", by accretion of mass from a companion. Observationally, we see systems where mass is being transferred: they're very bright X-ray sources. In a few cases we can actually tell how fast the neutron star is rotating, and sure enough, its period is down in the millisecond range. But these systems don't produce radio pulsations, presumably because all that matter falling in either blocks the radio waves or shorts out the emission mechanism (which needs a near-vacuum in the magnetosphere). So to make millisecond pulsars you somehow need to turn off the accretion and clear out all the matter, so that the radio pulsations can emerge. This transition hasn't been seen before, and the theorists have some difficulty explaining the population of objects that we see - while we see both millisecond pulsars and their hypothetical accreting progenitors, none of the progenitors seems to be positioned to turn into anything like the millisecond pulsars we actually see. So that last step, accretion turning off and radio pulsations starting, remains something of a mystery.

Full post

Energy storage


Dan of Dan's Data just posted an article about science-fiction-y batteries. He discusses a battery based on matter and antimatter but dismisses it as inconvenient (in that the energy comes out as a shower of high-energy radiation). Instead he suggests a battery based on just cramming a lot of electrons - several grams of electrons! - into a AAA-sized package, and finds that this actually stores vastly more energy. There's an error there, but it's an interesting one.


First of all, let's do his calculation a little more carefully. Let's suppose we're taking 11.5 grams of electrons - roughly two billion coulombs - and cramming them into a AAA-sized package. For simplicity, let's assume the package is a conductive sphere of radius 1 cm. This is actually a capacitor, with capacitance about 1 picofarad (incidentally, 1 pF is a fairly normal size for a capacitor, though they don't usually choose a conductive-sphere form factor). So the energy stored in this capacitor is (1/2) Q^2/C, or a staggering 2*10^30 Joules, the Sun's total output over a couple of hours.
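
Here's the same arithmetic spelled out in Python, just restating the calculation above in SI units:

    # 11.5 grams of electrons crammed into a 1 cm conducting sphere.
    import math

    e_charge = 1.602e-19         # C
    m_electron = 9.109e-31       # kg
    eps0 = 8.854e-12             # F/m
    R = 0.01                     # m, sphere radius

    Q = (0.0115 / m_electron) * e_charge     # total charge: about 2e9 C
    C = 4 * math.pi * eps0 * R               # isolated-sphere capacitance: about 1 pF
    U = 0.5 * Q**2 / C                       # stored energy: about 2e30 J

    print("Q = %.1e C, C = %.1e F, U = %.1e J" % (Q, C, U))
    print("hours of solar output: %.1f" % (U / 3.8e26 / 3600))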

(All calculations are done with the charming frink, though of course the errors are my own.)

There's a problem with this, though. Actually, there are several, but let's tackle the most fundamental first: if I convert 2*10^30 J to mass using E=mc^2, I get about twenty billion tonnes. This isn't just a meaningless figure: the energy stored in the capacitor really does weigh this much. If it seems weird for energy to have mass, well, I have to agree, but you can actually see it by looking at the periodic table: a helium atom is basically four hydrogen atoms after you stick two of the electrons to two of the protons and squash all the nucleons together. But each hydrogen atom has a mass of 1.007825 u, while helium has a mass of 4.002602 u, rather than the 4.0313 u that you would expect. The difference in mass is the mass of the energy in the nucleus. Since in helium, that energy is negative (pulling the nucleons together rather than pushing them apart), the mass is also negative, and helium weighs less than the hydrogen that makes it. But it's the same phenomenon.

One interesting question is, where is all that mass? It turns out that there is a nice answer. The little conductive sphere is surrounded by a strong electric field. In fact Gauss' law lets us work out just how strong it is at the surface: 2*10^23 V/m. It turns out that electric fields store energy. The stronger the electric field, the higher the energy density it stores. If you add up all the energy stored in the electric field of this charged sphere, over all space, you get exactly the energy you had to put in to charge it up. In other words, the energy is stored in the electric field, strongest at the sphere but extending out to infinity. And when you have an energy density, you have a mass density. Dividing by c^2, you get a mass density for the electric field of 2*10^15 g/cm^3. All that mass is due to the electric field, mostly close to the sphere.
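
Continuing with the same sphere, the surface field and the mass density it implies are another couple of lines:

    # Field at the sphere's surface and the equivalent mass density of that field.
    import math

    eps0, c = 8.854e-12, 2.998e8
    Q, R = 2.0e9, 0.01                          # coulombs, metres, from above

    E_surf = Q / (4 * math.pi * eps0 * R**2)    # Gauss's law for a sphere
    rho = 0.5 * eps0 * E_surf**2 / c**2         # energy density divided by c^2

    print("surface field: %.0e V/m" % E_surf)             # about 2e23 V/m
    print("mass density: %.0e g/cm^3" % (rho * 1e-3))     # about 2e15 g/cm^3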

Incidentally, there's a neat little calculation here. We know that electrons are weird quantum-mechanical beasts whose position and velocity are unavoidably ill-determined, and in fact as far as we can tell an electron is a point particle. But let's ignore that for a moment and try to figure out how big an electron is. We don't have much to go on, because they don't sit still, and they interact with other things without touching them. But there is one place to look: they have a mass. Suppose we pretend that an electron is a tiny conductive sphere, with no mass of its own, but with charge smoothly distributed over its surface. Then the electric field will have a mass of its own, just as with our little conductive sphere. What if we just declare that the mass of the electron is entirely due to this electric field? Well, then we can calculate a radius! The value you get is 1.4*10^-15 m, which is tiny enough (about a hundred thousand times smaller than an atom) to not be totally unreasonable. You get a slightly different answer if you assume the electron is non-conductive, but leaving aside that issue (and factors of two), this size is essentially the classical electron radius: 2.8*10^-15 m. Astonishingly enough, this number comes up in various physical phenomena, for example Thomson scattering of light by an electron.
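
The toy calculation itself is short: set the conducting sphere's field energy, Q^2/(8 pi eps0 R), equal to m c^2 and solve for R.

    # "Size" of an electron if its mass were entirely the mass of its own field.
    import math

    e = 1.602e-19
    m = 9.109e-31
    eps0 = 8.854e-12
    c = 2.998e8

    R = e**2 / (8 * math.pi * eps0 * m * c**2)              # conducting-sphere toy model
    r_classical = e**2 / (4 * math.pi * eps0 * m * c**2)    # conventional definition

    print("toy-model radius:          %.1e m" % R)             # about 1.4e-15 m
    print("classical electron radius: %.1e m" % r_classical)   # about 2.8e-15 m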

So we can't, even with some pretty impressive unobtainium, cram 11.5 grams of electrons into a AAA-sized space. But if we decide that 11.5 grams should be the total weight of the battery, including energy, how much energy can we store? Well, it turns out, just a little less than the matter-antimatter battery. Since the matter-antimatter battery turns its contents entirely into energy, there's no way to beat its storage efficiency. But the above calculation shows that the charged metal sphere is nearly as efficient. We just charge it up to a slightly less outrageous voltage of 4*10^13 V. Now we've got something as good as the antimatter battery, but that provides us with handy electrons! What more could you ask?
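
To see where the 4*10^13 V comes from: the entire 11.5 gram mass budget, converted to energy, is what the sphere is allowed to store, and the voltage follows from the stored energy (1/2) C V^2 (same assumed 1 cm sphere as before).

    # Voltage at which the 1 cm sphere's stored field energy weighs 11.5 grams.
    import math

    eps0, c = 8.854e-12, 2.998e8
    C = 4 * math.pi * eps0 * 0.01          # capacitance of the 1 cm sphere, F

    U_budget = 0.0115 * c**2               # 11.5 g converted to joules via E = mc^2
    V = math.sqrt(2 * U_budget / C)

    print("energy budget: %.1e J" % U_budget)    # about 1e15 J
    print("voltage: %.0e V" % V)                 # about 4e13 V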

Well, for one thing, you need to get rid of the electrons somehow once you've used them. Otherwise your laptop will be constantly giving out static shocks. Try and drive an electric car with them and you'll be shedding kilowatts of static shocks. Not ideal. Also, speaking of static, the electric field of this metal sphere will extend out to infinity, and will be quite strong even at some distance. At the surface of the sphere, the electric field will be 4*10^15 V/m, which is 4*10^5 volts per angstrom. So the voltage difference across an atom will be on the order of a million volts, compared to the 13.6 volts it would take to pull an electron off a hydrogen atom. So this little sphere will shred all atoms that come near it. Worse, that field extends out to infinity, so that at a meter away it's still 40 volts per angstrom. This is going to be a real pain. Isn't there some way to shield it?
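
And the field strengths quoted there are just the sphere's surface field, V/R, falling off with the inverse square of distance:

    # Fields around the charged sphere, in volts per angstrom.
    V, R = 4.3e13, 0.01                    # volts, metres, from the previous step

    E_surface = V / R                      # field at the surface of the sphere
    E_metre = E_surface * (R / 1.0)**2     # inverse-square falloff out to 1 m

    angstrom = 1e-10
    print("at the surface: %.0e V/angstrom" % (E_surface * angstrom))   # about 4e5
    print("a metre away:   %.0f V/angstrom" % (E_metre * angstrom))     # about 40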

Well, yes. You just put a spherical shell around it with the opposite charge. Or you abandon the sphere and just set up two parallel plates with opposite charges. Now the fields of the two plates cancel out (nearly), everywhere except between the plates.

Actually, this kind of energy storage device exists, and you probably have thousands in your house: it's a capacitor. Now, granted, it doesn't store the kind of energy density Dan is talking about, since it's not made of unobtainium by aliens, but an appropriate bank of capacitors can store a very great deal of energy and deliver it very quickly.

But it doesn't sound so impressive to ask the aliens for a capacitor.

[Update: I talk more about what one can hope for from a capacitor here.]

Full post