Neusis constructions (2): trisections

This post, containing a nice slew of neusis methods for trisecting a general angle, can be found at Numbers and Shapes.

Neusis constructions (1)

You can find this post, introducing geometric constructions which allow the use of a straight-edge with two marks, at my new site Numbers and Shapes.

Solving a cubic by folding

This post, which is on my new site here:

http://www.numbersandshapes.net/?p=2520

shows how a cubic equation can be solved by origami.  This is not a new result by any means, but it’s hard to find a simple proof of how the construction works.

An alternative to partial fractions

The full post is available at my new site:

http://www.numbersandshapes.net/?p=2495

Enjoy!

Meeting Julia

In my last post I mentioned the new language Julia. It deserves more than a single paragraph, so I thought I’d walk through a problem, and tackle it with the language.

The problem is a stock-standard one: fitting an SIR model of disease spread to an outbreak of influenza at an English boarding school. Of the 763 pupils, only one was infected on the first day, and the number of infected pupils changed over the following 14 days as:

Day:       1   2   3   4   5   6   7   8   9  10  11  12  13  14
Infected:  3   6  25  73 222 294 258 237 191 125  69  27  11   4

We will try to find parameters \beta and \gamma so that when the equations

\displaystyle{\frac{dS}{dt}=-\beta IS}

\displaystyle{\frac{dI}{dt}=\beta IS-\gamma I}

\displaystyle{\frac{dR}{dt}=\gamma I}

are solved, the values of I fit the data.

Note that the sum of all the right-hand sides is zero; this indicates that S+I+R is constant, which for this model is the total population. (Other models allow for births, deaths and other population changes.)
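
Explicitly, adding the three equations gives

\displaystyle{\frac{d}{dt}(S+I+R)=-\beta IS+(\beta IS-\gamma I)+\gamma I=0}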

Finding the parameters can be done in several steps:

  1. Set up a function for the system of differential equations.
  2. From that function, create a second function which is the sum of squared differences between the data and the computed values from the equations.
  3. Minimize that sum.

The first function can be created in Julia (following the example here) as:

function SIR(t,x,p)
    # unpack the state vector and the parameter vector
    S = x[1]
    I = x[2]
    R = x[3]
    beta = p[1]
    gamma = p[2]
    # return the vector of derivatives [dS/dt, dI/dt, dR/dt]
    [-beta*S*I, beta*S*I-gamma*I, gamma*I]
end

The default behaviour of a Julia function is to return the value of the last expression it evaluates; thus in a function such as this one there is no need for an explicit “return” statement (although the “return” keyword exists).
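
For example, this throwaway function (a minimal illustration, not part of the SIR code) returns the value of its final expression:

function square(x)
    x^2    # the value of this last expression is returned automatically
end

square(5)    # returns 25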

To solve a system of equations we need access to the ODE package, which I’ll assume you’ve installed with

Pkg.add("ODE")

This can be loaded into a running Julia session either with

using ODE

which brings all the ODE functions into the top namespace, or with

import ODE

which keeps all the functions inside the ODE namespace. So to call the Dormand-Prince method, you would use

ode45

if all methods were in the top namespace, or as

ODE.ode45

in the second case.

So, for example, we could attempt to solve the equations as:

p = [0.01, 0.1]
t, x = ODE.ode45((t,x)->SIR(t,x,p),[0:0.1:14],[762.,1.,0.]);

To plot the functions, we’ll use the plotting library Winston (Winston and Julia, from 1984 – geddit?), which is fairly basic, but enough for our simple needs. We also need the data itself in a variable for plotting:

import Winston
wn = Winston

data = [3 6 25 73 222 294 258 237 191 125 69 27 11 4];
wn.plot(t,x[:,1],"b",t,x[:,2],"g",t,x[:,3],"r",[1:14],data',"k*")

[Figure SIR_julia: plots of S (blue), I (green) and R (red), with the data shown as asterisks]

The green curve corresponds to I, the number of infected individuals at a given time, and the asterisks to the data. Clearly these parameters do not provide a very good fit.

The next step is to create a sum-of-squares function to minimize. To do this, we will solve the ODE system with the ode4 method, which uses a fixed step size. This means we can ensure that there are computed values for I at all the integer points:

function ss(b::Vector)
    data = [3 6 25 73 222 294 258 237 191 125 69 27 11 4];
    t,x = ODE.ode4((t,x)->SIR(t,x,b),[0:0.05:14],[762.,1.,0.]);
    # with a step size of 0.05, rows 21, 41, ..., 281 of x correspond to t = 1, 2, ..., 14
    sum((data-x[21:20:281,2]').^2)
end
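
As a quick sanity check (purely illustrative), we can evaluate ss at the trial parameters used above; the value should be large, reflecting the poor fit in the first plot:

ss([0.01, 0.1])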

Now this function can be optimized, using a method from the Optim package:

import Optim

Optim.optimize(ss,[0.001,1],method=:cg)

The “method” argument here defaults to the Nelder-Mead method, but for this particular function I found that the conjugate-gradient method gave the best results.

This produces quite a bit of information, of which the salient bits are:

 * Minimum: [0.0021806376138654117, 0.44517197444683443]

 * Value of Function at Minimum: 4507.078964

So, let’s try these parameters and see what happens:

p = [0.0021806376138654117, 0.44517197444683443];
t, x = ODE.ode45((t,x)->SIR(t,x,p),[0:0.1:14],[762.,1.,0.]);
wn.plot(t,x[:,1],"b",t,x[:,2],"g",t,x[:,3],"r",[1:14],data',"k*")

And here’s the result:
[Figure SIR_julia2: the fitted S, I, R curves plotted against the data]

As you can see, the fit is remarkably good.

As far as problems go, this is not particularly hard – conceptually speaking it’s probably at an undergraduate level. On the other hand, it’s not completely trivial, and does seem to me to give the computational environment a bit of a work-out. And Julia solves it with ease.

The best Matlab alternative (3)

Over two years ago I wrote The best Matlab alternative, with a follow-up a bit later; these posts seem to have engendered a fair amount of discussion.

Well, things have moved on in the computational world, and a user is now spoiled for choice.

Octave has just had version 3.8 released; this version comes with a GUI (which is not enabled by default). However, the GUI is a very fine thing indeed:

[Screenshot octave_gui: the new Octave GUI]

As you see, it has all the bells and whistles you may want: command window and history, variable browser, file browser, editor, and a documentation reader. The GUI will become standard as of version 4.0.

I think that Octave has come along in leaps and bounds over the past few years, and as a drop-in Matlab replacement (if that’s what you want) I don’t think it can be bettered.

Scilab is at version 5.4.1, and is a mature product. I have come across a few comments in newsgroups to the effect that Scilab is better than Octave for some advanced numerical work. However, I’m not sure if these are just opinions or are backed up with solid data and timings. I used to like Scilab a great deal, but now I don’t see that it offers anything over Octave other than Xcos (a dynamic systems modeller, similar to Simulink) and a few niche extras, such as solutions of boundary value problems.

Python was a language I hadn’t considered in my first post, but with the addition of NumPy and SciPy (plus numerous add-on packages) it becomes a definite contender for serious number-crunching. The web is full of stories about people who have dumped Matlab for Python. And of course with Python you get a lovely programming language, and extensions such as Cython, which gives you C-type speed. To get an idea of the packages, see here. Python also has an interactive shell, IPython, which adds enormously to Python’s ease of use.

Julia is the “new kid on the block”: a language which has been designed from the ground up for efficiency and speed, as well as convenience and ease of use. It is supposed to have the power of Matlab, the speed of C, and the elegance of Python. It is still in the early stages (the most recent version is 0.3.0), but it shows enormous promise, and has garnered an impressive list of users and developers. And there is already a small – but growing – list of add-on packages. The Julia web pages show some timings which seem to indicate that Julia is faster (sometimes by several orders of magnitude) than its contenders. (Well, they would, wouldn’t they? They would hardly show timings which were slower than other environments.)

My particular interest is image processing, in which Octave does very well (its image package is quite mature now), followed by Python (with its Mahotas and scikit-image packages). Scilab has its SIP toolbox, but I spent several days trying to install it and gave up. There is an Images package for Julia, but at first glance it doesn’t seem to have the breadth of the others.

I ran a single test myself: to create a 1000 x 1000 random matrix, invert it, and find the trace of the product of the original and its inverse. In Octave:

tic; A = rand(1000,1000);Ai = inv(A);trace(A*Ai);toc

Scilab:

tic; A = rand(1000,1000);Ai = inv(A);trace(A*Ai);toc

Python (importing numpy as np):

%timeit A = np.matrix(np.random.rand(1000,1000));Ai = A.I;np.trace(A*Ai)

Julia:

tic();A=rand(1000,1000);Ai = inv(A);trace(A*Ai);toc()

Note that Octave, Scilab and Julia all have similar syntax; Python’s is slightly different, because its functions don’t all live in the top namespace. This means, for example, that you can easily use functions called “rand” from different packages, each with different functionality.

And here are the results (running the code three times each):

Octave: 1.08677; 1.04735; 1.10616 seconds

Scilab: 3.502; 3.425; 3.663 seconds

Python: 2.33; 2.28; 2.39 seconds

Julia: 0.47034; 0.35776; 0.36403 seconds

This is hardly definitive, but it does show how fast Julia can be, and, by comparison, how slow Scilab can be.

These four environments are by no means the end. R is extensively used, and has over 5000 add-on packages at the CRAN repository. However, I’ve never used R myself. I believe it has branched out from its statistical foundations, and is now more of a general use environment, but still with the best statistical functionality in the business. Then there’s the Perl Data Language, apparently used by astrophysicists, and the GNU Data Language, about which I know nothing. See the Wikipedia page for a list of others.

(Note that I’ve not discussed Freemat this time; I don’t believe it’s a serious contender. I had a look through some of its source code, and most of its files seem to be cut-down Matlab/Octave files without any error checking. Anyway, there seems to be no need to use niche software when Octave is so mature and so well-supported by its developers and users.)

My recommendations? Use Octave, unless you have some niche requirements, in which case you may find Scilab more suitable. If you’re prepared to sacrifice maturity for speed, give Julia a go. If you want a mature programming language as your base, then choose Python.

A very long-running program

In the interests of random number generation, I’ve been experimenting with the iteration

x\to g^x\pmod{p}

for a prime p and a primitive root g\mod p.  Now, it turns out that for some primes this iteration, started at x=1, runs through all the non-zero residues for a suitable choice of primitive root, while for other primes it never does.  For example, consider the prime p=19, which has primitive roots 2, 3, 10, 13, 14, 15.  Starting with x=1, the iterations produce:

\displaystyle{\begin{array}{ll}2:&2, 4, 16, 5, 13, 3, 8, 9, 18, 1\\  3:& 3,8,6,7,2,9,18,1\\  10:&10,9,18,1\\  13:&13,15,8,16,9,18,1\\  14:&14,9,18,1\\  15:&15,8, 5, 2, 16, 6, 11, 3, 12, 7, 13, 10, 4, 9, 18, 1  \end{array}}

We see that none of these orbits passes through all the non-zero residues. However, for p=23, the orbit of the primitive root 20 does pass through all of them:

20, 18, 2, 9, 5, 10, 8, 6, 16, 13, 14, 4, 12, 3, 19, 17, 7, 21, 15, 11, 22, 1
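
These orbits are easy to compute; here’s a minimal Julia sketch (the function name orbit is my own) which follows the iteration from x=1 until it returns to 1. Since g is a primitive root, the map x\to g^x\pmod{p} permutes the non-zero residues, so every orbit is a cycle and the loop terminates:

function orbit(g, p)
    # follow x -> g^x (mod p) from x = 1, collecting iterates until we return to 1
    xs = Int[]
    x = 1
    while true
        x = powermod(g, x, p)
        push!(xs, x)
        x == 1 && break
    end
    xs
end

orbit(2, 19)    # [2, 4, 16, 5, 13, 3, 8, 9, 18, 1], matching the first row of the table above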

Let’s call a prime such as 23 an “exponentially complete” prime. You can see a list of the first few such primes at http://oeis.org/A214876 (which I contributed).
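
In these terms, a prime p is exponentially complete precisely when some primitive root has an orbit of length p-1. A brute-force test might look like the following sketch (the names isprimitiveroot and isexpcomplete are my own, and I’m assuming Base’s factor and powermod, together with the orbit function above):

# g is a primitive root mod p iff g^((p-1)/q) != 1 (mod p) for every prime q dividing p-1
isprimitiveroot(g, p) = all(q -> powermod(g, div(p-1,q), p) != 1, keys(factor(p-1)))

# p is exponentially complete iff some primitive root's orbit hits all p-1 non-zero residues
isexpcomplete(p) = any(g -> isprimitiveroot(g,p) && length(orbit(g,p)) == p-1, 2:p-1)

isexpcomplete(19), isexpcomplete(23)    # (false, true)

This is fine for small primes, but of course hopeless for a prime the size of 2^{31}-1, which is exactly the problem.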

My question of the moment is: “Is 2^{31}-1 exponentially complete?” The prime 2^{31}-1 is one beloved by random number theorists, because arithmetic modulo it can be performed very efficiently using bit operations. In C, for example, z \mod p can be computed (up to a final conditional subtraction of p) with

(z&p)+(z>>31)
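
The trick works because z = (z>>31)*2^31 + (z&p), and 2^31 is congruent to 1 modulo p, so folding the high bits onto the low bits preserves the value mod p. Here is the reduction written out in Julia as a sketch (the name mod31 is mine; two folds handle any non-negative z below about 2^{62}):

const p31 = 2^31 - 1      # the Mersenne prime 2^31 - 1

function mod31(z::Int64)
    # fold the high bits onto the low 31 bits; since 2^31 = p31 + 1,
    # this preserves the value of z modulo p31
    z = (z & p31) + (z >> 31)
    z = (z & p31) + (z >> 31)    # second fold: now 0 <= z <= p31 + 1
    z >= p31 ? z - p31 : z
end

mod31(5000000000) == 5000000000 % p31    # true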

However, I can’t think of any method of finding this out except by testing each primitive root to determine the length of its cycle. Even using GNU C and the PARI library, and some speed-ups (for example, replacing modular exponentiation with just one multiplication from pre-computed arrays), my desktop computer has been grinding away for two days now, and is up to primitive root 1678; that is, the program has taken 2 days to test 380 primitive roots. Since there are \phi(2^{31}-2)=534600000 primitive roots, at this rate it will take something like 534600000/380\times 2 = 2813684.21 days, or about 7703.66 years. That’s a very long time.
