Maxima on Android

Such a long long time since I last posted!  Very poor blogging etiquette.  Anyway, to break the drought, here’s news of the best thing to come my way in the last six months – fully featured Maxima running on Android (this screenshot from its website):

[screenshots: Maxima on Android, from its website]

At a conference I attended late last year, sponsored by calculator companies, the Maple reps (both of whom I knew) claimed that calculators were an old-fashioned, outmoded technology, especially now that with smartphones, iPods and the like you can have access to the Internet in your pocket (Wolfram|Alpha, anybody?).  I still think there's a lot to be said, though, for a device which you can use independently of a network, and in fact I believe most calculator users use very little of their functionality.  How many CAS calculator users, I wonder, program on or with their machines?

But now here’s a fully featured, mature CAS running on a portable device – the first such one, as far as I know.  There are in fact other CAS’s available for portable systems: PocketCAS, Mathomatic, Yacas (which I’ve written about before), no doubt others.  None of them, as far as I can tell, work particularly well, or are in any way complete.  It doesn’t take long before you run up against a wall of something they are unable to do.

So I see Maxima as being the first complete portable CAS – and I’m delighted to see it on Android.  It works fine on a smartphone, and is quite wonderful on a tablet.

Its home page is here.

A piecewise polynomial approximation to the normal CDF

The normal cumulative distribution function

\displaystyle{\Phi(z)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^z e^{-t^2/2}\,dt}

has the unfortunate property of not being expressible in terms of elementary functions; that is, it can't be computed with functions available on a standard scientific calculator. Some modern calculators, such as the Casio ClassPad and the TI-Nspire (and some of their predecessors), can indeed evaluate this function, but students without such calculators need to look up tables – surely an old-fashioned method!

To overcome this there have been masses of approximations produced over the years; one of the simplest is the logistic approximation:

\displaystyle{\Phi(z)\approx 1-\frac{1}{1+e^{1.702z}}\mbox{ for }z\ge 0.}

Note that the integrand is an even function, which means that

\displaystyle{\Phi(z)=1-\Phi(-z)}

if z<0, and so we only have to find an approximation for positive z.

However, there is a conceptually even easier function, which dates back to a 1968 paper by John Hoyt:

\displaystyle{\Phi(z)\approx\left\{\begin{array}{ll}\displaystyle{z\left(\frac{9-z^2}{24}\right)+\frac{1}{2}}&\mbox{if }0\le z\le 1\\[5mm]  \displaystyle{\frac{(z-3)^3}{48}+1}&\mbox{if }z\ge 1\end{array}\right.}

Here is a plot of the two of them together (created using Maxima and gnuplot):

and here is a plot of their difference:

In fact the absolute difference is 0.01 or less for 0\le z<3.78. This piecewise polynomial function provides a nice quick-and-dirty approximation to the normal CDF.
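Both approximations are easy to check numerically. Here is a quick Python sketch, using the exact CDF via the error function:

```python
from math import erf, exp, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def logistic(z):
    """Logistic approximation, for z >= 0."""
    return 1 - 1 / (1 + exp(1.702 * z))

def hoyt(z):
    """Hoyt's piecewise polynomial approximation, for z >= 0."""
    if z <= 1:
        return z * (9 - z * z) / 24 + 0.5
    return (z - 3) ** 3 / 48 + 1

# maximum absolute errors on [0, 3.78)
zs = [i / 1000 for i in range(3780)]
err = max(abs(phi(z) - hoyt(z)) for z in zs)
err_log = max(abs(phi(z) - logistic(z)) for z in zs)
print(err <= 0.01)  # True
print(err_log)
```

The step size of 0.001 is an arbitrary choice; any reasonably fine grid confirms the 0.01 bound.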

Easy Lagrangian interpolation

As everybody knows, the formula for Lagrangian interpolation through points (x_i,y_i), where all the x_i values are distinct, is given by

\displaystyle{P(x)=\sum_{i=0}^n\left(\prod_{j\ne i}\frac{x-x_j}{x_i-x_j}\right)y_i}

It is easy to see that this works, as the product is equal to zero if x=x_k with k\ne i, and is equal to one if x=x_i.

This isn’t particularly easy to program, and in fact methods such as Newton’s divided differences or Neville’s algorithm are used to obtain the polynomial.

But a method introduced by Gábor Szegő provides a very simple form of Lagrange's polynomial.

Start by defining

\displaystyle{\pi(x)=\prod_{j=0}^n(x-x_j)}.

Then

\displaystyle{\frac{d}{dx}\pi(x)=\sum_{i=0}^n\Biggl(\prod_{\substack{0\le j\le n\\j\ne i}}(x-x_j)\Biggr)}.

This is obtained immediately from the product rule for arbitrary products, using the fact that each individual factor of \pi(x) has derivative one. This means that in particular

\displaystyle{\pi'(x_i) = \prod_{\substack{0\le j\le n\\j\ne i}}(x_i-x_j)}

because all other terms of \pi'(x) contain an (x-x_i) term and therefore produce zero for x=x_i.

This means that the coefficient of y_i in the Lagrangian polynomial can be written as

\displaystyle{\frac{\pi(x)}{(x-x_i)\pi'(x_i)}}

and so the polynomial can be written as

\displaystyle{P(x)=\sum_{i=0}^n\frac{\pi(x)}{(x-x_i)\pi'(x_i)}\,y_i.}
Here is an example, with x=-2,1,3,4 and y=-3,-27,-23,3. First, Maxima:

(%i1) xs:[-2,1,3,4];
(%i2) ys:[-3,-27,-23,3];
(%i3) define(pi(x),product(x-xs[i],i,1,4));
(%i4) define(pid(x),diff(pi(x),x));
(%i5) define(P(x),sum(pi(x)/(x-xs[i])/pid(xs[i])*ys[i],i,1,4));
(%i6) ratsimp(P(x));
(%o6) x^3 - 11 x - 17


Now Sage:

sage: xs = [-2,1,3,4]
sage: ys = [-3, -27, -23, 3]
sage: pi(x) = prod(x-i for i in xs); pi(x)
(x - 4)*(x - 3)*(x - 1)*(x + 2)
sage: pid(x) = diff(pi(x),x);
sage: P(x) = sum(pi(x)/(x-i)/pid(i)*j for (i,j) in zip(xs,ys))
sage: P(x).collect(x)
x^3 - 11*x - 17

And finally, on the TI-Nspire CAS calculator, after entering the lists xs:={-2,1,3,4} and ys:={-3,-27,-23,3} and setting n:=4:

\displaystyle{{\rm Define}\quad q(x)=\prod_{i=1}^n(x-xs[i])}

\displaystyle{qs:={\rm seq}\left(\left.\frac{d}{dx}(q(x))\right|x=xs[i],i,1,n\right)}

\displaystyle{{\rm Define }\quad p(x)=\sum_{i=1}^n\left(\frac{q(x)}{(x-xs[i]) \cdot qs[i]}ys[i]\right)}

p(x)\hspace{20mm} x^3-11\cdot x-17

Note that all the mathematical symbols and programming constructs are available through the calculator’s menus.

Notice that this approach requires no programming with loops or conditions at all – it’s all done with simple sums and products.
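The same computation is just as simple in plain Python (a sketch; no polynomial library is needed, since Szegő's form requires only products and sums):

```python
xs = [-2, 1, 3, 4]
ys = [-3, -27, -23, 3]

def pi_prime(xi):
    """pi'(x_i) = product of (x_i - x_j) over j != i."""
    p = 1
    for xj in xs:
        if xj != xi:
            p *= xi - xj
    return p

def P(x):
    """Lagrange interpolant: sum of pi(x)/((x - x_i) pi'(x_i)) * y_i."""
    total = 0.0
    for xi, yi in zip(xs, ys):
        term = 1.0
        for xj in xs:
            if xj != xi:
                term *= x - xj        # pi(x) with the (x - x_i) factor removed
        total += term / pi_prime(xi) * yi
    return total

print([P(x) for x in xs])   # reproduces the data exactly
print(P(2))                 # close to 2**3 - 11*2 - 17 = -31
```

Evaluating at the nodes returns the data exactly, since all but one term vanish there.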

Symbolic integration with an open source CAS

When I use Maple with my first-year students, and we are experimenting with integration, I challenge them to produce a function which Maple can't integrate in closed form. Given the huge number of special functions in Maple, and my students' lack of imagination in creating functions, this is a hard exercise for them (until they get the idea of composition). Mathematica is the same; its vast library of special functions means that many integrals can be expressed in closed form. The open source CAS's known to me (Sage, Maxima, Axiom, REDUCE, and Yacas) are generally much less capable at integration. (Yacas is not currently being developed, but apparently some work is being done on a fork of it.) These systems know about the standard transcendental functions: trigonometric and hyperbolic functions and their inverses, and exponential and logarithmic functions. However, there are limits even here.

I started thinking about this with the integral

\displaystyle \int \frac{\sqrt{x+\sqrt{1+x^2}}}{x}\,dx

which can be expressed entirely in terms of elementary functions. You can see it done in Wolfram|Alpha and check it out. However, Maxima (and hence Sage) cannot solve this, and REDUCE can do it only with a special package designed for integrals of this type. Interestingly, Axiom does this with no trouble.

Some of the open source systems know a little about other functions: the error function, or elliptic integrals, but often not enough to use them as results of integration.

For example

\displaystyle \int e^{-x^2}\,dx

is solved by all of Sage/Maxima, REDUCE and Axiom in terms of the error function, but the similar function
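As a quick numerical sanity check (in Python rather than a CAS), the antiderivative \frac{\sqrt{\pi}}{2}\,{\rm erf}(x) of e^{-x^2} can be verified by central differences:

```python
from math import exp, sqrt, pi, erf

# d/dx [ (sqrt(pi)/2) * erf(x) ] should equal exp(-x^2)
F = lambda x: sqrt(pi) / 2 * erf(x)
h = 1e-5
dev = max(abs((F(x + h) - F(x - h)) / (2 * h) - exp(-x * x))
          for x in [0.0, 0.5, 1.3, 2.0])
print(dev < 1e-8)   # True
```

The sample points are arbitrary; the central-difference error is of order h^2, far below the tolerance.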

\displaystyle \int e^{x^2}\,dx

can’t be solved in Axiom.

Sage/Maxima can solve the integral

\displaystyle \int\sin(x^2)\,dx

in terms of the error function, but REDUCE and Axiom just return the integral unevaluated. Axiom’s knowledge of special functions is the poorest of all these systems, although it would probably not be hard to build such knowledge in. However, Axiom can evaluate

\displaystyle \int e^{e^x}\,dx

in terms of the exponential integral Ei(x), as can Sage/Maxima (though they express the result in terms of the incomplete gamma function); REDUCE can't do anything with it.

None of these systems can evaluate

\displaystyle \int \frac{1}{\sqrt{1-x^4}}\,dx = F(\sin^{-1}(x),-1)

which involves an elliptic integral. However, Maxima gets at least half marks for being able to differentiate the result properly:

sage: maxima.diff(elliptic_f(arcsin(x),-1),x)

For one final example:

\displaystyle \int\log(\sin(x))\,dx

is solved by Sage/Maxima using the polylogarithm function; Axiom and REDUCE can do nothing.

Maxima and REDUCE come with test suites for integration, but I don't think there's such a suite for Axiom. There has not been, as far as I know, a published comparison of the integration capabilities of these systems, but of the systems mentioned here, Maxima seems to have the best all-round strength.

The Digital Signature Algorithm in Maxima and Sage

The Digital Signature Algorithm, also known as the Digital Signature Standard, is, as its name implies, a standard for digital signatures. Most digital signature schemes work by reversing a public key cryptosystem: a message is signed with the sender's private key, and the signature is verified using the sender's public key. The DSA is based on the El Gamal scheme, with a few extras thrown in for added security, and to make the signatures smaller.

It is set up with four values: a large prime p, a prime q which is a factor of p-1, a primitive root a of p, and a value g defined by

g=a^{(p-1)/q} \bmod p.

For security, p is recommended to be at least 154 digits, and q at least 48 digits.

Alice, the sender, chooses as her private key any value A<q-1 and calculates

B=g^A \bmod p

as her public key. This is secure by the difficulty of the discrete logarithm problem. The public key consists of the values (p,q,g,B), and the private key is the value A.

Given a message m, a signature is computed as follows:

Alice chooses at random k for which 0<k<q-1. She then computes:

r = (g^k\pmod{p})\pmod{q}

s = k^{-1}(m+Ar)\pmod{q}

and the two values (r,s) are the signature of the message m. We are assuming here that m<p; since in general most messages will be much larger, it is customary to sign not the message itself, but a cryptographic hash of the message, which will be a string of some fixed length (and shorter than p).

To verify the signature, Bob (the receiver) calculates the following values:

x = s^{-1}m\pmod{q},

y = s^{-1}r\pmod{q},

v = (g^xB^y\pmod{p})\pmod{q}.

If v=r then he accepts the signature.

To see why this works, note that from the definition of s, we can write

sk= m+Ar\pmod{q}

and by multiplying through by s^{-1} we obtain:

k= s^{-1}(m+Ar)\pmod{q}.
This last equation can be written

k= s^{-1}m+Ars^{-1}\pmod{q}

= x+Ay\pmod{q}.

Now we have

r = (g^k\pmod{p})\pmod{q}

= (g^{x+Ay}\pmod{p})\pmod{q}

= (g^x(g^A)^y\pmod{p})\pmod{q}

= (g^xB^y\pmod{p})\pmod{q}

= v.

This algorithm has the advantage that its security is of the order of the size of p, while the signature values are much smaller: the size of q.
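Before turning to the CAS implementations, the whole scheme can be sketched in a few lines of Python. The toy parameters (p=1031, q=103, primitive root 14, private key 70, nonce k=25, message m=500) are the ones used in the addendum below; real DSA uses far larger primes and signs a hash of the message.

```python
# Toy DSA sketch; parameters are far too small for real use.
p, q = 1031, 103              # q divides p - 1 = 2 * 5 * 103
g = pow(14, (p - 1) // q, p)  # 14 is a primitive root of p, so g has order q
A = 70                        # Alice's private key
B = pow(g, A, p)              # Alice's public key

def sign(m, k):
    """Sign message m with random nonce k, 0 < k < q - 1."""
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (m + A * r) % q
    return r, s

def verify(m, r, s):
    """Bob's check: accept iff v == r."""
    w = pow(s, -1, q)                 # s^(-1) mod q
    x, y = w * m % q, w * r % q
    v = pow(g, x, p) * pow(B, y, p) % p % q
    return v == r

r, s = sign(500, 25)
print(g, B, r, s)          # 320 48 95 80
print(verify(500, r, s))   # True
```

The three-argument `pow` handles modular exponentiation, and `pow(k, -1, q)` (Python 3.8+) gives the modular inverse.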

Now let’s look at this algorithm in both Maxima and Sage.


Maxima

We start by creating p and q. We will need to factor p-1 both to find a value of q and to find a primitive root. So we start by attempting to factor p-1 for a few randomly chosen large primes p, until we have a prime p for which p-1 can be factored, and which also has a reasonably large prime factor q.

We will use smaller values both of p and q than the algorithm requires, just to show how it works.

(%i1) p:next_prime(random(10^80));
(%i2) factor(p-1);

After a few tries, I found

p = 182842970179003336959233156794188485625560973915508949565601
q = 525970797581619193760592144581011744537

Maxima doesn’t have a built-in command to find primitive roots, but one can easily be written, using the fact that a primitive root a is a number for which

a^{(p-1)/t}  \neq 1 \bmod{p}

for all prime factors t of p-1. Here is one such program:

primroot(p):=block([f,n,i:1],
  if not(primep(p)) then error("Your number is not prime"),
  f:ifactors(p-1),
  n:length(f),
  do (
    if not member(1,makelist(power_mod(i,(p-1)/f[j][1],p),j,1,n)) then
      return(i),
    i:i+1))$

(%i3) a:primroot(p);
(%o3) 5
(%i4) g:power_mod(a,(p-1)/q,p);
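For comparison, the same search is easy to write as a Python sketch (the helper names here are mine):

```python
def prime_factors(n):
    """Set of distinct prime factors of n, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def primroot(p):
    """Smallest primitive root of the prime p: the least a with
    a^((p-1)/t) != 1 (mod p) for every prime factor t of p - 1."""
    ts = prime_factors(p - 1)
    for a in range(2, p):
        if all(pow(a, (p - 1) // t, p) != 1 for t in ts):
            return a

print(primroot(97))   # 5
```

Trial division is fine here only because p-1 has already been chosen to be factorable; for the 80-digit primes above one would use a proper factoring routine.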

The private/public key pairs:

(%i5) A:random(q-1);
(%i6) B:power_mod(g,A,p);

Now we can choose a random message and sign it (for ease we won't show the outputs, which are just long numbers):

(%i7) m:random(p);
(%i8) k:random(q-1);
(%i9) r:mod(power_mod(g,k,p),q);
(%i10) s:mod(inv_mod(k,q)*(m+A*r),q);

Now to verify the signature:

(%i11) x:mod(inv_mod(s,q)*m,q);
(%i12) y:mod(inv_mod(s,q)*r,q);
(%i13) v:mod(mod(power_mod(g,x,p)*power_mod(B,y,p),p),q);
(%i14) is(v=r);
(%o14) true


Sage

As with Maxima, we start by finding p and q. We simply repeat

sage: p=next_prime(randint(1,10^80))
sage: factor(p-1)

until we find values we want. I found

p =
q= 2199623526308059394919085303004156101


sage: a=Mod(primitive_root(p),p)

returns 5. The extra Mod( ,p) ensures that a has the type of an element of the ring of integers modulo p, \mathbb{Z}_p:

sage: parent(a)
Ring of integers modulo 530156743088749972013047250493987281

Now we can calculate the other values we need:

sage: g=a^((p-1)/q)
sage: g

and the private/public key pairs:

sage: A=randint(1,q-1);A
sage: B=g^A;B

Note that because of Sage’s handling of types, we don’t have to fuss with “power_mod”; since the type of a is “element of \mathbb{Z}_p“, so automatically are g and B, and powers are thus computed quickly, using the modular exponentiation algorithm.

Let’s choose a message, which will be any value less than p:

sage: m=randint(1,p)

and sign it (as with Maxima we won’t show any of the values):

sage: k=randint(1,q-1)
sage: r=mod(g^k,q)
sage: s=(m+A*r)/k

Since g is in the ring \mathbb{Z}_p, we need to change r to be in the ring \mathbb{Z}_q. This forces s to be in the same ring, so there is no need for any explicit calls to number-theoretic functions.

Now we can verify the signature:

sage: x=m/s
sage: y=r/s
sage: v=mod(g^x*B^y,q)
sage: v==r
True


Both Maxima and Sage provide all the necessary functionality to compute and verify a digital signature (well, nearly all; we had to write our own program for primitive roots in Maxima), but of the two, Sage certainly allows for easier commands, with its mathematical types. For example:

Maxima: y:mod(inv_mod(s,q)*r,q);
Sage: y=r/s

Maxima: v:mod(mod(power_mod(g,x,p)*power_mod(B,y,p),p),q);
Sage: v=mod(g^x*B^y,q)

Addendum: the use of “modulus” in Maxima

Richard Fateman (see comments below) pointed out that my comparison above is in fact incorrect. Maxima allows for very neat commands by use of "modulus": if set to any prime number, it affects all subsequent rational expressions. For example:

(%i1) modulus:97;
(%i2) a:rat(5);
(%i3) a^75;
(%o3)                  -34

We see that the output is given in "balanced" form, where the residues are balanced about zero. Note that in Maxima, modulus is a global property: it applies to all rational expressions, whereas in Sage different variables can have different modular types. However, by a judicious change of modulus, we can simplify the Maxima commands considerably. To show how to do this, we will deal with some very small numbers, so as to be able to show the outputs. We will use p=1031, q=103, and the primitive root a=14 of p. First, the setup phase. Since everything here is done modulo p, we will use this as the modulus value:

(%i1) p:1031;
(%o1)                   1031
(%i2) q:103;
(%o2)                    103
(%i3) modulus:p;
(%o3)                   1031
(%i4) a:rat(14);
(%o4)                     14
(%i5) g:a^((p-1)/q);
(%o5)                    320
(%i6) A:rat(70);
(%o6)                     70
(%i7) B:g^A;
(%o7)                     48

Now to sign a message m=500 using the “random” value k=25:

(%i9) m:500;
(%o9)                    500
(%i10) k:25;
(%o10)                    25
(%i11) r:mod(g^k,q);
(%o11)                    95

At this stage we are going to start performing operations modulo q, so we change the modulus:

(%i12) modulus:q;
(%o12)                  103
(%i13) s:(m+A*r)/k;
(%o13)                  -23

We have thus created the signature (with balanced modulo values); now for the verification:

(%i14) x:m/s;
(%o14)                   32
(%i15) y:r/s;
(%o15)                  -31

To compute the final value we’ll change the modulus once more:

(%i16) modulus:p;
(%o16)                1031
(%i17) v:mod(g^x*B^y,q);
(%o17)                  95
(%i18) is(v=r);
(%o18)                true

and we’re done, all with very simple commands.

Fun with numerical integration

Numerical integration, or quadrature (to use the old-fashioned, but charming, term) is the business of finding approximate values of definite integrals: integrals which either cannot be computed analytically, or for which the antiderivative is too complex to be used. In the first category we might have


At least, Wolfram|Alpha can’t compute this, so I assume it can’t be done analytically; and in the second category we might have


for which the antiderivative is a horrific expression involving the incomplete gamma function.

In an elementary calculus course, students might be introduced to some very simple numerical techniques: the trapezoidal rule or Simpson’s rule, but unless the course is more specifically geared to numerical computations, not much further.

However, there are lots of different methods for numerical integration, some of which are great fun to play with.

Newton-Cotes integration

All these methods are based on one simple idea: sample the function values at equidistant points, find the interpolating polynomial through those points, and integrate the polynomial. Since polynomials are easy to integrate, this method is easy to apply. For example, let’s generate the fourth-order Newton-Cotes rule, which uses a quartic polynomial, and hence requires five points.

Using Maxima:

(%i1) load(interpol);
(%i2) pts:[[a,f1],[a+h,f2],[a+2*h,f3],[a+3*h,f4],[a+4*h,f5]];
(%i3) integrate(lagrange(pts),x,a,a+4*h);
(%i4) ratsimp(%);
(%o4) (14 f5 + 64 f4 + 24 f3 + 64 f2 + 14 f1) h/45

In other words,

\displaystyle{\int_a^{a+4h}f(x)\,dx\approx\frac{2h}{45}\bigl(7f(a)+32f(a+h)+12f(a+2h)+32f(a+3h)+7f(a+4h)\bigr)}

and this is known as Boole's rule.

The trouble with single Newton-Cotes rules is that as the number of points gets larger, the interpolating polynomial may develop more "wiggles" and the resulting integral approximation may get worse. For example, take the integral

\displaystyle{\int_{-5}^5\frac{1}{1+x^2}\,dx}

for which the exact value is

2\tan^{-1}{5}\approx 2.746801533890031

However, let’s try his with a tenth-order rule:

(%i5) f(x):=1/(1+x^2);
(%i6) pts:makelist([-5+n,f(-5+n)],n,0,10);
(%i7) integrate(lagrange(pts),x,-5,5),numer;
(%o7) 4.673300563481597

We can see why this is so poor by plotting the original function and its interpolant together:

Runge phenomenon

For this reason, better results can be obtained by chopping the integral up into small segments, and applying a low order rule (say, Simpson’s) to each segment.
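To illustrate (a Python sketch rather than Maxima), composite Simpson's rule on the same eleven sample points already does far better than the single tenth-order rule:

```python
from math import atan

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

f = lambda x: 1 / (1 + x * x)
approx = composite_simpson(f, -5, 5, 10)   # same 11 points as the 10th-order rule
exact = 2 * atan(5)
print(approx, exact)
```

The error is about 0.1, compared with roughly 1.9 for the tenth-order Newton-Cotes rule above.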

Newton-Cotes with adjustments

If you look up old texts, or search about on the web, you’ll find rules similar to the Newton-Cotes rules, but with much simpler coefficients.

For example, the sixth-order rule is

\displaystyle{\int_a^{a+6h}f(x)\,dx\approx\frac{h}{140}\bigl(41f(a)+216f(a+h)+27f(a+2h)+272f(a+3h)+27f(a+4h)+216f(a+5h)+41f(a+6h)\bigr)}

As you can see, the coefficients are already getting cumbersome.

One way of adjusting a Newton-Cotes formula is by adding some function differences. If we denote f(a+nh) by f_n, these are defined as

\Delta f_n=f_{n+1}-f_n

and all higher differences can be obtained recursively for k\ge 1:

\Delta^{k+1} f_n=\Delta^kf_{n+1}-\Delta^kf_n.

It is not hard to show that

\Delta^kf_n=\sum_{p=0}^k(-1)^{k-p}{k\choose p}f_{n+p}

So, for example,

\Delta^2f_n=f_{n+2}-2f_{n+1}+f_n.
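A quick Python sketch shows both facts at once: repeated differencing is trivial to code, and the sixth difference of any polynomial of degree at most five vanishes identically (which is why adding a multiple of \Delta^6 cannot spoil exactness on such polynomials).

```python
def diffs(fs):
    """One application of the forward difference operator."""
    return [b - a for a, b in zip(fs, fs[1:])]

# sixth differences of f(n) = n^5 (degree 5 < 6) vanish identically
fs = [n ** 5 for n in range(1, 9)]
d = fs
for _ in range(6):
    d = diffs(d)
print(d)   # [0, 0]
```

Each application shortens the list by one, so eight samples leave two sixth differences.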
The idea is that for most functions, differences tend to get smaller, so by adding some small multiple of a difference, we shouldn’t affect the formula too much, but we may simplify its coefficients. Let’s try this with the Newton-Cotes sixth-order rule, which for simplicity we shall integrate over the range [1,7]:

(%i8) [a,h]:[1,1];
(%i9) remfunction(f);
(%i10) pts:makelist([a+n*h,f(a+n*h)],n,0,6);
(%i11) integrate(lagrange(pts),x,1,7);
(%i12) nc6:ratsimp(%);

This just produces the sixth-order rule seen above. But now!

(%i13) d6:sum(binomial(6,k)*(-1)^k*f(a+k*h),k,0,6);
(%o13) f(7) - 6 f(6) + 15 f(5) - 20 f(4) + 15 f(3) - 6 f(2) + f(1)
(%i14) ratsimp(nc6+d6/140);
(%o14) (3 f(7) + 15 f(6) + 3 f(5) + 18 f(4) + 3 f(3) + 15 f(2) + 3 f(1))/10

This is Weddle’s rule; more commonly written as


Notice how much simpler the coefficients are! Another rule, Hardy’s rule, can also be obtained by an adjustment:

(%i15) ratsimp(nc6-d6*9/700);
(%o15) (14 f(7) + 81 f(6) + 110 f(4) + 81 f(2) + 14 f(1))/50

or in integral form as

\displaystyle{\int_a^{a+6h}f(x)\,dx\approx\frac{h}{50}\bigl(14f(a)+81f(a+h)+110f(a+3h)+81f(a+5h)+14f(a+6h)\bigr)}
Not only is this simpler than the sixth-order Newton-Cotes rule, but two of the coefficients have vanished.
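Both adjusted rules are easy to check in Python. Since \Delta^6 annihilates polynomials of degree at most five, Weddle's and Hardy's rules remain exact on such polynomials; for instance, both integrate x^5 over [1,7] exactly (the true value is 117648/6 = 19608):

```python
def weddle(f, a, h):
    """Weddle's rule on [a, a+6h]."""
    s = sum(w * f(a + i * h) for i, w in enumerate([1, 5, 1, 6, 1, 5, 1]))
    return 3 * h * s / 10

def hardy(f, a, h):
    """Hardy's rule on [a, a+6h]."""
    s = sum(w * f(a + i * h) for i, w in enumerate([14, 81, 0, 110, 0, 81, 14]))
    return h * s / 50

f = lambda x: x ** 5
print(weddle(f, 1, 1), hardy(f, 1, 1))   # 19608.0 19608.0
```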

Averaging different rules

You and your students can invent your own rules by taking some Newton-Cotes rules and forming weighted averages of them. For example, Simpson's rule (the second-order rule) taken with a spacing of 2h is

\displaystyle{\int_a^{a+4h}f(x)\,dx\approx\frac{2h}{3}\bigl(f(a)+4f(a+2h)+f(a+4h)\bigr)}
If we, for example, take


where N is the fourth order Newton-Cotes rule, and S is the Simpson rule over 2h, we obtain


which certainly has nice coefficients.

Such a formula may not in fact give very accurate results, but students are always excited to discover something "for themselves", and inventing their own integration rules (and testing them) would be a fascinating exercise.

Calculating pi

Here’s a lovely iteration which converges quadratically to \pi. Start with:





\displaystyle{x_{n+1}=\frac{1}{2}\left(\sqrt{x_n}+\frac{1}{\sqrt{x_n}}\right)} for n\ge 0

\displaystyle{y_{n+1}=\frac{y_n\sqrt{x_n}+1/\sqrt{x_n}}{y_n+1}} for n\ge 1

\displaystyle{\pi_n=\pi_{n-1}\frac{x_n+1}{y_n+1}} for n\ge 1

This remarkable iteration comes from Borwein and Borwein's "Pi and the AGM"; it is the first of the many formulas for \pi in the book. It is based on the arithmetic-geometric mean, which is defined as follows:

Given a,b define the sequences a_n,b_n by a_0=a,b_0=b and for all n\ge 1

\displaystyle{a_n=\frac{a_{n-1}+b_{n-1}}{2},\qquad b_n=\sqrt{a_{n-1}b_{n-1}}.}
It can be shown that these sequences both converge to the same value, called the arithmetic-geometric mean of a and b, and denoted M(a,b).
The derivation of the iteration above starts with the complete elliptic integral of the first kind:

\displaystyle{K(k)=\int_0^{\pi/2}\frac{d\theta}{\sqrt{1-k^2\sin^2\theta}}.}
The importance of these integrals for the AGM includes the useful result

\displaystyle{M(1,k)=\frac{\pi}{2K(\sqrt{1-k^2})}.}

It can also be shown that:


and that


(In these results, the upper dot indicates the derivative. It is conventional to write k' for \sqrt{1-k^2} but to keep the symbols down I won’t use that here.) Finally, and this is the basis for the iteration:


Full details are given in the book above.

Now, let’s see this in operation. Using Maxima, with floating point precision set at 200 digits; we start with:

(%i1) fpprec:200;
(%i2) [x,y,pi]:bfloat([sqrt(2),2^(1/4),2+sqrt(2)])$ pi;
(%o2) 3.4142135623730950488016887242096980785696718753769\

Now lets apply the iteration a few times, each time checking the number of digits against \pi:

(%i3) for i:1 thru 6 do (x:(sqrt(x)+1/sqrt(x))/2,
pi:pi*(x+1)/(y+1),print("pi = ",pi),print("diff = ",
floor(abs(log(pi-bfloat(%pi))/log(10.0)))),print(),
y:(y*sqrt(x)+1/sqrt(x))/(y+1));

pi = 3.142606753941622600790719823618301891971356246277167\
diff = 2

pi = 3.141592660966044230497752235120339690679284256864528\
diff = 8

pi = 3.141592653589793238645773991757141794034789623867451\
diff = 18

pi = 3.141592653589793238462643383279502884197224120466562\
diff = 40

pi = 3.141592653589793238462643383279502884197169399375105\
diff = 83

pi = 3.141592653589793238462643383279502884197169399375105\
diff = 170

In only six iterations we have 170 correct digits. Since the number of digits approximately doubles at each step, if we could perform arbitrarily precise computations, we could produce a million digits of \pi in only 13 more iterations.
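The same iteration is easy to run in Python's decimal module (a sketch, with a working precision of 120 digits, which caps the attainable accuracy):

```python
from decimal import Decimal, getcontext

getcontext().prec = 120   # working precision, in significant digits

x = Decimal(2).sqrt()                 # x_0 = sqrt(2)
y = Decimal(2).sqrt().sqrt()          # y_1 = 2^(1/4)
pi = Decimal(2) + Decimal(2).sqrt()   # pi_0 = 2 + sqrt(2)

for _ in range(6):
    rx = x.sqrt()
    x = (rx + 1 / rx) / 2             # x_{n+1} from x_n
    pi = pi * (x + 1) / (y + 1)       # pi_n from x_n and y_n
    rx = x.sqrt()
    y = (y * rx + 1 / rx) / (y + 1)   # y_{n+1} from x_n and y_n
print(str(pi)[:50])
```

After six iterations the result agrees with \pi to well over 100 digits, the limit set by the chosen precision.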

Borwein and Borwein give many other formulas and iterations for \pi, including one which exhibits quintic convergence; that is, the number of correct digits multiplies by five at each step. To find that you’ll have to read their book…
