Friday, April 24, 2009

Practical Implications of Calculus

Calculus is widely used by engineers and scientists to analyse practical problems. One common approach is to analyse a particular system using fundamental physics for a particular geometry (e.g. a force balance around a spherical particle falling in a liquid) to form equations. These equations are then either integrated or differentiated to produce useful relationships for a given set of boundary conditions (e.g. settling time of a particle as a function of size and density for a given initial particle velocity). The success of this approach normally depends on the nature of the phenomena being studied (some very chaotic and/or highly non-linear phenomena are difficult to model), the assumptions made in setting up the problem and the difficulty of solving the equations formed. Often, numerical techniques are used to find solutions to these equations, and any good engineering mathematics course teaches a range of relevant numerical techniques for differentiating and/or integrating equations that are either difficult or impossible to solve directly.
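To make this concrete, here is a minimal numerical sketch of the settling particle example, integrating the force balance with Euler's method. Stokes drag and the parameter values are my illustrative assumptions, not a definitive model:

import math

# Assumed illustrative values: a 50 micron quartz sphere settling in water
rho_p, rho_f = 2650.0, 1000.0       # particle and fluid densities (kg/m^3)
mu, r, g = 1.0e-3, 50.0e-6, 9.81    # viscosity (Pa.s), radius (m), gravity (m/s^2)

volume = (4.0 / 3.0) * math.pi * r ** 3
m = rho_p * volume                      # particle mass
drag = 6.0 * math.pi * mu * r           # Stokes drag coefficient
weight = (rho_p - rho_f) * volume * g   # buoyant weight of the particle

v, t, dt = 0.0, 0.0, 1.0e-5
while weight - drag * v > 0.01 * weight:    # stop within ~1% of terminal velocity
    v += dt * (weight - drag * v) / m       # Euler step on m dv/dt = weight - drag*v
    t += dt
print(f"v = {v:.3e} m/s reached after t = {t:.3e} s")

The same loop, with a finer step or a higher-order scheme, is exactly the kind of numerical integration referred to above.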

Another interesting application of calculus is to analyse data. Consider a set of data collected in an experiment ..... imagine we are measuring X and Y simultaneously. When we plot Y against X, the curve generated may clearly show that a relationship exists, but the relationship is not simple or immediately apparent. A very simple method to start analysing this mysterious relationship is to differentiate the plot numerically (i.e. calculate the slope at points along the curve) and form a new plot of dY/dX versus X. Now remember that when we differentiate particular functions, very specific new relationships are formed. For example, differentiating a trigonometric function will generate another trigonometric function, and in the case of simple trigonometric functions like sine and cosine, the functions formed have very specific geometric relationships to the originals (e.g. cosine has the same shape and periodic form as sine but is "out of phase" with it). In the case of polynomials, differentiating produces a function of lower order; the slope of a cubic follows a parabolic relationship, the differential of a parabolic function produces a linear function, and so on. This means that by differentiating a curve (i.e. measuring the slope of the curve at each point) some of these underlying relationships in the data may be revealed.
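As a sketch of how simple this first step is, assuming the measurements sit in NumPy arrays (the sine data here stands in for a real experiment):

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)                  # pretend this column came from the experiment

dy_dx = np.gradient(y, x)      # central-difference slope at each data point

# If the hidden relationship really is sin(x), the slope curve tracks cos(x):
print(np.max(np.abs(dy_dx - np.cos(x))))   # small residual, set by the step size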

This approach can be extended by differentiating the dY/dX curve formed, as double differentiation can also unlock underlying relationships. For example, differentiating sin x will form cos x, and differentiating that relationship will produce a negative version of the original. Double differentiation of a cubic function will generate a linear function (try it !). Thus, the "slope of the slope" can potentially tell us a lot about the original relationship. This line of attack can be extended to integration, by measuring the area under the curve and plotting this accumulated area against X. Of course, taking the slope and measuring the area can be used in combination to tackle the problem.
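Both ideas are equally easy to automate; a sketch, again assuming NumPy arrays of sampled data:

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.sin(x)

# "Slope of the slope": the second derivative of sin(x) should track -sin(x)
d2y = np.gradient(np.gradient(y, x), x)

# Cumulative area under the curve (trapezoidal rule) should track 1 - cos(x)
area = np.concatenate(([0.0], np.cumsum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)))

print(np.max(np.abs(d2y[5:-5] + np.sin(x[5:-5]))))   # end points are noisier
print(np.max(np.abs(area - (1.0 - np.cos(x)))))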

The beauty of this methodology is that the procedure is very simple (e.g. measuring the slope of a line) and easily automated. You can try it yourself .... I suggest asking a mathematically inclined friend to dream up a complex function that is the combination of well-known simple functions (e.g. cos x + x^3 + exp(x)), get him or her to form an x-y table of values from this relationship and then ask you to derive the underlying relationship from the data set. The detective job in front of you is made simple by modern graphical/CAS calculators that allow ready numerical differentiation and integration of curves. Sometimes a combination of intuition, luck and insight is required to identify the underlying relationship, but the journey is normally fun. Try it !!!
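A sketch of the detective game itself, using the example function above as the "mystery" (any table of values would do):

import numpy as np

x = np.linspace(0.0, 3.0, 300)
y = np.cos(x) + x ** 3 + np.exp(x)    # your friend's secret recipe

slope = np.gradient(y, x)
# The analytical slope is -sin(x) + 3x^2 + exp(x); the game is to compare the
# numerical slope against candidate combinations until the pieces are found.
guess = -np.sin(x) + 3.0 * x ** 2 + np.exp(x)
print(np.max(np.abs(slope[2:-2] - guess[2:-2])))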

Thursday, April 23, 2009

Thinking about the foundations of calculus

Just recently, I went through the standard derivation of the fundamental theorem of calculus with my students ..... forming tangent lines to a curve, calculating the gradient of that line using an increment, taking the increment towards zero, then repeating similar arguments for the area under a curve before forming the wonderful conclusion that the mathematics of calculating an area under a curve is the reverse of the process for calculating the gradient of a curve. In short, if you understand the mathematics of change, you also understand the mathematics of accumulation and vice versa. This was the brilliant insight that both Newton and Leibniz claimed as their own in the 17th century, and it formed the basis of the field we know as "Calculus".
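In modern notation, the two halves of that derivation, and the connection between them, can be stated compactly:

f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \qquad \text{(gradient: the increment } h \text{ shrinking to zero)}

\frac{d}{dx} \int_a^x f(t)\, dt = f(x) \qquad \text{and} \qquad \int_a^b f'(x)\, dx = f(b) - f(a) \qquad \text{(accumulation is the reverse of change)}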

This derivation is rightly considered one of the great mathematical breakthroughs of all time and its conclusions are indeed far reaching. During the lecture, I presented the orthodox view that Newton and Leibniz are the great intellectual heroes of this breakthrough, with a nod of appreciation to ancient Greeks like Archimedes who developed integral calculus via the method of exhaustion. As I was going through these arguments, I found myself questioning this idea of Newton's and Leibniz's pivotal role in the development of calculus. Wasn't the real breakthrough the idea that if you take an increment and imagine it shrinking towards zero, you can derive useful geometrical relationships ? Isn't that idea, which I think we can credit to Archimedes, the real intellectual breakthrough ? If you know that idea and have the tools of Cartesian co-ordinates (thank you Descartes !), then won't the relationships that Newton and Leibniz formed eventually fall out ?

Even as I write these heretical ideas down I feel my inner critic saying "No, these ideas only seem obvious because of the brilliant insights of Newton and Leibniz !" That may be true, but historians of mathematics writing on calculus have shown that calculus quickly formed as a field after the developments in algebra instigated by Descartes and other mathematicians just preceding Newton and Leibniz. It is also acknowledged that Barrow (Newton's teacher at Cambridge) had an early form of differential calculus before Newton (see http://www.maths.uwa.edu.au/~schultz/3M3/L18Barrow.html for an excellent overview of his ideas). After consulting my inner critic, I think the view I am forming can be expressed as follows: understanding the importance of taking increments towards zero was a great intellectual breakthrough that allowed the development of calculus, simplifying algebra through Cartesian co-ordinates provided wonderful tools by which to understand the mathematics of change and accumulation, and the derivation of calculus by Newton and Leibniz represents the culmination of this intellectual development. In short, their intellectual insights owe a great deal to Archimedes, Descartes and Barrow.

One of the interesting observations one can make from these discussions is that the way calculus is taught follows a very different route from its historical development. At high school, we indoctrinate students in algebra, then introduce differential calculus and limits, and then form integral calculus. In history, calculus was formed in almost the opposite order. I suppose, as long as you understand the key intellectual points underpinning calculus, it doesn't really matter in which order you learn them.

Thursday, April 9, 2009

A very brief history of calculators or how my brother amazed my school


I started high school in 1973, three years after the end of the Beatles and a generation before the end of the cold war. Everybody wore their hair long, ludicrously wide ties were considered fashionable, most engineers (like my father) owned a slide rule and very simple electronic calculators were starting to become affordable. I remember my brother saving up several weeks of his paper round money to purchase a calculator with a square root button. The arrival of this calculator at our high school caused a sensation and my brother was asked to demonstrate this technological marvel to the headmaster. With the arrival of even more powerful devices throughout that decade, my brother and I, and everybody else studying mathematics in the Western world, continued to be trained in the use of log tables for carrying out any calculation beyond 687 x 6578. I think the last time I used a log table, Ronald Reagan hadn't yet become president and computer programs were typed on cards and processed overnight.

During this time, serious letters to the papers and educational experts lamented the fall in educational standards, my year 10 geography teacher warned that global warming would see Sydney under a foot of water by 2000, and there was a general feeling among anyone over the age of 40 that using calculators was "cheating".

By the end of the 1970s and into the early 80s, calculators had advanced quickly and a range of programmable calculators were on offer. In this enlightened era, engineering students tended to be either "TI" or "Casio" adherents, though a few perverse souls identified with the reverse Polish notation of the "HP" calculators. I remember quite distinctly slaving away on my Casio programmable calculator with its gigantic 2k of memory, writing quite intricate programs with the line numbering system of Level-2 BASIC, a cute plug-in ticker tape printer and an audio tape memory system. Armed with this calculating power, you felt that you could conquer the world, or at least complete a pressure drop calculation for a piping system in under 10 minutes. Part of me (a very small part) still hankers for the happy chatter of my ticker tape Casio printer and the amazingly clunky graphics produced by this device. By this time, the scientific calculators familiar to modern students had become standard and knowledge of the workings of a slide rule suggested either a perverted soul or a person lost in the past.

The calculator was here to stay ! My arrival in the Engineering profession coincided with the great personal computer revolution and in my own small way I led the charge, using computer programs (now written in "high" level languages like GW-BASIC !!) to perform complex engineering calculations that had formerly been the province of "look up" tables and approximate solutions. Even with this shift towards computing, my scientific calculator (still a Casio man) was used on a daily basis. However, by this time my career had taken a sharp turn towards research and the graphics calculator revolution bypassed me, as I was knee deep in numerics, computational thermodynamics and writing unruly "programs" in Excel. It was only when I took my current position that I was handed my first graphics calculator. It was love at first sight ! I love the fact that I can "see" the solution of an equation, that I can calculate derivatives and integrals and even form the ABC TV symbol using parametric graphics. What is there not to love ! I even accepted the transition from being a Casio man to a TI man without suffering a nervous breakdown (OK, I had a little therapy).

Interestingly, serious people are still lamenting the fall in educational standards, predicting that Sydney will be under a metre of water by ......, and most people over 40 think that using a CAS calculator is cheating.

Friday, April 3, 2009

The Continuum Assumption

The engineering mathematics course at Swinburne is typical of most engineering mathematics courses around the world, in that there is a heavy emphasis on the use of functions in analysing the physical world. In particular, there is an underlying assumption (often unstated) that we can deal with physical data as a continuum (e.g. analysing radioactivity measurements using exponential functions). It is this assumption that underpins the "classical" paradigm of engineering mathematics, which I would describe as follows (a worked example appears after the list):

A. analyse the physical relationships of the system being studied (e.g. force balance of a particle),

B. form equations that reflect these relationships, making appropriate simplifications and assumptions (e.g. particle is spherical),

C. solve these equations for a given set of boundary conditions or limitations using either analytical or numerical techniques, and

D. analyse the solutions obtained against physical data, returning to the first two steps if the solutions obtained are inaccurate or not credible.
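As a minimal worked instance of steps A to C, take the spherical settling particle mentioned above, assuming creeping-flow (Stokes) drag, which is my simplifying assumption here:

m \frac{dv}{dt} = \frac{4}{3} \pi r^{3} (\rho_p - \rho_f)\, g - 6 \pi \mu r v

Solving with the boundary condition v(0) = 0 gives

v(t) = v_t \left( 1 - e^{-t/\tau} \right), \qquad v_t = \frac{2 r^{2} (\rho_p - \rho_f)\, g}{9 \mu}, \qquad \tau = \frac{m}{6 \pi \mu r}

Step D would then be to compare the predicted v(t), or the terminal velocity v_t, against measured settling data, and revisit the assumptions (e.g. the Stokes drag law) if the fit is poor.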

This approach, and its many subtle variations, has proved to be very powerful in analysing engineering problems, though complex systems where subtle changes in geometry and boundary conditions can produce large variations in behaviour (e.g. turbulence in fluids, movement of fine particles and "chaotic" systems in general) have proved difficult to model using this approach. Stephen Wolfram, in his book "A New Kind of Science" (2002) (see http://www.wolframscience.com/), argued that the classical approach was fundamentally flawed and needed replacing with a new approach called "cellular automata". At the heart of Wolfram's claims was this central observation:

All of our measurements of the world are made discretely, that is, we obtain discrete numbers from our instruments (e.g. the temperature measurement from a thermometer), including our senses, and we artificially impose continuous relationships upon the world by forming equations around fundamentally discrete phenomena. We could more easily, and naturally, use discrete mathematical models to describe the physical world and dispense with the classical approach.

Quite a claim !! As you can imagine, this book caused much debate, some of it polite and in some cases quite impolite ! You might find the overheads of a lecture I gave on the book interesting - see http://www.swin.edu.au/feis/mathematics/staff/gbrooks_pres.html - and there are literally hundreds of sites on the web discussing this book. You may also be interested to read a much earlier (and more modest) version of the same idea by Konrad Zuse (1910-1995), who published "Calculating Space" in 1967. An English translation of this pioneering work on "digital physics" is available at http://www.idsia.ch/~juergen/digitalphysics.html. Zuse was also an early pioneer in the development of the computer and was, clearly, a highly imaginative and interesting thinker.
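To make "cellular automata" a little more concrete, here is a minimal sketch of an elementary one-dimensional automaton (Wolfram's Rule 30): each cell is 0 or 1, and the next row follows from purely local, discrete rules, with no continuous equations anywhere. The wrap-around boundary is my simplification; Wolfram works on an unbounded line:

RULE, WIDTH, STEPS = 30, 63, 20

row = [0] * WIDTH
row[WIDTH // 2] = 1            # a single live cell in the middle to start

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    # Each new cell depends only on its old left, centre and right neighbours;
    # the 3-bit neighbourhood indexes a bit of the rule number (0-255).
    row = [
        (RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]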

I think the claims, details and repercussions of Wolfram's argument are a bit too detailed to discuss here, but I do think the first part of his central claim is uncontroversial, that is, the measurements we make of the world are discrete, and the equations we impose on this discrete data reflect our intellectual choices, not an underlying physical connection between equations and nature (i.e. cannon balls do not have a parabolic equation written into their structure; it is "us" that chooses a parabola to model the motion of the ball). I think this is an underlying assumption worth appreciating as we continue along our path of differentiating/integrating etc. continuous functions to describe the physical world.
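A small sketch of that last point: the cannon ball data below is discrete, and the parabola is something we choose to lay over it (the numbers are invented for illustration):

import numpy as np

t = np.linspace(0.0, 4.0, 9)        # nine discrete observation times (s)
h = 20.0 * t - 4.9 * t ** 2         # "measured" heights (m), idealised
h += np.random.default_rng(1).normal(0.0, 0.1, t.size)   # measurement noise

a, b, c = np.polyfit(t, h, 2)       # our chosen continuous model, fitted by least squares
print(f"chosen model: h(t) = {a:.2f} t^2 + {b:.2f} t + {c:.2f}")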