Saturday, March 31, 2018

239: The Shape Of Our Knowledge

Audio Link

Recently I’ve been reading Umberto Eco’s essay collection titled “From the Tree to the Labyrinth”.   In it, he discusses the many attempts over history to cleanly organize and index the body of human knowledge.    We have a natural tendency to try to impose order on the large amount of miscellaneous stuff we know, for easy access and for later reference.   As is typical with Eco, the book is equal parts fascinating insight, verbose pretentiousness, and meticulous historical detail.    But I do find it fun to think about the overall shape of human knowledge, and how our visions of it have changed over the years.

It seems like most people organizing a bunch of facts start out by trying to group them into a “tree”.   Mathematically, a tree is basically a structure that starts with a single node, which then links to sub-nodes, each of which links to sub-sub-nodes, and so on.   On paper, it looks more like a pyramid.   But essentially it’s the same concept as folders, subfolders, and sub-sub-folders that you’re likely to use on your computer desktop.   For example, you might start with ‘living creatures’.   Under it you draw lines to ‘animals’, ‘plants’, and ‘fungi’.   Under the animals you might have nodes for ‘vertebrates’, ‘invertebrates’, etc.     Actually, living creatures are one of the few cases where nature provides a natural tree, corresponding to evolutionary history:  each species usually has a unique ancestor species that it evolved from, as well as possibly many descendants.

Attempts to create tree-like organizations date back at least as far as Aristotle, who tried to identify a set of rules for properly categorizing knowledge.   Later authors made numerous attempts to fully construct such catalogs, and Eco points out some truly hilarious (to modern eyes) efforts to create universal knowledge categories, such as Pedro Bermudo's 17th-century attempt to organize knowledge into exactly 44 categories.  While some, such as “elements”, “celestial entities”, and “intellectual entities”, seem relatively reasonable, other categories include “jewels”, “army”, and “furnishings”.     Perhaps the inclusion of “furnishings” as a top-level category on par with “celestial entities” just shows us how limited human experience and knowledge typically was before modern times.

Of course, the more knowledge you have, the harder it is to cleanly fit into a tree, and the more logical connections you see that cut across the tree structure.   Thus our attempts to categorize knowledge have evolved more into what Eco calls a labyrinth, a huge collection with connections in every direction.  For example, wandering down the tree of species, you need to follow very different paths to reach a tarantula and a corn snake, one being an arachnid and the other a reptile.   Yet if you’re discussing possible caged parent-annoying pets with your 11-year-old daughter, those two might actually be closely linked.    So our map of knowledge, or semantic network, would probably merit a dotted line between the two.     We don’t just traverse directly down the tree, but follow many lateral links as well, which is why Eco describes our real knowledge as more of a labyrinth.   He seems to prefer the vivid imagery of a medieval scholar wandering through a physical maze, but in a mathematical sense I think he is referring to what we would call a ‘graph’, a huge collection of nodes with individual connections in arbitrary directions.
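To make the tree-versus-graph distinction concrete, here’s a minimal Python sketch.   The taxonomy entries are just toy data for illustration: in the pure species tree, the tarantula and the corn snake are six links apart, but adding one “dotted line” for the caged-pet connection makes them immediate neighbors.

```python
from collections import deque

# A tiny taxonomy "tree", stored as a list of parent-child edges (toy data).
edges = [
    ("living creatures", "animals"),
    ("animals", "vertebrates"),
    ("animals", "invertebrates"),
    ("vertebrates", "reptiles"),
    ("reptiles", "corn snake"),
    ("invertebrates", "arachnids"),
    ("arachnids", "tarantula"),
]

def build_graph(edge_list):
    """Turn an edge list into an undirected adjacency map."""
    graph = {}
    for a, b in edge_list:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def shortest_path_length(graph, start, goal):
    """Standard breadth-first search for the shortest link count."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

tree = build_graph(edges)
print(shortest_path_length(tree, "tarantula", "corn snake"))  # 6 in the pure tree

# Add one lateral "dotted line": both are caged, parent-annoying pets.
lateral = build_graph(edges + [("tarantula", "corn snake")])
print(shortest_path_length(lateral, "tarantula", "corn snake"))  # 1 with the link
```

The moment even one such lateral edge exists, the structure is no longer a tree but a general graph, which is exactly Eco’s labyrinth.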

On the other hand, this labyrinthine nature of knowledge doesn’t negate the usefulness of tree structures— as humans, we have a natural need to organize into categories and subcategories to make sense of things.   Nowadays, we realize both the ‘tree’ and ‘labyrinth’ views of knowledge on the Internet.   As a tree, the internet consists of pages with subpages, sub-sub-pages, etc.   But a link on any page can lead to an arbitrary other page, not part of its own local hierarchy, whose knowledge is somehow related.   It’s almost too easy these days.   If you’re as old as me, you can probably recall the many hours you spent poring through libraries researching papers back in high school and college.   You probably spent lots of time scanning vaguely related books to try to identify these labyrinth-like connections that were not directly visible through the ‘trees’ of the card catalog or Dewey Decimal system.

Although it’s very easy today to find lots of connections on the Internet, I think we still have a natural human fascination with discovering non-obvious cross connections between nodes of our knowledge trees.   A simple example is our amusement at puns, when we are suddenly surprised by an absurd connection due only to the coincidence of language.    Next time my daughter asks if she can get a tarantula for Christmas, I’ll tell her the restaurant only serves steak and turkey.    More seriously, finding fun and unexpected connections is one reason I enjoy researching this podcast, discussing obscure tangential links to the world of mathematics that are not often displayed in the usual trees of math knowledge.   Maybe that’s one of the reasons you like listening to this podcast, or at least consider it so absurd that it can be fun to mock.

And this has been your math mutation for today.


References:






Sunday, February 18, 2018

238: Programming Your Donkey

Audio Link

You have probably heard some form of the famous philosophical conundrum known as Buridan’s Ass.   While the popular name comes from a 14th century philosopher, it actually goes back as far as Aristotle.   One popular form of the paradox goes like this:   Suppose there is a donkey that wants to eat some food.   There are identical apples visible ahead, one to its left and one to its right, at exactly equal distances.   Since they are precisely equivalent in both distance and quality, the donkey has no rational reason to turn towards one and not the other, so it will remain in the middle and starve to death.

It seems that medieval philosophers spent quite a bit of time debating whether this paradox is evidence of free will.   After all, without the tie-breaking power of a living mind, how could the animal make a decision one way or the other?   Even if the donkey is allowed to make a random choice, the argument goes, it must use its living intuition to decide to make such a choice, since there is no rational way to choose one alternative over the other.  

You can probably think of several flaws in this argument, if you stop and think about it for a while.   Aristotle didn’t really think it posed a real conundrum when he mentioned it— he was making fun of sophist arguments that the Earth must be stationary because it is round and has equal forces operating on it in every direction.   Ironically, the case of balanced forces is one of the rare situations where the donkey analogy might be kind of useful:  in Newtonian physics, it is indeed the case that if forces are equal in every direction an object will stay still.    But medieval philosophers seem to have taken it more seriously, as a dilemma that might force us to accept some form of free will or intuition.  

I think my biggest problem with the whole idea of Buridan’s Ass as a philosophical conundrum is that it rests on a horribly restrictive concept of what is allowed in an algorithm.  By an algorithm, I mean a precise mathematical specification of a procedure to solve a problem.   There seems to be an implicit assumption in the so-called paradox that in any decision algorithm, if multiple choices are judged to be equally valid, the procedure must grind to a halt and wait for some form of biological intelligence to tell it what to do next.   But that’s totally wrong— anyone who has programmed modern computers knows that we have lots of flexibility in what we can specify.   Thus any conclusion about free will or intuition, from this paradox at least, is completely unjustified.   Perhaps philosophers in an age of primitive mathematics, centuries before computers were even conceived, can be forgiven for this oversight.

To make this clearer, let’s imagine that the donkey is robotic, and think about how we might program it.   For example, maybe the donkey is programmed to, whenever two decisions about movement are judged equal, simply choose the one on the right.   Alternatively, randomized algorithms, where an action is taken based on a random number, essentially flipping a virtual coin, are also perfectly fine in modern computing.    So another alternative is just to have the donkey choose a random number to break any ties in its decision process.    The important thing to realize here is that these are both basic, easily specifiable methods fully within the capabilities of any computers created over the past half century, not requiring any sort of free will.  They are fully rational and deterministic algorithms, but are far simpler than any human-like intelligence.   These procedures could certainly have evolved within the minds of any advanced  animal.
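As a sketch of how simple such donkey-programming could be, here is one hypothetical decision procedure in Python.   The scoring function and the apple inputs are invented purely for illustration; the point is just that both tie-breaking methods fit in a few lines, with no intelligence required.

```python
import random

def choose_apple(left, right, rng=random.Random(42)):
    """Pick an apple by a (distance, quality) score; break exact ties.

    left and right are hypothetical (distance, quality) tuples.
    Lower distance and higher quality are better.
    """
    def score(apple):
        distance, quality = apple
        return distance - quality  # toy scoring rule for illustration

    if score(left) < score(right):
        return "left"
    if score(right) < score(left):
        return "right"
    # Tie-breaking rule 1, a fixed bias, would simply be: return "right"
    # Tie-breaking rule 2: flip a virtual coin.
    return rng.choice(["left", "right"])

print(choose_apple((2.0, 5.0), (3.0, 5.0)))  # left apple is closer -> "left"
print(choose_apple((2.0, 5.0), (2.0, 5.0)))  # exact tie -> coin flip; never starves
```

Either way, the procedure always terminates with an apple chosen, so the “paradox” never arises.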

Famous computer scientist Leslie Lamport has an interesting take on this paradox, but I think he makes a similar mistake to the medieval philosophers, artificially restricting the possible algorithms allowed in our donkey’s programming.   For this model, assume the apples and donkey are on a number line, with one apple at position 0 and one at position 1, and the donkey at an arbitrary starting position s.   Let’s define a function F that describes the donkey’s position an hour from now, in terms of s.  F(0) is 0, since if he starts right at apple 0, there’s no reason to move.   Similarly, F(1) is 1.  Now, Lamport adds a premise:  the function the donkey uses to decide his final location must be continuous, corresponding to how he thinks naturally evolved algorithms should operate.   It’s well understood that if you have a continuous function where F(0) is 0 and F(1) is 1, then for any value v between them, there must be a point x where F(x) is v.   So, in other words, there must be starting points x where F(x) is strictly between 0 and 1, indicating the donkey can still be stuck between the two apples an hour from now.      Since the choice of one hour was arbitrary, a similar argument works for any amount of time, and we are guaranteed to be infinitely stuck from certain starting points.   It’s an interesting take, and perhaps I’m not doing Lamport justice, but it seems to me that this is just a consequence of the unfair restriction that the function must be continuous.   I would expect precisely the opposite:   the function should have a discontinuous jump from 0 to 1 at the midpoint, with the value there determined by one of the donkey-programming methods I discussed before.
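The discontinuous alternative I have in mind fits in a couple of lines.   This is just an illustrative sketch, with ties at the midpoint arbitrarily broken to the right; a coin flip there would work just as well.

```python
def final_position(s):
    """Donkey's position an hour from now, starting at s in [0, 1].

    Deliberately discontinuous at the midpoint: the jump from 0 to 1
    happens at s = 0.5, where the tie is broken by the fixed "go right" rule.
    """
    return 0 if s < 0.5 else 1

assert final_position(0) == 0 and final_position(1) == 1
# Every starting point ends at one of the apples -- no starvation anywhere:
assert all(final_position(i / 100) in (0, 1) for i in range(101))
```

This function satisfies F(0) = 0 and F(1) = 1 but never takes a value strictly between them, which is exactly what the continuity premise rules out.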

I did find one article online that described a scenario where this paradox might provide some food for thought though.   Think about a medical doctor, who is attempting to diagnose a patient based on a huge list of weighted factors, and is at a point where two different diseases are equally likely by all possible measurements.   Maybe the patient has virus 1, and maybe he has virus 2— but the medicines that would cure each one are fatal to those without that infection.   How can he make a decision on how to treat the patient?   I don’t think a patient would be too happy with either of the methods we suggested for the robot donkey:  arbitrarily biasing towards one decision, or flipping a coin.     On the other hand, we don’t know what goes on behind the closed doors after doctors leave the examining room to confer.   Based on TV, we might think they are always carrying on office romances, confronting racism, and consulting autistic colleagues, but maybe they are using some of our suggested algorithms as well.     In any case, if we assume the patient is guaranteed to die if untreated, is there really a better option?  In practice, doctors resolve such dilemmas by continually developing more and better tests, so the chance of truly being stuck becomes negligible.   But I’m glad I’m not in that line of work. 



And this has been your math mutation for today.

References:

Monday, January 15, 2018

237: A Skewed Perspective

Audio Link

If you’re a listener of this podcast, you’re probably aware of Einstein’s Theory of Relativity, and its strange consequences for objects traveling close to the speed of light.   In particular, such an object will appear to have its length shortened in the direction of motion, as measured by a stationary observer watching it pass.    It’s not a huge factor:  where v is the object’s velocity and c is the speed of light, the observed length is the rest length multiplied by the square root of 1 minus v squared over c squared.    At ordinary speeds we observe while traveling on Earth, the effect is so close to zero as to be invisible.    But for objects near the speed of light, it can get significant.
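For a feel of the numbers, here is a quick Python sketch of that contraction factor:

```python
import math

def contraction_factor(v, c=1.0):
    """Observed length as a fraction of rest length: sqrt(1 - v^2/c^2)."""
    return math.sqrt(1 - (v / c) ** 2)

# Highway speed (~30 m/s) versus the speed of light (~3e8 m/s):
print(contraction_factor(30, 3e8))   # indistinguishable from 1
# An object moving at 0.99c (with c normalized to 1):
print(contraction_factor(0.99))      # about 0.14 -- dramatically shortened
```

So a car on the Autobahn is contracted by far less than a trillionth of a percent, while at 0.99c an object appears squeezed to about a seventh of its rest length.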

A question we might ask is:  if some object traveling close to the speed of light passed you by, what would it look like?    To make this more concrete, let’s assume you’re standing at the side of the Autobahn with a souped-up camera that can take an instantaneous photo, and a Nissan Cube rushing down the road at .99c, 99% of the speed of light, is approaching from your left.   You take a photo as it passes by.   What would you see in the photo?   Due to length contraction, you might predict a side view of a somewhat shortened Cube.   But surprisingly, that expectation is wrong— what you would actually see is weirder than you think.   The length would be shorter, but the Cube would also appear to have rotated, as if it has started to turn left.

This is actually an optical illusion:   the Cube is still facing forward and traveling in its original direction.   The reason for this skewed appearance is a phenomenon known as Terrell Rotation.    To understand this, we need to think carefully about the path a beam of light would take from each part of the Cube to the observer.   For example, let’s look at the left rear tail light.    At ordinary non-relativistic speeds, we wouldn’t be able to see this until the car had passed us, since the light would be physically blocked by the car— at such speeds, we can think of the speed of light as effectively infinite. Thus we would capture our usual side view in our photo.   But when the speed gets close to that of light, the time it takes for the light from each part to travel to the observer is significant compared to the speed of the car.  This means that when the car is a bit to your left, the contracted car will have moved just enough out of the way to actually let the light from the left rear tail light reach you.   This will arrive at the same time as light more recently emitted from the right rear tail light, and light from other parts of the back of the car that are in between.   In other words, due to the light coming from different parts of the car having started traveling at different times, you will be able to see an angled view of the entire rear of the car when you take your photo, and the car will appear to have rotated overall.   This is the Terrell Rotation.

I won’t go into the actual equations in this podcast, since they can be a bit hard to follow verbally, but there is a nice derivation & some illustrations linked in the show notes.   But I think the most fun fact about the Terrell Rotation is that physicists totally missed the concept for decades.   For half a century after Einstein published his theory, papers and texts claimed that if you photographed a cube passing by at relativistic speeds, you would simply see a contracted cube.    Nobody had bothered carefully thinking it through, and each author just repeated the examples they were used to.    This included some of the most brilliant physicists in our planet’s history!   There were some lesser-known physicists such as Anton Lampa who had figured it out, but they did not widely publicize their results.   It was not until 1959 that physicists James Terrell and Roger Penrose independently made the detailed calculation, and published widely-read papers on this rotation effect.    This is one of many examples showing the dangers of blindly repeating results from authoritative figures, rather than carefully thinking them through yourself.


And this has been your math mutation for today.


References:

Wednesday, December 20, 2017

236: A Stubborn Tortoise

Audio Link

If you have a middle-school-aged child, you’ve probably endured countless conversations where you think you’ve clearly explained your point, but it is always answered with a “Yes but”, and a further rationalization.    Recently I was in such a situation, trying to convince my daughter to scoop the cat litter, and descending down an infinite regress of excuses.   It occurred to me that this conversation was very similar to one that Achilles had with the Tortoise in Lewis Carroll’s famous  1895 dialogue,  “What the Tortoise Said to Achilles”.   I was actually surprised to realize that I hadn’t yet recorded a Math Mutation episode on this classic pseudo-paradox.   So here we are.

This dialogue involves the two characters from Zeno’s famous paradoxes of motion, the Tortoise and Achilles, though it is on a totally different topic.   Achilles presents a pair of propositions, which we can call A and B, as he and his pet discuss an isosceles triangle they are looking at.   Proposition A is “Things that are equal to the same are equal to each other.”   Proposition B is “The two sides of this Triangle are things that are equal to the same.”    Achilles believes that he has now established another proposition, proposition Z:  “The two sides of this Triangle are equal to each other.”   But the Tortoise is not convinced:  as he states, “I accept A and B as true, but I don't accept the Hypothetical”.

Achilles tries to convince the Tortoise by pointing out that there is an unstated proposition here, which we will call Proposition C:  “If A and B are true, then Z must be true.”    Surely if we believe propositions A, B, and C, then we must believe Proposition Z.   But the Tortoise isn’t convinced so easily:   after all, the claim that you can infer the truth of Proposition Z from A, B, and C is yet another unstated rule.   So Achilles needs to introduce proposition D:  “If A, B, and C are true, then Z must be true.”   And so he continues, down this infinite rabbit-hole of logic.

On first reading this, I concluded that the Tortoise was just being stubborn.   If we have made an if-then statement, and the ‘if’ conditions are true, how can we refuse to accept the ‘then’ part?   Here we are making use of the modus ponens, a basic element of logic:  if we say P implies Q, and P is true, then Q is true.   The problem is that to even be able to do basic logical deductions, you have to already accept this basic inference rule:  you can’t convince someone of the truth value of basic logic from within the system, if they don’t accept some primitive notions to start with.   

One basic way to try to resolve this is to redefine “If A then B” in terms of simple logical AND, OR, and NOT operators:  “If A then B” is equivalent to “B or NOT A”.   But this doesn’t really solve the problem— now we have to somehow come across basic definitions of the AND, OR, and NOT operators.   You can try to describe the definitions purely symbolically, but that doesn’t give you semantic information about whether a statement about the world is ultimately true or false.   Logicians and philosophers take the issue very seriously, and there are many long-winded explanations linked from the Wikipedia page.
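The rewrite itself is easy to sanity-check mechanically, even if it doesn’t escape the philosophical circularity.   This little Python snippet exhaustively compares “B or NOT A” against the standard truth table for implication, which is false only when the antecedent is true and the consequent is false:

```python
from itertools import product

def implies(p, q):
    """'If P then Q' rewritten as 'Q or NOT P'."""
    return (not p) or q

# Exhaustively check the rewrite against the usual truth table:
# implication is false exactly when P is true and Q is false.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == (not (p and not q))

print(implies(True, False))  # the one false row of the table
```

Of course, as the episode notes, this only pushes the problem down a level: the check itself assumes we already agree on what NOT, OR, and equality mean.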

I personally like to resolve this pseudo-paradox by thinking about the fact that ultimately, the modus ponens is really just a way of saying that your statements need to be consistent.   For any reasonable definition of “implies” and “true”, if you say P implies Q, and claim P is true, then Q must be true.   You might nitpick that I haven’t defined “implies” and “true” in terms of more primitive notions…  but I think this is just an instance of the general problem of the circularity of language.   After all, *any* word you look up in the dictionary is defined in terms of other words, and to be able to exist in this world without going insane, you must accept some truths and definitions as basic building blocks, without having to be convinced of them.    Hardcore philosophers might object that by accepting so simple an explanation and blindly using modus ponens willy-nilly, I’m being as stubborn as a Tortoise.   But I’m OK with that.


And this has been your math mutation for today.


References:




Sunday, November 19, 2017

235: Syntax Wars


Audio Link

Recently my daughter was complaining about having to do a "sentence diagramming" assignment in school.   As you may recall, this is when you take sentences and break up their words into a kind of chart, showing clearly the subject, verb, and object, and with outlying slanted lines representing modifiers such as adjectives or adverbs, and similar structures to represent subordinate clauses.   Many middle-school English students find this kind of tedious, but I always liked these assignments.   They transformed the dry subject of Language Arts into a kind of geometry exercise, which in my geekiness I found much more appealing.   But aside from the visual appeal, I liked the idea that language follows rules of syntax:  the pieces need to fit together like a computer program, and if you don't combine a reasonable set of pieces in a reasonable order, you end up with gibberish.

Thinking about the concepts of language syntax reminded me of a famous sentence created by Noam Chomsky as an illustration to linguistics academics:  "Colorless green ideas sleep furiously".    According to Chomsky, and my linguistics professor, this illustrated how a sentence could follow all the formal rules of syntax and yet still be meaningless.    You can see that its grammar is very straightforward:  the subject is ‘ideas’, the verb is ‘sleep’, and they each have some standard modifiers.  Chomsky’s claim was that the sentence is effectively nonsense, since the meanings of the words just do not fit together.    However, I disagreed with my professor when he made this claim.   Because it does follow the rules of syntax, the sentence doesn't seem inherently broken to a native speaker-- and with a properly poetic interpretation, it makes perfect sense.    For example, a "green idea" might be one motivated by jealousy.   It might be "colorless" for lacking subtlety and nuance.   And it might "sleep furiously" as it sits in the back of your mind, building up resentment over time.   So, "Colorless green ideas sleep furiously" is not only meaningful, it might be a profound statement about what happens when you let jealous resentments build up in the back of your mind.     Due to its correct syntax, it's not too hard to think of many somewhat reasonable interpretations of that sentence.

I was amused to see that this famous sentence had its own Wikipedia page.   On it, I found that I wasn't the only one to have the idea that it could be sensibly interpreted-- in fact, there was even a contest held in 1985 for the most sensible and concisely explained legitimate usage!   Here is the winner:  “It can only be the thought of verdure to come, which prompts us in the autumn to buy these dormant white lumps of vegetable matter covered by a brown papery skin, and lovingly to plant them and care for them. It is a marvel to me that under this cover they are labouring unseen at such a rate within to give us the sudden awesome beauty of spring flowering bulbs. While winter reigns the earth reposes but these colourless green ideas sleep furiously.”   It looks like they focused on “green” as pertaining to nature when composing this version, and “ideas” as a metaphor for still-underground plants.    Personally, I prefer my interpretation.

Anyway, I think the opposite case-- where the words make sense, but are not following the rules of syntax-- is actually much worse.   Here's an example from John Cage, this podcast’s favorite source of artistic absurdity.    He generated it with the aid of some I-Ching-inspired random numbers applied to a starting point of works by Thoreau.  "sparrowsitA gROsbeak betrays itself by that peculiar squeakerIEFFECT OF SLIGHGEst tinkling measures soundness ingpleasa We hear!"    That's just the opening of the poem "Mureau", in Cage's strange collection "M".   You're actually not getting the full effect in this podcast, because in Cage's version, the typeface of the letters varies randomly too.   Perhaps it's just my unpoetic colorless green jealousy, but that sounds like nonsense to me.    Cage, on the other hand, considered abandoning syntax to be a virtue.   As he wrote in the introduction to M, "Syntax, according to Norman O. Brown, is the arrangement of the army.  As we move away from it, we demilitarize language....  Translation becomes, if not impossible, unnecessary.    Nonsense and silence are produced, familiar to lovers.   We begin to actually live together, and the thought of separating doesn't enter our minds."

I'm afraid I'll just have to respectfully disagree with Cage on that one.     I’m not sure if he was even serious about that explanation, given that his starting point for the text was Henry David Thoreau, not exactly known for separatism or violence.    But in any case,  I like having some structure to my linguistic utterances, and I don't think it's been significantly damaging to world peace.   In fact, I think the mutual understanding provided by sticking to well-understood rules of syntax has been critical to diplomatic relations throughout human history, and prevented far more violence than it has caused.   Let that idea sleep furiously in the back of your mind for a while, and see if you agree with me.


And this has been your math mutation for today.


References:





Saturday, October 14, 2017

234: Le Grand K


Before we start, let me apologize for the delay in getting this episode out.  My old ISP imploded recently, not even giving its users the courtesy of domain or email forwarding, so I had to spend some time straightening out my online life.   Note that this also means the Math Mutation rss feed URL has changed— if you are using iTunes, I think this will be transparent, but if using another podcatcher, you will need to go to mathmutation.com to grab the new address.  

Anyway, on to today’s topic.   Recently reading about the silly lawsuit against Subway for selling foot-long sandwiches that were technically less than a foot long, I had a great idea for a startup business.   I would sell measuring tapes and rulers where every unit is 10% smaller than normal, a great boon to businesses such as Subway that make money by the meter.    Sadly, I soon realized that most weights and measures are standardized by international bodies, and such a business would violate various laws.   But that got me a little curious about how these international measurements are settled upon.   After all, how do I know that a meter measured on a ruler I buy today in Oregon will be exactly the same as a meter stick held by a random ice miner in Siberia?    Do companies just copy each other when they manufacture these things?  How do we keep these things consistent?

In most cases, the answer is simple:  objective definitions are created in terms of fundamental physical constants.   For example, a meter is the distance travelled by light in a vacuum in one 299,792,458th of a second, with the second itself defined in terms of the radiation frequency of a cesium-133 atom.   OK, these may sound like somewhat exotic definitions, but they are in principle measurable in a well-equipped physics laboratory, and most importantly, will give the same measurements any time the appropriate experiment is repeated.   But I was surprised to discover there is one odd man out:  the kilogram.   Rather than being defined in terms of something fundamental to the universe, a kilogram is literally defined as the mass of one particular hunk of metal, a platinum-iridium cylinder in France known as the International Prototype Kilogram, or IPK, nicknamed Le Grand K.

It is strange that in this modern day and age, we would define mass in terms of some reference like this instead of fundamental constants.   But if you think about how you measure mass, it can be a bit tricky.   Usually we measure the mass of an object by checking its weight, a simple proxy that works great as an approximate measure, when you happen to live on a planet with noticeable gravity.   Once you care about tiny differences like millionths and billionths, however, you realize there is a lot of uncertainty as to the exact relationship between weight and mass at any point on earth—  you need to know the exact force of gravity, which can depend on the altitude, local composition of the earth’s crust, position of the moon, etc.   However, if you compare to other objects of known mass, all these issues are normalized away:  both masses are affected equally, so you can just use the counterbalancing masses to measure and compare.   Thus, using a prototype kilogram, and making copies of it for calibrating other prototypes, is a very practical solution.
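The way a balance comparison cancels out local gravity can be sketched in a few lines of Python; the masses and gravity values below are made-up illustrations:

```python
def balance_reading(mass_a, mass_b, g):
    """Which pan of a balance scale tips, given local gravity g in m/s^2."""
    weight_a, weight_b = mass_a * g, mass_b * g  # weight = mass * gravity
    if weight_a > weight_b:
        return "A"
    if weight_b > weight_a:
        return "B"
    return "balanced"

# The comparison comes out the same at the poles, at the equator, or even
# on the Moon -- g multiplies both sides equally, so it cancels out:
for g in (9.832, 9.780, 1.62):
    assert balance_reading(1.000000, 1.000000, g) == "balanced"
    assert balance_reading(1.000050, 1.000000, g) == "A"
```

A spring scale, by contrast, reads weight directly, so its answer would shift with every one of those g values.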

Scientists did an amazing job defining the initial prototype:  they wanted it to be equal to the mass of a cubic decimeter of water at 4 degrees Celsius under one atmosphere of pressure, and the IPK apparently meets that ideal with an error roughly comparable to the mass of a grain of rice.    Unfortunately, recent measurements have shown that the IPK has lost about 50 micrograms over the last half-century relative to copies of it in other countries.   This is despite an amazing level of caution in its maintenance:  climate control, filtered air, and special cleaning processes.    There are various theories about the root cause:  perhaps minuscule quantities of trapped gases are slowly escaping, maybe the replicas are gaining dirt due to not-quite-careful-enough handling, or maybe even mercury vapor from nearby thermometers is playing a role.   But whatever the cause, this is a real problem:  now that high-tech manufacturing is almost at the point of building certain devices atom-by-atom, even tiny levels of uncertainty in the actual value of a kilogram are very bad.

Thus, there is a new push to redefine the kilogram in terms of fundamental constants.   One idea is to define it based on the number of atoms in a carefully-prepared sphere of pure silicon.   Another is to use the amount of voltage required to levitate a certain weight under controlled conditions.     A more direct method would be to define the kilogram in terms of an exact number of atoms of carbon-12.   All these share the problem that they depend on fundamental constants which are themselves only measurable experimentally, to some finite degree of precision, which adds potential error factors greater than the uncertainty in comparing to a copy of the IPK.  However, the precision of most of these constants has been steadily increasing with the advances of science, and there seems to be a general feeling that by the close of this decade, Le Grand K will finally be able to be retired.

And this has been your math mutation for today.


References:



Monday, August 28, 2017

233: A Totalitarian Theorem

Audio Link

A couple of weeks ago, on August 15th 2017, we celebrated a rare Pythagorean Theorem Day, since 8 squared + 15 squared equals 17 squared.   This reminded me of an anecdote I read recently in Amir Alexander’s book “Infinitesimal”, a history of the controversies over the concept of infinitesimal quantities across several centuries in Italy and England.   Surprisingly, a key figure in this history was Thomas Hobbes, the English political philosopher best known for his treatise “The Leviathan”, which advocated an autocratic form of government controlled by a single ruler.   What’s not as widely known is that Hobbes developed a strong interest in mathematics, directly influencing his philosophical works.  In fact, his philosophical career was jump-started by his unexpected encounter with the Pythagorean Theorem.
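As a side amusement, it’s easy to enumerate all possible “Pythagorean Theorem Days” in a century with a short script.   This is just a hypothetical helper I wrote for fun, checking every month/day pair against two-digit years:

```python
from math import isqrt

def pythagorean_days():
    """List (month, day, two-digit year) triples where
    month^2 + day^2 = year^2, like 8/15/17."""
    days = []
    for month in range(1, 13):
        for day in range(1, 32):
            total = month ** 2 + day ** 2
            yy = isqrt(total)  # integer square root avoids float rounding
            if yy * yy == total and yy <= 99:
                days.append((month, day, yy))
    return days

print(pythagorean_days())
# The list includes (8, 15, 17) -- the August 15th, 2017 date from this episode.
```

Note how sparse these dates are: once the 2020s are past, the pattern won’t recur until the cycle of two-digit years starts over in the next century.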

These days, nearly every high school student encounters this theorem in geometry class, but such exposure was far from universal in Hobbes’ time, the early 1600s.   Hobbes had been thinking about politics for many years, but by the age of 40 had not yet seen the Pythagorean Theorem.   The story of his encounter with it is related by his contemporary, the historian John Aubrey.  One day, while browsing in a library, Hobbes noticed a copy of Euclid’s Elements lying open on a table to the page containing the theorem.   His reaction was “By God, this is impossible!”   Could the formula a squared + b squared = c squared really apply to *every* right triangle, even an arbitrary new one he drew right there?     But he read the proof, along with the related proofs and definitions leading up to it, and was soon convinced.   He was amazed that such a profound and non-intuitive result could be deduced from simple axioms and definitions.  From that point forward, he was in love with geometry and Euclid’s methods of proof.   In addition to attempting further mathematical work himself, he used this method as the basis for his philosophical works.

Hobbes’ most famous treatise, the “Leviathan”, published in 1651, was built upon this method of starting with basic definitions and propositions and deriving their consequences.    Most works of philosophy strive for this ideal, though I think the line between valid logic, sophistry, and word games gets very fuzzy once you leave the realm of pure mathematics.    Back in college, I remember my classmates majoring in philosophy bragging that mathematics was a mere subset of the vast realm they studied.    After graduation, many of them applied their broad expertise in logical reasoning to brewing numerous exotic varieties of coffee.    Of course, some works of philosophy are indeed brilliant and convincing, but it is nearly impossible for them to truly exhibit a level of logical rigor comparable to a mathematical proof.

In any case, this attempt at rigorous foundations made the Leviathan very convincing, and it is today regarded as a foundational work of political philosophy.   To Hobbes’ contemporaries, its convincing nature made the work very disturbing when it came to controversial conclusions.   Today most people remember the Leviathan superficially for its advocacy of a strong central ruler, a king or dictator, who must have absolute power.   Thus he is mixed up in people’s minds with the horrific totalitarian regimes that arose in the 20th century.   But we need to keep in mind that he was writing in a very different time, with the opposite problem:  the weakening of the monarchy had led to decades of civil war in England, with multiple factions repeatedly committing mass murder against each other.   A strong central ruler was seen as a much lesser evil than this situation of pre-civilized barbarism into which Hobbes’ country seemed to have sunk.    We also need to keep in mind that the Leviathan introduced many positive concepts of modern Western political philosophy:   individual rights and equality, the idea that whatever is not forbidden by law is implicitly allowed, and the basis of a government in the consent of the governed.     Thus, while his concept of an absolute ruler is not in favor, Hobbes continues to be a philosophical influence on many modern governments.

Hobbes also tried his hand at advancing mathematics, but with much less success than he achieved in the political arena.   He was disturbed that some classical math problems, such as the squaring of the circle, were still unsolved, and decided that in order to claim completeness of his methods of reasoning (and thus of his philosophical system), he needed to solve them.   He then published numerous solutions to the problem of squaring the circle, not anticipating that a few hundred years later this problem would be proven definitively unsolvable.   As you may recall from earlier podcasts, this is a consequence of the fact that pi is a transcendental number: it is not the root of any polynomial equation with rational coefficients, so the side of the required square cannot be constructed with compass and straightedge.   As a result, all his attempts in this area were flawed in one way or another.    The much more talented mathematician John Wallis published a famous series of letters ripping apart Hobbes’ reasoning from many different angles.   It may seem silly that someone like Wallis wasted so much time in a dispute with a lesser mathematician.    But part of his motivation may have been that discrediting Hobbes mathematically would help to discredit him politically, saving the politicians of the time from having to face the powerful challenges of Hobbes’ ideas.

And this has been your math mutation for today.




References: