Saturday, April 28, 2018

240: R.I.P. Little Twelvetoes

Audio Link

I was sad to hear of the recent passing of legendary jazz artist Bob Dorough, whose voice is probably still echoing in the minds of those of you who were children in the 1970s.   While Dorough composed many excellent jazz tunes— his “Comin’ Home Baby” is still on regular rotation on my iPhone— he is most famous for his work on Schoolhouse Rock.   Schoolhouse Rock was a set of catchy music videos shown on TV in the 70s, helping to provide educational content for elementary-school-age children in the U.S.   Apparently three minutes of this educational content per hour would offset the mind-melting effects of the horrible children’s cartoons of the era.   But Dorough’s work on Schoolhouse Rock was truly top-notch:  the music really did help a generation of children to memorize their multiplication tables, as well as to learn some basic facts about history, science, and grammar.   However, my favorite of the songs may have been the least effective, with lyrics that were mostly incomprehensible to kids who were not total math geeks.   I’m talking about the song related to the number 12, “Little Twelvetoes.”   I still chuckle every time that one comes up on my iPhone.

Now, most of the songs in the Multiplication Rock series tried to relate their numbers to something concrete that the kids could latch on to.   For example, “Three is a Magic Number” talked about a family with a man, woman, and baby;  “The Four-Legged Zoo” talked about the many animals with four legs; and “Figure Eight” talked about ice skating.   But Dorough must have been smoking some of those interesting 1970s drugs when he got to the number 12.  Instead of choosing something conventional like an egg carton or the grades in school, he envisioned an alien with 12 fingers and toes visiting Earth and helping humans to do math.   OK, this was a bit odd, but I think it could have been made to work.  Instead, Dorough piled further oddities on top of that premise, with lyrics especially designed to confuse the average young child.

First, the song spends a lot of time introducing the concept that the alien counts in base 12.   Here’s the actual narration:   

Now if man had been born with 6 fingers on each hand, he'd also have 12 toes or so the theory goes. Well, with twelve digits, I mean fingers, he probably would have invented two more digits when he invented his number system. Then, if he saved the zero for the end, he could count and multiply by twelve just as easily as you and I do by ten.
Now if man had been born with 6 fingers on each hand, he'd probably count: one, two, three, four, five, six, seven, eight, nine, dek, el, doh. "Dek" and "el" being two entirely new signs meaning ten and eleven.  Single digits!  And his twelve, "doh", would be written 1-0.
Get it? That'd be swell, for multiplying by 12.

For those of us who were really into this stuff, that concept was pretty cool.   But for the average elementary school student struggling to learn his regular base-10 multiplication tables, introducing another base and number system doesn’t seem like the best teaching technique.   To be fair, Dorough was just following the lead of the ill-fated “New Math” movement of the time, which I have mentioned in earlier episodes of this podcast.   A group of professors decided that kids would learn arithmetic better if teachers concentrated on the theoretical foundations of counting systems, rather than traditional drilling of arithmetic problems.   Thankfully, their mistake was eventually realized, though later educational fads haven’t been much better.

On top of pushing this already-confusing concept, the song introduced those strange new digits “dek” and “el”, and a new word “doh” for 12.   While it’s true that we do need more digits when using a base above 10, real-life engineers who use higher bases, most often base-16 in computer-related fields, just use letters for the digits after 9:  A, B, C, etc.   That way we have familiar symbols in an easy-to-remember order.   I guess it’s fun to imagine new alien squiggles for the extra digits instead…  but I think that just makes the song even more confusing to a young child.   And why not just say “twelve” instead of a new word “doh” when we get to the base?  (Note, however, that this song predated Homer Simpson.)
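For the curious, here is a minimal sketch in Python of counting the way the song imagines.  Following one common ASCII convention for duodecimal, I use "X" as a stand-in for dek (ten) and "E" for el (eleven), in place of the song's alien squiggles; the symbols are just for illustration.

```python
# A minimal sketch of counting like Little Twelvetoes.  "X" stands in
# for dek (ten) and "E" for el (eleven), replacing the song's invented
# squiggles.
DIGITS = "0123456789XE"

def to_base12(n):
    """Return the base-12 representation of a non-negative integer."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, 12)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base12(12))      # "10": the alien's "doh"
print(to_base12(9 * 12))  # "90": nine dozen, written the alien's way
```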

But the part of the song I find truly hilarious is the refrain, which implies that having this alien around would help us to do arithmetic when multiplying by the number 12.   It goes, “If you help me with my twelves, I'll help you with your tens.  And we could all be friends.”   But think about this a minute.   It’s true that the alien who writes in base 12 could easily multiply by 12 by adding a ‘0’ to a number, just like we could do when multiplying by 10.   So suppose you ask Little Twelvetoes to multiply 9 times 12.   He would write down “90”.  Exactly how would this be helpful?   You would now have to convert the number written as 90 in base 12 to a base 10 number for you to be able to understand it, an operation at least as difficult as multiplying 9 times 12 would be in the first place!   So although this alien’s faster way of writing down answers to times-12 multiplication would be interesting, it would be of absolutely no help to a human doing math problems.   You could be friends with the alien, but your math results would just confuse each other.
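To see the punchline concretely, here is the one-line conversion the human friend is stuck doing; it's a sketch, though Python's int() conveniently parses bases up to 36.

```python
# The alien hands you "90"; making sense of it means converting out of
# base 12, which is the very multiplication you wanted help with.
print(int("90", 12))  # 9*12 + 0 = 108
```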

Anyway, I should point out that despite these various absurdities, the part of the song that lays out the times-12 multiplication table is pretty catchy.   So if the kids could get past the rest of the confusing lyrics, it probably did still achieve its goal of helping them to learn multiplication.   And of course, despite the fact that I enjoy making fun of it, I and millions of kids like me did truly love this song— and still remember it over 40 years later.   Besides, this is just one of many brilliantly memorable tunes in the Schoolhouse Rock series.   I think Bob Dorough’s music and lyrics will continue to play in my iPhone rotation for many years to come.

And this has been your Math Mutation for today.

References:

Saturday, March 31, 2018

239: The Shape Of Our Knowledge

Audio Link

Recently I’ve been reading Umberto Eco’s essay collection titled “From the Tree to the Labyrinth”.   In it, he discusses the many attempts over history to cleanly organize and index the body of human knowledge.   We have a natural tendency to try to impose order on the large amount of miscellaneous stuff we know, for easy access and later reference.   As is typical with Eco, the book is equal parts fascinating insight, verbose pretentiousness, and meticulous historical detail.   But I do find it fun to think about the overall shape of human knowledge, and how our visions of it have changed over the years.

It seems like most people organizing a bunch of facts start out by trying to group them into a “tree”.   Mathematically, a tree is basically a structure that starts with a single node, which then links to sub-nodes, each of which links to sub-sub-nodes, and so on.   On paper, it looks more like a pyramid.   But essentially it’s the same concept as the folders, subfolders, and sub-sub-folders you’re likely to use on your computer desktop.   For example, you might start with ‘living creatures’.   Under it you draw lines to ‘animals’, ‘plants’, and ‘fungi’.   Under the animals you might have nodes for ‘vertebrates’, ‘invertebrates’, etc.   Actually, living creatures are one of the few cases where nature provides a natural tree, corresponding to evolutionary history:  each species usually has a unique ancestor species that it evolved from, as well as possibly many descendants.
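If you think in code, a nested dictionary captures the same idea; this is just an illustrative sketch of the example above.

```python
# An illustrative sketch of the "tree" view, using nested dictionaries
# the way you'd use folders and subfolders on your desktop.
knowledge = {
    "living creatures": {
        "animals": {"vertebrates": {}, "invertebrates": {}},
        "plants": {},
        "fungi": {},
    }
}

def show(node, depth=0):
    """Print one name per line, indented by its depth in the tree."""
    for name, children in node.items():
        print("  " * depth + name)
        show(children, depth + 1)

show(knowledge)
```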

Attempts to create tree-like organizations date back at least as far as Aristotle, who tried to identify a set of rules for properly categorizing knowledge, and later authors made numerous attempts to fully construct such catalogs.   Eco points out some truly hilarious (to modern eyes) attempts to create universal knowledge categories, such as Pedro Bermudo's 17th-century attempt to organize knowledge into exactly 44 categories.  While some, such as “elements”, “celestial entities”, and “intellectual entities”, seem relatively reasonable, other categories include “jewels”, “army”, and “furnishings”.   Perhaps the inclusion of “furnishings” as a top-level category on par with “celestial entities” just shows us how limited human experience and knowledge typically was before modern times.

Of course, the more knowledge you have, the harder it is to fit cleanly into a tree, and the more logical connections you see that cut across the tree structure.   Our attempts to categorize knowledge have thus evolved into what Eco calls a labyrinth, a huge collection with connections in every direction.  For example, wandering down the tree of species, you need to follow very different paths to reach a tarantula and a corn snake, one being an arachnid and the other a reptile.   Yet if you’re discussing possible caged parent-annoying pets with your 11-year-old daughter, those two might actually be closely linked, so our map of knowledge, or semantic network, would probably merit a dotted line between the two.   We don’t just traverse directly down the tree; we have many lateral links to follow.   Eco seems to prefer the vivid imagery of a medieval scholar wandering through a physical maze, but in a mathematical sense I think he is referring to what we would call a ‘graph’, a huge collection of nodes with individual connections in arbitrary directions.
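In the same illustrative vein as the tree sketch above, a graph drops the strict hierarchy and lets edges run anywhere.

```python
# The "labyrinth" view: the same kind of nodes, but stored as a graph
# whose edges can cut across the hierarchy.  The tarantula edge is the
# dotted "caged pet" line between two taxonomically distant species.
graph = {
    "animals": {"arachnids", "reptiles"},
    "arachnids": {"tarantula"},
    "reptiles": {"corn snake"},
    "tarantula": {"corn snake"},  # lateral link, not a tree edge
    "corn snake": set(),
}

print("corn snake" in graph["tarantula"])  # True: the cross-connection
```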

On the other hand, this labyrinthine nature of knowledge doesn’t negate the usefulness of tree structures— as humans, we have a natural need to organize into categories and subcategories to make sense of things.   Nowadays, we realize both the ‘tree’ and ‘labyrinth’ views of knowledge on the Internet.   As a tree, the Internet consists of pages with subpages, sub-sub-pages, etc.   But a link on any page can lead to an arbitrary other page, not part of its own local hierarchy, whose knowledge is somehow related.   It’s almost too easy these days.   If you’re as old as me, you can probably recall your many hours poring through libraries researching papers back in high school and college.   You probably spent lots of time scanning vaguely related books to try to identify labyrinth-like connections that were not directly visible through the ‘trees’ of the card catalog or Dewey Decimal system.

Although it’s very easy today to find lots of connections on the Internet, I think we still have a natural human fascination with discovering non-obvious cross connections between nodes of our knowledge trees.   A simple example is our amusement at puns, when we are suddenly surprised by an absurd connection due only to the coincidence of language.    Next time my daughter asks if she can get a tarantula for Christmas, I’ll tell her the restaurant only serves steak and turkey.    More seriously, finding fun and unexpected connections is one reason I enjoy researching this podcast, discussing obscure tangential links to the world of mathematics that are not often displayed in the usual trees of math knowledge.   Maybe that’s one of the reasons you like listening to this podcast, or at least consider it so absurd that it can be fun to mock.

And this has been your math mutation for today.


References:

Sunday, February 18, 2018

238: Programming Your Donkey

Audio Link

You have probably heard some form of the famous philosophical conundrum known as Buridan’s Ass.   While the popular name comes from a 14th century philosopher, it actually goes back as far as Aristotle.   One popular form of the paradox goes like this:   Suppose there is a donkey that wants to eat some food.   There are equally spaced and identical apples visible ahead to its left and right.   Since they are precisely equivalent  in both distance and quality, the donkey has no rational reason to turn towards one and not the other, so it will remain in the middle and starve to death.   

It seems that medieval philosophers spent quite a bit of time debating whether this paradox is evidence of free will.   After all, without the tie-breaking power of a living mind, how could the animal make a decision one way or the other?   Even if the donkey is allowed to make a random choice, the argument goes, it must use its living intuition to decide to make such a choice, since there is no rational way to choose one alternative over the other.  

You can probably think of several flaws in this argument, if you stop and think about it for a while.   Aristotle didn’t really think it posed a real conundrum when he mentioned it— he was making fun of sophist arguments that the Earth must be stationary because it is round and has equal forces operating on it in every direction.   Ironically, the case of balanced forces is one of the rare situations where the donkey analogy might be kind of useful:  in Newtonian physics, it is indeed the case that if forces are equal in every direction an object will stay still.    But medieval philosophers seem to have taken it more seriously, as a dilemma that might force us to accept some form of free will or intuition.  

I think my biggest problem with the whole idea of Buridan’s Ass as a philosophical conundrum is that it rests on a horribly restrictive concept of what is allowed in an algorithm.  By an algorithm, I mean a precise mathematical specification of a procedure to solve a problem.   There seems to be an implicit assumption in the so-called paradox that in any decision algorithm, if multiple choices are judged to be equally valid, the procedure must grind to a halt and wait for some form of biological intelligence to tell it what to do next.   But that’s totally wrong— anyone who has programmed modern computers knows that we have lots of flexibility in what we can specify.   Thus any conclusion about free will or intuition, from this paradox at least, is completely unjustified.   Perhaps philosophers in an age of primitive mathematics, centuries before computers were even conceived, can be forgiven for this oversight.

To make this clearer, let’s imagine that the donkey is robotic, and think about how we might program it.   For example, maybe the donkey is programmed so that whenever two decisions about movement are judged equal, it simply chooses the one on the right.   Alternatively, randomized algorithms, where an action is taken based on a random number, essentially flipping a virtual coin, are also perfectly fine in modern computing.   So another alternative is just to have the donkey choose a random number to break any ties in its decision process.   The important thing to realize here is that these are both basic, easily specifiable methods fully within the capabilities of any computer created over the past half century, not requiring any sort of free will.  They are fully mechanical, precisely specified procedures, far simpler than any human-like intelligence, and could certainly have evolved within the minds of advanced animals.
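As a concrete sketch, either tie-breaker is a few lines of Python; the function names are made up for illustration.

```python
import random

# Two mechanical tie-breakers for the robotic donkey.
# Neither requires free will or intuition.

def pick_rightmost(options):
    """Deterministic rule: among tied choices, always take the rightmost."""
    return options[-1]

def pick_random(options):
    """Randomized rule: flip a virtual coin among the tied choices."""
    return random.choice(options)

apples = ["left apple", "right apple"]  # judged exactly equal
print(pick_rightmost(apples))  # always "right apple"
print(pick_random(apples))     # either one, chosen without any intuition
```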

Famous computer scientist Leslie Lamport has an interesting take on this paradox, but I think he makes a similar mistake to the medieval philosophers, artificially restricting the possible algorithms allowed in our donkey’s programming.   For this model, assume the apples and donkey are on a number line, with one apple at position 0 and one at position 1, and the donkey at an arbitrary starting position s.   Let’s define a function F that describes the donkey’s position an hour from now, in terms of s.  F(0) is 0, since if he starts right at apple 0, there’s no reason to move.   Similarly, F(1) is 1.  Now, Lamport adds a premise:  the function the donkey uses to decide his final location must be continuous, corresponding to how he thinks naturally evolved algorithms should operate.   It’s well understood that if you have a continuous function where F(0) is 0 and F(1) is 1, then for any value v between 0 and 1, there must be some starting point s where F(s) equals v.   So, in other words, there must be starting points for which the donkey’s position an hour from now is strictly between 0 and 1, still stuck between the apples.   Since the choice of one hour was arbitrary, a similar argument works for any amount of time, and we are guaranteed to be infinitely stuck from certain starting points.   It’s an interesting take, and perhaps I’m not doing Lamport justice, but it seems to me that this is just a consequence of the unfair restriction that the function must be continuous.   I would expect precisely the opposite:   the function should have a discontinuous jump from 0 to 1 at the midpoint, with the value there determined by one of the donkey-programming methods I discussed before.
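Here is the kind of discontinuous policy I have in mind, sketched in Python, with the midpoint tie broken by the arbitrary "go right" rule from before.

```python
# A sketch of a discontinuous decision policy: F maps the donkey's
# starting position s (apples at 0 and 1) to its position an hour
# later, jumping at the midpoint.
def F(s):
    return 0.0 if s < 0.5 else 1.0

for s in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(s, "->", F(s))  # never a value strictly between the apples
```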

I did find one article online describing a scenario where this paradox might provide some real food for thought.   Think about a medical doctor who is attempting to diagnose a patient based on a huge list of weighted factors, and is at a point where two different diseases are equally likely by all possible measurements.   Maybe the patient has virus 1, and maybe he has virus 2— but the medicines that would cure each one are fatal to those without that infection.   How can the doctor make a decision on how to treat the patient?   I don’t think a patient would be too happy with either of the methods we suggested for the robot donkey:  arbitrarily biasing towards one decision, or flipping a coin.   On the other hand, we don’t know what goes on behind closed doors after doctors leave the examining room to confer.   Based on TV, we might think they are always carrying on office romances, confronting racism, and consulting autistic colleagues, but maybe they are using some of our suggested algorithms as well.   In any case, if we assume the patient is guaranteed to die if untreated, is there really a better option?  In practice, doctors resolve such dilemmas by continually developing more and better tests, so the chance of truly being stuck becomes negligible.   But I’m glad I’m not in that line of work.



And this has been your math mutation for today.

References:

Monday, January 15, 2018

237: A Skewed Perspective

Audio Link

If you’re a listener of this podcast, you’re probably aware of Einstein’s Theory of Relativity, and its strange consequences for objects traveling close to the speed of light.   In particular, such an object will appear to have its length shortened in the direction of motion, as measured by a stationary observer watching it pass; in the object’s own rest frame, nothing changes.   It’s not a huge factor:  where v is the object’s velocity and c is the speed of light, the length gets multiplied by the square root of 1 minus v squared over c squared.   At ordinary speeds we observe while traveling on Earth, the effect is so close to zero as to be invisible.   But for objects near the speed of light, it can get significant.
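As a quick sketch of the formula's scale, here are the two regimes in a few lines of Python; the speeds are just illustrative.

```python
import math

# The contraction factor sqrt(1 - v^2/c^2) at two speeds: highway
# driving (about 30 m/s, so v/c is roughly 1e-7) versus the Cube's 0.99c.
def contraction_factor(v_over_c):
    return math.sqrt(1.0 - v_over_c ** 2)

print(contraction_factor(1e-7))  # ~0.999999999999995: invisible
print(contraction_factor(0.99))  # ~0.141: the Cube is severely shortened
```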

A question we might ask is:  if some object traveling close to the speed of light passed you by, what would it look like?    To make this more concrete, let’s assume you’re standing at the side of the Autobahn with a souped-up camera that can take an instantaneous photo, and a Nissan Cube rushing down the road at .99c, 99% of the speed of light, is approaching from your left.   You take a photo as it passes by.   What would you see in the photo?   Due to length contraction, you might predict a side view of a somewhat shortened Cube.   But surprisingly, that expectation is wrong— what you would actually see is weirder than you think.   The length would be shorter, but the Cube would also appear to have rotated, as if it has started to turn left.

This is actually an optical illusion:   the Cube is still facing forward and traveling in its original direction.   The reason for this skewed appearance is a phenomenon known as Terrell Rotation.   To understand this, we need to think carefully about the path a beam of light would take from each part of the Cube to the observer.   For example, let’s look at the left rear tail light.   At ordinary non-relativistic speeds, we wouldn’t be able to see this until the car had passed us, since the light would be physically blocked by the car— at such speeds, we can think of the speed of light as effectively infinite, and we would capture our usual side view in our photo.   But when the speed gets close to that of light, the time it takes for the light from each part to travel to the observer is significant compared to the motion of the car.  This means that when the car is still a bit to your left, it will have moved just enough out of the way to let the light from the left rear tail light reach you.   That light will arrive at the same time as light emitted more recently from the right rear tail light, and light from the parts of the back of the car in between.   In other words, because the light coming from different parts of the car started traveling at different times, your photo captures an angled view of the entire rear of the car, and the car appears to have rotated overall.   This is the Terrell Rotation.
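For a rough sense of the size of the effect: the commonly quoted Terrell-Penrose result for an object photographed at closest approach is an apparent rotation of arcsin(v/c).  The full photo-geometry derivation is in the links in the show notes, so treat this as a back-of-the-envelope sketch rather than the whole story.

```python
import math

# Apparent rotation at closest approach, using the commonly quoted
# arcsin(v/c) result, evaluated at the Cube's speed of 0.99c.
beta = 0.99
theta = math.degrees(math.asin(beta))
print(round(theta, 1))  # ~81.9 degrees: the Cube looks sharply turned
```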

I won’t go into the actual equations in this podcast, since they can be a bit hard to follow verbally, but there is a nice derivation & some illustrations linked in the show notes.   But I think the most fun fact about the Terrell Rotation is that physicists totally missed the concept for decades.   For half a century after Einstein published his theory, papers and texts claimed that if you photographed a cube passing by at relativistic speeds, you would simply see a contracted cube.    Nobody had bothered carefully thinking it through, and each author just repeated the examples they were used to.    This included some of the most brilliant physicists in our planet’s history!   There were some lesser-known physicists such as Anton Lampa who had figured it out, but they did not widely publicize their results.   It was not until 1959 that physicists James Terrell and Roger Penrose independently made the detailed calculation, and published widely-read papers on this rotation effect.    This is one of many examples showing the dangers of blindly repeating results from authoritative figures, rather than carefully thinking them through yourself.


And this has been your math mutation for today.


References:

Wednesday, December 20, 2017

236: A Stubborn Tortoise

Audio Link

If you have a middle-school-aged child, you’ve probably endured countless conversations where you think you’ve clearly explained your point, but it is always answered with a “Yes but”, and a further rationalization.    Recently I was in such a situation, trying to convince my daughter to scoop the cat litter, and descending down an infinite regress of excuses.   It occurred to me that this conversation was very similar to one that Achilles had with the Tortoise in Lewis Carroll’s famous  1895 dialogue,  “What the Tortoise Said to Achilles”.   I was actually surprised to realize that I hadn’t yet recorded a Math Mutation episode on this classic pseudo-paradox.   So here we are.

This dialogue involves the two characters from Zeno’s famous paradoxes of motion, the Tortoise and Achilles, though it is on a totally different topic.   Achilles presents a pair of propositions, which we can call A and B, as he and his pet discuss an isosceles triangle they are looking at.   Proposition A is “Things that are equal to the same are equal to each other.”   Proposition B is “The two sides of this Triangle are things that are equal to the same.”    Achilles believes that he has now established another proposition, proposition Z:  “The two sides of this Triangle are equal to each other.”   But the Tortoise is not convinced:  as he states, “I accept A and B as true, but I don't accept the Hypothetical”.

Achilles tries to convince the Tortoise by making the hidden rule explicit: there is an unstated proposition here, which we will call Proposition C:  “If A and B are true, then Z must be true.”   Surely anyone who accepts propositions A, B, and C must accept Proposition Z.   But the Tortoise isn’t convinced so easily:   after all, the claim that you can infer the truth of Proposition Z from A, B, and C is yet another unstated rule.   So Achilles needs to introduce proposition D:  “If A, B, and C are true, then Z must be true.”   And so it continues, down an infinite rabbit-hole of logic.

On first reading this, I concluded that the Tortoise was just being stubborn.   If we have made an if-then statement, and the ‘if’ conditions are true, how can we refuse to accept the ‘then’ part?   Here we are making use of modus ponens, a basic element of logic:  if we say P implies Q, and P is true, then Q is true.   The problem is that to do even basic logical deductions, you have to already accept this inference rule:  you can’t convince someone of the truth of basic logic from within the system, if they don’t accept some primitive notions to start with.
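One way to see the resolution is that in any mechanized logic, the inference rule is built into the machinery rather than listed among the premises.  Here's an illustrative Python sketch of that idea.

```python
# A tiny forward-chaining sketch: modus ponens is implemented in the
# engine's loop, not stated as a premise.  Proposition C appears once,
# as data, and no propositions D, E, ... are ever needed.
facts = {"A", "B"}
rules = [({"A", "B"}, "Z")]  # proposition C: if A and B, then Z

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # the engine applying modus ponens
            changed = True

print("Z" in facts)  # True
```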

One basic way to try to resolve this is to redefine “If A then B” in terms of simple logical AND, OR, and NOT operators:  “If A then B” is equivalent to “B or NOT A”.   But this doesn’t really solve the problem— now we have to somehow come across basic definitions of the AND, OR, and NOT operators.   You can try to describe the definitions purely symbolically, but that doesn’t give you semantic information about whether a statement about the world is ultimately true or false.   Logicians and philosophers take the issue very seriously, and there are many long-winded explanations linked from the Wikipedia page.
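For what it's worth, the rewrite is easy to check mechanically; this little sketch just enumerates the truth table.

```python
from itertools import product

# Enumerating the truth table of the rewrite: "if A then B" defined as
# "B or not A" is false only when A is true and B is false.
for a, b in product([False, True], repeat=2):
    print(f"A={a!s:<5} B={b!s:<5} (B or not A) = {b or not a}")
```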

I personally like to resolve this pseudo-paradox by thinking about the fact that ultimately, modus ponens is really just a way of saying that your statements need to be consistent.   For any reasonable definition of “implies” and “true”, if you say P implies Q, and claim P is true, then Q must be true.   You might nitpick that I haven’t defined “implies” and “true” in terms of more primitive notions…  but I think this is just an instance of the general problem of the circularity of language.   After all, *any* word you look up in the dictionary is defined in terms of other words, and to be able to exist in this world without going insane, you must accept some truths and definitions as basic building blocks, without having to be convinced of them.   Hardcore philosophers might object that by accepting so simple an explanation and blindly using modus ponens willy-nilly, I’m being as stubborn as a Tortoise.   But I’m OK with that.


And this has been your math mutation for today.


References:

Sunday, November 19, 2017

235: Syntax Wars


Audio Link

Recently my daughter was complaining about having to do a "sentence diagramming" assignment in school.   As you may recall, this is when you take sentences and break their words up into a kind of chart, showing clearly the subject, verb, and object, with outlying slanted lines representing modifiers such as adjectives or adverbs, and similar structures representing subordinate clauses.   Many middle-school English students find this kind of exercise tedious, but I always liked these assignments.   They transformed the dry subject of Language Arts into a kind of geometry exercise, which in my geekiness I found much more appealing.   But aside from the visual appeal, I liked the idea that language follows rules of syntax:  the pieces need to fit together like a computer program, and if you don't combine a reasonable set of pieces in a reasonable order, you end up with gibberish.

Thinking about the concepts of language syntax reminded me of a famous sentence created by Noam Chomsky as an illustration for linguistics academics:  "Colorless green ideas sleep furiously".   According to Chomsky, and my linguistics professor, this illustrated how a sentence could follow all the formal rules of syntax and yet still be meaningless.   You can see that its grammar is very straightforward:  the subject is ‘ideas’, the verb is ‘sleep’, and they each have some standard modifiers.  Chomsky’s claim was that the sentence is effectively nonsense, since the meanings of the words just do not fit together.   However, I disagreed with my professor when he made this claim.   Because it does follow the rules of syntax, the sentence doesn't seem inherently broken to a native speaker-- and with a properly poetic interpretation, it makes perfect sense.   For example, a "green idea" might be one motivated by jealousy.   It might be "colorless" for lacking subtlety and nuance.   And it might "sleep furiously" as it sits in the back of your mind, building up resentment over time.   So, "Colorless green ideas sleep furiously" is not only meaningful, it might be a profound statement about what happens when you let jealous resentments build up in the back of your mind.   Due to its correct syntax, it's not too hard to think of many somewhat reasonable interpretations of that sentence.

I was amused to see that this famous sentence had its own Wikipedia page.   On it, I found that I wasn't the only one to have the idea that it could be sensibly interpreted-- in fact, there was even a contest held in 1985 for the most sensible and concisely explained legitimate usage!   Here is the winner:  “It can only be the thought of verdure to come, which prompts us in the autumn to buy these dormant white lumps of vegetable matter covered by a brown papery skin, and lovingly to plant them and care for them. It is a marvel to me that under this cover they are labouring unseen at such a rate within to give us the sudden awesome beauty of spring flowering bulbs. While winter reigns the earth reposes but these colourless green ideas sleep furiously.”   It looks like they focused on “green” as pertaining to nature when composing this version, and “ideas” as a metaphor for still-underground plants.    Personally, I prefer my interpretation.

Anyway, I think the opposite case-- where the words make sense, but are not following the rules of syntax-- is actually much worse.   Here's an example from John Cage, this podcast’s favorite source of artistic absurdity.    He generated it with the aid of some I-Ching-inspired random numbers applied to a starting point of works by Thoreau.  "sparrowsitA gROsbeak betrays itself by that peculiar squeakerIEFFECT OF SLIGHGEst tinkling measures soundness ingpleasa We hear!"    That's just the opening of the poem "Mureau", in Cage's strange collection "M".   You're actually not getting the full effect in this podcast, because in Cage's version, the typeface of the letters varies randomly too.   Perhaps it's just my unpoetic colorless green jealousy, but that sounds like nonsense to me.    Cage, on the other hand, considered abandoning syntax to be a virtue.   As he wrote in the introduction to M, "Syntax, according to Norman O. Brown, is the arrangement of the army.  As we move away from it, we demilitarize language....  Translation becomes, if not impossible, unnecessary.    Nonsense and silence are produced, familiar to lovers.   We begin to actually live together, and the thought of separating doesn't enter our minds."

I'm afraid I'll just have to respectfully disagree with Cage on that one.     I’m not sure if he was even serious about that explanation, given that his starting point for the text was Henry David Thoreau, not exactly known for separatism or violence.    But in any case,  I like having some structure to my linguistic utterances, and I don't think it's been significantly damaging to world peace.   In fact, I think the mutual understanding provided by sticking to well-understood rules of syntax has been critical to diplomatic relations throughout human history, and prevented far more violence than it has caused.   Let that idea sleep furiously in the back of your mind for a while, and see if you agree with me.


And this has been your math mutation for today.


References:

Saturday, October 14, 2017

234: Le Grand K


Before we start, let me apologize for the delay in getting this episode out.  My old ISP imploded recently, not even giving its users the courtesy of domain or email forwarding, so I had to spend some time straightening out my online life.   Note that this also means the Math Mutation RSS feed URL has changed— if you are using iTunes, I think this will be transparent, but if you are using another podcatcher, you will need to go to mathmutation.com to grab the new address.

Anyway, on to today’s topic.   Recently reading about the silly lawsuit against Subway for selling foot-long sandwiches that were technically less than a foot long, I had a great idea for a startup business.   I would sell measuring tapes and rulers where every unit is 10% smaller than normal, a great boon to businesses such as Subway that make money by the meter.    Sadly, I soon realized that most weights and measures are standardized by international bodies, and such a business would violate various laws.   But that got me a little curious about how these international measurements are settled upon.   After all, how do I know that a meter measured on a ruler I buy today in Oregon will be exactly the same as a meter stick held by a random ice miner in Siberia?    Do companies just copy each other when they manufacture these things?  How do we keep these things consistent?

In most cases, the answer is simple:  objective definitions are created in terms of fundamental physical constants.   For example, a meter is the distance travelled by light in a vacuum in one 299,792,458th of a second, with the second itself defined in terms of the frequency of radiation from a particular transition of the cesium-133 atom.   OK, these may sound like somewhat exotic definitions, but they are in principle measurable in a well-equipped physics laboratory, and most importantly, will give the same measurements any time the appropriate experiment is repeated.   But I was surprised to discover there is one odd man out:  the kilogram.   Rather than being defined in terms of something fundamental to the universe, a kilogram is literally defined as the mass of one particular hunk of metal, a platinum-iridium cylinder in France known as the International Prototype Kilogram, or IPK, nicknamed Le Grand K.

It is strange that in this modern day and age, we would define mass in terms of some reference like this instead of fundamental constants.   But if you think about how you measure mass, it can be a bit tricky.   Usually we measure the mass of an object by checking its weight, a simple proxy that works great as an approximate measure, when you happen to live on a planet with noticeable gravity.   Once you care about tiny differences like millionths and billionths, however, you realize there is a lot of uncertainty as to the exact relationship between weight and mass at any point on earth—  you need to know the exact force of gravity, which can depend on the altitude, local composition of the earth’s crust, position of the moon, etc.   However, if you compare to other objects of known mass, all these issues are normalized away:  both masses are affected equally, so you can just use the counterbalancing masses to measure and compare.   Thus, using a prototype kilogram, and making copies of it for calibrating other prototypes, is a very practical solution.
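Here's a toy sketch of why the balance comparison washes gravity out; the numbers are just illustrative.

```python
# A toy model of why a pan balance sidesteps gravity's local quirks:
# the local value of g multiplies both sides, so it cancels out of the
# comparison entirely.
def balance_verdict(m_left, m_right, g):
    force_diff = (m_left - m_right) * g
    if force_diff > 0:
        return "left pan sinks"
    if force_diff < 0:
        return "right pan sinks"
    return "balanced"

print(balance_verdict(1.0, 1.0, 9.78))  # at the equator: balanced
print(balance_verdict(1.0, 1.0, 9.83))  # near the poles: same verdict
```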

Scientists did an amazing job defining the initial prototype:  they wanted it to be equal to the mass of a cubic decimeter of water at 4 degrees Celsius (the temperature of water’s maximum density) under one atmosphere of pressure, and the IPK apparently meets that ideal with an error roughly comparable to the mass of a grain of rice.   Unfortunately, recent measurements have shown that the IPK has lost about 50 micrograms over the last half-century relative to copies of it in other countries.   This is despite an amazing level of caution in its maintenance:  climate control, filtered air, and special cleaning processes.   There are various theories about the root cause:  perhaps minuscule quantities of trapped gases are slowly escaping, maybe the replicas are gaining dirt due to not-quite-careful-enough handling, or maybe even mercury vapor from nearby thermometers is playing a role.   But whatever the cause, this is a real problem:  now that high-tech manufacturing is almost at the point of building certain devices atom-by-atom, even tiny levels of uncertainty in the actual value of a kilogram are very bad.

Thus, there is a new push to redefine the kilogram in terms of fundamental constants.   One idea is to define it based on the number of atoms in a carefully-prepared sphere of pure silicon.   Another is to use the electrical current and voltage required to levitate a certain weight under controlled conditions, as in the watt balance.   A more direct method would be to define the kilogram as the mass of an exact number of atoms of carbon-12.   All these share the problem that they depend on fundamental constants which are themselves only measurable experimentally, to some finite degree of precision, which adds potential error factors greater than the uncertainty in comparing to a copy of the IPK.  However, the precision of most of these constants has been steadily increasing with the advances of science, and there seems to be a general feeling that by the close of this decade, Le Grand K will finally be able to be retired.
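As a rough sketch of the "exact number of atoms" idea: with carbon-12's molar mass being exactly 12 grams per mole, a kilogram corresponds to a fixed, enormous count of atoms.  The Avogadro value below is the approximate measured number of the era, so treat the output as illustrative.

```python
# Counting the carbon-12 atoms in one kilogram via Avogadro's number.
AVOGADRO = 6.022e23      # atoms per mole (approximate measured value)
MOLAR_MASS_C12 = 0.012   # kilograms per mole, exact by definition

atoms_per_kg = AVOGADRO / MOLAR_MASS_C12
print(f"{atoms_per_kg:.3e} atoms")  # ~5.018e25
```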

And this has been your math mutation for today.


References: