Saturday, October 14, 2017

234: Le Grand K


Before we start, let me apologize for the delay in getting this episode out.  My old ISP imploded recently, not even giving its users the courtesy of domain or email forwarding, so I had to spend some time straightening out my online life.   Note that this also means the Math Mutation RSS feed URL has changed— if you are using iTunes, I think this will be transparent, but if using another podcatcher, you will need to go to mathmutation.com to grab the new address.

Anyway, on to today’s topic.   Recently reading about the silly lawsuit against Subway for selling foot-long sandwiches that were technically less than a foot long, I had a great idea for a startup business.   I would sell measuring tapes and rulers where every unit is 10% smaller than normal, a great boon to businesses such as Subway that make money by the meter.    Sadly, I soon realized that most weights and measures are standardized by international bodies, and such a business would violate various laws.   But that got me a little curious about how these international measurements are settled upon.   After all, how do I know that a meter measured on a ruler I buy today in Oregon will be exactly the same as a meter stick held by a random ice miner in Siberia?    Do companies just copy each other when they manufacture these things?  How do we keep these things consistent?

In most cases, the answer is simple:  objective definitions are created in terms of fundamental physical constants.   For example, a meter is the distance travelled by light in a vacuum in one 299,792,458th of a second, with a second in turn defined in terms of the frequency of radiation emitted during a transition of the cesium-133 atom.   OK, these may sound like somewhat exotic definitions, but they are in principle measurable in a well-equipped physics laboratory, and most importantly, will give the same measurements any time the appropriate experiment is repeated.   But I was surprised to discover there is one odd man out:  the kilogram.   Rather than being defined in terms of something fundamental to the universe, a kilogram is literally defined as the mass of one particular hunk of metal, a platinum-iridium cylinder in France known as the International Prototype Kilogram, or IPK, nicknamed Le Grand K.

It is strange that in this modern day and age, we would define mass in terms of some reference like this instead of fundamental constants.   But if you think about how you measure mass, it can be a bit tricky.   Usually we measure the mass of an object by checking its weight, a simple proxy that works great as an approximate measure, when you happen to live on a planet with noticeable gravity.   Once you care about tiny differences like millionths and billionths, however, you realize there is a lot of uncertainty as to the exact relationship between weight and mass at any point on earth—  you need to know the exact force of gravity, which can depend on the altitude, local composition of the earth’s crust, position of the moon, etc.   However, if you compare to other objects of known mass, all these issues are normalized away:  both masses are affected equally, so you can just use the counterbalancing masses to measure and compare.   Thus, using a prototype kilogram, and making copies of it for calibrating other prototypes, is a very practical solution.

Scientists did an amazing job defining the initial prototype:  they wanted it to be equal to the mass of a cubic decimeter of water at 4 degrees Celsius (the temperature at which water is densest) under one atmosphere of pressure, and the IPK apparently meets that ideal with an error roughly comparable to the mass of a grain of rice.    Unfortunately, recent measurements have shown that the IPK has lost about 50 micrograms over the last half-century relative to copies of it in other countries.   This is despite an amazing level of caution in its maintenance:  climate control, filtered air, and special cleaning processes.    There are various theories about the root cause:  perhaps minuscule quantities of trapped gases are slowly escaping, maybe the replicas are gaining dirt due to not-quite-careful-enough handling, or maybe even mercury vapor from nearby thermometers is playing a role.   But whatever the cause, this is a real problem:  now that high-tech manufacturing is almost at the point of building certain devices atom-by-atom, even tiny levels of uncertainty in the actual value of a kilogram are very bad.

Thus, there is a new push to redefine the kilogram in terms of fundamental constants.   One idea is to define it based on the number of atoms in a carefully-prepared sphere of pure silicon.   Another is to use the electromagnetic force required to levitate a certain weight under controlled conditions.     A more direct method would be to define the kilogram in terms of an exact number of atoms of carbon-12.   All these share the problem that they depend on fundamental constants which are themselves only measurable experimentally, to some finite degree of precision, which adds potential error factors greater than the uncertainty in comparing to a copy of the IPK.  However, the precision of most of these constants has been steadily increasing with the advances of science, and there seems to be a general feeling that by the close of this decade, Le Grand K can finally be retired.

And this has been your math mutation for today.


References:



Monday, August 28, 2017

233: A Totalitarian Theorem

Audio Link

A couple of weeks ago, on August 15th 2017, we celebrated a rare Pythagorean Theorem Day, since 8 squared + 15 squared equals 17 squared.   This reminded me of an anecdote I read recently in Amir Alexander’s book “Infinitesimal”, a history of the controversies over the concept of infinitesimal quantities over several centuries in Italy and England.   Surprisingly, a key figure in this history was Thomas Hobbes, the English political philosopher best known for his treatise “The Leviathan”, which advocated an autocratic form of government controlled by a single ruler.   What’s not as widely known is that Hobbes developed a strong interest in mathematics, directly influencing his philosophical works.  In fact, his philosophical career was jump-started by his unexpected encounter with the Pythagorean Theorem.
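If you'd like to hunt for the next such date yourself, here's a quick Python sketch (my own illustration, not from the episode) that simply tries every month/day/two-digit-year combination and keeps the ones satisfying the theorem:

```python
# Find month/day/two-digit-year dates that form Pythagorean triples,
# like August 15, 2017: 8^2 + 15^2 = 17^2.
def pythagorean_days():
    triples = []
    for month in range(1, 13):
        for day in range(1, 32):
            for year in range(1, 100):  # two-digit year suffix
                if month ** 2 + day ** 2 == year ** 2:
                    triples.append((month, day, year))
    return triples

for m, d, y in pythagorean_days():
    print(f"{m}/{d}/{y:02d}")
```

Running it shows only a couple dozen such dates per century, which is what makes a Pythagorean Theorem Day worth celebrating.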

These days, nearly every high school student deals with this theorem in geometry class, but such experience was not nearly as universal in Hobbes’ time, the early 1600s.   In the case of Hobbes, he had been thinking about politics for many years, but by the age of 40, had not yet seen the Pythagorean Theorem.   The story of his encounter with this theorem is related by one of his contemporaries, historian John Aubrey.  One day Hobbes had some spare time to browse while visiting a library, and a copy of Euclid’s Elements opened to a page on the Theorem was sitting on a table.   His reaction was “By God, this is impossible!”   Hobbes wondered, could the formula a squared + b squared = c squared really apply to *every* right triangle, even an arbitrary new one he drew right there?     But he read the proof, and the related proofs and definitions leading up to it, and soon was convinced.   He was amazed that such a profound and non-intuitive result could be deduced based on simple axioms and definitions.  From this point forward, he was in love with geometry and Euclid’s methods of proof.   In addition to attempting further mathematical work himself, he used this method as the basis for his philosophical works.

Hobbes’ most famous treatise, the “Leviathan” published in 1651, was then built upon this method of starting with basic definitions and propositions and deriving the consequences.    Most works of philosophy strive for this ideal, though I think the line between valid logic, sophistry, and word games gets very fuzzy once you leave the realm of pure mathematics.    Back in college, I remember my classmates majoring in philosophy bragging that mathematics was a mere subset of the vast realm that they studied.    After graduation, many of them applied their broad expertise in logical reasoning to brewing numerous exotic varieties of coffee.    Of course, some works of philosophy are indeed brilliant and convincing, but it is nearly impossible for them to truly exhibit a level of logical rigor comparable with a mathematical proof.

In any case, this attempt at rigorous foundations made the Leviathan very convincing, and it is today regarded as a foundational work of political philosophy.   To Hobbes’ contemporaries, its convincing nature made the work very disturbing when it came to controversial conclusions.   Today most people remember the Leviathan superficially for its advocacy of a strong central ruler, a king or dictator, who must have absolute power.   Thus he is mixed up in people’s minds with the horrific totalitarian regimes that arose in the 20th century.   But we need to keep in mind that he was writing in a very different time, with the opposite problem:  the weakening of the monarchy had led to decades of civil war in England, with multiple factions repeatedly committing mass murder against each other.   A strong central ruler was seen as a much lesser evil than this situation of pre-civilized barbarism into which Hobbes’ country seemed to have sunk.    We also need to keep in mind that the Leviathan introduced many positive concepts of modern Western political philosophy:   individual rights and equality, the idea that whatever is not forbidden by law is implicitly allowed, and the basis of a government in the consent of the governed.     Thus, while his concept of an absolute ruler is not in favor, Hobbes continues to be a philosophical influence on many modern governments.

Hobbes also tried his hand at advancing mathematics, but with much less success than he achieved in the political arena.   He had been disturbed that some classical math problems, such as the squaring of the circle, were still unsolved, and decided that in order to claim completeness of his methods of reasoning (and thus his philosophical system), he needed to solve them.   He then published numerous solutions to the problem of the squaring of the circle, not anticipating that a few hundred years later this problem would be proven definitively unsolvable.   As you may recall from earlier podcasts, this is a consequence of the fact that pi is a transcendental number:  it is not the root of any polynomial with rational coefficients, so it can never be produced from whole-number ratios by any finite sequence of algebraic operations, ruling out any compass-and-straightedge construction.   As a result, all his attempts in this area were flawed in one way or another.    The much more talented mathematician John Wallis published a famous series of letters ripping apart Hobbes’ reasoning from many different angles.   It may seem silly that someone like Wallis wasted so much time on this dispute with a lesser mathematician.    But part of the motivation may have been that discrediting Hobbes mathematically would help to discredit him politically, and save politicians of the time from the need to face the powerful challenges of Hobbes’ ideas.

And this has been your math mutation for today.




References:






Sunday, July 30, 2017

232: Overcooked Bacon

Audio Link

You’ve almost certainly heard of the “Six Degrees of Separation” phenomenon, sometimes known whimsically as the “Six Degrees of Kevin Bacon”, where it supposedly takes only an average of 6 connections to reach any person on the planet from any other.    In the case of the common Kevin Bacon parlor game, you name any actor, and he’s acted in a movie with someone who acted with someone (dot dot dot), and after naming fewer than six movies, you can always reach Kevin Bacon.     For example, let’s start with Kristen Bell, one of our greatest living actresses due to her role as the Hexagon in Flatland The Movie.   For her, we just need two steps:   she was in Big Miracle with Maury Ginsberg, and Ginsberg was in My One and Only with Kevin Bacon.  In earlier podcasts, I’ve mentioned that mathematicians often think of this in the slightly geekier terms of paper authorship and Paul Erdos.    Although this concept is well known, its truth is not actually as well-established as you might think.   I’m not referring to the well-documented fact that numerous actors are better-connected than Kevin Bacon— perhaps his lack of math podcasting has damaged his career in recent years— but to the general idea of connecting any two people in six steps.

Wikipedia mentions several early precedents for the Six Degrees concept, starting with Hungarian author Frigyes Karinthy in the 1920s.   However, it really got its kickstart in recent times from a Psychology Today article in 1967, where Stanley Milgram described an experimental test of the theory.   He gave volunteers assignments to send a letter to random people around the U.S., with the rule that they could only send it to people they knew on a first-name basis, who then had to forward it on under similar restrictions.    He found that on average, it took only six steps of forwarding for the letters to reach their targets.   As you would expect, news about the surprising result spread rapidly.    However, many years later, a writer named Judith Kleinfeld looked up Milgram’s original raw data, and was surprised at what she saw.   In the Psychology Today article, Milgram failed to report that less than 30 percent of the letters ever made it to their destination— the six degrees were only measured in the small minority that succeeded, making the overall result much less impressive.   Sure, perhaps there were legitimate reasons for some failures— people in the middle of the chain may have been suspicious of being asked to forward a random item to a stranger— but it still makes the reported result very questionable.

To think about why the Six Degrees idea might be true or false, it’s useful to abstract the question into a problem in graph theory.    This means we should think of people as dots, or vertices, on a large piece of paper, with connections between them symbolized by edges between any pair of vertices.    If we’re representing people on Earth, there are around six billion vertices.   We’re asking the question:  to get from any vertex A to any other vertex B, how many edges do we need to traverse?    If we assume each person has about 1000 acquaintances, then in one step we can reach 1000 vertices, in two steps 1000x1000 or one million, and in three steps one billion.   So we would expect to reach anyone on the planet in an average of fewer than four connections, making the Six Degrees idea actually seem a bit pessimistic.
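That back-of-the-envelope argument is easy to check mechanically. Here's a tiny Python sketch (the function name and numbers are just my illustration) counting how many hops an idealized random graph needs before its reach exceeds the whole population:

```python
def hops_to_reach(population, acquaintances):
    """Count hops until an idealized random graph covers the population,
    assuming every hop multiplies the reachable set by the acquaintance count."""
    reach, hops = 1, 0
    while reach < population:
        reach *= acquaintances
        hops += 1
    return hops

# 1000 acquaintances: reach 10^3, 10^6, 10^9, 10^12 after successive hops.
print(hops_to_reach(6_000_000_000, 1000))  # prints 4
```

With a fourth hop pushing the reach to a trillion, the average path comes out below four, matching the arithmetic above.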

But something seems too easy about this random graph model.   Are any two random vertices really equally likely to be connected?   Am I just as likely to know an arbitrary computer geek from Oregon as an arbitrary goat farmer from Cambodia?   It seems that we should somehow be able to account for being more likely to know people in a similar location, job, social class, etc.   A nice article on the “Mathematics Illuminated” website discusses some alternative models.   As an opposite extreme to the random graph, suppose we arrange the vertices in a large circle, where every vertex is connected to only its thousand nearest neighbors.   Now to get from one vertex to the one on the opposite side of the circle, who is about 3 billion vertices away, we can advance at most 500 positions per connection (by jumping to our farthest acquaintance in one direction), so we need 3 billion over 5 hundred, or six million connections.   Definitely more pessimistic than our original model.

But this is an extremely negative model:  while we all know many more people among our neighbors, most of us certainly do know some people who are well-connected in far-off locations.   If we just allow a few of these long distance connections, we significantly reduce the number of hops needed.  For example, suppose we designate 1000 equally spaced “world traveler”  vertices, representing people who are well-connected in global organizations, that are all connected to each other.  Then, to get anywhere, we just have to traverse an average of 3 million people to reach the nearest long-distance edge, cross it, and visit a similar number of people on the other side— reducing the average traversal from 6 million connections to about 12,000.
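We can't simulate six billion people, but a scaled-down toy model shows the same effect. This sketch (entirely my own construction, with parameters shrunk for speed) builds a ring of 10,000 people who each know their 5 nearest neighbors on each side, optionally adds evenly spaced mutually-acquainted "world traveler" hubs, and measures average hops with a breadth-first search:

```python
from collections import deque

def ring_with_hubs(n, k, num_hubs):
    """Neighbor function for n people in a ring, each knowing the k nearest
    on each side, plus num_hubs evenly spaced hubs who all know each other."""
    hubs = {i * (n // num_hubs) for i in range(num_hubs)} if num_hubs else set()
    def neighbors(v):
        for d in range(1, k + 1):
            yield (v + d) % n
            yield (v - d) % n
        if v in hubs:
            yield from (h for h in hubs if h != v)
    return neighbors

def avg_hops(neighbors, n, source=0):
    """Breadth-first search from one vertex; average hop count to all others."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in neighbors(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return sum(dist.values()) / (n - 1)

n = 10_000
print("pure ring:    ", avg_hops(ring_with_hubs(n, 5, 0), n))
print("with 100 hubs:", avg_hops(ring_with_hubs(n, 5, 100), n))
```

Adding even a modest number of hubs collapses the average path length dramatically, just as the arithmetic above suggests.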

This leads to what mathematicians call a “small world” graph, where most vertices are connected to a large number of neighbors, but a few “hub” vertices have a lot of distant connections.   You can think of this like an airline routing map:   to get to a distant city, you first fly to a nearby hub, then make a long-distance flight from there to another hub, and finally from that hub you can reach your destination.   It’s probably easy for you to think of some “hub” people in your life.   In my case, while most people I talk to daily are from Oregon, my old friend Ruthe graduated from Oxford and worked until last year in the Obama White House— so through her, I probably have a very short path to numerous prominent politicians, for better or worse.   As we have seen, these hub connections drastically reduce the hops needed to connect any two vertices.   The addition of 1000 world travelers to the circular graph actually strikes me as a major underestimate of likely real-life long-distance connections, making the original idea of Six Degrees of Separation start to seem quite reasonable.

We should add, of course, that Milgram’s flawed experiments were not the final word on the topic.   There have been many similar efforts over the years, and this experiment has gotten easier as the world has become more interconnected in trackable ways, through the growth of the Internet.   Microsoft did an experiment in 2006 analyzing users of their Messenger network, and found an average of 6.6 hops to connect any two people, with a worst case number of 29.   So Milgram may have been right after all, despite the problems with his trials.   On the other hand, the number might be noticeably higher for the non-electronically-gifted portion of the world population— would that hypothetical Cambodian goat farmer be on the internet, or know someone who is?    Hopefully soon the internet will be bringing this podcast (as well as personal connections) to every human being on the planet, but we’re not quite there yet.

And this has been your math mutation for today.




References:






Sunday, June 25, 2017

231: A New Dimension in Housing



You may recall several earlier podcasts in which we discussed the works of Buckminster Fuller, the visionary futurist and architect of the mid-20th century.   He is best known for popularizing the geodesic dome, and has been memorialized in the name of the chemical “fullerene”, an arrangement of 60 carbon atoms with a similar structure.    He also was a popular, if somewhat eccentric, speaker on the idea of “Spaceship Earth”, offering various feel-good prescriptions for enabling the future survival of the human race.   A critical component of his philosophy was a new “synergetic” geometry based on 60-degree rather than 90-degree angles, which would surely lead to new ways of looking at the world.    But recently I’ve been reading a new biography of Fuller’s early years, “Becoming Bucky Fuller” by Loretta Lorance, and was surprised to learn that despite his own later descriptions of his early life, Fuller did not originally set out to be a futurist or visionary, or to save the world through a philosophical revolution.   Originally, he was simply trying to start a successful company, and hoping to follow in the footsteps of industrialists like Henry Ford.

Fuller’s first significant job in the 1920s, after leaving the army, was with his father-in-law marketing the “Stockade System”, a clever design of reusable wood-composite blocks for construction purposes.   The idea was to standardize construction on a precision-manufactured type of block, with holes in each block that could be lined up to pour in large quantities of concrete to add structural stability.    This could improve construction efficiency and reduce the cost of buildings.   This company eventually failed, but it gave Fuller a more ambitious idea.   Why not try to mass-produce full houses, rather than the component bricks?    He compared the concepts to Ford’s mass production of cars, in contrast with the cumbersome process of getting a house built.  “What would happen if a person, seeking to purchase an automobile, had to hire a designer, then send the plans out for bid, then show them to the bank, and then have them approved by the town council, all before work on the vehicle could begin?”   Cars would be far more expensive, and would only be affordable to the wealthiest citizens.   By mass-producing houses like cars, private homes would be within reach at much lower income levels.

Fuller attempted several versions of the design of this house, thinking of the practicalities of mass production.  However, his inclination to have it supported by a central, cylindrical metal frame supporting an overall circular or hexagonal design was very unusual for the time, and probably appeared to most people like something out of science fiction.   He also concentrated as much on the philosophy of his new houses as he did on the actual design, marketing it in a pamphlet called “4D Timelock”, implicitly linking his project with new scientific developments regarding the fourth dimension.   There were several reasons why Fuller considered his new designs to be four-dimensional.  First, there was the push for temporal efficiency of construction, to a greater degree than past architecture, incorporating the dimension of Time.   Then, there was the supposedly built-in longevity of his designs, due to the use of superior materials and techniques, again incorporating the idea of Time from the beginning.   Finally, he claimed that using advanced geometric concepts, like radiating spheres and trigonometry, was a key component that integrated all dimensions.   I find that last claim a bit odd, since those concepts are clearly part of three-dimensional geometry, but was unable to locate an online copy of Fuller’s original pamphlet to check the claim in more detail.

Fuller’s attempts to get investors to actually fund this new concept were, unfortunately, not very successful.   He initially made the mistake of trying to unveil it at a national architects’ convention— one which was dominated by a fear of mass-production and a movement among architects to make sure that every design remained custom and unique.    He then sent out many copies of his “4D Timelock” to potential investors, but while he received some fascinated replies, he got very little money.   Thinking the “4D” concept might be scaring some people off, he changed the marketing name to “Dymaxion”, combining “dynamism”, “maximum”, and “ions”.    His first big break was when he got permission to display a model at the Marshall Field department store in Chicago, and the public were intrigued by the bizarre design.   

Word-of-mouth led to other opportunities for him to display and talk about his ideas.  He exhibited and spoke about the Dymaxion house throughout the 1930s, as well as working on other related projects.   Along the way, he had to start describing his ideas as potential houses of the future— because despite his popularity, he had failed to attract enough investment to mass-manufacture the actual house anytime soon.    But the model’s bizarre appearance, and the rhetoric that connected it with the fourth dimension, were nicely tied with this conception.   Connected with this futurism was the appealing idea that these new houses could enable social progress:  by making housing less expensive and delivering it efficiently, a huge proportion of humanity could be lifted out of poverty and provided practical homes of their own.   As Lorance puts it, “As time progressed the issue changed from the specific house to the possibilities the house represented”.    

In other words, Fuller’s tours promoting the Dymaxion House launched his reputation as a futurist, visionary thinker, and his popularity as a public speaker.   This gave him the freedom and success to later explore many other radical ideas, such as his geodesic domes, which led to well-deserved worldwide fame.   Ironically, we would probably not remember him nearly as well today if his first proposals had actually succeeded in attracting investors, and he had simply become the founder and CEO of a practical manufactured-home provider.

And this has been your math mutation for today.


References:

Saturday, May 27, 2017

230: Just Say The Dog Ate It

Audio Link

In the last few podcasts, you may recall that we’ve been discussing the Collatz Conjecture, a famous unsolved problem that’s very easy to state.   Just take any positive integer, and repeatedly perform the following operation:  if it’s odd, triple it and add 1, but if it’s even, divide it by two.   The conjecture is:  with this process, will you always end up eventually back at 1?   While the conjecture has been tested and is true for a huge range of values, no mathematician has yet been able to prove that it will always be true.   Based on that, I was surprised at a result I got when googling references on this conjecture for my last podcast.   It was an article by a Kentucky professor named Benjamin Braun, in which he talked about using the Collatz Conjecture as a homework problem for an undergraduate math class.

Now, when hearing this, you may be reminded of the strange story of statistician George Dantzig, which we have discussed in a previous podcast.   When in college, Dantzig arrived late in class one day, and copied down some problems he saw on the board, assuming they were the day’s homework.   He thought the homework was harder than usual, but finished it in a couple of days, and handed it in.   Then he discovered that these had actually been famous unsolved problems, and because he thought they were homework, he had solved them; the solutions ultimately served as the basis of his Ph.D. thesis!   This story became a staple of “power of positive thinking” lectures throughout the 20th century.

So, does Braun hope there is another Dantzig lurking out there in his classes, who could solve the Collatz Conjecture if approaching it with the right attitude?   That would be nice, but that’s a longshot.   Actually, Braun believes that unsolved problems can have a significant educational value as homework.   Too many students share a misperception that math is all about using known formulas and procedures to get answers, even after completing many college math classes, and are never in a position to explore a truly interesting problem.    When challenged with the Collatz Conjecture, students need to experiment with various hypotheses without knowing which ones will be true, and then can proceed to develop and prove some interesting insights.   For example, maybe some students will write a program to graph the number of steps it takes for various numbers to get down to 1.   This might lead them to hypothesize and prove an intermediate theorem, such as a relationship between a number’s base-2 logarithm and the minimum number of Collatz steps to 1.   In the end, they probably won’t solve the famous conjecture, but they will learn a lot along the way.
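As a concrete example of the kind of experiment Braun describes, here's a short Python sketch (my own, not taken from Braun's article) tabulating the step counts a student might graph:

```python
def collatz_steps(n):
    """Count the steps needed for n to reach 1 under the Collatz rule:
    odd numbers are tripled plus one, even numbers are halved."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Tabulate step counts for small starting values, ready for graphing.
for n in range(1, 11):
    print(n, collatz_steps(n))
```

Even this simple table reveals surprises, such as 7 taking twice as many steps as 10, which is exactly the sort of pattern-hunting the assignment is meant to encourage.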

Braun describes three main benefits of this style of assignment:
  1. “Students are forced to depart from the answer-getting mentality of mathematics.”   As we have just discussed, they probably won’t completely solve the problem.
  2. “Students are forced to redefine success in learning as making sense and increasing depth of understanding.”    Since they are free from the pressure to find a solution, they can relax and concentrate on what they are learning.
  3. “Students can work in a context where failure is normal.”    As they examine the problem, students may come up with various hypotheses, and it’s fine if some of them are wrong.  As Braun describes it, they will understand the “pervasive normality of small mistakes in the day-to-day lives of mathematicians and scientists”.

Naturally, you might wonder how students react to being given an unsolvable problem.   Apparently this varies a bit:  Braun mentions that while some students love the exercise, others are inclined to feel frustration and defeat.   But I think there is little doubt that this exercise is highly unusual in an undergraduate curriculum.  It’s a great path for students to understand that math isn’t all about replicating known algorithms, formulas, and proofs, and that there is still a huge unknown mathematical universe out there to explore.


And this has been your math mutation for today.


References:



Sunday, March 26, 2017

229: When Numbers Change

Audio Link

In the last episode we discussed the Collatz Conjecture, a simple yet unsolved problem in elementary number theory.   During the podcast, I mentioned that it had been checked for specific values up to 10 to the 60th power, and no counterexample has been found.   Now, for common day-to-day purposes, you might say we can take the conjecture as true, since we’re unlikely to deal in real life with any number that large, unless perhaps we are doing advanced work in physics or chemistry.    Most likely I won’t be filling my car with 10 to the 61st gallons of gasoline & needing to rely on its obscure mathematical properties.   But beyond that, you might wonder if it’s possible at all for the conjecture to be false— after all, why would regular numbers start behaving differently once we get above a certain threshold?  We need to be careful though.   In fact, there are various mathematical conjectures that have been shown to be true up to some immensely large bound, but then suddenly fail.   Today we’ll discuss a few examples.   

Actually, if you think about it a bit, it’s not too hard to construct artificial cases of a conjecture that’s true up to some large number, then fails afterwards.   Here’s an easy one:  “All numbers are less than 10 to the 60th power.”   I dare you, try any number less than 10 to the 60th, and you’ll see this theorem seems miraculously true!  But just try a single larger number, and it will fail.     OK, you may consider that one to be cheating, as of course it’s possible to do this by mentioning a specific limit.   But here’s another artificial case that doesn’t mention a specific limit:   for any number N, that number does not represent the ASCII-encoded text of a Math Mutation podcast.   You may recall that modern computers represent any text document as a long series of numbers, which could be mashed together to represent one large number.   If the shortest episode of this podcast has, say, 500 characters in it, with each character represented by 8 binary digits, then the smallest counterexample is around 2 to the 4000th power.     Try any smaller number, and it will obey the theorem.   We should point out that even if the artificial nature of these two examples makes them unsatisfying, they are still valid as existence proofs, showing that it is possible for a theorem to hold true for a huge range of numbers and then fail.
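To make the encoding concrete, here's a minimal Python sketch of that mashing-together step; the sample string is just my own choice for illustration:

```python
# Mash a text into one large integer by treating its ASCII bytes as
# the digits of a big-endian number, as described above.
def text_to_number(text):
    return int.from_bytes(text.encode("ascii"), "big")

n = text_to_number("And this has been your math mutation for today.")
print(n.bit_length())  # a 47-character string needs at most 47 * 8 = 376 bits
```

Any text at all, including a full podcast transcript, becomes a single (enormous) whole number this way, which is all the artificial conjecture needs.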

The world of mathematics, however, provides no shortage of “real” conjectures, some created by brilliant mathematicians who just happened to interpolate incorrectly on a handful of topics.   Probably the most famous is one by Pierre de Fermat, the creator of Fermat’s Last Theorem.   I’m sure you remember this Theorem, where in the 1600s, Fermat conjectured that there are no whole number solutions to a^n + b^n = c^n for n greater than two.   That one turned out to be true for all possible numbers, as proven by Andrew Wiles in the 1990s.   But Fermat also came up with many other conjectures.   (He actually had a somewhat pretentious habit of writing down his conjectures along with comments that he had a proof, but not writing down the proof; that’s probably a story for another podcast though.)   Anyway, Fermat examined a set of numbers of the form 2^(2^n) + 1, which came to be known as Fermat numbers.   The first five of these are 3, 5, 17, 257, and 65537, which he noticed are all prime.   So he hypothesized that all Fermat Numbers are prime.  It seemed like a pretty good guess, though due to the rapid exponential growth it was hard to check for too many values.   But within 70 years after Fermat’s death, Leonhard Euler found a factorization for the 5th Fermat number, which is up in the 4-billions range:  it equals 641 times 6,700,417, so it is not prime.  This was actually a pretty impressive accomplishment in a pre-computer age.
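It's easy to replay Euler's discovery today. This little Python sketch (trial division is my choice here; Euler's actual method was considerably cleverer) computes the first six Fermat numbers and looks for their smallest factors:

```python
# Compute the Fermat numbers 2^(2^n) + 1 and test them for primality
# by trial division over odd candidates, which is fast enough at this scale.
def fermat(n):
    return 2 ** (2 ** n) + 1

def smallest_factor(m):
    """Return the smallest factor of odd m greater than 1 (m itself if prime)."""
    d = 3
    while d * d <= m:
        if m % d == 0:
            return d
        d += 2
    return m

for n in range(6):
    f = fermat(n)
    sf = smallest_factor(f)
    print(n, f, "prime" if sf == f else f"divisible by {sf}")
```

The first five come out prime, exactly as Fermat observed, and the sixth line exposes the factor 641 that took Euler decades of the world's best mathematical effort to find by hand.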

Fermat got his posthumous revenge, though.   One of the longest-standing serious conjectures that ended up being disproved was an analog of Fermat’s Last Theorem that Euler proposed in 1769, Euler’s sum-of-powers conjecture.   This basically set up a family of equations related to Fermat’s famous theorem, which Euler thought would all be unsolvable with whole numbers.   One example is a^5 + b^5 + c^5 + d^5 = e^5.   This actually stood for nearly two centuries, until finally in 1966 a brute-force search with modern computer technology enabled L. J. Lander and T. R. Parkin to find a solution:  27^5 + 84^5 + 110^5 + 133^5 = 144^5.   Those numbers may not seem that large, but remember that when combining five 3-digit numbers, you have a truly immense number of possibilities, around 10 to the 15th power, again virtually unsearchable in a pre-computer era.
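Checking the counterexample is a one-liner, and even rediscovering it from scratch takes only seconds on a modern machine.   The sketch below uses a meet-in-the-middle trick, precomputing all pairwise sums a^5 + b^5, to shrink the search; this is just one plausible approach, not a reconstruction of Lander and Parkin’s actual program:

```python
# First, verify the Lander-Parkin counterexample directly:
print(27**5 + 84**5 + 110**5 + 133**5 == 144**5)  # True

# Now rediscover it: search a^5 + b^5 + c^5 + d^5 = e^5
# for 1 <= a <= b <= c <= d, all terms below a small bound.
LIMIT = 150
fifth = [n ** 5 for n in range(LIMIT)]

# Precompute every pairwise sum a^5 + b^5 with a <= b.
pair_sums = {}
for a in range(1, LIMIT):
    for b in range(a, LIMIT):
        pair_sums[fifth[a] + fifth[b]] = (a, b)

solutions = []
for e in range(1, LIMIT):
    for c in range(1, LIMIT):
        for d in range(c, LIMIT):
            rest = fifth[e] - fifth[c] - fifth[d]
            if rest in pair_sums:
                a, b = pair_sums[rest]
                if b <= c:  # enforce a <= b <= c <= d to avoid duplicates
                    solutions.append((a, b, c, d, e))

print(solutions)  # [(27, 84, 110, 133, 144)]
```

The precomputed table turns a naive 10^15-case search into a few million dictionary lookups, which is the kind of shortcut that makes these once-heroic searches routine.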

Anyway, in the show notes you can find links to various other cases where a conjecture was thought to be true, and held true for millions of examples or beyond— but then in the end, was discovered to be incorrect.     We shouldn’t feel bad about these:  that’s how all math and science advances, by trying to interpolate new truths about the universe from what has been discovered so far.  Sometimes we are right, and sometimes even the best of us are wrong.   So even though we’ve tested the Collatz Conjecture for a massive range of possible values, there still could be a hidden counterexample lurking somewhere in the far reaches of exponential values, waiting to catch us by surprise a few centuries down the road.

And this has been your math mutation for today.


References:



Monday, February 20, 2017

228: So Easy It's Hard


Let’s try an experiment.  Think of a positive whole number.   Any number will do.   Now, follow this simple rule:  if the number is even, divide it by two.   If it’s odd, multiply by 3 and add 1.   Repeat this process until your resulting number is 1.    So, for example, suppose we start with 5.   We multiply by 3 and add 1, to get 16.   Then, following the same rule, we divide by 2 to get 8.  Then we divide by 2 to get 4, and divide by 2 again to get 2, then 1.   If you try this with a few numbers, you’ll see that although you may go up and down a few times, you always seem to end up at 1.  But are you always guaranteed to arrive at 1, no matter what number you started with?   
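The rule described above is a direct transcription into a few lines of code; here is a minimal sketch (the function name is my own):

```python
# Follow the Collatz rule until we reach 1, recording each step:
# halve even numbers, and send odd n to 3n + 1.
def collatz_sequence(n):
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz_sequence(5))  # [5, 16, 8, 4, 2, 1]
```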

Believe it or not, this simple question has not been solved.   It’s a famous open problem of mathematics, known as the Collatz Conjecture, or the “3n+1 problem”.   If we define the stopping time as the number of steps to get to 1, this conjecture can be stated as follows:  all positive whole numbers have a finite Collatz stopping time.   Despite being simple enough to explain to an elementary school student, this problem has defied the efforts of mathematicians and hobbyists for nearly a century.   The late quirky mathematician Paul Erdős once offered a $500 bounty for anyone who solves this problem, but this vast fortune has not yet been claimed.

By experimenting manually with a few numbers, you can easily convince yourself that the conjecture is true— it seems like you really do always end up back at 1, no matter where you started.   Yet your path to get there can vary wildly.   If you start with a power of 2, you can see that you’ll dive straight back to 1.   Some well-positioned odd numbers are almost as easy:  for example, if you start with 85, you’ll then jump to 256, which is a power of 2, and head straight back from there to 1.   On the other hand, if you start with the seemingly innocent number 27, you will find the total stopping time is 111 steps, during which you visit numbers as high as 9232.   The Wikipedia page has some nice graphs showing how the stopping time varies:  its maximum value seems to slightly increase as the starting numbers increase, but there is no simple pattern that can be established to prove the conjecture.   Computers have experimentally shown that the conjecture holds for numbers up to 2^60, but of course that does not prove that it will remain true forever.
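You can reproduce those stopping times yourself with a small sketch that counts steps and tracks the highest value visited along the way (again, the helper name is just my own label):

```python
# Count Collatz steps to reach 1, and track the highest value seen.
def stopping_time_and_peak(n):
    steps, peak = 0, n
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        peak = max(peak, n)
    return steps, peak

print(stopping_time_and_peak(27))  # (111, 9232): long climb, big peak
print(stopping_time_and_peak(85))  # jumps to 256, then dives straight to 1
```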

This Collatz stopping time function can also be seen as an example of chaos, a case where a very slight change in initial conditions can cause a dramatic difference in the result.   Why is it that starting with 26 will enable you to finish in a mere 10 steps, while increasing to 27 takes 111 steps, and then many higher numbers have far fewer steps?   It’s a good example to keep in mind when someone claims they have made accurate predictions about some iterative physical system using computer models.  Can they make the case that their model is somehow simpler than the Collatz process, of either halving or tripling and incrementing a single number at a time?   If not, what makes them think their modeling is less chaotic than the Collatz problem, or that their initial conditions are so accurate that they have ruled out chaos effects?     

As with many unsolved problems, this one is also attractive to slightly self-deluded amateurs, who every few years publish an article or make an online post claiming to have proven it.   One simple way to test any proposed proof is to see if it also applies to very similar problems for which the conjecture is false.   For example, instead of multiplying odd numbers by 3, multiply them by 5 before adding 1, turning the question into the “5n+1 problem”.   In that case, if we start with the number 13, the sequence is 13, 66, 33, 166, 83, 416, 208, 104, 52, 26, 13.   Since we got back to the number we started with, this means we repeat forever, without ever getting to 1!   Thus, any attempted proof of the Collatz 3n+1 problem would also need some built-in reason why it doesn’t apply to the 5n+1 version.   Why is the number 3 so special compared to 5?   Well, if I could answer that, I would be riding away in a limo paid for by Erdős’s $500.
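A short sketch makes the 5n+1 cycle easy to see: iterate the modified rule and stop as soon as a value repeats (the function name and step cap are my own choices for illustration):

```python
# Iterate the 5n+1 variant of the Collatz rule, stopping at the
# first repeated value (or after max_steps, as a safety cap).
def five_n_plus_one_orbit(n, max_steps=100):
    seen = []
    while n not in seen and len(seen) < max_steps:
        seen.append(n)
        n = n // 2 if n % 2 == 0 else 5 * n + 1
    return seen, n  # n is the first value that repeated

orbit, repeat = five_n_plus_one_orbit(13)
print(orbit)   # [13, 66, 33, 166, 83, 416, 208, 104, 52, 26]
print(repeat)  # 13 -- back where we started, so it loops forever
```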

Still, even if you don’t expect to solve it, it is kind of fun to play with example values and look for patterns.   The way that this simple formula can seem to cause numbers to wander away from you, circle around and tease you temptingly, or race straight down to 1, can seem almost lifelike at times.   An intriguing online abstract claims to describe the problem as “an ecological process of competing organisms”, made of 1s in bit strings.  (Sadly, the full paper for that one is hidden behind a paywall, so I wasn’t able to read it.)     But I think my favorite summary of the problem is the one in the XKCD web comic:   “The Collatz Conjecture states that if you pick a number, and if it’s even divide it by two and if it’s odd multiply by three and add one, and you repeat this procedure long enough, eventually your friends will stop calling to see if you want to hang out.”

And this has been your math mutation for today.




References: