Sunday, January 18, 2015

204: What Happened To Grigori Perelman?

Audio Link

Before we start, I'd like to thank listeners katenmkate and EdB, who recently posted nice reviews on iTunes. I'd also like to welcome our many new listeners-- from the hits on the Facebook page, I'm guessing a bunch of you out there just got new smartphones for Xmas and started listening to podcasts. Remember, posting good reviews on iTunes helps spread the word about Math Mutation, as well as motivating me to get to work on the next episode.

Anyway, on to today's topic. We often think of mathematical history as something that happened far in the past, rather than something that is still going on. This is understandable to some degree, as until you get to the most advanced level of college math classes, you generally are learning about discoveries and theorems proven centuries ago. But even since this podcast began in 2007, the mathematical world has not stood still. In particular, way back in episode 12, we discussed the strange case of Grigori Perelman, the Russian genius who had refused the Fields Medal, widely viewed as math's equivalent of the Nobel Prize. Perelman is still alive, and his saga has just continued to get more bizarre.

As you may recall, Grigori Perelman was the first person to solve one of the Clay Institute's celebrated "Millennium Problems", a set of major problems identified by leading mathematicians in the year 2000 as key challenges for the 21st century. Just two years later, Perelman posted a series of internet articles containing a proof of the Poincare Conjecture, a millennium problem involving the shapes of certain multidimensional spaces. But because he had posted it on the internet instead of in a refereed journal, there was some confusion about when or how he would qualify for the prize. And amid this controversy, a group of Chinese mathematicians published a journal article claiming they had completed the proof, apparently claiming credit for themselves for solving this problem. The confusion was compounded by the fact that so few mathematicians in the world could fully understand the proof to begin with. Apparently all this bickering left a bitter taste in Perelman's mouth, and even though he was selected to receive the Fields Medal, he refused it, quit professional mathematics altogether, and moved back to Russia to quietly live with his mother.

That was pretty much where things stood at the time we discussed Perelman in podcast 12. My curiosity about his fate was revived a few months ago when I read Masha Gessen's excellent biography of Perelman, "Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century". It gives a great overview of Perelman's early life, where he became a superstar in Russian math competitions but still had to contend with Soviet anti-Semitism when moving on to the university level. It also continues a little beyond the events of 2006, describing a somewhat happy postscript: eventually the competing group of Chinese mathematicians retitled their paper "Hamilton–Perelman's Proof of the Poincaré Conjecture and the Geometrization Conjecture", explicitly removing any attempt to claim credit for the proof, and recasting their contribution as merely providing a more readable explanation of Perelman's proof. Sadly, this did not cause Perelman to rejoin the mathematical community: he has continued to live in poverty and seclusion with his mother, remaining retired from mathematics and refusing any kind of interviews with the media.

As you would expect, this reclusiveness just served to pique the curiosity of the world media, and there were many attempts to get him to give interviews or return to public life. Even when researching her biography, Masha Gessen was unable to get an interview. In 2010, the Clay Institute finally decided to officially award him the million dollar prize for solving the Poincare Conjecture. There had been some concern that his refusal to publish in a traditional journal would disqualify him from the prize, but the Institute seemed willing to modify the rules in this case. Still, Perelman refused to accept the prize or rejoin the mathematical community. He claimed that this was partially because he thought Richard Hamilton, another mathematician whose work he had built upon for the proof, was just as deserving as he was. He also said that "the main reason is my disagreement with the organized mathematical community. I don't like their decisions, I consider them unjust." Responding to a persistent reporter through the closed door of his apartment, he later clarified that he didn't want "to be on display like an animal in a zoo." Even more paradoxically, he added "I'm not a hero of mathematics. I'm not even that successful." Perhaps he just holds himself and everyone else to impossibly high standards.

Meanwhile, Perelman's elusiveness to the media has continued. In 2011 a Russian studio filmed a documentary about him, again without cooperation or participation from Perelman himself. A Russian journalist named Alexander Zabrovsky claimed later that year to have successfully interviewed Perelman and published a report, but experienced analysts, including biographer Masha Gessen, poked that report full of holes, pointing out various unlikely statements and contradictions. One critic provided the amusing summary "All those thoughts about nanotechnologies and the ideas of filling hollowness look like rabbi's thoughts about pork flavor properties." A more believable 2012 article by journalist Brett Forrest describes a brief, and rather unenlightening, conversation he was able to have with Perelman after staking out his apartment for several days and finally catching him while the mathematician and his mother were out for a walk.

Probably the most intriguing possibility here is that Perelman has not actually abandoned mathematics, but has merely abandoned the organized research community, and is using his seclusion to quietly work on the problems that truly interest him. Fellow mathematician Yakov Eliashberg claimed in 2007 that Perelman had privately confided that he was working on some new problems, but did not yet have any results worth reporting. Meanwhile, Perelman continues to ignore the world around him, as he and his mother quietly live in their small apartment in St. Petersburg, Russia. Something tells me that this is not quite the end of the Perelman story, or of his contributions to mathematics.

And this has been your math mutation for today.



Saturday, December 27, 2014

203: Big Numbers Upside Down

Audio Link

When it comes to understanding big numbers, our universe just isn't very cooperative.  Of course, this statement depends a bit on your definition of the word "big".   The age of the universe is a barely noticeable 14 billion years, or 1.4 times 10 to the 10th power.   The radius of the observable universe is estimated as 46 billion light years, around 4.6 * 10 to the 25th power meters.  The observable universe is estimated to contain a number of atoms equal to about 10 to the 80th power, or a 1 followed by 80 zeroes.   Now you might say that some of these numbers are pretty big, by your judgement.   But still, these seem pretty pathetic to me, with none of their exponents even containing exponents.   It's fairly easy to write down a number that's larger than any of these without much effort, and we have discussed such numbers in several previous podcasts.  While it's easy to come up with mathematical definitions of numbers much larger than these, is there some way we can relate even larger numbers to physical realities?   Internet author Robert Munafo has a great web page up, linked in the show notes, with all kinds of examples of significant large numbers.
There are some borderline examples of large numbers that result from various forms of games and amusements.   For example, the number of possible chess games is estimated as 10 to the 10 to the 50th power.   Similarly, if playing the "four 4s" game on a calculator, trying to get the largest number you can with four 4s, you can reach 4 to the 4 to the 4 to the 4th power, which works out to about 10 to the (8 times 10 to the 153rd power).  It can be argued, however, that numbers that result from games, artificial exercises created by humans for their amusement, really should not count as physical numbers.   These might more accurately be considered another form of mathematical construct.
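If you want to sanity-check that four-4s figure yourself, the trick is to work with logarithms, since the final number is far too large to ever print out. Here's a quick sketch in Python:

```python
import math

# The four 4s power tower, evaluated from the top down: 4^(4^(4^4)).
# The result is astronomically large, so we only compute the base-10
# logarithm of the final exponentiation.
inner = 4 ** (4 ** 4)                  # 4^256, still an exact Python integer
log10_of_tower = inner * math.log10(4) # log10 of 4^(4^256)
print(f"4^4^4^4 is about 10 to the {log10_of_tower:.2e}")
```

The logarithm comes out around 8 times 10 to the 153rd, matching the estimate above.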
At a more physical level, some scientists have come up with some pretty wild sounding numbers based on assumptions about what goes on in the multiverse, beyond what humans could directly observe, even in theory.   These are extremely speculative, of course, and largely border on science fiction, though based at some level in modern physics.  For example, one estimate is that there are likely 10 to the 10 to the 82nd power universes existing in our multiverse, though this calculation varies widely depending on initial assumptions.   In an even stranger calculation, physicist Max Tegmark has estimated that if the universe is infinite and random, then there is likely another identical copy of our observable universe within 10 to the 10 to the 115th meters.   Munafo's page contains many more examples of such estimates from physics.
My favorite class of these large "physical" numbers is the use of probabilities, as discussed by Richard Crandall in his classic 1997 Scientific American article (linked in the show notes).   There are many things that can physically happen whose infinitesimal odds dwarf the numbers involved in any physical measurement we can make of the universe.   Naturally, due to their infinitesimal probabilities, these things are almost certain never to actually happen, so some might argue that they are just as theoretical as artificial mathematical constructions.  But I still find them a bit more satisfying.  For example, a parrot placed in front of a typewriter for a year would have odds of about 1 in 10 to the 3 millionth of pecking out a classic Sherlock Holmes novel.   Taking on an even more unlikely event, what is the probability that a full beer can on a flat, motionless table will suddenly flip on its side due to random quantum fluctuations sometime in the next year?  Crandall estimates this as 1 in 10 to the 10 to the 33rd.   In the same neighborhood is the chance of a mouse surviving a week on the surface of the sun, due to random fluctuations that locally create a comfortable temperature and atmosphere:  1 in 10 to the 10 to the 42nd power.  Similarly, your odds of suddenly being randomly and spontaneously teleported to Mars are 10 to the 10 to the 51st power to 1.   Sorry, Edgar Rice Burroughs.
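To get a rough feel for where numbers like "1 in 10 to the 3 millionth" come from, you can do a back-of-the-envelope version of the typing-parrot estimate. Be warned that the keyboard size and novel length below are my own made-up illustrative numbers, not Crandall's actual parameters:

```python
import math

# The chance of hitting one specific sequence of n keystrokes on a
# k-key typewriter is k^(-n), so the log10 of the odds is n * log10(k).
# Both values below are illustrative guesses, not Crandall's figures.
keys = 50                  # assumed number of typewriter keys, with shifts
novel_length = 300_000     # assumed character count of a short novel
log10_odds = novel_length * math.log10(keys)
print(f"roughly 1 in 10 to the {log10_odds:,.0f}")
```

Even this crude single-attempt version lands in the hundreds of thousands in the exponent, the same general territory as Crandall's more careful year-of-pecking calculation.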
So, it looks like tiny probabilities might be the best way to envision the vastness of truly large numbers, and escape from the limitations of our universe's puny 10 to the 80th power number of atoms.  If you aren't spontaneously teleported to Mars, maybe you can think of even more cool examples of large numbers involved in tiny probabilities that apply to our physical world.
And this has been your Math Mutation for today.


Sunday, November 23, 2014

202: Psychochronometry

Audio Link

Before we start, I'd like to thank listener Stefan Novak, who made a donation to Operation Gratitude in honor of Math Mutation.  Remember, you can get your name mentioned too, by donating to your favorite charity and sending me an email about it!

Now, on to today's topic.  I recently celebrated my 45th birthday.  It seems like the years are zipping by now-- it feels like just yesterday when I was learning to podcast, and my 3rd grader was the baby in the cover photo.   This actually ties in well with the fact that I've recently been reading "Thinking in Numbers", the latest book by Daniel Tammet.   You may recall that Tammet, who I've featured in several previous episodes, is known as the "Rosetta Stone" of autistic savants, as he combines Rain Man-like mathematical talents with the social skills to live a relatively normal life, and write accessible popular books on how his mind works.    This latest book is actually a collection of loosely autobiographical essays about various mathematical topics.   One I found especially interesting was the discussion of how our perceptions of time change as we age.
I think most of us believe that when we were young, time just seemed longer.   The 365 days between one birthday and the next were an inconceivably vast stretch of time when you were 9 or 10, while at the age of 45, it does not seem nearly as long.   Tammet points out that there is a pretty simple way to explain this using mathematics:  when you are younger, any given amount of time simply represents a much larger proportion of your life.   When you are 10, the next year you experience is equal to 10% of your previous life, which is a pretty large chunk.   At my age, the next year will only be 1/45th of my life,  or about 2.2%, which is much less noticeable.   So it stands to reason that as we get older, each year will prove less and less significant.   This observation did not actually originate with Tammet-- it was first pointed out by 19th century philosopher Paul Janet, a professor at the Sorbonne in France.
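Janet's observation is easy to play with yourself. A two-line sketch in Python:

```python
# Janet's proportional view: the coming year, expressed as a percentage
# of the life you have already lived at each age.
for age in (5, 10, 20, 45, 90):
    print(f"At age {age}, the next year is {100 / age:.1f}% of your life so far")
```

At 10 the next year is a hefty 10% of your life; at 45 it has shrunk to 2.2%, and by 90 it's barely 1.1%.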
Following up on the topic, I found a nice article online by an author named James Kenney, which I have linked in the show notes.  He mentions that there is a term for this analysis of why time seems to pass by at different rates, "Psychochronometry".   Extending the concept of time being experienced proportionally, he points out that we should think of years like a musical scale:  in music, every time we move up one octave in pitch, we are doubling the frequency.   Similarly, we should think of our lives as divided into "octaves", with each octave being perceived as roughly the equivalent subjective time as the previous one.   So the times from ages 1 to 2, 2 to 4, 4 to 8, 8 to 16, 16 to 32, and 32 to 64, are each an octave, each perceived by the average human as taking roughly the same subjective time.
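Under this octave model, your "subjective position" in life is just the base-2 logarithm of your age, in the same way that a pitch's octave is the base-2 logarithm of its frequency. A small sketch:

```python
import math

# In the octave view, each doubling of age (1-2, 2-4, 4-8, ...) is one
# subjectively equal span, so log base 2 of your age tells you which
# octave of life you are currently in.
for age in (1, 2, 4, 8, 16, 32, 45, 64):
    print(f"age {age}: octave {math.log2(age):.2f}")
```

At age 45 this comes out around 5.5: about halfway through the 32-to-64 octave, in subjective terms.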
This outlook is a bit on the bleak side though:  it makes me uneasy to reflect on the fact that, barring any truly extraordinary medical advances in the next decade or two, I'm already well into the second-to-last octave of my life.  Am I really speeding down a highway to old age with my foot stuck on the accelerator, and time zipping by faster and faster?   Is there anything I can do to make it feel like I have more time left?   Fortunately, a little research on the web reveals that there are other theories of the passage of time, which offer a little more hope.
In particular, I like the "perceptual theory", the idea that our perception of time is in proportion to the amount of new things we have perceived during a time interval.  When you are a child, nearly everything is new, and you are constantly learning about the world.   As we reach adulthood, we tend to settle down and get into routines, and learning or experiencing something truly new becomes increasingly rare.   Under this theory, the lack of new experiences is what makes time go by too quickly.  And this means there *is* something you can do about it-- if you feel like things are getting repetitive, try to arrange your life so that you continue to have new experiences.
There are many common ways to address this problem:  travel, change your job,  get married, have a child, or strive for the pinnacle of human achievement and start a podcast.  If time or money are short, there are also simple ways to add new experiences without major changes in your life.  My strong interest in imaginary and virtual worlds has been an endless source of mirth to my wife.  I attend a weekly Dungeons and Dragons game, avidly follow the Fables graphic novels, exercise by jogging through random cities in Wii Streets U, and love exploring electronic realms within video games like Skyrim or Assassins Creed.  You may argue that the unreality of these worlds makes them less of an "experience" than other things I could be doing-- but I think it's hard to dispute the fact that these do add moments to my life that are fundamentally different from my day-to-day routine.   One might argue that a better way to gain new experiences is to spend more time travelling and go to real places, but personally I would sacrifice 100 years of life if it meant I would never have to deal with airport security again, or have to spend 6 hours scrunched into an airplane seat designed for dwarven contortionists.
So, will my varied virtual experiences lengthen my perceived life, or am I ultimately doomed by Janet's math?   Find me in 50 years, and maybe I'll have a good answer.  Or maybe not-- time will be passing too quickly by then for me to pay attention to silly questions.
And this has been your math mutation for today.

Thursday, October 23, 2014

201: A Heap Of Seagulls

Audio Link

Before we start, I'd like to thank listener RobocopMustang, who wrote another nice review on iTunes.  Remember, you too can get your name mentioned on the podcast, by either writing an iTunes review, or sending a donation to your favorite charity in honor of Math Mutation and emailing me about it.

Anyway, on to today's topic.  Recently I was thinking about various classic mathematical and philosophical paradoxes we have discussed.   I was surprised to notice that we have not yet gotten to one of the most well-known classical paradoxes, the Heap Paradox.   This is another of the many paradoxes described in ancient Greece, originally credited to Euclid's pupil Eubulides of Miletus.
The Heap Paradox, also known to snootier intellectuals as the Sorites Paradox (Sorites being the Greek word for heap), goes like this.   We all agree we can recognize the concept of a heap of sand:  if we see a heap, we can look at the pile of sand and say "that's a heap!".   We all agree that removing one grain of sand from a heap does not make it a non-heap, so we can easily remove one grain, knowing we still have a heap.  But if we keep doing this for thousands of iterations, eventually we will be down to 1 grain of sand.  Is that a heap?  I think we would agree the answer is no.  But how did we get from a situation of having a heap to having a non-heap, when each step consisted of an operation that preserved heap-ness?
One reason this paradox is so interesting is that it applies to a lot of real-life situations.   We can come up with a similar paradox if describing a tall person, and continually subtracting inches.   Subtracting a single inch from a tall person would not make him non-tall, would it?   But if we do it repeatedly, at some point he has to get short, before disappearing altogether.   Similarly, we can take away a dollar from Bill Gates without endangering his status of "rich", but there must be some level where if enough people (probably antitrust lawyers) do it enough times, he would no longer be rich.   We can do the same thing with pretty much any adjective that admits some ambiguity in the boundaries of its definition.
Surprisingly, the idea of clearly defining animal species is also subject to this paradox, as Richard Dawkins has pointed out.  We tend to think of animal species as discrete and clearly divided, but that's just not the case.  The best example from the animal kingdom may be the concept of "Ring Species".   These are species of animals that consist of a number of neighboring populations, forming a ring.   At one point on the ring are two seemingly distinct species.  But if you start at one of them, it can interbreed with a neighbor to its right, and that neighbor can interbreed with the next, and so on... until it reaches all the way around, forming a continuous set of interbreeding pairs between the two distinct species.  
For example, in Great Britain there are two species of herring gulls, the European and the Lesser Black-Backed, which do not interbreed.   But the European Herring Gull can breed with the American Herring Gull to its west, which can breed with the East Siberian Herring Gull, whose western members can breed with the Heuglin's Gull, which can breed with the Lesser Black-Backed Gull, which was seemingly a distinct species from the European gull we started with.   So, are we discussing several distinct gull species, or is this just a huge heap of gulls of one species?   It's a paradox.
Getting back to the core heap concept, there are a number of classic resolutions to the dilemma.   The most obvious is to just label an arbitrary boundary:  for example, 500 grains of sand or more is a heap, and anything fewer is a non-heap.  This seems a bit unsatisfying though.   A more complicated version of this method mentioned on the Wikipedia page is known as "Hysteresis", allowing an asymmetric variation in the definition, kind of like how your home air conditioner works.  When subtracting from the heap, it may lose its heapness at a threshold like 500.  But when adding grains, it doesn't gain the heap property again until it has 700.  I'm not convinced this version adds much philosophically though, unless your energy company is billing each time you redefine your heap.
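For the programmers out there, the hysteresis idea maps naturally onto a tiny state machine, just like a thermostat's on/off band. The 500 and 700 thresholds below are simply the arbitrary example numbers from the discussion above:

```python
# Hysteresis for heapness: heap status is lost when the pile drops below
# 500 grains, but is only regained once it climbs back to 700 or more.
class SandPile:
    def __init__(self, grains: int):
        self.grains = grains
        self.is_heap = grains >= 500

    def set_grains(self, grains: int) -> None:
        self.grains = grains
        if self.is_heap and grains < 500:
            self.is_heap = False
        elif not self.is_heap and grains >= 700:
            self.is_heap = True

pile = SandPile(1000)
pile.set_grains(450)
print(pile.is_heap)   # False: dropped below 500
pile.set_grains(600)
print(pile.is_heap)   # still False: hasn't reached 700 yet
pile.set_grains(700)
print(pile.is_heap)   # True again
```

The asymmetry means a pile hovering near the boundary doesn't flicker between heap and non-heap with every grain.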
A better method is to use multivalued logic, where we say that any pile has some degree of heapness which continuously varies:  over some threshold it is 100%, then as we reduce the size the percentage of heapness gradually goes down, reaching 0 at one grain.   A variant of this is to say that you must poll all the observers, and average their judgement of whether or not it's a heap, to decide whether your pile is worthy of the definition.
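A minimal sketch of the multivalued-logic version, with my own arbitrary choice of endpoints:

```python
# Multivalued "heapness": 0.0 at a single grain, 1.0 at 500 grains or
# more, varying linearly in between. The thresholds are arbitrary.
def heapness(grains: int) -> float:
    if grains >= 500:
        return 1.0
    if grains <= 1:
        return 0.0
    return (grains - 1) / (500 - 1)

print(heapness(1000))   # 1.0
print(heapness(250))    # about 0.5: half a heap
print(heapness(1))      # 0.0
```

Removing one grain now only ever nudges the heapness down by a tiny fraction, which dissolves the paradox's all-or-nothing step.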
If you're a little more cynical, there is the nihilistic approach, where you basically unask the question:  simply declare it out-of-bounds to discuss any concept that is not well-defined with clean boundaries.   Thus, we would say the real problem is the use of the word "heap", which is not precise enough to admit philosophical discussion.   There are also a couple of more involved philosophical resolutions discussed in online sources, which seem a bit technical to me, but you can find at the links in the show notes.
Ultimately, this paradox is pointing out the problem of living in a world where we like things to have discrete definitions, always either having or not having a property we ascribe to it.  It is almost always the case that there are shades of grey, that our clean, discrete points may reach each other by a continuous incremental path, and thus not be as distinct as we think. 
And this has been your math mutation for today.


Sunday, September 28, 2014

200: Answering Common Listener Questions

Audio Link

Wow, I can't believe we've made it to 200 episodes.  Thanks everyone for sticking with me all this time, or at least for discovering this podcast and not immediately deleting it.   Actually, if we're being technical, this is the 201st episode, since I started at number 0.   But we all suffer from the common human fascination with big round numbers, so I think reaching number 200 is still something to celebrate.  

Finding a sufficiently momentous topic for this episode has been a challenge.  Wimping out somewhat, I think a good use of it is to answer a number of listener questions I have received by email over the past 7 years in which I've been podcasting.   Of course I have tried to send individual answers to each of you who has emailed me at erik (e-r-i-k) at... -- and please keep those emails coming!-- but on the theory that each emailer represents a large number of listeners who are too busy or lazy to email, the questions are probably worth answering here.

1.  Who listens to this podcast?   According to my ISP, I've been getting about a thousand downloads per week on average.   Oddly, a slight majority seem to be from China, a country from which I've never received a listener email, as far as I can tell.   Chinese listeners, please email me to say hi!   Or perhaps the Communist spies there have determined that my podcast is of strategic importance to the United States and needs to be monitored.  If that's the case, I'll look forward to an elevated status under our new overlords after the invasion.  Assuming, that is, that they don't connect me to any of my non-podcast political writings, and toss me into the laogai instead.

2.  How is this podcast funded?    Well, you can probably guess from the average level of audio quality that I'm not doing this from a professional studio; just a decent laptop microphone, plus some cheap/shareware utilities including the Podcast RSS Buddy and the Audacity sound editor, along with a cheap server account at 1 and 1 Internet.  So I actually don't spend a noticeable amount on the podcast.  That's why rather than asking for donations, I ask that if you like the podcast enough to motivate you, you donate to your favorite charity in honor of Math Mutation and email me.
On a side note, I have fantasized about trying to amp up the quality and frequency and make this podcast a profitable venture.   Many of us small podcasters were inspired a few years ago when Brian Dunning of the Skeptoid podcast quit his day job and announced he was podcasting full time.   However, that dream died somewhat when it was revealed earlier this year that Brian's lifestyle was partially funded by some kind of internet fraud, and he was sentenced to a jail term.

3.  Why don't you release episodes more often, and/or record longer episodes?  First of all, thanks for the vote of confidence, and I'm glad you're enjoying the podcast enough to want more!  During my first year or two of Math Mutation, I had lots of great ideas in the back of my mind, so coming up with topics & preparing episodes was pretty easy.   But now I'm at a point where I've cleared the backlog in my brain, and I have to think pretty hard to come up with cool topics, and spend a nontrivial amount of time researching each one before I can talk about it.  This is also combined with many non-podcast responsibilities in my daily life, including a wife and daughter who somehow like to hang out with me, and an elected position on the local school board, in the 4th largest district in Oregon.   So I'm afraid I won't be able to increase the pace anytime soon.  Perhaps in a few years, after I've been tarred, feathered, and removed from public office, and my daughter becomes a teenager and hates me, I'll have a bit more podcasting time though.

4.  Can you help me solve this insanely difficult math problem:  (insert problem here)?   I've received a number of queries of this form.   I'm flattered that my podcasting persona has led you to believe I'm a mathematical genius of some kind, but to clarify, I would put myself more in the category of an interested hobbyist, nowhere near the level of a professional mathematician.   I did earn a B.A. in math many years ago, but my M.S. is in computer science, and I work as an engineer, using and developing software that applies known mathematical techniques to practical issues in chip design at Intel.   If you're a math or science major or graduate student at a decent college, and have a problem that is challenging for you, it's probably way over my head!   So if you're one of the numerous people who sent me a question of this kind & didn't get a good answer, don't think that I'm withholding my brilliant insights, you've probably just left me totally baffled.   And you're probably way more likely to solve it than I am anyway.

5. What other podcasts do you listen to?   To start with, I don't listen to other math podcasts.  This is partially because I'm afraid I'll be intimidated at how much more professional they are.   But mostly I'm worried that I'll subconsciously remember them and accidentally repeat the same topic in my own podcast, as humans are prone to do.    I do avidly listen to podcasts in other genres though.   As you might suspect from some of my topics, I'm a big fan of the world of "science skepticism" podcasts, such as Skeptoid, QuackCast, Skeptic's Guide to the Universe, and Oh No Ross and Carrie.   Those are always fun, although occasionally a bit pretentious in their claims to teach other people how to think.   I'm also a bit of a history buff, really enjoying Robin Pierson's "History of Byzantium", Harris and Reily's "Life of Caesar", and the eclectic "History According to Bob".   Rounding out my playlist is the odd Australian comedy/culture podcast "Sunday Night Safran", where a Catholic priest and a Jewish atheist have a weekly debate on cultural issues.

Anyway, I think those are probably the most common questions I have received from listeners.   I always love to hear from you though, so don't hesitate to email me if you have more ideas, questions, or requests for the podcasts.   If I receive enough emails, I might not wait until episode 400 before doing another Q&A. 

And this has been your math mutation for today.


Sunday, August 24, 2014

199: Precaution or Paranoia?

Audio Link

Before we start, I'd like to thank listeners Dim Questor, who posted another nice review on iTunes, and Jordan Mahoney, who shared our podcast link on Facebook. Remember, you too can have your name, or bizarre iTunes nickname, immortalized in my podcast by following in Dim or Jordan's shoes!

Now, on to today's topic. You may have heard some online skeptics derisively referring to the "precautionary principle", the idea popular among certain activists that if a new technology has any risks at all, or some new product has a nonzero amount of toxic contamination, we need to take the safe path and ban it. While a certain level of caution is always wise, the fact is that anything you do contains some level of risk, and any product contains some level of contamination. For example, we all still ride in cars, even though an average of 0.72 people die for every 100 million passenger miles driven in cars. Should cars be banned because this number is 0.72 rather than 0? If we were to follow this principle in general, we would effectively prevent all forms of scientific and technological advance. So I was surprised to see some buzz on the internet indicating that Nassim Nicholas Taleb, a famous mathematical philosopher who has written several books on risk and probability, and who I respect a lot, has been arguing that the Precautionary Principle needs to be applied to the concept of genetically modified foods, or GMOs, and that they should therefore be banned. I downloaded his paper, linked in the show notes, to learn a little more about this somewhat surprising position.

Let's begin by reviewing Taleb's "Black Swan" concept, which you may remember me introducing back in podcast 84, "How to Bankrupt Your Boss And Get Rich". The idea here is that when managing risk, we have to consider both the probability of a negative event, and the magnitude of its effect. If a low-probability event, known as a Black Swan, causes a huge penalty, this may outweigh all the cumulative benefits of taking a risk over time. For example, suppose we make a bet that we will flip a coin once per month, and I have to pay you $100 for each head, and you pay me $1000 for each tail. There are risks to each coin flip-- I might gain or lose money-- but not much overall risk to me by playing this game, since it's nearly impossible to lose a lot of money, and in the long term I will profit nicely on average.

Now suppose we add a new rule: if we see the "black swan" event of five heads in a row, I need to pay you a million dollars. With this rule, the game has changed a lot. If a certain low-probability but not impossible result happens, I will be totally ruined, losing more money than I have ever won in this game. The expected number of tosses to get 5 heads in a row is about 62, so even though it might seem like a profitable game for a year or two, I'm virtually guaranteed to be ruined in the long run. Of course this game is pretty silly, but as Taleb has pointed out, many traders of complex derivatives in the stock market are essentially playing it: they please their bosses by showing steady market-beating profits, and collect their annual bonuses, until one day some low-probability Black Swan event causes them to lose their company much more money than their accumulated total profits. Often by that point they have built up enough savings so they don't care if they are fired.
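That "about 62 tosses" figure is easy to verify. For a run of k heads with a fair coin, the expected wait has the closed form 2^(k+1) - 2, and a quick simulation agrees with it. A sketch in Python:

```python
import random

# Expected number of fair-coin flips until we first see a run of k heads.
# Closed form: 2^(k+1) - 2, which gives 62 for k = 5.
def expected_flips(k: int) -> int:
    return 2 ** (k + 1) - 2

# Monte Carlo sanity check of the closed form.
def simulate(k: int, trials: int = 20_000) -> float:
    total = 0
    for _ in range(trials):
        run = flips = 0
        while run < k:
            flips += 1
            run = run + 1 if random.random() < 0.5 else 0
        total += flips
    return total / trials

print(expected_flips(5))        # 62
print(round(simulate(5), 1))    # hovers near 62
```

So on average the derivatives trader in this toy game enjoys about five years of monthly profits before the Black Swan wipes it all out.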

The Black Swan concept goes beyond finances, of course, which is where the GMO discussion comes in. Taleb points out that when taking any action which poses a risk of global ruin, we need to apply the Precautionary Principle, because when you multiply even a small probability by the effectively infinite cost of the global ruin scenario, you can see that the risk just isn't worth it. In other words, in cases where the potential cost is total ruin, the Precautionary Principle should apply, and we should refuse to take any risk, no matter how small it seems. We need to be careful here, though: Taleb is not advocating a universal embrace of the precautionary principle, but invoking it only for very specific scenarios. He still rides in cars and planes, for example. He also continues to support nuclear power, because even though a nuclear meltdown is not good, the risks are all local: a single nuclear accident, though bad for the immediate area, will not destroy the planet, and thus can be planned for within the domain of normal risk management.

If I'm reading Taleb's paper correctly, his main concern with GMOs is that large near-monopoly companies are selling newly created forms of life to a global market, in effect creating a monoculture, where a huge proportion of the world's crops in each category are a single, globally genetically identical species. Furthermore, these are new species, lacking the centuries of farming experience we have with naturally occurring ones. This means we face a high risk of problems with these new species, and many such problems carry a risk of infinite-cost global ruin. If such a species has hidden long-term effects on human health, nearly the whole planet will be affected at once. If it is capable of virally spreading and displacing other species, that too will happen worldwide all at once. And if some new disease can kill this species of crop, it will devastate the whole world's food production. Thus we are taking on a potentially infinite cost, one that could affect agriculture or humanity worldwide, and offsetting it against the marginal benefit of improved crops. Incurring the risk of an infinite-cost Black Swan event for a finite benefit from GMO crops does not balance out mathematically, and thus we shouldn't do it.

It seems to me that his most compelling point is really about the risk of monoculture-- having a tiny set of large companies as global seed suppliers-- rather than the risks of GMOs themselves. If Monsanto disavowed GMOs in favor of old-fashioned selective breeding, this monoculture issue would still be present. In general, Taleb has good points about monopolies and central control-- we do need to be careful about transforming local risks into global ones by making things the same everywhere. This is similar to his critique of the economic theory of Comparative Advantage, which we discussed back in episode 165. We need to watch for and prevent a true worldwide dependence on a tiny set of near-genetically-identical crop species, and make sure any new species is extensively tested in a small, local environment before being propagated further. But as long as we are taking action to avoid this worldwide monoculture, the concept behind GMO foods is basically the same as traditional selective breeding-- producing a new species based on existing forms of life-- so it's hard for me to see why it should be treated so differently. Many new plant varieties have been developed and deployed through selective breeding over the past century as well.

Whether GMOs truly pose a risk of global ruin, or really just a finite risk of some bad crops in local markets, becomes even more important when we balance these risks against the potential benefits. Improving overall crop yields and nutritional value, one of the potential GMO benefits, is a life-or-death question for many impoverished populations worldwide. Nobel Peace Prize winner Norman Borlaug, known as "the man who saved a billion lives" for his work improving the yields of Third World agriculture, stated before he passed away that we are approaching natural limits in our use of arable land, and that GMOs would also be required to fully end world hunger. Taleb dismisses such GMO benefits because we could theoretically solve the problems through other means-- but I don't think it's valid to argue on the basis of what could theoretically be done if nobody is actually doing it. Our society, markets, and culture currently have the will and ability to address these problems through the careful use of GMOs, and are not addressing them through the other methods Taleb suggests.

So, should we follow Taleb's recommendations and ban GMOs, due to the imbalance of a finite benefit vs an infinite risk? Or embrace GMOs with open arms and dismiss Taleb as a kook on this issue? Neither is quite right. I think Taleb does have a good point about the risks of monoculture, and we need to make sure we create policies that ensure agricultural variety, and prevent reaching a point where nearly the whole world depends on a fragile handful of plant species. But it looks to me like as long as we avoid this pitfall, the other risks of GMOs are comparable to those of naturally bred species, and the potential benefits are massive, potentially saving millions of starving and malnourished people. Thus we need to think carefully about all the benefits involved, and the true level of risk, before taking any hasty government action.

And this has been your math mutation for today.


Sunday, August 3, 2014

198: We're All Pasteurized

Audio Link

Before we start, I'd like to thank listener Dan Unger, who posted another nice review on iTunes. Thanks Dan!

Now, on to today's topic. Most of us know the name of Louis Pasteur, the 19th-century French biologist and chemist, from seeing his name embedded in the term 'pasteurized' on milk cartons. And he is rightly remembered for his discoveries in the area of the germ theory of disease, which led to the development of many vaccines as well as the famous process of heating milk to reduce its bacterial content, collectively saving millions of human lives over the past two centuries. But did you know that before his famous medical contributions, he made an equally revolutionary discovery in chemistry: the fact that molecules could possess chirality, that is, left-handedness or right-handedness, just like ordinary macroscopic 3-D objects?

We're all familiar with the concept of left/right asymmetry in the real world. Simple examples are a glove, which cannot be changed from left-handed to right-handed without turning it inside out, or an ordinary screw, where we have to remember the "lefty loosey righty tighty" rule to use it properly. It seems natural that molecules should be able to exhibit such asymmetry as well, but in the first half of the 19th century, scientists hadn't really discussed the idea much. In 1848, a young Louis Pasteur was studying a chemical called tartaric acid, a byproduct of wine production. He was trying to understand a strange anomaly involving this substance and a very similar one called racemic acid. Scientists had determined the chemical compositions of both acids, and they were exactly the same. Yet they had some very different physical properties: in particular, tartaric acid in solution would rotate a beam of polarized light passing through it, while racemic acid had no such effect.

Pasteur decided that there must be some geometric difference at the molecular level, so he generated crystals of both acids to examine under a microscope. Remember that crystals are essentially an endless repetition of a small molecular structure, so studying the shapes of pure crystals can grant some insight into the shapes of the molecules themselves. Other scientists thought he was wasting his time, since the experiment had already been done, and tartaric and racemic acid crystals had been found to be exactly the same shape. But Pasteur noticed something that his colleagues had missed: while tartaric acid crystals were all the same shape, racemic acid was a mixture of two types: the tartaric acid crystals, plus a second form that was their mirror image. Since the crystal shape was asymmetric, the two forms were truly distinct, even though other scientists had dismissed the difference as insignificant: no rotation in physical space could transform one into the other.

Pasteur then came up with a clever experiment. He crystallized some racemic acid, then, with a microscope and needle, carefully separated the left-handed and right-handed crystals into two piles. Then he created solutions of the two types of crystals. As he suspected, one of the solutions was a solution of tartaric acid, and rotated polarized light in the same way that tartaric acid normally would. But the other was a new substance, which rotated light in the opposite direction. In other words, the reason racemic acid did not usually bend light was that it was an even mix of right-handed and left-handed molecules, while tartaric acid consisted purely of a single-handed form. Pasteur had separated out the opposite-handed component of the racemic acid. The result was so shocking to the scientists of the day that the French Academy of Sciences made Pasteur repeat the experiment in front of witnesses before they would accept it.

Today awareness of the chirality, or handedness, of molecules plays a critical role in biochemical and medical research. This is mainly due to the fact that life is "homochiral", a fancy way of saying that our basic building blocks all have a single handedness. This differs from most naturally occurring nonliving materials, which tend to exhibit both kinds of handedness randomly in roughly even proportions. The biological origin of Pasteur's tartaric acid was responsible for its one-sided content. Almost all amino acids found in living creatures are left-handed, and we use them to interact with right-handed sugars to supply most of our energy. At first, scientists found this very surprising, since attempts to artificially synthesize most biological molecules result in roughly equal quantities of right- and left-handed forms. This can have tragic consequences: one mirror-image form of the infamous pregnancy drug thalidomide was a great treatment for morning sickness, but the opposite form caused serious birth defects.

The reason for life's left-handedness is one of science's great mysteries. You can see links to articles in the show notes with a few different theories. One is that it stems from a fundamental left-handedness in physics, since certain types of radioactive decay have a leftward electron spin. The effect is tiny, though, and you could argue that this is just begging the question, since you still have to explain the left-handedness of physics. Another theory is that life was seeded by left-handed molecules from space: meteors with both types of amino acids may have happened to pass through regions of space with polarized light that destroyed one kind more often than the other. Or it could just be luck-- maybe left-handed life randomly formed first in the primordial soup, and once it had a toehold it crowded out any other possibilities. Whatever the real answer is, this geometry is critically important to every cell in our bodies.

And this has been your math mutation for today.