Monday, February 20, 2017

228: So Easy It's Hard

Audio Link

Let’s try an experiment.  Think of a positive whole number.   Any number will do.   Now, follow this simple rule:  if the number is even, divide it by two.   If it’s odd, multiply by 3 and add 1.   Repeat this process until your resulting number is 1.    So, for example, suppose we start with 5.   We multiply by 3 and add 1, to get 16.   Then, following the same rule, we divide by 2 to get 8.  Then we divide by 2 to get 4, and divide by 2 again to get 2, then 1.   If you try this with a few numbers, you’ll see that although you may go up and down a few times, you always seem to end up at 1.  But are you always guaranteed to arrive at 1, no matter what number you started with?   
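For listeners who like to experiment, here’s a minimal Python sketch of the rule we just walked through (the function name and structure are purely my own illustration, nothing official):

```python
def collatz_trajectory(n):
    """Return the list of values visited, starting at n and ending at 1."""
    sequence = [n]
    while n != 1:
        # Even numbers get halved; odd numbers get tripled and incremented.
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        sequence.append(n)
        # Note: if the conjecture were false for this n, this loop would never end.
    return sequence

print(collatz_trajectory(5))   # [5, 16, 8, 4, 2, 1]
```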

Believe it or not, this simple question has not been solved.  It’s a famous open problem of mathematics, known as the Collatz Conjecture, or the “3n+1 problem”.     If we define the stopping time as the number of steps to get to 1, this conjecture can be stated as follows:  all positive whole numbers have a finite Collatz stopping time.   Despite being simple enough to explain to an elementary school student, this problem has defied the efforts of mathematicians and hobbyists for nearly a century.  The late quirky mathematician Paul Erdos once offered a $500 bounty to anyone who could solve this problem, but that vast fortune has not yet been claimed.

By experimenting manually with a few numbers, you can easily convince yourself that the conjecture is true— it seems like you really do always end up back at 1, no matter where you started.   Yet your path to get there can vary wildly.   If you start with a power of 2, you can see that you’ll dive straight back to 1.   Some well-positioned odd numbers are almost as easy:  for example, if you start with 85, you’ll then jump to 256, which is a power of 2, and head straight back from there to 1.  On the other hand, if you start with the seemingly innocent number 27, you will find the total stopping time is 111 steps, during which you visit numbers as high as 9232.  The Wikipedia page has some nice graphs showing how the stopping time varies:  its maximum value seems to increase slightly as the starting numbers increase, but there is no simple pattern that can be established to prove the conjecture.    Computers have experimentally shown that the conjecture holds for numbers up to 2^60, but of course that does not prove it will remain true forever.
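If you’d like to check those figures yourself, here’s another small illustrative sketch (again, just my own hypothetical code) that reports the stopping time and the peak value reached for a given start:

```python
def collatz_stats(n):
    """Return (stopping time, highest value visited) for starting number n."""
    steps, peak = 0, n
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        peak = max(peak, n)
    return steps, peak

print(collatz_stats(27))   # (111, 9232)
print(collatz_stats(26))   # (10, 40) -- one less than 27, but a vastly shorter ride
```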

This Collatz stopping time function can also be seen as an example of chaos, a case where a very slight change in initial conditions can cause a dramatic difference in the result.   Why is it that starting with 26 will enable you to finish in a mere 10 steps, while increasing to 27 takes 111 steps, and then many higher numbers have far fewer steps?   It’s a good example to keep in mind when someone claims they have made accurate predictions about some iterative physical system using computer models.  Can they make the case that their model is somehow simpler than the Collatz process, of either halving or tripling and incrementing a single number at a time?   If not, what makes them think their modeling is less chaotic than the Collatz problem, or that their initial conditions are so accurate that they have ruled out chaos effects?     

As with many unsolved problems, this one is also attractive to many slightly self-deluded amateurs, who every few years publish an article or make an online post claiming to have proven it.   One simple way to test any proposed proof is to see if it also applies to very similar problems for which the conjecture is false.  For example, instead of multiplying odd numbers by 3, multiply them by 5 before adding 1, turning the question into the “5n+1 problem”.   In that case, if we start with the number 13, the sequence is 13, 66, 33, 166, 83, 416, 208, 104, 52, 26, 13.   Since we got back to the number we started with, this means we repeat forever, without ever reaching 1!   Thus, any attempted proof of the Collatz 3n+1 problem would also have to contain some built-in reason why it doesn’t apply to the 5n+1 version.    Why is the number 3 so special compared to 5?   Well, if I could answer that, I would be riding away in a limo paid for by Erdos’s $500.
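If you want to watch that loop happen, the trajectory idea sketched earlier can be generalized to an arbitrary multiplier, with a simple repeat check added (again, purely my own illustrative code):

```python
def trajectory_with_cycle_check(n, multiplier=5, max_steps=1000):
    """Follow the halve-or-(multiplier*n + 1) rule until reaching 1 or repeating a value."""
    seen, sequence = set(), [n]
    while n != 1 and n not in seen and len(sequence) <= max_steps:
        seen.add(n)
        n = n // 2 if n % 2 == 0 else multiplier * n + 1
        sequence.append(n)
    return sequence

print(trajectory_with_cycle_check(13))
# [13, 66, 33, 166, 83, 416, 208, 104, 52, 26, 13] -- a cycle, never reaching 1
```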

Still, even if you don’t expect to solve it, it is kind of fun to play with example values and look for patterns.   The way this simple formula causes numbers to wander away from you, circle around and tease you temptingly, or race straight down to 1 can seem almost lifelike at times.   An intriguing online abstract claims to describe the problem as “an ecological process of competing organisms”, made of 1s in bit strings.  (Sadly, the full paper for that one is hidden behind a paywall, so I wasn’t able to read it.)     But I think my favorite summary of the problem is the one in the XKCD web comic:   “The Collatz Conjecture states that if you pick a number, and if it’s even divide it by two and if it’s odd multiply by three and add one, and you repeat this procedure long enough, eventually your friends will stop calling to see if you want to hang out.”

And this has been your math mutation for today.




References:




Sunday, January 29, 2017

227: Heads In The Clouds

Audio Link

A few days ago I was flipping channels on the TV, and saw a few minutes of one of the horrible movie adaptations of Jonathan Swift’s classic 1726 satirical novel, Gulliver’s Travels.  When you hear that title, you probably think of a man tied up on the beach and captured by an army of tiny Lilliputians.   Most adaptations of the novel focus on that nation of tiny people, which actually comprises only the first section of the Travels.   Although mainly a political satire, even that part of the book had an influence on modern mathematics and computer science—  the Lilliputians were fighting a war over which end of an egg to crack first, the Big Endians versus the Little Endians.   We now use those terms to describe whether the highest or lowest byte comes first in each multi-byte ‘word’ stored in a computer’s memory.   But that’s not our main topic today.   What I want to talk about is one of Gulliver’s later voyages, which directly satirized the mathematics and science of Swift’s day:    the voyage to Laputa.
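Before we get to Laputa, a quick aside for the programmers in the audience: here’s a small Python illustration of that byte-ordering idea (my own example, of course, with nothing to do with Swift):

```python
import sys

value = 0x0A0B0C0D  # a 4-byte 'word'

print(value.to_bytes(4, byteorder='big'))     # b'\n\x0b\x0c\r' -- highest byte first
print(value.to_bytes(4, byteorder='little'))  # b'\r\x0c\x0b\n' -- lowest byte first
print(sys.byteorder)   # which convention the machine running this actually uses
```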

Laputa was a city built on a floating island, populated by a highly educated race of men who spent all day contemplating advanced ideas of music, mathematics and science, almost completely disconnected from any practical matters.   These were a group of people who literally had their “heads in the clouds”.   According to at least one website, this metaphor was already in use by the 1600s, so Swift may have had it in mind when designing this city.  In fact, Laputans are always so deep in thought that they must hire an assistant to alert them when they need to interact with the real world.   As Swift described it:  

It seems the minds of these people are so taken up with intense speculations, that they neither can speak, nor attend to the discourses of others, without being roused by some external taction upon the organs of speech and hearing; for which reason, those persons who are able to afford it always keep a flapper … in their family, as one of their domestics; nor ever walk abroad, or make visits, without him.  And the business of this officer is, when two, three, or more persons are in company, gently to strike … the mouth of him who is to speak, and the right ear of him or them to whom the speaker addresses himself.  This flapper is likewise employed diligently to attend his master in his walks, and upon occasion to give him a soft flap on his eyes; because he is always so wrapped up in cogitation, that he is in manifest danger of falling down every precipice, and bouncing his head against every post; and in the streets, of justling others, or being justled himself into the kennel.   

Somehow all this contemplation never translates into real-world usefulness.  Gulliver admires the elaborate care the tailor takes to measure every detail of his body, but the clothes then delivered are ill-fitting.  Their houses are all poorly put together, with walls at odd angles, because their precise geometric instructions are too refined for the uneducated servants who end up having to do the actual building.    They serve their food cut carefully into various geometrical shapes or representations of musical instruments, with no regard for whether that is an appropriate or useful presentation for actual consumption.   He summarizes the situation with “I have not seen a more clumsy, awkward, and unhandy people, nor so slow and perplexed in their conceptions upon all other subjects, except those of mathematics and music.”

A question you might now be asking is:  why does Swift seem so hostile to advanced science and mathematics, which by our time have resulted in amazing improvements to human comfort, productivity, and lifespan?   We need to keep in mind that back in the early 1700s, it was not at all obvious that all the effort spent by the elites on pursuing advanced studies of mathematics and science was actually leading anywhere.   One of the few practical abilities the Laputans did have was the power to lower their floating city and crush rebellious townspeople on the ground, perhaps a hint at the worry that new science was too often used to develop instruments of war rather than to advance humanity.  Here is one of Swift’s comments on the unfulfilled promises of scientific leaders:  “ All the fruits of the earth shall come to maturity at whatever season we think fit to choose, and increase a hundred fold more than they do at present; with innumerable other happy proposals.  The only inconvenience is, that none of these projects are yet brought to perfection; and in the mean time, the whole country lies miserably waste, the houses in ruins, and the people without food or clothes. “   In fact, these promises were largely fulfilled by the sciences in the 20th century— too bad Swift never lived to meet Norman Borlaug and see the massive agricultural productivity increases of his Green Revolution.

Swift also was particularly sensitive to the suspicious claims that scientific understanding of mathematical laws governing the natural world would somehow enable a corresponding scientific and mathematical reorganization of society to benefit mankind.   He’s actually pretty explicit about this, stepping back from the satire to address the real world directly at one point:  

But what I chiefly admired, and thought altogether unaccountable, was the strong disposition I observed in them towards news and politics, perpetually inquiring into public affairs, giving their judgments in matters of state, and passionately disputing every inch of a party opinion.  I have indeed observed the same disposition among most of the mathematicians I have known in Europe, although I could never discover the least analogy between the two sciences; unless those people suppose, that because the smallest circle has as many degrees as the largest, therefore the regulation and management of the world require no more abilities than the handling and turning of a globe; 

Here I think Swift was on to something, when we consider that the major mass-murdering totalitarian movements of the 20th century all had intellectuals at their core who believed they needed to scientifically re-engineer society.    On the other hand, since I’m actually an engineer who now serves in elected political office, I should probably stop the podcast at this point before getting myself into trouble.   

If you’re a fellow math geek and haven’t read Gulliver’s voyage to Laputa, I think you’ll really enjoy it.   Since it’s so old it’s out of copyright, you can follow a link in the show notes and read it for free at Project Gutenberg.
  
And this has been your math mutation for today.



References:




Thursday, December 29, 2016

226: See You Next Year

Audio Link

Before we start, I’d like to thank listener Maurizio Codogno, who published a nice review of the Math Mutation book at goodreads.com.   Bizarrely enough, he wrote his review in Italian, but thanks to the magic of Google Translate, that doesn’t stop other listeners from reading it!   Remember that if you like the podcast and/or book, I really do appreciate a nice review at iTunes, Amazon, or Goodreads.

Now, on to today’s topic.   Recently I’ve been reading a collection of essays, short biographical recollections, and text-based art experiments by the radical 20th-century composer John Cage, titled “A Year From Monday”.    You may recall Cage as someone I’ve mentioned in several podcasts,  as he composed (if you can call it that) the silent music piece “4 minutes 33 seconds”, plus numerous musical pieces generated from complicated formulas involving random numbers.   As usual with Cage, his entries in this book are sprinkled with many instances of weirdness for the sake of weirdness, woven in with a bit of celebrity name-dropping.   But it’s worth reading for the bizarre humor and occasional surprising insight.

Cage also used the essays in the book as lyrics for one of his strangest music pieces, “Indeterminacy”.   In Indeterminacy, he read each short story at a speed designed to fill a constant interval, while randomly-determined music played in the background.   Due to the timing needs of the music, stories would sometimes be read very quickly, to fit in a lot of words, or very slowly to fill the available time.   There was also a random ordering to the stories.   As Cage described it, “My intention in putting the stories together in an unplanned way was to suggest that all things – stories, incidental sounds from the environment, and, by extension, beings – are related, and that this complexity is more evident when it is not oversimplified by an idea of relationship in one person’s mind.”   I actually bought the 2-CD set a number of years back, but found listening to it a rather frustrating experience, as you can probably guess.   As often happens with Cage, the idea of the piece is a lot more fun than the actual end result.

Getting back to the book, one of the aspects that I find most amusing is the conundrum represented in its title, “A Year From Monday”.   Apparently Cage was having fun with a group of old friends, and they decided they wanted to get together again.  One of them suggested that they would all meet at a favorite spot in Mexico “a year from Monday”, and they all agreed to the proposal, without further clarification.   Cage liked the idea, since it appealed to what he described as “my interests in ambiguity and my interest in non-measurement”.  After leaving, however, Cage started wondering, when exactly did they agree to meet?

As you’ve probably already figured out, the phrase “a year from Monday” is rather ambiguous.   How do we define such an interval?   The easiest method would be to assume that they just meant the same date next year:  if we assume that the discussion occurred on Monday, June 2nd, for example, then they would meet next year on June 2nd.   But this will not be a Monday, since the number of days in a year is not divisible by 7— is this a problem?   Normally, when we talk about an interval starting on a day of the week, we expect to meet on that same day again:  for example, when scheduling a monthly meeting in a tool like Microsoft Outlook, we usually select options like “the first Monday of every month” or “the second Monday of every month”.   So perhaps it would be more reasonable to assume that the plan was actually to meet on the first Monday of June next year.

There is also the question of how to handle the possible case of a leap year.   If the intervening February had an extra day, would they have to meet a day later than originally planned, more like a year from Tuesday?   On the other hand, maybe our slavish devotion to human-created calendars is part of the problem.   If an objectively-measured solar year was intended, this is about 365.25 days, so it might make more sense to meet 365 days later, but delay the meeting time by 6 hours in order to make the interval precisely one year.
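Just for fun, here’s a short Python sketch of those competing interpretations.  The specific year is purely my own illustrative choice, picked only because June 2nd happens to fall on a Monday in it:

```python
from datetime import date, timedelta

start = date(2014, 6, 2)      # an arbitrary year in which June 2nd is a Monday
assert start.weekday() == 0   # weekday() returns 0 for Monday

# Interpretation 1: the same calendar date next year (not a Monday, since 365 % 7 != 0)
same_date = start.replace(year=start.year + 1)

# Interpretation 2: the first Monday of June in the following year
d = date(start.year + 1, 6, 1)
while d.weekday() != 0:
    d += timedelta(days=1)
first_monday = d

# Interpretation 3: exactly 365 days later, ignoring the calendar entirely
plus_365 = start + timedelta(days=365)

print(same_date, same_date.strftime('%A'))   # 2015-06-02 Tuesday
print(first_monday)                          # 2015-06-01
print(plus_365, plus_365.strftime('%A'))     # 2015-06-02 Tuesday
```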

While Cage claimed that he enjoyed the ambiguity, he also had a conflicting tendency to carefully plan and measure musical pieces like an engineer, as shown by the precise time intervals used in Indeterminacy.    Thus, eventually he realized that he had to figure out when exactly he was going to Mexico.   When he tried to confirm his plans with the other attendees, he soon realized that due to the ambiguous phrasing, most of them had not actually taken the rendezvous seriously.  In fact, some had made firm plans which would prevent them from meeting in Mexico anywhere near the proposed date.   With that, Cage decided to give up his attempts at planning, realizing that some problems don’t have a clear answer, and to simply rely on Fate.  As he phrased it, “We don’t have to make plans to be together.  (Last July, Merce Cunningham and I ran into Bucky Fuller in the airport outside of Madrid.)  Circumstances do it for us.”
   
And this has been your math mutation for today.



References:

Tuesday, November 29, 2016

225: A Crazy Kind of Computer

Audio Link

Before we start, just a quick reminder:  the Math Mutation book would make a perfect holiday present for the math geeks in your life!   You can find it on Amazon, or follow the link at mathmutation.com .   And if you do have the book, we could use a few more good reviews on Amazon.   Thanks!   And now, on to today’s topic.

Recently I learned about a cool trick that can enable very rapid computation of seemingly difficult mathematical operations.  This method, known as stochastic computing, makes clever use of the laws of probability to dramatically cut down the amount of logic, essentially the number of transistors, needed to compute elementary functions.   In particular, a multiplication operation, which takes hundreds or thousands of transistors on a conventional computer, can be performed by a stochastic computer with a single AND gate.   Today we’ll look at how such an amazing simplification of modern computing tasks is possible.
    
First, let’s review the basic elements of standard computation, as realized in the typical design of a modern computer.   At a logical level, a computer is essentially built out of millions of instances of three simple gate types, each of which takes one or two single-bit inputs, which can be either 0 or 1.  These are known as AND, OR, and NOT gates.   An AND gate returns a 1 if both its inputs are 1, an OR gate returns a 1 if at least one of its inputs is 1, and a NOT gate transforms a single bit from 0 to 1 or vice versa.   An operation like multiplication would occur by representing your pair of numbers in binary, as a set of 0s and 1s, and running the bits through many AND, OR, and NOT gates, more or less replicating the kind of long multiplication you did in elementary school, with a few optimizations thrown in.    That’s why it takes so many gates to implement the multiplier in a conventional computer.
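To make that concrete, here’s a toy Python sketch of the three gate types acting on single bits (obviously real hardware isn’t described this way; this is just for illustration):

```python
def AND(a, b): return a & b   # 1 only if both inputs are 1
def OR(a, b):  return a | b   # 1 if at least one input is 1
def NOT(a):    return 1 - a   # flips 0 to 1 and 1 to 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NOT(a))
```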

So, with this being said, how can we reduce the multiplication operation down to a single AND gate?   The key is that instead of representing our numbers in binary, we send a stream of bits down each input to the gate, such that at any moment the probability of that input being 1 is equal to one of the numbers we are multiplying.   So, for example, if we want to multiply 0.8 times 0.4, we would send a stream where 1s appear with 80% probability down the first input, and one with 40% 1s down the second input.   The output of the AND gate would be a stream whose proportion of 1s is 0.8 times 0.4, or 0.32.
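Here’s a rough Python simulation of that single-gate multiplier.  The stream length is an arbitrary choice of mine; longer streams just give a more precise estimate:

```python
import random

N = 100_000   # length of each bit stream

stream_a = [1 if random.random() < 0.8 else 0 for _ in range(N)]  # about 80% ones
stream_b = [1 if random.random() < 0.4 else 0 for _ in range(N)]  # about 40% ones

# The entire 'multiplier': a single AND gate applied to the streams bit by bit
output = [a & b for a, b in zip(stream_a, stream_b)]

print(sum(output) / N)   # close to 0.8 * 0.4 = 0.32
```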
    
The reason this works is due to a basic law of probability:   the probability of two independent events both happening is equal to the product of their probabilities.   For example, if I tell you there is an 80% (or 0.8) chance I will record a good podcast next year, and a 40% (or 0.4) chance I will record a bad podcast next year, then assuming those two events are independent, the chance I will record both a good and a bad podcast is 0.8 times 0.4, or 0.32.  Thus there is a 32% chance I will do both.    The fact that this ANDing of two probabilistic events is equivalent to multiplying the probabilities is the key to making a stochastic computer work.
    
Now if you’re listening carefully, you may have noticed a few holes in my description of this miraculous form of computation.   You may recall from elementary probability that there is one major limitation to this probability-multiplying trick:  the two probabilities must be *independent*.  So suppose you want your stochastic computer to find the square of 0.8.   Can you just connect your 80%-probability wire to both inputs of the AND gate, and expect an output result that is 0.8 times 0.8 (or 0.64) ones?   No— the output will actually just replicate the input value of 0.8, since at any given time, it will be 1 if and only if the input stream had a value of 1.   Think about my real-life example again:  if I tell you there’s an 80% chance I’ll record a good podcast next year, what’s the chance I’ll record a good podcast AND I’ll record a good podcast?   Stating it redundantly doesn’t change the probability, it’s still 80%.   To compute a square operation in a stochastic computer, I need to ensure I have two *independent* bit streams that each represent the number I’m squaring.   So I need two separately-generated streams, each with that 80% probability, and can’t take the shortcut of connecting the same input twice.   If performing a series of computations in a stochastic computer, a designer needs to be very careful to take correlations into account at each stage.
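Extending the little simulation above makes the correlation problem easy to see (again, just my own sketch):

```python
import random

N = 100_000
stream = [1 if random.random() < 0.8 else 0 for _ in range(N)]

# Wrong: wire the same stream into both inputs of the AND gate
print(sum(a & a for a in stream) / N)   # about 0.80, not 0.64

# Right: two independently generated streams, each representing 0.8
stream2 = [1 if random.random() < 0.8 else 0 for _ in range(N)]
print(sum(a & b for a, b in zip(stream, stream2)) / N)   # about 0.8 * 0.8 = 0.64
```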
    
To look at another challenge of this computing style, let’s ask another basic question:  why doesn’t everyone implement their computers this way?  It’s not a question of technology— stochastic computing was first suggested in the 1950s, and such devices were actually constructed starting in the late 1960s.   The real problem is that generating independent bit streams with probabilities equal to all the inputs of your computing problem, and then decoding the resulting output stream to find the probability of a 1 in your results, are both very complex computing tasks in themselves.   So I kind of cheated when I said the multiplication was done solely with the single AND gate.   Processing the inputs and outputs requires major designs with thousands (or more) of logic gates, quickly wiping out the benefit of the simplified multiplication.    This isn’t a total barrier to the method though.   The key is to find cases where we are able to do a lot of these probabilistic operations in relation to the number of input bit streams we need to create.    This can even be an advantage in some ways:  unlike a conventional computation, if a stochastic computer spends extra time performing the same computation and observing the output, it can increase the precision of the result.   But these issues do mean that finding good applications for the method can be rather tricky, as complex input and output generation must be balanced against the major simplification of certain operations.

Although stochastic computers were first constructed almost a half-century ago, they were then mostly abandoned for several decades.   With the breakneck pace of advances in conventional computing, there just wasn’t that much interest in exploring such exotic methods.    But with the recent slowdown of Moore’s Law, the traditional doubling of transistor counts (and, roughly, of computer performance) every couple of years, there has been a renewed interest in alternate computing models.   Certain specific applications, such as low-density parity-check codes and image processing, are well suited to the stochastic style of computing, and are current subjects of ongoing research.    I’m sure that in coming years, we’ll hear about more clever solutions to common problems that can be better performed by this unusual computing method.

And this has been your math mutation for today.



References:



Monday, October 31, 2016

224: Did America Fail a Math Test?

Audio Link

Well, we’ve made it to another election season here in the US.   Among other things, it means that we’re once again hearing from politicians all over the spectrum about how they will fix American education, if we only vote for them.   We also start hearing all the stories that show how we are currently failing at education.   One of the more amusing ones that has been making the rounds again is the story of the ill-fated Third Pounder hamburger.   Supposedly a competing restaurant chain introduced a Third Pounder, which was designed to beat McDonald’s famous Quarter Pounder, but it failed in the marketplace.    The root cause of the failure was apparently that the average American didn’t understand that a third is greater than a quarter, and thought they were being ripped off.    This story has a bit of the ring of an urban legend to it though— don’t most people successfully cut up pizzas and follow kitchen recipes that require at least this basic knowledge of fractions?   So I decided to do a little browsing on the web and see if I could get the real story.

Although it sounds suspicious, an article from Mother Jones, a generally well-researched periodical, seems to lend credibility to the legend.   The Third Pound burger was introduced in the 1980s by fast food chain A&W, one of the many lesser competitors to the great McD.   It did indeed fail in the marketplace, and according to the Mother Jones article, they tracked down a statement by a former company owner, Alfred Taubman, about what happened.   Here’s the quote they got:

Well, it turned out that customers preferred the taste of our fresh beef over traditional fast-food hockey pucks. Hands down, we had a better product. But there was a serious problem. More than half of the participants in the Yankelovich focus groups questioned the price of our burger. "Why," they asked, "should we pay the same amount for a third of a pound of meat as we do for a quarter-pound of meat at McDonald's? You're overcharging us." Honestly. People thought a third of a pound was less than a quarter of a pound. After all, three is less than four!

So, does this definitively prove that we are living in a nation of morons who think that 1/3 is less than 1/4?   Not so fast.   For one thing, this is not a summary of actual data at the time, but a recollection from years later.   We all know how those can be colored subconsciously by rumors and personal inclinations as time passes.    Like most prominent businessmen, Taubman wanted to believe that he did everything right, and that it was the cruel universe that denied him his earned victory.   Perhaps one focus group participant made such a comment, and it stewed in his mind for years afterward.   Personally, I’ve always preferred McDonald’s burgers over A&W’s, regardless of what the chain’s former owner thinks about their inherent superiority over “traditional fast-food hockey pucks.”

A thread at snopes.com brings up a few more interesting arguments.   Remember that the value provided is the pre-cooked weight of the burger.   Depending on the grinding process, ground beef can vary widely in content and quality.   Fat and water are lost during cooking, so the comparative post-cooking weights of the quarter and third pounders from different chains cannot be taken for granted.    Also, there may be other binding ingredients in the patty— so even if the weights are comparable, one may have more actual beef than the other.    And we can’t forget one other factor:  it often seems more natural to deal in quarters than thirds, since three is an odd number, harder to subdivide and work with in many contexts.   So the term “quarter pounder” may just trigger more comfortable feelings when you read it on a menu, for reasons you don’t consciously consider.

So, is our nation really so ignorant of basic fractions that we reject 1/3-pound burgers for being smaller than quarter pounders?    I think the jury is still out.  McDonald’s has actually introduced several 1/3-pound specialty burgers in recent years, but it’s hard to separate their performance from the general 21st-century decline in our taste for fast food.   A site called adventuresinfrugal.com implicitly proposes an interesting experiment:  someone should introduce both third-pounder and fifth-pounder burgers at the same price, and see which sells better.    Perhaps that would finally tell us whether or not our nation is truly confused about basic fractions.

And this has been your math mutation for today.



References:





Friday, September 30, 2016

223: Think With Both Your Brains

Audio Link

One of the most basic questions in mathematics is:  how do you solve problems in general?    This is why students traditionally tremble in terror at “story problems”— instead of being asked to mimic well-known algorithms, as in the majority of their school exercises, suddenly they are in a situation where they are not presented with a clear path to the answer.    Yet problem solving is one of the most critical skills you can learn in mathematics classes, and many of us, especially those in science and engineering fields, spend a lifetime continuing to sharpen our skills in this area.    Even in non-math-based professions, people often encounter dilemmas where the solution is not obvious.   So I think it’s worth taking a look at ways to improve our problem solving abilities in general.   And surprisingly, modern neuroscience can provide us some strange methods to try when simple linear reasoning fails us.

Probably the most famous book on this topic is “How To Solve It” by the late Stanford math professor George Polya.   Polya lays out a general 4-step process for approaching any problem, in a book full of useful examples from basic areas of algebra and geometry.   First, you need to understand the problem:  what is the given information, and what are the unknowns, the goals, and the restrictions that apply?  Second, find a way to connect the data and the unknowns, in order to plan your approach.   If this is not obvious, look for related problems, or a smaller subset of the problem that you can solve.   Third, carry out your plan, taking care to show that each step is correct.    And finally, examine the solution:   is there a way to independently check the result, or use it for other problems?

While Polya’s method is very useful, something about it seems a bit too simple.   After all, if it is easy to understand a problem, plan the solution, and carry it out, why are there so many unsolved problems out there?   Why hasn’t someone definitively solved each of the Millennium Prize problems, like the P=NP question we discussed in podcast 13, and taken the million dollar prize?     I think one key is that a lot of problems require a flash of intuition, or a conceptual leap that is very difficult to arrive at by linear reasoning.   And that’s where the neuroscience comes in.   Recently I’ve been reading an intriguing book by Andy Hunt called “Pragmatic Thinking and Learning”, which offers a number of strategies for stimulating your mind to solve problems in different ways.

As you’ve probably heard somewhere, many modern scientists believe our brains exhibit two main modes of thought.   Commonly these are called “left brain” and “right brain”, but Hunt points out that the strict connection with the brain hemispheres isn’t quite right, so he suggests the terms “L-mode” and “R-mode”, with the L standing for “linear”, and R standing for “rich”.   You can think of the two modes as being the two CPUs of a multiprocessing computer system, potentially working in parallel at all times.   Your L-mode brain excels at analytic, linear thinking, and is the primary user of methods like Polya’s.   Your R-mode brain is what you typically exercise in artistic or creative endeavors.    R-mode, while trickier to interact with due to its nonverbal nature, can also provide intuition, synthesis, and holistic thinking— it probably won’t come up with a mathematical proof, but can lead you to discover a conceptual leap you need to get past a roadblock in one.    But how can we effectively interact with our R-mode, or stimulate its activity, in order to leverage its power?   Hunt suggests a variety of basic techniques for getting a dormant R-mode active and more involved.
   
One simple method is to try to use different senses than usual, in a way that engages your artistic side.  While thinking about a problem with your L-mode, do some minor creative action with your hands that exercises your R-mode, such as making shapes with a paper clip, doodling, or putting together Legos.   In one amusing example, Hunt describes a case where a team designing a complex computer program decided to get up and “role-play” each of the functional units, and soon had a variety of new insights about the system.    

Another method Hunt suggests comes from the domain of computer science, but is likely applicable to many other fields:  “Pair Programming”.   The idea here is that one programmer is actually typing a computer program on the screen, inherently an L-mode activity, while the other is sitting next to him, observing, and making suggestions.   Because the second programmer doesn’t have to worry about the L-mode task of entering the precise sequence of commands, he is free to use his R-mode to take a holistic look, and come up with intuitive suggestions about the overall method. 

A third method that can be surprisingly effective is known as “image streaming”.   After thinking about a problem for a while, try to close your eyes and visualize images related to it for ten minutes or so.   For each image you can think of, first try to imagine it visually, then describe out loud how it appears to all five of your senses.   This one sounds a bit silly at first— and I would suggest you don’t try it in an open cubicle with your co-workers watching— but can be a very powerful way to engage your R-mode.   

A fourth suggestion is called the “morning pages” technique:  when you wake up every morning, immediately write at least three pages on whatever topic comes to mind.   Don’t censor what you write, or try to revise and make it perfect, just let the information flow.   Because it’s the first thing in the morning, you’re getting an unguarded brain dump, while your R-mode dreams and unconscious thoughts are still fresh in your mind.    If you were working on a hard problem the day before, your R-mode may naturally have provided new insights during the night that you now want to capture.    As Hunt summarizes, “You haven’t yet raised all the defenses and adapted to the limited world of reality”.

These ideas are just a small subset of known techniques for leveraging your lesser-used R-mode— if you want to maximize your ability to use your whole mind for problem solving, I would highly recommend that you check out his book, linked in the show notes.    I’ll be interested to hear from any of you who successfully use some of Hunt’s odder-sounding techniques to solve difficult problems.    On the other hand, if you think everything I’ve said today sounds crazy, that’s probably just your L-mode brain over-exercising its linear, logical influence. 

And this has been your math mutation for today.



References:





Monday, August 22, 2016

222: Fractal Expressionism

Audio Link

If you watch enough TV, you probably remember an old sitcom plot where the characters are at a viewing of abstract expressionist art, and somehow a 3-year-old’s paint scribblings get mixed in with the famous works.   Most of the characters, clueless about art, pretend to like the ‘bad’ painting as much as the real paintings, trusting that whatever is on display must be officially blessed as good by the important people.    However, one wise art aficionado spots the fake, pointing out how it is obviously garbage compared to all the real art in the room.   Thus the many pseudo-intellectuals in the audience get affirmation that their professed fandom of “officially” respected art has a valid basis.     I had always considered this kind of plot a mere fantasy, until I read about physicist Richard Taylor’s apparent success in showing that Jackson Pollock’s most famous paintings actually involve mathematical objects called fractals, and that this analysis can be used to distinguish Pollock artworks from lesser efforts.

Before we talk about Taylor’s work, let’s review the idea of fractals, which we have discussed in some earlier podcasts.   A simple definition of a fractal is a structure with a pattern that exhibits infinite self-similarity.     A popular example is the Koch snowflake.   You can create this shape by drawing an equilateral triangle, then erecting a smaller equilateral triangle on the middle third of each side (erasing the segment it sits on), and repeating the process on each outer edge of the resulting figure.   You will end up with a kind of snowflake shape, with the fun property that if you zoom in on any local region, it will look like a partial copy of the same snowflake shape.    Other fractals may have a random or varying element in the self-symmetry, which makes them useful for creating realistic-looking mountain ranges or coastlines.    The degree of self-similarity in a fractal is measured by something called the “fractal dimension”.
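For the Koch snowflake, the fractal dimension can be worked out directly from the construction:  each step replaces a segment with 4 copies of itself, each scaled down to 1/3 of the original size, so the self-similarity dimension is log 4 divided by log 3.  Here’s the one-line check in Python:

```python
import math

# 4 self-similar copies, each scaled by a factor of 1/3
koch_dimension = math.log(4) / math.log(3)
print(koch_dimension)   # about 1.26 -- more than a 1-D curve, less than a filled 2-D area
```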

Taylor’s insight was that Pollock’s paintings might actually be representing fractal patterns.   This idea has some intuitive appeal:  perhaps the abstract expressionists were a form of savant, creating deep mathematical structures that most people could understand on an intuitive level but not verbalize.  Taylor created a computer program that would overlay a grid on a painting and look for repeating patterns, reporting the fractal dimension resulting from the analysis.   After examining a large sample of these, his research team announced that Pollock’s paintings really are fractals, tending to almost always fall within a particular range of fractal dimensions.   They also claimed that these patterns could be used with high accuracy to distinguish Pollock paintings from forgeries.   Taylor even claimed at one point that, due to the various changes in technique over Pollock’s career, he could date any Pollock painting to within a year based on its fractal dimension.   Abstract art critics and fans all over the world felt vindicated, and Taylor became the toast of the artistic community.
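I don’t know the details of Taylor’s actual program, but the standard way to estimate a fractal dimension from an image is box counting:  overlay grids of progressively smaller boxes, count how many boxes contain part of the pattern, and fit the slope of log(count) against log(1/box size).  Here’s a rough sketch of that general idea, assuming the painting has already been reduced to a 2-D array of 0s and 1s (this is only my own illustration of the technique, not Taylor’s method):

```python
import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary 2-D array by box counting."""
    counts = []
    for size in box_sizes:
        count = 0
        for i in range(0, image.shape[0], size):
            for j in range(0, image.shape[1], size):
                if image[i:i + size, j:j + size].any():   # box touches the pattern
                    count += 1
        counts.append(count)
    # The slope of log(count) vs. log(1/size) approximates the dimension
    slope, _intercept = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope
```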

However, the story doesn’t end there.   When first reading about Taylor’s work, something seemed a little fishy to me— it reminded me a bit of the overblown fandom of the Golden Ratio, which we discussed back in episode 185.   You may recall that in that episode, I pointed out that any ratio in nature roughly close to 3:5 could be interpreted as an example of the Golden Ratio, by carefully choosing your points of measurement and level of accuracy.   Similarly, it seems to me that the level of fine-tuning required for Taylor’s type of computer analysis would make it inherently suspect.   Taylor isn’t claiming an indisputable point-by-point self-similarity, as in an image of the Koch snowflake.   He must be inferring some kind of approximate self-similarity, with a level of approximation and tuning that is built into the computer programs he uses.  Furthermore, there is no way his experimental process can be truly double-blind:  all Pollock paintings are matters of public record, and I suspect everyone involved in his study was a Pollock fan to some degree to begin with.   I’m sure most Pollock paintings exhibit some kinds of patterns, and with the right definitions and approximations, just about any kind of pattern can be loosely interpreted as a fractal.   With all this knowledge available as they were creating their program, I’m sure Taylor’s team was able to generate something finely tuned to Pollock’s style, even if they were not conscious of this built-in bias.

Of course, many people in the math and physics world also found Taylor’s analysis suspicious.   A team of skeptics, led by the well-known physicist Lawrence Krauss, developed their own fractal-detecting program and tried to repeat Taylor’s analysis.   They found that analyzing fractal dimensions was useless for identifying Pollock paintings.   Several actual paintings were missed, while lame sketches of random patterns by lab staff, such as a series of stars on a sheet of paper that could have been drawn by a child, were given Pollock-like measurements.   When issuing their report, this team claimed to have conclusively proven that fractal analysis is completely useless for distinguishing Pollocks from forgeries.    In some sense, this may not be such a bad result for art fans— as Krauss’s collaborator Katherine Jones-Smith stated, "I think it is more appealing that Pollock's work cannot be reduced to a set of numbers with a certain mean and certain standard deviation.”

So, are Pollock paintings actually describable as fractals, or not?   The jury still seems to be out.   Krauss’s team claimed that they had definitively disproven this idea.   However, Taylor responded that this was merely an issue of them having used a much less sophisticated computer program.   Active research is still continuing in this area, as shown by a 2015 paper that combines Taylor’s method with several other mathematical techniques, and claims a 93% accuracy in identifying Pollocks.   My inclination is that we should still look at this entire area with a healthy skepticism, due to the inability to produce a truly double-blind study when famous artworks are involved.    But there are likely some underlying patterns in abstract expressionist art, at least in the better paintings, which may be a key to why some people find them enjoyable.   So lie back, turn on your John Cage music, and start staring at those Pollocks.

And this has been your math mutation for today.



References: