Suburban Lion's Blog

2014/05/30

Meta-Pokemon

Filed under: Math,Video Games — Ryan Ruff @ 10:00

In a previous post, I mentioned my fascination with Twitch Plays Pokemon (TPP). The reason behind this stems from the many layers of metagaming that take place in TPP. As I discussed in my previous post, the most basic definition of metagaming is "using resources outside the game to improve the outcome within the game". However, there's another definition of metagaming that has grown in usage thanks to Hofstadter: "a game about a game". This reflexive definition of metagaming is where the complexity of TPP begins to shine. Let's take a stroll through the various types of metagaming taking place in TPP.

Outside resources

At the base level, we have players making use of a variety of outside resources to improve their performance inside the game. For Pokemon, the most useful resources might include maps, bestiaries, and Pokemon-type matchups. In TPP, many players also bring with them their own previous experiences with the game.

Pokemon itself is a metagame. Within the world of the game, the Pokemon League is its own game within the game. A Pokemon player is playing the role of a character who is taking part in a game tournament. What makes TPP so interesting is that it adds a game outside the game. Players in TPP can cooperate or compete for control of the game character. In effect, TPP is a meta-metagame: a game about a game about a game. Players in TPP are controlling the actions of a game character participating in a game tournament. It's Pokemon-ception!

Gaming the population

Another use of metagaming is to take knowledge of the trends in player behaviors and utilize that information to improve the outcome within the game. In TPP, players would use social media sites such as Reddit to encourage the spread of certain strategies. Knowledge of social patterns in the general population of TPP players enables a few players to guide the strategy of the collective in a desirable direction. Memes like "up over down" bring structure to an otherwise chaotic system and quickly become the dominant strategy.

Gaming the rules

One of my favorite pastimes is theory-crafting, which is itself a form of metagaming. Here, we treat the rules of the game as a game in their own right and search the space of possible strategies. The method TPP used in the final boss fight is a perfect example of this. The boss is programmed to select a move type that the player's pokemon is weak against, and one of these moves deals no damage. By sending out a pokemon that is weak against this particular move, the players lock the boss into a strategy that will never do any damage! Not only did the TPP players turn the rules of the game against it, they also needed to organize the population to pull it off!


Rule modification games

One of the defining characteristics of a game is its rules. The rules of Pokemon are well defined by the game's code, but the rules of TPP are malleable. We can choose between "chaos" and "democracy". Under chaos, every player input gets sent to the game. Under democracy, players vote on the next action to send. When we look at the selection of rules as a game in which we try to maximize viewers/participants, we get another type of metagaming.

2014/04/30

Understanding Voter Regret

Filed under: Math,Politics — Ryan Ruff @ 00:34

Lately I've been doing a little bit of research on voting methods.  In particular, I've been fascinated by this idea of measuring Bayesian Regret.  Unfortunately, many of the supporting links on rangevoting.org are dead.  With a little detective work I managed to track down the original study and the supporting source code.

Looking at this information critically, one of my concerns was the potential for bias in the study.  This is the only study I could find taking this approach, and the information is hosted on a site dedicated to supporting the method the study proved most effective.  This doesn't necessarily mean the result is flawed, but it's one of the "red flags" I look for in research.  With that in mind, I did what any skeptic should: I attempted to replicate the results.

Rather than simply use the provided source code, I started writing my own simulation from scratch.  I still have some bugs to work out before I release my code, but the experience has been very educational so far.  I think I've learned more about these voting methods by fixing bugs in my code than by reading the original study.  My initial results seem consistent with Warren Smith's study, but there are still some kinks I need to work out.

What I'd like to do in this post is go over a sample election that came up while I was debugging my program.  I'm hoping to accomplish a couple of things by doing so.  First, I'd like to explain in plain English what exactly the simulation is doing.  The original study seems to be written with mathematicians in mind, and I'd like these results to be accessible to a wider audience.  Second, I'd like to outline some of the problems I ran into while implementing the simulation.  It can benefit me to reflect on what I've done so far, and perhaps some reader out there will be able to provide input that points me in the right direction.

Pizza Night at the Election House

It's Friday night in the Election household, and that means pizza night!  This family of 5 takes a democratic approach to their pizza selection and conducts a vote on what type of pizza they should order.  They all agree that they should get to vote on the pizza.  The only problem is that they can't quite agree on how to vote.  For the next 3 weeks, they've decided to try out 3 different election systems: Plurality, Instant-Runoff, and Score Voting.

Week 1: Plurality Voting

The first week they use Plurality Voting.  Everyone writes down their favorite pizza and whichever pizza gets the most votes wins.

With two votes, veggie pizza is declared the winner.
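The week-one count is easy to sketch in Python (the ballots below are each family member's favorite from the story; this is just an illustration, not code from my simulation):

```python
from collections import Counter

# Week one: everyone writes down a single favorite.
ballots = ["cheese", "veggie", "pepperoni", "hawaiian", "veggie"]

# Plurality: the option with the most votes wins outright.
counts = Counter(ballots)
winner, votes = counts.most_common(1)[0]
print(winner, votes)  # veggie, with 2 of the 5 votes
```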

Mom and the middle child are quite happy with this result.  Dad and the two others aren't too excited about it.  Because the 3 of them were split on their favorites, the vote went to an option that none of them really liked.  They feel hopeful that things will improve next week.

Week 2: Instant Run-off Voting

The second week they use Instant Run-off Voting.  Since the last election narrowed down the pizzas to four options, everyone lists those four pizzas in order of preference.

The youngest doesn't really like veggie pizza, but absolutely hates pineapple.  Ranks cheese 1st, pepperoni 2nd, veggie 3rd, and hawaiian last.

The middle child is a vegetarian.  Both the hawaiian and pepperoni are bad options, but at least the hawaiian has pineapple and onions left over after picking off the ham. Ranks veggie 1st, cheese 2nd, hawaiian 3rd and pepperoni last.

The oldest child moderately likes all of them, but prefers fewer veggies on the pizza.  Ranks pepperoni 1st, cheese 2nd, hawaiian 3rd and veggie last.

Dad too moderately likes all of them, but prefers the options with meat and slightly prefers cheese to veggie.  Ranks hawaiian 1st, pepperoni 2nd, cheese 3rd and veggie last.

Mom doesn't like meat on the pizza as much as Dad, but doesn't avoid it entirely like the middle child.  Ranks veggie 1st, cheese 2nd, pepperoni 3rd and hawaiian last.

Adding up the first place votes gives the same result as the first election: 2 for veggie, 1 for hawaiian, 1 for pepperoni and 1 for cheese.  However, under IRV the votes for the last place pizza get transferred to the next ranked pizza on the ballot.

However, there's something of a problem here.  There's a 3-way tie for last place!

A fight nearly breaks out in the Election house.  Dad, the oldest, and the youngest each want to avoid having their favorite eliminated.  The outcome of the election hinges on whose votes get transferred where!

Eventually mom steps in and tries to calm things down.  Since the oldest prefers cheese to hawaiian and the youngest prefers pepperoni to hawaiian, it makes sense that dad's vote for hawaiian should be the one eliminated.  Since the kids agree with mom's assessment, dad decides to go along and have his vote transferred to pepperoni.

Now the score is 2 votes for veggie, 2 votes for pepperoni, and 1 vote for cheese.  Since cheese is now the lowest, the youngest child's vote gets transferred to the next choice: pepperoni.  With 3 votes to 2, pepperoni has a majority and is declared the winner.
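The elimination rounds can be sketched in Python.  The `tie_order` parameter is my own illustrative knob for breaking last-place ties (earlier in the list means eliminated first); real IRV implementations handle ties in various ways:

```python
from collections import Counter

# Ranked ballots from the story, listed best-first.
ballots = [
    ["cheese", "pepperoni", "veggie", "hawaiian"],   # youngest
    ["veggie", "cheese", "hawaiian", "pepperoni"],   # middle child
    ["pepperoni", "cheese", "hawaiian", "veggie"],   # oldest
    ["hawaiian", "pepperoni", "cheese", "veggie"],   # dad
    ["veggie", "cheese", "pepperoni", "hawaiian"],   # mom
]

def irv_winner(ballots, tie_order):
    """Eliminate the last-place candidate each round until someone
    has a majority. Last-place ties are broken by `tie_order`."""
    remaining = set(ballots[0])
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter({c: 0 for c in remaining})
        for b in ballots:
            tally[next(c for c in b if c in remaining)] += 1
        top, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return top
        fewest = min(tally.values())
        losers = sorted((c for c in remaining if tally[c] == fewest),
                        key=tie_order.index)
        remaining.remove(losers[0])

# Following the family's tie-break: dad's hawaiian vote goes first.
print(irv_winner(ballots, ["hawaiian", "cheese", "pepperoni", "veggie"]))
```

With this tie-break order the winner is pepperoni, matching the story; putting pepperoni first in the elimination order instead hands the win to cheese.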

The middle child is kind of upset by this result because it means she'll need to pick all the meat off her pizza before eating.  Mom's not exactly happy with it either, but is more concerned about all the fighting.  They both hope that next week's election will go better.

Week 3: Score Voting

The third week the Election family goes with Score Voting.  Each family member assigns a score from 0 to 99 for each pizza.  The pizza with the highest score is declared the winner.  Everyone wants to give his/her favorite the highest score and least favorite the lowest, while putting the other options somewhere in between. Here's how they each vote:

The youngest rates cheese 99, hawaiian 0, veggie 33 and pepperoni 96.

The middle child rates cheese 89, hawaiian 12, veggie 99 and pepperoni 0.

The oldest child rates cheese 65, hawaiian 36, veggie 0 and pepperoni 99.

Dad rates cheese 13, hawaiian 99, veggie 0 and pepperoni 55.

Mom rates cheese 80, hawaiian 0, veggie 99 and pepperoni 40.

Adding all these scores up, the final tally is 346 for cheese, 147 for hawaiian, 231 for veggie and 290 for pepperoni.  Cheese is declared the winner.  Some of them are happier than others, but everyone's pretty much okay with cheese pizza.
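The score tally is just a sum (ballots copied from the story; again an illustration, not my simulation code):

```python
# Score ballots: each voter scores every pizza from 0 to 99.
scores = {
    "youngest": {"cheese": 99, "hawaiian": 0,  "veggie": 33, "pepperoni": 96},
    "middle":   {"cheese": 89, "hawaiian": 12, "veggie": 99, "pepperoni": 0},
    "oldest":   {"cheese": 65, "hawaiian": 36, "veggie": 0,  "pepperoni": 99},
    "dad":      {"cheese": 13, "hawaiian": 99, "veggie": 0,  "pepperoni": 55},
    "mom":      {"cheese": 80, "hawaiian": 0,  "veggie": 99, "pepperoni": 40},
}

# Sum each pizza's scores across all ballots; highest total wins.
totals = {}
for ballot in scores.values():
    for pizza, s in ballot.items():
        totals[pizza] = totals.get(pizza, 0) + s

winner = max(totals, key=totals.get)
print(totals)   # cheese 346, hawaiian 147, veggie 231, pepperoni 290
print(winner)   # cheese
```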

Comparing the Results

Three different election methods.  Three different winners.  How do we tell which election method is best?

This is where "Bayesian Regret" comes in.

With each of these 3 elections, we get more and more information about the voters. First week, we get their favorites.  Second week, we get an order of preference.  Third week, we get a magnitude of preference.  What if we could bypass the voting altogether and peek inside the voters' heads to see their true preferences?  For the family above, those preferences would look like this:

            cheese    hawaiian   veggie    pepperoni
  youngest  99.92%     2.08%     34.25%     95.79%
  middle    65.95%    10.09%     73.94%      0.61%
  oldest    74.55%    66.76%     57.30%     83.91%
  dad       52.13%    77.03%     48.25%     64.16%
  mom       87.86%    39.79%     99.72%     63.94%

These values are the relative "happiness levels" of each option for each voter.  It might help to visualize this with a graph.

If we had this data, we could figure out which option produced the highest overall happiness.  Adding up these "happiness" units, we get 380 for cheese, 195 for hawaiian, 313 for veggie and 308 for pepperoni.  This means the option that produces the most family happiness is the cheese pizza.  The difference between the max happiness and the outcome of the election gives us our "regret" for that election.  In this case: the plurality election has a regret of 67, the IRV election has a regret of 72, and the score voting election has a regret of 0 (since it chose the best possible outcome).
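That calculation can be written out directly from the table (a sketch of the arithmetic, not the original study's code):

```python
# The family's true utilities ("happiness levels") from the table.
utilities = {
    "youngest": {"cheese": 99.92, "hawaiian": 2.08,  "veggie": 34.25, "pepperoni": 95.79},
    "middle":   {"cheese": 65.95, "hawaiian": 10.09, "veggie": 73.94, "pepperoni": 0.61},
    "oldest":   {"cheese": 74.55, "hawaiian": 66.76, "veggie": 57.30, "pepperoni": 83.91},
    "dad":      {"cheese": 52.13, "hawaiian": 77.03, "veggie": 48.25, "pepperoni": 64.16},
    "mom":      {"cheese": 87.86, "hawaiian": 39.79, "veggie": 99.72, "pepperoni": 63.94},
}

# Total societal happiness for each pizza.
social = {p: sum(u[p] for u in utilities.values())
          for p in ["cheese", "hawaiian", "veggie", "pepperoni"]}
best = max(social.values())  # cheese, at roughly 380 happiness units

def regret(winner):
    """Bayesian Regret: best achievable happiness minus what the
    elected option actually delivers."""
    return best - social[winner]

print(round(regret("veggie")))     # plurality's winner: 67
print(round(regret("pepperoni")))  # IRV's winner: 72
print(round(regret("cheese")))     # score voting's winner: 0
```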

Now keep in mind that this is only the regret for this particular family's pizza selection.  To make a broader statement about which election method is the best, we need to look at all possible voter preferences.  This is where our computer simulation comes in.  We randomly assign a number for each voter's preference for each candidate, run the elections, calculate the regret, then repeat this process over and over to average the results together.  This will give us an approximation of how much regret will be caused by choosing a particular voting system.
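The simulation loop just described might look something like this minimal sketch (the uniform random utilities and the `magic_best` baseline are my own assumptions for illustration, not Smith's exact setup):

```python
import random

def simulate(n_voters, n_candidates, n_elections, method):
    """Average Bayesian Regret of an election `method` over many
    randomized electorates. `method` takes the utility matrix and
    returns the index of the winning candidate."""
    total = 0.0
    for _ in range(n_elections):
        # Assign a random true utility to every voter/candidate pair.
        utils = [[random.random() for _ in range(n_candidates)]
                 for _ in range(n_voters)]
        # Societal happiness for each candidate.
        social = [sum(v[c] for v in utils) for c in range(n_candidates)]
        total += max(social) - social[method(utils)]
    return total / n_elections

# Sanity check: a "magic best" method that reads the true utilities
# always picks the optimal candidate, so its regret is exactly zero.
def magic_best(utils):
    social = [sum(v[c] for v in utils) for c in range(len(utils[0]))]
    return social.index(max(social))

print(simulate(5, 4, 100, magic_best))  # 0.0
```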

Open Questions

In writing my simulation from scratch, I've run into a number of interesting problems.  These aren't simply programming errors, but rather conceptual differences between my expectations and the implementation.   Some of these questions might be answerable through more research, but some of them might not have a clear cut answer.   Reader input on these topics is most welcome.

Implementing IRV is complicated

Not unreasonably hard, but much more so than I had originally anticipated.  It seemed easy enough in theory: keep track of the candidates with the lowest number of votes and eliminate them one round at a time.  The problem I ran into was that in small elections, which I was using for debugging, there were frequently ties between low-ranked candidates in the first round (as in the story above).  In the event of a tie, my code would eliminate the candidate with the lower index first.  Since the order of the candidates was essentially random, this isn't necessarily an unfair method of elimination.  However, it did cause some ugly-looking elections where an otherwise "well qualified" candidate was eliminated early by nothing more than "bad luck".

This made me question how ties should be handled in IRV.  The sample elections my program produced showed that the order of elimination could have a large impact on the outcome.  In the election described above, my program actually eliminated "cheese" first.  Since the outcome was the same, it didn't really matter for this example.  However, if the random ordering of candidates had placed "pepperoni" first then "cheese" would have won the election!  Looking at this probabilistically, the expected regret for this example would be 1/3*0+2/3*72 = 48.  A slight improvement, but the idea of non-determinism still feels out of place.

I started looking into some alternative methods of handling ties in IRV.  For a simulation like this, the random tie-breaker probably doesn't make a large difference.  With larger numbers of voters, the ties get progressively more unlikely anyway.  However, I do think it could be interesting to compare the Bayesian Regret among a number of IRV variations to see if some tie-breaking mechanisms work better than others.

Bayesian Regret is a societal measure, not individual

When I first started putting together my simulation, I did so "blind".  I had a conceptual idea of what I was trying to measure, but was less concerned about the mathematical details.  As such, my first run produced some bizarre results.  I still saw a difference between the voting methods, but at a much different scale.  In larger elections, the difference between voting methods was closer to a factor of .001.  With a little bit of digging, and double-checking the mathematical formula for Bayesian Regret, I figured out what I did wrong.  My initial algorithm went something like this:

I took the difference between the utility of each voter's favorite and the candidate elected.  This gave me an "unhappiness" value for each voter.  I averaged the unhappiness of all the voters to find the average unhappiness caused by the election.  I then repeated this over randomized elections and kept a running average of the average unhappiness caused by each voting method.  For the sample election above, voters are about 11% unhappy with cheese versus 24% or 25% unhappy with veggie and pepperoni respectively.
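A sketch of that initial (mistaken) measure, applied to the family's true utilities, reproduces those percentages:

```python
# True utilities ("happiness levels") from the pizza example.
utilities = {
    "youngest": {"cheese": 99.92, "hawaiian": 2.08,  "veggie": 34.25, "pepperoni": 95.79},
    "middle":   {"cheese": 65.95, "hawaiian": 10.09, "veggie": 73.94, "pepperoni": 0.61},
    "oldest":   {"cheese": 74.55, "hawaiian": 66.76, "veggie": 57.30, "pepperoni": 83.91},
    "dad":      {"cheese": 52.13, "hawaiian": 77.03, "veggie": 48.25, "pepperoni": 64.16},
    "mom":      {"cheese": 87.86, "hawaiian": 39.79, "veggie": 99.72, "pepperoni": 63.94},
}

def average_unhappiness(winner):
    """Distance between each voter's favorite option and the elected
    one, averaged over all voters -- a per-voter measure, unlike
    Bayesian Regret, which is a societal one."""
    diffs = [max(u.values()) - u[winner] for u in utilities.values()]
    return sum(diffs) / len(diffs)

print(round(average_unhappiness("cheese")))     # roughly 11% unhappy
print(round(average_unhappiness("veggie")))     # roughly 24%
print(round(average_unhappiness("pepperoni")))  # roughly 25%
```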

I found this "mistake" rather intriguing.  For one thing, it produced a result that kind of made sense intuitively.  Voters were somewhat "unhappy" no matter which election system was used.  Even more intriguing was that if I rescaled the results of an individual election, I found that they were distributed in close to the same proportions as the results I was trying to replicate.  In fact, if I normalized the results from both methods, i.e.  R' = (R-MIN)/(MAX-MIN), then they'd line up exactly.

This has become something of a dilemma.  Bayesian Regret measures exactly what it says it does -- the difference between the best option for the society and the one chosen by a particular method.  However, it produces a result that is somewhat abstract.  On the other hand, my method produced something a little more tangible  -- "average unhappiness of individual voters" -- but makes it difficult to see the differences between methods over a large number of elections.  Averaging these unhappiness values over a large number of elections, the results seemed to converge around 33%.

Part of me wonders if the "normalized" regret value, which aligns between both models, might be a more appropriate measure.  In this world, it's not the absolute difference between the best candidate and the one elected but the difference relative to the worst candidate.  However, that measure doesn't really make sense in a world with the potential for write-in candidates.  I plan to do some more experimenting along these lines, but I think the question of how to measure "regret" is a very interesting one in itself.

"Honest" voting is more strategic than I thought

After correcting the aforementioned "bug", I ran into another troubling result.  I started getting values that aligned with Smith's results for IRV and Plurality, but the "Bayesian Regret" of Score Voting was coming up as zero.  Not just close to zero, but exactly zero.  I started going through my code and comparing it to Smith's methodology, when I realized what I did wrong.

In my first implementation of score voting, the voters were putting their internal utility values directly on the ballot.  This meant that the winner elected would always match up with the "magic best" winner.   Since the Bayesian Regret is the difference between the elected candidate and the "magic best", it was always zero.   I hadn't noticed this earlier because my first method for measuring "unhappiness" returned a non-zero value in every case -- there was always somebody unhappy no matter who was elected.

Eventually I found the difference.  In Smith's simulation, even the "honest" voters were using a very simple strategy: giving a max score to the best and a min score to the worst.  The reason that the Bayesian Regret for Score Voting is non-zero is due to the scaling of scores between the best and the worst candidates.  If a voter strongly supports one candidate and opposes another, then this scaling doesn't make much of a difference.  It does, however, make a big difference when the voters are nearly indifferent between the candidates but give a large score differential to the candidate that's slightly better than the rest.
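As a sketch of that scaling (my own reconstruction of the idea, not Smith's actual code):

```python
def honest_scaled_ballot(utils, max_score=99):
    """Smith-style 'honest' score ballot: max score to the favorite,
    min score to the least favorite, everything else scaled linearly."""
    lo, hi = min(utils), max(utils)
    if hi == lo:  # a totally indifferent voter scores everything the same
        return [max_score // 2] * len(utils)
    return [round((u - lo) / (hi - lo) * max_score) for u in utils]

# A nearly indifferent voter (true utilities 10, 12, 11) still casts
# a ballot with the full score spread:
print(honest_scaled_ballot([10, 12, 11]))  # [0, 99, 50]
```

That amplification of tiny preferences is exactly what pulls the elected winner away from the "magic best" one and makes the regret non-zero.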

With this observation, it became absolutely clear why Score Voting would minimize Bayesian Regret.  The more honest voters are, the closer the Bayesian Regret gets to zero.   This raises another question: how much dishonesty can the system tolerate?

Measuring strategic vulnerability

One of the reasons for trying to reproduce this result was to experiment with additional voting strategies outside the scope of the original study.  Wikipedia cites another study by M. Balinski and R. Laraki that suggests Score Voting is more susceptible to tactical voting than alternatives.  However, those authors may be biased toward their own proposed method as well.  I think it's worthwhile to try to replicate that result too.  The issue is that I'm not sure what the appropriate way to measure "strategic vulnerability" would even be.

Measuring the Bayesian Regret of strategic voters and comparing it with honest voters could potentially be a starting point.   The problem is how to normalize the difference.   With Smith's own results, the Bayesian Regret of Score Voting increases by 639% by using more complicated voting strategies while Plurality only increases by 188%.  The problem with comparing them this way is that the Bayesian Regret of the strategic voters in Score Voting is still lower than the Bayesian Regret of honest Plurality voters.   Looking only at the relative increase in Bayesian Regret isn't a fair comparison.

Is there a better way of measuring "strategic vulnerability"?  Bayesian Regret only measures the difference from the "best case scenario".  The very nature of strategic voting is that it shifts the result away from the optimal solution.  I think that to measure the effects of voting strategy there needs to be some way of taking the "worst case scenario" into consideration also.  The normalized regret I discuss above might be a step in the right direction.  Any input on this would be appreciated.

Disclaimer

Please don't take anything said here as gospel.  This is a blog post, not a peer-reviewed journal.  This is my own personal learning endeavor and I could easily be wrong about many things.  I fully accept that and will hopefully learn from those mistakes.   If in doubt, experiment independently!

2014/02/10

What I've discovered, learned or shared by using #mathchat

Filed under: Education,Math — Ryan Ruff @ 22:44

This was a #mathchat topic in July of 2012 that I really wanted to write about but didn't quite get around to at the time.  This happened partly because I was busy juggling work and graduate school, but also because I felt a bit overwhelmed by the topic.   I've learned so many things through my involvement in #mathchat that the idea of collecting them all was daunting.   It also kind of bothered me that my first attempt at a response to this prompt turned into a lengthy list of tips, books, and links.  This type of content makes sense on Twitter.  It's actually the perfect medium for it.  However, to turn this into a blog post I needed some coherency.  I felt like there was a pattern to all of these things that #mathchat has taught me but I just couldn't quite put my finger on it.

A year and a half has passed since this topic came up.  It's now been 6 months since the last official #mathchat.  Despite this, Tweeps from all over the world continue using the hashtag to share their lesson ideas and thoughts about math education.  It's inspiring.  The weekly chats might have stopped, but the community continues to flourish.  Looking back on how things have changed on #mathchat helped put perspective on how #mathchat changed me.  I think I'm finally ready to answer this prompt.

What I learned by using #mathchat was that learning requires taking risks.

On the surface, it seems like this assertion might be obvious.  Whenever we attempt something new, we run the risk of making a mistake.  By making mistakes we have an opportunity to learn from them.  The issue is that we go through this routine so many times that it becomes habitual.   When learning becomes automatic, it's easy to lose sight of the risks and how central they are to the learning process.

I was rather fortunate to have discovered #mathchat when I did.  I had signed up for Twitter at approximately the same time I started teaching math.  Anyone that's ever been a teacher knows that learning a subject and teaching that subject are two entirely different beasts.   I'd been doing math for so long that most of it was automatic.  It wasn't until I started teaching that I realized I had forgotten what it was like to learn math.   As a result, I was struggling to see things from the perspective of my students.  I needed to step out of my own comfort zone and remember what it was like to learn something new.  It's through complete coincidence that my wife stumbled upon Twitter at this time and said, "Hey, I found this new website that you might find interesting".

My social anxiety was still quite strong at this time.  With each Tweet, I was afraid that I would say something stupid and wake up the next day to find that all my followers had vanished.  However, #mathchat provided a welcoming atmosphere and discussion topics that were relevant to my work environment.  This provided me with an opportunity to engage in discussion while mitigating  some of the risks.  I knew that each topic would be close to my area of expertise and the community was composed of people who were also there to learn.  There was a certain comfort in seeing how people interacted on #mathchat.  People would respond critically to the content of Tweets, but always treated each participant with dignity and respect.   I was experiencing first hand what a real learning community could be like.

A frequent motif in these #mathchat discussions was Lev Vygotsky's model of learning.  With my background in psychology, I was already familiar with the concepts and vocabulary.  However, #mathchat helped me link this theory with practice.  I became more and more comfortable with a social perspective on learning because I was learning through my social interactions.  While I had known the definition of terms like "zone of proximal development", I wasn't quite to the point where I could see the line separating what I could learn on my own and what I could learn with assistance.  I had always been a self-driven learner, but in order to be successful in learning I needed to limit myself to areas that were close to my existing skills and knowledge.  I needed to minimize the risks when learning on my own.  Learning in a social environment was different.  I needed to become comfortable taking larger risks with the reassurance that the people I was learning with would help me pick myself up when I fell.

The #mathchat discussions themselves were not without risks of their own.  Colin took a risk himself by creating #mathchat.  It was entirely possible that he could have set this chat up only to have no one show up to participate.  Indeed, many a #mathchat started with an awkward period of silence where people seemed hesitant to make the first move.  There's much lower risk in joining a discussion in progress than starting one from scratch. The risk is lower still by simply "lurking" and only reading what others have said.  As time went on, there was a growing risk that #mathchat would run out of topics for discussion.  This risk has since manifested itself and #mathchat has entered a state of hiatus.

I'm aware of these risks only in hindsight.  At the time, I wasn't really conscious of the shift occurring in my own model of learning.  What started to make me realize this change was the adoption of my two cats.  This provided me with another opportunity to put learning theory into practice by training them (although it's arguable that they're the ones training me instead).  The smaller one, an orange tabby named Edward, responded quickly to classical and operant conditioning with cat treats.  The larger one, a brown tabby named Alphonse, didn't seem to care about treats.  It quickly became obvious that I was using the wrong reinforcer for him.  With his larger body mass and regular feeding schedule, there was no motivation for him to consume any additional food.  It's easy to forget that in the experiments from which these concepts developed, the animals involved were bordering on starvation.  The risk of not eating is a powerful motivator for these animals to learn in the experimental setting.  My cat Alphonse was under no such risk.  He was going to be fed whether he played along with my games or not.  I've since learned that Alphonse responds much better to training when there's catnip involved.

The key to successful training is very much dependent on being able to identify a suitable reinforcer.  What functions as a reinforcer varies widely from subject to subject.  With animal studies, survival makes for a universal reinforcer, as the reward of living to procreate is (almost) always worth the risk.  However, humans follow a slightly different set of rules because our survival is seldom in question.  We're also unique in the animal kingdom because we can communicate and learn from others' experiences.  In a typical classroom situation, the ratio between risk and reward takes on greater significance.  We're faced with such an overabundance of information about the world that we can't possibly learn it all.  Instead of maximizing performance on a test (the desired outcome), a common alternative is for students to minimize the risk of disappointment.  It's often much easier for a student to declare "I'm bad at math" than to go through the effort of actually trying to learn a new skill.  Rather than taking the high-risk choice of studying for the test with only a moderate payoff (a grade), these students opt for a low-risk, low-payoff option by simply choosing not to care about the exam.  When looked at from a risk/reward perspective, maybe these students are better at math than they're willing to admit.

The solution, as I discovered through #mathchat, is to lower the risks and adjust the rewards.  I've started working on making my courses more forgiving of mistakes and acknowledging them as an integral part of the learning process.  I've started working on increasing the amount of social interaction I have with students and trying to be a better coach during the learning process.  There's no denying that I still have much to learn as a teacher, but thanks to #mathchat I have a clearer idea of how to move forward.  For me to progress as a teacher, I need to be more comfortable taking risks.  It's far too easy to fall into the habit of teaching the same class the same way, over and over.  I need to do a better job of adapting to different audiences and trying new things in my classes.  Fortunately, there's a never-ending stream of new ideas on Twitter that I'm exposed to on a regular basis thanks to my "Personal Learning Network".

I feel it's a crucial time for me to be sharing this perspective on the role of risk in learning.  There seems to be a rapidly growing gap between teachers and politicians on the direction of educational policies.  There's a political culture in the US that is obsessed with assessment. Policies like Race-to-the-Top and No Child Left Behind emphasize standardized testing and value-added measures over the quality of interpersonal relations.  The problem with these assessment methods is that they don't take the inherent risks of learning into consideration.  Risk is notoriously difficult to measure and it doesn't fit nicely into the kinds of equations being used to distribute funding to schools.

There was recently a backlash of (Badass) teachers on Twitter using the #EvaluateThat hashtag to post stories of how our assessment methods fail to capture the impact teachers make in the lives of their students.  Teachers are the ones that witness the risks faced by students up close.  It's our job as teachers to identify those risks and take steps to manage them so that the student can learn in a safe environment.  As the stories on #EvaluateThat show, many teachers go above and beyond expectations to help at-risk students.

While teachers struggle to reduce risks, policy makers continue to increase them through more high-stakes exams.  At times it almost seems like politicians are deliberately trying to undermine teachers.  Maybe what we need in education policy is a shift in the vocabulary.  Let's stop worrying so much about "increasing performance outcomes" and instead focus on "decreasing risk factors".  Doing so would encourage a more comprehensive approach to empowering students.  For example, there's strong statistical evidence that poverty severely hinders student success.  By addressing the risks outside of the classroom, we can enable students to take more risks inside the classroom.

2012/09/27

Profile of an "undecided" voter: Nader, Arrow, Nolan, Flux, Aikido and Metagaming the Vote in 2012

Filed under: Math,Politics — Ryan Ruff @ 14:09

Hello! My name's Ryan and I'm an "undecided" voter.

No, it's not what you think.

I'm not undecided between these guys:

There's no way in hell I'm voting for Romney.

I'm not an idiot as Bill Maher not-so-subtly suggested last week. (It's okay Bill, I can take a joke)

I'm undecided between these guys (and gal):

Mathematician and author John Allen Paulos described the situation a little more elegantly:

I'd like to believe that I fall into the "unusually thoughtful" category and wanted to share my perspective.

FULL DISCLOSURE: This is my personal blog and obviously biased by my opinions. I'm a member of the Green Party and have made a "small value" donation to the Stein campaign. Despite my party membership, I try to vote based on the issues and not the party. I voted for Obama in 2008 and voted for Ron Paul in the 2012 GOP primary. While I'm not technically an "independent" due to my affiliation with the Greens, I'm probably about as close to one as it gets.

Let's start with a little historical background and work our way forward from there.

My first voting experience was in the 2000 election. I didn't like either Gore or Bush, and ended up gravitating towards the Nader campaign. His positions on the issues most closely aligned with my own, so I did what seemed like the most rational thing to do at the time. I voted for him.

After the election, Nader (and the Green Party in general) received a large amount of criticism from Democrats for "spoiling" the election. The Democrats argued that votes cast for Nader in key states like Florida would otherwise have been cast for Gore. The counterargument is that Bush v. Gore was decided by the Supreme Court, but I won't get into that.

From my perspective, my vote for Nader in this election could not be counted as a "spoiler". I was living in California at the time, and the odds of California's votes in the Electoral College going to Bush in the 2000 were negligible. My vote for Nader was completely "safe" and allowed me to voice my opinion about the issues I cared about. However, this notion of a "spoiler vote" forever changed how I thought about my voting strategy.

Independence of Irrelevant Alternatives

In the 1950s, economist Kenneth Arrow conducted a mathematical analysis of several voting systems. The result, now known as Arrow's Impossibility Theorem, proved that no ranked voting system could satisfy all of the following conditions for a "fair" election system:

1. It accounts for the preferences of multiple voters, rather than a single individual
2. It accounts for all preferences among all voters
3. The group's ranking of any two options should depend only on individuals' rankings of those two options
4. An individual should never hurt the chances of an outcome by rating it higher
5. Every possible societal preference should be achievable by some combination of individual votes
6. If every individual prefers a certain option, the overall result should reflect this

Arrow was largely concerned with ranked voting systems, such as Instant Run-off Voting, and proved that no such ranking system could ever satisfy all of these conditions. There are non-ranked voting systems, such as score voting, that meet most of these conditions. The condition of particular interest here, which our present system doesn't meet, is number 3. It goes by the technical name of Independence of Irrelevant Alternatives. The idea is that the outcome of a vote should not be affected by the inclusion of additional candidates. In other words, there should never be a "spoiler effect".

What I find interesting here is that the very mechanics of our voting system lead to a situation where the outcome of elections is controlled by a two-party system. It forces citizens to vote tactically for the "lesser of two evils", while from my perspective both of those "evils" have gotten progressively worse. George Washington warned of this outcome in his farewell address:

However [political parties] may now and then answer popular ends, they are likely in the course of time and things, to become potent engines, by which cunning, ambitious, and unprincipled men will be enabled to subvert the power of the people and to usurp for themselves the reins of government, destroying afterwards the very engines which have lifted them to unjust dominion.

Until we can address the issues inherent in our voting system itself, I'm left with no choice but to vote strategically in the election. My policy for voting is a tactic of minimaxing: minimizing the potential harm while maximizing the potential gain. It's with this strategy in mind that I turn to the options of the 2012 presidential race.

Quantifying Politics

In order to apply a mathematical analysis to voting, it is first necessary to have some way of quantifying political preferences. As a method of doing so, I'll turn to the so-called Nolan Chart. An easy way to find out where you stand on the Nolan Chart is the World's Smallest Political Quiz. Here's where it places me:

Here's where I'd place the 2012 candidates:

Note that this is my subjective opinion and may not necessarily reflect the opinions of the candidates themselves. It's also important to note that this is a simplified model of political disposition. There are other models, such as the Vosem (восемь) Chart, that include more than two axes. If you were, for example, to include "ecology" as a third axis, this would place me closer to Stein than Obama and closer to Obama than Johnson. The resulting distances to each are going to vary depending on what axes you choose, so I'm just going to stick with the more familiar Nolan Chart.

Since I'm politically equidistant from each of the candidates, my minimax voting strategy would suggest that I vote for the candidate that has the highest chance of winning: Obama. However, there are many more variables to consider that might result in a different outcome. One of those variables is something I call "political flux".
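This "closest candidate" reasoning amounts to measuring distance on the chart. Here's a minimal sketch, assuming the chart's two axes are scored 0-100; the coordinates below are hypothetical placeholders for illustration, not anyone's actual quiz results:

```python
import math

def distance(a, b):
    # Straight-line distance between two (economic, personal) positions
    return math.hypot(a[0] - b[0], a[1] - b[1])

me = (40, 70)  # hypothetical (economic freedom, personal freedom) scores
candidates = {  # placeholder positions for illustration only
    "Obama":   (30, 60),
    "Romney":  (70, 30),
    "Stein":   (20, 80),
    "Johnson": (80, 80),
}
closest = min(candidates, key=lambda name: distance(me, candidates[name]))
```

With these made-up numbers, `closest` comes out as "Obama"; plug in your own quiz scores and your own reading of the candidates and the answer changes, which is the whole point of the exercise.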

Political Flux

People change. It's a well known fact of life. Changes in political opinions are no exception. If you look at the stances that Obama and Romney have made during this campaign, and compare those to their previous positions, I think you'll see a trend that looks something like this:

Obama campaigned hard left in 2008, but during his term in office his policies have shifted more towards the center. Romney campaigned in the center while he was running for governor of Massachusetts, but has shifted more towards the right during his presidential campaign. These changes are highly concerning to me, because both candidates are shifting away from my position. Thus, while Obama is closer to me on the political spectrum, the fact that he is moving away from my position makes the long term pay-offs lower than they would be if he had "stuck to his guns". In turn, this makes the 3rd party candidates a more appealing option.

I might even go so far as to suggest that this "political flux" is the reason why these 3rd party candidates are running. Statistically, their odds of winning are too low to change the outcome of the election. However, they can influence the direction of the political discourse. The more people vote for those candidates, the more likely it is that future candidates venture in those respective directions. This vote comes at a "risk" though, as those 3rd party candidates run the risk of "spoiling" the election for a less undesirable candidate. The level of this risk varies from state to state due to the electoral college system.

The Electoral College

A popular vote is not enough to win the election. The president is selected by an Electoral College, in which each state gets a number of votes (mostly) proportional to its population. For some of these states, the polls predict a pretty solid winner and loser for the presidential race. For others, the state has a tendency to lean right or left. According to The New York Times, the following states are considered a "toss-up" in the upcoming election:

• Florida
• Iowa
• North Carolina
• New Hampshire
• Ohio
• Virginia
• Wisconsin

If you are living in one of these states, the risks of voting for a third party are greater because your vote will have a higher chance of "spoiling" the election for one of the candidates. I happen to live in Virginia — one of the 2012 "battleground" states. I foresee a large number of attack ads in my near future. The big question is, is the pay-off worth the risk?

Aikido Interlude

For the past couple of months, I've been studying Aikido — a martial art that might be best described as "the way of combining forces". The idea is to blend one's movements with those of the attacker to redirect the motion of the combined system in a way that neither individual is harmed by the result. As a lowly gokyu, I still have a lot to learn about this art, but I find some of the core principles behind it rather insightful from a physical and mathematical perspective.

The basic idea is a matter of physics. If an object has a significant amount of momentum, then it takes an equal amount of momentum to stop it. However, if you apply a force that is orthogonal (perpendicular) to the direction of motion, then it's relatively easy to change the direction of motion. You don't block the attack in aikido. You redirect the attack in a way that's advantageous to your situation. You can see the basic idea in my crude drawing below:
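The same point can be made with a toy calculation; the magnitudes here are made up purely for illustration:

```python
import math

# An "attack" carrying 10 units of momentum along the x-axis.
attack = (10.0, 0.0)

# Stopping it head-on requires an equal and opposite impulse: 10 units.
stopping_effort = 10.0

# A much smaller orthogonal impulse still redirects the motion noticeably.
side_impulse = (0.0, 3.0)
redirected = (attack[0] + side_impulse[0], attack[1] + side_impulse[1])
turn_degrees = math.degrees(math.atan2(redirected[1], redirected[0]))
# Roughly a 17-degree change of direction for 3 units of effort,
# instead of the 10 it would take to stop the attack outright.
```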

The result of this is that many aikido techniques end up having a "circular" sort of appearance. In reality, it's the combination of the attacker's momentum and the orthogonal force applied by the defender that cause this. See if you can spot this in the following video of Yamada sensei:

So what does this have to do with voting?

Well consider my position on the Nolan Chart and the direction that the two major candidates are moving in. As much as I would like to shift the debate to the left, it would require a significant amount of political force and time to negate this momentum towards the right and even longer to push it in the opposite direction. It would be much more efficient to push "north" and allow the momentum to carry the political culture towards my general position.

In other words, voting for Gary Johnson might actually be the path of least resistance to my desired policies.

Metagaming the Election

Here you can start to see my predicament. Part of me wants to vote for Gary Johnson, because I think that doing so would be most likely to shift the debate in the direction I want it to go. Part of me wants to vote for Jill Stein, as doing so would help strengthen the political party that I belong to. Part of me wants to vote for Barack Obama, but only because doing so would have the greatest chance of preventing a Romney presidency. According to the latest polling data, the odds of Obama being re-elected are 4:1. Those are pretty good odds, but this is a high-stakes game. It sure would be nice if there were a way to "have my cake and eat it too".

It turns out that there is.

I can metagame the election.

The idea of metagaming is that it's possible to apply knowledge from "outside the game" to alter one's strategy in a way that increases the chance of success. In this case, I've decided to employ a strategy of vote pairing.

You see, I live in the same state as my in-laws, who traditionally vote Republican. However, despite a history of voting GOP, they're both very rational people. Romney keeps shooting himself in the foot by saying things that are downright stupid. Screen windows on airplanes? Free health care at the emergency room? The more Romney talks, the easier it becomes to convince rational people that he's unfit to be president.

After many nights of debate, we've come to the realization that we're only voting for one of the two major parties because the other party is "worse". From there, a solution presents itself: "I'll agree to not vote for Barack Obama if you agree to not vote for Mitt Romney". This agreement is mutually beneficial to both parties involved. Without this agreement, our votes just cancel each other out. With the agreement, the net benefit to each candidate is still zero but now those votes are free to be spent elsewhere. The end result is that we each have a larger impact on the presidential election without altering the outcome.

With the vote pairing secured, I'm free to vote for Stein or Johnson at my own discretion. Both of these candidates agree on what I think is the most important issue: ending our "wars" (of which there are too many to list). They differ on a number of issues, particularly on economics and the environment. Personally, I think that the Greens and Libertarians need to meet half-way on the issues for an Eco-libertarian ticket. Jill Stein needs to recognize that the US Tax Code is a mess and needs reform. Doing so can help eliminate corporate handouts, many of which go to industries that adversely affect public health. Gary Johnson needs to recognize that laissez-faire economic policies alone will not fix our broken health care system or halt the impending climate change. I'm going to be looking forward to seeing debates between Stein and Johnson which I think will highlight the complexities of these issues and hopefully identify some possible solutions.

That's great, but what can I do?

If you want to go one step further, you can Occupy the CPD. Sign the petition to tell the Commission on Presidential Debates that you think we should hear from all qualified candidates and not just the two that they think we should hear from.

Finally, research the alternative parties and join one that matches your personal beliefs. Even if you end up voting for one of the two major parties, joining a 3rd party and supporting that movement can have a significant effect on future campaigns. Here are a few links to get you started:

2012/07/16

Guild Wars 2: Mesmer Sharper Images Analysis

Filed under: Math,Video Games — Tags: , — Ryan Ruff @ 12:12

This weekend marks the 3rd Beta Weekend Event (BWE) for Guild Wars 2. I wrote a little bit about my general experiences in the first BWE, but this time I'm focusing on a very specific area of the game. In the first BWE, I was just playing the game and having fun with it. In the second BWE, I started to do a lot more "testing". In particular, one of the things I was testing was the "Sharper Images" trait.

Sharper Images (SI) is a Dueling trait that causes critical hits from Illusions to inflict bleeding for 5 seconds. This trait was bugged in the first BWE and didn't work at all. In the second BWE, it worked as described, but a second phantasm trait called "Phantasmal Haste" was bugged, resulting in some crazy damage output. This means that I didn't get a very good perspective on how these two traits would work together, but that's okay because I can do the math! In addition to seeing how the phantasm-related traits would interact together, I also wanted to find out which stats to gear for in order to maximize my damage. In order to do this, we first need some information about how damage is calculated in GW2. Assuming a level 80 character:

• Pandara_RA! at Team Legacy worked out the following formula for the base damage of an attack: $Base Damage = \frac{(Power) \cdot (Weapon Damage) \cdot (Skill Coefficient)}{Target Armor}$
• The chance of getting a critical attack is determined by the Precision above the base: $CritRate= \frac{4 + (Precision - Base)/21}{100}$
• When an attack criticals, it hits for 50% more damage plus any bonus to critical damage (Prowess). With this, we can find out the average damage of an attack using: $Direct Damage = (Base Damage) \cdot (1+(Crit Rate) \cdot (0.5+\frac{Prowess}{100}))$
• The last piece of information we need is the bleeding damage, which is dependent on condition damage (Malice). According to the GW2 wiki this is determined by $\frac{damage}{second} = 40+0.05 \cdot (Malice)$. The bleed duration of 5 seconds can be improved through stats, but only pulses once per second. This means that we can round the duration down to find the number of pulses and find the total bleed damage: $\frac{damage}{second} \cdot \lfloor duration \rfloor$
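Putting those formulas together, the per-hit math can be sketched as follows. This is a rough sketch of the formulas listed above; the base Precision value is left as a parameter since it isn't specified here, and any numbers in the tests are placeholders rather than real gear stats:

```python
import math

def base_damage(power, weapon_damage, skill_coef, target_armor):
    # Base Damage = Power * Weapon Damage * Skill Coefficient / Target Armor
    return power * weapon_damage * skill_coef / target_armor

def crit_rate(precision, base_precision):
    # CritRate = (4 + (Precision - Base) / 21) / 100
    return (4 + (precision - base_precision) / 21) / 100

def average_direct_damage(base, crit, prowess):
    # Crits hit for 50% more, plus any bonus critical damage (Prowess)
    return base * (1 + crit * (0.5 + prowess / 100))

def total_bleed_damage(malice, duration=5):
    # Bleeding ticks once per second for 40 + 0.05 * Malice per tick,
    # so partial seconds of duration are rounded down
    return (40 + 0.05 * malice) * math.floor(duration)
```

Multiplying `average_direct_damage` by a phantasm's attack rate, and adding `total_bleed_damage` weighted by the crit rate, gives the kind of rough DPS estimate used in the spreadsheet below.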

To get a rough estimate of Phantasm DPS, I put these formulas together with various equipment set-ups and trait choices. You can download this spreadsheet here. To make things simpler, I focused entirely on "Illusionary Duelist" with SI because I knew it hits 8 times every 10 seconds. I also had to make several assumptions about how certain traits would stack, and all of this is subject to change when the game is released anyway. Despite these shortcomings, I found several interesting results:

• Without any bonus condition damage, SI can add about 10%-20% damage depending on the target's armor (best against higher armor foes) when used in conjunction with Phantasmal Fury. This puts it on par with most damage traits at the adept level.
• With a skill coefficient of about 0.5 (a total guess BTW), the direct damage builds and condition damage builds I tried seem to even out in terms of potential damage. A lower skill coefficient tends to favor condition damage and a higher one favors direct damage.
• The Chaotic Transference bonus seems lackluster relative to the heavy investment.
• Phantasmal Strength and Empowered Illusions complement each other well in a power Build, but the investment for Phantasmal Strength doesn't seem worth it in a condition damage build.
• Phantasmal Haste tends to work better with a condition damage build than a power build. You don't need to hit hard with SI, you just need to hit often.
• Investing 20 points into Domination can have a big effect on condition damage builds because it extends bleeds for an extra tick. This makes Lyssa's Runes a potentially interesting choice with SI because of the +10% condition duration, allowing you to spend 10 of those points from Domination elsewhere with minimal DPS loss.
• The Rampager jewelry seems to be a better choice than Rabid for a condition damage build with SI. There's no point to having strong bleeds if you aren't applying them frequently enough.

There's still a lot more analysis to be done here and some empirical data to collect in BWE3 to verify these findings, but the results look promising. As it stands, you can make SI work in either a direct damage phantasm build or a condition damage build with the appropriate gear. Small tweaks to the skill coefficient can keep the two builds competitive if necessary. This fits with ArenaNet's philosophy of having multiple play-styles be equally viable.

I'd encourage you to try out the spreadsheet with other gear and build combinations that I didn't try. If you're feeling adventurous, you might even extend it to include skills other than iDuelist or other traits I may have overlooked. If you find out any more information about how phantasm damage is calculated I'd love to hear about it in the comments!

Happy theory-crafting!

Update: BWE3

I did a little testing during BWE3, regarding the attack rates and skill coefficients of the different phantasms. This information should help give an idea of how much each phantasm benefits from stacking Power vs stacking crit/condition damage for Sharper Images. Please note that my recharge times were approximated, and Sanek over at GW2Guru came up with somewhat different numbers. I'm including both my attack rates and his for comparison:

| Illusion | Hits | Recharge (s) | Attack Rate (hits/s) | Sanek's Recharge (s) | Sanek's Rate (hits/s) | Approx. Skill Coef. | DPS Coef. (Mine) | DPS Coef. (Sanek) |
|---|---|---|---|---|---|---|---|---|
| iDuelist | 8 | 10 | 0.800 | 7.5 | 1.067 | 0.229 | 0.183 | 0.244 |
| iSwordsman | 1 | 3 | 0.333 | 5.5 | 0.182 | 0.734 | 0.245 | 0.133 |
| iWarlock | 1 | 5 | 0.200 | 6 | 0.167 | 0.081 | 0.016 | 0.013 |
| iBerserker | 1 | 5 | 0.200 | 6 | 0.167 | 0.281 | 0.056 | 0.047 |
| iMage | 1 | 5 | 0.200 | 6.7 | 0.149 | 0.397 | 0.079 | 0.059 |
| iDefender | 1 | 3 | 0.333 | 4.5 | 0.222 | 0.131 | 0.044 | 0.029 |
| iDisenchanter | 1 | 3 | 0.333 | 4.5 | 0.222 | 0.131 | 0.044 | 0.029 |
| iWarden | 12 | 10 | 1.200 | 14 | 0.857 | 0.034 | 0.040 | 0.029 |
| swordClone | 3 | 3 | 1.000 | | | | | |
| staffClone | 1 | 1 | 1.000 | | | | | |
| scepterClone | 2 | 3 | 0.667 | | | | | |
| gsClone | 3 | 2 | 1.500 | | | | | |

Knowing that the skill coefficient for iDuelist is only 0.23, stacking for condition damage seems to be the best method to maximize damage over time with Sharper Images given a high enough crit rate to apply it consistently. As a general rule of thumb, if your crit rate is less than 50% then you should be gearing for power and if your crit rate is greater than 50% then you should be gearing for condition damage.

A few other interesting things to note:

• iSwordsman has one of the best skill coefficients of any phantasm. If you're not using Sharper Images and have a Power-oriented spec, you may want to try out the off-hand sword.
• iWarlock's DPS is pretty pitiful without conditions. I'm not sure what the bonus per condition is, but I'd recommend having two staff clones up with iWarlock since they have a much faster attack rate. Edit: 10% bonus per condition
• iWarden has a quick attack rate and an AoE attack, but remember that this Phantasm is stationary. You're very unlikely to get all 12 hits against a real player.
• iBerserker has a slow-recharge AoE attack that moves down a line. It might be possible to hit an opponent twice with this if they're running in the same direction, but I can't be sure about it.
• The Greatsword clones have the fastest attack rate of any illusion according to my tests. It seems kind of odd that the best clone for Sharper Images would be on a weapon with no innate condition damage.
• iMage has a high skill coefficient but low attack rate. At first glance, this looks like it would be better for a power build than condition build, but you should remember that he also applies Confusion on attack.
• iMage and iDisenchanter have bouncing attacks that hit three targets: 1 enemy and 2 allies. I couldn't seem to get it to hit the same enemy twice, but this is something to check for on release.
• Keep in mind that my original spreadsheet assumes that you leave your Phantasms out all the time. As of BWE3, this is no longer the optimal play-style. If you decide to go with a Power build, you'll probably get the best burst damage by using Mind Wrack right after your phantasm's first attack cycle. Likewise, Cry of Frustration can now dish out some major hurt if you're built for condition damage.

2012/05/13

5 Recent Mathematical Breakthroughs That Could Be Taught in Elementary School (but aren't)

Filed under: Education,Math — Tags: — Ryan Ruff @ 13:02

In a previous blog post, I made the claim that much of the math curriculum is ordered based on historical precedent rather than conceptual dependencies. Some parts of the math curriculum are based on the order of discovery (not always, but mostly), while other parts are taught out of pure habit: this is how I was taught, so this is how I'm going to teach. I don't think this needs to be the case. In fact, I think that this is actually a detriment to students. If we want to produce a generation of mathematicians and scientists who are going to solve the difficult problems of today, then we need to address some of the recent advances in those fields to prepare them. Students should not have to "wait until college" to hear about "Topology" or "Quantum Mechanics". We need to start developing the vocabulary for these subjects much earlier in the curriculum so that students are not intimidated by them in later years.

To this end, I'd like to propose 5 mathematical breakthroughs that are both relatively recent (compared to most of the K-12 curriculum) while also being accessible to elementary school students. Like any "Top 5", this list is highly subjective and I'm sure other educators might have differing opinions on what topics are suitable for elementary school, but my goal here is just to stimulate discussion on "what we could be teaching" in place of the present day curriculum.

#1. Graph Theory (c. 1736)

The roots of Graph Theory go back to Leonhard Euler's Seven Bridges of Königsberg in 1736. The question was whether or not you could find a path that would take you over each of the bridges exactly once.

Euler's key observation here was that the exact shapes and path didn't matter, but only how the different land masses were connected by the bridges. This problem could be simplified to a graph, where the land masses are the vertices and the bridges are the edges.

This is a great example of the importance of abstraction in mathematics, and it was the starting point for the field of Topology. The basic ideas and terminology of graph theory can be made easily accessible to younger students through construction sets like K'Nex or Tinkertoys. As students get older, these concepts can be connected to map coloring, and students will be well on their way to some beautiful 20th century mathematics.
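Euler's observation even fits in a few lines of code: count how many bridges touch each land mass, since a walk crossing every bridge exactly once exists only when zero or two land masses have an odd count. A minimal sketch, with the four land masses labeled A through D:

```python
from collections import Counter

# The seven bridges of Königsberg as edges between four land masses
bridges = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# An Euler walk needs 0 or 2 odd-degree vertices;
# Königsberg has 4, so no walk crosses each bridge exactly once.
odd = [land for land in degree if degree[land] % 2 == 1]
walk_exists = len(odd) in (0, 2)
```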

#2. Boolean Algebra (c. 1854)

The term "algebra" has developed a bad reputation in recent years. It is often referred to as a "gatekeeper" course, which determines which students go on to higher level mathematics courses and which ones do not. However, what we call "algebra" in middle/high school is actually just a subset of a much larger subject. "Algebra I" tends to focus on algebra as it appeared in al-Khwārizmī's Compendious Book on Calculation by Completion and Balancing (circa 820 AD). Consequently, algebra doesn't show up in the math curriculum until students have learned how to add, subtract, multiply and divide. It doesn't need to be this way.

In 1854, George Boole published An Investigation of the Laws of Thought, creating the branch of mathematics that bears his name. Rather than performing algebra on numbers, Boole used the values "TRUE" and "FALSE", and the basic logical operators of "AND", "OR", and "NOT". These concepts provided the foundation for circuit design and eventually lead to the development of computers. These ideas can even be demonstrated with a variety of construction toys.
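Boole's system is small enough to check exhaustively, which is itself a nice exercise. A quick sketch in Python, whose `and`, `or`, and `not` are Boole's operators directly:

```python
values = [True, False]

# The truth table for AND and OR over Boole's two values
table = [(p, q, p and q, p or q) for p in values for q in values]

# Laws of the algebra can be verified by brute force,
# e.g. De Morgan's law: not (p and q) == (not p) or (not q)
de_morgan_holds = all((not (p and q)) == ((not p) or (not q))
                      for p in values for q in values)
```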

The vocabulary of Boolean Algebra can and should be developed early in elementary school. Kindergartners should be able to understand basic logic operations in the context of statements like "grab a stuffed animal or a coloring book and crayons". As students get older, they should practice representing these statements symbolically and eventually learn how to manipulate them according to a set of rules (axioms). If we develop the core ideas of algebra with Boolean values, then perhaps it won't be as difficult when these ideas are extended to real numbers.

#3. Set Theory (c. 1874)

Set Theory has its origins in the work of Georg Cantor in the 1870s. In 1874, Cantor published a groundbreaking work in which he proved that there is more than one type of infinity (a result he later demonstrated more simply with his famous "diagonal proof"). At the heart of this proof was the idea of thinking of all real numbers as a set and trying to create a one-to-one correspondence between it and the natural numbers. This idea of mathematicians working with sets (as opposed to just "numbers") developed momentum in the late 1800s and early 1900s. Through the work of a number of brilliant mathematicians and logicians (including Dedekind, Russell, Hilbert, Peano, Zermelo, and Fraenkel), Cantor's Set Theory was refined and expanded into what we now call ZFC, or Zermelo-Fraenkel Set Theory with the Axiom of Choice. ZFC was a critical development because it formalized mathematics into an axiomatic system. This has some surprising consequences, such as Gödel's Incompleteness Theorem.

Elementary students probably don't need to adhere to the level of rigor that ZFC was striving for, but what is important is that they learn the language associated with it. This includes words and phrases like "union" ("or"), "intersection" ("and"), "for every", "there exists", "is a member of", "complement" ("not"), and "cardinality" ("size" or "number"), which can be introduced informally at first and then gradually formalized over the years. This should be a cooperative effort between Math and English teachers, developing students' ability to understand logical statements about sets such as "All basset hounds are dogs. All dogs are mammals. Therefore, all basset hounds are mammals." Relationships can be demonstrated using visual aids such as Venn diagrams. Games such as Set! can further reinforce these concepts.
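That vocabulary also maps one-to-one onto the set type built into most programming languages. A small sketch of the basset hound example:

```python
dogs = {"basset hound", "beagle"}
mammals = dogs | {"cat", "whale"}       # union ("or")
pets = {"basset hound", "cat"}

both = dogs & pets                      # intersection ("and")
non_dogs = mammals - dogs               # complement within mammals ("not")
all_dogs_are_mammals = dogs <= mammals  # "for every dog, it is a mammal"
size = len(mammals)                     # cardinality ("size")
```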

#4. Computation Theory (c. 1936)

Computation Theory developed from the work of Alan Turing in the mid 1930s. The invention of what we now call the Turing Machine was another key step in the development of the computer. Around the same time, Alonzo Church was developing a system of function definitions called lambda calculus, while Stephen Kleene and J. B. Rosser developed a similar formal system of functions based on recursion. These efforts culminated in the Church-Turing Thesis, which states that "everything algorithmically computable is computable by a Turing machine." Computation Theory concerns itself with the study of what we can and cannot compute with an algorithm.

This idea of an algorithm, a series of steps to accomplish some task, can easily be adapted for elementary school instruction. Seymour Papert has been leading this field with technologies like LOGO, which aims to make computer programming accessible to children. Another creative way of approaching this is the daddy-bot. These algorithms don't need to be written in any specific programming language. There's much to be learned from describing procedures in plain English. The important part is learning the core concepts of how computers work. In a society pervaded by computers, you can either choose to program or be programmed.
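As an illustration of how directly a plain-English procedure becomes a program, here's Euclid's ancient algorithm for the greatest common divisor, with the English description kept as a comment (the choice of example is mine, not from the post):

```python
def gcd(a, b):
    # "While the second number isn't zero, replace the pair with the
    #  second number and the remainder of dividing the first by the
    #  second. When the second number reaches zero, the first number
    #  is the greatest common divisor."
    while b != 0:
        a, b = b, a % b
    return a
```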

#5. Chaos Theory (c. 1977)

Last, but not least, is Chaos Theory -- a field of mathematics that developed independently in several disciplines over the 1900s. The phrase "Chaos Theory" didn't appear until the late 1970s, but a variety of phenomena displaying chaotic behavior were observed as early as the 1880s. The idea behind Chaos Theory is that certain dynamic systems are highly sensitive to initial conditions. Drop a shot of half-and-half into a cup of coffee and the resulting pattern is different every time. The mathematical definition is a little more technical than that, but the core idea is relatively accessible. Chaos has even found several notable references in pop culture.

The other core idea behind chaos theory is topological mixing. This could be easily demonstrated with some Play-Doh (or putty) of two or more colors. Start by combining them into a ball. Squash it flat then fold it over. Repeat it several times and observe the results.

The importance of Chaos Theory is that it demonstrates that even a completely deterministic procedure can produce results that appear random due to slight variations in the starting conditions. This can even be taken one step further by looking at procedures that generate seemingly random behavior independently of the starting conditions. We live in an age where people need to work with massive amounts of data. The idea that a simple set of rules can produce extremely complex results provides us with tools for succinctly describing that data.
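A standard textbook demonstration of this sensitivity (my choice of example, not the post's) is the logistic map: one line of arithmetic, iterated over and over:

```python
def iterate(x, steps=50):
    # The logistic map at its most chaotic setting: x -> 4x(1 - x)
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

a = iterate(0.2)
b = iterate(0.2 + 1e-9)  # a one-in-a-billion change to the start
# After 50 steps the two trajectories bear little resemblance to each
# other, even though every step was completely deterministic.
```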

Conclusion

One of the trends in this list is that these results are easy to understand conceptually but difficult to prove formally. Modern mathematicians seem to have a tendency towards formalism, which is something of a "mixed blessing". On one hand, it has provided mathematics with a firm standard of rigor that has withstood the test of time. On the other hand, the language makes some relatively simple concepts difficult to communicate to younger students. I think part of the reason for this is that the present curriculum doesn't emphasize the rules of logic and set theory that provide the foundation for modern mathematics. In the past, mathematics was driven more by intuition, but the math curriculum doesn't seem to provide adequate opportunities for students to develop this either! It might be argued that things like "new math" or "Singapore math" are helping to develop intuition, but we're still not preparing students for the mathematical formalism that they'll be forced to deal with in "Algebra I" and beyond. Logic and set theory seem like a natural way to develop this familiarity with axiomatic systems.

Observers might also note that all five of these proposed topics are related in some form or another to computer science. Computers have been a real game-changer in the field of mathematics. Proofs that were computationally impossible 500 years ago can be derived in minutes with the assistance of computers. This has also changed the role of humans in mathematics: from being the computer to solving problems using computers. We need to be preparing students for the jobs computers can't do, and my hope is that modernizing the mathematics curriculum can help accomplish this.

Do you have anything to add to this list? Have you tried any of these topics with elementary students? I'd love to hear about your experiences in the comments below.

2012/05/03

Pre-Calc Post-Calc

Filed under: Education,Math — Ryan Ruff @ 14:00

Gary Davis (@republicofmath) wrote an article that caught my attention called What's up with pre-calculus?. In it, he presents a number of different perspectives on why Pre-Calc classes have low success rates and do not adequately prepare students for Calculus.

My perspective on pre-calculus is probably far from the typical student's, but oftentimes the study of "fringe cases" like myself can provide useful information about a problem. The reason my experience with Pre-Calc was so atypical is that I didn't take it. After taking Algebra I, I had started down a path towards game programming. By the end of the following year, in which I took Geometry, this little hobby of mine hit a roadblock. I had come to the realization that in order to implement the kind of physics that I wanted in my game, I would need to take Calculus. I petitioned my counselor to let me skip Algebra II and Pre-Calc to go straight into AP Calculus. They were skeptical at first, but eventually conceded to my determination and allowed me to follow the path I had chosen.

Skipping from Geometry to Calculus meant that there were a lot of things that I needed to learn that first month that many of my peers had already covered. I had never even heard the word "logarithm" before, had no idea what e was, and had only a cursory understanding of trigonometry. These were the topics I had missed by skipping Pre-Calc, and I was fully aware of that, so I "hit the books" and learned what I needed to know about them. By the end of that first month I had caught up to the rest of the class, and by the end of the semester I would be helping other students with those very same topics.

I think the most obvious difference between myself and the "typical Calculus student" was the level of motivation. Many of the students in Calculus were there because "it would look good on a college application". I was there because I wanted to be there. A common problem throughout math education is the "When am I ever going to use this?" attitude. I already knew where I was going to use the math I was learning. I had an unfinished game at home that needed a physics system, and every new piece of information I learned in Calculus brought me one step closer to that goal. If you have ever wondered why a 4th-order Runge-Kutta method is better than Euler's method, try writing a platformer.
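For readers who haven't made that comparison, here is a minimal sketch (mine, not from any game code) integrating dy/dt = y from t = 0 to 1 with both methods; the exact answer is e ≈ 2.71828:

```python
import math

def euler(f, y0, t0, t1, n):
    # First-order Euler: one slope sample per step.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t0, t1, n):
    # Classic fourth-order Runge-Kutta: four slope samples per step.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: y   # dy/dt = y, whose exact solution is y = e^t
exact = math.e       # y(1) given y(0) = 1

print(abs(euler(f, 1.0, 0.0, 1.0, 10) - exact))  # error ~0.12
print(abs(rk4(f, 1.0, 0.0, 1.0, 10) - exact))    # error ~2e-6
```

With the same ten steps, RK4 is accurate to about six decimal places while Euler is off in the first decimal -- which is exactly why game physics that has to look right at a coarse timestep favors the higher-order method.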

The second difference was a little more subtle, but there were some conceptual differences in how I thought about exponential, logarithmic, and trigonometric functions. The constant "e" wasn't just some magic number that the textbook pulled out of thin air; it was the unique number with the property that $\frac{de^x}{dx} = e^x$ and $\int e^x dx = e^x$. When it came to sine and cosine, I would think of them like a circle while my other classmates would picture a right triangle. They would hear the word "tangent" and think "opposite over adjacent", but I thought of it more like a derivative. Sure, I had to learn the same "pre-calc" material as they did, but the context of this material was radically different.
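That defining property of e is easy to verify numerically with a central difference (a quick sketch of my own, not part of the original post):

```python
import math

h = 1e-6  # small step for the central-difference approximation
for x in (0.0, 1.0, 2.0):
    # Approximate the derivative of e^x at x and compare with e^x itself.
    numeric = (math.exp(x + h) - math.exp(x - h)) / (2 * h)
    print(x, numeric, math.exp(x))  # the last two columns agree closely
```

No other base has this property: repeat the experiment with `2**x` and the numerical derivative comes out scaled by ln(2) rather than matching the function.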

A couple years ago I suggested that Pre-Calc should be abolished. The trouble with Pre-Calculus (at least in the U.S.) is that the course needs to cover a very diverse array of topics, including exponential, logarithmic, and trigonometric functions. I would argue that these concepts are not essential to understanding the basic foundations of Calculus. The math curriculum introduces the concept of "slope" in Algebra I, which is essentially the "derivative" of a line. There's no reason why we should be sheltering students from the language of Calculus. The concepts of "rate of change" and "accumulation" can and should be connected with the words "derivative" and "integral", long before students set foot in the course we presently call Calculus. As students become more comfortable with these concepts as they relate to lines, parabolas, and polynomials, the level of complexity can gradually be stepped up. When students start to encounter things like surfaces of revolution, then they'll actually have a reason to learn trigonometry. Instead of trigonometry being the arbitrary set of identities and equations that it might appear to be in pre-calc, students might actually learn to appreciate it as a set of tools for solving problems.

I think this issue of Pre-Calc is really a symptom of a larger problem. The mathematics curriculum seems to be ordered historically rather than conceptually. I've heard Pre-Calc described as a bridge to Calculus. This makes sense when you consider the historical development of Calculus, but not when considering the best interest of students in today's society. Leibniz and Newton didn't have computers. Who needs bridges when you can fly?

2012/03/12

Measuring Rational Behavior

Filed under: Math,Politics,Religion — Ryan Ruff @ 12:20

Is "rationality" a measurable quantity?

In a previous blog post, I discussed some common logical errors that often arise in political discourse. This led to a rather interesting discussion on Twitter about political behaviors and how to model them mathematically (special thanks to @mathguide and @nesa_k!). One of the questions that came up in this discussion was how to define "rational behavior" and whether or not this is a measurable quantity. What follows is my hypothesis on "rational behavior": what it is and how to measure it.

Please keep in mind that this is just a hypothesis and I don't quite have the resources to verify these claims experimentally. If anyone has evidence to support or dispute these claims, I would certainly be interested in hearing it!

Defining "rational behavior"

Before we can begin to measure "rationality", we must first define what it means to be "rational". Merriam-Webster defines "rational" as "relating to, based on, or agreeable to reason". The Online Etymology Dictionary describes the roots of the word in the Latin rationalis, meaning "of or belonging to reason, reasonable", and ratio, meaning "reckoning, calculation, reason". It's also worthwhile to mention that ratio and rational have a distinct mathematical definition referring to the quotient of two quantities. Wikipedia suggests that this usage was based on Latin translations of λόγος (logos) in Euclid's Elements. This same Greek word lies at the root of "logic" in English.

Based on these definitions and etymology, I think it's fair to define rational behavior as "behavior based on a process of logical reasoning rather than instinct or emotion".

Even this definition is far from perfect. In the context of game theory, "rational behavior" is often defined as the process of maximizing benefits while minimizing costs. Note that by this definition, even single-celled organisms like amoebas would be considered to exhibit "rational behavior". In my opinion, this minimax-ing is a by-product of evolution by natural selection rather than evidence of "reason" as implied by the typical usage of the word "rational".

I should also clarify what I mean by "logical reasoning" in this definition. In trying to quantitatively measure rational behavior, I propose that it makes sense to use a system of fuzzy logic rather than Boolean logic. By using the Zadeh operators for "NOT", "AND", and "OR", we can develop a quantitative measure of rationality on a scale of 0 to 1. In logic, we say that an argument is considered sound if it is valid and its premises are true. Since we're using the fuzzy "AND" in this model, the rationality measure is the minimum truth value of the logical validity and the base assumptions.
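A minimal sketch of how such a measure might be computed, assuming the Zadeh operators (the function names here are my own, purely illustrative):

```python
# Zadeh operators over truth values in [0, 1].
def f_not(a):
    return 1.0 - a

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def rationality(validity, premises):
    # An argument is only as "sound" as the weaker of its logical
    # validity and its least credible premise.
    return f_and(validity, min(premises))

# A perfectly valid argument resting on a shaky premise scores low:
print(rationality(1.0, [0.9, 0.25]))  # 0.25
```

The point of the min/max operators is that a chain of reasoning never becomes more credible than its weakest link, which matches the intuition behind "soundness" above.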

Using this definition, we can also define irrational behavior as "behavior based on an invalid logical argument or false premises". I'd like to draw a distinction here by defining arational behavior as "instinctive behaviors without rational justification", to cover the amoeba case described above. An amoeba doesn't use logic to justify its actions, it just instinctively responds to the stimuli around it.

Rationalism and Language

There's an implicit assumption in the definition of "rational behavior" that I've used here, and that is that this requires some capacity for language. First-order predicate logic is a language, so the idea that "rational behavior" is language dependent should come as no surprise. In fact, the same Greek word "logos" from which "rational" is derived was also used as a synonym for "word" or "speech". The components of language are necessary for constructing a formal system, by providing a set of symbols and rules of grammar for constructing statements. Add a set of axioms (assumptions) and some rules for inference, and you'll have all the components necessary to construct a logical system.

A Dynamic Axiomatic System Model of Rational Behavior

At this point we can start to develop an axiomatic system to describe rational behavior. Using the operators of fuzzy logic and the normal rules of first-order logic, we can create an axiomatic system that loosely has the properties we would expect of "rational behavior". It's very unlikely that the human mind uses the exact rules of fuzzy logic, but they should be "close enough". We also have to consider that the basic beliefs or assumptions of a typical person vary over time. Thus, it's not enough to model rational behavior as an axiomatic system alone; we must consider how that system changes over time. In other words, this is a dynamic system.

As we go through life, we "try out" different sets of beliefs and construct hypotheses about how the world works. These form the "axioms" of our "axiomatic system". Depending on whether or not these assumptions are consistent with our experiences, we may decide to keep those axioms or reject them. When this set of assumptions contains contradictions, the result is a feeling of discomfort called cognitive dissonance. This discomfort encourages the brain to reject one of the conflicting assumptions to reach a stable equilibrium again. The dynamic system resulting from this process is what I would characterize as rational behavior.

One particularly powerful type of axiom in this system is labeling. Once a person takes a word or label and uses it to describe him or herself, the result is the attribution of a large number of personal characteristics at once. The more labels a person ascribes to, the more likely it is that a contradiction will result. Labeling has powerful social effects as well. Ingroups and outgroups can carry with them substantial rewards or risks depending on the context.

Rather than rejecting faulty axioms when confronted with cognitive dissonance, some individuals develop alternative methods of reducing the discomfort. The general term for this pattern of behavior is cognitive bias. This behavior can take a variety of different forms, but the one that is most relevant to this discussion is the confirmation bias. One of the ways in which the human brain can reduce the effects of cognitive dissonance is by filtering out information that would result in a contradiction with the base assumptions. Another relevant bias to consider is the belief bias, or the tendency to evaluate the logical validity of an argument based on a pre-existing belief about the conclusion.

Whatever form it may take, cognitive bias should be taken as evidence of "irrational behavior". Not all cognitive biases are of equal magnitude, and some arguments may rely more heavily on these biases than others. The goal here is not a Boolean "true" or "false" categorization of "rational" and "irrational", but more of a scale like the one used by PolitiFact: True, Mostly True, Half-True, Mostly False, False, Pants on Fire. The method of applying truth values in fuzzy logic makes it highly appropriate for this purpose.

Examples in Politics

Consider this clip from The Daily Show. Using this clip may seem a little biased, but it's important to remember that Jon Stewart is a comedian. Comedians have an uncanny knack for walking the fine line between "rational" and "irrational", providing an interesting perspective to work with.

In the first example, we have the issue of Rick Santorum and JFK. After reading JFK's speech on religious freedom, Santorum says that it made him want to throw up. In order to defend this statement, Santorum uses a good old-fashioned straw man argument by claiming that JFK was saying "no faith is allowed in the public square" when in fact JFK was saying "all faiths are allowed". I think Santorum's behavior here is a prime example of irrational behavior. Taking this position may very well earn him some votes with the deeply religious, but it's clear that Santorum has some problems finding consistency between his personal beliefs and the First Amendment. His position is not based on a valid logical argument, but on a physical response to the cognitive dissonance resulting from his conflicting beliefs. This example also shows the power of deeply held self-labeling behaviors like religion.

Mitt Romney made some headlines with his "NASCAR Team Owner" blunder. It would appear that Mitt Romney had gone to Daytona to try and score some points with "average Americans", but a slip of the tongue showed how out of touch he really is. To Romney's credit, his behavior here is about half-rational. His assumptions are probably something like this:

• I want people to vote for me.
• People vote for someone they can relate to.
• Most people know someone who likes NASCAR.
• I know someone who likes NASCAR.

It makes sense from a logical standpoint, but it turns out that the person Romney knows who likes NASCAR just happens to be a "team owner" instead of a "fan". This small detail makes it unlikely that people will relate to him, but at least the foundation of a logical argument is there.

This brings us back to Rick Santorum again. This time, Santorum calls President Obama a "snob" for "[wanting] every American to go to college". Not only is this comment blatantly false, but he's employing an ad hominem attack in lieu of a logical argument. This example draws a nice dichotomy between President Obama and Rick Santorum. The President is making a rational argument in favor of higher education which is well supported by evidence. By opposing this rational argument on a faulty premise, Santorum comes out of this situation looking mostly irrational. His behavior makes sense if you consider the effects of confirmation bias. Santorum believes that the President is trying to indoctrinate college students to become liberals. He believes it so thoroughly that he simply filters out any evidence that would contradict it. While most observers can hear the President say "one year of higher education or career training", Santorum doesn't. He hears the part that confirms his beliefs and filters out the rest. I'd imagine that for Santorum, listening to President Obama speak sounds something like the teacher from the Peanuts cartoons: "one year of higher education wah wah-wah wah-wah-wah". To Santorum's credit, at least he had the presence of mind to retract his "snob" statement -- even if only partially. This shows that the underlying mechanisms for rational behavior are still there, despite his frequent leaps of logic.

Conclusion

I hope I've at least managed to present a definition of "rationality" that's a little more precise than the everyday use of the term. I'm sure some people out there might disagree with the way I've rated the "rationality" of these behaviors. Different people have different experiences and consequently have different assumptions about the world. If we were to use multiple "rationality raters" and average the results, perhaps we might have a decent quantitative measure of rationality to work with.

Part of the problem with measuring rationality is the speculative nature of trying to determine someone else's assumptions. We can generally use what a person says as an indication of what they believe. It's also important to consider not only the statement, but the context in which the statement is made. In political discourse, we implicitly assume that politicians are being honest with us. They might be wrong about the facts, but this idea that they are honestly representing their own views is something that voters tend to select for. Perhaps this is why Romney is still struggling against Santorum in the primary. Santorum may have problems getting his facts straight and presenting a logical argument, but he has a habit of saying what he believes regardless of the consequences. Romney, on the other hand, says what he thinks will win him the most votes. Many voters do not vote "rationally"; they vote according to how they "feel" about the candidates. Romney may be more "rational" than Santorum, but his calculated responses cause him to lose that "feeling of honesty" that Santorum elicits from voters.

In the next article, I'll attempt to explain the origins of rational and irrational behavior. I think the key to understanding these behaviors lies in evolution by natural selection. I would argue that both rational and irrational behaviors contributed to the survival of our species, and this is why irrationality persists into the present. Stay tuned!

2012/02/13

Final Fantasy XIII-2 Clock Paradox and Hamiltonian Digraphs

Filed under: Math,Video Games — Ryan Ruff @ 22:11

I'm a long-time fan of the Final Fantasy series, going back to FF1 on the NES. In fact, I often cite FF4 (FF2 US) as my favorite game of all time. I enjoyed it so much that it inspired me to learn how to program! One of my earliest Java applets was based on a Final Fantasy game and now, 15 years later, I'm at it again.

I had a blast playing FF13, so when I heard about its sequel I had to pick it up. The game is fun and all, but I've become slightly obsessed with a particular minigame: The Clock Paradox.

The rules of the game are simple. You are presented with a "clock" with some number of buttons around it, each labeled with a number. Stepping on a button deactivates it and moves the two hands of the clock to the positions that are that number of steps away from the button in each direction. After stepping on your first button, you can only step on buttons which are pointed at by the hands of the clock. Your goal is to deactivate all of the buttons on the clock. If both hands of the clock point to deactivated buttons while active buttons still remain, then you lose and must start over.

See this minigame in action in the video below:

You may not know this about me, but I'm not a real big fan of manual "guess and check". I would rather spend several hours building a model of the clock problem and implementing a depth-first search to find the solution than spend the 5 minutes of game time trying different combinations until I find one that works. Yes, I'm completely serious. Here it is.

I think that the reason why I'm drawn to this problem is that it bears a close relation to one of the Millennium Prize Problems: P vs NP. In particular, the Clock Paradox is a special case of the Hamiltonian Path Problem on a directed graph (or digraph). We can turn the Clock Paradox into a digraph with the following construction: create a starting vertex with arcs to every position on the clock, place a vertex at each position, and finally draw two arcs from each position following the potential clock hands from that position. A Hamiltonian path is a sequence of arcs that visits each vertex exactly once. If such a path exists, then the Clock Paradox is solvable.
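That search can be sketched in a few lines of Python (a hypothetical reimplementation, not the author's simulator; `values[i]` is the number printed on button `i`):

```python
def solve_clock(values):
    """Depth-first search for an order that presses every button exactly once.

    From position i, the hands point to (i + values[i]) % n and
    (i - values[i]) % n; the next press must be one of those two.
    """
    n = len(values)

    def dfs(pos, visited, path):
        visited.add(pos)
        path.append(pos)
        if len(path) == n:
            return list(path)  # all buttons deactivated: solved
        for nxt in {(pos + values[pos]) % n, (pos - values[pos]) % n}:
            if nxt not in visited:
                result = dfs(nxt, visited, path)
                if result:
                    return result
        # Dead end: undo this press and backtrack.
        visited.remove(pos)
        path.pop()
        return None

    for start in range(n):  # the first press can be any button
        result = dfs(start, set(), [])
        if result:
            return result
    return None

print(solve_clock([1, 1, 1, 1]))  # prints one valid pressing order
```

This is plain exponential backtracking, which is fine for the handful of buttons the game uses; it makes no attempt at the polynomial-time question raised below.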

This little minigame raises several serious mathematical questions:

• What percentage of the possible Clock Paradoxes are solvable?
• Is there a faster method of solving the Clock Paradox? Can it be done in polynomial time, or is it strictly exponential?
• Is there any practical advice topology can offer to help players solve these puzzles?
• Is there anything these puzzles can teach us about the general Hamiltonian Path Problem?

I don't claim to know the answers, but I would offer the following advice: see if you can identify a node with only one way in or out. If you can, then you know that you'll need to start or end there. If all else fails, you can always cheat by plugging it into my sim!

That's all I have for today. Maybe there will be some rigged chocobo races in the future... kupo.

2012/01/25

The Three Axioms of Political Alogic

Filed under: Math,Politics — Tags: — Ryan Ruff @ 07:43

I find it rather interesting that the foundations of both logic and democracy can be traced back to ancient Greece. Here in the US, we've taken the Greeks' idea of democracy and brought it to a new level, but at the same time our political discourse seems anything but logical. We owe to Aristotle the "Three classic laws of thought", which are as follows:

1. The law of identity. Any object must be the same as itself. $P \to P$
2. The law of noncontradiction. Something can't be and not be at the same time. $\neg(P \land \neg P)$
3. The law of excluded middle. Either a proposition is true, or its negation is. $P \lor \neg P$
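For ordinary Boolean truth values, all three laws can be checked mechanically by exhausting the truth table (a two-value sanity check of my own, not a formal proof):

```python
# Exhaustively verify Aristotle's three laws for a Boolean proposition P.
for P in (True, False):
    assert (not P) or P        # identity: P -> P, as material implication
    assert not (P and not P)   # noncontradiction
    assert P or not P          # excluded middle
print("All three laws hold for both truth values of P.")
```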

It's worthwhile to note that these statements are neither verifiable nor falsifiable, qualities true of any "axiom". An axiom is supposed to be a self-evident truth that gives us a starting point for a discussion. The universe described by these axioms is one where "TRUE" and "FALSE" form a dichotomy. These axioms don't handle things like quantum particles or Russell's paradox, in which things can be both true and false simultaneously. Nevertheless, they provide a useful tool for discerning truth. Politicians, however, are more concerned with "votes" than "truths". The following "Three Axioms of Political Alogic" are the negations of the "three classic laws of thought", and generally indicate situations where a politician is distorting the truth for personal gain. Although, that could change if Schrödinger's Cat decides to run for office.

The Three Axioms of Political Alogic

#1: The law of deniability

Just because something is, doesn't mean that it is.

First order (a)logic: $\neg (P \to P)$

Sometimes politicians don't have their facts straight, but that won't stop them from proclaiming that a lie is the truth. The most common form of this seems to be the denial of evolution and climate change, despite the overwhelming scientific evidence. When the majority of the population is poorly informed about scientific issues, it's much easier for a politician to appeal to these voters by reaffirming their misconceptions than it is to actually educate them. Just ask Rick Santorum.

There's a corollary to this rule: if you repeat the lie often enough, then eventually the public will believe you. The right-wing media repeatedly refers to President Obama as "Socialist" or "Muslim", despite neither being true, in the hopes of eventually convincing the public that they are true.

#2: The law of contradiction

Just because two positions contradict each other, doesn't mean you can't hold both of them simultaneously.

First order (a)logic: $P \land \neg P$

Politicians seem to have a natural immunity to cognitive dissonance, allowing them to hold two contradictory positions without feeling any guilt or embarrassment. Republicans like to call themselves "pro-life" while simultaneously supporting the death penalty -- something I never fully understood. How can one be pro-life and pro-death at the same time?

President Obama's 2012 State of the Union had a few subtle contradictions worth noting. President Obama begins by praising the General Motors bailout and goes on to speak out against bailouts near the end. He also called out "the corrosive influence of money in politics", while he himself was the largest beneficiary of Wall St donations during the 2008 campaign. When you consider that this President has built his position on the principles of compromise and cooperation, taking both sides of the issue seems to be his way of encouraging both parties to work together. Unfortunately, this strategy hasn't really worked out that well in the past.

#3: The law of the included middle

You don't need to choose between a position and its negation. You can always change your mind later.

First order (a)logic: $\neg (P \lor \neg P)$

Politicians try to appeal to the widest possible base of voters. Since the voters don't always agree with each other on a particular issue, you'll often find politicians changing their stance depending on which voters they're speaking to. This law is the "flip-flop" rule of politics. Mitt Romney is a popular example, having changed his stances on abortion, Reaganomics, and no-tax pledges. These changes make sense from a vote-maximization point of view. Romney's earlier campaign in Massachusetts required him to appeal to a moderate voter base. In the GOP Primary, he now needs to contend with the far-right wing voters. If the votes he potentially gains by changing stance outnumber the votes he'd lose from the flip-flop, then he gains votes overall. Likewise, President Obama has "flip-flopped" on some issues he campaigned on now that he's actually in office -- like single-payer healthcare versus individual mandates. Again, the President is dealing with a change in audience. "Candidate Obama" needed to appeal to the general population, while "President Obama" needs to appeal to members of Congress. He's still trying to maximize votes; it's just a different type of vote that counts now.

Parting Thoughts

This post started with a joke on Twitter about politicians' inability to do basic math or logic. After giving it some thought, perhaps they're better at math than I originally gave them credit for. They may not be able to answer simple arithmetic problems, but when it comes down to maximizing the number of votes they receive, they are actually quite skilled. They may tell bald-faced lies and flip-flop all over the place, but they do so in a way that gets them elected and keeps them there. If we want politicians to tell the "truth", then we need to start voting that way. We also need to start educating others about how to tell a "lie" from the "truth", and I hope someone finds these "Three Axioms of Political Alogic" a valuable tool for doing so.
