Computational Political Economy

Scott de Marchi

Duke University

Fall, 1999

 

Readings

 

Required:

(1) An Introduction to Genetic Algorithms, by Melanie Mitchell (MIT, 1996).§

(2) An Introduction to Natural Computation, by Dana Ballard (MIT, 1997).§

(3) Automata and Computability, by Dexter Kozen (Springer, 1997).§

(4) Induction: Processes of Inference, Learning, and Discovery, by Holland, Holyoak, Nisbett, and Thagard (MIT, 1989).§

(5) Micromotives and Macrobehavior, by Thomas Schelling (Norton, 1978).©

(6) The Complexity of Cooperation, by Robert Axelrod (Princeton, 1997).©

(7) The Sciences of the Artificial, by Herbert Simon (MIT, 1998).©

(8) Turtles, Termites, and Traffic Jams, by Mitchel Resnick (MIT, 1999).©

 

Recommended (but very supplemental)[1]:

C++ Program Design, by Cohoon and Davidson (McGraw-Hill, 1999).§

Gödel, Escher, Bach, by Douglas Hofstadter (Vintage, 1989).©

 

© denotes an accessible book with no prerequisites (mathematical or otherwise).

§ denotes a book that involves some difficulty and requires some technical background.

 

Overview

 

Computational Political Economy is a gentle introduction to the field of computational modeling.  At root, computational work springs from a desire to conduct formal, replicable investigations of political phenomena with clearly defined assumptions and hypotheses.  It differs from more traditional formal investigations (e.g., non-cooperative game theory) due to a belief that assumptions such as complete information and unlimited computational power are unjustifiable in light of experimental and physiological research into human decision-making. 

 

Computational work typically models political actors as locally informed, boundedly rational agents, and attempts to incorporate learning, communication, and cooperation into political models.  Given these goals, computational modeling is extraordinarily interdisciplinary, and borrows freely from the fields of artificial intelligence, cognitive psychology, economics, information theory, optimization theory, and political science.  We will, of course, focus on models of political phenomena in this course, though the theoretical foundations we will cover are not narrowly targeted at political science.

 

Computational modeling also has a great deal in common with experimental work, insofar as any computational model must arrive at reasonable algorithms that govern the behavior of political agents.  Human experiments offer guidance in formulating the behavior of agents, and can often be used to restrict the range of available parameter choices in a given model.  Due to this interdependence, we will also spend some time acquiring a working knowledge of the current literature in cognitive psychology, political psychology, and experimental economics.

 

The semester will be divided into roughly two halves.  The first half will be a readings course that will examine why one might want to engage in computational modeling in the first place, and what the relative merits and challenges of the field are.  The second half of the course will focus on developing a group project that applies the principles from the first half of the course to a concrete problem in political science.

 

Finally, one should ask why this particular methodology / field of inquiry is any more or less useful than the other options on the table for a student of political science.  The first answer is that computational methods are entirely complementary to the study of econometrics or game theory, insofar as they alert you to what makes a problem difficult for analytic methods, and what alternatives exist to more customary approaches.  Simply put, if you study complex political phenomena, you may find yourself stumped, and often the only recourse is a computational model.

 

The second answer is that computational modeling is a bridge between the restrictions inherent in analytic approaches and experimental work that is often imprecise in detailing the dynamics of a given phenomenon.  Computational models relax many of the more objectionable assumptions in rational choice models, but this does not mean they lack rigor.  Rather, computational models, by their very nature, force a researcher to precisely define the rules that govern the behavior of political actors and the environments in which they exist.  Thus, computational modeling shares a similar epistemological bent with formal theoretic approaches, and should be seen as complementary, not antagonistic.

 

Annotated (and Tentative) Schedule:

 

I. Modeling, as opposed to Analytic / Formal, Empirical, or Experimental Approaches.  First week read {5}; second week read {8}.  This section will focus on epistemological concerns, and as such we will read two “light” books, insofar as they are not technical in nature.  By the end, you should have some idea of why traditional approaches are often inadequate for certain domains of problems.

II. Useful Restrictions: Cognitive Psychology.  Third week read {7}; fourth week read {4}.  One of the main problems with computational approaches is that any output of a given model is implicitly conditioned on the particular parameter instantiation.  Human experiments and research in cognitive psychology are extremely useful both in motivating computational work (i.e., supporting richer models of behavior) and in limiting the parameter space.

III. Tools for Modeling 1: Optimization.  Fifth and sixth weeks read {2}; seventh and eighth weeks read {1}.  This is where things become grittier.  We will focus on how one develops learning algorithms for agents, and why this is a difficult matter.  Two stages of thought are important: first, learning how to construct measures that order problems by their innate difficulty; second, learning how to select a search strategy given the problem type at hand.  (A minimal sketch of one such search strategy appears after this schedule.)

IV. Tools for Modeling 2: Algorithms, Agents, and Environments.  Ninth and tenth weeks read {3}.  These weeks will focus on the computational theory underlying much of what we have done to date.  It has broad applications not only for computational work, but also for theories of repeated games (for one example, see Osborne and Rubinstein).

V. Tools for Modeling 3: Robustness and Analysis of Results.  Eleventh week; notes and articles will be assigned.  Once you have a model, we will focus on the issues involved in studying its outputs.  This is not easy, because our main justification for computational work is that the problem is exceedingly difficult and thus not amenable to traditional analytic approaches.

VI. Political Science Examples.  Twelfth week read {6}.  We will take a critical look at some of the leading applications in political science; additional articles may be assigned.

VII. Nuts and Bolts of Programming / Completing a Project.  Thirteenth week through the finals period: work with me on your project.  This time will be devoted to light programming and work on your project.  We’ll all likely have to move into my office and order out.
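
To give section III some concrete flavor before the readings arrive, here is a minimal sketch (in Python, and not part of the assigned work) of the kind of search algorithm Mitchell’s book develops: a bare-bones genetic algorithm climbing a toy fitness function.  The fitness function and every parameter value below are illustrative assumptions, not anything we will hold you to.

import random

def fitness(bits):
    # Toy objective (an assumption for illustration): count the 1-bits.
    return sum(bits)

def mutate(bits, rate=0.01):
    # Flip each bit independently with small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    # Single-point crossover: a prefix of one parent, a suffix of the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def select(pop):
    # Tournament of size two: keep the fitter of two random individuals.
    x, y = random.choice(pop), random.choice(pop)
    return x if fitness(x) >= fitness(y) else y

def run(n_bits=50, pop_size=100, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

print(fitness(run()))  # at or near 50 if the search worked

The point to notice is how little the algorithm knows about the problem: it only ever evaluates fitness at candidate points, which is precisely the situation a locally informed agent faces.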

 

Grading

 

You will typically receive one assignment per week.  Each will be graded.  Your last assignment, which stretches over two or more weeks, will be weighted accordingly.  Your final grade will be the mean of these assignments (note: I drop the lowest grade).  There will be no final exam.


 

First Assignment

 

As we will be concerned with search algorithms throughout the course, your first assignment allows you to explore this area without assuming you know a programming language or any computational theory.  Your goal is to find the highest point in a landscape I will select on the Duke campus.  Your only tools are a yardstick and an algorithm you will write before next week’s class.  The particulars:

 

1) Get a yardstick.  This yardstick will represent your “local information,” i.e., how far you can see in any one direction.  When writing your algorithm to find the highest point in the landscape I assign, you may use the look command.  The syntax is:

 

look direction

 

where direction is a point on a standard analog clock face from {1,2,…,12}.  This command returns the altitude (a delta relative to your current position) of the direction you selected.  You may look in multiple directions from a fixed point and move in the “best” direction; this is the only case in which you are allowed to remember values.

 

2) You may also move, which is of course essential if you are to land upon the highest point on the landscape.  You may take one or more yardstick-sized hops in any direction by using the move command.  Its syntax is:

 

move direction #steps

 

where direction is as above and #steps is an integer representing how many yardstick lengths you will go.

 

3) Your final command is the randomize command.  It simply gives you a number between 1 and X that you may use (creatively) in lieu of any parameter value.  Its syntax is:

 

randomize X

 

where X is the upper bound of the integer that will be returned.

 

Assume that the distance between you and the highest point on the landscape is no more than 25 steps as the crow flies (this constraint is necessary given that you are not allowed to store things in memory, use an if statement, or use loops).  Devote some thought to the nature of local optima on the Duke campus.  A sketch of a simulator for these commands appears below.
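
If you want to test a candidate command list before class, here is a minimal simulator sketch in Python.  The altitude function (a single smooth peak), the starting position, and the clock-to-vector mapping are all assumptions made for illustration; the real landscape is a patch of campus and will certainly have local optima.

import math
import random

def altitude(x, y):
    # Stand-in terrain (an assumption): one smooth peak at (10, 15),
    # with coordinates measured in yardstick units.
    return -math.hypot(x - 10, y - 15)

position = [0.0, 0.0]  # current location, in yardstick units

def _unit_step(direction):
    # Map a clock direction {1, ..., 12} onto a unit vector;
    # 12 points "north" and each hour adds 30 degrees clockwise.
    theta = math.radians(30 * direction)
    return math.sin(theta), math.cos(theta)

def look(direction):
    # Altitude delta one yardstick away in the given direction.
    dx, dy = _unit_step(direction)
    return altitude(position[0] + dx, position[1] + dy) - altitude(*position)

def move(direction, steps):
    # Take `steps` yardstick-sized hops in the given direction.
    dx, dy = _unit_step(direction)
    position[0] += dx * steps
    position[1] += dy * steps

def randomize(upper):
    # Uniform integer between 1 and upper, inclusive.
    return random.randint(1, upper)

# One legal "round" under the rules: look in all twelve directions and
# move one step in the best one (the single permitted use of memory),
# then take a random hop to illustrate randomize.
move(max(range(1, 13), key=look), 1)
move(randomize(12), randomize(3))
print(position, altitude(*position))

Swap in your own sequence of look, move, and randomize calls at the bottom; remember that your actual algorithm must be a straight-line list of commands, with the look-then-move-to-the-best comparison as the only permitted use of memory.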

 

We will start class with this assignment, so come prepared.  You should have a list of commands, and they will be interpreted quite literally.  If your commands are ambiguous (i.e., I cannot determine what you are supposed to do), your assignment will be void.  If you have any questions, or want me to look at your algorithm, send it to me via email this week.  Note: you may think this is goofy, but it actually covers most of the ideas we want to discuss.



[1] Additional Texts for the Mathematically Inclined (and not at all part of this course!):

An Introduction to Kolmogorov Complexity and Its Applications, by Li and Vitányi (Springer, 1997).

An Introduction to Optimization, by Chong and Zak (Wiley, 1996).

Computational Complexity, by Christos Papadimitriou (Addison-Wesley, 1994).

Neural Networks for Pattern Recognition, by Christopher Bishop (Oxford, 1995).