# 2017-08-30 (More than a) Few Words About "Computer Literacy" in the Twenty-First Century...

## 2017-08-13 Educational Technologies Here at Berkeley

https://www.icloud.com/keynote/0yKJfOMN5SvDtK_K7tjWAstcA

### Why We Are Moving to Computer Problem Sets

The Economics Department is responding to the current Berkeley budget crisis by enforcing a work speedup on our GSI section leaders:

- Instead of teaching 2 sections of 30 for a 50% GSI salary
- They will be teaching 3 sections of 25
- They are underpaid anyway
- Hence we need to offload the problem set grading part of their job onto our machines...

This is an opportunity and a challenge

- The opportunity: you have to learn (or remember) a little Python for simple programming exercises
- The challenge: you have to learn (or remember) a little Python for simple programming exercises
- The opportunity: you learn how to do more things in problem sets
- The challenge: we can demand that you do more things in problem sets
- The opportunity: you will master important tools

### Why We Are Moving to Computing

Four stages in the western European academic intellectual tradition

- The Visible College: Scribal literacy and Roman numeracy
- The Invisible College: Print literacy and Arabic numeracy
- The Republic of Letters: Enlightenment critical thinking and the scientific method
- The Unseen University...

Data Science Wizard skills as the equivalent today of the chancery hand of the fourteenth century...

### Data Science as Today's Equivalent of the Chancery Hand

Needed: anecdotes and data

- **Data** so that we can tell whether the stories we hear (and the stories we tell) are in any sense representative
- **Anecdotes**—case studies—thick description—so that we can understand what the not-atypical cases and patterns *really are*

Data Science wizard skills needed for handling Big Data (and small data)

- Datasets
- Web scraping
- Statistical analysis
- Emergence
- Simulation
- Presentation

Anecdotes

- Read stuff
- But, these days, so much to read!
- We are not just writing the history of Europe from the archive of Venetian ambassadors' reports back to the *Serenissima* anymore, are we?
    - Cf. **Leopold von Ranke** and his use of the *Relazione*: **Gino Benzoni** http://surface.syr.edu/cgi/viewcontent.cgi?article=1213&context=libassoc
- Point-and-click simply does not scale...
- Data science skills essential for wrangling what you are going to read about your anecdotes—case studies—thick description exemplars as well

Compare to the *chancery hand* of the late Medieval period

- The ability to write quickly, legibly, and formally was a huge leg up in your career
- A *chancery hand* was not absolutely essential—but very, very useful and practical
- You went to the late Medieval university to learn the "liberal arts"
    - That is, the skills needed to make your way as a free person
        - Not a serf
        - But, also, not a feudal lord and not a feudal vassal
    - Needed
        - The *trivium*: grammar, logic, and rhetoric: how to read, how to think, and how to present and persuade
        - The *quadrivium*: arithmetic, geometry, music theory, and astronomy—arithmetic, algebra (including accounting), geometry (including logistics), trigonometry (including surveying), frequency, harmony, and astronomy (including weather forecasting and navigation)
        - Plus: law, medicine, architecture, etc....
    - What were you?
        - A judge
        - A reeve or bailiff
        - A courtier
        - A theologian
        - Working in some noble's incipient bureaucracy
        - Working in some bishop's not-so-incipient bureaucracy
        - Working for a city government
        - Working for somebody in a guild merchant

### Rules for Computer Programming

Five principles:

- Half an hour a day...
- Find a program that does almost what you want to do:
    - Inspect the source code...
    - Tweak it and see if that does the job...
- Google the error message...
- Remember: Reinstall and rerun (i.e., conda install/update) is your best friend...
- One language at a time!

#### Something Wise Seth Lloyd Said to Me in May...

The discussion was about how frequently potential employers are changing the stack that they expect CUNY computer science B.A.s to have seen in the major

- Seth: "I have made it a lifelong goal to have to know only one computer language at a time..."

Seth has had some success in his career:

- Nam P. Suh Professor of Mechanical Engineering and Engineering Systems at MIT
- Director of the Center for Extreme Quantum Information Theory at MIT
- External Adjunct Fellow at the Santa Fe Institute
- Ph.D. (Physics), Rockefeller University. M.Phil. (Philosophy), Cambridge University. B.A. (Physics, *summa cum laude*), Harvard University
- Calls himself a "Quantum Mechanic"—although no blue overalls in evidence...

**Since you cannot learn every tool, make sure you learn tools that are broadly useful. Python is broadly useful**

#### Berkeley as Python (or at Least jupyter notebook) Central

Lots of Python resources around

- **Jean Mark Gawron**: Python for Social Science http://gawron.sdsu.edu/python_for_ss/course_core/book_draft/index.html
- **Python Practice**: Learning Resources http://python.berkeley.edu/resources/
- **Python Practice**: Python Community http://python.berkeley.edu/community/

Python as Least Unfriendly Thing to Learn

- Kunal Marwaha says
- Python is the best of both worlds—the functionality of a general-purpose language but with different “packages” for different disciplines…
- Python is friendly to novices. Most languages can do what you need, but efficiency depends on how quickly you can learn it…
- The user community of each language makes a big difference…
- Python is the fifth most popular programming language and is open source.
- If you have never programmed and are working on a research problem, Python is almost certainly the best language to try first…

## 2017-08-26 Let's Do Something Useful with Python: Solving Allen Downey's Bayesian Dice Problem

And, in the process, think a little bit about how to use computers—how to learn the equivalent of what learning to write in a fine chancery hand was for Oxford students back in the fourteenth century...

cf: **Allen Downey** *Think Bayes* https://github.com/AllenDowney/PythonCounterPmf/blob/master/PythonCounterPmf.ipynb

### Python Environment Setup

Get the library modules we will use...

```
import numpy as np # the numerical manipulation module...
import matplotlib.pyplot as pyplot # graphing module—imitates MATLAB...
import random # pseudo-random number generator...
from collections import Counter # see below...
```

Set the system up to show graphs in the notebook...

```
%matplotlib inline
```

### On Counter...

In Python, you feed "Counter" a list of numbers or objects (for example, "our_list_of_results") and then label what Counter gives us back (in this case, "our_counts") like this:

```
In[]: our_counts = Counter(our_list_of_results)
```

Counter then gives you what is essentially a two-column table "our_counts": The first column is a list of all the things that Counter found in our_list_of_results—all the outcomes. The second column is the number of times each of those outcomes appeared in our_list_of_results. Thus if you feed Counter this list of die rolls:

```
In[1]: our_list_of_results = [1, 2, 4, 2, 3, 2]
```

Then Counter will give us:

```
Out[1]: our_counts = {
    1: 1,
    2: 3,
    3: 1,
    4: 1
}
```

By some abuse of language, we then do not say: "'our_counts' is what is returned by the Counter function applied to 'our_list_of_results'". We say, instead: "'our_counts' is a Counter object".

IMHO, "object" is a bad word to people who are not already object-oriented computer programmers. Think, instead: "Counter" is a Python program somebody has already written. It takes a list, and burps back a table (a "dict" or "dictionary" data structure in Python) that tells us what outcomes are in the list and how many times each outcome is in the list.

The fact that the language programmers of Python have already done the work of creating the "Counter" class and including it in the "collections" module makes our work here much, much easier. We are going to roll our (virtual) dice a million times. Counting up the outcomes would be tedious, if not for "Counter".

Why are we rolling a million (virtual) dice? It is the quickest-and-easiest way to solve Allen Downey's Bayesian Dice Problem https://github.com/AllenDowney/PythonCounterPmf/blob/master/PythonCounterPmf.ipynb:

Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die, each of them with sides labeled from 1 on up sequentially. If you have ever played Dungeons & Dragons, you know what I am talking about. Suppose I select a die from the box at random, roll it, and truthfully report to you that I get a 6. What is the probability that I rolled each die?

Professor Downey solves this via creative thought and clever programming, using Python's object-oriented language structure. But we are, first, going to solve it much more easily and quickly—in literally five minutes of time spent programming—using what we might as well call: Brute Force and Massive Ignorance.

I am going to present you with both ways because together they nearly span the ways we interact with computers: getting them to do lots of really simple and dumb calculations really fast, and keeping track of the results, on the one hand; building and keeping track of very complex, sophisticated, and clever chains of reasoning, on the other. It is useful to see both sides. (And this was how I originally started to learn this stuff in AM 110 in the spring of 1979: we worked close to the bare metal of the DEC PDP-11/70 via Assembly language on the one hand; we worked as far from the bare metal as was then possible with LISP on... well... in theory, at least, it did not matter what it was on, on the other hand. What I call BFMI is not Assembly, but it is not far. The type of Python Allen Downey writes is not LISP, but it is not far either...)

## Brute Force and Massive Ignorance...

We are, first, going to solve Allen Downey's Bayesian Dice Problem https://github.com/AllenDowney/PythonCounterPmf/blob/master/PythonCounterPmf.ipynb:

Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die, each of them with sides labeled from 1 on up sequentially. If you have ever played Dungeons & Dragons, you know what I am talking about. Suppose I select a die from the box at random, roll it, and truthfully report to you that I get a 6. What is the probability that I rolled each die?

forcefully, brutally, and quickly. We are simply going to roll the (virtual) dice a million times, and we will see what results. We will then be able to say:

Hmmmm. When the result of the die roll was a six:

- 0.0% of the time the die rolled was four-sided (duh!)
- 39.4% of the time the die rolled was six-sided
- 29.6% of the time the die rolled was eight-sided
- 19.4% of the time the die rolled was twelve-sided
- 11.7% of the time the die rolled was twenty-sided

By the Central Limit Theorem, and by the hope that a million dice rolls is enough trials for the CLT to apply, we then assign those numbers as our probabilities of what die was rolled in the case in which the result of the die roll was a six.

So let's get to work: here is the program:

```
In[2]: random.seed(1)

data = [[] for x in range(20)]       # set up a list of 20 empty sublists to
                                     # track the results: one individual
                                     # sublist for each of the 20 possible
                                     # numbers from 1 to 20 that the dice
                                     # might roll...

dice = [4, 6, 8, 12, 20]             # set up a list of the five dice you have:
                                     # 4-, 6-, 8-, 12-, and 20-sided...

for i in range(1000000):             # do the following a million times...
    die = random.choice(dice)        # choose a die at random—1/5 chance of each
    result = random.randint(1, die)  # roll the chosen die
    data[result-1].append(die)       # add the number of sides of the die that
                                     # made that roll onto the sublist of the
                                     # times that number was rolled

odds = {4: [0] * 20, 6: [0] * 20,    # set up a 20 x 5 table of the relative
        8: [0] * 20, 12: [0] * 20,   # frequencies each number was rolled by each
        20: [0] * 20}                # die; the braces "{}", colons ":", and the
                                     # 4, 6, 8, 12, 20 tell Python that each column
                                     # of this table has a label: the number of
                                     # sides of the die that it corresponds to
                                     # (this is another "dict" object in Python,
                                     # if you care)...

for i in range(20):                  # for each of the 20 rolls...
    for j in dice:                   # for each of the five dice...
        odds[j][i] = (Counter(data[i])[j] /            # use Counter to count how
                      sum(Counter(data[i]).values()))  # often number i was rolled
                                                       # by die j; then divide by
                                                       # the total number of times
                                                       # i was rolled...

# and we are done: the table "odds" is what we want...
```

And here is how we print out our table of results

```
In[3]: print("MONTE CARLO DICE PROBLEM RESULTS (10^6 TRIALS)")  # print the title...
print(" ")                                  # blank line
print("roll 4-sd 6-sd 8-sd 12-sd 20-sd")    # header—which column
                                            # goes with which die
print(" ")                                  # blank line
for i in range(20):                         # loop to print 20 lines
    print(i+1, " ",
          '{:.2%}'.format(round(odds[4][i], 5)),
          '{:.2%}'.format(round(odds[6][i], 5)),
          '{:.2%}'.format(round(odds[8][i], 5)),
          '{:.2%}'.format(round(odds[12][i], 5)),
          '{:.2%}'.format(round(odds[20][i], 5)))
```

```
Out[3]: MONTE CARLO DICE PROBLEM RESULTS (10^6 TRIALS)
roll 4-sd 6-sd 8-sd 12-sd 20-sd
1 37.04% 24.81% 18.41% 12.34% 7.41%
2 37.07% 24.85% 18.32% 12.39% 7.37%
3 36.93% 24.59% 18.50% 12.39% 7.58%
4 37.04% 24.71% 18.55% 12.26% 7.44%
5 0.00% 39.36% 29.12% 19.64% 11.88%
6 0.00% 39.26% 29.32% 19.78% 11.65%
7 0.00% 0.00% 48.61% 32.15% 19.24%
8 0.00% 0.00% 48.06% 32.54% 19.40%
9 0.00% 0.00% 0.00% 62.71% 37.29%
10 0.00% 0.00% 0.00% 62.29% 37.71%
11 0.00% 0.00% 0.00% 62.45% 37.55%
12 0.00% 0.00% 0.00% 62.45% 37.55%
13 0.00% 0.00% 0.00% 0.00% 100.00%
14 0.00% 0.00% 0.00% 0.00% 100.00%
15 0.00% 0.00% 0.00% 0.00% 100.00%
16 0.00% 0.00% 0.00% 0.00% 100.00%
17 0.00% 0.00% 0.00% 0.00% 100.00%
18 0.00% 0.00% 0.00% 0.00% 100.00%
19 0.00% 0.00% 0.00% 0.00% 100.00%
20 0.00% 0.00% 0.00% 0.00% 100.00%
```

### Discussion

That took about 15 minutes of my time: five minutes to program up the million dice rolls (the program ran the first time—that almost never happens) and then count, sort, and divide to produce the odds table; five more minutes to add the comments to the main program; and a last five minutes to look up how to print out the ugly version of the odds table just above.

Oh. And computer time? For cells [1], [2], and [4], the computer came back finished before I could think of what to do next. Cell [3] took about 6 seconds to run on my machine (13" MacBook Air, Early 2015; 2.2 GHz Intel Core i7; 8 GB 1600 MHz DDR3; C1MPQ2QJG944).

Note that this table is not exactly right. The first four rows should be *identical*: the only information we get from a die roll of 1-4 is "this roll could have been produced by any of the dice". Similarly, rows 5-6 ("this roll could have been produced by any except the 4-sided die"), rows 7-8 ("this roll could not have been produced by the 4- or by the 6-sided die"), rows 9-12 ("this roll could only have been produced by the 12- or the 20-sided die"), and rows 13-20 ("yup: the die rolled has more than 12 sides") should be identical. But they are not quite. (Even a ten-million-roll run would still leave occasional jiggles up or down by 0.1% in some elements of the table: I tried it, and it took the computer a minute. If I wanted to nail the numbers—have CLT convergence down to the last tenth of a percentage point—I would have to let the microprocessor spend as much time running it as I spent programming it: 15 minutes or so and five hundred million rolls.)
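A quick back-of-the-envelope check on that convergence claim (my sketch, not part of the original post): the sampling error of an estimated frequency p after n relevant trials scales like the square root of p(1-p)/n, and only about 8/75 of a million rolls land in the 9-12 range:

```python
import math

def frequency_standard_error(p, n):
    # approximate sampling error of an estimated frequency p from n trials
    return math.sqrt(p * (1 - p) / n)

n_rolls_9_to_12 = (8 / 75) * 1_000_000   # expected number of rolls landing in 9-12
error = frequency_standard_error(0.625, n_rolls_9_to_12)
print(error)   # about 0.0015, i.e. roughly 0.15 percentage points
```

So wobbles of a tenth of a percentage point or so around 62.5%/37.5% are exactly what we should expect from a million rolls.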

How should we grasp what is going on in this odds table? Rows 13-20 are simple: one of the dice has been chosen and rolled, and the number rolled is one that could only have been rolled by the 20-sided die. So the chance the 20-sided die was the one chosen and rolled is 100%.

But what is going on in rows 9-12? A die is chosen, and the resulting roll could not have been made by the 4-, the 6-, or the 8-sided die. We thus understand why the first three numbers in each line are 0%, 0%, 0%: we know that either the 12- or the 20-sided die was the one chosen. But we know that each had an equal chance of being chosen. So why are the last two numbers not 50%, 50% but instead 62.5%, 37.5%? Because we know more than just that the die roll could not have been made by any of the 4-, 6-, or 8-sided dice. We also know, in addition, that *if* the 20-sided die were chosen *then* it gave us a relatively unlikely, relatively low roll: a 9-12 and not the 13-20 that would have been more likely had we known only (a) that the 20-sided die were chosen and (b) that it rolled a number greater than 8.

To be more concrete: 1/5 of the time the 12-sided die is rolled. 1/3 of the time that the 12-sided die is rolled, the result is a 9-12. Thus (a) 12-sided die and (b) 9-12 occurs 1/5 x 1/3 = 1/15 of the time a die is rolled. 1/5 of the time the 20-sided die is rolled. 1/5 of the time that the 20-sided die is rolled, the result is a 9-12. Thus (a) 20-sided die and (b) 9-12 occurs 1/5 x 1/5 = 1/25 of the time a die is rolled. Seeing a 9-12 thus takes place 1/15 + 1/25 = 8/75 of the time—5/75 with the 12-sided die and 3/75 with the 20-sided die. Hence 62.5% and 37.5%.
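That arithmetic is easy to check in exact fractions (a sketch of mine, not from the original post):

```python
from fractions import Fraction

p_12_and_9to12 = Fraction(1, 5) * Fraction(1, 3)  # pick the 12-sided die, roll a 9-12
p_20_and_9to12 = Fraction(1, 5) * Fraction(1, 5)  # pick the 20-sided die, roll a 9-12
p_9to12 = p_12_and_9to12 + p_20_and_9to12         # = 8/75

print(p_12_and_9to12 / p_9to12)  # 5/8, i.e. 62.5%
print(p_20_and_9to12 / p_9to12)  # 3/8, i.e. 37.5%
```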

These arguments *sound* convincing. But how do we know they are really right? How do we know that this is the way things work? We know because we rolled the (virtual) dice a million times. 5/8 of the time that a 9-12 was rolled, the die doing the rolling was the 12-sided die; only 3/8 of the time was the die doing the rolling the 20-sided die.

## Math, Clever Programming, and Deep Insight...

An alternative way to solve this Bayesian Dice Problem https://github.com/AllenDowney/PythonCounterPmf/blob/master/PythonCounterPmf.ipynb:

Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die, each of them with sides labeled from 1 on up sequentially. If you have ever played Dungeons & Dragons, you know what I am talking about. Suppose I select a die from the box at random, roll it, and truthfully report to you that I get a 6. What is the probability that I rolled each die?

is the way that Professor Allen Downey of Olin College solves it: via math, clever programming, and deep insight.

First, Allen Downey of Olin College extends Counter—building it into what we will here call a "ProbabilityMass"—by adding a "normalize" to it so that what it will contain (after "normalize") is not counts but rather frequencies that sum to one—probabilities, in other words. Then he extends ProbabilityMass, building it into what he calls a "Suite"—a suite, that is, of the subjective Bayesian probabilities of complete and non-overlapping hypotheses about the world. He does so by adding a "bayesian_update" function that takes a piece of data and uses it to update the Bayesian probabilities of the hypotheses in the Suite if you know the likelihoods of the data appearing under each of the hypotheses. Last, he extends Suite, building it into what he calls a "DiceSuite", by adding on a function calculating the likelihood that a particular roll was made by a certain die given how many sides the die has:

```
In[4]: class ProbabilityMass(Counter):
    def normalize(self):
        total = float(sum(self.values()))
        for key in self:
            self[key] /= total
    def __hash__(self):
        return id(self)
    def __eq__(self, other):
        return self is other
    def render(self):
        return zip(*sorted(self.items()))
```
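To see what "normalize" does, here is a quick check (my example, not Professor Downey's; it repeats the normalize part of the class above so the cell runs on its own):

```python
from collections import Counter

class ProbabilityMass(Counter):           # repeats just the normalize part of
    def normalize(self):                  # the class above, so this cell is
        total = float(sum(self.values())) # self-contained
        for key in self:
            self[key] /= total

pm = ProbabilityMass([1, 2, 2, 3])        # Counter counts: {1: 1, 2: 2, 3: 1}
pm.normalize()                            # ...and now they are frequencies
print(dict(pm))                           # {1: 0.25, 2: 0.5, 3: 0.25}
```

The counts 1, 2, 1 have become the frequencies 0.25, 0.5, 0.25—and they sum to one, as probabilities must.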

```
In[5]: class Suite(ProbabilityMass):
    def bayesian_update(self, data):
        for hypo in self:
            like = self.likelihood(data, hypo)
            self[hypo] = self[hypo] * like
        self.normalize()
```

```
In[6]: class DiceSuite(Suite):
    def likelihood(self, data, hypo):
        return hypo[data]  # note: if the roll "data" is greater than the
                           # number of faces on the die "hypo", then
                           # hypo[data] = 0. Otherwise hypo[data] equals
                           # one divided by the number of faces. When
                           # "likelihood" is called inside "bayesian_update",
                           # it thus produces the right multiplicative
                           # factor with which to update the hypothesis
                           # probability in the line
                           # "self[hypo] = self[hypo] * like"
```
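One detail makes "return hypo[data]" work: Counter (and hence ProbabilityMass, which inherits from it) returns 0 for a missing key instead of raising a KeyError, so a roll that is off the die automatically zeroes out that hypothesis. A quick check (my example):

```python
from collections import Counter

d4 = Counter({1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25})  # a normalized 4-sided die
print(d4[2])   # 0.25 -- the chance a 4-sided die rolls a 2
print(d4[6])   # 0 -- no key 6, so Counter reports zero rather than an error
```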

Last, he defines a function to make an object representing a k-sided die, if you feed it the number k:

```
In[7]: def make_die(num_sides):
    die = ProbabilityMass(range(1, num_sides+1))
    die.name = 'd%d' % num_sides
    die.normalize()
    return die
```

Then the programming is trivial: make the box of dice, and then one loop to calculate how the probabilities of each die are updated from their initial equal chances for each of the 20 possible numbers that might result from the die roll, and a subloop to print out the lines of the output table with percent signs:

```
In[8]: dice = [make_die(x) for x in [4, 6, 8, 12, 20]]
print("BAYESIAN UPDATE RESULTS")            # print the table...
print(" ")
print("roll 4-sd 6-sd 8-sd 12-sd 20-sd")    # header row
print(" ")
for i in range(20):                         # loop over die roll values
    dice_suite = DiceSuite(dice)            # set up the dice
    dice_suite.bayesian_update(i+1)         # probability for the current roll
    rounded = [""] * 5
    processing = list(dice_suite.values())
    for j in range(5):
        rounded[j] = round(processing[j], 5)
    print(i+1, " ",
          '{:.2%}'.format(rounded[0]),
          '{:.2%}'.format(rounded[1]),
          '{:.2%}'.format(rounded[2]),
          '{:.2%}'.format(rounded[3]),
          '{:.2%}'.format(rounded[4]))
```

```
Out[8]: BAYESIAN UPDATE RESULTS
roll 4-sd 6-sd 8-sd 12-sd 20-sd
1 37.04% 24.69% 18.52% 12.35% 7.41%
2 37.04% 24.69% 18.52% 12.35% 7.41%
3 37.04% 24.69% 18.52% 12.35% 7.41%
4 37.04% 24.69% 18.52% 12.35% 7.41%
5 0.00% 39.22% 29.41% 19.61% 11.77%
6 0.00% 39.22% 29.41% 19.61% 11.77%
7 0.00% 0.00% 48.39% 32.26% 19.36%
8 0.00% 0.00% 48.39% 32.26% 19.36%
9 0.00% 0.00% 0.00% 62.50% 37.50%
10 0.00% 0.00% 0.00% 62.50% 37.50%
11 0.00% 0.00% 0.00% 62.50% 37.50%
12 0.00% 0.00% 0.00% 62.50% 37.50%
13 0.00% 0.00% 0.00% 0.00% 100.00%
14 0.00% 0.00% 0.00% 0.00% 100.00%
15 0.00% 0.00% 0.00% 0.00% 100.00%
16 0.00% 0.00% 0.00% 0.00% 100.00%
17 0.00% 0.00% 0.00% 0.00% 100.00%
18 0.00% 0.00% 0.00% 0.00% 100.00%
19 0.00% 0.00% 0.00% 0.00% 100.00%
20 0.00% 0.00% 0.00% 0.00% 100.00%
```

### Recap

The math, clever programming, and deep insight way has given us the same table—save for the jittering caused by sampling error, because 1,000,000 dice rolls is not quite enough for the CLT to nail things to even the first decimal place—as in the BFMI section above.

How did that work? Did that go by too fast? Let's run through this again:

Allen Downey starts with the already-programmed up "Counter" from the *collections* module:

```
In[9]: result = Counter([1, 2, 3, 4, 5, 6])
result
Out[9]: Counter({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1})
```

Applying "Counter" to the list [1, 2, 3, 4, 5, 6] gets us a table (here printed by Python on one line). The 1, 2, 3, 4, 5, and 6 before the colons tell us the outcomes that are in the list (the "keys" of the Python "dict" object that Counter returns), and the 1s after the colons tell us the counts of how many times each outcome was found in the list (the "values" of the Python "dict" object that Counter returns).

Then Allen Downey extends Counter to ProbabilityMass:

```
In[10]: class ProbabilityMass(Counter):
    def normalize(self):
        total = float(sum(self.values()))
        for key in self:
            self[key] /= total
    def __hash__(self):
        return id(self)
    def __eq__(self, other):
        return self is other
    def render(self):
        return zip(*sorted(self.items()))

class Suite(ProbabilityMass):
    def bayesian_update(self, data):
        for hypo in self:
            like = self.likelihood(data, hypo)
            self[hypo] = self[hypo] * like
        self.normalize()

class DiceSuite(Suite):
    def likelihood(self, data, hypo):
        return hypo[data]
```

Last, he defines a function to make an object representing a k-sided die, if you feed it the number k:

```
In[11]: def make_die(num_sides):
    die = ProbabilityMass(range(1, num_sides+1))
    die.name = 'd%d' % num_sides
    die.normalize()
    return die
```

Now that everything is set up, let us get to work. Let us make our five dice:

```
In[12]: dice = [make_die(x) for x in [4, 6, 8, 12, 20]]
```

And let us see what we have made:

```
In[13]: dice
Out[13]:
[ProbabilityMass({1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}),
ProbabilityMass({1: 0.16666666666666666,
2: 0.16666666666666666,
3: 0.16666666666666666,
4: 0.16666666666666666,
5: 0.16666666666666666,
6: 0.16666666666666666}),
ProbabilityMass({1: 0.125,
2: 0.125,
3: 0.125,
4: 0.125,
5: 0.125,
6: 0.125,
7: 0.125,
8: 0.125}),
ProbabilityMass({1: 0.08333333333333333,
2: 0.08333333333333333,
3: 0.08333333333333333,
4: 0.08333333333333333,
5: 0.08333333333333333,
6: 0.08333333333333333,
7: 0.08333333333333333,
8: 0.08333333333333333,
9: 0.08333333333333333,
10: 0.08333333333333333,
11: 0.08333333333333333,
12: 0.08333333333333333}),
ProbabilityMass({1: 0.05,
2: 0.05,
3: 0.05,
4: 0.05,
5: 0.05,
6: 0.05,
7: 0.05,
8: 0.05,
9: 0.05,
10: 0.05,
11: 0.05,
12: 0.05,
13: 0.05,
14: 0.05,
15: 0.05,
16: 0.05,
17: 0.05,
18: 0.05,
19: 0.05,
20: 0.05})]
```

We have just asked Python to give us back, as its output in Out[13], the thing that "dice" currently is. And Python has delivered.

The Python object "dice" is rather complicated. First, it is a list—a list of 5 ProbabilityMass objects. That is what the opening "[", the closing "]", and the four commas before the second, third, fourth, and fifth occurrences of "ProbabilityMass" mean to Python: they tell it that this "dice" thing is a five-item list.

Each of the elements of the list is an instantiation—big word, but I don't know of a better one—a thing that belongs to the "ProbabilityMass" class of things. That is what the "ProbabilityMass(" and the closing ")" mean to Python. Inside each ProbabilityMass wrapper is a Python dict object—that's what the opening "{" and the closing "}" mean. The first item in the dict inside the fifth ProbabilityMass object is "1: 0.05,"—the "1" is the *key*, in this case a potential roll of this particular die, an outcome that might be obtained; the ":" tells Python that the key is over; the "0.05" is the *value*, in this case the probability that this die, when rolled, will yield the outcome that is the *key*; and the "," tells Python: on to the next *key: value* pair. The dict object inside the fifth ProbabilityMass has 20 *key: value* entries—it is a 20-sided die, after all. The dict object inside the first ProbabilityMass has 4 *key: value* entries—it is a 4-sided die.

Next let us print the table title, a blank line, the table header row telling us that the first column is the outcome of the die roll and also which column is associated with which die, and then another blank line:

```
In[14]: print("BAYESIAN UPDATE RESULTS")
print("")
print("roll 4-sd 6-sd 8-sd 12-sd 20-sd")
print("")
Out[14]:
BAYESIAN UPDATE RESULTS
roll 4-sd 6-sd 8-sd 12-sd 20-sd
```

Next let us loop over all 20 possible outcome die rolls, calculating what the odds were that each roll was made by each die if each die had an equal 1/5 chance beforehand of being selected and printing out the results:

```
In[15]: for i in range(20):
    dice_suite = DiceSuite(dice)
    dice_suite.bayesian_update(i+1)
    rounded = [""] * 5
    processing = list(dice_suite.values())
    for j in range(5):
        rounded[j] = round(processing[j], 5)
    print(i+1, " ",
          '{:.2%}'.format(rounded[0]),
          '{:.2%}'.format(rounded[1]),
          '{:.2%}'.format(rounded[2]),
          '{:.2%}'.format(rounded[3]),
          '{:.2%}'.format(rounded[4]))
Out[15]:
1 37.04% 24.69% 18.52% 12.35% 7.41%
2 37.04% 24.69% 18.52% 12.35% 7.41%
3 37.04% 24.69% 18.52% 12.35% 7.41%
4 37.04% 24.69% 18.52% 12.35% 7.41%
5 0.00% 39.22% 29.41% 19.61% 11.77%
6 0.00% 39.22% 29.41% 19.61% 11.77%
7 0.00% 0.00% 48.39% 32.26% 19.36%
8 0.00% 0.00% 48.39% 32.26% 19.36%
9 0.00% 0.00% 0.00% 62.50% 37.50%
10 0.00% 0.00% 0.00% 62.50% 37.50%
11 0.00% 0.00% 0.00% 62.50% 37.50%
12 0.00% 0.00% 0.00% 62.50% 37.50%
13 0.00% 0.00% 0.00% 0.00% 100.00%
14 0.00% 0.00% 0.00% 0.00% 100.00%
15 0.00% 0.00% 0.00% 0.00% 100.00%
16 0.00% 0.00% 0.00% 0.00% 100.00%
17 0.00% 0.00% 0.00% 0.00% 100.00%
18 0.00% 0.00% 0.00% 0.00% 100.00%
19 0.00% 0.00% 0.00% 0.00% 100.00%
20 0.00% 0.00% 0.00% 0.00% 100.00%
```

But what was going on inside the loop? To see, let's go inside just the sixth iteration of the loop, and print out what dice_suite looks like as the loop proceeds. First:

```
In[16]: dice_suite = DiceSuite(dice)
dice_suite.normalize()
dice_suite
Out[16]:
DiceSuite({ProbabilityMass({1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}): 0.2,
ProbabilityMass({1: 0.16666666666666666,
2: 0.16666666666666666,
3: 0.16666666666666666,
4: 0.16666666666666666,
5: 0.16666666666666666,
6: 0.16666666666666666}): 0.2,
ProbabilityMass({1: 0.125,
2: 0.125,
3: 0.125,
4: 0.125,
5: 0.125,
6: 0.125,
7: 0.125,
8: 0.125}): 0.2,
ProbabilityMass({1: 0.08333333333333333,
2: 0.08333333333333333,
3: 0.08333333333333333,
4: 0.08333333333333333,
5: 0.08333333333333333,
6: 0.08333333333333333,
7: 0.08333333333333333,
8: 0.08333333333333333,
9: 0.08333333333333333,
10: 0.08333333333333333,
11: 0.08333333333333333,
12: 0.08333333333333333}): 0.2,
ProbabilityMass({1: 0.05,
2: 0.05,
3: 0.05,
4: 0.05,
5: 0.05,
6: 0.05,
7: 0.05,
8: 0.05,
9: 0.05,
10: 0.05,
11: 0.05,
12: 0.05,
13: 0.05,
14: 0.05,
15: 0.05,
16: 0.05,
17: 0.05,
18: 0.05,
19: 0.05,
20: 0.05}): 0.2})
```

At this point—just after its creation—"dice_suite" looks a lot like "dice" did, except that the first "ProbabilityMass(...etc....)" has had a "{" added before it, and a ": 0.2" added after its final ")"; the rest of the ProbabilityMass(...etc....) objects have had a ": 0.2" added after their final ")"s; and the last "ProbabilityMass(...etc....): 0.2" has had a "}" added after it.

All the stuff from each "P" to the following ")" is one of the dice. And each of the "0.2"s after the colons outside the five closing brace-parenthesis pairs "})" is the probability that that particular die was chosen to be rolled.

Next the loop asked Python to do:

```
In[17]: dice_suite.bayesian_update(6)
```

What does dice_suite look like now, after the update?

```
In[18]: dice_suite
Out[18]:
DiceSuite({ProbabilityMass({1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}): 0.0,
ProbabilityMass({1: 0.16666666666666666,
2: 0.16666666666666666,
3: 0.16666666666666666,
4: 0.16666666666666666,
5: 0.16666666666666666,
6: 0.16666666666666666}): 0.3921568627450981,
ProbabilityMass({1: 0.125,
2: 0.125,
3: 0.125,
4: 0.125,
5: 0.125,
6: 0.125,
7: 0.125,
8: 0.125}): 0.2941176470588236,
ProbabilityMass({1: 0.08333333333333333,
2: 0.08333333333333333,
3: 0.08333333333333333,
4: 0.08333333333333333,
5: 0.08333333333333333,
6: 0.08333333333333333,
7: 0.08333333333333333,
8: 0.08333333333333333,
9: 0.08333333333333333,
10: 0.08333333333333333,
11: 0.08333333333333333,
12: 0.08333333333333333}): 0.19607843137254904,
ProbabilityMass({1: 0.05,
2: 0.05,
3: 0.05,
4: 0.05,
5: 0.05,
6: 0.05,
7: 0.05,
8: 0.05,
9: 0.05,
10: 0.05,
11: 0.05,
12: 0.05,
13: 0.05,
14: 0.05,
15: 0.05,
16: 0.05,
17: 0.05,
18: 0.05,
19: 0.05,
20: 0.05}): 0.11764705882352945})
```

Note what has changed: only the five numbers after the "):"s. Those are your subjective probabilities of which die was chosen after seeing that the die roll was 6.

The probability for the four-sided die is now zero. The "bayesian_update" function looked inside the four-sided die object—that is, inside

```
ProbabilityMass({1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25})
```

for that is Python's representation of the four-sided die—for the chance that a four-sided die rolls a 6. It found nothing. And so it then multiplied zero by the previous probability of a four-sided die equal to 0.2—the 0.2 in the

```
ProbabilityMass({1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}): 0.2
```

obtained zero, and so substituted that zero in for 0.2.

The probability for the six-sided die is now 0.3921568627450981. The "bayesian_update" function looked inside the six-sided die object—that is, inside

```
ProbabilityMass({1: 0.16666666666666666, 2: 0.16666666666666666,
3: 0.16666666666666666, 4: 0.16666666666666666,
5: 0.16666666666666666, 6: 0.16666666666666666})
```

for that is Python's representation of the six-sided die—for the chance that a six-sided die rolls a 6. It found 0.1666666666. And so it then multiplied the previous probability of a six-sided die equal to 0.2—the 0.2 in the

```
ProbabilityMass({1: 0.16666666666666666, 2: 0.16666666666666666,
3: 0.16666666666666666, 4: 0.16666666666666666,
5: 0.16666666666666666, 6: 0.16666666666666666}): 0.2
```

by 0.16666666666666666, and, after doing the analogous multiplications for the eight-, twelve-, and twenty-sided dice, normalized them by multiplying them all by 11.7647… (that is, by 1/0.085) so that they added up to the one that a proper set of probabilities needs to add up to. That is how we got the 0.3921568627450981. And, similarly, the 0.2941, the 0.1961, and the 0.1177.
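As a check on that arithmetic, the same prior-times-likelihood-then-normalize computation can be done by hand in a few lines of plain Python (a sketch that assumes nothing beyond the numbers quoted above):

```python
# Prior: each of the five dice was equally likely to be drawn from the box.
sides = [4, 6, 8, 12, 20]
priors = {n: 0.2 for n in sides}

# Likelihood of rolling a 6: zero for the four-sided die, 1/n otherwise.
likelihoods = {n: 0.0 if n < 6 else 1.0 / n for n in sides}

# Multiply each prior by its likelihood...
unnormalized = {n: priors[n] * likelihoods[n] for n in sides}

# ...then normalize so the probabilities sum to one. The total here is
# 0.085, so the normalizing constant is 1/0.085 = 11.7647...
total = sum(unnormalized.values())
posteriors = {n: p / total for n, p in unnormalized.items()}

for n in sides:
    print(n, posteriors[n])
```

Running it reproduces the 0.0, 0.3922, 0.2941, 0.1961, and 0.1176 of the table line for a roll of 6.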

Then the sixth run of the loop prints out the corresponding table line:

```
In[19]: rounded = [""] * 5
processing = list(dice_suite.values())
for j in range(5):
    rounded[j] = round(processing[j], 5)
print(6, " ",
      '{:.2%}'.format(rounded[0]),
      '{:.2%}'.format(rounded[1]),
      '{:.2%}'.format(rounded[2]),
      '{:.2%}'.format(rounded[3]),
      '{:.2%}'.format(rounded[4]))
Out[19]: 6 0.00% 39.22% 29.41% 19.61% 11.77%
```

giving the probabilities of each die after seeing that the number rolled was a six.

Putting the whole loop back together once again:

```
In[20]: for i in range(20):
    dice_suite = DiceSuite(dice)
    dice_suite.bayesian_update(i+1)
    rounded = [""] * 5
    processing = list(dice_suite.values())
    for j in range(5):
        rounded[j] = round(processing[j], 5)
    print(i+1, " ",
          '{:.2%}'.format(rounded[0]),
          '{:.2%}'.format(rounded[1]),
          '{:.2%}'.format(rounded[2]),
          '{:.2%}'.format(rounded[3]),
          '{:.2%}'.format(rounded[4]))
Out[20]:
BAYESIAN UPDATE RESULTS
roll 4-sd 6-sd 8-sd 12-sd 20-sd
1 37.04% 24.69% 18.52% 12.35% 7.41%
2 37.04% 24.69% 18.52% 12.35% 7.41%
3 37.04% 24.69% 18.52% 12.35% 7.41%
4 37.04% 24.69% 18.52% 12.35% 7.41%
5 0.00% 39.22% 29.41% 19.61% 11.77%
6 0.00% 39.22% 29.41% 19.61% 11.77%
7 0.00% 0.00% 48.39% 32.26% 19.36%
8 0.00% 0.00% 48.39% 32.26% 19.36%
9 0.00% 0.00% 0.00% 62.50% 37.50%
10 0.00% 0.00% 0.00% 62.50% 37.50%
11 0.00% 0.00% 0.00% 62.50% 37.50%
12 0.00% 0.00% 0.00% 62.50% 37.50%
13 0.00% 0.00% 0.00% 0.00% 100.00%
14 0.00% 0.00% 0.00% 0.00% 100.00%
15 0.00% 0.00% 0.00% 0.00% 100.00%
16 0.00% 0.00% 0.00% 0.00% 100.00%
17 0.00% 0.00% 0.00% 0.00% 100.00%
18 0.00% 0.00% 0.00% 0.00% 100.00%
19 0.00% 0.00% 0.00% 0.00% 100.00%
20 0.00% 0.00% 0.00% 0.00% 100.00%
```

Once all the preliminary building of the scaffolding is done—the creation of the classes and the functions and the methods—the actual program is three lines of code:

```
In[]: for i in range(20):
    dice_suite = DiceSuite(dice)
    dice_suite.bayesian_update(i+1)
```

### Discussion: Why Do Things Downey's—the *Pythonic*—Way?

Now I do not know about you, but that took me not fifteen minutes but more like six hours: to copy the program logic from Allen Downey's github repository, poke around it until I felt that I understood its logic well enough to explain it, modify it so it prints out the whole table of results rather than just line 6, and then write this up. It would have taken me a lot more than six hours if I had had to come up with it rather than just follow along in the footsteps of Allen Downey's *Think Bayes* http://www.greenteapress.com/thinkbayes/.

If it took you significantly less time, then congratulations: you have a bright future ahead of you in high-paying jobs working as an object-oriented computer programmer!

What are the reasons to do things Downey's way—a style of programming that is object-oriented and Pythonic, using clever data structures and the theoretical insight of statistics developed by the Rev. Thomas Bayes that:

$P(A|B) = \frac {P(B|A)P(A)}{P(B)}$

and other such intellectual tools, rather than "let's roll a million (virtual) dice and see"?

First, Allen Downey now has a suite of tools that he can easily adapt to answer related—and more complicated questions. What if you want to know: "What are the probabilities if my counterparty rolls 3, 6, 4, 2, 5, 4, 3"? What if you want to know: "How many die rolls do I need to see—none of which are above 12—before I can conclude that it is highly, highly unlikely that the die chosen is the twenty-sided die?"?

Second, sooner or later BFMI fails because you do not have enough brute force at your disposal. Suppose you had not one but 100 dice rolls, and not one but 100 possible ways that the initial box from which the die is chosen could be set up. We are now—if you want to nail the probabilities to even one decimal place—talking about not fifteen minutes but 2500 hours of computer time.

Third, even in the best of cases, BFMI—let's simulate it a lot of times and see what happens—requires that you know what simple thing to simulate, and the kind of knowledge behind what Downey is doing is the only thing that makes it possible to set up the right simple thing to simulate many times.

Fourth, after programming it up, someone doing it Allen Downey's way would understand and could explain why the numbers are what they are in a way that somebody doing it the BFMI way could not. Recall the discussion beginning "How should we grasp what is going on in this odds table?" above: I had to bolt that onto the BFMI run in order to give you a chance of gaining some insight into the situation. Somebody doing it Allen Downey's way would have had to learn and understand that discussion already before they could finish programming.

Fifth, the BFMI way makes my students cry, because my code reads as though it was badly translated from the FORTRAN computer language written over 1953-1957 by John Backus, Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Lois Haibt, and David Sayre https://en.wikipedia.org/wiki/Fortran#History.
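For concreteness, here is roughly what the BFMI alternative looks like—a sketch only, not the actual code from earlier in this lecture: draw a die from the box at random a great many times, roll it, keep only the trials on which a 6 came up, and count which die produced them:

```python
import random

random.seed(0)  # fix the seed so the run is reproducible
sides = [4, 6, 8, 12, 20]

counts = {n: 0 for n in sides}
kept = 0
for _ in range(1_000_000):
    die = random.choice(sides)     # the box: each die equally likely
    roll = random.randint(1, die)  # roll the chosen die
    if roll == 6:                  # condition on the observed roll
        counts[die] += 1
        kept += 1

for n in sides:
    print(n, counts[n] / kept)
```

A million trials gets the 39.22%/29.41%/19.61%/11.76% posteriors to within a few tenths of a percentage point—and illustrates the scaling problem: each extra decimal place of accuracy costs roughly a hundredfold more trials.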

### Discussion: Dangers of the *Pythonic* Way

Of course, the benefits of eschewing BFMI—tool building, a big head start on related problems, economy of computer power when it becomes scarce, understanding of what are first- and what are third-order aspects of the phenomena, and insight—apply *only if you actually understand the program suite you are running*.

If you do not, you are a mere *Code Monkey* typing uncomprehended symbols. You are, at best, in the position of the underbriefed and undertrained *Sorcerer's Apprentice*.

And everyone finds themselves in such a position sooner or later. Indeed, there is a strange affinity between fantasy magic on the one hand and computer programming on the other—especially as computer science evolves languages and frameworks that are more and more abstract, object-oriented, recursive, etc. We see this, in fact, in the original *Hacker's Dictionary*:

WIZARD n. 1. A person who knows how a complex piece of software or hardware works; someone who can find and fix his bugs in an emergency. Rarely used at MIT, where HACKER is the preferred term. 2. A person who is permitted to do things forbidden to ordinary people, e.g., a "net wizard" on a TENEX may run programs which speak low-level host-imp protocol; an ADVENT wizard at SAIL may play Adventure during the day. —Paul Dourish: THE ORIGINAL HACKER'S DICTIONARY http://www.dourish.com/goodies/jargon.html

We see this more so in the Introduction to Abelson, Sussman, and Sussman's (1993) *Structure and Interpretation of Computer Programs*—a book that actually has a wizard on its cover:

Computational processes are abstract beings that inhabit computers. As they evolve, processes manipulate other abstract things called data. The evolution of a process is directed by a pattern of rules called a program. People create programs to direct processes. In effect, we conjure the spirits of the computer with our spells.

The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform.

A computational process, in a correctly working computer, executes programs precisely and accurately. Thus, like the sorcerer's apprentice, novice programmers must learn to understand and to anticipate the consequences of their conjuring… https://mitpress.mit.edu/sicp/full-text/book/book.html

At some point knowledge decays into superstition, as the programmers lose control of what they are doing because they do not understand the system—or because the system is itself buggy, and does not behave as they were taught it would. Thus things go wrong—here we have an example: Gandalf the Grey, Python n00b, confronting unexpected behavior from a pandas.DataFrame:

or xkcd:

### Discussion: Strike a Balance

Of course, it is not one or the other: one does not have to, and one should never be bullied into, declaring permanent allegiance to BFMI or to being clever and *Pythonic*. It is both/and. It is sometimes one/sometimes the other.

We believe in rough consensus so the members of the community can help one another rather than get in one another's way; we believe in code that runs; and we believe in getting work done.

Go thou, and do likewise...

### Addendum: Gandalf the Grey Talking Shop

'I found myself suddenly faced by something that I have not met before. I could think of nothing to do but to try and put a shutting-spell on the door. I know many; but to do things of that kind rightly requires time, and even then the door can be broken by strength. As I stood there I could hear orc-voices on the other side: at any moment I thought they would burst it open. I could not hear what was said; they seemed to be talking in their own hideous language. All I caught was *ghâsh*: that is "fire". Then something came into the chamber–I felt it through the door, and the orcs themselves were afraid and fell silent. It laid hold of the iron ring, and then it perceived me and my spell. What it was I cannot guess, but I have never felt such a challenge. The counter-spell was terrible. It nearly broke me. For an instant the door left my control and began to open! I had to speak a word of Command. That proved too great a strain. The door burst in pieces. Something dark as a cloud was blocking out all the light inside, and I was thrown backwards down the stairs. All the wall gave way, and the roof of the chamber as well, I think…'

Picking up a faggot [Gandalf] held it aloft for a moment, and then with a word of command, *naur an edraith ammen!*, he thrust the end of his staff into the midst of it. At once a great spout of green and blue flame sprang out, and the wood flared and sputtered. ‘If there are any to see, then I at least am revealed to them’, he said. ‘I have written *Gandalf is here* in signs that all can read from Rivendell to the mouths of Anduin…’

### Addendum: Philosophy: Magic and Python

Seth Lloyd, 1982 First Marshall of the Alpha of Massachusetts Chapter of ΦΒΚ, Nam Pyo Suh Professor of Mechanical Engineering at MIT, Miller Fellow at the Santa Fe Institute, Director of the W.M. Keck Center for Extreme Quantum Information Theory at MIT, the director of the Program in Quantum Information at the Institute for Scientific Interchange, and author of *Programming the Universe*, says:

I have made it a principle of my life to learn as few pieces of software as possible!

My graduate students have been known to say that when I program it looks as though everything has been badly translated from FORTRAN. It is time to fix that. And the youngs say that the thing to do is Python, because here at Berkeley there is a greater concentration of Python wizards than of anything else, and in fact the greatest concentration of Python wizards in the world—a veritable Pythonic information singularity: if it can be done in Python, there is somebody within a mile who has done it; and if it can't be done in Python, there is somebody within a mile extending Python to do it.

But...

Python is an odd, extendable and extended beast. Everything is an object. Everything you do is poking objects in one way or another to get them to respond. And if you do not understand what the objects you are dealing with really are, Python will surprise you in unpleasant ways. Here is a simple example, but that there is such a simple example is something I find quite alarming:

```
In[]: eight = 8
eight + eight
Out[]: 16
In[]: eight = "8"
eight + eight
Out[]: '88'
In[]: eight = [8]
eight + eight
Out[]: [8, 8]
```
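One partial defense—my suggestion, not anything in the original—is to ask an object what it is before poking it, and to convert explicitly when in doubt:

```python
eight = "8"

# Ask Python what the name is actually bound to.
print(type(eight))             # <class 'str'>
print(isinstance(eight, int))  # False

# An explicit conversion removes the ambiguity.
print(int(eight) + int(eight))  # 16, not '88'
```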

### Addendum: On Not Positive But Negative Models for "Data Science"

As you learn about and as you learn how to do what is now called "data science", it is important—not as important as knowing how to do "data science" right, but important—to be able to recognize and critique when data science is done wrong. There is lots of information out there put forward by lazy, undertrained, cynical, self-interested, and—sometimes—malevolent people. It is important to be able to recognize it, assess it, and then be able to identify and explain what has gone wrong.

As **Drew Conway** put it in 2010 in his piece: "The Data Science Venn Diagram", we have to deal with the fact that a number of skills need to be combined to do "data science" in a way that has a positive impact on the world: hacking skills, knowledge of the field, and also appropriate math and statistics:

Hacking skills, math and stats knowledge, and substantive expertise... each... very valuable.... Being able to manipulate text files at the command-line, understanding vectorized operations, thinking algorithmically... the hacking skills... apply[ing] appropriate math and statistics methods... [but] data plus math and statistics only gets you machine learning, which is great if that is what you are interested in, but not if you are doing data science... [which] is about discovery and building knowledge.... The hacking skills plus substantive expertise danger zone... people who “know enough to be dangerous”... capable of extracting and structuring data... related to a field they know quite a bit about and... run a linear regression... lack[ing] any understanding of what those coefficients mean. It is from this part of the diagram that the phrase “lies, damned lies, and statistics” emanates, because either through ignorance or malice this overlap of skills gives people the ability to create what appears to be a legitimate analysis without any understanding of how they got there or what they have created.

Fortunately, it requires near willful ignorance to acquire hacking skills and substantive expertise without also learning some math and statistics along the way. As such, the danger zone is sparsely populated, however, it does not take many to produce a lot of damage...

I am much less optimistic than Conway. It seems to be easy—these days at least—to find people who will create a simulacrum of a data science analysis that is in fact grossly misleading, whether through laziness, undertraining, cynicism, self-interest, or—sometimes—malevolence.

### Addendum: How Allen Downey Explains It

If you want, here is how Allen Downey explains his Bayesian Dice Problem, from his book *Think Bayes* http://www.greenteapress.com/thinkbayes/:

To represent a distribution in Python, you could use a dictionary that maps from each value to its probability. I have written a class called Pmf that uses a Python dictionary in exactly that way, and provides a number of useful methods. I called the class Pmf in reference to a probability mass function, which is a way to represent a distribution mathematically. Pmf is defined in a Python module I wrote....

The following code builds a Pmf to represent the distribution of outcomes for a six-sided die:

```
pmf = Pmf()
for x in [1,2,3,4,5,6]:
    pmf.Set(x, 1/6.0)
```

Pmf creates an empty Pmf with no values. The Set method sets the probability associated with each value to 1/6.... Pmf provides a method, Normalize....

```
pmf.Normalize()....
```

Pmf uses a Python dictionary to store the values and their probabilities [note: this is confusing: the "values" of the random variable are the "keys" to the dictionary object; and the probabilities are the "values" of the dictionary object], so the values in the Pmf can be any hashable type. The probabilities can be any numerical type, but they are usually floating-point numbers....

Now that we see what elements of the framework are the same [for these problems], we can encapsulate them in an object—a Suite is a Pmf that provides `__init__`, Update, and Print:

```
class Suite(Pmf):
    """Represents a suite of hypotheses and their probabilities."""
    def __init__(self, hypo=tuple()):
        """Initializes the distribution."""
    def Update(self, data):
        """Updates each hypothesis based on the data."""
    def Print(self):
        """Prints the hypotheses and their probabilities."""....
```

This chapter presents the Suite class, which encapsulates the Bayesian update framework. Suite is an abstract type, which means that it defines the interface a Suite is supposed to have, but does not provide a complete implementation. The Suite interface includes Update and Likelihood, but the Suite class only provides an implementation of Update, not Likelihood. A concrete type is a class that extends an abstract parent class and provides an implementation of the missing methods....

Estimation: The dice problem: Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die. If you have ever played Dungeons & Dragons, you know what I am talking about. Suppose I select a die from the box at random, roll it, and get a 6. What is the probability that I rolled each die?

Let me suggest a three-step strategy for approaching a problem like this:

- Choose a representation for the hypotheses.
- Choose a representation for the data.
- Write the likelihood function.

In previous examples I used strings to represent hypotheses and data, but for the die problem I’ll use numbers. Specifically, I’ll use the integers 4, 6, 8, 12, and 20 to represent hypotheses:

```
In [3]:
from dice import Dice
suite = Dice([4, 6, 8, 12, 20])
```

And integers from 1 to 20 for the data. These representations make it easy to write the likelihood function:

```
class Dice(Suite):
    def Likelihood(self, data, hypo):
        if hypo < data:
            return 0
        else:
            return 1.0/hypo
```

Here’s how Likelihood works. If hypo < data, that means the roll is greater than the number of sides on the die. That can’t happen, so the likelihood is 0.

Otherwise the question is, “Given that there are hypo sides on the dice, what is the chance of rolling [the number] data?” The answer is 1/hypo, regardless of data.

Here is the statement that does the update (if I roll a 6):

```
In [4]: suite.Update(6)
Out[4]: 0.08500000000000002
```

And here is the posterior distribution:

```
In [5]: suite.Print()
Out[5]:
4 0.0
6 0.3921568627450979
8 0.2941176470588235
12 0.19607843137254896
20 0.11764705882352941
```

After we roll a 6, the probability for the 4-sided die is 0. The most likely alternative is the 6-sided die, but there is still almost a 12% chance for the 20-sided die.
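The abstract-type pattern Downey describes—a generic Update that defers to a subclass's Likelihood—can be sketched in self-contained form like this (an illustration of the pattern only, not Downey's actual thinkbayes module; the method names here are lowercased):

```python
class Suite(dict):
    """A distribution over hypotheses: generic Bayesian updating,
    with the likelihood left to concrete subclasses."""

    def __init__(self, hypos):
        # Start from a uniform prior over the hypotheses.
        super().__init__({h: 1.0 for h in hypos})
        self._normalize()

    def _normalize(self):
        total = sum(self.values())
        for h in self:
            self[h] /= total

    def update(self, data):
        # Prior times likelihood, then renormalize.
        for h in self:
            self[h] *= self.likelihood(data, h)
        self._normalize()


class Dice(Suite):
    def likelihood(self, data, hypo):
        # A die with hypo sides cannot roll a number above hypo.
        return 0.0 if hypo < data else 1.0 / hypo


suite = Dice([4, 6, 8, 12, 20])
suite.update(6)
for hypo, prob in suite.items():
    print(hypo, prob)
```

Rolling a 6 reproduces Downey's posterior above: 0, then roughly 0.3922, 0.2941, 0.1961, and 0.1176.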

### Addendum: Why Python?

**You Cannot Learn Every Tool**:

- Something Seth Lloyd said to me in May: "I have made it a lifelong goal to only have to know one computer language at a time..."
- He has had some success in his career: "Nam P. Suh Professor of Mechanical Engineering and Engineering Systems and Director of the Center for Extreme Quantum Information Theory at MIT. External Adjunct Fellow at the Santa Fe Institute. Ph.D. (Physics), Rockefeller University. M. Phil. (Philosophy), Cambridge University. B.A. (Physics), Harvard University..."
- Calls himself a "Quantum Mechanic"

- Since you cannot learn every tool, make sure you learn tools that are broadly useful. Python is broadly useful.

**Berkeley as Python (or, at Least, Jupyter Notebook) Central**:

- **Jean Mark Gawron**: Python for Social Science http://gawron.sdsu.edu/python_for_ss/course_core/book_draft/index.html
- **Python Practice**: Learning Resources http://python.berkeley.edu/resources/
- **Python Practice**: Python Community http://python.berkeley.edu/community/

**Python as Least Unfriendly Thing to Learn**:

**Kunal Marwaha says:**

- Python is the best of both worlds—the functionality of a general-purpose language but with different “packages” for different disciplines…
- Python is friendly to novices. Most languages can do what you need, but efficiency depends on how quickly you can learn it…
- The user community of each language makes a big difference…
- Python is the fifth most popular programming language and is open source.
- If you have never programmed and are working on a research problem, Python is almost certainly the best language to try first…

### Shoebox: Resources from Data Sciences 100

- **DS 100**: Computer Setup http://www.ds100.org/fa17/setup
- **DS 100**: Python Data Science Web Resources http://www.ds100.org/fa17/resouces
- **Justin Johnson**: Python Numpy Tutorial http://cs231n.github.io/python-numpy-tutorial/
- **Hernan Rojas**: Python 101 Sample Notebook http://nbviewer.jupyter.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/Python_101.ipynb
- **John Hunter et al.**: Pyplot tutorial—Matplotlib 2.0.2 documentation http://matplotlib.org/users/pyplot_tutorial.html#pyplot-tutorial
- **Michael Waskom**: Seaborn tutorial http://seaborn.pydata.org/tutorial.html
- **Julia Evans**: Pandas Cookbook http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/tree/master/cookbook/
- **Scott Chacon and Ben Straub**: Pro Git https://git-scm.com/book/en/v2

### Shoebox: There was a time...

There was a time, a century and more ago, when the high-tech bleeding-edge electricity sector was an important but discrete part of the economy.

Today, if one asks "where is the electricity sector? What is the impact of electricity on the economy?", the only answer is: everywhere. Electricity is no longer in any sense any sort of discrete sector. Electricity is everywhere.

So it is rapidly becoming with computers and "data science". Computers, data, simulations, emergence from the bottom up, communication—they are no longer a discrete piece of our society and our educational system. Soon they will be everywhere: soon it will make as little sense to talk about the discrete pieces of the university that make use of computers as it makes sense to talk about the discrete pieces of the university that use electricity.

**Note to Self**: View jupyter notebook from dropbox (or other) links:

- Dropbox: change: "https://www.dropbox.com/" to: "http://nbviewer.jupyter.org/urls/dl.dropbox.com/"; then load the url...
- Else: change: "https://" to "http://nbviewer.jupyter.org/"; then load the url...