Sunday, 13 December 2015

On Some Untrue Facts About Animals

[Computer-generated lies, thanks to JanusNode.]

Buzzards are the only animal whose buttocks are still evolving. 
Canaries can be trained to destroy.
Armadillos are the only animal whose muscles are parasitic.
Tamarins are going extinct because their boobs are prized for making organ donation containers. 
Voles are the only animal whose perianal areas are angelic. 
Porcupines are the only animal whose bodies can fuse. 
Lice are the only animal whose goos are composed of caffeine. 
The polecat's hairs are shaped like planks.
The largest partridge ever recorded weighed more than a cabdriver's garbage pail.
If a dog eats a kiwi it will climax.
Some jackrabbits found in Cambodia defend themselves by shooting tilapia from their vaginas.

Wednesday, 25 November 2015

On Some New Holiday Gift Ideas

My wife showed me one of those special Christmas catalogs that we receive at this time of year, listing crazy gift ideas that seem unlikely ever to sell. She suggested that JanusNode might be able to come up with some equally good products. I whipped up a quick 'Holiday Gift Idea' generator (to be included with the next release of JanusNode), and append herewith a few examples of the output.

I gift these ideas to the world at large; feel free to develop them for your own profit.
Husband's Laser Pointer Boxer shorts 
Gourmet Garden Statue, made with real dried shark dung
Programmable Coffin with genuine antique finish
Touch-screen Playable Panties
Two-person Cummerbund-Bra & Knicker Set
Bird-watcher's Flashlight with Secret Soap Dispenser 
Glow-in-the-dark Ball Gown T-shirt with engraved initials 
Wind-up Scarf with battery backup 
Fiber-optic Remote-controlled Jeans/Underwear
Automatic Flying Kipper with silly voice 
Dishwasher-safe Slippers 
Unbreakable Baby's First Laser Pointer 
WiFi-enabled Mustard Dispenser 
Edible Lunch Container 
Japanese Toilet Paper Holder with secret Ferris wheel 
Speaking Bikini/Tracksuit
Crocodile skin USB-powered Socks
Remote-controlled Fish Dispenser
World's Smallest South American Biscuit Dish with engraved initials

Saturday, 24 October 2015

On The Mathematics of Meaning

I have worked with co-occurrence models of semantics for a long time. These computational models try to bootstrap word meaning from analysis of patterns of word co-occurrence in large corpora of text. Recently, Google released a set of tools (word2vec) and associated materials for a new, and very good, kind of co-occurrence model that they have built. There is a nice explanation of the model here.

One of the things you can do with co-occurrence model word representations is subtract or add them, to see what the resultant word representation 'means' (I skip over the mathematical details since we are just here for fun). For example, in word2vec space:
king - man + woman = queen
The equality sign here has to be taken with a grain of salt; it really means 'is similar to'.

My colleague Geoff Hollis and I have been working with the word2vec model (using a smaller dictionary and a slightly different representation and similarity measure than Google). I added the ability to add fractions of representations instead of just adding or subtracting each word representation as a whole, and have spent some time looking for interesting semantic math results. I have defined '=' here as 'being in the top ten closest results' (and also restricted myself by requiring that the final result on the right of the '=' sign cannot be among the top ten closest neighbors of any of the input words on the left of that sign). This human flexibility (and the fact that I have deliberately searched for interesting results) means that this math is really a human-computer collaboration rather than a purely computational result.
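The word-vector arithmetic described above can be sketched with plain numpy. To be clear about what is illustration and what is real: the five vectors below are tiny made-up stand-ins (real word2vec vectors are learned from text and have hundreds of dimensions), and cosine similarity stands in here for whatever similarity measure a given model actually uses. The top-ten ranking with the input words excluded mirrors the '=' convention defined above.

```python
import numpy as np

# Toy stand-in vectors. These are invented for illustration only;
# real word2vec representations are learned from large corpora.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.8, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "queen": np.array([0.2, 0.8, 0.9]),
    "apple": np.array([0.5, 0.3, 0.2]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(target, exclude, k=10):
    """Rank vocabulary words by similarity to a target vector,
    excluding the input words themselves (the '=' convention above)."""
    candidates = [(w, cosine(target, v)) for w, v in vectors.items()
                  if w not in exclude]
    return sorted(candidates, key=lambda x: -x[1])[:k]

# king - man + woman, with fractional weights allowed as in the post.
target = vectors["king"] - 1.0 * vectors["man"] + 1.0 * vectors["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # 'queen' ranks first
```

With a fractional weight (say `0.7 * vectors["man"]`) the same machinery produces the `cat + 0.7 * dog` style of result listed below.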

Here are some of my most interesting results. Enjoy.
love + 0.4 * sex = friendship
love + sex = infidelity
love + 3 * sex = monogamy

murder + fun = gunplay

apple + pig = potato

cat + 0.7 * dog = poodle

despair + 0.5 * hope = frustration

wealth + 0.2 * dream + 3 * selfish = elitist

courage + 2 * stupidity - incompetence = audacity

hope + time = opportunity

logic + hope = principle

man - 2 * education = snake

tiger - cat = rhino

sex + drunken = debauchery

love + dream = passion
[Image from: Alfred Bray Kempe (1886) A Memoir of the Theory of Mathematical Form.]

Saturday, 17 October 2015

On Attaining Our Goals

 Approach the goal. 
         It is difficult to attain 
what is not there.

The words above are one of my favorite JanusNode productions. I like the idea that life is all about striving to attain goals that are really just figments of our imagination. We make up goals, and then our goals make us up.

In my history of psychology course at the University of Alberta we discuss Carl Jung, whose work addresses the weird question that has to be asked: Who made up the process by which we make up goals? Whoever or whatever controls our goal-making algorithm controls us. Jung had a labyrinthine answer to the question of where that algorithm comes from.

Jordan Peterson's (1999) book 'Maps Of Meaning: The Architecture of Belief' and Elizabeth and Paul Barber's (2004) book 'When They Severed The Earth From The Sky: How the Human Mind Shapes Myth' both discuss Jung's answer, more or less, from different perspectives. The discussions they each offer are also complex, but include noting that:

  • Humans are not very good storage devices so information gets distorted when it passes into our heads. What is incidental fades away. What is important is magnified.
  • One way to safeguard what is important when it has to be stored in a leaky human mind is to store it more than once.
  • The unknown is frightening and has to be made comprehensible, predictable, and  approachable by speaking of it using analogies to what we understand for sure, notably human needs and desires.
  • Analogies using human needs and desires require that the story be 'fleshed out a little', with the analogy-maker adding elements to make the story coherent.
  • Similar stories from different sources can be merged into a new meta-representation that can encode the gist of the similarity, giving us recognizable, stable and versatile mythic elements that are useful for coherently representing and thinking about what is important.
When we invent the goals that make us up, we use the cognitive tools that we have. Those tools include some that have been passed down to us by natural selection and others that have been passed down encoded as mythic elements in tales, poems, songs, and legends. 

Who programmed the algorithm by which we make up the figments of imagination that make us up? All of history did.

Saturday, 26 September 2015

On Prickles & Goo

I enjoy this video cartoon of Alan Watts talking about 'prickles and goo', which was made by the people who brought us South Park. I use this cartoon in my History of Psychology class to introduce the fundamental split in history and in contemporary science between the 'Neats' and the 'Fuzzies' (as philosopher Dan Dennett called them): people who best appreciate nice neat clean 'Lego-like' theories (like the Pythagoreans) and those who best appreciate non-linear, messy, 'Plasticine-like' theories (like William James).


Saturday, 5 September 2015

On Schopenhauer's Expectation-Violation Theory of Humor

 "From a scientific point of view, optimism and pessimism are alike objectionable: optimism assumes, or attempts to prove, that the universe exists to please us, and pessimism that it exists to displease us. Scientifically, there is no evidence that it is concerned with us either one way or the other."
                Bertrand Russell
                A History of Western Philosophy

Arthur Schopenhauer (pictured above looking happier than ever) is a German philosopher mainly remembered for his (1818) book, The World As Will and Idea. It's a rather dreary document, a precursor to 20th century Existentialism, written as a slow quasi-mystical, sentimental rant about how much better it is not to exist than it is to exist. This is an ancient idea of course: Sophocles could already allude to its long history, writing in his [c. 400 BCE] play Oedipus at Colonus "Never to have lived is best, ancient writers say". Schopie (as a German colleague of mine likes to call him) was allegedly not as miserable as his philosophy makes him sound: he enjoyed eating out at restaurants, having love affairs, and arguing.

My own interest in Schopie stems entirely from an unexpected interlude in The World As Will and Idea on the nature of humor. In that interlude Schopie suggests that “The cause of laughter in every case is simply the sudden perception of the incongruity between a concept and the real objects which have been thought through it in some relation”. The most common theory of humor when Schopie was writing was that incongruity itself was humorous, an idea alluded to by Aristotle (as what wasn't?) but generally attributed to Francis Hutcheson’s (1725) essays Reflections Upon Laughter. Many critics pointed out the obvious problem with this idea: some incongruities are not funny at all. The psychologist Alexander Bain wrote a great paragraph about this in his (1865) book The Emotions and the Will:

“There are many incongruities that may produce anything but a laugh. A decrepit man under a heavy burden, five loaves and two fishes among a multitude, and all unfitness and gross disproportion; an instrument out of tune, a fly in ointment, snow in May, Archimedes studying geometry in a siege, and all discordant things; a wolf in sheep's clothing, a breach of bargain, and falsehood in general; the multitude taking the law into their own hands, and everything of the nature of disorder; a corpse at a feast, parental cruelty, filial ingratitude, and whatever is unnatural; the entire catalogue of vanities given by Solomon,— are all incongruous, but they cause feelings of pain, anger, sadness, loathing, rather than mirth.”
Schopie's proposal is that it is not incongruity itself that is funny, but only incongruity between an event in the world and a pre-existing idea or expectation about that event. In particular, he proposed two forms of humor that (only partially following his own terminology) we can call conceptual bifurcation and conceptual subsumption. Humorous bifurcation occurs when an idea that was believed to be from one category turns out to belong to two categories. A pun is a good example. Humorous subsumption is pretty much the opposite: it occurs when two ideas believed to belong to distinct categories actually belong to the same category, as in the many jokes that ask how one thing is like or unlike another.

Schopie went on to suggest that what he called "the ludicrous effect" (degree of funniness) was related to the size of the relevant incongruity. Though he made no attempt to quantify this idea by controlling the size of the incongruity, I have recently done some work with some colleagues that tries to do exactly that. Since that work is currently under review, I will leave a discussion of it for a later post. We can get a vague feel for Schopie's claim by comparing frequent and infrequent word pairs that violate our expectations. For example, I find the phrase ghost llama (which has 753,000 occurrences on Google) funnier (more humorous) than the phrase ghost cat (19,300,000 occurrences), even though I do find ghost cat a little funny. It is hard to extend the idea systematically to real jokes, since they are often incongruous in complex ways that do not admit of easy quantification.

[Image adapted from: Zimmern, H. (1876) Arthur Schopenhauer: His Life and His Philosophy. London: Longmans, Green, and Co.]

Saturday, 11 July 2015

On the Fragility of Life

Today I slit my wrist. 

It was a deep cut that ran vertically, as prescribed by those who think about how best to cut your wrists. However, it wasn't on purpose. I dropped a bottle of French cider (I am sad about this) that exploded on impact and somehow slashed me as it did. 

I was worried about the location and depth of the injury (arteries and all that), so I went to emergency [which is only a few hundred metres from my house] accompanied by my son. After the adrenaline rush wore off I realized a minor wound such as mine would not make it to the front of the queue for about 12 hours so I left and went and bought some appropriate bandages. I will probably live, unless the Gods have some other fiendish plan for me tonight. 

But just now, several hours later, I have discovered that my watch (which I took off quickly so I could wrap my bleeding wrist) is broken. I realize now that the short wound stops exactly where my watch started. So perhaps the fact that I was wearing a watch stopped a slightly serious wound from being a much more serious wound. 

How fragile is life.

Saturday, 13 June 2015

On Cyber-randomized Verse Industrialization

My automatic poetry-writing program JanusNode is—like blue cheese, minimalist paintings, and the music of  Darius Milhaud—an acquired taste. I am only a little less bewildered than the rest of my family about how I acquired this obscure taste. Some people just like blue cheese, and some people just do not. Who can hope to explain why? Why do we even want to explain it? Let those who like Stilton eat it, and let the rest enjoy Kraft cheese. We have world enough for both types of people.

My fascination with '21st century digital poetry industrialization' (as JanusNode once dubbed it) goes back over three decades. For most of that time it was a mainly private obsession. JanusNode was exposed to a larger audience when I realized—years after I had opened a Twitter account and wondered what to do with it—that Twitter was the perfect medium for JanusNode. The hardest part of successful 'pata-combinatoric poem printing' (as JanusNode has also dubbed it) is keeping the semantic thread. It is surprisingly easy to get a machine to randomly generate 140 characters that occasionally make a statement that is witty or profound or funny or insightful. It is harder to coax a machine to randomly generate much longer strings that do not just seem obviously and dully random. However, JanusNode has always done that occasionally. Recently I came up with a way to constrain its vocabulary around a particular topic so as to increase the probability of semantic coherence across ever-longer strings. Here I am publishing for the first time some of JanusNode's recent longer works (length > 140).

There is a school of thought that says that we should remove the human touch entirely from aleatoric text thoughts. My attitude has always been that we should give computer authors no more and no less respect than we would give to any human author. I have treated JanusNode's productions here exactly the same way I would have treated a human being's productions if they were submitted to me: I have very occasionally imposed some minor edits, especially where it seemed obvious that there must just be a typographic error. Though I have therefore occasionally tweaked the punctuation, capitalization, conjugation, or phrase-splitting, there is no sense in which any person or machine could reasonably claim that I substantially altered the intent of the real author, which is here not me but randomness. I have treated randomness the same way an admiring editor would treat Charles Bukowski, Richard Brautigan, or Henry Miller: with sober respect for her literary talent, coupled with a realistic recognition of the fact that she might have been drunk when she wrote the text. 


[Image adapted from: William Felkin's (1867) A History of the Machine-wrought Hosiery and Lace Manufactures]

Marriage was strained by

waiting and
 and waiting and waiting and waiting
       commit to
investigating the perceived feminine characteristics
           in women
           of beauty        


 Death is the termination of
          night terror
       ontogenetic fiction
       horror vacui 



the origin
       consciousness and
       the coming back
             to fear
        to avoid
             this threat
  perceived -
     Why are
          we here?


         into sustaining the
            intimacy of
              and commitment to
              the virtue
              of the contents
              of experience


         be an
     of divine grace
pious and
  driving the secretion of a group
           of neurons


 Death is the termination of the
           attitude we take
a high stage
            of existence
    and contains
 both an
       aesthetic and extinction.


 The brain
               is characterized by progressive cognitive
 together with intensified
the universe as
       an incentive for
             the production of


SOCRATES: But were we not saying that when a thing has parts, all the parts will be a whole and all?
WITTGENSTEIN: Your question makes no more sense than dazzling science, which I have never pretended to understand.
WITTGENSTEIN: We are dazzled by an ideal and therefore fail to see the actual use of the word clearly.


Ostracizing Stabilizer

To ostracize, to ostracize, to ostracize, to centralize,
              to centralize all while you ostracize,
              to centralize the centralizer...


Write and perform a song (in the style of  'Shake, Rattle and Roll' by Big Joe Turner) about the role of fiction in psychoanalysis while singing  'Fortunate Son' by Creedence Clearwater Revival. Ask the audience to meditate on the theater. Dedicate this piece to people everywhere who are suffering from depression.


come, insightful deity,
              and perceive the supreme hours yourself,
        for when you perceive them,
           my love,
        my insightful brain turns into a generous spontaneity
        and an enviable award-winning


 The Outrageous Fingernail Of Me

  I will always remember
        my Dadaism
              joyful unveiling
        joyous entertaining
        a murderous atheist in the dainty mountain


still I must know where to look for a dream,
still I must know where to look for a dream

        I have assumed that I do not dream myself,
              I have assumed here that I do not dream myself,
                still I must know where to look for a dream.
        I have assumed here that I do not dream myself?


  The choice.
        The choice...

        Isn't it as though I were choosing?
  I no longer have any choice.

                I no longer have any choice.
  The choice.
  But should we also call it justifying an imagined choice?


  memory's loving voices        
              the ocean fill'd with joy
    for every woman too
          and love
                dear friend
                whoever you are
    take this kiss.


The Desire

    unaccustomed to a religious theory,
     separated from
  any truthful contentment,
  exist always underestimating
    in a peaceful mind
    the nothings. 


The Need

To light,
  to accept,
                to underestimate,
                to pleasure,
        to ease all
                              while you lighten,
  to love a guidance-
                did we finally only have a delightful religion?


Our world depends upon
        over-grown by the beautiful.

          am here for
  your classic


 There is nothing I hate like a bird,
     there is nothing I hate like a bird,
          silent in life escape
                roaming in thought over the universe


SOCRATES: Then we must not speak of seeing any more than of not-seeing, nor of any other perception more than of any non-perception, if all things partake of every kind of motion?
JANUSNODE: That makes me think of Nietzsche's point that a casual stroll through the lunatic asylum shows that faith does not prove anything.


  look at a cat when it stalks a bird

                    portray these acts in words

          I mean this
                            simply invites me to apply the picture I am given
                                       to imagine a form of life
                then suddenly I saw it...

Sunday, 7 June 2015

On Altering Page 4 of 'A Human Document'

As I have mentioned before on this blog, I am a huge fan of Tom Phillips' A Humument, a multi-decade art project to alter the pages of an old Victorian novel, A Human Document by W.H. Mallock. The website Venus Febriculosa recently held a competition to alter a page from A Human Document, with Tom Phillips as the judge. The excellent winning entries are here, along with a few other entries, including my own, reproduced above.

Thursday, 4 June 2015

On the Release of the Paperback Version of My Novel

The paperback version of my novel comes out on June 10, 2015. Ask for it at your local bookstore or buy it online. You can read more about it here.

Or check out these review excerpts:

"Westbury’s Bride Stripped Bare contains many layers. It has, among other elements, a road trip, an unusual love triangle, cross-dressing, anagrams, off-kilter theories of religion, a handmade chocolate grinder, and a see-how-many-balls-I-can-keep-in-the-air comic structure that adds to the novel’s overall buoyancy." —Edmonton Journal

"Part treatise on art appreciation, part humorous on-the-road tale, neuropsychologist Westbury’s debut novel offers a compelling story about the role art can play to disrupt, delay, and contribute to human engagement with the real world." —Booklist

"It’s a sweet story, and it builds inevitably to a happy ending." —Kirkus

"Clever, funny, and fun, and filled with great discussions about art" —BookRiot

"Bright aesthetic discussion amid mishap; not just for the college-nostalgic but for anyone who enjoys a rush of ideas while being entertained." —Library Journal, Top Summer Reads 2014

"Westbury, a cognitive neuropsychologist at the University of Alberta (Edmonton), has plenty to say about art and attention, about the line between sanity and mental illness, and about the nature of a well-lived life... Westbury's debut is a call to pay attention, and a reminder of the rewards of patience and open eyes." —Barnes & Noble Review

Saturday, 28 March 2015

On The Mis-cited Magic of the Number 7+/-2

 [Figure 6, reproduced from George Miller's (1956) paper  
The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.  
N.B. The y-axis in this figure is in bits, not items.]

[This post has been edited since its initial posting, 
to correct some inexactitudes in the original text]
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
                                                            - Josh Billings
One thing that everyone knows, perhaps one of the most widely cited facts of psychology, is that the capacity of short-term memory is 7+/-2 chunks, where a 'chunk' is some potentially complex (multidimensional) unit. Everyone knows that this fact was first discovered by George Miller, and that it was reported in a famous 1956 paper, The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, which often shows up as the first or second most cited paper in psychology. However, it seems very few people bother to read what Miller wrote. Although the fact attributed to the paper is mentioned in passing, it is not the main focus of the paper: Miller never pretended to have discovered it himself, and he did not really say that we could remember just 7+/-2 things.

When I read this charmingly-written paper, I was surprised by several things.

The first surprise is a rather minor one: the paper contains no new data. It is a review paper. This surprise is really neither here nor there, since it matters little who actually collected the relevant data. There are lots of empirical data in the paper.

The second surprise is that the paper is written in information-theoretic terms. This matters because Miller moves back and forth between talking about the amount of information (Shannon entropy) a person can remember and the number of items a person can remember. That these are not the same thing is one of the main points of the paper.

The third surprise in Miller's paper is that the '7+/-2' is not always referring to a number of items, but at first to a standard deviation bound. One of the main points of the paper for Miller is the fact that the number 7 (approximately) shows up all over the place. He begins by considering recognition of simple (one-dimensional) stimuli and concludes "If I take the best estimates I can get of the channel capacities for all the stimulus variables I have mentioned, the mean is 2.6 bits [= 6.3 items] and the standard deviation is only 0.6 bit [= 1.5 items]." So here we see the first appearance of 7+/-2 (sort of): Miller notes that a person can usually accurately recognize 6.3 one-dimensional items (eh, that's nearly 7) with a standard deviation (not an upper and lower bound) of 1.5 (close enough to 2, for sure, if we squint). Two standard deviations cover about 95% of a normal distribution, so if we want to state Miller's law properly with an (approximate) upper and lower bound (a 95% confidence interval), it should not be 7+/-2, but 6.5+/-3 [i.e. the average +/- (2 * the standard deviation)]. This is Miller's first '7'.
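The conversions between bits and items in this and the following paragraphs are just powers of two: a channel capacity of b bits corresponds to 2^b equally likely alternatives. A minimal sketch (my own illustration, not Miller's code; the in-text figures are rounded a little differently):

```python
import math

def bits_to_items(bits):
    """A capacity of b bits distinguishes 2**b equally likely items."""
    return 2 ** bits

def items_to_bits(items):
    """Inverse: bits needed to distinguish this many equally likely items."""
    return math.log2(items)

# Miller's one-dimensional estimates: mean 2.6 bits, SD 0.6 bit.
print(f"mean: {bits_to_items(2.6):.2f} items")  # about 6.06 items
# Converting an SD in bits to an SD in items this way is the post's
# own approximation; the bits-to-items transform is nonlinear.
print(f"sd: {bits_to_items(0.6):.2f} items")    # about 1.52 items

# Miller's 'daring' multidimensional extrapolation: about 7.2 bits.
print(f"7.2 bits = {bits_to_items(7.2):.0f} items")  # about 147 items
```

The same conversion gives the grid-versus-line comparison below: 3.25 bits is about 9.5 points on a line, while 4.6 bits is about 24 points on a grid.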

The fourth and biggest surprise in his paper is the big one: that Miller does not really argue that the human short term memory capacity is approximately 7 +/- 2 items. In fact, he explicitly rejects this, noting that "Everyday experience teaches us that we can identify accurately any one of several hundred faces, any one of several thousand words, any one of several thousand objects, etc. [...] We must have some understanding of why the one-dimensional variables we judge in the laboratory give results so far out of line with what we do constantly in our behavior outside the laboratory" [Emphasis added]. After he has discussed the experiments on recognition memory for one-dimensional stimuli and established the 6.5+/-3 finding, he turns his attention to discussing recognition memory for multidimensional stimuli: that is, stimuli that have more than one dimension of variability. A simple example (though just one of several considered in the paper) is the difference between how people do at recalling previously-seen points on a segmented line (one dimension) and how they do at recalling previously-seen points on a square grid (two dimensions). For points on a line, people have a memory capacity of about 3.25 bits (9 items). For points on a grid, they have a memory capacity of about 4.6 bits (24 items)! So much for a short-term memory capacity of 7+/-2 (at least for recognition memory)!

Miller goes on to estimate how far this improvement due to multi-dimensionality can be taken. Extrapolating by eye-balling ("in a moment of considerable daring", as he notes) from the available data that looked at multiple dimensions of variability [see figure reproduced above], Miller estimates that the 'real' multidimensional capacity of human short-term memory is about 7.2 bits, or 150 items, "up into the range that ordinary experience would lead us to expect". This is Miller's second '7': it is a 7 bit limit, 150 items. 

This is drawn from studies that manipulated at most six dimensions, but Miller writes "I suspect that there is [...] a span of perceptual dimensionality and that this span is somewhere in the neighborhood of ten, but I must add at once that there is no objective evidence to support this suspicion." Later in the paper he rounds this suspicion to... his third 7! He writes "I have just shown you that there is a span of absolute judgment that can distinguish about seven categories" [emphasis added], though in fact he looked at data that manipulated six categories, and guessed people might be able to go as high as ten. Well, 6 and 10 are approximately 7+/-2 so... yeah, close enough.

The fourth 7 that Miller discusses is the so-called subitization limit, the number of things a person can enumerate at a glance. He notes that "there is a span of attention that will encompass about six objects at a glance" (well, 6 is more or less 7).

Finally, Miller discusses a fifth 7, his most famous one. At the end of the paper is his deservedly famous discussion of chunking, in which he reviews data collected by others showing that “With binary items the span is about nine […] although it drops to about five with monosyllabic English”, and then shows this in Figure 7. Here Miller has switched tasks. His initial discussion was about the recognition memory task called absolute judgment (the experimenter presents N unnamed but coded [usually numbered] stimuli, then presents one of them again and asks the participant which one it was), on which his point is, as noted above, that people can do very well far beyond 7+/-2 items. The final discussion is about ordered memory span: the experimenter presents N named stimuli (words, numbers, letters) and asks the subject to repeat all of them back in order. If by ‘memory span’ you mean ‘number of things a person can hold in short term memory’, then Miller’s review of multidimensional absolute judgment experiments shows that it is far above 7+/-2, as common sense suggests it must be. If you mean ‘number of items you can repeat back in order without error’ then Miller’s paper reviews data suggesting it is 5 to 9 items, the magical 7+/-2. As others have noted, the ordering requirement is itself adding to the magnitude of the memory problem, because the stimulus order is information that has to be memorized along with the items.

The title of his paper, The Magical Number Seven, Plus or Minus Two, was selected not because that is the span of short-term memory (which it is not), but rather because of Miller's remarking on the approximate equality (if we squint) of the several limits in this range that he discusses: "span of immediate memory [for one dimension of variability, 6.5 items] [...], a span of absolute judgment that can distinguish about seven categories, and [...] a span of attention that will encompass about six objects at a glance." The main point of his paper is actually that these three approximately equal limits of approximately 7 do not in fact reflect a common process, but we need not concern ourselves with that point here.

So, according to the data Miller reviews, the limit of human short term recognition memory (from his daring extrapolation) is about 7.2 bits or 150 multi-dimensional items. Moreover, he is also at pains to point out in his paper (in keeping with his observation from everyday life cited above) that this limit can be (and often is) extended by recoding and temporal chaining...

Given his own remark that everyone knows that we can easily recognize dozens or hundreds of things, it is amazing that people continue to cite Miller as 'having proved' the obviously-untrue claim that the limit of human short term memory is 7+/-2 items. It really depends on how you define memory capacity.

You can read the whole paper here.

Saturday, 21 March 2015

On Writing Like Designing A Ski Hill

Writing a book is like designing a ski hill. When the people you made it for are racing along, they don't even notice that they are being guided through potentially treacherous territory by careful design. They just feel free.

[Image adapted from Figure 16 of John Ruskin's (1856) Modern Painters: Volume 4.]

Sunday, 15 February 2015

On Glorifying the Achievement of Winning for Love

It was pointed out to me that North Korea had released a slew of new patriotic slogans, many of which seemed like something my nonsense-generating program JanusNode might have written. I used JanusNode to statistically mix together (by Markov chaining) Jenny Holzer's aphorisms and the new North Korean slogans (with a few other texts occasionally thrown into the mix) to auto-generate some new patriotic slogans. [Nonsense-generation purists: Note that as well as selecting these texts from a much larger pool, a few small human edits were made, including adding many of the exclamation marks.] See also my previous post on automating Jenny Holzer. Enjoy.
Builders! Let us glorify the achievement of winning for love!
Government is a burden on the idiosyncratic institutions!
Giving free rein to your emotions is an irresistible challenge!
Get rid of studying the enemy!
Sometimes you do crazy things for military hardware!
Moral integrity to volunteer is reactionary torture!
Manual labor can be refreshing method of our everyday life on a high level!
The Party's policies rest on the proclivity of the people to wake up wishing things are never messy!
Selflessness is the lifeline of south Korean warmongers!
Fight death-defyingly for our dear children by increased production!
Let us give full play to death, and technology!
Get rid of abuse of authority and revolution and gunners!
Get rid of abuse of authority and self-centeredness!
Symbols are parasites!
Fear is the best planning!
Love is terror-induced immobilization!
Let us raise a strong wind of stereotypes!
Let us hold fast to the props!
If you have freedom of choice, repetition is a weapon!
Electricity is the triumph of socialist patriotism!
Enrich the life of service personnel and people with credit!
Fight death!
Separatism is the same as admitting defeat!
Stupid people deserve special control!
You can't behave if they think they are important!

Random mating is good to give up if you have ugly consequences!
Knowledge should be as easy as falling off a log!
The Party's revolutions are pointless if no one notices your oldest fears!
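The Markov-chaining trick behind these slogans is simple enough to sketch. Here is a toy word-level bigram chain trained on two pooled corpora (my own illustration of the general technique, not JanusNode's actual implementation):

```python
import random
from collections import defaultdict

def build_chain(*texts):
    """Word-level bigram table built from all source texts pooled together."""
    chain = defaultdict(list)
    for text in texts:
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def babble(chain, start, length=8, seed=0):
    """Random-walk the bigram table to generate a short slogan."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

# Two tiny stand-in corpora (invented here for illustration):
holzer = "fear is the best planning and love is a weapon"
slogans = "let us glorify the best army and fight for love"
chain = build_chain(holzer, slogans)
print(babble(chain, "fear"))
```

Because both corpora feed the same table, a walk started in one text can wander into the other wherever they share a word, which is exactly what produces the hybrid Holzer-meets-Pyongyang flavor.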

Sunday, 1 February 2015

On The Egg As The Sun's Light Refracted Into Life

A fine bit of writing on the humble egg, from the excellent book On Food and Cooking: The Science and Lore of the Kitchen by Harold McGee:
"...modern science has only deepened the egg's aptness as an emblem of creation. The yolk is a stockpile of fuel obtained by the hen from seeds and leaves, which are in turn stockpiles of the sun's radiant energy. The yellow pigments [of the yolk] also come directly from plants, where they protect the chemical machinery of photosynthesis from being overwhelmed by the sun. So the egg does embody the chain of creation, from the development of the chick back through the hen to the plants that fed her, and then to the ultimate source of life's fire, the yellow sphere of the sky. An egg is the sun's light refracted into life." (p. 69; Emphasis added)
[Image: Altered detail from Plate 1 + scanning-artifact hands from John Ellard Gore (1893) The Visible Universe: Chapters on the Origin and Construction of the Heavens]

Saturday, 24 January 2015

On Counting Wisps

My most popular blog post is On The Processing Speed Of The Human Brain. This post is a follow-up, asking a related question: What is the storage capacity of the human brain?

This is a difficult question to answer, because we don't really know what memories are, or how exactly the brain is storing them. It seems implausible that it is storing them in some sort of countable form, an organized set of tiles that can be plucked from the mind when needed. If you believe that the objects of the mind are less like tiles and more like the sensible wisps of a seething dynamic process, then a question about the storage capacity of the brain doesn't make obvious sense. What exactly are we to count? All the wisping that has been sensed? All the wisping that could theoretically be sensed? The wisping that will actually be sensed by a particular person's brain? The average Shannon information of the dynamic wisping process across some specific time unit?

Some people have estimated that the storage capacity of the human brain is functionally infinite since we always find room to store more information if we want to, so no practical limit exists. This is a weak argument, since by the same argument your computer's hard disk has functionally infinite space. Perhaps the storage of the brain is constant despite our ability to always fit more in if we really have to, because we fit it in by over-writing what is already there.

A more principled lower estimate can be made by considering the hardware of the brain. A human being has about 100 billion brain cells. Let's assume that a change in any connection strength between two connected neurons is equal to one bit of information, and further assume (a huge over-simplification) that neural connections have just two possible strengths (like a bit in a computer, which is either 1 or 0). Assume as we did before that each neuron connects to 1000 other neurons. Then each neuron has 'write' access to 1000 bits of information, or about one kilobit. So we have 100 billion (number of neurons) X 1000 bits of storage capacity, or 10^14 bits: about 12.5 terabytes. Since in fact neural connections are not two-state but multi-state, and since neuron bodies can also change their properties and thereby store information, this may be a very low estimate; let's round it up to 10^15 bits, or 125 terabytes. On the other hand, since it counts every potential distinction as an encoded distinction (assuming no noise, no redundancy, no unused connections), it could be a huge over-estimate.
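Since bits and bytes are easy to confuse (1000 bits is a kilobit, only 125 bytes), the back-of-the-envelope arithmetic is worth spelling out explicitly:

```python
# Back-of-the-envelope synaptic storage estimate, one bit per connection.
neurons = 100e9               # ~10^11 neurons in a human brain
connections_per_neuron = 1000 # assumed fan-out per neuron

bits = neurons * connections_per_neuron  # 10^14 bits
terabytes = bits / 8 / 1e12              # bits -> bytes -> terabytes

print(f"{bits:.0e} bits, about {terabytes:.1f} terabytes")
```

One bit per connection gives 10^14 bits (12.5 terabytes); allowing multi-state connections justifies rounding the upper end up to 10^15 bits.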

The number of bits in the brain is not equal to the number of items. For example, to store one letter of text (one item) on a computer takes seven bits in the classic ASCII encoding, and in real computers it usually takes more. To store one picture can take thousands or even millions of bits. The same must apply to the human brain, so if we want to count 'things' (potential wisping!) rather than bits, we need to make some adjustments for the fact that each memory must be composed of many bits.
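To make the bits-versus-items point concrete, here is roughly what common 'items' cost in bits (the image size is a ballpark illustration of my own, not a measurement):

```python
import math

# Information-theoretic minimum to distinguish one of 26 letters:
letter_bits = math.log2(26)   # ~4.7 bits (7 bits in ASCII practice)

# A small uncompressed grayscale image: 100 x 100 pixels, 8 bits each.
image_bits = 100 * 100 * 8    # 80,000 bits for one 'item'

print(f"one letter: {letter_bits:.1f} bits")
print(f"one small image: {image_bits} bits")
```

A single pictorial 'item' can cost tens of thousands of times more bits than a letter, which is why converting a raw bit count into an item count requires knowing what kind of items you mean.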

The first person to try to estimate the amount of storage in a human brain was Robert Hooke, in 1666. He estimated how fast he could think, multiplied by his lifespan, and decided that he had 2 x 10^9 bits of storage. He had a high estimate of himself: his estimate for an average person was twenty times less, at 10^8 bits! The psychologist Tom Landauer wrote a paper in 1986 ("How Much Do People Remember? Some Estimates of the Quantity of Learned Information in Long-term Memory", Cognitive Science, volume 10, pages 477-493), in which he tried to estimate from a review of experimental results how many useful distinctions a person might be able to remember in all. His estimate was one billion relevant distinctions (10^9 bits). At a 2002 Psychonomics conference presentation that I saw, Landauer re-visited this question. He used a novel technical method (whose details need not concern us here, which is good because I can't remember them now) to estimate how much word knowledge a person had. His new estimate is in the same ball park as Hooke's: 4 x 10^8 bits. This seems implausibly low to me: that's only about 50 megabytes. I can't believe that my brain wisps as lightly as that.

Eh. I don't really think the question of how much information the brain stores has a universally compelling answer, except with some really wide error margin. Let's say: the brain can store somewhere between 10^9 bits (125 megabytes) and 10^15 bits (125 terabytes). Only [!] six orders of magnitude difference, which is good enough when we are trying to count something as abstract as a random person's potential to sense wisps.

[Image: Altered Figure 27 "The commissural connecting the cerebellum to the olivary bodies" from Thomas Laycock's (1860) Mind and Brain: Or, the Correlations of Consciousness and Organisation; with Their Applications to Philosophy, Zoology, Physiology, Mental Pathology, and the Practice of Medicine, Volume 2 ]