Saturday, 10 March 2018
I recently learned about the Japanese riddle form called a nazokake, which is built around a homophone pair. The structure of a nazokake is something like:
How is an X like a Y?
They both have to do with [homophone] Z.

The nice thing about these riddles is that they are very easy to automate (I was not the first to notice this: e.g. see here and here). I found a list of homophones online, and generated their semantic associations using the word2vec model, which bootstraps semantic association from patterns of text co-occurrence. Then I wrote some simple rules for my text-generation program JanusNode, to get it to generate riddles of this form. Here are some samples of what it came up with. Enjoy.
A dividend is like an odor because they are both about cents (scents).
How is a drumstick like an anthem? They are both about cymbals/symbols.
How is a maternity ward like a sports playoff? They both make me think of births/berths.
How is a goalpost like a cry? They are both related to balls (bawls).
How is a game of billiards like a trolley? They are both about cues/queues.
How is a missile like a daughter? They both have to do with the air (the heir).
How is a commander like a CPU? They both make me think of colonels/kernels.
A stockade is like a waddle because they both have to do with a gate (a gait).
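The generation pipeline described above can be sketched in a few lines. This is a toy stand-in: the real version used a word2vec model trained on a large corpus to find the semantic associates of each homophone, while here a small hand-made association table (hypothetical data) plays that role.

```python
import random

# Toy homophone pairs and their semantic associates. In the real pipeline,
# the associates came from word2vec nearest neighbours, not a hand-made table.
HOMOPHONES = [("cents", "scents"), ("cues", "queues"), ("colonels", "kernels")]
ASSOCIATES = {
    "cents":    ["a dividend", "a bank"],
    "scents":   ["an odor", "a perfume"],
    "cues":     ["a game of billiards", "a stage actor"],
    "queues":   ["a trolley", "a ticket office"],
    "colonels": ["a commander", "an army"],
    "kernels":  ["a CPU", "an ear of corn"],
}

def nazokake(rng=random):
    """Build one riddle from a randomly chosen homophone pair."""
    a, b = rng.choice(HOMOPHONES)
    x = rng.choice(ASSOCIATES[a])   # something associated with the first spelling
    y = rng.choice(ASSOCIATES[b])   # something associated with the second spelling
    return f"How is {x} like {y}? They are both about {a} ({b})."

print(nazokake())
```

Swapping the association table for real word2vec neighbours (e.g. via gensim's `KeyedVectors.most_similar`) gives the version described above.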
Thursday, 19 October 2017
Although word form and meaning are logically distinct, there are several ways that form and meaning can become correlated in language. When words have correlated form and meaning for no obvious reason, we call them phonesthemes. The classic example is the words beginning with gl that all have to do with a short intense visual experience: glossy, glow, gleam, glisten, glitter, glitz, glint, glimmer, and glassy. Another example is the set of sl words that are associated with a deficit in speed: slow, slump, slack, sluggish, slothful, sleepy, slumber, slog, slouch, and slovenly.
It turns out there are many such pockets of arbitrary form-meaning correspondence, especially if we loosen the criteria and look for words with related meanings that share a letter sequence anywhere, instead of just at the beginning. For example, there is another gl word associated with a short intense visual experience that no one ever mentions as part of the glitzy family: the word spangle. As another example, a lot of words semantically related to incarceration contain the bigram ai: along with the word jail itself, there are the words arraignment, detain, hellraiser, restrain, constrain, curtail, bail, chain, and villain.
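The search for letter sequences shared anywhere in a word can be sketched as follows. The word list is illustrative, not the lexicon used in the real exploration:

```python
from collections import defaultdict

# Toy lexicon: words loosely related to incarceration, plus some fillers.
WORDS = [
    "jail", "detain", "restrain", "constrain", "bail", "chain",
    "villain", "curtail", "table", "garden", "yellow", "music",
]

def bigram_index(words):
    """Map each letter bigram to the set of words containing it, anywhere."""
    index = defaultdict(set)
    for w in words:
        for i in range(len(w) - 1):
            index[w[i:i + 2]].add(w)
    return index

index = bigram_index(WORDS)
# Which words in this toy lexicon share the bigram 'ai'?
print(sorted(index["ai"]))
```

In this toy example every 'ai' word happens to be an incarceration word; the real study pairs an index like this with a measure of semantic relatedness to find such pockets automatically.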
I’ve been exploring this professionally a little and I will summarize my findings in another blog post if those findings ever get through peer review. For now I just want to share some ‘found poems’ from my exploration. I created graphs that connected words that shared both form and meaning. Some elements of those connected graphs seemed to function almost as little poems, so I have collected a few choice examples for you to enjoy (perhaps). I had a few arbitrary rules, of course, because everything is more fun with arbitrary rules. One rule was that I could erase connected words, but I could not add any. Another rule was that I could not rearrange the word networks: only words that actually occurred together in a network could be shown together, even though the placement of the networks within the graphs I made was arbitrary.
The circle of worry
Monday, 17 July 2017
I am moved to muse on this topic, as I have been moved to several others on this blog, by a production from my user-configurable dynamic textual projective surface, JanusNode, which recently produced the question: "Would you prefer to die of suicide with a group of others, or of thought cancer in a factory?" I had not previously heard the term 'thought cancer', and, it seems, neither has anyone else. Googling it returns hits to 'throat cancer', and forcing the quote mostly gets sentences starting with "I thought cancer was...". The one hit for the term as JanusNode used it was an entry in the Urban Dictionary, which is not always a reliable source. It defines 'thought cancer' as "The adverse effects one endures mentally from over-thinking things; usually secrets, inside information, or just the side effects of an over-active imagination."
This is OK as far as it goes, but if cancer is a systemic error to which many independent biological systems are subject, I think there is a less metaphorical meaning for thought cancer: the unchecked growth of a thought. Many thoughts are cancerous in this sense: they are self-increasing. I had a friend when I was in university who was very sure that strangers were saying bad things about him when he passed them on the street. I could not understand why this otherwise-rational man would think this, until I did an experiment. I walked down Saint Laurent Street in Montreal, taking on exactly the same assumption. I was amazed at how easy it was to find evidence for the idea that people were saying bad things about me. I caught snippets of many conversations which involved people deriding...someone. Why not me? After all, it has been recognized since Aristotle's time (even though David Hume tried to take credit for it) that consistent contiguity in time and space looks like causality. If Event B occurs just after Event A often enough, we naturally assume that Event A caused Event B. If people are often using the word 'asshole' just after I pass them, it must be because my presence caused them to use it. My friend's delusion was just his brain doing business as usual. The problem was the more he believed it to be true, the more evidence he found for it, and the more he obsessed over it. Thought cancer.
This error of self-referentiality is really just a self-fulfilling prophecy, a well recognized mental error in Cognitive-Behavioral Psychotherapy. Many other psychological symptoms can have a similar underlying cause, most notably obsessive-compulsive symptoms. If I believe that I need to step into my house after a number of steps divisible by seven or terrible things will befall me, then I am going to get a lot of positive feedback for that belief. I always step into my house after a number of steps divisible by seven, and terrible things don't befall me! Phew. I dodge a bullet, again and again.
I believe the Web is now enabling thought cancer at a societal level. It does so because the Web has made it possible to tailor our news sources exactly to our beliefs, so that we hear only from people who think like we do. The result is the echo chamber of social media: we believe that everyone thinks like we do because we have arranged things so that everyone we come into contact with does think like we do. This is the most dangerous thought cancer, because it leads to increasing isolation, which leads to increasing evidence that we are right in our beliefs. In a cancerous tumour, growing cancer begets cancer. Belief systems grow just like tumours, to the extent that we have now reached a point where the two centre-right parties in the United States (i.e. the Republicans and Democrats, both of whom are far too right-wing to, e.g., have a chance of getting elected in my own native country of Canada) have somehow come to believe that they define incommensurably opposing views, rather than occupying a very narrow band of the political spectrum as they do.
Cancer treatment is very brutal, involving two main paths: excise the cells that are dividing too much (surgery), and then (usually) poison the patient to take her as close to death as possible without actually killing her (radiotherapy and chemotherapy) in order to kill all of the most rapidly-dividing cells: in this case cancer cells and hair cells (among others).
For the thought cancer that we can now diagnose – especially but not only in the United States – cutting out the tumour is hardly a viable treatment. America can't outlaw (excise) the Democrats and the Republicans (what else is there?), any more than it can outlaw the Internet that has allowed the two parties to build increasingly stable, increasingly distinct, and ever-growing echo chambers.
The USA has now embarked on the only remaining option, the chemotherapy of its societal thought cancer. It is going to take itself as close to death as it can go without actually dying. Donald Trump is societal chemotherapy, the agent that will save the patient only by almost killing her. We don't need to admire Trump for this, any more than we admire other poisons. But America's thought cancer has reached a near-terminal stage. The patient needs to take the poison if it is to survive, even knowing that the poison is going to make her vomit, moan for months, pull out all her hair, and degenerate into a long period of unproductive wasting. When it is all over, the uncontrolled growth of the cancer will be checked (through the destruction of the very narrow band of political views that currently passes as reality due to the thought cancer) and the patient can return to her former glory.
Trump is not the only reason for the societal cancer; he is just a proximate cause. A good overview of the causal chain that led to Trump serving his poisonous role is given in Jonathan Rauch's (July 2016) Atlantic article, How American Politics Went Insane. I recommend it.
[On a related topic, see also my musings On The Narcissism of Small Differences.]
Tuesday, 7 March 2017
My children and I used to write poetry for each other occasionally. Here are two pieces of doggerel I wrote for them.
The first one came from being limited by my son Nico to write about one of the first five topics that came up randomly on Wikipedia. My very first topic was "Glutamate dehydrogenase 2, mitochondrial, also known as GDH 2, [...] an enzyme that in humans is encoded by the GLUD2 gene". Although this did not seem a promising topic for a poem, it was better than the next four (Lindmania stenophylla, The Manitoba Day Award [now sadly deleted from Wikipedia], The Sunshine Millions Dash [also now sadly deleted from Wikipedia], and OMB Circular A-123, a "Government circular that defines the management responsibilities for internal controls in [American] Federal agencies") so I wrote an ode to GLUD2.
My daughter Zoe chose not to limit me at all and asked for a poem on any topic in any style, so I wrote a poem for her about not being limited by rules.
Ode to GLUD2 [For Nico]
An enzyme is one wondrous way
That miracles occur each day.
Each enzyme serves to catalyze
The slow reactions that arise
Inside our bodies; and without
Their helpful work there is no doubt
That life would not exist at all!
Life calls for speed; they heed the call.
And who was it that made that call?
Some call it ‘Chance’ some call it ‘All’;
Some call it ‘God’ but all we know
Is something called to make life so.
And why should we not worship it,
That what’s-it-called that made things fit?
If there’s no God, are mysteries solved?
Are enzymes crap if they evolved?
If you need proof that life’s divine
Then chemistry should suit you fine:
I say that no one ever knew
A thing as lovely as GLUD 2.
On Playing the Game [For Zoe]
You said that I could write in any style:
So I thought: Free verse! But after a while
I thought that things work better with some rules.
I don’t say that all anarchists are fools,
But the world’s big! To focus your view
It helps if there are rules guiding you.
If soccer was played just any old way
I don’t think it would be as fun to play
As it really is: Who would shoot to score
If the goal was moving around or
If some players could use a hockey stick?
I don’t play soccer but I think the trick
(maybe not just there, but in poems too)
Is that masters of the game are those who
Learn to love the rules. So I wrote in rhyme:
Maybe I’ll do free verse another time.
Wednesday, 4 January 2017
How many words does the average person know? This sounds like it should be an easy question, but it is actually very difficult to answer with any confidence. There are a lot of complications.
One complication is that it is not easy to say what it means 'to know' a word. Language users can often recognize many real words whose meaning they cannot explain. Does merely recognizing a word count as 'knowing' it? If not, if we have to know what a word means to count it, how can we decide what it means to 'know the meaning of a word'? As any university professor who has marked term papers will attest, many of us occasionally use words in a way that is not quite consistent with their actual meaning. In such cases, we think we know a word, but we don't really know what it means.
A second complication is that it is not totally obvious what we should count when we reckon vocabulary size. Although the word cats is a different word than the word cat, it might seem unreasonable to count both words when we are counting vocabulary, since it essentially means that all nouns will be counted twice, once in their singular and once in their plural form. The same problem arises for many other words. What about verbs? Should we count meow, meows, meowing, and meowed as four different words? What about catlike, catfight, catwoman, and cat-lover? Since English is so productive (allows us to so easily make up words by glomming together words and morphemes, subparts of words like the pluralizing 's'), it gets even more confusing when we start considering words that might not yet really exist but easily could if we wanted them to: catfest, catboy, catless, cattishness, catliest, catfree, and so on. A native English speaker will have no trouble understanding the following (totally untrue) sentence: I used to be so cattish that I held my own catfest, but now I am catfree.
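The productivity point above can be made concrete with a trivial sketch: gluing morphemes onto a base word yields understandable coinages that no dictionary lists.

```python
# A minimal illustration of English productivity: combining a base word
# with suffixes produces plausible words, attested or not.
base = "cat"
suffixes = ["like", "fest", "boy", "less", "free", "tish"]
coinages = [base + s for s in suffixes]
print(coinages)  # ['catlike', 'catfest', 'catboy', 'catless', 'catfree', 'cattish']
```

This is exactly what makes "what counts as a word" so hard to pin down: the set of possible coinages is open-ended.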
The third complication is a little more subtle, and hangs on the meaning of the term 'the average person'. In an insightful paper published a couple of years ago (Ramscar, Hendrix, Shaoul, Milin, & Baayen, 2014), researchers from Tubingen University in Germany argued (among other things) that it was very difficult to measure an adult's vocabulary with any reliability. Assume, reasonably, that there are a number of words that more or less everyone of the same educational level and age all know. If we test people only on those words, those people will (by the assumption) all show the same vocabulary size. The problem arises when we go beyond that common vocabulary to see who has the largest vocabulary outside of that core set of words. Ramscar et al. argued (and demonstrated, with a computational simulation) that the additional (by definition, infrequent) words people would know on top of the common vocabulary are likely to be idiosyncratic, varying according to the particular interests and experiences of the individuals. A non-physician musician might know many words that a non-musician physician does not, and vice versa. They wrote: "Because the way words are distributed in language means that most of an adult's vocabulary comprises low-frequency types ([...] highly familiar to some people; rare to unknown to others), [...] the assumption that one can infer an accurate estimate of the size of the vocabulary of an adult native speaker of a language from a small sample of the words that the person knows is mistaken". Essentially, the only fair way to assess the true vocabulary size of adults (i.e. of those who have mastered the common core vocabulary) would be to give a test that covered all of the possible idiosyncratic vocabularies, which is impossible since it would require vocabulary tests composed of tens of thousands of words, most of which would be unknown to any particular person.
So, is it just impossible to say how many words the average person knows? No. It is possible, as long as you define your terms and gather a lot of data. A recent paper (Brysbaert, Stevens, Mandera, and Keuleers, 2016) made a very careful assessment of vocabulary size. To address the first complication (What does it mean to know a word?), they used the ability to recognize a word as their criterion, by asking many people (221,268 people, to be exact) to make decisions about whether strings were a word or a nonword. To address the second issue (What counts as a word?), they focused on lemmas, which are words in their 'citation form', essentially those that appear in a dictionary as headwords. A dictionary will list cat, but not cats; run, but not running; and so on. If this seems problematic to you, you are right. Brysbaert et al. mention (among other attempts to identify all English lemmas) Goulden et al.'s (1990) analysis of the 208,000 entries in the (1961) Webster's Third New International Dictionary. That analysis was able to identify 54,000 lemmas as base words, 64,000 as derived words (variants of a base word that had their own entry), and 67,000 as compound words, but also found that 22,000 of the dictionary headwords were unclassifiable. Nevertheless, Brysbaert et al. settled on a lemma list of length 61,800. To address the third issue (What is an average person?), they presented results by age and education, which they were able to do because they had a huge sample.
And so they were able to come up with what is almost certainly the best estimate to date of vocabulary size (drumroll please): "The median score of 20-year-olds is [...] 42,000 lemmas; that of 60-year-olds [...] 48,200 lemmas." They also note that this age discrepancy suggests that we learn on average one new lemma every 2 days between the ages of 20 and 60 years.
As I hope the discussion above makes clear, 48,200 lemmas is not the same as 48,200 words, as the term is normally understood... Because they focused on lemmas specifically to address the problem of saying what a word is, Brysbaert et al. didn't speculate on how many words a person knows, where we define words as something like 'strings in attested use that are spelled differently'. I have guesstimated myself, informally and very roughly, that about 40% of words are lemmas, so I would guesstimate that we could multiply these lemma counts by about 2.5, and say that an average 20-year-old English speaker knows about 105,000 words and an average 60-year-old English speaker knows about 120,500 words...but now I am just muddying much clearer and more careful work.
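The two back-of-envelope calculations above can be checked directly. The median lemma counts are from Brysbaert et al.; the 2.5 lemmas-to-words multiplier is this post's own rough guess, not a figure from the paper.

```python
# Lemma growth between ages 20 and 60, from the Brysbaert et al. medians.
lemmas_20, lemmas_60 = 42_000, 48_200
days = 40 * 365.25
days_per_new_lemma = days / (lemmas_60 - lemmas_20)
print(round(days_per_new_lemma, 1))  # ≈ 2.4, i.e. roughly one new lemma every 2 days

# The post's rough lemmas-to-words multiplier (a guess, not from the paper).
multiplier = 2.5
print(round(lemmas_20 * multiplier))  # 105000 words at age 20
print(round(lemmas_60 * multiplier))  # 120500 words at age 60
```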
Update: After this was published to the blog, Marc Brysbaert properly chastised me for failing to note that their paper includes the sentence "Multiplying the number of lemmas by 1.7 gives a rough estimate of the total number of word types people understand in American English when inflections are included", with a reference to the Goulden, Nation, and Read (1990) paper. He also noted that this does not include proper nouns. Without boring you with the details of how I came to my estimate of the multiplier, I will note that my estimate was made on a corpus-based dictionary that included many proper nouns, so our estimates of how to go from lemmas to words are perhaps fairly close. My apologies to the authors for misrepresenting them on this point.
Brysbaert, M., Stevens, M., Mandera, P., & Keuleers, E. (2016). How Many Words Do We Know? Practical Estimates of Vocabulary Size Dependent on Word Definition, the Degree of Language Input and the Participant’s Age. Frontiers in Psychology, 7.
Goulden R., Nation I. S. P., & Read J. (1990). How large can a receptive vocabulary be? Applied Linguistics, 11, 341–363.
Ramscar, M., Hendrix, P., Shaoul, C., Milin, P., & Baayen, H. (2014). The myth of cognitive decline: Non‐linear dynamics of lifelong learning. Topics in Cognitive Science, 6(1), 5-42.
Sunday, 11 September 2016
"The method of addition is quite charming if it involves adding to the self such things as a cat, a dog, roast pork, love of the sea or of cold showers. But the matter becomes less idyllic if a person decides to add love for communism, for the homeland, for Mussolini, for Catholicism or atheism, for fascism or antifascism. [...] Here is that strange paradox to which all people cultivating the self by way of the addition method are subject: they use addition in order to create a unique, inimitable self, yet because they automatically become propagandists for the added attributes, they are actually doing everything in their power to make as many others as possible similar to themselves; as a result, their uniqueness (so painfully gained) quickly begins to disappear. We may ask ourselves why a person who loves a cat (or Mussolini) is not satisfied to keep his love to himself and wants to force it on others. Let us seek the answer by recalling the young woman [...] who belligerently asserted that she loved cold showers. She thereby managed to differentiate herself at once from one-half of the human race, namely the half that prefers hot showers. Unfortunately, that other half now resembled her all the more. Alas, how sad! Many people, few ideas, so how are we to differentiate ourselves from one another? The young woman knew only one way of overcoming the disadvantage of her similarity to that enormous throng devoted to cold showers: she had to proclaim her credo 'I adore cold showers!' as soon as she appeared in the door of the sauna and to proclaim it with such fervor as to make the millions of other women who also enjoy cold showers seem like pale imitations of herself. 
Let me put it another way: a mere (simple and innocent) love for showers can become an attribute of the self only on condition that we let the world know we are ready to fight for it."

The narcissism of small differences (and the unfortunate human drive for it) explains much of the insanity in the world in general, and in the current US election cycle in particular.
Tuesday, 30 August 2016
[Note: A small error in the original posted tables has been corrected.]
G = Gold; S = Silver; B = Bronze; SUM = All Medals.
The 'ACTUAL RANKING' is the ranking by raw medal count, the usual way.
'ADJUSTED' is the population adjusted ranking.
'DIFFERENCE' is the ranking by the difference between 'ACTUAL RANKING' and 'ADJUSTED' ranking.
Ties are given the same value.
The table is sorted by the average of all the difference rankings, i.e. by a measure of how much better a country did in overall ranking compared to what would be expected given its population.
As you can see, by this measure (which to me is much more rational than raw medal count) the countries that did the best are Jamaica, Croatia, and New Zealand. Canada is 8th. The United States and China are big fat losers, 19th and 20th respectively once we adjust for the huge pool of talent they had to draw from.
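The ranking scheme can be sketched as follows, simplified to a single total medal count. The medal counts and populations here are made-up illustrative numbers, and the real table computed difference rankings separately for G, S, B, and SUM before averaging.

```python
# Sketch of the population-adjusted ranking described above, with toy data.
# Populations are in millions (illustrative numbers, not official figures).
countries = {
    "Jamaica":       {"medals": 11,  "pop": 2.9},
    "New Zealand":   {"medals": 18,  "pop": 4.7},
    "United States": {"medals": 121, "pop": 323.0},
    "China":         {"medals": 70,  "pop": 1379.0},
}

def rank(values, reverse=True):
    """Rank a {name: value} dict from best to worst; ties share the same rank."""
    ordered = sorted(set(values.values()), reverse=reverse)
    return {k: ordered.index(v) + 1 for k, v in values.items()}

actual = rank({c: d["medals"] for c, d in countries.items()})
adjusted = rank({c: d["medals"] / d["pop"] for c, d in countries.items()})
# Positive difference = did better than its raw count suggests, given population.
difference = {c: actual[c] - adjusted[c] for c in countries}
print(sorted(difference, key=difference.get, reverse=True))
```

With these toy numbers, small countries like Jamaica and New Zealand rise and the huge-talent-pool countries fall, mirroring the pattern in the real table.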
Saturday, 6 August 2016
A lot of people got confused about the way we measured how funny a nonword (NW) was, in part because we were loose about how we used the term 'entropy' in the paper (though very clear about what we had measured). Journalists understood that we had shown that words were funnier to the extent that they were improbable, and that we had used this measure 'entropy', but most journalists did not report the measure we used correctly. Most thought that we had said strings with higher entropy are funnier, which is incorrect. Here I explain what we actually measured and how it relates to entropy.
Shannon entropy was defined (by Shannon in this famous paper, by analogy to the meaning in physics of the term 'entropy') over a given signal or message. It is presented in that paper as a function of the probabilities of all symbols across the entire signal, i.e. across a set of symbols whose probabilities sum to 1. I italicize this because it emphasizes that entropy is properly defined over both rare and common symbols, by definition, because it is defined over all symbols in the signal.
Under Claude Shannon’s definition, a signal like ‘AAAAAA’ (or, identically, ‘XXXXXX’) has the lowest possible entropy, while a signal like ‘ABCDEF’ (or, identically, ‘AABBCCDDEEFF’, which has identical symbol probabilities) has the highest possible entropy. The idea, essentially, was to quantify information (a synonym for entropy in Shannon's terminology) in terms of unpredictability. A perfectly predictable message like ‘AAAAAA’ has the lowest information, for the same reason you would hit me if I walked into your office and said “Hi! Hi! Hi! Hi! Hi! Hi!”. After I have said it once, you have my point–I am saying hello–and repeating it adds nothing more: it is uninformative.
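The claims about those example signals can be verified directly. The entropy of a signal's symbol distribution is H = -sum over symbols x of p(x)*log2 p(x):

```python
from collections import Counter
from math import log2

def shannon_entropy(signal):
    """Entropy in bits of the symbol distribution of a signal:
    H = sum over distinct symbols x of -p(x) * log2(p(x))."""
    n = len(signal)
    return sum(-(c / n) * log2(c / n) for c in Counter(signal).values())

print(shannon_entropy("AAAAAA"))        # 0.0: perfectly predictable
print(shannon_entropy("ABCDEF"))        # log2(6) ≈ 2.585 bits: six equiprobable symbols
print(shannon_entropy("AABBCCDDEEFF"))  # same symbol probabilities, same entropy
```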
So, Shannon entropy is defined across the signal that is the English language as a function of the probabilities of the 26 possible symbols, the letters A-Z (we can ignore punctuation and case; we could include them easily enough but they don’t change the general idea and played no role in our nonwords).
If we do the math (by summing -p(x)log(p(x)) for every letter in the alphabet, which is how Shannon entropy is defined), the entropy of English is about 4.2 bits. What this means is that I could send any message in English using a binary string of length 5 for each letter. This makes perfect sense if you know binary code: 2^5 = 32, which gives us more codes than we need for just 26 symbols. Concretely, A = 00000, B = 00001, C = 00010, and so on, until we get to Z = 11001.
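The fixed-length coding point can be checked in a couple of lines:

```python
from math import ceil, log2

# A uniform fixed-length code over 26 letters needs ceil(log2(26)) bits per letter.
bits_uniform = ceil(log2(26))
print(bits_uniform)  # 5

# Enumerate the codewords: A = 00000 (index 0) through Z = 11001 (index 25).
codes = {chr(ord('A') + i): format(i, '05b') for i in range(26)}
print(codes['A'], codes['B'], codes['Z'])  # 00000 00001 11001
```

The 4.2-bit entropy figure says an optimal variable-length code could average about 4.2 bits per letter, beating this 5-bit fixed-length code by exploiting the unequal letter frequencies of English.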
What we computed in our paper can be conceived of as the contribution of each nonword to this total entropy of the English language: that string's own -p(x)log(p(x)). In essence, we treated each nonword as one part of a very long signal that is the English language. This is indeed a measure of how unlikely a particular string is, but it is not entropy, because entropy is a measure of summed global unpredictability, not local probability.
Think of it this way: If I am speaking and I say 'I love the cat, I love the dog, and I love snunkoople’, you will be struck by snunkoople because it is surprising, which is a synonym for unexpected. We quantified how unexpected each nonword was (the local probability of that part of the signal), in the context of a signal that is English as she is spoken (or written).
Our main finding was that the less likely the nonword is to be a word of English – basically, the lower the total probability of the letters the nonword contains – the funnier it is. This is not just showing that 'weird strings are funny', but something more interesting than that: strings are funny to the extent that they are weird.
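A per-string improbability measure of this kind can be sketched as follows. The letter probabilities here are estimated from a toy sample text rather than the large corpus used in the paper, and 'tothere' is an invented comparison nonword built from common English letters:

```python
from collections import Counter
from math import log2

# Estimate letter probabilities from a stand-in sample (a pangram, so every
# letter gets a nonzero probability). The real measure used corpus frequencies.
sample = "the quick brown fox jumps over the lazy dog " * 100
counts = Counter(ch for ch in sample if ch.isalpha())
total = sum(counts.values())
p = {ch: n / total for ch, n in counts.items()}

def improbability(word):
    """Average surprisal per letter, -log2 p(letter): higher values mean the
    string is built from less probable letters, i.e. is less English-like."""
    return -sum(log2(p[ch]) for ch in word) / len(word)

# By the paper's finding, the less probable string should be the funnier one.
print(improbability("snunkoople") > improbability("tothere"))
```

Even under this crude letter model, 'snunkoople' scores as more improbable than a nonword built from frequent letters, which is the direction the funniness finding predicts.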
There is an interesting implicit corollary (not discussed in the paper), which is that we are the kind of creatures that use emotion to do probability judgments. Our feelings about how funny a nonword string is are correlated with the probability of that string. If you think about that, it may seem deeply weird, but I think it is not so weird. One of the main functions of emotion is to alert us embodied creatures to unusual, dangerous, or unpredictable aspects of the world that might harm us. Unusualness and unpredictability are statistical concepts, since they are defined by exceptions to the norm. So it makes good sense that emotion and probability estimation would be linked for embodied creatures.