# How many tweets are possible?

Randall Monroe's latest "What If?" explores the total number of possible English-language tweets:

Based on the rates of correct guesses—and rigorous mathematical analysis—Shannon determined that the information content of typical written English was around 1.0 to 1.2 bits per letter. This means that a good compression algorithm should be able to compress ASCII English text—which is eight bits per letter—to about 1/8th of its original size. Indeed, if you use a good file compressor on a .txt ebook, that’s about what you’ll find.

If a piece of text contains n bits of information, in a sense it means that there are $2^n$ different messages it can convey. There’s a bit of mathematical juggling here (involving, among other things, the length of the message and the concept of unicity distance), but the bottom line is that it suggests there are on the order of $2^{140\times1.1} \approx 2\times10^{46}$ meaningfully different English tweets, rather than $10^{200}$ or $10^{800}$.
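As a rough sanity check of that figure (a quick sketch, not from the article itself):

```python
BITS_PER_LETTER = 1.1   # Shannon's entropy estimate for written English
TWEET_LENGTH = 140      # characters per tweet

# n bits of information correspond to roughly 2^n distinguishable messages
bits = TWEET_LENGTH * BITS_PER_LETTER   # 154 bits
messages = 2 ** bits                    # roughly 2.3e46 messages

print(f"{bits:.0f} bits -> about {messages:.1e} meaningful tweets")
```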

Now, how long would it take the world to read them all out?

Reading $2\times10^{46}$ tweets would take a person nearly $10^{47}$ seconds. It’s such a staggeringly large number of tweets that it hardly matters whether it’s one person reading or a billion—they won’t be able to make a meaningful dent in the list in the lifetime of the Earth.
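Working backwards, those two figures imply a pace of about five seconds per tweet; a minimal sketch, assuming that pace:

```python
tweets = 2e46
seconds_per_tweet = 5        # assumed pace, implied by the quoted ~1e47 s total
seconds_per_year = 3.15e7    # roughly one year in seconds

total_seconds = tweets * seconds_per_tweet
years = total_seconds / seconds_per_year
print(f"{total_seconds:.0e} seconds, about {years:.0e} years")
```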

### 29 comments

1. Boundegar says:

I caught an error in the math.  The number of meaningfully different English tweets is actually zero.

1. GawainLavers says:

Reading all the tweets takes you ten thousand eternal years.

At least I’ve been given a reason to cheer the heat death of the universe.

2. “Hold the newsreader’s nose squarely, waiter, or friendly milk will countermand my trousers”

1. Drew_Gehringer says:

“Drop your panties, Sir William; I cannot wait until lunchtime!”

3. The number of possible 160 character ascii messages is not the # of possible tweets in English.  If you constrain it to the actual vocabulary of English, including slang and misspelled words, you cut down the possibilities enormously.  Still a rough go to read it all, but I suspect a much lower number of messages are arguably English sentences.

1. Especially when you consider that would include millions of combinations of just punctuation and mathematical symbols.

I don’t think that would be a very fun novelty account.

1. noah django says:

yeah, y’all’s thought was my thought.  i was wondering how Monroe was arriving at “meaningful English tweets” using his methodology versus what you and chaircrusher suggest.  Monroe’s idea sounds more like Borges’ Library of Babel to me.

1. Admittedly he likely considered this – if we thought of it so did he.

1. Randall used the concept of informational entropy, which is basically how many bits you need in order to fully specify something under a given set of assumptions. Two coin flips have two bits of entropy, but two coin flips where you know at least one is heads have about 1.6 bits. This amount also happens to be the theoretical lower bound on how small you can compress something.

He cited a fairly well-known paper by the father of information theory that says, given the assumption that a string of characters is a grammatical English sentence, each character has an entropy of around 1.1 bits. This is how he accounted for the fact that most ASCII strings are not grammatical English: normally, a random ASCII string will have about 7 bits per character.
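The coin-flip figures in the comment above can be verified directly; a small sketch (the `entropy` helper is illustrative, not from any library):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two fair coin flips: four equally likely outcomes -> exactly 2 bits
two_flips = entropy([0.25] * 4)

# Given at least one heads: HH, HT, TH remain, each 1/3 -> log2(3) ~ 1.585 bits
at_least_one_heads = entropy([1 / 3] * 3)

print(two_flips, at_least_one_heads)
```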

2. SamSam says:

I’m quite certain you didn’t read the post.

He dismisses the answer of “the number of possible 1[40] character ascii messages” in about the third sentence.

3.  Wow! It’s almost like you didn’t read the article!

4. timquinn says:

Each utterance (tweet) has a context and therefore each one is unique and has a unique meaning.

5. Just_Ok says:

tl;dr

6. Gorgonzola says:

It should take a supercomputer that long to generate them all and give them an index number.  Then instead of tweeting, people can just reference the number of the tweet they were going to make.

1. timquinn says:

100000000000000000000000000000000000000000000005

1. timquinn says:

. . . but don’t take it personally.

2. corollax says:

You’ve just described a very simplistic, if computationally expensive, pre-computed compression algorithm. Because after all, that’s what you get when you run a compression program: a binary number for which there exists a corresponding piece of data.
You recover that data through a process called decompression. Which in the case of your index isn’t very computationally expensive, but is instead absurdly costly in storage space.

7. BarBarSeven says:

…they won’t be able to make a meaningful dent in the list in the lifetime of the Earth.

Which is why to make an impact on Twitter one needs to grab the reader by the neck & kick them in the balls by calling a 9 year old African American girl a “see you next Tuesday” in a Tweet. AMIRIGHT!

8. It’s a really huge number, if it’s real

9. Arthur C. Clarke’s story needs an upgrade: “The Nine Trillion Tweets of God.”

10. Roose_Bolton says:

Of these, 25% are dadboner’s “Really looking forward to the weekend you guys” and another 25% are “Feel like I drank a thousand beers last night you guys”.

11. oasisob1 says:

1: Isn’t it ‘Munroe’?
2: i’ve been tweeting md5 hashes for ages, fuck the 140 char limit.

12. Eddie K says:

In a similar vein, I’ve had to design a number of small 16×16 pixel icons used for web site favicons, application drop down menus, file lists and the like.

It’s quite an art to make a 16×16 icon that’s colorful, attractive and meaningful that doesn’t look like something out of a broken video card or a child’s stick figure drawing.

How many such icons are possible? Assuming 24 bit pixels (8 bits for each of red, blue and green) gives 2^24 or 16,777,216 colors per pixel. With 256 pixels per icon you get (2^24)^256. Essentially a 256 digit, base 16,777,216 number or 2^6,144 different icons. We shouldn’t run out for a while.

I’ve always wondered what kind of fabulous icon a real artist such as a Picasso, Warhol or Pollock could produce on such a tiny canvas.
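The icon arithmetic above checks out; a quick sketch, using Python’s exact big integers:

```python
bits_per_pixel = 24          # 8 bits each for red, green, and blue
pixels = 16 * 16             # 256 pixels per 16x16 icon
total_bits = bits_per_pixel * pixels   # 6144

# (2**24)**256 == 2**6144 distinct icons; exact as a Python integer
icons = 2 ** total_bits
print(f"2^{total_bits} icons, a {len(str(icons))}-digit number")
```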

13. KWillets says:

Using the Shannon entropy of common English gives a low estimate for this, because all the sentences that are uncommon are added back into the corpus.  Shannon measured the ability of humans to guess the next character in a string, but the guesses relied on the fact that, say, “I am reading a book” is more common than “I am reading a boot”.  In an “all possible sentences” corpus, both are equally likely; we lose the ability to down-rate the “long tail” of correct but unlikely sentences.

1. Erich Morisse says:

+1

Long tail is fine, as Shannon’s 1.1-1.2 bits per letter is average encoding length, with more frequent letters encoded with fewer bits.

You bring up a great point though, it looks like the encoding is based off of a first order character frequency. Once you bring in word frequency and word transition frequencies (both also discussed by Shannon), the average encoding bit count changes.

Any way you cut it, this is my favorite “What If?” yet.

14. chris rogers says:

Pretty sure ASCII is 7 bits, and lots of those are control characters like a linefeed. So really 26 letters, 10 digits, and let’s say 20 symbols. 56 characters could be encoded in 6 bits…