Pictish art may have actually been written language

Discuss

18 Responses to “Pictish art may have actually been written language”

  1. Anonymous says:

    “There is, sadly, not a lot of detail about what specific characteristics make language stand out from decoration.”

    A very insulting comment. Ever heard of the field of linguistics?

    This is linguistic anthropology, with a little Shannon Entropy thrown in for decoration. Linguistics is by nature quite mathematical.

    I am a linguistic anthropologist.

    • Anonymous says:

      Enlighten us, oh linguistic anthropologist, instead of taking insult when we ask you how you know your stuff works.

      I mean, really, you’re coming off like a dick here. Why don’t you get yourself an account, pretend you are not the guy who wrote that anon comment, and answer the question instead of getting mad that it was asked.

      I’ve read “The Horse, the Wheel, and Language” cover to cover, and I spent a vacation 20 years ago wandering around Scotland looking at Pictish stones. I can tell you the major themes (tongs, crescent & V-rod, peanut & Z-rod, etc.). People here are interested in this subject. Sign in and talk to us about it. You’ll rarely find such a mix of polymaths and autodidacts as you’ll find here.

    • Anonymous says:

      Not a lot of detail here. Obviously Maggie knows such details exist, given that she has linked to an article about using them; but that article doesn’t give too many. As a linguistic anthropologist, you’d think you’d pay a little more attention to context.

    • lectroid says:

      > I am a linguistic anthropologist.

      Am I the only one who just pictured Anon popping out of the bushes and making Alton Brown grumpy?

    • Axx says:

      I don’t think the observation you are talking about does anything to slight the field of linguistics. However, it might make a statement about the nature of art and language that humans produce.

      Both language and art consist of self-referential patterns that really aren’t that different from one another; depending on the language culture you were raised in, they can be quite hard to disentangle. In fact, I can think of a few languages that are founded in picture-drawing…

      This difficulty in disentangling art from language is particularly heinous when a culture’s art and its language exist in a sort of superposition with one another. If you want to blow your mind with fascinating linguistics, watch “Cracking the Maya Code”:

      http://www.pbs.org/wgbh/nova/mayacode/

  2. highlyverbal says:

    Math is hot.

  3. dotytron says:

    We used techniques like this in my freshman engineering class at Purdue. We did a unit on cryptography, mostly using Matlab to crack 1960s-era U.S. and Soviet encryption systems.

    The concept of Shannon Entropy sounds similar to the index of coincidence: the probability that two randomly selected characters in a string of text are the same. If I remember right, the normalized IC of English is something like 1.73, versus about 1.0 for uniformly random letters. So you can take a string of encrypted text, transform it a zillion ways using raw computing power, and then pick out the iteration that results in decrypted text by looking at the IC… graphing all the IC values for each iteration in a histogram is handy… then you can easily find the decryption algorithm without having to examine every one of the hundreds or thousands of possible algorithms individually to determine if the text is readable.
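
    Something like this, if memory serves (a from-scratch Python sketch, not our old Matlab code):

        from collections import Counter

        def index_of_coincidence(text, alphabet_size=26):
            """Probability that two randomly chosen letters of `text` match,
            scaled by alphabet size so English lands near 1.73 and uniformly
            random letters land near 1.0."""
            letters = [c for c in text.upper() if c.isalpha()]
            n = len(letters)
            if n < 2:
                return 0.0
            counts = Counter(letters)
            raw_ic = sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))
            return raw_ic * alphabet_size

        # Score every candidate decryption and keep the ones whose IC comes
        # out near 1.7; those are the iterations worth reading by hand.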

    This makes me miss my days at Purdue… :(

    Or…Maybe I don’t miss the all-nighters in ENAD pounding away on a Sun workstation all that much…

  4. KWillets says:

    Shannon Entropy is a measure of the information content of a set of symbols. It’s related to the minimal compressed size of a text, and one can use compression programs such as gzip to estimate it.

    Purely random sequences of symbols have high (maximal) entropy, but human language typically compresses to a size 5-10 times smaller than random text of the same length. I expect the researchers just looked at the distribution of k-grams (sequences of k symbols) in the engravings and ascertained that they’re far from random.
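
    For a feel of the numbers, here’s a toy Python sketch (my guess at the flavor of the analysis, not what the researchers actually ran):

        import gzip
        import math
        import random
        import string
        from collections import Counter

        def kgram_entropy(seq, k=2):
            """Shannon entropy of the k-gram distribution, in bits per k-gram."""
            grams = [seq[i:i + k] for i in range(len(seq) - k + 1)]
            counts = Counter(grams)
            total = len(grams)
            return -sum(c / total * math.log2(c / total) for c in counts.values())

        random_text = "".join(random.choices(string.ascii_lowercase, k=10000))
        # Stand-in corpus; the repetition exaggerates gzip's advantage, so
        # substitute a real text file for honest numbers.
        english_ish = "the quick brown fox jumps over the lazy dog " * 220

        print(kgram_entropy(random_text))   # close to the 2*log2(26) maximum
        print(kgram_entropy(english_ish))   # noticeably lower

        # gzip as a crude entropy estimator: compressed bytes per character
        # approximate the information content per symbol.
        print(len(gzip.compress(random_text.encode())) / len(random_text))
        print(len(gzip.compress(english_ish.encode())) / len(english_ish))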

  5. bardfinn says:

    I suspect classic Chomskyites will argue that this is less evidence that Pictish pictograms are a written expression of a spoken human language and more evidence of the human brain having an innate grammar that imposes itself on linear expressions of human thought.

    Grikdog: you don’t pick and choose which subsets of the known pictograms to compress; you compress (run a Shannon-entropy analysis on) the entire set of known pictograms. (Otherwise you’re playing Bible Code.) Not all languages have an equal Shannon entropy, or even similar amounts of it, but it’s an accepted axiom in linguistics that all written human languages have a Shannon entropy some orders of magnitude less than a collection of grouped spatial graphic depictions, which has less than a collection of naturally-arisen ordering (allow me to flip the bird once more at Dembski), which has less than a collection of “randomness”.
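
    To make the subset point concrete, a quick Python illustration (mine, nothing to do with the paper’s actual method):

        import gzip
        import random
        import string

        random.seed(1)
        corpus = "".join(random.choices(string.ascii_lowercase, k=5000))

        def bytes_per_char(s):
            return len(gzip.compress(s.encode())) / len(s)

        # The whole corpus is near-incompressible, as random symbols should be.
        print(bytes_per_char(corpus))

        # Cherry-pick a subset (keep two symbols only) and the "text" suddenly
        # compresses much better: low entropy manufactured purely by selection.
        subset = "".join(c for c in corpus if c in "ab")
        print(bytes_per_char(subset))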

    • dragonfrog says:

      bardfinn – maybe I’m missing your point (and I have only barely skimmed the article, which probably raises the likelihood that I am).

      I took Grikdog’s “Until you know what bits of the pictures (pardon) contribute to meaning, you don’t know what to compress, and have no clue what its entropy might be” to mean: how do you know which features of a pictogram carry meaning and would create a distinct pictogram, versus which strokes are just a variation in handwriting style? This dog’s head has one ear cocked; do we call this another instance of “dog’s head”, or do we call it a new and distinct pictogram, “dog’s head with cocked ear”?

  6. mn_camera says:

    What? No several species of small furry animals gathered together in a cave and grooving?

  7. grikdog says:

    Until you know what bits of the pictures (pardon) contribute to meaning, you don’t know what to compress, and have no clue what its entropy might be. It’s possible to have writing that maps one-to-one with the 35,000 or so “words” of a language, assuming all languages are equally complex, with NO compressibility.

    And yes, Noam Chomsky demonstrated a beautifully elegant model of English grammar several decades ago, which is today the foundation of electronic translation devices (the military versions, not Babelfish).

  8. holtt says:

    Crom! I knew those Pictish devils were up to something.

  9. Anonymous says:

    Oh god – here comes another stupid Dan Brown novel. They’ve even got a quote from “one of the world’s leading experts on signs and symbols”…

  10. Anonymous says:

    Shannon entropy is great for analyzing the information in images as well! Entropy-based treatment of spatially resolved signals can filter out random, high-amplitude noise.
    I work with people who use it in ultrasound imaging. Among other applications, we’ve tried it out to pick up defects in synthetic materials and to distinguish dystrophy-affected tissue in DMD patients.
    http://www.ncbi.nlm.nih.gov/pubmed/16285454
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2096710/
    (Articles may require institutional access)
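
    For the curious, here’s a bare-bones version of the tile-wise idea in Python/numpy (a sketch only; the methods in those papers are considerably more sophisticated):

        import numpy as np

        def local_entropy_map(img, win=8, bins=32):
            """Shannon entropy of the intensity histogram in each win x win
            tile of a grayscale image scaled to [0, 1]. Random high-amplitude
            noise pushes a tile's entropy toward the maximum, so noisy tiles
            can be flagged or down-weighted."""
            h, w = img.shape
            out = np.zeros((h // win, w // win))
            for i in range(h // win):
                for j in range(w // win):
                    tile = img[i * win:(i + 1) * win, j * win:(j + 1) * win]
                    hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
                    p = hist[hist > 0] / hist.sum()
                    out[i, j] = -(p * np.log2(p)).sum()
            return out

        # Usage (hypothetical array name): local_entropy_map(ultrasound_frame)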

  11. nanuq says:

    Unfortunately, information theory doesn’t help figure out the actual meaning. Without a Rosetta Stone of some sort, we’ll never know.
