So here's the idea: publishers should create a default directory called "covers" at their server root (e.g., tor.com/covers, harpercollins.co.uk/covers) filled with high-rez PNGs or JPGs (or both) named after each book's ISBN -- for Neil Gaiman's Graveyard Book, that would be http://harpercollins.co.uk/covers/0060530928.png. Make sure your robots.txt file doesn't block these directories (robots.txt can only forbid crawling, not demand it), so that when you search on images.google.com or images.yahoo.com for an ISBN, the publisher's high-rez version is right there at the top.
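A minimal robots.txt along these lines might look like the sketch below -- the `Allow` line is belt-and-suspenders (crawlers index anything not disallowed), and the sitemap filename is my own invention, not part of the proposal:

```
User-agent: *
Allow: /covers/

# Hypothetical sitemap listing every cover image, to nudge crawlers along
Sitemap: http://harpercollins.co.uk/covers-sitemap.xml
```

Pointing an image sitemap at the directory is the closest thing robots.txt-land offers to "make sure the search engines crawl this."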
Many bloggers would just embed these images on their homepages (if you're worried about bandwidth costs, I'll personally kick in ten bucks to cover a year's worth of downloads for images hosted on S3), which means that publishers could simultaneously update their covers on (potentially) hundreds of old reviews when a new edition comes out, add banners like "New York Times Bestseller," etc. What's more, you can gather usage stats from your server logs and discover which bloggers are reviewing your books and which of those reviewers gets the most traffic.
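Pulling those stats out of an ordinary Apache/Nginx "combined" access log is a few lines of scripting. This is a sketch, not production code -- the log format and hostname parsing are assumptions:

```python
import re
from collections import Counter

# Matches the request, status, and referrer fields of a "combined"-format
# log line, e.g.:
# 1.2.3.4 - - [10/Oct/2008:13:55:36 -0700] "GET /covers/0060530928.png
#   HTTP/1.1" 200 48212 "http://example-blog.com/review" "Mozilla/5.0"
LOG_RE = re.compile(
    r'"GET (?P<path>/covers/\S+) HTTP/[\d.]+" (?P<status>\d{3}) \d+ '
    r'"(?P<referrer>[^"]*)"'
)

def top_referrers(log_lines, n=10):
    """Count which sites hot-link our cover images, by referrer host."""
    counts = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m or m.group("status") != "200":
            continue  # skip non-cover requests and failed fetches
        ref = m.group("referrer")
        if ref and ref != "-":
            # Keep just the host: http://blog.example/post -> blog.example
            host = ref.split("/")[2] if "//" in ref else ref
            counts[host] += 1
    return counts.most_common(n)
```

Feed it a log file and the bloggers who embed your covers (and drive the most image loads) fall out sorted by traffic.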
With a nice, predictable naming scheme, this becomes a super-lightweight API. Delicious Monster and other services could automatically look up covers in the appropriate publisher's /covers directory. Indie booksellers, school librarians and other people producing promotional materials would have a canonical source. Even a publisher's own PR department could benefit from having an easily accessible, outside-the-firewall, up-to-date directory of cover art.
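The whole "API" is just string concatenation -- which is the point. Here's a sketch of what a client-side lookup might look like; the function name and the two-extension fallback are my assumptions, not anything a publisher has committed to:

```python
def cover_urls(publisher_domain, isbn):
    """Candidate cover-art URLs under the proposed /covers convention.

    Returns both the .png and .jpg variants, since the proposal lets
    publishers host either format (or both).
    """
    isbn = isbn.replace("-", "")  # normalize hyphenated ISBNs
    return [f"http://{publisher_domain}/covers/{isbn}.{ext}"
            for ext in ("png", "jpg")]
```

A service like Delicious Monster would try each candidate in turn (an HTTP HEAD request is enough to check which one exists) and cache the winner.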
There you have it -- a practically zero-cost way for publishers to sell more books, gather better market intelligence, and gain more control over the collateral used to sell their products online.