Newspapers and online publishers appear to be heading back into battle against search engine behemoth Google (http://www.google.com).
Yesterday the powerful World Association of Newspapers (http://www.wan-press.org) (WAN) issued a rather terse statement (http://www.wan-press.org/article16666.html), calling on Google "to respect the rights of content creators" and embrace a new access protocol for search engines indexing Web sites, known as the Automated Content Access Protocol (http://www.the-acap.org/) (ACAP).
ACAP is a proposed search engine protocol for accessing publishers' sites, created by the publishing industry under WAN's leadership. (See previous Tidbits coverage (http://www.poynter.org/column.asp?id=31&aid=133763).)
How ACAP works: Publishers place ACAP code on their servers that controls search engine access. Currently the robots.txt (http://www.robotstxt.org/) method does this -- but WAN says that protocol is too simplistic and does not give publishers enough options. Furthermore, robots.txt is not a gatekeeping mechanism that online publishers have a stake in; it was imposed on them by search engines years ago.
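To see how limited robots.txt is, consider a short sketch in Python using the standard library's urllib.robotparser (the publisher rules and URLs below are invented for illustration). The protocol offers little beyond per-crawler allow/disallow rules keyed on path prefixes -- there is no vocabulary for expiry dates, display limits, or licensing terms of the kind ACAP proposes.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical publisher's robots.txt: the only knobs available
# are allow/disallow, per user agent, per path prefix.
robots_txt = """\
User-agent: Googlebot
Disallow: /archive/

User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot may crawl everything except the archive section.
print(rp.can_fetch("Googlebot", "http://example.com/news/story.html"))    # True
print(rp.can_fetch("Googlebot", "http://example.com/archive/2001.html"))  # False

# Every other crawler is shut out entirely by the "User-agent: *" group.
print(rp.can_fetch("OtherBot", "http://example.com/news/story.html"))     # False
```

A yes/no answer per crawler and path is all robots.txt can express; anything richer (how long a snippet may be shown, whether content may be cached or aggregated) is outside its reach, which is the gap WAN says ACAP fills.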
WAN claims that publishers in 16 countries are known to have already implemented ACAP. (WAN's membership includes news agencies, book and magazine publishers, libraries, and search engines as well as newspaper publishers.)
In the statement, WAN president Gavin O'Reilly implied that Google's reluctance to accept ACAP is a result of "its own commercial self-interest" -- adding that the search giant should "not glibly throw mistruths about." This is the first salvo in what will probably become a key battle between Google and media players over the next few years.
WAN also claimed that Google's European executive Rob Jonas (incorrectly referred to as "Ron" in WAN's statement) implied that Google would not embrace ACAP. At a December 2007 conference (http://www.journalism.co.uk/2/articles/531181.php), Jonas reportedly said that the current robots.txt protocol "provides everything most publishers need" -- indicating that the search engine is happy with the status quo.
The very same Rob Jonas was invited to the big annual WAN conference in Cape Town (http://www.capetown2007.co.za/home.php) last year. At the conference, Google was both slammed and praised (http://www.matthewbuckland.com/?p=285) by many publishers.
Personally, I think this struggle is fundamentally about money. (What else could it be?) WAN will probably contend that the real issue is controlling access and respecting publishers' rights. However, controlling access means that publishers would eventually be in a position to charge Google to crawl or index their content, even in aggregated form.
I can see both sides of the current struggle. The argument against Google: Why should Google aggregate and list content it does not pay for? Content that comes from publishers has a cost associated with it. The argument for Google: How else should a search engine behave? It must aggregate headlines and blurbs in order to send traffic to sites. Arguably Google News (http://www.google.com/news) is a competing news brand, presenting content that belongs to other news sites. But the search engine has also been very careful NOT to monetize Google News by displaying AdSense ads there -- a move that would infuriate publishers, who could then claim that Google is directly profiting from their content.
WAN also should be careful. Although it represents a powerful publishing lobby of newspapers and online publishers, the publishing community is anything but united on this issue. Google may aggregate publisher content, but it is also a huge source of traffic (and in some cases, revenue) for publishers. Many online publishers would be reluctant to give that up -- especially smaller and mid-size publishers that rely more heavily on Google.
Credit: Matthew Buckland, http://www.poynter.org