Part I: What is wrong with governments forcing liability on Internet intermediaries?

 

Does the good old ‘don’t shoot the messenger’ still apply in a digital world? Even if, in today’s economy, the messenger earns quite a lot of money from his work, does that mean he should now be liable for the content of the messages he delivers? Or should states instead promote legal content by changing some old regulatory models and outdated ways of doing business?

 

Meet the messengers

Let’s start with identifying some of the messengers in the digital world, often referred to as Internet intermediaries: the Internet service providers (ISPs) and the content providers.

Internet service providers (ISPs)

On their way through the vast network of networks, digital packets are routed by ISPs, which guide them from source to destination along the fastest and most efficient route. ISPs do not check the packets for inappropriate content; they simply deliver them to us. ISPs range from local small and medium enterprises (SMEs) to giant international telecoms such as Telefonica, AT&T, and Verizon. Let’s face it: they (can) earn a lot! Many of them are among the most solvent business entities in today’s global economy, even in recessionary times.
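The ‘dumb pipe’ role described above can be sketched in a few lines of code. This is a hypothetical illustration (the prefixes and next-hop names are invented): a router forwards each packet by longest-prefix match on its destination address, never looking at the payload at all.

```python
# Hypothetical sketch: a router's forwarding decision depends only on the
# destination address, never on the content being carried.
import ipaddress

# Invented routing table for illustration: network prefix -> next hop
ROUTING_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "peer-A",
    ipaddress.ip_network("203.0.0.0/16"): "transit-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-upstream",
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

print(next_hop("203.0.113.7"))  # most specific route wins
print(next_hop("8.8.8.8"))      # falls back to the default route
```

Note that nothing in `next_hop` ever touches the packet’s contents — which is exactly why imposing content liability on ISPs asks them to do something their role was never designed for.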

Content providers

Do you even remember the last time you typed a full Internet address (URL) into your browser? No? I didn’t think so. Instead, we Google the address we need, follow the link through Facebook or Twitter, or visit social bookmarking sites. We seldom share files via e-mail any more; instead we use Dropbox, Google Docs, or MegaUpload (RIP and resurrected). These content providers are also digital messengers: they find information for us and help us share it widely and access it easily. They do not check the information we access for inappropriate content; they simply deliver the existing information to us. Let’s face it again: with billions of searches or posts per day, they certainly earn lots and lots of money through advertising. They represent the emerging economic giants of today.

 

Should we shoot the messengers?

To identify the best approaches to fighting illegal content, we should look through both types of lenses: shortsighted and farsighted.

The simplistic (shortsighted) view: Yes

Internet intermediaries are at the centre of digital content distribution. Technically (though only in theory) they can check each and every piece of digital information passing through their servers and filter out the inappropriate parts of the web… if they so wish, or if they are obliged to do so. This work, though costly (technology, knowledge, and manpower), would be done by the intermediaries themselves, and would save governments from having to invest in educating and equipping judicial institutions to join the digital reality.

Since the intermediaries will not be delighted at investing in inspecting and filtering our data (which would also reduce the traffic flow and thereby their profits) or at disclosing their customers’ private information to third parties (at least not without financial return), regulators might need to force them to do so. And this is where SOPA, PIPA, ACTA, and other acronymed pieces of legislation come in, holding the intermediaries liable for what they transport and asking them to self-censor and reveal their users. It seems a short, sweet, and practical solution: digital content will be controlled and the intellectual property rights (IPR) industry will be protected.

The holistic (farsighted) view: No

Ever heard of the term Web 2.0? Content on the Net is no longer produced or shared exclusively by web admins and the quality content industry (like Hollywood); instead, it is mostly created and shared by users themselves – some 2 billion of them at the moment. And there is so much more than illegal content: Wikipedia’s collaborative information, YouTube’s creative artistic and educational videos, near-realtime Twitter news and updates, Google-stored scientific papers and books, endless small websites and services with local content from cultures across the globe...

Yes, there is also inappropriate and even illegal content. But are the intermediaries the right ones to judge what is appropriate or illegal and what is not? Should Telefonica or Google decide whether certain bits of content are parts of counterfeit products, and whether that content is being used for private or commercial gain? Should they decide whether some websites, like MegaUpload for instance, are involved in large-scale illegal activity? Are they competent to do so? Are they able to?

Some statistics will help: 60 hours of video are uploaded every minute on YouTube (link), Twitter users send an average of 140 million tweets per day (link), and over 550 million websites exist, with more than 300 million added in 2011 alone (link). Even if intermediaries were obliged, what would it take for them – in terms of equipment and man-hours (lawyers and engineers) – to regularly check through all this content? Mission impossible – even for ius cogens types of content (child pornography, justification of genocide or terrorism) or politically or culturally sensitive content (porn, gambling, or Nazi materials), let alone for IPR with its specific and sensitive legal aspects.

Yet, if held liable and forced by governments, intermediaries may – in order to avoid severe financial penalties – turn to the simplest (or only possible) solution: severe self-censorship based on even the slightest insinuation of what could be inappropriate content, according to their own blurry internal criteria. In practice this could mean that:

  • Twitter and Facebook – since they would not be able to follow all the posts and shared links – might start censoring posts based on keywords or web addresses blacklisted according to unknown internal criteria.
  • Wikipedia – since it would not be able to check through all its articles and links – might need to remove all the articles entered collaboratively by users where there is even a possibility of a quoted source which does not respect the author’s rights.
  • PayPal might cease providing services to Internet companies in which it is not 100% confident (remember the case of Wikileaks?).
  • Google might filter out potentially questionable search results, again based on internal blacklists.
  • ISPs might introduce filtering of entire web spaces (DNS filtering), such as YouTube or Twitter, since they cannot guarantee the content shared there.
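A minimal sketch shows why the keyword-blacklist approach in the scenarios above over-blocks so badly. The blacklist and the posts here are invented for illustration; this is not any platform’s actual filter, just the simplest thing a liability-averse intermediary could deploy at scale.

```python
# Hypothetical sketch: naive keyword blacklisting, the cheapest filter an
# intermediary held liable for user content could deploy. The blacklist
# and the sample posts are invented for illustration.

BLACKLIST = {"torrent", "crack", "leak"}  # assumed internal blacklist

def is_blocked(post: str) -> bool:
    """Block a post if any blacklisted keyword appears anywhere in it."""
    text = post.lower()
    return any(word in text for word in BLACKLIST)

posts = [
    "Download the full movie torrent here",        # plausibly infringing
    "BitTorrent protocol explained for students",  # legitimate education
    "How to fix a crack in your ceiling",          # entirely innocent
    "Scientists report an Arctic methane leak",    # news reporting
]

for post in posts:
    print(is_blocked(post), "-", post)
```

All four posts are blocked, although only the first is even arguably illegal: a filter cheap enough to run over hundreds of millions of daily posts cannot tell an infringing torrent link from a lecture on the BitTorrent protocol or a home-repair tip.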

This excessive self-censorship would make loads of valuable Internet content inaccessible. Moreover, potential new revolutionary services – such as Facebook or Twitter in their early days – would not have the chance to develop in such a restrictive environment (consider the impact on developing countries, where such services are increasingly likely to emerge and to shape local and regional economic development). The Internet would cease being a rich, open, and economically viable space that encourages innovation and rewards potential.

 

[Next: Part II: What is the way to Internet regulation?]
 

Comments

Vladimir Radunovic
Garland, thank you for the inputs. You are right that this shows the need for a multistakeholder approach. Regarding BiTag, you might share some more info and a link...
Garland McCoy (not verified)
Vlada has raised some great points...what is needed is a multi-stakeholder approach to this issues...see BiTag as a model...what you want to have at the core are the very same kids who currnetly are working for NSA and other govenment and private companies at Universities that have "contracts" on cyber attack models...it is a "kids" game ...our Kids vs China's kids...etc. anyway ...you charge them with setting up the screening software and surround them with consumer groups and industry folks to ensure transparency ...the Universities...Professors and students will provide an environment where the work can get done ...and be trusted by all parties...just an idea...
