A matter of trust

In July 1945, Vannevar Bush, the director of the Office of Scientific Research and Development for the United States government, wrote in an article for The Atlantic titled “As We May Think”:

“There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.”

It was a statement he could make with authority: he oversaw an organization of nearly 30,000 scientific men and women who, among other accomplishments, began the Manhattan Project, created the Norden bombsight, developed instruments for sonar and radar, and helped make the mass production of drugs possible.

In Bush’s eyes, it was a clear signal of the end of the Renaissance man. As specialization increased, no one person could reasonably be expected to know everything necessary to bring new inventions and innovations to full maturity. He suggested that only the development of devices beyond books, machines that could extend human memory and provide near-instantaneous retrieval and cross-referencing, would allow scientific advancement to proceed.

Skip past the decades in which the roots of the tree of knowledge deepened until the “memex” Bush envisioned in his prescient article took its first step toward reality as a network of computers shared by governments and universities. Skip, too, the explosion of technological advances that made computers inexpensive enough to be dedicated to a single individual. We arrive at the moment when personal computing devices became legion, palm-sized, and constantly connected.

It was at this point that the advancement of knowledge accelerated and deepened beyond anyone’s imagination. Job descriptions that could not even be comprehended ten years prior came into existence, only to be completely replaced by new job descriptions five years later. Today, traveling visionaries fill their slide decks with predictions of job descriptions being torn down and reborn almost annually.

Throughout this entire process, the focus has been on the accumulation and dissemination of knowledge. Terms like “Knowledge Management” have entered the lexicon. “Knowledge Architects” focus on taxonomies, knowledge maps, and semantic search. Google has even proposed the “knol” as a unit of knowledge.

And the collective wisdom is that to reach the next acceleration in specialization, we should continue to add to this base of knowledge and invest in curators of it, in the form of narrowly focused niche applications or sophisticated cloud-enhanced software. (And given that such collective wisdom now encompasses two-thirds of the world’s population, I would be foolish to argue against it.)

But I will humbly suggest that knowledge growth, with refined search or curation as the sole determining factor, is increasingly unlikely to carry us to the next stage of human intellectual advancement, or even business success, for a simple reason. The pool of human knowledge has become so choked with the detritus of an army of spiders and robots that, rather than embrace the collective intelligence we have amassed, we try to insulate ourselves from it. There are too many “false positives” out there, and too few retractions and amendments.

Some contend that the occasional false story that spreads like wildfire is a natural consequence of the media on which we have chosen to rely for our information. But a far deeper issue appears to be that there is no remorse in being wrong as long as one is first. And data that goes uncorrected becomes useless for future statistical analysis.

So if quantity, curation, and focus alone cannot determine the rulers of the next information empires, is there a deciding factor? The following trends point toward trust as the critical ingredient.

  • At its outset, Google was just a search engine with a funny name amid a sea of competition. It rose to dominance only by earning its users’ trust that it would return the knowledge most likely to satisfy the questions asked. And, ironically, it was only when it moved onto shaky ground with those loyal users, on trust in both its results and its intentions, that the door opened even slightly for its competition.
  • The rise of Facebook is as much about being able to insulate yourself from people who do not share your views, and to find information only from sources you trust, as it is about rekindling friendships. There, the “like” button is becoming a unit of confidence, a currency of trust, and an index of knowledge. And just as with Google, only matters of trust threaten its continued success.
  • The entire premise of crowd-sourcing via rating systems rests on how hungry we have become for validation that our faith is not misplaced before we invest ourselves, financially or intellectually.
  • Wikipedia, arguably one of the largest collections of human knowledge ever amassed because it allows anyone to contribute, may not be cited as a source by most colleges, universities, journals, or publications. The reason? A lack of trust in its veracity.

It is the organizations that ask themselves “How can we become the most trusted source for the knowledge and information we provide?” rather than “How can we become the biggest or most profitable?” that will lead us into the coming decade.