
Who Does Know: Crowds, Clouds and Networks

Dead Metaphors

Truths, Nietzsche once wrote, are worn-out metaphors, “coins that have lost their pictures, and now only matter as metal, not as coins.”[1] The word “network” has lost its luster in just this way: we now just accept it as a literal description of the facts. Our economy is a network; our social relations are networked; our brains are composed of neural networks; and of course the Internet, the World Wide Web, is a network. Thus, we might wonder whether knowledge is networked too. This idea has become reasonably common in tech circles, and some believe it is a game-changer. Again, David Weinberger is at the forefront: “In a networked world, knowledge lives not in books or in heads but in the network itself.”[2] Indeed, in Weinberger's view, the information age is basically over. We live in the networked age, where information doesn't come in discrete packets but in structured wholes.

Let's start unpacking that notion by looking at the idea of a network itself. Think of the ways in which one can describe—or map—a transportation system, such as a subway. One way is to simply superimpose the path of the train tracks onto an existing street map. That works fine, as long as the street map is not too detailed itself, and as long as there aren't too many underground tubes and tracks. If, for example, there is just one track, with two stops, then passengers only need to know where these stops are in order to orient themselves. But what if there are dozens of stops, and the lines crisscross and don't follow the paths of the streets overhead? That was the problem that Harry Beck, an employee of the London Underground, aimed to solve in 1931 by developing a new Tube map—one which, with additions, is still familiar to riders today. What was different about Beck's map is that he ignored the geography of the city and concentrated solely on showing, without reference to scale, the sequence of stations and the intersection of the Underground lines.

By doing so, Beck was able to bring to the fore the information that Tube riders want most: how many stops lie between the stop they are at and the one they want to get to, and where the lines interconnect. Knowing these two facts, you can deduce how to get from A to B.
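Beck's insight translates directly into the computer scientist's notion of a graph, where answering a rider's two questions needs only connectivity, never geography. Here is a minimal sketch in Python; the stations and connections are illustrative, not the real Tube layout:

```python
from collections import deque

# A toy Underground as an adjacency list: only what-connects-to-what is
# recorded. No coordinates, distances or street geography anywhere.
tube = {
    "Oxford Circus": ["Bond Street", "Green Park", "Tottenham Court Road"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Green Park": ["Oxford Circus", "Victoria", "Westminster"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Baker Street": ["Bond Street"],
    "Victoria": ["Green Park"],
    "Westminster": ["Green Park"],
    "Holborn": ["Tottenham Court Road"],
}

def stops_between(start, goal):
    """Breadth-first search: count the fewest stops from start to goal."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        station, stops = queue.popleft()
        if station == goal:
            return stops
        for neighbor in tube[station]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, stops + 1))
    return None  # no route exists

print(stops_between("Baker Street", "Westminster"))  # 4
```

Nothing in the code knows where any station sits in London; counting stops, like reading Beck's map, uses the pattern of connections alone.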

As the information theorists Guido Caldarelli and Michele Catanzaro note, Beck's map is like a graph. As such, it displays a basic feature of a network: “in networks, topology is more important than metrics. That is, what is connected to what is more important than how far apart two things are: in other words, the physical geography is less important than the ‘netography' of the graph.”[3] The reason why, in this case, is pretty clear. The netography or topology of the Underground matters to us because what we are interested in is how information is distributed in that system—or, more bluntly, in how we riders are distributed along the lines of the Underground tracks. What Beck's map shows is that thinking of something as a network is useful when what matters is a complex pattern of distribution between points rather than the points (the “nodes”) themselves.
This is part of the reason it makes sense to say that knowledge is becoming more and more networked. The infosphere has made it possible to distribute information so efficiently, and so quickly, that these facts about the distribution become important in themselves.

But really, we are more networked than that. We are increasingly composing a knowledge network—or is it composing us?

Knowledge Ain't Just in (Your) Head

Let's go back to neuromedia. What would happen if it became available to the general population? The nature of communication would change, certainly. But that's not all. The boundaries between ourselves and others would, in certain key respects, change as well—especially with regard to how we come to know about the world.

Suppose everyone in a particular community has access to this technology. They can query Google and its riches “internally”; they can comment on one another's blog posts using “internal” commands. In short, they can share knowledge—they can review one another's testimony—in a purely internal fashion. This would have, to put it lightly, an explosive effect on each individual's “body of knowledge.” That's because whatever I “post” mentally would then be mentally and almost instantly accessible by you (in a way that would be, we might imagine, similar to accessing memory). We'd share a body of knowledge by virtue of being part of a network. But that is not the most drastic fallout of neuromedia. The more radical thought is that we are sharing the very cognitive processes that allow us to form our opinions. And to the extent that those processes are trustworthy and accurate, we can say we are sharing ways of knowing.

The traditional view has always been that humans know via processes such as vision, hearing, memory and so on. These ways of getting information are internal; they are in the head, so to speak. But if you had neuromedia, the division between ways of forming beliefs that are internal and ways that are not would no longer be clear. The process by which you access posts on a webpage would be as internal as access to your own memory. So, plausibly, if you come to know, or even justifiably believe, something based on information you've downloaded via neuromedia, that's not just a matter of what is happening in your own head. It will depend on whether the source you are downloading from is reliable—and that source will include the neural networks and cognitive processes of other people. In short, were we to have neuromedia, the difference between relying on yourself for knowledge and relying on others for knowledge would be a difference that would make less of a difference.

Andy Clark and David Chalmers' “extended mind” hypothesis suggests that, in fact, our minds are already extended past the boundaries of our skin.[4] When we remember what we are looking for in a store by consulting a shopping list on our phone, they argue, our mental state of remembering to buy bread is spread out: part of that state is neural, and part of it is digital. The phone's notes app is part of the remembering. If Clark and Chalmers are right, then neuromedia doesn't extend the mind any more than it is already extended. We already share minds when I consult your memory and you consult mine.

The extended mind hypothesis is undoubtedly interesting, and it may just be true. But we don't actually have to go so far to think knowledge is extended. Even if we don't literally share minds (now, at least), we do share the processes that ground or justify what our individual minds believe and think. As the philosopher Sandy Goldberg has pointed out, when I come to believe something based on information you've given me, whether or not I'm justified in that belief doesn't depend just on what is going on in my brain. Part of what justifies my belief is whether you, the teacher, are a reliable source. What justifies my receptive beliefs on the relevant topic—what grounds them—is the reliability of a process that includes the teacher's expertise. So whether I know something in the receptive sense can already depend as much on what is going on with the teacher as with the student.[5]

Goldberg's hypothesis seems particularly apt when we form beliefs receptively via digital sources—which, as I said, can be understood as knowing via testimony. In relying on TripAdvisor, or Google Maps, or Reddit, I form beliefs by a process that is essentially socially embedded—a process whose elements include not just chips and bits but aspects of other people's minds, social norms and my own cognition and visual cortex. How I know is already entangled with how you know.

The Knowing Crowd

So far, then, we've seen that knowledge has become increasingly networked in at least two discernible ways: Google-knowing is the result of a network, and our cognitive processes are increasingly entangled with those of other people.

This raises an obvious question. Is it possible that the smartest guy in the room is the room? That is, can networks themselves know?

There are a few different ways to approach this question. One way has to do with what those in the AI (artificial intelligence) biz call “the singularity”—a term usually credited to the mathematician John von Neumann. The basic idea is that at some point machines—particularly computer networks—will become intelligent enough to become self-aware, and powerful enough to take control.

The possibility of the singularity raises a host of interesting philosophical questions, but I want to focus on one issue that is already with us. As we've discussed, there are reasons to think that we digital humans are, in a very real sense, components of a network already. So, could networked groups literally know things over and above what their individual members know? And if groups know things as a collective—in any sense of “know”—then they have to be able to have their own true, justified beliefs. Is that possible?

Some philosophers have argued that it is, and cite the fact that groups can pass judgments even when no individual in the group agrees with the judgment. For example, imagine a group of interviewers trying to choose the best person for the job. Suppose they interview four candidates, and each of the interviewers ranks the candidates in order (with one being highest). It might turn out that nobody ranks candidate B as number one, but that B still comes out as the candidate with the best cumulative ranking (if, for example, everyone ranks B second while splitting their first-place votes among the other candidates). If so, then the group “believes” that B is the best candidate for the job even though no individual in the group has ranked that candidate number one.
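The arithmetic here is easy to check. Below is a minimal rank-sum sketch in Python with invented ballots (four candidates are used because, with only three, a uniformly second-ranked candidate can at best tie on cumulative rank):

```python
# Each interviewer ranks every candidate (1 = best). Ballots are invented
# for illustration. Nobody ranks B first, yet B wins on cumulative rank.
ballots = [
    {"A": 1, "B": 2, "C": 3, "D": 4},
    {"C": 1, "B": 2, "D": 3, "A": 4},
    {"D": 1, "B": 2, "A": 3, "C": 4},
]

totals = {}
for ballot in ballots:
    for candidate, rank in ballot.items():
        totals[candidate] = totals.get(candidate, 0) + rank

winner = min(totals, key=totals.get)  # lowest total rank wins
print(totals)   # {'A': 8, 'B': 6, 'C': 8, 'D': 8}
print(winner)   # B: the group's "choice", though no one put B first
```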

The eminent social philosopher Margaret Gilbert has argued that, if they exist, real group beliefs are the product of what she calls “joint commitments.”[6] A joint commitment is the result of two or more people expressing a readiness to do something together as a unit—like dancing a waltz, performing a play, starting a business, or interviewing a job applicant. You don't, Gilbert emphasizes, always have to engage in a joint commitment deliberately. Often we express our willingness to act together only implicitly, as I might if I just held out my hand to you and gestured toward the dance floor. But however individuals express their readiness to jointly commit, their expression must be common knowledge to all; it must be something that is so taken for granted that everyone knows it and everyone knows that everyone knows it. In Gilbert's view, when these conditions are in place and a group has a joint commitment of this sort, it makes sense to think of groups as having beliefs just as individuals have beliefs.
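That regress has a standard formalization in epistemic logic, worth a brief aside (the notation below is the usual textbook one, not Gilbert's own). Writing $K_i\varphi$ for “agent $i$ knows $\varphi$,” mere group-wide knowledge and common knowledge in a group $G$ come apart as follows:

$$E\varphi \;=\; \bigwedge_{i \in G} K_i\varphi, \qquad C\varphi \;=\; E\varphi \,\wedge\, E(E\varphi) \,\wedge\, E(E(E\varphi)) \,\wedge\, \cdots$$

$E\varphi$ says only that everyone knows $\varphi$; $C\varphi$ adds every further level of “everyone knows that everyone knows,” which is exactly the condition Gilbert requires.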

Gilbert's hypothesis explains why we do sometimes hold groups responsible over and above their members. As I write this, the corporation British Petroleum has just received a billion-dollar-plus fine for its role in the Deepwater Horizon oil spill in the Gulf of Mexico. Corporations, while they might be treated as “legal” people, are actually groups of people jointly committed to a common end of profiting from a particular enterprise or enterprises. When we hold groups who are jointly committed in this way responsible, we are holding the group responsible, not the individuals within it. And it does seem as if we hold groups responsible not just for their actions but for their views—for example, if our job interviewers were, as a group, to believe that a man was the best applicant for the job even though a woman with far better credentials had applied. In such a case, we might hold that the belief—no matter how sincere—was unreasonable.

In some cases, Gilbert's view of joint commitments may also explain some group commitments made by digital humans. The digital “groups” that we form are often bolstered by a joint commitment to something, whether it be a political ideology, a hobby, a sport, or the practice and theory of hate. Such groups often do have the sort of common knowledge that joint commitment requires. But it is less clear whether people participating in Internet chat rooms, or posting on a comment thread on a popular blog, are really intending to “do something together.” Sometimes that may indeed be the case—Wikipedia is a good example of a network where posters are committed to a joint enterprise—but often the opposite is true. From the standpoint of Gilbert's theory, Internet groups and networks may not have any group knowledge at all.[7]

Yet even if social networks don't literally know as individuals do—a view that Weinberger himself shies away from—there is still another way of thinking about the question of whether networks know. Groups can certainly generate knowledge, in the sense that aggregating individual opinions can give us information, possibly accurate and reliable information, that no one individual could. Consider that ubiquitous feature of your online life: the ranking. There was a day when the only way to get information on whether a movie, restaurant or book was to your taste was to consult a professional review. Now we also have the star system. Instead of one review, we can get dozens, hundreds or even thousands. And in addition to the “qualitative” comments, we get an overall ranking: the average of the individual rankings assigned to the product. Useful? Certainly. And most of us know some simple facts about such systems as well. To name the most obvious: the more rankings there are, the more reliable we tend to take the average score to be (1,000 rankings with an average of 4 stars is far more impressive than three rankings of 4.5 stars). Of course, we also know that the fact that many people like something doesn't mean we'll like it too.
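One common way to make that intuition precise is a damped or “Bayesian” average, which pulls a small sample toward a neutral prior and lets a large one speak for itself; some ranking sites are said to use formulas in this spirit. A minimal sketch in Python, where the prior mean and prior weight are illustrative assumptions rather than any site's actual values:

```python
def damped_average(avg, n, prior_mean=3.0, prior_weight=25):
    """Pull an observed average toward a neutral prior; the pull fades
    as the number of ratings n grows. Parameters are illustrative."""
    return (prior_weight * prior_mean + n * avg) / (prior_weight + n)

print(damped_average(4.5, 3))     # ~3.16: three ratings barely move the prior
print(damped_average(4.0, 1000))  # ~3.98: a thousand ratings dominate it
```

On this scoring, the thousand 4-star ratings beat the three 4.5-star ones, just as our everyday judgment says they should.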

The fact that we so often trust such rankings—at least, in the right conditions—shows that we already tend to abide by the main lesson of James Surowiecki's 2004 landmark book The Wisdom of Crowds. Surowiecki's point was that in certain conditions, the aggregated answers of large groups could be wiser—could display more knowledge—than those of an individual, even an individual expert. Surowiecki's most famous example comes from the work of Francis Galton, a British scientist. Galton examined a competition in which 787 contestants at a country fair estimated the weight of an ox. The average of all guesses was 1,197 pounds. The ox weighed 1,198 pounds.[8]
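Galton's result is easy to reproduce in miniature. When individual errors are independent and roughly unbiased (part of what Surowiecki means by “certain conditions”), averaging cancels them out. A toy simulation in Python, using an invented error model rather than Galton's actual data:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

TRUE_WEIGHT = 1198  # the ox's actual weight, in pounds

# 787 fairgoers guess: each guess is noisy (normal error, 75-pound spread)
# but centered on the truth. The error model is an illustrative assumption.
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(787)]

crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # typically lands within a few pounds of 1198
```

No single simulated fairgoer is reliable, yet the crowd's average is; the errors, pointing in all directions, cancel one another out.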
