The Internet of Us
by Michael P. Lynch

Yet Google-knowing, while a basis for understanding, is not itself the same as understanding because it is not a creative act.

To use the Internet is to have the testimony machine at your fingertips. That is what makes it so useful. But understanding is often said to be different from other forms of knowledge precisely because it is not directly conveyed by testimony—and thus not directly teachable.2 Again, you can give someone the basis for understanding. But in the usual cases, you can't directly convey the understanding itself. An art teacher, for example, can give me the basis for creative thought by teaching me the rudiments of painting. She can give me ideas of what to paint and how to paint it. But I did not create these ideas; I create when I move beyond imitating to interpret these ideas in my own way. Likewise, you can give me a theorem without my understanding why it is true. And if I do come to understand why it is true, I do so because I've expended some effort—I've drawn the right logical connections. Coming to understand is something you must do for yourself.

Let's contrast this with other kinds of knowledge. I can download receptive knowledge directly from you. You tell me that whales are mammals; I believe it, and if you are a reliable source and the proposition in question is true, I know in the receptive way. No effort needed. Or consider responsible belief: you give me some evidence for whales being mammals. You tell me that leading scientists believe it. If the evidence is good, then if I believe it, I'm doing so responsibly. But in neither case do I thereby directly understand why whales are or aren't mammals. You can, of course, give me the explanation (assuming you have it). But to understand it, I must grasp it myself.

Or so it is generally. One might wonder, however, whether that would remain the case were we as fully integrated as the neuromedia possibility imagines. To have neuromedia would be like reading minds. You'd be able to access other people's thoughts through little more than the intermediary of satellites. We would all be Google Completing our thoughts for one another, and as a result collaboration could very well start to feel from the inside like individual creation does now.

This is still a long way from showing that neuromedia would increase our understanding of the world all by itself. There is no doubt that information technology is already radically facilitating collaboration. And coming to understand, like any act of creation, is something you can do with others. But just because you can understand with others doesn't alter the fact that understanding involves a personal cognitive integration—a combination of various cognitive abilities in the individual, including a grasp of dependency relations and the skill to make leaps and inferences in thought. It ultimately involves an element of individual cognitive achievement. Understanding is not something I can outsource.

Yet what makes this individual cognitive achievement so valuable? Why worry about understanding if correlation, as Chris Anderson might say, gets you to Larissa? What can it add that other forms of knowing cannot?

Understanding is a necessary condition for being able to explain, and explanations matter. A well-confirmed correlation can be the basis of (probabilistic) predictions. But prediction is not the only point of inquiry, nor should it be. Good explanations for why a correlation holds give us something more. As the eminent philosopher of science Philip Kitcher has noted, good explanations are fecund.3 They don't just tell us what is; they lead us to what might be: they suggest further tests, further views, and they rule out certain hypotheses as well. Moreover, if you want to control something and not just predict what it will do given the preexisting data, you need to know why it does what it does. You need to understand. Thus, being able, on the basis of Google Flu Trends, to predict where the flu spreads is incredibly helpful. But if we want to know how to control its spread, we must better understand why it spreads. And once we do so, it seems likely that our predictions might themselves become more nuanced.

In fact, authors of a recent study critiquing the predictive power of Google Flu Trends have made this very point.4 The authors argue that more refined predictive techniques drawing on traditional methods of modeling can be at least as accurate as Google's method, which they show has routinely overestimated the number of flu cases by as much as 30 percent. They ascribe this to what they call "big data hubris": the assumption that sheer data size alone will always result in more predictive power. The researchers' point is not that big data techniques aren't helpful, but that the Google algorithm is unlikely to be a good stand-alone method for predicting the spread of the flu.

Given our argument above, this is not surprising. Big data techniques are going to assist our models and explanations, not supplant them.

The creativity of understanding helps to explain our intuitive sense that understanding is a cognitive act of supreme value and importance, not just for where it gets us but in itself. Creativity matters to human beings. That's partly because the creative problem-solver is more apt to survive, or at least to get what she wants. But we also value it as an end. It is something we care about for its own sake; being creative is an expression of some of the deepest parts of our humanity.

Finally, understanding can also have a reflexive element. Our
deepest moments of understanding reveal to us how we ourselves fit into the whole. Thus, an act of understanding something or someone else can also help you understand yourself. When that happens, understanding comes with what Freud called the “oceanic feeling”—the feeling of interconnectedness.

Perhaps this is why we treasure those moments of understanding in both ourselves and others. If you've ever taught or coached or parented someone, you've tried to help someone understand. The moment they do is what makes the effort worthwhile. If that moment never comes, you regret it because that person is missing out on an act of creative personal expression, a chance to see how the parts connect to make the whole.

So even if, contrary to what I've suggested here, we are someday able to outsource our understanding to some coming piece of glorious technology, it is not clear that we should want to. To do so risks losing something deep, something that makes us not just digitally human, but human, period.

Information and the Ties That Bind

What would it be like if you had the Internet connected directly to your brain? That, or something like it, is the future toward which we are barreling. The hyperconnectivity of our phones, cars, watches and glasses is just the beginning. The Internet of Things has become the Internet of Everything, the Internet of Us.

These pages have spun a cautionary tale about this progress, but there is actually a lot to be optimistic about. The massive amount of data that is making hyperconnected knowing possible has the potential to help cure diseases, contribute to constructive solutions
to climate change and tell us more about our own preferences, prejudices and inclinations than we ever thought possible. I look forward to these developments, and I hope you do too. My point in this book is that we should nonetheless approach the future with our eyes wide open, especially since our relationship with the Internet is becoming more and more intimate. Intimacy brings comfort, but it also makes us vulnerable.

Some of these vulnerabilities are extensions of those we already have. The Internet of Us will be composed of human bodies that are themselves communicating with one another, and with the Net, through a variety of embedded or surface-worn devices. Data trails will follow us around like so many little sparks; dancing points not of light but of 1s and 0s. These data trails are already here. I am reminded of Aleksandr Solzhenitsyn's remark in his 1968 book Cancer Ward:

As every man goes through life he fills in a number of forms for the record, each containing a number of questions. . . . There are thus hundreds of little threads radiating from every man, millions of threads in all. If these threads were suddenly to become visible, the whole sky would look like a spider's web, and if they materialized as rubber bands, buses, trams and even people would all lose the ability to move, and the wind would be unable to carry torn-up newspapers or autumn leaves along the streets of the city. They are not visible, they are not material, but every man is constantly aware of their existence. . . . Each man, permanently aware of his own invisible threads, naturally develops a respect for the people who manipulate the threads.5

The threads are strings of information. They are the ties that bind us to one another, and society to us. What big data and the hyperconnectivity of knowledge are doing is making these connections brighter, more numerous, stronger and fundamentally easier to pluck. And so our respect—if that is the word—should also grow for those who have, or wish to have, their hands on these strings. Let us hope their motivations are pure, or at least neutral, while we stay on guard for the opposite. As Bertrand Russell once remarked in a somewhat different context, advances in technology never seem to bring along with them—at least, all by themselves—a change in humanity's penchant for greed and power. That is a lesson I hope we heed—even while we look forward to the benefits the Internet of Us will bring.

Many of us share the same concerns. After the initial launch of Google Glass, the reaction was more negative than expected. While many were excited about the technology, it seemed that just as many were worried about its potential for invading privacy; others were concerned about its potential for distracting drivers. These practical objections were serious. But I can't help wondering if the concern went deeper. Before its launch, Google cofounder Sergey Brin was reported to have said, "We started Project Glass believing that, by bringing technology closer, we can get it more out of the way."6 Brin meant to emphasize that Glass allows you to take pictures without fumbling for your camera. But he inadvertently put his finger on a more basic fear of the Internet of Us. We are getting technology out of the way by pulling it closer—in the case of Glass, literally making us see through it. We know technology can always alter our perspective. But this perspective-altering effect can only increase as it migrates inward.

We must be careful that we don't mistake the “us” in the Internet of Us for “everything else.” The digital world is a construction and, as I've argued, constructions are real enough. But we don't want to let that blind us to the world that is not constructed, to the world that predates our digital selves. And the Internet of Us is not only going to affect how we see our world; it will affect our form of life. One aspect of this concerns autonomy. The hyperconnectivity of knowledge can help us become more cognitively autonomous and increase what I called epistemic equality. But I've argued it can also hinder our cognitive autonomy by making our ways of accessing information more vulnerable to the manipulations and desires of others. And it can lead us to overemphasize the importance of receptive knowing—knowing as downloading.

Humans are toolmakers, and information technologies are the grandest tools we have at the moment. Our tool-making nature shapes how we understand the world and our role within it. It encourages us to see the natural environment as something upon which we operate, which we use as means for our own ends, as an extension of the tools we develop to interact with it. So what happens when we extend our tools to the point that they become integrated with our life, when we become the very tools themselves? That is the most salient question about the coming Internet of Us. And it raises a danger that we cease to see our own personhood as an end in itself. Instead, we begin to see ourselves as devices to be used, as tools to be exploited.

None of this is inevitable, however. How could it be—the changes in our form of life that digital ways of knowing are bringing have yet to fully unfold. We should not fear information technology per se, or the "Internet" in the expanding Internet of Us. It is the "us" part—or our uses of technology—that we must mind. We are becoming more powerful knowers. We must also strive to be more responsible, understanding ones.

Acknowledgments

Over the years, I've been fortunate to talk about these subjects with many wise and intelligent people, including Robert Barnard, Don Baxter, Paul Bloomfield, Sandy Goldberg, Patrick Greenough, Hanna Gunn, Julian Jackson, Casey Rebecca Johnson, Brendan Kane, Junyeol Kim, Nathan Kellen, Tom Lynch, Helen Nissenbaum, Nikolaj Jang Lee Pedersen, Duncan Pritchard, Baron Reed, David Ripley, Paul Roberts, Marcus Rossberg, Evan Selinger, Nate Sheff, Tom Scheinfeldt and Daniel Silvermint. A special shout-out to the Block Island Cognitive Research Institute, who heard early versions of these ideas (over and over again): Paul Allopenna, Terry Berthelot, James Dixon, Inge-Marie Eigsti, Lisa Holle, Jim Magnuson and Emily Myers.

Nate Sheff and David Pruitt were of great help in researching various materials in the initial stages of this project. Early drafts of the manuscript benefited heavily from comments by Patricia Lynch, Phil Marino, Kent Stephens, Tom Stone and Steven Todd;
Terry Berthelot provided invaluable commentary on a later draft. Portions of this book were given as talks at the University of Connecticut Humanities Institute, the University of Edinburgh, the University of St. Andrews, Northwestern University's Kaplan Humanities Institute, University of Cincinnati's Taft Center, Syracuse University, Ohio State University, the American Philosophical Association, Yonsei University, TEDx, the Chautauqua Institution and SXSW. Portions of chapters 4 and 6 build on ideas I first tried to express in "A Vote for Reason," "Privacy and the Concept of the Self" and "Privacy and the Pool of Information" in the New York Times' The Stone blog, as well as "The Philosophy of Privacy: Why Surveillance Reduces Us to Objects," May 7, 2015, in The Guardian. The ideas of chapter 1 draw inspiration from "NeuroMedia, Knowledge and Understanding," published in Philosophical Issues: A Supplement to NOÛS, vol. 24 (2014).
