I want to follow up on my post from November, Ethics in research design and longevity of interest. I have been thinking about technology constantly over the past weeks, trying to understand why humans are compelled to find technological "solutions" to social problems, especially ecological crises. I have also been reflecting on what it means for me to be studying chemistry, why I am doing it, and what it may really mean for the world, if anything at all. My thoughts on technology were stirred further by Krista Tippett's conversation with Jon Kabat-Zinn on Being, and by tonight's Jeopardy! episode, in which Watson, a computer developed by IBM, will compete against human champions.
I have been trying to understand the unique position humans occupy relative to the environment, and this has prompted a great deal of introspection, both about my relationship to the world and about my responsibility, to my neighbourhood, community, family, and myself, to live in a less destructive manner. Doing so has meant boiling my life down to what matters most to me, and the technology that surrounds me matters only in a small sense. Of course, you wouldn't be reading this if technology didn't exist. But a purging of things, including technology, has given me time to reflect.

Kabat-Zinn made one of the most insightful remarks I have heard about our rudimentary understanding of technology's influence in an increasingly connected world: "...technology is becoming more sophisticated than our understanding of ourselves as human beings without technology." With our technologies, our phones can ring in the middle of the night at the arrival of an e-mail, and we can ship apples from New Zealand to the US with little more than the burden of customs. Technology has become so second nature that we can no longer envision a world without it. If we cannot envision a world without it, how does that affect our moral selves, and how does it affect our answers to increasingly complex problems? Potential answers to what it means to be human, to be good, and to live in an ecologically sustainable world still feel as distant as they have for millennia. By adding more complexity to the picture through technology, we obfuscate an already complicated understanding of ourselves. At the same time, I am in no way claiming that technology is unequivocally bad.
Dale Jamieson and Wendell Berry write at length about our compulsion to apply what we know, even when we lack a full understanding of the impacts. A stark example of this is nuclear waste being used as ammunition in Iraq. But what might the motives be for developing Watson? How does having developed Watson answer pressing questions? How does it move us forward morally and ethically?