
Is a future that's shiny and chrome really what we want, or is it just what we've been told we should want?

If you are looking for someone who will tell you that technology is mostly good and that you just have to build in a few key safety features, I am sorry to disappoint you. Technologies, when they become fixtures in the public sphere, change our moral obligations. Just consider biomedical technologies: life support means that we have a new obligation to try to keep people alive whom we could not have kept alive 100 years ago. Now doctors and nurses are responsible for patients' deaths in a way they were not before.

This means that we really ought to be careful about which technologies we want to build and why. The current hype surrounding AI, like previous years' hype for nanotech, CRISPR, the Internet of Things, and other trends, does not indicate that this is really the future we should want. Worse, the people most adamant that we should embrace a new technological fad are usually those most likely to benefit from it, regardless of who is left out. Do we really want tomato plants that don't yield viable seeds for farmers? Should our thermostats depend on constant online connectivity to work? How do we control a technology that is too small to see, or one that is by definition autonomous?

I work from an interdisciplinary perspective, so you will find that this site includes philosophy, theology, and sometimes good old-fashioned science and technology studies (STS). I believe bringing these perspectives together provides a fuller picture than any individual method does. My orientation is primarily a critical-theoretical one, drawing from liberation theology, the Frankfurt School, and psychoanalysis. If you would like to be challenged on your views, I welcome you to peruse the site and consider why techno-optimism is dangerous. I also welcome feedback and pushback; perhaps together we can realize something that neither of us could independently. This is my personal site, and it contains primarily my own research and teaching.

- Under About Me, you'll be able to read a little more about me.
- Under Books, you'll find books I have written or edited.
- Under Writings, you'll find academic and popular writings by or about me.
- Under Videos, you'll find video presentations by me.
- Under Resources, you can find PowerPoints I have made for presentations and talks, blog posts, and syllabi on the ethics of technology.
- Under Projects, you can find the current research projects I am working on.
- Under the Links tab, you'll find what I consider useful links.
- If you have any questions, comments, or just want to say hello, you can reach me under the Contact tab.

I used to joke with my students that "business ethics" is an oxymoron. Perhaps "military ethics" also falls into this category, since the goals of many military actions (e.g., killing the enemy effectively) thwart typical ethical interests (e.g., saving lives). Medical ethics seems a reasonable term, since medical professionals ostensibly want ends that coincide with moral aims. Where does technology ethics fall in the realm of applied ethics? What does it mean to talk about the ethics of technology? Philosophers like Martin Heidegger and Jacques Ellul suggested in the last century that technology has an inherently totalizing force. Critical theorists like Herbert Marcuse and Andrew Feenberg have argued that modern technologies tend to serve the same power-amassing interests as capital (acquisition) and the military (dominance). But at the opposite end of the spectrum, policy documents, business developments, educational curricula, popular conversations, and simple economic pressure have made it seem as though technological advance is not only inevitable but also desirable in itself.

Why is this? What if I told you that the worries of Marcuse and the grand promises of Elon Musk are one and the same? Most philosophers of technology (at least most I know) worry that the promises made by engineers and businessmen are 1) not very imaginative, 2) ignorant of real concerns and ethical considerations, and 3) poorly thought out. This means that, many times, technological solutions are designed either with bad intentions or with good but ignorant ones. An example of the former would be "productivity monitoring" software designed to keep workers under constant surveillance. An example of the latter is promoting AI as a solution to climate change.

Ethics of technology is important because we all share the same planet, and the consequences of bad technologies affect (or potentially affect) everyone around the globe as well as future generations. Previous generations heavily developed fossil fuel systems; today we face a warming planet. Oppenheimer's Manhattan Project means we must constantly monitor who has nuclear arms. To take even a relatively small problem: the ubiquity of email and smartphones means workers never really get time off!

"No previous ethics had to consider the global condition of human life and the far-off future, even existence, of the race."
--Hans Jonas
