Convergence of new emerging technologies
L. CAENAZZO - L. MARIANI - R. PEGORARO - E. GREGORI FERRI - C. PETRINI - E. GEFENAS - S. SEMPLICI - B. GORDIJN - D. STEMERDING - M. KAISER - R. STRAND - J. LEKSTUTIENĖ
Ethical challenges and new responsibilities
Foreword
Bert Gordijn
Human beings are not the only tool-making animals. New Caledonian crows, chimpanzees and various other animals make tools as well.
Nevertheless, we seem to be the only species to have advanced technology from simple tools to extremely sophisticated high tech. This has happened over a period of 2.5 million years. Yet the development of technology has not been linear. Until very recently, we mainly made stone tools. The pace of technological development was extremely slow for almost the entire history of human tool making.
So much so that it would have been difficult to conceive of the idea of emerging technologies, since hardly anyone would ever have seen a new technology emerge during their lifetime. Only around five thousand years ago did we master bronze metallurgy, and – two thousand years later – iron metallurgy.
In antiquity the Greeks were the first to make technology a subject matter of philosophy. Their views, however, were generally not very favorable.
Technology was mainly seen as an imitation of nature. The idea that technological innovation could be used to the benefit of mankind in a big way would have been quite surprising to an average Greek philosopher.
This somewhat unfavorable assessment of technology in philosophy only changed two thousand years later, in the work of Francis Bacon. In his New Atlantis – published posthumously in 1627 – we encounter a positive attitude to science and technology. The idea here is that, if only we apply a suitable methodology in scientific research, we can develop new technologies and medical therapies that will serve to achieve a plethora of important human goals. This attitude towards technological innovation became more widespread in subsequent centuries, especially in the Age of Enlightenment. The 18th century saw the start of the industrial revolution, which could be regarded as the materialization of Bacon’s utopian fantasies. Associated with this phenomenon were a surge in patents, growth in population and a rise in GDP (Gross Domestic Product) in the West. The 19th century saw the first strong public protests against technological innovation, when the Luddites – worried about the foreseen decline in demand for homemade textile products – rioted and smashed labor-saving textile machines.
This introduced a trend of technology criticism in the public domain of Western culture that has remained significant both in popular culture (e.g. in Hollywood blockbuster movies) and in academia.
However, technology criticism does not seem to have slowed down the pace of innovation.
Today, continuous technological innovation is regarded as an ordinary fact of life. We all have a vivid appreciation of emerging technologies because we have all seen various new technologies materialize within our own experience. This stands in sharp contrast to the near absence of rapid innovation during 99% of the history of human technology. Some authors even predict that innovation will accelerate further, beyond recognition.
Vinge argued that “… we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence” (Vinge, 1993). Similarly, Kurzweil maintained: “Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history.” Lately, artificial intelligence has indeed made great progress. As a result, various authors and public figures have voiced concerns about these rapid developments, fearing that artificial intelligence might not have only beneficial effects and might prove difficult to control, especially once it reaches a superhuman level of general intelligence. Be that as it may, emerging technologies arguably rank among the most significant contemporary factors shaping the human condition. To illustrate this claim, let me give three examples of how emerging technologies are increasingly influencing 1) our environment, 2) our food, and 3) even our own nature.
Environment: Geoengineering is the attempt to change the climate through technological means. There are basically two approaches: the first focuses on ways to reduce the solar energy that reaches the earth (e.g. with the help of radiation-reflecting structures in space or stratospheric sulfur aerosols), while the second aims at reducing the amount of greenhouse gases in the atmosphere (e.g. through artificial trees or phytoplankton blooms resulting from iron fertilization). The main ethical question here is whether geoengineering research should continue, and whether the technology should be employed if it turns out to be safe and effective. There are strong arguments in favor of geoengineering, maintaining that since the current approach – turning the tide of climate change through political initiatives and lifestyle changes – does not seem to work, we need to come up with something else to avoid catastrophe. In addition, even if we were to achieve a policy solution, we might already have passed certain tipping points and nevertheless be too late. This would mean that we would still need geoengineering to mitigate the destructive effects of climate change. On the other hand, it can be argued that geoengineering poses a moral hazard, in that the likelihood of people changing their lifestyle will significantly decrease as soon as there is the possibility of a technological fix for the climate change problem. Moreover, tinkering with the climate seems to be intrinsically risky, since we still do not know enough about all the factors that play into and are affected by the climate system. Furthermore, geoengineering attempts in one country might trigger harmful effects in another, thus raising the prospect of conflicts. Finally, geoengineering technologies could be used as weapons, e.g. by triggering droughts or floods in target countries.
Food: So-called in-vitro meat is meat cultivated in the lab without killing any animals. Various approaches are being researched, such as scaffold-based techniques and self-organizing tissue cultures. The main ethical question here is whether the further development of cultured meat is ethically desirable. Proponents argue that in-vitro meat sidesteps cruelty against animals, avoids zoonotic diseases and is more sustainable with regard to greenhouse gas emissions, energy, and the use of water and land. Critics point to the wisdom of repugnance and to unknown risks.
They also see in-vitro meat as unnatural. Finally, there is a moral hazard in that cultured meat can be seen as a technological fix subverting a much-needed moral shift towards vegetarianism.
Human nature: Enhancement technologies, finally, aim at improving the traits of perfectly healthy human beings. Examples of current enhancement technologies are nonsurgical beautifying procedures such as Botox and laser treatments, cosmetic surgery and dentistry, doping agents such as steroids and erythropoietin, smart drugs, mood enhancers, and brain-computer interfaces for gaming purposes.
Further developments in tissue engineering, genome editing, biogerontology, ICT and nanotechnology could lead to future enhancements concerning maximum life span, body modification, sensory abilities, motor skills, emotional capacities, cognitive capabilities and, indeed, even our moral features. This prospect has raised the idea that we might enhance human beings to the extent that it would be hard to still categorize them unambiguously as human beings: they might have to be regarded as posthumans. This admittedly highly speculative scenario might be achieved through genetic engineering, increasing cyborgization, or whole brain emulation, amongst others. The main ethical issue is whether the intentional transformation of humans into posthumans is ethically desirable. This question has triggered a debate between transhumanists and bioconservatives. The former argue that radically improving on the human blueprint, which as a product of natural selection is afflicted with imperfections, could be enriching and could reduce suffering (see e.g. Bostrom, 2005). The latter fear that creating posthumans subverts the idea of human rights and might endanger the human species (see e.g. Fukuyama, 2004).
The edited volume at hand first tackles the ethical challenges of emerging technologies in general and then zooms in on the moral questions raised by the application of new technologies in the healthcare arena specifically. Against the backdrop of the aforementioned developments, it is most welcome!