Regulating Technologies - King's College London, London, UK, 7-8 April 2007

Conference report: Regulating Technologies, an international and interdisciplinary conference held formally to launch the Centre for Technology, Ethics and Law in Society, based in the School of Law at King's College London.
In his opening presentation Lawrence Lessig focused on two "c"s: code and corruption. His first example concerned web content that is harmful to minors. As efforts to regulate the problem through law have not been successful, attention has turned to code that blocks access to pornography. Censors, however, might secretly block other content as well, and as there is no obvious way to challenge which content is blocked, no legal action against the blocking of particular content is available. This leads to less free speech. (See the ACLU white paper Fahrenheit 451.2: Is Cyberspace Burning?)

Lessig suggested two theorems, the first being "no law means bad code" and the second "good law avoids bad code". Applied to the example of content that is harmful to minors, Lessig thinks that law is better than bad code: the state could mandate invisible tags on web pages flagging such content, and browsers could be made to block tagged pages. The burden would fall on websites (the same burden they carry in real space), on browser manufacturers (who already offer ways to block content) and on adults (on whom existing legislation currently places no burden). This would weaken the market for censorware.
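As an illustration of the kind of scheme Lessig sketched, here is a minimal browser-side blocking check keyed to a self-applied page label. The label name, the meta-tag format and the kids-mode switch are hypothetical details chosen for illustration; the talk did not specify a concrete mechanism.

```python
from html.parser import HTMLParser

class LabelScanner(HTMLParser):
    """Scans a page for a hypothetical self-applied 'harmful to minors' label."""
    def __init__(self):
        super().__init__()
        self.flagged = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Both the tag and the label value are assumptions for illustration.
        if tag == "meta" and attrs.get("name") == "rating":
            if attrs.get("content") == "harmful-to-minors":
                self.flagged = True

def browser_should_block(page_html: str, kids_mode: bool) -> bool:
    """Block only when the page self-labels and the browser is in kids mode.

    The burden sits with the website (to label), the browser maker (to
    honour the label) and the adult user (to switch modes), mirroring
    the allocation described above.
    """
    scanner = LabelScanner()
    scanner.feed(page_html)
    return kids_mode and scanner.flagged

# A labelled page is blocked in kids mode, visible otherwise.
page = '<html><head><meta name="rating" content="harmful-to-minors"></head></html>'
assert browser_should_block(page, kids_mode=True)
assert not browser_should_block(page, kids_mode=False)
```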

The first session of the conference was on biotechnology.
Stephen L. Minger (Director of the Stem Cell Biology Laboratory and co-organiser of the London Regenerative Medicine Network) spoke on the therapeutic and research potential of human stem cells. Embryonic stem (ES) cells are cells whose fate is to turn into tissue (skin, brain, etc.); they have to be "harvested" within about two days. Organ transplantation has worked for several years now, and cell-based technology works today; there is, however, a profound shortage of human fetal tissue. The stem cell discussion and controversy centres on multiplying the cells from one foetus until they amount to the equivalent of up to 15 foetuses; foetal tissue as such is no longer controversial.

In the UK all reproductive-cell medical technology falls under the regulation and control of the Human Fertilisation and Embryology Authority (HFEA, created in 1990). In order to use embryonic stem cells, the "parents" have to sign an informed consent form, and they must not receive any financial inducement for the donation.

Deryck Beyleveld referred to the Pro-Life Alliance case and discussed the difference between legal fact and legal fiction. Under the UK Human Tissue Act, consent is "deemed to" have been given. Consent, however, is not absolute, because personal autonomy is not absolute. In patent law, a legal fiction is used when the determination of a genetic sequence that occurs in nature (which, as a discovery, would usually not be patentable) is treated as an invention so that it becomes patentable.

Graeme Laurie described the new Ethics and Governance Council of the United Kingdom Biobank (http://www.egcukbiobank.org.uk/).

Han Somsen, Tilburg Institute for Law, Technology, and Society (TILT), talked about "Cloning Trojan Horses: Precautionary Regulation of Reproductive Technologies". There is an imbalance between what we can do and what we can currently understand. Legal articulations of precaution include Principle 15 of the Rio Declaration. The precautionary principle is not a legal principle for risk prevention, but has become a political tool allowing states to control the future course of technological development (van den Daele).

Dr Andrea Buechler, Zurich University, discussed the Swiss federal law on transplantation of tissue (TPG). If consent is necessary to remove brain tissue from an embryo, who should give it? It has been argued that the pregnant woman has decided "against" the foetus and put its interests behind her own, and that she therefore should not have the right to decide or give consent on the foetus's behalf. There might also be a risk of pregnancies being terminated in order to donate tissue to a particular person; this is not allowed under the Act. The law thus reflects the view that embryos are potential human beings.

Andrew Murray, LSE Department of Law, talked about the Clarkson Conundrum and the (Cyber)State. He distinguished between static regulation (Philip Selznick, administrative law theory) and complex regulation (cybernetics; regulatory flux, whether external, e.g. new technology, or internal; complex systems and systems theory). Murray also discussed regulatory systems (Rob Baldwin & Martin Cave on regulatory strategies; Mark Thatcher on regulatory interpretations), Werner Heisenberg (inventor of matrix mechanics; the Heisenberg uncertainty principle) and Jeremy Clarkson ("The end is near", Sunday Times, 28/01/07: "the Internet's like mercury"). According to Murray, once you create a technology the genie is out of the bottle; in other words, once it is developed, others can do the same (see e.g. Napster and Kazaa). He also mentioned cyberlibertarianism and Reidenberg's Lex Informatica.

He further discussed three concepts. Concept 1, layers: not just layers but "layered vertical regulation"; changing an applet that runs on top of TCP/IP does not change TCP/IP, but changing TCP/IP changes the layers above it. Concept 2, environment: regulating the physical world; social legal theory; environmental, social and legal (hierarchical) modalities. Concept 3, the power of the network: from .control to .community; technology increases the power of individuals.
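To make the layering point concrete, here is a toy sketch (not anything presented at the conference): each layer only wraps what the layer above hands down, so swapping the application format leaves TCP/IP untouched, while a change to the transport layer is felt by everything stacked on top of it.

```python
# Toy "stack": each layer only wraps what the layer above hands down.
def application_layer(message: str) -> bytes:
    # The application format can be swapped freely (JSON, XML, ...)
    # without touching anything below.
    return f"APP|{message}".encode()

def transport_layer(payload: bytes) -> bytes:
    # A change here (framing, checksums, ports, ...) would be felt by
    # every application stacked on top.
    return b"TCP|" + payload

def network_layer(segment: bytes) -> bytes:
    return b"IP|" + segment

packet = network_layer(transport_layer(application_layer("hello")))
print(packet)  # b'IP|TCP|APP|hello'
```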

In his presentation "Articulating law, markets, social norms and technology: what differentiating might teach us" (www.vub.ac.be/LSTS), Paul De Hert discussed whether law has content. Law has always existed, even in times without democracy or a quest for the just; law as a practice and as an institution existed even in Nazi Germany. Lawyers are not solution-oriented: they look at law as lawyers, not at technology. (See also Bruno Latour, Making Things Public; Reassembling the Social, 2005.) According to De Hert, law is a matter of imputation, that is, law serves to attach things or persons to persons or things. Holding things and persons together is what law does; law does not regulate. (See also Cass Sunstein, Legal Reasoning and Political Conflict, OUP 1996.)

Jonathan Zittrain discussed malware development and how it leads to information appliances, where the producer decides how an appliance may be used. The device is closed to outside development; only the vendor decides on its use and on updates, and vendors choose partnerships with third-party suppliers, which then become the only way for the user to obtain third-party products. In other words, the PC is starting to get locked down.

Several cases in the US exemplify this development:
- TiVo v. EchoStar case: EchoStar lost and had to disable a certain function in all units already sold. The result is that a producer can even decide to make an appliance non-functional again.
- OnStar case: pre-installed microphones in cars; the FBI told the company to open the microphones so that it could eavesdrop (see "FBI taps cell phone mic as eavesdropping tool", www.news.com, 4 December 2006).

Supply creates demand. You think a technology is going one way; suddenly it goes the other. An architecture of control is created. This leads to the question of how we should regulate technologies that stand to regulate us.

Zittrain also presented the project stopbadware.org, a cooperation with Google. He furthermore referred to Yochai Benkler's book The Wealth of Networks (www.benkler.org).

On Sunday, Judy Illes talked about neurotechnology, lying and deception (Spence et al. 2001; Langleben 2002; Kozel et al. 2005: true/false mock crime). Among the ethical challenges of MRI-based lie detection is that human behaviour is complex and a given behaviour cannot be tied to a single cause; relevant factors in "lying" include memory, intention, motivation, planning, monitoring, mood and daily physiology. Lying and deception are two different things: lying is frank misinformation that states an erroneous conclusion, whereas deception can be misleading information, omission or distortion that leads to an erroneous conclusion.

According to Illes, the technical issues concerning lying and deception comprise:
-paradigmatic issues: standards of practice do not yet exist, so the question of quality control arises; instruments differ, as do study designs, experimental parameters (internal/external validity), socioculturally appropriate stimuli (at the moment mostly white male college students between the ages of 20 and 30 are tested) and data geography (ROIs: which parts of the brain are looked at)
-analytic issues: localisation vs networks; motivation-mitigated neural signatures

Ethics and policy issues include:
-privacy
-context: autonomy, coercion (accused persons, victims (false memories), children and adults (stigma, profiling))
-justice: what goals? what uses? (proximate, long-range)
-non-maleficence: false positives/false negatives, unexpected clinical findings (is there a moral obligation to tell the person?); see the worked example after this list
-countermeasures (internal and external, e.g. beta blockers, TMS; can squeezing your toes affect the outcome of an MRI?)
-allocation of scarce resources for research
-oversight: by whom and how
-moral culpability (Kulynych 2002): the goal is to find out whether somebody is lying, but not why or what somebody is lying about
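As a worked example of the false-positive concern above (with invented figures, since the talk cited none): even a scanner with 90% sensitivity and specificity flags mostly innocent people when lying is rare in the screened population.

```python
# Hypothetical figures, for illustration only.
sensitivity = 0.90   # P(flagged | lying)
specificity = 0.90   # P(cleared | truthful)
base_rate = 0.05     # P(lying) in the screened population

p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
# Bayes' theorem: probability that a flagged person is actually lying.
ppv = sensitivity * base_rate / p_flagged
print(f"P(lying | flagged) = {ppv:.2f}")  # ~0.32: most flags are false positives
```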

The major risk and troubling concern is the premature adoption of the technology. Some publications, such as The Nation's "Brave Neuro World" and the New Scientist's "Thought Detectives", are profound overstatements (see also Jeffrey Rosen, "The Brain on the Stand", New York Times, 11 March 2007). According to Judy Illes, "the neuroscience of ethics leads to the ethics of neuroscience". She is also associated with the Neuroethics Society.

Richard Ashcroft, Queen Mary, talked about bioethics and the military. He referred inter alia to the book Mind Wars and the Dana Foundation (as well as to the book and film Flowers for Algernon). The enhanced warfighter is an issue within the military: the prediction and enhancement of soldier outcomes is important, and genomics is applied to screening for physiological and psychological traits of interest. The technologies of enhancement include cyber technologies as well as biomedical technologies (drugs, gene therapy). Within the latter one can distinguish multiple-use, time-limited technologies, single-shot permanent technologies (somatic gene therapy) and semi-permanent, removable technologies (cyber implants).

Moral hazards include:
-enhancement as such
-disinhibition of combatants: does it undermine the capacity to act in accordance with jus in bello? If the feeling of shame is taken away, might this also affect the ability to remember?
-dissolution of person/body/weapon
-dissolution of the role/identity distinction: the classical soldier is a person occupying a role temporarily, who can assume or put aside the role when necessary; when hard-wired, is this still possible?

Regulatory options include
-upstream (ethics of science, research ethics)
-midstream (non-proliferation - international treaties and inspection, regulation via product licensing)
-downstream (jus in bello, command responsibilities, warrior virtues - moral as well as physiological enhancement)

What should be regulated, and how:
- regulate enhancements as weapon systems (cf. dumdum bullets, NBC weapons)
- regulate them as "moral horrors" (cf. reproductive cloning)
- regulate them as quantitative increases of risk within the existing jus in bello/military discipline regime
- don't regulate (war is hell; realistically you win first and worry about it later)

Charles D. Raab, in his presentation "Regulating information systems: rethinking the 'tools' outside the 'box'", referred to a paper he has been writing with Paul De Hert. As a starting point he referred to Lessig's tools of regulation, observing that they do not say much about the actors and processes involved (policy actors need to be brought back into the picture). He expressed enthusiasm for Christopher Hood's "tools" approach (looking at what governments do). Raab made two general observations. First, a policy-actor approach is needed, because tools and their implementations are produced by decisions made through social, political and economic processes. Secondly, there is a need to look more deeply, more critically and more normatively at "tools" and the "toolbox", because they have wide-ranging effects and because they do not operate singly; they can act as substitutes for one another, unlike in a real toolbox, where a hammer cannot substitute for a screwdriver.

This leads to
1. Rethinking actors: a policy-actor approach to privacy participants and their relationships (Bennett and Raab, 2006: 220): regulatory bodies, data controllers, technology developers/providers.
2. Rethinking the tools (Lessig, 1999): critics of Lessig include Joel Reidenberg and Schwartz (the "Lessig two-step", the property aspect), Mark Rosenberg; what about ambient intelligence?

Hood's tools (Tools of Government, 1983: 4-6; Tools in ... e-government, revised version):
-nodality: the property of being in the middle of an information or social network; the ability to traffic in information by having the "whole picture"
-authority: possessing legal or official power; the ability to determine in a legal or official sense
-treasure: possessing a stock of money or fungible chattels; gives the ability to exchange
-organisation: possessing a stock of skilled people, land and buildings; gives the physical ability to act directly

Four Criteria or "Canons" (Hood 1993: 133):
1. the instrument or mix of instruments used should be selected after some examination of alternative possible tools for the job
2. the tool should be matched to the job
3. the choice must not be "barbaric": it must satisfy certain ethical criteria, such as justice and fairness
4. effectiveness is not enough: the desired effect must be achieved with the minimum possible drain on the government's bureaucratic resources

Five Questions about Regulatory Toolkits:
1. Is it complete and accurate as a taxonomy that will capture the future as well as the past? (governance; a paper in the Netherlands; the dynamics, complexity and diversity of tools)
2. According to what criteria can the instruments be compared and contrasted? (optimal or adequate solutions?)
3. To which specific information practices or contexts do different tools pertain? (data mining, data collection; several fields, several technologies; be specific about the situation and purpose for which a certain tool is better suited)
4. Are the instruments substitutable for each other, or are they complementary?
5. If they are complementary, how do the instruments combine, and how might they combine better? (no instrument is independent of the others)

Bert Gordijn, the Netherlands, discussed nanotechnology. In 1974 Norio Taniguchi first used the term; a nanometre is one-billionth of a metre (about the width of 10 hydrogen atoms). The broad concept of nanotechnology covers any new technology smaller than microtechnology, with at least one dimension at the nanoscale. The narrow concept, on the other hand, means the ability to program and manipulate matter with molecular precision and to scale this up to three-dimensional products of arbitrary size.

Eric Drexler wrote the first technical paper on molecular nanotechnology in 1981. Its conceptual roots lie in John von Neumann's idea of a kinematic constructor (1950s) and Richard Feynman's idea of atomic maneuvering (1959). "With assemblers we will be able to remake our world or destroy it."

The question arises whether these are utopian dreams or apocalyptic nightmares. The utopian dreams include clean manufacturing, reversal of environmental degradation, inexpensive high-quality products, mass production of food and improvements in medicine. The apocalyptic nightmares consist of environmental damage, economic disruption, an unstable arms race, totalitarian surveillance and the grey goo scenario (self-replication does not stop and turns the biosphere into one grey goo).
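A back-of-the-envelope sketch of why the grey goo scenario is feared, using invented orders of magnitude: exponential self-replication overtakes any fixed mass in a surprisingly short time.

```python
import math

# Invented orders of magnitude, for illustration only.
assembler_mass_kg = 1e-15   # hypothetical mass of a single assembler
biosphere_mass_kg = 1e15    # rough scale of the biosphere
doubling_time_s = 1000      # assumed replication time per generation

doublings = math.log2(biosphere_mass_kg / assembler_mass_kg)
hours = doublings * doubling_time_s / 3600
print(f"{doublings:.0f} doublings, about {hours:.0f} hours")  # ~100 doublings, ~28 hours
```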

An obsolete view is that creating self-replicating assemblers is physically impossible (Smalley, 2001). Current research is not focused on achieving assemblers and instead covers a wide array of different nanoscale structures. Special issues are neuroimplants (Freitas, 1999). A connected ethical problem is the triggering of medicalisation (if everyone else has neuroimplants, should I have them as well?).

Possible scenarios include:
1. a normative debate: the intentional transformation of human beings into post-human beings; transhumanists vs bioconservatives; genetic germline modification
2. man-machine fusion: the post-human cyborg
3. the uploaded mind: software-resident intelligence that goes on living forever

Prof. Dr. Bert-Jaap Koops, Tilburg Institute for Law, Technology, and Society (TILT), presented "When normative technologies abound: developing criteria for 'code as law'". He discussed the acceptability of normative technology in terms of democratic and constitutional legitimacy (rules being built in intentionally in order to steer people's behaviour).

Normative technology includes compliance-enforcing tools (the invisible hand of the law) and other behaviour-influencing technology (the invisible hand of the market). His sources include a heuristic methodology and literature by Lessig, Brownsword (2004, 2005), Asscher (2006, procedural criteria) and authors on the regulation of technology (Koops, 2006).

The result is an integrated set of criteria, both substantive and procedural. As primary criteria he mentioned human rights, other moral values, the rule of law and democracy (not only voting once in a while, but participation of stakeholders in general). Secondary criteria could then be principles of procedure (transparency of rule-making, accountability, expertise <-> independence, efficiency) and principles of result, i.e. of the rule itself (choice, flexibility, transparency, effectiveness).

When there are deficits in the acceptability of "code", what should be done?
- adapt the normative technology
- adapt our understanding of acceptability (re-assess notions of democracy, the rule of law and legitimacy: what does law mean in a world of Ambient Intelligence with Ambient Law built in?)

T.J. McIntyre, University College Dublin, in "Filtering or Blocking: Evaluating Process and Purpose", noted that censorship is nothing new, so why is filtering different? The reasons include opacity (with implications for traditional rule-of-law values) and automaticity (dilution of moral responsibility, removal of discretion in enforcement, loss of proportionality and loss of feedback).

Rhetorical issues include the fact that filtering is in common use, even by cyberlibertarians; the suggestion that whatever is not filtered is unclean; the illusion of precision; and the value of alternative terms such as "censorware" (Finkelstein) or "blocking" (ACLU).

Issues of transparency and accountability affect
- traditional state censorship: lists of banned books, criteria for designation, prior notice, rights of appeal/challenge, public enforcement
- filtering systems:
• truth in blocking (Lessig): notification of blocking and the identity of the blocker; is the end user deceived (web site apparently not retrieved because of a technical problem) or informed why? (see the sketch after this list)
• commercial incentives of manufacturers: blacklists are secret, with no independent review (private producers have no incentive to allow one); litigation against researchers; the temptation to blacklist critics; governments unable to review the systems they have mandated
• filtering systematically targets intermediaries, who may not have incentives to defend speech and who are probably not public actors (Kreimer, Censorship by Proxy)
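To make the "truth in blocking" contrast concrete, a minimal sketch follows; the status codes, blocker name and policy label are hypothetical, chosen only to show the difference between a filter that masquerades as a technical fault and one that tells the user what happened.

```python
from typing import TypedDict

class BlockResponse(TypedDict):
    status: int
    body: str

def opaque_block(url: str) -> BlockResponse:
    # Deceptive variant: indistinguishable from a technical failure,
    # so the end user has nothing to challenge.
    return {"status": 404, "body": "Not Found"}

def transparent_block(url: str) -> BlockResponse:
    # "Truth in blocking": the response names the blocker and the reason,
    # giving the user something to appeal. Blocker and policy are invented.
    return {
        "status": 403,
        "body": f"Access to {url} was blocked by ExampleFilter under its "
                "minors-protection policy. You may appeal this decision.",
    }
```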

Filtering and feedback: public enforcement processes act to provide feedback (Tien, Architectural Regulation and the Evolution of Social Norms).

Two models of filtering:
- David Post: decentralised rule-making; a market for law; respect for the autonomy of the individual; exit ensures accountability; consent as the legitimating factor; efficient outcomes; allows for multiple communities and multiple values
- Lawrence Lessig: policy-making by an invisible hand; not voluntary for those affected; excludes public values; imposes market dynamics on civil discourse; denies legal accountability; hence a preference for narrow, publicly required filters, which will ensure greater transparency and crowd out bad code

In other words: consent (Post) vs democratic involvement (Lessig).

BT's Cleanfeed is a "mandatory voluntary" system. The blacklist is generated by the IWF (Internet Watch Foundation), and end users are unaware of the blocking: they are simply presented with error messages.

Concerning Chinese filtering, intermediaries such as Google or Yahoo are subject to a binding "self-discipline pact" of self-censorship. In Center for Democracy and Technology v. Pappert (2004), a Pennsylvania law had required the blocking of child pornography.

McIntyre also mentioned function creep and slippery-slope arguments (Volokh) concerning equality: if filtering is used for one kind of content, why not use it for others? Anti-circumvention measures (proxies, Tor, peacefire.org) might also gain in importance.

Mireille Hildebrandt (Erasmus Universiteit Rotterdam and Vrije Universiteit Brussel) talked about a vision of Ambient Law. Autonomic computing is one step beyond proactive computing, which is one step beyond interactive computing, which is one step beyond passive (TV) computing. Hildebrandt uses "ambient intelligence" (EU, Philips) in the sense of proactive computing: the computer anticipates your behaviour (the coffee is ready before you want it, the computer detects from your keystrokes that you are getting tired, and so on).

Hildebrandt discussed technological and legal normativity. Technology is neither good nor bad, but never neutral (Kranzberg, 1986). One can distinguish a normative from a moral impact: consider a smart car with machine-to-machine communication with the road, RFID technology and biometric measures of your facial movements that correlate with your driving behaviour, so that it detects when driving becomes dangerous. This has a moral impact, and the car can do two things: warn you (a light, an irritating sound) or tell you to park the car within two minutes, otherwise the motor will stop. This regulates your behaviour; one has no choice, and it is entirely transparent. Not every technology is determinative; there is also a constitutive and a regulative impact.
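The smart car example can be read as a small state machine; the following sketch uses invented thresholds and signal names to show the two regulatory modes (a resistible warning vs an irresistible rule).

```python
from enum import Enum, auto

class CarAction(Enum):
    NONE = auto()
    WARN = auto()        # normative nudge: a light, an irritating sound
    FORCE_PARK = auto()  # hard rule: park within two minutes or the motor stops

def regulate_driving(danger_score: float) -> CarAction:
    """Map a (hypothetical) danger score from biometric and road
    sensors to the car's regulatory response."""
    if danger_score < 0.5:
        return CarAction.NONE
    if danger_score < 0.8:
        return CarAction.WARN        # the driver can still choose to ignore it
    return CarAction.FORCE_PARK      # no choice left: the technology enforces

assert regulate_driving(0.3) is CarAction.NONE
assert regulate_driving(0.6) is CarAction.WARN
assert regulate_driving(0.9) is CarAction.FORCE_PARK
```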

Legal normativity means that legal norms are externalised and intended (compare technological normativity). Law functions at a meta-level in a constitutional democracy. Legal and technological tools are not interchangeable: technological devices and infrastructures need legal regulation precisely because they regulate.

On the relation between law and technology: how do we regulate technology if it regulates our lives (is written law effective?)? Law is already technologically embodied: in the script.

Another point Hildebrandt discussed was the technological articulation of law (law in constitutional democracy):
(a) from orality to script (Pierre Lévy and Ricoeur?): script creates distance between author and content, addressing people who are not present and people in the future; the text is liberated from the custody of its author (the author is no longer in charge of what the text means); there is a primary text with authority, later comments, then comments on comments, and hence the need for interpretation (Lévy (1990), The Blind and the Lame)
(b) from hand-written to printed script: accumulation of text; orality is still primary at first (people read texts out loud); from master and pupil to the individual reader, from disputatio to systematisation; more people read and the master-pupil relation recedes, facilitated by technical development; the proliferation of text is an advantage of printing, followed by systematisation (plus the interpretation inherited from orality)
(c) from letter-isation to digitalisation: from a linear sense of time to real-time segments and points.

In order to run a state you need detailed law, which calls for the printing press. A class of people has to safeguard the coherence of the text; autonomous law therefore appears between ruler and ruled.

A vision of ambient law includes
a) mutual transformations
b) ambient law

The failure of data protection legislation might be caused by its focus on data instead of profiles and by technological infrastructures that do not enable the exercise of rights. A technological infrastructure should be developed that enables ambient law: the right of access to profiles has to be embodied in the technology itself. Legal transparency tools in this case mean positive freedom.

PETs make the environment less intelligent, and so the infrastructure has to be adapted; TETs (transparency-enhancing technologies) could be used instead. If the goal of the environment is not to disturb you with questions, a program that keeps asking you for decisions cannot be the solution, and opt-in/opt-out is not complex enough. For now, solutions should focus on preserving our feeling of choice.

Karen Yeung and Ben Bowling talked about technological applications in criminal justice and security. These can be surveillant, investigative, probative, coercive, punitive or communicative. According to Research for a Secure Europe (2004), "security without the support of technology is impossible".

Surveillance technologies now involve extensive surveillance that goes beyond the traditional notion: earlier only suspects were watched, now surveillance is general. A proper regulatory framework is missing for new technologies that do not require physical intrusion but can "look through walls and clothes".

Definitions are therefore necessary and effects should be described. Crime is being driven by technology; thinking about technology is therefore required while it is being produced, rather than once it is presented as a fact.