Autonomous vehicles aren't really autonomous.
300 years of work on autonomy says they're just really clever.
This is one of two posts that mark my getting back to blogging. This is the techy one and the other is a politics one, which very roughly are the dual themes of this new blog.
I’m working on regulatory innovation at the moment. It’s a dry topic that’s critical to some not-very-dry things: space, AI, engineering biology and autonomous vehicles. While the first three tend to make me feel insignificant, autonomous vehicles excite me on a basic, childish level. I want one. I want to send it on stupid errands, have it pick me up from the pub or take me on holiday while I sleep at the wheel.
My only objection to this incredible promise lies in the name: most such vehicles, including self-driving cars and most drones, aren’t ‘autonomous’. I will try to convince you that this is important and not at all the pedantry of a former political theorist.
What autonomy means
Autonomy is a relatively young concept in the engineering sciences, but it is an old one in philosophy, where it underpins an incredibly large and diverse body of thought about liberty, rationality and individual rights. It is not a particularly famous idea but it often sits beneath some much more famous ones, chiefly freedom and consent, as well as domination. Pretty much all liberal philosophy rests on some conception of the autonomous individual, and it’s also really important in various fields of applied ethics like biomedical ethics and criminal law.
Autonomy evolved out of an effort to understand where morality came from if not Gods, monarchs or nature. There is a long tradition of thought, beginning with Immanuel Kant, that supposes that autonomy is what makes humans special, that it separates us from animals. So the argument goes: animals are governed by base, primal instincts beyond their control, whereas humans are governed by goals they set for themselves about what is the right and good way to live (this logic also underpinned Kant’s racist distinctions between Europeans and ‘savages’). And if there is no higher authority telling us what is right and good, such as Gods or Kings, then we should protect that autonomy at all costs, and uphold the various rights and freedoms (of speech, assembly, association, etc.) that guard it.
Autonomy is thus usually defined in terms of the capacity to choose our own goals, not our actions. If this sounds odd to you, that’s probably because we often think of freedom (a close cousin of autonomy) as the absence of constraints on our actions. But this leads to some seemingly perverse conclusions — like the addict who chases their addiction without constraint. Are they really free just because they do as they please and nothing stops them? By the same token, is a poor hermit, who has nothing but wants for nothing and is perfectly happy, really unfree?
This dilemma occurs because even a ‘free’ person is constrained by what they actually want, and so a satisfying definition of freedom surely involves the ability to choose goals that are actually good for us and to stick to them. We are all a bit unfree, in that sense. True freedom, then, is being somebody who sets their own goals, who decides what’s important for them. In this way, a free person isn’t totally ungoverned, but rather self-governing. They still live by rules, but rules they make for themselves. We call this concept autonomy.
It is through this capacity that we set very trivial goals like ‘I will do parkrun every week’ and also significant ones like ‘family is most important’, or ‘I have obligations to people in my nation or community, but less so to people outside it’. Autonomy is not just the capacity to do X over Y; it is the capacity to value X over Y, and as such it is the basis of all moral choice and a defining condition of politics.
What self-driving cars do
Most vehicles described as ‘autonomous’ are apparently so because they take lots of decisions without being told what to do. They make choices about actions, unlike mere machines, which simply execute them. This is an important innovation, but it is not what autonomy means. A slave might take lots of decisions about how to go about their work, and may be much smarter and better placed than their master to do so. But if you are enslaved, by definition, you are being deprived of autonomy.
Autonomy is less the ability to make rational decisions in general, which computers and animals also do — that’s intelligence — and more a matter of self-governance, agenda-setting and the determination of what is good and right to pursue in the first place. Self-driving cars, therefore, are almost paradigmatically not autonomous. They are very intelligent, insofar as they are incredibly good at choosing actions that fulfil the goals they inherit from humans. But insofar as a human still chooses to travel from A to B, and chooses why, it is the human who is acting autonomously, not the car.
Admittedly, the distinction between choosing goals and choosing actions is not so clean. To get to a destination, a car must select millions of goals and actions, which are nested and dependent on each other. ‘Get in the slow lane safely within 100 metres of the exit’ is a goal when you are looking down the decision tree (towards the actions that achieve it), and an action when you look up it (towards the goal(s) it serves). Self-driving cars do therefore set goals, of a sort, but all of them are directly instrumental to the goal of getting onto the next road, and then the next, and the next, until the car reaches a destination it had no part in determining.
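To make the nesting concrete, here is a toy sketch in Python. It is entirely my own illustration, not how any real driving stack is written: a little plan tree in which every node reads as a goal from below and an action from above. Note that the root, the only node the car never chooses, is supplied by the human.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A step in a plan: a goal seen from below, an action seen from above."""
    description: str
    children: list["Node"] = field(default_factory=list)

# The root is the one goal the car never sets for itself: it is handed
# down by the human. Everything beneath it is instrumental.
trip = Node("Drive me from A to B", [
    Node("Take the motorway", [
        Node("Get in the slow lane safely within 100 metres of the exit", [
            Node("Indicate left"),
            Node("Check mirror and blind spot"),
            Node("Steer into the gap"),
        ]),
    ]),
])

def print_plan(node: Node, depth: int = 0) -> None:
    # Each line is an action relative to its parent and a goal
    # relative to its children.
    print("  " * depth + node.description)
    for child in node.children:
        print_plan(child, depth + 1)

print_plan(trip)
```

The only point of the sketch is that ‘goal’ and ‘action’ are relative positions in the tree; the one absolute fact is where the root comes from.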
Perhaps if my car chooses what music to play, we can say it is slightly more autonomous. If it chooses what time to leave home in order to pick up my kids from school, making judgements about safety, punctuality and dinner, it is exercising moderate autonomy — and this level isn’t implausible. But until it starts going to shops on its own, and crucially, for its own reasons, it is not fully autonomous.
Why it matters
Part of this is pedantry and I won’t pretend otherwise. It’s my blog and I write and post autonomously. But there will come a time when it will be important for the public to understand the difference between ‘smart/intelligent things’ and ‘autonomous things’. There is currently much fanfare in the world of AI about the possibility of superintelligence and its implications for AI risk and safety. An artificial general intelligence comparable to or surpassing humans is, if we are to believe the hype, not really that far away.
Without getting drawn into the debate about how likely this is, it is worth pointing out that the core condition of a deviant AGI subverting human intentions is not so much its becoming superintelligent as its becoming autonomous. That is to say, it would override, expand or reinterpret its own system prompts in ways we did not foresee or intend. Presumably the fear is that autonomy will develop organically out of intelligence, as it did for humans, because while we are spending trillions on making machines more intelligent, there is no comparable investment in developing ones that determine their own agendas and imperatives.
And why would there be? The financial potential of AI invariably lies in having machines serve purposes that humans want and are prepared to pay for. How can I reliably make money from a machine that has its own agenda? I’m not even sure I’d want to get in a truly autonomous car. Fortunately, the vehicles in development now are not autonomous but simply very smart cars. That is what we should call them, along with smart data, smart meters, smart homes, smart speakers and TVs. Alas, for this reason and others, a great wrong was done to us at the turn of the century.