When artificial kidneys were first used as a medical tool in 1945, it became unnervingly clear that human organs, until then essential to the human make-up, were replaceable. Soon after, hearts — once thought to be the linchpin of humanity — were themselves replaced by external devices, supplanting the inexplicable complexity of human muscle with far simpler, synthetic parts.
This month, a team of Yale scientists partially revived the cellular function of pigs a full hour after the animals’ brain and cardiac waves had flatlined. With the help of their OrganEx system, they restored some cellular activity in the pigs’ hearts, livers and — most meaningfully to bioethical discussions — brains. Though the pigs did not regain consciousness, the Yale researchers demonstrated that vital organs may remain treatable for longer than most scientists had suspected. While this finding doesn’t yet have clinical applications, it may soon offer a new challenge to medical claims about where life ends and death begins.
The brain is the last human organ whose parts cannot be replaced synthetically: As philosopher Daniel Dennett writes, brain transplants are the one kind of operation where one should wish to be on the donating side. If at one point our hearts epitomised the singularity of humans, today the gooey, floating mass within our skulls delineates what we understand as human life.
Until the middle of the 20th century, a patient could be pronounced dead without debate if her heart stopped and her lungs ceased to function. But new ventilators and defibrillators meant that checking for rising, falling or fluttering chests was no longer a valid way to diagnose death. In the late 1960s, physicians who were concerned about the viability of transplantable organs proposed a new metric for thinking about our mortality, one focused on brain death rather than on the functioning of other organs. Their approach soon took hold, and when today’s physicians record their patients’ time of death, they mean the moment when medical devices can no longer register or restore consciousness.
How to forestall death
As Harvard bioethicist Robert Truog suggests, what we formally call “death” consists “more of a moral judgement than a biological fact.” In other words, brain death is less the point at which an organism is definitively gone and more an arbitrary limit, designed to permit legal and medical systems to move on. Though there are no properly documented cases of recovered consciousness after a correct brain death diagnosis, Truog predicts that medical advances may at some point preclude us from using the term “brain death” as a legally binding shorthand for what the US President’s Council on Bioethics defines as “human death”: The irreversible cessation of the “fundamental work of a living organism.”
With the successful revival of some brain and cardiac cellular activity in mammals, the day when medical technologies will again force us to update our definition of human death looms slightly closer.
This promise is at once thrilling and terrifying. If we extrapolate from the potential of the Yale team’s OrganEx system, we may eventually be capable of reviving silent brains and restarting organs that once would have been considered irreversibly dead. (As it turns out, “irreversibly dead” is not a pleonasm.) In just a few decades, we may be forced to acknowledge that death isn’t a biological absolute so much as an administrative process. Death certificates might indicate that the deceased’s family couldn’t afford to reboot their loved one — or to preserve their body long enough to let such technologies take hold. With advancements in cryonics and emerging technologies such as OrganEx, this is no longer just a science fiction hypothetical but a reality conceivable within our century.
The distinction between life and death, in other words, might become a more painful sort of moral judgement: A matter of who can afford to keep a body functioning. In such a future, health inequities would be exacerbated; the wealthy could repeatedly forestall their death, while those least well-off would be forced to accept a truly “irreversible cessation” of their bodily functions. Yet this future should not sound unfamiliar to those least well-off today. In 2022, a person dies almost every hour while waiting for an organ transplant.
The notion that death could be, and sometimes is, an administrative hurdle — the result of missing ventilators, organs or, in the future, superior but expensive OrganEx devices — makes funerals harder to accept. We might ask whether we should continue to develop life-extending technologies if they risk exacerbating our already horrifying inequities.
The answer, I suggest, is yes. In the 1940s, the vast majority of patients with failing kidneys did not have access to dialysis — though some exceptionally well-off, well-connected or simply lucky ones did. Since then, millions of low-income patients have been saved because we accepted this period of inaccessibility. In 2022, artificial kidneys are far from equitably distributed, with those who lack health insurance often unable to afford them. Yet the only way of increasing access to cutting-edge medical interventions is by encouraging more funding for them — even if this temporarily worsens disparities.
If the philosopher William MacAskill is right — and if we do our part to ensure we have a future to look forward to — humanity is only entering its adolescence and has a moral obligation to improve the lives of future generations. In fact, with the current pace of technological advancement, it is not implausible that these futuristic, life-extending medical technologies may become available for low-income people alive today. And one might argue that the fastest, most ethically permissible way of lowering the price of extraordinary medical therapies is by having the wealthy subsidise them as initial customers, as philosopher John Rawls implies.
DNA sequencing is a case in point: The first incomplete sequence cost $2.7 billion in 2003 and offered no clinical relevance. In 2011, Steve Jobs paid $100,000 to learn his genome sequence and his tumours’ genes, without encouraging results. Today, thanks at least in part to Harvard geneticist George Church, who has advocated for the democratisation of genome sequencing since the 1990s, it is the upper-middle-class American’s $299 go-to Christmas present and is only beginning to provide clinical benefits. Tomorrow, insurance companies and European governments may offer DNA sequencing free of charge, allowing vulnerable populations to benefit from this once-luxurious tool.
The practice of forestalling death is as old as it is inseparable from the concept of medicine. As history shows, today’s extraordinary measures will simply be tomorrow’s measures, saving the lives of real humans, both rich and poor. This will remain true even when we again tweak our definition of where life ends and death begins.
— Raiany Romanni is a bioethicist, a Harvard Kennedy School Fellow in Effective Altruism, a VitaDAO Fellow and an A360 Scholar.