AI, language, and liability

How worried should we be and why?

This is a meandering essay about language, intelligence, and the claims of AI enthusiasts and catastrophists. Although I’m no expert, given how little we really know about human cognition relative to its infinite complexity, I am skeptical about both apocalyptic and utopian predictions about AI and our future. What follows is a discursive meditation on some reasons why.

The Rosetta Stone in the British Museum, courtesy Wikipedia

Cory Doctorow recently published an article titled “Code is a Liability, not an Asset,” which explores all the ways software generally, and AI specifically, creates downstream effects that he posits the tech industry is either ignorant of or willfully ignoring. It’s a great article, insightful and full of excellent examples of what he’s describing.

Throughout the article, though, he repeatedly restates the title in various forms: that software, AI, code for the sake of code, is “a liability, not an asset.” Something about the formulation snagged in my brain, distracting me from the otherwise excellent points he makes.

After some reflection, I think the problem stems from the various meanings of the word “liability” and how they muddle and distract from his thesis, likely unconsciously and unintentionally.

The four most common meanings of liability are:

  1. The accounting sense, a debt incurred and outstanding;
  2. The legal sense, as in a responsibility;
  3. The probability or risk sense;
  4. The most generic sense, as a disadvantage, something hindering you or holding you back.

The accounting sense has real applications in software, the bulk of Doctorow’s theme. Software, and presumably AI as well, definitely is a “unit of wealth or value that is expected to provide future value,” the definition of an asset. But both also have maintenance expenses tied to them. In accounting, expenses are accrued as they are incurred, and accrued expenses that haven’t been paid sit on the books as current liabilities. Accountants spread the cost of long-lived assets through the practice of amortization and depreciation, dividing that cost up across the useful life of the thing. In the case of software, in the strictest sense, as Doctorow points out, it doesn’t wear out and can be used an infinite number of times, so theoretically it has no maintenance expense or depreciation. But that’s not how the real world works.

Software is a tool for automating workflows. In a factory, the workflow is physically represented by an assembly line, the sequence of steps necessary to go from input to output. Because the world changes, workflows change. Software that’s drifted from the actual workflow, or more accurately, software for a workflow that has drifted away from the original, has to be updated.

Software is also dependent on data that is suited to its function. If data degrades or changes, or the workflows generating the data drift away (the more common situation), the software needs to be altered to adapt. This is an expense.

Last, software only exists as electronic traces inside hardware. Some software is designed for, and optimized for, specific hardware infrastructure. As long as that hardware infrastructure remains unchanged, all is well, but we all know that doesn’t happen. Hardware is always evolving, necessitating the dreaded upgrades. Enough upgrades go by and now your software ALSO needs upgrading because it’s no longer optimized for the new hardware.

These all generate conventional expenses from the accounting perspective.
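If it helps to see the bookkeeping spelled out, here is a minimal, hypothetical sketch of straight-line amortization running alongside the kind of accrued maintenance just described. Every figure and variable name is invented purely for illustration, not drawn from Doctorow’s article.

```python
# Toy illustration (invented figures): straight-line amortization spreads an
# asset's cost over its useful life, while maintenance accrues as an expense
# every year regardless of the "software never wears out" theory.

purchase_cost = 500_000       # hypothetical cost to build or buy the software
useful_life_years = 5         # period over which that cost is spread
annual_maintenance = 80_000   # hypothetical accrued upkeep: workflow drift, data fixes, hardware upgrades

annual_amortization = purchase_cost / useful_life_years
for year in range(1, useful_life_years + 1):
    book_value = purchase_cost - annual_amortization * year
    print(f"Year {year}: amortization ${annual_amortization:,.0f}, "
          f"maintenance accrued ${annual_maintenance:,.0f}, "
          f"remaining book value ${book_value:,.0f}")
```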

What about the legal sense, responsibility, something you incur or possess inherently? An example might be leaving loaded bear traps in your front yard with lots of neighborhood kids around.

We’re already seeing this kind of liability. Look at Grok’s generation of pornography, which necessitated a hasty code rewrite and apologies.

In medical software, there are many discussions about the liability incurred by physicians and software companies as specialists like radiologists and pathologists increasingly rely on AI software for image analysis and interpretation. When one of them makes an error resulting in patient harm, who will bear the legal liability: the physician, the physician’s employer, or the software company? In any case, the use of the software has incurred a liability, and a responsibility for the consequences of the decisions made with that software.

The third sense of the word liability, as a risk, is not directly addressed in Doctorow’s article. This is the tendency to do something or be a certain way, as in, “If you stand on the top step of a ladder all the time, you’re liable to fall.” He does, however, touch on the issue of “tech debt,” which accrues over time as organizations cobble together legacy systems, in need of long-deferred upgrades, with layer upon layer of software workarounds held together with baling wire and duct tape. “Tech debt” is another way of describing system brittleness or fragility, as opposed to a resilient, failure-tolerant system. The more cobbling together over time, the more brittle and fragile the system becomes, until eventually it crashes as a result of some unforeseen scenario. The probability of that crash comes from the accumulated risk, the liability it has to that outcome, and is directly related to the number of bespoke kludges used to keep the system operational.

Doctorow uses these scenarios to describe the difference between writing code, which is articulating the steps of automating a process, and software engineering, which is the discipline of examining the upstream, downstream, and adjacent consequences of running that software on workflows. It requires judgment and experience, and can’t be rendered down to a statistical association exercise.

The last sense, liability as a disadvantage, is also not addressed directly, though the entire article is in a sense about software as a disadvantage when looked at from different perspectives. An example of liability as a disadvantage would be, “His thick southern twang was a social liability when he arrived at Harvard freshman year.” Doctorow does spend some time discussing the types of users who find AI helpful, an actual advantage, or asset in the broader sense.

Throughout the essay, Doctorow mixes in all of these meanings of “liability” while continuing to contrast it with the word “asset.” This is what created the subtle dissonance for me.

So what’s the point of this exercise? Consider it my own meandering contribution to AI skepticism. The stochastic predictions of Large Language Models would look at the word “liability” and drop it into sentences based on its statistical associations with other words, then sentences, then paragraphs adjacent to the relevant concept. I’m sure there’s some ranking of frequencies of which sense of “liability” is used most often and in what context (Cory’s use is likely the most common, in conjunction with “asset” in the accounting sense), but then how many ways would it screw up the other three senses? If a human dashing off an essay to get it posted quickly can glom and muddle them, how the hell is the AI going to sort them out and make those subtle distinctions?

All languages contain words with multiple meanings, and meanings represented by multiple words, all of it dynamic and shifting, words and meanings converging and splitting, mutating and dying out, depending on the interplay between humans, their culture, and current events.

The incessant symbolic fermentation of slang, graffiti, and now memes, which synthesize the visual and the linguistic, shows the relentless quest for novelty, broadened insight, and creativity that are hallmarks of human cognition.

Take the word “bad,” for example, a slang phenomenon from my youth. There was a period when anything really good or cool was emphatically “bad,” as in “baaaaaad.” New words are coined, or old words are repurposed, married with images, and if they resonate in some way, they propagate across the culture through multiple media. Depending on the context, these propagations may take only days, or even hours, and can persist for a very long time. To this day, I can’t hear the word “bad” and not think of Richard Roundtree, Isaac Hayes, and the theme from “Shaft”:

That Shaft is a baaaad mother…

Shut your mouth!

I’m just talking ’bout Shaft…

And we can dig it.

There are yet other layers of meaning encoded in tone, gestures, facial expressions, and contextual references. A conversation in my home between my wife and one of our children is a perfect example:

“Why are you yelling at me?”

“I didn’t say a word!”

“You’re yelling at me with your eyes!”

Certainly an LLM could capture some of this if fed the correct training data, but that is, and forever will be, a Sisyphean task, the models walking backward toward the horizon, forever capturing already stale data while human minds are onto the next novelty.

Language, and all other modes of communication, are the interfacing media of two or more minds, the contact surface between the sensory bubbles of nervous systems detecting and interacting with the outside world, a surface defined by the performance characteristics of each sense (decibels for hearing, wavelengths for light, physical contact for touch, duration, proximity, scent sensitivity, and so on).

I don’t know who is right about whether AI is an existential threat or a big scam. I do know that whatever is coming is already on its way, and I suspect that if something larger emerges from these efforts along the lines of some form of artificial consciousness, it won’t be from the language models, for the reasons above. I have a hunch that it’s more likely to come from the efforts around autonomy, if the emergence of intelligence from living organisms is any guide.

A foundational piece of the infrastructure of consciousness is sensory experience, the cognitive framework for the concept of “I,” a separate thinking being existing in space and time. Even the most basic organisms can orient themselves in space, and as they become more complex, they develop behaviors that demonstrate a sense of time. At the cellular level, the “homunculus” in the animal somatosensory cortex is our base model for our own bodies and how they interact with our sensory environment. The next steps after that involve building more complex internal models of how we interact with the surrounding environment.

But a key step is when those internal models begin developing models of other beings, with their own capacities for agency. Early on, it’s models of predators and prey, to stay alive and obtain food. Then come the models of our fellow humans, mates, offspring, neighbors, competitors. That begins the recursive process of creating models of “I and you”, two minds interacting, which then sets the stage for the emergence of language, and the feedback loop of competition and cooperation that drives evolution and the further development of mind, and eventually culture.  Those accelerating iterations are what differentiated the human species from the rest of the primates.

Back to the beginning and Doctorow’s essay: the imminent risks of AI are not so much about how well it writes code or composes a memo, though I worry that the more bright people rely on these tools, the more their communication, reasoning, and creative skills will atrophy, to all of our detriment. As Doctorow often states, what will hurt us most in the short term are the bad decisions, driven by greed and laziness, that leaders will make based on the bluster and promises of AI salespeople.

Either way, we’ve still got a lot to learn.
