The explosive growth of AI technology raises a host of concerns. Data centers hosting AI servers are projected to consume unsustainable amounts of electricity in the near future. At the same time, despite the wondrous achievements so far, critical questions about what society gains from this technology remain unanswered. Our challenge is to address these issues while keeping a clear eye on what AI can and can’t do.

The big questions we are avoiding are moral choices about priorities: what are we giving up for whatever it is AI will do for us? Should we be spending this much money and diverting this many resources toward an endeavor with this many open questions? We need to talk about energy, capabilities, worker displacement, education, negative feedback effects on human cognition, and what kind of future we are choosing, whether deliberately or by blindly groping toward it, and who is doing the choosing.
How great is the resource demand? In the brief research I did for this post, I found that, in absolute terms, the amount of electricity AI consumes today is still relatively small. It is the growth that is alarming. Estimates vary, and specific numbers will be irrelevant before I finish typing this, but ALL forecasts are for continued dramatic acceleration of growth that WILL have an impact on global energy consumption within 2-5 years. That is when the hard choices will start landing about how power grids will handle those loads. This is why the big tech companies are now openly discussing building their own nuclear power plants, and even the laughable idea of putting data centers in orbit.
So this thing, AI, which is really a collection of many technologies, is a rapidly growing burden on global energy consumption. Let’s pause for a second and assess the global energy situation.
All modern technology requires electricity. Electricity providers are publicly regulated utilities because access to power is a public good that should benefit all. The diversion of significant power to AI is a policy question requiring open debate.
We are currently in the midst of a massive transition between power sources. Despite the best efforts of the liquid hydrocarbon oligarchy to cripple it, solar energy is here to stay. As predicted, the cost of producing solar energy has dropped to the point that the world is now rapidly decarbonizing new energy production, even as atmospheric CO2 continues rising. The adoption of this new technology is not evenly distributed, though. The U.S., as often happens lately, is lagging in embarrassing ways.
AI power consumption will influence the choices to be made about sources of energy. Each one has tradeoffs, and before we choose, we need to understand the impacts and what we get in exchange for the sacrifices and negative consequences. For example, it’s difficult to power data centers with solar energy alone. Should we be building more hydrocarbon-dependent energy production just for data centers? Will the benefits of AI outweigh the obvious, ongoing harm to the planet?
There are also moral choices to be made about Western lifestyles, energy consumption, and energy efficiency. Americans consume preposterous amounts of energy in our daily routines, in stupid, wasteful ways. This is all before we even get to AI.
So what are we talking about when we use the acronym A.I., for Artificial Intelligence? Already, we’re muddling important concepts. What AI proponents constantly hype is the artificial replication of the human capacity for thought. Is that even possible, and if so, what capabilities could be replicated?
I believe it may be theoretically possible, but even with the recent dramatic improvements in some capabilities, current technologies have a long, long way to go to even begin approaching what any human can do. Why is replicating human cognition so difficult?
Human intelligence is embodied, which means that it is the result of the activity of living tissue in the human nervous system (this article will not be straying from scientific explanations of human consciousness). All the capabilities of the human mind are the result of the flow of electrolytes and organic molecules across and between the membranes of trillions of cells of the human body.
Using the terminology of computer science, already inadequate for the complexity of living systems, the human nervous system senses the environment, receives data, processes and analyzes that data, creates hypotheses, then makes decisions, acting on that information to meet the needs of the single, embodied intelligence. To dispense with the possibility question: to the extent we can replicate a system that does those things in the same or similar ways as the human body does, then yes, in my opinion, it is theoretically possible. So let’s tackle the capacity question to see where the obstacles are.
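To make that computer-science framing concrete, here is a minimal sketch of a sense-process-decide-act loop in Python. Every name in it (sense, process, Decision, the threshold values) is hypothetical, invented purely for illustration; it is a cartoon of the framing, not a model of a nervous system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float

def sense() -> dict:
    """Gather raw data from the environment (stand-in for sensory input)."""
    return {"temperature": 31.0, "light": 0.8}

def process(observations: dict) -> Decision:
    """Analyze the data and form a hypothesis about what to do."""
    if observations["temperature"] > 30.0:
        return Decision(action="seek_shade", confidence=0.9)
    return Decision(action="continue", confidence=0.6)

def act(decision: Decision) -> None:
    """Act on the decision to meet the needs of the embodied agent."""
    print(f"Acting: {decision.action} (confidence {decision.confidence})")

# The loop: sense, process, decide, act -- repeated continuously.
for _ in range(3):
    act(process(sense()))
```

Even this trivial loop hints at the gap: each function here is a one-liner, while the biological versions involve trillions of cells.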
Most of the activity of the human nervous system is unconscious, which brings us to the first AI challenge. AI technologies are attempting to replicate the capabilities of a black box. We are just beginning to scratch the surface of how the organic systems underpinning the human mind work. The complexity of some AI systems is such that they have now effectively become black boxes themselves. So we have black boxes attempting to replicate the capabilities of other black boxes. The black box problem is an active one in AI research and will be a major impediment to progress going forward because of the inherent challenges of complex systems.
Human cognition has a conscious element, but that rests on top of a huge, invisible hierarchy of unconscious processes that we only dimly understand. In general, AI focuses on replicating skills and functions with discrete, measurable outcomes (how fast can it…how many times, how much data…), because these are the ones most amenable to engineering solutions, like looking for your car keys at night in the circle of light under the lamp post.
But what is being replicated by current AI technologies? They are almost exclusively focused on the activities of the brain associated with conscious mental activity, mostly language, data analysis, and reasoning. No doubt these are important, formidable tasks, and AI tech has succeeded at them with astonishing results, but these functions represent only a small part of what the human mind does. One area where AI decisively, and importantly, exceeds human capacity is statistical reasoning.
Humans are really bad at this, in general. Some of the most useful insights AI tools can generate use statistical reasoning. But this is still not much more than having a really powerful desktop calculator. The meaning and relevance of the outputs of the statistical reasoning process are still questions of values, priorities, and judgement.
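A classic illustration of why humans struggle here is base-rate neglect, which a machine (or a few lines of Python) handles effortlessly. The numbers below are illustrative assumptions, not real data:

```python
# Bayes' theorem applied to a hypothetical medical test.
prevalence = 0.01        # assume 1% of the population has the condition
sensitivity = 0.90       # P(positive test | condition)
false_positive = 0.09    # P(positive test | no condition)

# Total probability of a positive test:
# P(pos) = P(pos|cond)*P(cond) + P(pos|no cond)*P(no cond)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' theorem: P(condition | positive test)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
# ~9.2% -- most people intuitively guess something close to 90%.
```

The arithmetic is trivial for a machine; deciding what to do with that 9.2% is still a question of values and judgement.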
AI also excels at processing preposterous amounts of data. Here’s another place where our bias toward consciousness is telling. The human nervous system also processes preposterous amounts of data on a second-to-second basis, but we have no access to or awareness of it. The trillions of cells in the human body constantly communicate, handle data, and react to the outputs of those processes, all to maintain homeostasis and interact with the environment. An area where we can get a glimpse into the miraculous complexity of human homeostatic mechanisms is robotics. Rapid, astonishing gains are being made there, but try to imagine even the most advanced robot doing something like running down a fly ball across the outfield grass, or performing an Olympic ice skating routine. This is just one kind of unconscious, big-data processing that the human organism does really well and that we have no conscious access to, part of the black box.
Generative AI (genAI) technology is another area where rapid improvements occur daily. But what is it really doing, and how does it compare to human processes? Currently a debate rages about whether genAI is actually creating anything, given how it works. genAI uses highly advanced statistical reasoning to generate responses to queries based on layers and layers of pattern matching and on statistical associations among the words commonly used to answer a query. It is often eerily accurate, but because the technology has no access to the actual meaning of the words, weird anomalies can creep in. The engineers then add yet another layer of checking to address the anomalies, making the systems ever more complex, but still devoid of any understanding of meaning.
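A toy example makes the statistical-association idea concrete. Real systems use far more sophisticated machinery (transformers, embeddings, many layers), but this deliberately crude sketch shows the core move: predicting the next word purely from co-occurrence counts, with no access to meaning. The tiny corpus is invented for illustration:

```python
from collections import Counter, defaultdict

corpus = (
    "the dog chased the ball . the dog caught the stick . "
    "the cat chased the mouse ."
).split()

# Count which word follows which (bigram statistics).
following: defaultdict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str | None:
    """Return the statistically most common word after `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # 'dog' -- the most frequent continuation
print(predict_next("chased"))  # 'the'
```

The model “knows” that “dog” often follows “the” without knowing what a dog is. Scale the corpus up to the internet and the layers up to billions of parameters and you get eerie fluency, but the absence of meaning remains.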
An aspect of language that will always challenge this approach is its plasticity. Words are not static. Their meanings change over time, some drop out of use, and new ones emerge to replace them, or to describe new phenomena. Take the word dog: it describes a dizzying array of canines, the act of nagging someone, and a type of male whom women would do best to avoid. The statistical-association approach likely captures these three very different meanings (two nouns and a verb) from the words’ statistical associations with other words, but not always, and it will always be backward looking. When someone like Snoop Dogg comes along, a backward-looking genAI won’t have any data to ascertain the statistical context of Dogg.
Slang and pet names are another area where genAI will always struggle. Until some updated data gives it the statistical context, it will generate inaccurate outputs about the new words.
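The same toy model from above makes the backward-looking problem concrete: a word absent from its training data has no statistical context at all, so the model has nothing to say about it.

```python
# Continuing the toy bigram model above: "Dogg" never appeared in the
# corpus, so there are no counts to draw on.
print(predict_next("Dogg"))  # None -- no statistical context exists yet
```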
Human artists have two major skill sets in varying proportions: technical proficiency, the mastery of the specific skills necessary to produce their art; and esthetic sense or intuition, which guides them through their process to arrive at a work that expresses their esthetic or captures a feeling.
The technical aspects of art are very much something a machine might be able to reproduce. We see some of this in the outputs of genAI.
Esthetic intuition is an insurmountable challenge for AI. Let’s start with the concept of intuition. It’s inherently unconscious, another black box process that emerges from the depths of the preposterous data processing the human nervous system does. But it’s not just statistical reasoning. It is deeply rooted in emotional responses to pretty much everything in our environment, and those emotions inform the intuition, which forms an impression that emerges into consciousness to influence decision making or emotional states.
Emotions color every sensory impression a living nervous system generates. That is what transforms a sensory experience into a sensation, a physical experience that carries with it some meaning about the importance, relevance, or aversiveness of the sensory data, and it does this through memory. Even something as simple as temperature can have a million different meanings. The heat on your face can provoke joy because of memories of summers at the beach, or fear from watching a house fire, or comfort from sitting around a campfire with friends and family. Smells and emotions are deeply connected. Colors and the interplay between lighting and perspective carry emotional meaning from experiences that begin at birth and shape our visual engagement with the world.
Human creativity is inherently linked to esthetic intuitions about beauty. This informs all art and, believe it or not, is also deeply rooted in the scientific process. As odd as that sounds, we need only look at the creativity that drives scientific inquiry.
Creativity is also deeply interconnected with play. Very young children explore their environment, and soon any random object is abstracted into something else, and the children spontaneously concoct games replicating things in their environment. Art is playful as much as it is creative. Play is creativity, and creativity is play.
Hypothesis generation in science isn’t just about pattern recognition. It’s also a search for explanations that “feel” right. A dramatic illustration is theoretical physics, where theories are often accepted or rejected based on how “beautiful” they “feel.” Don’t take my word for it: physicist Roger Penrose says “A beautiful idea has a much greater chance of being a correct idea than an ugly one.” This isn’t a universally shared belief. Physicist Sabine Hossenfelder is one of the most vocal critics of “beauty” in physics, arguing that it has led the community astray. Either way, the fact that it is a topic of argument shows it plays some role in how we explore reality.
Another classic example of intuition in science is August Kekulé’s discovery of the benzene ring after dreaming about a snake biting its own tail. Somewhere, somehow, his unconscious mind processed all the data he was working on and “suggested” this idea to his conscious mind through a dream about an Ouroboros snake.
We have little insight into how this works. More importantly, it’s not something AI technology is even attempting to replicate. The closest it gets is the programming of drives and reward systems, the level at which we talk about living systems like bacteria and worms, hardly scratching the surface of what the human nervous system does every second.
If judgement, especially moral judgement, is rooted in esthetics, and esthetics is rooted in emotions, is there any prospect of AI technologies replicating that? In the near term, absolutely not. At the most unconscious level, we have moral intuitions, the feeling that something either feels right or wrong. The next stage is moral judgement, where we consciously decide something is right or wrong, based on moral intuition as well as experience, memory, training, and social conditioning. Last is moral reasoning, where we articulate a detailed, logical explanation for the moral judgement.
To the extent an AI system could possibly engage in moral reasoning, it would require the programming of moral judgements, which presuppose moral intuitions. Where will those come from? This is why the question of who is driving decisions around AI implementation is so important. The values baked into systems by their creators will determine what judgement, if any, those systems can exercise. Are those values the same as everyone else’s?
Where does that leave us at present? This is where the urgent moral questions around AI need to be made explicit and addressed.
First and foremost, we are currently producing technologies, at the behest and direction of a very small group of people with poorly articulated and highly suspect motives, that will soon consume an unsustainable share of limited resources. How this technology consumes those resources will likely impose real harms on human populations.
More importantly, the prodigious and rapidly improving skills of this technology are rapidly integrating with important systems that are moving beyond human control without anything approaching human capabilities for judgement and moral reasoning.
We are implementing AI tools and incorporating them into essential systems without really knowing how they work or how to control them. AI slop now pervades content online and blurs the line between reality and fantasy. Measurable negative impacts on human cognition are already appearing from excessive use of genAI tools. Fraud, bullying, and theft using genAI are spreading rapidly. Humans need to stay in the loop, but the insatiable demands of profit-driven capitalism are pushing us down a perilous road without guardrails.
If replicating human cognition is not possible, or is so difficult that other challenges should perhaps be higher priorities, why are we doing this? Greed and hubris immediately come to mind. Human creativity cannot be replicated or replaced, but trying may destroy a lot of what we hold dear.
The solutions are boring and tedious: regulations, accountability, transparency, penalties, and constraints on the economic upside of uncontrolled implementation, i.e. taxes. Only educated humans in an open, free democracy can make those decisions. If we don’t, they will be made for us by a handful of trillionaires, and then we will all suffer the consequences.