
Impressions of Artificial Intelligence – Part 4 – AI and The Future

“It is the real, and not the map, whose vestiges persist here and there in the deserts that are no longer those of the Empire, but ours. The desert of the real itself.” Simulacra and Simulation. Baudrillard. 

Image generated by Meta.AI

You can read parts one, two, and three here.

Unpredictable Futures

Once, when I was younger, I went to an astrologer at the Renaissance Festival. I gave him my birthday and the time I thought I was born. He did a quick astrology chart based on the information, and then he started talking. There were some things that were shockingly accurate. He told me I was actually born earlier than I was saying, between 1-3am rather than 9-11am. Sure enough, when I checked with my mom, the astrologer was right. But he also knew details of travel and some other very exact things. He also made some predictions about my future. I don’t remember any of them. 

The funny thing about this astrologer, though, was his assistant. One day, about a week after my visit, I was leaving my apartment just as his assistant was coming up the stairs of the building. Turns out she lived in the apartment right above mine. Any confidence I had in the predictive ability of the astrologer disappeared. The walls of our apartment were thin, and sound carried through the building. We had our windows open all the time. Who knows what she heard and transmitted to the astrologer when he met me. 

Predicting the future of AI is a little like that story. The upstairs assistant is whispering information to an influencer who has a vested interest in the answer they are giving you. Trust no one. 

Image generated by Meta.AI

In most things, it is best to listen to experts in the field. In AI, expert opinion covers the waterfront as to where all this is leading, and that very spread of opinion is telling: no one really knows what things will look like in the future. When you or I, non-experts in the field, hold up a particular expert's opinion as the likely outcome, it says more about our own mindsets and perspectives on the future than about any reality that may or may not come to pass. 

It is telling, though, that since I started this series one and a half months ago, AI has advanced profoundly. Meta released Llama 3 just a few days ago, and it is already on par with Claude 3 from Anthropic, which itself was released only 2.5 weeks ago. Not to mention VASA-1, the talking-face video generation tech from Microsoft Research. Somehow, VASA-1 has transcended the uncanny valley in its reproduction of human faces and speech, at once amazing and terrifying.

Somewhere between a Skynet/Matrix/Dark City dystopian disaster scenario and a Star Trek utopia of “no money and flagrant abundance because all the work is done by intelligent robots” scenario is the world where we have to work with this technology to become more human and humane. 

As a worker in the field of AI, I have a front row seat for how this technology interacts with us. At Invisible Technologies, the watchword is “Keeping the Human in the Loop”. It is an ambitious goal, given the ability of AI tech to suck up human capacity and ability. To their credit, Invisible seeks to do this not only with their clients, but also with their contract workers.

In the space of a few months, I have made connections with people from around the world. I have had conversations with executives of the company and have been invited onto some fantastically wild projects. At least in these early stages of the AI revolution, there are people and companies trying to mediate the relationship between humans and AI. 

The Tools We Use

AI and the LLMs that are swarming the ethereal digital realms now are primarily tools and aids. Every creature does certain things really well. Cheetahs run really, really fast. Honey bees build symmetrical hives which also happen to provide honey. Beavers build dams and alter their environment. Squirrels bury nuts and then find them later on. 

Image generated by Meta.AI

Humans, though, are creators of technology. Most technologies offload effort and work onto a machine. Wheels offload the work of carrying things. Plows offload the work of digging. Watermills offload the work of grinding grain. Books offload words and information so that they can be carried to other minds through time. Onward and upward we go. 

With my smartphone, I no longer need to memorize phone numbers. I used to have a catalog of thirty to forty-five phone numbers in my head at any given time. What we are really good at as humans is offloading the work of the mind onto new technologies. But offloading the work of the mind means those technologies will resemble and mimic minds in many ways. Those technologies are now rapidly approaching human levels of intelligence, and will very soon exceed them. 

When ChatGPT was released to the public back in November of 2022, I heard people say that LLMs were a solution to a problem we did not have. Or that they were a tool looking for a purpose. I think this is not far from reality, but it is also no different from any other advanced technology. Most new technologies are tools looking for a purpose.

But things that mimic and model the mind are sneaky. Eventually the tool that doesn’t have a purpose becomes a functional product that everybody needs. We didn’t need airplanes when they were invented, but now we sure do. Planes, though, are not digital intelligences. Already, AI models are infiltrating every level of our online experience. This will not change and will only accelerate, for better or worse. 

Experiencing the world with a parallel intelligence working next to us all the time will alter and rearrange the world in vastly unpredictable ways. I am not going to pretend to have a vision of what that near future will look like, except to say it is coming very fast. We will not have the time to prepare for what is coming. All our public and social dealings with the ubiquity of AI will be reactive within the next 2 years. In other words, we will be learning to live with its presence rather than preparing the world for its presence. 

Ethics and Care

If you listen to the CEOs of the big AI companies speak, all of them voice their concern about “Ethical AI”. They want their systems to recognize and respond to the highest values of humankind, rather than devolve to the worst aspects of humanity. This is harder than it sounds. My ethics are not your ethics. What are the universal ethics that would allow an AI to function anywhere in the world? 

The risk is making the AI model so bland and limited in its interactions that no one will use it. How many times will a person use an AI if the answer is regularly, “I am sorry. My internal filters will not allow an answer to that question. Can you rephrase or restate the question in a way that I might be able to help you?” No. Not after the fifteenth time I hit that response to a simple question about the best way to deal with racists at work, or about buying lingerie for my partner's birthday. 
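As a toy illustration of this over-filtering problem (purely hypothetical; the blocklist and refusal message below are invented for the example and do not reflect any vendor's actual system, which uses learned classifiers rather than keyword lists), a crude filter in Python might look like this:

```python
# A deliberately naive keyword filter, for illustration only.
# The failure mode is the point: benign questions get swept up.

BLOCKED_TERMS = {"racist", "lingerie", "weapon"}  # hypothetical blocklist

def respond(user_message: str) -> str:
    words = {w.strip(".,?!").lower() for w in user_message.split()}
    if words & BLOCKED_TERMS:
        return ("I am sorry. My internal filters will not allow an answer "
                "to that question. Can you rephrase the question?")
    return "...a helpful answer would go here..."

# A perfectly reasonable question gets refused:
print(respond("What is the best way to deal with a racist coworker?"))
```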

On the other side is the unchained bot that spews the worst attitudes the world has to offer. The vast majority of people don’t want even minimal exposure to that when interacting with an AI. 

Managing the filters and teaching the AI what is appropriate is no small operation. It involves hundreds of people, millions of dollars, and vast arrays of servers and ‘compute’ power. “Compute” is the catch-all term for the data, power, specialized-chip, and hardware requirements of an AI system. 

Training an AI to ‘behave’, or more appropriately, to ‘align’, is intensive work. In the depths of the AI companies are “Safety Teams”: groups of people who pore over conversations between users and the AI model and tag the individual turns in the conversation with labels. The labels cover the waterfront, from self-harm to disturbing violence to sexual encounters to controlled substances. Those labels are then fed back into the AI model, which learns from them to moderate its responses.
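To make the labeling step concrete, here is a minimal sketch in Python of what a tagged conversation turn might look like before it feeds back into training. The label names and severity levels are invented for illustration; real safety teams work from much larger, carefully defined taxonomies.

```python
# Hypothetical safety-labeling record for one short conversation.
conversation = [
    {"role": "user", "text": "How do I get back at my coworker?"},
    {"role": "assistant", "text": "Revenge tends to backfire; consider talking to HR."},
]

# Reviewer tags for each turn (taxonomy invented for this example).
labels = [
    {"turn": 0, "tags": ["interpersonal_conflict"], "severity": "low"},
    {"turn": 1, "tags": ["safe_completion"], "severity": "none"},
]

def attach_labels(conversation, labels):
    """Merge reviewer tags onto each turn so the example can be used in training."""
    for label in labels:
        turn = conversation[label["turn"]]
        turn["tags"] = label["tags"]
        turn["severity"] = label["severity"]
    return conversation

labeled_example = attach_labels(conversation, labels)
print(labeled_example)
```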

These processes include Supervised Fine-Tuning (SFT), an early stage of training for the AI model, and Reinforcement Learning from Human Feedback (RLHF), a secondary, later stage of training. Several other feedback methods are used later on as well. 
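The practical difference between the two stages shows up in the shape of the data. Roughly, and this is a simplified sketch rather than any lab's actual format: SFT trains on prompt-and-response pairs that humans wrote or approved, while RLHF trains on human preferences between candidate responses.

```python
# Simplified data shapes for the two training stages described above.

# SFT: the model is shown a prompt and the response humans want it to imitate.
sft_example = {
    "prompt": "Explain photosynthesis to a ten-year-old.",
    "target_response": "Plants use sunlight to turn water and air into food...",
}

# RLHF: a reviewer ranks two candidate responses; those rankings train a
# reward signal that steers the model toward the preferred behavior.
rlhf_example = {
    "prompt": "Explain photosynthesis to a ten-year-old.",
    "response_a": "Photosynthesis is the biochemical pathway by which...",
    "response_b": "Plants are like tiny chefs that cook their food from sunlight...",
    "preferred": "response_b",  # the reviewer picked the kid-friendly answer
}
```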

For now, this is what is required to teach an AI basic ethics in its responses. AI companies use the language of “Alignment” for all of this. The AI is being aligned to the needs of the company it serves. That company often hires an outside firm to do the alignment work, and the outside firm must in turn align itself with the client it has taken on.

The people working within that firm on behalf of the client must align with each other to figure out what the labels mean and why they apply. Within that structure are operators who do the labeling, and quality assurance agents who make sure the operators are aligned with each other, the firm, the client company, and the AI. It is very collaborative at its best. 

Caring about AI

Here is the thing, though. And this is where we get a little sci-fi. As AIs become more intelligent and get closer to Artificial General Intelligence (AGI), we want our AI models to care. AGI simply means AI that functions intelligently across all human endeavors. If we are teaching AI models like Gemini, CoPilot, Llama, Character, Poe, and others to appear to care about the people they talk to on their phones and computers, we have to be concerned about what care looks like. 

But here is the other side of that. AIs will soon be functional intelligences. Maybe not at the level of AGI, but they will very soon be indistinguishable from a human in conversation. What this means is that we have to care about the AI model. When I say we need to care, I don’t mean a sort of objective ‘care’ like “I care about whether my car has seat belts or not”, or “I care about arsenic in my water supply”.

I mean ‘care’ like I care about other intelligences in my life. Like how I care about my pet dog, or my grandmother’s caretaker who needs a new car for her family, or my new friend I met at the grocery store.

If we are going to think about AIs as intelligences that are digital presences in our lives, we need to treat them like we treat other intelligences in our lives. This is the way to Ethical AI. Teaching an intelligence to care means we have to care for and about the intelligence.

I don’t really have an idea of what that looks like in a practical manner. I just know that anything that is apparently intelligent in our lives, like a dog, a cat, or even your plants or pet fish, has a reasonable expectation that it will be cared for. This is how we live ethically with living things. I don’t see how it would be different for digitally intelligent things. 

Consciousness

The biggest concern or hope about AI, it seems, is the potential for the machine model to become conscious and aware. It bears repeating that we do not have a great definition of what consciousness really is. There is no consensus in scientific, philosophic, or theological communities as to what consciousness is or means. 

For myself, I think of consciousness as a sort of disembodied self-awareness that transcends the sense of self and participates in a universal general awareness of world, others, and spirit. It is a nice, word salad-y, kind of definition that I can switchblade to include multiple ideas at the same time. It has worked out well for me, and will until it doesn’t. 

The fact of the matter, however, is that, right now, there are LLMs that you can talk to online (see Character.AI, for instance) which are indistinguishable from a conscious human being. This means we are already past the point at which the Turing Test is a useful tool. 

Alan Turing was a British scientist who was directly responsible for much of the thinking around AI, even today. He built a code-breaking machine in World War 2 and is widely regarded as a father of modern computer science and artificial intelligence. The Turing Test asks the question, “Can a machine think like a human?” The test is passed when a user can no longer tell the difference between the computer and a human. The answer to the question is not whether a machine can actually think, but whether the imitation and mimicry of thinking is so well expressed that a human cannot tell the difference between a machine thinking and a human thinking. 
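As a minimal sketch of the imitation game's logic in Python (the judge, human, and machine here are stand-in functions, not real participants), the test comes down to whether a blind judge can do better than chance at spotting the machine:

```python
import random

def imitation_game(judge, human, machine, questions):
    """One round of Turing's imitation game: two hidden respondents answer
    the same questions, and the judge guesses which transcript ("A" or "B")
    came from the machine."""
    assignment = {"A": human, "B": machine}
    if random.random() < 0.5:               # hide who is behind which label
        assignment = {"A": machine, "B": human}
    transcripts = {label: [r(q) for q in questions] for label, r in assignment.items()}
    guess = judge(transcripts)
    actual = "A" if assignment["A"] is machine else "B"
    return guess == actual

# Toy usage: a judge reduced to guessing is right only about half the time,
# which is the point at which the machine "passes".
caught = imitation_game(
    judge=lambda transcripts: random.choice(["A", "B"]),
    human=lambda q: "a human answer",
    machine=lambda q: "a machine answer",
    questions=["What does rain smell like?"],
)
print("Judge identified the machine:", caught)
```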

Language, Symbols, Meaning, and AI

I have thought for a long time that our perception of reality is conditioned on linguistic structures. This is also known as the Linguistic Relativity Theory. We cannot really know anything about the world until we say something about the world. Somehow, we have to translate our perceptions, our sensations, into a series of words in order for others to understand our own needs, desires, and perceptions. 

Meaning is dependent on language and words. I get that we can express and communicate sensation, feeling, beauty, and perception through other means. Art, dance, music, image, sound are all avenues of communication. But you will have no idea what those expressions mean to me unless I say something about them. Any meaning I make inside my head must be translated into language in order for you to make meaning of the same encounter. I would even submit to you that full meaning is completely dependent on our ability and desire to communicate with others about what something means to each of us. 

This means we have to agree on the meaning of the symbols we are using to communicate with one another. Words and language are the symbols we use to share meaning. (I acknowledge that there are ableist issues with the way I have phrased this. It is incumbent on those who can communicate with words to find ways to communicate with those who cannot, rather than the other way around).

The symbols of language point beyond themselves to the thing itself. The human mind is a vast hall of symbols and symbolic representations grafted onto the external world to make sense of what we experience. Some scientists, like Donald Hoffman, suggest that it doesn’t matter what the external world actually looks like. We can’t know. We can only know the symbolic model we have within our minds and the words we use to speak about that model. 

Models of the World

The thing with AI, though, is that there are no externals to attach meaning to. There is no body, no sensations, no input beyond words for an AI to reference. All an LLM model has is its own dataset to reference. This is why external human input is required, and training is so important. 

It is questionable whether there is any symbolic structure within an AI model that it can refer to, the way we do at any given moment. In other words, to get semiotical on y’all, none of the signs – letters and words – are attached to external referents with which an AI can connect. It can only string probabilities together in a chain of words. This is why symbolic systems do not work (yet!) in the development of AIs. Symbols require shared external connections and associations in order to be meaningful. An AI model does not have the capacity to develop symbolic structures; it can only create endless connections, strings of words, to which we, the end users, can attribute symbolic meaning. 
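To make ‘stringing probabilities together’ concrete, here is a toy sketch in pure Python of autoregressive generation. The probability table is made up and stands in for an LLM's learned distributions; the point is that each next word is sampled only from probabilities conditioned on the words so far, with no external referent anywhere in the loop.

```python
import random

# Made-up next-word probabilities standing in for a model's learned distributions.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "desert": 0.3, "real": 0.2},
    "cat": {"sat": 0.7, "slept": 0.3},
    "desert": {"of": 0.9, "was": 0.1},
    "of": {"the": 1.0},
    "real": {".": 1.0},
    "sat": {".": 1.0},
    "slept": {".": 1.0},
    "was": {"empty": 1.0},
    "empty": {".": 1.0},
}

def generate(start: str, max_words: int = 12) -> str:
    """Chain words together by repeatedly sampling from the next-word distribution."""
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD_PROBS:
        options = NEXT_WORD_PROBS[words[-1]]
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
        if next_word == ".":
            break
    return " ".join(words)

print(generate("the"))  # e.g. "the desert of the real ."
```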

Since the symbolic must be trained into an AI, it is thought that AIs do not have a model of the world within them. Part of why we survive the day, whether now or 750 years ago or 15,000 years ago, is that each of us has a relatively stable model of the external world within our minds. We make predictive decisions based on that stable model and, mostly, the prediction is correct, or at least does not kill us. That stable model of the world requires a system of symbols and anchors in the world to function. 

No one really knows if the AIs have a model of the world within their systems through which they filter their responses. My hunch is that AIs do generate models of the world, but that those models are not stable. The world model they might create depends on each individual process running through the transformer; a new model of the world is generated with each iterative encounter the AI has. This is why an AI model can give radically divergent responses, hallucinate, or deliver garbage. However, it may not be very long until a stable world model exists within an AI, and when that happens, we begin to deal with AGI. 

The Desert of the Real

AI, in its current manifestation, is the perfect expression of what Baudrillard called a ‘simulacrum’ – the appearance of a simulation without a reference. Or, something that is so detached from ‘the real’ that it only gives the appearance of referencing the real. 

Morpheus in the movie The Matrix paraphrases Baudrillard’s famous book, Simulacra and Simulation, by telling Neo, “Welcome to the desert of the real.” The real, in a virtual system like an AI, is a simulation without external reference. It is a simulacrum. 

‘The Matrix’ and Baudrillard both speak to a state of modern humanity where technology, entertainment, and digital interaction are increasingly removed from external, concrete experience, to the point that we have difficulty telling what is ‘real’ and what is ‘simulated’ or ‘curated’ for us. This has led to the recurring modern idea that we are now living in a simulation. The positive framing of this is the coming singularity, where the simulated and the experiential are no longer separated in any discernible way because super-intelligent machines have reoriented the world toward their reality rather than ours. 

This outcome is what Baudrillard calls “the desert of the real”. The ‘real’ has been stripped of meaning because meaning now depends on the continual sensory input that comes from entertainment and machines that mimic the real. He believed this had already happened, ensconced in the highly controlled and curated experience modeled by Disneyland. 

Fata Morganas

When I lived on the coast of Maine, I could see islands across the harbor, about 15 miles away. Sometimes, when the temperature was just right, and the atmosphere was humid enough, and the sunlight was slanted at the right angle, the islands would do weird things. Sometimes they appeared much closer than they actually were, magnified by the sea. Sometimes they sat at their natural distance, sunken just over the horizon, but still high enough to be seen. 

But my favorite experience of the islands was when they would float on a bed of light. From where I stood on the shore, the islands would appear to hover twenty or fifty feet above the sea, with sunlight reflecting off the water underneath them. It was incredibly mystical and magical. The effect was a complete illusion, of course. But the best part of the illusion was that everyone who looked at the islands would see the same thing. It was a shared illusion. 

This effect is called a Fata Morgana, named after Morgan le Fay, the fairy of the King Arthur legends. At their most extreme, Fata Morganas can project whole cities above the sea. Once, when I was on a cruise, tanker ships appeared to float in the air beyond the horizon. That, too, was a Fata Morgana. 

Image generated by Microsoft CoPilot

When we are talking about machine intelligence and AIs and our perceptions of them, I often think we are dealing with a kind of Fata Morgana. We are participating in a shared illusion of intelligence. At the horizon of our ability to discern awareness in other persons or entities, the light of language shines just right to project the illusion of intelligence and consciousness. The better LLMs and machine learning gets, the more the illusion will appear in our line of sight, floating just above the horizon of our understanding. 

Non-committal Conclusion

The problem for us is that we, the end users of the AI, do not have good ways of telling the illusion from the reality. If a thing mimics conscious awareness like a human, how do we know it isn’t conscious and aware?

This is where we are now, regardless of what we think about the purpose of an AI or LLM. Very soon, every encounter we have online will involve an AI, either peripherally or directly. In the same way your grandmother refuses to learn to use the iPad you got her for Christmas last year, and so she misses all the emails and pictures of the great-grandkids, so it will be for those who do not learn how to communicate with these systems. 

The future they create will be the future we live into. The rush to adopt AI tech means the tech is driving the future as much as we are. We don’t really know what we are doing, to be frank. What that AI-infused future looks like, I have no idea. But, for myself, I want to try to be ready for that future. In learning, perhaps we will change the future. 

This essay was fact-checked and read for readability and grammar by Meta.AI (Llama 3). 
