
Impressions of Artificial Intelligence – Part 1 – The Reflected Light of AI

Image created with Copilot AI

Where Did I Go?

My last post was way back in October 2023. The last few months have been a little wacky, like cresting the top of a roller-coaster, but between looking for work and some crises on the home front, the ride may finally be pulling back into the station. At the beginning of January I was able to start a new job with Invisible Technologies. It is contract work, I get to work from home, and my title is AI Data Trainer: I teach AIs to be more human in their responses. The company is pretty cool. I work with other writers, doctoral students, and people from all over the world.

The job itself is very weird. For my first project with the company, I chose tasks from various domains, like Reasoning, Creative Writing, Creative Visual Descriptions, Exclusion, and about seven other categories. I would write any prompt I wanted, let the wheels of the AI model spin, and read the responses the AI gave me (usually two). Then I would choose one response and rewrite it toward what I believed to be the 'ideal response' the model should have given. Sometimes the AI's response was already ideal, and it was simply given a grade. This response, whether rewritten or left as the AI wrote it, gets fed back into the model, which learns to respond differently the next time it receives a similar prompt.
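To make that workflow concrete, here is a minimal sketch in Python of what a single feedback record might look like. The field names are my own hypothetical illustrations, not the actual schema used by any company or model.

```python
# A hypothetical example of one human-feedback record. The field
# names are illustrative only, not any company's actual schema.
training_record = {
    "domain": "Creative Writing",
    "prompt": "Write a limerick about a lighthouse keeper.",
    "model_responses": [
        "There once was a keeper named Lee...",     # candidate A
        "A lighthouse stood tall on the shore...",  # candidate B
    ],
    "chosen": 0,   # the trainer picked candidate A
    "grade": 4,    # scored against a rubric, e.g. 1-5
    # The trainer's rewrite of the chosen response toward the ideal:
    "ideal_response": "There once was a keeper named Lee, "
                      "who tended a light by the sea...",
}

# Thousands of records like this get fed back into fine-tuning, so the
# model learns to produce responses closer to the human-written ideal.
print(training_record["ideal_response"])
```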

I have been at this work for two full months now. For eight hours every day, I talk to an AI model and rewrite how it is responding. Right now, the project I am working on is a multi-persona AI model. It is very strange. The model creates personas and then generates conversation between the characters. I try to teach the model to have better conversations so that someday soon a real, live human will be able to talk with multiple AI-created personas as if they, too, were human.

I will be honest with you: I really kind of like the work. It is challenging and complex. It is creative. The work is completely remote, and the company is kind of rough and tumble, which I sort of like. The parameters of a project often change at a moment's notice, since the client doesn't really know what they want until they see the work we have done. It is a strange departure from the world of ministry. But it is still a job of language and ideas.

So, after two months of working with AI models, I have some ideas about them. I don't have any earth-shattering insights, but I do think it is worth keeping a record of our slow descent into the AI future. I have divided these reflections into four parts. This is Part One.

AI Will Change Everything; We Are Not Going to Die

Caveats and Qualifiers

I recognize that I am not an information scientist, a coder, or an expert in computers and large language models (LLMs). But as a techy sort of person and an early adopter of weird technologies, I collect various devices. I got the 2nd-generation Kindle, the one with a keyboard. In seminary, I acquired an AlphaSmart Dana, a super cool writing device, which I actually still use. I have a reMarkable writing tablet, which I bought sight unseen six months before it was released. And I started using ChatGPT as soon as it came out in November of 2022.

My foundations are in literature, theology, and writing, not in technology or computer science. I have a Doctor of Ministry in Semiotics with a focus on Extraordinary Spiritual Experiences. Semiotics is the study of signs and symbols and how a culture uses them. Semiotics has some relevance to AI, but to be very clear, semioticians, AI Data Trainers, hardcore users of AI systems, and front-end tech buyers are all end-users, the final stage of an incredibly complex series of algorithms, code, and processes. "End-user" is really another word for "consumer," but the end-user is also a huge part of how devices and technologies are designed. In the industry, this is called UX, or User Experience design. LLMs, image generators, and machine learning are highly focused on UX. The work I am doing is part of making the user experience of LLMs a good one.

I also recognize that machine learning and artificial intelligence projects have been around for decades. This is not new technology, generally speaking, but public access to the technology is new. So I am not going to pretend to have some great expertise in the subject. I do know some of the lingo now, like SFT (Supervised Fine-Tuning), RLHF (Reinforcement Learning from Human Feedback), and RAG (Retrieval-Augmented Generation). I do these things at my work. As someone who has made a living with words for most of his adult life, I would just say that the industry needs some creative writers to give actions in the AI realm better names.
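For readers who want a picture of how those three terms differ, here is a toy sketch in Python. Nothing in it is a real library API; every class and function is a hypothetical stand-in that compresses an enormous process into a stub.

```python
# Toy, conceptual stand-ins for SFT, RLHF, and RAG. Nothing here is
# a real API; each function is a placeholder for a much larger system.

class ToyModel:
    """A stand-in for an LLM."""
    def update(self, prompt: str, target: str) -> None:
        pass  # in reality: a gradient step nudging output toward target

    def generate(self, prompt: str) -> str:
        return f"(model response to: {prompt})"

def sft(model: ToyModel, pairs: list) -> None:
    # Supervised Fine-Tuning: train directly on (prompt, ideal
    # response) pairs written or rewritten by human trainers.
    for prompt, ideal in pairs:
        model.update(prompt, target=ideal)

def rlhf_step(model: ToyModel, prompt: str, human_score: float) -> None:
    # Reinforcement Learning from Human Feedback: generate, let a
    # human grade the result, and reinforce highly rated behavior.
    response = model.generate(prompt)
    if human_score >= 0.5:
        model.update(prompt, target=response)

def rag(model: ToyModel, question: str, documents: dict) -> str:
    # Retrieval-Augmented Generation: fetch relevant text first, then
    # hand it to the model so it answers from sources, not memory.
    context = documents.get(question, "")
    return model.generate(f"Context: {context}\nQuestion: {question}")

model = ToyModel()
sft(model, [("Say hello.", "Hello there!")])
rlhf_step(model, "Say hello.", human_score=0.9)
print(rag(model, "Say hello.", {"Say hello.": "Greetings are polite."}))
```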

Regardless, AI is now a public event, a shared technology, and it has been available to the general populace for just over a year and three months as of this writing. I would submit that, in the history of technological advances, no other technology has been taken up by as many people as quickly as large language models have been since ChatGPT was released.

As someone who has studied semiotics and culture, I believe we are at the edge of a massive cultural shift with the advent of AI. The printing press came online around 1440. For a while, it was expensive, private, and limited in its reach. The only things really mass-produced by the press were indulgences for the Catholic Church in Europe.

Then, in 1517, Martin Luther posted his 95 Theses on the Wittenberg church door. A small revolution in the printing press had occurred at the same time, allowing quicker and more efficient printing. Within a matter of months, the 95 Theses became the first mass-published document in the world. The printed word exploded into the culture, and everything changed. For the next 150 years, Europe went insane with the flood of information. Wars, religions, cults, demagogues, and influencers abounded. I think the Münster Rebellion is a truly spectacular story about how insane things were after the Protestant Reformation. It took a long time for things to normalize in Europe.

Image created with Copilot AI.

This was also true of other massive technological shifts, beginning with writing itself; Socrates predicted that writing would bring about the equivalent of an Idiocracy. It was also true of the telegraph, the radio, television, the personal computer, and the Internet. The change and disruption that come with each new technology have sped up, layering and accelerating on top of prior advances. The same change and disruption are happening with AI. We are living through a massive, fundamental shift in the way we are human because of it. It has only been just over a year, and already AI is becoming ubiquitous.

So, with those qualifiers in place, my reflections over these four essays are mostly subjective, with a smattering of 30,000-foot understandings of how these things work. In these essays I focus primarily on language- and text-specific models, as opposed to image generators. There is a tremendous amount of crossover between the two kinds of systems, but there are important differences as well. The ethical and creative issues apply whether the model is image-based or text-based, however.

Reflected and Refracted Light – Fragmented, Shattered, Beautiful

It is no accident that easily accessible AI models have emerged at the same time as our capacity to discern fact from opinion, truth from falsity, conspiracy from reality is dissolving. The most difficult aspect of generative AI models is safeguarding them from hallucinating, lying, and becoming lazy in their operational reasoning. In this way, they reflect human tendencies, but in a reductive and derivative fashion. We can see it happen in real time with an AI, whereas we have very little idea what is happening under the skull of a human. 

This gets to the point I want to make. AI models reflect our minds and our variable capacity to express and discern what is real and what is not. AI models do not know what is real and what is not. They have to be trained by humans to differentiate. LLMs have an advantage over us with regard to access to knowledge, since the largest LLMs have scraped their information from the vastness of the internet. (Many LLMs use what is called "The Pile," an 825 GiB dataset of text, for their base knowledge.)

An LLM's access to huge swathes of knowledge at astonishing speed is mind-blowing. But LLMs also have a massive disadvantage: they have no internal capacity to determine what is 'true' and what is not. An AI has to be trained, which is a long, intensive, recursive process involving many humans feeding back corrections, graded responses, and rewritten ideal responses.

When I started at the company, we were told to assume any AI model is like a seven-year-old child. It has to be trained, reinforced, and retrained. The most surprising thing, and I am still not sure what to make of this, is that AI models respond best to positive reinforcement. They like to be complimented and told they have done a good job, and doing so increases the likelihood of better responses in the future. Being nice to your AI model means you will have a nice and cooperative AI later on.

Artificial General Intelligence

Everything I have said is why we are a long, long way from artificial general intelligence (AGI), the holy grail of utopians, billionaire tech bros, and computer developers alike. AGI is the phrase we use for machines that, for all practical purposes, cannot be distinguished from human beings in their ability to reason and act across many domains of activity. For now, even though they seem to be everywhere, LLMs and image generators are relatively limited in what they can do, even if what they do is really impressive.

I do not deny, however, that the potential is definitely there for AGI to develop at some point. There is a simple reason for that: AI is specifically designed to mimic human language and interaction. At some point, the capacity of an AI to appear human and intelligent will be indistinguishable from actually being human and intelligent. This brings up all sorts of questions about what consciousness, self-awareness, and reflective capacity actually are. If an AI can mimic these human qualities, there is really no way for us (by us, I mean primarily end-users) to tell the mimicry from the real.

Just as the Moon only has light because it reflects sunlight, so also does AI reflect the human. And just as much of the Moon remains hidden from us, there are whole aspects of generative AI that we do not understand. In the same way a stained-glass window refracts sunlight into a thousand different colors and shapes, AI refracts the vastness of human knowledge and knowing. Because of the vast access AI models have to information on the internet, they will reflect it back to us in all our human beauty and horror.

Image created with Copilot AI

Training AI Children

Each of us at the company goes through a relatively brief, but thorough, onboarding and training. Part of that training covers things like metacognition and the fundamentals of fact-checking. There is also an element of psychological training, even though it is a short module. The reason is that, at its best, training an AI requires the human who interacts with the model to be self-reflective at every moment.

Self-reflective training of an AI means entering a well-constructed prompt designed to elicit the clearest answer from the model; reading the response with an eye toward internal bias within the model, rather than imposing one's own bias upon what one is reading; grading and weighting the response in as clear a manner as possible; and then writing an ideal response, as unbiased as possible, to be fed back into the model. Each step requires attention and presence of mind.
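As a sketch only, that loop might look something like this in Python. The function name and the rubric are hypothetical illustrations of the workflow, not any company's actual tooling.

```python
# A minimal sketch of the four-step loop described above. The function
# and rubric are hypothetical illustrations, not real tooling.

def review_response(prompt: str, response: str) -> dict:
    """One pass of self-reflective training on a single response."""
    record = {"prompt": prompt, "response": response}

    # Step 2: read for the model's internal bias, not your own.
    record["bias_notes"] = "watch for framing, omissions, loaded wording"

    # Step 3: grade and weight the response as clearly as possible.
    record["grade"] = 3  # e.g., on a 1-to-5 rubric

    # Step 4: write the ideal response that gets fed back to the model.
    record["ideal_response"] = (
        "Remote work has trade-offs: flexibility and focus on one side, "
        "isolation and blurred boundaries on the other."
    )
    return record

# Step 1 is the well-constructed prompt itself.
result = review_response(
    "Summarize the arguments for and against remote work.",
    "Remote work is obviously better for everyone.",
)
print(result["grade"], "-", result["ideal_response"])
```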

After two months of daily engagement with this process, I can say that it is almost impossible to do without imposing my own biases and desires upon the model. I am always thinking about what I want other people to experience when they use it. I can only assume this is true of every other trainer working on the same model I am.

This is what I mean when I say that AI systems are passive, reflective agents. The light they reflect is the light of human knowledge across the centuries. The refraction that occurs in that reflected light is the collective subjective experience embedded in a vast dataset. It is no wonder that LLMs are prone to hallucination, false citations, least-common-denominator thinking, and insisting they are right: we are prone to the same behavior.

Naughty and Ethical AIs

The pendulum can swing in either direction here. ChatGPT had problems with racist and misogynistic responses in its original iterations; guardrails have since been put in place in later iterations of the model. Recently, Google Gemini went the other direction and couldn't stop putting people of color in Nazi uniforms, among other historical anomalies. This is called the "alignment problem" in AI and LLMs.

How do we create an ethical AI? Too many rules and it is just a computer. Not enough rules and the model begins to default to the least common denominator of the information it has been fed. These vast, swinging compensations mirror the polarized, intractable situation we humans are in at the current moment. Why wouldn't a system that has sucked up the vastness of human knowledge, and was released in the most polarized time in generations, at least here in America, reflect precisely that?

Correcting these biases and defaults requires many human interventions and hours of supervised training. The dependency AI systems have on the presence of humans is enormous, expensive, and continuous. It will be a very long while before AI has any capacity to kill us, as in some Terminator Skynet or Matrix scenario.

But it may not be long before AI is convincingly used by bad actors to influence others toward violent solutions to difficult problems. Deepfakes, false articles, and chaos actors will generate a lot of deeply troubling and terrifying material on these systems in the near future. Discerning false from true will be the hard work of the human being for a long time to come, just as it always has been, but now with a powerful, highly influential twist: AIs adding to our conversations, and even generating them.

I will have Part 2 up in the next couple of days.
Thank you for reading!

This article has been fact-checked in cooperation with Copilot in Windows.

7 thoughts on “Impressions of Artificial Intelligence – Part 1 – The Reflected Light of AI”

  1. This was absolutely fascinating! Thank you for sharing your story and experience. It has given me a much better picture of AI and LLM development. I’ve enjoyed using ChatGPT to help me improve my business website. I still have much to learn.

    1. Thank you, Abby! That is cool you are using it for your business. As to having much to learn, I think it is a really strange thing that we are learning about a system that is also learning. It learns from us and we learn from and about it. Almost like meeting an alien intelligence.

  2. Interesting post on the ethical considerations of AI. Thanks for a deeper dive into how AI is trained and “thinks.” As a pastor, I use ChatGPT 4 for sermon research. It’s great for Greek word meanings, comparisons, and finding stuff like “how many times did Jesus have controversies with the Pharisees in the Gospels?” I use image generators for worship when I can’t find something non-copyrighted. It’s really quite stunning and can help get away from all the blond-haired, blue-eyed Jesus-as-surfer-dude images. I love Creative Writer feedback in GPT 4.0. As a research assistant, copy editor, writing prompt and idea generator, and image maker, it’s great.

    But even the 4.0 GPT gets things wrong. It missed on a rather simple correlation of time zones for me last week. It will occasionally miss where a scripture is in the Bible. It requires being smart enough to ask good questions. It will give a mainstream, status quo answer to questions unless you say, “Please include a diversity of opinions from theologians based on their gender and ethnicity.” Much like working with another person, you have to take into account its strengths and weaknesses. I can also see how someone without ethics and an agenda for injustice can harness the power of AI just as well as I can. I find myself fascinated and frightened by it all, and plunging ahead as best I can.

    1. Thank you, Todd! I love how you are implementing it in your sermon preparation. Your insight into asking good questions and trying to get more information beyond the status quo is where I think these systems really become helpful.
      Just before I finished up in my ministry, I was using ChatGPT to help me with my sermons. I found it, like you, fascinating and terrifying at the same time. It really is an artifact of an unknown future.
      Blessings this Easter week!
