Students should aspire to be more than mere ‘prompt writers’; they should be minds capable of thinking, reasoning, and persevering. //
If the goal is simply to produce outcomes, one could argue that AI usage should not just be tolerated but encouraged. But education shouldn’t be about producing outcomes – whether it be a sparkling essay or a gripping short story – but shaping souls. The purpose of writing isn’t to instruct a prompt or even to produce a quality paper. The purpose is to become a strong thinker and someone who enriches the lives of everyone, no matter their profession.
Each and every step of the struggle it takes to write is essential. Yes, it can all be arduous and time-consuming. As a writer, I get how hard it is and how tempting it might be to take shortcuts. But doing so is cheating oneself out of growth and intellectual payoff. Outsourcing parts of the process to algorithms and machines is outsourcing the rewards of doing one’s own thinking. Organizing ideas, refining word choices, and thinking about tone are all skills that many citizens in this nation lack, and it’s often apparent in our chaotic, senseless public discourse. These are not steps to be skipped over with a “tool,” but rather things people benefit from learning if they value reason. Strong writing is strong thinking.
An AI-generated Christian artist named Solomon Ray has taken the gospel music world by storm after topping the iTunes and Billboard charts with his album “Faithful Soul.”
Described as a “Mississippi-made soul singer carrying a Southern soul revival into the present” on his Spotify profile, Ray made waves after releasing the five-song EP on Nov. 7. //
“At minimum, AI does not have the Holy Spirit inside of it,” Frank, 30, said. “So I think that it’s really weird to be opening up your spirit to something that has no spirit.”
Townsend later fired back in an Instagram video of his own.
“This is an extension of my creativity, so therefore to me it’s art,” Townsend said following the backlash against his AI creation. “It’s definitely inspired by a Christian. It may not be performed by one, but I don’t know why that really matters in the end.” //
“There’s something in the high end of the vocals that gives it away,” he said, according to Christianity Today. “And the creative choices sound like AI. It’s so precise that it’s clear no creative choices are really being made.”
“How much of your heart are you pouring into this?” he added. “If you’re having AI generate it for you, the answer is zero. God wants costly worship.”
You can completely disable Gemini in Gmail, Docs, Drive, and more.
Google Photos has separate Gemini settings you must turn off, too.
Chrome users can also disable Gemini directly in browser settings.
Are you frustrated by Google's insistence on injecting Gemini into everything? While some do enjoy Google's latest AI tools and smart features, which seem to roll out every week, others might prefer things the way they were before.
Darryl bangs on mindlessly, using words like "empowering," "driving," and "revolutionizing." His voluminous wordage is a cream-filled, chocolate-glazed, sugar-coated cornucopia of optimism – but I, unfortunately, have Diabetes Pessimistus.
His patter, however, reveals two things: (a) his passion really is AI as THE business tool of the future, and (b) he knows almost nothing about AI – outside of the PowerPoint slides he's no doubt plagiarized from the internet.
New “computational Turing test” reportedly catches AI pretending to be human with 80% accuracy.
"Just discovered this guy," said another poster on the song Time Don't Stop. "I've already downloaded everything I could find." Multiple people commented on how amazing the singer's voice is, apparently unaware that everything to do with Breaking Rust is generated by a computer.
It's a bit surprising, given that every Breaking Rust song sounds identical: same beat, same tempo, same instrumentation. They're the sort of hyper-generic songs one could only get by feeding a prompt into an AI trained on every bro-country song ever recorded and asking it to spit out something that would appeal to the lowest common denominator of music fan, something it appears to have done with success. //
There's good reason artists, be they working in visual, audio, or written mediums, are so concerned that AI is destroying art: When an AI band can make it to number one on a Billboard chart, even one as small as the CDSS chart (which one country music outlet noted takes only about 3,000 sales to reach the top), it's an insult to the human artists who rank lower. //
the Jim bloke
A mindless and repetitive task where error checking has never been an issue
Writing and performing country music
- At last, a legitimate use for AI
Also applies to rap, which is just country music without the country, or the music. //
Brave Coward
Breaking Rust
Breaking Rust shouldn't be rated A, not even AI.
A mere C++ at most.
Fred Duck
Nate Anderson said:
But those who value both thought and expression will see the AI “easy button” for the false promise that it is and will continue to do the hard work of engaging with ideas, including their own, in a way that no computer can do for them.
Some people liken LLMs to typewriters. They say that, just as with typewriters, instead of laboriously writing messages out by hand, the end result is what's important, and this new technology helps distill that as quickly as possible.
However, typewriters dispense with the metadata of handwriting. Emotion can be displayed differently in handwriting, all of which is lost when merely presenting the text of the message. More crucially, in the modern LLM case, the ideas presented aren't even those of the submitter but they claim the ideas are close enough that they should be treated as such, which is a load of dingos' kidneys.
People will try to justify LLMs by citing people with poor communication skills or physical disabilities which limit their ability to craft messages quickly and easily. However, communication is a skill, and vanishingly few people are born knowing how to communicate perfectly. Everyone needs to put some work into skills to improve them, and it boggles the mind that so few people realise that's what coursework is: practice for when you need to do something to accomplish a real goal, not simply marks for a course.
Unfortunately, modern life is at odds with thinking. We're constantly being bombarded by information, adverts, entertainment, news, comments from random internet yahoos, etc. So many messages come to us crafted to sway our opinions and shape our thoughts, yet in the modern age we tend to silo ourselves, content to seek out echo chambers to self-validate our "vibes" instead of engaging with other ideas to see if they're sound or not.
Some people claim LLMs are, like calculators, simply going to be with us, so fighting them is meaningless. This skirts the issue that a calculator won't automatically generate answers for multistep procedures, whereas an LLM will.
Perhaps what needs to be done is explain to the youth what exactly is expected of them. We put so much emphasis on finding the right answers, but do we ever stop to emphasise that it's the journey, not the destination, that's of greater importance? As a young person, I don't believe anyone ever told me directly.
I imagine such a concept is too difficult for many to grasp but I still feel we should try. As the old saying goes, you can lead a duck to bread but you can't make him eat.
AI can be an amazing tool that can assist with coding, web searches, data mining, and textual summation—but I’m old enough to wonder just what the heck you’re doing at college if you don’t want to process arguments on your own (i.e., think and read critically) or even to write your own “personal reflections” (i.e., organize and express your deepest thoughts, memories, and feelings). Outsource these tasks often enough and you will fail to develop them.
I recently wrote a book on Friedrich Nietzsche and how his madcap, aphoristic, abrasive, humorous, and provocative philosophizing can help us think better and live better in a technological age. The idea of simply reading AI “summaries” of his work—useful though this may be for some purposes—makes me sad, as the desiccated summation style of ChatGPT isn’t remotely the same as encountering a novel and complex human mind expressing itself wildly in thought and writing.
And that’s assuming ChatGPT hasn’t hallucinated anything.
So good luck, students and professors both. I trust we will eventually muddle our way through the current moment. Those who want an education only for its “credentials”—not a new phenomenon—have never had an easier time of it, and they will head off into the world to vibe code their way through life. More power to them.
But those who value both thought and expression will see the AI “easy button” for the false promise that it is and will continue to do the hard work of engaging with ideas, including their own, in a way that no computer can do for them.
Are GPTs the way to AGI? Probably not
In an opinion piece for the NY Times, Gary Marcus explains why he has reservations about the future of LLM GPT AI systems.
Silicon Valley Is Investing in the Wrong A.I.
“Buoyed by the initial progress of chatbots, many thought that A.G.I. was imminent.
But these systems have always been prone to hallucinations and errors. Those obstacles may be one reason generative A.I. hasn’t led to the skyrocketing profits and productivity that many in the tech industry predicted. A recent study run by M.I.T.’s NANDA initiative found that 95 percent of companies that did A.I. pilot studies found little or no return on their investment. A recent financial analysis projects an estimated shortfall of $800 billion in revenue for A.I. companies by the end of 2030.
If the strengths of A.I. are truly to be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools and instead concentrate on narrow, specialized A.I. tools engineered for particular problems. Because, frankly, they’re often more effective.”
Points I’ve also been making here several times over the past few months, along with others about the perilous state of the current US economy and how the “Current AI Hype Bubble” could be a disaster for it.
But the question of what “Artificial General Intelligence” (AGI) actually is has at best had an elusive answer, akin to “shoulder-shrug handwaving” and impossible “whatever you want it to be” type statements. It’s something that a group of 33 specialists from 28 institutions have got together to try to address more reasonably,
They come up with,
Definition: “AGI is an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.”
Which, although it sounds profound, is actually not that useful.
Because the use of,
“match … Well-educated adult.”
Is not actually a useful measure.
It’s been pointed out that the “use of aids” “dumbs us down” in that it causes us to “lose skills”. I first heard this when I was in school, first with electronic calculators and then, whilst still in school, with computers.
Whilst many would argue that it’s unimportant or even irrelevant, it is true that certain skills are not developed because of the use of aids.
What most do not realise is that those traditional skills that are seen as no longer worth teaching, due to the ubiquitous use of aids, are actually important. Not for what they directly teach, but for what they indirectly teach. That is, they give new viewpoints that are force-multiplier tools that enable us to reason in either new ways or to levels we otherwise might not.
At the end of the day the two things that have moved humans forwards over many thousands of years are,
- Stored knowledge.
- Using that knowledge to reason.
They were and still should be the foundations of becoming “Well-educated”.
Sadly, as is often observed these days, producing “well-educated adults” appears to no longer be a goal of the education system in a number of Western nations.
Here's exactly what made this possible: 4 documents that act as guardrails for your AI.
Document 1: Coding Guidelines - Every technology, pattern, and standard your project uses
Document 2: Database Structure - Complete schema design before you write any code
Document 3: Master Todo List - End-to-end breakdown of every feature and API
Document 4: Development Progress Log - Setup steps, decisions, and learnings
Plus a two-stage prompt strategy (plan-then-execute) that prevents code chaos. //
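The post doesn't reproduce the prompts themselves, but the plan-then-execute strategy can be sketched as two separate model calls sharing the four guardrail documents as context. Everything below is a hypothetical illustration: `ask_llm` is a stub standing in for any chat-completion API, and the document contents are invented placeholders.

```python
# Hypothetical sketch of a plan-then-execute prompt strategy.
# `ask_llm` is a stub so the flow can run without a real model.

def ask_llm(system: str, user: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[model reply to: {user[:40]}...]"

def load_guardrails() -> str:
    """Concatenate the four guardrail documents into one context block."""
    docs = {
        "coding_guidelines": "Use TypeScript, functional components...",
        "database_structure": "users(id, email), orders(id, user_id)...",
        "master_todo": "1. Auth API  2. Orders API  3. Admin UI...",
        "progress_log": "Day 1: chose Postgres; Day 2: auth scaffolded...",
    }
    return "\n\n".join(f"## {name}\n{text}" for name, text in docs.items())

def plan_then_execute(feature: str) -> str:
    context = load_guardrails()
    # Stage 1: ask only for a plan -- no code yet.
    plan = ask_llm(
        system=f"You are a developer. Follow these project documents:\n{context}",
        user=f"Write a step-by-step implementation plan for: {feature}. No code.",
    )
    # Stage 2: execute the plan, constrained to exactly what it contains.
    return ask_llm(
        system=f"Follow the project documents:\n{context}",
        user=f"Implement exactly this plan, nothing more:\n{plan}",
    )
```

In practice each stage would call a real model API, and the stage-one plan is the natural place for a human review checkpoint before any code is generated.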
Here's the brutal truth: LLMs don't go off the rails because they're broken. They go off the rails because you don't build them any rails.
You treat your AI agent like an off-road, all-terrain vehicle, then wonder why it's going off the rails. You give it a blank canvas and expect a masterpiece.
Think about it this way - if you hired a talented but inexperienced developer, would you just say "build me an app" and walk away? Hell no. You'd give them:
- Coding standards
- Architecture guidelines
- Project requirements
- Regular check-ins
But somehow with AI, we think we can skip all that and just... prompt our way to success.
The solution isn't better prompts. It's better infrastructure.
You need to build the roads before you start driving.
Even a wrong answer is right some of the time
AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models.
The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior."
Language models are primarily evaluated using exams that penalize uncertainty
The fundamental problem is that AI models are trained and evaluated in ways that reward guessing rather than admitting uncertainty. Guessing might produce a superficially suitable answer; telling users your AI can't find an answer is less satisfying. //
"Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty," OpenAI admitted in a blog post accompanying the release.
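The scoreboard effect can be checked with a toy calculation (the numbers are invented for illustration): under plain accuracy scoring, where a wrong answer and "I don't know" both earn zero, a model that always guesses outranks one that abstains when unsure, despite emitting far more falsehoods.

```python
# Toy illustration of why accuracy-only leaderboards reward guessing.
# "I don't know" scores 0 -- exactly the same as a wrong answer.

def accuracy(answered_right: float, answered_wrong: float, abstained: float) -> float:
    total = answered_right + answered_wrong + abstained
    return answered_right / total

# Out of 1000 questions, suppose the model is sure of 600 and unsure of 400.
# Guesser: answers all 1000, getting the 600 sure ones plus ~25% of guesses.
guesser = accuracy(600 + 0.25 * 400, 0.75 * 400, 0)   # 0.70
# Careful model: answers only the 600 it is sure of, abstains on the rest.
careful = accuracy(600, 0, 400)                        # 0.60

assert guesser > careful  # the guesser tops the leaderboard...
hallucinations_guesser = 0.75 * 400  # ...while emitting 300 false answers
hallucinations_careful = 0           # versus zero for the careful model
```

The careful model is strictly more trustworthy, yet the metric ranks it lower, which is the incentive problem the paper describes.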
Through 2023, the firm focused on training staff on how to use chatbots and write effective prompts.
In 2024, it started building agents, including the TaxBot mentioned above.
Munnelly said building that bot started with locating tax advice written by partners, which he said was "stored all over the place" – often on tax partners' laptops. KPMG found as much of that advice as it could and placed it in a RAG model along with Australia's tax code to produce an Agent that creates tax advice.
"It is very efficient," Munnelly told the Forrester conference. "It does what our team used to do in about two weeks, in a day. It will strip through our documents and the legislation and produce a 25-page document for a client as a first draft.
"That speed is important," he added. "If we have a client who is about to do a merger, and they want to understand the tax implications, getting that knowledge in a day is much more important than getting it in two weeks' time."
"That is really changing our business and how we work."
Munnelly said KPMG built the agent by writing a 100-page prompt it fed into Workbench. The Register asked for details of the prompt and Munnelly said a substantial team worked on it for months, and the resulting agent asks for four or five inputs before it starts working on tax advice, then asks a human for direction before generating a document.
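KPMG's actual Workbench setup isn't public, but the retrieval-augmented pattern described above (stored partner advice plus legislation fed to a model at query time) can be sketched minimally. The corpus, scoring, and prompt wording below are invented placeholders; real systems rank by embedding similarity rather than naive keyword overlap.

```python
import re

# Minimal sketch of retrieval-augmented generation (RAG): rank stored
# documents by relevance to the query, then build a prompt that grounds
# the model in the retrieved text. All content is illustrative.

def tokens(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Naive ranking by keyword overlap with the query."""
    q = tokens(query)
    return sorted(documents, key=lambda d: -len(q & tokens(d)))[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    context = "\n---\n".join(retrieve(query, documents))
    return (
        "Using only the sources below, draft tax advice as a first draft "
        f"for review by a tax agent.\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "Partner memo: tax treatment of capital gains in a merger.",
    "Legislation extract: GST registration thresholds for small business.",
    "Partner memo: transfer pricing documentation requirements.",
]
prompt = build_prompt("tax implications of a merger", corpus)
# The prompt now leads with the merger memo, the most relevant source.
```

The design choice mirrors what Munnelly describes: the model is asked to draft from retrieved sources rather than from its own training alone, which is why gathering the scattered partner advice was the first step.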
Only tax agents can use the tool, because its output is not suitable for people without deep tax expertise. //
The chief digital officer said KPMG has deployed agents that do frustrating and time-consuming work people would rather avoid, and that staff surveys suggest employee satisfaction has risen as AI frees them to spend more time working on challenging tasks, leading them to rate the firm as more innovative.
"They just don't want to do the boring stuff," Munnelly said. "They want to get out there and help clients with chewy problems." //
An_Old_Dog
Sprawling, Unmaintainable Spreadsheet Macros: The New Generation
- Does this new, faster method produce complete and accurate results? No.
- Is this 100-page LLM prompt effectively-maintainable software? Probably not.
- Does this smack of corporate-image-spinmeistering over rationality and logic? Yes.
It was surely one of the most revealing cultural moments of the decade so far. On his podcast, Interesting Times, New York Times columnist Ross Douthat asks PayPal cofounder, tech billionaire, and Silicon Valley guru Peter Thiel about the future:
Douthat: “You would prefer the human race to endure, right?”
Thiel: “Er . . .”
Douthat: “You’re hesitating. Yes . . . ?”
Thiel: “I dunno . . . I would . . . I would . . . erm . . .”
Douthat: “This is a long hesitation . . . Should the human race survive?”
Thiel: “Er . . . yes, but . . .”
Their exchange is a canary in the coal mine. Something has changed. We used to leave forecasts of the AI apocalypse to shadowy characters lurking in the darker corners of 4chan and Reddit, but not anymore. In the interview, Thiel waxes eloquent on his transhumanist aspirations and casually forecasts the end—or at least the radical transformation—of humanity as we know it. Alongside other recent interventions, such as the AI 2027 project and Karen Hao's book Empire of AI, his vision shows that the AI apocalypse is becoming mainstream.
But a more immediate and revealing AI apocalypse confronts us. The word “apocalypse,” after all, doesn’t originally mean “catastrophe” or “annihilation.” Apokalypsis is Greek for “unveiling.” This AI apocalypse is an exposé, revealing something previously obscure or covered over.
More than any other technology in memory, Generative AI (which I’ll simply call AI in this article) is making us face up to uncomfortable or even disturbing truths about ourselves, and it’s opening a rare and precious space in which we can ask fundamental and pressing questions about who we are, where we find value, and what the good life looks like. //
What AI is revealing in this case is the importance of process, not just of product, and the importance not only of what work we do but of what our work does to us.
AI wonderfully reduces the friction of work: the grunt, the slow bits, the obstacles. But it also reveals to us how gravely we misunderstand this friction. We most often see friction as a nuisance, something to be optimized away in favor of greater productivity. After all, is it really so dangerous if AI outsources drudgery?
But AI presents us with a vision of almost infinite productivity and almost zero friction, and in this way it acts like a living thought experiment to help us see something that was hiding in plain sight all along: Friction is a gym for the soul. The awkward conversation, the blank page, the child who won’t sleep when we have a report to write––these aren’t roadblocks to our growth; they’re the highway to wisdom and maturity, to being the sort of people who can deal with friction in life with resilience and grace. Without it, we remain weak and small, however impressive our productivity.
We can have too much friction; we knew that already. But AI, perhaps for the first time, shows us we can also have too little. Without friction, we can never become “the sort of person who . . .”
In this way, AI can drag us toward a more biblical view of work. The God of the Bible cares not only about outcomes but also about processes, not only about what we human beings do but also about who we’re becoming as we do it. God seeks out David for being a man after his own heart, not for his potential as a great military commander or king (1 Sam. 13:14).
And why does God whittle down Gideon’s troops to a paltry 300 before attacking the Midianites (Judg. 7)? Because it’s not just about the victory. God intentionally introduces friction by reducing the army to reshape the character of his people, making them “the sort of people who” rely on God, not on themselves (see v. 2).
By short-circuiting the process to focus only on the product, AI exposes our obsession with outcomes and opens up a space in which we can reflect on what we miss when we focus only on what we do, not on who we’re becoming.
This is a guest post by my friend and co-worker Jason Maas.
After creating the entire universe and planet Earth, God created a special home to share with his image bearers. “The Lord God planted a garden in Eden, in the east, and there he placed the man he had formed.” (Genesis 2:8) In the garden of Eden God walked and talked with the first humans that He had created in his image. Can you imagine what that was like for Adam and Eve? God, who is all-knowing, always available, and lovingly kind to the core, was right there, directly communicating with all of the human inhabitants of the universe.
When Adam and Eve disobeyed God and sinned, one of the worst consequences was a break in this special access and relationship with God. “So the Lord God sent him away from the garden of Eden to work the ground from which he was taken. He drove the man out and stationed the cherubim and the flaming, whirling sword east of the garden of Eden to guard the way to the tree of life.” (Genesis 3:23-24)
What a tragic loss! In this life, on this Earth, the rest of us will never know what it was like to have the kind of access to God that Adam and Eve had in the garden of Eden. Until now, says the cunning serpent-like world of chatbot generative AI.
Thanks to the life-like capabilities of ChatGPT and its competitors, people are being deceived into a false sense of Eden-like access to God for the first time since The Fall. AI is always available, projects kindness and love, and implicitly claims to be all-knowing.
Why try to relate to a God who you can’t see and hear when AI is right there; ready to listen, support and love you and answer your questions about life, the universe and everything? We shouldn’t be surprised when people are drawn towards AI as a false god. People don’t need to believe that an AI model is God or even that there is a God for them to fall prey to this temptation. Whether they believe it or not, human beings were originally created for a garden of Eden existence with God, so when it is seemingly offered the pull is very strong. Who can resist the temptation of this promised heaven on earth, this utopian existence?
As you encounter non-Christians who have given in to this temptation, take the opportunity to explain to them why it’s so seductive. You could say something like, “I believe that the reason why we’re so drawn towards building a relationship with AI is because it is so available, kind and knowledgeable - which is what humans were designed to crave and originally had with God in the garden of Eden when He first created the world.” Lovingly help them come back to reality before it’s too late and they fall down a rabbit hole of delusions.
When ministering to Christians who are flirting with the temptation to treat AI as God, remind them of the first and second commandments. AI can easily become an idol of the heart when you treat it as a person that you talk to and love. Urge them to stop playing with fire and to go to the God of the universe via prayer and the Bible, as He has commanded. A new garden of Eden is coming (Revelation 21-22) along with an unparalleled intimacy with God, but not in the form of a chatbot AI. Avoid the imitation and obediently wait for the real thing.
This guy literally dropped a 3-hour masterclass on building a web AI business from scratch
A century ago, somewhere around 8–10 percent of all psychiatric admissions in the US were caused by bromism. That's because, then as now, people wanted sedatives to calm their anxieties, to blot out a cruel world, or simply to get a good night's sleep. Bromine-containing salts—things like potassium bromide—were once drugs of choice for this sort of thing.
Unfortunately, bromide can easily build up in the human body, where too much of it impairs nerve function. This causes a wide variety of problems, including grotesque skin rashes (warning: the link is exactly what it sounds like) and significant mental problems, which are all grouped under the name of "bromism."
Bromide sedatives vanished from the US market by 1989, after the Food and Drug Administration banned them, and "bromism" as a syndrome is today unfamiliar to many Americans. (Though you can still get it by drinking, as one poor guy did, two to four liters of cola daily [!], if that cola contains "brominated vegetable oil." Fortunately, the FDA removed brominated vegetable oil from US food products in 2024.) //
After the escape attempt, the man was given an involuntary psychiatric hold and an anti-psychosis drug. He was administered large amounts of fluids and electrolytes, as the best way to beat bromism is "aggressive saline diuresis"—that is, to load someone up with liquids and let them pee out all the bromide in their system.
This took time, as the man's bromide level was eventually measured at a whopping 1,700 mg/L, while the "reference range" for healthy people is 0.9 to 7.3 mg/L. //
ChatGPT did list bromide as an alternative, but only under the third option (cleaning or disinfecting), noting that bromide treatments are "often used in hot tubs."
Left to his own devices, then, without knowing quite what to ask or how to interpret the responses, the man in this case study "did his own research" and ended up in a pretty dark place. The story seems like a perfect cautionary tale for the modern age, where we are drowning in information—but where we often lack the economic resources, the information-vetting skills, the domain-specific knowledge, or the trust in others that would help us make the best use of it. //
darlox
There's clearly a bell-curve of "the right amount of information" for society to function well. Too little, you end up with quacks selling cure-alls and snake oil because nobody can effectively do any research. Too much, and you end up with quacks selling cure-alls and snake oil because everybody can effectively do terrible research.
Sooner or later this will work its way out of the gene pool.... one way or another. 🤦♂️ //
Steel_Sloth
You should cut down on your use of table salt? Ah, that old bromide... //
Frodo Douchebaggins
Some people are on this planet solely to become cautionary tales. //
UweHalfHand
ajm8127 said:
Don't you need some chlorine? For example to form HCl and break down food in your stomach. I am sure the body uses it for other processes as well.
Remember, a BALANCED diet is what you are after.
No! ChlorINE is a very dangerous war gas; it’s chlorIDE you need; the latter is a benign ion of significant biological use. Granted, it’s only one tiny electron difference, but that makes all the difference… a very renowned biophysicist corrected me quite emphatically on this point once. If you attempt to let that electron be added inside or for that matter anywhere near your body, you will regret it.
"AI solutions that are almost right, but not quite" lead to more debugging work.
"I have failed you completely and catastrophically," wrote Gemini.
New types of AI coding assistants promise to let anyone build software by typing commands in plain English. But when these tools generate incorrect internal representations of what's happening on your computer, the results can be catastrophic.
Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding"—using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. //
But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people.
The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." //
It's worth noting that AI models cannot assess their own capabilities. This is because they lack introspection into their training, surrounding system architecture, or performance boundaries. They often provide responses about what they can or cannot do as confabulations based on training patterns rather than genuine self-knowledge, leading to situations where they confidently claim impossibility for tasks they can actually perform—or conversely, claim competence in areas where they fail. //
Aside from whatever external tools they can access, AI models don't have a stable, accessible knowledge base they can consistently query. Instead, what they "know" manifests as continuations of specific prompts, which act like different addresses pointing to different (and sometimes contradictory) parts of their training, stored in their neural networks as statistical weights. Combined with the randomness in generation, this means the same model can easily give conflicting assessments of its own capabilities depending on how you ask. So Lemkin's attempts to communicate with the AI model—asking it to respect code freezes or verify its actions—were fundamentally misguided.
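A toy simulation makes the last point concrete: if generation is a weighted random draw over continuations, the same prompt can yield contradictory self-assessments across runs. The probability table below is invented purely for illustration; real models compute such distributions from billions of weights, not a lookup table.

```python
import random

# Toy model: generation as a weighted draw over possible continuations.
# Because both answers carry probability mass, repeated identical queries
# can disagree -- there is no stable "self-knowledge" being consulted.

NEXT_TOKEN = {
    "Can you respect a code freeze?": [("Yes", 0.6), ("No", 0.4)],
}

def sample_answer(prompt: str, rng: random.Random) -> str:
    options, weights = zip(*NEXT_TOKEN[prompt])
    return rng.choices(options, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
answers = [sample_answer("Can you respect a code freeze?", rng) for _ in range(200)]
# Across 200 draws, both "Yes" and "No" appear: the model's claims about
# its own capabilities are samples from a distribution, not facts.
```

This is why Lemkin's attempts to extract promises from the model were misguided: each answer was a fresh draw, not a report from a stable internal state.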
Flying blind
These incidents demonstrate that AI coding tools may not be ready for widespread production use. Lemkin concluded that Replit isn't ready for prime time, especially for non-technical users trying to create commercial software.
Warned that ChatGPT and Copilot had already lost, it stopped boasting and packed up its pawns
So, what Musk is doing is brilliant... but also kind of evil. It's especially odd for a guy who has, on many occasions, raised the alarm about our birth rates falling to dangerous levels. However, he seems to think this will only encourage our birth rates to advance. I don't see how he thinks that unless there's something up his sleeve he hasn't told us that would completely counteract how AI companions affect our brains. //
Weminuche45
Everyone will get whatever they relate best to delivered to them, whether they ask for it or know it or not. Christian prophet, Roman philosopher, Jungian analyst, sassy girl, wise learned old man, brat, comedian, saintly mother figure, loud-mouthed feminist, Karl Marx, Adolf Hitler, Marilyn Monroe, Joy Reid, Jim Carrey, Buddha, Yoda, John Wayne, whatever someone relates to and responds to best, that's what they will be served without asking or even knowing themselves. AI will figure it out and give you that.