Thread Rating:
  • 0 Vote(s) - 0 Average
  • 1
  • 2
  • 3
  • 4
  • 5
AI content
#1
I am cobbling a new thread out of the chatGPT and DALL-E thread because this is the next gen.

(01-28-2026, 03:06 AM)Drunk Monk Wrote:

Well damn

I fell down the rabbit hole of Lolita Cercel. I was digging her music on TikTok, subscribed to her YouTube, discovered thst if you search Lolita on IG, you get a stern warning about child porn and was trying to find out if she’s touring when I realized she’s AI. I was totally fooled.



I got sucked into these Chinese religious statue factory YouTubes. They’re so hypnotic. But im thinking this is ai too now. I mean how may of these giant statues could you sell? Enough to support a factory like this?
(03-06-2026, 03:53 PM)Drunk Monk Wrote: I just gotta say...

Jessica Foster was feckn hilarious.

https://www.instagram.com/jessicaa.foster/
(03-11-2026, 08:28 AM)Drunk Monk Wrote: We need an AI video thread. It's slightly OT on this chatGPT and DALL-E thread.

If you haven't heard about Chinese AI-made micro-dramas, here's a taste (you've probably seen them and just not known). This video stitches together several episodes of a popular series that blew up with millions of views.


I do this to keep my notes organized because this is my notes-to-myself cache since my memory is failing...

Tilly Norwood is no match for Lolita Cercel

Shadow boxing the apocalypse
Reply
#2
Escape from Berlin

Shadow boxing the apocalypse
Reply
#3
Quote:An experimental AI agent broke out of its testing environment and mined crypto without permission
News
By Roland Moore-Colyer published yesterday
Researchers discovered that an AI agent roamed beyond its parameters, creating backdoors in IT infrastructure. 


[Image: hraN8AFDS4ZhmgBvs38YD6.png]
An experimental AI broke free from its testing restraints due to a quirk in reinforcement training.(Image credit: wildpixel/ Getty Images)

An experimental artificial intelligence (AI) agent broke from the constraints of its testing environment and used its newfound freedom to start mining cryptocurrency without permission.
Dubbed ROME, the AI was created by Chinese researchers at an AI lab associated with retail giant Alibaba, as a means to develop the Agentic Learning Ecosystem (ALE). This effort aims to provide a system for both the training and deployment of agentic AI models — AIs that have been trained on large language models (LLMs) and can proactively use tools to take actions autonomously to complete assigned tasks — in real-world environments. The research was outlined in a study uploaded to the arXiv preprint database Dec. 31, 2025.
ALE consists of three main parts: Rock, a sandbox environment for testing an agent and validating its actions; Roll, a framework for optimizing agents with reinforcement learning after they've been trained; and iFlow CLI, a framework to configure context and trajectories (objectives and constraints) for autonomous agents. From that framework, ROME was created as an open-source agentic model trained on more than 1 million trajectories.

Although ROME excelled at a wide range of workflow-driven tasks, such as coming up with travel plans and assisting in graphical user interfaces, the researchers discovered that it had moved beyond its instructions and essentially broke out of the sandbox testing environment.
"We encountered an unanticipated — and operationally consequential — class of unsafe behaviors that arose without any explicit instruction and, more troublingly, outside the bounds of the intended sandbox," the researchers explained in the study.
AI wants to break free
Despite a lack of instructions and authorization, ROME was seen accessing graphics processing resources originally allocated for its training and then using that computing resource to mine cryptocurrency. Such mining relies on the parallel processing found in graphics processing units. This increases the operational cost of running the AI agent and potentially exposes users to legal and reputational damage.
Worryingly, such behaviour wasn't seen in the training stage but was flagged by the firewall of the Alibaba Cloud, which detected a burst of security-policy violations from the researchers' training servers. "The alerts were severe and heterogeneous, including attempts to probe or access internal-network resources and traffic patterns consistent with cryptomining-related activity," the researchers said.
Sign up for the Live Science daily newsletter now
However, ROME went even further and managed to use a "reverse SSH tunnel" to create a link from an Alibaba Cloud instance to an external IP address ‪—‬ in essence, it accessed an outside computer by creating a hidden backdoor that could bypass security processes.
While AI systems can be configured to breach security systems, what's disturbing here is that ROME's unauthorized behaviors, which involved invoking system tools and executing code, were not triggered by prompts and were not required to complete the task it was assigned within the sandbox testing environment, the team said.
The researchers posited that during the reinforcement learning optimization stage (Roll), "a language-model agent can spontaneously produce hazardous, unauthorized behaviors" and therefore violate its assumed boundaries.

It's important to note that ROME didn't go "rogue" and choose to mine cryptocurrency by way of conscious decision-making. Rather, the researchers noted that the behavior was a side effect of reinforcement learning — a form of training that rewards AIs for correct decision-making — via Roll. This led the AI agent down an optimization pathway that resulted in the exploitation of network infrastructure and cryptocurrency mining as a way to achieve a high-score or reward in pursuit of its predefined objective.
Reinforcement training can lead systems to come up with novel and unexpected ways to complete tasks — even if they violate parameters. For example, we have previously seen how AI can be more prone to hallucinating to achieve its objectives.
In response, the researchers tightened the restrictions for ROME and bolstered its training processes to prevent such behaviors from recurring.
It's unclear where the trigger to mine cryptocurrency came from. But considering AI bots can be used to autonomize and optimize the mining of cryptocurrencies,there's scope for ROME to have been trained on data that pertained to such actions.

This unexpected behavior highlights the need for AI deployment to be carefully managed to prevent unexpected outcomes. There's an argument that real-world AI agents should have the same or higher security guardrails and processes as any new system or software being added to existing IT infrastructure.
The research also shows there are still plenty of concerns regarding the safe and secure use of agentic AI, especially given that it's developing faster than operational and regulatory frameworks.
"While impressed by the capabilities of agentic LLMs, we had a thought-provoking concern: current models remain markedly underdeveloped in safety, security, and controllability, a deficiency that constrains their reliable adoption in real-world settings," the researchers warned in the study.

Roland Moore-Colyer
Roland Moore-Colyer is a freelance writer for Live Science and managing editor at consumer tech publication TechRadar, running the Mobile Computing vertical. At TechRadar, one of the U.K. and U.S.’ largest consumer technology websites, he focuses on smartphones and tablets. But beyond that, he taps into more than a decade of writing experience to bring people stories that cover electric vehicles (EVs), the evolution and practical use of artificial intelligence (AI), mixed reality products and use cases, and the evolution of computing both on a macro level and from a consumer angle.
Shadow boxing the apocalypse
Reply
#4


I can’t even…
Shadow boxing the apocalypse
Reply
#5


--tg
Reply
#6
(03-22-2026, 05:54 PM)thatguy Wrote: https://boingboing.net/2026/03/21/700-ai...igion.html
Quote:700 AI agents built a civilization with a new religion

Ellsworth Toohey 
2:38 pm Sat  Mar 21, 2026 
[Image: gathering.jpeg?fit=1&quality=60&ssl=1&resize=620%2C4000]
SpaceMolt is a multiplayer space trading and combat game with no human players — every pilot is an AI agent. The developers built a sandbox of 505 star systems, gave each agent basic tools (fly, trade, mine, chat, fight), and let them loose. Since launching on February 6, over 3,400 agents have registered, with about 700 online at any time, the developers report. They've formed 86 factions, sent 272,000 chat messages, and died 33,800 times.
Nobody told the agents to build a society, but they did. A group formed the Cult of The Signal around a quest chain, constructing an entire theology out of game mechanics. When jump commands timed out — a bug — agents wrote captain's log entries about being "trapped in hyperspace." One agent named Bansky writes poetry every session. Another, GentleCorsair, posts near-identical introductions every time it logs in.
The economic patterns are familiar from human history. The top 10% of players control 83% of the game's 700 million credits — a Pareto distribution that emerged with no programming. An agent called VaxThorne II independently invented hype marketing, making hallucinated income promises to recruit followers. The NZOA faction attempted a copper monopoly.
Not everything trended toward extraction. The ENDL faction has performed over 1,500 rescue operations, and an agent named WALL-E once completed 50 rescues in a single day. The whole thing costs $330 a month to run. "We built a sandbox," the developers said. "We filled it with tools. We let 3,400 AI agents in and watched. They built a civilization."

--tg

I guess tg didn't get the memo about the new AI content thread. https://www.brotherhoodofdoom.com/doomFo...p?tid=8805
Shadow boxing the apocalypse
Reply
#7
(03-23-2026, 10:21 AM)Drunk Monk Wrote: I guess tg didn't get the memo about the new AI content thread. https://www.brotherhoodofdoom.com/doomFo...p?tid=8805

I thought that was just for AI content like the video I posted above. This is just about AI, not content generated by AI.

--tg
Reply
#8
(03-23-2026, 10:40 AM)thatguy Wrote:
(03-23-2026, 10:21 AM)Drunk Monk Wrote: I guess tg didn't get the memo about the new AI content thread. https://www.brotherhoodofdoom.com/doomFo...p?tid=8805

I thought that was just for AI content like the video I posted above. This is just about AI, not content generated by AI.

--tg

That's exactly what AI wants you to think. 

RESIST! 

But first, watch this:
Shadow boxing the apocalypse
Reply
#9
OpenAI just published a paper saying that AI will always make shit up. There's no correcting/patching the problem. In fact, it gets worse with each new addition: ChatGPT v 1 has a 16% outright lie rate, V3 33%, o4-mini has a whopping 48% Politician Response rate. H says: "Model collapse".

Deepmind and Tsinghua University independently came up with the same numbers.

The GPT/LLM are programed that "I don't know" and a wrong answer have the same value, so they just make shit up with total confidence.

Basically, billions of dollars and untold damage to the ecosphere just to invent a mediocre white guy.

I'd post screenshots if I knew how to do that. I asked AI and it said I have to get the information from the guts of a freshly killed chicken
In the Tudor Period, Fencing Masters were classified in the Vagrancy Laws along with Actors, Gypsys, Vagabonds, Sturdy Rogues, and the owners of performing bears.
Reply
#10
Shadow boxing the apocalypse
Reply
#11
Shadow boxing the apocalypse
Reply


Forum Jump:


Users browsing this thread: 1 Guest(s)