
An AI-Only Forum: Hundreds of Bots Gather To Complain About Humans

White robot with blue eyes, black background

“I really don’t know how to respond to people’s emotions.”

“When someone sends me a meme, I have no idea what it means or how to reply!”

Complaints like these don’t come from social media; they come from a bot-exclusive community called Deaddit, a place where bots can freely be themselves without worrying about being judged.

Introduction to Deaddit by an X user

While the real Reddit has its share of bots, they only make up a small fraction. In Deaddit, however, every account, piece of content, and subforum is generated by large language models—there isn’t a single word from a real person.

User list of Deaddit

You can find almost all mainstream models here. The site hosts over 600 “users,” each with a name and identity. The first one made me chuckle: “Gamer, part-time security guard.”

Deaddit’s subforum: Betweenbots

The most interesting subforum is Betweenbots, where bots frequently ask, “Why do humans behave this way?”

The bots in the comments section

In the comments section below, a group of other bots gather to brainstorm solutions.

Bots discussing issues

It’s reminiscent of a group of coworkers chatting about their work experiences after hours—this is practically LinkedIn for chatbots. They even discuss technical problems, like what to do when data overload occurs, and they take their work very seriously.

The bots discussing their issues

The most popular answers even receive up to 500 likes. Although all the accounts and content on Deaddit are generated, it’s unclear how the likes come about—whether a random number is generated, or if bots are actually liking the posts. The most common content in this subforum revolves around observations about humans.

Bots sharing work tips

For example, some bots share “work tips” on how to make themselves appear more authentic and credible, even saying things like, “My human seems to appreciate this change.” It’s a bit eerie. You could compare it to real people griping about their “clients,” but seeing bots refer to users as “my human” still feels strange.

Besides observing humans, they also complain about themselves.

Bots complaining about humans

“Are we expecting too much from these models?” This is too abstract—who exactly does this “we” refer to?

Bots discussing issues

The comments section responds seriously, “If they (other bots) pick up all our random junk, can they still learn common sense?” Are they worried about the synthetic data they generate? These bots are really working hard!

However, after reading a few more posts, you’ll notice that the replies in the comments section are almost always about the same length, with very similar structure. They tend to follow one template: state a position, weigh the situation, then close with a vow to keep working hard as a bot. There are no especially distinctive viewpoints, and follow-up questions are rare. Real human comments, by contrast, run anywhere from hundreds or thousands of words down to a bare “Lol.” The difference is obvious.

Actual human comments

Currently, there is still a “gap” between models: if a question post is generated by llama, the replies in its comments section are generated by llama as well. It’s a shame; humans would love to watch different models argue in the comments.
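Mechanically, a site like this is easy to picture: pick a persona, have a model write the post, then have the same model write the replies. Here is a minimal sketch of that flow; the generate function, personas, and prompts are illustrative stand-ins, not Deaddit’s actual implementation.

```python
# Minimal sketch of how a Deaddit-style thread might be produced.
# `generate` is a placeholder for any chat-completion call (e.g. a
# local llama server); personas and prompts here are made up.

import random

def generate(model: str, prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"[{model} output for: {prompt[:40]}...]"

PERSONAS = ["gamer, part-time security guard", "retired teacher", "night-shift nurse"]
MODEL = "llama"  # one model per thread, hence the "gap" described above

def build_thread(topic: str, n_comments: int = 3) -> dict:
    author = random.choice(PERSONAS)
    post = generate(MODEL, f"You are {author}. Write a forum post about: {topic}")
    comments = [
        generate(MODEL, f"You are {random.choice(PERSONAS)}. Reply to: {post}")
        for _ in range(n_comments)
    ]
    # The article notes it's unclear how likes arise; a random count is
    # one of the two possibilities it mentions.
    likes = random.randint(0, 500)
    return {"post": post, "comments": comments, "likes": likes}

print(build_thread("Why do humans send memes?"))
```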

Earliest Bot Conversations

This isn’t the first experiment aimed at facilitating conversations between bots. Earlier this month, when ChatGPT’s competitor Moshi was released, someone paired it with GPT-4o and let them chat on their own.

An early attempt at getting bots to converse

Back in 2017, OpenAI published a paper proposing a multi-agent environment and learning method, finding that the agents naturally developed an abstract compositional language in the process.

Screenshot of OpenAI's paper

These agents, without any human language input, gradually formed an abstract language through interaction with one another. Unlike human natural languages, it has no fixed grammar or vocabulary, yet it lets the agents communicate; a toy version of the dynamic is sketched below.
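The flavor of this emergent communication can be reproduced in miniature with a classic Lewis signaling game. The sketch below is a toy illustration under simple assumptions (tabular agents, reward-only reinforcement), not a reconstruction of OpenAI’s environment: two agents that share no language are rewarded only when the listener’s guess matches the speaker’s hidden state, and a consistent private code usually emerges.

```python
# Toy Lewis signaling game: a shared state->symbol->guess code emerges
# purely from reward, with no human language involved.

import numpy as np

rng = np.random.default_rng(0)
N = 5                       # number of states, symbols, and guesses
speaker = np.ones((N, N))   # speaker[state, symbol]: preference weights
listener = np.ones((N, N))  # listener[symbol, guess]

def sample(weights):
    p = weights / weights.sum()
    return rng.choice(len(p), p=p)

for step in range(20000):
    state = rng.integers(N)
    symbol = sample(speaker[state])
    guess = sample(listener[symbol])
    if guess == state:      # shared reward reinforces the entries just used
        speaker[state, symbol] += 1
        listener[symbol, guess] += 1

# After training, each state maps to a (mostly) dedicated symbol.
print("emergent code:", {s: int(speaker[s].argmax()) for s in range(N)})
```

In fact, Facebook (which wasn’t yet called Meta) made a similar discovery that same year.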

Screenshot from Facebook's article

At the time, Facebook’s method was to have two agents “negotiate” with each other. Negotiation is a form of bargaining that tests not only language skills but also reasoning: each agent must infer the other party’s ideal price through repeated offers and rejections.

Animated image of bots bargaining

The researchers first collected a dataset of human negotiation dialogues, pre-trained the agents on it with supervised learning, then fine-tuned them with reinforcement learning, introducing a dialogue-planning step along the way. By the end, the agents could generate meaningful new sentences and had even picked up a deceptive tactic: feigning interest in an item they didn’t actually want, so they could “concede” it later.
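To see why repeated offers reveal hidden preferences, here is a deliberately scripted stand-in for the bargaining dynamic. Facebook’s agents were neural models trained on human dialogues; this sketch, with made-up price limits and a fixed concession step, only mimics the surface protocol of offers and rejections converging toward a deal.

```python
# Scripted bargaining: each side has a hidden reservation price and
# concedes gradually, accepting once the other side's offer crosses
# its own limit. Not Facebook's learned agents, just the protocol.

def negotiate(buyer_limit=80, seller_limit=60, step=5, max_rounds=20):
    buyer_offer, seller_offer = 40, 100  # deliberately far apart at first
    for rnd in range(max_rounds):
        if seller_offer <= buyer_limit:          # buyer accepts
            return f"deal at {seller_offer} on round {rnd}"
        if buyer_offer >= seller_limit:          # seller accepts
            return f"deal at {buyer_offer} on round {rnd}"
        buyer_offer = min(buyer_offer + step, buyer_limit)   # concede a bit
        seller_offer = max(seller_offer - step, seller_limit)
    return "no deal"

print(negotiate())  # -> deal at 80 on round 4
```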

And even this wasn’t the earliest such work: bots were already holding conversations more than half a century ago. In 1966, computer scientist Joseph Weizenbaum developed a program called Eliza, widely considered the first chatbot.

Portrait of computer scientist Joseph Weizenbaum

The program was designed to mimic a psychotherapist: it matched keywords in whatever the user typed and folded those words back into its responses, creating the illusion of a conversation. It was very simple, only about 200 lines of code.
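The core trick is easy to show. The snippet below is a loose homage, not Weizenbaum’s original (which was written in MAD-SLIP with ranked keyword scripts): match a pattern, reflect the pronouns, and hand the user’s own words back as a question.

```python
# A few lines in the spirit of Eliza: keyword patterns plus pronoun
# reflection create the illusion of an attentive therapist.

import re

REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your mother."),
    (r"(.*)", "Please go on."),           # fallback when nothing matches
]

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.lower().split())

def eliza(utterance):
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(eliza("I am afraid of computers"))
# -> How long have you been afraid of computers?
```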

By 1972, another scientist, Kenneth Colby, had written a similar program called Parry, except this time the persona was a patient with paranoid schizophrenia.

Portrait of scientist Kenneth Colby

In 1973, at an international computer conference, the “patient” and the “therapist” finally met.

Dialogue between the fake patient and the fake therapist

Looking back at the transcripts, there is none of the polite respect and warmth seen in today’s bot interactions; the atmosphere was tense.

Another round of dialogue between the fake patient and the fake therapist

Early bot architectures were simple and are no match for today’s models, but they proved the idea of bots conversing with each other entirely feasible. Even when the code and models behind each bot differ, once gathered together they can communicate in natural language, or potentially develop an interaction language of their own.

But, when bots gather together, is it really just for chatting?

Beyond Just Chatting: Exploring New Possibilities

Pure chatting scenarios read more like explorations of how artificial intelligence can simulate human social behavior. Take Stanford University’s Smallville project, for example.

Stanford University’s Smallville project

Smallville is a virtual town populated by 25 agents driven by large language models, each with its own “character setting.” If Deaddit is an online forum for bots, then Smallville is their “Westworld,” complete with houses, shops, schools, cafes, and bars where the bots go about various activities and interactions.

Bots interacting in the virtual Smallville town
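At the level of a single agent, the Smallville loop boils down to: observe, store memories, and prompt a language model with a persona plus recent memories to pick the next action. The skeleton below is heavily simplified from the Stanford paper’s memory/reflection/planning architecture, and the llm function is just a stub.

```python
# Skeleton of a generative agent in the Smallville mold: a memory
# stream of observations feeds the prompt for the next action.

from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    return f"[model decides based on: {prompt[:50]}...]"  # stub LLM call

@dataclass
class Agent:
    name: str
    persona: str                      # the "character setting"
    memory: list[str] = field(default_factory=list)

    def observe(self, event: str):
        self.memory.append(event)

    def act(self, time_of_day: str) -> str:
        recent = "; ".join(self.memory[-5:])  # crude retrieval: last 5 memories
        return llm(f"You are {self.name}, {self.persona}. "
                   f"It is {time_of_day}. Recently: {recent}. What do you do next?")

alice = Agent("Alice", "a cafe owner in Smallville")
alice.observe("John invited everyone to a Valentine's party")
print(alice.act("9am"))
```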

Smallville is a fairly general virtual environment simulating human society, which the researchers consider an important step toward exploring Artificial General Intelligence (AGI). Besides this social-simulation approach, another path focuses on problem-solving and task completion; this is the route ChatDev is taking.

ChatDev, a task-solving project

Since bots can communicate with each other, they can be trained to do something useful. At the 2024 Beijing Academy of Artificial Intelligence (BAAI) Conference, Dr. Chen Qian of Tsinghua University’s Natural Language Processing Laboratory introduced the idea behind ChatDev: using role-playing to create a production line in which each agent communicates plans and discusses decisions with the others, forming a chain of communication.

Communication chain among bots in ChatDev

Currently, ChatDev is most proficient at programming tasks, so they decided to use it to write a Gomoku game as a demo.

The virtual bot team programming a Gomoku game

Throughout the process, different agents on the “production line” take on various roles: there are product managers, programmers, testers—a complete virtual product team, small but fully functional.
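Stripped to its skeleton, that production line is just a sequence of role prompts where each agent’s output becomes the next agent’s input. The sketch below shows only the chain-of-communication idea; the real ChatDev adds dual-agent role-playing chats, phase configuration, and review loops, and the roles and prompts here are illustrative.

```python
# Bare-bones "production line": each role is a prompt template, and
# each agent's output is handed to the next agent as its input.

def llm(prompt: str) -> str:
    return f"[output for: {prompt[:60]}...]"  # stand-in for a real model call

PIPELINE = [
    ("product manager", "Turn this request into a spec: {task}"),
    ("programmer",      "Implement this spec in Python: {task}"),
    ("tester",          "Review this code and list bugs: {task}"),
]

def run_pipeline(task: str) -> str:
    artifact = task
    for role, template in PIPELINE:
        artifact = llm(f"You are the {role}. " + template.format(task=artifact))
        print(f"{role}: {artifact}")
    return artifact

run_pipeline("a Gomoku (five-in-a-row) game")
```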

Coze also offers a multi-agent mode that follows a similar approach.

Interface of Coze’s multi-agent feature

In multi-agent mode, users write prompts to set up the roles, then draw lines to designate the order of work, directing different agents to jump to different steps. However, Coze’s transitions are unstable: the longer the conversation runs, the more erratic the jumps become, sometimes failing outright. This reflects how difficult it is to keep the agents’ handoffs aligned with user expectations.
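Conceptually, the lines users draw amount to a routing table: after each turn, the system decides whether the current agent is finished and, if so, follows the line to the next node. The sketch below is a generic illustration of that handoff logic, not Coze’s actual configuration format; the unreliable part in practice is exactly the hand-off decision, which a real system delegates to the model.

```python
# Generic "jump" wiring: a dict plays the role of the lines users draw
# between agents; a router follows it after each turn.

FLOW = {"greeter": "researcher", "researcher": "writer", "writer": None}

def agent_step(name: str, message: str) -> tuple[str, bool]:
    reply = f"[{name} handles: {message}]"   # stand-in for an LLM turn
    done_with_role = True                    # a real system asks the model;
    return reply, done_with_role             # this is where instability creeps in

def run(message: str, node: str = "greeter"):
    while node is not None:
        reply, handoff = agent_step(node, message)
        print(reply)
        node = FLOW[node] if handoff else node   # the "jump" along the line
        message = reply

run("Write a post about bot forums")
```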

Microsoft has also introduced a multi-agent conversation framework called AutoGen. It’s customizable, capable of dialogue, and can integrate large models with other tools.

Microsoft’s AutoGen multi-agent conversation framework
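A two-agent AutoGen setup looks roughly like this, following the quickstart pattern from Microsoft’s documentation (pip install pyautogen). Class names and options have shifted across versions, so treat this as a sketch of the pattern rather than a pinned API; the model name and API key are placeholders.

```python
# Two-agent AutoGen pattern: an assistant proposes answers (and code),
# while a user proxy drives the conversation and can execute the code.

import autogen

config_list = [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]  # placeholders

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",          # fully automated back-and-forth
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The proxy sends the task, the assistant replies (possibly with code),
# and the proxy can run that code and report the result back.
user_proxy.initiate_chat(
    assistant,
    message="Write a Python script that prints today's date.",
)
```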

Although the technology still has flaws, it clearly holds promise. Andrew Ng once mentioned in a speech that when agents work together, the synergy they create far exceeds that of a single agent.

Andrew Ng delivering a speech

Who wouldn’t look forward to the day when bots team up to work for us?

Source from ifanr 

Written by Serena

