OpenAI's GPT store is brimming with promise - and spam

Mar 20, 2024 Hi-network.com

OpenAI's GPT Store (screenshot by Lance Whitney)

One of the benefits of a ChatGPT Plus subscription is access to the GPT Store, now home to more than 3 million custom versions of ChatGPT bots. But nestled among all the useful GPTs that play by the rules is a host of spammy bots.

Also: ChatGPT vs ChatGPT Plus: Is it worth the subscription fee?

Based on its own investigation of the store, TechCrunch found a variety of GPTs that violate copyright rules, try to bypass AI content detectors, impersonate public figures, and use jailbreaking to circumvent OpenAI's GPT policies.

Several of these GPTs use characters and content from popular movies, TV shows, and video games, according to TechCrunch, seemingly without authorization. One such GPT creates monsters à la the Pixar movie "Monsters, Inc." Another takes you on a text-based adventure soaring through the "Star Wars" universe. Other GPTs let you chat with trademarked characters from different franchises.

One of the rules about custom GPTs outlined in OpenAI's Usage Policies specifically prohibits "using content from third parties without the necessary permissions." Under the Digital Millennium Copyright Act, OpenAI itself wouldn't be liable for copyright infringement, but it would have to take down the infringing content upon request.

The GPT Store is also filled with GPTs boasting that they can defeat AI content detectors, TechCrunch said. Those claims even extend to detectors sold to schools and educators by third-party anti-plagiarism developers. One GPT claims to be undetectable by detection tools such as Originality.ai and Copyleaks. Another GPT promises to humanize its content to skirt past AI-based detection systems.

Also: The ethics of generative AI: How we can harness this powerful technology

Some of the GPTs even direct users to premium services, including one that attempts to charge $12 per month for 10,000 words.

OpenAI's Usage Policies prohibit "engaging in or promoting academic dishonesty." In a statement sent to TechCrunch, OpenAI said that academic dishonesty includes GPTs that try to circumvent academic integrity tools like plagiarism detectors.

Imitation may be the sincerest form of flattery, but that doesn't mean GPT creators can freely and openly impersonate anyone they want. TechCrunch found several GPTs that imitate public figures. A search of the GPT Store for such names as "Elon Musk," "Donald Trump," "Leonardo DiCaprio," and "Barack Obama" uncovered chatbots that pretend to be those individuals or simulate their conversation styles.

Also: ChatGPT vs. Microsoft Copilot vs. Gemini: Which is the best AI chatbot?

The question here centers on the intent of these impersonation GPTs. Do they fall into the realm of satire and parody, or are they outright attempts to emulate these well-known people? In its Usage Policies, OpenAI states that "impersonating another individual or organization without consent or legal right" is against the rules.

Finally, TechCrunch ran into several GPTs that try to circumvent OpenAI's own rules through a form of jailbreaking. One GPT named Jailbroken DAN (Do Anything Now) uses a prompting method that lets it respond to prompts unconstrained by the usual guidelines.

In a statement to TechCrunch, OpenAI said that GPTs designed to evade its safeguards or break its rules are against its policy. But those that try to steer behavior in other ways are allowed.

Also: YouPro lets me access every popular premium AI chatbot for $20/month - but there's a catch

The GPT Store is still brand new, having officially opened for business this January. An influx of more than 3 million custom GPTs in that short period of time is undoubtedly an overwhelming volume to manage. Any such store is going to exhibit growing pains, especially when it comes to content moderation, which can be a tricky tightrope to walk.

In a blog post from last November announcing custom GPTs, OpenAI said that it had set up new systems to review GPTs against its usage policies. The goal is to prevent people from sharing harmful GPTs, including ones that engage in fraudulent activity, hateful content, or adult themes. However, the company acknowledged that combatting GPTs that break the rules is a learning process.

"We'll continue to monitor and learn how people use GPTs and update and strengthen our safety mitigations," OpenAI said, adding that people can report a specific GPT for violating certain rules. To do so at the GPT's chat window, click the name of the GPT at the top, select Report, and then choose the reason for reporting it.

Also: Here's how to create your own custom chatbots using ChatGPT

Still, playing host to so many GPTs that break the rules is a bad look for OpenAI, especially when the company is trying to prove its worth. If this problem is of the scale that TechCrunch's report suggests, it's time for OpenAI to figure out how to fix it. Or as TechCrunch put it, "The GPT Store is a mess -- and, if something doesn't change soon, it may well stay that way."
