Medium is the latest platform to jump on the ChatGPT craze. The blogging platform announced a change to its guidelines that now allows the publication and distribution of articles written with AI, as long as one condition is met: every text created with the help of generative artificial intelligence must be clearly identified as such.
According to Medium, the decision to accept texts written with AI was made after analyzing feedback from its community. The main concern was how the use of tools such as OpenAI's ChatGPT would be received, considering that many readers pay subscriptions to access articles from their favorite writers or publications. Transparency therefore became a crucial factor when defining how to address this possibility.
“We welcome the responsible use of AI assistive technology on Medium. To promote transparency and help meet reader expectations, we require that any story created with the assistance of AI is clearly labeled as such,” Medium's distribution standards now state.
Now, what will happen when a user of the platform publishes articles written totally or partially with the help of ChatGPT and does not disclose it? Scott Lamb, Medium's vice president of content, said that when they find articles they believe have been generated by artificial intelligence, they will not distribute them through the Medium network.
However, he did not explain how this control measure will be implemented, nor how the service will handle cases of plagiarism, something quite common with generative AI tools.
Medium follows in the footsteps of CNET and BuzzFeed and embraces the use of generative AI

Medium’s announcement comes after it became known that BuzzFeed will implement ChatGPT in its editorial content throughout 2023. The trend seems to grow stronger every day, although it is not without controversy; the most notable case, without a doubt, involves CNET.
That outlet came under fire after it was revealed that it had published dozens of articles created with AI without clearly disclosing it. In addition, an investigation showed that most of the content generated with this method included serious errors, even though the website claimed that its editorial team reviewed the articles before publishing them.
Medium's case is somewhat different because it is a platform that distributes third-party articles. However, it will face a very delicate process in which it must be able to enforce its transparency rules if it does not want the matter to get out of hand.
In any case, Medium has already clarified that it will continue to explore different ways of implementing generative AI, and that it expects its approach to the subject to change over time. It remains to be seen how the different publications that use the platform approach the issue, since each one has its own content guidelines. Some, like Fanfare, have already announced that they will not accept any text written with the help of ChatGPT or similar tools.