Grammarly Scraps AI Tool Mimicking Famous Authors

Grammarly removed a recently introduced AI feature that enabled users to replicate the style of well-known authors after facing swift backlash from writers, publishers, and digital rights advocates.

March 13, 2026
A controversy surrounding generative AI ethics has intensified after Grammarly withdrew a feature that allowed users to imitate the writing style of specific authors. The decision followed widespread criticism from writers and industry groups, highlighting growing tensions between AI innovation and intellectual property protections.

The recently introduced tool allowed users to generate text that mimicked the tone and style of recognizable authors, raising concerns that the technology could be used to replicate creative voices without permission.

Critics argued the feature risked undermining author rights and misrepresenting original creators. Following the criticism, Grammarly confirmed it had withdrawn the feature and emphasized that the company aims to build AI tools that support rather than replace human creativity. The move reflects increasing scrutiny of how generative AI models replicate artistic and literary styles.

The episode underscores a broader global debate about generative AI and intellectual property rights. As AI systems become capable of producing text, images, and music that resemble the work of specific creators, legal and ethical questions are emerging across the creative economy.

Technology companies developing AI writing tools, including OpenAI, Google, and Microsoft, have increasingly faced scrutiny over how their models are trained and how closely they can replicate human creative styles.

Authors, artists, and publishers have warned that AI systems could replicate distinctive creative voices without compensation or attribution. Several lawsuits and regulatory debates are already underway in major markets including the United States and Europe. The Grammarly incident reflects the delicate balance technology firms must strike between innovation and protecting intellectual property in an increasingly AI-driven content ecosystem.

Experts in technology policy and copyright law say the backlash illustrates growing sensitivity around AI-generated content that imitates identifiable creators. Industry analysts note that while generative AI systems often learn from vast datasets of publicly available content, reproducing distinctive styles can raise legal concerns related to copyright, personality rights, and creative ownership.

Grammarly indicated that its goal is to enhance writing productivity rather than imitate specific individuals. The company emphasized that it continues to refine its AI systems to ensure they respect creative boundaries and user trust. Meanwhile, publishing groups and author organizations have urged technology companies to establish clearer safeguards preventing AI tools from directly mimicking living or recognizable writers. Experts say these debates are likely to shape future regulations governing generative AI development and deployment.

For technology companies, the controversy highlights the growing reputational and regulatory risks associated with generative AI features. Firms introducing AI-powered creative tools must increasingly evaluate how those tools interact with copyright law and creator rights. For investors and corporate leaders, the incident demonstrates how ethical considerations can quickly influence product strategy and public perception in the AI sector.

Governments and regulators are also closely monitoring how generative AI systems handle intellectual property. Policymakers may introduce new guidelines governing training data, style replication, and attribution requirements. Companies developing AI writing tools may need to implement stronger safeguards to prevent unauthorized imitation of identifiable creative voices.

Looking ahead, debates around AI-generated content and creative ownership are likely to intensify as generative models become more sophisticated. Technology companies will face increasing pressure to balance innovation with ethical safeguards and legal compliance. For executives and policymakers alike, the challenge will be establishing frameworks that encourage AI development while protecting the rights and livelihoods of human creators.

Source: BBC News
Date: March 12, 2026


