
A controversy surrounding generative AI ethics has intensified after Grammarly withdrew a feature that allowed users to imitate the writing style of specific authors. The decision followed widespread criticism from writers and industry groups, highlighting growing tensions between AI innovation and intellectual property protections.
Grammarly removed a recently introduced AI feature that let users replicate the style of well-known authors after swift backlash from writers, publishers, and digital rights advocates. The tool generated text mimicking the tone and voice of recognizable authors, raising concerns that creative voices could be imitated without permission.
Critics argued the feature risked undermining authors' rights and misrepresenting original creators. Following the criticism, Grammarly confirmed it had withdrawn the feature and emphasized that the company aims to build AI tools that support rather than replace human creativity. The move reflects increasing scrutiny of how generative AI models reproduce artistic or literary styles.
The episode underscores a broader global debate about generative AI and intellectual property rights. As AI systems become capable of producing text, images, and music that resemble the work of specific creators, legal and ethical questions are emerging across the creative economy.
Technology companies developing AI writing tools, including OpenAI, Google, and Microsoft, have increasingly faced scrutiny over how their models are trained and how closely they can replicate human creative styles.
Authors, artists, and publishers have warned that AI systems could imitate distinctive creative voices without compensation or attribution. Several lawsuits and regulatory debates are already underway in major markets, including the United States and Europe. The Grammarly incident reflects the delicate balance technology firms must strike between innovation and protecting intellectual property in an increasingly AI-driven content ecosystem.
Experts in technology policy and copyright law say the backlash illustrates growing sensitivity around AI-generated content that imitates identifiable creators. Industry analysts note that while generative AI systems often learn from vast datasets of publicly available content, reproducing distinctive styles can raise legal concerns related to copyright, personality rights, and creative ownership.
Grammarly indicated that its goal is to enhance writing productivity rather than imitate specific individuals. The company emphasized that it continues to refine its AI systems to ensure they respect creative boundaries and user trust. Meanwhile, publishing groups and author organizations have urged technology companies to establish clearer safeguards preventing AI tools from directly mimicking living or recognizable writers. Experts say these debates are likely to shape future regulations governing generative AI development and deployment.
For technology companies, the controversy highlights the growing reputational and regulatory risks associated with generative AI features. Firms introducing AI-powered creative tools must increasingly evaluate how those tools interact with copyright law and creator rights. For investors and corporate leaders, the incident demonstrates how ethical considerations can quickly influence product strategy and public perception in the AI sector.
Governments and regulators are also closely monitoring how generative AI systems handle intellectual property. Policymakers may introduce new guidelines governing training data, style replication, and attribution requirements. Companies developing AI writing tools may need to implement stronger safeguards to prevent unauthorized imitation of identifiable creative voices.
Looking ahead, debates around AI-generated content and creative ownership are likely to intensify as generative models become more sophisticated. Technology companies will face increasing pressure to balance innovation with ethical safeguards and legal compliance. For executives and policymakers alike, the challenge will be establishing frameworks that encourage AI development while protecting the rights and livelihoods of human creators.
Source: BBC News
Date: March 12, 2026

