The Ethics of Draftsmith

The ethics of AI is one of the most important questions of our time. If you’re looking for great insight into that question, you’re in the wrong place! At Intelligent Editing, we’re not AI ethicists. We don’t spend our days researching the rights and wrongs of AI. We haven’t studied how the law needs to adapt to changes in technology.

Our mission is simple:

“We believe people make the best editing decisions and that they always will. We build technology to help people edit faster and better.”

Our mission doesn’t relate to AI. Our flagship product, PerfectIt, doesn’t even use any AI technologies. If you want the benefits of automation in document editing without any consideration of AI, PerfectIt is the product for you. You can build in preferences from your house style. You can even use it to help you enforce the rules of The Chicago Manual of Style. There’s no AI involved at any stage. As Rachel Lapidow describes, editors "hesitant about [AI] are not technophobes or Luddites", and they shouldn't be treated as if they are.

Our mission is also not in opposition to AI. PerfectIt was introduced in 2009. It’s not surprising that technology has changed since then. However, the transformation in what AI can do now is beyond anything most people had considered possible a few years ago. So to help people make faster and better editing decisions, we decided it was the right time to deliver a product that allows users to take advantage of these changes. That’s why our second product, Draftsmith, is an AI product.

Will we put editors out of business?

The audience for PerfectIt is editors. If editors go out of business, we go out of business too. It’s as simple as that. We’re never going to do anything that we think puts editors out of business!

We previewed Draftsmith with an approach that put editors first, announcing its launch to the editing community at both the CIEP and ACES conferences in September 2023. We developed Draftsmith with the view that it would be most useful earlier in the writing/editing process (it’s for refining writing). It’s clear from the first six months that sometimes it helps writers and sometimes it helps editors. We’re not going to take any steps that put either writers or editors out of business.

We think the biggest threat that AI poses to writers and editors is not products like ours, but naivete among people who think AI can do the work for them. We’ve already heard of small businesses losing customers to AI, and we ran a survey that reinforces those fears. Our survey of 500 language professionals from around the world shows a great deal of fear: 26% of respondents were concerned that they might lose their job as a result of AI, and 44% were concerned about other people losing their jobs. Only 13% of respondents had no concerns at all.


Our view is that people who feel they can have a piece written or edited by AI are misunderstanding the value that professional writers and editors bring. Good writing and editing are all about the human touch. Our software will help writers and editors to make decisions more efficiently. It will help them stay competitive against the growing number of AI alternatives. But what we do will never replace professionals.

Isn’t everything to do with AI simply wrong?

In the same survey, some 75% of respondents said they were worried about abuse of copyright in the training of AI systems. That makes sense. While we’re not legal experts, it certainly sounds like copyright law may have been broken in several cases. Writers, editors, publishing companies and news organizations all lose out if copyright law is broken. As a result, we’ve heard lots of writers and editors say they will not use or engage with AI at all.

We think we’d be letting down our community of writers and editors if we took such a hard stance ourselves. The reality is that the issues with AI are complex. Ben Evans describes it this way:

“Each new wave of technology or creativity leads to new kinds of arguments. We invented performance rights for composers and we decided that photography - ‘mechanical reproduction’ - could be protected as art, and in the 20th century we had to decide what to think about everything from recorded music to VHS to sampling. Generative AI poses some of those questions in new ways.”

There’s room for nuance. What’s the difference between a person reading data off the internet to come up with something new and a machine reading the same data to do the same thing? All over the world, terms of use are being adjusted to make that difference clear. However, intellectual property law has not yet caught up. As a society, we don’t yet know what counts as fair use, and we haven’t yet decided how to reward the owners of the original work.

While it is complex, at Draftsmith, our view is that some of the ways that modern LLMs have been built are simply wrong. In particular:

  • Scraping copyrighted work without payment is wrong.

  • Building models that re-create copyrighted work without attribution is wrong.

We stand against both of those practices. However, to produce something that helps our community of writers and editors today, there’s more that we have to keep in mind. We need to consider what our users need to remain competitive with AI and what our users feel they can trust.

Choosing the most trustworthy AI model

Draftsmith focuses on paraphrasing. It doesn’t make content up from scratch. Rather, it uses the author’s own text to generate alternative wording. So there isn’t much need to worry about re-creating copyrighted content without attribution.

However, writers and editors who want to take advantage of AI have two primary concerns that we had to address: data security and the training of models.

We found that the best AI model for addressing those concerns is Microsoft’s Azure OpenAI Service. By building on that service, we have created a program where (a rough sketch of the flow appears below the list):

  • The only data that leaves your computer is the text you choose to send (one sentence at a time).

  • Nothing is ever used to train AI models.

  • No person can ever see your data.

  • Any text you send is quickly erased from Draftsmith’s servers. None of it is permanently stored anywhere.
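To make the one-sentence-at-a-time design concrete, here’s a minimal sketch (in Python) of how a client can call Microsoft’s Azure OpenAI Service to paraphrase a single sentence. It’s an illustration of the approach, not Draftsmith’s actual code: the endpoint, deployment name, and prompt are placeholders.

```python
# Minimal sketch: paraphrasing one sentence at a time via Azure OpenAI.
# Illustration only; the endpoint, deployment name, and prompt are placeholders,
# not Draftsmith's actual configuration.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="YOUR-AZURE-OPENAI-KEY",                           # placeholder
    api_version="2024-02-01",
    azure_endpoint="https://your-resource.openai.azure.com",   # placeholder
)

def paraphrase_sentence(sentence: str) -> str:
    """Send exactly one sentence and return one suggested rewording."""
    response = client.chat.completions.create(
        model="your-gpt35-deployment",  # placeholder deployment name
        messages=[
            {"role": "system",
             "content": "Rewrite the user's sentence more clearly, keeping its meaning."},
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(paraphrase_sentence("The report was written by the committee in a hurried manner."))
```

The point of the sketch is the scope of what’s sent: only the sentence the user chooses leaves the computer, and nothing else from the document goes with it.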

Microsoft’s Azure OpenAI Service uses GPT-3.5. We understand the paradox in that. The service that offers the best AI and the most trustworthy data model is the same one that’s being sued for breach of copyright in how it built its AI model in the first place. But that’s the reality of this moment! It is the best and most secure service today despite the alleged copyright transgressions that created it. At the same time, we’re also exploring other models (and new ones are appearing all the time).

Are you giving money to something that’s wrong?

Inevitably, choosing Microsoft’s Azure OpenAI Service does reward the company that built the AI in the first place. However, we think it’s more important that writers and editors have secure services for their work today. Moreover, OpenAI is being sued for the use of copyrighted work in model training, and when legal proceedings are over, we think there’s a high chance that the creators of that work will be rewarded. We don’t know how long that legal process will take. While it’s going on, our priority is supporting writers and editors with the best that AI can offer.
