The power to delete the thoughts of others can be abused.
Of course I believe the children are our future, just as I believe historical figures are our past.
But is what's good for children good for our future right now?
I'm driven to this sense of unease by a new feature that Microsoft is slipping into one of its most popular -- or at least one of its most used -- products: Teams.
Making a sudden appearance in the company's 365 roadmap, this new feature has a harsh title: "Microsoft Teams: Chat supervisors can delete messages."
I fear some may immediately look at this and think: "What? You mean, just because they feel like it?"
I fear the answer may, at least in theory, be yes.
Microsoft explains that the genesis of this idea came from, but of course, the children.
"This feature, designed with our Teams for Education users in mind, allows chat supervisors to delete inappropriate, off-topic, or other messages in a Teams chat," says the company.
Already I shudder.
One can quite imagine that teachers, burdened with so much over the last 18 months, would be only too grateful to have at least half of what kids say automatically erased. From their minds, as well as from Teams.
I couldn't help noticing, however, that this message-deletion joy is slated for general availability, so any organization will be able to adopt it -- release date this month.
This is where I foresee a certain pickle.
Some people -- not all of them libertarians -- believe that a record of a chat should be accurate and faithful.
The context of words matters, as does the intent. You shouldn't be able to go back and delete things that don't sound right. Unless you work for certain political figures, perhaps.
Who will decide what is inappropriate, off-topic or even "other"? Are we really speaking of the speech-police here? Will administrators now be acting like Facebook moderators, there to look out for specific words, rather than specific meanings?
Different administrators will surely be tempted to create their own versions of the new elimination rules.
You might say that this sort of thing already exists in the likes of Slack. Many, though, think of Slack as a glorified chat app rather than a repository of solid corporate content.
And what if some mischievous person persuades the administrator to make random, or even insidious, edits of a certain employee's words?
Done with perfectly malicious intentions, this might even adversely affect careers.
Perhaps, similar to the comments sections of many media, administrators will leave a note on deleted content that reads: "This message/sentence/sentiment/word was deleted because it was deemed inappropriate/rude/ignorant/immature/halfwitted/irrelevant/something else that we didn't like."
That'll make anyone reading the chat log even more curious about what was said that was so heinous. It might also encourage the intrepid to test what it takes to get a comment deleted.
Sanitizing content can sometimes result in sanitizing legend too, or even creating it.
"She really told that joke about the duck walking into a strip club? In a meeting? Wow."
There will, moreover, be mutterings along the lines of history only ever being written by the victors, so senior management will surely be held responsible.
Then again, it's likely that at least one or two administrators may have their own, independently twisted minds.
What might they end up perpetrating?
Perhaps it's worth starting a Teams chat with them soon, just to check in.