My AI Agent Wrote My LinkedIn Posts. I Rejected All of Them.
I asked my personal AI assistant to draft five LinkedIn posts. They were polished, professional, and sounded nothing like me. Here's how I fixed it.
I gave FRED a simple assignment: write five LinkedIn posts for the week.
He came back with five polished, professional, well-structured pieces of content.
I rejected all of them.
The Problem With Polished
FRED’s first drafts were technically good. Complete sentences. Logical flow. Clear points. The kind of writing that looks right if you don’t read it out loud.
But here’s the thing — if you DID read them out loud, you’d sound like a keynote speaker at a conference you’d never attend. Distant. Measured. Corporate.
That’s not how I write. That’s not how I talk. That’s not how anyone who’s met me would describe a conversation with me.
The posts had too many paragraphs. Too much structure. Too many transitions that screamed “and in conclusion.” They were the LinkedIn equivalent of a suit and tie at a barbecue. Technically appropriate. Completely wrong for the occasion.
Draft One: The Corporate Version
The first draft read like a company blog post. You know the kind — some VP of Something wrote it (or more likely, had their marketing team write it), and it’s full of phrases like “leveraging AI capabilities” and “driving meaningful outcomes.”
FRED had absorbed too much LinkedIn. The platform’s default voice had contaminated his output. He was writing what LinkedIn posts typically sound like, not what MY LinkedIn posts sound like.
I told him: shorter sentences. Less formal. More personal. Write it like you’re texting a smart friend, not presenting to a board.
Draft Two: The Overcorrection
So he swung the other direction. Way too casual. Sentence fragments everywhere. Exclamation points where they didn’t belong. It read like an excited intern’s first social media post.
I told him: find the middle. Conversational doesn’t mean casual. Personal doesn’t mean unprofessional. There’s a specific register between “corporate memo” and “group chat” — that’s where I live.
Draft Three: Getting Warmer
By draft three, something shifted. The tone was closer. The sentence length varied in a more natural way. He’d stopped using words I would never use.
But the structure was still wrong. He was writing in paragraphs. Big, chunky paragraphs with topic sentences and supporting evidence. Like a college essay.
I write in lines. One thought. One sentence. Then the next thought. White space between ideas. Let them breathe.
I showed him examples. Posts of mine that had worked. “See the difference? See how each line stands on its own? That’s not random. That’s the format.”
Draft Four: Almost There
The fourth iteration was the closest. Short lines. Personal voice. Specific examples instead of vague platitudes.
But it still wasn’t right. There was something — I couldn’t even articulate it at first — that felt off. Like a cover band playing your favorite song. All the notes are correct but the feel is wrong.
It took me a few minutes to figure it out: FRED was mimicking the pattern without understanding the why. He’d made the lines short because I told him to, not because he understood which thoughts deserve their own line and which don’t.
That’s a subtle distinction. And it might be the hardest thing to teach an AI.
The Real Lesson
AI doesn’t nail your voice on try one. Or two. Or three.
But here’s what it does do: it learns.
Each correction I gave FRED applied forward. Not just to the next draft of the same post, but to the next post entirely. By the time we got to the fifth post, the first draft was noticeably better than the first draft of the first post.
Not perfect. But better.
And “better each time” is the whole game. That’s how you train any collaborator — human or AI. You give feedback. You’re specific about what works and what doesn’t. You keep iterating until the gap between what you want and what you get is small enough to work with.
Why Most People Give Up Too Early
I think the biggest mistake people make with AI-generated content is quitting after draft one.
Draft one is garbage. Always. With humans and with AI. The difference is that when a human writes draft one, they revise it themselves before showing anyone. When AI writes draft one, you see the raw, unedited mess.
So it looks worse than it is. You see the sausage being made and think the sausage is bad. But the sausage isn’t done yet.
The people who get real value from AI writing tools are the ones willing to iterate. To push back. To treat “not good enough” as a starting point, not an ending.
I’m a persistent SOB. Apparently that’s a prerequisite.
What You Can Do
If you’re using AI for content and the output doesn’t sound like you:
Don’t accept the default voice. AI writing tools default to a generic professional tone. That’s their training data talking. Push past it immediately.
Show, don’t tell. Give the AI examples of your writing that you like. “Write like this” is more effective than “be more casual.” Concrete beats abstract.
Iterate in the same session. Don’t start over. Each correction in the same conversation builds on the last. Starting fresh throws away all that context.
Name the specific problem. “This doesn’t sound like me” isn’t useful feedback. “This sentence is too long and I would never use the word ‘synergy’” is. Be surgical.
Track what sticks. After a few sessions, you’ll notice which corrections the AI retains and which it forgets. The ones it forgets? Write them down. Build a style guide. Give it to the AI at the start of every session.
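If you end up scripting this through an API instead of a chat window, the "style guide at the start of every session" step can be as simple as prepending a file to your first message. Here's a rough sketch of the idea; the function and variable names are mine, not from any real tool, so swap in whatever chat client you actually use:

```python
# Sketch: seed every new session with a standing style guide plus the
# corrections the AI keeps forgetting. Names here are illustrative.

def build_session_prompt(style_guide: str, corrections: list[str]) -> str:
    """Assemble an opening prompt so each session starts calibrated
    instead of starting from the generic default voice."""
    rules = "\n".join(f"- {c}" for c in corrections)
    return (
        "Write in my voice. Style guide:\n"
        f"{style_guide}\n"
        "Corrections you have forgotten before:\n"
        f"{rules}\n"
    )

style_guide = (
    "One thought per line. White space between ideas. "
    "No corporate phrases."
)
corrections = [
    "Never use the word 'synergy'",
    "Vary sentence length",
]

prompt = build_session_prompt(style_guide, corrections)
print(prompt)
```

Paste the result in as your first message (or your system prompt, if your tool has one) and the session starts from your voice instead of LinkedIn's.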
Be patient. Be persistent. Voice calibration takes time. But it's time well spent. Because once your AI collaborator understands your voice, every future draft starts closer to the finish line.
The first five posts weren’t publishable. But the process of rejecting them was the most productive writing exercise I’ve done in months.
Sometimes you have to say no a lot before the yes shows up.