A good portion of my work as an Excel trainer and consultant is repurposing. I’ll take an idea I presented to an accounts receivable group and pivot it for a credit management group, or stitch two one-hour segments into a new two-hour class.
You probably do something similar in your own work. You finish a variance review or a campaign readout, and then you spend the rest of the day rewriting it four different ways. The CFO wants the tight version, the department head wants their line broken out, and the board wants the strategic frame. All of it has to live in different formats: an email, a deck, a Slack message, and so forth.
For a long time I kept hunting for the right solution to this, and it turns out an LLM was the right tool all along. These models keep getting more capable, with features like Claude Cowork and Skills (now in Copilot too, and the focus of this post). We’ll generate a variance commentary, spin it for different audiences and formats, and then scale it up with skills so the whole routine is easy to repeat.
For simplicity, I’m not adding many guardrails to check that the numbers are right. That matters, of course, but it’s hard to fully automate; a human reviewer is still your best check. If you want the basics on how to cross-check generative AI output with a deterministic workflow, check out this post:
You can follow along with the exercise files below. Make sure you upload them to your OneDrive, since Copilot needs them there. At the time of writing, Copilot Cowork requires you to be on the Frontier program.
Getting started
To get started with Copilot Cowork, head to your Microsoft 365 Copilot account and find the Cowork agent:

This works pretty similarly to writing a regular prompt in Copilot, except Cowork is a little more “agentic.” That means it can take a goal and work through it in steps, calling on files, tools, and its own intermediate output along the way rather than just answering a single prompt and stopping. You can hand it something bigger and let it do the legwork: open the files, run the analysis, draft the commentary, and come back with a result you can review.
We’ll start with the basics and build a simple variance analysis. An important part of getting good results is adding context to the prompt by pointing Cowork at the specific files it should use, in our case the two data workbooks. To do this, click the plus sign next to the prompt box and choose Attach cloud files, then navigate to the files you uploaded to OneDrive.

Your final prompt should look something like this. Go ahead and run it.
“Here’s our March variance data (`data/march-2026-variance.xlsx`) and headcount context (`data/headcount-by-dept.xlsx`). Can you write up commentary for me? Headline number, the material variances with causes, anything I should flag, asks for the next review. Working-notes style, not a polished memo.”

If you’ve used Copilot before, the main flow will feel familiar. The new piece is the menu on the right side of the screen. Right now you’ll see an “input folder” along with a few other fields for outputs and instructions. The naming is the giveaway. Cowork is nudging us to think about AI assistance the same way we think about an Excel function or a Power Query flow: you have inputs going in, a process running on them, and outputs coming out. That’s an interesting shift, because once you start framing AI work that way, it stops being one-off prompting and starts looking like a small workflow you can rerun and hand off.
Before we follow that thread, run a few more prompts like the ones below. We’re taking the same variance commentary and repurposing it for different audiences and formats.
“Now rewrite that for our CFO. She wants something tight and decision-focused.”
“Now do the same thing for our VP of Engineering. He only cares about his department and what’s coming next quarter.”
“Take that CFO version and turn it into an actual email I can send. Subject line, three structured bullets each with the dollar amount and the cause, two specific asks at the end, around 250 words, and cut anything that reads like internal working notes.”
Repurposing is the kind of work large language models are best at. You’re not asking the model to discover something new. You’re asking it to take known content and reformat it under known constraints. That’s a job description LLMs were built for.
The catch is that the quality of the output depends almost entirely on how well you specify those constraints. “Rewrite this for the CFO” is too thin. The model will guess at what a CFO wants and the result will read as generic. That’s why we add extra instructions about length, structure, tone, and what to leave out.
Writing those extra instructions every time gets old fast, and it’s hard to share with colleagues so they get the same result you do. Just like you might take a set of steps you repeat in Excel and turn them into your own function, we can do something similar in Copilot Cowork using skills.
Scaling up your prompting with skills
A skill is a small piece of structure, usually a Markdown file, that tells the model how to handle a recurring kind of task. You write it once, and the model consults it any time the task comes up. You can read more about these on Microsoft Learn.
What we need to do is stage our new skills in our Cowork folder. I’ve already written some for you in the skills folder of the download. Go to your OneDrive, open Documents > Cowork, and if a skills folder doesn’t exist there yet, create one. Then copy in each of the four skill folders as they are.
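Once everything is copied over, the layout should look roughly like this. The four skill folder names below are illustrative placeholders; use whatever names come with the download.

```
OneDrive
└── Documents
    └── Cowork
        └── skills
            ├── rewrite-for-cfo
            ├── rewrite-for-engineering
            ├── convert-to-email
            └── convert-to-deck
```

The important part is the location: Cowork looks for skills inside that skills folder, so each skill needs to live there as its own folder.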

Go ahead and open one. These are Markdown files. If you aren’t familiar with Markdown, do a little web searching or ask an AI to walk you through it. The short version: Markdown is a plain text format for writing structured documents using simple symbols, like # for headings, * for bullets, and ** for bold.
Because it’s plain text, it’s easy for humans and language models alike to read. There’s no hidden formatting and no proprietary file structure, just words with a few markers for hierarchy and emphasis. That’s why AI tools have settled on Markdown as the standard format for instructions and documentation.
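A few lines are enough to see the whole system at work:

```markdown
# A top-level heading
## A smaller heading

* A bullet point
* Another bullet with **bold text**

A regular paragraph needs no markup at all.
```

That’s essentially all the syntax the skill files in this post use.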
Markdown is becoming common enough that OneDrive will preview .md files natively in File Explorer. I’d also encourage you to download VS Code, which is a free editor from Microsoft and a comfortable place to work with Markdown files.
Let’s take the FP&A audience rewrite skill as an example. Open it up and you’ll see it’s just a written description of what that rewrite should look like. It defines who the audience is (a CFO or finance leader), specifies the tone (direct and decision-focused, no narrative fluff), lays out the structure to use (a headline number, the top drivers with dollar amounts, an implication, and a recommended action), and calls out what to leave out, such as working notes or hedging language.
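The copy in your download is the source of truth, but the shape is roughly this. The file name, headings, and wording below are illustrative, not a required schema:

```markdown
# Rewrite for CFO

Use this skill when asked to rewrite analysis for the CFO
or another finance leader.

## Tone
Direct and decision-focused. No narrative fluff.

## Structure
1. Headline number
2. Top drivers, each with a dollar amount
3. What it implies
4. Recommended action

## Leave out
- Working notes
- Hedging language
```

Notice there’s nothing clever here. It’s just the instructions you’d otherwise retype into the prompt, written down once.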

This is the same idea as an Excel function. When you use SUMIFS(), you don’t re-explain each time what summing with conditions means. You just call the function and pass it the inputs. A skill works the same way. Once it’s written, you don’t have to type out “make this tight, decision-focused, three drivers with dollar amounts, no fluff” every time you want a CFO version. You ask for a CFO version and the model picks up the skill.
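To make the analogy concrete, here’s a SUMIFS call with a hypothetical spend table; the column names and criteria are made up for illustration:

```
=SUMIFS(Spend[Amount], Spend[Dept], "Engineering", Spend[Month], "March")
```

All the logic of conditional summing is bundled behind the function name; you only supply the inputs. A skill plays the same role for a rewrite: the constraints are baked in, and you just name the task.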
OK, that’s the setup. Go back to Copilot Cowork, start a new session, and run a prompt like this. Notice we’re stacking quite a few asks in one go, and we are NOT spelling out HOW to repurpose the content for each audience. We’re also NOT explicitly calling the skills by name.
- Write March variance commentary from the data files.
- Rewrite that for the CFO.
- Now reframe the commentary for the VP of Engineering.
- Turn the CFO version into an email.
- Build a slide outline for the monthly business review.

Look over at the right side panel and you’ll see that all of the relevant skills were called. The model recognized the kind of task each step represented and pulled in the matching skill on its own. You didn’t have to keep re-explaining yourself, and a colleague running the same prompt with the same skills folder would get a result built on the same rules. That’s the function-like behavior I mentioned earlier, and it’s the reason skills are worth setting up even for one-person workflows.

We could get even more sophisticated here and have Cowork actually build the PowerPoint deck and draft the email in Outlook. But Rome wasn’t built in a day, and this is already an amazing start for just a few minutes of work.
What changes when you set this up
I hate to use the word, but yes, this is a game-changer. You can ask for an Engineering-flavored rewrite, then ask for that result to be sent as an email, and because the audience rule and the email rule live in two separate skills, you get both behaviors at once without writing a frankenprompt.
| Without skills | With skills |
|---|---|
| Re-explain the audience and format on every prompt | Skills hold the rules; you just say “rewrite this for the CFO” |
| Output drifts as you re-explain slightly differently | Output stays consistent because the spec is consistent |
| Each format is a separate, ad-hoc effort | Formats compose — rewrite for the audience, then convert the result to email |
| Hard to share the workflow with your team | The skill is the workflow — share the file, share the system |
If a chunk of your week is spent rewriting the same analysis several different ways, this is one of the highest-impact AI applications you can set up. The model isn’t going to do your analysis for you. That part is still your job. What it does well is the downstream rewriting, which has very predictable structure and is currently costing you a lot of time.
A few starting points:
- Pick one recurring writeup. Variance memo, weekly KPI report, campaign readout, customer health snapshot — anything you produce on a regular cadence and rewrite for at least two audiences.
- Write down the rules you actually follow when you rewrite for each audience. What do they want? What length? What gets cut?
- Put those rules in a skill, not a prompt. Even if it starts as a single Markdown file with four audience descriptions, that’s enough to get value immediately.
- Add format skills as you go. Once you have audience covered, the same pattern works for format conversions: to-email, to-deck, to-Slack-post. Each one is a separate, small skill.
In my experience, the ROI shows up within the first week: you run one repurposed deliverable, see the time saved, and immediately think of three more recurring artifacts you want to wrap.
Conclusion
If you take nothing else away from this post:
- Repurposing is a real and underrated chunk of analytical work, and structurally it’s the part of the job AI handles very well.
- Start with the prompts themselves and pay attention to the constraints you add. Constraints are what separate a generic-sounding output from one that’s usable.
- Skills are the right home for those constraints once you want to repeat and scale them. The rules stay stable, the output gets more consistent, and skills compose well together.
And remember, you control what gets handed off. If you’d rather write the variance commentary yourself first and pass that in for repurposing, you can. Skills are a tool you stay in charge of.
If you want to see how I put this kind of system into practice with my clients and learners, you can read more about how I work here:
