I’ve been experimenting with incorporating AI tools into my workflow for some time. They can significantly speed up my design work, but the quality of the output depends heavily on the quality of the prompt.
To make my prompts more effective, I started using the CO-STAR method. It’s a simple framework that helps structure prompts so the AI produces more useful responses.
CO-STAR stands for Context, Objective, Style, Tone, Audience, and Response format. Instead of asking broad questions, I structure prompts around these elements so the tool understands the situation, the goal, and the type of output I need.
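To make this concrete, here is a minimal Python sketch of how the six elements might be assembled into a single prompt. The function name and the example values are hypothetical, just to illustrate the structure; any labeling convention for the sections would work.

```python
# Hypothetical sketch: assembling a CO-STAR prompt from its six parts.
def build_costar_prompt(context, objective, style, tone, audience, response_format):
    sections = [
        ("Context", context),
        ("Objective", objective),
        ("Style", style),
        ("Tone", tone),
        ("Audience", audience),
        ("Response format", response_format),
    ]
    # Join each labeled section into one prompt string.
    return "\n\n".join(f"# {label}\n{value}" for label, value in sections)

prompt = build_costar_prompt(
    context="I am designing a branching scenario for new managers.",
    objective="Draft three possible employee replies to the manager's opening question.",
    style="Conversational workplace dialogue.",
    tone="Realistic, slightly defensive.",
    audience="First-time managers practicing feedback conversations.",
    response_format="A numbered list, one reply per line.",
)
print(prompt)
```

Spelling out each element this way forces me to decide up front what situation the AI is in, what I actually want back, and what shape the answer should take.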
The diagram below shows the structure I use when writing CO-STAR prompts.

I used this approach when designing this branching scenario for managers conducting difficult performance conversations. Writing believable dialogue paths can be one of the most time-consuming parts of scenario design. Using CO-STAR prompts allowed me to quickly generate and explore different conversation directions while prototyping the scenario structure before building the final interaction in Articulate Storyline.


Using a structured prompting method has improved my workflow in two ways: I get more useful AI outputs on the first attempt, and I can spend more time refining the learning design rather than going back and forth on prompts.
That said, AI outputs are rarely perfect on the first try. As with any LLM tool, the results still need to be reviewed, refined, and shaped to fit the learning goals.
