Gemini vs. ChatGPT: Creating Product Manager PowerPoint Slidedecks

I ran a small experiment to see which AI tool would create the best PowerPoint slidedecks.

The presentation’s topic was: “How to write effective prompts when using LLMs,” specifically tailored for an audience of Product Managers (PMs). The goal was to compare the capabilities of different AI tools in structuring, focusing, and generating content for a targeted professional audience, assessing their usefulness for PMs in content creation and strategy.

YouTube – Experiment on Best AI Tools for PowerPoint Slidedecks

Experiment Execution

The Prompt and Goal

The core instruction given to each AI tool was consistent: “Create a presentation for product managers showing how to write effective Prompts when using LLMs.”

  • Audience: Product Managers.
  • Topic: Prompt Engineering for LLMs.
  • Deliverable: Presentation structure/slides (or content that could be easily converted into one).

Key Findings by AI Tool

| AI Tool | Output Quality & Focus | Value for Product Managers |
| --- | --- | --- |
| Tool A | Highly technical, focused on parameter tuning and abstract concepts. | Low; required significant editing to translate technical jargon into business value. |
| Tool B | Excellent structure (e.g., Title, Agenda, Problem/Solution, Next Steps), but content was generic. | Moderate; provided a solid framework but lacked deep, PM-specific examples. |
| Tool C | Strongest alignment with the PM audience, including slides on “Prompting for Market Analysis” and “User Story Generation.” | High; immediately actionable content directly related to PM workflows. |
| Tool D | Concise, list-based output (e.g., Role-Task-Format structure). Highly efficient but lacked detail. | Moderate; useful for quick reference guides but not a comprehensive presentation. |

Analysis of LLM Performance

The experiment revealed a critical lesson: Prompt Engineering is not just about syntax; it is about context.

  • Audience Understanding: Tools that performed poorly (e.g., Tool A) delivered technically accurate but contextually irrelevant information. Tools that performed well (e.g., Tool C) intuitively mapped the topic of LLM prompts to specific, high-value PM tasks, such as competitive analysis and feature definition.
  • Structure and Utility: PMs require outputs that are ready for immediate use, often in communication formats like presentations. Tools that provided an explicit, logical presentation flow (Tool B, C) were judged as more useful than those that provided a wall of text or a simple list (Tool D).
  • The “Product” Approach: The most successful AI output treated the presentation itself as a product, focusing on the user (PM), the problem (inefficient prompting), and the solution (structured prompting methods), making the content directly relevant to product development methodology.

Key Takeaways for Product Managers

  1. Define the Role: When prompting an LLM for content, explicitly define the persona or role the AI should adopt. For example, “Act as a Senior Product Manager…” or “You are a UX writer for a B2B SaaS platform…”
  2. Specify the Output Format: Do not just ask for “information.” Ask for a “Five-slide PowerPoint structure” or a “Prioritized list in a Markdown table.” This improves the structure and reduces post-generation editing time.
  3. Include Business Context: Frame your prompt within a business objective. Instead of “Write a prompt about features,” try, “Write a prompt to analyze Q3 user feedback data to recommend the top three features that will maximize retention for our high-value customer segment (SMEs).”
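The three takeaways above can be sketched as a simple prompt-builder. This is a minimal illustration, not any tool's actual API; the function name, parameters, and example strings are all hypothetical.

```python
def build_prompt(role: str, task: str, output_format: str, context: str = "") -> str:
    """Assemble a structured LLM prompt from the three takeaways:
    role, business context, and explicit output format."""
    parts = [
        f"Act as {role}.",                    # 1. Define the role
        f"Task: {task}",                      # 3. Frame the task as a business objective
        f"Output format: {output_format}",    # 2. Specify the output format
    ]
    if context:
        # Slot additional business context in before the format instruction.
        parts.insert(2, f"Business context: {context}")
    return "\n".join(parts)

prompt = build_prompt(
    role="a Senior Product Manager at a B2B SaaS company",
    task="analyze Q3 user feedback and recommend the top three features to maximize retention",
    output_format="a five-slide PowerPoint structure with speaker notes",
    context="our high-value customer segment is SMEs",
)
print(prompt)
```

Keeping the pieces separate like this makes it easy to reuse the same role and format across many tasks, which is exactly the kind of repeatable structure the better-performing tools rewarded.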

Recommendations

A counter-intuitive trend I observed is that the simplest AI tools, sometimes those with fewer customization options, often delivered more coherent, well-structured output for common business requests like presentation creation.

By contrast, the highly configurable tools required more sophisticated initial prompting to steer them away from generic or overly technical responses.

This suggests that, for many common use cases, the immediate utility of a well-structured output outweighs fine-grained content control, at least within the scope of this experiment.

Have you experimented with AI to create slidedecks? Let us know what the results were like.