https://www.youtube.com/watch?v=_PxkYZ_4z50 Patrick Ellis

This video features Patrick, CTO and co-founder of a startup, sharing insights from his talk at Microsoft on achieving 10x productivity gains with AI coding tools. He emphasizes that while the tools are now capable, the key is knowing how to set them up and how to orchestrate the right frameworks.

Core AI Workflows (Outside Engineering)

Patrick outlines several core AI workflows his team uses, extending beyond just engineering:

  • Business Strategy: Leverages tools like Granola, Business Plan, Deep Research, and NotebookLM.
  • Making Plans Actionable: Uses ChatGPT with frameworks such as OKRs, SWOT, and BRDs.
  • Market/Product Research: Employs Deep Research and MCPs (Model Context Protocol servers).
  • Internal Communication: Relies on Granola, Bolt.new, and ChatGPT.
  • Sales: Utilizes an AI SDR (sales development representative).
  • Marketing: Integrates ChatGPT with templates, Midjourney, Flux, and Cursor.
  • Operations Automation: Uses n8n, Zapier, Claude Code, and GitHub Actions.
  • General Productivity: Benefits from Notion, Superwhisper, and other tools.
  • Skill Acquisition: Uses Deep Research to quickly learn and apply new skills, such as writing YouTube scripts.

He highlights that their non-engineer CEO and CPO have been able to iterate on designs and product features under the same constraints as engineers, thanks to tools like Bolt. This saves significant communication time and lets them build prototypes and internal tooling more competently. Patrick stresses that a major productivity bottleneck, even with advanced AI coding tools, often comes down to communication. The goal is to eliminate meetings, communications, and documents that are no longer necessary, because AI models can perform these tasks when given the right context.

Core Product/Engineering AI Workflows

Patrick then shifts to engineering-specific workflows:

  • Vibe Coding vs. SDE Agents: Distinguishes the two approaches and discusses when each is effective.
  • Codebase Onboarding: Uses Claude Code and GitHub for agent Q&A.
  • Prototyping: Streamlines the path from idea to user testing, generating screenshots and markdown descriptions via Bolt and Claude Code, then handing off to an SDE agent.
  • SDE Copilots: Mentions Claude Code, Cursor, GitHub Copilot, and Windsurf.
  • SDE Agents: Utilizes tools like Codex, Jules, Claude Code SDK, Foundry, Devin, and Agent Swarms.
  • Code Review: Employs GitHub Actions and Opus 4.
  • MCPs: Uses Playwright, GitHub, Firecrawl/Context7, Netlify, Zapier, Terraform, etc. He explains MCPs as a standard (led by Anthropic, now adopted by OpenAI, Google, and others) that provides AI models with context on how to use tools.
  • New DevOps: Relies on CLAUDE.md, .cursorrules, internal MCP servers, and prompt management.
  • PKM / Context Management: Leverages Notion and Markdown for personal knowledge management.
  • Design: Utilizes Deep Research, prompts, style guides, and Playwright.
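As one concrete illustration of the MCP setup mentioned above: Claude Code can read MCP server definitions from a project-level .mcp.json file. This is a sketch only; the server names and packages below are common examples, not Patrick's actual configuration, and should be checked against current docs:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```

With something like this in place, an agent can drive a browser via Playwright or query GitHub issues and PRs without custom glue code, which is what Patrick means by giving models "context on how to use tools."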
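The code-review workflow above (GitHub Actions + Opus 4) can be sketched as a workflow that runs an AI review on every pull request. This is a minimal sketch assuming Anthropic's claude-code-action; the action name, version, model id, and inputs are assumptions to verify against current documentation:

```yaml
name: ai-code-review
on:
  pull_request:

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumed action and inputs; verify against Anthropic's docs.
      - uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: claude-opus-4-20250514
```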

The “80/20” Rule for AI Adaptation

Patrick provides three key recommendations for organizations looking to leverage AI:

  1. Adapt AI within your workflows: Embrace the learning curve by trying different models (ChatGPT and others) to understand their capabilities, and get hands-on experience with an agentic coding tool (e.g., OpenAI’s Codex, Anthropic’s Claude Code, Cursor Agent, OpenHands) to understand hands-off workflows.
  2. Spend 30 minutes “vibe coding”: Create a personal app with Bolt.new to get a sense of what these models are capable of. He specifically recommends using Claude Code with Opus 4.
  3. Update your “edu stack”: Pick 3 AI-first engineering podcasts/YouTube channels to follow. Read and watch content from frontier labs (e.g., OpenAI/Anthropic’s YouTube channels, Anthropic’s docs, especially the engineering section).

Rethinking Product & SDLC

Patrick concludes by offering a high-level perspective on how AI impacts product development and the Software Development Life Cycle (SDLC):

  1. Empower each person to build more: This includes increasing their capacity in areas like the tech stack, design, SEO, product management, and project management.
  2. Identify the largest bottlenecks: These are likely in communication; think through GenAI-native solutions. Use Bolt.new to let PMs/designers quickly prototype ideas and test technical feasibility. Get these proofs of concept (PoCs) in front of customers, partners, and stakeholders early to gather product, design, and engineering insights. Record meetings and leverage reasoning models to help write BRDs/PRDs.
  3. Build the “orchestration layer”: Develop DevOps for AI tools. Build the systems, data pipelines, and workflows that ensure your team can easily provide all the context, prompts, and tools (MCPs) that AI models need to perform.
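The “orchestration layer” idea above can be sketched as a small dispatcher: package the context each agent needs (task, relevant files, allowed tools/MCP servers) and fan tasks out. Everything here is illustrative; the names and the stubbed run_agent callable are assumptions, and a real system would invoke an agent CLI or SDK instead:

```python
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    """One unit of work handed to a coding agent."""
    prompt: str
    context_files: list = field(default_factory=list)  # paths the agent should read
    tools: list = field(default_factory=list)          # MCP servers the agent may use

def build_context(task: AgentTask) -> str:
    """Assemble the full context block an agent receives for this task."""
    parts = [f"Task: {task.prompt}"]
    if task.context_files:
        parts.append("Relevant files: " + ", ".join(task.context_files))
    if task.tools:
        parts.append("Available tools: " + ", ".join(task.tools))
    return "\n".join(parts)

def dispatch(tasks, run_agent):
    """Run each task through an agent callable and collect the results."""
    return [run_agent(build_context(t)) for t in tasks]
```

Usage: `dispatch([AgentTask("Fix flaky login test", ["tests/test_login.py"], ["playwright"])], run_agent)`, where run_agent might shell out to a headless agent invocation. The point is the one Patrick makes: once context, prompts, and tools are packaged systematically, agents can run with little supervision.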

He emphasizes that the future of software development involves engineers becoming more like orchestrators, managing a “sea of agents” that can perform tasks autonomously once provided with the right context and tools. This reduces friction and drastically increases productivity.


Making PPT:

Claude Code + Marp or Slidev