No-code AI development using Opal



This video provides a comprehensive overview of new and trending AI tools from Google, OpenAI (ChatGPT), and Adobe, demonstrating their capabilities through practical use cases. Here’s a detailed summary of the tools and their demonstrations:

1. Google Opal (Experiment)

  • Purpose: An experimental tool from Google Labs that allows users to describe, create, and share AI mini-apps using natural language. It chains together prompts, models, and tools.
  • Core Functionality: Users provide an “Input,” which is processed by “Generate” steps using Google models (e.g., Gemini, Imagen) or other external tools, resulting in an “Output.” This essentially builds AI workflows into user-friendly apps.
  • Demo 1: YouTube Thumbnail Generator
      Prompt: “Take a YouTube video URL, generate four thumbnail ideas for it. The thumbnail should be clickable and use YouTube best practices to get high click-through rates. Add text, but a maximum of two short words, relevant to the video, in bold modern font.”
      Workflow Generated: Opal automatically created a flow with one input and three generate/render steps:
        1. YouTube Video URL (Input): Accepts the video link.
        2. Generate Thumbnail Image Prompts: Uses Gemini 2.5 Flash to analyze the video content and write detailed image descriptions.
        3. Generate Thumbnail Images: Uses Gemini 2.5 Flash Image Generation to create the images from those prompts.
        4. Render Thumbnails as HTML: Structures and displays the generated images as HTML.
      Result: The app generated four distinct, clickable thumbnail concepts with bold text relevant to the video content, demonstrating its ability to create effective visual outputs from a simple text prompt.
      Customization: Mike showed how to swap the underlying AI models (e.g., from Gemini 2.5 Flash to Imagen 4 for image generation) and how to change the output format (e.g., saving to Google Docs, Sheets, or Slides).
  • Demo 2: YouTube to Blog Post
      Prompt: “Take a YouTube URL and turn it into a blog post. Generate two relevant images for that blog post and insert them somewhere appropriate.”
      Workflow Generated:
        1. YouTube URL (Input): Accepts the video link.
        2. Generate Blog Post From YouTube: Uses Gemini 2.5 Pro to extract the transcript, key content, and main ideas.
        3. Generate Image Prompt: Creates image prompts based on the blog post content.
        4. Generate Blog Post Images: Uses Imagen 4 to generate the images.
        5. Generate Blog Post Webpage: Combines the text and images into an HTML blog post.
      Result: The app generated a complete blog post with two relevant images embedded. While not fully publish-ready without refinement, it showcased Opal’s ability to chain multiple language and image models for a more complex content-creation task.
  • Demo 3: Trend-Based Video & Social Media Post
      Prompt: “Check the latest Google Trends around AI and no-code. Generate a short video and social media post (140 characters max) to help me, an AI influencer, hop on this trend.”
      Workflow Generated: A more intricate flow with inputs for “Target Trends” and “Influencer Name,” followed by steps to “Research Trends” (which performed web searches), “Generate Video Instruction,” “Generate Video,” “Generate Social Media Post,” and “Combine Outputs to HTML.”
      Result: The app generated an 8-second video and a social media post title, highlighting Opal’s ability to integrate web search into complex multi-step workflows and produce mixed-media content.
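The Input → Generate → Output chaining that all three demos rely on can be sketched in a few lines of Python. This is a minimal illustration only: `call_model` is a hypothetical stand-in for the Gemini/Imagen API calls Opal makes, not a real SDK function.

```python
# Minimal sketch of Opal's Input -> Generate -> Output chaining, assuming a
# hypothetical call_model() stand-in for the real Gemini/Imagen API calls.
def call_model(model: str, prompt: str) -> str:
    # Placeholder: a real app would send `prompt` to the model's API here.
    return f"[{model} output for: {prompt}]"

def thumbnail_workflow(video_url: str) -> str:
    # Generate step 1: analyze the video and write a detailed image prompt.
    image_prompt = call_model("gemini-2.5-flash",
                              f"Describe a clickable thumbnail for {video_url}")
    # Generate step 2: feed that intermediate artifact into an image model.
    return call_model("imagen-4", image_prompt)

print(thumbnail_workflow("https://youtube.com/watch?v=example"))
```

The point of the sketch is that each step's output becomes the next step's input, which is why swapping the model behind a step (as Mike does with Imagen 4) leaves the rest of the flow untouched.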

2. ChatGPT Agent (OpenAI)

  • Purpose: A new ChatGPT feature (available to Plus users) that acts as an “AI employee.” It can spin up a virtual desktop, browse the web, read websites, click elements, take actions, and even run code.
  • Activation: Type /agent in the chat interface.
  • Demo 1: YouTube Channel & Partner Research
      Prompt: “Find out all you can about the Creator Magic YouTube channel and community, and then generate me a list of potential partners that I could collaborate with that would be of interest to my audience.”
      Process: The agent “set up a desktop,” then autonomously performed web searches, navigated to YouTube, LinkedIn, and other relevant sites, and read channel descriptions and community information.
      Result: After about 7 minutes of work, it produced a detailed report in Markdown format outlining the channel’s mission, audience characteristics, and a list of 9 potential partners (e.g., Make.com, n8n, Cursor, Replit, Runway), with explanations of their relevance and potential collaborations.
  • Demo 2: Travel Planning (Flight Price Optimization)
      Prompt: “I’m planning a two-week trip to Dubai for four people in December 2025. I’d like to be home for Christmas. The cost is important. Return flights should be less than £1000 for all of us. Happy to do stopovers for 1 or 2 nights each way if it reduces price. Wizz Air, for example, flies as far as Turkey/Cyprus for low cost.”
      Comparison: A manual search on British Airways for direct return flights for four to Dubai in December 2025 came to £1,936.28, while the agent found a stopover itinerary for approximately $1,500, saving around $1,100. This demonstrated its ability to perform complex comparisons and find cost-optimized routings.
  • Demo 3: AI Community Meetup Planning
      Agent’s plan (restating the task): “I’ll research popular cafes with great reviews, accessible by public transport, that can accommodate both small and larger groups in each city on your travel itinerary—Chiang Mai, Singapore, Jakarta (considering tie-ins with your Radiodays Asia commitments), and Wroclaw. I’ll optimize the dates and locations to maximize convenience and attendance. I’m starting on this now.”
      Process: The agent conducted extensive web research, executing terminal commands to find optimal locations, café reviews, transport accessibility, and specific details for each city (Chiang Mai, Singapore, Jakarta, Wroclaw).
      Result: It generated a comprehensive “AI Community Meetup Planning” report with suggested cafes, reasons for their suitability, and meetup options for each location, demonstrating advanced research and planning capabilities.
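The travel brief in Demo 2 amounts to a filter over candidate itineraries: total price under £1,000 for all four passengers, with at most two stopover nights each way. A sketch of that constraint check, using made-up sample itineraries for illustration, might look like:

```python
# Sketch of the budget filter implied by the Demo 2 brief: keep itineraries
# under £1000 total with at most 2 stopover nights. Sample data is invented.
from dataclasses import dataclass

@dataclass
class Itinerary:
    carrier: str
    total_price_gbp: float   # return fare for all four passengers
    stopover_nights: int     # nights at the longest stopover, each way

def within_brief(it: Itinerary, budget: float = 1000.0, max_nights: int = 2) -> bool:
    return it.total_price_gbp < budget and it.stopover_nights <= max_nights

candidates = [
    Itinerary("BA direct", 1936.28, 0),
    Itinerary("Wizz Air via Larnaca", 920.00, 1),
]
shortlist = [it.carrier for it in candidates if within_brief(it)]
print(shortlist)  # → ['Wizz Air via Larnaca']
```

The agent's real work is generating the candidate list from live web searches; the final selection step, as here, is just constraint satisfaction.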

3. Adobe Firefly

  • New Capabilities: Firefly has recently added new video capabilities, integrated industry-leading external AI models, and introduced a sound effects generation feature.
  • Video Generation:
      Integrated Models: Firefly now lets users choose between Adobe’s native Firefly Video model and Google’s Veo 3 and Veo 2 models.
      Credits: Adobe’s native models are generally unlimited, but partner models (like Veo) incur credit costs (e.g., 800 credits for an 8-second 720p video without audio, 1,200 with audio).
      Demo: Generated a video with Veo 3 for the prompt “A magical wizard waving his wand high in the sky and singing ‘creator magic’ first on his own and then in harmony with other spirits.” The video included AI-generated music.
  • Image Generation:
      Integrated Models: Firefly’s image generation platform supports Adobe’s native models (Firefly Image 4 Ultra, Firefly Image 4, Firefly Image 3) alongside partner models such as Imagen 4 and Imagen 3 (Google), GPT Image (OpenAI), and Flux 1.1 Pro/Ultra/Raw (Black Forest Labs).
      Credits: As with video, external image models incur credit costs (10–60 credits per image generation).
      Demo: Generated images for “A magical tabby cat made out of stained glass with the word ‘CREATOR MAGIC’ on a window behind it” using Flux 1.1 Pro, Imagen 4, and GPT Image, showcasing each model’s distinct artistic style.
  • Generate Sound Effects (Beta):
      Functionality: Generates sound effects from a text prompt, or from a recorded voice performance used as a reference for timing and energy.
      Demo: For a video of a roaring lion, Mike typed “Lion growls” and recorded his own growls; Firefly generated multiple variations of lion growls matching the timing and intonation of his voice. The generated audio can then be dragged onto a timeline and downloaded.
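The partner-model credit figures quoted above (800/1,200 credits for Veo video, 10–60 credits per partner-model image) lend themselves to a quick budget estimate. The function names and dictionary layout below are illustrative only, not Adobe's API, and the credit figures may change.

```python
# Rough credit-budget estimator using the figures quoted in the video:
# Veo video costs 800 credits (8s, 720p, no audio) or 1200 (with audio);
# partner image models cost 10-60 credits per generation.
VEO_VIDEO_CREDITS = {False: 800, True: 1200}  # keyed by with_audio

def video_cost(clips: int, with_audio: bool) -> int:
    # Total credits for `clips` Veo generations.
    return VEO_VIDEO_CREDITS[with_audio] * clips

def image_cost(images: int, credits_per_image: int) -> int:
    # Partner image models were quoted at 10-60 credits each.
    if not 10 <= credits_per_image <= 60:
        raise ValueError("credits_per_image outside the quoted 10-60 range")
    return images * credits_per_image

# One Veo clip with audio plus four images at the top 60-credit rate:
total = video_cost(1, with_audio=True) + image_cost(4, 60)
print(total)  # → 1440
```

A quick estimate like this makes clear why the video flags the native/partner split: native Firefly generations are effectively free under the plan, while a single Veo-with-audio clip alone consumes 1,200 credits.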

Overall Conclusion: The video highlights the rapid pace of AI advancement, emphasizing the growing capabilities of Google Opal for creating custom mini-apps from natural language, ChatGPT Agent for autonomous web research and task execution, and Adobe Firefly for aggregating diverse generative AI models across image, video, and now sound. The speaker also promotes his courses on building AI applications and “AI employees.”