AI fundamentals and privacy



https://www.youtube.com/watch?v=XffC4K3mL5c

In this video, the speaker, Harper, introduces herself as an AI expert with ten years of experience in AI and machine learning. She holds two degrees from Stanford in Computer Science, both focused on AI. She previously spent four years at Meta (formerly Facebook) as a founding engineer building machine learning systems, and was later acquired by NVIDIA. She now teaches AI and started her YouTube channel a year ago, focusing on AI and machine learning. In this Q&A series she plans to make more casual, longer-form videos answering questions from her audience, and she is excited to begin the series in her new studio: her apartment.

Harper then addresses the first question: "What happens to your data when you use AI tools?" She explains that it depends on the specific tool. If you use hosted tools like ChatGPT, Google Gemini, or Anthropic's Claude, the company hosting the model stores your data. Some companies, such as Groq, host open-source models on their own servers, and because those models are open source, users can also download and run them on their own machines. However, not all companies are transparent about their data usage, so she emphasizes understanding data privacy when using AI tools, especially those that are not explicit about their data policies. She contrasts this with companies that explicitly state they will use your data to improve their models but will not store it, which she considers a more privacy-conscious approach.

Finally, she touches on the concept of "mixture of experts" models: models composed of several smaller, specialized models. The smaller experts are trained on different topics or data and then combined so the overall model can handle a wide range of tasks, allowing for more efficient and specialized AI capabilities. As an example of this architecture, she cites Meta's Llama 4 Maverick (400B) model, which has 128 experts.
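The mixture-of-experts idea Harper describes can be sketched in a few lines: a small gating function scores every expert for a given input, and only the top-scoring experts run and contribute to the output. This is a minimal illustrative sketch of top-k routing, not Meta's actual implementation; the tiny linear "experts" and all variable names here are made up for the example.

```python
import numpy as np

def top_k_gate(x, gate_weights, k=2):
    """Score each expert for input x and keep only the top-k.

    Returns the chosen expert indices and their softmax-normalized weights.
    """
    logits = x @ gate_weights                    # one score per expert
    top = np.argsort(logits)[-k:]                # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())  # numerically stable softmax
    return top, w / w.sum()

def moe_forward(x, experts, gate_weights, k=2):
    """Route x to its top-k experts and mix their outputs by gate weight."""
    idx, weights = top_k_gate(x, gate_weights, k)
    return sum(w * experts[i](x) for i, w in zip(idx, weights))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
# Each "expert" is just a small linear map here; in a real model each
# would be a full feed-forward sub-network.
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: x @ M for M in expert_mats]
gate_weights = rng.normal(size=(d, n_experts))

x = rng.normal(size=d)
y = moe_forward(x, experts, gate_weights, k=2)
print(y.shape)
```

Only 2 of the 4 experts actually run for any one input, which is the source of the efficiency: total capacity grows with the number of experts, while per-input compute stays roughly constant.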

The speaker, Harper, also shared some personal insights and experiences:

  • Her background: She has a decade of experience in AI and machine learning, with degrees from Stanford in Computer Science focused on AI. She worked at Meta for four years as a founding engineer and was later acquired by NVIDIA.
  • Her current work: She now teaches AI and has recently moved into a new studio in her apartment to create YouTube content.
  • Data privacy concerns with AI: She stressed the importance of understanding how AI companies handle user data, especially noting that not all companies are transparent about their data policies. She contrasted companies that explicitly state they don’t store data with those that do, suggesting caution with the latter.
  • “Mixture of Experts” models: She explained that these models combine multiple specialized smaller models to handle a wider range of tasks, making them more efficient and capable. She cited Meta’s Llama 4 Maverick (400B) model with 128 experts as an example.
  • Model efficiency: She pointed out that AI models with more parameters and specialized experts can perform better across a wider range of tasks. However, she also mentioned that running these large models can be computationally expensive and slow on personal devices. Tools like Ollama are emerging to help users run these models more easily on their own laptops.
  • She also offered a classroom analogy for this collective approach: like a group poem where each person writes one word or one sentence and then passes it to the next person, many small specialized contributors together produce the final result efficiently.

Harper is looking forward to engaging with her audience in this new Q&A format, encouraging viewers to ask questions about AI.