Hugo’s blog
Categories: All (9) · Causal inference (1) · GenAI (8) · LLMs (7) · Metaflow (1)
42 Lessons from a Year of Building with AI Systems
GenAI, LLMs
Findings from a three-hour livestream and two podcast episodes on lessons learned building real-world applications on top of LLMs.
Jul 1, 2024
Hugo Bowne-Anderson
Fine-Tuning GPT-2 for Spam Classification: A Live Coding Session with Sebastian Raschka
GenAI, LLMs
Discover how to fine-tune a pre-trained GPT-2 model for spam classification in this step-by-step tutorial. Learn data preprocessing techniques, model architecture modification, and the training process using PyTorch. Understand key concepts like freezing layers, cross-entropy loss, and evaluation metrics. See how fine-tuning can significantly improve accuracy from 50% to 95%. Explore ideas for further optimization and adapting the approach to other text classification tasks like sentiment analysis and topic classification.
Jun 19, 2024
Hugo Bowne-Anderson and Sebastian Raschka
Developing and Training LLMs From Scratch
GenAI, LLMs
Learn the full lifecycle of building large language models (LLMs) from the ground up. Explore model architecture design, pre-training, fine-tuning, RLHF, and deployment techniques. Discover the skills and hardware needed to work with LLMs, and how to choose between prompt engineering, RAG, and fine-tuning for your use case. Stay up-to-date with cutting-edge research like LoRA, Mixture of Experts, and Direct Preference Optimization. Understand the implications of LLMs for community-driven platforms like Stack Overflow.
Jun 18, 2024
Hugo Bowne-Anderson and Sebastian Raschka
10 Brief Arguments for Local LLMs and AI
GenAI, LLMs
I recently hosted a session with Simon Willison for the “Mastering LLMs” conference, discussing local LLM usage with Simon’s CLI utility. Reflecting on why I prefer local LLMs, I’d highlight data privacy, performance, cost efficiency, customization, offline capabilities, learning opportunities, open-source support, scalability, ethical considerations, and autonomy as the key benefits. Recommended tools for getting started include Ollama, Simon’s LLM CLI utility, LlamaFile, LM Studio, and Oobabooga’s text-generation-webui.
Jun 17, 2024
Hugo Bowne-Anderson
Getting Started with Generative AI for Everyone
GenAI, LLMs
With the advent of Generative AI, more people than ever (technical and non-technical) can build interesting, fun, and productivity-increasing AI apps and workflows. In this post, we introduce the GenAI mindset of combining atomic units to build AI-powered apps and workflows.
May 28, 2024
Hugo Bowne-Anderson and Jonathan Whitaker
Boost Your Productivity with ChatGPT in 2024: Simple Steps to Get Started
GenAI, LLMs
Have a 60-page PDF you don’t have time to read and want summarized? A meeting you need action items from? Or an online video you need transcribed and summarized? Would you then like a PDF, text file, or spreadsheet of the results? You can do all of this with ChatGPT.
May 28, 2024
Hugo Bowne-Anderson
ChatGPT, Author of The Quixote
GenAI, LLMs
In the era of generative AI, copyright won’t be enough. In fact, it’s the wrong place to look.
Mar 24, 2024
Hugo Bowne-Anderson
Lights, GenAI, Action – Building Systems with Generative Video
GenAI, Metaflow
We created a workflow to generate hundreds of videos with Stable Video Diffusion in one command.
Dec 10, 2023
What is Causal Inference?
Causal inference
An Introduction for Data Scientists and Machine Learning Engineers.
Jul 28, 2022