Lessons from a Year of Building with AI Systems

Findings from a three-hour livestream and two podcast episodes on lessons learned from building real-world applications on top of LLMs.
Categories: GenAI, LLMs

Author: Hugo Bowne-Anderson

Published: July 1, 2024

I recently did a three-hour livestream for Vanishing Gradients with Eugene Yan (Amazon), Bryan Bischof (Hex), Charles Frye (Modal), Hamel Husain (Parlance Labs), and Shreya Shankar (UC Berkeley).

Over the past year, these five guests have been building real-world applications on top of LLMs, and along the way they have identified crucial, often-neglected lessons for anyone developing AI products.

They recently wrote an O’Reilly report (also published here) based on these learnings. In this conversation, they shared advice and lessons, ranging from the tactical to the operational and strategic, for anyone who wants to build products informed by LLMs.

We’ve just released this conversation as two podcast episodes (also on Spotify and other platforms):

You can also watch the livestream here on YouTube:

Even if you’re not specifically working on or with LLMs, there’s a serious amount of general data science, machine learning, and AI wisdom packed into these three hours.

Let us know what you get out of it on Twitter (@hugobowne and @vanishingdata) or LinkedIn. You can also register for future livestreams on our lu.ma calendar and subscribe to our YouTube channel.

Here are some clips that may be of interest!

Pro tips for building with LLMs

No structured approach? You’re not really doing AI.

The AI Engineer Data Literacy Divide

Build durable systems, don’t just chase models

What We Covered

Check it out and let us know what you get out of it on Twitter (@hugobowne and @vanishingdata) or LinkedIn. You can also register for future livestreams on our lu.ma calendar and subscribe to our YouTube channel.