Over the past three months, I've attended dozens of Generative AI meetups, hackathons, and conferences in the San Francisco Bay Area. I met with founders, engineers, and early-stage investors who are interested in, or actively working with, Generative AI and large language models (LLMs).
Here’s a summary of my learnings about the current state of Generative AI:
That said, there is no denying that LLMs and GenAI are transformative technologies that will fundamentally change how we build applications and interact with computers. We’re probably just coming down from the “peak of inflated expectations” of the GenAI hype cycle.
Most of the issues described above will likely get solved over the next few years (or sooner; it's anyone's guess!) as the technology, tooling, and use cases mature. One exciting area is the development of open-source LLMs, especially smaller domain-specific models that can match GPT-4's performance on specific tasks despite being 100x-1000x smaller, which translates to lower costs and faster responses. Fine-tuning and step-by-step reasoning are meaningful advances toward making models more reliable, accurate, and production-ready.
It’s an exciting time to be working in AI, but it’s important to remain flexible as the space evolves.