What's up with Generative AI?

Over the past three months, I’ve attended dozens of Generative AI meetups, hackathons & conferences in the San Francisco Bay Area. I met with several founders, engineers, and early-stage investors interested in or actively working with Generative AI and large language models (LLMs).

Here’s a summary of my learnings about the current state of Generative AI:

  • While there is plenty of excitement about GenAI, the on-the-ground reality is quite different from the buzz on X (Twitter).
  • Despite their capabilities, LLMs aren’t accurate or steerable enough for mission-critical apps.
  • Many products being built on LLMs and diffusion models are still in the alpha/prototype stage.
  • Demand & willingness to pay for GenAI apps at enterprises are lukewarm. It’s more of a “nice-to-have” at this point.
  • Latency and response quality remain significant challenges, making GenAI apps feel slow & unreliable.
  • It’s unclear how the quality of GenAI model responses can/should be evaluated, especially in production.
  • At current GPT-3.5/4 pricing, the cost vs. ROI doesn’t make sense for many applications (see the back-of-envelope sketch after this list).
  • Adoption of dev tools (vector DBs, libraries, LLMOps) is relatively low among professional developers.
  • Every time OpenAI releases something new, several GenAI startups are negatively impacted.
  • Things are moving so quickly that it feels difficult to commit to a product direction for the next 1.5–2 years.
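
To put the cost point in perspective, here’s a rough back-of-envelope estimate in Python. The per-token prices are approximate mid-2023 OpenAI list prices, and the traffic profile is a purely hypothetical assumption, not a measurement:

```python
# Back-of-envelope LLM API cost estimate. All numbers below are
# illustrative assumptions, not measurements or official pricing.

# Approximate mid-2023 list prices, USD per 1K tokens (assumption).
PRICES = {
    "gpt-3.5-turbo": {"input": 0.0015, "output": 0.002},
    "gpt-4-8k":      {"input": 0.03,   "output": 0.06},
}

def monthly_cost(model: str, requests_per_day: int,
                 input_tokens: int, output_tokens: int) -> float:
    """Estimated monthly API cost for a fixed per-request token profile."""
    p = PRICES[model]
    per_request = (input_tokens / 1000) * p["input"] + \
                  (output_tokens / 1000) * p["output"]
    return per_request * requests_per_day * 30

# Hypothetical app: 10K requests/day, ~1.5K prompt + 500 completion tokens.
for model in PRICES:
    cost = monthly_cost(model, requests_per_day=10_000,
                        input_tokens=1_500, output_tokens=500)
    print(f"{model}: ~${cost:,.0f}/month")
```

At this hypothetical volume, GPT-3.5-turbo works out to roughly $1K/month while GPT-4 lands north of $20K/month, which is hard to justify for a feature that buyers still see as a “nice-to-have.”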

That said, there is no denying that LLMs and GenAI are transformative technologies that will fundamentally change how we build applications and interact with computers. We’re probably just coming down from the “peak of inflated expectations” of the GenAI hype cycle.

Most of the issues described above will get solved over the next few years (or sooner; it’s anyone’s guess!) as the technology, tools, and use cases mature. One exciting area is the development of open-source LLMs, especially smaller domain-specific models that can achieve GPT-4-level performance on specific tasks despite being 100x–1000x smaller, leading to lower costs and faster responses. Fine-tuning and step-by-step reasoning (e.g., chain-of-thought prompting, sketched below) are meaningful advances toward making models more reliable, accurate, and production-ready.
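As a concrete illustration of step-by-step reasoning, here’s a minimal chain-of-thought prompting sketch using the pre-1.0 `openai` Python client. The model name, prompt, and system instruction are illustrative assumptions; the point is the technique, not this exact code:

```python
import openai  # pre-1.0 client; assumes OPENAI_API_KEY is set in the env

# Minimal chain-of-thought sketch: asking the model to reason step by
# step before committing to an answer tends to improve reliability on
# multi-step problems. Prompt and model choice are illustrative.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # reduce randomness so outputs are easier to evaluate
    messages=[
        {"role": "system",
         "content": ("Reason through the problem step by step, then give "
                     "the final answer on the last line as 'Answer: ...'.")},
        {"role": "user",
         "content": ("A jacket costs $120 after a 25% discount. "
                     "What was the original price?")},
    ],
)
print(response["choices"][0]["message"]["content"])
```

A structured final line like “Answer: …” also makes responses easier to parse and spot-check, which helps a little with the production-evaluation problem mentioned above.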

It’s an exciting time to be working in AI, but it’s important to remain flexible as the space evolves.