Joyce’s picks: musings and readings in AI/ML, April 22, 2024
📌 Musings and Readings
Llama 3 (Meta’s technical blog). It’s worth reading if you are technical.
I like how aggressive Meta has become in launching new open source models and embedding (no pun intended) the Llama 3 model to power its apps. I started testing Meta AI in Instagram this weekend, and for the current use cases we can imagine in Instagram, it is pretty good. Meta is good at meeting its billions of users where they are. I can also imagine the additional things that Meta AI agents could do for Instagram users. I like it. Other players should be worried. Apple has huge potential to do something similar via iMessage, and I really hope Apple launches something soon. Google has a massive opportunity to incorporate its models into the Android and device ecosystems.
My take on GenAI use cases in less than 3 minutes! I missed a few, but I think the ones I covered capture the major opportunities across industries.
Stanford published its annual AI Index report. A few things stood out:
~66% of foundational LLMs released are open source models. I think the jury is still out on the types of services and software that can and will be built on top of open source models. This is fantastic for the computing players sitting below these models, but I believe it is still too early to know which capabilities built on top of open source foundational models make good standalone venture investments.
Closed source models perform better on various benchmarks. On the surface this seems conclusive, but we have not solved the evaluation problem for LLMs. What is good, and what is good enough? Benchmarks do not answer these questions. For entrepreneurs solving problems that help answer "what is good, and what is good enough" in emerging LLM-based vertical use cases, this is the time to do it.
In 2023, Google was the leader in launching foundational models. There is a lot of potential here, and I am excited to see more from Google. I also hope that Google doesn't experiment too much with its Search. Search has worked really well, and let's not change what works. A big world exists for Google Search, and a world exists for summarization of search results (e.g., Perplexity). If and when Google decides to become a summarization engine (this is no longer hard to do) and overlays it on top of its classic Search, it will put pressure on smaller players. Clearly, Google has decided not to go this route (yet). Overall, I want to see Google get more aggressive about leading in AI.
The U.S. continues to dominate and dwarf other countries in private investment in AI. I am very happy to see this, and I am not surprised. The U.S. dominates in model development, infrastructure innovation, and startups. This means that entrepreneurs and builders participating in the U.S. market will learn a lot about what works, what doesn't, what we need to regulate, and what should be left to market competition. I am happy to see the continued dominance of U.S. innovation, which attracts U.S. and foreign capital as well as the brightest minds to work on very interesting problems. All of this has implications for other things, such as the future of capital and private markets, investable assets, and talent.
I will skip the deals this week (they will resume next week). This week also brings the UChicago event on climate capital. Climate solutions require curated datasets and the application of machine learning, and climate capital spans everything from private to philanthropic capital. I am excited to moderate this panel and learn from the panelists.
Next week I will share an interview I conducted with Navin Sharma, a veteran in enterprise data management.
Have a good week, and I wish my readers who celebrate Passover a joyous one.