🤖 Tech giants and AI bets, ⛓️ Chain of Thoughts prompting
Welcome to the first edition of the odo newsletter! We are here to provide a free, weekly newsletter on key AI developments for product builders. We’ll cover news from the past week, AI product highlights, and any resources we think you’ll find useful.
News of the week 🗞️
🦙 Meta announced Llama 2, an open-source LLM free for research and commercial use
Last Tuesday, Meta announced Llama 2, the latest open-source large language model the company has developed. Interestingly, Meta is distributing the model through Microsoft, which has already invested billions of dollars in OpenAI. It looks like Microsoft is hedging its bet on OpenAI (and its closed approach) with Meta (and its open-source model).
Don’t put all your AI eggs in one basket…Product builders should take heed of Microsoft’s approach. Instead of only using one type of model or provider, consider testing and evaluating outcomes across multiple models on a regular basis. It’s unclear who the ultimate winner here will be (or whether there will be just one winner), so it’s best to hedge your bets.
🏛️ White House and seven major AI companies announced a voluntary commitment to safety and transparency during AI development
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI announced last Friday that they will commit to ensuring products are safe before introducing them to the public, building systems that put security first, and earning the public's trust while developing AI products. These commitments include allowing outside security testing of their products, sharing risk-management information with the government, and developing systems to identify AI-generated content.
Move fast and break things? Not so much this time…Tech companies have traditionally had a bad reputation for sidestepping regulations in the name of innovation. However, AI is a technological breakthrough with enormous societal and economic implications. It's encouraging to see these companies proactively establishing guardrails for themselves and others.
🍎 Apple has entered the game…with Apple GPT
Apple is reportedly working on a framework for creating large language models (dubbed "Ajax") and a chatbot service (dubbed "Apple GPT"). Apple is the only major tech giant that has not yet released its own LLM.
Hey Siri, can you get better this time?…Apple's track record for AI-powered consumer products hasn't been its best (ahem, Siri). Apple's strong commitment to data privacy will likely pose challenges during LLM development. But, as we all know, constraints breed creativity. We're excited to see how Apple innovates on a likely privacy-first LLM application.
AI product highlight ✋
Have you ever had 100 unread Slack messages and just wanted to curl up into a ball and cry? We found a product just for you! Spoke AI aims to understand what's happening across long threads and busy channels, and produce summaries for you. We've tested the beta product and have been pretty impressed with the results!
Disclaimer: We are not getting paid to highlight the product. We just think it’s cool and want to share!
For the AI nerds 🤓
Diving into Chain of Thought
Background
A recent study (arXiv:2307.10472) attempted to detect social bias using LLM prompting. While the authors haven't yet succeeded, the paper highlights some important techniques in both prompt engineering and robust evaluation of LLM performance. The most important technique we want to highlight is called "Chain of Thought" reasoning.
The core concept of Chain of Thought reasoning is to ask the LLM to output a series of steps it should follow to solve a problem before it attempts a solution. This often improves the output of LLMs for problems that benefit from reasoning (arXiv:2201.11903).
How do I prompt for Chain of Thought?
Asking the LLM to reason through a problem can be as simple as ending your prompt with “Let’s think step by step.” For example:
Would it be viable to produce a product to help people remember the names of people they meet? Let’s think step by step through what the product might look like and how the public might react to it.
Note that the example doesn't literally use only "Let's think step by step"; it expands on the phrase to have the model think through steps in the domain of the question.
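In code, this is just string manipulation on the prompt. Here's a minimal sketch using the OpenAI Python library (as of mid-2023); the model name, API key placeholder, and prompt wording are our own illustrative choices, not part of the study above:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

question = (
    "Would it be viable to produce a product to help people "
    "remember the names of people they meet?"
)

# Appending a step-by-step instruction nudges the model to lay out
# its reasoning before it settles on an answer (Chain of Thought).
cot_prompt = (
    question
    + " Let's think step by step through what the product might "
    "look like and how the public might react to it."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response["choices"][0]["message"]["content"])
```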
Why does this work?
There's still a huge amount we don't understand about why LLMs behave the way they do, but a little more detail on how they generate text makes it more intuitive why this prompt works.
You've probably heard LLMs are "just fancy autocomplete." In a practical sense, this is true. An LLM takes all the text before it and predicts the most probable next word, then repeats that until it's done. That means it doesn't differentiate between what it wrote and what you wrote. By ending the prompt with "Let's think step by step," you make it much more probable that the words that follow will be the very reasoning steps that inform its final output.
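To make "fancy autocomplete" concrete, here's a toy sketch of greedy next-token decoding, assuming GPT-2 via the Hugging Face transformers library (our choice; any causal language model would illustrate the same loop):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# The prompt is just the start of the token sequence.
input_ids = tokenizer("Let's think step by step.", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate 20 tokens, one at a time
        logits = model(input_ids).logits  # a score for every vocabulary token
        next_id = logits[0, -1].argmax()  # pick the most probable next token
        # The model's own output is appended and becomes part of the
        # context for the next step: it can't tell its words from yours.
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Every token that gets appended shifts the probabilities for everything after it, which is why seeding the context with "Let's think step by step" steers the continuation toward explicit reasoning.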
Before you go 💨
Have you ever tried to describe what product managers do to your parents? Our favorite is:
Like if Steve Jobs had no power and made way less interesting stuff.
We will take any chance to be described in relation to Steve Jobs. 💀