"Just" an LLM
Is ChatGPT really AI? Or is it just a chatbot?
Tagged with: artificial intelligence
(Title based on a Threads post from Daniel Jalkut.)
What Is Apple Doing in AI? Summaries, Cloud and On-Device LLMs, OpenAI Deal - Bloomberg
But the company knows consumers will demand such a feature, and so it’s teaming up with OpenAI to add the startup’s technology to iOS 18, the next version of the iPhone’s software. The companies are preparing a major announcement of their partnership at WWDC, with Sam Altman-led OpenAI now racing to ensure it has the capacity to support the influx of users later this year.
Nah - I'm not buying it. For one, OpenAI and Microsoft are "true" partners; Microsoft's Azure compute + OpenAI's models + Microsoft's apps/OS¹ are getting deeply intertwined. An Apple + OpenAI partnership seems like a strategy to be permanently one step behind Microsoft.
But it seems inevitable that big Apple + AI news is coming. Siri needs a significant upgrade. The new iPad Pro announcement made a big deal about having "AI compute" power². "AI features" announcements at WWDC 2024 seem like the safest bet in tech.
So, what might be coming?
If I had to make a bet, my money would be on a Google partnership: something like the Gemma model running locally on iPhones/iPads etc. as 'Siri 2.0', with access to Gemini for the kinds of tasks that need 'full fat' LLMs and more computing power.
1. Also - GitHub Copilot ↩
2. Yes, iPads/iPhones/Macs have had 'neural cores' for a few years - but the new iPad seems to be stepping this up significantly, with no news yet on what it's actually going to power. Worth noting - if you're developing AI/ML/LLM-type software on a Mac, you're using the GPU, not the NPU. So far, the NPU seems to be locked away for Apple's use (which includes Apple's APIs if you're building apps for the App Store - but not if you're running something like TensorFlow in Python). ↩
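You can see the footnote's point from inside TensorFlow itself. A minimal sketch, assuming TensorFlow 2.x with Apple's tensorflow-metal plugin installed: the Metal plugin exposes the GPU as a device you can target, but no Neural Engine/NPU device ever appears in the list.

```python
# Minimal sketch: list the compute devices TensorFlow can actually see on a Mac.
# Assumes TensorFlow 2.x with Apple's tensorflow-metal plugin installed.
import tensorflow as tf

for device in tf.config.list_physical_devices():
    print(device)

# On an Apple Silicon Mac this prints something like (format approximate):
#   PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')
#   PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
# There's no NPU/Neural Engine entry - from Python frameworks like this,
# the Neural Engine simply isn't addressable.
```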
Two narratives, one story:
1. An AI built by Google has developed sentience - the ability to have its own thoughts and feelings - and a Google engineer working with it has been fired for making the story public.
2. A Google engineer thinks a 'chatbot' AI should be treated like a human, because he believes it has developed the ability to have and express its own thoughts and feelings. After Google looked into and dismissed his claims, the engineer went public with them, and was then placed on paid administrative leave and subsequently fired.
The subject of the first story is artificial intelligence - with a juicy ethical human subplot about a whistleblower getting (unfairly?) punished.
The subject of the second story (which is a little more nuanced) is a human engineer going rogue, with an interesting subplot about ethics around artificial intelligence.
I think most of the reporting has been around the first version of the story - and I think that's because it fits into a broader ongoing narrative: the idea that 'our' machines are getting smarter, moving towards a point where they are so smart that humans can be replaced.
It's a narrative that stretches back for centuries - at least as far back as the Industrial Revolution.