"Just" an LLM
Is ChatGPT really AI? Or is it just a chatbot?
Posted in: Technology
(Title based on a Threads post from Daniel Jalkut.)
What Is Apple Doing in AI? Summaries, Cloud and On-Device LLMs, OpenAI Deal - Bloomberg
But the company knows consumers will demand such a feature, and so it’s teaming up with OpenAI to add the startup’s technology to iOS 18, the next version of the iPhone’s software. The companies are preparing a major announcement of their partnership at WWDC, with Sam Altman-led OpenAI now racing to ensure it has the capacity to support the influx of users later this year.
Nah - I'm not buying it. For one, OpenAI and Microsoft are "true" partners; Microsoft's Azure compute + OpenAI's models + Microsoft's apps/OS[1] are getting deeply intertwined. An Apple + OpenAI partnership seems like a strategy to be permanently one step behind Microsoft.
But it seems inevitable that there's big Apple + AI news coming. Siri needs a significant upgrade. The new iPad Pro announcement made a big deal about having "AI compute" power[2]. An "AI features" announcement at WWDC 2024 seems like the safest bet in tech.
So, what might be coming?
If I had to make a bet, my money would be on a Google partnership: something like the Gemma model running locally on iPhones/iPads as 'Siri 2.0', plus access to Gemini for the kinds of tasks that need 'full fat' LLMs and more computing power.
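To put the 'running locally' half of that bet in concrete terms - and this is purely illustrative, nothing to do with anything Apple has announced - here's roughly what running a small open model like Gemma on your own hardware looks like today, using Hugging Face's transformers library (the model name and prompt are just examples):

```python
# Purely illustrative: running an open model locally with Hugging Face's
# transformers library. Assumes you've accepted the Gemma license on
# huggingface.co and the weights can be downloaded/cached locally.
from transformers import pipeline

# Loads the model onto your own machine - no cloud API involved.
generate = pipeline("text-generation", model="google/gemma-2b-it")

# Hypothetical prompt, just to show the shape of the API.
result = generate("What's the weather like on Mars?", max_new_tokens=50)
print(result[0]["generated_text"])
```

The trade-off is exactly the one in the bet above: a 2B-parameter model fits on consumer hardware, but the 'full fat' models still need someone else's data centre.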
[1] Also: GitHub Copilot ↩
[2] Yes, iPads/iPhones/Macs have had 'neural cores' for a few years - but the new iPad seems to be stepping this up significantly, with no news yet on what it's actually going to power. Worth noting: if you're developing AI/ML/LLM-type software on a Mac, you're using the GPU, not the NPU. So far, the NPU seems to be locked away for Apple's use (which includes Apple's APIs if you're building apps for the App Store - but not if you're running something like TensorFlow in Python). ↩
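As a minimal sketch of that point - using PyTorch rather than TensorFlow, and assuming an Apple Silicon Mac - the frameworks expose the GPU through Apple's Metal backend, but there's no equivalent device handle for the Neural Engine:

```python
# A minimal sketch (assuming an Apple Silicon Mac with PyTorch installed):
# frameworks like PyTorch reach the GPU via Apple's Metal ("mps") backend,
# but offer no device that targets the Neural Engine directly.
import torch

# Check whether the Metal (GPU) backend is available, falling back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Running on: {device}")

# This matrix multiply runs on the GPU (or CPU fallback). There is no
# torch.device("ane") or similar - the NPU is reachable only through
# Apple's own frameworks, such as Core ML.
x = torch.randn(1024, 1024, device=device)
y = x @ x
print(y.shape)
```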
A rough theory of why voice notes get such wildly different reactions from different people.
The Apple Vision Pro is now on sale. People are getting their hands on them, and sharing their opinions. People who haven't got their hands on them are sharing their opinions. There are a lot of opinions flying around.
First thing - sure, I'm interested in the headset, and the device actually getting into 'normal' people's hands (or onto their faces) is this week's news. But I'm not going to buy one, because it's ridiculously expensive - and if I had that sort of money to throw around, I probably wouldn't be driving a car that's approaching either its 18th birthday or its last trip to the scrapyard, and has done the equivalent mileage of five laps around the Earth.
But what I'm really interested in is the Vision platform: the bits of the software that will stay the same when the next headset device is launched - and, once there are a bunch of different ‘Vision’ devices, where they will fit in the spaces in people's lives.
The promise of the internet plus the World Wide Web was an open, free network of hyperlinked pages, filled with all of the world's knowledge. For years, the terms "internet" and "world wide web" were almost interchangeable.
30 years on... It has issues.
It's a lot easier to understand the IP issues in 'give me this song but in Taylor Swift's voice' than 'make me a song in the style of the top ten hits of the last decade.' If a human did that, they wouldn't necessarily have to pay anyone, so why would an LLM?
There's an interesting twist with the "Taylor Swift's voice" example; Scooter Braun owns all of Taylor Swift's recordings (at least, I think, all the ones released before any ChatGPT-era training datasets were compiled) - he bought the record company, so he owns the master recordings (and all the copies of the master recordings, and the rights relating to them) - but not the songs themselves. Taylor Swift still owns those - which is why she can make her "Taylor's Version" re-recordings (which Scooter Braun doesn't get a penny out of).
So there's a key difference here: a human would copy the songs (that is, they would be working off the versions of the songs in their heads - the idea of the songs), so Swift would get paid as the owner of the songs.
But the kind of generative AI we're talking about would be copying 100% from the recordings (i.e. the training data would be the sounds, digitised and converted into a stream of numbers) - which Swift doesn't own. The AI doesn't "see" the idea of the songs - it wouldn't "know" what the lyrics were, what key the songs were in, or what chords were being played on what instrument - any more than a Large Language Model "knows" what the words in its (tokenised) training dataset or output mean.
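To make that last point concrete, here's a rough sketch of what a language model actually "sees" - using OpenAI's open-source tiktoken tokeniser as one example (the specific encoding and text are just for illustration):

```python
# A rough illustration: an LLM's training data is a stream of integer
# token IDs, not words with meanings attached. Uses OpenAI's open-source
# tiktoken library; the encoding name and text are arbitrary examples.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "She still owns her songs"
tokens = enc.encode(text)
print(tokens)              # a list of opaque integers
print(enc.decode(tokens))  # round-trips back to the original text

# The model is trained to predict the next ID in streams like this; any
# "meaning" is an emergent statistical property, not something it's given.
# Audio models are analogous: the training data is digitised sound as
# numbers - not chords, keys, or lyrics.
```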
She still owns her songs, but she’s sold her voice.
Fifteen years ago, I wrote a blog post titled “Losing a Virtual Limb”, which was trying to articulate that funny feeling I was getting that buying my first iPhone was going to change everything.
Recently, I got that funny feeling again.
I’ve been using the same iPhone for six years - smashing my previous record for how long I’ve kept the same phone.
It’s finally time to upgrade.
Charger plug standards are a weird thing to get excited about, but I’m excited about the proliferation of USB-C.
Except for one thing…
Actually, the best programming language of the future is probably going to be English…
WWDC usually isn’t an event to look forward to - unless you’re the sort of person who cares about things like Xcode features - because it isn’t the venue where Apple talks about new iPhones. Maybe there will be clues about new iPhone features in some new APIs or something, but the focus is generally on developers.
This year is different…
This is the tech war of the moment; a race to be the first to develop an AI/Machine Learning/Deep Learning product that will be a commercial success. Google have a head start - Microsoft + OpenAI look like they could be set to catch up, and maybe even overtake Google. But if this is a race, then where is the finish line? What is the ultimate goal? Is it all about the $175 billion Search advertising market - or is it bigger than that?
Nine years ago (Jan 2014), I wrote a post about "the next big thing". I think it's fair to say that in a history of technological innovations and revolutions, there isn't much from the last decade or so that would warrant more than a footnote; the theme has been 'evolution, not revolution'.
Well, I think the Next Big Thing is - finally - here. And it isn't a thing consumers will go out and buy. It's an abstract, intangible thing: software, not hardware; service, not product.
For the first time in years, tech has got genuinely interesting again.
After a long time of trying to come up with a simple answer to the simple question of “what is television?”, I decided to go the long way around.
One reason the Metaverse is doomed: the idea that it will straddle all of the different computing platforms - too many conflicting business interests will make this impossible to execute.
One reason the Metaverse will succeed: when something does, sooner or later, straddle all of the computing platforms, it will deliver something incredibly useful.
A post from the archives of my dead website on why the most important things in tech aren’t the things that everyone gets excited about. (Originally posted in 2009.)
Two narratives, one story:
An AI developed by Google has achieved sentience - the ability to have its own thoughts and feelings - and a Google engineer working with it has been fired for making the story public.
A Google engineer thinks a 'chatbot' AI should be treated like a human, because he believes that it has developed the ability to have and express its own thoughts and feelings. After Google looked into and dismissed his claims, the engineer went public with them, and was then placed on paid administrative leave and subsequently fired.
The subject of the first story is artificial intelligence - with a juicy ethical human subplot about a whistleblower getting (unfairly?) punished.
The subject of the second story (which is a little more nuanced) is a human engineer going rogue, with an interesting subplot about ethics around artificial intelligence.
I think most of the reporting has been around the first version of the story - and I think that's because it fits into a broader ongoing narrative: the idea that 'our' machines are getting smarter, moving towards a point where they are so smart that humans can be replaced.
It's a narrative that stretches back for centuries - at least as far back as the Industrial Revolution.
If “moving seamlessly between virtual spaces” is a key feature of the metaverse, how might that actually work with virtual identities on a decentralised platform? (And why would Mark Zuckerberg, who holds a bigger centralised database of virtual identities than anyone, want that to happen?)
I've been keeping an eye on 'metaverse news' - lots of which I tend to dismiss as being a simple misuse of the buzzword, but something last week caught my eye - an announcement of a partnership between Epic Games and Lego to 'build a place for kids to play in the metaverse'.