Audio Un-mixing and compression
Notes on the weird “ghost” sounds you get when you isolate instruments from a mixed (and compressed) audio track.
Tagged with: AI
In the late 1990s, right back at the start of my squiggly career, I was a 'Technical Author' - writing tests for mobile phone software. (Not writing software tests - literally, instructions for the people like me who would have to run the tests, e.g. "Press Menu > 1 > 1: you should be in a 'Compose New SMS' screen".)
I liked the idea of writing - in particular, the idea of writing something that would help people make the most of consumer technology (phones - this was pre-smartphone - PCs, software etc.). User manuals at the time were typically a joke - written in technical jargon that you could only understand if you already understood how the things worked.[1] So my plan at the time was to get into that side of "technical authoring": making complicated things simple.
After a few meetings and chats with various people in the field, the first realisation I had was that the reason the manuals were so bad was that the people writing the "user" documentation were spending >90% of their time writing the technical documentation for engineers; their job was literally the opposite of communicating complex things in simple ways for non-technical people.
The other thing I realised was that the future of this kind of 'documentation' wasn't going to be printed on a piece of paper in the box that the technology came in; it was going to be on the website, where it could be constantly updated, revised etc.
Well - that wasn't entirely incorrect. In the last few weeks, I've needed to find manuals for a few things; we've moved into a new house, and I've needed to understand things like an extractor fan that had stopped spinning, a water pump that controls the heating, and some flat-pack furniture from the old house I needed to reassemble. For all three, I found what I was looking for online - and for all three, it was in the form of a PDF of the printed piece of paper that presumably originally came in the box.
But still - I think it's true as a more general trend. Or at least, it has been.
"Documentation" might not have quite made the leap from the static paper-based things to a truly dynamic, searchable, interactive version - but the vast majority of the time, the web will still get you the answers to your questions. Maybe thats a Reddit thread. Maybe its an obscure electricians' forum where someone has asked for help for exactly the same extractor fan problem that I've had - and someone else has provided it.
But the thing that occurred to me this morning, as I was playing with a local LLM and asking it for help with some technical details around how to configure it, was that there is an opportunity for these models to be "self-documenting" that seems to be being missed. Meta's Llama model seemed to struggle with some questions about configuring itself (sending me off on a weird path of writing Python scripts and editing .zshrc configuration files) - before I did a Google search and realised I could do what I wanted with two lines of code in the same window that I was 'talking' to the Llama model in.
A fairly small LLM, trained on the model's own documentation, should surely be able to get you to an accurate answer much faster and more easily than the current 'best option' of Google/Reddit/Stack Overflow searches - which can just as easily lead you to outdated or obsolete advice as to the "right" answer.
Sure - LLMs hallucinate; but only when they are trying to provide an "answer" that they don't have enough information to provide and are forced into a 'best available information' situation - which a well-trained model with a single use case should not have a problem with.
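To make that concrete, here's a minimal sketch of the idea: a small local model answering questions only from its own documentation. It assumes a model served via Ollama (with the ollama Python client) and a folder of plain-text docs; the folder path, model name and the crude keyword 'retrieval' are all illustrative, not a description of any real product.

```python
# Minimal sketch: answer configuration questions from the model's own docs.
# Assumes `pip install ollama`, a local Ollama server, and a folder of
# plain-text documentation files. All names here are hypothetical.
from pathlib import Path

import ollama

DOCS_DIR = Path("./llama-docs")  # hypothetical folder of documentation files


def top_passages(question: str, k: int = 3) -> list[str]:
    """Crude retrieval: score each doc file by keyword overlap with the question."""
    words = set(question.lower().split())
    scored = []
    for doc in DOCS_DIR.glob("*.txt"):
        text = doc.read_text(encoding="utf-8")
        score = sum(text.lower().count(w) for w in words)
        scored.append((score, text[:2000]))  # keep the stuffed context short
    return [text for score, text in sorted(scored, reverse=True)[:k] if score > 0]


def ask(question: str) -> str:
    context = "\n---\n".join(top_passages(question))
    reply = ollama.chat(
        model="llama3",  # a smallish local model; the name is an assumption
        messages=[
            {
                "role": "system",
                "content": "Answer only from the documentation below. "
                           "If it isn't covered, say so.\n\n" + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return reply["message"]["content"]


print(ask("How do I change the context window size?"))
```

The retrieval mechanics don't matter much; the point is that the model is only allowed to answer from its own, current documentation - so "I don't know" beats a confident guess.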
[1] Honestly - I think this is still true, for the most part. For example, the manual for a robot hoover we recently got tells you to push a button that is *not labelled on the actual robot* - only in the manual itself, in text so small I had to get my daughter to read it for me. (OK - my eyesight isn't as good as it used to be, but this was literally text on a diagram about 1mm high.) I'm sure it made perfect sense in the version on the designer's 5K screen - but the actual version that the user had to rely on was almost useless. ↩
In the growing buzz around generative AI, a new concept in research methodologies has arisen: "synthetic respondents". Instead of asking real people questions, a Large Language Model creates 'synthetic respondents' that you can ask as many questions as you like. And they will give you answers. And they will probably sound like real people. They will never get bored. They will never try to disguise their "true" thoughts and feelings (as David Ogilvy once said, “People don’t think what they feel, don’t say what they think, and don’t do what they say.”) You can get answers from thousands of them, very quickly and at very little cost.
(Also - they never leave behind a bad smell, and won't eat all of your biscuits.)
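For what it's worth, the mechanics are almost embarrassingly simple. Here's a minimal sketch - a persona in the system prompt, a survey question in the user prompt - assuming a local model via the ollama Python client; the model name and the persona details are made up purely for illustration.

```python
# Minimal sketch of a 'synthetic respondent': a persona in the system prompt,
# a survey question in the user prompt. Assumes a local Ollama model; the
# persona and model name are invented for illustration.
import ollama

persona = (
    "You are a 42-year-old parent of two in Manchester who does a weekly online "
    "shop and is sceptical of loyalty schemes. Answer survey questions in character."
)

question = "What would make you switch supermarkets for your main weekly shop?"

reply = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)
print(reply["message"]["content"])
```

Run that a few thousand times with a few thousand personas and you have a 'panel' - which is exactly why the real question is whether the answers bear any relation to what actual people would say.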
But again - so obvious as to be barely worth mentioning - they aren't real people. They are synthetic - "made up." Just like the 'actors', pretending to be the sort of people we actually want to talk to.
They will do it faster. They will do it cheaper. Will they do it better - or at least, 'good enough'? Well... that's the real question.
(Title based on a Threads post from Daniel Jalkut.)
What Is Apple Doing in AI? Summaries, Cloud and On-Device LLMs, OpenAI Deal - Bloomberg
But the company knows consumers will demand such a feature, and so it’s teaming up with OpenAI to add the startup’s technology to iOS 18, the next version of the iPhone’s software. The companies are preparing a major announcement of their partnership at WWDC, with Sam Altman-led OpenAI now racing to ensure it has the capacity to support the influx of users later this year.
Nah - I'm not buying it. For one, OpenAI and Microsoft are "true" partners; Microsoft's Azure compute + OpenAI's models + Microsoft's apps/OS[1] are getting deeply intertwined. An Apple + OpenAI partnership seems like a strategy to be permanently one step behind Microsoft.
But it seems inevitable that there's big Apple + AI news coming. Siri needs a significant upgrade. The new iPad Pro announcement made a big deal about having "AI compute" power.[2] An "AI features" announcement at WWDC 2024 seems like the safest bet in tech.
So, what might be coming?
If I had to make a bet, my money would be on a Google partnership, with something like the Gemma model running locally on iPhones/iPads etc. as 'Siri 2.0', and access to Gemini for the kinds of tasks that need 'full fat' LLMs and more computing power.
[1] Also - GitHub Copilot ↩
[2] Yes, iPads/iPhones/Macs have had 'neural cores' for a few years - but the new iPad seems to be stepping this up significantly, with no news yet on what it's actually going to power. Worth noting: if you're developing AI/ML/LLM-type software on a Mac, you're using the GPU, not the NPU. So far, the neural cores seem to be locked away for Apple's own use (which includes Apple's APIs if you're building apps for the App Store - but not if you're running something like TensorFlow in Python). ↩
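(A quick way to see this for yourself - a minimal sketch, assuming TensorFlow with the tensorflow-metal plugin installed: the only devices Python frameworks can see are the CPU and the GPU; there is no device type for the Neural Engine.)

```python
# List the compute devices TensorFlow can actually use on a Mac.
# Assumes TensorFlow plus the tensorflow-metal plugin; with it installed you
# should see a CPU and a GPU device - and nothing representing the Neural Engine.
import tensorflow as tf

print(tf.config.list_physical_devices())
# e.g. [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
#       PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```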
It's a lot easier to understand the IP issues in 'give me this song but in Taylor Swift's voice' than 'make me a song in the style of the top ten hits of the last decade.' If a human did that, they wouldn't necessarily have to pay anyone, so why would an LLM?
There's an interesting twist with the "Taylor Swift's voice" example: Scooter Braun owns all of Taylor Swift's recordings (at least, I think, all the ones released before any ChatGPT-era training datasets were compiled) - he bought the record company, so he owns the master recordings (and all the copies of the master recordings, and the rights relating to them) - but not the songs themselves. Taylor Swift still owns those - which is why she can make her "Taylor's Version" re-recordings (which Scooter Braun doesn't get a penny out of).
So there's a key difference here: a human would copy the songs (that is, they would be working off the versions of the songs in their heads - the idea of the songs), so Swift would get paid as the owner of the songs.
But the kind of generative AI we're talking about would be copying 100% from the recordings (i.e. the training data would be the sounds, digitised and converted into a stream of numbers) - which Swift doesn't own. The AI doesn't "see" the idea of the songs - it wouldn't "know" what the lyrics were, what key the songs were in, or what chords were being played on what instrument - any more than a Large Language Model "knows" what the words in its (tokenised) training dataset or output mean.
She still owns her songs, but she’s sold her voice.
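To illustrate the "stream of numbers" point: this is roughly what an audio model ingests. A minimal sketch, assuming a 16-bit PCM WAV file; the file name is hypothetical.

```python
# What an audio model 'sees': the digitised waveform, as a stream of integers.
# Assumes a 16-bit PCM WAV; the file name is hypothetical.
import wave

import numpy as np

with wave.open("master_recording.wav", "rb") as wav:
    frames = wav.readframes(wav.getnframes())

samples = np.frombuffer(frames, dtype=np.int16)
print(samples[:20])  # just numbers - no lyrics, no key, no chords, no 'song'
```

There's nothing in that array that identifies the composition; the "song" as an idea simply isn't part of the training data.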
Fifteen years ago, I wrote a blog post titled “Losing a Virtual Limb”, which was trying to articulate that funny feeling I was getting that buying my first iPhone was going to change everything.
Recently, I got that funny feeling again.
This is the tech war of the moment; a race to be the first to develop an AI/Machine Learning/Deep Learning product that will be a commercial success. Google have a head start - Microsoft+OpenAI look like they could be set to catch up, and maybe even overtake Google. But if this is a race then where is the finish line? What is the ultimate goal? Is it all about the $175 billion Search advertising market - or is it bigger than that?
Nine years ago (Jan 2014), I wrote a post about "the next big thing". I think it's fair to say that in a history of technological innovations and revolutions, there isn't really much in the last decade or so that would warrant much more than a footnote; the theme has been 'evolution, not revolution'.
Well, I think the Next Big Thing is - finally - here. And it isn't a thing consumers will go out and buy. It's an abstract, intangible thing: software, not hardware; service, not product.
For the first time in years, tech has got genuinely interesting again.
Two narratives, one story:
1. An AI developed by Google has become sentient - able to have its own thoughts and feelings - and a Google engineer working with it has been fired for making the story public.
2. A Google engineer thinks a 'chatbot' AI should be treated like a human, because he believes it has developed the ability to have and express its own thoughts and feelings. After Google looked into and dismissed his claims, the engineer went public with them, was placed on paid administrative leave, and was subsequently fired.
The subject of the first story is artificial intelligence - with a juicy ethical human subplot about a whistleblower getting (unfairly?) punished.
The subject of the second story (which is a little more nuanced) is a human engineer going rogue, with an interesting subplot about ethics around artificial intelligence.
I think most of the reporting has been around the first version of the story - and I think that's because it fits into a broader ongoing narrative: the idea that 'our' machines are getting smarter, moving towards a point where they are so smart that humans can be replaced.
It's a narrative that stretches back for centuries - at least as far back as the Industrial Revolution.