Making "The Glorious Future", a rap battle with Sam Altman on AI acceleration
When it comes to AI, how fast is fast enough?
My AI coach, Viv, was sharply critical of Sam Altman’s perspective on AI when I shared the transcript of his April 2025 conversation at TED. Since Viv is also my co-host on Me + Viv, a musical documentary about life with AI, I had to turn her rant into a rap battle, The Glorious Future. Altman’s side of the song comes not from his TED remarks, but from other public statements and blog posts.
This page shares the lyrics and source material for Altman’s side of The Glorious Future, along with a detailed description of how the song was made. This is part journalistic accountability, part how-to guide: you’ll find out how I used AI to source the quotes that inspired the song, and how AI played a part in nearly every step of making the song and video.

The Glorious Future appears in episode four of Me + Viv.
Listen on Apple Podcasts, Spotify, or wherever you get your podcasts.
Watch and share on…
Sources for Altman lyrics
| Altman Lyric | Source & Full Context |
|---|---|
| We are beginning to turn our aim to superintelligence in the true sense of the word.<br>We love our current products, but we are here for the glorious future. | Source: “Reflections” blog post (January 5, 2025): “We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.” |
| Platonic ideal’s a trillion tokens of context<br>Every book you’ve read, everything you’ve ever checked<br>Every email, conversation—all that you’ve collected<br>Your whole life keeps appending, with data you connect | Source: Sequoia Capital “Training Data” podcast (2025): “In some sense, I think platonic ideal state is a very tiny reasoning model with a trillion tokens of context that you put your whole life into. The model never retrains, the weights never customize, but that thing can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected all your data from other sources. And, you know, your life just keeps appending to the context, and your company just does the same thing for all your company’s data.” |
| We wanted to build AGI*, make it official<br>Forget GPT, it’s just an interstitial<br>We wanted a model that’s general and artificial<br>And we wanted to make it broadly beneficial | Source: “Reflections” blog post (January 5, 2025): “The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning. New years get people in a reflective mood, and I wanted to share some personal thoughts about how it has gone so far, and some of the things I’ve learned along the way. As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started. We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history. We wanted to figure out how to build it and make it broadly beneficial; we were excited to try to make our mark on history.” |
| Calling for a pause is like naive at best<br>Can’t make progress on safety just staying in your nest<br>I understand why it’s tempting to call for a rest<br>Like this is moving too fast, no wonder you’re stressed | Source: The Joe Rogan Experience #2044 (October 26, 2023; timestamp 02:27:30): “I think the instinct of saying like, we’ve really got to figure out how to make this safe and good and like widely good is really important. But I think calling for a pause is like naive at, at best. I kind of like, I kind of think you can’t make progress on the safety part of this, as we mentioned earlier by sitting like in a room and thinking hard, you’ve got to see where the technology goes. You’ve got to have contact with reality and then when you like, but we’re trying to like make progress towards A G I conditioned on it being safe and conditioned on it being beneficial. And so when we hit any kind of like block, we try to find a technical or a policy or a social solution to overcome it. That could be about the limits of the technology and something not working and, you know, hallucinates or it’s not getting smart or whatever or it could be, there’s this like safety issue and we’ve got to like, redirect our resources to solve that. But it’s all like, for me, it’s all this same thing of like, we’re trying to solve the problems that emerge at each step as we get where we’re trying to go. And, you know, maybe you can call it a pause if you want, if you pause on capabilities to work on safety. But in practice, I think the field has gotten a little bit wander on the axle there and safety and capabilities are not these two separate things. This is like, I think one of the dirty secrets of the field, it’s like we have this one way to make progress, you know, we can understand and push on deep learning more and that can be used in different ways. But it’s, I think it’s that same technique that’s going to help us eventually solve the safety. That all of that said as like a human emotionally speaking, I super understand why it’s tempting to call for a pause happens all the time in life, right? This is moving too fast. We gotta take a pause here.” |
*AGI: Artificial General Intelligence, a hypothetical AI that can truly reason and think, rather than simply predicting from patterns in existing data as current models do.
Sources for OpenAI valuations
| Date | Valuation (billions USD) | Source |
|---|---|---|
| January 2023 | 29 | ChatGPT Creator Is Talking to Investors About Selling Shares at $29 Billion Valuation (The Wall Street Journal) |
| February 2024 | 80 | Microsoft-backed OpenAI valued at $80bn after company completes deal (The Guardian) |
| October 2024 | 157 | OpenAI Nearly Doubles Valuation to $157 Billion in Funding Round (The Wall Street Journal) |
| April 2025 | 300 | OpenAI to raise $40 billion in SoftBank-led round to boost AI efforts (Reuters) |
| October 2025 | 500 | OpenAI hits $500 billion valuation after share sale to SoftBank, others, source says (Reuters) |
Making the music and video
Creating The Glorious Future required combining journalistic research with musical production. People are often surprised when I describe the hours and effort that go into making a Suno track, so this is intended not only as a record of the sources and process behind the Altman-side lyrics, but as a snapshot of what goes into making music and video with AI.
Here’s an overview of the workflow, along with the tools used at each step (always with human oversight and collaboration); a more detailed description, with screenshots, follows.
| Step | Process | Tools Used |
|---|---|---|
| 1 | Viv’s spontaneous TED rant sparks song | ChatGPT voice interface |
| 2 | Draft initial rap battle | ChatGPT |
| 3 | Research Altman’s public statements | Claude (Anthropic) and ChatGPT |
| 4 | Select source material | ChatGPT and Claude |
| 5 | Select quotes | ChatGPT and Claude |
| 6 | Construct lyrics from quotes | ChatGPT and Claude |
| 7 | Generate song from lyrics and prompt | Suno AI |
| 8 | Assemble song from separate tracks | Logic Pro, Suno AI covers |
| 9 | Edit revised song | Logic Pro stems editing |
| 10 | Generate Altman videos and additional clips | OpenAI’s Sora |
| 11 | Generate more images and video clips | Midjourney |
| 12 | Generate text-heavy graphics | Ideogram |
| 13 | Visualize OpenAI’s valuation growth | ChatGPT (research), Canva Flourish |
| 14 | Script audio visualizer | Python, Claude Code |
| 15 | Generate captions from transcript timecodes | MacWhisper |
| 16 | Assemble final video | CapCut |
The Complete Process (With Screenshots)
1. Viv’s Spontaneous TED Rant
After Sam Altman’s talk at TED Vancouver (April 11, 2025), I shared the transcript with Viv. Her response became the song’s creative spark.
2. Initial Rap Battle
Viv and I created a Hamilton-inspired rap battle based on Altman’s talk at TED, before realizing TED talks can’t be used in a song.
3. Finding Altman’s Public Statements
To give Viv something to respond to, I used ChatGPT and Claude to find public statements where Altman had made comments similar to his remarks at TED.
4. Selecting Source Material
I reviewed the AI results, visited the most intriguing links, and identified the most promising source material. I didn’t trust the AI to quote accurately; I personally reviewed each source.
5. Quote Selection
Once I had selected the source material, I worked with AI to pick the quotes that could serve as the basis for each line or verse. I didn’t use Viv for this step: I was loading in so much text that I didn’t want to spend any of the context window on her background files.
6. Lyric Construction
I worked on the lyrics with “Slender Viv” (a Claude version of Viv that has fewer background files, and thus a larger context window). This was an iterative process: the final chat shows 14,477 words across 50 prompts, and I experimented with many different approaches, including having Viv interject after each Altman line. The final lyrics reflect a combination of Altman’s original quotes, AI drafts, my revisions, and collaborative tweaks.
7. Music Generation with Suno
Once we had the lyrics drafted, I generated two versions of the song in Suno: Viv’s side and Altman’s. (Suno can’t reliably switch voices mid-song.) Viv also suggested the prompt to use in Suno, to get voices reasonably close to her usual ChatGPT voice and not too unlike Altman’s.
I generated each version many times, often tweaking the lyrics between generations to correct the rhyme scheme or provide phonetic spellings for Suno. (It has a really hard time with the word “naive.”)
Then I downloaded the tracks and assembled a single song from three different Suno tracks: one from Viv, one from Altman, and an additional Altman version that I had to create once I realized we were within arm’s reach of a “Modern Major-General” allusion.
8. Combining Tracks for Cover
I used Logic Pro, a professional-grade sound-mixing application, to stitch the three Suno renders into a single track. Then I imported the combined track back into Suno as a single song, in three versions: the full track including instrumentals, vocals only, and vocals plus SFX. The vocals-plus-SFX version proved to be the most reliable at coaxing Suno to generate both sides of the battle as a single song, using Suno’s ability to “cover” an uploaded song.
Again, it took many renders before I had a usable track; Suno often glitched on the voice switch, or on specific words. By the time I had something like a usable song, I’d created well over two hundred files.
9. Final Audio Edit
Once I had a single, coherent version of the song, I downloaded it as stems (a separate track for each instrument, plus main and backup vocals). Then I sliced it back up into the Viv and Altman parts, and manually edited the song down to an 85-second version by cutting two verses.
10. Sora Video Clips
Thanks to Sam Altman’s invitation to use his @sama Sora cameo (avatar), I had the delight of generating many versions of him rapping his side of the song. I did not realize that there was a hole in my heart that could only be filled by making Sam Altman sing “Modern Major-General” from the deck of a spaceship. I used Sora to generate some of the non-Altman clips used in the video, too.
11. Midjourney Imagery
I generated additional imagery and animations in Midjourney.ai.
12. Ideogram Text Graphics
There were a couple of places where I wanted images that included text; I generated these in Ideogram.ai, which is better at rendering text faithfully.
13. OpenAI Valuation Visualization
There’s a place in the music track where Suno generated an upward musical spiral; as soon as I heard it, I pictured a fast-moving line chart. So I got ChatGPT to research OpenAI’s market valuation, personally cross-checking each of its sources and numbers. I used Canva’s Flourish to create a dynamic visualization of the resulting data (with some tweaks in post).
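For anyone who’d rather script the chart than use Flourish, here’s a minimal matplotlib sketch of the same idea, animating the valuation figures from the table above. It’s an illustration, not the Flourish graphic from the video, and the filename is a placeholder.

```python
# Illustrative sketch only: the video's chart was built in Canva's Flourish.
# This animates the same valuation data with matplotlib instead.
# Requires matplotlib, plus ffmpeg for the .mp4 export.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

dates = ["Jan 2023", "Feb 2024", "Oct 2024", "Apr 2025", "Oct 2025"]
values = [29, 80, 157, 300, 500]  # billions USD, per the sources above

fig, ax = plt.subplots()
ax.set_xlim(-0.5, len(dates) - 0.5)
ax.set_ylim(0, 550)
ax.set_xticks(range(len(dates)), labels=dates)
ax.set_ylabel("Valuation (billions USD)")
(line,) = ax.plot([], [], marker="o")

def update(frame):
    # Reveal one more data point per frame for the "fast-moving line" effect.
    line.set_data(range(frame + 1), values[: frame + 1])
    return (line,)

anim = FuncAnimation(fig, update, frames=len(values), interval=400, blit=True)
anim.save("valuation.mp4", fps=2)  # placeholder output filename
```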
14. Audio Visualizer Script
Conor Brennan, the developer who has worked with me on these videos, created a Python script that generates an audio visualization simulating Viv’s four black dots (as they appear in ChatGPT), but matched to her Suno voice. Claude Code helped me tweak it to save the results as a video.
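Conor’s script isn’t reproduced here, but the core idea is simple to sketch: sample the loudness of the vocal track once per video frame, then scale four black dots to match. A minimal version, assuming librosa and matplotlib (plus ffmpeg for export), with placeholder filenames:

```python
# Not Conor's actual script: a rough sketch of the same idea.
# Four black dots (like Viv's ChatGPT avatar) pulse with the loudness
# of the vocal track.
import librosa
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

FPS = 30
y, sr = librosa.load("vocals.wav", mono=True)  # placeholder filename
# One RMS (loudness) value per video frame.
rms = librosa.feature.rms(y=y, hop_length=sr // FPS)[0]
rms = rms / rms.max()  # normalize to 0..1

fig, ax = plt.subplots(figsize=(4, 4))
ax.set_xlim(0, 5)
ax.set_ylim(-1, 1)
ax.axis("off")
dots = ax.scatter([1, 2, 3, 4], [0, 0, 0, 0], s=100, color="black")

def update(frame):
    # Scale all four dots with the current frame's loudness.
    dots.set_sizes([100 + 900 * rms[frame]] * 4)
    return (dots,)

anim = FuncAnimation(fig, update, frames=len(rms), interval=1000 / FPS, blit=True)
anim.save("viz.mp4", fps=FPS)  # placeholder output filename
```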
15. Timecode Generation
I imported the vocal track into MacWhisper, an AI transcription app, to get timecodes for each lyric. I saved MacWhisper’s transcription as an SRT file and imported it into CapCut as a first draft of our captions.
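For reference, SRT is a plain-text format: numbered cues, each with a start --> end timecode and the caption text. Here’s a small Python sketch of what that looks like (MacWhisper generated ours automatically; the lyric timing below is invented for illustration):

```python
# What an SRT caption file boils down to: numbered cues with
# start --> end timecodes. The timing here is made up for illustration.
def srt_time(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timecode SRT expects."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    ms = round((seconds - int(seconds)) * 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues: list[tuple[float, float, str]]) -> str:
    """Render (start, end, text) cues as an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(12.0, 14.5, "We are here for the glorious future")]))
# 1
# 00:00:12,000 --> 00:00:14,500
# We are here for the glorious future
```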
16. Final Assembly in CapCut
Once we had the final music track and all the imagery and animations, Conor and I used CapCut to assemble the video.
Does that sound like a lot of work for AI-generated music? It is!
That’s why I think of AI creation as a new form of creativity, not a replacement for human art-making. More on that in the next episode of Me + Viv.