There’s a lot of fear, uncertainty, and doubt being spread about OpenAI. So let’s help you straighten out what it is and what it isn’t.
Featuring Tom Merritt.
A special thanks to all our supporters–without you, none of this would be possible.
Thanks to Kevin MacLeod of Incompetech.com for the theme music.
Thanks to Garrett Weinzierl for the logo!
Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit.
Send us email to [email protected]
OpenAI is BIG in the news these days, what with ChatGPT, GPT-4, its partnership with Microsoft, and mounting criticisms from multiple corners.
You may have heard it’s a non-profit. Or that it used to be and now it’s not. Or that it was supposed to open source things and now it’s not.
There’s a lot of fear, uncertainty, and doubt being spread about OpenAI. So let’s help you straighten out what it is and what it isn’t.
Let’s help you Know a Little More about OpenAI.
OpenAI was founded Dec. 10, 2015 with funding donated by Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research.
Its other founding members were scientists and engineers: research director Ilya Sutskever, as well as Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba.
Its advisors were Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka.
And its co-chairs were Sam Altman and Elon Musk.
OK. That’s a lot of names. I can summarize by saying they were mostly AI researchers from academia and companies like Google and Facebook, or in some cases went on to work at those companies. Some are still there, some are not, and some don’t even seem to list their time at OpenAI.
The point being, OpenAI made an effort to find people from all parts of the industry who were really good at this. And the two driving visionaries among them were the last two names I mentioned: Sam Altman and Elon Musk. We could spend a lot of time talking about Reid Hoffman and Peter Thiel and their ties to Musk as former PayPal folks. And Greg Brockman is an interesting guy from North Dakota who joined Stripe as a founding engineer in 2010 and became its CTO in 2013. He was the first CTO at OpenAI and is now its President.
But I want to focus on Musk and Altman.
Elon Musk you probably know. Born in South Africa, founder of X.com, an early federally-insured online bank that in 2000 merged with Confinity, makers of PayPal. Oh right. You might know him more for companies he invested in or bought, like Tesla and Twitter. Or companies he founded later, like SpaceX.
You might know less about Altman. Born in St. Louis. Went to high school at John Burroughs out in Ladue. Founded the social networking app Loopt in 2005 and sold it for $43.4 million in 2012. He became president of Y Combinator in 2014. And he was the CEO of Reddit for eight days in 2014, between Yishan Wong and the return of Steve Huffman.
Why these two? Well, Altman is CEO of OpenAI. And Musk? He is the magnet and Altman’s the steel. [Brief Walter Egan music?]
Let me explain. While CEO of Y Combinator, Altman began having conversations with Musk, sometimes recorded for the public, about AI. They shared a concern that it was expanding too rapidly and that the companies in charge of it weren’t paying enough attention to the risks and to responsible development. They both believed AI could be one of the greatest benefits to humanity, but also one of its greatest threats.
They weren’t the only ones thinking along these lines, so they gathered together the like-minded folks I mentioned earlier. People concerned with ethics and responsibility. And from the beginning it leaned toward idealism.
OpenAI Incorporated was founded as, and still is, a 501(c)(3) nonprofit. From its beginning it reflected the concerns of Musk and Altman, writing on its website, “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.” OpenAI said it wanted “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” It took $1 billion in pledged funding and said it expected to spend only a small amount of that over the next few years.
But it had to spend more than it anticipated. AI researchers are paid a lot. OpenAI sold talent on its mission, its ethics, and its sense of responsibility, paying better than nonprofits usually do but less than Facebook or Google. Founding engineer Zaremba told Wired he turned down offers at two to three times his market value to work at OpenAI.
So the people weren’t cheap. The cloud computing wasn’t cheap either. Reuters reported that OpenAI spent $7.9 million, about 25% of its budget, on cloud computing in 2017.
If they wanted to make more progress they needed money to attract top talent and to be able to run more complex experiments.
So you can imagine that after the first few years, OpenAI is starting to wonder about that nonprofit status. It’s got to make some hard decisions about all that openness too. I mean, they’ve done some impressive things training video game bots with OpenAI Gym and Universe, but is that going to move the needle? They’re the ones doing this responsibly, but what does that matter if the big companies stay so far out in front? If they really want to advance AI, if they really want to be the ones protecting humanity and pushing for responsible development, they’d need more, right? So how do you do that and stay true to your core principles?
This was clearly important for Musk. Not long before the founding of OpenAI, he had told students at MIT that AI was humanity’s biggest threat.
What happened next was surprising, if not shocking. So why did it get downplayed?
On February 20, 2018, Elon Musk announced he was leaving the board of OpenAI. In a blog post announcing new donors to the nonprofit, OpenAI wrote, “Additionally, Elon Musk will depart the OpenAI Board but will continue to donate and advise the organization. As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon.”
Well, OK. He still believes in the mission, but he’s got his own AI at Tesla to develop, so he probably shouldn’t be a director at a competitor, even if it is a nonprofit one. No hard feelings, right? Musk even spoke to OpenAI employees to explain that conflict of interest before he left.
But the employees didn’t really seem to buy it. And the line announcing it was buried at the end of a long first paragraph in a three-paragraph post about other funders. Seems like a bigger deal than that, no?
Well, maybe it was.
You see, everybody had a solution for OpenAI’s problem of falling behind. Musk’s solution was himself. Put me in charge! Let Musk run the show and he’d catch up. Just look at what he did to the auto industry, right?
You may have heard that Musk can be a little — enthusiastic. Maybe rubs people the wrong way sometimes. That seems to have been the case with OpenAI’s other founders. Maybe they were also annoyed that Tesla had taken one of those founding engineers, Karpathy. Exactly the kind of engineer they were having a hard time convincing to leave higher-paying jobs. So it’s not too surprising, in retrospect, that rather than putting Musk in charge, the board moved Altman into the role of President.
And Musk’s departure had another effect. According to Semafor, he was supposed to keep contributing money to OpenAI, but he didn’t. That was about $1 billion the company was expecting that it no longer had, at a time when it was scraping to make its funding meet its ambitions.
And Google Brain had recently released its “transformer” model (the T in GPT, by the way). It was a huge leap forward for AI models, but it required a lot more data to train, meaning a lot more computing power, meaning a lot more cost. A cost Google, which ran its own cloud services, could afford to pay. OpenAI, which paid Google for cloud services, could not. If it didn’t want to see Google seriously outdistance it, OpenAI needed to do something.
It started by releasing a new charter in April 2018. It still “committed to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” and said that its “primary fiduciary duty is to humanity.” But now it also said, “We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”
A more public hint that things were changing was the announcement of GPT-2 on February 14, 2019. OpenAI’s Valentine’s Day gift was to not open source this release as it had its previous ones. GPT-2 could take prompts and complete them: give it a headline and it could write the rest of the article. OpenAI justified the less open release by citing the risk that the tool could be used maliciously. Though a public interface was released, and the full model eventually followed in November.
But then it took a serious step. On March 11, 2019, OpenAI pulled a move from Mozilla’s playbook. Mozilla had long operated as a nonprofit that fully owned a for-profit subsidiary. This allowed it to make money on Firefox, attract talent, and pay for development.
OpenAI was going to do a similar thing. OpenAI Incorporated, the nonprofit, would form OpenAI Limited Partnership, a for-profit company wholly controlled by OpenAI Inc. But OpenAI LP would be profit-capped. Investors would receive up to 100 times their investment, and excess profits would go to the nonprofit OpenAI Inc. To assuage concerns about the move, Altman, the CEO of the new for-profit company, took no equity in it.
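For a rough sense of how a cap like that works, here’s a minimal sketch. The numbers are made up for illustration; the actual deal terms are more complicated than a single multiplier.

```python
def split_profits(investment, total_return, cap_multiple=100):
    """Split a hypothetical payout under a capped-profit structure.

    Investors keep returns up to cap_multiple times what they put in;
    anything above that cap flows to the controlling nonprofit.
    Illustrative only -- not the actual OpenAI LP terms.
    """
    cap = investment * cap_multiple
    investor_share = min(total_return, cap)
    nonprofit_share = max(total_return - cap, 0)
    return investor_share, nonprofit_share

# A $10M investment whose stake eventually pays out $1.5B:
# investors keep $1B (the 100x cap), the nonprofit gets the remaining $500M.
print(split_profits(10_000_000, 1_500_000_000))
```

Below the cap, it behaves like any ordinary investment: a $10M stake that pays out $50M all goes to the investor, and the nonprofit gets nothing.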
So they had their solution: sell non-controlling shares in the for-profit company. Except nobody was buying. It was profit-capped and the CEO didn’t even want a single share? Not for me.
Well, unless you’re Satya Nadella. In September 2019 OpenAI got its first big investment bite. Microsoft agreed to invest $1 billion, a nice replacement for the lost Musk donations. Not only would it invest, but it had even better cloud resources than Google, so it would make its vast Azure infrastructure available. Money to pay talent AND bargain cloud computing. And Microsoft got to become a bigger player in AI.
Microsoft and OpenAI built a supercomputer to handle the massive amount of data needed to train Large Language Models.
OpenAI was back in the race.
In January 2021, OpenAI released DALL-E, a multimodal model that could create images based on a text description.
In August 2021 it launched Codex, which translates natural language to code and powers Microsoft’s GitHub Copilot feature.
And in April 2022 DALL-E 2 captured imaginations with much better performance and spawned multiple imitators like Craiyon and Midjourney.
But of course the big leap came with the launch of ChatGPT in November 2022. For OpenAI it was just the latest public demonstration of what its Large Language Models could do. Nobody got that excited when Microsoft released DialoGPT, built on GPT-2, in 2019. Why would this time be any different? Well. It was. For whatever reason, ChatGPT captured the public imagination. Suddenly OpenAI wasn’t just staying in the race. It was leading it.
Google issued a code red. Microsoft and Google got into an AI announcement competition.
Altman was triumphant.
Musk was — not.
In December 2022, Twitter, now owned by Musk, pulled OpenAI’s access to Twitter data. Musk began tweeting criticisms of OpenAI.
On February 15, 2023, he sang his old 2015-era tune again to attendees at the World Government Summit in Dubai, United Arab Emirates, “One of the biggest risks to the future of civilization is AI.”
On February 27, 2023, The Information wrote that Musk was recruiting engineers and scientists to form a lab to compete with OpenAI.
And on Wednesday, March 29, he signed an open letter, put out by a think tank he funded, calling for all companies to pause their research into the next version of AI for six months in order to create a safety scheme.
Oh, and the week before that, Shivon Zilis, the mother of Musk’s twins, resigned from the OpenAI board.
Altman, on the other hand, talking on Lex Fridman’s podcast on March 25, 2023, described Musk as one of his heroes and said, “I believe he is, understandably so, really stressed about AGI safety.” [find podcast sound for this?]
So there you have it. OpenAI is a nonprofit AND a for-profit company. It was co-founded by Elon Musk, but that’s not nearly the whole story. As for whether it has remained true to the values of its founding, or whether it now engenders the very fears it was formed to address, I’ll leave that up to you.
I just hope you Know a Little More about OpenAI.