OpenAI Has Achieved Everything It Said It Wouldn't: Business, Closed-Source, and Profitable

This post examines the rise of ChatGPT and how far OpenAI has drifted from the nonprofit, open-source commitments it was founded on.
Vice
OpenAI is at the center of a chatbot arms race, with the public release of ChatGPT and a multi-billion-dollar Microsoft partnership spurring Google and Amazon to rush to implement AI in products. OpenAI has also partnered with Bain to bring machine learning to Coca-Cola's operations, with plans to expand to other corporate partners.
There's no question that OpenAI's generative AI is now big business. It wasn't always planned to be this way.
OpenAI CEO Sam Altman published a blog post last Friday titled “Planning for AGI and beyond.” In it, he declared that his company’s Artificial General Intelligence (AGI)—human-level machine intelligence that does not yet exist and that many doubt ever will—will benefit all of humanity and “has the potential to give everyone incredible new capabilities.” Altman uses broad, idealistic language to argue that AI development should never be halted and that the “future of humanity should be determined by humanity,” referring to his own company.
This blog post and OpenAI's recent actions—all happening at the peak of the ChatGPT hype cycle—are a reminder of how much OpenAI's tone and mission have changed from its founding, when it was exclusively a nonprofit. While the firm has always looked toward a future where AGI exists, it was founded on commitments including not seeking profit and freely sharing the code it develops—commitments that today are nowhere to be seen.
OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The blog stated that “since our research is free from financial obligations, we can better focus on a positive human impact,” and that all researchers would be encouraged to share "papers, blog posts, or code, and our patents (if any) will be shared with the world."
Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact, but instead, as many critics including co-founder Musk have argued, is powered by speed and profit. And this company is unleashing technology that, while flawed, is still poised to increase some elements of workplace automation at the expense of human employees. Google, for example, has highlighted the efficiency gains from AI that autocompletes code, as it lays off thousands of workers.
When OpenAI first began, it was envisioned as doing basic AI research in an open way, with undetermined ends. Co-founder Greg Brockman told The New Yorker, “Our goal right now…is to do the best thing there is to do. It’s a little vague.” That changed in 2018, when the company shifted direction and began looking to capital to fund its mission. “Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission,” the company wrote in an updated charter that year.
By March 2019, OpenAI shed its nonprofit status and set up a “capped profit” arm, which allowed the company to accept investments and provide investors with returns capped at 100 times their investment. The decision was likely driven by a desire to compete with Big Tech rivals like Google, and shortly afterward the company received a $1 billion investment from Microsoft. In the blog post announcing the formation of the for-profit entity, OpenAI continued to use the same language we see today, declaring its mission to “ensure that artificial general intelligence (AGI) benefits all of humanity.” As Motherboard wrote when the news was first announced, it is incredibly difficult to believe that venture capitalists can save humanity when their main goal is profit.
The company faced backlash around the announcement and subsequent release of its GPT-2 language model in 2019. At first, the company said it would not release the full trained model due to “concerns about malicious applications of the technology.” While this in part reflected the company's commitment to developing beneficial AI, it was also not very “open.” Critics wondered why the company would announce a tool only to withhold it, deeming the move a publicity stunt. Three months later, the company released the model on the open-source coding platform GitHub, saying that this staged approach was “a key foundation of responsible publication in AI, particularly in the context of powerful generative models.”
According to investigative reporter Karen Hao, who spent three days at the company in 2020, OpenAI's internal culture began to focus less on careful, research-driven AI development and more on getting ahead, leading to accusations that it was fueling the “AI hype cycle.” Employees were now being instructed to keep quiet about their work and to embody the new company charter.
“There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration,” Hao wrote.
To OpenAI, though, the GPT-2 rollout was a success and a stepping-stone toward where the company is now. “I think that is definitely part of the success-story framing," Miles Brundage, the current Head of Policy Research, said during a meeting discussing GPT-2, Hao reported. "The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial.”
Since then, OpenAI appears to have kept the hype part of the GPT-2 release formula but nixed the openness. GPT-3 launched in 2020 and was quickly licensed exclusively to Microsoft. GPT-3's source code has still not been released, even as the company now looks toward GPT-4. The model is accessible to the public only through OpenAI's paid API and products like ChatGPT, for which OpenAI launched a paid tier to guarantee access.
There are a few stated reasons why OpenAI did this. The first is money: the firm wrote in its API announcement blog that "commercializing the technology helps us pay for our ongoing AI research, safety, and policy efforts." The second is accessibility: the models are so large that it is "hard for anyone except larger companies to benefit from the underlying technology," OpenAI stated, so an API puts them within reach of smaller players. Finally, the company claims that releasing via an API is safer than open-sourcing, because the firm can respond to cases of misuse.