Description

MBA 580 Case Question and Analysis
Case: AI Wars

Student Learning Outcomes
- Analyze options presented in the case
- Write a case analysis that succinctly argues for a point of view and is supported by evidence
- Understand the interaction between innovation and competition

Case Questions:
1. Which of the proposed end-user technologies demonstrates the most promise as a business for Google? Note: You can say "none of the above" and suggest that Google focus on behind-the-scenes products.
2. Would you recommend an open or closed license for developers?
3. Which firm is best positioned in AI?

When answering these questions, be sure to consider concepts from class (and the book), including disruptive innovation, core competencies, and protections to innovation. You should approach the case as if you are consulting for Google; therefore, the paper should be substantially your own ideas rather than a summary of the case.

- No more than two (2) pages in length, plus any appendix(ices) and reference page.
- Write your recommendation to the company in the third person (i.e., refrain from using "I," "me," or "you"; instead say "Google should," "he/she should").
- The paper must be typed, double-spaced, in 12-point font, with one-inch margins on all four sides.
- Attach your "References" page (APA format) after the "Appendix." This is not required, but only necessary if you went outside of the case for information.

I have attached the case file.

UNFORMATTED ATTACHMENT PREVIEW

9-723-434
REV: FEBRUARY 12, 2024

ANDY WU
MATT HIGGINS
MIAOMIAO ZHANG
HANG JIANG

AI Wars

Although ChatGPT still has plenty of room for improvement, its release led Google's management to declare a "code red." For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread — the arrival of an enormous technological change that could upend the business.
— Nico Grant and Cade Metz in The New York Times, December 21, 2022

In February 2024, the world was looking to Google to see what the search giant and long-time putative technical leader in artificial intelligence (AI) would do to compete in the massively hyped technology of generative AI. Over a year earlier, OpenAI had released ChatGPT, a text-generating chatbot that captured widespread attention. OpenAI would go on to offer a range of new generative AI products as both user-facing applications and developer-facing application programming interfaces (APIs). In January 2023, Microsoft and OpenAI signed a $10 billion deal extending their exclusive partnership. Microsoft would continue to supply OpenAI with seemingly unlimited computing power from its Azure cloud, and Microsoft hoped that OpenAI's technology and brand would keep Microsoft at the center of the new generative AI boom. Microsoft announced that it would soon begin deploying OpenAI's technologies throughout its suite of products, from its Microsoft 365 productivity apps to its search engine Bing. 1

Google needed to decide how to respond to the threat posed by OpenAI and Microsoft. Google had a decade of experience developing and deploying AI and machine learning (ML) technologies in its products, but much of that AI work happened in-house and behind the scenes. Google researchers had invented the transformer architecture that made the generative breakthroughs demonstrated by GPT possible.
Breakthroughs in AI had been quietly supercharging Google products like Search and Ads for years, but most of the product work was internal and little of it had penetrated the public consciousness. Until 2022, Google leadership had been deliberately cautious about revealing the extent of its AI progress and about opening Google's experimental AI tools to the public. Was generative AI really ready for user-facing applications? Was the public, not to mention the Google PR department, ready for the changes and controversies that more visible and active AI might unleash? What did Google have to gain, or lose, in this opening salvo of the AI wars? Most pressingly, how should Google respond to moves from Microsoft, Meta, Amazon, and many others to commercialize generative AI in what was becoming the biggest big tech narrative of 2024?

HBS Professor Andy Wu, Research Associate Matt Higgins, HBS Doctoral Student Miaomiao Zhang, and Doctoral Student Hang Jiang (MIT) prepared this case. This case was developed from published sources. Funding for the development of this case was provided by Harvard Business School and not by the company. HBS cases are developed solely as the basis for class discussion. Cases are not intended to serve as endorsements, sources of primary data, or illustrations of effective or ineffective management. Copyright © 2023, 2024 President and Fellows of Harvard College. To order copies or request permission to reproduce materials, call 1-800-545-7685, write Harvard Business School Publishing, Boston, MA 02163, or go to www.hbsp.harvard.edu. This publication may not be digitized, photocopied, or otherwise reproduced, posted, or transmitted, without the permission of Harvard Business School.

Google

Google's homegrown AI project, Google Brain, started in 2011 as an exploratory collaboration involving Stanford computer science professor Andrew Ng, 2 initially focusing on the development of neural networks as a general-purpose AI technology. a Google Brain developed DistBelief, a proprietary internal machine learning system for efficiently training deep learning neural networks. 3 Google internally refined DistBelief over time until finally releasing it to the public as an open-source developer platform, TensorFlow, in 2015. 4 TensorFlow was instrumental in the development of deep learning neural networks, both inside and outside Google. For years, TensorFlow was the most popular tool for artificial intelligence (AI) and machine learning (ML) applications in the world.

a This collaboration also involved Google fellow Jeff Dean and Google researcher Greg Corrado.

Since its inception, Google Brain's research and approach to AI had never been far from Google's products. A list of which Google products made use of Google Brain's AI and ML breakthroughs, or any details about how they were implemented, was not public knowledge, but public research papers and blog posts documented the use of machine learning in products such as Google Translate 5 and Google Maps, 6 among others.

Google Brain was not Google's only AI interest. Since 2011, Google had acquired a number of AI companies, some rolled into existing teams at Google while others operated as subsidiaries. In March 2013, Google acquired DNNresearch, a deep neural networks startup founded by University of Toronto professor Geoffrey Hinton, one of the pioneering academics of the deep learning approach. 7
A decade later, in May 2023, Hinton resigned from Google to speak out about the dangers of the technology. In an interview, Hinton said, "I don't think they should scale this up more until they have understood whether they can control it."

In April 2013, Google acquired Wavii, an iPhone app that used natural language processing and machine learning to convert content from the web into structured semantic knowledge by topic, after a reported bidding war with Apple. 8 Google continued to acquire startups with AI and ML expertise over the years (more than 30 since 2009), with AI-related acquisitions totaling over $3.8 billion in 2020. 9

In January 2014, Google acquired UK-based AI lab DeepMind for over $500 million. 10 DeepMind, known for using games to test and train its AIs, made headlines when its AlphaGo program beat a human world champion at Go—a complex strategy board game sometimes likened to chess—in 2016. DeepMind operated as an independent Alphabet company, organizationally distinct from the division that housed Brain, until they were combined in April 2023 under the Google Research umbrella. 11

In addition to creating the TensorFlow framework, Google made a number of important advances in the areas of natural language processing (NLP), large language models (LLMs), and pre-training tools that laid the groundwork for Generative Pre-trained Transformers (GPTs). In 2017, Google introduced a new network architecture, the Transformer, that relied on a new attention mechanism to train neural networks, "dispensing with [computationally expensive] recurrence and convolutions entirely." 12 In 2018, Google open-sourced BERT (Bidirectional Encoder Representations from Transformers), a technique for NLP pre-training that has been widely adopted by subsequent LLMs. 13 Transformers proved to be a crucial step in the emergence of high-quality LLM-powered chatbots such as GPT-3. 14

Following OpenAI's 2020 announcement that GPT-3 would be licensed exclusively to Microsoft, Google ramped up its public work on LLMs. 15, b In April 2022, Google researchers published the 540-billion-parameter Pathways Language Model (PaLM), trained using a Google Brain system. 16 In January 2023, reports indicated that Google would introduce a suite of generative AI products over the coming months. 17 Google added a mission statement to its AI website that summarized its view that the technology should be used conservatively and in open collaboration with others (see Exhibit 1). In March 2023, Google launched its new chatbot, Bard (see Exhibit 2). 18

b In October 2021, Google researchers published GLaM (Generalist Language Model) with 1.2 trillion parameters, approximately seven times larger than GPT-3. In January 2022, Google researchers published LaMDA (Language Models for Dialog Applications), a family of Transformer-based neural language models specialized for dialog, with up to 137 billion parameters pre-trained on 1.56 trillion words of public dialog data and web text.

Anthropic

In late 2022, Google invested $300 million for 10% of the AI startup Anthropic and secured Anthropic's commitment to use Google Cloud as a preferred cloud provider. 19 In February 2023, Anthropic publicly announced that Google Cloud would be its preferred cloud provider. 20 In late 2023, after Amazon also invested in Anthropic, Google agreed to invest up to $2 billion more in Anthropic, with $500 million of that upfront. Anthropic also signed a multiyear deal with Google Cloud worth more than $3 billion. 21
Siblings Daniela and Dario Amodei, previously OpenAI's VP of safety and VP of research, respectively, founded Anthropic in 2021 as a for-profit, public benefit corporation with the goal of developing "large-scale AI systems that are steerable, interpretable, and robust." 22 The founding members left OpenAI in 2019 and 2020, reportedly due to concerns about its shift to a for-profit model and the first Microsoft investment. Of the group that broke away from OpenAI to form Anthropic, Dario said, "We had this view about language models and scaling, which to be fair, I think the organization [OpenAI] supported. But we also had this view about, we need to make these models safe, in a certain way, and the need to do them within an organization where we can really believe that these principles are incorporated top to bottom." 23 Daniela said that the founding team was attracted to "the opportunity to make a focused research bet with a small set of people who were highly aligned around a very coherent vision of AI research and AI safety." 24

In early 2023, Anthropic began publicizing its approach to "constitutional AI" in AI safety and research circles and released its Claude model in limited beta. Anthropic broadened access to Claude in March 2023, and its successor, Claude 2, rolled out with wider public access in July to largely positive reviews. By October 2023, Claude 2 was praised as "the 2nd best publicly accessible model after OpenAI's GPT-4." 25

In addition to the publicly accessible Claude chatbot, which was free to use in its public beta stage, Anthropic licensed Claude and its underlying constitutional AI—proprietary technologies and techniques for aligning AI systems—to other companies building AI products and services. 26 Unlike some companies that focused solely on foundation models, Anthropic was willing to create custom constitutional AI systems for partners and clients.

Competitors

OpenAI

Founded as a non-profit research institute in 2015, San Francisco-based OpenAI set out to advance artificial general intelligence (AGI) "in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." OpenAI's founding research director was machine learning expert Ilya Sutskever, formerly of Google, and the group's founding co-chairs were Sam Altman and Elon Musk. At its founding, OpenAI received $1 billion in commitments from a group of prominent Silicon Valley investors and companies, including Altman, Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys, and YC Research. 27

In March 2019, OpenAI announced that it would restructure into two organizations: OpenAI Nonprofit would remain a 501(c)(3) operating under its original charter, and OpenAI LP, a "capped-profit" partnership, would be overseen by the nonprofit.
The new structure was intended to attract new investors without compromising OpenAI's mission: profits for investors in the LP would be capped at 10 percent of investment. 28 In July 2019, Microsoft announced it would invest $1 billion in OpenAI and become the exclusive cloud provider for OpenAI, collaborating on a hardware and software platform to incorporate AGI within Microsoft Azure. 29

In June 2020, the GPT-3 c API was opened (in private beta) to select researchers. 30 Within weeks of its release, GPT-3 established itself as the most powerful and useful among large language models (LLMs) and the first one to offer a glimpse of mainstream usability through a public-facing API. 31 OpenAI used public internet data and large-scale human feedback to train GPT-3, hiring contractors in Kenya, Uganda, and India to annotate data. 32 The release of GPT-3 in the summer of 2020 marked a turning point in the development of language-based artificial intelligence. The term "generative AI" began to appear regularly in the media.

c OpenAI's major releases were numbered GPT-n: GPT-2 (February 2019), GPT-3 (June 2020), and GPT-4 (March 2023). In February 2019, OpenAI released Generative Pre-trained Transformer 2 (GPT-2), a language model that could learn new tasks (such as composing "original" text in a particular style) through the use of self-attention, building on the experimental and never-publicly-released GPT-1. Despite being a breakthrough in language modeling, GPT-2 drew little notice outside of AI research circles.

In September 2020, Microsoft announced that it would exclusively license OpenAI's GPT-3 model. The terms of the exclusivity were such that OpenAI could continue to offer third-party developers input and output through its public-facing API, but only Microsoft would have access to the back end and be able to use GPT-3's data model and underlying code in its products. 33 To the community of AI researchers and observers who had hoped OpenAI would choose to open-source the model, the Microsoft deal was both a disappointment and a confirmation of their worst fears about the change in OpenAI's non-profit status. One analyst headlined his post on the deal, "How OpenAI Sold Its Soul for $1 Billion." 34 A reporter wrote: "It's not clear exactly how or if OpenAI's 'capped profit' structure will change things on a day-to-day level for researchers at the entity. But generally, we've never been able to rely on venture capitalists to better humanity." 35

OpenAI launched ChatGPT, a chatbot interface built on GPT-3.5 d that anyone could use, in November 2022. OpenAI reportedly spent more than $540 million developing ChatGPT in 2022 alone, a figure that reflected the high costs of training models and helped explain OpenAI's continued reliance on Microsoft for computing power and cash. 36

d GPT-3.5 was an informal designation, not an official release, reflecting the improvements made to the model between 2020 and late 2022.
In January 2023, OpenAI and Microsoft announced a "multiyear, multibillion dollar" extension to their partnership with a new investment from Microsoft. 37 Terms of the deal were not disclosed, but Microsoft's investment was widely reported to be worth $10 billion, and rumors circulated that Microsoft would receive 75 percent of OpenAI's profits until it secured its investment return and a 49 percent stake in the company. 38 Microsoft would also become the exclusive cloud partner for OpenAI going forward and would begin deploying OpenAI's models in its enterprise products immediately. "In this next phase of our partnership, developers and organizations across industries will have access to the best AI infrastructure, models, and toolchain with Azure to build and run their applications," said Microsoft Chairman and CEO Satya Nadella. 39 Within months, Microsoft had deployed some of OpenAI's technology in its Bing search engine and announced plans to roll out more AI features across its portfolio of products. "The expectation from Satya is that we're pushing the envelope in A.I., and we're going to do that across our products," said Eric Boyd, the executive responsible for Microsoft's AI platform team, in an early 2023 interview. 40

As OpenAI deepened its ties with Microsoft and pushed forward on the commercialization of generative AI, its transformation from non-profit to for-profit status rankled critics, including at least one of its original donors. In March 2023, Elon Musk tweeted: "I'm still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn't everyone do it?" 41

OpenAI offered a number of its technologies to third-party developers through its API service. As of early 2023, these included access to GPT-3, DALL-E 2 (prompt-based image generation), and Codex (a set of tools for converting natural language to code). Several third-party applications had already begun building on OpenAI's platforms: GitHub Copilot (owned by Microsoft) drew on the Codex platform to create a powerful predictive autocomplete tool for programmers, and Duolingo used GPT-3.5 to interpret user input and provide French grammar corrections in its language instruction app. 42

In March 2023, OpenAI released GPT-4 (see Exhibit 3). This next-generation model demonstrated improved conversational abilities, responsiveness to user steering, potential for image-based inputs, and safety precautions to prevent harmful advice or inappropriate content. With the release of GPT-4, OpenAI published benchmarks on the relative performance of GPT-3.5 and GPT-4 on a range of standardized tests (the Uniform Bar Exam, LSAT, GRE, and topic-specific AP tests from Chemistry to English Literature). In most tests, GPT-4 outperformed all other models, often significantly, achieving what OpenAI called "human-level performance on various professional and academic benchmarks." 43

Behind the scenes, GPT-4 was reported to be more computationally efficient and cost-effective than its predecessor, gains that OpenAI had presumably achieved through advances in training techniques and model architecture. Researchers reading through the technical documentation of GPT-4 in search of information on how those gains were achieved — a common practice with the release of a new model in the pre-commercial days of AI research labs — found only the following passage: "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar." 44
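For third-party developers, building on OpenAI's platform meant sending prompts to a hosted API and receiving generated text back; the weights, code, and training details never left OpenAI. As a purely illustrative sketch (not drawn from the case), a grammar-correction call of the kind the Duolingo example suggests might look roughly like the following with the early-2023 openai Python client; the model name, prompt, and API key are placeholders.

    import openai  # early-2023 era client (pre-1.0 interface)

    openai.api_key = "sk-..."  # placeholder; a real key comes from OpenAI's developer dashboard

    # Hypothetical example: ask a hosted GPT-3.5-class model to correct a French sentence.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model name; the case says only "GPT-3.5"
        messages=[
            {"role": "system", "content": "Correct the French sentence and briefly explain the fix."},
            {"role": "user", "content": "Hier, j'ai acheter des pommes au marché."},
        ],
    )

    # Only the generated answer comes back; the model itself stays on OpenAI's servers.
    print(response["choices"][0]["message"]["content"])

The boundary this illustrates is the same one formalized in the GPT-4 report above and in the 2020 Microsoft licensing deal: developers rent inputs and outputs, while the architecture, data, and weights remain proprietary.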
Sutskever underscored OpenAI's shift toward closed models, proprietary training methods, and the non-disclosure of training data in an interview: "We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI – AGI – is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea… I fully expect that in a few years it's going to be completely obvious to everyone that open-sourcing AI is just not wise." 45

While it remained unclear how copyright law would apply to the use of copyrighted (but publicly available) data and content, OpenAI drew much of the attention around the issue. In July 2023, OpenAI reached an agreement to license content from the Associated Press under a most-favored-nations clause entitling the publisher to revise the agreement if another publisher got a better deal. 46 In December 2023, the New York Times sued OpenAI and Microsoft for allegedly infringing on its content, joining several other ongoing lawsuits from book authors. 47

Microsoft

Microsoft had been working on the natural language component of artificial intelligence since the founding of Microsoft Research in 1990. The internal research division made an immediate splash by hiring away three of the top computational linguists of the era from rival IBM to start its NLP research group. Within a few years, Microsoft had become a world leader in the development of grammar detection, spell check, and automatic translation tools. 48

Advances in machine learning picked up when the cloud era got underway in the early 2010s. Satya Nadella was promoted to president of the Server and Tools Division in 2011, the division where Microsoft's then-nascent cloud initiative, Azure, was housed. In February 2014, Nadella took over the CEO role from his predecessor Steve Ballmer. That summer, Microsoft announced Azure ML, one of the first cloud services to offer a machine learning platform. 49

In the post-transformer deep-learning era (since 2017), Microsoft conducted advanced AI, ML, and LLM research primarily through its Turing program, a collaboration with academic researchers from around the world. 50 The Turing Natural Language Generation model (Turing-NLG), published in 2020, contained 17 billion parameters and outperformed other state-of-the-art models at the time. 51 In October 2021, Nvidia and Microsoft Research's Turing program combined their LLM efforts to publish Megatron-Turing NLG, at the time the world's largest generative language model, with 530 billion parameters. 52

When Microsoft invested its first $1 billion in OpenAI in 2019, the headline was that Azure would become OpenAI's exclusive cloud provider. 53 One analyst noted, "Beyond the financial risks and rewards for Microsoft, the bigger prize is that it gets to work alongside OpenAI in developing the technology on Microsoft Cloud, which instantly puts Microsoft at the forefront of what could be the most important consumer technology over the next decade." 54

According to reporting from The Information, executives inside Microsoft were skeptical in 2019 that OpenAI would live up to the hype. Peter Lee, head of Microsoft Research, found it hard to believe that OpenAI could have accomplished in a few years what Microsoft researchers had been unable to do in a decade. Even Microsoft co-founder Bill Gates warned Nadella against the OpenAI investment.
Nadella proceeded with caution, using the in-house Microsoft Research group to check OpenAI's work. Recounting the evolution of the relationship, the Information article continued: "Over time, those doubts began to fade. When Microsoft researchers compared OpenAI's language models side by side with Microsoft's internal models, collectively dubbed Turing, it became undeniable that the startup had built something far more sophisticated." 55

As the partnership progressed, Microsoft began to value OpenAI as more than just a big customer for Azure or a long-term R&D bet. The more confidence Nadella gained in OpenAI's generative AI capabilities, the more aggressively he pushed teams across the organization to integrate OpenAI's models into Microsoft's products. Microsoft's subsequent investments, including a January 2023 investment reported to be worth $10 billion, reflected a growing confidence in its startup partner. 56 However, that confidence was called into question in November 2023 after a period of internal strife at OpenAI in which the non-profit's board fired its CEO Altman and then was quickly forced to bring Altman back along with a new board (Exhibit 4).

Microsoft continued its own in-house AI efforts alongside its ongoing integration of OpenAI technology. Microsoft researchers worked on large-scale models across the organization, including the user-facing implementations of GPT in Office (see Exhibit 5) and Bing, and the less user-facing machine-learning infrastructure being built in Azure. Prometheus, a Microsoft-developed model that helped merge the search and chat functions in Bing, was released in February 2023. 57 An article in Wired described the ongoing internal work at Microsoft:

Microsoft has held back from going all-in on OpenAI's technology. Bing's conversational answers do not always draw on GPT-4, Ribas [Microsoft's CVP of Search and AI] says. For prompts that Microsoft's Prometheus system judges as simpler, Bing chat generates responses using Microsoft's homegrown Turing language models, which consume less computing power and are more affordable to operate than the bigger and more well-rounded GPT-4 model. 58

Meta

Since its founding in 2013, Facebook AI Research (FAIR), later Meta AI, had been led by Yann LeCun, a French-American AI researcher and professor of computer science at NYU. LeCun and other Facebook researchers played an important role in advancing the theoretical basis of generative AI. e 59 During the 2010s, Facebook steadily implemented this research in a range of internal applications, from newsfeed ranking to content moderation, language translation, and image recognition. Like Google's extensive in-house AI work, much of what Meta did in AI was never made public, so an observer could only guess at the full extent of its work based on its published papers and public contributions to open-source projects. 60

e One of their contributions was Generative Adversarial Networks (GANs), which involved pitting two neural networks (a generator and a discriminator) against each other to create an artificial intelligence co-evolutionary arms race capable of doing things like generating novel text and realistic-looking images. Throughout the 2010s, FAIR also made advances in self-supervised learning (SSL) on large unstructured data sets and rapid text classification, inventing a framework called fastText, a simplified approach to text classification that could run on basic, inexpensive hardware.

In August 2022, Meta made its chatbot prototype Blenderbot 3 available to the public. Although Blenderbot 3 was built on Meta's open-source OPT-175B, f was released in the midst of the early GPT-3 hype, and preceded ChatGPT by several months, the chatbot from Meta didn't garner the same widespread attention or enthusiasm.
Researchers and reviewers found Blenderbot underwhelming compared to the early version of GPT-3, as Vox noted in its headline from August 2022, "Why Is Meta's New AI Chatbot So Bad?" 61

f In May 2022, Meta released OPT-175B (Open Pretrained Transformer), a large language model with 175 billion parameters, under a non-commercial GPL 3.0 license. In November 2022, Meta AI announced CICERO, an AI agent that had achieved human-level performance at the strategy game Diplomacy. CICERO was a language model integrated with strategic reasoning to enable effective negotiation and cooperation with human players.

While Google traditionally led in open-source AI models and tools, Meta was quickly establishing a reputation as the new leader in open-source AI. On the open-source front, Meta developed and maintained PyTorch, a computing package and machine learning framework for Python based on the open-source Torch package. Like TensorFlow, PyTorch provided a pre-built set of developer tools that could be used to quickly set up and train deep-learning neural networks. Like Google's investment in TensorFlow and other company-supported open-source projects, Meta's contribution to the PyTorch ecosystem did not directly contribute to Meta's bottom line, but its availability had valuable second-order effects, generating goodwill and drawing the developer community to Meta's preferred toolset. Meta's PyTorch-based internal tools (which remained proprietary) nonetheless benefited from the rapid innovation that open source made possible, strengthening Meta's appeal as a hub of innovation and an employer of top AI/ML engineers. In 2022, Meta transferred control of PyTorch to the Linux Foundation. 62

In February 2023, Facebook released LLaMA (Large Language Model Meta AI), also under a non-commercial GPL 3.0 open-source license, and shared it with the AI research community. Built on publicly available data, LLaMA was released in four sizes: 7 billion, 13 billion, 33 billion, and 65 billion parameters, making it more flexible for researchers with different computational capacities. LLaMA's 13-billion-parameter model outperformed GPT-3 on most benchmarks, and LLaMA-65B was competitive with the best LLMs in the world. 63 Meta also released details about how the model had been built and trained, including model weights that were proprietary for comparable models at OpenAI and Google. The relative openness of LLaMA made it possible for AI researchers outside of the biggest labs to examine how a genuinely advanced LLM was constructed.
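To illustrate what that openness meant in practice, the sketch below shows how a researcher outside the biggest labs might load an open-weights Llama-family checkpoint and inspect it. This is purely illustrative and goes beyond the case: it assumes the third-party Hugging Face transformers library (which runs on PyTorch), a placeholder checkpoint name, and a user who has already been granted access to the weights.

    # Illustrative sketch only: the case does not mention Hugging Face; the library,
    # checkpoint identifier, and access terms here are assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "meta-llama/Llama-2-7b-hf"  # placeholder open-weights checkpoint

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    # With the weights downloaded, the architecture itself is open to inspection.
    print(model.config)  # layer count, hidden size, vocabulary size, etc.
    print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.1f} billion parameters")

    # The model can also be run, fine-tuned, or modified locally.
    inputs = tokenizer("Open model weights let researchers", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

None of this inspection is possible with an API-only model such as GPT-4, where only prompts and completions cross the provider's boundary.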
Though LLaMA was not originally released as open source, copies of it leaked shortly after it was shared with researchers. Within a few days, LLaMA had effectively become open source. 64 Meta followed through by officially licensing Llama 2 as open source, with limitations. The license appeared at first like many other open-source licenses, but reading further revealed additional fine print:

If, on the Llama 2 version release date, the monthly active users of the products or services made available…is greater than 700 million monthly active users g in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 65

As of February 2024, Meta AI offered neither cloud infrastructure services nor commercial API services to developers and had not announced any plans to do so. In fact, Meta looked to be doubling down on its approach of using proprietary internal tools built on open-source technologies, as it had done with PyTorch and Open Compute. h 66 In Meta's Q1 2023 earnings call, Zuckerberg said:

Right now most of the companies that are training large language models have business models that lead them to a closed approach to development. I think there's an important opportunity to help create an open ecosystem. If we can help be a part of this, then much of the industry will standardize on using these open…