OpenAI Revenue is MASSIVE, But Apple Isn't Paying a Penny

OpenAI, Apple, Microsoft, and more!

Despite competition, OpenAI's valuation hits $86 billion with plans for new AI products

OpenAI's annualized revenue has reached $3.4 billion, with $3.2 billion from products and services and $200 million from AI model access through Microsoft Azure, CEO Sam Altman disclosed during an all-hands meeting. This significant increase from $1.6 billion in late 2023 highlights OpenAI's growth, driven by enterprise sales and the success of ChatGPT. The company recently hired Sarah Friar as CFO to support its continued investment in AI research and global business expansion.

Sponsor

Vultr is empowering the next generation of generative AI startups with access to the latest NVIDIA GPUs.

Try it yourself when you visit getvultr.com/forwardfutureai and use promo code "BERMAN300" for $300 off your first 30 days.

  • Apple Isn’t Paying OpenAI For ChatGPT Partnership - Apple and OpenAI are not financially compensating each other for integrating ChatGPT into iPhones. OpenAI aims to leverage this exposure to promote its paid ChatGPT version, while Apple benefits from a 30% cut of in-app subscriptions. Future plans may involve revenue-sharing agreements with AI partners. This collaboration, announced at Apple's developer conference, allows ChatGPT to perform tasks Apple's AI cannot. Users may eventually choose from various chatbots on Apple devices.

  • Building Generative AI Features Responsibly - Meta is extending its AI capabilities, including generative AI features and its Llama language models, to European users. The company emphasizes training these models on locally relevant, publicly shared content to ensure cultural and linguistic accuracy. Committed to transparency and responsible development, Meta has consulted the EU's privacy authorities, notified European users about its AI practices, and provided a straightforward opt-out mechanism. While private messages aren’t used for AI training, future enhancements may draw on public interactions. Meta asserts its approach complies with EU law and gives Europe the chance to participate in and shape AI innovation, warning that restrictive data policies could limit the region's access to advanced AI developments.

  • Adobe overhauls terms of service to say it won’t train AI on customers’ work - Adobe has responded to backlash over perceived ambiguity in its terms of service regarding AI training with user content. The company clarified that it has never used customer content for AI model training nor claimed ownership of such content. To address the concerns, Adobe announced upcoming revised terms that reinforce its commitment to transparency and to not training AI on users' work. These changes respond to fears sparked by vague language in a recent terms update, which Adobe acknowledges was unclear. The company is also seeking to rebuild trust within the creative community, which has been critical of Adobe's subscription model and AI practices. Despite improvements in moderating content for its AI model, Firefly, Adobe admitted to imperfections and assured users they can opt out of automated systems. The new terms take effect on June 18th, aiming to win back trust and ensure Adobe remains a trusted partner for creators.

  • Microsoft to Rent Oracle Cloud Servers for OpenAI - Microsoft will rent Oracle’s cloud servers to expand capacity for OpenAI, supplementing its own servers due to high demand for Nvidia GPUs essential for generative AI software. This collaboration, announced alongside Oracle's earnings report, aims to help OpenAI scale operations. Oracle's revenue grew to $14.3 billion, with cloud revenue increasing by 20%. The partnership builds on a prior agreement facilitating data movement between Microsoft and Oracle's cloud services, addressing computing capacity needs amid a global GPU shortage.

  • States Take Up A.I. Regulation Amid Federal Standstill - California lawmakers have put forward around 30 measures aimed at regulating artificial intelligence, representing one of the largest state-led initiatives to curtail the technology's potential negative impacts. These bills seek to introduce stringent restrictions to address job loss concerns, the spread of disinformation, and national security threats. Proposals focus on preventing AI discrimination in sectors like housing and health care, and on protecting intellectual property and employment. As federal action lags, California's legislative history, including a robust privacy law and a child safety law, positions the state's AI regulations to potentially shape nationwide standards. The state's push reflects a wider trend, with almost 400 AI-focused bills introduced in various states. California's legislative session is expected to conclude voting on these AI bills by August 31.

  • What Do Google’s AI Answers Cost the Environment? - Google's AI Overviews, generated by its Gemini language models, now offer direct answers to search queries at the top of the results page, aiming to reach one billion users by 2024. Despite initial errors, concerns have been raised about the environmental impact of this technology. Generative AI searches demand significantly more energy than traditional methods, because generating a new answer requires far more computation than retrieving and ranking existing links. The use of such AI could double data center energy usage by 2026, leading to an increased carbon footprint and higher operational costs for companies like Google and Microsoft. While tech giants are investing in renewable energy solutions to offset this, the mismatch between renewable energy supply and data center demand presents a challenge. The efficiency of these systems is expected to improve, potentially reducing costs per query over time. There are efforts underway, like giving AI models Energy Star ratings, to better inform users about the environmental costs associated with AI.
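The scale of that energy gap is easiest to see with a back-of-envelope calculation. The per-query figures below (roughly 0.3 Wh for a conventional search, around 3 Wh for a generative answer) are commonly cited external estimates, not numbers from the article, so treat this as an illustrative sketch only:

```python
# Back-of-envelope comparison of daily search energy use.
# The per-query figures are illustrative assumptions, not article data.
TRADITIONAL_WH = 0.3   # commonly cited estimate per classic web search
GENERATIVE_WH = 3.0    # rough estimate per LLM-generated answer

QUERIES_PER_DAY = 8_500_000_000  # order-of-magnitude global search volume

def daily_energy_mwh(wh_per_query: float, queries: int = QUERIES_PER_DAY) -> float:
    """Total daily energy in megawatt-hours for a given per-query cost."""
    return wh_per_query * queries / 1_000_000  # Wh -> MWh

traditional = daily_energy_mwh(TRADITIONAL_WH)
generative = daily_energy_mwh(GENERATIVE_WH)

print(f"traditional: {traditional:,.0f} MWh/day")
print(f"generative:  {generative:,.0f} MWh/day")
print(f"multiplier:  {generative / traditional:.0f}x")
```

Even with generous error bars on the inputs, the order-of-magnitude multiplier is what drives the data-center projections the article cites.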

  • Elon Musk unexpectedly drops legal action against OpenAI - Elon Musk's legal team has asked a California court to dismiss his lawsuit against OpenAI and its CEO Sam Altman, just one day before a hearing on OpenAI's motion to dismiss. Musk had sued OpenAI, which he co-founded, claiming it strayed from its humanitarian mission toward profit-making. No reason for the withdrawal was provided, and the dismissal is "without prejudice," allowing the case to be refiled. This development follows Musk's criticism of the partnership between OpenAI and Apple, amid his own launch of xAI and the chatbot Grok in competition with ChatGPT.

  • OpenAI’s Mira Murati fires back at Elon Musk for describing her company’s new partnership with Apple as ‘creepy spyware’ - At a Fortune event, OpenAI CTO Mira Murati responded to Elon Musk's characterization of the integration of OpenAI's chatbot technology into Apple's iOS as spyware. Murati refuted the claim, emphasizing user privacy and product safety. Apple's collaboration promises users enhanced AI responses without sharing data with OpenAI. Musk, a former OpenAI co-founder, expressed distrust and forbade his companies from using related Apple devices. Murati also highlighted OpenAI's commitment to transparency and to addressing public misunderstanding of AI. High-profile executive hires at OpenAI have invited speculation of a future IPO, though Murati did not confirm such plans. OpenAI recently updated its board structure to increase accountability and oversight following an internal leadership dispute.

  • Nokia CEO Makes World's First 'Immersive' Phone Call - Nokia CEO Pekka Lundmark made the first-ever phone call using immersive audio and video technology, which delivers three-dimensional sound for more lifelike interactions. The technology, which enhances current monophonic calls, will be implemented in 5G Advanced standards, allowing for real-time spatial audio transmission. This innovation is expected to improve person-to-person and conference calls by making voices sound as though they are in distinct spatial locations. Nokia plans to license this technology, which could take a few years to become widely available.

  • Apple's AI, Apple Intelligence, is boring and practical — that's why it works - Apple’s new AI, rebranded as Apple Intelligence, focuses on practical, user-friendly features in iOS 18 rather than flashy, error-prone innovations. By integrating AI into everyday apps like Siri, Photos, and messaging with features like proofreading, prioritized notifications, and photo editing, Apple aims to enhance user experience without overwhelming users. This cautious approach avoids the pitfalls seen in other AI implementations, ensuring reliability and minimizing risks associated with AI misuse. Apple Intelligence will be available in beta this fall.

  • Microsoft GPT Builder is being retired - Microsoft will retire its GPT Builder and custom GPTs on July 10, 2024. This retirement will affect both Microsoft-created and customer-created GPTs, with all associated data being removed. Microsoft is shifting focus to other AI tools within its Copilot suite, emphasizing the use of pre-built AI models to streamline user experience and ensure data privacy. Users currently utilizing GPT Builder will need to transition to alternative solutions before the specified date.

  • Announcing ARC Prize - The ARC Prize offers a $1,000,000+ pool to encourage the development of open artificial general intelligence (AGI), challenging the current stagnation in the field. Current large language models (LLMs) excel at memorization rather than genuine reasoning, lacking the ability to acquire new skills as humans do. ARC-AGI, a measure of general intelligence introduced by François Chollet, stands unsaturated by modern AI, with the best scores reaching just 34%, compared to near-perfect human scores. The competition aims to promote breakthroughs in AGI by incentivizing fresh ideas and open-source solutions, opposing the recent trend of closed-source progress and the oversimplified mantra that "scale is all you need." Hosted by Mike Knoop and François Chollet, the ARC Prize welcomes participants from all backgrounds to aid in forwarding AGI research and potentially discover new insights into the nature of intelligence.

  • Consumer Privacy at OpenAI - OpenAI emphasizes user privacy by allowing control over data use and ensuring that API data is not used for model training. ChatGPT users can manage their data settings, and "Temporary Chats" are excluded from training. OpenAI's models avoid storing or recalling personal information, focusing on reducing the use of private data and training models to reject sensitive information. Users can request data deletion and opt-out options to maintain privacy.

  • Samsung unveils plan to speed up delivery of AI chips - Samsung Electronics is aiming to revolutionize the production of artificial intelligence (AI) chips with a one-stop-shop service that integrates its memory, foundry, and chip packaging operations. This approach has reduced AI chip production time by approximately 20%. Siyoung Choi, head of the foundry business, underscored the importance of AI in technology at a Samsung event, where the expectation was set for the global chip industry to grow to $778 billion by 2028, with AI chips being a significant contributor. Samsung is aligning with projections by OpenAI CEO Sam Altman on the surging demand for AI chips. Samsung’s all-inclusive model allows for closely integrated chip components—an advantage given the intense integration required for AI data processing. The company is bolstering its competitive edge with its gate all-around (GAA) chip architecture aimed at enhancing performance and reducing power usage, with plans to mass-produce 3-nanometer GAA chips later this year. Additionally, Samsung disclosed its development of a 2-nanometer chipmaking process intended for high-performance computing, projecting mass production to commence in 2027.

Awesome Research Papers

  • Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning - Husky is presented as a unified, open-source language agent skilled in various complex tasks, including numerical, tabular, and knowledge-based reasoning. Unlike specialized agents, Husky offers a generalized solution, working through two main phases: deciding on the next step and employing specialized models to perform the chosen action. A broad range of actions is incorporated within Husky's design, and it is backed by expertly curated data. Performance evaluations across 14 datasets show Husky's superior capabilities in comparison to previous models, even rivaling advanced language models like GPT-4 in tasks demanding mixed-tool reasoning and knowledge retrieval. Husky’s resources are publicly accessible online.

  • Towards a Personal Health Large Language Model - This paper introduces the Personal Health Large Language Model (PH-LLM), which builds on the Gemini model, focusing on the interpretation and analysis of numerical time-series data from mobile and wearable devices. Three datasets were developed to evaluate the model's abilities in generating personalized health insights, understanding expert knowledge, and predicting sleep outcomes. Results indicate that PH-LLM can perform on par with experts in fitness assessments and shows promise in sleep analysis upon fine-tuning. Additionally, PH-LLM scored higher than expert averages on domain knowledge tests and proved effective in using multimodal data to predict sleep quality. Despite encouraging outcomes, further research is deemed necessary for the safe application in the personal health domain.

  • A Survey on Hardware Aware Efficient Training of Deep Neural Networks - This paper provides a comprehensive survey of techniques for efficiently training deep neural networks (DNNs) with consideration of hardware constraints. It examines methods to optimize computation, memory usage, and energy efficiency, focusing on algorithm-hardware co-design. The survey categorizes approaches into pruning, quantization, efficient architecture design, and hardware acceleration. It also discusses challenges and future directions in the field, aiming to guide researchers in developing more efficient and scalable DNN training methods.
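Of the technique families the survey categorizes, quantization is the easiest to illustrate: weights are mapped from 32-bit floats to low-bit integers to cut memory traffic and compute. A minimal symmetric int8 sketch in pure Python (a generic illustration of the idea, not code from the survey):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats into [-127, 127].

    The scale is chosen so the largest-magnitude weight maps to +/-127.
    """
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.91, -0.42, 0.07, -1.27]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Real training-time schemes add per-channel scales, zero points, and straight-through gradient estimators, but the memory win comes from exactly this float-to-int mapping.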

  • Together MoA — collective intelligence of open-source models pushing the frontier of LLM capabilities - Introducing Mixture of Agents (MoA), an architecture that layers multiple large language models (LLMs) to generate enhanced responses through a collaborative process. MoA demonstrated superior performance on AlpacaEval 2.0, scoring 65.1% and surpassing GPT-4o. It introduces two roles within its system: Proposers, which generate initial responses, and Aggregators, which synthesize these into higher-quality outcomes. Various configurations, like Together MoA, employ different combinations of proposers and aggregators, showing significant improvements on benchmarks like MT-Bench and FLASK.
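The two-role design can be sketched as a simple pipeline. The model functions below are toy stand-ins (real MoA layers call different open-source LLMs), but the structure — proposers answering in parallel, an aggregator synthesizing their outputs — mirrors the description above:

```python
from typing import Callable, List

Model = Callable[[str], str]  # a model is just prompt -> response here

def moa_layer(prompt: str, proposers: List[Model], aggregator: Model) -> str:
    """One Mixture-of-Agents layer: gather proposals, then synthesize.

    Each proposer answers the prompt independently; the aggregator sees
    the prompt plus all numbered proposals and produces a refined response.
    """
    proposals = [p(prompt) for p in proposers]
    aggregate_prompt = prompt + "\n\nCandidate responses:\n" + "\n".join(
        f"{i + 1}. {text}" for i, text in enumerate(proposals)
    )
    return aggregator(aggregate_prompt)

# Toy stand-in models for demonstration only.
proposers = [lambda q: f"answer-A to: {q}", lambda q: f"answer-B to: {q}"]
aggregator = lambda q: f"synthesis of {q.count('answer-')} proposals"

print(moa_layer("What is MoA?", proposers, aggregator))
```

Layers stack naturally: the output of one `moa_layer` call becomes the prompt for the next, which is how the deeper Together MoA configurations are built.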

Introducing Apple’s On-Device and Server Foundation Models - Apple introduced Apple Intelligence at WWDC 2024, a system deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia, featuring generative models for enhancing user tasks. It includes a ~3 billion parameter on-device language model and a larger cloud-based model on Apple silicon servers. The focus is on empowerment, user representation, mindful design, and privacy, avoiding personal data usage in training foundation models. Apple's AXLearn framework enables efficient model training while post-training algorithms improve instruction-following qualities. Optimization techniques ensure on-device and server model efficiency, and adapter layers allow dynamic task-specific tuning. Extensive human-led evaluations demonstrate the superior performance and safety of Apple models compared to competitors, with future updates promised on a broader set of models.

Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling - Introducing Samba, a novel hybrid model designed to handle sequences with infinitely long context. It integrates Mamba, a State Space Model, with Sliding Window Attention for efficient memory compression and recall. Tested up to 3.8 billion parameters and trained on 3.2 trillion tokens, Samba shows significant improvements over existing attention- and SSM-based models on diverse benchmarks. Remarkably, although trained on 4K-length sequences, it extrapolates to 256K context lengths with retained memory recall and shows improved predictions at context lengths up to 1 million tokens. Additionally, Samba processes sequences faster than Transformers, with 3.73x higher throughput for 128K user prompts and a 3.64x speedup when generating 64K tokens.
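The sliding-window half of Samba's hybrid is easy to picture: each token attends only to the previous `w` positions, so attention cost stays linear in sequence length while the SSM layers carry long-range state. A minimal causal sliding-window mask in pure Python (an illustration of the general mechanism, not Samba's implementation):

```python
def sliding_window_mask(seq_len: int, window: int):
    """Boolean mask: mask[i][j] is True when token i may attend to token j.

    Causal (j <= i) and windowed (i - j < window), so each row has at
    most `window` True entries regardless of sequence length.
    """
    return [
        [j <= i and i - j < window for j in range(seq_len)]
        for i in range(seq_len)
    ]

# Visualize a 6-token sequence with a window of 3.
for row in sliding_window_mask(seq_len=6, window=3):
    print("".join("#" if allowed else "." for allowed in row))
```

Because the per-row attention span is capped at `window`, cost grows linearly with sequence length instead of quadratically, which is what makes the reported long-prompt throughput gains possible.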

FontStudio: Shape-Adaptive Diffusion Model for Coherent and Consistent Font Effect Generation - Recent advancements in diffusion-based text-to-image models have extended into the realm of creating artistic fonts, a field typically dominated by professional designers. Focusing on multilingual font text effects, which require intricate visual consistency within font shapes, researchers have developed a shape-adaptive diffusion model. This model navigates the complexities of non-traditional canvas shapes by utilizing a curated image-text dataset and integrating segmentation masks to direct the image generation. Additionally, a method for training-free, shape-adaptive effect transfer ensures uniformity across different letters by leveraging a font effect noise prior and a concatenated latent space. The resulting FontStudio system has demonstrated a strong user preference (78% win rates on aesthetics), surpassing even Adobe Firefly, as per user studies.

Awesome New Launches

Introducing Shutterstock ImageAI, Powered by Databricks: An Image Generation Model Built for the Enterprise - Databricks and Shutterstock have collaborated to introduce Shutterstock ImageAI, a new enterprise-level text-to-image Generative AI model. Using Databricks' Mosaic AI and Shutterstock's high-quality image database, ImageAI provides businesses with the ability to create tailored, trusted, and commercially viable images quickly. Addressing common enterprise concerns, it ensures data governance, observability with model integration into applications, and the security of pre-trained models. The partnership aligns with both companies' dedication to innovating AI responsibly and enhancing creative processes. ImageAI is currently in private preview on Databricks and active on Shutterstock's AI image generator platform.

DuckDuckGo Releases Portal Giving Private Access to AI Models - DuckDuckGo has launched a platform, accessible at duck.ai, that provides users with private access to four AI models, including OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku. The service prioritizes privacy by ensuring that neither DuckDuckGo nor the chatbot providers use user data for training models, with all interactions remaining anonymous. This initiative addresses growing concerns over data privacy, offering a secure way for users to interact with AI models without compromising their personal information.

AI/BI | Databricks - Databricks has introduced AI/BI, a business intelligence product designed to democratize data analytics. It includes Dashboards, which enable users to create interactive visualizations using natural language, and Genie, an AI tool for conversational data analysis. AI/BI is integrated with Databricks' platform, ensuring unified governance and security, and providing scalable, instant insights. This solution aims to empower users across organizations, even those without technical expertise, to analyze data and generate insights efficiently.

Luma Dream Machine - Luma's Dream Machine is an AI model that generates high-quality, realistic videos quickly from text and images. Designed to produce 120 frames in 120 seconds, it creates smooth, action-packed shots with consistent characters and accurate physics. The platform aims to democratize video creation by allowing users to produce visually engaging content efficiently. Dream Machine is positioned as a tool for creating dynamic stories and cinematic experiences, accessible to everyone.

Stable Diffusion 3 Medium — Stability AI - Stability AI has launched Stable Diffusion 3 Medium, an advanced text-to-image AI model featuring 2 billion parameters. This model excels in photorealism, prompt comprehension, and efficient resource usage, making it suitable for consumer PCs and enterprise GPUs. It supports detailed, customizable outputs while maintaining low VRAM requirements. Released under an open non-commercial license, it aims to democratize generative AI. Stability AI has collaborated with NVIDIA and AMD to optimize performance and ensure responsible AI use.

Smart Paste for context-aware adjustments to pasted code - Google has introduced Smart Paste, an innovative tool that enhances the coding workflow by making context-aware adjustments to pasted code. Leveraging AI and large sequence models, Smart Paste predicts necessary modifications, such as syntax corrections or variable renaming, to streamline development. In a study involving around 40,000 engineers, Smart Paste was used in 6.9% of paste actions, with a 42.5% acceptance rate, significantly improving efficiency and user experience. The tool balances speed and accuracy, ensuring seamless integration into the development process.

Check Out My Other Videos:
