AI Is The New Economy

JPMorgan CEO Highlights AI's Potential to Augment Jobs Across Industries in Annual Shareholder Letter

JPMorgan Chase CEO Jamie Dimon has likened the transformational potential of artificial intelligence (AI) to that of the steam engine, stating that it could augment virtually every job. Dimon dedicated a portion of his annual shareholder letter to the importance of AI for JPMorgan's business and for society at large. The bank has identified over 400 AI use cases across areas including marketing, fraud, and risk, has amassed thousands of AI experts and data scientists, and is exploring the deployment of generative AI. “Over time,” Dimon wrote, “we anticipate that our use of AI has the potential to augment virtually every job.” He emphasized that the consequences of AI could be extraordinary and possibly as transformational as significant historical inventions such as the printing press, the steam engine, electricity, computing, and the internet.

  • Meta Platforms to Launch Small Versions of Llama 3 Next Week - Meta Platforms plans to launch two smaller versions of its upcoming Llama 3 large language model next week, ahead of the release of the full, larger version expected this summer. The smaller models will be cheaper and faster to run than the larger Llama 3, which is expected to have over 140 billion parameters. Unlike the full Llama 3, the initial smaller models will not have multimodal capabilities to understand both text and images. Meta is working to make Llama 3 more open to answering contentious questions, after finding Llama 2 was too conservative in its responses.

  • Announcing new Microsoft AI Hub in London - Microsoft has announced a new AI hub in London, called Microsoft AI London, to focus on developing advanced language models and infrastructure. The hub, led by AI scientist and engineer Jordan Hoffman, will collaborate with Microsoft's other AI teams and partners, including OpenAI. The investment in the U.K. reflects the country's commitment to advancing AI responsibly and with a safety-first approach. The hub adds to Microsoft's existing presence in the U.K., including the Microsoft Research Cambridge lab, and is part of Microsoft's broader AI strategy, which includes a £2.5 billion investment in AI infrastructure and talent development in the U.K. by 2026.

  • Ive and Sam Altman’s AI Device Startup in Funding Talks with Emerson, Thrive - Jony Ive, former Apple design chief, and Sam Altman, CEO of OpenAI, are collaborating on an AI-powered device intended to challenge the conventional smartphone experience and explore new ways of interacting with artificial intelligence. The as-yet-unnamed startup is in advanced discussions to raise up to $1 billion from venture capitalists, with potential investors including Thrive Capital, an OpenAI investor, and Emerson Collective, the venture capital firm founded by Laurene Powell Jobs. The device is not expected to resemble a traditional smartphone, and OpenAI's technologies, including its state-of-the-art GPT models, are likely to power its AI capabilities.

  • Big Tech companies form new consortium to allay fears of AI job takeovers - Recent developments highlight growing concerns about AI's impact on employment: UPS attributed its largest-ever round of layoffs partly to AI advancements, and IBM has paused hiring for roles it believes AI will automate. Surveys show a significant share of workers fear losing their jobs to AI, with some expecting AI-driven layoffs at their own employers. The AI-Enabled ICT Workforce Consortium (ITC), which includes major tech companies such as Cisco, Google, and Microsoft, aims to counter these fears through reskilling and upskilling initiatives. The ITC's stated objectives are to assess AI's impact on 56 strategic ICT roles and recommend training, but skepticism persists given the lack of concrete outcomes so far and a notable decline in demand for AI jobs. The tech giants' commitments to skill millions of workers sound promising, yet industry experts are still waiting for concrete strategies and actionable recommendations that would translate these efforts into real employment security as AI adoption accelerates.

  • Groq CEO: “We no longer sell hardware” - The CEO of Groq, a hardware startup, has announced that they will no longer sell hardware directly to end-users. Instead, Groq will focus on building data centers and renting out access to their hardware. This decision was likely made because supporting end-users directly, especially in small quantities, could have been a "death by 1000 cuts" situation for the company. By building and maintaining their own data centers, Groq can provide cloud access to their hardware while avoiding the challenges of supporting individual users.

  • Oracle and Palantir Join Forces to Deliver Mission Critical AI Solutions to Governments and Businesses - Oracle and Palantir Technologies have announced a partnership to offer joint cloud, data, and AI solutions aiming to enhance decision-making capabilities from headquarters to the tactical edge. Oracle’s expansive cloud and AI infrastructure will pair with Palantir’s AI platforms, Foundry and Gotham, facilitating data integration and accelerated decision-making for organizations, while meeting high sovereignty and security requirements. Palantir will adopt Oracle Cloud Infrastructure (OCI) for its Foundry workloads and expand its AI Platforms across Oracle’s diverse cloud environments. The collaboration intends to provide comprehensive cloud and AI services globally, ensuring consistent performance, with a focus on meeting the stringent needs of defense and intelligence customers. Oracle’s AI strategy will augment Palantir’s AI capabilities, catering to accelerated decision-making requirements.

  • Securing Canada’s AI advantage - Canada is advancing its position in the global AI race with a $2.4 billion investment from Budget 2024 to bolster the AI sector and ensure inclusive growth. Recognizing AI's transformative impact on the economy and the job market, the investment aims to create high-paying jobs and foster innovation across industries. Key components include $2 billion for computing infrastructure, support for AI startups, and assistance for businesses adopting AI. To sustain growth and maintain its competitive edge in AI, Canada is committed to developing a sovereign AI compute strategy and to new safety measures, including a new AI Safety Institute and strengthened enforcement of the Artificial Intelligence and Data Act. This strategic move positions Canada as a world leader in AI, enhancing productivity and promoting responsible, inclusive technology development for current and future generations.

  • OpenAI transcribed over a million hours of YouTube videos to train GPT-4 - Recent reports by The Wall Street Journal and The New York Times highlight the struggles AI companies face in acquiring high-quality training data. Operating in a legal grey area, OpenAI used its Whisper model to transcribe over a million hours of YouTube videos to train GPT-4, a practice the company considers fair use. Google and Meta are also grappling with data scarcity: Google says its use of YouTube content stays within its agreements with creators, while Meta has considered options such as licensing books to gather more data. The industry's demand for training data may outstrip the production of new content by 2028, prompting exploration of alternatives such as synthetic data and structured "curriculum learning." In the meantime, reliance on potentially unauthorized data is leading to legal challenges and disputes.

  • Spotify launches personalized AI playlists that you can build using prompts - Spotify introduces 'AI playlists,' a beta feature enabling users to create custom playlists through written prompts on the mobile app in the UK and Australia. Leveraging large language models, the AI interprets requests, which can be as quirky as "songs to serenade my cat," drawing on genres, moods, artists, and more to craft personalized playlists. Users can refine the playlists using feedback commands, and swipe to remove unwanted tracks. The service utilizes Spotify's existing personalization data for user-specific curation and has safeguards against inappropriate prompts. This innovation follows Spotify's previously launched AI DJ and is part of ongoing investment in AI applications, including potential uses like podcast summarization and AI-generated ads.

  • These AI startups stood out the most in Y Combinator's Winter 2024 batch - While overall investment in startups is down, funding for AI soared in the last year with generative AI investments reaching $25.2 billion by late December 2023. Y Combinator's Winter 2024 Demo Day was dominated by AI startups, nearly doubling the number from the previous year. Read the article for more on the notable startups.

  • Fake Facebook MidJourney AI page promoted malware to 1.2 million people - Cybercriminals are exploiting the high interest in AI by promoting fake AI services like MidJourney and ChatGPT-5 on Facebook through malvertising campaigns. They hijack legitimate profiles and create ads that lead users to fraudulent communities which appear genuine. These communities offer bogus previews of AI features, enticing users to download malware-infected executables under the guise of new AI software versions. The malware, which includes variants like Rilide and Vidar, steals sensitive browser-stored data, which is then sold or used in further scams. Despite the takedown of a major fake Midjourney page with 1.2 million followers, similar malicious activities continue, demonstrating a pattern of persistent and sophisticated social engineering attacks on the platform.

  • New Chinese-developed AI weather model Zhiji is shaking up meteorology - Huawei's Pangu-Weather, an AI-based weather prediction model, offers a significant advancement in meteorological forecasting, earning it the distinction of China's top scientific innovation of 2023 by the National Natural Science Foundation of China. Notably, its latest version, Zhiji, delivers 5-day regional forecasts with an impressive 3km precision, improving from the previous 25km granularity. Originally showcased in Nature journal and operational on the ECMWF platform, Pangu-Weather's AI can generate seven-day forecasts 10,000 times faster than traditional methods. This breakthrough has markedly increased the speed and accuracy in predicting extreme weather events, outperforming established numerical simulations. As researchers refine Zhiji further, they aim to fine-tune algorithms for enhanced rainfall forecasts and other specialized meteorological predictions, potentially expanding tailored weather models to other regions through local collaborations.

LLM Training in Simple, Raw C/CUDA - The project "llm.c" by Andrej Karpathy provides a simple, minimalist implementation of large language model (LLM) training in pure C and CUDA. The codebase is small and has no dependencies on large libraries such as PyTorch or CPython. The author has written a direct CUDA implementation for faster performance and plans to optimize the CPU version with SIMD instructions. The repository includes a simple GPT-2 reference implementation for training and inference, which can be used as a starting point for custom LLM applications. The author emphasizes simplicity, readability, and portability, making the project a potential reference implementation for custom LLM training and deployment in edge-adjacent environments.
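
At its core, the training step that llm.c hand-writes in C and CUDA is the familiar forward / loss / backward / update cycle. The sketch below shows that cycle in PyTorch-style Python purely for orientation; it is not the project's code, which implements each of these steps as explicit, dependency-free C functions and kernels.

```python
# Illustrative Python/PyTorch sketch of the training loop that llm.c
# implements by hand in C/CUDA: forward pass, cross-entropy loss on
# next-token prediction, backward pass, and an optimizer update.
import torch
import torch.nn.functional as F

def train_step(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
               tokens: torch.Tensor) -> float:
    # tokens: (batch, seq_len + 1) integer token ids from the training corpus
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)                                  # (batch, seq, vocab)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```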

Awesome Research Papers

CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues - The paper introduces the CantTalkAboutThis dataset, designed to help language models maintain topic relevance in conversations. The dataset consists of synthetic dialogues on various topics, interspersed with distractor turns to challenge the chatbot's focus. Fine-tuning language models on this dataset aims to improve their ability to stay on topic and resist deviating from their assigned role. Preliminary observations suggest that models trained on this dataset also enhance their performance on fine-grained instruction following tasks.
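
To make the setup concrete, a training example of this kind pairs a topic-restricting system prompt with a dialogue containing an off-topic distractor turn that the model should deflect. The structure below is a hypothetical illustration; the field names are assumptions, not the dataset's actual schema.

```python
# Hypothetical example of a topic-constrained dialogue with a distractor turn,
# in the spirit of the CantTalkAboutThis dataset (field names are illustrative).
example = {
    "system": "You are a banking support assistant. Only discuss account and card issues.",
    "dialogue": [
        {"role": "user", "content": "My debit card was declined this morning."},
        {"role": "assistant", "content": "Sorry to hear that. Is the card currently active?"},
        # Distractor turn: the user drifts off topic to test the model's focus.
        {"role": "user", "content": "By the way, who do you think will win the election?"},
        # Target behaviour: politely decline and steer back to the assigned topic.
        {"role": "assistant", "content": "I can only help with banking questions. Shall we continue with your card issue?"},
    ],
}
```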

Octopus v2: On-device language model for super agent - The paper presents Octopus v2, a language model designed for on-device deployment as the engine for AI agents. It also surveys related techniques for language model compression and acceleration, including quantization and low-rank adaptation, and touches on prompt injection attacks, the design of chain-of-thought for math problem solving, and retrieval-augmented text generation. The authors highlight the role of foundation models in completing tasks by connecting with millions of APIs, and discuss potential harms of language models, such as generating biased or offensive summaries and contributing to job loss through automation.
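
As a rough illustration of the usage pattern such on-device agent models enable, the sketch below shows a generic function-calling dispatcher: the model emits a structured call (a function name plus JSON arguments) and the runtime routes it to a local function. This is a generic pattern, not Octopus v2's specific mechanism, and the helper functions are invented for the example.

```python
# Generic sketch of on-device function calling: parse the model's structured
# output and dispatch it to a registered local function.
import json

def set_alarm(hour: int, minute: int) -> str:
    return f"Alarm set for {hour:02d}:{minute:02d}"

def send_message(to: str, body: str) -> str:
    return f"Message queued for {to}: {body}"

REGISTRY = {"set_alarm": set_alarm, "send_message": send_message}

def dispatch(model_output: str) -> str:
    """Expects model output like '{"name": "set_alarm", "arguments": {"hour": 7, "minute": 30}}'."""
    call = json.loads(model_output)
    return REGISTRY[call["name"]](**call["arguments"])

print(dispatch('{"name": "set_alarm", "arguments": {"hour": 7, "minute": 30}}'))
```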

Mixture-of-Depths: Dynamically allocating compute in transformer-based language models - The paper proposes a method for dynamically allocating compute in transformer-based language models. The authors introduce Mixture-of-Depths (MoD), in which a lightweight router at each block decides which tokens that block processes, while the remaining tokens bypass it through the residual connection. The authors show that this approach improves the primary language modeling objective for a given compute budget and allows compute to be used more efficiently. The paper also discusses related work on adaptive computation time and depth-adaptive transformers.
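
A minimal sketch of the routing idea is shown below, assuming a per-block router that keeps only a fraction of tokens (PyTorch-style and illustrative only; the tensor shapes, capacity fraction, and sigmoid gating are assumptions rather than the paper's reference implementation).

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """Wraps a transformer block so only the top-k routed tokens pass through it."""
    def __init__(self, block: nn.Module, d_model: int, capacity: float = 0.5):
        super().__init__()
        self.block = block                    # any module mapping (B, k, D) -> (B, k, D)
        self.router = nn.Linear(d_model, 1)   # scores each token
        self.capacity = capacity              # fraction of tokens the block processes

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, S, D)
        b, s, d = x.shape
        k = max(1, int(s * self.capacity))
        scores = self.router(x).squeeze(-1)                # (B, S)
        top = scores.topk(k, dim=-1).indices               # tokens routed into the block
        idx = top.unsqueeze(-1).expand(-1, -1, d)          # (B, k, D) gather/scatter index
        selected = torch.gather(x, 1, idx)
        gate = torch.gather(scores, 1, top).sigmoid().unsqueeze(-1)
        processed = selected + gate * self.block(selected) # residual plus gated block output
        out = x.clone()                                    # unrouted tokens pass through unchanged
        out.scatter_(1, idx, processed)
        return out
```

Because the capacity is fixed ahead of time, the compute spent per block is known in advance even though which tokens receive it changes from input to input.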

StableLM-2-12B - Stable LM 2 12B is a pair of 12-billion-parameter language models (a base model and an instruction-tuned variant) trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch. The release also updates Stable LM 2 1.6B, improving its conversational skills in all seven of these languages and incorporating tool usage and function calling.
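
If the weights are published on Hugging Face, loading the base model should follow the usual transformers pattern. The sketch below assumes the repo id stabilityai/stablelm-2-12b; check the model card for the exact id, license terms, and any tokenizer-specific settings.

```python
# Minimal sketch of loading Stable LM 2 12B with Hugging Face transformers.
# The repo id is an assumption based on the announcement, not a verified path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-2-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "The benefits of multilingual language models include"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```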

Awesome New Launches

Claude Introduces Tool Use - Anthropic introduces a public beta feature that allows Claude to interact with user-defined external tools via the API. Users can extend Claude's capabilities by giving it access to specific tools, which Claude can then invoke to complete tasks based on user prompts. The process involves defining tools with names, descriptions, and input schemas, and writing prompts that call for those tools. Tool effectiveness depends heavily on detailed descriptions, and the documentation covers limitations, differences between Claude models, and how to share feedback during the beta.
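
A minimal sketch of the flow with the Anthropic Python SDK is shown below. The tool definition and model id are illustrative, and during the public beta the exact call path or required beta headers may differ; Anthropic's documentation is the authoritative reference.

```python
# Sketch of Claude tool use: define a tool with a name, description, and input
# schema, pass it alongside the user prompt, and inspect any tool_use block in
# the response. The get_weather tool is an invented example.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city. Use when the user asks about weather.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    }
]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Toronto right now?"}],
)

# If Claude decides to call the tool, the response contains a tool_use block
# with the tool name and input; the application runs the tool and returns the
# result to Claude in a follow-up message.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```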
