For over a decade, companies have bet on a tantalizing rule of thumb: that artificial intelligence systems would keep getting smarter as long as they kept getting bigger. This wasn’t merely wishful thinking. In 2017, researchers at Chinese technology firm Baidu demonstrated that pouring more data and computing power into machine learning algorithms yielded mathematically predictable improvements, regardless of whether a system was designed to recognize images, recognize speech, or generate language. Noticing the same trend, in 2020 OpenAI coined the term “scaling laws,” which has since become a touchstone of the industry.
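In stylized form, a scaling law of this kind ties a model’s error, or “loss,” to the compute spent training it through a simple power law; the symbols below are illustrative placeholders rather than figures from any particular paper:

$$L(C) \approx a\,C^{-\alpha} + L_{\infty}$$

Here $C$ is training compute, $a$ and $\alpha$ are constants fitted to experiments, and $L_{\infty}$ is an irreducible floor. The formula’s promise was that each additional order of magnitude of compute would buy a predictable, if shrinking, improvement.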
This thesis prompted AI firms to bet hundreds of millions on ever-larger computing clusters and datasets. The gamble paid off handsomely, transforming crude text machines into today’s articulate chatbots.
But now, that bigger-is-better gospel is being called into question.
Last week, reports by Reuters and Bloomberg suggested that leading AI companies are experiencing diminishing returns on scaling their AI systems. Days earlier, The Information reported doubts at OpenAI about continued advancement after the unreleased Orion model failed to meet expectations in internal testing. The co-founders of Andreessen Horowitz, a prominent Silicon Valley venture capital firm, have echoed these sentiments, noting that increasing computing power is no longer yielding the same “intelligence improvements.”
Still, many leading AI companies seem confident that progress is marching ahead at full steam. In a statement, a spokesperson for Anthropic, developer of the popular chatbot Claude, said “we haven’t seen any signs of deviations from scaling laws.” OpenAI declined to comment. Google DeepMind did not respond to a request for comment. However, last week, after an experimental new version of Google’s Gemini model took GPT-4o’s top spot on a popular AI-performance leaderboard, the company’s CEO, Sundar Pichai, posted on X saying “more to come.”
Read more: The Researcher Trying to Glimpse the Future of AI
Recent releases paint a somewhat mixed picture. Anthropic has updated its medium-sized model, Sonnet, twice since its release in March, making it more capable than the company’s largest model, Opus, which has not received such updates. In June, the company said Opus would be updated “later this year,” but last week, speaking on the Lex Fridman podcast, co-founder and CEO Dario Amodei declined to give a specific timeline. Google updated its smaller Gemini Pro model in February, but the company’s larger Gemini Ultra model has yet to receive an update. OpenAI’s recently released o1-preview model outperforms GPT-4o in several benchmarks, but in others it falls short. o1-preview was reportedly called “GPT-4o with reasoning” internally, suggesting the underlying model is similar in scale to GPT-4.
Parsing the truth is complicated by competing interests on all sides. If Anthropic cannot produce more powerful models, “we’ve failed deeply as a company,” Amodei said last week, offering a glimpse at the stakes for AI companies that have bet their futures on relentless progress. A slowdown could spook investors and trigger an economic reckoning. Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist and once an ardent proponent of scaling, now says performance gains from bigger models have plateaued. But his stance carries its own baggage: Sutskever’s new AI startup, Safe Superintelligence Inc., launched in June with less funding and computational firepower than its rivals. A breakdown in the scaling hypothesis would conveniently help level the playing field.
“They had these things they thought were mathematical laws and they’re making predictions relative to those mathematical laws and the systems are not meeting them,” says Gary Marcus, a leading voice on AI, and author of several books including Taming Silicon Valley. He says the recent reports of diminishing returns suggest we have finally “hit a wall”—something he’s warned could happen since 2022. “I didn’t know exactly when it would happen, and we did get some more progress. Now it seems like we are stuck,” he says.
A slowdown could reflect the limits of current deep learning techniques, or simply that “there’s not enough fresh data anymore,” Marcus says. It’s a hypothesis that has gained ground among some who follow AI closely. Sasha Luccioni, AI and climate lead at Hugging Face, says there are limits to how much information can be learned from text and images. She points to the way people are more likely to misinterpret one another’s intentions over text message than in person as an example of text data’s limitations. “I think it’s like that with language models,” she says.
The lack of data is particularly acute in certain domains like reasoning and mathematics, where we “just don’t have that much high quality data,” says Ege Erdil, senior researcher at Epoch AI, a nonprofit that studies trends in AI development. That doesn’t mean scaling is likely to stop—just that scaling alone might be insufficient. “At every order of magnitude scale up, different innovations have to be found,” he says, noting that it does not mean AI progress will slow overall.
Read more: Is AI About to Run Out of Data? The History of Oil Says No
It’s not the first time critics have pronounced scaling dead. “At every stage of scaling, there are always arguments,” Amodei said last week. “The latest one we have today is, ‘we’re going to run out of data, or the data isn’t high quality enough, or models can’t reason.’ … I’ve seen the story happen for enough times to really believe that probably the scaling is going to continue,” he said. Reflecting on OpenAI’s early days on Y Combinator’s podcast, company CEO Sam Altman partly credited the company’s success to a “religious level of belief” in scaling, a concept he says was considered “heretical” at the time. In response to a recent post on X in which Marcus said his predictions of diminishing returns were right, Altman replied that “there is no wall.”
There could, however, be another reason we are hearing reports of new models failing to meet internal expectations, says Jaime Sevilla, director of Epoch AI. Following conversations with people at OpenAI and Anthropic, he came away with a sense that people had extremely high expectations. “They expected AI was going to be able to, already write a PhD thesis,” he says. “Maybe it feels a bit… anti-climactic.”
A temporary lull does not necessarily signal a wider slowdown, Sevilla says. History shows significant gaps between major advances: GPT-4, released just 19 months ago, itself arrived 33 months after GPT-3. “We tend to forget that GPT three from GPT four was like 100x scale in compute,” Sevilla says. “If you want to do something like 100 times bigger than GPT-4, you’re gonna need up to a million GPUs,” he adds. That is bigger than any known cluster currently in existence, though he notes that there have been concerted efforts to build AI infrastructure this year, such as Elon Musk’s 100,000-GPU supercomputer in Memphis, the largest of its kind, which was reportedly built from start to finish in three months.
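A rough sanity check, using only the figures Sevilla cites, shows the scale of that requirement:

$$\frac{1{,}000{,}000\ \text{GPUs}}{100{,}000\ \text{GPUs in the Memphis cluster}} = 10$$

In other words, a 100x scale-up over GPT-4 would take on the order of ten data centers the size of the largest one known to exist today.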
In the interim, AI companies are likely exploring other methods to improve performance after a model has been trained. OpenAI’s o1-preview has been heralded as one such example: it outperforms previous models on reasoning problems by being given more time to think. “This is something we already knew was possible,” Sevilla says, pointing to an Epoch AI report published in July 2023.
Read more: Elon Musk’s New AI Data Center Raises Alarms Over Pollution
Prematurely diagnosing a slowdown could have repercussions beyond Silicon Valley and Wall St. The perceived speed of technological advancement following GPT-4’s release prompted an open letter calling for a six-month pause on the training of larger systems to give researchers and governments a chance to catch up. The letter garnered over 30,000 signatories, including Musk and Turing Award recipient Yoshua Bengio. It’s an open question whether a perceived slowdown could have the opposite effect, causing AI safety to slip from the agenda.
Much of the U.S.’s AI policy has been built on the belief that AI systems would continue to balloon in size. A provision in Biden’s sweeping executive order on AI, signed in October 2023 (and expected to be repealed by the Trump White House), required AI developers to share information with the government regarding models trained using computing power above a certain threshold. That threshold was set above the largest models available at the time, under the assumption that it would target future, larger models. This same assumption underpins export restrictions (restrictions on the sale of AI chips and technologies to certain countries) designed to limit China’s access to the powerful semiconductors needed to build large AI models. However, if breakthroughs in AI development begin to rely less on computing power and more on factors like better algorithms or specialized techniques, these restrictions may have a smaller impact on slowing China’s AI progress.
“The overarching thing that the U.S. needs to understand is that to some extent, export controls were built on a theory of timelines of the technology,” says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. In a world where the U.S. “stalls at the frontier,” he says, we could see a national push to drive breakthroughs in AI. He says a slip in the U.S.’s perceived lead in AI could spur a greater willingness to negotiate with China on safety principles.
Whether we’re seeing a genuine slowdown or just another pause ahead of a leap remains to be seen. “It’s unclear to me that a few months is a substantial enough reference point,” Singer says. “You could hit a plateau and then hit extremely rapid gains.”
Source: Tech – TIME | 22 Nov 2024 | 6:53 am
MELBOURNE — Australia’s communications minister introduced a world-first law into Parliament on Thursday that would ban children under 16 from social media, saying online safety was one of parents’ toughest challenges.
Michelle Rowland said TikTok, Facebook, Snapchat, Reddit, X and Instagram were among the platforms that would face fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent young children from holding accounts.
“This bill seeks to set a new normative value in society that accessing social media is not the defining feature of growing up in Australia,” Rowland told Parliament.
“There is wide acknowledgement that something must be done in the immediate term to help prevent young teens and children from being exposed to streams of content unfiltered and infinite,” she added.
X owner Elon Musk warned that Australia intended to go further, posting on his platform: “Seems like a backdoor way to control access to the Internet by all Australians.”
The bill has wide political support. After it becomes law, the platforms would have one year to work out how to implement the age restriction.
“For too many young Australians, social media can be harmful,” Rowland said. “Almost two-thirds of 14- to 17-year-old Australians have viewed extremely harmful content online including drug abuse, suicide or self-harm as well as violent material. One quarter have been exposed to content promoting unsafe eating habits.”
Government research found that 95% of Australian caregivers find online safety to be one of their “toughest parenting challenges,” she said. Social media companies had a social responsibility and could do better in addressing harms on their platforms, she added.
“This is about protecting young people, not punishing or isolating them, and letting parents know that we’re in their corner when it comes to supporting their children’s health and wellbeing,” Rowland said.
Read More: Teens Are Stuck on Their Screens. Here’s How to Protect Them
Child welfare and internet experts have raised concerns about the ban, including that it would isolate 14- and 15-year-olds from their already established online social networks.
Rowland said there would not be age restrictions placed on messaging services, online games or platforms that substantially support the health and education of users.
“We are not saying risks don’t exist on messaging apps or online gaming. While users can still be exposed to harmful content by other users, they do not face the same algorithmic curation of content and psychological manipulation to encourage near-endless engagement,” she said.
The government announced last week that a consortium led by British company Age Check Certification Scheme has been contracted to examine various technologies to estimate and verify ages.
In addition to removing children under 16 from social media, Australia is also looking for ways to prevent children under 18 from accessing online pornography, a government statement said.
Age Check Certification Scheme’s chief executive Tony Allen said Monday the technologies being considered included age estimation and age inference. Inference involves establishing a series of facts about individuals that point to them being at least a certain age.
Rowland said the platforms would also face fines of up to AU$50 million ($33 million) if they misused personal information of users gained for age-assurance purposes.
Information used for age assurances must be destroyed after serving that purpose unless the user consents to it being kept, she said.
Digital Industry Group Inc., an advocate for the digital industry in Australia, said with Parliament expected to vote on the bill next week, there might not be time for “meaningful consultation on the details of the globally unprecedented legislation.”
“Mainstream digital platforms have strict measures in place to keep young people safe, and a ban could push young people on to darker, less safe online spaces that don’t have safety guardrails,” DIGI managing director Sunita Bose said in a statement. “A blunt ban doesn’t encourage companies to continually improve safety because the focus is on keeping teenagers off the service, rather than keeping them safe when they’re on it.”
Source: Tech – TIME | 21 Nov 2024 | 8:30 pm
“AI is a technology like no other in human history,” U.S. Commerce Secretary Gina Raimondo said on Wednesday in San Francisco. “Advancing AI is the right thing to do, but advancing as quickly as possible, just because we can, without thinking of the consequences, isn’t the smart thing to do.”
Raimondo’s remarks came during the inaugural convening of the International Network of AI Safety Institutes, a network of artificial intelligence safety institutes (AISIs) from nine nations as well as the European Commission, brought together by the U.S. Departments of Commerce and State. The event gathered technical experts from government, industry, academia, and civil society to discuss how to manage the risks posed by increasingly capable AI systems.
Raimondo suggested participants keep two principles in mind: “We can’t release models that are going to endanger people,” she said. “Second, let’s make sure AI is serving people, not the other way around.”
Read More: How Commerce Secretary Gina Raimondo Became America’s Point Woman on AI
The convening marks a significant step forward in international collaboration on AI governance. The first AISIs emerged last November during the inaugural AI Safety Summit hosted by the U.K. Both the U.K. and the U.S. governments announced the formation of their respective AISIs as a means of giving themselves the technical capacity to evaluate the safety of cutting-edge AI models. Other countries followed suit; by May, at another AI Summit in Seoul, Raimondo had announced the creation of the network.
In a joint statement, the members of the International Network of AI Safety Institutes—which includes AISIs from the U.S., U.K., Australia, Canada, France, Japan, Kenya, South Korea, and Singapore—laid out their mission: “to be a forum that brings together technical expertise from around the world,” “…to facilitate a common technical understanding of AI safety risks and mitigations based upon the work of our institutes and of the broader scientific community,” and “…to encourage a general understanding of and approach to AI safety globally, that will enable the benefits of AI innovation to be shared amongst countries at all stages of development.”
In the lead-up to the convening, the U.S. AISI, which serves as the network’s inaugural chair, also announced a new government taskforce focused on the technology’s national security risks. The Testing Risks of AI for National Security (TRAINS) Taskforce brings together representatives from the Departments of Defense, Energy, Homeland Security, and Health and Human Services. It will be chaired by the U.S. AISI, and aim to “identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology,” with a particular focus on radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities.
The push for international cooperation comes at a time of increasing tension around AI development between the U.S. and China, whose absence from the network is notable. In remarks pre-recorded for the convening, Senate Majority Leader Chuck Schumer emphasized the importance of ensuring that the Chinese Communist Party does not get to “write the rules of the road.” Earlier Wednesday, Chinese lab DeepSeek announced a new “reasoning” model thought to be the first to rival OpenAI’s own reasoning model, o1, which OpenAI says is “designed to spend more time thinking” before it responds.
On Tuesday, the U.S.-China Economic and Security Review Commission, which has provided annual recommendations to Congress since 2000, recommended that Congress establish and fund a “Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability,” which the commission defined as “systems as good as or better than human capabilities across all cognitive domains” that “would surpass the sharpest human minds at every task.”
Many experts in the field, such as Geoffrey Hinton, who earlier this year won a Nobel Prize in physics for his work on artificial intelligence, have expressed concerns that, should AGI be developed, humanity may not be able to control it, which could lead to catastrophic harm. In a panel discussion at Wednesday’s event, Anthropic CEO Dario Amodei—who believes AGI-like systems could arrive as soon as 2026—cited “loss of control” risks as a serious concern, alongside the risks that future, more capable models are misused by malicious actors to perpetrate bioterrorism or undermine cybersecurity. Responding to a question, Amodei expressed unequivocal support for making the testing of advanced AI systems mandatory, noting “we also need to be really careful about how we do it.”
Meanwhile, practical international collaboration on AI safety is advancing. Earlier in the week, the U.S. and U.K. AISIs shared preliminary findings from their pre-deployment evaluation of an advanced AI model—the upgraded version of Anthropic’s Claude 3.5 Sonnet. The evaluation focused on assessing the model’s biological and cyber capabilities, as well as its performance on software and development tasks, and the efficacy of the safeguards built into it to prevent the model from responding to harmful requests. Both the U.K. and U.S. AISIs found that these safeguards could be “routinely circumvented,” which they noted is “consistent with prior research on the vulnerability of other AI systems’ safeguards.”
The San Francisco convening set out three priority topics that stand to “urgently benefit from international collaboration”: managing risks from synthetic content, testing foundation models, and conducting risk assessments for advanced AI systems. Ahead of the convening, $11 million of funding was announced to support research into how best to mitigate risks from synthetic content (such as the generation and distribution of child sexual abuse material, and the facilitation of fraud and impersonation). The funding was provided by a mix of government agencies and philanthropic organizations, including the Republic of Korea and the Knight Foundation.
While it is unclear how the election victory of Donald Trump will impact the future of the U.S. AISI and American AI policy more broadly, international collaboration on the topic of AI safety is set to continue. The U.K. AISI is hosting another San Francisco-based conference this week, in partnership with the Centre for the Governance of AI, “to accelerate the design and implementation of frontier AI safety frameworks.” And in February, France will host its “AI Action Summit,” following the Summits held in Seoul in May and in the U.K. last November. The 2025 AI Action Summit will gather leaders from the public and private sectors, academia, and civil society, as actors across the world seek to find ways to govern the technology as its capabilities accelerate.
Raimondo on Wednesday emphasized the importance of integrating safety with innovation when it comes to something as rapidly advancing and as powerful as AI. “It has the potential to replace the human mind,” she said. “Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation. We need that virtuous cycle.”
Source: Tech – TIME | 21 Nov 2024 | 7:00 pm
U.S. regulators want a federal judge to break up Google to prevent the company from continuing to squash competition through its dominant search engine after a court found it had maintained an abusive monopoly over the past decade.
The proposed breakup, floated in a 23-page document filed late Wednesday by the U.S. Department of Justice, calls for sweeping punishments that would include a sale of Google’s industry-leading Chrome web browser and restrictions to prevent Android from favoring its own search engine.
A sale of Chrome “will permanently stop Google’s control of this critical search access point and allow rival search engines the ability to access the browser that for many users is a gateway to the internet,” Justice Department lawyers argued in their filing.
Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.
The broad scope of the recommended penalties underscores how severely regulators operating under President Joe Biden’s administration believe Google should be punished following an August ruling by U.S. District Judge Amit Mehta that branded the company as a monopolist.
The Justice Department decision-makers who will inherit the case after President-elect Donald Trump takes office next year might not be as strident. The Washington, D.C. court hearings on Google’s punishment are scheduled to begin in April and Mehta is aiming to issue his final decision before Labor Day.
If Mehta embraces the government’s recommendations, Google would be forced to sell its 16-year-old Chrome browser within six months of the final ruling. But the company certainly would appeal any punishment, potentially prolonging a legal tussle that has dragged on for more than four years.
Besides seeking a Chrome spinoff and a corralling of the Android software, the Justice Department wants the judge to ban Google from forging multibillion-dollar deals to lock in its dominant search engine as the default option on Apple’s iPhone and other devices. It would also ban Google from favoring its own services, such as YouTube or its recently launched artificial intelligence platform, Gemini.
Regulators also want Google to license the search index data it collects from people’s queries to its rivals, giving them a better chance at competing with the tech giant. On the commercial side of its search engine, Google would be required to provide more transparency into how it sets the prices that advertisers pay to be listed near the top of some targeted search results.
Kent Walker, Google’s chief legal officer, lashed out at the Justice Department for pursuing “a radical interventionist agenda that would harm Americans and America’s global technology.” In a blog post, Walker warned the “overly broad proposal” would threaten personal privacy while undermining Google’s early leadership in artificial intelligence, “perhaps the most important innovation of our time.”
Wary of Google’s increasing use of artificial intelligence in its search results, regulators also advised Mehta to ensure websites will be able to shield their content from Google’s AI training techniques.
The measures, if they are ordered, threaten to upend a business expected to generate more than $300 billion in revenue this year.
“The playing field is not level because of Google’s conduct, and Google’s quality reflects the ill-gotten gains of an advantage illegally acquired,” the Justice Department asserted in its recommendations. “The remedy must close this gap and deprive Google of these advantages.”
It’s still possible that the Justice Department could ease off attempts to break up Google, especially if Trump takes the widely expected step of replacing Assistant Attorney General Jonathan Kanter, who was appointed by Biden to oversee the agency’s antitrust division.
Read More: How a Second Trump Administration Will Change the Domestic and World Order
Although the case targeting Google was originally filed during the final months of Trump’s first term in office, Kanter oversaw the high-profile trial that culminated in Mehta’s ruling against Google. Working in tandem with Federal Trade Commission Chair Lina Khan, Kanter took a get-tough stance against Big Tech that triggered other attempted crackdowns on industry powerhouses such as Apple and discouraged many business deals from getting done during the past four years.
Trump recently expressed concerns that a breakup might destroy Google but didn’t elaborate on alternative penalties he might have in mind. “What you can do without breaking it up is make sure it’s more fair,” Trump said last month. Matt Gaetz, the former Republican congressman whom Trump nominated to be the next U.S. Attorney General, has previously called for the breakup of Big Tech companies.
Gaetz faces a tough confirmation hearing.
Read More: Here Are the New Members of Trump’s Administration So Far
This latest filing gave Kanter and his team a final chance to spell out measures that they believe are needed to restore competition in search. It comes six weeks after Justice first floated the idea of a breakup in a preliminary outline of potential penalties.
But Kanter’s proposal is already raising questions about whether regulators seek to impose controls that extend beyond the issues covered in last year’s trial, and—by extension—Mehta’s ruling.
Banning the default search deals that Google now pays more than $26 billion annually to maintain was one of the main practices that troubled Mehta in his ruling.
It’s less clear whether the judge will embrace the Justice Department’s contention that Chrome needs to be spun out of Google and/or that Android should be completely walled off from its search engine.
“It is probably going a little beyond,” Syracuse University law professor Shubha Ghosh said of the Chrome breakup. “The remedies should match the harm, it should match the transgression. This does seem a little beyond that pale.”
Google rival DuckDuckGo, whose executives testified during last year’s trial, asserted the Justice Department is simply doing what needs to be done to rein in a brazen monopolist.
“Undoing Google’s overlapping and widespread illegal conduct over more than a decade requires more than contract restrictions: it requires a range of remedies to create enduring competition,” Kamyl Bazbaz, DuckDuckGo’s senior vice president of public affairs, said in a statement.
Trying to break up Google harks back to a similar punishment initially imposed on Microsoft a quarter century ago following another major antitrust trial that culminated in a federal judge deciding the software maker had illegally used its Windows operating system for PCs to stifle competition.
However, an appeals court overturned an order that would have broken up Microsoft, a precedent many experts believe will make Mehta reluctant to go down a similar road with the Google case.
Source: Tech – TIME | 21 Nov 2024 | 6:05 pm
Technological progress can excite us, politics can infuriate us, and wars can mobilize us. But faced with the risk of human extinction that the rise of artificial intelligence is causing, we have remained surprisingly passive. In part, perhaps this was because there did not seem to be a solution. This is an idea I would like to challenge.
AI’s capabilities are ever-improving. Since the release of ChatGPT two years ago, hundreds of billions of dollars have poured into AI. These combined efforts will likely lead to Artificial General Intelligence (AGI), where machines have human-like cognition, perhaps within just a few years.
Hundreds of AI scientists think we might lose control over AI once it gets too capable, which could result in human extinction. So what can we do?
Read More: What Donald Trump’s Win Means For AI
The existential risk of AI has often been presented as extremely complex. A 2018 paper, for example, called the development of safe human-level AI a “super wicked problem.” This perceived difficulty had much to do with the proposed solution of AI alignment, which entails making superhuman AI act according to humanity’s values. AI alignment, however, was a problematic solution from the start.
First, scientific progress in alignment has been much slower than progress in AI itself. Second, the philosophical question of which values to align a superintelligence to is incredibly fraught. Third, it is not at all obvious that alignment, even if successful, would be a solution to AI’s existential risk. Having one friendly AI does not necessarily stop other unfriendly ones.
Because of these issues, many have urged technology companies not to build any AI that humanity could lose control over. Some have gone further; activist groups such as PauseAI have indeed proposed an international treaty that would pause development globally.
That is not seen as politically palatable by many, since it may still take a long time before the missing pieces of AGI are filled in. And do we have to pause already, when this technology can also do a lot of good? Yann LeCun, AI chief at Meta and a prominent existential risk skeptic, says that the existential risk debate is like “worrying about turbojet safety in 1920.”
On the other hand, technology can leapfrog. If we get another breakthrough such as the transformer, a 2017 innovation which helped launch modern Large Language Models, perhaps we could reach AGI in a few months’ training time. That’s why a regulatory framework needs to be in place before then.
Fortunately, Nobel Laureate Geoffrey Hinton, Turing Award winner Yoshua Bengio, and many others have provided a piece of the solution. In a policy paper published in Science earlier this year, they recommended “if-then commitments”: commitments to be activated if and when red-line capabilities are found in frontier AI systems.
Building upon their work, we at the nonprofit Existential Risk Observatory propose a Conditional AI Safety Treaty. Signatory countries of this treaty, which should include at least the U.S. and China, would agree that once we get too close to loss of control they will halt any potentially unsafe training within their borders. Once the most powerful nations have signed this treaty, it is in their interest to verify each other’s compliance, and to make sure uncontrollable AI is not built elsewhere, either.
One outstanding question is at what point AI capabilities are too close to loss of control. We propose to delegate this question to the AI Safety Institutes set up in the U.K., U.S., China, and other countries. They have specialized model evaluation know-how, which can be developed further to answer this crucial question. Also, these institutes are public, making them independent from the mostly private AI development labs. The question of how close is too close to losing control will remain difficult, but someone will need to answer it, and the AI Safety Institutes are best positioned to do so.
We can mostly still get the benefits of AI under the Conditional AI Safety Treaty. All current AI is far below loss of control level, and will therefore be unaffected. Narrow AIs in the future that are suitable for a single task—such as climate modeling or finding new medicines—will be unaffected as well. Even more general AIs can still be developed, if labs can demonstrate to a regulator that their model has loss of control risk less than, say, 0.002% per year (the safety threshold we accept for nuclear reactors). Other AI thinkers, such as MIT professor Max Tegmark, Conjecture CEO Connor Leahy, and ControlAI director Andrea Miotti, are thinking in similar directions.
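A quick bit of arithmetic shows what that proposed threshold would mean in practice:

$$0.002\%\ \text{per year} = 2\times10^{-5}\ \text{per year} \;\Rightarrow\; \frac{1}{2\times10^{-5}} = 50{,}000\ \text{years}$$

That is, a regulator applying this standard would be accepting, at most, an expected one loss-of-control event per 50,000 years of operation for a given model.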
Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even right-wing commentator Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?” For his part, Trump has expressed concern about the risks posed by AI, too.
The Conditional AI Safety Treaty could provide a solution to AI’s existential risk, while not unnecessarily obstructing AI development right now. Getting China and other countries to accept and enforce the treaty will no doubt be a major geopolitical challenge, but perhaps a Trump government is exactly what is needed to overcome it.
A solution to one of the toughest problems we face—the existential risk of AI—does exist. It is up to us whether we make it happen, or continue to go down the path toward possible human extinction.
Source: Tech – TIME | 16 Nov 2024 | 1:11 am
As a chaplain at Harvard and MIT, I have been particularly concerned when talking to young people, who hope to be the next generation of American leaders. What moral lessons should they draw from the 2024 election? Elite institutions like those I serve have, after all, spent generations teaching young people to pursue leadership and success above all else. And, well, the former-turned-next POTUS has become one of the most successful political leaders of this century.
The electoral resurrection of a convicted felon whose own former chief of staff, a former Marine Corps General no less, likened him to a fascist requires far more than a re-evaluation of Democratic Party policies. It demands a re-examination of our entire society’s ethical, and even spiritual, priorities.
It’s not that students on campuses like mine want to be the next Trump (though he did win a majority among white, male, college-educated voters). It is, however, common for them to idolize billionaire tech entrepreneurs like Elon Musk and Peter Thiel. Both Musk and Thiel factored significantly in Trump and Vance’s victory; both will be handsomely rewarded for their support.
But is a technocracy the best we can do as a model for living a meaningful life today? It is past time to recognize that the digital technologies with which many of us now interact from the moment we wake until the moment we drift into sleep (and often beyond that) have ceased to be mere “tools.” Just as we went from being users to being the products by which companies like Facebook and Google make trillions in advertising revenue, we have now become the tools by which certain technologists can realize their grandest financial and political ambitions.
Policy reform alone, while necessary, won’t save us. But neither will tech figures like Musk or Thiel. In fact, we need an alternative to an archetype that I like to call “The Drama of the Gifted Technologist,” of which Musk, Thiel, and other tech leaders have become avatars.
Based on the ideas of the noted 20th century psychologist Alice Miller, and on my observation of the inner lives of many of the world’s most gifted students, the “Drama of the Gifted Technologist” starts with the belief that one is only “enough,” or worthy of love and life, if one achieves extraordinary things, namely through leadership in tech or social media clout.
I’ve seen this “drama” become a kind of “official psychopathology of the Ivy League” and Silicon Valley. It began, in some ways, with the accumulation of “friends” on Facebook over a decade ago, to gain social relevance. And it has now graduated to become the psychological and even spiritual dynamic driving the current AI arms race, also known as “accelerationism.” See, for example, the influential billionaire venture capitalist and AI cheerleader Marc Andreessen’s famous “Techno-Optimist Manifesto,” which uses the phrase “we believe” 133 times, arguing that “any deceleration of AI will cost lives…” and that “AI that was prevented from existing is a form of murder.” Or Sam Altman’s urgent quest for trillions of dollars to create a world of AI “abundance,” consequences for the climate, democracy, or, say, biological weapons be damned. Or Thiel’s belief that one needs a “near-messianic attitude” to succeed in venture capital. Or young men’s hero worship of tech “genius” figures like Musk, who, as former Twitter owner Jack Dorsey said, is the “singular solution”: the one man to single-handedly take humanity beyond earth, “into the stars.”
And why wouldn’t the drama of the gifted technologist appeal to young people? They live, after all, in a world so unequal, with a future so uncertain, that the fortunate few really do live lives of grandeur in comparison to the precarity and struggle others face.
Read More: Inside Elon Musk’s Struggle for the Future of AI
Still, some might dismiss these “ideas” as mere hype and bluster. I’d love to do so, too. But I’ve heard far too many “confessions” reminiscent of what famous AI “doomer” Eliezer Yudkowsky once said most starkly and alarmingly: that “ambitious people would rather destroy the world than never amount to anything.”
Of course, I’m not saying that the aspiring leaders I work with are feeling so worthless and undeserving that they put themselves on a straight path from aspirational tech leadership towards world-destruction. Plenty are wonderful human beings. But it doesn’t take many hollow young men to destroy, if not the whole world, then at least far too much of it. Ultimately, many gifted young adults are feeling extraordinarily normal feelings: Fear. Loneliness. Grief. But because their “drama” doesn’t permit them to simply be normal, they too often look for ways to dominate others, rather than connect with them in humble mutual solidarity.
In the spring of 2023, I sat and discussed all this over a long lunch with a group of about 20 soon-to-graduate students at Harvard’s Kennedy School of Government. The students, in many cases deeply anxious about their individual and collective futures, asked me to advise them on how to envision and build ethical, meaningful, and sustainable lives in a world in which technological (and climate) change was causing them a level of uncertainty that was destabilizing at best, debilitating at worst. I suggested they view themselves as having inherent worth and value, simply for existing. Hearing that, one of the students responded—with laudable honesty and forthrightness—that she found that idea laughable.
I don’t blame her for laughing. It truly can be hard to accept oneself unconditionally, at a time of so much dehumanization. Many students I meet find it much easier to simply work harder. Ironically, their belief that tech success and wealth will save them strikes me as a kind of “digital puritanism”: a secularized version of the original Puritanism that founded Harvard College in the 1630s, in which you were either considered one of the world’s few true elites, bound for Heaven, or if not, your destiny was the fire and brimstone vision of Hell. Perhaps tech’s hierarchies aren’t quite as extreme as traditional Puritanism’s, which allowed no way to alter destiny, and where the famous “Protestant work ethic” was merely an indicator of one’s obvious predestination. But given the many ways in which today’s tech is worsening social inequality, the difference isn’t exactly huge.
The good news? Many reformers are actively working to make tech more humane.
Among those is MacArthur fellow and scholar of tech privacy Danielle Citron, an expert in online abuse, who told me she worries that gifted technologists can “…lose their way behind screens, because they don’t see the people whom they hurt.”
“To build a society for future cyborgs as one’s goal,” Citron continued, “suggests that these folks don’t have real, flesh and blood relationships…where we see each other in the way that Martin Buber…described.”
Buber, an influential Jewish philosopher whose career spanned the decades before and after the Holocaust, was best known for his idea, first fully expressed in his 1923 essay “I and Thou,” that human life finds its meaning in relationships, and that the world would be better if each of us imagined our flesh-and-blood connections with one another—rather than achievements or technologies—as the ultimate expression of our connection to the divine.
Indeed. I don’t happen to share Buber’s belief in a divine authority; I’m an atheist and humanist. But I share Buber’s faith in the sacredness of human interrelationship. And I honor any form of contemporary spiritual teaching, religious or not, that reminds us to place justice, and one another’s well-being, over ambition—or “winning.”
We are not digital beings. We are not chatbots, optimized for achievement and sent to conquer this country and then colonize the stars through infinite data accumulation. We are human beings who care deeply about one another because we care about ourselves. Our very existence, as people capable of loving and being loved, is what makes us worthy of the space we occupy, here in this country, on this planet, and on any other planet we may someday find ourselves inhabiting.
Source: Tech – TIME | 15 Nov 2024 | 6:03 am
Social media site Bluesky has gained 1 million new users in the week since the U.S. election, as some X users look for an alternative platform to post their thoughts and engage with others online.
Bluesky said Wednesday that its total users surged to 15 million, up from roughly 13 million at the end of October.
Championed by former Twitter CEO Jack Dorsey, Bluesky was an invitation-only space until it opened to the public in February. That invite-only period gave the site time to build out moderation tools and other features. The platform resembles Elon Musk’s X, with a “discover” feed as well as a chronological feed for accounts that users follow. Users can send direct messages and pin posts, as well as find “starter packs” that provide a curated list of people and custom feeds to follow.
The post-election uptick in users isn’t the first time that Bluesky has benefitted from people leaving X. Bluesky gained 2.6 million users in the week after X was banned in Brazil in August—85% of them from Brazil, the company said. About 500,000 new users signed up in the span of one day last month, when X signaled that blocked accounts would be able to see a user’s public posts.
Despite Bluesky’s growth, X posted last week that it had “dominated the global conversation on the U.S. election” and had set new records. The platform saw a 15.5% jump in new-user signups on Election Day, X said, with a record 942 million posts worldwide. Representatives for Bluesky and for X did not respond to requests for comment.
Bluesky has referenced its competitive relationship with X through tongue-in-cheek comments, including an Election Day post on X about Musk watching voting results come in with President-elect Donald Trump.
“I can guarantee that no Bluesky team members will be sitting with a presidential candidate tonight and giving them direct access to control what you see online,” Bluesky said.
Across the platform, new users—among them journalists, left-leaning politicians and celebrities—have posted memes and shared that they were looking forward to using a space free from advertisements and hate speech. Some said it reminded them of the early days of X, when it was still Twitter.
On Wednesday, the Guardian said it would no longer post on X, citing “far right conspiracy theories and racism” on the site as a reason. At the same time, television journalist Don Lemon posted on X that he is leaving the platform but will continue to use other social media, including Bluesky.
Lemon said he felt X was no longer a place for “honest debate and discussion.” He noted changes to the site’s terms of service set to go into effect Friday that state lawsuits against X must be filed in the U.S. District Court for the Northern District of Texas rather than the Western District of Texas. Musk said in July that he was moving X’s headquarters to Texas from San Francisco.
“As the Washington Post recently reported on X’s decision to change the terms, this ‘ensures that such lawsuits will be heard in courthouses that are a hub for conservatives, which experts say could make it easier for X to shield itself from litigation and punish critics,’” Lemon wrote. “I think that speaks for itself.”
Last year, advertisers such as IBM, NBCUniversal and its parent company Comcast fled X over concerns about their ads showing up next to pro-Nazi content and hate speech on the site in general, with Musk inflaming tensions with his own posts endorsing an antisemitic conspiracy theory.
Source: Tech – TIME | 14 Nov 2024 | 3:45 pm
When Donald Trump was last President, ChatGPT had not yet been launched. Now, as he prepares to return to the White House after defeating Vice President Kamala Harris in the 2024 election, the artificial intelligence landscape looks quite different.
AI systems are advancing so rapidly that some leading executives of AI companies, such as Anthropic CEO Dario Amodei and Elon Musk, the Tesla CEO and a prominent Trump backer, believe AI may become smarter than humans by 2026. Others offer a more general timeframe. In an essay published in September, OpenAI CEO Sam Altman said, “It is possible that we will have superintelligence in a few thousand days,” but also noted that “it may take longer.” Meanwhile, Meta CEO Mark Zuckerberg sees the arrival of these systems as more of a gradual process rather than a single moment.
Either way, such advances could have far-reaching implications for national security, the economy, and the global balance of power.
Read More: When Might AI Outsmart Us? It Depends Who You Ask
Trump’s own pronouncements on AI have fluctuated between awe and apprehension. In a June interview on Logan Paul’s Impaulsive podcast, he described AI as a “superpower” and called its capabilities “alarming.” And like many in Washington, he views the technology through the lens of competition with China, which he sees as the “primary threat” in the race to build advanced AI.
Yet even his closest allies are divided on how to govern the technology: Musk has long voiced concerns about AI’s existential risks, while J.D. Vance, Trump’s Vice President-elect, sees such warnings from industry as a ploy to usher in regulations that would “entrench the tech incumbents.” These divisions among Trump’s confidants hint at the competing pressures that will shape AI policy during Trump’s second term.
Trump’s first major AI policy move will likely be to repeal President Joe Biden’s Executive Order on AI. The sweeping order, signed in October 2023, sought to address threats the technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services.
Trump promised to repeal the Executive Order on the campaign trail in December 2023, and this position was reaffirmed in the Republican Party platform in July, which criticized the executive order for hindering innovation and imposing “radical leftwing ideas” on the technology’s development.
Read more: Republicans’ Vow to Repeal Biden’s AI Executive Order Has Some Experts Worried
Sections of the Executive Order which focus on racial discrimination or inequality are “not as much Trump’s style,” says Dan Hendrycks, executive and research director of the Center for AI Safety. While experts have criticized any rollback of bias protections, Hendrycks says the Trump Administration may preserve other aspects of Biden’s approach. “I think there’s stuff in [the Executive Order] that’s very bipartisan, and then there’s some other stuff that’s more specifically Democrat-flavored,” Hendrycks says.
“It would not surprise me if a Trump executive order on AI maintained or even expanded on some of the core national security provisions within the Biden Executive Order, building on what the Department of Homeland Security has done for evaluating cybersecurity, biological, and radiological risks associated with AI,” says Samuel Hammond, a senior economist at the Foundation for American Innovation, a technology-focused think tank.
The fate of the U.S. AI Safety Institute (AISI), an institution created last November by the Biden Administration to lead the government’s efforts on AI safety, also remains uncertain. In August, the AISI signed agreements with OpenAI and Anthropic to formally collaborate on AI safety research, and on the testing and evaluation of new models. “Almost certainly, the AI Safety Institute is viewed as an inhibitor to innovation, which doesn’t necessarily align with the rest of what appears to be Trump’s tech and AI agenda,” says Keegan McBride, a lecturer in AI, government, and policy at the Oxford Internet Institute. But Hammond says that while some fringe voices would move to shutter the institute, “most Republicans are supportive of the AISI. They see it as an extension of our leadership in AI.”
Read more: What Trump’s Win Means for Crypto
Congress is already working on protecting the AISI. In October, a broad coalition of companies, universities, and civil society groups—including OpenAI, Lockheed Martin, Carnegie Mellon University, and the nonprofit Encode Justice—signed a letter calling on key figures in Congress to urgently establish a legislative basis for the AISI. Efforts are underway in both the Senate and the House of Representatives, and both reportedly have “pretty wide bipartisan support,” says Hamza Chaudhry, U.S. policy specialist at the nonprofit Future of Life Institute.
Trump’s previous comments suggest that maintaining the U.S.’s lead in AI development will be a key focus for his Administration. “We have to be at the forefront,” he said on the Impaulsive podcast in June. “We have to take the lead over China.” Trump also framed environmental concerns as potential obstacles, arguing they could “hold us back” in what he views as the race against China.
Trump’s AI policy could include rolling back regulations to accelerate infrastructure development, says Dean Ball, a research fellow at George Mason University. “There’s the data centers that are going to have to be built. The energy to power those data centers is going to be immense. I think even bigger than that: chip production,” he says. “We’re going to need a lot more chips.” While Trump’s campaign has at times attacked the CHIPS Act, which provides incentives for chip makers manufacturing in the U.S., some analysts believe he is unlikely to repeal it.
Read more: What Donald Trump’s Win Means for the Economy
Chip export restrictions are likely to remain a key lever in U.S. AI policy. Building on measures he initiated during his first term—which were later expanded by Biden—Trump may well strengthen controls that curb China’s access to advanced semiconductors. “It’s fair to say that the Biden Administration has been pretty tough on China, but I’m sure Trump wants to be seen as tougher,” McBride says. It is “quite likely” that Trump’s White House will “double down” on export controls in an effort to close gaps that have allowed China to access chips, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. “The overwhelming majority of people on both sides think that the export controls are important,” he says.
The rise of open-source AI presents new challenges. China has shown it can leverage U.S. systems, as demonstrated when Chinese researchers reportedly adapted an earlier version of Meta’s Llama model for military applications. That’s created a policy divide. “You’ve got people in the GOP that are really in favor of open-source,” Ball says. “And then you have people who are ‘China hawks’ and really want to forbid open-source at the frontier of AI.”
“My sense is that because a Trump platform has so much conviction in the importance and value of open-source I’d be surprised to see a movement towards restriction,” Singer says.
Despite his tough talk, Trump’s deal-making impulses could shape his policy towards China. “I think people misunderstand Trump as a China hawk. He doesn’t hate China,” Hammond says, describing Trump’s “transactional” view of international relations. In 2018, Trump lifted restrictions on Chinese technology company ZTE in exchange for a $1.3 billion fine and increased oversight. Singer sees similar possibilities for AI negotiations, particularly if Trump accepts concerns held by many experts about AI’s more extreme risks, such as the chance that humanity may lose control over future systems.
Read more: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows
Debates over how to govern AI reveal deep divisions within Trump’s coalition of supporters. Leading figures, including Vance, favor looser regulations of the technology. Vance has dismissed AI risk as an industry ploy to usher in new regulations that would “make it actually harder for new entrants to create the innovation that’s going to power the next generation of American growth.”
Silicon Valley billionaire Peter Thiel, who served on Trump’s 2016 transition team, recently cautioned against movements to regulate AI. Speaking at the Cambridge Union in May, he said any government with the authority to govern the technology would have a “global totalitarian character.” Marc Andreessen, the co-founder of prominent venture capital firm Andreessen Horowitz, gave $2.5 million to a pro-Trump super political action committee, and an additional $844,600 to Trump’s campaign and the Republican Party.
Yet a more safety-focused perspective has found other supporters in Trump’s orbit. Hammond, who advised on the AI policy committee for Project 2025, a proposed policy agenda led by the right-wing think tank the Heritage Foundation but not officially endorsed by the Trump campaign, says that “within the people advising that project, [there was a] very clear focus on artificial general intelligence and catastrophic risks from AI.”
Musk, who has emerged as a prominent Trump campaign ally through both his donations and his promotion of Trump on his platform X (formerly Twitter), has long been concerned that AI could pose an existential threat to humanity. Recently, Musk said he believes there’s a 10% to 20% chance that AI “goes bad.” In August, Musk posted on X supporting the now-vetoed California AI safety bill that would have put guardrails on AI developers. Hendrycks, whose organization co-sponsored the California bill, and who serves as safety adviser at xAI, Musk’s AI company, says “If Elon is making suggestions on AI stuff, then I expect it to go well.” However, “there’s a lot of basic appointments and groundwork to do, which makes it a little harder to predict,” he says.
Trump has acknowledged some of the national security risks of AI. In June, he said he feared deepfakes of a U.S. President threatening a nuclear strike could prompt another state to respond, sparking a nuclear war. He also gestured to the idea that an AI system could “go rogue” and overpower humanity, but took care to distinguish this position from his personal view. However, for Trump, competition with China appears to remain the primary concern.
Read more: Trump Worries AI Deepfakes Could Trigger Nuclear War
But these priorities aren’t necessarily at odds, and AI safety regulation does not inherently entail ceding ground to China, Hendrycks says. He notes that safeguards against malicious use require minimal investment from developers. “You have to hire one person to spend, like, a month or two on engineering, and then you get your jailbreaking safeguards,” he says. Still, with these competing voices shaping Trump’s AI agenda, the direction of his policy remains uncertain.
“In terms of which viewpoint President Trump and his team side towards, I think that is an open question, and that’s just something we’ll have to see,” says Chaudhry. “Now is a pivotal moment.”
Source: Tech – TIME | 9 Nov 2024 | 6:01 am
This election cycle, the crypto industry poured over $100 million into races across the country, hoping to assert crypto’s relevancy as a voter issue and usher pro-crypto candidates into office. On Wednesday morning, almost all of the industry’s wishes came true. Republican candidate Donald Trump, who has lavished praise upon Bitcoin this year, won handily against his Democratic opponent Kamala Harris. And crypto PACs scored major wins in House and Senate races—most notably in Ohio, where Republican Bernie Moreno defeated crypto skeptic Sherrod Brown.
As Trump’s numbers ascended on Tuesday night, Bitcoin hit a new record high, topping $75,000. Crypto-related stocks, including Robinhood Markets and MicroStrategy, also leapt upward. Enthusiasts now believe that Trump’s Administration will strip back regulation of the crypto industry, and that a favorable Congress will pass legislation that gives the industry more room to grow.
“This is a huge victory for crypto,” Kristin Smith, the CEO of the Blockchain Association, a D.C.-based lobbying group, tells TIME. “I think we’ve really turned a corner, and we’ve got the right folks in place to get the policy settled once and for all.”
Many crypto fans supported Trump over Harris for several reasons. Trump spoke glowingly about crypto this year on the campaign trail, despite casting skepticism upon it for years. At the Bitcoin conference in Nashville in July, Trump floated the idea of establishing a federal Bitcoin reserve, and stressed the importance of bringing more Bitcoin mining operations to the U.S.
Read More: Inside the Health Crisis of a Texas Bitcoin Town
Perhaps most importantly, Trump vowed to oust Gary Gensler, the chair of the Securities and Exchange Commission (SEC), who has brought many lawsuits against crypto projects for allegedly violating securities laws. Gensler is a widely-reviled figure in the crypto industry, with many accusing him of stifling innovation. Gensler, conversely, argued that it was his job to protect consumers from the massive crypto collapses that unfolded in 2022, including Terra Luna and FTX.
Gensler’s term isn’t up until 2026, but some analysts expect him to resign once Trump takes office, as previous SEC chairs have done after the President who appointed them lost an election. A change in SEC leadership could allow many more crypto products to enter mainstream financial markets. For the past few years, the SEC had been hesitant to approve crypto ETFs: investment vehicles that allow people to bet on crypto without actually holding it. But a judge forced Gensler’s hand, bringing Bitcoin ETFs onto the market in January. Now, under a friendlier SEC, ETFs based on smaller cryptocurrencies like Solana and XRP may be next.
Many crypto enthusiasts are also excited by Trump’s alliance with Elon Musk, who has long championed cryptocurrencies on social media. On election night, Dogecoin, Musk’s preferred meme coin, spiked 25% to 21 cents.
Crypto enthusiasts are also cheering the results in the Senate, which was the focus of most of the industry’s political contributions. Crypto PACs like Fairshake spent over $100 million supporting pro-crypto candidates and opposing anti-crypto ones, in hopes of shaping a new Congress that would pass legislation favorable to the industry. Chief among lobbyists’ hopes was a bill that would shift crypto regulation from the SEC to the Commodity Futures Trading Commission (CFTC), a much smaller agency.
Read More: Crypto Is Pouring Cash Into the 2024 Elections. Will It Pay Off?
Crypto PACs particularly focused their efforts on Ohio, spending some $40 million to unseat Democrat Brown, the Senate Banking Committee Chair and a crypto critic. His opponent, Moreno, has been a regular attendee at crypto conferences and vowed to “lead the fight to defend crypto in the US Senate.” On Tuesday night, Moreno won, flipping the seat and helping Republicans take control of the Senate.
Defend American Jobs, a crypto PAC affiliated with Fairshake, claimed credit for Brown’s defeat on Tuesday. “Elizabeth Warren ally Sherrod Brown was a top opponent of cryptocurrency and thanks to our efforts, he will be leaving the Senate,” spokesperson Josh Vlasto wrote in a statement. “Senator-Elect Moreno’s come-from-behind win shows that Ohio voters want a leader who prioritizes innovation, protects American economic interests, and will ensure our nation’s continued technological leadership.”
Crypto PACs notched another victory in Montana, where their preferred candidate, Republican Tim Sheehy, defeated Democrat Jon Tester.
Finally, crypto enthusiasts celebrated the accuracy of prediction markets, which allow users to bet on election results using crypto. Advocates claimed that prediction markets could be more accurate than polls, because they channeled the collective wisdom of people with skin in the game. Critics, on the other hand, dismissed them as being too volatile and based in personal sentiment and boosterism.
For weeks, prediction markets had been far more favorable toward Trump than the polls, which portrayed Trump and Harris in a dead heat. (For example, Polymarket gave Trump a 62% chance of winning on Nov. 3.) And on Election Day, before any major results had been tabulated, prediction markets swung heavily towards Trump; the odds of Republicans sweeping the presidency, the House, and the Senate jumped to 44% on Kalshi.
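Those percentages are read straight off contract prices: a “yes” share that pays out $1 if the event happens and trades at 62 cents implies a roughly 62% chance. The sketch below shows that conversion on invented prices, not actual Polymarket or Kalshi data.

```python
# Convert prediction-market contract prices into implied probabilities.
# The prices below are illustrative placeholders, not real market data.

def implied_probability(yes_price_cents, payout_cents=100.0):
    """A contract paying $1 (100 cents) that trades at 62 cents implies ~62%."""
    return yes_price_cents / payout_cents

# Hypothetical two-candidate market: "yes" prices often sum to slightly more
# than 100 cents because of the market's spread, so normalize before comparing.
prices = {"Candidate A": 62.0, "Candidate B": 39.0}
raw = {name: implied_probability(p) for name, p in prices.items()}
total = sum(raw.values())

for name, prob in raw.items():
    print(f"{name}: {prob:.0%} raw, {prob / total:.0%} normalized")
```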
In the last couple of months, bettors wagered over $2 billion on the presidential election on Polymarket, according to Dune Analytics. It’s still unclear whether prediction markets are actually more accurate than polls on average. But their success in this election will likely only increase their presence in the political arena in years to come.
Crypto’s future in the Trump era is far from guaranteed. Crypto prices are highly susceptible to global events, like Russia’s invasion of Ukraine, as well as larger macroeconomic trends. Fraudulent crypto projects like FTX, which thrived in deregulated environments, have also tanked prices in years past. Skeptics worry that more Americans being able to buy crypto will add volatility and risk to the American financial system.
And it’s unclear how dedicated Trump actually is to crypto, or whether he will follow through on his pledges to the industry. “If he doesn’t deliver on these promises quickly, the euphoria could turn to disappointment, which has the potential to result in crypto market volatility,” Tim Kravchunovsky, founder and CEO of the decentralized telecommunications network Chirp, wrote to TIME. “We have to be prepared for this because the reality is that crypto isn’t the most important issue on Trump’s current agenda.”
But for now, most crypto fans believe that a “bull run,” in which prices increase, is imminent, and that regulatory change is incoming. “I don’t think we’re going to see the same kind of hostility from the government, particularly members of Congress, as we have in the past,” says Smith. “This is really positive news for all parts of the ecosystem.”
Andrew R. Chow’s book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.
Source: Tech – TIME | 7 Nov 2024 | 4:57 am
Today’s best AI models, like OpenAI’s ChatGPT and Anthropic’s Claude, come with conditions: their creators control the terms on which they are accessed to prevent them being used in harmful ways. This is in contrast with ‘open’ models, which can be downloaded, modified, and used by anyone for almost any purpose. A new report by non-profit research organization Epoch AI found that open models available today are about a year behind the top closed models.
“The best open model today is on par with closed models in performance, but with a lag of about one year,” says Ben Cottier, lead researcher on the report.
Meta’s Llama 3.1 405B, an open model released in July, took about 16 months to match the capabilities of the first version of GPT-4. If Meta’s next generation AI, Llama 4, is released as an open model, as it is widely expected to be, this gap could shrink even further. The findings come as policymakers grapple with how to deal with increasingly powerful AI systems, which have already been reshaping information environments ahead of elections across the world, and which some experts worry could one day be capable of engineering pandemics, executing sophisticated cyberattacks, and causing other harms to humans.
Researchers at Epoch AI analyzed hundreds of notable models released since 2018. To arrive at their results, they measured the performance of top models on technical benchmarks—standardized tests that measure an AI’s ability to handle tasks like solving math problems, answering general knowledge questions, and demonstrating logical reasoning. They also looked at how much computing power, or compute, was used to train each model, since compute has historically been a good proxy for capabilities. (Open models can sometimes perform as well as closed models while using less compute, thanks to advances in the efficiency of AI algorithms.) “The lag between open and closed models provides a window for policymakers and AI labs to assess frontier capabilities before they become available in open models,” Epoch researchers write in the report.
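A figure like Epoch’s one-year lag can be derived from this kind of comparison: for each capability level the closed frontier reaches, measure how long the best open model takes to get there. Below is a minimal sketch of that calculation, using invented benchmark scores and release dates rather than Epoch’s actual dataset.

```python
# A rough sketch of estimating the open-vs-closed lag on a single benchmark.
# All scores and release dates below are invented for illustration; they are
# not Epoch AI's data.
from datetime import date

closed_models = [  # (release date, benchmark score) for the closed frontier
    (date(2023, 3, 14), 86.4),
    (date(2024, 5, 13), 88.7),
]
open_models = [  # (release date, benchmark score) for notable open models
    (date(2023, 7, 18), 68.9),
    (date(2024, 7, 23), 88.6),
]

def months_until_open_catches_up(closed_date, closed_score):
    """Months between a closed model's release and the first open model
    that matched or exceeded its benchmark score (None if none has yet)."""
    catch_ups = [d for d, s in open_models if s >= closed_score and d >= closed_date]
    if not catch_ups:
        return None
    return (min(catch_ups) - closed_date).days / 30.4

for released, score in closed_models:
    lag = months_until_open_catches_up(released, score)
    if lag is None:
        print(f"No open model has matched the closed milestone from {released} yet")
    else:
        print(f"Closed milestone from {released}: open models caught up after ~{lag:.0f} months")
```

Epoch’s actual methodology spans many benchmarks and hundreds of models; this toy version only illustrates how a lag is measured for one milestone.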
Read More: The Researcher Trying to Glimpse the Future of AI
But the distinction between ‘open’ and ‘closed’ AI models is not as simple as it might appear. While Meta describes its Llama models as open-source, they don’t meet the new definition published last month by the Open Source Initiative, which has historically set the industry standard for what constitutes open source. The new definition requires companies to share not just the model itself, but also the data and code used to train it. Meta releases its model “weights”—long lists of numbers that allow users to download and modify the model—but it doesn’t release either the training data or the code used to train the models. Before downloading a model, users must agree to an Acceptable Use Policy that prohibits military use and other harmful or illegal activities, although once models are downloaded, these restrictions are difficult to enforce in practice.
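For readers unfamiliar with what releasing the weights looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The model ID is only an example of a gated repository; downloading it requires accepting the publisher’s license (the step that corresponds to the Acceptable Use Policy described above) and supplying your own access token.

```python
# A minimal sketch of downloading open model weights with Hugging Face
# `transformers`. The repository ID is an example only; gated repos require
# accepting the publisher's license and using your own access token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B"  # example gated repository
HF_TOKEN = "hf_..."                   # your personal token, issued after accepting the license

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, token=HF_TOKEN)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    token=HF_TOKEN,
    torch_dtype=torch.bfloat16,  # smaller memory footprint on supported hardware
)

# The "weights" are ordinary tensors that can be inspected, modified, and re-saved
# locally, which is what makes open models adaptable and, once released,
# impossible to recall.
total_params = sum(p.numel() for p in model.parameters())
print(f"Downloaded {total_params / 1e9:.1f}B parameters")

model.save_pretrained("local-llama-copy")     # a local copy you can edit or fine-tune
tokenizer.save_pretrained("local-llama-copy")
```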
Meta says it disagrees with the Open Source Initiative’s new definition. “There is no single open source AI definition, and defining it is a challenge because previous open source definitions do not encompass the complexities of today’s rapidly advancing AI models,” a Meta spokesperson told TIME in an emailed statement. “We make Llama free and openly available, and our license and Acceptable Use Policy help keep people safe by having some restrictions in place. We will continue working with OSI and other industry groups to make AI more accessible and free responsibly, regardless of technical definitions.”
Making AI models open is widely seen to be beneficial because it democratizes access to technology and drives innovation and competition. “One of the key things that open communities do is they get a wider, geographically more-dispersed, and more diverse community involved in AI development,” says Elizabeth Seger, director of digital policy at Demos, a U.K.-based think tank. Open communities, which include academic researchers, independent developers, and non-profit AI labs, also drive innovation through collaboration, particularly in making technical processes more efficient. “They don’t have the same resources to play with as Big Tech companies, so being able to do a lot more with a lot less is really important,” says Seger. In India, for example, “AI that’s built into public service delivery is almost completely built off of open source models,” she says.
Open models also enable greater transparency and accountability. “There needs to be an open version of any model that becomes basic infrastructure for society, because we do need to know where the problems are coming from,” says Yacine Jernite, machine learning and society lead at Hugging Face, a company that maintains the digital infrastructure where many open models are hosted. He points to the example of Stable Diffusion 2, an open image generation model that allowed researchers and critics to examine its training data and push back against potential biases or copyright infringements—something impossible with closed models like OpenAI’s DALL-E. “You can do that much more easily when you have the receipts and the traces,” he says.
Read More: The Heated Debate Over Who Should Control Access to AI
However, the fact that open models can be used by anyone creates inherent risks, as people with malicious intentions can use them for harm, such as producing child sexual abuse material, or they could even be used by rival states. Last week, Reuters reported that Chinese research institutions linked to the People’s Liberation Army had used an old version of Meta’s Llama model to develop an AI tool for military use, underscoring the fact that, once a model has been publicly released, it cannot be recalled. Chinese companies such as Alibaba have also developed their own open models, which are reportedly competitive with their American counterparts.
On Monday, Meta announced it would make its Llama models available to U.S. government agencies, including those working on defense and national security applications, and to private companies supporting government work, such as Lockheed Martin, Anduril, and Palantir. The company argues that American leadership in open-source AI is both economically advantageous and crucial for global security.
Closed proprietary models present their own challenges. While they are more secure, because access is controlled by their developers, they are also more opaque. Third parties cannot inspect the data on which the models are trained to search for bias, copyrighted material, and other issues. Organizations using AI to process sensitive data may choose to avoid closed models due to privacy concerns. And while these models have stronger guardrails built in to prevent misuse, many people have found ways to ‘jailbreak’ them, effectively circumventing these guardrails.
At present, the safety of closed models is primarily in the hands of private companies, although government institutions such as the U.S. AI Safety Institute (AISI) are increasingly playing a role in safety-testing models ahead of their release. In August, the U.S. AISI signed formal agreements with Anthropic to enable “formal collaboration on AI safety research, testing and evaluation”.
Because of the lack of centralized control, open models present distinct governance challenges—particularly in relation to the most extreme risks that future AI systems could pose, such as empowering bioterrorists or enhancing cyberattacks. How policymakers should respond depends on whether the capabilities gap between open and closed models is shrinking or widening. “If the gap keeps getting wider, then when we talk about frontier AI safety, we don’t have to worry so much about open ecosystems, because anything we see is going to be happening with closed models first, and those are easier to regulate,” says Seger. “However, if that gap is going to get narrower, then we need to think a lot harder about if and how and when to regulate open model development, which is an entire other can of worms, because there’s no central, regulatable entity.”
For companies such as OpenAI and Anthropic, selling access to their models is central to their business model. “A key difference between Meta and closed model providers is that selling access to AI models isn’t our business model,” Meta CEO Mark Zuckerberg wrote in an open letter in July. “We expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency.”
Measuring the abilities of AI systems is not straightforward. “Capabilities is not a term that’s defined in any way, shape or form, which makes it a terrible thing to discuss without common vocabulary,” says Jernite. “There are many things you can do with open models that you can’t do with closed models,” he says, emphasizing that open models can be adapted to a range of use-cases, and that they may outperform closed models when trained for specific tasks.
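One concrete form of that adaptability is parameter-efficient fine-tuning: because the weights are in hand, anyone can attach small trainable adapters for a narrow task. The sketch below uses the Hugging Face peft library; the model path, target modules, and hyperparameters are placeholders, not a recipe from any of the labs mentioned here.

```python
# A rough sketch of adapting an open model to a specific task with LoRA
# adapters via the Hugging Face `peft` library. The model path and settings
# are placeholders that would need tuning for a real task.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("path/to/downloaded-open-model")

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the adapter output
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights

# From here the adapted model trains like any other Transformers model on a
# task-specific dataset (for example with transformers.Trainer); only the small
# adapter weights need to be saved and shared afterward.
```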
Ethan Mollick, a Wharton professor and popular commentator on the technology, argues that even if there were no further progress in AI, it would likely take years for these systems to be fully integrated into our world. With new capabilities being added to AI systems at a steady rate—in October, frontier AI lab Anthropic introduced a still-in-beta ability for its model to directly control a computer—the complexity of governing the technology will only increase.
In response, Seger says that it is vital to tease out exactly what risks are at stake. “We need to establish very clear threat models outlining what the harm is and how we expect openness to lead to the realization of that harm, and then figure out the best point along those individual threat models for intervention.”
Source: Tech – TIME | 6 Nov 2024 | 3:15 am