
Microsoft Corp.’s partnership with OpenAI Inc. is facing the prospect of a full-blown UK antitrust investigation, three weeks after a mutiny at the ChatGPT creator laid bare the deep ties between the two companies.
The Competition and Markets Authority said Friday it was gathering information from stakeholders to determine whether the collaboration between the two firms threatens competition in the UK, home of Google’s AI research lab DeepMind. Microsoft fell 0.7% in premarket trading.
Microsoft has benefited richly from its investments, totaling as much as $13 billion, in OpenAI. By integrating OpenAI’s products into virtually every corner of its core businesses, the software giant very quickly established itself as the undisputed leader of AI among big tech firms. Rival Alphabet Inc.’s Google has been racing to catch up ever since.
The firing — and subsequent rehiring — of Sam Altman as chief of OpenAI last month exposed how inextricably linked the two companies have become. Microsoft shares fell immediately after OpenAI’s board ousted Altman. Microsoft chief executive officer Satya Nadella personally helped negotiate and advocate for his return to the company — at one point offering to hire Altman himself, along with other employees at OpenAI who wanted to leave.
OpenAI’s board eventually agreed to reinstate Altman. The company recently named a three-person interim board and added Microsoft as a nonvoting observer.
Microsoft President Brad Smith said in a statement Friday that “the only thing that has changed is that Microsoft will now have a non-voting observer on OpenAI’s board.” He described its relationship with OpenAI as “very different” from Google’s outright acquisition of DeepMind in the UK. OpenAI didn’t immediately respond to a request for comment.
Smith had said as recently as last month that he didn’t “see a future where Microsoft takes control of OpenAI.”
The CMA said it will look at whether the balance of power between the two firms has fundamentally shifted to give one side more control or influence over the other. When asked to comment on the CMA’s move, a European Commission spokesperson said the regulator had been “following the situation of control over OpenAI very closely.”
The move by the CMA puts Microsoft under the antitrust microscope once again. Its acquisition of video-game giant Activision Blizzard was subjected to nearly two years of regulatory scrutiny before gaining approval in the UK less than two months ago.
Read more: U.K. Competition Watchdog Signals Cautious Approach to AI Regulation
At the core of the partnership between Microsoft and OpenAI is the massive amount of computing power required to keep the worldwide boom in generative AI going. Running the systems behind tools such as ChatGPT and Google’s Bard has sent demand for cloud services and processing capacity soaring. OpenAI, for example, has become a major customer of Microsoft’s cloud business.
In turn, all three of the world’s biggest cloud-computing providers — Microsoft, Amazon.com Inc., and Google — have become active investors in AI startups.
These large firms have used such deals and tie-ups to “co-opt and neutralize potential rivals” in AI, said Max von Thun, director of Europe for Open Markets Institute, a think tank. “It is essential that antitrust authorities move quickly to investigate these deals, including unwinding them if necessary.”
Source: Tech – TIME | 8 Dec 2023 | 8:31 am

Apps and websites that use artificial intelligence to undress women in photos are soaring in popularity, according to researchers.
In September alone, 24 million people visited undressing websites, the social network analysis company Graphika found.
Many of these undressing, or “nudify,” services use popular social networks for marketing, according to Graphika. For instance, since the beginning of this year, the number of links advertising undressing apps increased more than 2,400% on social media, including on X and Reddit, the researchers said. The services use AI to recreate an image so that the person is nude. Many of the services only work on women.
Read More: Pausing AI Developments Isn’t Enough. We Need to Shut It All Down
These apps are part of a worrying trend of non-consensual pornography being developed and distributed because of advances in artificial intelligence — a type of fabricated media known as deepfake pornography. Its proliferation runs into serious legal and ethical hurdles, as the images are often taken from social media and distributed without the consent, control or knowledge of the subject.
The rise in popularity corresponds to the release of several open source diffusion models, or artificial intelligence that can create images that are far superior to those created just a few years ago, Graphika said. Because they are open source, the models that the app developers use are available for free.
“You can create something that actually looks realistic,” said Santiago Lakatos, an analyst at Graphika, noting that previous deepfakes were often blurry.
One image posted to X advertising an undressing app used language that suggests customers could create nude images and then send them to the person whose image was digitally undressed, inciting harassment. One of the apps, meanwhile, has paid for sponsored content on Google’s YouTube, and appears first when searching with the word “nudify.”
A Google spokesperson said the company doesn’t allow ads “that contain sexually explicit content.”
“We’ve reviewed the ads in question and are removing those that violate our policies,” the company said.
A Reddit spokesperson said the site prohibits any non-consensual sharing of faked sexually explicit material and had banned several domains as a result of the research. X didn’t respond to a request for comment.
In addition to the rise in traffic, the services, some of which charge $9.99 a month, claim on their websites that they are attracting a lot of customers. “They are doing a lot of business,” Lakatos said. Describing one of the undressing apps, he said, “If you take them at their word, their website advertises that it has more than a thousand users per day.”
Non-consensual pornography of public figures has long been a scourge of the internet, but privacy experts are growing concerned that advances in AI technology have made deepfake software easier and more effective.
“We are seeing more and more of this being done by ordinary people with ordinary targets,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation. “You see it among high school children and people who are in college.”
Many victims never find out about the images, but even those who do may struggle to get law enforcement to investigate or to find funds to pursue legal action, Galperin said.
Read More: The Heated Debate Over Who Should Control Access to AI
There is currently no federal law banning the creation of deepfake pornography, though the U.S. government does outlaw the generation of such images of minors. In November, a North Carolina child psychiatrist was sentenced to 40 years in prison for using undressing apps on photos of his patients, the first prosecution of its kind under the law banning deepfake generation of child sexual abuse material.
TikTok has blocked the keyword “undress,” a popular search term associated with the services, warning anyone searching for the word that it “may be associated with behavior or content that violates our guidelines,” according to the app. A TikTok representative declined to elaborate. In response to questions, Meta Platforms Inc. also began blocking key words associated with searching for undressing apps. A spokesperson declined to comment.
Source: Tech – TIME | 8 Dec 2023 | 1:55 am

(SANTA FE, N.M.) — Facebook and Instagram fail to protect underage users from exposure to child sexual abuse material and let adults solicit pornographic imagery from them, New Mexico’s attorney general alleges in a lawsuit that follows an undercover online investigation.
“Our investigation into Meta’s social media platforms demonstrates that they are not safe spaces for children but rather prime locations for predators to trade child pornography and solicit minors for sex,” Attorney General Raúl Torrez said in a statement Wednesday.
The civil lawsuit filed late Tuesday against Meta Platforms Inc. in state court also names its CEO, Mark Zuckerberg, as a defendant.
In addition, the suit claims Meta “harms children and teenagers through the addictive design of its platform, degrading users’ mental health, their sense of self-worth, and their physical safety,” Torrez’s office said in a statement.
Those claims echo a lawsuit filed in late October by the attorneys general of 33 states, including California and New York, against Meta that alleges Instagram and Facebook include features deliberately designed to hook children, contributing to the youth mental health crisis and leading to depression, anxiety and eating disorders. New Mexico was not a party to that lawsuit.
Investigators in New Mexico created decoy accounts of children 14 years and younger that Torrez’s office said were served sexually explicit images even when the child expressed no interest in them. State prosecutors claim that Meta let dozens of adults find, contact and encourage children to provide sexually explicit and pornographic images.
The accounts also received recommendations to join unmoderated Facebook groups devoted to facilitating commercial sex, investigators said, adding that Meta also let its users find, share, and sell “an enormous volume of child pornography.”
“Mr. Zuckerberg and other Meta executives are aware of the serious harm their products can pose to young users, and yet they have failed to make sufficient changes to their platforms that would prevent the sexual exploitation of children,” Torrez said, accusing Meta’s executives of prioritizing “engagement and ad revenue over the safety of the most vulnerable members of our society.”
Meta, which is based in Menlo Park, California, did not directly respond to the New Mexico lawsuit’s allegations, but said that it works hard to protect young users with a serious commitment of resources.
“We use sophisticated technology, hire child safety experts, report content to the National Center for Missing and Exploited Children, and share information and tools with other companies and law enforcement, including state attorneys general, to help root out predators,” the company said. “In one month alone, we disabled more than half a million accounts for violating our child safety policies.”
Company spokesman Andy Stone pointed to a company report detailing the millions of tips Facebook and Instagram sent to the National Center in the third quarter of 2023 — including 48,000 involving inappropriate interactions that could include an adult soliciting child sexual abuse material directly from a minor or attempting to meet with one in person.
Critics including former employees have long complained that Meta’s largely automated content moderation systems are ill-equipped to identify and adequately eliminate abusive behavior on its platforms.
Source: Tech – TIME | 7 Dec 2023 | 11:30 am

(LONDON) — European Union talks on world-leading comprehensive artificial intelligence regulations were paused Thursday after 22 straight hours, with officials yet to hammer out a deal on a rulebook for the rapidly advancing technology behind popular services like ChatGPT.
European Commissioner Thierry Breton tweeted that talks, which began Wednesday afternoon in Brussels and ran through the night, would resume on Friday morning.
“Lots of progress made over past 22 hours” on the EU’s Artificial Intelligence Act, he wrote. “Stay tuned!”
Representatives of the bloc’s 27 member states, European Parliament lawmakers, and European Commission officials are under the gun to secure a political agreement for the flagship AI Act. They spent hours wrangling over controversial points such as generative AI and AI-powered police facial recognition.
There was disagreement over whether and how to regulate foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google’s Bard chatbot.
EU lawmakers also want a full ban on facial recognition systems because of privacy concerns, but they are at odds with governments from member countries that want to use it for law enforcement.
Officials are eager to sign off on a deal in time for final approval from the European Parliament before it breaks up for bloc-wide elections next year. They’re also scrambling to get it done by the end of December, when Spain’s turn at the rotating EU presidency ends.
Once it gets final approval, the AI Act wouldn’t take effect until 2025 at the earliest.
Source: Tech – TIME | 7 Dec 2023 | 11:22 am

Google DeepMind has announced its much-anticipated family of artificial intelligence models, Gemini, which will compete with OpenAI’s GPT series.
According to Google, Gemini Ultra, its largest and most capable new model, outperforms OpenAI’s most capable model, GPT-4, at a number of text-based, image-based, coding, and reasoning tasks. Gemini Ultra will be available through a new AI chat feature called Bard Advanced from early next year, the company said. It is currently being refined and is undergoing “trust and safety checks, including red-teaming by trusted external parties,” according to the announcement.
Google DeepMind also announced the launch of Gemini Pro, which is now available to the public through Google’s Bard chat interface, and the smaller Gemini Nano, which will run on Google’s Pixel 8 Pro smartphone. All three models can process text, images, audio, and video and produce text and image outputs.
Google will start to integrate Gemini models into its other products and services, such as internet search and advertisements. From Dec. 13, developers will be able to access Gemini Pro through an API, and Android developers will be able to build with Gemini Nano.
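As a rough illustration of what that developer access could look like, the sketch below calls Gemini Pro through Google’s generative AI Python SDK. The package name, model identifier, API-key placeholder, and prompt are assumptions for illustration, not details taken from the announcement.

```python
# Illustrative only: a minimal call to Gemini Pro via Google's Python SDK.
# The model name, prompt, and API key below are placeholders/assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")        # placeholder credential
model = genai.GenerativeModel("gemini-pro")    # assumed model identifier

response = model.generate_content(
    "Explain in two sentences what a large language model is."
)
print(response.text)
```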
The rollout will put the Gemini suite up against rivals including OpenAI, Anthropic, Inflection, Meta and Elon Musk’s xAI.

DeepMind was founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman in 2010. Google acquired the AI lab for $400 million in 2014, and in April 2023 DeepMind merged with Google’s elite AI research team, Google Brain, to form Google DeepMind, which Hassabis leads.
Read more: TIME100 Most Influential Companies 2023: Google DeepMind
A year after the acquisition, DeepMind’s founders began negotiating to try to obtain greater independence from their new parent company. In 2017, they reportedly tried to break away from Google but failed. In 2020, the founders reportedly pushed for a new legal structure to ensure that powerful AI wasn’t controlled by a single corporate entity, even hiring an outside lawyer to help draft the structure, but according to the Wall Street Journal, the proposed structure didn’t make financial sense for Alphabet.
Google and Google DeepMind have been responsible for a number of the most important AI breakthroughs of the last decade, including AlphaGo, which mastered the complex game of Go; the transformer architecture that powers today’s chatbots; and AlphaFold, which solved the protein-folding problem.
But the tech giant has lagged behind competitors such as OpenAI and Anthropic in the era of AI chatbots. A paper from 2021 suggests that DeepMind developed a chatbot, Gopher, as early as December 2020. Google DeepMind chief operating officer Lila Ibrahim told TIME that DeepMind decided not to release Gopher because it often gave factually inaccurate responses—a tendency referred to in the industry as ‘hallucinating.’ Before the DeepMind-Google Brain merger, a DeepMind project codenamed Goodall was working to build a ChatGPT competitor, although this was dropped in order to focus on Gemini, The Information reported in August.
Read more: CEO of the Year 2023: Sam Altman
Google announced its own chatbot, Bard, in February 2023, but parent Alphabet’s stock price dropped after analysts judged it to be inferior to competitors. In May it released PaLM 2, a more capable model underpinning Bard, though commentators still judged it inferior to GPT-4.
While Google has been slower to bring a consumer AI product to market, it has been on the mind of its biggest competitor. Microsoft’s partnership with OpenAI gives it privileged access to OpenAI’s AI models. After the software behemoth announced that it was incorporating OpenAI’s models into its Bing search engine, CEO Satya Nadella told The Verge in an interview that he thought AI could help his company challenge Google’s internet search dominance, and that he expected a reaction from Google. “I want people to know that we made them dance,” he said. (Data from analytics firm StatCounter suggests that Google has retained its hegemony in search, although a Microsoft executive disputed this in an interview with the Wall Street Journal in August.)
Google DeepMind said in the announcement that it had compared Gemini Ultra to a range of competitor models—OpenAI’s GPT-4, Anthropic’s Claude 2, Inflection’s Inflection-2, Meta’s Llama 2 and xAI’s Grok 1—finding that the large language model outperforms those rivals on tests including professional and academic multiple choice questions and Python coding.
Source: Tech – TIME | 6 Dec 2023 | 12:10 pm

It was a strange Thanksgiving for Sam Altman. Normally, the CEO of OpenAI flies home to St. Louis to visit family. But this time the holiday came after an existential struggle for control of a company that some believe holds the fate of humanity in its hands. Altman was weary. He went to his Napa Valley ranch for a hike, then returned to San Francisco to spend a few hours with one of the board members who had just fired and reinstated him in the span of five frantic days. He put his computer away for a few hours to cook vegetarian pasta, play loud music, and drink wine with his fiancé Oliver Mulherin. “This was a 10-out-of-10 crazy thing to live through,” Altman tells TIME on Nov. 30. “So I’m still just reeling from that.”
We’re speaking exactly one year after OpenAI released ChatGPT, the most rapidly adopted tech product ever. The impact of the chatbot and its successor, GPT-4, was transformative—for the company and the world. “For many people,” Altman says, 2023 was “the year that they started taking AI seriously.” Born as a nonprofit research lab dedicated to building artificial intelligence for the benefit of humanity, OpenAI became an $80 billion rocket ship. Altman emerged as one of the most powerful and venerated executives in the world, the public face and leading prophet of a technological revolution.
Until the rocket ship nearly imploded. On Nov. 17, OpenAI’s nonprofit board of directors fired Altman, without warning or even much in the way of explanation. The surreal maneuvering that followed made the corporate dramas of Succession seem staid. Employees revolted. So did OpenAI’s powerful investors; one even baselessly speculated that one of the directors who defenestrated Altman was a Chinese spy. The company’s visionary chief scientist voted to oust his fellow co-founder, only to backtrack. Two interim CEOs came and went. The players postured via selfie, open letter, and heart emojis on social media. Meanwhile, the company’s employees and its board of directors faced off in “a gigantic game of chicken,” says a person familiar with the discussions. At one point, OpenAI’s whole staff threatened to quit if the board didn’t resign and reinstall Altman within a few hours, three people involved in the standoff tell TIME. Then Altman looked set to decamp to Microsoft—with potentially hundreds of colleagues in tow. It seemed as if the company that catalyzed the AI boom might collapse overnight.
In the end, Altman won back his job and the board was overhauled. “We really do feel just stronger and more unified and more focused than ever,” Altman says in the last of three interviews with TIME, after his second official day back as CEO. “But I wish there had been some other way to get there.” This was no ordinary boardroom battle, and OpenAI is no ordinary startup. The episode leaves lingering questions about both the company and its chief executive.

Altman, 38, has been Silicon Valley royalty for a decade, a superstar founder with immaculate vibes. “You don’t fire a Steve Jobs,” said former Google CEO Eric Schmidt. Yet the board had. (Jobs, as it happens, was once fired by Apple, only to return as well.) As rumors swirled over the ouster, the board said there was no dispute over the safety of OpenAI’s products, the commercialization of its technology, or the pace of its research. Altman’s “behavior and lack of transparency in his interactions with the board” had undermined its ability to supervise the company in accordance with its mandate, though it did not share examples.
Interviews with more than 20 people in Altman’s circle—including current and former OpenAI employees, multiple senior executives, and others who have worked closely with him over the years—reveal a complicated portrait. Those who know him describe Altman as affable, brilliant, uncommonly driven, and gifted at rallying investors and researchers alike around his vision of creating artificial general intelligence (AGI) for the benefit of society as a whole. But four people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that he had repeatedly received feedback that he was sometimes dishonest in order to make people feel he agreed with them when he did not. These people saw this pattern as part of a broader attempt to consolidate power. “In a lot of ways, Sam is a really nice guy; he’s not an evil genius. It would be easier to tell this story if he was a terrible person,” says one of them. “He cares about the mission, he cares about other people, he cares about humanity. But there’s also a clear pattern, if you look at his behavior, of really seeking power in an extreme way.”
An OpenAI spokesperson said the company could not comment on the events surrounding Altman’s firing. “We’re unable to disclose specific details until the board’s independent review is complete. We look forward to the findings of the review and continue to stand behind Sam,” the spokesperson said in a statement to TIME. “Our primary focus remains on developing and releasing useful and safe AI, and supporting the new board as they work to make improvements to our governance structure.”
Altman has spent much of the past year assuring the public that OpenAI takes seriously the responsibility of shepherding its powerful technology into the world. One piece of evidence he gave was OpenAI’s unusual hybrid structure: it is a for-profit company governed by a nonprofit board, with a mandate to prioritize the mission over financial interests. “No one person should be trusted here,” Altman told a Bloomberg Technology conference in June. “The board can fire me. I think that’s important.” But when that happened only for Altman to maneuver his way back, it seemed to underscore that this accountability was a mirage. How could a company that had brought itself to the brink of self-destruction overnight be trusted to safely usher in a technology that many believe could destroy us all?
It’s not clear if Altman will have more power or less in his second stint as CEO. The company has established itself as the field’s front runner since the launch of ChatGPT, and expects to release new, more capable models next year. But there’s no guarantee OpenAI will maintain the industry lead as billions of dollars pour into frontier AI research by a growing field of competitors. The tech industry is known for its hype cycles—bursts of engineered excitement that allow venture capital to profit from fads like virtual reality or cryptocurrency. It’s possible the breakneck pace of AI development slows and the lofty promises about AGI don’t materialize.
But one of the big reasons for the standoff at OpenAI is that everyone involved thinks a new world is not just coming, but coming fast. Two people familiar with the board’s deliberations emphasize the stakes of supervising a company that believes it is building the most important technology in history. Altman thinks AGI—a system that surpasses humans in most regards—could be reached sometime in the next four or five years. AGI could turbocharge the global economy, expand the frontiers of scientific knowledge, and dramatically improve standards of living for billions of humans—creating a future that looks wildly different from the past. In this view, broadening our access to cognitive labor—“having more access to higher-quality intelligence and better ideas,” as Altman puts it—could help solve everything from climate change to cancer.
Read More: The AI Arms Race Is Changing Everything
But it would also come with serious risks. To many, the rapid rise in AI’s capabilities over the past year is deeply alarming. Computer scientists have not solved what’s known in the industry as the “alignment problem”—the task of ensuring that AGI conforms to human values. Few agree on who should determine those values. Altman and others have warned that advanced AI could pose “existential” risks on the scale of pandemics and nuclear war. This is the context in which OpenAI’s board determined that its CEO could not be trusted. “People are really starting to play for keeps now,” says Daniel Colson, executive director of the Artificial Intelligence Policy Institute (AIPI) and the founder of an Altman-backed startup, “because there’s an expectation that the window to try to shift the trajectory of things is closing.”
On a bright morning in early November, Altman looks nervous. We’re backstage at a cavernous event space in downtown San Francisco, where Altman will soon present to some 900 attendees at OpenAI’s first developer conference. Dressed in a gray sweater and brightly colored Adidas Lego sneakers, he thanks the speech coach helping him rehearse. “This is so not my thing,” he says. “I’m much more comfortable behind a computer screen.”
That’s where Altman was to be found on Friday nights as a high school student, playing on an original Bondi Blue iMac. He grew up in a middle-class Jewish family in the suburbs of St. Louis, the eldest of four children born to a real estate broker and a dermatologist. Altman was equal parts nerdy and self-assured. He came out as gay as a teenager, giving a speech in front of his high school after some students objected to a National Coming Out Day speaker. He enrolled at Stanford to study computer science in 2003, as memories of the dot-com crash were fading. In college, Altman got into poker, which he credits for inculcating lessons about psychology and risk. By that point, he knew he wanted to become an entrepreneur. He dropped out of school after two years to work on Loopt, a location-based social network he co-founded with his then boyfriend, Nick Sivo.
Loopt became part of the first cohort of eight companies to join Y Combinator, the now vaunted startup accelerator. The company was sold in 2012 for $43 million, netting Altman $5 million. Though the return was relatively modest, Altman learned something formative: “The way to get things done is to just be really f-cking persistent,” he told Vox’s Re/code. Those who know him say Altman has an abiding sense of obligation to tackle issues big and small. “As soon as he’s aware of a problem, he really wants to solve it,” says his fiancé Mulherin, an Australian software engineer turned investor. Or as Altman puts it, “Stuff only gets better because people show up and work. No one else is coming to save the day. You’ve just got to do it.”
YC’s co-founder Paul Graham spotted a rare blend of strategic talent, ambition, and tenacity. “You could parachute him into an island full of cannibals and come back in five years and he’d be the king,” Graham wrote of Altman when he was just 23. In February 2014, Graham tapped his protégé, then 28, to replace him as president of YC. By the time Altman took the reins, YC had incubated unicorns like Airbnb, Stripe, and Dropbox. But the new boss had a bigger vision. He wanted to expand YC’s remit beyond software to “hard tech”—the startups where the technology might not even be possible, yet where successful innovation could unlock trillions of dollars and transform the world.
Soon after becoming the leader of YC, Altman visited the headquarters of the nuclear-fusion startup Helion in Redmond, Wash. CEO David Kirtley recalls Altman showing up with a stack of physics textbooks and quizzing him about the design choices behind Helion’s prototype reactor. What shone through, Kirtley recalls, was Altman’s obsession with scalability. Assuming you could solve the scientific problem, how could you build enough reactors fast enough to meet the energy needs of the U.S.? What about the world? Helion was among the first hard-tech companies to join YC. Altman also wrote a personal check for $9.5 million and has since forked over an additional $375 million to Helion—his largest personal investment. “I think that’s the responsibility of capitalism,” Altman says. “You take big swings at things that are important to get done.”
Altman’s pursuit of fusion hints at the staggering scope of his ambition. He’s put $180 million into Retro Biosciences, a longevity startup hoping to add 10 healthy years to the human life-span. He conceived of and helped found Worldcoin, a biometric-identification system with a crypto-currency attached, which has raised hundreds of millions of dollars. Through OpenAI, Altman has spent $10 million seeding the longest-running study into universal basic income (UBI) anywhere in the U.S., which has distributed more than $40 million to 3,000 participants, and is set to deliver its first set of findings in 2024. Altman’s interest in UBI speaks to the economic dislocation that he expects AI to bring—though he says it’s not a “sufficient solution to the problem in any way.”
The entrepreneur was so alarmed at America’s direction under Donald Trump that in 2017 he explored running for governor of California. Today Altman downplays the endeavor as “a very lightweight consideration.” But Matt Krisiloff, a senior aide to Altman at the time, says they spent six months setting up focus groups across the state to help refine a political platform. “It wasn’t just a totally flippant idea,” Krisiloff says. Altman published a 10-point policy platform, which he dubbed the United Slate, with goals that included lowering housing costs, Medicare for All, tax reform, and ambitious clean-energy targets. He ultimately passed on a career switch. “It was so clear to me that I was much better suited to work on AI,” Altman says, “and that if we were able to succeed, it would be a much more interesting and impactful thing for me to do.”
But he remains keenly interested in politics. Altman’s beliefs are shaped by the theories of late 19th century political economist Henry George, who combined a belief in the power of market incentives to deliver increasing prosperity with a disdain for those who speculate on scarce assets, like land, instead of investing their capital in human progress. Altman has advocated for a land-value tax—a classic Georgist policy—in recent meetings with world leaders, he says.
Asked on a walk through OpenAI’s headquarters whether he has a vision of the future to help make sense of his various investments and interests, Altman says simply, “Abundance. That’s it.” The pursuits of fusion and superintelligence are cornerstones of the more equitable and prosperous future he envisions: “If we get abundant intelligence and abundant energy,” he says, “that will do more to help people than anything else I can possibly think of.”
Altman began thinking seriously about AGI nearly a decade ago. At the time, “it was considered career suicide,” he says. But Altman struck up a running conversation with Elon Musk, who also felt smarter-than-human machines were not only inevitable, but also dangerous if they were built by corporations chasing profits. Both feared Google, which had bought Musk out when it acquired the top AI-research lab DeepMind in 2014, would remain the dominant player in the field. They imagined a nonprofit AI lab that could be an ethical counterweight, ensuring the technology benefited not just shareholders but also humanity as a whole.

In the summer of 2015, Altman tracked down Ilya Sutskever, a star machine-learning researcher at Google Brain. The pair had dinner at the Counter, a burger bar near Google’s headquarters. As they parted ways, Altman got into his car and thought to himself, I have got to work with that guy. He and Musk spent nights and weekends courting talent. Altman drove to Berkeley to go for a walk with graduate student John Schulman; went to dinner with Stripe’s chief technology officer Greg Brockman; took a meeting with AI research scientist Wojciech Zaremba; and held a group dinner with Musk and others at the Rosewood hotel in Menlo Park, Calif., where the idea of what a new lab might look like began to take shape. “The montage is like the beginning of a movie,” Altman says, “where you’re trying to establish this ragtag crew of slight misfits to do something crazy.”
OpenAI launched in December 2015. It had six co-founders—Altman, Musk, Sutskever, Brockman, Schulman, and Zaremba—and $1 billion in donations pledged by prominent investors like Reid Hoffman, Peter Thiel, and Jessica Livingston. During OpenAI’s early years, Altman remained YC president and was involved only from a distance. OpenAI had no CEO; Brockman and Sutskever were its de facto leaders. In an office in a converted luggage factory in San Francisco’s Mission district, Sutskever’s research team threw ideas at the wall to see what stuck. “It was a very brilliant assembly of some of the best people in the field,” says Krisiloff. “At the same time, it did not necessarily feel like everyone knew what they were doing.”
In 2018, OpenAI announced its charter: a set of values that codified its approach to building AGI in the interests of humanity. There was a tension at the heart of the document, between the belief in safety and the imperative for speed. “The fundamental belief motivating OpenAI is, inevitably this technology is going to exist, so we have to win the race to create it, to control the terms of its entry into society in a way that is positive,” says a former employee. “The safety mission requires that you win. If you don’t win, it doesn’t matter that you were good.” Altman disputes the idea that OpenAI needs to outpace rival labs to deliver on its mission, but says, “I think we care about a good AGI outcome more than others.”
One key to winning was Sutskever. OpenAI’s chief scientist had an almost religious belief in the neural network, a type of AI algorithm that ingested large amounts of data and could independently detect underlying patterns. He believed these networks, though primitive at the time, could lead down a path toward AGI. “Concepts, patterns, ideas, events, they are somehow smeared through the data in a complicated way,” Sutskever told TIME in August. “So to predict what comes next, the neural network needs to somehow become aware of those concepts and how they leave a trace. And in this process, these concepts come to life.”
Read More: How We Chose the TIME100 Most Influential People in AI
To commit to Sutskever’s method and the charter’s mission, OpenAI needed vast amounts of computing power. For this it also needed cash. By 2019, OpenAI had collected only $130 million of the original $1 billion committed. Musk had walked away from the organization—and a planned donation of his own—after a failed attempt to insert himself as CEO. Altman, still running YC at the time, was trying to shore up OpenAI’s finances. He initially doubted any private investor could pump cash into the project at the volume and pace it required. He assumed the U.S. government, with its history of funding the Apollo program and the Manhattan Project, would be the best option. After a series of discussions—“you try every door,” Altman says—he was surprised to find “the chances of that happening were exactly zero.” He came to believe “the market is just going to have to do it all the way through.”
Wary of the perverse incentives that could arise if investors gained sway over the development of AGI, Altman and the leadership team debated different structures and landed on an unusual one. OpenAI would establish a “capped profit” subsidiary that could raise funds from investors, but would be governed by a nonprofit board. OpenAI’s earliest investors signed paperwork indicating they could receive returns of up to 100 times their investment, with any sums above that flowing to the nonprofit. The company’s founding ethos—a research lab unshackled from commercial considerations—had lasted less than four years.
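To make the mechanics of that cap concrete, here is a minimal, hypothetical sketch of the return split described above. The figures and the helper function are invented for illustration; they are not OpenAI’s actual terms beyond the 100x cap the article cites.

```python
# Hypothetical illustration of the "capped profit" split described above:
# an investor's proceeds are capped at 100x; any excess flows to the nonprofit.
def split_proceeds(investment, total_return, cap_multiple=100):
    cap = investment * cap_multiple          # maximum the investor can receive
    to_investor = min(total_return, cap)
    to_nonprofit = max(total_return - cap, 0)
    return to_investor, to_nonprofit

# A hypothetical $10M stake that eventually returns $2.5B:
# the investor keeps $1B (100x), and $1.5B goes to the nonprofit.
print(split_proceeds(10_000_000, 2_500_000_000))
```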
Altman was spending an increasing amount of time thinking about OpenAI’s financial troubles and hanging out at its office, where Brockman and Sutskever had been lobbying him to come on full time. “OpenAI had never had a CEO,” he says. “I was kind of doing it 30% of the time, but not very well.” He worried the lab was at an inflection point, and without proper leadership, “it could just disintegrate.” In March 2019, the same week the company’s restructure was announced, Altman left YC and formally came on as OpenAI CEO.
Altman insists this new structure was “the least bad idea” under discussion. In some ways, the solution was an elegant one: it allowed the company to raise much-needed cash from investors while telegraphing its commitment to conscientiously developing AI. Altman embodied both goals—an extraordinarily talented fundraiser who was also a thoughtful steward of a potentially transformative technology.
It didn’t take long for Altman to raise $1 billion from Microsoft—a figure that has now ballooned to $13 billion. The restructuring of the company, and the tie-up with Microsoft, changed OpenAI’s complexion in significant ways, three former employees say. Employees began receiving equity as a standard part of their compensation packages, which some holdovers from the nonprofit era thought created incentives for employees to maximize the company’s valuation. The amount of equity that staff were given was very generous by industry standards, according to a person familiar with the compensation program. Some employees fretted OpenAI was turning into something more closely resembling a traditional tech company. “We leave billion-dollar ideas on the table constantly,” says VP of people Diane Yoon.
Microsoft’s investment supercharged OpenAI’s ability to scale up its systems. An innovation from Google offered another breakthrough. Known as the “transformer,” it made neural networks far more efficient at spotting patterns in data. OpenAI researchers began to train the first models in their GPT (generative pre-trained transformer) series. With each iteration, the models improved dramatically. GPT-1, trained on the text of some 7,000 books, could just about string sentences together. GPT-2, trained on 8 million web pages, could just about answer questions. GPT-3, trained on hundreds of billions of words from the internet, books, and Wikipedia, could just about write poetry.
Altman recalls a breakthrough in 2019 that revealed the vast possibilities ahead. An experiment into “scaling laws” underpinning the relationship between the computing power devoted to training an AI and its resulting capabilities yielded a series of “perfect, smooth graphs,” he says—the kind of exponential curves that more closely resembled a fundamental law of the universe than experimental data. It was a cool June night, and in the twilight a collective realization dawned on the assembled group of researchers as they stood outside the OpenAI office: AGI was not just possible, but probably coming sooner than any of them previously thought. “We were all like, this is really going to happen, isn’t it?” Altman says. “It felt like one of these moments of science history. We know a new thing now, and we’re about to tell humanity about it.”
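To make the idea concrete, here is a minimal, hypothetical sketch of the kind of curve such an experiment produces: loss falling as a smooth power law of training compute. The numbers are invented for illustration and are not OpenAI’s data.

```python
# Hypothetical illustration of a "scaling law": model loss falling as a
# smooth power law of training compute. Numbers are invented, not OpenAI's.
import numpy as np

compute = np.array([1e3, 1e4, 1e5, 1e6, 1e7])  # training compute (arbitrary units)
loss = np.array([4.2, 3.1, 2.3, 1.7, 1.3])     # measured loss at each budget

# In log-log space a power law (loss ~ a * compute^(-alpha)) is a straight line.
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), 1)
alpha, a = -slope, 10 ** intercept
print(f"fitted: loss ~ {a:.2f} * compute^(-{alpha:.3f})")

# The fitted curve can then be extrapolated to far larger compute budgets.
print(f"predicted loss at 1e9 units of compute: {a * 1e9 ** -alpha:.2f}")
```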
The realization contributed to a change in how OpenAI released its technology. By then, the company had already reneged on its founding principle of openness, after recognizing that open-sourcing increasingly powerful AI could be great for criminals and bad for business. When it built GPT-2 in 2019, it initially declined to release the model publicly, fearing it could have a devastating impact on public discourse. But in 2020, the company decided to slowly distribute its tools to wider and wider numbers of people. The doctrine was called “iterative deployment.” It enabled OpenAI to collect data on how AIs were used by the public, and to build better safety mechanisms in response. And it would gradually expose the public to the technology while it was still comparatively crude, giving people time to adapt to the monumental changes Altman saw coming.
On its own terms, iterative deployment worked. It handed OpenAI a decisive advantage in safety-trained models, and eventually woke up the world to the power of AI. It’s also true that it was extremely good for business. The approach bears a striking resemblance to a tried-and-tested YC strategy for startup success: building the so-called minimum viable product. Hack together a cool demo, attract a small group of users who love it, and improve based on their feedback. Put things out into the world. And eventually—if you’re lucky enough and do it right—that will attract large groups of users, light the fuse of a media hype cycle, and allow you to raise huge sums. This was part of the motivation, Brockman tells TIME. “We knew that we needed to be able to raise additional capital,” he says. “Building a product is actually a pretty clear way to do it.”
Some worried that iterative deployment would accelerate a dangerous AI arms race, and that commercial concerns were clouding OpenAI’s safety priorities. Several people close to the company thought OpenAI was drifting away from its original mission. “We had multiple board conversations about it, and huge numbers of internal conversations,” Altman says. But the decision was made. In 2021, seven staffers who disagreed quit to start a rival lab called Anthropic, led by Dario Amodei, OpenAI’s top safety researcher.
In August 2022, OpenAI finished work on GPT-4, and executives discussed releasing it along with a basic, user-friendly chat interface. Altman thought that would “be too much of a bombshell all at once.” He proposed launching the chatbot with GPT-3.5—a model that had been accessible to the public since the spring—so people could get used to it, and then releasing GPT-4 a few months later. Decisions at the company typically involve a long, deliberative period during which senior leaders come to a consensus, Altman says. Not so with the launch of what would eventually become the fastest-growing new product in tech history. “In this case,” he recalls, “I sent a Slack message saying, Yeah, let’s do this.” In a brainstorming session before the Nov. 30 launch, Altman replaced its working title, Chat With GPT-3.5, with the slightly pithier ChatGPT. OpenAI’s head of sales received a Slack message letting her know the product team was silently launching a “low-key research preview,” which was unlikely to affect the sales team.
Nobody at OpenAI predicted what came next. After five days, ChatGPT crossed 1 million users. ChatGPT now has 100 million users—a threshold that took Facebook 4½ years to hit. Suddenly, OpenAI was the hottest startup in Silicon Valley. In 2022, OpenAI brought in $28 million in revenue; this year it raked in $100 million a month. The company embarked on a hiring spree, more than doubling in size. In March, it followed through on Altman’s plan to release GPT-4. The new model far surpassed ChatGPT’s capabilities—unlike its predecessor, it could describe the contents of an image, write mostly reliable code in all major programming languages, and ace standardized tests. Billions of dollars poured into competitors’ efforts to replicate OpenAI’s successes. “We definitely accelerated the race, for lack of a more nuanced phrase,” Altman says.
The CEO was suddenly a global star. He seemed unusually equipped to navigate the different factions of the AI world. “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman told lawmakers at a U.S. Senate hearing in May. That month, Altman embarked on a world tour, including stops in Israel, India, Japan, Nigeria, South Korea, and the UAE. Altman addressed a conference in Beijing via video link. So many government officials and policy-makers clamored for an audience that “we ended up doing twice as many meetings than were scheduled for any given day,” says head of global affairs Anna Makanju. AI soared up the policy agenda: there was a White House Executive Order, a global AI Safety Summit in the U.K., and attempts to codify AI standards in the U.N., the G-7, and the African Union.
By the time Altman took the stage at OpenAI’s developer conference in November, it seemed as if nothing could bring him down. To cheers, he announced OpenAI was moving toward a future of autonomous AI “agents” with power to act in the world on a user’s behalf. During an interview with TIME two days later, he said he believed the chances of AI wiping out humanity were not only low, but had gone down in the past year. He felt the increase in awareness of the risks, and an apparent willingness among governments to coordinate, were positive developments that flowed from OpenAI’s iterative-deployment strategy. While the world debates the probabilities that AI will destroy civilization, Altman is more sanguine. (The odds are “nonzero,” he allows, but “low if we can take all the right actions.”) What keeps him up at night these days is something far more prosaic: an urban coyote that has colonized the grounds of his $27 million home in San Francisco. “This coyote moved into my house and scratches on the door outside,” he says, picking up his iPhone and, with a couple of taps, flipping the screen around to reveal a picture of the animal lounging on an outdoor sofa. “It’s very cute, but it’s very annoying at night.”
As Altman radiated confidence, unease was growing within his board of directors. The board had shrunk from nine members to six over the preceding months. That left a panel made up of three OpenAI employees—Altman, Sutskever, and Brockman—and three independent directors: Adam D’Angelo, the CEO of question-and-answer site Quora; Tasha McCauley, a technology entrepreneur and Rand Corp. scientist; and Helen Toner, an expert in AI policy at Georgetown University’s Center for Security and Emerging Technology.
The panel had argued over how to replace the three departing members, according to three people familiar with the discussions. For some time—little by little, at different rates—the three independent directors and Sutskever were becoming concerned about Altman’s behavior. Altman had a tendency to play different people off one another in order to get his desired outcome, say two people familiar with the board’s discussions. Both also say Altman tried to ensure information flowed through him. “He has a way of keeping the picture somewhat fragmented,” one says, making it hard to know where others stood. To some extent, this is par for the course in business, but this person says Altman crossed certain thresholds that made it increasingly difficult for the board to oversee the company and hold him accountable.
One example came in late October, when an academic paper Toner wrote in her capacity at Georgetown was published. Altman saw it as critical of OpenAI’s safety efforts and sought to push Toner off the board. Altman told one board member that another believed Toner ought to be removed immediately, which was not true, according to two people familiar with the discussions.
This episode did not spur the board’s decision to fire Altman, those people say, but it was representative of the ways in which he tried to undermine good governance, and was one of several incidents that convinced the quartet that they could not carry out their duty of supervising OpenAI’s mission if they could not trust Altman. Once the directors reached the decision, they felt it was necessary to act fast, worried Altman would detect that something was amiss and begin marshaling support or trying to undermine their credibility. “As soon as he had an inkling that this might be remotely on the table,” another of the people familiar with the board’s discussions says, “he would bring the full force of his skills and abilities to bear.”
On the evening of Thursday, Nov. 16, Sutskever asked Altman to chat at noon the following day. At the appointed time, Altman joined Sutskever on Google Meet, where the entire board was present except Brockman. Sutskever told Altman that he was being fired and that the news would be made public shortly. “It really felt like a weird dream, much more intensely than I would have expected,” Altman tells TIME.
The board’s statement was terse: Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the announcement said. “The board no longer has confidence in his ability to continue leading OpenAI.”
Altman was locked out of his computer. He began reaching out to his network of investors and mentors, telling them he planned to start a new company. (He tells TIME he received so many texts his iMessage broke.) The board expected pressure from investors and media. But they misjudged the scale of the blowback from within the company, in part because they had reason to believe the executive team would respond differently, according to two people familiar with the board’s thinking, who say the board’s move to oust Altman was informed by senior OpenAI leaders, who had approached them with a variety of concerns about Altman’s behavior and its effect on the company’s culture.
Legal and confidentiality reasons have made it difficult for the board to share specifics, the people with knowledge of the proceedings say. But the absence of examples of the “lack of candor” the board cited as the impetus for Altman’s firing contributed to rampant speculation—that the decision was driven by a personal vendetta, an ideological dispute, or perhaps sheer incompetence. The board fired Altman for “nitpicky, unfireable, not even close to fireable offenses,” says Ron Conway, the founder of SVAngel and a mentor who was one of the first people Altman called after being terminated. “It is reckless and irresponsible for a board to fire a founder over emotional reasons.”
Within hours, the company’s staff threatened to quit if the board did not resign and allow Altman to return. Under immense pressure, the board reached out to Altman the morning after his firing to discuss a potential path forward. Altman characterizes it as a request for him to come back. “I went through a range of emotions. I first was defiant,” he says. “But then, pretty quickly, there was a sense of duty and obligation, and wanting to preserve this thing I cared about so much.” The sources close to the board describe the outreach differently, casting it as an attempt to talk through ways to stabilize the company before it fell apart.
For nearly 48 hours, the negotiations dragged on. Mira Murati, OpenAI’s chief technology officer who stepped in as interim CEO, joined the rest of the company’s leadership in advocating for Altman’s return. So on the night of Sunday, Nov. 19, the board appointed a new interim CEO, Emmett Shear, the former CEO of Twitch. Microsoft boss Satya Nadella announced Altman and Brockman would be joining Microsoft to start a new advanced AI unit; Microsoft made it known that any OpenAI staff members would be welcome to join. After a tearful confrontation with Brockman’s wife, Sutskever flipped his position: “I deeply regret my participation in the board’s actions,” he posted in the early hours of Nov. 20.

By the end of that day, nearly all of OpenAI’s 770 employees had signed an open letter signaling their intention to quit if Altman was not reinstated. The same canniness that makes Altman such a talented entrepreneur also made him a formidable opponent in the standoff, able to command loyalty from huge swaths of the company and beyond.
And while mission is a powerful draw for OpenAI employees, so too is money. Nearly every full-time OpenAI employee has financial interests in OpenAI’s success, including former board members Brockman and Sutskever. (Altman, who draws a salary of $65,000, does not have equity beyond an indirect investment through his stake in YC.) A tender offer to let OpenAI employees sell shares at an $86 billion valuation to outside investors was planned for a week after Altman’s firing; employees who stood to earn millions by December feared that option would vanish. “It’s unprecedented in history to see a company go potentially to zero if everybody walks,” says one of the people familiar with the board’s discussions. “It’s unsurprising that employees banded together in the face of that particular threat.”
Unlike the staff, the three remaining board members who sought to oust Altman were employed elsewhere, had no financial stake in the company, and were not involved in its day-to-day operations. In contrast to a typical for-profit board, which makes decisions informed by quarterly earnings reports, stock prices, and concerns for shareholder value, their job was to exercise their judgment to ensure the company was acting in the best interests of humanity—a mission that is fuzzy at best, and difficult to uphold when so much money is at stake. But whether or not the board made a correct decision, their unwillingness or inability to offer examples of what they saw as Altman’s problematic behavior would ensure they lost the public relations battle in a landslide. A panel set up as a check on the CEO’s power had come to seem as though it was wielding unaccountable power of its own.
In the end, the remaining board members secured a few concessions in the agreement struck to return Altman as CEO. A new independent board would supervise an investigation into his conduct and the board’s decision to fire him. Altman and Brockman would not regain their seats, and D’Angelo would remain on the panel, rather than all independent members resigning. Still, it was a triumph for OpenAI’s leadership. “The best interests of the company and the mission always come first. It is clear that there were real misunderstandings between me and members of the board,” Altman posted on X. “I welcome the board’s independent review of all recent events.”
Two nights before Thanksgiving, staff gathered at the headquarters, popping champagne. Brockman posted a selfie with dozens of employees, with the caption: “we are so back.”
Ten days after the agreement was reached for their return, OpenAI’s leaders were resolute. “I think everyone feels like we have a second chance here to really achieve the mission. Everyone is aligned,” Brockman says. But the company is in for an overhaul. Sutskever’s future at the company is murky. The new board—former Twitter board chair Bret Taylor, former U.S. Treasury Secretary Larry Summers, and D’Angelo—will expand back to nine members and take a hard look at the company’s governance. “Clearly the current thing was not good,” Altman says.
OpenAI had tried a structure that would provide independent oversight, only to see it fall short. “One thing that has very clearly come out of this is we haven’t done a good job of solving for AI governance,” says Divya Siddarth, the co-founder of the Collective Intelligence Project, a nonprofit that works on that issue. “It has put into sharp relief that very few people are making extremely consequential decisions in a completely opaque way, which feels fine, until it blows up.”
Back in the CEO’s chair, Altman says his priorities are stabilizing the company and its relationships with external partners after the debacle; doubling down on certain research areas after the massive expansion of the past year; and supporting the new board to come up with better governance. What that looks like remains vague. “If an oracle said, Here is the way to set up the structure that is best for humanity, that’d be great,” Altman says.
Whatever role he plays going forward will receive more scrutiny. “I think these events have turned him into a political actor in the mass public’s eye in a way that he wasn’t before,” says Colson, the executive director of AIPI, who believes the episode has highlighted the danger of having risk-tolerant technologists making choices that affect all of us. “Unfortunately, that’s the dynamic that the market has set up for.”
But for now, Altman looks set to remain a leading architect of a potentially world-changing technology. “Building superintelligence is going to be a society-wide project,” he says. “We would like to be one of the shapers, but it’s not going to be something that one company just does. It will be far bigger than any one company. And I think we’re in a position where we’re gonna get to provide that input no matter what at this point. Unless we really screw up badly.” —With reporting by Will Henshall/Washington and Julia Zorthian/New York

Source: Tech – TIME | 6 Dec 2023 | 7:42 am

TIME will debut a ranking of the World’s Top EdTech Companies, in partnership with Statista, a leading international provider of market and consumer data and rankings. The new list will identify the most innovative, impactful, and fastest-growing companies that have established themselves as leaders in the EdTech industry.
Companies that focus primarily on developing and providing education technology are encouraged to submit applications as part of the research phase. An application guarantees consideration for the list, but does not guarantee a spot on the list, nor is the final list limited to applicants.
To apply, click here.
For more information, visit https://www.statista.com/page/ed-tech-rankings. Winners will be announced on TIME.com in April 2024.
Source: Tech – TIME | 5 Dec 2023 | 10:51 am

We keep hearing about how AI is going to steal women’s jobs, proliferate racial bias, make the rich richer and the poor poorer.
And if we focus solely on that fear, it very well might.
As the founder of Girls Who Code, I know as well as anyone the risks technology poses to the most vulnerable among us. But I’ve also seen how, when we’re distracted by doomsday, we miss incredible opportunities to help those same communities.
That’s why I believe the next generation of AI will close inequality gaps—if we stop fixating on how it will widen them.
I know it because we’re already seeing it. Take education: After ChatGPT entered the zeitgeist in late 2022, many schools rushed to ban it to prevent cheating. Now, some are walking back that decision. As New York City Public Schools Chancellor David Banks, who oversees the country’s largest school system, put it: “the knee-jerk fear and risk overlooked the potential of generative AI to support students and teachers.”
And that potential is massive. Already, AI tutors are helping students who might otherwise be unable to afford one-on-one support. Tools like Avatar Buddy, which is designed with input from low-income students, are helping them boost their math grades, learn geometry, and understand Shakespeare. Khan Academy is testing an AI chatbot tutor, Khanmigo, that supports student learning with open-ended questions.
Is there still reason to worry that students could misuse ChatGPT to cheat? Of course. But when students stand to gain so much, the solution is to teach them to use AI responsibly—not back away from it entirely.
The benefits of AI outweigh its drawbacks
In short: the pros outweigh the cons. After COVID-19 disproportionately harmed students of color and lower-income students, widening the achievement gap in education, these AI tutors could very well shrink it.
AI isn’t just advancing equity and accessibility in education. We’re seeing it do wonders for people with disabilities, helping them live their lives more independently and breaking down barriers to employment. Engineers are using AI to build more resilient infrastructure—which poor communities need more than ever in the face of climate change. We’ve seen incredible advances in healthcare—including AI helping doctors diagnose patients and accelerating drug discovery, which could lower financial barriers to care and save lives.
And, starting today (Dec. 5), AI is going to support a group of people our country often puts dead last: moms. My organization, Moms First, is rolling out PaidLeave.ai, a tool to help New York parents access their paid leave benefits. We hope, one day, to expand it across the entire country.
Read More: AI Should Complement Humans at Work, Not Replace Them
How exactly does paid leave advance equality? Because there’s no federally guaranteed paid family leave in the U.S., many workers—especially low-income women—take unpaid time off to care for family members or themselves. That time comes at a price: lack of paid family and medical leave costs working families $22.5 billion in wages every single year.
Even for people lucky enough to live in one of the 13 states, or in Washington, DC, that have paid leave benefits, there are still sizable obstacles to accessing care. Even if parents know about their benefits, they likely still have to navigate dense, government-penned legal jargon. Remember, these are parents who barely have a moment for themselves, let alone hours to comb the internet. The reality is, many people simply give up.
This is exactly the kind of issue AI can help solve. PaidLeave.ai can streamline a dense tome of government paperwork into a simple, user-friendly experience. Parents can ask as many questions as they want, in many different languages, as if they were texting a friend—and PaidLeave.ai can come back with answers.
AI can help the public sector deliver services
And it’s just the beginning. Finally, it seems the public sector is waking up to the many ways AI can and will impact its work. PaidLeave.ai builds on the success of New York City’s MyCity Business AI chatbot, which helps people start and grow small businesses by delivering trusted information from more than 2,000 web pages.
We’re starting to see investments like these across the country. A recent report found that around 70% of government leaders have created teams to develop AI policies and resources. And federal agencies are working on roughly 700 use cases for AI tools, with particular enthusiasm from the Department of Energy and Department of Health and Human Services. It’s early days, but it’s an encouraging sign of public-sector-spurred progress to come.
Of course, AI is not without risks. While we’ve helped make our chatbot more reliable by training it on government sources, including New York’s Paid Family Leave website, we’ve also heard stories of chatbots spitting out information that simply isn’t true. That’s why, as we continue to develop these technologies, we must make sure they can still “think” creatively while knowing fact from fiction—and why it’s so important we test them and regulate them in tandem, sooner rather than later.
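The approach described here, answering only from vetted government sources so the bot is less likely to invent policy details, is commonly built as retrieval-grounded question answering. The Python sketch below is a minimal, hypothetical illustration of that pattern: the policy snippets, scoring, and function names are invented for this example and are not PaidLeave.ai’s actual data or code. In a real system, the retrieved passages would be handed to a language model that is instructed to answer only from them, or to say it does not know.

    # Minimal sketch of retrieval-grounded Q&A over vetted policy text.
    # All snippets and names are illustrative, not PaidLeave.ai's real data or code.

    POLICY_SNIPPETS = [
        ("NY Paid Family Leave overview",
         "Eligible employees can take job-protected, paid time off to bond with a "
         "new child or care for a family member with a serious health condition."),
        ("Eligibility",
         "Full-time employees are generally eligible after 26 consecutive weeks of "
         "work; part-time employees after 175 days worked."),
        ("How to apply",
         "Notify your employer in advance when possible, then submit the request "
         "form and supporting documentation to your employer's insurance carrier."),
    ]

    def score(question: str, passage: str) -> int:
        """Crude relevance score: count of shared lowercase word tokens."""
        return len(set(question.lower().split()) & set(passage.lower().split()))

    def answer(question: str, top_k: int = 2) -> str:
        """Return the most relevant vetted passages, cited by source title.

        A production assistant would pass these passages to a language model and
        instruct it to answer ONLY from them, which is what keeps the bot grounded
        in official sources instead of inventing policy details.
        """
        ranked = sorted(POLICY_SNIPPETS, key=lambda s: score(question, s[1]), reverse=True)
        return "\n".join(f"[{title}] {text}" for title, text in ranked[:top_k])

    if __name__ == "__main__":
        print(answer("How do I apply for leave to bond with my baby?"))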
If we are going to build trust in AI among people who have never used these tools before, then the people these tools are built to support must be at the table, helping design AI solutions from the start. In our case, that means centering the perspectives of moms—specifically single moms, moms of color, and moms who are hourly workers.
But this applies everywhere. If we are intentional about harnessing the power of AI to empower the most vulnerable, we lift up everyone. That’s what the great minds of tech should be focused on right now.
To ensure anyone can create these tools, we must give everyone access to them. So let’s learn from the mistakes of technologies past—and the digital divide that emerged from them—and get AI in the hands of women, young people, people of color, and low-income communities while the technology is still in its relatively early stages.
And let’s be clear: we don’t have to choose between ethical AI and innovative AI. We can develop new use cases to lessen inequality—and, at the same time, train employees and students on AI ethics.
Right now, we have an opportunity to solve some of the world’s biggest problems and help the world’s most vulnerable communities. But it will require a willingness to take bold swings, put people first, and cooperate across sectors, industries, and political parties.
Because at the end of the day, our AI is only as good as we are.
Source: Tech – TIME | 5 Dec 2023 | 10:00 am

ByteDance Ltd.’s TikTok has struck an agreement to invest in a unit of Indonesia’s GoTo Group and cooperate on an online shopping service, pioneering a template for e-commerce beyond Southeast Asia’s biggest economy.
The Chinese-owned video service has agreed broadly to work with GoTo’s Tokopedia across several areas rather than compete directly with the Indonesian platform, people familiar with the pact said. The pair aim to announce details of that tie-up as soon as next week, the people said, asking not to be identified disclosing a deal before it’s formalized.
GoTo’s shares erased morning declines to climb as high as 5% in Jakarta. While the two companies have reached an informal agreement, the final details of that alliance are getting hammered out and could change before announcement, the people said. The pact is also subject to regulatory approval and could still fall through, they added.
Read More: Nepal Bans TikTok and Tightens Control Over All Social Media Platforms
An investment in Tokopedia will be a first of its kind for TikTok Shop, the rapidly growing arm of ByteDance’s video service that’s making inroads into online shopping from the U.S. to Europe. Its progress in Indonesia against Sea Ltd. and Tokopedia, however, came to a halt when Jakarta — responding to complaints from local merchants — forced TikTok to split payments from shopping in the country.
Now, a tie-up with a savvy local operator could provide a model for TikTok as it pursues expansion in other markets such as Malaysia, where the government has signaled a willingness to review the influence of overseas players like TikTok. Bloomberg News reported last month that TikTok and GoTo were discussing a potential investment, though another option under consideration was a joint venture, which could entail building a new e-commerce platform. Representatives for TikTok and GoTo declined to comment.
ByteDance’s ultimate goal is to revive its online-shopping service in Southeast Asia’s largest retail arena. TikTok, the only platform immediately affected by Jakarta’s new rules, has halted online shopping to comply with the curbs.
Indonesia was TikTok Shop’s first market and remains its largest. The service launched there in 2021, and its instant success with younger, video-first shoppers encouraged TikTok to expand into other markets, including the U.S.
For GoTo, Indonesia’s largest internet company, a deal with TikTok could be risky as it would help a major online-retail rival to operate in the country. But it would also give GoTo a strong global social-media partner in an arrangement that could boost shopping, logistics and payments volumes for both companies.
Chief Executive Officer Patrick Walujo, who took over in June, is trying to bring GoTo to profitability on an adjusted basis by the end of the year to show the ride-hailing and e-commerce company has long-term potential. Walujo, a managing partner of GoTo shareholder Northstar Group, is continuing his predecessors’ campaign to reduce losses by slashing jobs, cutting promotions and tightening expense controls.
TikTok has been attempting to engage government officials and local e-commerce companies to figure out a way to restart its e-commerce operations in the country. Indonesian minister Teten Masduki said TikTok has spoken with five companies including Tokopedia, PT Bukalapak.com and Blibli about possible partnerships.
Read More: What to Know About the TikTok Security Concerns
Indonesia is among the first countries in Southeast Asia to push back against TikTok. Navigating the conflict will be pivotal for the company as governments across the world assess how Southeast Asia’s largest nation moves to curb the social media giant’s burgeoning e-commerce presence. TikTok had said just months earlier that it would invest billions of dollars into the region.
Following the Indonesian restrictions, nearby Malaysia said it is studying the possibility of regulating TikTok and its e-commerce operations. The social media giant already faces possible bans and scrutiny in markets such as the U.S., Europe and India over national security concerns.
Source: Tech – TIME | 5 Dec 2023 | 2:30 am

Some 6.9 million 23andMe customers had their data compromised after an anonymous hacker accessed user profiles and posted them for sale on the internet earlier this year, the company said on Monday.
The compromised data included users’ ancestry information as well as, for some users, health-related information based on their genetic profiles, the company said in an email.
Privacy advocates have long warned that sharing DNA with testing companies like 23andMe and Ancestry makes consumers vulnerable to the exposure of sensitive genetic information that can reveal health risks of individuals and those who are related to them.
Read More: DNA Testing Kits Are on Everyone’s Holiday List. 5 Things to Know If You Get One
In the case of the 23andMe breach, the hacker only directly accessed about 14,000 of 23andMe’s 14 million customers, or 0.1%. But on 23andMe, many users choose to share information with people they’re genetically related to — which can include distant cousins they have never met, in addition to direct family members — in order to learn more about their own genetics and build out their family trees. So through those 14,000 accounts, the hacker was able to access information about millions more. A much smaller subset of customers had health data accessed.
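The arithmetic behind that amplification is worth making concrete: when each breached account can view the profiles of hundreds of opted-in relatives, even a tiny compromised fraction reaches a large share of all users. The toy Python simulation below uses made-up, scaled-down numbers and random connections rather than 23andMe’s actual network, purely to illustrate the effect.

    import random

    # Toy model: a scaled-down user base where each opted-in account can "see"
    # a random set of relatives' profiles. All numbers are invented for illustration.
    random.seed(0)
    N_USERS = 1_000_000        # scaled-down user base
    AVG_RELATIVES = 490        # relatives visible per account (illustrative)
    COMPROMISE_RATE = 0.001    # ~0.1% of accounts directly breached

    compromised = random.sample(range(N_USERS), int(N_USERS * COMPROMISE_RATE))

    # Every profile visible to a compromised account is exposed.
    exposed = set(compromised)
    for _account in compromised:
        exposed.update(random.sample(range(N_USERS), AVG_RELATIVES))

    print(f"directly breached accounts: {len(compromised):,}")
    print(f"profiles exposed through sharing: {len(exposed):,} "
          f"({len(exposed) / N_USERS:.1%} of users)")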
Users can choose whether to share different kinds of data, including name, location, ancestry and health information such as genetic predispositions to conditions like asthma, anxiety, high blood pressure and macular degeneration.
The exposure of such information could have concerning ramifications. In the U.S., health information is typically protected by what’s known as the Health Insurance Portability and Accountability Act, or HIPAA. But such protections apply only to covered entities such as health-care providers and insurers, not to consumer DNA-testing companies.
The 2008 Genetic Information Nondiscrimination Act (GINA) protects against discrimination in employment and health insurance should information from a DNA test make it out into the wild. This aims to protect individuals from being denied a job or insurance coverage if, for example, a DNA test reveals they are at risk of eventually developing a debilitating condition.
But the law has loopholes; both life insurers and disability insurers, for example, are free to deny people policies based on their genetic information.
There have been other high-profile hacks of DNA testing companies. But the 23andMe incident is the first breach at a major company in which the exposure of health information was publicly disclosed. (The Federal Trade Commission recently ordered a smaller firm, Vitagene, to strengthen protections after health information was exposed.)
The hacker appeared to use what’s known as credential stuffing to access customer accounts, logging into individual 23andMe accounts with passwords that customers had reused on other websites that were previously breached. The company said there was no evidence of a breach within its own systems.
Since the hack, the company has announced that it will require two-factor authentication to protect against credential-stuffing attacks on the site. It has said it expects to incur $1 million to $2 million in costs related to the breach.
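Requiring a second factor blunts credential stuffing because a recycled password alone no longer opens an account. As a generic illustration, and not 23andMe’s actual implementation, the sketch below uses the open-source pyotp library to enroll a user in time-based one-time passwords (TOTP) and to verify a code at login; the account and service names are placeholders.

    # Generic TOTP two-factor flow using the pyotp library (pip install pyotp).
    # Illustrative only; not 23andMe's actual implementation.
    import pyotp

    # Enrollment: generate a per-user secret and share it via a provisioning URI / QR code.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    provisioning_uri = totp.provisioning_uri(name="user@example.com",
                                             issuer_name="ExampleDNAService")
    print("Add this to an authenticator app:", provisioning_uri)

    # Login: even with a correct (possibly recycled) password, an attacker also
    # needs the current 6-digit code generated from the secret on the user's device.
    submitted_code = totp.now()  # in real life, typed in by the user
    if totp.verify(submitted_code, valid_window=1):  # allow one 30-second step of clock drift
        print("second factor OK, login allowed")
    else:
        print("second factor failed, login blocked")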
Source: Tech – TIME | 4 Dec 2023 | 9:30 pm