Artificial intelligence has been a tricky subject in Washington.
Most lawmakers agree that it poses significant dangers if left unregulated, yet there remains a lack of consensus on how to tackle these concerns. But speaking at a TIME100 Talks conversation on Friday ahead of the White House Correspondents’ Dinner, a panel of experts with backgrounds in government, national security, and social justice expressed optimism that the U.S. government will finally “get it right” so that society can reap the benefits of AI while safeguarding against potential dangers.
“We can’t afford to get this wrong—again,” Shalanda Young, the director of the Office of Management and Budget in the Biden Administration, told TIME Senior White House Correspondent Brian Bennett. “The government was already behind the tech boom. Can you imagine if the government is a user of AI and we get that wrong?”
Read More: A Call for Embracing AI—But With a ‘Human Touch’
The panelists agreed that government action is needed to ensure the U.S. remains at the forefront of safe AI innovation. But the rapidly evolving field has raised a number of concerns that can’t be ignored, they noted, ranging from civil rights to national security. “The code is starting to write the code and that’s going to make people very uncomfortable, especially for vulnerable communities,” says Van Jones, a CNN host and social entrepreneur who founded the Dream Machine, a non-profit that fights overcrowded prisons and poverty. “If you have biased data going in, you’re going to have biased decision-making by algorithms coming out. That’s the big fear.”
The U.S. government might not have the best track record of keeping up with emerging technologies, but as AI becomes increasingly ubiquitous, Young says there’s a growing recognition among lawmakers of the need to prioritize understanding, regulation, and ethical governance of AI.
Michael Allen, managing director of Beacon Global Strategies and former National Security Council director for President George W. Bush, suggested that in order to address a lack of confidence in the use of artificial intelligence, the government needs to ensure that humans are at the forefront of every decision-making process involving the technology—especially when it comes to national security. “Having a human in the loop is ultimately going to make the most sense,” he says.
Asked how Republicans and Democrats in Washington can talk to each other about tackling the problems and opportunities that AI presents, Young says there’s already been a bipartisan shift around science and technology policy in recent years—from President Biden’s signature CHIPS and Science Act to funding for the National Science Foundation. The common theme behind the resurgence in this bipartisan support, she says, is a strong anti-China movement in Congress.
“There’s a big China focus in the United States Congress,” says Young. “But you can’t have a China focus and just talk about the military. You’ve got to talk about our economic and science competition aspects of that. Those things have created an environment that has given us a chance for bipartisanship.”
Allen noted that in this age of geopolitical competition with China, the U.S. government needs to be at the forefront of artificial intelligence. He likened the current moment to the Nuclear Age, when the U.S. government funded atomic research. “Here in this new atmosphere, it is the private sector that is the primary engine of all of the innovative technologies,” Allen says. “The conventional wisdom is that the U.S. is in the lead, we’re still ahead of China. But I think that’s something as you begin to contemplate regulation, how can we make sure that the United States stays at the forefront of artificial intelligence because our adversaries are going to move way down the field on this.”
Congress has yet to pass any major AI legislation, but that hasn’t stopped the White House from taking action. President Joe Biden signed an executive order to set guidelines for tech companies that train and test AI models, and has also directed government agencies to vet future AI products for potential national security risks. Asked how quickly Americans can expect more guardrails on AI, Young noted that some in Congress are pushing to establish a new, independent federal agency that can help inform lawmakers about AI without a political lens, offering help on legislative solutions.
“If we don’t get this right,” Young says, “how can we keep trust in the government?”
TIME100 Talks: Responsible A.I.: Shaping and Safeguarding the Future of Innovation was presented by Booking.com.
Source: Tech – TIME | 26 Apr 2024 | 9:55 pm
WhatsApp, the popular global messaging platform owned by Meta, has rolled out new features, including a new way to log in and an in-app artificial intelligence assistant.
iPhone users can now use passkeys to log in—which means they can access the app using Face ID, Touch ID, or their iPhone passcode—instead of receiving an SMS to log in.
WhatsApp said on X, formerly Twitter, on April 24 that this feature was “a more secure way to login.” It also avoids any potential challenges in receiving an SMS to log in, with the company adding: “traveling? no network? no problem.”
The messaging app already launched passkeys for Android users in October 2023, as demonstrated by a post shared on Threads, another Meta social media platform.
People with Pixel 8 and 8 Pro Google phones can now also use Face Unlock, instead of their fingerprint or PIN, to unlock and view messages on WhatsApp, as reported by 9to5Google.
Another change also arrived recently to the messaging app. On April 18, Meta expanded a new AI assistant across its social platforms—Facebook, Instagram, Messenger, and WhatsApp.
Users can employ the assistant, called Meta AI and built with the company’s Llama 3 model, in feeds, chats, and search across the apps to get information and generate images “without having to leave the app you’re using,” the company said.
Meta AI in English is now available in more than a dozen countries outside of the U.S.—Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe.
Source: Tech – TIME | 26 Apr 2024 | 8:40 am
On Wednesday, Joe Biden signed into law a bill that could lead to TikTok being banned in the U.S. if ByteDance, the app’s Chinese-owned parent company, does not sell it within a year. Lawmakers are increasingly worried that the app could pose a national security risk to the U.S. if the Chinese government were to access data collected by the app.
TikTok’s CEO, Shou Zi Chew, spoke in front of Congress in January of this year, defending the platform’s position. “We have not been asked for any data by the Chinese government and we have never provided,” he testified.
Nevertheless, despite the CEO’s assurances, many governments around the world remain unconvinced, and have instituted their own TikTok bans and restrictions.
Here are all the countries that have banned or partially banned the app:
Afghanistan
The Taliban government banned TikTok in April 2022, saying that the application was “misleading youths.”
Armenia
Armenia reportedly temporarily blocked TikTok for multiple days during border clashes with Azerbaijan in September 2022.
Australia
TikTok was banned from Australian government devices in April 2023, but is still allowed on devices belonging to the general public.
Austria
Austria has banned TikTok from all government employee devices as of May 2023.
Azerbaijan
Azerbaijan blocked TikTok temporarily during border clashes with Armenia in September 2022. The application was blocked again one year later as part of “anti-terrorist measures,” according to Armenian local media, and was restored on October 1, 2023.
Bangladesh
In August 2021, a Bangladeshi court ordered the government to remove TikTok and several other apps from the country’s app store in order to “save children and adolescents from moral and social degradation.” The application was later allowed to return, provided that its content moderation was in line with Bangladesh’s cultural sensibilities.
Belgium
In March 2023, the Belgian government announced that it was banning the app from all government devices. “The safety of our information must prevail,” said Belgian Prime Minister Alexander De Croo. TikTok is still available on non-government affiliated devices.
Canada
Canada banned TikTok from all government-issued mobile devices in February 2023. “I suspect that as the government takes the significant step of telling all federal employees that they can no longer use TikTok on their work phones, many Canadians from business to private individuals will reflect on the security of their own data and perhaps make choices,” Prime Minister Justin Trudeau said when the ban was announced.
China
China itself does not permit the international version of TikTok to be used on the mainland. Instead, users must download Douyin, the Chinese version of TikTok which is subject to censorship from the Chinese Communist Party.
Denmark
Denmark’s Ministry of Defense banned the app from its employees’ work phones in March 2023. Following a warning from Denmark’s Center for Cybersecurity, the country’s main public service broadcaster also instituted protocols requiring journalists to obtain special approval before using the app for reporting.
Estonia
TikTok was banned from the work phones of state officials in Estonia in March 2023.
European Union
The three main institutions that make up the E.U. (the European Parliament, the European Commission, and the Council of the E.U.) have all banned TikTok from employees’ work phones. Employees are also advised to remove the app from their personal devices.
France
France has banned several social media applications, including Twitter, Instagram, and TikTok, from government employees’ phones as of March 2023.
India
India is one of a handful of countries that has completely banned TikTok, including on citizens’ personal devices. The ban was implemented in July 2020 after a border clash between China and India left 20 Indian soldiers dead. The two countries have a long-running dispute over their border that occasionally turns violent. In the aftermath of the 2020 border clash, some Indians called for a boycott of Chinese goods, and India’s information technology ministry put out a statement claiming that certain mobile applications were “stealing and surreptitiously transmitting users’ data.” After the ban, many creators migrated to YouTube Shorts and Instagram Reels.
Indonesia
Indonesia banned TikTok Shop, a portion of the application that allows creators to sell products to their audiences, in October 2023 for violating the country’s e-commerce laws.
Iran
The Islamic Republic bans TikTok along with other internationally popular social media platforms including X and Facebook.
Ireland
TikTok was banned from government devices in Ireland in April 2023.
Jordan
TikTok has been banned in the Kingdom of Jordan since December 2022, after a police officer was killed during a protest and videos of the event flooded social media.
Kyrgyzstan
The small, formerly Soviet country banned TikTok in August 2023, arguing that the application was harmful to the development of children.
Latvia
The app is prohibited on work phones in Latvia’s Foreign Ministry as of March 2023.
Malta
TikTok is blocked on government-provided cell phones in Malta, along with all other non-government applications.
The Netherlands
The Dutch government banned the app from employees’ work phones in March 2023. The relationship between Amsterdam and Beijing has soured in the last year after a Dutch intelligence agency called China “the biggest threat to the Dutch economic security,” per Bloomberg.
Nepal
Nepal banned TikTok for all its citizens in November 2023, saying the app was “detrimental to social harmony.”
New Zealand
Lawmakers and parliament employees in New Zealand were prohibited from having TikTok on their work phones as of March 2023. However, exceptions can be made if a lawmaker believes TikTok is necessary for their democratic duties, per the AP. The ban does not apply to employees in other branches of the government.
North Korea
North Korea restricts most of its citizens from accessing the internet. A few websites and apps are permitted for the privileged elite to visit, but TikTok is not among them.
Norway
The Norwegian parliament banned TikTok from employees’ work devices in March 2023. Municipal employees in the cities of Oslo and Bergen have also been encouraged to remove the app from their devices. “Norwegian intelligence services single out Russia and China as the main risk factors for Norway’s security interests,” said Justice Minister Emilie Enger Mehl in a statement, per EuroNews.
Pakistan
Pakistan has banned TikTok temporarily at least four times. However, the application was restored in November 2021, and has reportedly been available in the country since then.
Russia
Currently, Russians are restricted in terms of what they can view on TikTok, and viewers primarily only see videos made by Russian creators. This month, it was reported that the Russian government intends to ban TikTok in order to encourage citizens to use domestic social media platforms instead.
Senegal
Senegal instituted a total ban of the application in August 2023, after an opposition candidate was accused of using the platform to spread “hateful and subversive messages.” The government of Senegal has refused to reinstate the app unless a mechanism is developed that allows the government to remove specific accounts.
Somalia
In August 2023, the Somali government announced that it was banning TikTok, Telegram, and the online betting website 1XBet. Minister of Communications Jama Hassan Khalif said that the apps were used for dangerous propaganda. “The Minister of Communications orders internet companies to stop the aforementioned applications, which terrorists and immoral groups use to spread constant horrific images and misinformation to the public,” the minister said in a statement, per Reuters.
Taiwan
Taiwan banned all government devices from using Chinese-made software, including TikTok, in December 2022 after a warning from the FBI.
United Kingdom
In March 2023, the U.K. banned all government employees from using TikTok on government-provided mobile devices. “Given the particular risk around government devices, which may contain sensitive information, it is both prudent and proportionate to restrict the use of certain apps, particularly when it comes to apps where a large amount of data can be stored and accessed,” Cabinet Office minister Oliver Dowden told parliament.
United States
Congress and the armed forces have banned TikTok from all of their employees’ devices. Approximately half of all states ban the app on state-owned devices, and the federal government similarly banned the app from employees’ devices in March 2023.
Uzbekistan
TikTok has been unavailable in Uzbekistan since July 2021, after the authorities said the app was not compliant with the country’s personal data protection laws.
Source: Tech – TIME | 25 Apr 2024 | 2:01 pm
“We are all at risk of manipulation online right now.”
So begins a short animated video about a practice known as decontextualization and how it can be used to misinform people online. The video identifies signs to watch out for, including surprising or out of the ordinary content, seemingly unreliable sources, or video or audio that appear to have been manipulated or repurposed.
Though it may not look like it, this 50-second video is actually an election ad—one of three that Google will be rolling out across five European countries next month in advance of the European Union’s June parliamentary elections. But unlike traditional election ads that are designed to persuade people how to vote, these are seeking to educate voters about how they could be misled. It’s an initiative that Google describes as preventative debunking—or, more simply, “prebunking.”
“It works like a vaccine,” Beth Goldberg, the head of research at Google’s internal Jigsaw unit, which was founded in 2010 with a remit to address threats to open societies, tells TIME. By enabling prospective voters to recognize common manipulation techniques that could be used to mislead them—such as scapegoating or polarization—Goldberg says that prebunking “helps people to gain mental defenses proactively.”
Concerns about AI-generated disinformation and the impact it stands to have on contests around the world continue to dominate this year’s election megacycle. This is particularly true in the E.U., which recently passed a new law compelling tech firms to increase their efforts to clamp down on disinformation amid concerns that an uptick in Russian propaganda could distort the results.
Contrary to what one might expect, prebunking ads aren’t overtly political nor do they make any allusions to any specific candidates or parties. In the video about decontextualization, for example, viewers are shown a hypothetical scenario in which an AI-generated video of a lion set loose on a town square is used to stoke fear and panic. In another video, this time about scapegoating, they are shown an incident in which a community lays sole blame on another group (in this case, tourists) for the litter in their parks without exploring other possible causes.
The beauty of this approach, Goldberg notes, is that it needn’t be specific. “It doesn’t have to be actual misinformation; you can just show someone how the manipulation works,” she says, noting that keeping the content general and focusing on manipulation strategies, rather than the misinformation itself, allows these campaigns to reach people regardless of their political persuasion.
While Google’s prebunking campaign is relatively new, the tactic is not. Indeed, the concept dates back to the 1960s, when the social psychologist William McGuire sought to understand people’s susceptibility to propaganda during the Cold War and whether they could be defended against it. This culminated in what McGuire called “inoculation theory,” which rested on the premise that false narratives, like viruses, can be contagious and that by inoculating people with a dose of facts, they can become less susceptible. But it wasn’t until decades later that the theory began being applied to online information. In recent years, Jigsaw has conducted prebunking initiatives in Eastern Europe and Indonesia. Its forthcoming European campaign, which formally kicks off in May, will primarily be disseminated as short ads on YouTube and Meta platforms targeting voters in Belgium, France, Germany, Italy, and Poland. Afterwards, viewers will be invited to take a short, multiple-choice survey testing their ability to identify the manipulation technique featured in the ad.
Read More: Inside the White House Program to Share America’s Secrets
While prebunking doesn’t face as much resistance as more conventional forms of combating misinformation such as fact checking or content moderation, which some critics have likened to censorship, it isn’t a panacea either. Jon Roozenbeek, an assistant professor in psychology and security at King’s College London who has spent years working with Jigsaw on prebunking, tells TIME that one of the biggest challenges in these campaigns is ensuring that the videos are captivating enough to hold viewers’ attention. Even if they do, he adds, “You can’t really expect miracles in a sense that, all of a sudden after one of these videos, people begin to behave completely differently online. It’s just way too much to expect from a psychological intervention that is as light touch as this.”
This isn’t to say that prebunking doesn’t have an impact. In previous campaigns, post-ad surveys showed that the share of individuals who could correctly identify a manipulation technique increased by as much as 5% after viewing a prebunking video. “We’re not doubtful that the effect is real; it’s just you can argue over whether it’s large enough,” Roozenbeek says. “That’s the main discussion that we’re having.”
While Jigsaw has led the way on prebunking efforts, they’re not the only ones utilizing this approach. In the U.S., the Biden administration has sought to counter Russian disinformation in part by declassifying intelligence forecasting the kinds of narratives that it anticipated the Kremlin would use, particularly in the run up to Moscow’s 2022 full-scale invasion of Ukraine. This practice has since extended to China (where the U.S. government used declassified materials to forecast potential Chinese provocations in the Taiwan Strait) and Iran (the U.S. declassified intelligence claiming that Tehran had transferred drones and cruise missiles to Houthi militants in Yemen that were being used to attack ships in the Red Sea). What the White House has billed as strategic declassification is just prebunking by another name.
Working with academics and civil society organizations across the E.U.’s 27 member states, Jigsaw’s latest prebunking campaign is set to be its biggest and most collaborative effort yet. And in an election that will see hundreds of millions of voters go to the polls to elect what polls project could be the most far-right European Parliament to date, the stakes couldn’t be higher.
Source: Tech – TIME | 25 Apr 2024 | 6:30 am
CEOs of start-ups and big tech companies spoke at the TIME100 Summit on Wednesday about innovating with artificial intelligence in an ethical way, just moments before a spirited debate on the future of the technology.
“Regulation and innovation are two sides of the same coin,” said Rosanne Kincaid-Smith, Group Chief Operating Officer of Northern Data Group, which is a signature partner of the TIME100 Summit. She added that tech companies and industry leaders should work towards better regulation. “Not actively contributing through lobbying would be a huge miss for us,” she said.
Kincaid-Smith stressed the benefits of AI and suggested that questions about whether AI is “evil” and going to negatively impact the workforce are misguided. “We forget to remind ourselves that Artificial Intelligence is artificial. It’s not natural…It’s an artifact of our own learning, of human culture, of human history…so it’s incumbent on us to make sure it reflects the best of us,” she said.
Justina Nixon-Saintil, Vice President and Chief Impact Officer of IBM, shared that IBM is investing more resources into climate modeling. IBM is working with NASA to build an AI foundation model to improve the speed, accuracy, and accessibility of weather forecasting.
Mathias Wikström, CEO of Doconomy, which gives banks financial tools to drive global climate action, stressed that climate literacy is key to achieving climate justice.
The TIME100 Summit convenes leaders from the global TIME100 community to spotlight solutions and encourage action toward a better world. This year’s summit features a variety of speakers across a diverse range of sectors, including politics, business, health and science, culture, and more.
Speakers for the 2024 TIME100 Summit include actor Elliot Page, designer Tory Burch, Olympic medalist Ibtihaj Muhammad, WNBA champion A’ja Wilson, author Margaret Atwood, NYSE president Lynn Martin, comedian Alex Edelman, professor Yoshua Bengio, 68th Secretary of State John Kerry, actor Jane Fonda, and many more.
The TIME100 Summit was presented by Booking.com, Citi, Merck, Northern Data Group, Glenfiddich Single Malt Scotch Whisky, and Verizon.
Source: Tech – TIME | 24 Apr 2024 | 8:34 pm
Two top artificial intelligence experts—one an optimist and the other more alarmist about the technology’s future—engaged in a spirited debate at the TIME100 Summit on Wednesday.
Both Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, a scientific hub, and Eric Schmidt, chairman of the Special Competitive Studies Project and former Google CEO, agree that AI is poised to transform modern society.
But as moderator Stephanie Ruhle, an MSNBC host, put it: “Yoshua believes that the risk of AI potentially putting us into extinction should be considered a global risk, like we look at pandemics and nuclear war. Eric is super, super excited about AI.”
Bengio’s main concern with AI is the difficulty of ensuring that AI systems are used for their intended purpose and not something harmful. “We absolutely need to clear the fog; right now, scientists really have no idea” how to get AI to behave according to the norms, laws, and values of society, he said.
Schmidt and Bengio also spoke about potential conflicts of interest that profit-seeking companies may face in being tasked with regulating their own use of AI. Schmidt maintained that industry leaders are the ones asking for regulation. But Bengio pushed back: “It’s tricky because companies are in this race to win; at the same time, the individuals of these companies are human and don’t want catastrophes to happen.”
Schmidt says that companies have about three to five years before they really need to “get their act together.” That’s when bigger ethical issues will emerge, he says.
“That’s very fast,” Bengio says. He argues that there aren’t yet enough laws in place to force companies to secure their AI systems so that data is not stolen by bad actors.
Schmidt is optimistic that Western countries will put those laws in place but is “more worried about institutions we don’t have control over.” Everyone is watching large American companies, so there’s less danger, he says.
Bengio stresses the need for international coordination because AI breaches in other countries can harm the U.S., too. “It’s starting but we need to accelerate (efforts),” he says.
The two did agree on something, though—that an inflection point will arrive when AI systems begin operating without human control.
That moment will be “incredibly dangerous,” Schmidt says. “You know what we should do? We should unplug the computers. It’s just not OK. There are limits.”
Source: Tech – TIME | 24 Apr 2024 | 4:27 pm
Elon Musk is fighting many battles right now: Against a Brazilian Supreme Court judge, the Australian Prime Minister, Don Lemon, OpenAI, and a nonprofit watchdog, to name a few.
But Musk says that he’s now spending the majority of his work time on one of his oldest ventures: Tesla. And Tesla badly needs help. The carmaker released its quarterly earnings report yesterday and revealed that its profits fell 55% and revenue fell 9%—figures even worse than many analysts had anticipated. The company announced its intentions to lay off more than 10% of its staff, or about 14,000 people, including major cuts in California and Texas.
Musk soothed investors on Tuesday with some lofty promises about imminent Tesla products. He now faces enormous challenges: remaining a leader in a rapidly crowding electric vehicle space, cutting costs while rushing out new cars, and forging ahead with his dream of making Tesla an AI powerhouse. “They’re still in a very dark place in this woods that they have to get out of,” says Craig Irwin, an analyst at Roth Capital Partners.
Tesla’s Struggles in the EV Market
Tesla has long been a pioneer in the EV (electric vehicle) space. It now faces two major obstacles in continuing to grow that business: waning consumer interest and increased competition. Recent polls have shown that public interest in owning an electric car has declined, and that consumers want lower-priced EVs that do not currently exist. Many are also skeptical about the current charging infrastructure, which can make driving an EV on an everyday basis much more unwieldy than driving a gas-powered car.
Many automakers have also entered the space with their own electric or hybrid cars. BMW, Mercedes, Hyundai and Kia have seen recent promising EV sales. Earlier this year, the Chinese automaker BYD briefly surpassed Tesla as the world’s top-selling electric carmaker. Stateside, General Motors recently announced it would ramp up its electric vehicle production.
And it hasn’t helped that Tesla’s innovation in the EV space has slowed. In 2022, the company decided not to put out any new car models—a risky choice in an industry that relies upon new or redesigned models to keep buyers’ interest. The Cybertruck, one of Tesla’s most-hyped releases, has performed well below expectations: It sold only around 4,000 vehicles, and then faced a massive recall due to a defect that caused the accelerator to sometimes get stuck when pressed.
Musk has long dangled the dream of releasing a Model 2 EV that would cost $25,000 and bring Tesla into the mass market. But Reuters reported earlier this month that the company had scrapped plans for that model. On Tuesday’s earnings call, however, Musk announced that a “more affordable” EV would be in production by early 2025, and would be built without needing a new factory or production line. This announcement alone cheered investors: shares jumped 13% in after-hours trading.
Irwin says that an affordable EV could play especially well in Europe and Asia, where people drive less and are more conscious about gas consumption. But he and others are skeptical of Tesla’s ability to deliver on time. “Them pulling forward is kind of comical, because they’ve only ever been late,” he says.
Constant Drama
As Tesla attempts to race ahead, it will try to outrun a slew of controversies surrounding both the company and Musk himself. In 2022, a California governmental agency sued Tesla for widespread discrimination against Black workers. That suit is still pending. Tesla also faces a separate investigation from the U.S. Equal Employment Opportunity Commission (EEOC). And the company has been sued by several women over alleged workplace sexual harassment. (Tesla has maintained it does not tolerate workplace harassment.)
Then there’s the media circus that surrounds Musk’s every move. Last year, Tesla investors grew concerned that Musk was spending too much time on X (formerly known as Twitter). Musk’s antics on X, which often criticize progressive policy and “wokeness,” seem to have alienated many of his customers: While people concerned about climate change were some of the first adopters of electric vehicles, the proportion of Democrats buying Teslas fell by more than 60%, according to car buyers surveyed in October and November by researcher Strategic Vision.
Read More: Tesla’s Latest Scandals Could Hurt Its Bottom Line
Musk is also spending ample time on SpaceX—which has been plagued by workplace injuries—and his artificial intelligence startup, xAI. As he engages in many different endeavors, he has insisted that he should have more control of Tesla, not less. He demanded 25% voting control of the company, and threatened to divert his energy toward making AI products outside Tesla unless the board appeased his wishes.
Will Musk’s AI Bet Pay Off?
Despite Tesla’s current struggles, Musk has much grander visions for the company beyond EVs. Tesla, he said on Tuesday, should be “thought of as an A.I. and robotics company.” He is especially gung-ho about Tesla becoming a leader in the autonomous driving space. “If somebody doesn’t believe Tesla is going to solve autonomy, I think they should not be an investor in the company,” he added.
On Tuesday, Musk reiterated his commitment to creating a self-driving car, which he dubbed the “Cybercab.” Such a product, if successful, could be enormously profitable for Musk. The management firm Ark Invest recently forecast that robotaxis, if rolled out successfully, could generate $28 trillion over the next five to ten years.
But the road to self-driving taxis is incredibly rocky, and has been littered with failed promises. In 2019, Musk claimed that Tesla would have a million autonomous taxis on the road the next year—but none have yet materialized. Tesla does not yet have a license to test driverless vehicles in California. And one of the early players in the space, General Motors’ Cruise, suffered an extreme setback when one of its cars struck a pedestrian and then dragged her along the road.
“The reality is the software today is flawed, and there are real accidents happening due to optical apparition,” Irwin says. “Does Tesla deserve credit for pushing the envelope? Absolutely. But I think their claims are far too aggressive for where reality is going to land.”
Source: Tech – TIME | 24 Apr 2024 | 11:10 am
No, TikTok will not suddenly disappear from your phone. Nor will you go to jail if you continue using it after it is banned.
After years of attempts to ban the Chinese-owned app, including by former President Donald Trump, a measure to outlaw the popular video-sharing app has won congressional approval and is on its way to President Biden for his signature. The measure gives Beijing-based parent company ByteDance nine months to sell the company, with a possible additional three months if a sale is in progress. If it doesn’t, TikTok will be banned.
So what does this mean for you, a TikTok user, or perhaps the parent of a TikTok user? Here are some key questions and answers.
When does the ban go into effect?
The original proposal gave ByteDance just six months to divest from its U.S. subsidiary; negotiations lengthened that window to nine. If a sale is in progress when the deadline arrives, the company will get another three months to complete it.
So it would be at least a year before a ban goes into effect — but with likely court challenges, this could stretch even longer, perhaps years. TikTok has seen some success with court challenges in the past, but it has never sought to prevent federal legislation from going into effect.
What if I already downloaded it?
TikTok, which is used by more than 170 million Americans, most likely won’t disappear from your phone even if an eventual ban does take effect. But it would disappear from Apple and Google’s app stores, which means users won’t be able to download it. This would also mean that TikTok wouldn’t be able to send updates, security patches and bug fixes, and over time the app would likely become unusable — not to mention a security risk.
But surely there are workarounds?
Teenagers are known for circumventing parental controls and bans when it comes to social media, so dodging the U.S. government’s ban is certainly not outside the realm of possibilities. For instance, users could try to mask their location using a VPN, or virtual private network, use alternative app stores, or even insert a foreign SIM card into their phones.
But some tech savvy is required, and it’s not clear what will and won’t work. More likely, users will migrate to another platform — such as Instagram, which has a TikTok-like feature called Reels, or YouTube, which has incorporated vertical short videos in its feed to try to compete with TikTok. Often, such videos are taken directly from TikTok itself. And popular creators are likely to be found on other platforms as well, so you’ll probably be able to see the same stuff.
“The TikTok bill relies heavily on the control that Apple and Google maintain over their smartphone platforms because the bill’s primary mechanism is to direct Apple and Google to stop allowing the TikTok app on their respective app stores,” said Dean Ball, a research fellow with the Mercatus Center at George Mason University. “Such a mechanism might be much less effective in the world envisioned by many advocates of antitrust and aggressive regulation against the large tech firms.”
Should I be worried about using TikTok?
Lawmakers from both parties — as well as law enforcement and intelligence officials — have long expressed concerns that Chinese authorities could force ByteDance to hand over data on the 170 million Americans who use TikTok. The worry stems from a set of Chinese national security laws that compel organizations to assist with intelligence gathering – which ByteDance would likely be subject to – and other far-reaching ways the country’s authoritarian government exercises control.
Data privacy experts say, though, that the Chinese government could easily get information on Americans in other ways, including through commercial data brokers that sell or rent personal information.
Lawmakers and some administration officials have also expressed concerns that China could – potentially – direct or influence ByteDance to suppress or boost TikTok content that is favorable to its interests. TikTok, for its part, has denied assertions that it could be used as a tool of the Chinese government. The company has also said it has never shared U.S. user data with Chinese authorities and won’t do so if it’s asked.
Source: Tech – TIME | 24 Apr 2024 | 10:27 am
WASHINGTON — The Senate passed legislation Tuesday that would force TikTok’s China-based parent company to sell the social media platform under the threat of a ban, a contentious move by U.S. lawmakers that’s expected to face legal challenges and disrupt the lives of content creators who rely on the short-form video app for income.
The TikTok legislation was included as part of a larger $95 billion package that provides foreign aid to Ukraine and Israel and was passed 79-18. It now goes to President Joe Biden, who has backed the TikTok proposal and has said he will sign the package as soon as he gets it.
A decision made by House Republicans last week to attach the TikTok bill to the high-priority package helped expedite its passage in Congress and came after negotiations with the Senate, where an earlier version of the bill had stalled. That version had given TikTok’s parent company, ByteDance, six months to divest its stakes in the platform. But it drew skepticism from some key lawmakers concerned it was too short of a window for a complex deal that could be worth tens of billions of dollars.
Read More: The House TikTok Ban Is an Empty Threat
The revised legislation extends the deadline, giving ByteDance nine months to sell TikTok, and a possible three-month extension if a sale is in progress. The bill would also bar the company from controlling TikTok’s secret sauce: the algorithm that feeds users videos based on their interests and has made the platform a trendsetting phenomenon.
The passage of the legislation is a culmination of long-held bipartisan fears in Washington over Chinese threats and the ownership of TikTok, which is used by 170 million Americans. For years, lawmakers and administration officials have expressed concerns that Chinese authorities could force ByteDance to hand over U.S. user data, or influence Americans by suppressing or promoting certain content on TikTok.
“Congress is not acting to punish ByteDance, TikTok or any other individual company,” Senate Commerce Committee Chairwoman Maria Cantwell said. “Congress is acting to prevent foreign adversaries from conducting espionage, surveillance, maligned operations, harming vulnerable Americans, our servicemen and women, and our U.S. government personnel.”
Opponents of the bill say the Chinese government could easily get information on Americans in other ways, including through commercial data brokers that traffic in personal information. The foreign aid package includes a provision that makes it illegal for data brokers to sell or rent “personally identifiable sensitive data” to North Korea, China, Russia, Iran or entities in those countries. But it has encountered some pushback, including from the American Civil Liberties Union, which says the language is written too broadly and could sweep in journalists and others who publish personal information.
Many opponents of the TikTok measure argue the best way to protect U.S. consumers is through implementing a comprehensive federal data privacy law that targets all companies regardless of their origin. They also note the U.S. has not provided public evidence that shows TikTok sharing U.S. user information with Chinese authorities, or that Chinese officials have ever tinkered with its algorithm.
Read More: The Grim Reality of Banning TikTok
“Banning TikTok would be an extraordinary step that requires extraordinary justification,” said Becca Branum, a deputy director at the Washington-based Center for Democracy & Technology, which advocates for digital rights. “Extending the divestiture deadline neither justifies the urgency of the threat to the public nor addresses the legislation’s fundamental constitutional flaws.”
China has previously said it would oppose a forced sale of TikTok, and has signaled its opposition this time around. TikTok, which has long denied it’s a security threat, is also preparing a lawsuit to block the legislation.
“At the stage that the bill is signed, we will move to the courts for a legal challenge,” Michael Beckerman, TikTok’s head of public policy for the Americas, wrote in a memo sent to employees on Saturday and obtained by The Associated Press.
“This is the beginning, not the end of this long process,” Beckerman wrote.
Read More: TikTok Vows to Fight Its Ban. Here’s How the Battle May Play Out
The company has seen some success with court challenges in the past, but it has never sought to prevent federal legislation from going into effect.
In November, a federal judge blocked a Montana law that would ban TikTok use across the state after the company and five content creators who use the platform sued. Three years before that, federal courts blocked an executive order issued by then-President Donald Trump to ban TikTok after the company sued on the grounds that the order violated free speech and due process rights.
The Trump administration then brokered a deal that had U.S. corporations Oracle and Walmart take a large stake in TikTok. But the sale never went through.
Trump, who is running for president again this year, now says he opposes the potential ban.
Read More: Why Trump Flipped on TikTok
Since then, TikTok has been in negotiations about its future with the secretive Committee on Foreign Investment in the United States, a little-known government agency tasked with investigating corporate deals for national security concerns.
On Sunday, Erich Andersen, a top attorney for ByteDance who led talks with the U.S. government for years, told his team that he was stepping down from his role.
“As I started to reflect some months ago on the stresses of the last few years and the new generation of challenges that lie ahead, I decided that the time was right to pass the baton to a new leader,” Andersen wrote in an internal memo that was obtained by the AP. He said the decision to step down was entirely his and was decided months ago in a discussion with the company’s senior leaders.
Meanwhile, TikTok content creators who rely on the app have been trying to make their voices heard. Earlier Tuesday, some creators congregated in front of the Capitol building to speak out against the bill and carry signs that read “I’m 1 of the 170 million Americans on TikTok,” among other things.
Tiffany Cianci, a content creator who has more than 140,000 followers on the platform and had encouraged people to show up, said she spent Monday night picking up creators from airports in the D.C. area. Some came from as far as Nevada and California. Others drove overnight from South Carolina or took a bus from upstate New York.
Cianci says she believes TikTok is the safest platform for users right now because of Project Texas, TikTok’s $1.5 billion mitigation plan to store U.S. user data on servers owned and maintained by the tech giant Oracle.
“If our data is not safe on TikTok,” she said, “I would ask why the president is on TikTok.”
—Associated Press writers Mary Clare Jalonick and Matt O’Brien contributed to this report.
Source: Tech – TIME | 23 Apr 2024 | 10:55 pm
Recently, Jon Stewart did a segment satirizing the promises of AI touted by tech CEOs. I don’t think automating your toasters is the best way to show the potential of AI, as Jon suggested, but I do agree with the central premise of his argument: the disruption caused by AI will be harnessed to prioritize profits over people. It will likely cause one of the largest and fastest labor displacements in human history.
I run an AI company based in Silicon Valley focused on solving climate change, and I am a former policymaker for the government of India. At the World Economic Forum in Davos in 2024, my discussions with media, heads of state, and CEOs of Fortune 500 companies underscored the prevailing global perspective on AI: that it will unlock the next productivity revolution, foster wealth creation, and lift people out of poverty by democratizing and reducing the cost of access to information and education. Most of the risk conversations focused on AI’s extinction risk and the regulatory guardrails needed to hedge against that future. My warnings about the political and societal risk of the biggest labor displacement yet, driven by AI, were met with skepticism even by my community of fellow AI founders.
The reality is that AI is a force multiplying the pace of structural labor displacement. The internet era and globalization, for example, enabled U.S. firms to outsource customer support to developing countries such as India, Vietnam, and Thailand at the expense of the middle class in some U.S. regions; now, a chatbot built on an enterprise LLM stack is proving to be as effective as humans at customer support, making those jobs redundant. This impact will be even more severe in high-income countries. A recent IMF report estimated that close to 60% of jobs will be affected by AI, likely leading to an increase in inequality. Half of those workers will benefit, but the other half will be made redundant or less effective, leading to lower wages, reduced purchasing power, and a lower standard of living in an inflationary era.
These labor displacements accumulate over time, impacting mostly the lower and middle classes and, in turn, becoming political movements. In the U.S., the wave of globalization and automation in the last 30 years created the most disruptive political movement, resulting in the election of Donald Trump. However, this time, disruption is also coming for some white-collar workers, which makes its effects unknown and potentially more dangerous.
Typically, the debate on AI’s impact on labor hinges on two arguments. The first, made by some economists, is that there will be both job creation and destruction and that the net impact will be positive. Further, the workers who lose jobs can be retrained and reskilled into the new categories of jobs that will be created, enabled by a redistribution of wealth.
The second argument is that the overall promise of AI in areas such as education and research ethically outweighs the harm done to society through job losses. This argument comes from the founder and investor community, who are quick to point to the tech marvels of the internet boom, such as smartphones, and their impact on connecting billions of people in emerging economies such as India.
The first argument assumes that the pace of dislocation—job losses in one region or sector—will be slower than the pace at which reskilling can happen, and that wealth redistribution is fair and smooth. America made the same assumption about the renewable energy transition for coal-mining workers, and thus became one of the few large economies where climate change became a polarizing issue. As for wealth redistribution, the share held by the top 20% in the U.S. rose from 61% in the 1990s to 71% in 2022.
The second argument assumes that education equals job readiness. I have seen otherwise in my home country, India, where 80% of the 1.5 million engineers graduating every year are considered unemployable for any job in the knowledge economy. Further, in the hierarchy of needs, having a smartphone rarely takes precedence over having a decent job and an affordable meal for the family.
This brings me to the uncomfortable truth: as AI founders, we share responsibility with governments for deciding the course of this transformation. Our obsession with profitability has often led us to focus on applications such as more intelligent chatbots to replace customer support or sales reps, rather than on the most challenging yet purposeful problems for humanity, such as improving productivity for smallholder farmers in the developing world or curing cancer.
Second, we need to work with governments and regulators to ensure that some of the wealth created in this process is redistributed fairly and equitably. As an example, we can co-fund the creation of NGOs such as Coalfield Development, a community-based non-profit that helps coal miners who couldn’t transition into renewable energy jobs acquire construction, agriculture, and solar-installation skills.
Third, governments can learn lessons from Norway’s efforts to ensure the fair redistribution of wealth created by the oil boom in the 1970s. Norway established the Government Pension Fund Global (Norwegian Oil Fund) in 1990 to steward oil revenues for future generations, aiming for economic stability through global diversified investments. It now has the second lowest inequality in the world.
A lack of action now is a path towards a future that magnifies the political, social, and environmental crises we have seen in the last three decades. However, the world will still see more billionaires than ever before and possibly the world’s first AI trillionaire in the next decade.
Source: Tech – TIME | 23 Apr 2024 | 4:45 pm